
Guide to the Guava BiMap


1. Overview

In this tutorial, we’ll show how to use Google Guava’s BiMap interface and its multiple implementations.

A BiMap (or “bidirectional map”) is a special kind of map which maintains an inverse view of the map while ensuring that no duplicate values are present, so a value can always be used safely to get the key back.

The basic implementation of BiMap is HashBiMap, which internally makes use of two Maps: one for the key-to-value mapping and the other for the value-to-key mapping.

2. Google Guava’s BiMap

Let’s have a look at how to use the BiMap interface.

We’ll start by adding the Google Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. Creating a BiMap

We can create an instance of BiMap in multiple ways:

  • If we’re going to deal with a custom Java object, we can use the create method from the HashBiMap class:
BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
  • If we already have an existing map, we can create an instance of a BiMap using an overloaded version of the create method from the HashBiMap class:
Map<String, String> capitalCountryMap = new HashMap<>();
//...
BiMap<String, String> capitalCountryBiMap = HashBiMap.create(capitalCountryMap);
  • If we’re going to deal with a key of type Enum, we can use the create method from the EnumHashBiMap class:
BiMap<MyEnum, String> operationStringBiMap = EnumHashBiMap.create(MyEnum.class);
  • If we intend to create an immutable map, we can use the ImmutableBiMap class (which follows a builder pattern):
BiMap<String, String> capitalCountryBiMap
  = new ImmutableBiMap.Builder<String, String>()
    .put("New Delhi", "India")
    .build();

4. Using the BiMap

Let’s start with a simple example showing the usage of BiMap, where we can get a key based on a value and a value based on a key:

@Test
public void givenBiMap_whenQueryByValue_shouldReturnKey() {
    BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
    capitalCountryBiMap.put("New Delhi", "India");
    capitalCountryBiMap.put("Washington, D.C.", "USA");
    capitalCountryBiMap.put("Moscow", "Russia");

    String keyFromBiMap = capitalCountryBiMap.inverse().get("Russia");
    String valueFromBiMap = capitalCountryBiMap.get("Washington, D.C.");
 
    assertEquals("Moscow", keyFromBiMap);
    assertEquals("USA", valueFromBiMap);
}

Note: the inverse method above returns the inverse view of the BiMap, which maps each of the BiMap’s values to its associated keys.
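Since the inverse method returns a live view backed by the same data, any change made to the original BiMap is immediately visible through it. Here’s a quick sketch of that behavior:

BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
BiMap<String, String> countryCapitalBiMap = capitalCountryBiMap.inverse();

capitalCountryBiMap.put("Moscow", "Russia");

// the new entry shows up in the inverse view without any extra work
assertEquals("Moscow", countryCapitalBiMap.get("Russia"));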

BiMap throws an IllegalArgumentException when we try to store a duplicate value.

Let’s see an example:

@Test(expected = IllegalArgumentException.class)
public void givenBiMap_whenSameValueIsPresent_shouldThrowException() {
    BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
    capitalCountryBiMap.put("Mumbai", "India");
    capitalCountryBiMap.put("Washington, D.C.", "USA");
    capitalCountryBiMap.put("Moscow", "Russia");
    capitalCountryBiMap.put("New Delhi", "India");
}

If we wish to override a value already present in the BiMap, we can make use of the forcePut method:

@Test
public void givenSameValueIsPresent_whenForcePut_completesSuccessfully() {
    BiMap<String, String> capitalCountryBiMap = HashBiMap.create();
    capitalCountryBiMap.put("Mumbai", "India");
    capitalCountryBiMap.put("Washington, D.C.", "USA");
    capitalCountryBiMap.put("Moscow", "Russia");
    capitalCountryBiMap.forcePut("New Delhi", "India");

    assertEquals("USA", capitalCountryBiMap.get("Washington, D.C."));
    assertEquals("Washington, D.C.", capitalCountryBiMap.inverse().get("USA"));
}

5. Conclusion

In this concise tutorial, we illustrated examples of using the BiMap from the Guava library. It is predominantly used to get a key based on its value in the map.

The implementation of these examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.


Java Web Weekly, Issue 160


Lots of solid, reactive-focused talks this week.

Here we go…

1. Spring and Java

>> Java 10 Could Bring Upgraded Lambdas [infoq.com]

A short report about a cool possible enhancement of Lambda Expressions in Java 10.

>> Reflection vs Encapsulation [blog.codefx.org]

The introduction of modularity to the JVM casts a new light on the age-old Reflection vs Encapsulation discussions.

>> Open your classes and methods in Kotlin [blog.frankel.ch]

Kotlin’s features can sometimes be quite unhandy when working with Spring Boot.

>> Web frameworks and how to survive them [blog.codecentric.de]

Most web frameworks don’t stand the test of time – here are just a couple of reasons why that’s usually the case.

>> How to TDD FizzBuzz with JUnit Theories [opencredo.com]

This is how you overengineer FizzBuzz 🙂

>> Ultimate Guide to JPQL Queries with JPA and Hibernate [thoughts-on-java.org]

A comprehensive guide to JPQL with JPA / Hibernate.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Deploying Pull Requests with Docker [blog.codecentric.de]

A good way to make your Pull Requests easily testable by making good use of Docker containerization.

>> A Probably Incomplete, Comprehensive Guide to the Many Different Ways to JOIN Tables in SQL [blog.jooq.org]

A solid reference to JOINing in SQL.

>> Microservice using AWS API Gateway, AWS Lambda and Couchbase [blog.couchbase.com]

Short tutorial showing how to create a less standard style of microservice – using AWS API Gateway, AWS Lambda and Couchbase.

>> Flyway Tutorial – Managing Database Migrations [blog.codecentric.de]

Quick write-up showcasing Flyway – a database migration tool that uses immutable migration files.

>> Property Based Testing with Javaslang [sitepoint.com]

It turns out you can do Property Testing with Javaslang too 🙂

Also worth reading:

3. Musings

>> Types and Tests  [blog.cleancoder.com]

Continuation of the discussion about types and pros/cons of Static Typing.

>> Technodiversity [pointersgonewild.com]

Looks like technological diversity has more ‘pros’ than ‘cons’. Definitely an interesting read.

>> Couchbase Customer Advisory Note – Security  [blog.couchbase.com]

A few security “rules of thumb” for Couchbase users.

Considering just how many production instances seem to be wide-open – this one’s surprisingly relevant. And not just for Couchbase.

>> How to Turn Requirements into User Stories [daedtech.com]

A short guide to the effective conversion of requirements into User Stories.

Throughout my career, this has been an interesting skill to track, because it looks deceptively simple, but it’s generally quite the opposite.

>> 5 Code Review Tricks the Experts Use – Based on 3.2 Million Lines of Code [blog.takipi.com]

The title says everything 🙂

>> Top Heavy Department Growth [daedtech.com]

A few interesting insights about how organizations grow.

There are a few good ways to organically grow an organization well, and a whole lot of not so good ways as well.

>> Forget ISO-8859-1 [techblog.bozho.net]

Arguments for sticking to UTF-8.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Optimist employees [dilbert.com]

>> CEO Wisdom [dilbert.com]

>> Why are you wearing gloves? [dilbert.com]

5. Pick of the Week

>> Laws of 10x found everywhere. For good reason? [asmartbear.com]

Iterable to Stream in Java


1. Overview

In this short tutorial, let’s convert a Java Iterable object into a Stream and perform some standard operations on it.

2. Converting Iterable to Stream

The Iterable interface is designed with generality in mind and does not provide any stream() method on its own.

Simply put, we can pass the Iterable’s Spliterator to the StreamSupport.stream() method and get a Stream from the given Iterable instance.

Let’s consider our Iterable instance:

Iterable<String> iterable 
  = Arrays.asList("Testing", "Iterable", "conversion", "to", "Stream");

And here’s how we can convert this Iterable instance into a Stream:

StreamSupport.stream(iterable.spliterator(), false);

Note that the second param in StreamSupport.stream() determines whether the resulting Stream should be parallel or sequential; we should set it to true for a parallel Stream.

Now let’s test our implementation:

@Test
public void givenIterable_whenConvertedToStream_thenNotNull() {
    Iterable<String> iterable 
      = Arrays.asList("Testing", "Iterable", "conversion", "to", "Stream");

    Assert.assertNotNull(StreamSupport.stream(iterable.spliterator(), false));
}

Also, a quick side note – Streams are not reusable, while an Iterable is; the Iterable also provides a spliterator() method, which returns a java.util.Spliterator instance over the elements described by the given Iterable.
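Here’s a quick sketch of that difference; the same Stream instance can’t be consumed twice, but we can always ask the Iterable for a fresh Spliterator and build a new Stream:

Stream<String> stream = StreamSupport.stream(iterable.spliterator(), false);
stream.forEach(System.out::println);

// consuming the same Stream again would throw an IllegalStateException:
// stream.forEach(System.out::println);

// instead, we obtain a new Spliterator from the Iterable and build a fresh Stream
Stream<String> freshStream = StreamSupport.stream(iterable.spliterator(), false);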

3. Performing Stream Operation

Let’s perform a simple stream operation:

@Test
public void whenConvertedToList_thenCorrect() {
    Iterable<String> iterable 
      = Arrays.asList("Testing", "Iterable", "conversion", "to", "Stream");

    List<String> result = StreamSupport.stream(iterable.spliterator(), false)
      .map(String::toUpperCase)
      .collect(Collectors.toList());

    assertThat(
      result, contains("TESTING", "ITERABLE", "CONVERSION", "TO", "STREAM"));
}

4. Conclusion

This simple tutorial shows how you can convert an Iterable instance into a Stream instance and perform standard operations on it, just like you would have done for any other Collection instance.

The implementation of all the code snippets can be found in the GitHub project.

Concurrency with LMAX Disruptor – An Introduction


1. Overview

This article introduces the LMAX Disruptor and talks about how it helps to achieve software concurrency with low latency. We will also see a basic usage of the Disruptor library.

2. What is a Disruptor?

Disruptor is an open source Java library written by LMAX. It is a concurrent programming framework for processing a large number of transactions with low latency (and without the complexities of concurrent code). The performance optimization is achieved by a software design that exploits the efficiency of the underlying hardware.

2.1. Mechanical Sympathy

Let’s start with the core concept of mechanical sympathy – that is all about understanding how the underlying hardware operates and programming in a way that best works with that hardware.

For example, let’s see how CPU and memory organization can impact software performance. The CPU has several layers of cache between it and main memory. When the CPU is performing an operation, it first looks in L1 for the data, then L2, then L3, and finally, the main memory.  The further it has to go, the longer the operation will take.

If the same operation is performed on a piece of data multiple times (for example, a loop counter), it makes sense to load that data into a place very close to the CPU.

Some indicative figures for the cost of cache misses:

Latency from CPU to    CPU cycles       Time
Main memory            Multiple         ~60-80 ns
L3 cache               ~40-45 cycles    ~15 ns
L2 cache               ~10 cycles       ~3 ns
L1 cache               ~3-4 cycles      ~1 ns
Register               1 cycle          Very, very quick

2.2. Why Not Queues

Queue implementations tend to have write contention on the head, tail, and size variables. Queues typically run close to full or close to empty due to the differences in pace between consumers and producers. They very rarely operate in a balanced middle ground where the rate of production and consumption is evenly matched.

To deal with the write contention, a queue often uses locks, which can cause a context switch to the kernel. When this happens the processor involved is likely to lose the data in its caches.

To get the best caching behavior, the design should have only one core writing to any memory location (multiple readers are fine, as processors often use special high-speed links between their caches). Queues fail the one-writer principle.

If two separate threads are writing to two different values that happen to sit on the same cache line, each core invalidates the cache line of the other (data is transferred between main memory and cache in blocks of fixed size, called cache lines). That is write contention between the two threads even though they’re writing to two different variables. This is called false sharing: every time the head is updated, the cache line holding the tail is invalidated too, and vice versa.

2.3. How the Disruptor Works

(Figure: Ring buffer overview and its API)

The Disruptor has an array-based circular data structure (ring buffer): an array, filled with pre-allocated transfer objects, with a pointer to the next available slot. Producers and consumers write and read data to and from the ring without locking or contention.

In a Disruptor, all events are published to all consumers (multicast), for parallel consumption through separate downstream queues. Due to parallel processing by consumers, it is necessary to coordinate dependencies between the consumers (dependency graph).

Each producer and consumer has a sequence counter to indicate which slot in the buffer it is currently working on. A producer/consumer can write only its own sequence counter, but can read the others’ sequence counters. By reading these counters, a producer can ensure the slot it wants to write to is available without using any locks.

3. Using the Disruptor Library

3.1. Maven Dependency

Let’s start by adding the Disruptor library dependency in the pom.xml:

<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.6</version>
</dependency>

The latest version of the dependency can be checked here.

3.2. Defining an Event

Let’s define the event that carries the data:

public static class ValueEvent {
    private int value;
    public final static EventFactory<ValueEvent> EVENT_FACTORY 
      = () -> new ValueEvent();

    // standard getters and setters
}

The EventFactory lets the Disruptor preallocate the events.

3.3. Consumer

Consumers read data from the ring buffer. Let’s define a consumer that will handle the events:

public class SingleEventPrintConsumer {
    ...

    public EventHandler<ValueEvent>[] getEventHandler() {
        EventHandler<ValueEvent> eventHandler 
          = (event, sequence, endOfBatch) 
            -> print(event.getValue(), sequence);
        return new EventHandler[] { eventHandler };
    }
 
    private void print(int id, long sequenceId) {
        logger.info("Id is " + id 
          + " sequence id that was used is " + sequenceId);
    }
}

In our example, the consumer is just printing to a log.

3.4. Constructing the Disruptor

Construct the Disruptor:

ThreadFactory threadFactory = DaemonThreadFactory.INSTANCE;

WaitStrategy waitStrategy = new BusySpinWaitStrategy();
Disruptor<ValueEvent> disruptor 
  = new Disruptor<>(
    ValueEvent.EVENT_FACTORY, 
    16, 
    threadFactory, 
    ProducerType.SINGLE, 
    waitStrategy);

In the constructor of Disruptor, the following are defined:

  • Event Factory – Responsible for generating objects which will be stored in ring buffer during initialization
  • The size of Ring Buffer – We have defined 16 as the size of the ring buffer. It has to be a power of 2, or else an exception is thrown during initialization. This matters because it lets the Disruptor wrap a sequence number to a slot index with a cheap bit mask instead of a mod operation (see the sketch after this list)
  • Thread Factory – Factory to create threads for event processors
  • Producer Type – Specifies whether we will have single or multiple producers
  • Waiting strategy – Defines how we would like to handle a slow subscriber that doesn’t keep up with the producer’s pace
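To illustrate the power-of-two point above, here’s a small sketch (not part of the Disruptor API) showing how a sequence number can be wrapped to a slot index with a single bit mask instead of a mod operation:

int bufferSize = 16;   // must be a power of two
long sequence = 42L;

// equivalent to sequence % bufferSize, but computed with one bitwise AND
int slotIndex = (int) (sequence & (bufferSize - 1));
// slotIndex == 10, exactly the same as 42 % 16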

Connect the consumer handler:

disruptor.handleEventsWith(getEventHandler());

It is possible to supply multiple consumers to the Disruptor to handle the data produced by the producer. In the example above, we have just one consumer, a.k.a. event handler.

3.5. Starting the Disruptor

To start the Disruptor:

RingBuffer<ValueEvent> ringBuffer = disruptor.start();

3.6. Producing and Publishing Events

Producers place the data in the ring buffer in a sequence. Producers have to be aware of the next available slot so that they don’t overwrite data that is not yet consumed.

Use the RingBuffer from Disruptor for publishing:

for (int eventCount = 0; eventCount < 32; eventCount++) {
    long sequenceId = ringBuffer.next();
    ValueEvent valueEvent = ringBuffer.get(sequenceId);
    valueEvent.setValue(eventCount);
    ringBuffer.publish(sequenceId);
}

Here, the producer is producing and publishing items in sequence. It is important to note that the Disruptor works similarly to a two-phase commit protocol: the producer first claims a new sequenceId, writes the event data, and then publishes it. The next call to next() returns sequenceId + 1.
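If populating the event can fail, a commonly recommended pattern is to guard the publish call in a finally block, so that a claimed slot is always released; here is a sketch of a single iteration:

long sequenceId = ringBuffer.next();
try {
    ValueEvent valueEvent = ringBuffer.get(sequenceId);
    valueEvent.setValue(eventCount);
} finally {
    // publishing in finally ensures the claimed slot is released
    // even if setting up the event throws
    ringBuffer.publish(sequenceId);
}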

4. Conclusion

In this tutorial, we have seen what a Disruptor is and how it achieves concurrency with low latency. We have seen the concept of mechanical sympathy and how it may be exploited to achieve low latency. We have then seen an example using the Disruptor library.

The example code can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

A Guide to LinkedHashMap in Java


1. Overview

In this article, we are going to explore the internal implementation of the LinkedHashMap class. LinkedHashMap is a common implementation of the Map interface.

This particular implementation is a subclass of HashMap and therefore shares the core building blocks of the HashMap implementation. As a result, it’s highly recommended to brush up on that before proceeding with this article.

2. LinkedHashMap vs HashMap

The LinkedHashMap class is very similar to HashMap in most aspects. However, the linked hash map is based on both a hash table and a linked list to enhance the functionality of the hash map.

It maintains a doubly-linked list running through all its entries in addition to an underlying array of default size 16.

To maintain the order of elements, the linked hashmap modifies the Map.Entry class of HashMap by adding pointers to the next and previous entries:

static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before, after;
    Entry(int hash, K key, V value, Node<K,V> next) {
        super(hash, key, value, next);
    }
}

Notice that the Entry class simply adds two pointers, before and after, which enable it to hook itself into the linked list. Aside from that, it reuses the Entry implementation of HashMap.

Finally, remember that this linked list defines the order of iteration, which by default is the order of insertion of elements (insertion-order).

3. Insertion-Order LinkedHashMap

Let’s have a look at a linked hash map instance which orders its entries according to how they’re inserted into the map. It also guarantees that this order will be maintained throughout the life cycle of the map:

@Test
public void givenLinkedHashMap_whenGetsOrderedKeyset_thenCorrect() {
    LinkedHashMap<Integer, String> map = new LinkedHashMap<>();
    map.put(1, null);
    map.put(2, null);
    map.put(3, null);
    map.put(4, null);
    map.put(5, null);

    Set<Integer> keys = map.keySet();
    Integer[] arr = keys.toArray(new Integer[0]);

    for (int i = 0; i < arr.length; i++) {
        assertEquals(new Integer(i + 1), arr[i]);
    }
}

Here, we’re simply making a rudimentary, non-conclusive test on the ordering of entries in the linked hash map.

We can guarantee that this test will always pass as the insertion order will always be maintained. We cannot make the same guarantee for a HashMap.

This attribute can be of great advantage in an API that receives any map, makes a copy to manipulate, and returns it to the calling code. If the client needs the returned map to be ordered the same way as before the API call, then a linked hash map is the way to go.

Insertion order is not affected if a key is re-inserted into the map.
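For instance, here’s a minimal sketch showing that re-inserting an existing key updates its value but keeps its position in the iteration order:

LinkedHashMap<Integer, String> map = new LinkedHashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(1, "ONE"); // same key, new value

// the key set order is still [1, 2]
assertEquals("[1, 2]", map.keySet().toString());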

4. Access-Order LinkedHashMap

LinkedHashMap provides a special constructor which enables us to specify, in addition to a custom load factor (LF) and initial capacity, a different ordering mechanism/strategy called access-order:

LinkedHashMap<Integer, String> map = new LinkedHashMap<>(16, .75f, true);

The first parameter is the initial capacity, followed by the load factor; the last param is the ordering mode. So, by passing in true, we turned on access-order, whereas the default is insertion-order.

This mechanism ensures that the order of iteration of elements is the order in which the elements were last accessed, from least-recently accessed to most-recently accessed.

And so, building a Least Recently Used (LRU) cache is quite easy and practical with this kind of map. A successful put or get operation results in an access of the entry:

@Test
public void givenLinkedHashMap_whenAccessOrderWorks_thenCorrect() {
    LinkedHashMap<Integer, String> map 
      = new LinkedHashMap<>(16, .75f, true);
    map.put(1, null);
    map.put(2, null);
    map.put(3, null);
    map.put(4, null);
    map.put(5, null);

    Set<Integer> keys = map.keySet();
    assertEquals("[1, 2, 3, 4, 5]", keys.toString());
 
    map.get(4);
    assertEquals("[1, 2, 3, 5, 4]", keys.toString());
 
    map.get(1);
    assertEquals("[2, 3, 5, 4, 1]", keys.toString());
 
    map.get(3);
    assertEquals("[2, 5, 4, 1, 3]", keys.toString());
}

Notice how the order of elements in the key set is transformed as we perform access operations on the map.

Simply put, any access operation on the map results in an order such that the element that was accessed would appear last if an iteration were to be carried out right away.

After the above examples, it should be obvious that a putAll operation generates one entry access for each of the mappings in the specified map.

Naturally, iteration over a view of the map doesn’t affect the order of iteration of the backing map; only explicit access operations on the map will affect the order.

LinkedHashMap also provides a mechanism for maintaining a fixed number of mappings, dropping the oldest entries whenever a new one needs to be added.

The removeEldestEntry method may be overridden to enforce this policy for removing stale mappings automatically.

To see this in practice, let us create our own linked hash map class, for the sole purpose of enforcing the removal of stale mappings by extending LinkedHashMap:

public class MyLinkedHashMap<K, V> extends LinkedHashMap<K, V> {

    private static final int MAX_ENTRIES = 5;

    public MyLinkedHashMap(
      int initialCapacity, float loadFactor, boolean accessOrder) {
        super(initialCapacity, loadFactor, accessOrder);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > MAX_ENTRIES;
    }

}

Our override above will allow the map to grow to a maximum size of 5 entries. When the size exceeds that, each new entry will be inserted at the cost of losing the eldest entry in the map, i.e. the entry whose last-access time precedes that of all the other entries:

@Test
public void givenLinkedHashMap_whenRemovesEldestEntry_thenCorrect() {
    LinkedHashMap<Integer, String> map
      = new MyLinkedHashMap<>(16, .75f, true);
    map.put(1, null);
    map.put(2, null);
    map.put(3, null);
    map.put(4, null);
    map.put(5, null);
    Set<Integer> keys = map.keySet();
    assertEquals("[1, 2, 3, 4, 5]", keys.toString());
 
    map.put(6, null);
    assertEquals("[2, 3, 4, 5, 6]", keys.toString());
 
    map.put(7, null);
    assertEquals("[3, 4, 5, 6, 7]", keys.toString());
 
    map.put(8, null);
    assertEquals("[4, 5, 6, 7, 8]", keys.toString());
}

Notice how the oldest entries at the start of the key set keep dropping off as we add new ones to the map.

5. Performance Considerations

Just like HashMap, LinkedHashMap performs the basic Map operations of add, remove and contains in constant time, as long as the hash function disperses the elements properly. It also accepts a null key as well as null values.

However, this constant-time performance of LinkedHashMap is likely to be a little worse than the constant-time of HashMap due to the added overhead of maintaining a doubly-linked list.

Iteration over the collection views of LinkedHashMap also takes linear time, O(n), similar to that of HashMap. On the flip side, LinkedHashMap’s iteration is typically faster than HashMap’s.

This is because, for LinkedHashMap, n in O(n) is only the number of entries in the map, regardless of the capacity. Whereas, for HashMap, n is the capacity plus the size, O(capacity + size).

Load Factor and Initial Capacity are defined precisely as for HashMap. Note, however, that the penalty for choosing an excessively high value for initial capacity is less severe for LinkedHashMap than for HashMap, as iteration times for this class are unaffected by capacity.

6. Concurrency

Just like HashMap, LinkedHashMap implementation is not synchronized. So if you are going to access it from multiple threads and at least one of these threads is likely to change it structurally, then it must be externally synchronized.

It’s best to do this at creation:

Map m = Collections.synchronizedMap(new LinkedHashMap());

The difference from HashMap lies in what counts as a structural modification. In access-ordered linked hash maps, merely calling the get API results in a structural modification, alongside operations like put and remove.
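This means that for an access-ordered map even reads need to be synchronized; here’s a sketch of the safe iteration pattern:

Map<Integer, String> map = Collections.synchronizedMap(
  new LinkedHashMap<>(16, .75f, true));

// get() structurally modifies an access-ordered map, but the wrapper guards it;
// iteration, however, must manually hold the map's monitor
synchronized (map) {
    for (Integer key : map.keySet()) {
        // process the mapping
    }
}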

7. Conclusion

In this article, we have explored the Java LinkedHashMap class as one of the most used implementations of the Map interface. We have also explored its internal workings and how it differs from HashMap, its superclass.

Hopefully, after having read this post, you can make more informed and effective decisions as to which Map implementation to employ in your use case.

The full source code for all the examples used in this article can be found in the GitHub project.

findFirst() and findAny() in the Java 8 Stream API


1. Introduction

The Java 8 Stream API introduced two methods that are often misunderstood: findAny() and findFirst().

In this quick tutorial, we will be looking at the difference between these two methods and when to use them.

2. Using the Stream.findAny()

As the name suggests, the findAny() method allows us to find any element from a Stream. We use it when we’re looking for an element without paying attention to the encounter order.

The method returns an Optional instance, which is empty if the Stream is empty:

@Test
public void createStream_whenFindAnyResultIsPresent_thenCorrect() {
    List<String> list = Arrays.asList("A","B","C","D");

    Optional<String> result = list.stream().findAny();

    assertTrue(result.isPresent());
    assertThat(result.get(), anyOf(is("A"), is("B"), is("C"), is("D")));
}

In a non-parallel operation, it will most likely return the first element in the Stream, but there is no guarantee for this.

In a parallel operation, the result cannot be reliably determined, because findAny() is free to pick any element for maximum performance:

@Test
public void createParallelStream_whenFindAnyResultIsNotFirst_thenCorrect() {
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    Optional<Integer> result = list
      .stream().parallel()
      .filter(num -> num < 4).findAny();

    assertTrue(result.isPresent());
    assertThat(result.get(), anyOf(is(1), is(2), is(3)));
}

3. Using the Stream.findFirst()

The findFirst() method finds the first element in a Stream. Obviously, this method is used when you specifically want the first element from a sequence.

When there is no encounter order, it returns any element from the Stream. The java.util.stream package documentation says:

Streams may or may not have a defined encounter order. It depends on the source and the intermediate operations.

The return type is also an Optional instance, which is empty if the Stream is empty:

@Test
public void createStream_whenFindFirstResultIsPresent_thenCorrect() {

    List<String> list = Arrays.asList("A", "B", "C", "D");

    Optional<String> result = list.stream().findFirst();

    assertTrue(result.isPresent());
    assertThat(result.get(), is("A"));
}

The behavior of the findFirst method does not change in the parallel scenario. If an encounter order exists, it will always behave deterministically.
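As a quick sketch (with a made-up test name), running the earlier parallel pipeline with findFirst() instead of findAny() gives a deterministic result:

@Test
public void createParallelStream_whenFindFirstResultIsFirst_thenCorrect() {
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    Optional<Integer> result = list
      .stream().parallel()
      .filter(num -> num < 4).findFirst();

    // always 1, because the list has a defined encounter order
    assertEquals(Integer.valueOf(1), result.get());
}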

4. Conclusion

In this tutorial, we looked at the findAny() and findFirst() methods of the Java 8 Streams API. The findAny() method returns any element from a Stream while the findFirst() method returns the first element in a Stream.

You can find the complete source code and all code snippets for this article over on GitHub.

Spring Cloud Sleuth in a Monolith Application


1. Overview

In this article, we’re introducing Spring Cloud Sleuth – a powerful tool for enhancing logs in any application, but especially in a system built up of multiple services.

And for this write-up, we’re going to focus on using Sleuth in a monolith application, not across microservices.

We’ve all had the unfortunate experience of trying to diagnose a problem with a scheduled task, a multi-threaded operation, or a complex web request. Often, even when there is logging, it is hard to tell what actions need to be correlated together to create a single request.

This can make diagnosing a complex action very difficult or even impossible, often resulting in solutions like passing a unique id to each method in the request to identify the logs.

In comes Sleuth. This library makes it possible to identify logs pertaining to a specific job, thread, or request. Sleuth integrates effortlessly with logging frameworks like Logback and SLF4J to add unique identifiers that help track and diagnose issues using logs.

Let’s take a look at how it works.

2. Setup

We’ll start by creating a Spring Boot web project in our favorite IDE and adding this dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

Our application runs with Spring Boot and the parent pom provides versions for each entry. The latest version of this dependency can be found here: spring-cloud-starter-sleuth. To see the entire POM check out the project on GitHub.

Additionally, let’s add an application name to instruct Sleuth to identify this application’s logs.

In our application.properties file add this line:

spring.application.name=Baeldung Sleuth Tutorial

3. Sleuth Configurations

Sleuth is capable of enhancing logs in many situations. The library comes ready with filters that add unique ids to each web request that enters our application. Furthermore, the Spring team has added support for sharing these ids across thread boundaries.

A trace can be thought of as a single request or job that is triggered in an application. All the various steps in that request, even across application and thread boundaries, will have the same traceId.

Spans, on the other hand, can be thought of as sections of a job or request. A single trace can be composed of multiple spans, each correlating to a specific step or section of the request. Using trace and span ids, we can pinpoint exactly when and where our application is as it processes a request, making our logs much easier to read.

In our examples, we will explore these capabilities in a single application.

3.1. Simple Web Request

First, let’s create a controller class to be an entry point to work with:

@RestController
public class SleuthController {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @GetMapping("/")
    public String helloSleuth() {
        logger.info("Hello Sleuth");
        return "success";
    }
}

Let’s run our application and navigate to “http://localhost:8080”. Watch the logs for output that looks like:

2017-01-10 22:36:38.254  INFO 
  [Baeldung Sleuth Tutorial,4e30f7340b3fb631,7fd8bd536e28479b,false] 12516 
  --- [nio-8080-exec-1] c.b.spring.session.SleuthController : Hello Sleuth

This looks like a normal log, except for the part in the beginning between the brackets. This is the core information that Spring Sleuth has added. This data follows the format of:

[application name, traceId, spanId, export]

  • Application name – This is the name we set in the properties file and can be used to aggregate logs from multiple instances of the same application.
  • TraceId – This is an id that is assigned to a single request, job, or action. For example, each unique user-initiated web request will have its own traceId.
  • SpanId – Tracks a unit of work. Think of a request that consists of multiple steps. Each step could have its own spanId and be tracked individually.
  • Export – This property is a boolean that indicates whether or not this log was exported to an aggregator like Zipkin. Zipkin is beyond the scope of this article but plays an important role in analyzing logs created by Sleuth.

By now, you should have some idea of the power of this library. Let’s take a look at another example to further demonstrate how integral this library is to logging.

3.2. Simple Web Request with Service Access

Let’s start by creating a service with a single method:

@Service
public class SleuthService {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    public void doSomeWorkSameSpan() throws InterruptedException {
        Thread.sleep(1000L);
        logger.info("Doing some work");
    }
}

Now let’s inject our service into our controller and add a request mapping method that accesses it:

@Autowired
private SleuthService sleuthService;

@GetMapping("/same-span")
public String helloSleuthSameSpan() throws InterruptedException {
    logger.info("Same Span");
    sleuthService.doSomeWorkSameSpan();
    return "success";
}

Finally, restart the application and navigate to “http://localhost:8080/same-span”. Watch for log output that looks like:

2017-01-10 22:51:47.664  INFO 
  [Baeldung Sleuth Tutorial,b77a5ea79036d5b9,661b8087cd9d8f51,false] 12516 
  --- [nio-8080-exec-3] c.b.spring.session.SleuthController      : Same Span
2017-01-10 22:51:48.664  INFO 
  [Baeldung Sleuth Tutorial,b77a5ea79036d5b9,661b8087cd9d8f51,false] 12516 
  --- [nio-8080-exec-3] c.baeldung.spring.session.SleuthService  : Doing some work

Take note that the trace and span ids are the same between the two logs even though the messages originate from two different classes. This makes it trivial to identify each log during a request by searching for the traceId of that request.

This is the default behavior: one request gets a single traceId and spanId. But we can manually add spans as we see fit. Let’s take a look at an example that uses this feature.

3.3. Manually Adding a Span

To start, let’s add a new request mapping method to our controller:

@GetMapping("/new-span")
public String helloSleuthNewSpan() {
    logger.info("New Span");
    sleuthService.doSomeWorkNewSpan();
    return "success";
}

And now let’s add the new method inside our service:

@Autowired
private Tracer tracer;
// ...
public void doSomeWorkNewSpan() throws InterruptedException {
    logger.info("I'm in the original span");

    Span newSpan = tracer.createSpan("newSpan");
    try {
        Thread.sleep(1000L);
        logger.info("I'm in the new span doing some cool work that needs its own span");
    } finally {
        tracer.close(newSpan);
    }

    logger.info("I'm in the original span");
}

Note that we also added a new object, Tracer. The tracer instance is created by Spring Sleuth during startup and is made available to our class through dependency injection.

Spans must be manually started and stopped. To accomplish this, code that runs in a manually created span is placed inside a try-finally block to ensure the span is closed regardless of the operation’s success.

Restart the application and navigate to “http://localhost:8080/new-span”. Watch for the log output that looks like:

2017-01-11 21:07:54.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,1e706f252a0ee9c2,false] 12516 
  --- [nio-8080-exec-6] c.b.spring.session.SleuthController      : New Span
2017-01-11 21:07:54.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,1e706f252a0ee9c2,false] 12516 
  --- [nio-8080-exec-6] c.baeldung.spring.session.SleuthService  : 
  I'm in the original span
2017-01-11 21:07:55.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,9e9ddea8f2a7c8ce,false] 12516 
  --- [nio-8080-exec-6] c.baeldung.spring.session.SleuthService  : 
  I'm in the new span doing some cool work that needs its own span
2017-01-11 21:07:55.924  
  INFO [Baeldung Sleuth Tutorial,9cdebbffe8bbbade,1e706f252a0ee9c2,false] 12516 
  --- [nio-8080-exec-6] c.baeldung.spring.session.SleuthService  : 
  I'm in the original span

We can see that the third log shares the traceId with the others, but it has a unique spanId. This can be used to locate different sections in a single request for more fine-grained tracing.

Now let’s take a look at Sleuth’s support for threads.

3.4. Spanning Runnables

To demonstrate the threading capabilities of Sleuth let’s first add a configuration class to set up a thread pool:

@Configuration
public class ThreadConfig {

    @Autowired
    private BeanFactory beanFactory;

    @Bean
    public Executor executor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor
         = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setCorePoolSize(1);
        threadPoolTaskExecutor.setMaxPoolSize(1);
        threadPoolTaskExecutor.initialize();

        return new LazyTraceExecutor(beanFactory, threadPoolTaskExecutor);
    }
}

It is important to note here the use of LazyTraceExecutor. This class comes from the Sleuth library and is a special kind of executor that will propagate our traceIds to new threads and create new spanIds in the process.

Now let’s wire this executor into our controller and use it in a new request mapping method:

@Autowired
private Executor executor;

@GetMapping("/new-thread")
public String helloSleuthNewThread() {
    logger.info("New Thread");
    Runnable runnable = () -> {
        try {
            Thread.sleep(1000L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        logger.info("I'm inside the new thread - with a new span");
    };
    executor.execute(runnable);

    logger.info("I'm done - with the original span");
    return "success";
}

With our runnable in place, let’s restart our application and navigate to “http://localhost:8080/new-thread”. Watch for log output that looks like:

2017-01-11 21:18:15.949  
  INFO [Baeldung Sleuth Tutorial,96076a78343c364d,5179d4eeb0037a86,false] 12516 
  --- [nio-8080-exec-9] c.b.spring.session.SleuthController      : New Thread
2017-01-11 21:18:15.950  
  INFO [Baeldung Sleuth Tutorial,96076a78343c364d,5179d4eeb0037a86,false] 12516 
  --- [nio-8080-exec-9] c.b.spring.session.SleuthController      : 
  I'm done - with the original span
2017-01-11 21:18:16.953  
  INFO [Baeldung Sleuth Tutorial,96076a78343c364d,e3b6a68013ddfeea,false] 12516 
  --- [lTaskExecutor-1] c.b.spring.session.SleuthController      : 
  I'm inside the new thread - with a new span

Much like the previous example, we can see that all the logs share the same traceId. But the log coming from the runnable has a unique span that will track the work done in that thread. Remember that this happens because of the LazyTraceExecutor; if we were to use a normal executor, we would continue to see the same spanId used in the new thread.

Now let’s look into Sleuth’s support for @Async methods.

3.5. @Async Support

To add async support let’s first modify our ThreadConfig class to enable this feature:

@Configuration
@EnableAsync
public class ThreadConfig extends AsyncConfigurerSupport {
    
    //...
    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setCorePoolSize(1);
        threadPoolTaskExecutor.setMaxPoolSize(1);
        threadPoolTaskExecutor.initialize();

        return new LazyTraceExecutor(beanFactory, threadPoolTaskExecutor);
    }
}

Note that we extend AsyncConfigurerSupport to specify our async executor and use LazyTraceExecutor to ensure traceIds and spanIds are propagated correctly. We have also added @EnableAsync to the top of our class.

Let’s now add an async method to our service:

@Async
public void asyncMethod() throws InterruptedException {
    logger.info("Start Async Method");
    Thread.sleep(1000L);
    logger.info("End Async Method");
}

Now let’s call into this method from our controller:

@GetMapping("/async")
public String helloSleuthAsync() {
    logger.info("Before Async Method Call");
    sleuthService.asyncMethod();
    logger.info("After Async Method Call");
    
    return "success";
}

Finally, let’s restart our service and navigate to “http://localhost:8080/async”. Watch for the log output that looks like:

2017-01-11 21:30:40.621  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,65f7e9a59b52e82d,false] 10072 
  --- [nio-8080-exec-2] c.b.spring.session.SleuthController      : 
  Before Async Method Call
2017-01-11 21:30:40.622  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,65f7e9a59b52e82d,false] 10072 
  --- [nio-8080-exec-2] c.b.spring.session.SleuthController      : 
  After Async Method Call
2017-01-11 21:30:40.622  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,8a9f3f097dca6a9e,false] 10072 
  --- [lTaskExecutor-1] c.baeldung.spring.session.SleuthService  : 
  Start Async Method
2017-01-11 21:30:41.622  
  INFO [Baeldung Sleuth Tutorial,c187f81915377fff,8a9f3f097dca6a9e,false] 10072 
  --- [lTaskExecutor-1] c.baeldung.spring.session.SleuthService  : 
  End Async Method

We can see here that much like our runnable example, Sleuth propagates the traceId into the async method and adds a unique spanId.

Let’s now work through an example using spring support for scheduled tasks.

3.6. @Scheduled Support

Finally, let’s look at how Sleuth works with @Scheduled methods. To do this let’s update our ThreadConfig class to enable scheduling:

@Configuration
@EnableAsync
@EnableScheduling
public class ThreadConfig extends AsyncConfigurerSupport
  implements SchedulingConfigurer {
 
    //...
    
    @Override
    public void configureTasks(ScheduledTaskRegistrar scheduledTaskRegistrar) {
        scheduledTaskRegistrar.setScheduler(schedulingExecutor());
    }

    @Bean(destroyMethod = "shutdown")
    public Executor schedulingExecutor() {
        return Executors.newScheduledThreadPool(1);
    }
}

Note that we have implemented the SchedulingConfigurer interface and overridden its configureTasks method. We have also added @EnableScheduling to the top of our class.

Next, let’s add a service for our scheduled tasks:

@Service
public class SchedulingService {

    private Logger logger = LoggerFactory.getLogger(this.getClass());
 
    @Autowired
    private SleuthService sleuthService;

    @Scheduled(fixedDelay = 30000)
    public void scheduledWork() throws InterruptedException {
        logger.info("Start some work from the scheduled task");
        sleuthService.asyncMethod();
        logger.info("End work from scheduled task");
    }
}

In this class, we have created a single scheduled task with a fixed delay of 30 seconds.

Let’s now restart our application and wait for our task to be executed. Watch the console for output like this:

2017-01-11 21:30:58.866  
  INFO [Baeldung Sleuth Tutorial,3605f5deaea28df2,3605f5deaea28df2,false] 10072 
  --- [pool-1-thread-1] c.b.spring.session.SchedulingService     : 
  Start some work from the scheduled task
2017-01-11 21:30:58.866  
  INFO [Baeldung Sleuth Tutorial,3605f5deaea28df2,3605f5deaea28df2,false] 10072 
  --- [pool-1-thread-1] c.b.spring.session.SchedulingService     : 
  End work from scheduled task

We can see here that Sleuth has created new trace and span ids for our task. Each instance of the task will get its own trace and span by default.

4. Conclusion

In conclusion, we have seen how Spring Sleuth can be used in a variety of situations inside a single web application. We can use this technology to easily correlate logs from a single request, even when that request spans multiple threads.

By now we can see how Spring Cloud Sleuth can help us keep our sanity when debugging a multi-threaded environment. By identifying each operation with a traceId and each step with a spanId, we can really begin to break down our analysis of complex jobs in our logs.

Even if we don’t go to the cloud, Spring Sleuth is likely a critical dependency in almost any project; it’s seamless to integrate and is a massive addition of value.

From here you may want to investigate other features of Sleuth. It can support tracing in distributed systems using RestTemplate, across messaging protocols used by RabbitMQ and Redis, and through a gateway like Zuul.

As always, you can find the source code over on GitHub.

JSON Processing in Java EE 7


1. Overview

This article will show you how to process JSON using only core Java EE, without the use of third-party dependencies like Jersey or Jackson. Pretty much everything we’ll be using is provided by the javax.json package.

2. Writing an Object to JSON String

Converting a Java object into a JSON String is super easy. Let’s assume we have a simple Person class:

public class Person {
    private String firstName;
    private String lastName;
    private Date birthdate;

    // getters and setters
}

To convert an instance of that class to a JSON String, first we need to create an instance of JsonObjectBuilder and add property/value pairs using the add() method:

JsonObjectBuilder objectBuilder = Json.createObjectBuilder()
  .add("firstName", person.getFirstName())
  .add("lastName", person.getLastName())
  .add("birthdate", new SimpleDateFormat("DD/MM/YYYY")
  .format(person.getBirthdate()));

Notice that the add() method has a few overloaded versions. It can receive most of the primitive types (as well as boxed objects) as its second parameter.
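For instance, here’s a quick sketch with made-up properties showing a few of those overloads:

JsonObject sample = Json.createObjectBuilder()
  .add("name", "Michael")  // String
  .add("age", 38)          // int
  .add("height", 1.82)     // double
  .add("active", true)     // boolean
  .build();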

Once we’re done setting the properties, we just need to write the object into a String:

JsonObject jsonObject = objectBuilder.build();
        
String jsonString;
try(Writer writer = new StringWriter()) {
    Json.createWriter(writer).write(jsonObject);
    jsonString = writer.toString();
}

And that’s it! The generated String will look like this:

{"firstName":"Michael","lastName":"Scott","birthdate":"06/15/1978"}

2.1. Using JsonArrayBuilder to Build Arrays

Now, to add a little more complexity to our example, let’s assume that the Person class was modified to add a new property called emails which will contain a list of email addresses:

public class Person {
    private String firstName;
    private String lastName;
    private Date birthdate;
    private List<String> emails;
    
    // getters and setters

}

To add all the values from that list to the JsonObjectBuilder we’ll need the help of JsonArrayBuilder:

JsonArrayBuilder arrayBuilder = Json.createArrayBuilder();
                
for(String email : person.getEmails()) {
    arrayBuilder.add(email);
}
        
objectBuilder.add("emails", arrayBuilder);

Notice that we’re using yet another overloaded version of the add() method that takes a JsonArrayBuilder object as its second parameter.

So, let’s look at the generated String for a Person object with two email addresses:

{"firstName":"Michael","lastName":"Scott","birthdate":"06/15/1978",
 "emails":["michael.scott@dd.com","michael.scarn@gmail.com"]}

2.2. Formatting the Output with PRETTY_PRINTING

So we have successfully converted a Java object to a valid JSON String. Now, before moving to the next section, let’s add some simple formatting to make the output more “JSON-like” and easier to read.

In the previous examples, we created a JsonWriter using the straightforward Json.createWriter() static method. In order to get more control over the generated String, we’ll leverage Java EE 7’s JsonWriterFactory to create a writer with a specific configuration.

Map<String, Boolean> config = new HashMap<>();

config.put(JsonGenerator.PRETTY_PRINTING, true);
        
JsonWriterFactory writerFactory = Json.createWriterFactory(config);
        
String jsonString;
 
try(Writer writer = new StringWriter()) {
    writerFactory.createWriter(writer).write(jsonObject);
    jsonString = writer.toString();
}

The code may look a bit verbose, but it really doesn’t do much.

First, it creates an instance of JsonWriterFactory, passing a configuration map to its constructor. The map contains only one entry, which sets the PRETTY_PRINTING property to true. Then, we use that factory instance to create a writer, instead of using Json.createWriter().

The new output will contain the distinctive line breaks and indentation that characterize a pretty-printed JSON String:

{
    "firstName":"Michael",
    "lastName":"Scott",
    "birthdate":"06/15/1978",
    "emails":[
        "michael.scott@dd.com",
        "michael.scarn@gmail.com"
    ]
}

3. Building a Java Object From a String

Now let’s do the opposite operation: convert a JSON String into a Java object.

The main part of the conversion process revolves around JsonObject. To create an instance of this class, use the static method Json.createReader() followed by readObject():

JsonReader reader = Json.createReader(new StringReader(jsonString));

JsonObject jsonObject = reader.readObject();

The createReader() method is overloaded to take either a Reader or an InputStream as a parameter. In this example, we’re using a StringReader, since our JSON is contained in a String object, but the InputStream overload could be used to read content from a file, for example, using FileInputStream.
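For instance, reading from a file would look like this (a sketch; the person.json file name is made up):

try (InputStream is = new FileInputStream("person.json")) {
    JsonObject fromFile = Json.createReader(is).readObject();
    // work with fromFile exactly as with jsonObject above
}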

With an instance of JsonObject at hand, we can read the properties using the getString() method and assign the obtained values to a newly created instance of our Person class:

Person person = new Person();
// same date format used when writing the JSON
DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy");

person.setFirstName(jsonObject.getString("firstName"));
person.setLastName(jsonObject.getString("lastName"));
person.setBirthdate(dateFormat.parse(jsonObject.getString("birthdate")));

3.1. Using JsonArray to Get List Values

We’ll need to use a special class called JsonArray to extract list values from the JsonObject:

JsonArray emailsJson = jsonObject.getJsonArray("emails");

List<String> emails = new ArrayList<>();

for (JsonString j : emailsJson.getValuesAs(JsonString.class)) {
    emails.add(j.getString());
}

person.setEmails(emails);

That’s it! We have created a complete instance of Person from a JSON String.

4. Querying for Values 

Now, let’s assume we are interested in a very specific piece of data that lies inside a JSON String.

Consider the JSON below representing a client from a pet shop. Let’s say that, for some reason, you need to get the name of the third pet from the pets list:

{
    "ownerName": "Robert",
    "pets": [{
        "name": "Kitty",
        "type": "cat"
    }, {
        "name": "Rex",
        "type": "dog"
    }, {
        "name": "Jake",
        "type": "dog"
    }]
}

Converting the whole text into a Java object just to get a single value wouldn’t be very efficient. So, let’s check a couple of strategies to query JSON Strings without having to go through the whole conversion ordeal.

4.1. Querying Using Object Model API

Querying for a property’s value with a known location in the JSON structure is straightforward. We can use an instance of JsonObject, the same class used in previous examples:

JsonReader reader = Json.createReader(new StringReader(jsonString));

JsonObject jsonObject = reader.readObject();

String searchResult = jsonObject
  .getJsonArray("pets")
  .getJsonObject(2)
  .getString("name");

The key here is to navigate through jsonObject properties using the correct sequence of get*() methods.

In this example, we first get a reference to the “pets” list using getJsonArray(), which returns a list with 3 records. Then, we use getJsonObject() method, which takes an index as a parameter, returning another JsonObject representing the third item in the list. Finally, we use getString() to get the string value we are looking for.

4.2. Querying Using Streaming API

Another way to perform precise queries on a JSON String is using the Streaming API, which has JsonParser as its main class.

JsonParser provides extremely fast, read-only, forward access to JSON, with the drawback of being somewhat more complicated than the Object Model:

JsonParser jsonParser = Json.createParser(new StringReader(jsonString));

int count = 0;
String result = null;

while(jsonParser.hasNext()) {
    Event e = jsonParser.next();
    
    if (e == Event.KEY_NAME) {
        if(jsonParser.getString().equals("name")) {
            jsonParser.next();
           
            if(++count == 3) {
                result = jsonParser.getString();
                break;
            }
        }   
    }
}

This example delivers the same result as the previous one. It returns the name from the third pet in the pets list.

Once a JsonParser is created using Json.createParser(), we need to use an iterator (hence the “forward access” nature of the JsonParser) to navigate through the JSON tokens until we get to the property (or properties) we are looking for.

Every time we step through the iterator we move to the next token of the JSON data. So we have to be careful to check if the current token has the expected type. This is done by checking the Event returned by the next() call.

There are many different types of tokens. In this example, we are interested in the KEY_NAME types, which represent the name of a property (e.g. “ownerName”, “pets”, “name”, “type”). Once we’ve stepped through a KEY_NAME token with a value of “name” for the third time, we know that the next token will contain a string value representing the name of the third pet from the list.

This is definitely harder than using the Object Model API, especially for more complicated JSON structures. The choice between one or the other, as always, depends on the specific scenario you will be dealing with.

5. Conclusion

We have covered a lot of ground on the Java EE JSON Processing API with a couple of simple examples. To learn other cool stuff about JSON processing, check our series of Jackson articles.

Check the source code of the classes used in this article, as well as some unit tests, in our GitHub repository.


Guide to Guava RangeSet


1. Overview

In this tutorial, we’ll show how to use Google Guava’s RangeSet interface and its implementations.

A RangeSet is a set comprising zero or more non-empty, disconnected ranges. When adding a range to a mutable RangeSet, any connected ranges are merged together, while empty ranges are ignored.

The basic implementation of RangeSet is a TreeRangeSet.
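Here’s a quick sketch of the merging behavior described above, using TreeRangeSet:

RangeSet<Integer> rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(1, 5));
rangeSet.add(Range.closed(4, 8)); // connects with [1..5]

// the two connected ranges are coalesced into a single [1..8] range
assertEquals(1, rangeSet.asRanges().size());
assertTrue(rangeSet.encloses(Range.closed(1, 8)));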

2. Google Guava’s RangeSet

Let’s have a look at how to use the RangeSet interface.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. Creation

Let’s explore some of the ways in which we can create an instance of RangeSet.

First, we can use the create method from the TreeRangeSet class to create a mutable set:

RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

If we already have a collection of ranges in place, we can use an overloaded version of the create method from the TreeRangeSet class, passing in that collection:

List<Range<Integer>> numberList = Arrays.asList(Range.closed(0, 2));
RangeSet<Integer> numberRangeSet = TreeRangeSet.create(numberList);

Finally, if we need to create an immutable range set, we can use the ImmutableRangeSet class (which follows a builder pattern):

RangeSet<Integer> numberRangeSet 
  = new ImmutableRangeSet.<Integer>builder().add(Range.closed(0, 2)).build();

4. Usage

Let’s start with a simple example that shows the usage of RangeSet.

4.1. Adding Ranges

We can add ranges to a set and then check whether a supplied value falls within any of the range items present in it:

@Test
public void givenRangeSet_whenQueryWithinRange_returnsSuccessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));

    assertTrue(numberRangeSet.contains(1));
    assertFalse(numberRangeSet.contains(9));
}

Notes:

  • The closed method of the Range class creates a range of integer values between 0 and 2 (both inclusive)
  • The Range in the above example consists of integers. We can use a range of any type, as long as it implements the Comparable interface, such as String, Character, floating-point decimals, etc. (see the sketch after this list)
  • In the case of an ImmutableRangeSet, a range item present in the set cannot overlap with a range item that one would like to add. If that happens, we get an IllegalArgumentException
  • Range input to a RangeSet cannot be null. If the input is null, we will get a NullPointerException
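
Here is a minimal sketch of the ImmutableRangeSet overlap rule mentioned above; the overlapping ranges are arbitrary examples:

RangeSet<Integer> immutableRangeSet = new ImmutableRangeSet.<Integer>builder()
  .add(Range.closed(0, 2))
  .add(Range.closed(1, 5)) // overlaps [0..2], so building throws IllegalArgumentException
  .build();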

4.2. Removing a Range

Let’s see how we can remove values from a RangeSet:

@Test
public void givenRangeSet_whenRemoveRangeIsCalled_removesSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    numberRangeSet.add(Range.closed(9, 15));
    numberRangeSet.remove(Range.closed(3, 5));
    numberRangeSet.remove(Range.closed(7, 10));

    assertTrue(numberRangeSet.contains(1));
    assertFalse(numberRangeSet.contains(9));
    assertTrue(numberRangeSet.contains(12));
}

As can be seen, after removal we can still access values present in any of the range items left in the set.

4.3. Range Span

Let’s now see what the overall span of a RangeSet is:

@Test
public void givenRangeSet_whenSpanIsCalled_returnsSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    Range<Integer> numberSpan = numberRangeSet.span();

    assertEquals(0, numberSpan.lowerEndpoint().intValue());
    assertEquals(8, numberSpan.upperEndpoint().intValue());
}

4.4. Getting a Subrange

If we wish to get part of a RangeSet based on a given Range, we can use the subRangeSet method:

@Test
public void 
  givenRangeSet_whenSubRangeSetIsCalled_returnsSubRangeSucessfully() {
  
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    RangeSet<Integer> numberSubRangeSet 
      = numberRangeSet.subRangeSet(Range.closed(4, 14));

    assertFalse(numberSubRangeSet.contains(3));
    assertFalse(numberSubRangeSet.contains(14));
    assertTrue(numberSubRangeSet.contains(7));
}

4.5. Complement Method

Next, let’s get all the values outside of those present in the RangeSet, using the complement method:

@Test
public void givenRangeSet_whenComplementIsCalled_returnsSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 5));
    numberRangeSet.add(Range.closed(6, 8));
    RangeSet<Integer> numberRangeComplementSet
      = numberRangeSet.complement();

    assertTrue(numberRangeComplementSet.contains(-1000));
    assertFalse(numberRangeComplementSet.contains(2));
    assertFalse(numberRangeComplementSet.contains(3));
    assertTrue(numberRangeComplementSet.contains(1000));
}

4.6. Intersection with a Range

Finally, when we would like to check whether any range in the RangeSet intersects with some or all of the values in another given range, we can make use of the intersects method:

@Test
public void givenRangeSet_whenIntersectsWithinRange_returnsSucessfully() {
    RangeSet<Integer> numberRangeSet = TreeRangeSet.create();

    numberRangeSet.add(Range.closed(0, 2));
    numberRangeSet.add(Range.closed(3, 10));
    numberRangeSet.add(Range.closed(15, 18));

    assertTrue(numberRangeSet.intersects(Range.closed(4, 17)));
    assertFalse(numberRangeSet.intersects(Range.closed(19, 200)));
}

5. Conclusion

In this tutorial, we illustrated the RangeSet of the Guava library using some examples. The RangeSet is predominantly used to check whether a value falls within a certain range present in the set.

The implementation of these examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Simple Inheritance with Jackson

1. Overview

In this tutorial, we’re going to take a look at inheritance and in particular how to handle JSON serialization and deserialization of Java classes that extend a superclass.

In order to get started with Jackson, you can have a look at this article here.

2. JSON Inheritance – Real World Scenario

Let’s say that we have an abstract Event class which is used for some sort of event processing:

abstract public class Event {
    private String id;
    private Long timestamp;

    // standard constructors, getters/setters
}

There are two subclasses that extend the Event class, the first is ItemIdAddedToUser, and the second is ItemIdRemovedFromUser.

The problem is that if we want to represent these classes as JSON, we would need to serialize their type information in addition to their ordinary fields. Otherwise, when deserializing into an Event there would be no way of knowing what subclass the JSON represents. Fortunately, there is a mechanism to achieve this in the Jackson library.

3. Implementing Inheritance Using Jackson

There is a @JsonTypeInfo annotation that allows us to store an object’s type information as a JSON field.

Let’s enrich the Event class with the annotation:

@JsonTypeInfo(
  use = JsonTypeInfo.Id.MINIMAL_CLASS,
  include = JsonTypeInfo.As.PROPERTY,
  property = "eventType")
abstract public class Event {
    private String id;
    private Long timestamp;

    @JsonCreator
    public Event(
      @JsonProperty("id") String id,
      @JsonProperty("timestamp") Long timestamp) {
 
        this.id = id;
        this.timestamp = timestamp;
    }
    
    // standard getters
}

In this case, using the annotation will cause the ObjectMapper to add an additional field called eventType to the resulting JSON, with a value equal to the object’s type. Using JsonTypeInfo.Id.MINIMAL_CLASS means that the value of the eventType property will be derived from the class name. Let’s serialize an instance of ItemIdRemovedFromUser:

Event event = new ItemIdRemovedFromUser("1", 12345567L, "item_1", 2L);
ObjectMapper objectMapper = new ObjectMapper();
String eventJson = objectMapper.writeValueAsString(event);

When we print the JSON we will now see that the additional type information is stored:

{  
    "eventType":".ItemIdRemovedFromUser",
    "id":"1",
    "timestamp":12345567,
    "itemId":"item_1",
    "quantity":2
}

Let’s validate that the deserialization works by asserting that the ObjectMapper creates an instance of an ItemIdRemovedFromUser class:

@Test
public void givenRemoveItemJson_whenDeserialize_shouldHaveProperClassType()
  throws IOException {
 
    //given
    Event event = new ItemIdRemovedFromUser("1", 12345567L, "item_1", 2L);
    ObjectMapper objectMapper = new ObjectMapper();
    String eventJson = objectMapper.writeValueAsString(event);

    //when
    Event result = new ObjectMapper().readValue(eventJson, Event.class);

    //then
    assertTrue(result instanceof ItemIdRemovedFromUser);
    assertEquals("item_1", ((ItemIdRemovedFromUser) result).getItemId());
}

4. Ignoring Fields From a Super-Class

Let’s say that we want to extend an Event class, but we want our ObjectMapper to ignore its id field so that it is not present in the resulting JSON.

We can quite easily achieve this by using the @JsonIgnoreProperties annotation:

@JsonIgnoreProperties("id")
public class ItemIdAddedToUser extends Event {
    private String itemId;
    private Long quantity;

    @JsonCreator
    public ItemIdAddedToUser(
      @JsonProperty("id") String id,
      @JsonProperty("timestamp") Long timestamp,
      @JsonProperty("itemId") String itemId,
      @JsonProperty("quantity") Long quantity) {
 
        super(id, timestamp);
        this.itemId = itemId;
        this.quantity = quantity;
    }

    // standard getters
}

Let’s serialize an ItemIdAddedToUser and see which fields are ignored by the ObjectMapper:

Event event = new ItemIdAddedToUser("1", 12345567L, "item_1", 2L);
ObjectMapper objectMapper = new ObjectMapper();
String eventJson = objectMapper.writeValueAsString(event);

The resulting JSON will look like this (note that there is no id field from Event super-class):

{  
    "eventType":".ItemIdAddedToUser",
    "timestamp":12345567,
    "itemId":"item_1",
    "quantity":2
}

Let’s assert that the id field is missing with a test case:

@Test
public void givenAdddItemJson_whenSerialize_shouldIgnoreIdPropertyFromSuperclass()
  throws IOException {
 
    // given
    Event event = new ItemIdAddedToUser("1", 12345567L, "item_1", 2L);
    ObjectMapper objectMapper = new ObjectMapper();
        
    // when
    String eventJson = objectMapper.writeValueAsString(event);

    // then
    assertFalse(eventJson.contains("id"));
}

5. Conclusion

This article demonstrates how to use inheritance with the Jackson library.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Guide to Guava RangeMap

1. Overview

In this tutorial, we’ll show how to use Google Guava’s RangeMap interface and its implementations.

A RangeMap is a special kind of mapping from disjoint, non-empty ranges to non-null values. We can then query the map for the value associated with any key that falls within one of those ranges.

The basic implementation of RangeMap is a TreeRangeMap. Internally the map makes use of a TreeMap to store the key as a range and the value as any custom Java object.

2. Google Guava’s RangeMap

Let’s have a look at how to use the RangeMap class.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. Creating

Some of the ways in which we may create an instance of RangeMap are:

  • Use the create method from the TreeRangeMap class to create a mutable map:

RangeMap<Integer, String> experienceRangeDesignationMap
  = TreeRangeMap.create();

  • If we intend to create an immutable range map, use the ImmutableRangeMap class (which follows a builder pattern):

RangeMap<Integer, String> experienceRangeDesignationMap
  = new ImmutableRangeMap.<Integer, String>builder()
  .put(Range.closed(0, 2), "Associate")
  .build();

4. Using

Let’s start with a simple example showing the usage of RangeMap.

4.1. Retrieval Based on Input Within a Range

We can get the value associated with a key that falls within a given range of integers:

@Test
public void givenRangeMap_whenQueryWithinRange_returnsSucessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
     = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8),  "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");

    assertEquals("Vice President", 
      experienceRangeDesignationMap.get(6));
    assertEquals("Executive Director", 
      experienceRangeDesignationMap.get(15));
}

Note:

  • The closed method of the Range class creates a range of integer values from 0 to 2 (both inclusive)
  • The Range in the above example consists of integers. We may use a range of any type, as long as it implements the Comparable interface, such as String, Character, floating-point numbers, etc.
  • RangeMap returns null when we try to get the value for a key that is not covered by any range in the map (see the sketch after this list)
  • In the case of an ImmutableRangeMap, a range of one key cannot overlap with a range of a key that needs to be inserted. If that happens, we get an IllegalArgumentException
  • Neither keys nor values in a RangeMap can be null. If either one of them is null, we get a NullPointerException
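
As a quick sketch of the null-return behavior noted above (the range and value are arbitrary examples):

RangeMap<Integer, String> designationMap = TreeRangeMap.create();
designationMap.put(Range.closed(0, 2), "Associate");

assertNull(designationMap.get(5)); // no range in the map covers the key 5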

4.2. Removing a Value Based on a Range

Let’s see how we can remove values. In this example, we show how to remove a value associated with an entire range. We also show how to remove a value based on a partial key range:

@Test
public void givenRangeMap_whenRemoveRangeIsCalled_removesSucessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
      = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
 
    experienceRangeDesignationMap.remove(Range.closed(9, 15));
    experienceRangeDesignationMap.remove(Range.closed(1, 4));
  
    assertNull(experienceRangeDesignationMap.get(9));
    assertEquals("Associate", 
      experienceRangeDesignationMap.get(0));
    assertEquals("Senior Associate", 
      experienceRangeDesignationMap.get(5));
    assertNull(experienceRangeDesignationMap.get(1));
}

As can be seen, even after partially removing values from a range, we can still get values for the portion of the range that remains.

4.3. Span of Key Range

In case we would like to know what the overall span of a RangeMap is, we may use the span method:

@Test
public void givenRangeMap_whenSpanIsCalled_returnsSucessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap = 
      TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
    Range<Integer> experienceSpan = experienceRangeDesignationMap.span();

    assertEquals(0, experienceSpan.lowerEndpoint().intValue());
    assertEquals(15, experienceSpan.upperEndpoint().intValue());
}

4.4. Getting a SubRangeMap 

When we want to select a part from a RangeMap, we may use the subRangeMap method:

@Test
public void givenRangeMap_whenSubRangeMapIsCalled_returnsSubRangeSucessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
      = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
    RangeMap<Integer, String> experiencedSubRangeDesignationMap         
      = experienceRangeDesignationMap.subRangeMap(Range.closed(4, 14));
        
    assertNull(experiencedSubRangeDesignationMap.get(3));
    assertEquals("Executive Director", 
      experiencedSubRangeDesignationMap.get(14));
    assertEquals("Vice President", 
      experiencedSubRangeDesignationMap.get(7));
}

4.5. Getting an Entry

Finally, if we are looking for an Entry from a RangeMap, we use the getEntry method:

@Test
public void givenRangeMap_whenGetEntryIsCalled_returnsEntrySucessfully() {
    RangeMap<Integer, String> experienceRangeDesignationMap 
      = TreeRangeMap.create();

    experienceRangeDesignationMap.put(
      Range.closed(0, 2), "Associate");
    experienceRangeDesignationMap.put(
      Range.closed(3, 5), "Senior Associate");
    experienceRangeDesignationMap.put(
      Range.closed(6, 8), "Vice President");
    experienceRangeDesignationMap.put(
      Range.closed(9, 15), "Executive Director");
    Map.Entry<Range<Integer>, String> experienceEntry 
      = experienceRangeDesignationMap.getEntry(10);
       
    assertEquals(Range.closed(9, 15), experienceEntry.getKey());
    assertEquals("Executive Director", experienceEntry.getValue());
}

5. Conclusion

In this tutorial, we illustrated examples of using the RangeMap in the Guava library. It is predominantly used to get a value from the map based on a key that falls within one of its ranges.

The implementation of these examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

A Guide to TreeMap in Java

1. Overview

In this article, we are going to explore the TreeMap implementation of the Map interface from the Java Collections Framework (JCF).

TreeMap is a map implementation that keeps its entries sorted according to the natural ordering of its keys or, better still, according to a comparator provided by the user at construction time.

Previously, we covered the HashMap and LinkedHashMap implementations, and quite a bit of how those classes work carries over to this one.

The mentioned articles are highly recommended reading before going forth with this one.

2. Default Sorting in TreeMap

By default, TreeMap sorts all its entries according to their natural ordering. For an integer, this would mean ascending order and for strings, alphabetical order.

Let’s see the natural ordering in a test:

@Test
public void givenTreeMap_whenOrdersEntriesNaturally_thenCorrect() {
    TreeMap<Integer, String> map = new TreeMap<>();
    map.put(3, "val");
    map.put(2, "val");
    map.put(1, "val");
    map.put(5, "val");
    map.put(4, "val");

    assertEquals("[1, 2, 3, 4, 5]", map.keySet().toString());
}

Notice that we inserted the integer keys out of order, but on retrieving the key set, we confirm that they are indeed maintained in ascending order. This is the natural ordering of integers.

Likewise, when we use strings, they will be sorted in their natural order, i.e. alphabetically:

@Test
public void givenTreeMap_whenOrdersEntriesNaturally_thenCorrect2() {
    TreeMap<String, String> map = new TreeMap<>();
    map.put("c", "val");
    map.put("b", "val");
    map.put("a", "val");
    map.put("e", "val");
    map.put("d", "val");

    assertEquals("[a, b, c, d, e]", map.keySet().toString());
}

TreeMap, unlike a hash map and linked hash map, does not employ the hashing principle anywhere since it does not use an array to store its entries.

3. Custom Sorting in TreeMap

If we’re not satisfied with the natural ordering of TreeMap, we can also define our own rule for ordering by means of a comparator during construction of a tree map.

In the example below, we want the integer keys to be ordered in descending order:

@Test
public void givenTreeMap_whenOrdersEntriesByComparator_thenCorrect() {
    TreeMap<Integer, String> map = 
      new TreeMap<>(Comparator.reverseOrder());
    map.put(3, "val");
    map.put(2, "val");
    map.put(1, "val");
    map.put(5, "val");
    map.put(4, "val");
        
    assertEquals("[5, 4, 3, 2, 1]", map.keySet().toString());
}

A hash map does not guarantee the order of keys stored and specifically does not guarantee that this order will remain the same over time, but a tree map guarantees that the keys will always be sorted according to the specified order.

4. Importance of TreeMap Sorting

We now know that TreeMap stores all its entries in sorted order. Because of this attribute of tree maps, we can perform queries like: find the “largest” key, find the “smallest” key, find all keys less than or greater than a certain value, etc.

The code below only covers a small percentage of these cases:

@Test
public void givenTreeMap_whenPerformsQueries_thenCorrect() {
    TreeMap<Integer, String> map = new TreeMap<>();
    map.put(3, "val");
    map.put(2, "val");
    map.put(1, "val");
    map.put(5, "val");
    map.put(4, "val");
        
    Integer highestKey = map.lastKey();
    Integer lowestKey = map.firstKey();
    Set<Integer> keysLessThan3 = map.headMap(3).keySet();
    Set<Integer> keysGreaterThanEqTo3 = map.tailMap(3).keySet();

    assertEquals(new Integer(5), highestKey);
    assertEquals(new Integer(1), lowestKey);
    assertEquals("[1, 2]", keysLessThan3.toString());
    assertEquals("[3, 4, 5]", keysGreaterThanEqTo3.toString());
}

5. Internal Implementation of TreeMap

TreeMap implements the NavigableMap interface and bases its internal workings on the principles of red-black trees:

public class TreeMap<K,V> extends AbstractMap<K,V>
  implements NavigableMap<K,V>, Cloneable, java.io.Serializable

The principle of red-black trees is beyond the scope of this article, however, there are key things to remember in order to understand how they fit into TreeMap.

First of all, a red-black tree is a data structure that consists of nodes; picture an inverted mango tree with its root in the sky and the branches growing downward. The root will contain the first element added to the tree.

The rule is that starting from the root, any element in the left branch of any node is always less than the element in the node itself. Those on the right are always greater. What defines greater or less than is determined by the natural ordering of the elements or the defined comparator at construction as we saw earlier.

This rule guarantees that the entries of a treemap will always be in sorted and predictable order.

Secondly, a red-black tree is a self-balancing binary search tree. This attribute and the above guarantee that basic operations like search, get, put and remove take logarithmic time O(log n).

Being self-balancing is key here. As we keep inserting and deleting entries, picture the tree growing longer on one edge or shorter on the other.

This would mean that an operation would take a shorter time on the shorter branch and longer time on the branch which is furthest from the root, something we would not want to happen.

Therefore, this is taken care of in the design of red-black trees. For every insertion and deletion, the maximum height of the tree on any edge is maintained at O(log n) i.e. the tree balances itself continuously.

Just like hash map and linked hash map, a tree map is not synchronized and therefore the rules for using it in a multi-threaded environment are similar to those in the other two map implementations.
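
For instance, just as with the other two implementations, we could wrap a tree map before sharing it between threads; a minimal sketch:

SortedMap<Integer, String> syncTreeMap
  = Collections.synchronizedSortedMap(new TreeMap<>());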

6. Choosing the Right Map

Having looked at HashMap and LinkedHashMap implementations previously and now TreeMap, it is important to make a brief comparison between the three to guide us on which one fits where.

A hash map is good as a general-purpose map implementation that provides rapid storage and retrieval operations. However, it falls short because of its chaotic and disorderly arrangement of entries.

This causes it to perform poorly in scenarios where there is a lot of iteration, as the entire capacity of the underlying array affects traversal time, not just the number of entries.

A linked hash map possesses the good attributes of hash maps and adds order to the entries. It performs better where there is a lot of iteration because only the number of entries is taken into account regardless of capacity.

A tree map takes ordering to the next level by providing complete control over how the keys should be sorted. On the flip side, it offers worse general performance than the other two alternatives.

We could say a linked hash map reduces the chaos in the ordering of a hash map without incurring the performance penalty of a tree map.

7. Conclusion

In this article, we have explored Java TreeMap class and its internal implementation. Since it is the last in a series of common Map interface implementations, we also went ahead to briefly discuss where it fits best in relation to the other two.

The full source code for all the examples used in this article can be found in the GitHub project.

Intro to Spring Remoting with HTTP Invokers

1. Overview

In some cases, we need to decompose a system into several processes, each taking responsibility for a different aspect of our application. In these scenarios, it is not uncommon for one of the processes to need to synchronously get data from another one.

The Spring Framework offers a range of tools, collectively called Spring Remoting, that allow us to invoke remote services as if they were, at least to some extent, available locally.

In this article, we will set up an application based on Spring’s HTTP invoker, which leverages native Java serialization and HTTP to provide remote method invocation between a client and a server application.

2. Service Definition

Let’s suppose we have to implement a system that allows users to book a ride in a cab.

Let’s also suppose that we choose to build two distinct applications to obtain this goal:

  • a booking engine application to check whether a cab request can be served, and
  • a front-end web application that allows customers to book their rides, ensuring the availability of a cab has been confirmed

2.1. Service Interface

When we use Spring Remoting with the HTTP invoker, we have to define our remotely callable service through an interface, to let Spring create proxies at both the client and server side that encapsulate the technicalities of the remote call. So let’s start with the interface of a service that allows us to book a cab:

public interface CabBookingService {
    Booking bookRide(String pickUpLocation) throws BookingException;
}

When the service is able to allocate a cab, it returns a Booking object with a reservation code. Booking has to be serializable because Spring’s HTTP invoker has to transfer its instances from the server to the client:

public class Booking implements Serializable {
    private String bookingCode;

    @Override public String toString() {
        return format("Ride confirmed: code '%s'.", bookingCode);
    }

    // standard getters/setters and a constructor
}

If the service is not able to book a cab, a BookingException is thrown. In this case, there’s no need to mark the class as Serializable because Exception already implements it:

public class BookingException extends Exception {
    public BookingException(String message) {
        super(message);
    }
}

2.2. Packaging the Service

The service interface along with all custom classes used as arguments, return types and exceptions have to be available in both client’s and server’s classpath. One of the most effective ways to do that is to pack all of them in a .jar file that can be later included as a dependency in the server’s and client’s pom.xml.

Let’s thus put all the code in a dedicated Maven module, called “api”; we’ll use the following Maven coordinates for this example:

<groupId>com.baeldung</groupId>
<artifactId>api</artifactId>
<version>1.0-SNAPSHOT</version>

3. Server Application

Let’s build the booking engine application to expose the service using Spring Boot.

3.1. Maven Dependencies

First, you’ll need to make sure your project is using Spring Boot:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.3.RELEASE</version>
</parent>

You can find the latest Spring Boot version here. We then need the Web starter module:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

And we need the service definition module that we assembled in the previous step:

<dependency>
    <groupId>com.baeldung</groupId>
    <artifactId>api</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

3.2. Service Implementation

We firstly define a class that implements the service’s interface:

public class CabBookingServiceImpl implements CabBookingService {

    @Override public Booking bookRide(String pickUpLocation) throws BookingException {
        if (random() < 0.3) throw new BookingException("Cab unavailable");
        return new Booking(randomUUID().toString());
    }
}

Let’s pretend that this is a likely implementation. Thanks to the random value, we’ll be able to reproduce both the successful scenario ─ when an available cab has been found and a reservation code returned ─ and the failing scenario ─ when a BookingException is thrown to indicate that there is no available cab.

3.3. Exposing the Service

We then need to define an application with a bean of type HttpInvokerServiceExporter in the context. It will take care of exposing an HTTP entry point in the web application that will be later invoked by the client:

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Server {

    @Bean(name = "/booking") HttpInvokerServiceExporter accountService() {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setService( new CabBookingServiceImpl() );
        exporter.setServiceInterface( CabBookingService.class );
        return exporter;
    }

    public static void main(String[] args) {
        SpringApplication.run(Server.class, args);
    }
}

It is worth noting that Spring’s HTTP invoker uses the name of the HttpInvokerServiceExporter bean as a relative path for the HTTP endpoint URL.

We can now start the server application and keep it running while we set up the client application.

4. Client Application

Let’s now write the client application.

4.1. Maven Dependencies

We’ll use the same service definition and the same Spring Boot version we used at server side. We still need the web starter dependency, but since we don’t need to automatically start an embedded container, we can exclude the Tomcat starter from the dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>

4.2. Client Implementation

Let’s implement the client:

@Configuration
public class Client {

    @Bean
    public HttpInvokerProxyFactoryBean invoker() {
        HttpInvokerProxyFactoryBean invoker = new HttpInvokerProxyFactoryBean();
        invoker.setServiceUrl("http://localhost:8080/booking");
        invoker.setServiceInterface(CabBookingService.class);
        return invoker;
    }

    public static void main(String[] args) throws BookingException {
        CabBookingService service = SpringApplication
          .run(Client.class, args)
          .getBean(CabBookingService.class);
        out.println(service.bookRide("13 Seagate Blvd, Key Largo, FL 33037"));
    }
}

The @Bean annotated invoker() method creates an instance of HttpInvokerProxyFactoryBean. We need to provide the URL that the remote server responds at through the setServiceUrl() method.

Similarly to what we did for the server, we should also provide the interface of the service we want to invoke remotely through the setServiceInterface() method.

HttpInvokerProxyFactoryBean implements Spring’s FactoryBean. A FactoryBean is defined as a bean, but the Spring IoC container will inject the object it creates, not the factory itself. You can find more details about FactoryBean in our factory bean article.

The main() method bootstraps the standalone application and obtains an instance of CabBookingService from the context. Under the hood, this object is just a proxy created by the HttpInvokerProxyFactoryBean that takes care of all the technicalities involved in the execution of the remote invocation. Thanks to it, we can now use the proxy as easily as we would if the service implementation were available locally.

Let’s run the application multiple times to execute several remote calls to verify how the client behaves when a cab is available and when it is not.

5. Caveat Emptor

When we work with technologies that allow remote invocations, there are some pitfalls we should be well aware of.

5.1. Beware of Network Related Exceptions

We should always expect the unexpected when we work with an unreliable resource such as the network.

Suppose the client invokes the server while it cannot be reached ─ either because of a network problem or because the server is down ─ then Spring Remoting will raise a RemoteAccessException, which is a RuntimeException.

The compiler will therefore not force us to include the invocation in a try-catch block, but we should always consider doing so, to properly manage network problems.
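
As a minimal sketch, we could wrap the client invocation from our example like this (RemoteAccessException comes from the org.springframework.remoting package):

try {
    out.println(service.bookRide("13 Seagate Blvd, Key Largo, FL 33037"));
} catch (RemoteAccessException e) {
    // the server was unreachable or the invocation failed at the transport level
    out.println("Could not book a ride: " + e.getMessage());
}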

5.2. Objects Are Transferred by Value, Not by Reference

Spring Remoting’s HTTP invoker marshals method arguments and return values to transmit them over the network. This means that the server acts upon a copy of the provided argument, and the client acts upon a copy of the result created by the server.

So we cannot expect, for instance, that invoking a method on the resulting object will change the state of the same object on the server side, because there is no shared object between client and server.

5.3. Beware of Fine-Grained Interfaces

Invoking a method across network boundaries is significantly slower than invoking it on an object in the same process.

For this reason, it is usually good practice to define remotely invoked services with coarser-grained interfaces that can complete business transactions with fewer interactions, even at the expense of a more cumbersome interface.
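
As a purely hypothetical illustration (the TripRequest and TripReceipt types below are not part of our example), a coarse-grained service might look like this:

public interface TripBookingService {
    // one remote round trip completes the whole business transaction,
    // instead of several chatty fine-grained calls
    TripReceipt bookCompleteTrip(TripRequest request) throws BookingException;
}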

6. Conclusion

With this example, we saw how it is easy with Spring Remoting to invoke a remote process.

The solution is slightly less open than other widespread mechanisms like REST or web services, but in scenarios where all the components are developed with Spring, it can represent a viable and far quicker alternative.

As usual, you’ll find the sources over on GitHub.

Guide to EJB Set-up

1. Overview

In this article, we’re going to discuss how to get started with Enterprise JavaBean (EJB) development.

Enterprise JavaBeans are used for developing scalable, distributed, server-side components and typically encapsulate the business logic of the application.

We’ll use WildFly 10.1.0 as our preferred server solution, however, you are free to use any Java Enterprise application server of your choice.

2. Setup

Let’s start by discussing the Maven dependencies required for EJB 3.2 development and how to configure the WildFly application server using either the Maven Cargo plugin or manually.

2.1. Maven Dependency

In order to use EJB 3.2, make sure you add the latest version to the dependencies section of your pom.xml file:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>

You will find the latest dependency in the Maven Repository. This dependency ensures that all Java EE 7 APIs are available during compile time. The provided scope ensures that once deployed, the dependency will be provided by the container where it has been deployed.

2.2. WildFly Setup With Maven Cargo

Let’s talk about how to use the Maven Cargo plugin to setup the server.

Here is the code for the Maven profile that provisions the WildFly server:

<profile>
    <id>wildfly-standalone</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.cargo</groupId>
                <artifactId>cargo-maven2-plugin</artifactId>
                <version>${cargo-maven2-plugin.version}</version>
                <configuration>
                    <container>
                        <containerId>wildfly10x</containerId>
                        <zipUrlInstaller>
                            <url>
                                http://download.jboss.org/
                                  wildfly/10.1.0.Final/
                                    wildfly-10.1.0.Final.zip
                            </url>
                        </zipUrlInstaller>
                    </container>
                    <configuration>
                        <properties>
                            <cargo.hostname>127.0.0.1</cargo.hostname>
                            <cargo.jboss.management-http.port>
                                9990
                            </cargo.jboss.management-http.port>
                            <cargo.servlet.users>
                                testUser:admin1234!
                            </cargo.servlet.users>
                        </properties>
                    </configuration>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

We use the plugin to download the WildFly 10.1 zip directly from WildFly’s website. The server is then configured by setting the hostname to 127.0.0.1 and the management port to 9990.

Then we create a test user, by using the cargo.servlet.users property, with the user id testUser and the password admin1234!.

Now that the configuration of the plugin is complete, we should be able to call a Maven target and have the server downloaded, installed, and launched, and the application deployed.

To do this, navigate to the ejb-remote directory and run the following command:

mvn clean package cargo:run

When you run this command for the first time, it will download the WildFly 10.1 zip file, extract it, install it, and then launch it. It will also add the test user discussed above. Any further executions will not download the zip file again.

2.3. Manual Setup of WildFly

In order to set up WildFly manually, you must download the installation zip file yourself from the wildfly.org website. The following steps are a high-level view of the WildFly server setup process:

After downloading and unzipping the file’s contents to the location where you want to install the server, configure the following environment variables:

JBOSS_HOME=/Users/$USER/../wildfly.x.x.Final
JAVA_HOME=`/usr/libexec/java_home -v 1.8`

Then, in the bin directory, run ./standalone.sh for Linux-based operating systems or standalone.bat for Windows.

After this, you will have to add a user. This user will be used to connect to the remote EJB bean. To find out how to add a user you should take a look at the ‘add a user’ documentation.

For detailed setup instructions please visit WildFly’s Getting Started documentation.

The project POM has been configured to work with both the Cargo plugin and manual server configuration by setting two profiles. By default, the Cargo plugin is selected. However, to deploy the application to an already installed, configured, and running WildFly server, execute the following command in the ejb-remote directory:

mvn clean install wildfly:deploy -Pwildfly-runtime

3. Remote vs Local

A business interface for a bean can be either local or remote.

A @Local annotated bean can only be accessed if it is in the same application as the bean that makes the invocation, i.e. if they reside in the same .ear or .war.

A @Remote annotated bean can be accessed from a different application, i.e. an application residing in a different JVM or application server.

There are some important points to keep in mind when designing a solution that includes EJBs:

  • The java.io.Serializable, java.io.Externalizable and interfaces defined by the javax.ejb package are always excluded when a bean is declared with @Local or @Remote
  • If a bean class is remote, then all implemented interfaces are to be remote
  • If a bean class contains no annotation or if the @Local annotation is specified, then all implemented interfaces are assumed to be local
  • Any interface that is explicitly defined for a bean which contains no interface must be declared as @Local
  • The EJB 3.2 release tends to provide more granularity for situations where local and remote interfaces need to be explicitly defined

4. Creating the Remote EJB

Let’s first create the bean’s interface and call it HelloWorld:

@Remote
public interface HelloWorld {
    String getHelloWorld();
}

Now we will implement the above interface and name the concrete implementation HelloWorldBean:

@Stateless(name = "HelloWorld")
public class HelloWorldBean implements HelloWorld {

    @Resource
    private SessionContext context;

    @Override
    public String getHelloWorld() {
        return "Welcome to EJB Tutorial!";
    }
}

Note the @Stateless annotation on the class declaration. It denotes that this bean is a stateless session bean. This kind of bean does not have any associated client state, but it may preserve its instance state and is normally used to do independent operations.

The @Resource annotation injects the session context into the remote bean.

The SessionContext interface provides access to the runtime session context that the container provides for a session bean instance. The container then passes the SessionContext interface to an instance after the instance has been created. The session context remains associated with that instance for its lifetime.

The EJB container normally creates a pool of stateless bean objects and uses these objects to process client requests. As a result of this pooling mechanism, instance variable values are not guaranteed to be maintained across lookup method calls.

5. Remote Setup

In this section, we will discuss how to setup Maven to build and run the application on the server.

Let’s look at the plugins one by one.

5.1. The EJB Plugin

The EJB plugin, given below, is used to package an EJB module. We have specified the EJB version as 3.2.

The following plugin configuration is used to setup the target JAR for the bean:

<plugin>
    <artifactId>maven-ejb-plugin</artifactId>
    <version>2.4</version>
    <configuration>
        <ejbVersion>3.2</ejbVersion>
    </configuration>
</plugin>

5.2. Deploy the Remote EJB

To deploy the bean in a WildFly server ensure that the server is up and running.

Then to execute the remote setup we will need to run the following Maven commands against the pom file in the ejb-remote project:

mvn clean install

Then we should run:

mvn wildfly:deploy

Alternatively, we can deploy it manually as an admin user from the admin console of the application server.

6. Client Setup 

After creating the remote bean we should test the deployed bean by creating a client.

First, let’s discuss the Maven setup for the client project.

6.1. Client-Side Maven Setup

In order to launch the EJB3 client we need to add the following dependencies:

<dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-ejb-client-bom</artifactId>
    <type>pom</type>
    <scope>import</scope>
</dependency>

We depend on the EJB remote business interfaces of this application to run the client. So we need to specify the EJB client JAR dependency. We add the following in the parent pom:

<dependency>
    <groupId>com.baeldung.ejb</groupId>
    <artifactId>ejb-remote</artifactId>
    <type>ejb</type>
</dependency>

The <type> is specified as ejb.

6.2. Accessing The Remote Bean

We need to create a file under src/main/resources and name it jboss-ejb-client.properties that will contain all the properties that are required to access the deployed bean:

remote.connections=default
remote.connection.default.host=127.0.0.1
remote.connection.default.port=8080
remote.connection.default.connect.options.org.xnio.Options
  .SASL_POLICY_NOANONYMOUS = false
remote.connection.default.connect.options.org.xnio.Options
  .SASL_POLICY_NOPLAINTEXT = false
remote.connection.default.connect.options.org.xnio.Options
  .SASL_DISALLOWED_MECHANISMS = ${host.auth:JBOSS-LOCAL-USER}
remote.connection.default.username=testUser
remote.connection.default.password=admin1234!

7. Creating the Client

The class that will access and use the remote HelloWorld bean has been created in EJBClient.java which is in the com.baeldung.ejb.client package.

7.1. Remote Bean URL

The remote bean is located via a URL that conforms to the following format:

ejb:${appName}/${moduleName}/${distinctName}/${beanName}!${viewClassName}

  • The ${appName} is the application name of the deployment. Here we have not used any EAR file but a simple JAR or WAR deployment, so the application name will be empty
  • The ${moduleName} is the name we set for our deployment earlier, so it is ejb-remote
  • The ${distinctName} is a specific name which can be optionally assigned to the deployments that are deployed on the server. If a deployment doesn’t use distinct-name then we can use an empty String in the JNDI name, for the distinct-name, as we did in our example
  • The ${beanName} variable is the simple name of the implementation class of the EJB, so in our example it is HelloWorld
  • ${viewClassName} denotes the fully-qualified interface name of the remote interface

7.2. Look-up Logic

Next, let’s have a look at our simple look-up logic:

public HelloWorld lookup() throws NamingException { 
    String appName = ""; 
    String moduleName = "ejb-remote"; 
    String distinctName = ""; 
    String beanName = "HelloWorld"; 
    String viewClassName = HelloWorld.class.getName();
    String toLookup = String.format("ejb:%s/%s/%s/%s!%s",
      appName, moduleName, distinctName, beanName, viewClassName);
    return (HelloWorld) context.lookup(toLookup);
}

In order to connect to the bean we just created, we will need a URL which we can feed into the context.

7.3. The Initial Context

We’ll now create/initialize the session context:

public void createInitialContext() throws NamingException {
    Properties prop = new Properties();
    prop.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
    prop.put(Context.INITIAL_CONTEXT_FACTORY, 
      "org.jboss.naming.remote.client.InitialContextFacto[ERROR]
    prop.put(Context.PROVIDER_URL, "http-remoting://127.0.0.1:8080");
    prop.put(Context.SECURITY_PRINCIPAL, "testUser");
    prop.put(Context.SECURITY_CREDENTIALS, "admin1234!");
    prop.put("jboss.naming.client.ejb.context", false);
    context = new InitialContext(prop);
}

To connect to the remote bean we need a JNDI context. The context factory is provided by the Maven artifact org.jboss:jboss-remote-naming, and it creates a JNDI context which will resolve the URL constructed in the lookup method into proxies to the remote application server process.

7.4. Define Lookup Parameters

We define the factory class with the parameter Context.INITIAL_CONTEXT_FACTORY.

The Context.URL_PKG_PREFIXES is used to define a package to scan for additional naming context.

The parameter org.jboss.ejb.client.scoped.context = false tells the context to read the connection parameters (such as the connection host and port) from the provided map instead of from a classpath configuration file. This is especially helpful if we want to create a JAR bundle that should be able to connect to different hosts.

The parameter Context.PROVIDER_URL defines the connection schema and should start with http-remoting://.

8. Testing

To test the deployment and check the setup, we can run the following test to make sure everything works properly:

@Test
public void testEJBClient() {
    EJBClient ejbClient = new EJBClient();
    HelloWorldBean bean = new HelloWorldBean();
    
    assertEquals(bean.getHelloWorld(), ejbClient.getEJBRemoteMessage());
}

With the test passing, we can now be sure everything is working as expected.

9. Conclusion

So we have created an EJB server and a client which invokes a method on a remote EJB. The project can be run on any application server by properly adding the dependencies for that server.

The entire project can be found over on GitHub.

Java Web Weekly, Issue 161

1. Spring and Java

>> Bean Validation 2.0 Progress Report [beanvalidation.org]

The new features of the Bean Validation 2.0 definitely look promising.

>> Swift for Beans – var, let and Type Inference [knitelius.com]

Swift-like features are making their way into Java.

>> New JEP Would Simplify Java Type Variance [infoq.com]

Simplified Type Variance possibly in JDK 10.

>> Declutter Your POJOs with Lombok [sitepoint.com]

A short overview of Lombok – the Java boilerplate killer.

>> Pivotal Releases First Milestone of Next-Generation Spring Data Featuring Reactive Database Access [infoq.com]

The first milestone of the new Spring Data was already released.

It looks like it will be possible to create “reactive” repositories making use of the Spring Reactor project.

>> Hibernate Tips: Use query comments to identify a query [thoughts-on-java.org]

A quick and very practical write-up about leveraging query comments in Hibernate.

>> JDK 9 is the End of the Road for Some Features [marxsoftware.com]

Most articles focus on JDK 9 additions. This one goes through the list of features to be removed from JVM.

>> Protecting JAX-RS Resources with RBAC and Apache Shiro [stormpath.com]

Implementing a fine-grained Role-Based Access Control with Apache Shiro.

>> Flyway Tutorial – Execute Migrations using Maven [codecentric.de]

Another short write-up about doing database migrations with Flyway. This time, focusing on the maven-flyway-plugin.

>> Building Reactive Applications with Akka Actors and Java 8 [infoq.com]

It turns out you do not need to use Scala in order to be able to use Akka 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Navigating the microservice architecture pattern language – part 1 [plainoldobjects.com]

A short write-up exploring and explaining the semantics of microservices.

>> Better performance: the case for timeouts [odino.org]

A very detailed experiment showing that such basic things as timeouts can noticeably impact the performance.

>> Exploring data sets with Kibana [frankel.ch]

The title says all 🙂

>> AWS Serverless Lambda Scheduled Events to Store Tweets in Couchbase [couchbase.com]

A short tutorial showing how to use Couchbase in a tweet-fetching AWS Lambda application.

Also worth reading:

3. Musings

>> Software Development and the Gig Economy [henrikwarne.com]

A few thoughts about the Software Development market and the direction it’s heading.

>> Managing a To Do list [kylecordes.com]

>> Really Managing a To Do list [kylecordes.com]

Tips on how to effectively manage your TODOs.

>> Automate Your Documentation [daedtech.com]

How to write documentation as easy as possible 🙂

>> Collaborating with Outsiders to the Dev Team [daedtech.com]

Trying to teach developers how to live with other forms of life 🙂

>> Nights and Weekends [swizec.com]

Building something interesting in your off hours isn’t supposed to be easy.

>> SyntheticMonitoring [martinfowler.com]

An explanation of the Synthetic Monitoring technique which revolves around running tests on a live system.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Secret Goals [dilbert.com]

>> Totally different [dilbert.com]

>> Work-life balance [dilbert.com]

5. Pick of the Week

>> How to do what you love and make good money [sivers.org]


Guide to CountDownLatch in Java

1. Introduction

In this article, we’ll give a guide to the CountDownLatch class and demonstrate how it can be used in a few practical examples.

Essentially, by using a CountDownLatch we can cause a thread to block until other threads have completed a given task.

2. Usage in Concurrent Programming

Simply put, a CountDownLatch has a counter field, which we can decrement as required. We can then use it to block a calling thread until it’s been counted down to zero.

If we were doing some parallel processing, we could instantiate the CountDownLatch with the counter set to the number of threads we want to work across. Then, we could just call countDown() after each thread finishes, guaranteeing that a dependent thread calling await() will block until the worker threads are finished.
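
A minimal sketch of that pattern, assuming five worker threads:

CountDownLatch latch = new CountDownLatch(5);
// ... each of the five worker threads calls latch.countDown() when it finishes ...
latch.await(); // the calling thread blocks here until the count reaches zero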

3. Waiting for a Pool of Threads to Complete

Let’s try out this pattern by creating a Worker and using a CountDownLatch field to signal when it has completed:

public class Worker implements Runnable {
    private List<String> outputScraper;
    private CountDownLatch countDownLatch;

    public Worker(List<String> outputScraper, CountDownLatch countDownLatch) {
        this.outputScraper = outputScraper;
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void run() {
        doSomeWork();
        outputScraper.add("Counted down");
        // count down only after recording the output, so the waiting thread
        // cannot proceed before this worker's output has been added
        countDownLatch.countDown();
    }
}

Then, let’s create a test in order to prove that we can get a CountDownLatch to wait for the Worker instances to complete:

@Test
public void whenParallelProcessing_thenMainThreadWillBlockUntilCompletion()
  throws InterruptedException {

    List<String> outputScraper = Collections.synchronizedList(new ArrayList<>());
    CountDownLatch countDownLatch = new CountDownLatch(5);
    List<Thread> workers = Stream
      .generate(() -> new Thread(new Worker(outputScraper, countDownLatch)))
      .limit(5)
      .collect(toList());

    workers.forEach(Thread::start);
    countDownLatch.await();
    outputScraper.add("Latch released");

    assertThat(outputScraper)
      .containsExactly(
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Latch released"
      );
}

Naturally, “Latch released” will always be the last output – as it’s dependent on the CountDownLatch releasing.

Note that if we didn’t call await(), we wouldn’t be able to guarantee the ordering of the execution of the threads, so the test would randomly fail.

4. A Pool of Threads Waiting to Begin

If we took the previous example, but this time started thousands of threads instead of five, it’s likely that many of the earlier ones will have finished processing before we have even called start() on the later ones. This could make it difficult to try and reproduce a concurrency problem, as we wouldn’t be able to get all our threads to run in parallel.

To get around this, let’s use the CountDownLatch differently than in the previous example. Instead of blocking a parent thread until some child threads have finished, we can block each child thread until all the others have started.

Let’s modify our run() method so it blocks before processing:

public class WaitingWorker implements Runnable {

    private List<String> outputScraper;
    private CountDownLatch readyThreadCounter;
    private CountDownLatch callingThreadBlocker;
    private CountDownLatch completedThreadCounter;

    public WaitingWorker(
      List<String> outputScraper,
      CountDownLatch readyThreadCounter,
      CountDownLatch callingThreadBlocker,
      CountDownLatch completedThreadCounter) {

        this.outputScraper = outputScraper;
        this.readyThreadCounter = readyThreadCounter;
        this.callingThreadBlocker = callingThreadBlocker;
        this.completedThreadCounter = completedThreadCounter;
    }

    @Override
    public void run() {
        readyThreadCounter.countDown();
        try {
            callingThreadBlocker.await();
            doSomeWork();
            outputScraper.add("Counted down");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            completedThreadCounter.countDown();
        }
    }
}

Now, let’s modify our test so it blocks until all the Workers have started, unblocks the Workers, and then blocks until the Workers have finished:

@Test
public void whenDoingLotsOfThreadsInParallel_thenStartThemAtTheSameTime()
 throws InterruptedException {
 
    List<String> outputScraper = Collections.synchronizedList(new ArrayList<>());
    CountDownLatch readyThreadCounter = new CountDownLatch(5);
    CountDownLatch callingThreadBlocker = new CountDownLatch(1);
    CountDownLatch completedThreadCounter = new CountDownLatch(5);
    List<Thread> workers = Stream
      .generate(() -> new Thread(new WaitingWorker(
        outputScraper, readyThreadCounter, callingThreadBlocker, completedThreadCounter)))
      .limit(5)
      .collect(toList());

    workers.forEach(Thread::start);
    readyThreadCounter.await(); 
    outputScraper.add("Workers ready");
    callingThreadBlocker.countDown(); 
    completedThreadCounter.await(); 
    outputScraper.add("Workers complete");

    assertThat(outputScraper)
      .containsExactly(
        "Workers ready",
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Counted down",
        "Workers complete"
      );
}

This pattern is really useful for trying to reproduce concurrency bugs, as it can be used to force thousands of threads to try to perform some logic in parallel.

5. Terminating a CountDownLatch Early

Sometimes, we may run into a situation where the Workers terminate in error before counting down the CountDownLatch. This could result in it never reaching zero and await() never terminating:

@Override
public void run() {
    if (true) {
        throw new RuntimeException("Oh dear, I'm a BrokenWorker");
    }
    countDownLatch.countDown();
    outputScraper.add("Counted down");
}

Let’s modify our earlier test to use a BrokenWorker, in order to show how await() will block forever:

@Test
public void whenFailingToParallelProcess_thenMainThreadShouldGetNotGetStuck()
  throws InterruptedException {
 
    List<String> outputScraper = Collections.synchronizedList(new ArrayList<>());
    CountDownLatch countDownLatch = new CountDownLatch(5);
    List<Thread> workers = Stream
      .generate(() -> new Thread(new BrokenWorker(outputScraper, countDownLatch)))
      .limit(5)
      .collect(toList());

    workers.forEach(Thread::start);
    countDownLatch.await(); // with no timeout, this will block forever
}

Clearly, this is not the behaviour we want – it would be much better for the application to continue than infinitely block.

To get around this, let’s add a timeout argument to our call to await().

boolean completed = countDownLatch.await(3L, TimeUnit.SECONDS);
assertThat(completed).isFalse();

As we can see, the test will eventually time out and await() will return false.

6. Conclusion

In this quick guide, we’ve demonstrated how we can use a CountDownLatch in order to block a thread until other threads have finished some processing.

We’ve also shown how it can be used to help debug concurrency issues by making sure threads run in parallel.

The implementation of these examples can be found over on GitHub; this is a Maven-based project, so it should be easy to run as is.

Introduction to the Kotlin Language

1. Overview

In this tutorial, we’re going to take a look at Kotlin, a new language in the JVM world, and some of its basic features, including classes, inheritance, conditional statements, and looping constructs. Then we will look at some of the main features that make Kotlin an attractive language, including null safety, data classes, extension functions, and String templates.

2. Maven Dependencies

To use Kotlin in your Maven project, you need to add the Kotlin standard library to your pom.xml:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-stdlib</artifactId>
    <version>1.0.4</version>
</dependency>

To add JUnit support for Kotlin, you will also need to include the following dependencies:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-test-junit</artifactId>
    <version>1.0.4</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>

You can find the latest versions of kotlin-stdlib, kotlin-test-junit, and junit on Maven Central.

Finally, you will need to configure the source directories and Kotlin plugin in order to perform a Maven build:

<build>
    <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
    <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
    <plugins>
        <plugin>
            <artifactId>kotlin-maven-plugin</artifactId>
            <groupId>org.jetbrains.kotlin</groupId>
            <version>1.0.4</version>
            <executions>
                <execution>
                    <id>compile</id>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
                <execution>
                    <id>test-compile</id>
                    <goals>
                        <goal>test-compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

You can find the latest version of kotlin-maven-plugin in Maven Central.

3. Basic Syntax

Let’s look at the basic building blocks of the Kotlin language.

There is some similarity to Java (e.g. packages are defined in the same way), so let’s take a look at the differences.

3.1. Defining Functions

Let’s define a function having two Int parameters and an Int return type:

fun sum(a: Int, b: Int): Int {
    return a + b
}

3.2. Defining Local Variables

Assign-once (read-only) local variables:

val a: Int = 1  // immediate assignment
val b = 1       // 'Int' type is inferred
val c: Int      // type required when no initializer is provided
c = 1           // deferred assignment

Note that the type of variable b is inferred by the Kotlin compiler. We can also define mutable variables:

var x = 5 
x += 1

4. Optional Fields

Kotlin has dedicated syntax for defining a field that can be nullable (optional). When we want to declare that a field’s type is nullable, we need to suffix the type with a question mark:

val email: String?

When we define a nullable field, it is perfectly valid to assign null to it:

val email: String? = null

That means that the email field can hold a null. If we write:

val email: String = "value"

then we need to assign a value to the email field in the same statement in which we declare it, and it cannot hold a null value. We will get back to Kotlin null safety in a later section.

5. Classes

Let’s demonstrate how to create a simple class for managing a specific category of a product. Our ItemManager class below has a default constructor that populates two fields — categoryId and dbConnection — and an optional email field:

class ItemManager(val categoryId: String, val dbConnection: String) {
    var email = ""
    // ...
}

The ItemManager(…) construct creates a constructor and two fields in our class: categoryId and dbConnection.

Note that our constructor uses the val keyword for its arguments — this means that the corresponding fields will be final and immutable. If we had used the var keyword (as we did when defining the email field), then those fields would be mutable.

Let’s create an instance of ItemManager using the default constructor:

ItemManager("cat_id", "db://connection")

We could also construct an ItemManager using named parameters. This is very useful when, as in this example, a function takes two parameters of the same type (e.g. String) and we do not want to confuse their order. With named parameters, we can state explicitly which argument is assigned to which parameter. In the ItemManager class there are two fields, categoryId and dbConnection, so both can be referenced by name:

ItemManager(categoryId = "catId", dbConnection = "db://Connection")

This becomes even more useful when we need to pass a larger number of arguments to a function.

If you need additional constructors, you would define them using the constructor keyword. Let’s define another constructor that also sets the email field:

constructor(categoryId: String, dbConnection: String, email: String) 
  : this(categoryId, dbConnection) {
    this.email = email
}

Note that this constructor invokes the default constructor that we defined above before setting the email field. And since we already defined categoryId and dbConnection to be immutable using the val keyword in the default constructor, we do not need to repeat the val keyword in the additional constructor.

Now, let’s create an instance using the additional constructor:

ItemManager("cat_id", "db://connection", "foo@bar.com")

If you want to define an instance method on ItemManager, you would do so using the fun keyword:

fun isFromSpecificCategory(catId: String): Boolean {
    return categoryId == catId
}
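We can then call the method on an instance. A quick sketch:

val manager = ItemManager("cat_id", "db://connection")
manager.isFromSpecificCategory("cat_id")  // true
manager.isFromSpecificCategory("other_id") // false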

6. Inheritance

By default, Kotlin’s classes are closed for extension — the equivalent of a class marked final in Java.

In order to specify that a class is open for extension, you would use the open keyword when defining the class.

Let’s define an Item class that is open for extension:

open class Item(val id: String, val name: String = "unknown_name") {
    open fun getIdOfItem(): String {
        return id
    }
}

Note that we also denoted the getIdOfItem() method as open. This allows it to be overridden.

Now, let’s extend the Item class and override the getIdOfItem() method:

class ItemWithCategory(id: String, name: String, val categoryId: String) : Item(id, name) {
    override fun getIdOfItem(): String {
        return id + name
    }
}
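As a quick sketch, calling getIdOfItem() through an Item reference invokes the overridden version:

val item: Item = ItemWithCategory("id1", "name1", "cat1")
println(item.getIdOfItem()) // prints "id1name1" – the overriding implementation runs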

7. Conditional Statements

In Kotlin, the if conditional statement is an expression that returns a value. Let’s look at an example:

fun makeAnalysisOfCategory(catId: String): Unit {
    val result = if (catId == "100") "Yes" else "No"
    println(result)
}

In this example, we see that if catId is equal to “100”, the conditional block returns “Yes”, otherwise it returns “No”. The returned value gets assigned to result.

We could also create a normal if-else block:

val number = 2
if (number < 10) {
    println("number less that 10")
} else if (number > 10) {
    println("number is greater that 10")
}

Kotlin also has a very useful when expression that acts like an advanced switch statement:

val name = "John"
when (name) {
    "John" -> println("Hi man")
    "Alice" -> println("Hi lady")
}
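Like if, when can also be used as an expression that returns a value. A minimal sketch, adding an else branch as the default case:

val name = "Bob"
val greeting = when (name) {
    "John" -> "Hi man"
    "Alice" -> "Hi lady"
    else -> "Hello, stranger"
}
println(greeting) // prints "Hello, stranger"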

8. Collections

There are two types of collections in Kotlin: mutable and immutable. When we create an immutable collection, it means that it is read-only:

val items = listOf(1, 2, 3, 4)

There is no add() function on that list.

When we want to create a mutable list that can be altered, we need to use the mutableListOf() method:

val rwList = mutableListOf(1, 2, 3)
rwList.add(5)

A mutable list has an add() method, so we can append elements to it. There are also equivalent methods for the other collection types: mutableMapOf(), mapOf(), setOf(), and mutableSetOf().
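A quick sketch of the map and set counterparts:

val readOnlyMap = mapOf("a" to 1, "b" to 2) // immutable, read-only map
val rwSet = mutableSetOf(1, 2, 3)
rwSet.add(4) // a mutable set supports add()
val rwMap = mutableMapOf("a" to 1)
rwMap["b"] = 2 // a mutable map supports writes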

9. Exceptions

The exception handling mechanism is very similar to the one in Java.

All exception classes extend Throwable. An exception must have a message, a stack trace, and an optional cause. Every exception in Kotlin is unchecked, meaning that the compiler does not force us to catch them.

To throw an exception object, we need to use the throw-expression:

throw Exception("msg")

Exceptions are handled using a try…catch block (finally is optional):

try {

}
catch (e: SomeException) {

}
finally {

}
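For instance, here is a minimal sketch of handling a failed number conversion:

try {
    val parsed = "not a number".toInt()
    println(parsed)
} catch (e: NumberFormatException) {
    println("could not parse: ${e.message}")
} finally {
    println("done") // runs whether or not the conversion succeeded
}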

10. Lambdas

In Kotlin, we could define lambda functions and pass them as arguments to other functions.

Let’s see how to define a simple lambda:

val sumLambda = { a: Int, b: Int -> a + b }

We defined a sumLambda function that takes two arguments of type Int and returns an Int.
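We can invoke it like a regular function:

val sum = sumLambda(1, 2)
assertEquals(3, sum) // the lambda is invoked with ordinary call syntax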

We could pass a lambda around:

@Test
fun givenListOfNumber_whenDoingOperationsUsingLambda_shouldReturnProperResult() {
    // given
    val listOfNumbers = listOf(1, 2, 3)

    // when
    val sum = listOfNumbers.reduce { a, b -> a + b }

    // then
    assertEquals(6, sum)
}

11. Looping Constructs

In Kotlin, looping through collections can be done using a standard for..in construct:

val numbers = arrayOf("first", "second", "third", "fourth")
for (n in numbers) {
    println(n)
}

If we want to iterate over a range of integers, we can use a range construct:

for (i in 2..9 step 2) {
    println(i)
}

Note that the range in the example above is inclusive on both sides. The step parameter is optional; here it is equivalent to incrementing the counter by two in each iteration. The output will be the following:

2
4
6
8

We could also use the rangeTo() function that is defined on the Int class in the following way:

1.rangeTo(10).map{ it * 2 }

The result will contain (note that rangeTo() is also inclusive):

[2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

12. Null Safety

Let’s look at one of the key features of Kotlin – null safety, which is built into the language. To illustrate why this is useful, we will create a simple service that returns an Item object:

class ItemService {
    fun findItemNameForId(id: String): Item? {
        val itemId = UUID.randomUUID().toString()
        return Item(itemId, "name-$itemId")
    }
}

The important thing to notice is the return type of that method: an object type followed by a question mark. This is a Kotlin language construct meaning that the Item returned from that method could be null. We need to handle that case at compile time, deciding what we want to do with that object (it is more or less equivalent to Java 8’s Optional<T> type).

If the method signature has a type without the question mark:

fun findItemNameForId(id: String): Item

then the calling code does not need to handle the null case, because the compiler and the Kotlin language guarantee that the returned object cannot be null.

Otherwise, if a nullable object is passed to a method and that case is not handled, the code will not compile.
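A minimal sketch of what the compiler enforces, assuming the ItemService defined above:

val item: Item? = ItemService().findItemNameForId("item_id")
// val name: String = item.name // does not compile: item may be null
val name: String? = item?.name // safe call: evaluates to null when item is null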

Let’s write a test case for Kotlin null safety:

val id = "item_id"
val itemService = ItemService()

val result = itemService.findItemNameForId(id)

assertNotNull(result?.let { it.id })
assertNotNull(result!!.id)

We see here that after executing the method findItemNameForId(), the returned type is a Kotlin nullable. To access a field of that object (id), we need to handle the null case at compile time. The let() block will execute only if result is non-null, so the id field can be accessed safely inside the lambda.

Another way to access a field of that nullable object is to use the Kotlin operator !!. It is equivalent to:

if (result == null){
    throwNpe(); 
}
return result;

Kotlin will check whether the object is null; if so, it will throw a NullPointerException, otherwise it will return the object. The throwNpe() function is a Kotlin internal function.

13. Data Classes

A very nice language construct found in Kotlin is the data class (equivalent to the “case class” from the Scala language). The purpose of such classes is to only hold data. In our example we had an Item class that only holds data:

data class Item(val id: String, val name: String)

The compiler will create the hashCode(), equals(), and toString() methods for us. It is good practice to make data classes immutable by using the val keyword. Data classes can also have default field values:

data class Item(val id: String, val name: String = "unknown_name")

We see that the name field has a default value of “unknown_name”.
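A quick sketch of the generated behaviour:

val first = Item("1", "phone")
val second = Item("1", "phone")
println(first == second) // true: the generated equals() compares field values
println(first) // Item(id=1, name=phone) – the generated toString()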

14. Extension Functions

Suppose that we have a class that is part of a 3rd party library, but we want to extend it with an additional method. Kotlin allows us to do this by using extension functions.

Let’s consider an example in which we have a list of elements and we want to take a random element from that list. We want to add a new random() function to the 3rd party List class.

Here’s how it looks in Kotlin:

fun <T> List<T>.random(): T? {
    if (this.isEmpty()) return null
    return get(ThreadLocalRandom.current().nextInt(count()))
}

The most important thing to notice here is the signature of the method: it is prefixed with the name of the class that we are adding the extra method to.

Inside the extension method, we operate on the scope of the list, so using this gives us access to list instance methods like isEmpty() or count(). We are then able to call the random() method on any list that is in scope:

fun <T> getRandomElementOfList(list: List<T>): T? {
    return list.random()
}

We created a method that takes a list and then executes the custom extension function random() that was previously defined. Let’s write a test case for our new function:

val elements = listOf("a", "b", "c")

val result = ListExtension().getRandomElementOfList(elements)

assertTrue(elements.contains(result))

The ability to define functions that “extend” 3rd party classes is a very powerful feature and can make our code more concise and readable.

15. String Templates

A very nice feature of the Kotlin language is the possibility to use templates for Strings. It is very useful because we do not need to concatenate Strings manually:

val firstName = "Tom"
val secondName = "Mary"
val concatOfNames = "$firstName + $secondName"
val sum = "four: ${2 + 2}"

We can also evaluate an expression inside the ${} block:

val itemManager = ItemManager("cat_id", "db://connection")
val result = "function result: ${itemManager.isFromSpecificCategory("1")}"

16. Kotlin/Java Interoperability

Kotlin – Java interoperability is seamless. Let’s suppose that we have a Java class with a method that operates on a String:

class StringUtils {
    public static String toUpperCase(String name) {
        return name.toUpperCase();
    }
}

Now we want to execute that code from our Kotlin class. We only need to import the class, and we can execute the Java method from Kotlin without any problems:

val name = "tom"

val res = StringUtils.toUpperCase(name)

assertEquals(res, "TOM")

As we can see, we called a Java method from Kotlin code.

Calling Kotlin code from Java is also very easy. Let’s define a simple Kotlin function:

class MathematicsOperations {
    fun addTwoNumbers(a: Int, b: Int): Int {
        return a + b
    }
}

Executing addTwoNumbers() from Java code is very easy:

int res = new MathematicsOperations().addTwoNumbers(2, 4);

assertEquals(6, res);

We see that the call to Kotlin code is transparent to us.

When we define a method in Java whose return type is void, the value returned in Kotlin will be of the Unit type.
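For illustration, assume a hypothetical Java class JavaLogger with a method public static void log(String msg); calling it from Kotlin yields Unit:

// JavaLogger is a hypothetical Java class with: public static void log(String msg)
val result: Unit = JavaLogger.log("hello") // a void Java method returns Unit in Kotlin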

There are some special identifiers in the Kotlin language (is, object, in, …) that need to be escaped with backticks when used as names in Kotlin code. For example, we can define a method named object(), but we need to remember to escape that name, as object is a special identifier in Kotlin:

fun `object`(): String {
    return "this is object"
}

Then we could execute that method:

`object`()

17. Conclusion

This article is an introduction to the Kotlin language and its key features. It starts with simple concepts like loops, conditional statements, and defining classes, and then shows some more advanced features like extension functions and null safety.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

New Stream Collectors in Java 9

1. Overview

Collectors were added in Java 8 to help accumulate input elements into mutable containers such as Map, List, and Set.

In this article, we’re going to explore two new collectors added in Java 9 – Collectors.filtering and Collectors.flatMapping – used in combination with Collectors.groupingBy to provide intelligent collection of elements.

2. Filtering Collector

Collectors.filtering is similar to the Stream filter(); it is also used for filtering input elements, but in different scenarios. The Stream’s filter is used in the stream chain, whereas filtering is a Collector which was designed to be used along with groupingBy.

With the Stream’s filter, the values are filtered first and then grouped. This way, the values which are filtered out are gone and there is no trace of them. If we need a trace, then we need to group first and then apply filtering, which is exactly what Collectors.filtering does.

Collectors.filtering takes a function for filtering the input elements and a collector to collect the filtered elements:

@Test
public void givenList_whenSatisfyPredicate_thenMapValueWithOccurrences() {
    List<Integer> numbers = List.of(1, 2, 3, 5, 5);

    // filtering in the stream chain: values <= 3 leave no trace in the result
    Map<Integer, Long> result = numbers.stream()
      .filter(val -> val > 3)
      .collect(Collectors.groupingBy(i -> i, Collectors.counting()));

    assertEquals(1, result.size());

    // Collectors.filtering: every key is kept, filtered-out values count as 0
    result = numbers.stream()
      .collect(Collectors.groupingBy(i -> i,
        Collectors.filtering(val -> val > 3, Collectors.counting())));

    assertEquals(4, result.size());
}

3. FlatMapping Collector

Collectors.flatMapping is similar to Collectors.mapping but has a more fine-grained objective. Both collectors take a function and a collector into which the elements are collected, but the flatMapping function accepts a Stream of elements, which is then accumulated by the collector.

Let’s consider the following model class:

class Blog {
    private String authorName;
    private List<String> comments;
      
    // constructor accepting authorName and a varargs of comments, plus getters
}

Collectors.flatMapping lets us skip intermediate collection and write directly to a single container which is mapped to that group defined by the Collectors.groupingBy:

@Test
public void givenListOfBlogs_whenAuthorName_thenMapAuthorWithComments() {
    Blog blog1 = new Blog("1", "Nice", "Very Nice");
    Blog blog2 = new Blog("2", "Disappointing", "Ok", "Could be better");
    List<Blog> blogs = List.of(blog1, blog2);
        
    Map<String, List<List<String>>> authorComments1 = blogs.stream()
     .collect(Collectors.groupingBy(Blog::getAuthorName, 
       Collectors.mapping(Blog::getComments, Collectors.toList())));
       
    assertEquals(2, authorComments1.size());
    assertEquals(2, authorComments1.get("1").get(0).size());
    assertEquals(3, authorComments1.get("2").get(0).size());

    Map<String, List<String>> authorComments2 = blogs.stream()
      .collect(Collectors.groupingBy(Blog::getAuthorName, 
        Collectors.flatMapping(blog -> blog.getComments().stream(), 
        Collectors.toList())));

    assertEquals(2, authorComments2.size());
    assertEquals(2, authorComments2.get("1").size());
    assertEquals(3, authorComments2.get("2").size());
}

Collectors.mapping maps each grouped author’s comment list into the collector’s container, i.e. a List, whereas flatMapping removes this intermediate collection by passing a flat stream of comments directly to be accumulated into the collector’s container.

4. Conclusion

This article illustrates the use of the new collectors introduced in Java 9, i.e. Collectors.filtering() and Collectors.flatMapping(), used in combination with Collectors.groupingBy().

These collectors can also be used along with Collectors.partitioningBy(), but that only creates two partitions based on a predicate, and the real power of the collectors isn’t leveraged; hence it is left out of this tutorial.

The complete source code for the code snippets in this tutorial is available over on GitHub.

Spring Data MongoDB: Projections and Aggregations

1. Overview

Spring Data MongoDB provides simple high-level abstractions over the MongoDB native query language. In this article, we will explore the support for projections and the aggregation framework.

If you’re new to this topic, refer to our introductory article Introduction to Spring Data MongoDB.

2. Projection

In MongoDB, projections are a way to fetch only the required fields of a document from the database. This reduces the amount of data that has to be transferred from the database server to the client and hence increases performance.

With Spring Data MongoDB, projections can be used both with MongoTemplate and MongoRepository.

Before we move further, let’s look at the data model we will be using:

@Document
public class User {
    @Id
    private String id;
    private String name;
    private Integer age;
    
    // standard getters and setters
}

2.1. Projections Using MongoTemplate

The include() and exclude() methods on the Field class are used to include and exclude fields respectively:

Query query = new Query();
query.fields().include("name").exclude("id");
List<User> john = mongoTemplate.find(query, User.class);

These methods can be chained together to include or exclude multiple fields. The field marked as @Id (_id in the database) is always fetched unless explicitly excluded.

Excluded fields are null in the model class instance when records are fetched with a projection. Where fields are of a primitive type or its wrapper class, the values of excluded fields are the default values of the primitive types.

For example, String would be null, int/Integer would be 0, and boolean/Boolean would be false.

Thus in the above example, the name field would be John, id would be null, and age would be 0.

2.2. Projections Using MongoRepository

When using MongoRepositories, the fields attribute of the @Query annotation can be defined in JSON format:

@Query(value="{}", fields="{name : 1, _id : 0}")
List<User> findNameAndExcludeId();

The result would be the same as when using the MongoTemplate. The value=”{}” denotes no filters, and hence all the documents will be fetched.
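For context, here is a minimal sketch of the surrounding repository interface (the interface name is an assumption):

public interface UserRepository extends MongoRepository<User, String> {

    @Query(value="{}", fields="{name : 1, _id : 0}")
    List<User> findNameAndExcludeId();
}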

3. Aggregation

Aggregation in MongoDB was built to process data and return computed results. Data is processed in stages and the output of one stage is provided as input to the next stage. This ability to apply transformations and do computations on data in stages makes aggregation a very powerful tool for analytics.

Spring Data MongoDB provides an abstraction for native aggregation queries using three classes: Aggregation, which wraps an aggregation query; AggregationOperation, which wraps individual pipeline stages; and AggregationResults, which is the container of the result produced by an aggregation.

To perform an aggregation, we first create aggregation pipeline stages using the static builder methods on the Aggregation class, then create an instance of Aggregation using the newAggregation() method, and finally run the aggregation using MongoTemplate:

MatchOperation matchStage = Aggregation.match(new Criteria("foo").is("bar"));
ProjectionOperation projectStage = Aggregation.project("foo", "bar.baz");
        
Aggregation aggregation 
  = Aggregation.newAggregation(matchStage, projectStage);

AggregationResults<OutType> output 
  = mongoTemplate.aggregate(aggregation, "foobar", OutType.class);

Please note that both MatchOperation and ProjectionOperation implement AggregationOperation. There are similar implementations for the other aggregation pipeline stages. OutType is the data model for the expected output.

Now, we will look at a few examples and their explanations to cover the major aggregation pipelines and operators.

The dataset which we will be using in this article lists details about all the zip codes in the US; it can be downloaded from the MongoDB repository.

Let’s look at a sample document after importing it into a collection called zips in the test database.

{
    "_id" : "01001",
    "city" : "AGAWAM",
    "loc" : [
        -72.622739,
        42.070206
    ],
    "pop" : 15338,
    "state" : "MA"
}

For the sake of simplicity and to make code concise, in the next code snippets, we will assume that all the static methods of Aggregation class are statically imported.

3.1. Get All the States With a Population Greater Than 10 Million, Ordered by Population Descending

Here we will have three stages:

  1. $group stage summing up the population of all zip codes per state
  2. $match stage to filter states with a population over 10 million
  3. $sort stage to sort all the documents by population in descending order

The expected output will have a field _id as state and a field statePop with the total state population. Let’s create a data model for this and run the aggregation:

public class StatePopulation {
 
    @Id
    private String state;
    private Integer statePop;
 
    // standard getters and setters
}

The @Id annotation will map the _id field from output to state in the model:

GroupOperation groupByStateAndSumPop = group("state")
  .sum("pop").as("statePop");
MatchOperation filterStates = match(new Criteria("statePop").gt(10000000));
SortOperation sortByPopDesc = sort(new Sort(Direction.DESC, "statePop"));

Aggregation aggregation = newAggregation(
  groupByStateAndSumPop, filterStates, sortByPopDesc);
AggregationResults<StatePopulation> result = mongoTemplate.aggregate(
  aggregation, "zips", StatePopulation.class);

The AggregationResults class implements Iterable and hence we can iterate over it and print the results.

If the output data model is not known, the standard MongoDB classes Document or DBObject can be used.

3.2. Get Smallest State by Average City Population

For this problem, we will need four stages:

  1. $group to sum the total population of each city
  2. $group to calculate average population of each state
  3. $sort stage to order states by their average city population in ascending order
  4. $limit to get the first state with lowest average city population

Although it’s not strictly required, we will use an additional $project stage to reformat the document as per our StatePopulation data model.

GroupOperation sumTotalCityPop = group("state", "city")
  .sum("pop").as("cityPop");
GroupOperation averageStatePop = group("_id.state")
  .avg("cityPop").as("avgCityPop");
SortOperation sortByAvgPopAsc = sort(new Sort(Direction.ASC, "avgCityPop"));
LimitOperation limitToOnlyFirstDoc = limit(1);
ProjectionOperation projectToMatchModel = project()
  .andExpression("_id").as("state")
  .andExpression("avgCityPop").as("statePop");

Aggregation aggregation = newAggregation(
  sumTotalCityPop, averageStatePop, sortByAvgPopAsc,
  limitToOnlyFirstDoc, projectToMatchModel);

AggregationResults<StatePopulation> result = mongoTemplate
  .aggregate(aggregation, "zips", StatePopulation.class);
StatePopulation smallestState = result.getUniqueMappedResult();

In this example, we already know that there will be only one document in the result since we limit the number of output documents to 1 in the last stage. As such, we can invoke getUniqueMappedResult() to get the required StatePopulation instance.

Another thing to notice is that, instead of relying on the @Id annotation to map _id to state, we have explicitly done it in the projection stage.

3.3. Get the State with Maximum and Minimum Zip Codes

For this example, we need three stages:

  1. $group to count the number of zip codes for each state
  2. $sort to order the states by the number of zip codes
  3. $group to find the state with max and min zip codes using $first and $last operators

GroupOperation sumZips = group("state").count().as("zipCount");
SortOperation sortByCount = sort(Direction.ASC, "zipCount");
GroupOperation groupFirstAndLast = group().first("_id").as("minZipState")
  .first("zipCount").as("minZipCount").last("_id").as("maxZipState")
  .last("zipCount").as("maxZipCount");

Aggregation aggregation = newAggregation(sumZips, sortByCount, groupFirstAndLast);

AggregationResults<DBObject> result = mongoTemplate
  .aggregate(aggregation, "zips", DBObject.class);
DBObject dbObject = result.getUniqueMappedResult();

Here we have not used a custom model, but instead the DBObject class already provided by the MongoDB driver.

4. Conclusion

In this article, we learned how to fetch specified fields of a document in MongoDB using projections in Spring Data MongoDB.

We also learned about the MongoDB aggregation framework support in Spring Data. We covered major aggregation phases – group, project, sort, limit, and match and looked at some examples of its practical applications. The complete source code is available over on GitHub.

MaxUploadSizeExceededException in Spring

1. Overview

In the Spring framework, a MaxUploadSizeExceededException is thrown when an application attempts to upload a file whose size exceeds a certain threshold as specified in the configuration.

In this tutorial, we will take a look at how to specify a maximum upload size. Then we will show a simple file upload controller and discuss different methods for handling this exception.

2. Setting a Maximum Upload Size

By default, there is no limit on the size of files that can be uploaded. In order to set a maximum upload size, you have to declare a bean of type MultipartResolver.

Let’s see an example that limits the file size to 5 MB:

@Bean
public MultipartResolver multipartResolver() {
    CommonsMultipartResolver multipartResolver
      = new CommonsMultipartResolver();
    multipartResolver.setMaxUploadSize(5242880); // 5 MB = 5 * 1024 * 1024 bytes
    return multipartResolver;
}

3. File Upload Controller

Next, let’s define a controller method that handles uploading a file and saving it to the server:

@RequestMapping(value = "/uploadFile", method = RequestMethod.POST)
public ModelAndView uploadFile(MultipartFile file) throws IOException {
 
    ModelAndView modelAndView = new ModelAndView("file");

    // save the uploaded file in the application's working directory
    File currDir = new File(".");
    String path = currDir.getAbsolutePath();
    String targetPath = path.substring(0, path.length() - 1) + file.getOriginalFilename();

    // try-with-resources ensures both streams are closed
    try (InputStream in = file.getInputStream();
         FileOutputStream out = new FileOutputStream(targetPath)) {
        int ch;
        while ((ch = in.read()) != -1) {
            out.write(ch);
        }
        out.flush();
    }

    modelAndView.getModel().put("message", "File uploaded successfully!");
    return modelAndView;
}

If the user attempts to upload a file with a size greater than 5 MB, the application will throw an exception of type MaxUploadSizeExceededException.

4. Handling MaxUploadSizeExceededException

In order to handle this exception, we can have our controller implement the interface HandlerExceptionResolver, or we can create a @ControllerAdvice annotated class.

4.1. Implementing HandlerExceptionResolver

The HandlerExceptionResolver interface declares a method called resolveException() where exceptions of different types can be handled.

Let’s override the resolveException() method to display a message in case the exception caught is of type MaxUploadSizeExceededException:

@Override
public ModelAndView resolveException(
  HttpServletRequest request,
  HttpServletResponse response, 
  Object object,
  Exception exc) {   
     
    ModelAndView modelAndView = new ModelAndView("file");
    if (exc instanceof MaxUploadSizeExceededException) {
        modelAndView.getModel().put("message", "File size exceeds limit!");
    }
    return modelAndView;
}

4.2. Creating a Controller Advice Interceptor

There are a couple of advantages of handling the exception through an interceptor rather than in the controller itself. One is that we can apply the same exception handling logic to multiple controllers.

Another is that we can create a method that targets only the exception we want to handle, allowing the framework to delegate the exception handling without our having to use instanceof to check what type of exception was thrown:

@ControllerAdvice
public class FileUploadExceptionAdvice {
     
    @ExceptionHandler(MaxUploadSizeExceededException.class)
    public ModelAndView handleMaxSizeException(
      MaxUploadSizeExceededException exc, 
      HttpServletRequest request,
      HttpServletResponse response) {
 
        ModelAndView modelAndView = new ModelAndView("file");
        modelAndView.getModel().put("message", "File too large!");
        return modelAndView;
    }
}

5. Tomcat Configuration

If you are deploying to Tomcat server version 7 and above, there is a configuration property called maxSwallowSize that you may have to set or change.

This property specifies the maximum number of bytes that Tomcat will “swallow” for an upload from the client when it knows the server will ignore the file.

The default value of the property is 2097152 (2 MB). If left unchanged or if set below the 5 MB limit that we set in our MultipartResolver, Tomcat will reject any attempt to upload a file over 2 MB, and our custom exception handling will never be invoked.

In order for the request to succeed and for the error message from the application to be displayed, we need to set the maxSwallowSize property to a negative value. This instructs Tomcat to swallow all failed uploads regardless of file size.

This is done in the TOMCAT_HOME/conf/server.xml file:

<Connector port="8080" protocol="HTTP/1.1"
  connectionTimeout="20000"
  redirectPort="8443" 
  maxSwallowSize="-1"/>

6. Conclusion

In this article, we have demonstrated how to configure a maximum file upload size in Spring and how to handle the MaxUploadSizeExceededException that results when a client attempts to upload a file exceeding this size limit.

The full source code for this article can be found in the GitHub project.
