
Guide to java.util.concurrent.BlockingQueue


1. Overview

In this article, we will look at one of the most useful constructs of java.util.concurrent for solving the concurrent producer-consumer problem. We’ll look at the API of the BlockingQueue interface and how its methods make writing concurrent programs easier.

Later in the article, we will show an example of a simple program that has multiple producer threads and multiple consumer threads.

2. BlockingQueue Types

We can distinguish two types of BlockingQueue:

  • unbounded queue – can grow almost indefinitely
  • bounded queue – with maximal capacity defined

2.1. Unbounded Queue

Creating unbounded queues is simple:

BlockingQueue<String> blockingQueue = new LinkedBlockingDeque<>();

The capacity of blockingQueue will be set to Integer.MAX_VALUE. Operations that add an element to an unbounded queue never block, so the queue can grow to a very large size.

The most important thing when designing a producer-consumer program with an unbounded BlockingQueue is that consumers should be able to consume messages as quickly as producers add them to the queue. Otherwise, the queue could fill up the memory and we would get an OutOfMemoryError.

2.2. Bounded Queue

The second type of queue is the bounded queue. We can create such queues by passing the capacity as an argument to a constructor:

BlockingQueue<String> blockingQueue = new LinkedBlockingDeque<>(10);

Here we have a blockingQueue with a capacity equal to 10. When a producer tries to add an element to an already full queue, the outcome depends on the method used: put() blocks until space for inserting the object becomes available, offer() fails by returning false (or waits for a given timeout in its timed variant), and add() fails by throwing an exception.

Using a bounded queue is a good way to design concurrent programs, because when we insert an element into an already full queue, that operation needs to wait until consumers catch up and make some space available in the queue. This gives us throttling without any effort on our part.

3. BlockingQueue API

There are two types of methods in the BlockingQueue interface – methods responsible for adding elements to a queue and methods that retrieve those elements. Each method from those two groups behaves differently when the queue is full or empty.

3.1. Adding Elements

  • add() – returns true if insertion was successful, otherwise throws an IllegalStateException
  • put() – inserts the specified element into a queue, waiting for a free slot if necessary
  • offer() – returns true if insertion was successful, otherwise false
  • offer(E e, long timeout, TimeUnit unit) – tries to insert element into a queue and waits for an available slot within a specified timeout

3.2. Retrieving Elements

  • take() – waits for a head element of a queue and removes it. If the queue is empty, it blocks and waits for an element to become available
  • poll(long timeout, TimeUnit unit) –  retrieves and removes the head of the queue, waiting up to the specified wait time if necessary for an element to become available. Returns null after a timeout

These methods are the most important building blocks from BlockingQueue interface when building producer-consumer programs.
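
To make these differences concrete, here is a short sketch (using a bounded LinkedBlockingQueue with a capacity of 1 purely for illustration) that exercises both groups of methods:

BlockingQueue<String> queue = new LinkedBlockingQueue<>(1);

queue.add("first");                                      // succeeds; the queue is now full
boolean offered = queue.offer("second");                 // returns false immediately
boolean offeredInTime
  = queue.offer("second", 100, TimeUnit.MILLISECONDS);   // returns false after waiting 100 ms
// queue.add("second");  – would throw IllegalStateException
// queue.put("second");  – would block until a consumer makes space

String head = queue.take();                              // "first"; blocks while the queue is empty
String nothing = queue.poll(100, TimeUnit.MILLISECONDS); // null after waiting 100 ms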

4. Multithreaded Producer-Consumer Example

Let’s create a program that consists of two parts – a Producer and a Consumer.

The Producer will be producing random numbers from 0 to 99 and will put each number in a BlockingQueue. We’ll have 4 producer threads and use the put() method to block until there’s space available in the queue.

The important thing to remember is that we need to stop our consumer threads from waiting for an element to appear in a queue indefinitely.

A good technique for signaling from the producers to the consumers that there are no more messages to process is to send a special message called a poison pill. We need to send as many poison pills as we have consumers. Then, when a consumer takes that special poison pill message from the queue, it will finish execution gracefully.

Let’s look at a producer class:

public class NumbersProducer implements Runnable {
    private BlockingQueue<Integer> numbersQueue;
    private final int poisonPill;
    private final int poisonPillPerProducer;
    
    public NumbersProducer(BlockingQueue<Integer> numbersQueue, int poisonPill, int poisonPillPerProducer) {
        this.numbersQueue = numbersQueue;
        this.poisonPill = poisonPill;
        this.poisonPillPerProducer = poisonPillPerProducer;
    }
    public void run() {
        try {
            generateNumbers();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    
    private void generateNumbers() throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            numbersQueue.put(ThreadLocalRandom.current().nextInt(100));
        }
        for (int j = 0; j < poisonPillPerProducer; j++) {
            numbersQueue.put(poisonPill);
        }
     }
}

Our producer constructor takes as an argument the BlockingQueue that is used to coordinate processing between the producer and the consumer. We see that the generateNumbers() method will put 100 elements in the queue. It also takes the poison pill message, so the producer knows what value to put into the queue when execution is finished, and poisonPillPerProducer tells it how many times to put that message.

Each consumer will take an element from the BlockingQueue using the take() method, so it will block until there is an element in the queue. After taking an Integer from the queue, it checks whether the message is a poison pill; if so, the execution of the thread is finished. Otherwise, it will print the result on standard output along with the current thread’s name.

This will give us insight into the inner workings of our consumers:

public class NumbersConsumer implements Runnable {
    private BlockingQueue<Integer> queue;
    private final int poisonPill;
    
    public NumbersConsumer(BlockingQueue<Integer> queue, int poisonPill) {
        this.queue = queue;
        this.poisonPill = poisonPill;
    }
    public void run() {
        try {
            while (true) {
                Integer number = queue.take();
                if (number.equals(poisonPill)) {
                    return;
                }
                System.out.println(Thread.currentThread().getName() + " result: " + number);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The important thing to notice is the usage of the queue. As in the producer constructor, the queue is passed as a constructor argument. We can do this because BlockingQueue can be shared between threads without any explicit synchronization.

Now that we have our producer and consumer, we can start our program. We need to define the queue’s capacity, and we set it to 10 elements.

We want to have 4 producer threads, and the number of consumer threads will be equal to the number of available processors:

int BOUND = 10;
int N_PRODUCERS = 4;
int N_CONSUMERS = Runtime.getRuntime().availableProcessors();
int poisonPill = Integer.MAX_VALUE;
int poisonPillPerProducer = N_CONSUMERS / N_PRODUCERS;

BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(BOUND);

for (int i = 0; i < N_PRODUCERS; i++) {
    new Thread(new NumbersProducer(queue, poisonPill, poisonPillPerProducer)).start();
}

for (int j = 0; j < N_CONSUMERS; j++) {
    new Thread(new NumbersConsumer(queue, poisonPill)).start();
}

The BlockingQueue is created with a fixed capacity. We’re creating 4 producers and N consumers. We specify our poison pill message to be Integer.MAX_VALUE because such a value will never be sent by our producer under normal working conditions. Note that poisonPillPerProducer is computed with integer division, so this setup assumes the number of consumers is a multiple of the number of producers; otherwise, some consumers would never receive a pill and would wait forever. The most important thing to notice here is that the BlockingQueue is used to coordinate work between producers and consumers.

When we run the program, the 4 producer threads will be putting random Integers into the BlockingQueue, and the consumers will be taking those elements from the queue. Each thread will print the name of the thread together with a result to standard output.

5. Conclusion

This article shows a practical use of BlockingQueue and explains methods that are used to add and retrieve elements from it. Also, we’ve shown how to build a multithreaded producer-consumer program using BlockingQueue to coordinate work between producers and consumers.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.


Intro to Dropwizard Metrics


1. Introduction

Metrics is a Java library which provides measuring instruments for Java applications.

It has several modules, and in this article, we will elaborate on the metrics-core module, the metrics-healthchecks module, the metrics-servlets module, and the metrics-servlet module, and sketch out the rest for your reference.

2. Module metrics-core

2.1. Maven Dependencies

To use the metrics-core module, there’s only one dependency required which needs to be added to the pom.xml file:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

2.2. MetricRegistry

Simply put, we’ll use the MetricRegistry class to register one or several metrics.

We can use one metrics registry for all of our metrics, but if we want to use different reporting methods for different metrics, we can also divide our metrics into groups and use different metrics registries for each group.

Let’s create a MetricRegistry now:

MetricRegistry metricRegistry = new MetricRegistry();

And then we can register some metrics with this MetricRegistry:

Meter meter1 = new Meter();
metricRegistry.register("meter1", meter1);

Meter meter2 = metricRegistry.meter("meter2");

There are two basic ways of creating a new metric: instantiating one yourself or obtaining one from the metric registry. As you can see, we used both of them in the example above: we instantiate the Meter object “meter1” ourselves, and we obtain another Meter object, “meter2”, which is created by the metricRegistry.

In a metric registry, every metric has a unique name, as we used “meter1” and “meter2” as metric names above. MetricRegistry also provides a set of static helper methods to help us create proper metric names:

String name1 = MetricRegistry.name(Filter.class, "request", "count");
String name2 = MetricRegistry.name("CustomFilter", "response", "count");

If we need to manage a set of metric registries, we can use the SharedMetricRegistries class, which is a singleton and thread-safe. We can add a metric registry to it, retrieve that metric registry from it, and remove it:

SharedMetricRegistries.add("default", metricRegistry);
MetricRegistry retrievedMetricRegistry = SharedMetricRegistries.getOrCreate("default");
SharedMetricRegistries.remove("default");

3. Metrics Concepts

The metrics-core module provides several commonly used metric types: Meter, Gauge, Counter, Histogram and Timer, and Reporter to output metrics’ values.

3.1. Meter

A Meter measures the count and rate of event occurrences:

Meter meter = new Meter();
long initCount = meter.getCount();
assertThat(initCount, equalTo(0L));

meter.mark();
assertThat(meter.getCount(), equalTo(1L));

meter.mark(20);
assertThat(meter.getCount(), equalTo(21L));

double meanRate = meter.getMeanRate();
double oneMinRate = meter.getOneMinuteRate();
double fiveMinRate = meter.getFiveMinuteRate();
double fifteenMinRate = meter.getFifteenMinuteRate(); 

The getCount() method returns the event occurrence count, and mark() adds 1 or n to it. The Meter object provides four rates which represent the average rate for the whole Meter lifetime, and for the most recent one minute, five minutes, and fifteen minutes, respectively.

3.2. Gauge

Gauge is an interface which is simply used to return a particular value. The metrics-core module provides several implementations of it: RatioGauge, CachedGauge, DerivativeGauge and JmxAttributeGauge.
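
Since Gauge is an interface with a single getValue() method, we can also register a trivial one inline with a lambda before turning to those implementations (a minimal sketch; the metric name is our own choice):

Gauge<Integer> queueSizeGauge = () -> 42;
metricRegistry.register("queue-size", queueSizeGauge);

assertThat(queueSizeGauge.getValue(), equalTo(42));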

RatioGauge is an abstract class and it measures the ratio of one value to another.

Let’s see how to use it. First, we implement a class AttendanceRatioGauge:

public class AttendanceRatioGauge extends RatioGauge {
    private int attendanceCount;
    private int courseCount;

    @Override
    protected Ratio getRatio() {
        return Ratio.of(attendanceCount, courseCount);
    }
    
    // standard constructors
}

And then we test it:

RatioGauge ratioGauge = new AttendanceRatioGauge(15, 20);

assertThat(ratioGauge.getValue(), equalTo(0.75));

CachedGauge is another abstract class which can cache a value; it is therefore quite useful when the value is expensive to calculate. To use it, we need to implement a class ActiveUsersGauge:

public class ActiveUsersGauge extends CachedGauge<List<Long>> {
    
    @Override
    protected List<Long> loadValue() {
        return getActiveUserCount();
    }
 
    private List<Long> getActiveUserCount() {
        List<Long> result = new ArrayList<Long>();
        result.add(12L);
        return result;
    }

    // standard constructors
}

Then we test it to see if it works as expected:

Gauge<List<Long>> activeUsersGauge = new ActiveUsersGauge(15, TimeUnit.MINUTES);
List<Long> expected = new ArrayList<>();
expected.add(12L);

assertThat(activeUsersGauge.getValue(), equalTo(expected));

We set the cache’s expiration time to 15 minutes when instantiating the ActiveUsersGauge.

DerivativeGauge is also an abstract class, and it allows you to derive its value from the value of another Gauge.

Let’s look at an example:

public class ActiveUserCountGauge extends DerivativeGauge<List<Long>, Integer> {
    
    @Override
    protected Integer transform(List<Long> value) {
        return value.size();
    }

    // standard constructors
}

This Gauge derives its value from an ActiveUsersGauge, so we expect it to be the value from the base list’s size:

Gauge<List<Long>> activeUsersGauge = new ActiveUsersGauge(15, TimeUnit.MINUTES);
Gauge<Integer> activeUserCountGauge = new ActiveUserCountGauge(activeUsersGauge);

assertThat(activeUserCountGauge.getValue(), equalTo(1));

JmxAttributeGauge is used when we need to access other libraries’ metrics exposed via JMX.
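
For example, we could expose the JVM’s heap usage, which the platform MBean server publishes under a well-known ObjectName (a sketch; the metric name is our own choice, and constructing the ObjectName can throw a checked exception):

Gauge<Object> heapUsageGauge = new JmxAttributeGauge(
  new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
metricRegistry.register("jvm.heap-memory-usage", heapUsageGauge);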

3.3. Counter

The Counter is used for recording incrementations and decrementations:

Counter counter = new Counter();
long initCount = counter.getCount();
assertThat(initCount, equalTo(0L));

counter.inc();
assertThat(counter.getCount(), equalTo(1L));

counter.inc(11);
assertThat(counter.getCount(), equalTo(12L));

counter.dec();
assertThat(counter.getCount(), equalTo(11L));

counter.dec(6);
assertThat(counter.getCount(), equalTo(5L));

3.4. Histogram

Histogram is used for keeping track of a stream of long values, and it analyzes their statistical characteristics such as max, min, mean, median, standard deviation, 75th percentile, and so on:

Histogram histogram = new Histogram(new UniformReservoir());
histogram.update(5);
long count1 = histogram.getCount();
assertThat(count1, equalTo(1L));

Snapshot snapshot1 = histogram.getSnapshot();
assertThat(snapshot1.getValues().length, equalTo(1));
assertThat(snapshot1.getValues()[0], equalTo(5L));

histogram.update(20);
long count2 = histogram.getCount();
assertThat(count2, equalTo(2L));

Snapshot snapshot2 = histogram.getSnapshot();
assertThat(snapshot2.getValues().length, equalTo(2));
assertThat(snapshot2.getValues()[1], equalTo(20L));
assertThat(snapshot2.getMax(), equalTo(20L));
assertThat(snapshot2.getMean(), equalTo(12.5));
assertEquals(10.6, snapshot2.getStdDev(), 0.1);
assertThat(snapshot2.get75thPercentile(), equalTo(20.0));
assertThat(snapshot2.get999thPercentile(), equalTo(20.0));

Histogram samples the data by using reservoir sampling, and when we instantiate a Histogram object, we need to set its reservoir explicitly.

Reservoir is an interface, and metrics-core provides four implementations of it: ExponentiallyDecayingReservoir, UniformReservoir, SlidingTimeWindowReservoir, and SlidingWindowReservoir.

In the section above, we mentioned that a metric can also be created by the MetricRegistry, besides using a constructor. When we use metricRegistry.histogram(), it returns a Histogram instance backed by an ExponentiallyDecayingReservoir.
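
If we need a different sampling strategy, we can instantiate the Histogram with the reservoir of our choice and register it ourselves (a sketch using SlidingWindowReservoir, which keeps the last N measurements; the metric name is arbitrary):

Histogram slidingHistogram = metricRegistry.register(
  "sliding-histogram", new Histogram(new SlidingWindowReservoir(1024)));
slidingHistogram.update(3);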

3.5. Timer

Timer is used for keeping track of multiple timing durations which are represented by Context objects, and it also provides their statistical data:

Timer timer = new Timer();
Timer.Context context1 = timer.time();
TimeUnit.SECONDS.sleep(5);
long elapsed1 = context1.stop();

assertEquals(5000000000L, elapsed1, 1000000);
assertThat(timer.getCount(), equalTo(1L));
assertEquals(0.2, timer.getMeanRate(), 0.1);

Timer.Context context2 = timer.time();
TimeUnit.SECONDS.sleep(2);
context2.close();

assertThat(timer.getCount(), equalTo(2L));
assertEquals(0.3, timer.getMeanRate(), 0.1);
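
Since Timer.Context implements Closeable, we can also time a block of code with try-with-resources, which records the duration even if the block throws (a minimal sketch):

try (Timer.Context context = timer.time()) {
    // the work we want to measure
    TimeUnit.SECONDS.sleep(1);
}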

3.6. Reporter

When we need to output our measurements, we can use Reporter. This is an interface, and the metrics-core module provides several implementations of it, such as ConsoleReporter, CsvReporter, Slf4jReporter, JmxReporter and so on.

Here we use ConsoleReporter as an example:

MetricRegistry metricRegistry = new MetricRegistry();

Meter meter = metricRegistry.meter("meter");
meter.mark();
meter.mark(200);
Histogram histogram = metricRegistry.histogram("histogram");
histogram.update(12);
histogram.update(17);
Counter counter = metricRegistry.counter("counter");
counter.inc();
counter.dec();

ConsoleReporter reporter = ConsoleReporter.forRegistry(metricRegistry).build();
reporter.start(5, TimeUnit.SECONDS);
reporter.report();

Here is the sample output of the ConsoleReporter:

-- Histograms ------------------------------------------------------------------
histogram
count = 2
min = 12
max = 17
mean = 14.50
stddev = 2.50
median = 17.00
75% <= 17.00
95% <= 17.00
98% <= 17.00
99% <= 17.00
99.9% <= 17.00

-- Meters ----------------------------------------------------------------------
meter
count = 201
mean rate = 1756.87 events/second
1-minute rate = 0.00 events/second
5-minute rate = 0.00 events/second
15-minute rate = 0.00 events/second
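
The other reporters are built in a similar way; for instance, a Slf4jReporter that logs metrics once a minute could look like this (a sketch, assuming an SLF4J binding is on the classpath):

Slf4jReporter slf4jReporter = Slf4jReporter.forRegistry(metricRegistry)
  .outputTo(LoggerFactory.getLogger("com.baeldung.metrics"))
  .convertRatesTo(TimeUnit.SECONDS)
  .convertDurationsTo(TimeUnit.MILLISECONDS)
  .build();
slf4jReporter.start(1, TimeUnit.MINUTES);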

4. Module metrics-healthchecks

Metrics has an extension metrics-healthchecks module for dealing with health checks.

4.1. Maven Dependencies

To use the metrics-healthchecks module, we need to add this dependency to the pom.xml file:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-healthchecks</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

4.2. Usage

First, we need several classes which are responsible for specific health check operations, and these classes must extend HealthCheck.

For example, we use DatabaseHealthCheck and UserCenterHealthCheck:

public class DatabaseHealthCheck extends HealthCheck {
 
    @Override
    protected Result check() throws Exception {
        return Result.healthy();
    }
}
public class UserCenterHealthCheck extends HealthCheck {
 
    @Override
    protected Result check() throws Exception {
        return Result.healthy();
    }
}

Then, we need a HealthCheckRegistry (which is just like MetricRegistry), and register the DatabaseHealthCheck and UserCenterHealthCheck with it:

HealthCheckRegistry healthCheckRegistry = new HealthCheckRegistry();
healthCheckRegistry.register("db", new DatabaseHealthCheck());
healthCheckRegistry.register("uc", new UserCenterHealthCheck());

assertThat(healthCheckRegistry.getNames().size(), equalTo(2));

We can also unregister the HealthCheck:

healthCheckRegistry.unregister("uc");
 
assertThat(healthCheckRegistry.getNames().size(), equalTo(1));

We can run all the HealthCheck instances:

Map<String, HealthCheck.Result> results = healthCheckRegistry.runHealthChecks();
for (Map.Entry<String, HealthCheck.Result> entry : results.entrySet()) {
    assertThat(entry.getValue().isHealthy(), equalTo(true));
}

Finally, we can run a specific HealthCheck instance:

healthCheckRegistry.runHealthCheck("db");
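
The runHealthCheck() method returns the HealthCheck.Result of that single check, so we can inspect it directly:

HealthCheck.Result dbResult = healthCheckRegistry.runHealthCheck("db");

assertThat(dbResult.isHealthy(), equalTo(true));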

5. Module metrics-servlets

Metrics provides us with a handful of useful servlets which allow us to access metrics-related data through HTTP requests.

5.1. Maven Dependencies

To use the metrics-servlets module, we need to add this dependency to the pom.xml file:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-servlets</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

5.2. HealthCheckServlet Usage

HealthCheckServlet provides health check results. First, we need to create a ServletContextListener which exposes our HealthCheckRegistry:

public class MyHealthCheckServletContextListener
  extends HealthCheckServlet.ContextListener {
 
    public static HealthCheckRegistry HEALTH_CHECK_REGISTRY
      = new HealthCheckRegistry();

    static {
        HEALTH_CHECK_REGISTRY.register("db", new DatabaseHealthCheck());
    }

    @Override
    protected HealthCheckRegistry getHealthCheckRegistry() {
        return HEALTH_CHECK_REGISTRY;
    }
}

Then, we add both this listener and HealthCheckServlet into the web.xml file:

<listener>
    <listener-class>com.baeldung.metrics.servlets.MyHealthCheckServletContextListener</listener-class>
</listener>
<servlet>
    <servlet-name>healthCheck</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.HealthCheckServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>healthCheck</servlet-name>
    <url-pattern>/healthcheck</url-pattern>
</servlet-mapping>

Now we can start the web application, and send a GET request to “http://localhost:8080/healthcheck” to get health check results. Its response should be like this:

{
  "db": {
    "healthy": true
  }
}

5.3. ThreadDumpServlet Usage

ThreadDumpServlet provides information about all live threads in the JVM, their states, their stack traces, and the state of any locks they may be waiting for.
If we want to use it, we simply need to add these into the web.xml file:

<servlet>
    <servlet-name>threadDump</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.ThreadDumpServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>threadDump</servlet-name>
    <url-pattern>/threaddump</url-pattern>
</servlet-mapping>

Thread dump data will be available at “http://localhost:8080/threaddump”.

5.4. PingServlet Usage

PingServlet can be used to test if the application is running. We add these into the web.xml file:

<servlet>
    <servlet-name>ping</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.PingServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>ping</servlet-name>
    <url-pattern>/ping</url-pattern>
</servlet-mapping>

And then send a GET request to “http://localhost:8080/ping”. The response’s status code is 200 and its content is “pong”.

5.5. MetricsServlet Usage

MetricsServlet provides metrics data. First, we need to create a ServletContextListener which exposes our MetricRegistry:

public class MyMetricsServletContextListener
  extends MetricsServlet.ContextListener {
    private static MetricRegistry METRIC_REGISTRY
     = new MetricRegistry();

    static {
        Counter counter = METRIC_REGISTRY.counter("m01-counter");
        counter.inc();

        Histogram histogram = METRIC_REGISTRY.histogram("m02-histogram");
        histogram.update(5);
        histogram.update(20);
        histogram.update(100);
    }

    @Override
    protected MetricRegistry getMetricRegistry() {
        return METRIC_REGISTRY;
    }
}

Both this listener and MetricsServlet need to be added into web.xml:

<listener>
    <listener-class>com.baeldung.metrics.servlets.MyMetricsServletContextListener</listener-class>
</listener>
<servlet>
    <servlet-name>metrics</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.MetricsServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>metrics</servlet-name>
    <url-pattern>/metrics</url-pattern>
</servlet-mapping>

This will be exposed in our web application at “http://localhost:8080/metrics”. Its response should contain various metrics data:

{
  "version": "3.0.0",
  "gauges": {},
  "counters": {
    "m01-counter": {
      "count": 1
    }
  },
  "histograms": {
    "m02-histogram": {
      "count": 3,
      "max": 100,
      "mean": 41.66666666666666,
      "min": 5,
      "p50": 20,
      "p75": 100,
      "p95": 100,
      "p98": 100,
      "p99": 100,
      "p999": 100,
      "stddev": 41.69998667732268
    }
  },
  "meters": {},
  "timers": {}
}

5.6. AdminServlet Usage

AdminServlet aggregates HealthCheckServlet, ThreadDumpServlet, MetricsServlet, and PingServlet.

Let’s add these into the web.xml:

<servlet>
    <servlet-name>admin</servlet-name>
    <servlet-class>com.codahale.metrics.servlets.AdminServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>admin</servlet-name>
    <url-pattern>/admin/*</url-pattern>
</servlet-mapping>

It can now be accessed at “http://localhost:8080/admin”. We will get a page containing four links, one for each of those four servlets.

Note that if we want to run health checks and access metrics data, those two listeners are still needed.

6. Module metrics-servlet

The metrics-servlet module provides a Filter which has several metrics: meters for status codes, a counter for the number of active requests, and a timer for request duration.

6.1. Maven Dependencies

To use this module, let’s first add the dependency into the pom.xml:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-servlet</artifactId>
    <version>3.1.2</version>
</dependency>

And you can find its latest version here.

6.2. Usage

To use it, we need to create a ServletContextListener which exposes our MetricRegistry to the InstrumentedFilter:

public class MyInstrumentedFilterContextListener
  extends InstrumentedFilterContextListener {
 
    public static MetricRegistry REGISTRY = new MetricRegistry();

    @Override
    protected MetricRegistry getMetricRegistry() {
        return REGISTRY;
    }
}

Then, we add these into the web.xml:

<listener>
     <listener-class>
         com.baeldung.metrics.servlet.MyInstrumentedFilterContextListener
     </listener-class>
</listener>

<filter>
    <filter-name>instrumentFilter</filter-name>
    <filter-class>
        com.codahale.metrics.servlet.InstrumentedFilter
    </filter-class>
</filter>
<filter-mapping>
    <filter-name>instrumentFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Now the InstrumentedFilter is active. If we want to access its metrics data, we can do so through its MetricRegistry, REGISTRY.

7. Other Modules

Besides the modules we introduced above, Metrics has some other modules for different purposes:

  • metrics-jvm: provides several useful metrics for instrumenting JVM internals
  • metrics-ehcache: provides InstrumentedEhcache, a decorator for Ehcache caches
  • metrics-httpclient: provides classes for instrumenting Apache HttpClient (4.x version)
  • metrics-log4j: provides InstrumentedAppender, a Log4j Appender implementation for log4j 1.x which records the rate of logged events by their logging level
  • metrics-log4j2: is similar to metrics-log4j, just for log4j 2.x
  • metrics-logback: provides InstrumentedAppender, a Logback Appender implementation which records the rate of logged events by their logging level
  • metrics-json: provides HealthCheckModule and MetricsModule for Jackson

In addition to these main project modules, some third-party libraries provide integration with other libraries and frameworks.

8. Conclusion

Instrumenting applications is a common requirement, so in this article, we introduced Metrics, hoping that it can help you solve this problem.

As always, the complete source code for the example is available over on GitHub.

Apache Maven Tutorial


1. Introduction

Building a software project typically consists of such tasks as downloading dependencies, putting additional jars on a classpath, compiling source code into binary code, running tests, packaging compiled code into deployable artifacts such as JAR, WAR, and ZIP files, and deploying these artifacts to an application server or repository.

Apache Maven automates these tasks, minimizing the risk of humans making errors while building the software manually and separating the work of compiling and packaging our code from that of code construction.

In this tutorial, we’re going to explore this powerful tool for describing, building, and managing Java software projects using a central piece of information — the Project Object Model (POM) — that is written in XML.

2. Why Use Maven?

The key features of Maven are:

  • simple project setup that follows best practices: Maven tries to avoid as much configuration as possible, by supplying project templates (named archetypes)
  • dependency management: it includes automatic updating, downloading and validating the compatibility, as well as reporting the dependency closures (known also as transitive dependencies)
  • isolation between project dependencies and plugins: with Maven, project dependencies are retrieved from the dependency repositories while any plugin’s dependencies are retrieved from the plugin repositories, resulting in fewer conflicts when plugins start to download additional dependencies
  • central repository system: project dependencies can be loaded from the local file system or public repositories, such as Maven Central

In order to learn how to install Maven on your system, please check this tutorial on Baeldung.

3. Project Object Model

The configuration of a Maven project is done via a Project Object Model (POM), represented by a pom.xml file. The POM describes the project, manages dependencies, and configures plugins for building the software.

The POM also defines the relationships among modules of multi-module projects. Let’s look at the basic structure of a typical POM file:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.baeldung</groupId>
    <artifactId>org.baeldung</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>org.baeldung</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
            //...
            </plugin>
        </plugins>
    </build>
</project>

Let’s take a closer look at these constructs.

3.1. Project Identifiers

Maven uses a set of identifiers, also called coordinates, to uniquely identify a project and specify how the project artifact should be packaged:

  • groupId – a unique base name of the company or group that created the project
  • artifactId – a unique name of the project
  • version – a version of the project
  • packaging – a packaging method (e.g. WAR/JAR/ZIP)

The first three of these (groupId:artifactId:version) combine to form the unique identifier and are the mechanism by which you specify which versions of external libraries (e.g. JARs) your project will use.

3.2. Dependencies

These external libraries that a project uses are called dependencies. The dependency management feature in Maven ensures automatic download of those libraries from a central repository, so you don’t have to store them locally.

This is a key feature of Maven and provides the following benefits:

  • uses less storage by significantly reducing the number of downloads off remote repositories
  • makes checking out a project quicker
  • provides an effective platform for exchanging binary artifacts within your organization and beyond without the need for building artifact from source every time

In order to declare a dependency on an external library, you need to provide the groupId, artifactId, and the version of the library. Let’s take a look at an example:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.3.5.RELEASE</version>
</dependency>

As Maven processes the dependencies, it will download the Spring Core library into your local Maven repository.

3.3. Repositories

A repository in Maven is used to hold build artifacts and dependencies of varying types. The default local repository is located in the .m2/repository folder under the home directory of the user.

If an artifact or a plug-in is available in the local repository, Maven uses it. Otherwise, it is downloaded from a central repository and stored in the local repository. The default central repository is Maven Central.

Some libraries, such as JBoss server, are not available at the central repository but are available at an alternate repository. For those libraries, you need to provide the URL to the alternate repository inside pom.xml file:

<repositories>
    <repository>
        <id>JBoss repository</id>
        <url>http://repository.jboss.org/nexus/content/groups/public/</url>
    </repository>
</repositories>

Please note that you can use multiple repositories in your projects.

3.4. Properties

Custom properties can help to make your pom.xml file easier to read and maintain. In the classic use case, you would use custom properties to define versions for your project’s dependencies.

Maven properties are value placeholders and are accessible anywhere within a pom.xml by using the notation ${name}, where name is the property name.

Let’s see an example:

<properties>
    <spring.version>4.3.5.RELEASE</spring.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
</dependencies>

Now if you want to upgrade Spring to a newer version, you only have to change the value inside the <spring.version> property tag, and all the dependencies using that property in their <version> tags will be updated.

Properties are also often used to define build path variables:

<properties>
    <project.build.folder>${project.build.directory}/tmp/</project.build.folder>
</properties>

<plugin>
    //...
    <outputDirectory>${project.build.folder}</outputDirectory>
    //...
</plugin>

3.5. Build

The build section is also a very important section of the Maven POM. It provides information about the default Maven goal, the directory for the compiled project, and the final name of the application. The default build section looks like this:

<build>
    <defaultGoal>install</defaultGoal>
    <directory>${basedir}/target</directory>
    <finalName>${artifactId}-${version}</finalName>
    <filters>
      <filter>filters/filter1.properties</filter>
    </filters>
    //...
</build>

The default output folder for compiled artifacts is named target, and the final name of the packaged artifact consists of the artifactId and version, but you can change it at any time.

3.6. Using Profiles

Another important feature of Maven is its support for profiles. A profile is basically a set of configuration values. By using profiles, you can customize the build for different environments such as Production/Test/Development:

<profiles>
    <profile>
        <id>production</id>
        <build>
            <plugins>
                <plugin>
                //...
                </plugin>
            </plugins>
        </build>
    </profile>
    <profile>
        <id>development</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <build>
            <plugins>
                <plugin>
                //...
                </plugin>
            </plugins>
        </build>
     </profile>
 </profiles>

As you can see in the example above, the default profile is set to development. If you want to run the production profile, you can use the following Maven command:

mvn clean install -Pproduction

4. Maven Build Lifecycles

Every Maven build follows a specified lifecycle. You can execute several build lifecycle goals, including the ones to compile the project’s code, create a package, and install the archive file in the local Maven dependency repository.

4.1. Lifecycle Phases

The following list shows the most important Maven lifecycle phases:

  • validate – checks the correctness of the project
  • compile – compiles the provided source code into binary artifacts
  • test – executes unit tests
  • package – packages compiled code into an archive file
  • integration-test – executes additional tests, which require the packaging
  • verify – checks if the package is valid
  • install – installs the package file into the local Maven repository
  • deploy – deploys the package file to a remote server or repository

4.2. Plugins and Goals

A Maven plugin is a collection of one or more goals. Goals are executed in phases, which determines the order in which the goals run.

The rich list of plugins that are officially supported by Maven is available here. There is also an interesting article on Baeldung about how to build an executable JAR using various plugins.

To gain a better understanding of which goals are run in which phases by default, take a look at the default Maven lifecycle bindings.

To go through any one of the above phases, we just have to call one command:

mvn <phase>

For example, mvn clean install will remove the previously created jar/war/zip files and compiled classes (clean) and execute all the phases necessary to install new archive (install).

Please note that goals provided by plugins can be associated with different phases of the lifecycle.
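
For example, we can bind a plugin goal to a phase of our choosing with an <executions> block in the plugin's configuration (a sketch; the plugin, phase, and goal here are just illustrative):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.5.0</version>
    <executions>
        <execution>
            <phase>integration-test</phase>
            <goals>
                <goal>java</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <mainClass>org.baeldung.java.App</mainClass>
    </configuration>
</plugin>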

5. Your First Maven Project

In this section, we will use the command line functionality of Maven to create a Java project.

5.1. Generating a Simple Java Project

In order to build a simple Java project, let’s run the following command:

mvn archetype:generate
  -DgroupId=org.baeldung
  -DartifactId=org.baeldung.java 
  -DarchetypeArtifactId=maven-archetype-quickstart
  -DinteractiveMode=false

The groupId is a parameter indicating the group or individual that created a project, which is often a reversed company domain name. The artifactId is the base package name used in the project, and we use the standard archetype.

Since we didn’t specify the version and the packaging type, these will be set to default values — the version will be set to 1.0-SNAPSHOT, and the packaging will be set to jar.

If you don’t know which parameters to provide, you can always specify interactiveMode=true, so that Maven asks for all the required parameters.

After the command completes, we have a Java project containing an App.java class, which is just a simple “Hello World” program, in the src/main/java folder.

We also have an example test class in src/test/java. The pom.xml of this project will look similar to this:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.baeldung</groupId>
    <artifactId>org.baeldung.java</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>org.baeldung.java</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

As you can see, the junit dependency is provided by default.

5.2. Compiling and Packaging a Project

The next step is to compile the project:

mvn compile

Maven will run through all lifecycle phases needed by the compile phase to build the project’s sources. If you want to run only the test phase, you can use:

mvn test

Now let’s invoke the package phase, which will produce the compiled archive jar file:

mvn package

5.3. Executing an Application

Finally, we are going to execute our Java project with the exec-maven-plugin. Let’s configure the necessary plugins in the pom.xml:

<build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.6.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.5.0</version>
            <configuration>
                <mainClass>org.baeldung.java.App</mainClass>
            </configuration>
        </plugin>
    </plugins>
</build>

The first plugin, maven-compiler-plugin, is responsible for compiling the source code using Java version 1.8. The exec-maven-plugin searches for the mainClass in our project.

To execute the application, we run the following command:

mvn exec:java

6. Multi-Module Projects

The mechanism in Maven that handles multi-module projects (also called aggregator projects) is called Reactor.

The Reactor collects all available modules to build, then sorts projects into the correct build order, and finally, builds them one by one.

Let’s see how to create a multi-module parent project.

6.1. Create Parent Project

First of all, we need to create a parent project. In order to create a new project with the name parent-project, we use the following command:

mvn archetype:generate -DgroupId=org.baeldung -DartifactId=parent-project

Next, we update the packaging type inside the pom.xml file to indicate that this is a parent module:

<packaging>pom</packaging>

6.2. Create Submodule Projects

In the next step, we create submodule projects from the directory of parent-project:

cd parent-project
mvn archetype:generate -DgroupId=org.baeldung  -DartifactId=core
mvn archetype:generate -DgroupId=org.baeldung  -DartifactId=service
mvn archetype:generate -DgroupId=org.baeldung  -DartifactId=webapp

To verify if we created the submodules correctly, we look in the parent-project pom.xml file, where we should see three modules:

<modules>
    <module>core</module>
    <module>service</module>
    <module>webapp</module>
</modules>

Moreover, a parent section will be added in each submodule’s pom.xml:

<parent>
    <groupId>org.baeldung</groupId>
    <artifactId>parent-project</artifactId>
    <version>1.0-SNAPSHOT</version>
</parent>

6.3. Enable Dependency Management in Parent Project

Dependency management is a mechanism for centralizing the dependency information for a multi-module parent project and its children.

When you have a set of projects or modules that inherit a common parent, you can put all the required information about the dependencies in the common pom.xml file. This will simplify the references to the artifacts in the child POMs.

Let’s take a look at a sample parent’s pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>4.3.5.RELEASE</version>
        </dependency>
        //...
    </dependencies>
</dependencyManagement>

By declaring the spring-core version in the parent, all submodules that depend on spring-core can declare the dependency using only the groupId and artifactId, and the version will be inherited:

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
    </dependency>
    //...
</dependencies>

Moreover, you can provide exclusions for dependency management in parent’s pom.xml, so that specific libraries will not be inherited by child modules:

<exclusions>
    <exclusion>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
    </exclusion>
</exclusions>

Finally, if a child module needs to use a different version of a managed dependency, you can override the managed version in the child’s pom.xml file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.2.1.RELEASE</version>
</dependency>

Please note that while child modules inherit from their parent project, a parent project does not necessarily have any modules that it aggregates. On the other hand, a parent project may also aggregate projects that do not inherit from it.

For more information on inheritance and aggregation please refer to this documentation.

6.4. Updating the Submodules and Building a Project

We can change the packaging type of each submodule. For example, let’s change the packaging of the webapp module to WAR by updating the pom.xml file:

<packaging>war</packaging>

Now we can test the build of our project by using the mvn clean install command. The output of the Maven logs should be similar to this:

[INFO] Scanning for projects...
[INFO] Reactor build order:
[INFO]   parent-project
[INFO]   core
[INFO]   service
[INFO]   webapp
//.............
[INFO] -----------------------------------------
[INFO] Reactor Summary:
[INFO] -----------------------------------------
[INFO] parent-project .................. SUCCESS [2.041s]
[INFO] core ............................ SUCCESS [4.802s]
[INFO] service ......................... SUCCESS [3.065s]
[INFO] webapp .......................... SUCCESS [6.125s]
[INFO] -----------------------------------------

7. Conclusion

In this article, we discussed some of the more popular features of the Apache Maven build tool.

All code examples on Baeldung are built using Maven, so you can easily check our GitHub project website to see various Maven configurations.

CORS with Spring


1. Overview

Cross-Origin Resource Sharing (CORS) is a relevant specification for any modern browser, given the emergence of HTML5 and JS clients that consume data via REST APIs.

In many cases, the host that serves the JS (e.g. example.com) is different from the host that serves the data (e.g. api.example.com). In such a case, CORS enables the cross-domain communication.

Spring provides first-class support for CORS, offering an easy and powerful way of configuring it.

2. Controller Method CORS Configuration

Enabling CORS is straightforward – just add the annotation @CrossOrigin.

We may implement this in a number of different ways.

2.1. @CrossOrigin on a @RequestMapping-Annotated Handler Method

@RestController
@RequestMapping("/account")
public class AccountController {

    @CrossOrigin
    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}

In the example above, CORS is only enabled for the retrieve() method. We can see that we didn’t set any configuration for the @CrossOrigin annotation, so the defaults take effect:

  • All origins are allowed
  • The HTTP methods allowed are those specified in the @RequestMapping annotation (for this example, GET)
  • The time that the preflight response is cached (maxAge) is 30 minutes

2.2. @CrossOrigin on the Controller

@CrossOrigin(origins = "http://example.com", maxAge = 3600)
@RestController
@RequestMapping("/account")
public class AccountController {

    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}

Since @CrossOrigin is added to the controller, both the retrieve() and remove() methods have it enabled. We can customize the configuration by specifying the value of one of the annotation attributes: origins, methods, allowedHeaders, exposedHeaders, allowCredentials, or maxAge.

2.3. @CrossOrigin on Controller and Handler Method 

@CrossOrigin(maxAge = 3600)
@RestController
@RequestMapping("/account")
public class AccountController {

    @CrossOrigin("http://example.com")
    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}

Spring will combine attributes from both annotations to create a merged CORS configuration.

In this example, both methods will have a maxAge of 3600 seconds; the method remove() will allow all origins, while the method retrieve() will only allow origins from http://example.com.

3. Global CORS Configuration

As an alternative to the fine-grained annotation-based configuration, Spring lets us define a global CORS configuration outside of our controllers. This is similar to using a Filter-based solution, but it can be declared within Spring MVC and combined with fine-grained @CrossOrigin configuration.

By default all origins and GET, HEAD and POST methods are allowed.

3.1. JavaConfig

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**");
    }
}

The example above enables CORS requests from any origin to any endpoint in the application.

If we want to lock this down a bit more, the registry.addMapping method returns a CorsRegistration object which can be used for additional configuration. There’s also an allowedOrigins method which you can use to specify an array of allowed origins. This can be useful if you need to load this array from an external source at runtime.

Additionally, there’s also allowedMethods, allowedHeaders, exposedHeaders, maxAge, and allowCredentials which can be used to set the response headers and provide us with more customization options.
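
For example, a more locked-down mapping could combine these methods fluently (a sketch; the origins and values are illustrative):

@Override
public void addCorsMappings(CorsRegistry registry) {
    registry.addMapping("/api/**")
      .allowedOrigins("http://domain1.com", "http://domain2.com")
      .allowedMethods("GET", "PUT")
      .maxAge(3600);
}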

3.2. XML Namespace

This minimal XML configuration enables CORS on a /** path pattern with the same default properties as the JavaConfig one:

<mvc:cors>
    <mvc:mapping path="/**" />
</mvc:cors>

It is also possible to declare several CORS mappings with customized properties:

<mvc:cors>

    <mvc:mapping path="/api/**"
        allowed-origins="http://domain1.com, http://domain2.com"
        allowed-methods="GET, PUT"
        allowed-headers="header1, header2, header3"
        exposed-headers="header1, header2" allow-credentials="false"
        max-age="123" />

    <mvc:mapping path="/resources/**"
        allowed-origins="http://domain1.com" />

</mvc:cors>

4. How It Works

CORS requests are automatically dispatched to the various HandlerMappings that are registered. They handle CORS preflight requests and intercept CORS simple and actual requests by means of a CorsProcessor implementation (DefaultCorsProcessor by default) in order to add the relevant CORS response headers (such as Access-Control-Allow-Origin).

CorsConfiguration allows one to specify how the CORS requests should be processed: allowed origins, headers and methods among others. This may be provided in various ways:

  • AbstractHandlerMapping#setCorsConfiguration() allows one to specify a Map with several CorsConfigurations mapped onto path patterns such as /api/**
  • Subclasses may provide their own CorsConfiguration by overriding the AbstractHandlerMapping#getCorsConfiguration(Object, HttpServletRequest) method
  • Handlers may implement the CorsConfigurationSource interface (like ResourceHttpRequestHandler now does) in order to provide a CorsConfiguration for each request

5. Conclusion

In this article, we showed how Spring provides support for enabling CORS in our application.

We started with the configuration of the controller. We saw that we only need to add the annotation @CrossOrigin in order to enable CORS either to one particular method or the entire controller.

Finally, we also saw that if we want to control the CORS configuration outside of the controllers we can perform this easily in the configuration files – either using JavaConfig or XML.

Querying Couchbase with MapReduce Views


1. Overview

In this tutorial, we will introduce some simple MapReduce views and demonstrate how to query them using the Couchbase Java SDK.

2. Maven Dependency

To work with Couchbase in a Maven project, add the Couchbase SDK to your pom.xml:

<dependency>
    <groupId>com.couchbase.client</groupId>
    <artifactId>java-client</artifactId>
    <version>2.4.0</version>
</dependency>

You can find the latest version on Maven Central.

3. MapReduce Views

In Couchbase, a MapReduce view is a type of index that can be used to query a data bucket. It is defined using a JavaScript map function and an optional reduce function.

3.1. The map Function

When the view is created, the map function is run once against each document in the bucket, and the results are stored in the bucket.

Once a view is created, the map function is run only against newly inserted or updated documents in order to update the view incrementally.

Because the map function’s results are stored in the data bucket, queries against a view exhibit low latencies.

Let’s look at an example of a map function that creates an index on the name field of all documents in the bucket whose type field is equal to “StudentGrade”:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.name) {    
        emit(doc.name, null);
    }
}

The emit function tells Couchbase which data field(s) to store in the index key (first parameter) and what value (second parameter) to associate with the indexed document.

In this case, we are storing only the document name property in the index key. And since we are not interested in associating any particular value with each entry, we pass null as the value parameter.

As Couchbase processes the view, it creates an index of the keys that are emitted by the map function, associating each key with all documents for which that key was emitted.

For example, if three documents have the name property set to “John Doe”, then the index key “John Doe” would be associated with those three documents.

3.2. The reduce Function

The reduce function is used to perform aggregate calculations using the results of a map function. The Couchbase Admin UI provides an easy way to apply the built-in reduce functions “_count”, “_sum”, and “_stats”, to your map function.

You can also write your own reduce functions for more complex aggregations. We will see examples of using the built-in reduce functions later in the tutorial.
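
As a quick preview of how that looks from the Java SDK (a sketch; the view name countStudentsByCourse is hypothetical and assumed to have _count as its reduce function), we enable reduction on the query and group the results by key:

ViewQuery query = ViewQuery.from("studentGrades", "countStudentsByCourse")
  .reduce()
  .group();

ViewResult result = bucket.query(query);
for (ViewRow row : result.allRows()) {
    String course = (String) row.key();                  // the grouping key emitted by the map function
    long count = Long.parseLong(row.value().toString()); // the _count result for that key
}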

4. Working with Views and Queries

4.1. Organizing the Views

Views are organized into one or more design documents per bucket. In theory, there is no limit to the number of views per design document. However, for optimal performance, it has been suggested that you limit each design document to fewer than ten views.

When you first create a view within a design document, Couchbase designates it as a development view. You can run queries against a development view to test its functionality. Once you are satisfied with the view, you would publish the design document, and the view becomes a production view.

4.2. Constructing Queries

In order to construct a query against a Couchbase view, you need to provide its design document name and view name to create a ViewQuery object:

ViewQuery query = ViewQuery.from("design-document-name", "view-name");

When executed, this query will return all rows of the view. We will see in later sections how to restrict the result set based on the key values.

To construct a query against a development view, you can apply the development() method when creating the query:

ViewQuery query 
  = ViewQuery.from("design-doc-name", "view-name").development();

4.3. Executing the Query

Once we have a ViewQuery object, we can execute the query to obtain a ViewResult:

ViewResult result = bucket.query(query);

4.4. Processing Query Results

And now that we have a ViewResult, we can iterate over the rows to get the document ids and/or content:

for(ViewRow row : result.allRows()) {
    JsonDocument doc = row.document();
    String id = doc.id();
    String json = doc.content().toString();
}
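
Note that row.document() fetches the full document for each row, which costs an extra round trip per result. If we only need the document ids, we can read them straight from the index instead:

for (ViewRow row : result.allRows()) {
    String id = row.id(); // comes from the index itself, no extra document fetch
}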

5. Sample Application

For the remainder of the tutorial, we will write MapReduce views and queries for a set of student grade documents having the following format, with grades constrained to the range 0 to 100:

{ 
    "type": "StudentGrade",
    "name": "John Doe",
    "course": "History",
    "hours": 3,
    "grade": 95
}

We will store these documents in the “baeldung-tutorial” bucket and all views in a design document named “studentGrades.” Let’s look at the code needed to open the bucket so that we can query it:

Bucket bucket = CouchbaseCluster.create("127.0.0.1")
  .openBucket("baeldung-tutorial");

6. Exact Match Queries

Suppose you want to find all student grades for a particular course or set of courses. Let’s write a view called “findByCourse” using the following map function:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.course && doc.grade) {
        emit(doc.course, null);
    }
}

Note that in this simple view, we only need to emit the course field.

6.1. Matching on a Single Key

To find all grades for the History course, we apply the key method to our base query:

ViewQuery query 
  = ViewQuery.from("studentGrades", "findByCourse").key("History");

6.2. Matching on Multiple Keys

If you want to find all grades for Math and Science courses, you can apply the keys method to the base query, passing it an array of key values:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourse")
  .keys(JsonArray.from("Math", "Science"));

7. Range Queries

In order to query for documents containing a range of values for one or more fields, we need a view that emits the field(s) we are interested in, and we must specify a lower and/or upper bound for the query.

Let’s take a look at how to perform range queries involving a single field and multiple fields.

7.1. Queries Involving a Single Field

To find all documents with a range of grade values regardless of the value of the course field, we need a view that emits only the grade field. Let’s write the map function for the “findByGrade” view:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.grade) {
        emit(doc.grade, null);
    }
}

Let’s write a query in Java using this view to find all grades equivalent to a “B” letter grade (80 to 89 inclusive):

ViewQuery query = ViewQuery.from("studentGrades", "findByGrade")
  .startKey(80)
  .endKey(89)
  .inclusiveEnd(true);

Note that the start key value in a range query is always treated as inclusive.

And if all the grades are known to be integers, then the following query will yield the same results:

ViewQuery query = ViewQuery.from("studentGrades", "findByGrade")
  .startKey(80)
  .endKey(90)
  .inclusiveEnd(false);

To find all “A” grades (90 and above), we only need to specify the lower bound:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByGrade")
  .startKey(90);

And to find all failing grades (below 60), we only need to specify the upper bound:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByGrade")
  .endKey(60)
  .inclusiveEnd(false);

7.2. Queries Involving Multiple Fields

Now, suppose we want to find all students in a specific course whose grade falls into a certain range. This query requires a new view that emits both the course and grade fields.

With multi-field views, each index key is emitted as an array of values. Since our query involves a fixed value for course and a range of grade values, we will write the map function to emit each key as an array of the form [course, grade].

Let’s look at the map function for the view “findByCourseAndGrade“:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.course && doc.grade) {
        emit([doc.course, doc.grade], null);
    }
}

When this view is populated in Couchbase, the index entries are sorted by course and grade. Here’s a subset of keys in the “findByCourseAndGrade” view shown in their natural sort order:

["History", 80]
["History", 90]
["History", 94]
["Math", 82]
["Math", 88]
["Math", 97]
["Science", 78]
["Science", 86]
["Science", 92]

Since the keys in this view are arrays, you would also use arrays of this format when specifying the lower and upper bounds of a range query against this view.

This means that in order to find all students who got a “B” grade (80 to 89) in the Math course, you would set the lower bound to:

["Math", 80]

and the upper bound to:

["Math", 89]

Let’s write the range query in Java:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .startKey(JsonArray.from("Math", 80))
  .endKey(JsonArray.from("Math", 89))
  .inclusiveEnd(true);

If we want to find all students who received an “A” grade (90 and above) in Math, then we would write:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .startKey(JsonArray.from("Math", 90))
  .endKey(JsonArray.from("Math", 100));

Note that because we are fixing the course value to “Math“, we have to include an upper bound with the highest possible grade value. Otherwise, our result set would also include all documents whose course value is lexicographically greater than “Math“.

And to find all failing Math grades (below 60):

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .startKey(JsonArray.from("Math", 0))
  .endKey(JsonArray.from("Math", 60))
  .inclusiveEnd(false);

Much like the previous example, we must specify a lower bound with the lowest possible grade. Otherwise, our result set would also include all grades where the course value is lexicographically less than “Math“.

Finally, to find the five highest Math grades (barring any ties), you can tell Couchbase to perform a descending sort and to limit the size of the result set:

ViewQuery query = ViewQuery
  .from("studentGrades", "findByCourseAndGrade")
  .descending()
  .startKey(JsonArray.from("Math", 100))
  .endKey(JsonArray.from("Math", 0))
  .inclusiveEnd(true)
  .limit(5);

Note that when performing a descending sort, the startKey and endKey values are reversed, because Couchbase applies the sort before it applies the limit.

8. Aggregate Queries

A major strength of MapReduce views is that they are highly efficient for running aggregate queries against large datasets. In our student grades dataset, for example, we can easily calculate the following aggregates:

  • number of students in each course
  • sum of credit hours for each student
  • grade point average for each student across all courses

Let’s build a view and query for each of these calculations using built-in reduce functions.

8.1. Using the count() Function

First, let’s write the map function for a view to count the number of students in each course:

function (doc, meta) {
    if(doc.type == "StudentGrade" && doc.course && doc.name) {
        emit([doc.course, doc.name], null);
    }
}

We’ll call this view “countStudentsByCourse” and designate that it is to use the built-in “_count” function. And since we are only performing a simple count, we can still emit null as the value for each entry.

To count the number of students in each course:

ViewQuery query = ViewQuery
  .from("studentGrades", "countStudentsByCourse")
  .reduce()
  .groupLevel(1);

Extracting data from aggregate queries is different from what we’ve seen up to this point. Instead of extracting a matching Couchbase document for each row in the result, we are extracting the aggregate keys and results.

Let’s run the query and extract the counts into a java.util.Map:

ViewResult result = bucket.query(query);
Map<String, Long> numStudentsByCourse = new HashMap<>();
for(ViewRow row : result.allRows()) {
    JsonArray keyArray = (JsonArray) row.key();
    String course = keyArray.getString(0);
    long count = Long.valueOf(row.value().toString());
    numStudentsByCourse.put(course, count);
}

8.2. Using the sum() Function

Next, let’s write a view that calculates the sum of each student’s credit hours attempted. We’ll call this view “sumHoursByStudent” and designate that it is to use the built-in “_sum” function:

function (doc, meta) {
    if(doc.type == "StudentGrade"
         && doc.name
         && doc.course
         && doc.hours) {
        emit([doc.name, doc.course], doc.hours);
    }
}

Note that when applying the “_sum” function, we have to emit the value to be summed — in this case, the number of credits — for each entry.

Let’s write a query to find the total number of credits for each student:

ViewQuery query = ViewQuery
  .from("studentGrades", "sumHoursByStudent")
  .reduce()
  .groupLevel(1);

And now, let’s run the query and extract the aggregated sums into a java.util.Map:

ViewResult result = bucket.query(query);
Map<String, Long> hoursByStudent = new HashMap<>();
for(ViewRow row : result.allRows()) {
    JsonArray keyArray = (JsonArray) row.key();
    String name = keyArray.getString(0);
    long sum = Long.valueOf(row.value().toString());
    hoursByStudent.put(name, sum);
}

8.3. Calculating Grade Point Averages

Suppose we want to calculate each student’s grade point average (GPA) across all courses, using the conventional grade point scale based on the grades obtained and the number of credit hours that the course is worth (A=4 points per credit hour, B=3 points per credit hour, C=2 points per credit hour, and D=1 point per credit hour).

There is no built-in reduce function to calculate average values, so we’ll combine the output from two views to compute the GPA.

We already have the “sumHoursByStudent” view that sums the number of credit hours each student attempted. Now we need the total number of grade points each student earned.

Let’s create a view called “sumGradePointsByStudent” that calculates the number of grade points earned for each course taken. We’ll apply the built-in “_sum” function to reduce the output of the following map function:

function (doc, meta) {
    if(doc.type == "StudentGrade"
         && doc.name
         && doc.hours
         && doc.grade) {
        if(doc.grade >= 90) {
            emit(doc.name, 4*doc.hours);
        }
        else if(doc.grade >= 80) {
            emit(doc.name, 3*doc.hours);
        }
        else if(doc.grade >= 70) {
            emit(doc.name, 2*doc.hours);
        }
        else if(doc.grade >= 60) {
            emit(doc.name, doc.hours);
        }
        else {
            emit(doc.name, 0);
        }
    }
}

Now let’s query this view and extract the sums into a java.util.Map:

ViewQuery query = ViewQuery.from(
  "studentGrades",
  "sumGradePointsByStudent")
  .reduce()
  .groupLevel(1);
ViewResult result = bucket.query(query);

Map<String, Long> gradePointsByStudent = new HashMap<>();
for(ViewRow row : result.allRows()) {
    String name = (String) row.key();
    long sum = Long.valueOf(row.value().toString());
    gradePointsByStudent.put(name, sum);
}

Finally, let’s combine the two Maps in order to calculate GPA for each student:

Map<String, Float> result = new HashMap<>();
for(Entry<String, Long> creditHoursEntry : hoursByStudent.entrySet()) {
    String name = creditHoursEntry.getKey();
    long totalHours = creditHoursEntry.getValue();
    long totalGradePoints = gradePointsByStudent.get(name);
    result.put(name, ((float) totalGradePoints / totalHours));
}

9. Conclusion

We have demonstrated how to write some basic MapReduce views in Couchbase, how to construct and execute queries against them, and how to extract the results.

The code presented in this tutorial can be found in the GitHub project.

You can learn more about MapReduce views and how to query them in Java at the official Couchbase developer documentation site.

A Guide to Spring Mobile


1. Overview

Spring Mobile is an extension to the popular Spring Web MVC framework that simplifies the development of web applications that need to be fully or partially compatible with different device types, with minimal effort and less boilerplate code.

In this article, we’ll learn about the Spring Mobile project and build a sample project to highlight its uses.

2. Features of Spring Mobile

  • Automatic Device Detection: Spring Mobile has a built-in server-side device resolver abstraction layer. It analyzes all incoming requests and detects the sender’s device information, for example, the device type, the operating system, etc.
  • Site Preference Management: Using Site Preference Management, Spring Mobile allows users to choose the mobile/tablet/normal view of the website. This technique is largely superseded, since by using DeviceDelegatingViewResolver we can serve the proper view depending on the device type without demanding any input from the user side
  • Site Switcher: The Site Switcher can automatically switch users to the most appropriate view according to their device type (i.e. mobile, desktop, etc.)
  • Device Aware View Manager: Usually, depending on the device type, we forward the user request to a site meant to handle that specific device. Spring Mobile’s View Manager gives developers the flexibility to put all views in a pre-defined format, and Spring Mobile will automatically manage the different views based on the device type

3. Building an Application

Let’s now create a demo application, using Spring Mobile with Spring Boot and the Freemarker template engine, and try to capture device details with a minimal amount of code.

3.1. Maven Dependencies

Before we start, we need to add the following Spring Mobile dependency to the pom.xml:

<dependency>
    <groupId>org.springframework.mobile</groupId>
    <artifactId>spring-mobile-device</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>

We can check the latest version of Spring Mobile in Central Maven Repository.

3.2. Create Freemarker Templates

First, let’s create our index page using Freemarker. Don’t forget to add the necessary dependency to enable autoconfiguration for Freemarker.
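
With Spring Boot, that is typically the Freemarker starter, whose version is managed by the Boot parent:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-freemarker</artifactId>
</dependency>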

Since we are trying to detect the sender’s device and route the request accordingly, we need to create three separate Freemarker files: one to handle mobile requests, another to handle tablet requests, and the last one (the default) to handle normal browser requests.

We need to create two folders named ‘mobile‘ and ‘tablet‘ under src/main/resources/templates and put the Freemarker files accordingly. The final structure should look like this:

└── src
    └── main
        └── resources
            └── templates
                └── index.ftl
                └── mobile
                    └── index.ftl
                └── tablet
                    └── index.ftl

Now, let’s put the following HTML inside index.ftl files:

<h1>You are into browser version</h1>

Depending on the device type, we’ll change the content inside the <h1> tag.

3.3. Enable DeviceDelegatingViewResolver

To enable the Spring Mobile DeviceDelegatingViewResolver service, we need to add the following property to application.properties:

spring.mobile.devicedelegatingviewresolver.enabled: true

Site preference functionality is enabled by default in Spring Boot when you include the Spring Mobile starter. However, it can be disabled by setting the following property to false:

spring.mobile.sitepreference.enabled: false

3.4. Create a Controller

Now we need to create a Controller class to handle the incoming requests. We’ll use the simple @GetMapping annotation to handle the request:

@Controller
public class IndexController {

    @GetMapping("/")
    public String greeting(Device device) {
		
        String deviceType = "browser";
        String platform = "browser";
		
        if (device.isNormal()) {
            deviceType = "browser";
        } else if (device.isMobile()) {
            deviceType = "mobile";
        } else if (device.isTablet()) {
            deviceType = "tablet";
        }
        
        platform = device.getDevicePlatform().name();
        
        if (platform.equalsIgnoreCase("UNKNOWN")) {
            platform = "browser";
        }
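        // deviceType and platform now hold the detected values; they could be
        // logged here or exposed to the template via a Model attribute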
     	
        return "index";
    }
}

A couple of things to note here:

  • In the handler mapping method, we are passing org.springframework.mobile.device.Device. This is the device information injected with each and every request by the DeviceDelegatingViewResolver which we enabled in application.properties
  • org.springframework.mobile.device.Device has a couple of built-in methods like isMobile(), isTablet(), getDevicePlatform(), etc. Using these, we can capture all the device information we need and use it

3.5. Java Config

We are almost done. One last thing to do is to build a Spring Boot config class to start the application:

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

4. Testing the Application

Once we start the application, it will run on http://localhost:8080.

We will use Google Chrome’s Developer Console to emulate different kinds of devices. We can enable it by pressing ctrl + shift + i or by pressing F12.

By default, if we open the main page, we’ll see that Spring Mobile detects the device as a desktop browser. We should see the following result:

Now, on the console panel, we click the second icon in the top left. It enables a mobile view in the browser.

We’ll see a drop-down appear in the top-left corner of the browser, where we can choose among different device types. To emulate a mobile device, let’s choose Nexus 6P and refresh the page.

As soon as we refresh the page, we’ll notice that the content of the page changes because DeviceDelegatingViewResolver has already detected that the last request came from a mobile device. Hence, it serves the index.ftl file from the mobile folder in the templates.

Here’s the result:

In the same way, we can emulate the tablet version. Let’s choose iPad from the drop-down just like last time and refresh the page. The content changes, and it should be treated as a tablet view:

Now, let’s see whether the Site Preference functionality is working as expected.

To simulate a real-time scenario where a user wants to view the site in a mobile-friendly way, just add the following URL parameter at the end of the default URL:

?site_preference=mobile

Once refreshed, the view should automatically switch to the mobile view, i.e. the following text should be displayed: ‘You are into mobile version’.

In the same way, to simulate the tablet preference, just add the following URL parameter at the end of the default URL:

?site_preference=tablet

And just like last time, the view should automatically refresh to the tablet view.

Please note that the default URL remains the same, and if the user visits the default URL again, they will be redirected to the respective view based on their device type.

5. Conclusion

We just created a web application and implemented cross-platform functionality. From a productivity perspective, it’s a tremendous boost. Spring Mobile eliminates a lot of front-end scripting needed to handle cross-browser behavior, thus reducing development time.

As always, the updated source code is over on GitHub.

Guide to Guava Table


1. Overview

In this tutorial, we’ll show how to use the Google Guava’s Table interface and its multiple implementations.

Guava’s Table is a collection that represents a table like structure containing rows, columns and the associated cell values. The row and the column act as an ordered pair of keys.

2. Google Guava’s Table

Let’s have a look at how to use the Table class.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

2.2. About

If we were to represent Guava’s Table using the collections present in core Java, then the structure would be a map of rows where each row contains a map of columns with the associated cell values.

Table represents a special map where two keys can be specified in combined fashion to refer to a single value.

It’s similar to creating a map of maps, for example, Map<UniversityName, Map<CoursesOffered, SeatAvailable>>. Table would also be a perfect way of representing the Battleships game board.

3. Creating

You can create an instance of Table in multiple ways:

  • Using the create method from the class HashBasedTable which uses LinkedHashMap internally:
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
  • If we need a Table whose row keys and column keys are to be ordered by their natural ordering or by supplied comparators, we can create an instance of a Table using the create method from a class called TreeBasedTable, which uses TreeMap internally:
    Table<String, String, Integer> universityCourseSeatTable
      = TreeBasedTable.create();
    
  • If we know the row keys and the column keys beforehand and the table size is fixed, use the create method from the class ArrayTable:
    List<String> universityRowTable 
      = Lists.newArrayList("Mumbai", "Harvard");
    List<String> courseColumnTables 
      = Lists.newArrayList("Chemical", "IT", "Electrical");
    Table<String, String, Integer> universityCourseSeatTable
      = ArrayTable.create(universityRowTable, courseColumnTables);
    
  • If we intend to create an immutable instance of Table whose internal data are never going to change, use the ImmutableTable class, whose creation follows a builder pattern:
    Table<String, String, Integer> universityCourseSeatTable
      = ImmutableTable.<String, String, Integer> builder()
      .put("Mumbai", "Chemical", 120).build();
    

4. Using

Let’s start with a simple example showing the usage of Table.

4.1. Retrieval

If we know the row key and the column key, then we can get the value associated with that row and column:

@Test
public void givenTable_whenGet_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    int seatCount 
      = universityCourseSeatTable.get("Mumbai", "IT");
    Integer seatCountForNoEntry 
      = universityCourseSeatTable.get("Oxford", "IT");

    assertThat(seatCount).isEqualTo(60);
    assertThat(seatCountForNoEntry).isEqualTo(null);
}

4.2. Checking for an Entry

We can check the presence of an entry in a Table based on:

  • Row key
  • Column key
  • Both row key and column key
  • Value

Let’s see how to check for the presence of an entry:

@Test
public void givenTable_whenContains_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    boolean entryIsPresent
      = universityCourseSeatTable.contains("Mumbai", "IT");
    boolean courseIsPresent 
      = universityCourseSeatTable.containsColumn("IT");
    boolean universityIsPresent 
      = universityCourseSeatTable.containsRow("Mumbai");
    boolean seatCountIsPresent 
      = universityCourseSeatTable.containsValue(60);

    assertThat(entryIsPresent).isEqualTo(true);
    assertThat(courseIsPresent).isEqualTo(true);
    assertThat(universityIsPresent).isEqualTo(true);
    assertThat(seatCountIsPresent).isEqualTo(true);
}

4.3. Removal

We can remove an entry from the Table by supplying both the row key and column key:

@Test
public void givenTable_whenRemove_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);

    int seatCount 
      = universityCourseSeatTable.remove("Mumbai", "IT");

    assertThat(seatCount).isEqualTo(60);
    assertThat(universityCourseSeatTable.remove("Mumbai", "IT")).
      isEqualTo(null);
}

4.4. Row Key to Cell Value Map

We can get a Map representation with the row as the key and the cell value as the value by providing the column key:

@Test
public void givenTable_whenColumn_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Map<String, Integer> universitySeatMap 
      = universityCourseSeatTable.column("IT");

    assertThat(universitySeatMap).hasSize(2);
    assertThat(universitySeatMap.get("Mumbai")).isEqualTo(60);
    assertThat(universitySeatMap.get("Harvard")).isEqualTo(120);
}

4.5. Map Representation of a Table

We can get a Map<UniversityName, Map<CoursesOffered, SeatAvailable>> representation by using the columnMap method:

@Test
public void givenTable_whenColumnMap_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Map<String, Map<String, Integer>> courseKeyUniversitySeatMap 
      = universityCourseSeatTable.columnMap();

    assertThat(courseKeyUniversitySeatMap).hasSize(3);
    assertThat(courseKeyUniversitySeatMap.get("IT")).hasSize(2);
    assertThat(courseKeyUniversitySeatMap.get("Electrical")).hasSize(1);
    assertThat(courseKeyUniversitySeatMap.get("Chemical")).hasSize(1);
}

4.6. Column Key to Cell Value Map

We can get a Map representation with the column as the key and the cell value as the value by providing the row key:

@Test
public void givenTable_whenRow_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable 
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Map<String, Integer> courseSeatMap 
      = universityCourseSeatTable.row("Mumbai");

    assertThat(courseSeatMap).hasSize(2);
    assertThat(courseSeatMap.get("IT")).isEqualTo(60);
    assertThat(courseSeatMap.get("Chemical")).isEqualTo(120);
}
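
Note that the Map returned by row() is a live view: according to Guava’s documentation, changes to the returned map update the underlying table, and vice versa. A small sketch reusing the table from the test above:

Map<String, Integer> mumbaiSeatMap = universityCourseSeatTable.row("Mumbai");
mumbaiSeatMap.put("Electrical", 90);

// the write through the view is visible in the table itself
assertThat(universityCourseSeatTable.get("Mumbai", "Electrical")).isEqualTo(90);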

4.7. Get Distinct Row Keys

We can get all the row keys from a table using the rowKeySet method:

@Test
public void givenTable_whenRowKeySet_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Set<String> universitySet = universityCourseSeatTable.rowKeySet();

    assertThat(universitySet).hasSize(2);
}

4.8. Get Distinct Column Keys

We can get all column keys from a table using the columnKeySet method:

@Test
public void givenTable_whenColKeySet_returnsSuccessfully() {
    Table<String, String, Integer> universityCourseSeatTable
      = HashBasedTable.create();
    universityCourseSeatTable.put("Mumbai", "Chemical", 120);
    universityCourseSeatTable.put("Mumbai", "IT", 60);
    universityCourseSeatTable.put("Harvard", "Electrical", 60);
    universityCourseSeatTable.put("Harvard", "IT", 120);

    Set<String> courseSet = universityCourseSeatTable.columnKeySet();

    assertThat(courseSet).hasSize(3);
}

5. Conclusion

In this tutorial, we illustrated the methods of the Table class from the Guava library. The Table class provides a collection that represents a table-like structure containing rows, columns, and the associated cell values.

The code belonging to the above examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Guide to java.util.concurrent.Future


1. Overview

In this article, we are going to learn about Future, an interface that’s been around since Java 1.5 and can be quite useful when working with asynchronous calls and concurrent processing.

2. Creating Futures

Simply put, the Future interface represents a future result of an asynchronous computation – a result that will eventually appear in the Future after the processing is complete.

Let’s see how to write methods that create and return a Future instance.

Long running methods are good candidates for asynchronous processing and the Future interface. This enables us to execute some other process while we are waiting for the task encapsulated in Future to complete.

Some examples of operations that would leverage the async nature of Future are:

  • computationally intensive processes (mathematical and scientific calculations)
  • manipulating large data structures (big data)
  • remote method calls (downloading files, HTML scraping, web services)

2.1. Implementing Futures with FutureTask

For our example, we are going to create a very simple class that calculates the square of an Integer. This definitely doesn’t fit the “long-running” methods category, but we are going to put a Thread.sleep() call in it so that it takes 1 second to complete:

public class SquareCalculator {    
    
    private ExecutorService executor 
      = Executors.newSingleThreadExecutor();
    
    public Future<Integer> calculate(Integer input) {        
        return executor.submit(() -> {
            System.out.println("calculating square for: " + input);
            Thread.sleep(1000);
            return input * input;
        });
    }

    public void shutdown() {
        // lets client code release the executor's thread once all tasks are done
        executor.shutdown();
    }
}

The bit of code that actually performs the calculation is contained within the call() method, supplied as a lambda expression. As you can see there’s nothing special about it, except for the sleep() call mentioned earlier.

It gets more interesting when we direct our attention to the usage of Callable and ExecutorService.

Callable is an interface representing a task that returns a result. Here, we’ve created an anonymous class and placed our business logic in the call() method.

Creating an instance of Callable does not take us anywhere, we still have to pass this instance to an executor that will take care of starting that task in a new thread and give us back the valuable Future object. That’s where ExecutorService comes in.

There are a few ways we can get ahold of an ExecutorService instance; most of them are provided by the static factory methods of the Executors utility class. In this example, we’ve used the basic newSingleThreadExecutor(), which gives us an ExecutorService capable of handling a single thread at a time.

Once we have an ExecutorService object, we just need to call submit() passing our Callable as an argument. submit() will take care of starting the task and return a FutureTask object, which is an implementation of the Future interface.

3. Consuming Futures

Up to this point, we’ve learned how to create an instance of Future. In this section, we’ll learn how to work with this instance by exploring all the methods that are part of Future‘s API.

3.1. Using isDone() and get() to Obtain Results

Now we need to call calculate() and use the returned Future to get the resulting Integer. Two methods from the Future API will help us with this task.

Future.isDone() tells us if the executor has finished processing the task. If the task is completed, it returns true; otherwise, it returns false.

The method that returns the actual result from the calculation is Future.get(). Notice that this method blocks the execution until the task is complete, but in our example, this won’t be an issue since we’ll check first if the task is completed by calling isDone().

By using these two methods we can run some other code while we wait for the main task to finish:

Future<Integer> future = new SquareCalculator().calculate(10);

while(!future.isDone()) {
    System.out.println("Calculating...");
    Thread.sleep(300);
}

Integer result = future.get();

In this example, we write a simple message on the output to let the user know the program is performing the calculation.

The method get() will block the execution until the task is complete. But we don’t have to worry about that since our example only gets to the point where get() is called after making sure that the task is finished. So, in this scenario, future.get() will always return immediately.

It is worth mentioning that get() has an overloaded version that takes a timeout and a TimeUnit as arguments:

Integer result = future.get(500, TimeUnit.MILLISECONDS);

The difference between get(long, TimeUnit) and get() is that the former will throw a TimeoutException if the task doesn’t return before the specified timeout period.
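
For example, here’s a minimal sketch of guarding against a slow task, reusing SquareCalculator from earlier:

Future<Integer> future = new SquareCalculator().calculate(10);

try {
    Integer result = future.get(500, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
    // the task didn't finish within 500 ms; we can give up and cancel it
    future.cancel(true);
} catch (InterruptedException | ExecutionException e) {
    // the wait was interrupted, or the task itself threw an exception
    e.printStackTrace();
}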

3.2. Canceling a Future with cancel()

Suppose we’ve triggered a task but, for some reason, we don’t care about the result anymore. We can use Future.cancel(boolean) to tell the executor to stop the operation and interrupt its underlying thread:

Future<Integer> future = new SquareCalculator().calculate(4);

boolean canceled = future.cancel(true);

Our instance of Future, from the code above, will never complete its operation. In fact, if we try to call get() on that instance after the call to cancel(), the outcome will be a CancellationException. Future.isCancelled() tells us if a Future was already canceled, which can be quite useful to avoid getting a CancellationException.

It is possible that a call to cancel() fails. In that case, its returned value will be false. Notice that cancel() takes a boolean value as an argument – this controls whether the thread executing this task should be interrupted or not.

4. More Multithreading with Thread Pools

Our current ExecutorService is single-threaded since it was obtained with Executors.newSingleThreadExecutor(). To highlight this “single-threadedness”, let’s trigger two calculations simultaneously:

SquareCalculator squareCalculator = new SquareCalculator();

Future<Integer> future1 = squareCalculator.calculate(10);
Future<Integer> future2 = squareCalculator.calculate(100);

while (!(future1.isDone() && future2.isDone())) {
    System.out.println(
      String.format(
        "future1 is %s and future2 is %s", 
        future1.isDone() ? "done" : "not done", 
        future2.isDone() ? "done" : "not done"
      )
    );
    Thread.sleep(300);
}

Integer result1 = future1.get();
Integer result2 = future2.get();

System.out.println(result1 + " and " + result2);

squareCalculator.shutdown();

Now let’s analyze the output for this code:

calculating square for: 10
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
calculating square for: 100
future1 is done and future2 is not done
future1 is done and future2 is not done
future1 is done and future2 is not done
100 and 10000

It is clear that the process is not parallel. Notice how the second task only starts once the first task is completed, making the whole process take around 2 seconds to finish.

To make our program really multi-threaded we should use a different flavor of ExecutorService. Let’s see how the behavior of our example changes if we use a thread pool, provided by the factory method Executors.newFixedThreadPool():

public class SquareCalculator {
 
    private ExecutorService executor = Executors.newFixedThreadPool(2);
    
    //...
}

With a simple change in our SquareCalculator class now we have an executor which is able to use 2 simultaneous threads.

If we run the exact same client code again, we’ll get the following output:

calculating square for: 10
calculating square for: 100
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
100 and 10000

This is looking much better now. Notice how the 2 tasks start and finish running simultaneously, and the whole process takes around 1 second to complete.

There are other factory methods that can be used to create thread pools, like Executors.newCachedThreadPool(), which reuses previously used Threads when they are available, and Executors.newScheduledThreadPool(), which schedules commands to run after a given delay.
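
For instance, here’s a minimal sketch of scheduling a Callable to run after a delay (the values are illustrative):

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

// schedule(Callable, delay, unit) returns a ScheduledFuture, which extends Future
ScheduledFuture<Integer> scheduledFuture 
  = scheduler.schedule(() -> 10 * 10, 500, TimeUnit.MILLISECONDS);

Integer result = scheduledFuture.get(); // blocks until the delayed task completes
scheduler.shutdown();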

For more information about ExecutorService, read our article dedicated to the subject.

5. Overview of ForkJoinTask

ForkJoinTask is an abstract class which implements Future and is capable of running a large number of tasks hosted by a small number of actual threads in ForkJoinPool.

In this section, we are going to quickly cover the main characteristics of ForkJoinPool. For a comprehensive guide about the topic, check our Guide to the Fork/Join Framework in Java.

The main characteristic of a ForkJoinTask is that it usually spawns new subtasks as part of the work required to complete its main task. It generates new tasks by calling fork() and gathers all results with join(), thus the name of the class.

There are two abstract classes that implement ForkJoinTask: RecursiveTask, which returns a value upon completion, and RecursiveAction, which doesn’t return anything. As the names imply, those classes are to be used for recursive tasks, like, for example, file-system navigation or complex mathematical computations.

Let’s expand our previous example to create a class that, given an Integer, calculates the sum of the squares of all the integers from that number down to 1. So, for instance, if we pass the number 4 to our calculator, we should get the result of the sum 4² + 3² + 2² + 1², which is 30.

First of all, we need to create a concrete implementation of RecursiveTask and implement its compute() method. This is where we’ll write our business logic:

public class FactorialSquareCalculator extends RecursiveTask<Integer> {
 
    private Integer n;

    public FactorialSquareCalculator(Integer n) {
        this.n = n;
    }

    @Override
    protected Integer compute() {
        if (n <= 1) {
            return n;
        }

        FactorialSquareCalculator calculator 
          = new FactorialSquareCalculator(n - 1);

        calculator.fork();

        return n * n + calculator.join();
    }
}

Notice how we achieve recursiveness by creating a new instance of FactorialSquareCalculator within compute(). By calling fork(), a non-blocking method, we ask ForkJoinPool to initiate the execution of this subtask.

The join() method will return the result from that calculation, to which we add the square of the number we are currently visiting.

Now we just need to create a ForkJoinPool to handle the execution and thread management:

ForkJoinPool forkJoinPool = new ForkJoinPool();

FactorialSquareCalculator calculator = new FactorialSquareCalculator(10);

forkJoinPool.execute(calculator);
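
Since execute() doesn’t block, we can then wait for the computation to finish and read the result with join():

// blocks until the recursive computation completes, then returns the sum
Integer result = calculator.join();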

6. Conclusion

In this article, we had a comprehensive view of the Future interface, visiting all its methods. We’ve also learned how to leverage the power of thread pools to trigger multiple parallel operations. The main methods from the ForkJoinTask class, fork() and join() were briefly covered as well.

We have many other great articles on parallel and asynchronous operations in Java. Here are three of them that are closely related to the Future interface (some of them are already mentioned in the article):

Check the source code used in this article in our GitHub repository.


Building an API With the Spark Java Framework


1. Introduction

In this article, we will have a quick introduction to the Spark framework. Spark is a rapid-development web framework inspired by the Sinatra framework for Ruby and built around the Java 8 lambda expression philosophy, making it less verbose than most applications written in other Java frameworks.

It’s a good choice if you want a Node.js-like experience when developing a web API or microservices in Java. With Spark, you can have a REST API ready to serve JSON in less than ten lines of code.

We will have a quick start with a “Hello World” example, followed by a simple REST API.

2. Maven Dependencies

2.1. Spark Framework

Include the following Maven dependency in your pom.xml:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-core</artifactId>
    <version>2.5.4</version>
</dependency>

You can find the latest version of Spark on Maven Central.

2.2. Gson Library 

At various places in the example, we will be using the Gson library for JSON operations. To include Gson in your project, add this dependency to your pom.xml:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.0</version>
</dependency>

You can find the latest version of Gson on Maven Central.

3. Getting Started with Spark Framework

Let’s take a look at the basic building blocks of a Spark application and demonstrate a quick web service.

3.1. Routes

Web services in Spark Java are built upon routes and their handlers. Routes are essential elements in Spark. As per the documentation, each route is made up of three simple pieces – a verb, a path, and a callback.

  1. The verb is a method corresponding to an HTTP method. Verb methods include: get, post, put, delete, head, trace, connect, and options
  2. The path (also called a route pattern) determines which URI(s) the route should listen to and provide a response for
  3. The callback is a handler function that is invoked for a given verb and path in order to generate and return a response to the corresponding HTTP request. A callback takes a request object and response object as arguments

Here we show the basic structure for a route that uses the get verb:

get("/your-route-path/", (request, response) -> {
    // your callback code
});

3.2. Hello World API

Let’s create a simple web service that has two routes for GET requests and returns “Hello” messages in response. These routes use the get method, which is a static import from the class spark.Spark:

import static spark.Spark.*;

public class HelloWorldService {
    public static void main(String[] args) {
 
        get("/hello", (req, res)->"Hello, world");
        
        get("/hello/:name", (req,res)->{
            return "Hello, "+ req.params(":name");
        });
    }
}

The first argument to the get method is the path for the route. The first route contains a static path representing only a single URI (“/hello”).

The second route’s path (“/hello/:name”) contains a placeholder for the “name” parameter, as denoted by prefacing the parameter with a colon (“:”). This route will be invoked in response to GET requests to URIs such as “/hello/Joe” and “/hello/Mary”.

The second argument to the get method is a lambda expression giving a functional programming flavor to this framework.

The lambda expression has request and response as arguments and helps return the response. We will put our controller logic in the lambda expression for the REST API routes, as we shall see later in this tutorial.

3.3. Testing the Hello World API

After running the class HelloWorldService as a normal Java class, you will be able to access the service on its default port of 4567 using the routes defined with the get method above.
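
Incidentally, if the default port is inconvenient, Spark’s static port() method lets us pick another one; it must be called before any routes are mapped:

import static spark.Spark.*;

public class HelloWorldService {
    public static void main(String[] args) {
        port(8080); // must be set before mapping any routes
        get("/hello", (req, res) -> "Hello, world");
    }
}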

Let’s look at the request and response for the first route:

Request:

GET http://localhost:4567/hello

Response:

Hello, world

Let’s test the second route, passing the name parameter in its path:

Request:

GET http://localhost:4567/hello/baeldung

Response:

Hello, baeldung

See how the placement of the text “baeldung” in the URI was used to match the route pattern “/hello/:name” – causing the second route’s callback handler function to be invoked.

4. Designing a RESTful Service

In this section, we will design a simple REST web service for the following User entity:

public class User {
    private String id;
    private String firstName;
    private String lastName;
    private String email;

    // constructors, getters and setters
}

4.1. Routes

Let’s list the routes that make up our API:

  • GET /users —  get list of all users
  • GET /users/:id — get user with given id
  • POST /users — add a user
  • PUT /users/:id — edit a particular user
  • OPTIONS /users/:id — check whether a user exists with given id
  • DELETE /users/:id — delete a particular user

4.2. The User Service

Below is the UserService interface declaring the CRUD operations for the User entity:

public interface UserService {
 
    public void addUser (User user);
    
    public Collection<User> getUsers ();
    public User getUser (String id);
    
    public User editUser (User user) 
      throws UserException;
    
    public void deleteUser (String id);
    
    public boolean userExist (String id);
}

For demonstration purposes, we provide a Map implementation of this UserService interface in the GitHub code to simulate persistence. You can supply your own implementation with the database and persistence layer of your choice.
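
For illustration, a minimal in-memory sketch of such an implementation could look like the following (the class name InMemoryUserService is assumed here, not necessarily the one used in the GitHub code):

public class InMemoryUserService implements UserService {

    private Map<String, User> users = new HashMap<>();

    public void addUser(User user) {
        users.put(user.getId(), user);
    }

    public Collection<User> getUsers() {
        return users.values();
    }

    public User getUser(String id) {
        return users.get(id);
    }

    public boolean userExist(String id) {
        return users.containsKey(id);
    }

    // editUser and deleteUser follow the same pattern
}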

4.3. The JSON Response Structure

Below is the JSON structure of the responses used in our REST service:

{
    status: <STATUS>
    message: <TEXT-MESSAGE>
    data: <JSON-OBJECT>
}

The status field value can be either SUCCESS or ERROR. The data field will contain the JSON representation of the return data, such as a User or collection of Users.

When there is no data being returned, or if the status is ERROR, we will populate the message field to convey a reason for the error or lack of return data.

Let’s represent the above JSON structure using a Java class:

public class StandardResponse {
 
    private StatusResponse status;
    private String message;
    private JsonElement data;
    
    public StandardResponse(StatusResponse status) {
        // ...
    }
    public StandardResponse(StatusResponse status, String message) {
        // ...
    }
    public StandardResponse(StatusResponse status, JsonElement data) {
        // ...
    }
    
    // getters and setters
}

where StatusResponse is an enum defined as below:

public enum StatusResponse {
    SUCCESS ("Success"),
    ERROR ("Error");
 
    private String status;       
    // constructors, getters
}

5. Implementing RESTful Services

Now let’s implement the routes and handlers for our REST API.

5.1. Creating Controllers

The following Java class contains the routes for our API, including the verbs and paths and an outline of the handlers for each route:

public class SparkRestExample {
    public static void main(String[] args) {
        post("/users", (request, response) -> {
            //...
        });
        get("/users", (request, response) -> {
            //...
        });
        get("/users/:id", (request, response) -> {
            //...
        });
        put("/users/:id", (request, response) -> {
            //...
        });
        delete("/users/:id", (request, response) -> {
            //...
        });
        options("/users/:id", (request, response) -> {
            //...
        });
    }
}

We will show the full implementation of each route handler in the following subsections.

5.2. Add User

Below is the post method response handler which will add a User:

post("/users", (request, response) -> {
    response.type("application/json");
    User user = new Gson().fromJson(request.body(), User.class);
    userService.addUser(user);

    return new Gson()
      .toJson(new StandardResponse(StatusResponse.SUCCESS));
});

Note: In this example, the JSON representation of the User object is passed as the raw body of a POST request.

Let’s test the route:

Request:

POST http://localhost:4567/users
{
    "id": "1012", 
    "email": "your-email@your-domain.com", 
    "firstName": "Mac",
    "lastName": "Mason1"
}

Response:

{
    "status":"SUCCESS"
}

5.3. Get All Users

Below is the get method response handler which returns all users from the UserService:

get("/users", (request, response) -> {
    response.type("application/json");
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS,new Gson()
        .toJsonTree(userService.getUsers())));
});

Now let’s test the route:

Request:

GET http://localhost:4567/users

Response:

{
    "status":"SUCCESS",
    "data":[
        {
            "id":"1014",
            "firstName":"John",
            "lastName":"Miller",
            "email":"your-email@your-domain.com"
        },
        {
            "id":"1012",
            "firstName":"Mac",
            "lastName":"Mason1",
            "email":"your-email@your-domain.com"
        }
    ]
}

5.4. Get User by Id

Below is the get method response handler which returns a User with the given id:

get("/users/:id", (request, response) -> {
    response.type("application/json");
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS,new Gson()
        .toJsonTree(userService.getUser(request.params(":id")))));
});

Now let’s test the route:

Request:

GET http://localhost:4567/users/1012

Response:

{
    "status":"SUCCESS",
    "data":{
        "id":"1012",
        "firstName":"Mac",
        "lastName":"Mason1",
        "email":"your-email@your-domain.com"
    }
}

5.5. Edit a User

Below is the put method response handler, which edits the user having the id supplied in the route pattern:

put("/users/:id", (request, response) -> {
    response.type("application/json");
    User toEdit = new Gson().fromJson(request.body(), User.class);
    User editedUser = userService.editUser(toEdit);
            
    if (editedUser != null) {
        return new Gson().toJson(
          new StandardResponse(StatusResponse.SUCCESS,new Gson()
            .toJsonTree(editedUser)));
    } else {
        return new Gson().toJson(
          new StandardResponse(StatusResponse.ERROR,new Gson()
            .toJson("User not found or error in edit")));
    }
});

Note: In this example, the data are passed in the raw body of a PUT request as a JSON object whose property names match the fields of the User object to be edited.

Let’s test the route:

Request:

PUT http://localhost:4567/users/1012
{
    "lastName": "Mason"
}

Response:

{
    "status":"SUCCESS",
    "data":{
        "id":"1012",
        "firstName":"Mac",
        "lastName":"Mason",
        "email":"your-email@your-domain.com"
    }
}

5.6. Delete a User

Below is the delete method response handler, which will delete the User with the given id:

delete("/users/:id", (request, response) -> {
    response.type("application/json");
    userService.deleteUser(request.params(":id"));
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS, "user deleted"));
});

Now, let’s test the route:

Request:

DELETE http://localhost:4567/users/1012

Response:

{
    "status":"SUCCESS",
    "message":"user deleted"
}

5.7. Check if User Exists

The options method is a good choice for conditional checking. Below is the options method response handler which will check whether a User with the given id exists:

options("/users/:id", (request, response) -> {
    response.type("application/json");
    return new Gson().toJson(
      new StandardResponse(StatusResponse.SUCCESS, 
        (userService.userExist(
          request.params(":id"))) ? "User exists" : "User does not exist" ));
});

Now let’s test the route:

Request:

OPTIONS http://localhost:4567/users/1012

Response:

{
    "status":"SUCCESS",
    "message":"User exists"
}

6. Conclusion

In this article, we had a quick introduction to the Spark framework for rapid web development.

This framework is mainly promoted for generating microservices in Java. Node.js developers with Java knowledge who want to leverage libraries built on the JVM should feel right at home using this framework.

And as always, you can find all the sources for this tutorial in the GitHub project.

Java 9 Convenience Factory Methods for Collections


1. Overview

Java 9 brings the long-awaited syntactic sugar for creating small unmodifiable Collection instances using concise one-line code. As per JEP 269, new convenience factory methods are included in JDK 9.

In this article, we will cover its usage along with the implementation details.

2. History and Motivation

Creating a small immutable Collection in Java using the traditional way is very verbose. Let’s take an example of a Set:

Set<String> set = new HashSet<>();
set.add("foo");
set.add("bar");
set.add("baz");
set = Collections.unmodifiableSet(set);

That’s too much code for a simple task, and it should be possible to do it in a single expression.

The same is also true for a Map. However, for List, there’s a factory method:

List<String> list = Arrays.asList("foo", "bar", "baz");

Although this List creation is better than the constructor initialization, it is less obvious, as the common intuition would not be to look into the Arrays class for methods to create a List.

There are other ways to reduce verbosity like the double brace technique:

Set<String> set = Collections.unmodifiableSet(new HashSet<String>() {{
    add("foo"); add("bar"); add("baz");
}});

or by using Java 8 Streams:

Stream.of("foo", "bar", "baz")
  .collect(collectingAndThen(toSet(), Collections::unmodifiableSet));

The double brace technique is only a little less verbose but greatly reduces the readability.

The Java 8 version, though a one-line expression, has some problems too. First, it’s not obvious and intuitive. Second, it’s still verbose. Third, it involves the creation of unnecessary objects. And fourth, this method can’t be used to create a Map.

To summarize the shortcomings, none of the above approaches treat the specific use case of creating a small unmodifiable Collection as a first-class problem.

3. Description and Usage

Static methods have been provided on the List, Set, and Map interfaces which take the elements as arguments and return an instance of List, Set, and Map, respectively. The method is named of(…) for all three interfaces.

3.1. List and Set

The signature and characteristics of the List and Set factory methods are the same:

static <E> List<E> of(E e1, E e2, E e3)
static <E> Set<E>  of(E e1, E e2, E e3)

And the usage of the methods:

List<String> list = List.of("foo", "bar", "baz");
Set<String> set = Set.of("foo", "bar", "baz");

As you can see, it’s very simple, short and concise.

In the example, we have used the method which takes exactly three elements as parameters and returns a List / Set of size 3. But there are 12 overloaded versions of this method – eleven with 0 to 10 parameters, and one with var-args:

static <E> List<E> of()
static <E> List<E> of(E e1)
static <E> List<E> of(E e1, E e2)
// ....and so on

static <E> List<E> of(E... elems)

For most practical purposes, 10 elements would be sufficient but if more are required, the var-args version can be used.

Now you may ask, what is the point of having 11 extra methods if there’s a var-args version that can work for any number of elements? The answer is performance. Every var-args method call implicitly creates an array. Having the overloaded methods avoids unnecessary object creation and the garbage collection overhead thereof.
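
So a call with more than 10 elements simply falls through to the var-args overload:

// 12 elements: this call resolves to the var-args overload of List.of
List<Integer> dozen 
  = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12);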

During the creation of a Set using a factory method, if duplicate elements are passed as parameters, then IllegalArgumentException is thrown at runtime:

@Test(expected = IllegalArgumentException.class)
public void onDuplicateElem_IfIllegalArgExp_thenSuccess() {
    Set.of("foo", "bar", "baz", "foo");
}

An important point to note here is that since the factory methods use generics, primitive types are autoboxed.

If an array of primitive type is passed, a List of array of that primitive type is returned. For example:

int[] arr = { 1, 2, 3, 4, 5 };
List<int[]> list = List.of(arr);

In this case, a List<int[]> of size 1 is returned and the element at index 0 contains the array.
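
To obtain a List<Integer> instead, we would pass the values individually so that each one is autoboxed:

List<Integer> list = List.of(1, 2, 3, 4, 5); // each int is boxed to an Integer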

3.2. Map

The signature of Map factory method is:

static <K,V> Map<K,V> of(K k1, V v1, K k2, V v2, K k3, V v3)

and the usage:

Map<String, String> map = Map.of("foo", "a", "bar", "b", "baz", "c");

Similarly to List and Set, the of(…) method is overloaded to have 0 to 10 key-value pairs.

In the case of Map, there is a different method for more than 10 key-value pairs:

static <K,V> Map<K,V> ofEntries(Map.Entry<? extends K,? extends V>... entries)

and its usage:

Map<String, String> map = Map.ofEntries(
  new AbstractMap.SimpleEntry<>("foo", "a"),
  new AbstractMap.SimpleEntry<>("bar", "b"),
  new AbstractMap.SimpleEntry<>("baz", "c"));

Passing in duplicate keys would throw an IllegalArgumentException:

@Test(expected = IllegalArgumentException.class)
public void givenDuplicateKeys_ifIllegalArgExp_thenSuccess() {
    Map.of("foo", "a", "foo", "b");
}

Again, in the case of Map too, the primitive types are autoboxed.

4. Implementation Notes

The collections created using the factory methods are not the most commonly used implementations.

For example, the List is not an ArrayList and the Map is not a HashMap. They are different implementations which are introduced in Java 9. These implementations are internal and their constructors are not made public.

In this section, we will see some important implementation differences which are common to all three types of collections.

4.1. Immutable

The collections created using factory methods are immutable; changing an element, adding new elements, or removing an element throws an UnsupportedOperationException:

@Test(expected = UnsupportedOperationException.class)
public void onElemAdd_ifUnSupportedOpExpnThrown_thenSuccess() {
    Set<String> set = Set.of("foo", "bar");
    set.add("baz");
}
@Test(expected = UnsupportedOperationException.class)
public void onElemModify_ifUnSupportedOpExpnThrown_thenSuccess() {
    List<String> list = List.of("foo", "bar");
    list.set(0, "baz");
}
@Test(expected = UnsupportedOperationException.class)
public void onElemRemove_ifUnSupportedOpExpnThrown_thenSuccess() {
    Map<String, String> map = Map.of("foo", "a", "bar", "b");
    map.remove("foo");
}

4.2. No null Element Allowed

In the case of List and Set, no elements can be null. In the case of a Map, neither keys nor values can be null. Passing a null argument throws a NullPointerException:

@Test(expected = NullPointerException.class)
public void onNullElem_ifNullPtrExpnThrown_thenSuccess() {
    List.of("foo", "bar", null);
}

4.3. Value-Based Instances

The instances created by factory methods are value-based. This means that factories are free to create a new instance or return an existing one. Hence, if we create Lists with the same values, they may or may not refer to the same object on the heap:

List<String> list1 = List.of("foo", "bar");
List<String> list2 = List.of("foo", "bar");

In this case, list1 == list2 may or may not evaluate to true depending on the JVM.
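
However, logical equality is guaranteed regardless of identity, so we can always rely on equals():

assertTrue(list1.equals(list2));  // always true – same elements in the same order
// list1 == list2 is unspecified and may differ between runs or JVMs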

4.4. Serialization

Collections created from factory methods are Serializable if the elements of the collection are Serializable.

5. Conclusion

In this article, we introduced the new factory methods for Collections in Java 9. We concluded why this feature is a welcome change by going over some past methods for creating unmodifiable collections. We covered its usage and highlighted key points to be considered while using it. Finally, we clarified that these collections are different from the commonly used implementations and pointed out key differences.

The complete source code for this article and the unit tests are available over on GitHub.

Guide to Try in Javaslang

1. Overview

In this article, we will look at a functional way of error handling other than a standard try-catch block.

We will be using the Try class from the Javaslang library, which allows us to create a more fluent and concise API by embedding error handling into the normal program processing flow.

If you want to get more information about Javaslang, check this article.

2. Standard Way Of Handling Exceptions

Let’s say that we have a simple interface with a method call() that returns a Response or throws a ClientException, a checked exception, in case of a failure:

public interface HttpClient {
    Response call() throws ClientException;
}

The Response is a simple class with only one id field:

public class Response {
    public final String id;

    public Response(String id) {
        this.id = id;
    }
}

Let’s say that we have a service that calls this HttpClient; we then need to handle the checked exception in a standard try-catch block:

public Response getResponse() {
    try {
        return httpClient.call();
    } catch (ClientException e) {
        return null;
    }
}

When we want to create an API that is fluent and written in a functional way, each method that throws checked exceptions disrupts the program flow, and our code ends up consisting of many try-catch blocks, making it very hard to read.

Ideally, we would want a special class that encapsulates the result state (a success or a failure), so that we can chain operations according to that result.

3. Handling Exceptions with Try

The Javaslang library gives us a special container that represents a computation that may either result in an exception or complete successfully.

Enclosing an operation within a Try object gives us a result that is either a Success or a Failure. Then we can execute further operations according to that type.

Let’s see how the same getResponse() method from the previous example looks when using Try:

public class JavaslangTry {
    private HttpClient httpClient;

    public Try<Response> getResponse() {
        return Try.of(httpClient::call);
    }

    // standard constructors
}

The important thing to notice is the return type Try<Response>. When a method returns such a result type, we have to handle it properly, keeping in mind that the result can be a Success or a Failure, so we need to handle it explicitly at compile time.

3.1. Handling Success

Let’s write a test case that uses our JavaslangTry class for the case when httpClient returns a successful result. The method getResponse() returns a Try<Response> object, therefore we can call the map() method on it, which will execute an action on the Response only when the Try is of the Success type:

@Test
public void givenHttpClient_whenMakeACall_shouldReturnSuccess() {
    // given
    Integer defaultChainedResult = 1;
    String id = "a";
    HttpClient httpClient = () -> new Response(id);

    // when
    Try<Response> response = new JavaslangTry(httpClient).getResponse();
    Integer chainedResult = response
        .map(this::actionThatTakesResponse)
        .getOrElse(defaultChainedResult);
    Stream<String> stream = response.toStream().map(it -> it.id);

    // then
    assertTrue(!stream.isEmpty());
    assertTrue(response.isSuccess());
    response.onSuccess(r -> assertEquals(id, r.id));
    response.andThen(r -> assertEquals(id, r.id));
    assertNotEquals(defaultChainedResult, chainedResult);
}

The function actionThatTakesResponse() simply takes a Response as an argument and returns the hashCode of its id field:

public int actionThatTakesResponse(Response response) {
    return response.id.hashCode();
}

Once we map our value using the actionThatTakesResponse() function, we execute the getOrElse() method.

If the Try has a Success inside it, getOrElse() returns the value of the Try; otherwise, it returns defaultChainedResult. Our httpClient execution was successful, thus the isSuccess() method returns true. Then we can execute the onSuccess() method that performs an action on the Response object. Try also has an andThen method that takes a Consumer which consumes the value of a Try when that value is a Success.

We can treat our Try response as a stream. To do so, we need to convert it to a Stream using the toStream() method; then all the operations available in the Stream class can be used to operate on that result.

If we want to execute an action on the Try type itself, we can use the transform() method that takes the Try as an argument and performs an action on it without unwrapping the enclosed value:

public int actionThatTakesTryResponse(Try<Response> response, int defaultTransformation) {
    return response.transform(resp -> resp.map(it -> it.id.hashCode())
      .getOrElse(defaultTransformation));
}

3.2. Handling Failure

Let’s write an example when our HttpClient will throw ClientException when executed.

Compared to the previous example, our getOrElse method will return defaultChainedResult because the Try will be of the Failure type:

@Test
public void givenHttpClientFailure_whenMakeACall_shouldReturnFailure() {
    // given
    Integer defaultChainedResult = 1;
    HttpClient httpClient = () -> {
        throw new ClientException("problem");
    };

    // when
    Try<Response> response = new JavaslangTry(httpClient).getResponse();
    Integer chainedResult = response
        .map(this::actionThatTakesResponse)
        .getOrElse(defaultChainedResult);
    Option<Response> optionalResponse = response.toOption();

    // then
    assertTrue(optionalResponse.isEmpty());
    assertTrue(response.isFailure());
    response.onFailure(ex -> assertTrue(ex instanceof ClientException));
    assertEquals(defaultChainedResult, chainedResult);
}

The method getResponse() returns a Failure, thus the isFailure() method returns true.

We could execute the onFailure() callback on the returned response and see that the exception is of the ClientException type. An object of the Try type can be mapped to the Option type using the toOption() method.

This is useful when we do not want to carry our Try result throughout the whole codebase, but we have methods that handle an explicit absence of value using the Option type. When we map our Failure to an Option, the isEmpty() method returns true. When the Try object is a Success, calling toOption() on it will produce a defined Option, thus the isDefined() method will return true.
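
As a quick sketch of the Success side of this conversion (assuming the same Response class as above):

Option<Response> optionalResponse = Try.of(() -> new Response("a"))
  .toOption();

assertTrue(optionalResponse.isDefined());  // a Success maps to a defined Option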

3.3. Utilizing Pattern Matching

When our httpClient returns an Exception, we can do pattern matching on the type of that Exception. Then, according to the type of that Exception, in the recover() method we can decide whether we want to recover from that exception and turn our Failure into a Success, or leave our computation result as a Failure:

@Test
public void givenHttpClientThatFailure_whenMakeACall_shouldReturnFailureAndNotRecover() {
    // given
    Response defaultResponse = new Response("b");
    HttpClient httpClient = () -> {
        throw new RuntimeException("critical problem");
    };

    // when
    Try<Response> recovered = new JavaslangTry(httpClient).getResponse()
      .recover(r -> Match(r).of(
          Case(instanceOf(ClientException.class), defaultResponse)
      ));

    // then
    assertTrue(recovered.isFailure());
}

Pattern matching inside the recover() method will turn the Failure into a Success only if the type of the Exception is a ClientException. Otherwise, it will leave it as a Failure. We see that our httpClient is throwing a RuntimeException, thus our recovery method will not handle that case, and therefore isFailure() returns true.

If we want to get the result from the recovered object, but rethrow the exception in case of a critical failure, we can do it using the getOrElseThrow() method:

recovered.getOrElseThrow(throwable -> {
    throw new RuntimeException(throwable);
});

Some errors are critical, and when they occur we want to signal that explicitly by throwing the exception higher in the call stack, letting the caller decide about further exception handling. In such cases, rethrowing the exception as in the above example is very useful.

When our client throws a non-critical exception, our pattern matching in the recover() method will turn our Failure into a Success. We are recovering from two types of exceptions: ClientException and IllegalArgumentException:

@Test
public void givenHttpClientThatFailure_whenMakeACall_shouldReturnFailureAndRecover() {
    // given
    Response defaultResponse = new Response("b");
    HttpClient httpClient = () -> {
        throw new ClientException("non critical problem");
    };

    // when
    Try<Response> recovered = new JavaslangTry(httpClient).getResponse()
        .recover(r -> Match(r).of(
                 Case(instanceOf(ClientException.class), defaultResponse),
                 Case(instanceOf(IllegalArgumentException.class), defaultResponse)
                ));
    
    // then
    assertTrue(recovered.isSuccess());
}

We see that isSuccess() returns true, so our recovery handling code worked successfully.

4. Conclusion

This article shows a practical use of the Try container from the Javaslang library. We looked at practical examples of using that construct to handle failure in a more functional way. Using Try allows us to create a more functional and readable API.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.

Guide to Guava’s Ordering

1. Overview

In this article, we will look at the Ordering class from the Guava library.

The Ordering class implements the Comparator interface and gives us a useful fluent API for creating and chaining comparators.

Java 8 has the Comparator.comparing() method that provides similar functionality, so Ordering is particularly useful in projects that have not yet migrated to Java 8; this is why we will be using anonymous classes instead of Lambda Expressions. You can read more about this here.

2. Creating Ordering

The Ordering class has useful factory methods that return an instance which can be used in the sort() method on collections, or anywhere else where an instance of Comparator is needed.

We can create a natural-order instance by calling the natural() method:

List<Integer> integers = Arrays.asList(3, 2, 1);

integers.sort(Ordering.natural());

assertEquals(Arrays.asList(1,2,3), integers);

Let’s say that we have a collection of Person objects:

class Person {
    private String name;
    private Integer age;
    
    // standard constructors, getters
}

And we want to sort a list of such objects by the age field. We can create a custom Ordering that does exactly that by extending it:

List<Person> persons = Arrays.asList(new Person("Michael", 10), new Person("Alice", 3));
Ordering<Person> orderingByAge = new Ordering<Person>() {
    @Override
    public int compare(Person p1, Person p2) {
        return Ints.compare(p1.age, p2.age);
    }
};

persons.sort(orderingByAge);

assertEquals(Arrays.asList(new Person("Alice", 3), new Person("Michael", 10)), persons);

We can then use our orderingByAge by passing it to the sort() method.

3. Chaining Orderings

One useful feature of this class is that we can chain different Orderings. Let’s say we have a collection of persons, and we want to sort it by the age field, with null age values at the beginning of the list:

List<Person> persons = Arrays.asList(
  new Person("Michael", 10),
  new Person("Alice", 3), 
  new Person("Thomas", null));
 
Ordering<Person> ordering = Ordering
  .natural()
  .nullsFirst()
  .onResultOf(new Function<Person, Comparable>() {
      @Override
      public Comparable apply(Person person) {
          return person.age;
      }
});

persons.sort(ordering);
        
assertEquals(Arrays.asList(
  new Person("Thomas", null), 
  new Person("Alice", 3), 
  new Person("Michael", 10)), persons);

The important thing to notice here is the order in which the particular Orderings are executed – it is from right to left. So firstly, onResultOf() is executed, and that method extracts the field that we want to compare.

Then, the nullsFirst() comparator is executed. Because of that, the resulting sorted collection will have the Person object with a null age field at the beginning of the list.

At the end of the sorting process, the age fields are compared using natural ordering, as specified by the natural() method.
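
If we need a tie-breaker, Ordering also lets us append a secondary comparator with the compound() method. A small sketch, assuming the same Person class, that sorts by age and then by name (provided the ages are non-null):

Ordering<Person> byAgeThenName = orderingByAge.compound(new Ordering<Person>() {
    @Override
    public int compare(Person p1, Person p2) {
        return p1.name.compareTo(p2.name);  // applied only when ages are equal
    }
});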

4. Conclusion

In this article, we looked at the Ordering class from the Guava library, which allows us to create more fluent and elegant Comparators. We created a custom Ordering, used predefined ones from the API, and chained them to achieve a more custom order.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.

Java Web Weekly, Issue 162

Lots of weekend reading for this week.

Let’s jump right in…

1. Spring and Java

>> Java 9 Enters First Bug Fixing Round [infoq.com]

Java 9 vs Bugs – the first round 🙂

>> Compilation of Java code on the fly [frankel.ch]

A short example showing how to compile Java code at runtime (yes, you read that right).

>> Surprising += Cast [javaspecialists.eu]

Exploring edge cases of casting in Java.

>> Hibernate Tips: How to map an Enum to a database column [thoughts-on-java.org]

A short write-up about a not-so-trivial problem of mapping enums to database columns using Hibernate.

>> Chronicle Queue storing 1 TB in virtual memory on a 128 GB machine [vanilla-java.github.io]

Chronicle Queue utilizes heap space economically 🙂

>> Why Elvis Should Not Visit Java [codefx.org]

As long as Java’s type system doesn’t distinguish between nullable and non-nullable types, the Elvis operator isn’t a good fit for Java.

>> How to automatically validate entities with Hibernate Validator [thoughts-on-java.org]

A short guide to the highly important Hibernate Validator.

>> Tool Time: Preventing leaky APIs with jQAssistant [in.relation.to]

You can now perform some interesting static analysis of your APIs.

>> Java Community Oscars – The Top 10 Posts of 2016 [takipi.com]

It turns out Java developers host their own Oscars too 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Building Event-driven Microservices Using CQRS and Serverless [kennybastani.com]

A rich introduction to building event-driven microservices and CQRS.

>> Revealing Interfaces [michaelfeathers.silvrback.com]

A short trick that might help you clean up your codebase.

Also worth reading:

3. Musings

>> Stop calling yourself a DevOps engineer [insaneprogramming.be]

DevOps is not a role, it’s a mentality.

>> Deep learning: the silver bullet? [lemire.me]

Thoughts about the future of deep learning.

>> Measure Your Code to Get Back on Track [daedtech.com]

What isn’t measured, doesn’t improve. Definitely measure the quality of your code/work as the first step towards improving it.

>> Trust automation [ontestautomation.com]

How to establish trust with your test automation 🙂

>> Processing billions of events/day [plumbr.eu]

An in-depth case study of going from a monolith to scalable Kafka-backed microservices.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Tupac is dead [dilbert.com]

>> How do you get social media followers? [dilbert.com]

>> So wise [dilbert.com]

5. Pick of the Week

>> The Cave Essentials [randsinrepose.com]

A Guide to ConcurrentMap

1. Overview

Maps are naturally one of the most widely used styles of Java collection.

And, importantly, HashMap is not a thread-safe implementation, while Hashtable does provide thread-safety by synchronizing operations.

Even though Hashtable is thread safe, it is not very efficient. Another fully synchronized Map, Collections.synchronizedMap, does not exhibit great efficiency either. If we want thread-safety with high throughput under high concurrency, these implementations aren’t the way to go.

To solve the problem, the Java Collections Framework introduced ConcurrentMap in Java 1.5.

The following discussions are based on Java 1.8.

2. ConcurrentMap

ConcurrentMap is an extension of the Map interface. It aims to provide a structure and guidance for solving the problem of reconciling throughput with thread-safety.

By overriding several interface default methods, ConcurrentMap gives guidelines for valid implementations to provide thread-safety and memory-consistent atomic operations.

Several default implementations are overridden, disabling the null key/value support:

  • getOrDefault
  • forEach
  • replaceAll
  • computeIfAbsent
  • computeIfPresent
  • compute
  • merge

The following APIs are also overridden to support atomicity, without a default interface implementation:

  • putIfAbsent
  • remove
  • replace(key, oldValue, newValue)
  • replace(key, value)

The rest of the actions are directly inherited and are basically consistent with Map.
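
As a quick sketch of these atomic operations in action (the key and values here are arbitrary):

ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

map.putIfAbsent("visits", 1);      // inserts only if the key is absent
map.replace("visits", 1, 2);       // replaces only if currently mapped to 1
map.remove("visits", 2);           // removes only if currently mapped to 2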

3. ConcurrentHashMap

ConcurrentHashMap is the out-of-the-box ConcurrentMap implementation. For better performance, it consists of a set of tables (segments), each of which can be independently locked, so read and update operations mostly do not block.

The number of segments required is relative to the number of threads accessing the table, so that the update in progress per segment is no more than one most of the time.

That’s why, compared to HashMap, the constructors provide the extra concurrencyLevel argument to control the estimated number of threads to use:

public ConcurrentHashMap(
  int initialCapacity, float loadFactor, int concurrencyLevel)

The other two arguments, initialCapacity and loadFactor, work quite the same as in HashMap.
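
For example, a minimal sketch of constructing a map tuned for roughly four concurrently updating threads (the numbers are arbitrary):

Map<String, Object> map = new ConcurrentHashMap<>(16, 0.75f, 4);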

3.1. Thread-Safety

ConcurrentMap guarantees memory consistency on key/value operations in a multi-threading environment.

Actions in a thread prior to placing an object into a ConcurrentMap as a key or value happen-before actions subsequent to the access or removal of that object in another thread.

To confirm, let’s have a look at a memory inconsistent case:

@Test
public void givenHashMap_whenSumParallel_thenError() throws Exception {
    Map<String, Integer> map = new HashMap<>();
    List<Integer> sumList = parallelSum100(map, 100);

    assertNotEquals(1, sumList
      .stream()
      .distinct()
      .count());
    long wrongResultCount = sumList
      .stream()
      .filter(num -> num != 100)
      .count();
    
    assertTrue(wrongResultCount > 0);
}

private List<Integer> parallelSum100(Map<String, Integer> map, 
  int executionTimes) throws InterruptedException {
    List<Integer> sumList = new ArrayList<>(1000);
    for (int i = 0; i < executionTimes; i++) {
        map.put("test", 0);
        ExecutorService executorService = 
          Executors.newFixedThreadPool(4);
        for (int j = 0; j < 10; j++) {
            executorService.execute(() -> {
                for (int k = 0; k < 10; k++)
                    map.computeIfPresent(
                      "test", 
                      (key, value) -> value + 1
                    );
            });
        }
        executorService.shutdown();
        executorService.awaitTermination(5, TimeUnit.SECONDS);
        sumList.add(map.get("test"));
    }
    return sumList;
}

For each map.computeIfPresent action executed in parallel, HashMap does not provide a consistent view of the present integer value, leading to inconsistent and undesirable results.

As for ConcurrentHashMap, we can get a consistent and correct result:

@Test
public void givenConcurrentMap_whenSumParallel_thenCorrect() 
  throws Exception {
    Map<String, Integer> map = new ConcurrentHashMap<>();
    List<Integer> sumList = parallelSum100(map, 1000);

    assertEquals(1, sumList
      .stream()
      .distinct()
      .count());
    long wrongResultCount = sumList
      .stream()
      .filter(num -> num != 100)
      .count();
    
    assertEquals(0, wrongResultCount);
}

3.2. Null Key/Value

Most APIs provided by ConcurrentMap do not allow null keys or values, for example:

@Test(expected = NullPointerException.class)
public void givenConcurrentHashMap_whenPutWithNullKey_thenThrowsNPE() {
    concurrentMap.put(null, new Object());
}

@Test(expected = NullPointerException.class)
public void givenConcurrentHashMap_whenPutNullValue_thenThrowsNPE() {
    concurrentMap.put("test", null);
}

However, for compute* and merge actions, the computed value can be null, which means the key-value mapping is removed if present, or remains absent if previously absent:

@Test
public void givenKeyPresent_whenComputeRemappingNull_thenMappingRemoved() {
    Object oldValue = new Object();
    concurrentMap.put("test", oldValue);
    concurrentMap.compute("test", (s, o) -> null);

    assertNull(concurrentMap.get("test"));
}

3.3. Performance

Under the hood, ConcurrentHashMap is somewhat similar to HashMap, with data access and update based on a hash table (though more complex).

And of course the ConcurrentHashMap should yield much better performance in most concurrent cases for data retrieval and update.

Let’s write a quick micro-benchmark for get and put performance and compare that to Hashtable and Collections.synchronizedMap, running both operations 500,000 times in 4 threads:

@Test
public void givenMaps_whenGetPut500KTimes_thenConcurrentMapFaster() 
  throws Exception {
    Map<String, Object> hashtable = new Hashtable<>();
    Map<String, Object> synchronizedHashMap = 
      Collections.synchronizedMap(new HashMap<>());
    Map<String, Object> concurrentHashMap = new ConcurrentHashMap<>();

    long hashtableAvgRuntime = timeElapseForGetPut(hashtable);
    long syncHashMapAvgRuntime = 
      timeElapseForGetPut(synchronizedHashMap);
    long concurrentHashMapAvgRuntime = 
      timeElapseForGetPut(concurrentHashMap);

    assertTrue(hashtableAvgRuntime > concurrentHashMapAvgRuntime);
    assertTrue(syncHashMapAvgRuntime > concurrentHashMapAvgRuntime);
}

private long timeElapseForGetPut(Map<String, Object> map) 
  throws InterruptedException {
    ExecutorService executorService = 
      Executors.newFixedThreadPool(4);
    long startTime = System.nanoTime();
    for (int i = 0; i < 4; i++) {
        executorService.execute(() -> {
            for (int j = 0; j < 500_000; j++) {
                int value = ThreadLocalRandom
                  .current()
                  .nextInt(10000);
                String key = String.valueOf(value);
                map.put(key, value);
                map.get(key);
            }
        });
    }
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);
    return (System.nanoTime() - startTime) / 500_000;
}

Keep in mind micro-benchmarks are only looking at a single scenario and aren’t always a good reflection of real world performance.

That being said, on an average OS X dev machine, we’re seeing an average sample result for 100 consecutive runs (in nanoseconds):

Hashtable: 1142.45
SynchronizedHashMap: 1273.89
ConcurrentHashMap: 230.2

In a multi-threading environment, where multiple threads are expected to access a common Map, the ConcurrentHashMap is clearly preferable.

However, when the Map is only accessible to a single thread, HashMap can be a better choice for its simplicity and solid performance.

3.4. Pitfalls

Retrieval operations generally do not block in ConcurrentHashMap and could overlap with update operations. So for better performance, they only reflect the results of the most recently completed update operations, as stated in the official Javadoc.

There are several other facts to bear in mind:

  • results of aggregate status methods including size, isEmpty, and containsValue are typically useful only when a map is not undergoing concurrent updates in other threads:
@Test
public void givenConcurrentMap_whenUpdatingAndGetSize_thenError() 
  throws InterruptedException {
    Runnable collectMapSizes = () -> {
        for (int i = 0; i < MAX_SIZE; i++) {
            mapSizes.add(concurrentMap.size());
        }
    };
    Runnable updateMapData = () -> {
        for (int i = 0; i < MAX_SIZE; i++) {
            concurrentMap.put(String.valueOf(i), i);
        }
    };
    executorService.execute(updateMapData);
    executorService.execute(collectMapSizes);
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);

    assertNotEquals(MAX_SIZE, mapSizes.get(MAX_SIZE - 1).intValue());
    assertEquals(MAX_SIZE, concurrentMap.size());
}

If concurrent updates are under strict control, aggregate status would still be reliable.

Although these aggregate status methods do not guarantee the real-time accuracy, they may be adequate for monitoring or estimation purposes.

  • hashCode matters: note that using many keys with exactly the same hashCode() is a sure way to slow down the performance of any hash table.

To ameliorate the impact when keys are Comparable, ConcurrentHashMap may use comparison order among keys to help break ties. Still, we should avoid using the same hashCode() as much as we can.

  • iterators are only designed to be used in a single thread, as they provide weak consistency rather than fail-fast traversal, and they will never throw a ConcurrentModificationException.
  • the default initial table capacity is 16, and it’s adjusted by the specified concurrency level:
public ConcurrentHashMap(
  int initialCapacity, float loadFactor, int concurrencyLevel) {
 
    //...
    if (initialCapacity < concurrencyLevel) {
        initialCapacity = concurrencyLevel;
    }
    //...
}
  • keys in ConcurrentHashMap are not in sorted order, so for cases when ordering is required, ConcurrentSkipListMap is a suitable choice.

4. ConcurrentNavigableMap

For cases when ordering of keys is required, we can use ConcurrentSkipListMap, a concurrent version of TreeMap.

As a supplement for ConcurrentMap, ConcurrentNavigableMap supports total ordering of its keys (in ascending order by default) and is concurrently navigable. Methods that return views of the map are overridden for concurrency compatibility:

  • subMap
  • headMap
  • tailMap
  • subMap
  • headMap
  • tailMap
  • descendingMap

The iterators and spliterators of the following keySet() views are enhanced with weak memory consistency:

  • navigableKeySet
  • keySet
  • descendingKeySet

5. ConcurrentSkipListMap

Previously, we covered the NavigableMap interface and its implementation TreeMap. ConcurrentSkipListMap can be seen as a scalable concurrent version of TreeMap.

In practice, there’s no concurrent implementation of a red-black tree in Java. Instead, a concurrent variant of skip lists is implemented in ConcurrentSkipListMap, providing an expected average log(n) time cost for the containsKey, get, put and remove operations and their variants.

In addition to TreeMap‘s features, key insertion, removal, update and access operations are guaranteed to be thread-safe. Here’s a comparison to TreeMap when navigating concurrently:

@Test
public void givenSkipListMap_whenNavConcurrently_thenCountCorrect() 
  throws InterruptedException {
    NavigableMap<Integer, Integer> skipListMap
      = new ConcurrentSkipListMap<>();
    int count = countMapElementByPollingFirstEntry(skipListMap, 10000, 4);
 
    assertEquals(10000 * 4, count);
}

@Test
public void givenTreeMap_whenNavConcurrently_thenCountError() 
  throws InterruptedException {
    NavigableMap<Integer, Integer> treeMap = new TreeMap<>();
    int count = countMapElementByPollingFirstEntry(treeMap, 10000, 4);
 
    assertNotEquals(10000 * 4, count);
}

private int countMapElementByPollingFirstEntry(
  NavigableMap<Integer, Integer> navigableMap, 
  int elementCount, 
  int concurrencyLevel) throws InterruptedException {
 
    for (int i = 0; i < elementCount * concurrencyLevel; i++) {
        navigableMap.put(i, i);
    }
    
    AtomicInteger counter = new AtomicInteger(0);
    ExecutorService executorService
      = Executors.newFixedThreadPool(concurrencyLevel);
    for (int j = 0; j < concurrencyLevel; j++) {
        executorService.execute(() -> {
            for (int i = 0; i < elementCount; i++) {
                if (navigableMap.pollFirstEntry() != null) {
                    counter.incrementAndGet();
                }
            }
        });
    }
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);
    return counter.get();
}

A full explanation of the performance concerns behind the scenes is beyond the scope of this article. The details can be found in ConcurrentSkipListMap’s Javadoc, which is located under java/util/concurrent in the src.zip file.

6. Conclusion

In this article, we mainly introduced the ConcurrentMap interface and the features of ConcurrentHashMap, and covered ConcurrentNavigableMap for cases when key ordering is required.

The full source code for all the examples used in this article can be found in the GitHub project.

Constructor Injection in Spring with Lombok

1. Introduction

Lombok is an extremely useful library for overcoming boilerplate code. If you are not familiar with it yet, I highly recommend taking a look at the previous tutorial – Introduction to Project Lombok.

In this article, we’ll demonstrate its usability when combined with Spring’s Constructor-Based Dependency Injection.

2. Constructor-Based Dependency Injection

A good way to wire dependencies in Spring is using constructor-based Dependency Injection. This approach forces us to explicitly pass a component’s dependencies to a constructor.

As opposed to Field-Based Dependency Injection, it also provides a number of advantages:

  • no need to create a test-specific configuration component – dependencies are injected explicitly in a constructor
  • consistent design – all required dependencies are emphasized and looked after by constructor’s definition
  • simple unit tests – reduced Spring Framework’s overhead
  • reclaimed freedom of using the final keyword

However, due to the need for writing a constructor, it tends to lead to a significantly larger code base. Consider the two examples of GreetingService and FarewellService:

@Component
public class GreetingService {

    @Autowired
    private Translator translator;

    public String produce() {
        return translator.translate("hello");
    }
}
@Component
public class FarewellService {

    private final Translator translator;

    public FarewellService(Translator translator) {
        this.translator = translator;
    }

    public String produce() {
        return translator.translate("bye");
    }
}

Basically, both of the components do the same thing – they call a configurable Translator with a task-specific word.

The second variation, though, is much more obfuscated because of the constructor’s boilerplate which doesn’t really bring any value to the code.

In the newest Spring releases, the constructor does not need to be annotated with the @Autowired annotation.

3. Constructor Injection with Lombok

With Lombok, it’s possible to generate a constructor for either all of the class’s fields (with @AllArgsConstructor) or all of the class’s final fields (with @RequiredArgsConstructor). Moreover, if you still need an empty constructor, you can append an additional @NoArgsConstructor annotation.
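
For instance, since our services keep their dependencies in final fields, @RequiredArgsConstructor would work just as well; a small sketch with a hypothetical WelcomeService:

@Component
@RequiredArgsConstructor
public class WelcomeService {

    private final Translator translator;  // final, so included in the generated constructor

    public String produce() {
        return translator.translate("welcome");
    }
}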

Let’s create a third component, analogous to the previous two:

@Component
@AllArgsConstructor
public class ThankingService {

    private final Translator translator;

    public String produce() {
        return translator.translate("thank you");
    }
}

The above annotation will cause Lombok to generate a constructor for us:

@Component
public class ThankingService {

    private final Translator translator;

    public String produce() {
        return translator.translate("thank you");
    }

    /* Generated by Lombok */
    public ThankingService(Translator translator) {
        this.translator = translator;
    }
}

4. Multiple Constructors

A constructor doesn’t have to be annotated as long as there is only one in a component and Spring can unambiguously choose it as the right one to instantiate a new object. Once there are more, you also need to annotate the one that is to be used by the IoC container.

Consider the ApologizeService example:

@Component
@AllArgsConstructor
public class ApologizeService {

    private final Translator translator;
    private final String message;

    @Autowired
    public ApologizeService(Translator translator) {
        this(translator, "sorry");
    }

    public String produce() {
        return translator.translate(message);
    }
}

The above component is optionally configurable with the message field, which cannot change after the component is created (hence the lack of a setter). It thus requires us to provide two constructors – one with full configuration and the other with an implicit, default value of the message.

Unless one of the constructors is annotated with @Autowired, @Inject, or @Resource, Spring will throw an error:

Failed to instantiate [...]: No default constructor found;

If we wanted to annotate the Lombok-generated constructor, we would have to pass the annotation with an onConstructor parameter of the @AllArgsConstructor:

@Component
@AllArgsConstructor(onConstructor = @__(@Autowired))
public class ApologizeService {
    // ...
}

The onConstructor parameter accepts an array of annotations (or a single annotation, as in this specific example) that are to be put on the generated constructor. The double underscore idiom has been introduced because of backward compatibility issues. According to the documentation:

The reason of the weird syntax is to make this feature work in javac 7 compilers; the @__ type is an annotation reference to the annotation type __ (double underscore) which doesn’t actually exist; this makes javac 7 delay aborting the compilation process due to an error because it is possible an annotation processor will later create the __ type.
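
As a side note, on JDK 8 and newer Lombok also accepts a spelling with a trailing underscore that avoids the @__ idiom; to the best of our knowledge, the following is equivalent:

@Component
@AllArgsConstructor(onConstructor_ = @Autowired)  // JDK 8+ spelling
public class ApologizeService {
    // ...
}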

5. Summary

In this tutorial, we showed that there is no need to favor field-based DI over constructor-based DI in terms of increased boilerplate code.

Thanks to Lombok, it’s possible to automate common code generation without a runtime performance impact, abbreviating long, obscuring code to a single-line annotation.

The code used during the tutorial is available over on GitHub.


Working with Apache Thrift

1. Overview

In this article, we will discover how to develop cross-platform client-server applications with the help of an RPC framework called Apache Thrift.

We will cover:

  • Defining data types and service interfaces with IDL
  • Installing the library and generating the sources for different languages
  • Implementing the defined interfaces in a particular language
  • Implementing client/server software

If you want to go straight to the examples, proceed directly to section 5.

2. Apache Thrift

Apache Thrift was originally developed by the Facebook development team and is currently maintained by Apache.

In comparison to Protocol Buffers, which manage cross-platform object serialization/deserialization processes, Thrift mainly focuses on the communication layer between components of your system.

Thrift uses a special Interface Description Language (IDL) to define data types and service interfaces which are stored as .thrift files and used later as input by the compiler for generating the source code of client and server software that communicate over different programming languages.

To use Apache Thrift in your project, add this Maven dependency:

<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.10.0</version>
</dependency>

You can find the latest version in the Maven repository.

3. Interface Description Language

As already described, IDL allows us to define communication interfaces in a neutral language. Below you will find the currently supported types.

3.1. Base Types

  • bool – a boolean value (true or false)
  • byte – an 8-bit signed integer
  • i16 – a 16-bit signed integer
  • i32 – a 32-bit signed integer
  • i64 – a 64-bit signed integer
  • double – a 64-bit floating point number
  • string – a text string encoded using UTF-8 encoding

3.2. Special Types

  • binary – a sequence of unencoded bytes
  • optional – Java 8’s Optional type

3.3. Structs

Thrift structs are the equivalent of classes in OOP languages but without inheritance. A struct has a set of strongly typed fields, each with a unique name as an identifier. Fields may have various annotations (numeric field IDs, optional default values, etc.).

3.4. Containers

Thrift containers are strongly typed containers:

  • list – an ordered list of elements
  • set – an unordered set of unique elements
  • map<type1,type2> – a map of strictly unique keys to values

Container elements may be of any valid Thrift type.

3.5. Exceptions

Exceptions are functionally equivalent to structs, except that they inherit from the native exceptions.

3.6. Services

Services are actually communication interfaces defined using Thrift types. They consist of a set of named functions, each with a list of parameters and a return type.

4. Source Code Generation

4.1. Language Support

There’s a long list of currently supported languages:

  • C++
  • C#
  • Go
  • Haskell
  • Java
  • Javascript
  • Node.js
  • Perl
  • PHP
  • Python
  • Ruby

You can check the full list here.

4.2. Using Library’s Executable File

Just download the latest version, build and install it if necessary, and use the following syntax:

cd path/to/thrift
thrift -r --gen [LANGUAGE] [FILENAME]

In the command above, [LANGUAGE] is one of the supported languages and [FILENAME] is a file with an IDL definition.

Note the -r flag. It tells Thrift to generate code recursively once it notices includes in a given .thrift file.

4.3. Using Maven Plugin

Add the plugin in your pom.xml file:

<plugin>
   <groupId>org.apache.thrift.tools</groupId>
   <artifactId>maven-thrift-plugin</artifactId>
   <version>0.1.11</version>
   <configuration>
      <thriftExecutable>path/to/thrift</thriftExecutable>
   </configuration>
   <executions>
      <execution>
         <id>thrift-sources</id>
         <phase>generate-sources</phase>
         <goals>
            <goal>compile</goal>
         </goals>
      </execution>
   </executions>
</plugin>

After that just execute the following command:

mvn clean install

Note that this plugin is no longer maintained. Please visit this page for more information.

5. Example of a Client-Server Application

5.1. Defining Thrift File

Let’s write some simple service with exceptions and structures:

namespace cpp com.baeldung.thrift.impl
namespace java com.baeldung.thrift.impl

exception InvalidOperationException {
    1: i32 code,
    2: string description
}

struct CrossPlatformResource {
    1: i32 id,
    2: string name,
    3: optional string salutation
}

service CrossPlatformService {

    CrossPlatformResource get(1:i32 id) throws (1:InvalidOperationException e),

    void save(1:CrossPlatformResource resource) throws (1:InvalidOperationException e),

    list <CrossPlatformResource> getList() throws (1:InvalidOperationException e),

    bool ping() throws (1:InvalidOperationException e)
}

As you can see, the syntax is pretty simple and self-explanatory. We define a set of namespaces (per implementation language), an exception type, a struct, and finally a service interface which will be shared across different components.

Then just store it as a service.thrift file.

5.2. Compiling and Generating a Code

Now it’s time to run a compiler which will generate the code for us:

thrift -r -out generated --gen java /path/to/service.thrift

As you can see, we added a special flag -out to specify the output directory for generated files. If you did not get any errors, the generated directory will contain 3 files:

  • CrossPlatformResource.java
  • CrossPlatformService.java
  • InvalidOperationException.java

Let’s generate a C++ version of the service by running:

thrift -r -out generated --gen cpp /path/to/service.thrift

Now we get 2 different valid implementations (Java and C++) of the same service interface.

5.3. Adding a Service Implementation

Although Thrift has done most of the work for us, we still need to write our own implementations of the CrossPlatformService. In order to do that, we just need to implement a CrossPlatformService.Iface interface:

public class CrossPlatformServiceImpl implements CrossPlatformService.Iface {

    @Override
    public CrossPlatformResource get(int id) 
      throws InvalidOperationException, TException {
        return new CrossPlatformResource();
    }

    @Override
    public void save(CrossPlatformResource resource) 
      throws InvalidOperationException, TException {
        saveResource();
    }

    @Override
    public List<CrossPlatformResource> getList() 
      throws InvalidOperationException, TException {
        return Collections.emptyList();
    }

    @Override
    public boolean ping() throws InvalidOperationException, TException {
        return true;
    }
}

5.4. Writing a Server

As we said, we want to build a cross-platform client-server application, so we need a server for it. The great thing about Apache Thrift is that it has its own client-server communication framework which makes communication a piece of cake:

public class CrossPlatformServiceServer {

    private TServer server;

    public void start() throws TTransportException {
        TServerTransport serverTransport = new TServerSocket(9090);
        server = new TSimpleServer(new TServer.Args(serverTransport)
          .processor(new CrossPlatformService.Processor<>(new CrossPlatformServiceImpl())));

        System.out.print("Starting the server... ");

        server.serve();

        System.out.println("done.");
    }

    public void stop() {
        if (server != null && server.isServing()) {
            System.out.print("Stopping the server... ");

            server.stop();

            System.out.println("done.");
        }
    }
}

The first thing to do is define a transport layer with an implementation of the TServerTransport interface (or abstract class, to be more precise). Since we are talking about a server, we need to provide a port to listen on. Then we need to define a TServer instance and choose one of the available implementations:

  • TSimpleServer – for simple server
  • TThreadPoolServer – for multi-threaded server
  • TNonblockingServer – for non-blocking multi-threaded server

And finally, provide a processor implementation for the chosen server, which was already generated for us by Thrift, i.e. the CrossPlatformService.Processor class.
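
Putting it together, a minimal, hypothetical bootstrap class could look like this:

public class ServerBootstrap {
    public static void main(String[] args) throws TTransportException {
        // serve() blocks until the server is stopped
        new CrossPlatformServiceServer().start();
    }
}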

5.5. Writing a Client 

And here is the client’s implementation:

TTransport transport = new TSocket("localhost", 9090);
transport.open();

TProtocol protocol = new TBinaryProtocol(transport);
CrossPlatformService.Client client = new CrossPlatformService.Client(protocol);

boolean result = client.ping();

transport.close();

From a client perspective, the actions are pretty similar.

First of all, we define the transport and point it to our server instance, then we choose a suitable protocol. The only difference is that here we initialize the client instance which was, once again, already generated by Thrift, i.e. the CrossPlatformService.Client class.

Since it is based on the .thrift file definitions, we can directly call the methods described there. In this particular example, client.ping() will make a remote call to the server, which will respond with true.

6. Conclusion

In this article, we’ve shown you the basic concepts and steps in working with Apache Thrift, and we’ve shown how to create a working example which utilizes Thrift library.

As usual, all the examples can be found over in the GitHub repository.

Guide to Guava’s Preconditions

1. Overview

In this tutorial, we’ll show how to use the Google Guava’s Preconditions class.

The Preconditions class provides a list of static methods for checking that a method or a constructor is invoked with valid parameter values. If a precondition fails, a tailored exception is thrown.

2. Google Guava’s Preconditions

Each static method in the Preconditions class has three variants:

  • No arguments. Exceptions are thrown without an error message
  • An extra Object argument acting as an error message. Exceptions are thrown with an error message
  • An extra String argument, with an arbitrary number of additional Object arguments acting as an error message with a placeholder. This behaves a bit like printf, but for GWT compatibility and efficiency it only allows %s indicators

Let’s have a look at how to use the Preconditions class.

2.1. Maven Dependency

Let’s start by adding Google’s Guava library dependency in the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

The latest version of the dependency can be checked here.

3. checkArgument

The method checkArgument of the Preconditions class ensures the truthfulness of the parameters passed to the calling method. This method accepts a boolean condition and throws an IllegalArgumentException when the condition is false.

Let’s see how we can use this method with some examples.

3.1. Without an Error Message

We can use checkArgument without passing any extra parameter to the checkArgument method:

@Test
public void whenCheckArgumentEvaluatesFalse_throwsException() {
    int age = -18;
 
    assertThatThrownBy(() -> Preconditions.checkArgument(age > 0))
      .isInstanceOf(IllegalArgumentException.class)
      .hasMessage(null).hasNoCause();
}

3.2. With an Error Message

We can get a meaningful error message from the checkArgument method by passing an error message:

@Test
public void givenErrorMsg_whenCheckArgEvalsFalse_throwsException() {
    int age = -18;
    String message = "Age can't be zero or less than zero.";
 
    assertThatThrownBy(() -> Preconditions.checkArgument(age > 0, message))
      .isInstanceOf(IllegalArgumentException.class)
      .hasMessage(message).hasNoCause();
}

3.3. With a Template Error Message

We can get a meaningful error message along with dynamic data from the checkArgument method by passing an error message:

@Test
public void givenTemplateMsg_whenCheckArgEvalsFalse_throwsException() {
    int age = -18;
    String message = "Age should be positive number, you supplied %s.";
 
    assertThatThrownBy(
      () -> Preconditions.checkArgument(age > 0, message, age))
      .isInstanceOf(IllegalArgumentException.class)
      .hasMessage(message, age).hasNoCause();
}

4. checkElementIndex

The method checkElementIndex checks that an index is a valid index in a list, string, or array of a specified size. An element index may range from 0 inclusive to size exclusive. You don’t pass the list, string, or array directly; you just pass its size. This method throws an IndexOutOfBoundsException if the index is not a valid element index; otherwise, it returns the index passed to the method.

Let’s see how we can use this method by showing a meaningful error message from the checkElementIndex method by passing an error message when it throws an exception:

@Test
public void givenArrayAndMsg_whenCheckElementEvalsFalse_throwsException() {
    int[] numbers = { 1, 2, 3, 4, 5 };
    String message = "Please check the bound of an array and retry";
 
    assertThatThrownBy(() -> 
      Preconditions.checkElementIndex(6, numbers.length - 1, message))
      .isInstanceOf(IndexOutOfBoundsException.class)
      .hasMessageStartingWith(message).hasNoCause();
}

5. checkNotNull

The method checkNotNull checks whether a value supplied as a parameter is null. It returns the value that’s been checked. If the value that has been passed to this method is null, then a NullPointerException is thrown.
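
Because checkNotNull returns its argument, it fits naturally into field assignments; a small sketch with a hypothetical Account class:

public class Account {

    private final String owner;

    public Account(String owner) {
        // fails fast with a NullPointerException, otherwise assigns the value
        this.owner = Preconditions.checkNotNull(owner, "owner must not be null");
    }
}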

Next, we are going to show how to use this method by showing how to get a meaningful error message from the checkNotNull method by passing an error message:

@Test
public void givenNullString_whenCheckNotNullWithMessage_throwsException () {
    String nullObject = null;
    String message = "Please check the Object supplied, its null!";
 
    assertThatThrownBy(() -> Preconditions.checkNotNull(nullObject, message))
      .isInstanceOf(NullPointerException.class)
      .hasMessage(message).hasNoCause();
}

We can also get a meaningful error message based on dynamic data from the checkNotNull method by passing a parameter to the error message:

@Test
public void whenCheckNotNullWithTemplateMessage_throwsException() {
    String nullObject = null;
    String message = "Please check the Object supplied, its %s!";
 
    assertThatThrownBy(
      () -> Preconditions.checkNotNull(nullObject, message, 
        new Object[] { null }))
      .isInstanceOf(NullPointerException.class)
      .hasMessage(message, nullObject).hasNoCause();
}

6. checkPositionIndex

The method checkPositionIndex checks that an index passed as an argument to this method is a valid index in a list, string or array of a specified size. A position index may range from 0 inclusive to size inclusive. You don’t pass the list, string or array directly, you just pass its size.

This method throws an IndexOutOfBoundsException if the index passed is not between 0 and the size given, else it returns the index value.

Let’s see how we can get a meaningful error message from the checkPositionIndex method:

@Test
public void givenArrayAndMsg_whenCheckPositionEvalsFalse_throwsException() {
    int[] numbers = { 1, 2, 3, 4, 5 };
    String message = "Please check the bound of an array and retry";
 
    assertThatThrownBy(
      () -> Preconditions.checkPositionIndex(6, numbers.length - 1, message))
      .isInstanceOf(IndexOutOfBoundsException.class)
      .hasMessageStartingWith(message).hasNoCause();
}

7. checkState

The method checkState checks the validity of the state of an object and is not dependent on the method arguments. For example, an Iterator might use this to check that next has been called before any call to remove. This method throws an IllegalStateException if the state of an object (boolean value passed as an argument to the method) is in an invalid state.

Let’s see how we can use this method by showing a meaningful error message from the checkState method by passing an error message when it throws an exception:

@Test
public void givenStatesAndMsg_whenCheckStateEvalsFalse_throwsException() {
    int[] validStates = { -1, 0, 1 };
    int givenState = 10;
    String message = "You have entered an invalid state";
 
    assertThatThrownBy(
      () -> Preconditions.checkState(
        Arrays.binarySearch(validStates, givenState) > 0, message))
      .isInstanceOf(IllegalStateException.class)
      .hasMessageStartingWith(message).hasNoCause();
}

8. Conclusion

In this tutorial, we illustrated the methods of the Preconditions class from the Guava library. The Preconditions class provides a collection of static methods that are used to validate that a method or a constructor is invoked with valid parameter values.

The code belonging to the above examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Dealing with Backpressure with RxJava

1. Overview

In this article, we will look at the way the RxJava library helps us to handle backpressure.

Simply put – RxJava utilizes the concept of reactive streams by introducing Observables, to which one or many Observers can subscribe. Dealing with possibly infinite streams is very challenging, as we need to face the problem of backpressure.

It’s not difficult to get into a situation in which an Observable emits items more rapidly than a subscriber can consume them. We will look at different solutions to the problem of a growing buffer of unconsumed items.

2. Hot Observables Versus Cold Observables

First, let’s create a simple consumer function that will be used as a consumer of elements from Observables that we will define later:

public class ComputeFunction {
    public static void compute(Integer v) {
        try {
            System.out.println("compute integer v: " + v);
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Our compute() function simply prints the argument. The important thing to notice here is the invocation of the Thread.sleep(1000) method – we are doing it to emulate some long-running task that will cause the Observable to fill up with items quicker than the Observer can consume them.

We have two types of Observables – hot and cold – which are totally different when it comes to backpressure handling.

2.1. Cold Observables

A cold Observable emits a particular sequence of items, but it can begin emitting this sequence when its Observer finds it convenient, and at whatever rate the Observer desires, without disrupting the integrity of the sequence. A cold Observable provides items in a lazy way.

The Observer takes elements only when it is ready to process them, and items do not need to be buffered in the Observable because they are requested in a pull fashion.

For example, if you create an Observable based on a static range of elements from one to one million, that Observable would emit the same sequence of items no matter how frequently those items are observed:

Observable.range(1, 1_000_000)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute);

When we start our program, items will be computed by Observer lazily and will be requested in a pull fashion. The Schedulers.computation() method means that we want to run our Observer within a computation thread pool in RxJava. 

The output of the program will consist of the results of the compute() method invoked for the items from the Observable one by one:

compute integer v: 1
compute integer v: 2
compute integer v: 3
compute integer v: 4
...

Cold Observables do not need any form of backpressure because they work in a pull fashion. Examples of items emitted by a cold Observable might include the results of a database query, file retrieval, or web request.

2.2. Hot Observables

A hot Observable begins generating items and emits them immediately as they are created. This is contrary to the cold Observable’s pull model of processing. A hot Observable emits items at its own pace, and it is up to its observers to keep up.

When the Observer is not able to consume items as quickly as they are produced by the Observable, they need to be buffered or handled in some other way, as they will otherwise fill up the memory, finally causing an OutOfMemoryError.

Let’s consider an example of a hot Observable that produces 1 million items to an end consumer processing those items. When the compute() method in the Observer takes some time to process every item, the Observable starts to fill up memory with items, causing the program to fail:

PublishSubject<Integer> source = PublishSubject.<Integer>create();

source.observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

IntStream.range(1, 1_000_000).forEach(source::onNext);

Running that program will fail with a MissingBackpressureException because we didn’t define a way of handling an overproducing Observable.

Examples of items emitted by a hot Observable might include mouse & keyboard events, system events, or stock prices.

3. Buffering Overproducing Observable

The first way to handle an overproducing Observable is to define some kind of buffer for elements that cannot yet be processed by the Observer.

We can do it by calling a buffer() method:

PublishSubject<Integer> source = PublishSubject.<Integer>create();
        
source.buffer(1024)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

Defining a buffer with a size of 1024 will give the Observer some time to catch up to an overproducing source. The buffer will store the items that were not yet processed.
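
Note that the buffer() operator hands the collected items downstream as List<Integer> batches of up to 1024 elements, so the subscriber must accept lists. A minimal sketch of a matching compute() overload we could add to ComputeFunction (this overload is our assumption, not part of the original class):

public static void compute(List<Integer> batch) {
    // requires: import java.util.List
    // process a whole buffered batch at once instead of item by item
    System.out.println("compute batch of size: " + batch.size());
}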

We can increase the buffer size to have enough room for the produced values.

Note, however, that generally this may only be a temporary fix, as the overflow can still happen if the source outgrows the predicted buffer size.
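
If we also want to cap how long items may sit in the buffer, there is an overload that flushes on a time limit as well as on a count; a minimal sketch (the one-second limit is an arbitrary choice, and the subscriber again receives List<Integer> batches):

source.buffer(1, TimeUnit.SECONDS, 1024)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);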

4. Batching Emitted Items

We can batch overproduced items in windows of N elements.

When the Observable produces elements quicker than the Observer can process them, we can alleviate this by grouping the produced elements together and sending the batches to an Observer that is able to process a collection of elements instead of one element at a time:

PublishSubject<Integer> source = PublishSubject.<Integer>create();

source.window(500)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

Using the window() method with an argument of 500 tells the Observable to group elements into batches of 500. This technique can reduce the problem of an overproducing Observable when the Observer is able to process a batch of elements quicker than it could process the same elements one by one.
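
One thing to keep in mind is that window(500) emits each batch as a nested Observable<Integer>, so the subscriber must accept whole windows. A minimal sketch of such a compute() overload (again, an assumed addition to ComputeFunction):

public static void compute(Observable<Integer> window) {
    // collect each 500-element window into a list and handle it as one unit
    window.toList()
      .subscribe(batch -> System.out.println("compute batch of size: " + batch.size()));
}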

5. Skipping Elements

If some of the values produced by the Observable can be safely ignored, we can use the sampling and throttling operators, which pick elements within a specific time window.

The methods sample() and throttleFirst() are taking duration as a parameter:

  • The sample() method periodically looks at the sequence and emits the last item that was produced within the duration specified as a parameter
  • The throttleFirst() method emits only the first item that was produced within each time window of the specified duration

The duration defines a time window from which only one specific element is picked out of the sequence of produced elements. We can then specify a strategy for handling backpressure by skipping elements:

PublishSubject<Integer> source = PublishSubject.<Integer>create();

source.sample(100, TimeUnit.MILLISECONDS)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);

Here we specified that the strategy for skipping elements will be the sample() method: we take one sample from each 100-millisecond window of the sequence, and only that element is emitted to the Observer.
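
The throttleFirst() method is used in exactly the same way; a minimal sketch that differs only in the operator:

source.throttleFirst(100, TimeUnit.MILLISECONDS)
  .observeOn(Schedulers.computation())
  .subscribe(ComputeFunction::compute, Throwable::printStackTrace);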

Remember, however, that these operators only reduce the rate of value reception by the downstream Observer and thus they may still lead to MissingBackpressureException.

6. Handling a Filling Observable Buffer

In case our strategies of sampling or batching elements do not help with the filling up of a buffer, we need to implement a strategy for handling cases when the buffer is filling up.

We need to use the onBackpressureBuffer() method to prevent a MissingBackpressureException.

The onBackpressureBuffer() method takes three arguments: a capacity of an Observable buffer, a method that is invoked when a buffer is filling up, and a strategy for handling elements that need to be discarded from a buffer. Strategies for overflow are in a BackpressureOverflow class.

There are 4 types of actions that can be executed when the buffer fills up:

  • ON_OVERFLOW_ERROR –  this is the default behavior signaling a BufferOverflowException when the buffer is full
  • ON_OVERFLOW_DEFAULT –  currently it is the same as ON_OVERFLOW_ERROR
  • ON_OVERFLOW_DROP_LATEST –  if an overflow would happen, the current value will simply be ignored and only the old values will be delivered once the downstream Observer requests more items
  • ON_OVERFLOW_DROP_OLDEST – drops the oldest element in the buffer and adds the current value to it

Let’s see how to specify that strategy:

Observable.range(1, 1_000_000)
  .onBackpressureBuffer(16, () -> {}, BackpressureOverflow.ON_OVERFLOW_DROP_OLDEST)
  .observeOn(Schedulers.computation())
  .subscribe(e -> {}, Throwable::printStackTrace);

Here our strategy for handling the overflowing buffer is to drop the oldest element in the buffer and add the newest item produced by the Observable.

Note that the last two strategies cause a discontinuity in the stream, as they drop elements. In addition, they won’t signal a BufferOverflowException.

7. Dropping All Overproduced Elements

Whenever the downstream Observer is not ready to receive an element, we can use an onBackpressureDrop() method to drop that element from the sequence.

We can think of that method as an onBackpressureBuffer() with the buffer capacity set to zero and the strategy ON_OVERFLOW_DROP_LATEST.

This operator is useful when we can safely ignore values from a source Observable (such as mouse moves or current GPS location signals) as there will be more up-to-date values later on:

Observable.range(1, 1_000_000)
  .onBackpressureDrop()
  .observeOn(Schedulers.computation())
  .doOnNext(ComputeFunction::compute)
  .subscribe(v -> {}, Throwable::printStackTrace);

The onBackpressureDrop() method eliminates the problem of an overproducing Observable but needs to be used with caution.
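
A closely related operator is onBackpressureLatest(), which behaves like onBackpressureDrop() but always retains the most recent value, so the Observer gets the latest element whenever it is ready; a minimal sketch:

Observable.range(1, 1_000_000)
  .onBackpressureLatest()
  .observeOn(Schedulers.computation())
  .subscribe(v -> {}, Throwable::printStackTrace);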

8. Conclusion

In this article, we looked at the problem of an overproducing Observable and ways of dealing with backpressure. We looked at strategies of buffering, batching and skipping elements when the Observer is not able to consume elements as quickly as they are produced by the Observable.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Guide to PriorityBlockingQueue in Java


1. Introduction

In this article, we’ll focus on the PriorityBlockingQueue class and go over some practical examples.

Starting with the assumption that we already know what a Queue is, we will first demonstrate how elements in the PriorityBlockingQueue are ordered by priority.

Following this, we will demonstrate how this type of queue can be used to block a thread.

Finally, we will show how using these two features together can be useful when processing data across multiple threads.

2. Priority of Elements

Unlike a standard queue, you can’t just add any type of element to a PriorityBlockingQueue. There are two options:

  1. Adding elements which implement Comparable
  2. Adding elements which do not implement Comparable, on the condition that you provide a Comparator as well

Using either the Comparator or the Comparable implementation to compare elements, the PriorityBlockingQueue always keeps its elements ordered by priority.

The aim is to implement comparison logic in a way in which the highest priority element is always ordered first. Then, when we remove an element from our queue, it will always be the one with the highest priority.
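
As a quick illustration of the second option, here is a minimal sketch using a hypothetical Task class that does not implement Comparable, so we pass a Comparator at construction time (the class, the priority convention, and the initial capacity of 11 – the documented default – are our own choices):

class Task {
    final String name;
    final int priority; // lower value = higher priority, taken from the queue first

    Task(String name, int priority) {
        this.name = name;
        this.priority = priority;
    }
}

PriorityBlockingQueue<Task> tasks = new PriorityBlockingQueue<>(
  11, Comparator.comparingInt((Task t) -> t.priority));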

To begin with, let’s make use of our queue in a single thread, as opposed to using it across multiple ones. Doing this makes it easy to prove how elements are ordered in a unit test:

PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
ArrayList<Integer> polledElements = new ArrayList<>();
 
queue.add(1);
queue.add(5);
queue.add(2);
queue.add(3);
queue.add(4);

queue.drainTo(polledElements);

assertThat(polledElements).containsExactly(1, 2, 3, 4, 5);

As we can see, despite adding the elements to the queue in a random order, they will be ordered when we start polling them. This is because the Integer class implements Comparable, which will, in turn, be used to make sure we take them out from the queue in ascending order.

It’s also worth noting that when two elements are compared and are the same, there’s no guarantee of how they will be ordered.

3. Using the Queue to Block

If we were dealing with a standard queue, we would call poll() to retrieve elements. However, if the queue was empty, a call to poll() would return null.

The PriorityBlockingQueue implements the BlockingQueue interface, which gives us some extra methods that allow us to block when removing from an empty queue. Let’s try using the take() method, which should do exactly that:

PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();

new Thread(() -> {
  System.out.println("Polling...");

  try {
      Integer poll = queue.take();
      System.out.println("Polled: " + poll);
  } catch (InterruptedException e) {
      e.printStackTrace();
  }
}).start();

Thread.sleep(TimeUnit.SECONDS.toMillis(5));
System.out.println("Adding to queue");
queue.add(1);

Although using sleep() is a slightly brittle way of demonstrating things, when we run this code we will see:

Polling...
Adding to queue
Polled: 1

This proves that take() blocked until an item was added:

  1. The thread will print “Polling” to prove that it’s started
  2. The test will then pause for around five seconds, to prove the thread must have called take() by this point
  3. We add to the queue, and should more or less instantly see “Polled: 1” to prove that take() returned an element as soon as it became available

It’s also worth mentioning that the BlockingQueue interface also provides us with ways of blocking when adding to full queues.

However, a PriorityBlockingQueue is unbounded. This means that it will never be full, thus it will always be possible to add new elements.
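
For instance, offer() never blocks on a PriorityBlockingQueue and, per its documented contract for unbounded queues, always returns true:

PriorityBlockingQueue<Integer> unbounded = new PriorityBlockingQueue<>();
boolean added = unbounded.offer(42); // always true – the queue simply grows as needed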

4. Using Blocking and Prioritization Together

Now that we’ve explained the two key concepts of a PriorityBlockingQueue, let’s use them both together. We can simply expand on our previous example, but this time add more elements to the queue:

PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();

Thread thread = new Thread(() -> {
    System.out.println("Polling...");
    while (true) {
        try {
            Integer poll = queue.take();
            System.out.println("Polled: " + poll);
        } 
        catch (InterruptedException e) { 
            e.printStackTrace();
        }
    }
});

thread.start();

Thread.sleep(TimeUnit.SECONDS.toMillis(5));
System.out.println("Adding to queue");

queue.addAll(Arrays.asList(1, 5, 6, 1, 2, 6, 7));
Thread.sleep(TimeUnit.SECONDS.toMillis(1));

Again, while this is a little brittle because of the use of sleep(), it still shows us a valid use case. We now have a queue which blocks, waiting for elements to be added. We’re then adding lots of elements at once, and then showing that they will be handled in priority order. The output will look like this:

Polling...
Adding to queue
Polled: 1
Polled: 1
Polled: 2
Polled: 5
Polled: 6
Polled: 6
Polled: 7

5. Conclusion

In this guide, we’ve demonstrated how we can use a PriorityBlockingQueue in order to block a thread until some items have been added to it, and also that we are able to process those items based on their priority.

The implementation of these examples can be found over on GitHub. This is a Maven-based project, so it should be easy to run as is.

Spring @RequestMapping New Shortcut Annotations


1. Overview

Spring 4.3 introduced some very cool method-level composed annotations to smooth out the handling of @RequestMapping in typical Spring MVC projects.

In this article, we will learn how to use them in an efficient way.

2. New Annotations

Typically, if we want to implement a URL handler using the traditional @RequestMapping annotation, it would be something like this:

@RequestMapping(value = "/get/{id}", method = RequestMethod.GET)

The new approach makes it possible to shorten this simply to:

@GetMapping("/get/{id}")

Spring currently supports five built-in annotations for handling the different types of incoming HTTP request methods, namely GET, POST, PUT, DELETE and PATCH. These annotations are:

  • @GetMapping
  • @PostMapping
  • @PutMapping
  • @DeleteMapping
  • @PatchMapping

From the naming convention, we can see that each annotation is meant to handle the respective incoming request method type, i.e., @GetMapping is used to handle the GET request method, @PostMapping the POST request method, etc.

3. How It Works

All of the above annotations are already internally annotated with @RequestMapping and the respective value in the method element.

For example, if we look at the source code of the @GetMapping annotation, we can see that it’s already annotated with RequestMethod.GET in the following way:

@Target({ java.lang.annotation.ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Documented
@RequestMapping(method = { RequestMethod.GET })
public @interface GetMapping {
    // annotation attributes, aliased to @RequestMapping elements
}

All the other annotations are created in the same way, i.e., @PostMapping is annotated with RequestMethod.POST, @PutMapping is annotated with RequestMethod.PUT, etc.
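
For instance, based on the pattern above, we can expect the @PostMapping declaration to look roughly like this (a sketch modeled on the @GetMapping source, not a verbatim copy):

@Target({ java.lang.annotation.ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Documented
@RequestMapping(method = { RequestMethod.POST })
public @interface PostMapping {
    // annotation attributes, aliased to @RequestMapping elements
}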

The full source code of the annotations is available here.

4. Implementation

Let’s try to use these annotations to build a quick REST application.

Please note that since we are using Maven to build the project and Spring MVC to create our application, we need to add the necessary dependencies in the pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>4.3.6.RELEASE</version>
</dependency>

The latest version of spring-webmvc is available in the Central Maven Repository.

Now, we need to create the controller to map the incoming request URLs. Inside this controller, we will use each of these annotations one by one.

4.1. @GetMapping

@GetMapping("/get")
public @ResponseBody ResponseEntity<String> get() {
    return new ResponseEntity<String>("GET Response", HttpStatus.OK);
}
@GetMapping("/get/{id}")
public @ResponseBody ResponseEntity<String>
  getById(@PathVariable String id) {
    return new ResponseEntity<String>("GET Response : " 
      + id, HttpStatus.OK);
}

4.2. @PostMapping

@PostMapping("/post")
public @ResponseBody ResponseEntity<String> post() {
    return new ResponseEntity<String>("POST Response", HttpStatus.OK);
}

4.3. @PutMapping

@PutMapping("/put")
public @ResponseBody ResponseEntity<String> put() {
    return new ResponseEntity<String>("PUT Response", HttpStatus.OK);
}

4.4. @DeleteMapping

@DeleteMapping("/delete")
public @ResponseBody ResponseEntity<String> delete() {
    return new ResponseEntity<String>("DELETE Response", HttpStatus.OK);
}

4.5. @PatchMapping

@PatchMapping("/patch")
public @ResponseBody ResponseEntity<String> patch() {
    return new ResponseEntity<String>("PATCH Response", HttpStatus.OK);
}

Points to note:

  • We have used the necessary annotations to handle the proper incoming HTTP methods with their URIs. For example, @GetMapping handles the “/get” URI, @PostMapping handles the “/post” URI and so on
  • Since we are making a REST-based application, we are returning a constant string (unique to each request type) with a 200 response code to simplify the application. We have used Spring’s @ResponseBody annotation in this case.
  • If we have to handle any URL path variables, we can simply do it with much less code than we used to need with @RequestMapping.

5. Testing the Application

To test the application, we need to create a couple of test cases using JUnit. We will use SpringJUnit4ClassRunner to initiate the test class and create five different test cases, one for each annotation and every handler declared in the controller.

Let’s look at a simple example, the test case for @GetMapping:

@Test
public void givenUrl_whenGetRequest_thenFindGetResponse()
  throws Exception {

    MockHttpServletRequestBuilder builder = MockMvcRequestBuilders
      .get("/get");

    ResultMatcher contentMatcher = MockMvcResultMatchers.content()
      .string("GET Response");

    this.mockMvc.perform(builder).andExpect(contentMatcher)
      .andExpect(MockMvcResultMatchers.status().isOk());
}

As we can see, we are expecting a constant string “GET Response“, once we hit the GET URL “/get”.

Now, let’s create the test case to test @PostMapping:

@Test
public void givenUrl_whenPostRequest_thenFindPostResponse()
  throws Exception {

    MockHttpServletRequestBuilder builder = MockMvcRequestBuilders
      .post("/post");

    ResultMatcher contentMatcher = MockMvcResultMatchers.content()
      .string("POST Response");

    this.mockMvc.perform(builder).andExpect(contentMatcher)
      .andExpect(MockMvcResultMatchers.status().isOk());
}

In the same way, we created the rest of the test cases to test all of the HTTP methods.
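
For instance, here is a sketch of the @DeleteMapping test case, following the same pattern (the test name is our own):

@Test
public void givenUrl_whenDeleteRequest_thenFindDeleteResponse()
  throws Exception {

    MockHttpServletRequestBuilder builder = MockMvcRequestBuilders
      .delete("/delete");

    ResultMatcher contentMatcher = MockMvcResultMatchers.content()
      .string("DELETE Response");

    this.mockMvc.perform(builder).andExpect(contentMatcher)
      .andExpect(MockMvcResultMatchers.status().isOk());
}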

Alternatively, we can always use any common REST client, for example, Postman, RESTClient, etc., to test our application. In that case, we need to be a little careful to choose the correct HTTP method type when using the client. Otherwise, the server will respond with a 405 Method Not Allowed status.

6. Conclusion

In this article, we had a quick introduction to the different @RequestMapping shortcuts for quick web development using the traditional Spring MVC framework. We can utilize these shortcuts to create a cleaner code base.

As always, you can find the source code for this tutorial in the GitHub project.
