Introduction to Immutables

1. Introduction

In this article, we'll show how to work with the Immutables library.

Immutables consists of annotations and annotation processors for generating and working with serializable and customizable immutable objects.

2. Maven Dependencies

In order to use Immutables in your project, you need to add the following dependency to the dependencies section of your pom.xml file:

<dependency>
    <groupId>org.immutables</groupId>
    <artifactId>value</artifactId>
    <version>2.2.10</version>
    <scope>provided</scope>
</dependency>

As this artifact is not required at runtime, it's advisable to specify the provided scope.

The newest version of the library can be found here.

3. Immutables

The library generates immutable objects from abstract types: Interface, Class, Annotation.

The key to achieving this is the proper use of @Value.Immutable annotation. It generates an immutable version of an annotated type and prefixes its name with the Immutable keyword.

If we try to generate an immutable version of a class named "X", it will generate a class named "ImmutableX". Note that generated classes are not recursively immutable, so it's good to keep that in mind.

And a quick note – because Immutables utilizes annotation processing, you need to remember to enable annotation processing in your IDE.

3.1. Using @Value.Immutable With Abstract Classes and Interfaces

Let’s create a simple abstract class Person consisting of two abstract accessor methods representing the to-be-generated fields, and then annotate the class with the @Value.Immutable annotation:

@Value.Immutable
public abstract class Person {

    abstract String getName();
    abstract Integer getAge();

}

After annotation processing is done, we can find a ready-to-use, newly-generated ImmutablePerson class in a target/generated-sources directory:

@Generated({"Immutables.generator", "Person"})
public final class ImmutablePerson extends Person {

    private final String name;
    private final Integer age;

    private ImmutablePerson(String name, Integer age) {
        this.name = name;
        this.age = age;
    }

    @Override
    String getName() {
        return name;
    }

    @Override
    Integer getAge() {
        return age;
    }

    // toString, hashcode, equals, copyOf and Builder omitted

}

The generated class comes with implemented toString, hashCode and equals methods, as well as a builder, ImmutablePerson.Builder. Notice that the generated constructor has private access.

In order to construct an instance of the ImmutablePerson class, we need to use the builder or the static method ImmutablePerson.copyOf, which can create an ImmutablePerson copy from a Person object.

If we want to construct an instance using the builder, we can simply code:

ImmutablePerson john = ImmutablePerson.builder()
  .age(42)
  .name("John")
  .build();
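Alternatively, since copyOf accepts any Person, we can obtain the immutable equivalent of an existing instance. A quick sketch, reusing the john instance from above:

ImmutablePerson johnCopy = ImmutablePerson.copyOf(john);

assertThat(johnCopy).isEqualTo(john);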

Generated classes are immutable, which means they can't be modified. If you want to modify an already existing object, you can use one of the "withX" methods, which do not modify the original object but create a new instance with a modified field.

Let’s update john’s age and create a new john43 object:

ImmutablePerson john43 = john.withAge(43);

In such a case the following assertions will be true:

assertThat(john).isNotSameAs(john43);
assertThat(john.getAge()).isEqualTo(42);
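And, by the same logic, the new instance carries the updated value:

assertThat(john43.getAge()).isEqualTo(43);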

4. Additional Utilities

Such class generation would not be very useful without being able to customize it. Immutables library comes with a set of additional annotations that can be used for customizing @Value.Immutable‘s output. To see all of them, please refer to Immutables’ documentation.

4.1. The @Value.Parameter Annotation

The @Value.Parameter annotation can be used for specifying fields for which a constructor parameter should be generated.

If you annotate your class like this:

@Value.Immutable
public abstract class Person {

    @Value.Parameter
    abstract String getName();

    @Value.Parameter
    abstract Integer getAge();
}

It will be possible to instantiate it in the following way:

ImmutablePerson.of("John", 42);

4.2. The @Value.Default Annotation

The @Value.Default annotation allows you to specify a default value that should be used when an initial value is not provided. In order to do this, you need to create a non-abstract accessor method returning a fixed value and annotate it with @Value.Default:

@Value.Immutable
public abstract class Person {

    abstract String getName();

    @Value.Default
    Integer getAge() {
        return 42;
    }
}

The following assertion will be true:

ImmutablePerson john = ImmutablePerson.builder()
  .name("John")
  .build();

assertThat(john.getAge()).isEqualTo(42);

4.3. The @Value.Auxiliary Annotation

The @Value.Auxiliary annotation can be used for annotating a property that will be stored in an object’s instance, but will be ignored by equals, hashCode and toString implementations.

If you annotate your class like this:

@Value.Immutable
public abstract class Person {

    abstract String getName();
    abstract Integer getAge();

    @Value.Auxiliary
    abstract String getAuxiliaryField();

}

The following assertions will be true when using the auxiliary field:

ImmutablePerson john1 = ImmutablePerson.builder()
  .name("John")
  .age(42)
  .auxiliaryField("Value1")
  .build();

ImmutablePerson john2 = ImmutablePerson.builder()
  .name("John")
  .age(42)
  .auxiliaryField("Value2")
  .build();

assertThat(john1.equals(john2)).isTrue();
assertThat(john1.toString()).isEqualTo(john2.toString());
assertThat(john1.hashCode()).isEqualTo(john2.hashCode());

4.4. The @Value.Immutable(prehash = true) Annotation

Since our generated classes are immutable and can never be modified, hashCode results will always remain the same and need only be computed once, during the object's instantiation.

If you annotate your class like this:

@Value.Immutable(prehash = true)
public abstract class Person {

    abstract String getName();
    abstract Integer getAge();

}

When inspecting the generated class, you can see that the hashCode value is now precomputed and stored in a field:

@Generated({"Immutables.generator", "Person"})
public final class ImmutablePerson extends Person {

    private final String name;
    private final Integer age;
    private final int hashCode;

    private ImmutablePerson(String name, Integer age) {
        this.name = name;
        this.age = age;
        this.hashCode = computeHashCode();
    }

    // generated methods
 
    @Override
    public int hashCode() {
        return hashCode;
    }
}

The hashCode() method then simply returns the precomputed value generated when the object was constructed.

5. Conclusion

In this quick tutorial, we showed the basic workings of the Immutables library.

All source code and unit tests in the article can be found in the GitHub repository.

Spring JSON-P with Jackson

1. Overview

If you've been developing anything on the web, you're aware of the same-origin policy constraint browsers enforce when dealing with AJAX requests. In short, any request originating from a different domain, scheme or port will not be permitted.

One way to relax this browser restriction when working with JSON data is to use JSON with Padding (JSON-P).

This article discusses Spring’s support for working with JSON-P data – with the help of AbstractJsonpResponseBodyAdvice.

2. JSON-P in Action

The same-origin policy is not imposed on the <script> tag, allowing scripts to be loaded across different domains. The JSON-P technique takes advantage of this by passing the JSON response as the argument of a JavaScript function.

2.1. Preparation

In our examples, we will use this simple Company class:

public class Company {
 
    private long id;
    private String name;
 
    // standard setters and getters
}

This class will bind the request parameters and will be returned from the server as a JSON representation.

The Controller method is a simple implementation as well – returning the Company instance:

@RestController
public class CompanyController {

    @RequestMapping(value = "/companyRest",
      produces = MediaType.APPLICATION_JSON_VALUE)
    public Company getCompanyRest() {
        Company company = new Company(1, "Xpto");
        return company;
    }
}

On the client side we can use jQuery library to create and send an AJAX request:

$.ajax({
    url: 'http://localhost:8080/spring-mvc-java/companyRest',
    data: {
        format: 'json'
    },
    type: 'GET',
    ...
});

Consider an AJAX request against the following URL:

http://localhost:8080/spring-mvc-java/companyRest

The response from the server would be the following:

{"id":1,"name":"Xpto"}

As the request was sent to the same scheme, domain and port, the response will not get blocked, and the JSON data will be allowed by the browser.

2.2. Cross-Origin Request

By changing the request URL to:

http://127.0.0.1:8080/spring-mvc-java/companyRest

the response will get blocked by the browser, because the request is sent from localhost to 127.0.0.1, which is considered a different domain; this is a violation of the same-origin policy.

With JSON-P, we are able to add a callback parameter to the request:

http://127.0.0.1:8080/spring-mvc-java/companyRest?callback=getCompanyData

On the client side, it's as easy as adding the following parameters to the AJAX request:

$.ajax({
    ...
    jsonpCallback:'getCompanyData',
    dataType: 'jsonp',
    ...
});

getCompanyData is the function that will be called when the response is received.

If the server formats the response like the following:

getCompanyData({"id":1,"name":"Xpto"});

the browser will not block it, as it treats the response as a script negotiated and agreed upon between the client and the server, on account of getCompanyData matching in both the request and the response.

3. @ControllerAdvice Annotation

Beans annotated with @ControllerAdvice can assist all controllers, or a specific subset of them, and are used to encapsulate cross-cutting behavior shared between different controllers. Typical usage patterns are related to exception handling, adding attributes to models, or registering binders.
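As a quick, hypothetical illustration of the first of those patterns, an advice handling a single exception type for every controller might look like this:

@ControllerAdvice
public class GlobalExceptionHandler {

    // invoked whenever any controller throws an IllegalArgumentException
    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<String> handleBadInput(IllegalArgumentException ex) {
        return new ResponseEntity<>(ex.getMessage(), HttpStatus.BAD_REQUEST);
    }
}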

Starting with Spring 4.1, @ControllerAdvice can register implementations of the ResponseBodyAdvice interface, which allows changing the response after it's returned by a controller method but before it's written by a suitable converter.

4. Changing the Response Using AbstractJsonpResponseBodyAdvice

Also starting with Spring 4.1, we now have access to the AbstractJsonpResponseBodyAdvice class – which formats the response according to JSON-P standards.

This section explains how to put the base class at play and change the response without making any changes to the existing Controllers.

In order to enable Spring support for JSON-P, let’s start with the configuration:

@ControllerAdvice
public class JsonpControllerAdvice 
  extends AbstractJsonpResponseBodyAdvice {

    public JsonpControllerAdvice() {
        super("callback");
    }
}

The support is implemented using the AbstractJsonpResponseBodyAdvice class. The key passed to the super constructor is the query parameter that will be used in URLs requesting JSON-P data.

With this controller advice, we automatically convert the response to JSON-P.

5. JSON-P with Spring in Practice

With the previously discussed configuration in place, we can make our REST applications respond with JSON-P. In the following example, we'll return the company's data, so our AJAX request URL should be something like this:

http://127.0.0.1:8080/spring-mvc-java/companyRest?callback=getCompanyData

As a result of the previous configuration, the response will look as follows:

getCompanyData({"id":1,"name":"Xpto"});

As discussed, the response in this format will not get blocked despite originating from a different domain.

The JsonpControllerAdvice can easily be applied to any method that returns a response, whether annotated with @ResponseBody or returning a ResponseEntity.
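For instance, a handler returning a ResponseEntity is wrapped the same way; here is a minimal sketch (the endpoint itself is hypothetical):

@RestController
public class CompanyEntityController {

    // when a "callback" parameter is present in the request, the advice
    // wraps this JSON body in the callback function
    @RequestMapping(value = "/companyEntity",
      produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<Company> getCompanyEntity() {
        return new ResponseEntity<>(new Company(1, "Xpto"), HttpStatus.OK);
    }
}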

There should be a function with the same name as the passed callback, getCompanyData, defined on the client for handling all the responses.

6. Conclusion

This quick article shows how the otherwise tedious work of formatting a response to take advantage of JSON-P is simplified by the new functionality introduced in Spring 4.1.

The implementation of the examples and code snippets can be found in this GitHub project.

Quick Guide to Spring Controllers

1. Introduction

In this article we’ll focus on a core concept in Spring MVC – Controllers.

2. Overview

Let’s start by taking a step back and having a look at the concept of the Front Controller in the typical Spring Model View Controller architecture.

At a very high level, here are the main responsibilities we’re looking at:

  • Intercepts incoming requests
  • Converts the payload of the request to the internal structure of the data
  • Sends the data to Model for further processing
  • Gets processed data from the Model and advances that data to the View for rendering

Here’s a quick diagram for the high level flow in Spring MVC:

(Diagram: the high-level request flow in Spring MVC, with the DispatcherServlet acting as the Front Controller.)

As you can see, the DispatcherServlet plays the role of the Front Controller in the architecture.

The diagram is applicable both to typical MVC controllers as well as RESTful controllers – with some small differences (described below).

In the traditional approach, MVC applications are not service-oriented; hence there is a View Resolver that renders final views based on data received from a Controller.

RESTful applications are designed to be service-oriented and return raw data (JSON/XML typically). Since these applications do not do any view rendering, there are no View Resolvers – the Controller is generally expected to send data directly via the HTTP response.

Let's start with the MVC-style controllers.

3. Maven Dependencies

In order to be able to work with Spring MVC, let’s deal with the Maven dependencies first:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>4.3.1.RELEASE</version>
</dependency>

To get the latest version of the library, have a look at spring-webmvc on Maven Central.

4. Project Web Config

Now, before looking at the controllers themselves, we first need to set up a simple web project and do a quick Servlet configuration.

Let's first see how the DispatcherServlet can be set up without using web.xml – but instead using an initializer:

public class StudentControllerConfig implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext sc) throws ServletException {
        AnnotationConfigWebApplicationContext root = 
          new AnnotationConfigWebApplicationContext();
        root.register(WebConfig.class);

        root.refresh();
        root.setServletContext(sc);

        sc.addListener(new ContextLoaderListener(root));

        DispatcherServlet dv = 
          new DispatcherServlet(new GenericWebApplicationContext());

        ServletRegistration.Dynamic appServlet = sc.addServlet("test-mvc", dv);
        appServlet.setLoadOnStartup(1);
        appServlet.addMapping("/test/*");
    }
}

To set things up with no XML, make sure to have servlet-api 3.1.0 on your classpath.

Here's how the equivalent web.xml would look:

<servlet>
    <servlet-name>test-mvc</servlet-name>
    <servlet-class>
      org.springframework.web.servlet.DispatcherServlet
    </servlet-class>
    <load-on-startup>1</load-on-startup>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/test-mvc.xml</param-value>
    </init-param>
</servlet>

We’re setting the contextConfigLocation property here – pointing to the XML file used to load the Spring context. If the property is not there, Spring will search for a file named {servlet_name}-servlet.xml.

In our case the servlet_name is test-mvc, so in this example the DispatcherServlet would search for a file called test-mvc-servlet.xml.

Finally, let’s set the DispatcherServlet up and map it to a particular URL – to finish our Front Controller based system here:

<servlet-mapping>
    <servlet-name>test-mvc</servlet-name>
    <url-pattern>/test/*</url-pattern>
</servlet-mapping>

Thus, in this case, the DispatcherServlet would intercept all requests matching the pattern /test/*.

5. Spring MVC Web Config

Let's now look at how the DispatcherServlet can be set up using Spring config:

@Configuration
@EnableWebMvc
@ComponentScan(basePackages= {
  "org.baeldung.controller.controller",
  "org.baeldung.controller.config" }) 
public class WebConfig extends WebMvcConfigurerAdapter {
    
    @Override
    public void configureDefaultServletHandling(
      DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
 
    @Bean
    public ViewResolver viewResolver() {
        InternalResourceViewResolver bean = 
          new InternalResourceViewResolver();
        bean.setPrefix("/WEB-INF/");
        bean.setSuffix(".jsp");
        return bean;
    }
}

Let's now look at setting up the DispatcherServlet using XML. Below is a snapshot of the DispatcherServlet XML file – the XML file which the DispatcherServlet uses for loading custom controllers and other Spring entities:

<context:component-scan base-package="com.baeldung.controller" />
<mvc:annotation-driven />
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix">
        <value>/WEB-INF/</value>
    </property>
    <property name="suffix">
        <value>.jsp</value>
    </property>
</bean>

Based on this simple configuration, the framework will of course initialize any controller bean that it finds on the classpath.

Notice that we're also defining the View Resolver, responsible for view rendering – we'll be using Spring's InternalResourceViewResolver here. It expects the name of a view to be resolved, which means finding a corresponding page by using the prefix and suffix (both defined in the XML configuration).

So, for example, if the Controller returns a view named "welcome", the view resolver will try to resolve a page called "welcome.jsp" in the WEB-INF folder.

6. The MVC Controller

Let's now finally implement the MVC style controller.

Notice how we’re returning a ModelAndView object – which contains a model map and a view object; both will be used by the View Resolver for data rendering:

@Controller
@RequestMapping(value = "/test")
public class TestController {

    @GetMapping
    public ModelAndView getTestData() {
        ModelAndView mv = new ModelAndView();
        mv.setViewName("welcome");
        mv.getModel().put("data", "Welcome home man");

        return mv;
    }
}

So, what exactly did we set up here?

First, we created a controller called TestController and mapped it to the "/test" path. In the class, we created a method which returns a ModelAndView object and is mapped to a GET request; thus, any URL call ending with "test" will be routed by the DispatcherServlet to the getTestData method in TestController.

And of course we’re returning the ModelAndView object with some model data for good measure.

The view object has a name set to “welcome“. As discussed above, the View Resolver will search for a page in the WEB-INF folder called “welcome.jsp“.

Below you can see the result of an example GET operation:

(Screenshot: the rendered welcome.jsp page returned by the example GET request.)

Note that the URL ends with “test”. The pattern of the URL is “/test/test“.

The first “/test” comes from the Servlet, and the second one comes from the mapping of the controller.

7. More Spring Dependencies for REST

Let’s now start looking at a RESTful controller. Of course, a good place to start is the extra Maven dependencies we need for it:

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>4.3.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
        <version>4.3.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.8.0</version>
    </dependency>
</dependencies>

Please refer to the jackson-databind, spring-webmvc and spring-web entries on Maven Central for the newest versions of those dependencies.

Jackson is of course not mandatory here, but it's certainly a good way of enabling JSON support. If you're interested in diving deeper into that support, have a look at the message converters article.

8. The REST Controller

The setup for a Spring RESTful application is the same as the one for the MVC application, with the only difference being that there are no view resolvers and no model map.

The API will generally simply return raw data back to the client – usually XML or JSON representations – and so the DispatcherServlet bypasses the view resolvers and returns the data right in the HTTP response body.

Let’s have a look at a simple RESTful controller implementation:

@Controller
public class RestController {

    @GetMapping(value = "/student/{studentId}")
    public @ResponseBody Student getTestData(@PathVariable Integer studentId) {
        Student student = new Student();
        student.setName("Peter");
        student.setId(studentId);

        return student;
    }
}
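The Student class used here is a plain POJO; a minimal sketch of what it presumably looks like:

public class Student {

    private Integer id;
    private String name;

    // standard getters and setters
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}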

Note the @ResponseBody annotation on the method – which instructs Spring to bypass the view resolver and essentially write out the output directly to the body of the HTTP response.

A quick snapshot of the output is displayed below:

(Screenshot: the JSON representation of the Student returned by the API.)

The above output is the result of sending a GET request to the API with a student id of 1.

One quick note here – the @RequestMapping annotation is one of those central annotations that you'll really have to explore in order to use it to its full potential.
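As a small, hypothetical taste of that flexibility, a single mapping can be narrowed down by HTTP method, request parameters and headers all at once:

@RequestMapping(
  value = "/student/search",
  method = RequestMethod.GET,
  params = "name",
  headers = "Accept=application/json")
public @ResponseBody List<Student> searchStudents(@RequestParam String name) {
    // placeholder body, for illustration only
    return Collections.emptyList();
}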

9. The @RestController Annotation

The @RestController annotation, introduced in Spring 4.0, is basically a quick shortcut that saves us from always having to define @ResponseBody.

Here’s the previous example controller using this new annotation:

@RestController
public class RestAnnotatedController {
    @GetMapping(value = "/annotated/student/{studentId}")
    public Student getData(@PathVariable Integer studentId) {
        Student student = new Student();
        student.setName("Peter");
        student.setId(studentId);

        return student;
    }
}

10. Conclusion

In this guide, we explored the basics of using controllers in Spring, both from the point of view of a typical MVC application and of a RESTful API.

Of course, all the code in the article is available over on GitHub.

JMockit Advanced Usage

1. Introduction

In this article, we'll go beyond the JMockit basics and start looking at some advanced scenarios, such as:

  • Faking (or the MockUp API)
  • The Deencapsulation utility class
  • How to mock more than one interface using only one mock
  • How to reuse expectations and verifications

If you want to discover JMockit's basics, check out the other articles in this series.

2. Private Methods/Inner Classes Mocking

Mocking and testing of private methods or inner classes is often not considered good practice.

The reasoning behind this is that if they're private, they shouldn't be tested directly, as they're the innermost guts of the class; but sometimes it still needs to be done, especially when dealing with legacy code.

With JMockit, you have two options to handle these:

  • The MockUp API to alter the real implementation (for the second case)
  • The Deencapsulation utility class, to call any method directly (for the first case)

All the following examples will be created for the class below, and we'll suppose that they're run in a test class with the same configuration as the first one (to avoid repeating code):

public class AdvancedCollaborator {
    int i;
    private int privateField = 5;

    // default constructor omitted 
    
    public AdvancedCollaborator(String string) throws Exception{
        i = string.length();
    }

    public String methodThatCallsPrivateMethod(int i) {
        return privateMethod() + i;
    }
    public int methodThatReturnsThePrivateField() {
        return privateField;
    }
    private String privateMethod() {
        return "default:";
    }

    class InnerAdvancedCollaborator {...}
}

2.1. Faking with MockUp

JMockit's MockUp API provides support for the creation of fake implementations, or mock-ups. Typically, a mock-up targets a few methods and/or constructors in the class to be faked, while leaving most other methods and constructors unmodified. This even allows a complete rewrite of a class, so any method or constructor (with any access modifier) can be targeted.

Let's see how we can redefine privateMethod() using the MockUp API:

@RunWith(JMockit.class)
public class AdvancedCollaboratorTest {

    @Tested
    private AdvancedCollaborator mock;

    @Test
    public void testToMockUpPrivateMethod() {
        new MockUp<AdvancedCollaborator>() {
            @Mock
            private String privateMethod() {
                return "mocked: ";
            }
        };
        String res = mock.methodThatCallsPrivateMethod(1);
        assertEquals("mocked: 1", res);
    }
}

In this example, we're defining a new MockUp for the AdvancedCollaborator class using the @Mock annotation on a method with a matching signature. After this, calls to that method will be delegated to our mocked one.

We can also use this to mock up the constructor of a class that needs specific arguments or configuration, in order to simplify tests:

@Test
public void testToMockUpDifficultConstructor() throws Exception{
    new MockUp<AdvancedCollaborator>() {
        @Mock
        public void $init(Invocation invocation, String string) {
            ((AdvancedCollaborator)invocation.getInvokedInstance()).i = 1;
        }
    };
    AdvancedCollaborator coll = new AdvancedCollaborator(null);
    assertEquals(1, coll.i);
}

In this example, we can see that for constructor mocking you need to mock the $init method. You can pass an extra argument of type Invocation, with which you can access information about the invocation of the mocked method, including the instance on which the invocation is being performed.

2.2. Using the Deencapsulation Class

JMockit includes a test utility class: Deencapsulation. As its name indicates, it's used to de-encapsulate the state of an object; using it, you can simplify testing by accessing fields and methods that could not be accessed otherwise.

You can invoke a method:

@Test
public void testToCallPrivateMethodsDirectly(){
    Object value = Deencapsulation.invoke(mock, "privateMethod");
    assertEquals("default:", value);
}

You can also set fields:

@Test
public void testToSetPrivateFieldDirectly(){
    Deencapsulation.setField(mock, "privateField", 10);
    assertEquals(10, mock.methodThatReturnsThePrivateField());
}

And get fields:

@Test
public void testToGetPrivateFieldDirectly(){
    int value = Deencapsulation.getField(mock, "privateField");
    assertEquals(5, value);
}

And create new instances of classes:

@Test
public void testToCreateNewInstanceDirectly(){
    AdvancedCollaborator coll = Deencapsulation
      .newInstance(AdvancedCollaborator.class, "foo");
    assertEquals(3, coll.i);
}

Even new instances of inner classes:

@Test
public void testToCreateNewInnerClassInstanceDirectly(){
    InnerAdvancedCollaborator inner = Deencapsulation
      .newInnerInstance(InnerAdvancedCollaborator.class, mock);
    assertNotNull(inner);
}

As you can see, the Deencapsulation class is extremely useful when testing air-tight classes. One example could be setting the dependencies of a class that uses @Autowired annotations on private fields and has no setters for them, or unit testing inner classes without having to depend on the public interface of their container class.
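As a sketch of that first use case (the SomeRepository and SomeService names here are purely illustrative):

public class SomeRepository {}

public class SomeService {
    @Autowired
    private SomeRepository repository; // private field, no setter
}

@Test
public void testToInjectPrivateAutowiredField() {
    SomeService service = new SomeService();
    // inject the dependency without a setter or a Spring context
    Deencapsulation.setField(service, "repository", new SomeRepository());
    assertNotNull(Deencapsulation.getField(service, "repository"));
}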

3. Mocking Multiple Interfaces in a Single Mock

Let’s assume that you want to test a class – not yet implemented – but you know for sure that it will implement several interfaces.

Usually, you wouldn’t be able to test said class before implementing it, but with JMockit you have the ability to prepare tests beforehand by mocking more than one interface using one mock object.

This can be achieved by using generics and defining a type that extends several interfaces. This generic type can be either defined for a whole test class or for just one test method.

For example, we're going to create a mock for the interfaces List and Comparable in two ways:

@RunWith(JMockit.class)
public class AdvancedCollaboratorTest<MultiMock
  extends List<String> & Comparable<List<String>>> {
    
    @Mocked
    private MultiMock multiMock;
    
    @Test
    public void testOnClass() {
        new Expectations() {{
            multiMock.get(5); result = "foo";
            multiMock.compareTo((List<String>) any); result = 0;
        }};
        assertEquals("foo", multiMock.get(5));
        assertEquals(0, multiMock.compareTo(new ArrayList<>()));
    }

    @Test
    public <M extends List<String> & Comparable<List<String>>>
      void testOnMethod(@Mocked M mock) {
        new Expectations() {{
            mock.get(5); result = "foo";
            mock.compareTo((List<String>) any); result = 0; 
        }};
        assertEquals("foo", mock.get(5));
        assertEquals(0, mock.compareTo(new ArrayList<>()));
    }
}

As you can see in the class declaration, we can define a new test type for the whole test class by using generics on the class name. That way, MultiMock will be available as a type, and you'll be able to create mocks for it using any of JMockit's annotations.

The first test shows an example that uses the multi-interface mock defined for the whole test class.

If you need the multi-interface mock for just one test, you can achieve this by defining the generic type on the method signature and passing a new mock of that generic as the test method argument. The second test shows an example of doing so for the same tested behavior as in the previous test.

4. Reusing Expectations and Verifications

When testing classes, you may encounter cases where you're repeating the same Expectations and/or Verifications over and over. To ease that, you can easily reuse both.

We’re going to explain it by an example (we’re using the classes Model, Collaborator, and Performer from our JMockit 101 article):

@RunWith(JMockit.class)
public class ReusingTest {

    @Injectable
    private Collaborator collaborator;
    
    @Mocked
    private Model model;

    @Tested
    private Performer performer;
    
    @Before
    public void setup(){
        new Expectations(){{
           model.getInfo(); result = "foo"; minTimes = 0;
           collaborator.collaborate("foo"); result = true; minTimes = 0; 
        }};
    }

    @Test
    public void testWithSetup() {
        performer.perform(model);
        verifyTrueCalls(1);
    }
    
    protected void verifyTrueCalls(int calls){
        new Verifications(){{
           collaborator.receive(true); times = calls; 
        }};
    }
    
    final class TrueCallsVerification extends Verifications{
        public TrueCallsVerification(int calls){
            collaborator.receive(true); times = calls; 
        }
    }
    
    @Test
    public void testWithFinalClass() {
        performer.perform(model);
        new TrueCallsVerification(1);
    }
}

In this example, you can see in the setup() method that we're preparing an expectation for every test, so that model.getInfo() always returns "foo" and collaborator.collaborate() always expects "foo" as the argument and returns true. We also add the minTimes = 0 statements so that no failures appear when these expectations are not actually used in a test.

Also, we've created the verifyTrueCalls(int) method to simplify verifications of calls to the collaborator.receive(boolean) method when the passed argument is true.

Lastly, you can also create new types of specific expectations and verifications by extending the Expectations or Verifications classes. You then define a constructor if you need to configure the behavior, and create a new instance of said type in a test, as we do with the TrueCallsVerification class.

5. Conclusion

With this installment of the JMockit series, we have touched on several advanced topics that will definitely help you with everyday mocking and testing.

We may do more articles on JMockit, so stay tuned to learn even more.

And, as always, the full implementation of this tutorial can be found on GitHub.

A Guide to Mapping With Dozer

1. Overview

Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another, attribute by attribute.

The library not only supports mapping between attribute names of Java Beans, but also automatically converts between types – if they’re different.

Most conversion scenarios are supported out of the box, but Dozer also allows you to specify custom conversions via XML.

2. Simple Example

For our first example, let’s assume that the source and destination data objects all share the same common attribute names.

This is the most basic mapping one can do with Dozer:

public class Source {
    private String name;
    private int age;

    public Source() {}

    public Source(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

Then our destination file, Dest.java:

public class Dest {
    private String name;
    private int age;

    public Dest() {}

    public Dest(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

We need to make sure to include the default or zero argument constructors, since Dozer uses reflection under the hood.

And, for performance purposes, let’s make our mapper global and create a single object we’ll use throughout our tests:

DozerBeanMapper mapper;

@Before
public void before() throws Exception {
    mapper = new DozerBeanMapper();
}

Now, let’s run our first test to confirm that when we create a Source object, we can map it directly onto a Dest object:

@Test
public void givenSourceObjectAndDestClass_whenMapsSameNameFieldsCorrectly_
  thenCorrect() {
    Source source = new Source("Baeldung", 10);
    Dest dest = mapper.map(source, Dest.class);

    assertEquals(dest.getName(), "Baeldung");
    assertEquals(dest.getAge(), 10);
}

As we can see, after the Dozer mapping, the result will be a new instance of the Dest object that contains values for all fields that have the same field name as the Source object.

Alternatively, instead of passing mapper the Dest class, we could have just created the Dest object and passed mapper its reference:

@Test
public void givenSourceObjectAndDestObject_whenMapsSameNameFieldsCorrectly_
  thenCorrect() {
    Source source = new Source("Baeldung", 10);
    Dest dest = new Dest();
    mapper.map(source, dest);

    assertEquals(dest.getName(), "Baeldung");
    assertEquals(dest.getAge(), 10);
}

3. Maven Setup

Now that we have a basic understanding of how Dozer works, let’s add the following dependency to the pom.xml:

<dependency>
    <groupId>net.sf.dozer</groupId>
    <artifactId>dozer</artifactId>
    <version>5.5.1</version>
</dependency>

The latest version is available here.

4. Data Conversion Example

As we already know, Dozer can map an existing object to another as long as it finds attributes of the same name in both classes.

However, that’s not always the case; and so, if any of the mapped attributes are of different data types, the Dozer mapping engine will automatically perform a data type conversion.

Let’s see this new concept in action:

public class Source2 {
    private String id;
    private double points;

    public Source2() {}

    public Source2(String id, double points) {
        this.id = id;
        this.points = points;
    }
    
    // standard getters and setters
}

And the destination class:

public class Dest2 {
    private int id;
    private int points;

    public Dest2() {}

    public Dest2(int id, int points) {
        super();
        this.id = id;
        this.points = points;
    }
    
    // standard getters and setters
}

Notice that the attribute names are the same but their data types are different.

In the source class, id is a String and points is a double, whereas in the destination class, id and points are both integers.

Let’s now see how Dozer correctly handles the conversion:

@Test
public void givenSourceAndDestWithDifferentFieldTypes_
  whenMapsAndAutoConverts_thenCorrect() {
    Source2 source = new Source2("320", 15.2);
    Dest2 dest = mapper.map(source, Dest2.class);

    assertEquals(dest.getId(), 320);
    assertEquals(dest.getPoints(), 15);
}

We passed "320" and 15.2, a String and a double, into the source object, and the result had 320 and 15, both integers, in the destination object.

5. Basic Custom Mappings Via XML

In all the previous examples we have seen, both the source and destination data objects have the same field names, which allows for easy mapping on our side.

However, in real-world applications, there will be countless times when the two data objects we're mapping won't have fields that share a common property name.

To solve this, Dozer gives us an option to create a custom mapping configuration in XML.

In this XML file, we can define class mapping entries which the Dozer mapping engine will use to decide what source attribute to map to what destination attribute.

Let's have a look at an example where we try mapping data objects from an application built by a French programmer to objects following an English naming style.

We have a Person object with name, nickname and age fields:

public class Person {
    private String name;
    private String nickname;
    private int age;

    public Person() {}

    public Person(String name, String nickname, int age) {
        super();
        this.name = name;
        this.nickname = nickname;
        this.age = age;
    }
    
    // standard getters and setters
}

The object we are unmarshalling is named Personne and has fields nom, surnom and age:

public class Personne {
    private String nom;
    private String surnom;
    private int age;

    public Personne() {}

    public Personne(String nom, String surnom, int age) {
        super();
        this.nom = nom;
        this.surnom = surnom;
        this.age = age;
    }
    
    // standard getters and setters
}

These objects really serve the same purpose, but we have a language barrier. To help with that barrier, we can use Dozer to map the French Personne object to our Person object.

We only have to create a custom mapping file to help Dozer do this; we'll call it dozer_mapping.xml:

<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net
      http://dozer.sourceforge.net/schema/beanmapping.xsd">
    <mapping>
        <class-a>com.baeldung.dozer.Personne</class-a>
        <class-b>com.baeldung.dozer.Person</class-b>
        <field>
            <a>nom</a>
            <b>name</b>
        </field>
        <field>
            <a>surnom</a>
            <b>nickname</b>
        </field>
    </mapping>
</mappings>

This is the simplest example of a custom XML mapping file we can have.

For now, it's enough to notice that we have <mappings> as our root element, which has a child <mapping>; we can have as many of these children inside <mappings> as there are pairs of classes that need custom mapping.

Notice also how we specify the source and destination classes inside the <mapping></mapping> tags. This is followed by a <field></field> element for each source and destination field pair that needs custom mapping.

Finally, notice that we have not included the field age in our custom mapping file. The French word for age is still age, which brings us to another important feature of Dozer.

Properties that are of the same name do not need to be specified in the mapping XML file. Dozer automatically maps all fields with the same property name from the source object into the destination object.

We then place our custom XML file on the classpath, directly under the src folder. Note that wherever we place it on the classpath, Dozer will search the entire classpath for the specified file.

Let's create a helper method to add mapping files to our mapper:

public void configureMapper(String... mappingFileUrls) {
    mapper.setMappingFiles(Arrays.asList(mappingFileUrls));
}

Let’s now test the code:

@Test
public void givenSrcAndDestWithDifferentFieldNamesWithCustomMapper_
  whenMaps_thenCorrect() {
    configureMapper("dozer_mapping.xml");
    Personne frenchAppPerson = new Personne("Sylvester Stallone", "Rambo", 70);
    Person englishAppPerson = mapper.map(frenchAppPerson, Person.class);

    assertEquals(englishAppPerson.getName(), frenchAppPerson.getNom());
    assertEquals(englishAppPerson.getNickname(), frenchAppPerson.getSurnom());
    assertEquals(englishAppPerson.getAge(), frenchAppPerson.getAge());
}

As shown in the test, DozerBeanMapper accepts a list of custom XML mapping files and decides when to use each at runtime.

Assume we now start unmarshalling these data objects back and forth between our English app and the French app. We don't need to create another mapping in the XML file; Dozer is smart enough to map the objects both ways with only one mapping configuration:

@Test
public void givenSrcAndDestWithDifferentFieldNamesWithCustomMapper_
  whenMapsBidirectionally_thenCorrect() {
    configureMapper("dozer_mapping.xml");
    Person englishAppPerson = new Person("Dwayne Johnson", "The Rock", 44);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(),englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

This example test also uses another feature of Dozer – the fact that the mapping engine is bi-directional; if we want to map the destination object to the source object, we do not need to add another class mapping to the XML file.

We can also load a custom mapping file from outside the classpath, if we need to, using the “file:” prefix in the resource name.

On a Windows environment (such as in the test below), we'll of course use the Windows-specific file syntax.

On a Linux box, we may store the file under /home and then:

configureMapper("file:/home/dozer_mapping.xml");

And on Mac OS:

configureMapper("file:/Users/me/dozer_mapping.xml");

If you are running the unit tests from the GitHub project (which you should), you can copy the mapping file to the appropriate location and change the input to the configureMapper method.

The mapping file is available under test/resources folder of the GitHub project:

@Test
public void givenMappingFileOutsideClasspath_whenMaps_thenCorrect() {
    configureMapper("file:E:\\dozer_mapping.xml");
    Person englishAppPerson = new Person("Marshall Bruce Mathers III","Eminem", 43);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(),englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

6. Wildcards and Further XML Customization

Let’s create a second custom mapping file called dozer_mapping2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net 
      http://dozer.sourceforge.net/schema/beanmapping.xsd">
    <mapping wildcard="false">
        <class-a>com.baeldung.dozer.Personne</class-a>
        <class-b>com.baeldung.dozer.Person</class-b>
        <field>
            <a>nom</a>
            <b>name</b>
        </field>
        <field>
            <a>surnom</a>
            <b>nickname</b>
        </field>
    </mapping>
</mappings>

Notice that we have added the wildcard attribute to the <mapping></mapping> element, which was not there before.

By default, wildcard is true. It tells the Dozer engine that we want all fields in the source object to be mapped to their appropriate destination fields.

When we set it to false, we are telling Dozer to only map fields we have explicitly specified in the XML.

So in the above configuration, we only want two fields mapped, leaving out age:

@Test
public void givenSrcAndDest_whenMapsOnlySpecifiedFields_thenCorrect() {
    configureMapper("dozer_mapping2.xml");
    Person englishAppPerson = new Person("Shawn Corey Carter","Jay Z", 46);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(),englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), 0);
}

As we can see in the last assertion, the destination age field remained 0.

7. Custom Mapping Via Annotations

For simple mapping cases and cases where we also have write access to the data objects we would like to map, we may not need to use XML mapping.

Mapping differently named fields via annotations is very simple, and we have to write much less code than in XML mapping, but it can only help us in simple cases.

Let’s replicate our data objects into Person2.java and Personne2.java without changing the fields at all.

To implement this, we only need to add the @Mapping("destinationFieldName") annotation on the getter methods in the source object, like so:

@Mapping("name")
public String getNom() {
    return nom;
}

@Mapping("nickname")
public String getSurnom() {
    return surnom;
}

This time we are treating Personne2 as the source, but it doesn't matter, due to the bi-directional nature of the Dozer engine.

Now with all the XML related code stripped out, our test code is shorter:

@Test
public void givenAnnotatedSrcFields_whenMapsToRightDestField_thenCorrect() {
    Person2 englishAppPerson = new Person2("Jean-Claude Van Damme", "JCVD", 55);
    Personne2 frenchAppPerson = mapper.map(englishAppPerson, Personne2.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(), englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

We can also test for bi-directionality:

@Test
public void givenAnnotatedSrcFields_whenMapsToRightDestFieldBidirectionally_
  thenCorrect() {
    Personne2 frenchAppPerson = new Personne2("Jason Statham", "transporter", 49);
    Person2 englishAppPerson = mapper.map(frenchAppPerson, Person2.class);

    assertEquals(englishAppPerson.getName(), frenchAppPerson.getNom());
    assertEquals(englishAppPerson.getNickname(), frenchAppPerson.getSurnom());
    assertEquals(englishAppPerson.getAge(), frenchAppPerson.getAge());
}

8. Custom API Mapping

In our previous examples where we are unmarshalling data objects from a french application, we used XML and annotations to customize our mapping.

Another alternative available in Dozer, similar to annotation mapping, is API mapping. They are similar in that we eliminate XML configuration and use strictly Java code.

In this case, we use BeanMappingBuilder class, defined in our simplest case like so:

BeanMappingBuilder builder = new BeanMappingBuilder() {
    @Override
    protected void configure() {
        mapping(Person.class, Personne.class)
          .fields("name", "nom")
            .fields("nickname", "surnom");
    }
};

As we can see, we have an abstract method, configure(), which we must override to define our configurations. Then, just like our <mapping></mapping> tags in XML, we define as many TypeMappingBuilders as we require.

These builders tell Dozer which source fields to map to which destination fields. We then pass the BeanMappingBuilder to DozerBeanMapper as we would the XML mapping file, only via a different API:

@Test
public void givenApiMapper_whenMaps_thenCorrect() {
    mapper.addMapping(builder);
 
    Personne frenchAppPerson = new Personne("Sylvester Stallone", "Rambo", 70);
    Person englishAppPerson = mapper.map(frenchAppPerson, Person.class);

    assertEquals(englishAppPerson.getName(), frenchAppPerson.getNom());
    assertEquals(englishAppPerson.getNickname(), frenchAppPerson.getSurnom());
    assertEquals(englishAppPerson.getAge(), frenchAppPerson.getAge());
}

The mapping API is also bi-directional:

@Test
public void givenApiMapper_whenMapsBidirectionally_thenCorrect() {
    mapper.addMapping(builder);
 
    Person englishAppPerson = new Person("Sylvester Stallone", "Rambo", 70);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(), englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), englishAppPerson.getAge());
}

Or we can choose to only map explicitly specified fields with this builder configuration:

BeanMappingBuilder builderMinusAge = new BeanMappingBuilder() {
    @Override
    protected void configure() {
        mapping(Person.class, Personne.class)
          .fields("name", "nom")
            .fields("nickname", "surnom")
              .exclude("age");
    }
};

and our age==0 test is back:

@Test
public void givenApiMapper_whenMapsOnlySpecifiedFields_thenCorrect() {
    mapper.addMapping(builderMinusAge); 
    Person englishAppPerson = new Person("Sylvester Stallone", "Rambo", 70);
    Personne frenchAppPerson = mapper.map(englishAppPerson, Personne.class);

    assertEquals(frenchAppPerson.getNom(), englishAppPerson.getName());
    assertEquals(frenchAppPerson.getSurnom(), englishAppPerson.getNickname());
    assertEquals(frenchAppPerson.getAge(), 0);
}

9. Custom Converters

Another scenario we may face in mapping is where we would like to perform custom mapping between two objects.

We have looked at scenarios where source and destination field names are different like in the French Personne object. This section solves a different problem.

What if a data object we are unmarshalling represents its date and time field as a long (Unix time), like so:

1182882159000

But our own equivalent data object represents the same date and time field in ISO format, as a String:

2007-06-26T21:22:39Z

The default converter would simply map the long value to a String like so:

"1182882159000"

This would definitely break our app. So how do we solve it? By adding a configuration block in the mapping XML file and specifying our own converter.

First, let's replicate the remote application's Person DTO, with a name field and a dtob (date and time of birth) field:

public class Personne3 {
    private String name;
    private long dtob;

    public Personne3(String name, long dtob) {
        super();
        this.name = name;
        this.dtob = dtob;
    }
    
    // standard getters and setters
}

and here is our own:

public class Person3 {
    private String name;
    private String dtob;

    public Person3(String name, String dtob) {
        super();
        this.name = name;
        this.dtob = dtob;
    }
    
    // standard getters and setters
}

Notice the type difference of dtob in the source and destination DTOs.

Let’s also create our own CustomConverter to pass to Dozer in the mapping XML:

public class MyCustomConvertor implements CustomConverter {
    @Override
    public Object convert(Object dest, Object source, Class<?> destClass, Class<?> sourceClass) {
        if (source == null) {
            return null;
        }
        if (source instanceof Personne3) {
            Personne3 person = (Personne3) source;
            Date date = new Date(person.getDtob());
            DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
            String isoDate = format.format(date);
            return new Person3(person.getName(), isoDate);
        } else if (source instanceof Person3) {
            Person3 person = (Person3) source;
            DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
            try {
                // parse() throws a checked ParseException, which we translate
                // into Dozer's unchecked MappingException
                Date date = format.parse(person.getDtob());
                return new Personne3(person.getName(), date.getTime());
            } catch (ParseException e) {
                throw new MappingException("Invalid date and time value: " + person.getDtob(), e);
            }
        }
        throw new MappingException("Converter MyCustomConvertor used incorrectly");
    }
}

We only have to override the convert() method and return whatever we want from it. We are given the source and destination objects and their class types.

Notice how we have taken care of bi-directionality by assuming the source can be either of the two classes we are mapping.

We will create a new mapping file for clarity, dozer_custom_convertor.xml:

<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://dozer.sourceforge.net
      http://dozer.sourceforge.net/schema/beanmapping.xsd">
    <configuration>
        <custom-converters>
            <converter type="com.baeldung.dozer.MyCustomConvertor">
                <class-a>com.baeldung.dozer.Personne3</class-a>
                <class-b>com.baeldung.dozer.Person3</class-b>
            </converter>
        </custom-converters>
    </configuration>
</mappings>

This is the normal mapping file we have seen in the preceding sections; we have only added a <configuration></configuration> block, within which we can define as many custom converters as we require, with their respective source and destination data classes.

Let’s test our new CustomConverter code:

@Test
public void givenSrcAndDestWithDifferentFieldTypes_whenAbleToCustomConvert_
  thenCorrect() {

    configureMapper("dozer_custom_convertor.xml");
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = new Long("1182882159000");
    Person3 person = new Person3("Rich", dateTime);
    Personne3 person0 = mapper.map(person, Personne3.class);

    assertEquals(timestamp, person0.getDtob());
}

We can also test to ensure it is bi-directional:

@Test
public void givenSrcAndDestWithDifferentFieldTypes_
  whenAbleToCustomConvertBidirectionally_thenCorrect() {
    configureMapper("dozer_custom_convertor.xml");
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = new Long("1182882159000");
    Personne3 person = new Personne3("Rich", timestamp);
    Person3 person0 = mapper.map(person, Person3.class);

    assertEquals(dateTime, person0.getDtob());
}

10. Conclusion

In this tutorial, we have introduced most of the basics of the Dozer Mapping library and how to use it in our applications.

The full implementation of all these examples and code snippets can be found in the Dozer GitHub project.

Java Web Weekly, Issue 136

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Groovy for Java Developers?! Meet Gradle, Grails and Spock [takipi.com]

A good intro to Groovy and the many tools on that side of the ecosystem.

I've been selectively using some of these tools in my day-to-day work, but there's a whole bunch of tools I haven't tried out yet that look potentially quite useful.

>> How to fetch multiple entities by id with Hibernate 5 [thoughts-on-java.org]

A basic operation that I, and most of the ORM-using world, have needed at some point or another. A very nice addition to Hibernate.

>> Resizing the HashMap: dangers ahead [plumbr.eu]

The HashMap is definitely the workhorse of so many Java codebases that it’s not even funny.

So, whether you’re using it as a blunt tool or as a sharp instrument, you definitely need to understand it well. A solid writeup overall.

>> SpringOne Platform 2016 Recap: Day 1 [spring.io] and >> SpringOne Platform 2016 Recap: Day 2

A bit of fun from SpringOne.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> DDD Decoded – Entities and Value Objects Explained [sapiensworks.com]

Another solid intro to DDD article here. This series is shaping up to be great reference material.

>> Writing OpenAPI (Swagger) Specification Tutorial – Part 8 – Splitting specification file [apihandyman.io]

I thoroughly enjoy this deep-dive into Swagger – the entire series is chock full of solid info, and these last few installments have been exploring some aspects of Swagger I had no idea about. Very cool.

Also worth reading:

3. Musings

>> Hiring Engineers [dandreamsofcoding.com]

A high level intro to hiring engineers that’s well worth reading.

There are definitely a lot of ways you can go about the process – some better than others – but it’s worth understanding that some of the traditional approaches can work if done well.

>> The Human Cost of Tech Debt [daedtech.com]

Unmanaged technical debt goes way beyond just the technical downsides and always has a deep impact on teams.

And given enough time, it will give a strong nudge to developers to get past the unpleasantness of looking for a new job.

>> Combine smart people with crazily hard projects [lemire.me]

Some interesting musings on the huge benefits of stepping out of your comfort zone, tackling a hard problem and getting help.

>> Is Your Source Control Usage Conducive to Code Review? [daedtech.com]

That is a fantastic question to ask. And the answer to it is ultimately rooted in discipline and respect for your team, trying to make the review job easier.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Breakout groups to fantasize about being relevant [dilbert.com]

>> I love getting rich at your expense … and golfing [dilbert.com]

>> I can’t remember if we’re cheap or smart [dilbert.com]

5. Pick of the Week

>> Keep earning your title, or it expires [sivers.org]

A Guide to Spring Cloud Configuration

1. Overview

Spring Cloud Config is Spring’s client/server approach for storing and serving distributed configurations across multiple applications and environments.

This configuration store is ideally versioned under Git version control and can be modified at application runtime. While it fits very well in Spring applications using all the supported configuration file formats together with constructs like Environment, PropertySource or @Value, it can be used in any environment running any programming language.

In this write-up, we’ll focus on an example of how to set up a Git-backed config server, use it in a simple REST application server and set up a secured environment including encrypted property values.

2. Project Setup and Dependencies

To get ready for writing some code, we first create two new Maven projects. The server project relies on the spring-cloud-config-server module, as well as the spring-boot-starter-security and spring-boot-starter-web starter bundles:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

However, for the client project we only need the spring-cloud-starter-config and the spring-boot-starter-web modules:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

3. A Config Server Implementation

The main part of the application is a config class – more specifically a @SpringBootApplication – which pulls in all the required setup through the auto-configure annotation @EnableConfigServer.

To secure our config server with Basic-Authentication, we additionally annotate the class with @EnableWebSecurity:

@SpringBootApplication
@EnableConfigServer
@EnableWebSecurity
public class ConfigServer {
    
    public static void main(String[] arguments) {
        SpringApplication.run(ConfigServer.class, arguments);
    }
}

Now we need to configure the server port on which our server is listening and a Git URL which provides our version-controlled configuration content. The latter can be used with protocols like http, ssh or a simple file on a local filesystem.

Tip: If you are planning to use multiple config server instances pointing to the same config repository, you can configure the server to clone your repo into a local temporary folder. But be aware that private repositories with two-factor authentication are difficult to handle! In such a case, it is easier to clone them on your local filesystem and work with the copy.

There are also some placeholder variables and search patterns for configuring the repository-url available; but this is beyond the scope of our article. If you are interested, the official documentation is a good place to start.

We also need to set a username and a password for the Basic-Authentication in our application.properties to avoid an auto-generated password on every application restart:

server.port=8888
spring.cloud.config.server.git.uri=ssh://localhost/config-repo
spring.cloud.config.server.git.clone-on-start=true
security.user.name=root
security.user.password=s3cr3t

4. A Git Repository as Configuration Storage

To complete our server, we have to initialize a Git repository under the configured URL, create some new properties files and populate them with some values.

The name of the configuration file is composed like a normal Spring application.properties, but instead of the word ‘application’ a configured name – e.g. the value of the client’s property ‘spring.application.name’ – is used, followed by a dash and the active profile. For example:

$> git init
$> echo 'user.role=Developer' > config-client-development.properties
$> echo 'user.role=User'      > config-client-production.properties
$> git add .
$> git commit -m 'Initial config-client properties'

Troubleshooting: If you run into ssh-related authentication issues, double check ~/.ssh/known_hosts and ~/.ssh/authorized_keys on your ssh server!

5. Querying the Configuration

Now we’re able to start our server. The Git-backed configuration API provided by our server can be queried using the following paths:

/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties

In which the {label} placeholder refers to a Git branch, {application} to the client’s application name and the {profile} to the client’s current active application profile.

So we can retrieve the configuration for our planned config client running under development profile in branch master via:

$> curl http://root:s3cr3t@localhost:8888/config-client/development/master

6. The Client Implementation

Next, let’s take care of the client. This will be a very simple client application, consisting of a REST controller with one GET method.

The configuration to fetch our server must be placed in a resource file named bootstrap.properties, because this file (as the name implies) will be loaded very early while the application starts:

@SpringBootApplication
@RestController
public class ConfigClient {
    
    @Value("${user.role}")
    private String role;

    public static void main(String[] args) {
        SpringApplication.run(ConfigClient.class, args);
    }

    @RequestMapping(
      value = "/whoami/{username}", 
      method = RequestMethod.GET, 
      produces = MediaType.TEXT_PLAIN_VALUE)
    public String whoami(@PathVariable("username") String username) {
        return String.format("Hello! 
          You're %s and you'll become a(n) %s...\n", username, role);
    }
}

In addition to the application name, we also put the active profile and the connection-details in our bootstrap.properties:

spring.application.name=config-client
spring.profiles.active=development
spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.username=root
spring.cloud.config.password=s3cr3t

To test whether the configuration is properly received from our server and the role value gets injected in our controller method, we simply curl it after booting the client:

$> curl http://localhost:8080/whoami/Mr_Pink

If the response is as follows, our Spring Cloud Config Server and its client are working fine for now:

Hello! You're Mr_Pink and you'll become a(n) Developer...

7. Encryption and Decryption

Requirement: To use cryptographically strong keys together with Spring encryption and decryption features you need the ‘Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files’ installed in your JVM. These can be downloaded for example from Oracle. To install follow the instructions included in the download. Some Linux distributions also provide an installable package through their package managers.

Since the config server supports encryption and decryption of property values, you can use public repositories as storage for sensitive data like usernames and passwords. Encrypted values are prefixed with the string {cipher} and can be generated by a REST call to the path ‘/encrypt’, if the server is configured to use a symmetric key or a key pair.

An endpoint to decrypt is also available. Both endpoints accept a path containing placeholders for the name of the application and its current profile: ‘/*/{name}/{profile}’. This is especially useful for controlling cryptography per client. However, before they become useful, you have to configure a cryptographic key, which we will do in the next section.

Tip: If you use curl to call the en-/decryption API, it’s better to use the --data-urlencode option (instead of --data/-d), or to set the ‘Content-Type’ header explicitly to ‘text/plain’. This ensures correct handling of special characters like ‘+’ in the encrypted values.

If a value can’t be decrypted automatically while being fetched through the client, its key is renamed with the name itself, prefixed by the word ‘invalid’. This should prevent, for example, the usage of an encrypted value as a password.

Tip: When setting-up a repository containing YAML files, you have to surround your encrypted and prefixed values with single-quotes! With Properties this is not the case.

7.1. Key Management

By default, the config server is able to encrypt property values in a symmetric or asymmetric way.

To use symmetric cryptography, you simply have to set the property ‘encrypt.key’ in your application.properties to a secret of your choice. Alternatively, you can pass in the environment variable ENCRYPT_KEY.

For asymmetric cryptography, you can set ‘encrypt.key’ to a PEM-encoded string value or configure a keystore to use.

Because we need a highly secured environment for our demo server, we choose the latter option and generate a new keystore, including an RSA key pair, with the Java keytool first:

$> keytool -genkeypair -alias config-server-key \
       -keyalg RSA -keysize 4096 -sigalg SHA512withRSA \
       -dname 'CN=Config Server,OU=Spring Cloud,O=Baeldung' \
       -keypass my-k34-s3cr3t -keystore config-server.jks \
       -storepass my-s70r3-s3cr3t

After that, we add the created keystore to our server’s application.properties and re-run it:

encrypt.key-store.location=classpath:/config-server.jks
encrypt.key-store.password=my-s70r3-s3cr3t
encrypt.key-store.alias=config-server-key
encrypt.key-store.secret=my-k34-s3cr3t

As a next step, we can query the encryption endpoint and add the response as a value to a configuration in our repository:

$> export PASSWORD=$(curl -X POST --data-urlencode d3v3L \
       http://root:s3cr3t@localhost:8888/encrypt)
$> echo "user.password=$PASSWORD" >> config-client-development.properties
$> git commit -am 'Added encrypted password'
$> curl -X POST http://root:s3cr3t@localhost:8888/refresh

To test whether our setup works correctly, we modify the ConfigClient class and restart our client:

@SpringBootApplication
@RestController
public class ConfigClient {

    ...
    
    @Value("${user.password}")
    private String password;

    ...
    public String whoami(@PathVariable("username") String username) {
        return String.format("Hello! 
          You're %s and you'll become a(n) %s, " +
          "but only if your password is '%s'!\n", 
          username, role, password);
    }
}

A final query against our client will show us whether our configuration value is being correctly decrypted:

$> curl http://localhost:8080/whoami/Mr_Pink
Hello! You're Mr_Pink and you'll become a(n) Developer, \
  but only if your password is 'd3v3L'!

7.2. Using Multiple Keys

If you want to use multiple keys for encryption and decryption, for example: a dedicated one for each served application, you can add another prefix in the form of {name:value} between the {cipher} prefix and the BASE64-encoded property value.

The config server understands prefixes like {secret:my-crypto-secret} or {key:my-key-alias} nearly out-of-the-box. The latter option needs a configured keystore in your application.properties. This keystore is searched for a matching key alias. For example:

user.password={cipher}{secret:my-499-s3cr3t}AgAMirj1DkQC0WjRv...
user.password={cipher}{key:config-client-key}AgAMirj1DkQC0WjRv...

For scenarios without a keystore you have to implement a @Bean of type TextEncryptorLocator, which handles the lookup and returns a TextEncryptor object for each key.
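
For illustration, here is a minimal sketch of such a bean – the configuration class name, the in-memory secrets and the assumption that the alias arrives under the map key ‘key’ are ours, not prescribed by the library. Encryptors.text comes from Spring Security’s crypto module:

@Configuration
public class EncryptorConfig {

    @Bean
    public TextEncryptorLocator textEncryptorLocator() {
        // hypothetical in-memory secrets, keyed by the alias used in {key:...}
        Map<String, String> secrets = new HashMap<>();
        secrets.put("config-client-key", "my-cl13nt-s3cr3t");

        // the parsed prefixes of the cipher text are handed to us as a map
        return keys -> Encryptors.text(
          secrets.getOrDefault(keys.get("key"), "fallback-s3cr3t"),
          "deadbeef"); // hex-encoded salt, hard-coded here for illustration only
    }
}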

7.3. Serving Encrypted Properties

If you want to disable server-side cryptography and handle decryption of property-values locally, you can put the following in your server’s application.properties:

spring.cloud.config.server.encrypt.enabled=false

Furthermore, you can delete all the other ‘encrypt.*’ properties to disable the REST endpoints.

8. Conclusion

Now we are able to create a configuration server to provide a set of configuration files from a Git repository to client applications. There are a few other things you can do with such a server.

For example:

  • Serve configuration in YAML or Properties format instead of JSON – also with placeholders resolved. This can be useful in non-Spring environments, where the configuration is not directly mapped to a PropertySource.
  • Serve plain text configuration files – in turn optionally with resolved placeholders. This can be useful, for example, to provide an environment-dependent logging configuration.
  • Embed the config server into an application, where it configures itself from a Git repository, instead of running as a standalone application serving clients. Therefore some bootstrap properties must be set and/or the @EnableConfigServer annotation must be removed, depending on the use case.
  • Make the config server available via Spring Netflix Eureka service discovery and enable automatic server discovery in config clients. This becomes important if the server has no fixed location or its location changes.

And to wrap up, you’ll find the source code to this article on Github.

A Guide to JaCoCo

1. Overview

Code coverage is a software metric used to measure how many lines of our code are executed during automated tests.

In this article we’re going to stroll through some practical aspects of using JaCoCo – a code coverage report generator for Java projects.

2. Maven Configuration

In order to get up and running with JaCoCo, we need to declare this Maven plugin in our pom.xml file:

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.7.7.201606060606</version>
    <executions>
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>report</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The link provided above will always lead you to the latest version of the plugin in the Maven central repository.

3. Code Coverage Reports

Before we start looking at JaCoCo’s code coverage capabilities, we need to have a code sample. Here’s a simple Java function that checks whether a string reads the same backward and forward:

public boolean isPalindrome(String inputString) {
    if (inputString.length() == 0) {
        return true;
    } else {
        char firstChar = inputString.charAt(0);
        char lastChar = inputString.charAt(inputString.length() - 1);
        String mid = inputString.substring(1, inputString.length() - 1);
        return (firstChar == lastChar) && isPalindrome(mid);
    }
}

All we need now is a simple JUnit test:

@Test
public void whenEmptyString_thenAccept() {
    Palindrome palindromeTester = new Palindrome();
    assertTrue(palindromeTester.isPalindrome(""));
}

Running the test using JUnit will automatically set the JaCoCo agent in motion; it will create a coverage report in binary format in the target directory – target/jacoco.exec.

Obviously we cannot interpret the binary output single-handedly, but other tools and plugins can – e.g. SonarQube.

The good news is that we can use the jacoco:report goal in order to generate readable code coverage reports in several formats – e.g. HTML, CSV, and XML.

We can now take a look, for example, at the target/site/jacoco/index.html page to see what the generated report looks like:

[coverage report screenshot]

Following the link provided in the report – Palindrome.java – we can drill into a more detailed view for each Java class.

Note that you can straightforwardly manage code coverage using JaCoCo inside Eclipse with zero configuration, thanks to the EclEmma Eclipse plugin.

4. Report Analysis

Our report shows 21% instruction coverage, 17% branch coverage, 3/5 for cyclomatic complexity and so on.

The 38 instructions shown by JaCoCo in the report refer to bytecode instructions, as opposed to ordinary Java code instructions.

JaCoCo reports help to visually analyze code coverage by using diamond colors for branches and background colors for lines:

  • A red diamond means that no branches have been exercised during the test phase.
  • A yellow diamond shows that the code is partially covered – some branches have not been exercised.
  • A green diamond means that all branches have been exercised during the test.

The same color code applies to the background color, but for line coverage.

JaCoCo mainly provides three important metrics:

  • Line coverage reflects the amount of code that has been exercised, based on the number of Java bytecode instructions called by the tests.
  • Branch coverage shows the percentage of exercised branches in the code – typically related to if/else and switch statements.
  • Cyclomatic complexity reflects the complexity of the code by giving the number of paths needed to cover all the possible paths in the code through linear combination.

To take a trivial example, if there are no if or switch statements in the code, the cyclomatic complexity will be 1, as we only need one execution path to cover the entire code.

Generally the cyclomatic complexity reflects the number of test cases we need to implement in order to cover the entire code.

5. Concept Breakdown

JaCoCo runs as a Java agent and is responsible for instrumenting the bytecode while running the tests. JaCoCo drills into each instruction and shows which lines are exercised during each test.

To gather coverage data, JaCoCo uses ASM for code instrumentation on the fly, receiving events from the JVM Tool Interface in the process:

[figure: JaCoCo concept]

It is also possible to run the JaCoCo agent in server mode; in this case, we can run our tests with jacoco:dump as a goal, to initiate a dump request.

You can follow the official documentation link for more in-depth details about JaCoCo design.

6. Code Coverage Score

Now that we know a bit about how JaCoCo works, let’s improve our code coverage score.

In order to achieve 100% code coverage, we need to introduce tests that cover the missing parts shown in the initial report:

@Test
public void whenPalindrome_thenAccept() {
    Palindrome palindromeTester = new Palindrome();
    assertTrue(palindromeTester.isPalindrome("noon"));
}
    
@Test
public void whenNotPalindrome_thenReject() {
    Palindrome palindromeTester = new Palindrome();
    assertFalse(palindromeTester.isPalindrome("neon"));
}

Now we can say that we have enough tests to cover the entire code, but to make sure of that, let’s run the Maven command mvn jacoco:report to publish the coverage report:

[coverage report screenshot]

As you can see all lines/branches/paths in our code are fully covered:

[full coverage report screenshot]

In a real-world project, as development goes further, we need to keep track of the code coverage score.

JaCoCo offers a simple way of declaring minimum requirements that should be met, otherwise the build will fail.

We can do that by adding the following check goal in our pom.xml file:

<execution>
    <id>jacoco-check</id>
    <goals>
        <goal>check</goal>
    </goals>
    <configuration>
        <rules>
            <rule>
                <element>PACKAGE</element>
                <limits>
                    <limit>
                        <counter>LINE</counter>
                        <value>COVEREDRATIO</value>
                        <minimum>0.50</minimum>
                    </limit>
                </limits>
            </rule>
        </rules>
    </configuration>
</execution>

As you can probably guess, we’re setting the minimum score for line coverage to 50% here.

The jacoco:check goal is bound to verify, so we can run the Maven command – mvn clean verify to check whether the rules are respected or not. The logs will show something like:

[ERROR] Failed to execute goal org.jacoco:jacoco-maven-plugin:0.7.7.201606060606:check 
  (jacoco-check) on project mutation-testing: Coverage checks have not been met.

7. Conclusion

In this article we’ve seen how to make use of JaCoCo maven plugin to generate code coverage reports for Java projects.

Keep in mind though, 100% code coverage does not necessarily reflect effective testing, as it only reflects the amount of code exercised during tests. In a previous article, we’ve talked about mutation testing as a more sophisticated way to track the effectiveness of tests compared to ordinary code coverage.

You can check out the example provided in this article in the linked GitHub project.


Introduction To Orika

1. Overview

Orika is a Java Bean mapping framework that recursively copies data from one object to another. It can be very useful when developing multi-layered applications.

While moving data objects back and forth between these layers it is common to find that we need to convert objects from one instance into another to accommodate different APIs.

Some ways to achieve this are hard-coding the copying logic or implementing bean mappers like Dozer. Orika can likewise be used to simplify the process of mapping between one object layer and another.

Orika uses byte code generation to create fast mappers with minimal overhead, making it much faster than other reflection based mappers like Dozer.

2. Simple Example

The basic cornerstone of the mapping framework is the MapperFactory class. This is the class we will use to configure mappings and obtain the MapperFacade instance which performs the actual mapping work.

We create a MapperFactory object like so:

MapperFactory mapperFactory = new DefaultMapperFactory.Builder().build();

Then assuming we have a source data object, Source.java, with two fields:

public class Source {
    private String name;
    private int age;
    
    public Source(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

And a similar destination data object, Dest.java:

public class Dest {
    private String name;
    private int age;
    
    public Dest(String name, int age) {
        this.name = name;
        this.age = age;
    }
    
    // standard getters and setters
}

This is the most basic bean mapping using Orika:

@Test
public void givenSrcAndDest_whenMaps_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class);
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source("Baeldung", 10);
    Dest dest = mapper.map(src, Dest.class);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

As we can observe, we have created a Dest object with identical fields as Source, simply by mapping. Bidirectional or reverse mapping is also possible by default:

@Test
public void givenSrcAndDest_whenMapsReverse_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).byDefault();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Dest src = new Dest("Baeldung", 10);
    Source dest = mapper.map(src, Source.class);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

3. Maven Setup

To use Orika mapper in our maven projects, we need to have orika-core dependency in pom.xml:

<dependency>
    <groupId>ma.glasnost.orika</groupId>
    <artifactId>orika-core</artifactId>
    <version>1.4.6</version>
</dependency>

The latest version can always be found here.

4. Working With MapperFactory

The general pattern of mapping with Orika involves creating a MapperFactory object, configuring it in case we need to tweak the default mapping behavior, obtaining a MapperFacade object from it and, finally, doing the actual mapping.

We shall be observing this pattern in all our examples, but our very first example showed the default behavior of the mapper without any tweaks from our side.

4.1. The BoundMapperFacade vs MapperFacade

One thing to note is that we could choose to use a BoundMapperFacade over the default MapperFacade, which is quite slow. This is useful for cases where we have a specific pair of types to map.

Our initial test would thus become:

@Test
public void givenSrcAndDest_whenMapsUsingBoundMapper_thenCorrect() {
    BoundMapperFacade<Source, Dest> 
      boundMapper = mapperFactory.getMapperFacade(Source.class, Dest.class);
    Source src = new Source("baeldung", 10);
    Dest dest = boundMapper.map(src);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

However, for BoundMapperFacade to map bi-directionally, we have to explicitly call the mapReverse method rather than the map method we have looked at for the case of the default MapperFacade:

@Test
public void givenSrcAndDest_whenMapsUsingBoundMapperInReverse_thenCorrect() {
    BoundMapperFacade<Source, Dest> 
      boundMapper = mapperFactory.getMapperFacade(Source.class, Dest.class);
    Dest src = new Dest("baeldung", 10);
    Source dest = boundMapper.mapReverse(src);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

The test will fail otherwise.

4.2. Configure Field Mappings

The examples we have looked at so far involve source and destination classes with identical field names. This subsection tackles the case where there is a difference between the two.

Consider a source object, Person, with three fields, namely name, nickname and age:

public class Person {
    private String name;
    private String nickname;
    private int age;
    
    public Person(String name, String nickname, int age) {
        this.name = name;
        this.nickname = nickname;
        this.age = age;
    }
    
    // standard getters and setters
}

Then another layer of the application has a similar object, but written by a French programmer. Let’s say that’s called Personne, with fields nom, surnom and age, all corresponding to the above three:

public class Personne {
    private String nom;
    private String surnom;
    private int age;
    
    public Personne(String nom, String surnom, int age) {
        this.nom = nom;
        this.surnom = surnom;
        this.age = age;
    }
    
    // standard getters and setters
}

Orika cannot automatically resolve these differences. But we can use the ClassMapBuilder API to register these unique mappings.

We have already used it before, but we have not tapped into any of its powerful features yet. The first line of each of our preceding tests using the default MapperFacade was using the ClassMapBuilder API to register the two classes we wanted to map:

mapperFactory.classMap(Source.class, Dest.class);

We could also map all fields using the default configuration, to make it clearer:

mapperFactory.classMap(Source.class, Dest.class).byDefault();

By adding the byDefault() method call, we are already configuring the behaviour of the mapper using the ClassMapBuilder API.

Now we want to be able to map Personne to Person, so we also configure field mappings onto the mapper using ClassMapBuilder API:

@Test
public void givenSrcAndDestWithDifferentFieldNames_whenMaps_thenCorrect() {
    mapperFactory.classMap(Personne.class, Person.class)
      .field("nom", "name").field("surnom", "nickname")
      .field("age", "age").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Personne frenchPerson = new Personne("Claire", "cla", 25);
    Person englishPerson = mapper.map(frenchPerson, Person.class);

    assertEquals(englishPerson.getName(), frenchPerson.getNom());
    assertEquals(englishPerson.getNickname(), frenchPerson.getSurnom());
    assertEquals(englishPerson.getAge(), frenchPerson.getAge());
}

Don’t forget to call the register() API method in order to register the configuration with the MapperFactory.

Even if only one field differs, going down this route means we must explicitly register all field mappings, including age which is the same in both objects, otherwise the unregistered field will not be mapped and the test would fail.

This will soon become tedious. What if we only want to map one field out of 20 – do we need to configure all of their mappings?

No, not when we tell the mapper to use its default mapping configuration in cases where we have not explicitly defined a mapping:

mapperFactory.classMap(Personne.class, Person.class)
  .field("nom", "name").field("surnom", "nickname").byDefault().register();

Here, we have not defined a mapping for the age field, but nevertheless the test will pass.

4.3. Exclude a Field

Assuming we would like to exclude the nom field of Personne from the mapping – so that the Person object only receives new values for fields that are not excluded:

@Test
public void givenSrcAndDest_whenCanExcludeField_thenCorrect() {
    mapperFactory.classMap(Personne.class, Person.class).exclude("nom")
      .field("surnom", "nickname").field("age", "age").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Personne frenchPerson = new Personne("Claire", "cla", 25);
    Person englishPerson = mapper.map(frenchPerson, Person.class);

    assertEquals(null, englishPerson.getName());
    assertEquals(englishPerson.getNickname(), frenchPerson.getSurnom());
    assertEquals(englishPerson.getAge(), frenchPerson.getAge());
}

Notice how we exclude it in the configuration of the MapperFactory and then notice also the first assertion where we expect the value of name in the Person object to remain null, as a result of it being excluded in mapping.

5. Collections Mapping

Sometimes the destination object may have unique attributes while the source object just maintains every property in a collection.

5.1. Lists And Arrays

Consider a source data object that only has one field, a list of a person’s names:

public class PersonNameList {
    private List<String> nameList;
    
    public PersonNameList(List<String> nameList) {
        this.nameList = nameList;
    }
}

Now consider our destination data object which separates firstName and lastName into separate fields:

public class PersonNameParts {
    private String firstName;
    private String lastName;

    public PersonNameParts(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

Let’s assume we are very sure that at index 0 there will always be the firstName of the person and at index 1 there will always be their lastName.

Orika allows us to use the bracket notation to access members of a collection:

@Test
public void givenSrcWithListAndDestWithPrimitiveAttributes_whenMaps_thenCorrect() {
    mapperFactory.classMap(PersonNameList.class, PersonNameParts.class)
      .field("nameList[0]", "firstName")
      .field("nameList[1]", "lastName").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    List<String> nameList = Arrays.asList(new String[] { "Sylvester", "Stallone" });
    PersonNameList src = new PersonNameList(nameList);
    PersonNameParts dest = mapper.map(src, PersonNameParts.class);

    assertEquals(dest.getFirstName(), "Sylvester");
    assertEquals(dest.getLastName(), "Stallone");
}

Even if instead of PersonNameList, we had PersonNameArray, the same test would pass for an array of names.
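
For reference, a minimal sketch of such a PersonNameArray class could look as follows – the field name nameArray is our own assumption, and the field mappings would then read nameArray[0] and nameArray[1]:

public class PersonNameArray {
    private String[] nameArray;

    public PersonNameArray(String[] nameArray) {
        this.nameArray = nameArray;
    }

    // standard getters and setters
}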

5.2. Maps

Assume our source object has a map of values. We know there is a key in that map, first, whose value represents a person’s firstName in our destination object.

Likewise we know that there is another key, last, in the same map whose value represents a person’s lastName in the destination object.

public class PersonNameMap {
    private Map<String, String> nameMap;

    public PersonNameMap(Map<String, String> nameMap) {
        this.nameMap = nameMap;
    }
}

Similar to the case in the preceding section, we use bracket notation, but instead of passing in an index, we pass in the key whose value we want to map to the given destination field.

Orika accepts two ways of retrieving the key, both are represented in the following test:

@Test
public void givenSrcWithMapAndDestWithPrimitiveAttributes_whenMaps_thenCorrect() {
    mapperFactory.classMap(PersonNameMap.class, PersonNameParts.class)
      .field("nameMap['first']", "firstName")
      .field("nameMap[\"last\"]", "lastName")
      .register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Map<String, String> nameMap = new HashMap<>();
    nameMap.put("first", "Leornado");
    nameMap.put("last", "DiCaprio");
    PersonNameMap src = new PersonNameMap(nameMap);
    PersonNameParts dest = mapper.map(src, PersonNameParts.class);

    assertEquals(dest.getFirstName(), "Leornado");
    assertEquals(dest.getLastName(), "DiCaprio");
}

We can use either single quotes or double quotes but we must escape the latter.

6. Map Nested Fields

Following on from the preceding collections examples, assume that inside our source data object there is another Data Transfer Object (DTO) that holds the values we want to map:

public class PersonContainer {
    private Name name;
    
    public PersonContainer(Name name) {
        this.name = name;
    }
}
public class Name {
    private String firstName;
    private String lastName;
    
    public Name(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

To be able to access the properties of the nested DTO and map them onto our destination object, we use dot notation, like so:

@Test
public void givenSrcWithNestedFields_whenMaps_thenCorrect() {
    mapperFactory.classMap(PersonContainer.class, PersonNameParts.class)
      .field("name.firstName", "firstName")
      .field("name.lastName", "lastName").register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    PersonContainer src = new PersonContainer(new Name("Nick", "Canon"));
    PersonNameParts dest = mapper.map(src, PersonNameParts.class);

    assertEquals(dest.getFirstName(), "Nick");
    assertEquals(dest.getLastName(), "Canon");
}

7. Mapping Null Values

In some cases, you may wish to control whether nulls are mapped or ignored when they are encountered. By default, Orika will map null values when encountered:

@Test
public void givenSrcWithNullField_whenMapsThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).byDefault();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = mapper.map(src, Dest.class);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

This behavior can be customized at different levels depending on how specific we would like to be.

7.1. Global Configuration

We can configure our mapper to map nulls or to ignore them at the global level while creating the MapperFactory. Remember how we created this object in our very first example? This time we add an extra call during the build process:

MapperFactory mapperFactory = new DefaultMapperFactory.Builder()
  .mapNulls(false).build();

We can run a test to confirm that indeed, nulls are not getting mapped:

@Test
public void givenSrcWithNullAndGlobalConfigForNoNull_whenFailsToMap_ThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class);
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = new Dest("Clinton", 55);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Clinton");
}

What happens is that, by default, nulls are mapped. This means that even if a field value in the source object is null and the corresponding field’s value in the destination object has a meaningful value, it will be overwritten.

In our case, the destination field is not overwritten if its corresponding source field has a null value.

7.2. Local Configuration

Mapping of null values can be controlled on a ClassMapBuilder by using mapNulls(true|false), or mapNullsInReverse(true|false) for controlling the mapping of nulls in the reverse direction.

By setting this value on a ClassMapBuilder instance, all field mappings created on the same ClassMapBuilder, after the value is set, will take on that same value.

Let’s illustrate this with an example test:

@Test
public void givenSrcWithNullAndLocalConfigForNoNull_whenFailsToMap_ThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
      .mapNulls(false).field("name", "name").byDefault().register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = new Dest("Clinton", 55);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Clinton");
}

Notice how we call mapNulls just before registering the name field; this causes all fields following the mapNulls call to be ignored when they have a null value.

By default, null values are also mapped in the reverse direction:

@Test
public void givenDestWithNullReverseMappedToSource_whenMapsByDefault_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).byDefault();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Dest src = new Dest(null, 10);
    Source dest = new Source("Vin", 44);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), src.getName());
}

We can also prevent this by calling mapNullsInReverse and passing in false:

@Test
public void 
  givenDestWithNullReverseMappedToSourceAndLocalConfigForNoNull_whenFailsToMap_thenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
      .mapNullsInReverse(false).field("name", "name").byDefault()
      .register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Dest src = new Dest(null, 10);
    Source dest = new Source("Vin", 44);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Vin");
}

7.3. Field Level Configuration

We can configure this at the field level using fieldMap, like so:

mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
  .fieldMap("name", "name").mapNulls(false).add().byDefault().register();

In this case, the configuration will only affect the name field as we have called it at field level:

@Test
public void givenSrcWithNullAndFieldLevelConfigForNoNull_whenFailsToMap_ThenCorrect() {
    mapperFactory.classMap(Source.class, Dest.class).field("age", "age")
      .fieldMap("name", "name").mapNulls(false).add().byDefault().register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    Source src = new Source(null, 10);
    Dest dest = new Dest("Clinton", 55);
    mapper.map(src, dest);

    assertEquals(dest.getAge(), src.getAge());
    assertEquals(dest.getName(), "Clinton");
}

8. Orika Custom Mapping

So far, we have looked at simple custom mapping examples using the ClassMapBuilder API. We shall still use the same API but customize our mapping using Orika’s CustomMapper class.

Assume we have two data objects, each with a field called dtob, representing a person’s date and time of birth.

One data object represents this value as a datetime String in the following ISO format:

2007-06-26T21:22:39Z

and the other represents the same as a long type in the following unix timestamp format:

1182882159000

Clearly, none of the customizations we have covered so far suffices to convert between the two formats during the mapping process; not even Orika’s built-in converters can handle the job. This is where we have to write a CustomMapper to do the required conversion during mapping.

Let us create our first data object:

public class Person3 {
    private String name;
    private String dtob;
    
    public Person3(String name, String dtob) {
        this.name = name;
        this.dtob = dtob;
    }
}

then our second data object:

public class Personne3 {
    private String name;
    private long dtob;
    
    public Personne3(String name, long dtob) {
        this.name = name;
        this.dtob = dtob;
    }
}

We will not label which is source and which is destination right now as the CustomMapper enables us to cater for bi-directional mapping.

Here is our concrete implementation of the CustomMapper abstract class:

class PersonCustomMapper extends CustomMapper<Personne3, Person3> {

    @Override
    public void mapAtoB(Personne3 a, Person3 b, MappingContext context) {
        Date date = new Date(a.getDtob());
        DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        String isoDate = format.format(date);
        b.setDtob(isoDate);
    }

    @Override
    public void mapBtoA(Person3 b, Personne3 a, MappingContext context) {
        DateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        try {
            // parse can throw a checked ParseException, which we must handle here
            Date date = format.parse(b.getDtob());
            a.setDtob(date.getTime());
        } catch (ParseException e) {
            throw new IllegalArgumentException(e);
        }
    }
}

Notice that we have implemented methods mapAtoB and mapBtoA. Implementing both makes our mapping function bi-directional.

Each method exposes the data objects we are mapping and we take care of copying the field values from one to the other.

That is where we write the custom code to manipulate the source data according to our requirements before writing it to the destination object.

Let’s run a test to confirm that our custom mapper works:

@Test
public void givenSrcAndDest_whenCustomMapperWorks_thenCorrect() {
    mapperFactory.classMap(Personne3.class, Person3.class)
      .customize(customMapper).register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = 1182882159000L;
    Personne3 personne3 = new Personne3("Leonardo", timestamp);
    Person3 person3 = mapper.map(personne3, Person3.class);

    assertEquals(person3.getDtob(), dateTime);
}

Notice that we still pass the custom mapper to Orika’s mapper via ClassMapBuilder API, just like all other simple customizations.

We can confirm too that bi-directional mapping works:

@Test
public void givenSrcAndDest_whenCustomMapperWorksBidirectionally_thenCorrect() {
    mapperFactory.classMap(Personne3.class, Person3.class)
      .customize(customMapper).register();
    MapperFacade mapper = mapperFactory.getMapperFacade();
    String dateTime = "2007-06-26T21:22:39Z";
    long timestamp = 1182882159000L;
    Person3 person3 = new Person3("Leonardo", dateTime);
    Personne3 personne3 = mapper.map(person3, Personne3.class);

    assertEquals(personne3.getDtob(), timestamp);
}

9. Conclusion

In this article, we have explored the most important features of the Orika mapping framework.

There are definitely more advanced features that give us much more control but in most use cases, the ones covered here will be more than enough.

The full project code and all examples can be found in my github project. Don’t forget to check out our tutorial on the Dozer mapping framework as well, since they both solve more or less the same problem.

Asynchronous Batch Operations in Couchbase

1. Introduction

In this follow-up to our tutorial on using Couchbase in a Spring application, we explore the asynchronous nature of the Couchbase SDK and how it may be used to perform persistence operations in batches, thus allowing our application to achieve optimal use of Couchbase resources.

1.1. CrudService Interface

First, we augment our generic CrudService interface to include batch operations:

public interface CrudService<T> {
    ...
    
    List<T> readBulk(Iterable<String> ids);

    void createBulk(Iterable<T> items);

    void updateBulk(Iterable<T> items);

    void deleteBulk(Iterable<String> ids);

    boolean exists(String id);
}

1.2. CouchbaseEntity Interface

We define an interface for the entities that we want to persist:

public interface CouchbaseEntity {

    String getId();
    
    void setId(String id);
    
}

1.3. AbstractCrudService Class

Then we will implement each of these methods in a generic abstract class. This class is derived from the PersonCrudService class that we used in the previous tutorial and begins as follows:

public abstract class AbstractCrudService<T extends CouchbaseEntity> implements CrudService<T> {
    private BucketService bucketService;
    private Bucket bucket;
    private JsonDocumentConverter<T> converter;

    public AbstractCrudService(BucketService bucketService, JsonDocumentConverter<T> converter) {
        this.bucketService = bucketService;
        this.converter = converter;
    }

    protected void loadBucket() {
        bucket = bucketService.getBucket();
    }
    
    ...
}
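
The interface also declares a non-batch exists method; for completeness, a minimal sketch of it could simply delegate to the synchronous Bucket API, whose exists call checks for a document without fetching its content:

@Override
public boolean exists(String id) {
    // delegate to the synchronous SDK call; no document body is transferred
    return bucket.exists(id);
}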

2. The Asynchronous Bucket Interface

The Couchbase SDK provides the AsyncBucket interface for performing asynchronous operations. Given a Bucket instance, you can obtain its asynchronous version via the async() method:

AsyncBucket asyncBucket = bucket.async();

3. Batch Operations

To perform batch operations using the AsyncBucket interface, we employ the RxJava library.

3.1. Batch Read

Here we implement the readBulk method. First, we use the AsyncBucket and the flatMap mechanism in RxJava to retrieve the documents asynchronously into an Observable<JsonDocument>; then we use the toBlocking mechanism in RxJava to convert these to a list of entities:

@Override
public List<T> readBulk(Iterable<String> ids) {
    AsyncBucket asyncBucket = bucket.async();
    Observable<JsonDocument> asyncOperation = Observable
      .from(ids)
      .flatMap(new Func1<String, Observable<JsonDocument>>() {
          public Observable<JsonDocument> call(String key) {
              return asyncBucket.get(key);
          }
    });

    List<T> items = new ArrayList<T>();
    try {
        asyncOperation.toBlocking()
          .forEach(new Action1<JsonDocument>() {
              public void call(JsonDocument doc) {
                  T item = converter.fromDocument(doc);
                  items.add(item);
              }
        });
    } catch (Exception e) {
        logger.error("Error during bulk get", e);
    }

    return items;
}

3.2. Batch Insert

We again use RxJava’s flatMap construct to implement the createBulk method.

Since bulk mutation requests are produced faster than their responses can be generated, sometimes resulting in an overload condition, we institute a retry with exponential delay whenever a BackpressureException is encountered:

@Override
public void createBulk(Iterable<T> items) {
    AsyncBucket asyncBucket = bucket.async();
    Observable
      .from(items)
      .flatMap(new Func1<T, Observable<JsonDocument>>() {
          @SuppressWarnings("unchecked")
          @Override
          public Observable<JsonDocument> call(final T t) {
              if(t.getId() == null) {
                  t.setId(UUID.randomUUID().toString());
              }
              JsonDocument doc = converter.toDocument(t);
              return asyncBucket.insert(doc)
                .retryWhen(RetryBuilder
                  .anyOf(BackpressureException.class)
                  .delay(Delay.exponential(TimeUnit.MILLISECONDS, 100))
                  .max(10)
                  .build());
          }
      })
      .last()
      .toBlocking()
      .single();
}

3.3. Batch Update

We use a similar mechanism in the updateBulk method:

@Override
public void updateBulk(Iterable<T> items) {
    AsyncBucket asyncBucket = bucket.async();
    Observable
      .from(items)
      .flatMap(new Func1<T, Observable<JsonDocument>>() {
          @SuppressWarnings("unchecked")
          @Override
          public Observable<JsonDocument> call(final T t) {
              JsonDocument doc = converter.toDocument(t);
              return asyncBucket.upsert(doc)
                .retryWhen(RetryBuilder
                  .anyOf(BackpressureException.class)
                  .delay(Delay.exponential(TimeUnit.MILLISECONDS, 100))
                  .max(10)
                  .build());
          }
      })
      .last()
      .toBlocking()
      .single();
}

3.4. Batch Delete

And we write the deleteBulk method as follows:

@Override
public void deleteBulk(Iterable<String> ids) {
    AsyncBucket asyncBucket = bucket.async();
    Observable
      .from(ids)
      .flatMap(new Func1<String, Observable<JsonDocument>>() {
          @SuppressWarnings("unchecked")
          @Override
          public Observable<JsonDocument> call(String key) {
              return asyncBucket.remove(key)
                .retryWhen(RetryBuilder
                  .anyOf(BackpressureException.class)
                  .delay(Delay.exponential(TimeUnit.MILLISECONDS, 100))
                  .max(10)
                  .build());
          }
      })
      .last()
      .toBlocking()
      .single();
}

4. PersonCrudService

Finally, we write a Spring service, PersonCrudService, that extends our AbstractCrudService for the Person entity.

Since all of the Couchbase interaction is implemented in the abstract class, the implementation for an entity class is trivial, as we only need to ensure that all our dependencies are injected and our bucket loaded:

@Service
public class PersonCrudService extends AbstractCrudService<Person> {

    @Autowired
    public PersonCrudService(
      @Qualifier("TutorialBucketService") BucketService bucketService,
      PersonDocumentConverter converter) {
        super(bucketService, converter);
    }

    @PostConstruct
    private void init() {
        loadBucket();
    }
}
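
To illustrate, here is a hypothetical usage sketch of the batch methods through this service – the ids and the surrounding method are illustrative only:

@Autowired
private PersonCrudService personService;

public void batchDemo() {
    List<String> ids = Arrays.asList("person-1", "person-2");

    // one asynchronous round of gets under the hood
    List<Person> people = personService.readBulk(ids);

    // ... modify the entities, then write them back in one batch ...
    personService.updateBulk(people);
}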

5. Conclusion

The source code shown in this tutorial is available in the github project.

You can learn more about the Couchbase Java SDK at the official Couchbase developer documentation site.


Introduction to Thread Pools in Java

1. Introduction

This article is a look at thread pools in Java – starting with the different implementations in the standard Java library and then looking at Google’s Guava library.

2. The Thread Pool

In Java, threads are mapped to system-level threads, which are the operating system’s resources. If you create threads uncontrollably, you may run out of these resources quickly.

The context switching between threads is done by the operating system as well – in order to emulate parallelism. A simplistic view is that – the more threads you spawn, the less time each thread spends doing actual work.

The Thread Pool pattern helps to save resources in a multithreaded application, and also to contain the parallelism in certain predefined limits.

When you use a thread pool, you write your concurrent code in the form of parallel tasks and submit them for execution to an instance of a thread pool. This instance controls several re-used threads for executing these tasks.

The pattern allows you to control the number of threads the application is creating, their lifecycle, as well as to schedule tasks’ execution and keep incoming tasks in a queue.

3. Thread Pools in Java

3.1. Executors, Executor and ExecutorService

The Executors helper class contains several methods for the creation of pre-configured thread pool instances. Those classes are a good place to start – use them if you don’t need to apply any custom fine-tuning.

The Executor and ExecutorService interfaces are used to work with different thread pool implementations in Java. Usually you should keep your code decoupled from the actual implementation of the thread pool and use these interfaces throughout your application.

The Executor interface has a single execute method to submit Runnable instances for execution.

Here’s a quick example of how you can use the Executors API to acquire an Executor instance backed by a single thread pool and an unbounded queue for executing tasks sequentially. Here, we execute a single task that simply prints “Hello World” on the screen. The task is submitted as a lambda (a Java 8 feature) which is inferred to be a Runnable:

Executor executor = Executors.newSingleThreadExecutor();
executor.execute(() -> System.out.println("Hello World"));

The ExecutorService interface contains a large number of methods for controlling the progress of the tasks and managing the termination of the service. Using this interface, you can submit the tasks for execution and also control their execution using the returned Future instance.

In the following example, we create an ExecutorService, submit a task and then use the returned Future‘s get method to wait until the submitted task is finished and the value is returned:

ExecutorService executorService = Executors.newFixedThreadPool(10);
Future<String> future = executorService.submit(() -> "Hello World");
// some operations
String result = future.get();

Of course, in a real-life scenario you usually don’t want to call future.get() right away, but defer calling it until you actually need the value of the computation.
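
For example – a minimal sketch – we could poll the Future and keep doing other useful work until the result is ready:

Future<String> future = executorService.submit(() -> "Hello World");
while (!future.isDone()) {
    // do something else useful while the task is still running
}
String result = future.get();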

The submit method is overloaded to take either Runnable or Callable both of which are functional interfaces and can be passed as lambdas (starting with Java 8).

Runnable‘s single method does not throw an exception and does not return a value. The Callable interface may be more convenient, as it allows us to throw an exception and return a value.

Finally – to let the compiler infer the Callable type, simply return a value from the lambda.
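
For instance, the following lambda is inferred to be a Callable<String> simply because it returns a value – and, being a Callable, it is also free to throw a checked exception:

Future<String> future = executorService.submit(() -> {
    TimeUnit.MILLISECONDS.sleep(300); // a Callable may throw InterruptedException
    return "Hello World";
});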

For more examples on using the ExecutorService interface and futures, have a look at “A Guide to the Java ExecutorService“.

3.2. ThreadPoolExecutor

The ThreadPoolExecutor is an extensible thread pool implementation with lots of parameters and hooks for fine-tuning.

The main configuration parameters that we’ll discuss here are: corePoolSize, maximumPoolSize and keepAliveTime.

The pool consists of a fixed number of core threads that are kept inside all the time, and some excessive threads that may be spawned and then terminated when they are not needed anymore. The corePoolSize parameter is the amount of core threads which will be instantiated and kept in the pool. If all core threads are busy and more tasks are submitted, then the pool is allowed to grow up to a maximumPoolSize.

The keepAliveTime parameter is the interval of time for which the excessive threads (i.e. threads that are instantiated in excess of the corePoolSize) are allowed to exist in the idle state.
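To make these parameters concrete, here’s a minimal sketch of constructing a ThreadPoolExecutor directly instead of going through the Executors factory methods – the specific values and the bounded queue are illustrative, not taken from the article:

ThreadPoolExecutor executor = new ThreadPoolExecutor(
  2,                               // corePoolSize: threads kept in the pool permanently
  4,                               // maximumPoolSize: upper bound once the queue fills up
  60, TimeUnit.SECONDS,            // keepAliveTime for the excessive threads
  new ArrayBlockingQueue<>(100));  // bounded queue, so the pool can actually grow

Note that with an unbounded queue (such as a plain LinkedBlockingQueue), the pool would never grow beyond corePoolSize, because tasks would simply queue up instead.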

These parameters cover a wide range of use cases, but the most typical configurations are predefined in the Executors static methods.

For example, newFixedThreadPool method creates a ThreadPoolExecutor with equal corePoolSize and maximumPoolSize parameter values and a zero keepAliveTime. This means that the number of threads in this thread pool is always the same:

ThreadPoolExecutor executor = 
  (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
executor.submit(() -> {
    Thread.sleep(1000);
    return null;
});
executor.submit(() -> {
    Thread.sleep(1000);
    return null;
});
executor.submit(() -> {
    Thread.sleep(1000);
    return null;
});

assertEquals(2, executor.getPoolSize());
assertEquals(1, executor.getQueue().size());

In the example above we instantiate a ThreadPoolExecutor with a fixed thread count of 2. This means that if the number of simultaneously running tasks is less than or equal to two at all times, they get executed right away. Otherwise, some of these tasks may be put into a queue to wait for their turn.

We created three Callable tasks that imitate heavy work by sleeping for 1000 milliseconds. The first two tasks will be executed at once, and the third one will have to wait in the queue. We can verify it by calling the getPoolSize() and getQueue().size() methods immediately after submitting the tasks.

Another pre-configured ThreadPoolExecutor can be created with the Executors.newCachedThreadPool() method. This method does not receive a number of threads at all. The corePoolSize is actually set to 0, and the maximumPoolSize is set to Integer.MAX_VALUE for this instance. The keepAliveTime is 60 seconds for this one.

These parameter values mean that the cached thread pool may grow without bounds to accommodate any amount of submitted tasks. But when the threads are not needed anymore, they will be disposed of after 60 seconds of inactivity. A typical use case is when you have a lot of short-living tasks in your application.

ThreadPoolExecutor executor = 
  (ThreadPoolExecutor) Executors.newCachedThreadPool();
executor.submit(() -> {
    Thread.sleep(1000);
    return null;
});
executor.submit(() -> {
    Thread.sleep(1000);
    return null;
});
executor.submit(() -> {
    Thread.sleep(1000);
    return null;
});

assertEquals(3, executor.getPoolSize());
assertEquals(0, executor.getQueue().size());

The queue size in the example above will always be zero, because internally a SynchronousQueue instance is used. In a SynchronousQueue, pairs of insert and remove operations always occur simultaneously, so the queue never actually contains anything.
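If you want to see this behavior in isolation, here’s a tiny illustrative snippet (not from the article’s test suite) showing that offer fails when no consumer thread is currently waiting to take the element:

SynchronousQueue<Integer> queue = new SynchronousQueue<>();

boolean transferred = queue.offer(1);
assertFalse(transferred); // nobody is blocked on take(), so the element is rejected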

The Executors.newSingleThreadExecutor() API creates another typical form of ThreadPoolExecutor containing a single thread. The single thread executor is ideal for creating an event loop. The corePoolSize and maximumPoolSize parameters are equal to 1, and the keepAliveTime is zero.

Tasks in the following example will be executed sequentially, so the counter value will be 2 after both tasks complete:

AtomicInteger counter = new AtomicInteger();

ExecutorService executor = Executors.newSingleThreadExecutor();
executor.submit(() -> {
    counter.set(1);
});
executor.submit(() -> {
    counter.compareAndSet(1, 2);
});

Additionally, this ThreadPoolExecutor is decorated with an immutable wrapper, so it cannot be reconfigured after creation. This is also the reason we cannot cast it to a ThreadPoolExecutor.

3.3. ScheduledThreadPoolExecutor

The ScheduledThreadPoolExecutor extends the ThreadPoolExecutor class and also implements the ScheduledExecutorService interface with several additional methods:

  • the schedule method allows us to execute a task once after a specified delay;
  • the scheduleAtFixedRate method allows us to execute a task after a specified initial delay and then execute it repeatedly with a certain period; the period argument is the time measured between the starting times of the tasks, so the execution rate is fixed;
  • the scheduleWithFixedDelay method is similar to scheduleAtFixedRate in that it repeatedly executes the given task, but the specified delay is measured between the end of the previous task and the start of the next; the execution rate may vary depending on the time it takes to execute any given task.

The Executors.newScheduledThreadPool() method is typically used to create a ScheduledThreadPoolExecutor with a given corePoolSize, unbounded maximumPoolSize and zero keepAliveTime. Here’s how to schedule a task for execution in 500 milliseconds:

ScheduledExecutorService executor = Executors.newScheduledThreadPool(5);
executor.schedule(() -> {
    System.out.println("Hello World");
}, 500, TimeUnit.MILLISECONDS);

The following code shows how to execute a task after 500 milliseconds delay and then repeat it every 100 milliseconds. After scheduling the task, we wait until it fires three times using the CountDownLatch lock, then cancel it using the Future.cancel() method.

CountDownLatch lock = new CountDownLatch(3);

ScheduledExecutorService executor = Executors.newScheduledThreadPool(5);
ScheduledFuture<?> future = executor.scheduleAtFixedRate(() -> {
    System.out.println("Hello World");
    lock.countDown();
}, 500, 100, TimeUnit.MILLISECONDS);

lock.await(1000, TimeUnit.MILLISECONDS);
future.cancel(true);
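For comparison, here’s what the fixed-delay variant might look like – the same parameters, but the 100 milliseconds are now measured from the end of one execution to the start of the next (this snippet is an illustrative sketch, not part of the article’s tests):

ScheduledExecutorService executor = Executors.newScheduledThreadPool(5);
executor.scheduleWithFixedDelay(() -> {
    System.out.println("Hello World");
}, 500, 100, TimeUnit.MILLISECONDS);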

3.4. ForkJoinPool

ForkJoinPool is the central part of the fork/join framework introduced in Java 7. It solves a common problem of spawning multiple tasks in recursive algorithms. Using a simple ThreadPoolExecutor, you will run out of threads quickly, as every task or subtask requires its own thread to run.

In a fork/join framework, any task can spawn (fork) a number of subtasks and wait for their completion using the join method. The benefit of the fork/join framework is that it does not create a new thread for each task or subtask, implementing the Work Stealing algorithm instead. This framework is thoroughly described in the article “Guide to the Fork/Join Framework in Java”.

Let’s look at a simple example of using ForkJoinPool to traverse a tree of nodes and calculate the sum of all leaf values. Here’s a simple implementation of a tree consisting of a node, an int value and a set of child nodes:

static class TreeNode {

    int value;

    Set<TreeNode> children;

    TreeNode(int value, TreeNode... children) {
        this.value = value;
        this.children = Sets.newHashSet(children);
    }
}

Now if we want to sum all values in a tree in parallel, we need to extend the RecursiveTask<Integer> abstract class. Each task receives its own node and adds its value to the sum of the values of its children. To calculate the sum of the children’s values, the task implementation does the following:

  • streams the children set,
  • maps over this stream, creating a new CountingTask for each element,
  • executes each subtask by forking it,
  • collects the results by calling the join method on each forked task,
  • sums the results using the Collectors.summingInt collector.

public static class CountingTask extends RecursiveTask<Integer> {

    private final TreeNode node;

    public CountingTask(TreeNode node) {
        this.node = node;
    }

    @Override
    protected Integer compute() {
        return node.value + node.children.stream()
          .map(childNode -> new CountingTask(childNode).fork())
          .collect(Collectors.summingInt(ForkJoinTask::join));
    }
}

The code to run the calculation on an actual tree is very simple:

TreeNode tree = new TreeNode(5,
  new TreeNode(3), new TreeNode(2,
    new TreeNode(2), new TreeNode(8)));

ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
int sum = forkJoinPool.invoke(new CountingTask(tree));

4. Thread Pool’s Implementation in Guava

Guava is a popular Google library of utilities. It has many useful concurrency classes, including several handy implementations of ExecutorService. The implementing classes are not accessible for direct instantiation or subclassing, so the only entry point for creating their instances is the MoreExecutors helper class.

4.1. Adding Guava as a Maven Dependency

Add the following dependency to your Maven pom file to include the Guava library to your project. You can find the latest version of Guava library in the Maven Central repository:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>19.0</version>
</dependency>

4.2. Direct Executor and Direct Executor Service

Sometimes you want to execute the task either in the current thread, or in a thread pool, depending on some conditions. You would prefer to use a single Executor interface and just switch the implementation. Although it is not so hard to come up with an implementation of Executor or ExecutorService that executes the tasks in the current thread, it still requires writing some boilerplate code.

Thankfully, Guava provides predefined instances for us.

Here’s an example that demonstrates execution of a task in the same thread. Although the provided task sleeps for 500 milliseconds, it blocks the current thread, and the result is available immediately after the execute call is finished:

Executor executor = MoreExecutors.directExecutor();

AtomicBoolean executed = new AtomicBoolean();

executor.execute(() -> {
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    executed.set(true);
});

assertTrue(executed.get());

The instance returned by the directExecutor() method is actually a static singleton, so using this method does not provide any overhead on object creation at all.

You should prefer this method to the MoreExecutors.newDirectExecutorService(), because that API creates a full-fledged executor service implementation on every call.

4.3. Exiting Executor Services

Another common problem is shutting down the virtual machine while a thread pool is still running its tasks. Even with a cancellation mechanism in place, there is no guarantee that the tasks will behave nicely and stop their work when the executor service shuts down. This may cause the JVM to hang indefinitely while the tasks keep doing their work.

To solve this problem, Guava introduces a family of exiting executor services. They are based on daemon threads which terminate together with the JVM.

These services also add a shutdown hook with the Runtime.getRuntime().addShutdownHook() method and prevent the VM from terminating for a configured amount of time before giving up on hung tasks.

In the following example we’re submitting the task that contains an infinite loop, but we use an exiting executor service with a configured time of 100 milliseconds to wait for the tasks upon VM termination. Without the exitingExecutorService in place, this task would cause the VM to hang indefinitely:

ThreadPoolExecutor executor = 
  (ThreadPoolExecutor) Executors.newFixedThreadPool(5);
ExecutorService executorService = 
  MoreExecutors.getExitingExecutorService(executor, 
    100, TimeUnit.MILLISECONDS);

executorService.submit(() -> {
    while (true) {
    }
});

4.4. Listening Decorators

Listening decorators allow you to wrap the ExecutorService and receive ListenableFuture instances upon task submission instead of simple Future instances. The ListenableFuture interface extends Future and has a single additional method addListener. This method allows us to add a listener that is called upon the future’s completion.

You’ll rarely want to use the ListenableFuture.addListener() method directly, but it is essential to most of the helper methods in the Futures utility class. For instance, with the Futures.allAsList() method you can combine several ListenableFuture instances into a single ListenableFuture that completes upon the successful completion of all the futures combined:

ExecutorService executorService = Executors.newCachedThreadPool();
ListeningExecutorService listeningExecutorService = 
  MoreExecutors.listeningDecorator(executorService);

ListenableFuture<String> future1 = 
  listeningExecutorService.submit(() -> "Hello");
ListenableFuture<String> future2 = 
  listeningExecutorService.submit(() -> "World");

String greeting = Futures.allAsList(future1, future2).get()
  .stream()
  .collect(Collectors.joining(" "));
assertEquals("Hello World", greeting);
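Another commonly used helper is Futures.addCallback(), which attaches separate success and failure handlers to a ListenableFuture. A minimal sketch, building on future1 from the example above:

Futures.addCallback(future1, new FutureCallback<String>() {
    @Override
    public void onSuccess(String result) {
        System.out.println("Computed: " + result);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
});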

5. Conclusion

In this article we have discussed the Thread Pool pattern and its implementations in the standard Java library and in Google’s Guava library.

The source code for the article is available on GitHub.


Java Web Weekly, Issue 137



At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Updated Spring 5.0 Roadmap and Reactive Story Presented at SpringOne [infoq.com]

A quick forward looking writeup discussing the plans for the next Spring releases and the general direction of the platform.

>> Impact of Java EE 8 Hiatus on Tomcat 9 Highlighted at SpringOne [infoq.com]

Servlet 4.0 can’t get here soon enough, although from the looks of it, the specification is still a long ways off – and that of course has an impact on anything and everything downstream.

>> Fluent API entity building with JPA and Hibernate [vladmihalcea.com]

Some cool exploration of what a fluent API might look like for building JPA/Hibernate entities.

>> The DISTINCT pass-through Hibernate Query Hint [in.relation.to]

A cool, detailed writeup about improving the way Hibernate handles the unnecessary passing of DISTINCT to the generated query.

>> How to get query results as a Stream with Hibernate 5.2 [thoughts-on-java.org]

Nice – streams in the Hibernate APIs, starting with 5.2.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Understanding Security by Country: SSL [shodan.io]

Interesting and damn scary data as always.

>> System Observability: How to Make Your Production Environment Great Again [takipi.com]

The observability of a system is a core aspect of going into production and it definitely needs to be considered and built in from the start.

Also worth reading:

3. Musings

>> How Do I Write Good Code? [daedtech.com]

A solid attempt to answer the question in earnest. Really.

>> Future of Serverless Architectures [martinfowler.com]

The wrap-up section of the huge and quite interesting deep-dive into Serverless Architectures.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Should I give you regular updates? Nah [dilbert.com]

>> It’s supposed to make you feel “engaged” [dilbert.com]

>> No one would pay you to feel good [dilbert.com]

5. Pick of the Week

>> I’ve never had a goal [signalvnoise.com]


Guide to Spring NonTransientDataAccessException



1. Overview

In this quick tutorial, we will go through the most important types of the common NonTransientDataAccessException and illustrate them with examples.

2. The Base Exception Class

Subclasses of this main exception class represent data access related exceptions which are considered non-transient or permanent.

Simply put, that means that – until the root cause is fixed – all future invocations of the method that caused the exception will fail.

3. DataIntegrityViolationException

This subtype of NonTransientDataAccessException is thrown when an attempt to modify data causes a violation of an integrity constraint.

In our example of the Foo class, the name column is defined as not allowing the null value:

@Column(nullable = false)
private String name;

If we attempt to save an instance without setting a value for the name, we can expect a DataIntegrityViolationException to be thrown:

@Test(expected = DataIntegrityViolationException.class)
public void whenSavingNullValue_thenDataIntegrityException() {
    Foo fooEntity = new Foo();
    fooService.create(fooEntity);
}

3.1. DuplicateKeyException

One of subclasses of the DataIntegrityViolationException is DuplicateKeyException, which is thrown when there is an attempt to save a record with a primary key that already exists or a value that is already present in a column with a unique constraint, such as attempting to insert two rows in the foo table with the same id of 1:

@Test(expected = DuplicateKeyException.class)
public void whenSavingDuplicateKeyValues_thenDuplicateKeyException() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(restDataSource);
    jdbcTemplate.execute("insert into foo(id,name) values (1,'a')");
    jdbcTemplate.execute("insert into foo(id,name) values (1,'b')");
}

4. DataRetrievalFailureException

This exception is thrown when a problem appears while retrieving data, such as looking up an object with an identifier that doesn’t exist in the database.

For example, we’re going to use the JdbcTemplate class which has a method that throws this exception:

@Test(expected = DataRetrievalFailureException.class)
public void whenRetrievingNonExistentValue_thenDataRetrievalException() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(restDataSource);
    
    jdbcTemplate.queryForObject("select * from foo where id = 3", Integer.class);
}

4.1. IncorrectResultSetColumnCountException

This exception subclass is thrown when attempting to retrieve multiple columns from a table without creating the proper RowMapper:

@Test(expected = IncorrectResultSetColumnCountException.class)
public void whenRetrievingMultipleColumns_thenIncorrectResultSetColumnCountException() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(restDataSource);

    jdbcTemplate.execute("insert into foo(id,name) values (1,'a')");
    jdbcTemplate.queryForList("select id,name from foo where id=1", Foo.class);
}

4.2. IncorrectResultSizeDataAccessException

This exception is thrown when the number of retrieved records differs from the expected one – for example, when expecting a single Integer value but retrieving two rows for the query:

@Test(expected = IncorrectResultSizeDataAccessException.class)
public void whenRetrievingMultipleValues_thenIncorrectResultSizeException() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(restDataSource);

    jdbcTemplate.execute("insert into foo(name) values ('a')");
    jdbcTemplate.execute("insert into foo(name) values ('a')");

    jdbcTemplate.queryForObject("select id from foo where name='a'", Integer.class);
}

5. DataSourceLookupFailureException

This exception is thrown when a specified data source cannot be obtained. For this example, we’ll use the JndiDataSourceLookup class to look up a nonexistent data source:

@Test(expected = DataSourceLookupFailureException.class)
public void whenLookupNonExistentDataSource_thenDataSourceLookupFailureException() {
    JndiDataSourceLookup dsLookup = new JndiDataSourceLookup();
    dsLookup.setResourceRef(true);
    DataSource dataSource = dsLookup.getDataSource("java:comp/env/jdbc/example_db");
}

6. InvalidDataAccessResourceUsageException

This exception is thrown when a resource is accessed incorrectly, for example when a user lacks SELECT rights.

In order to test this exception, we’ll need to revoke the SELECT right for the user, then run a SELECT query:

@Test(expected = InvalidDataAccessResourceUsageException.class)
public void whenRetrievingDataUserNoSelectRights_thenInvalidResourceUsageException() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(restDataSource);
    jdbcTemplate.execute("revoke select from tutorialuser");

    try {
        fooService.findAll();
    } finally {
        jdbcTemplate.execute("grant select to tutorialuser");
    }
}

Notice that we are restoring the permission on the user in the finally block.

6.1. BadSqlGrammarException

A very common subtype of InvalidDataAccessResourceUsageException is BadSqlGrammarException, which is thrown when attempting to run a query with invalid SQL:

@Test(expected = BadSqlGrammarException.class)
public void whenIncorrectSql_thenBadSqlGrammarException() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(restDataSource);
    jdbcTemplate.queryForObject("select * fro foo where id=3", Integer.class);
}

Notice, of course, the fro – which is what makes the query invalid.

7. CannotGetJdbcConnectionException

This exception is thrown when a connection attempt through JDBC fails – for example, when the database URL is incorrect. If we write the URL like the following:

jdbc.url=jdbc:mysql:3306://localhost/spring_hibernate4_exceptions?createDatabaseIfNotExist=true

Then the CannotGetJdbcConnectionException will be thrown when attempting to execute a statement:

@Test(expected = CannotGetJdbcConnectionException.class)
public void whenJdbcUrlIncorrect_thenCannotGetJdbcConnectionException() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(restDataSource);
    jdbcTemplate.execute("select * from foo");
}

8. Conclusion

In this quick tutorial we had a look at some of the most common subtypes of the NonTransientDataAccessException class.

The implementation of all examples can be found in the GitHub project. And of course all examples are using an in-memory database so you can easily run them without setting anything up.


Hibernate Criteria Queries


1. Overview

In this article we are going to discuss a very useful feature of Hibernate – the Criteria Queries.

It not only enables us to write queries without doing raw SQL, but also gives us some Object-Oriented control over the queries, which is one of the main features of Hibernate.

The Criteria API allows us to build up a criteria query object programmatically, where we can apply different kind of filtration rules and logical conditions. And the Session provides the createCriteria() API – which can be used to create a Criteria object that returns instances of the persistence object’s class when we execute a query.

2. Maven Dependencies

In order to use Hibernate make sure you add the latest version of it to the dependencies section of your pom.xml file:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>   
    <version>5.2.1.Final</version>
</dependency>

The latest version of Hibernate can be found here.

3. Simple Example using Criteria

Now since we know what a criteria query is, we can start looking at how to retrieve data using them. Let’s have a look at how to get all the instances of a particular class from the database.

Let’s start with a basic example.

We have an Item class which maps to the “ITEM” table in the database:

public class Item implements Serializable {

    private Integer itemId;
    private String itemName;
    private String itemDescription;
    private Integer itemPrice;

   // standard setters and getters
}

Let’s look at a simple criteria query which will retrieve all the rows of “ITEM” from the database:

Session session = HibernateUtil.getHibernateSession();
Criteria cr = session.createCriteria(Item.class);

List results = cr.list();

The above query is a simple demonstration of how to get all the items. Let’s see what was done, step by step:

  1. Obtain an instance of Session from the SessionFactory object
  2. Create an instance of Criteria by calling the createCriteria() method
  3. Call the list() method of the criteria object, which gives us the results

Now that we’ve covered the basics, let’s move on to some of the features of criteria query:

3.1. Using Restrictions

The add() method can be used with a Criteria object to add restrictions to a criteria query. For that, we also need to use the Restrictions class.

The following snippets demonstrate the functionality available with the Restrictions class:

To get items having price more than 1000:

cr.add(Restrictions.gt("itemPrice", 1000));

To get items having itemPrice less than 1000:

cr.add(Restrictions.lt("itemPrice", 1000));

To get items whose itemName starts with chair:

cr.add(Restrictions.like("itemName", "chair%"));

Case-insensitive form of the above restriction:

cr.add(Restrictions.ilike("itemName", "Chair%"));

To get records having itemPrice in between 100 and 200:

cr.add(Restrictions.between("itemPrice", 100, 200));

To check if the given property is null:

cr.add(Restrictions.isNull("itemDescription"));

To check if the given property is not null:

cr.add(Restrictions.isNotNull("itemDescription"));

You can also use the methods isEmpty() and isNotEmpty() to test if a List within a class is empty or not.
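For instance, assuming our entity had a collection property named tags (our Item class above does not), the check would look like this:

cr.add(Restrictions.isEmpty("tags"));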

The above code shows all the operations that can be performed with the Criteria API.

Now the question inevitably comes up: whether we can combine two or more of the above comparisons. The answer is of course yes – the Criteria API allows us to easily chain restrictions:

cr.add(Restrictions.isNotEmpty("itemDescription")) 
  .add(Restrictions.like("itemName", "chair%"));

To add two restrictions with logical operations:

Criterion greaterThanPrice = Restrictions.gt("itemPrice", 1000); 
Criterion chairItems = Restrictions.like("itemName", "Chair%");

To get items with the above-defined conditions joined with Logical OR:

LogicalExpression orExample = Restrictions.or(greaterThanPrice, chairItems); 
cr.add(orExample);

To get items matching the above-defined conditions joined with Logical AND:

LogicalExpression andExample = Restrictions.and(greaterThanPrice, chairItems);
cr.add(andExample);

3.2. Sorting

Now that we know the basic usage of Criteria, let’s have a look at the sorting functionalities of Criteria.

In the following example we order the list in an ascending order of the name and then in a descending order of the price:

List sortedItems = cr.addOrder(Order.asc("itemName"))
  .addOrder(Order.desc("itemPrice"))
  .list();

In the next section we will have a look at how to do aggregate functions.

3.3. Projections, Aggregates And Grouping Functions

So far we have covered most of the basic topics. Now let’s have a look at the different aggregate functions:

Set projections:

List itemProjected = session.createCriteria(Item.class) 
  .setProjection(Projections.rowCount())
  .add( Restrictions.eq("itemPrice", 12000)) 
  .list();

The following is an example of aggregate functions:

Aggregate function for Average:

List avgItemPriceList = session.createCriteria(Item.class)
  .setProjection(Projections.projectionList()
  .add(Projections.avg("itemPrice")))
  .list();

Other useful aggregate methods available are sum(), max(), min(), count(), etc.
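The section title also mentions grouping. While the article doesn’t show it, a sketch using Projections.groupProperty – here counting items per name – might look like this:

List groupedItems = session.createCriteria(Item.class)
  .setProjection(Projections.projectionList()
    .add(Projections.groupProperty("itemName"))
    .add(Projections.rowCount()))
  .list();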

4. Advantage Over HQL

In the previous sections we’ve covered how to use Criteria Queries.

Clearly, the main and most hard-hitting advantage of Criteria queries over HQL is the nice, clean, Object Oriented API.

We can simply write more flexible, dynamic queries compared to plain HQL. The logic can be refactored with the IDE and has all the type-safety benefits of the Java language itself.

There are of course some disadvantages as well, especially around more complex joins.

So, generally speaking, we’ll have to use the best tool for the job – that can be the Criteria API in most cases, but there are definitely cases where we’ll have to go lower-level.

5. Conclusion

In this quick writeup we focused on the basics of Criteria Queries in Hibernate, and also on some of the advanced features of the API.

And of course the code discussed here is available in the GitHub repository.

Registration with Spring – Integrate reCAPTCHA



1. Overview

In this article we’ll continue the Spring Security Registration series by adding Google reCAPTCHA to the registration process in order to differentiate humans from bots.

2. Integrating Google’s reCAPTCHA

To integrate Google’s reCAPTCHA web-service, we first need to register our site with the service, add their library to our page, and then verify the user’s captcha response with the web-service.

Let’s register our site at https://www.google.com/recaptcha/admin. The registration process generates a site-key and secret-key for accessing the web-service.

2.1. Storing the API key-pair

We store the keys in the application.properties:

google.recaptcha.key.site=6LfaHiITAAAA...
google.recaptcha.key.secret=6LfaHiITAAAA...

And expose them to Spring using a bean annotated with @ConfigurationProperties:

@Component
@ConfigurationProperties(prefix = "google.recaptcha.key")
public class CaptchaSettings {

    private String site;
    private String secret;

    // standard getters and setters
}

2.2. Displaying the Widget

Building upon the tutorial from the series, we’ll now modify registration.html to include Google’s library.

Inside our registration form, we add the reCAPTCHA widget which expects the attribute data-sitekey to contain the site-key.

The widget will append the request parameter g-recaptcha-response when submitted:

<!DOCTYPE html>
<html>
<head>

...

<script src='https://www.google.com/recaptcha/api.js'></script>
</head>
<body>

    ...

    <form action="/" method="POST" enctype="utf8">
        ...

        <div class="g-recaptcha col-sm-5"
          th:attr="data-sitekey=${@captchaSettings.getSite()}"></div>
        <span id="captchaError" class="alert alert-danger col-sm-4"
          style="display:none"></span>

3. Server Side Validation

The new request parameter encodes our site key and a unique string identifying the user’s successful completion of the challenge.

However, since we cannot discern that ourselves, we cannot trust what the user has submitted is legitimate. A server-side request is made to validate the captcha response with the web-service API.

The endpoint accepts an HTTP request on the URL https://www.google.com/recaptcha/api/siteverify, with the query parameters secret, response, and remoteip. It returns a JSON response having the schema:

{
    "success": true|false,
    "challenge_ts": timestamp,
    "hostname": string,
    "error-codes": [ ... ]
}

3.1. Retrieve User’s Response

The user’s response to the reCAPTCHA challenge is retrieved from the request parameter g-recaptcha-response using HttpServletRequest and validated with our CaptchaService. Any exception thrown while processing the response will abort the rest of the registration logic:

public class RegistrationController {

    @Autowired
    private ICaptchaService captchaService;

    ...

    @RequestMapping(value = "/user/registration", method = RequestMethod.POST)
    @ResponseBody
    public GenericResponse registerUserAccount(@Valid UserDto accountDto, HttpServletRequest request) {
        String response = request.getParameter("g-recaptcha-response");
        captchaService.processResponse(response);

        // Rest of implementation
    }

    ...
}

3.2. Validation Service

The captcha response obtained should be sanitized first. A simple regular expression is used.

If the response looks legitimate, we then make a request to the web-service with the secret-key, the captcha response, and the client’s IP address:

public class CaptchaService implements ICaptchaService {

    @Autowired
    private CaptchaSettings captchaSettings;

    @Autowired
    private RestOperations restTemplate;

    private static final Pattern RESPONSE_PATTERN = Pattern.compile("[A-Za-z0-9_-]+");

    @Override
    public void processResponse(String response) {
        if(!responseSanityCheck(response)) {
            throw new InvalidReCaptchaException("Response contains invalid characters");
        }

        URI verifyUri = URI.create(String.format(
          "https://www.google.com/recaptcha/api/siteverify?secret=%s&response=%s&remoteip=%s",
          getReCaptchaSecret(), response, getClientIP()));

        GoogleResponse googleResponse = restTemplate.getForObject(verifyUri, GoogleResponse.class);

        if(!googleResponse.isSuccess()) {
            throw new ReCaptchaInvalidException("reCaptcha was not successfully validated");
        }
    }

    private boolean responseSanityCheck(String response) {
        return StringUtils.hasLength(response) && RESPONSE_PATTERN.matcher(response).matches();
    }
}
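The getClientIP() helper referenced above isn’t shown in the article. A minimal sketch, assuming the service can autowire the current HttpServletRequest, would check the X-Forwarded-For header before falling back to the remote address:

@Autowired
private HttpServletRequest request;

private String getClientIP() {
    // behind a proxy, the original client address is in X-Forwarded-For
    String xfHeader = request.getHeader("X-Forwarded-For");
    if (xfHeader == null) {
        return request.getRemoteAddr();
    }
    return xfHeader.split(",")[0];
}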

3.3. Objectifying the Validation

A Java-bean decorated with Jackson annotations encapsulates the validation response:

@JsonInclude(JsonInclude.Include.NON_NULL)
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonPropertyOrder({
    "success",
    "challenge_ts",
    "hostname",
    "error-codes"
})
public class GoogleResponse {

    @JsonProperty("success")
    private boolean success;
    
    @JsonProperty("challenge_ts")
    private String challengeTs;
    
    @JsonProperty("hostname")
    private String hostname;
    
    @JsonProperty("error-codes")
    private ErrorCode[] errorCodes;

    @JsonIgnore
    public boolean hasClientError() {
        ErrorCode[] errors = getErrorCodes();
        if(errors == null) {
            return false;
        }
        for(ErrorCode error : errors) {
            switch(error) {
                case InvalidResponse:
                case MissingResponse:
                    return true;
            }
        }
        return false;
    }

    static enum ErrorCode {
        MissingSecret,     InvalidSecret,
        MissingResponse,   InvalidResponse;

        private static Map<String, ErrorCode> errorsMap = new HashMap<String, ErrorCode>(4);

        static {
            errorsMap.put("missing-input-secret",   MissingSecret);
            errorsMap.put("invalid-input-secret",   InvalidSecret);
            errorsMap.put("missing-input-response", MissingResponse);
            errorsMap.put("invalid-input-response", InvalidResponse);
        }

        @JsonCreator
        public static ErrorCode forValue(String value) {
            return errorsMap.get(value.toLowerCase());
        }
    }
    
    // standard getters and setters
}

As implied, a truth value in the success property means the user has been validated. Otherwise the errorCodes property will populate with the reason.

The hostname refers to the server that redirected the user to the reCAPTCHA. If you manage many domains and wish them all to share the same key-pair, you can choose to verify the hostname property yourself.
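Such a check could be a simple guard in processResponse – a hedged sketch, with the expected hostname (example.com) being purely illustrative:

if (!"example.com".equals(googleResponse.getHostname())) {
    throw new ReCaptchaInvalidException("reCaptcha hostname mismatch");
}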

3.4. Validation Failure

In the event of a validation failure, an exception is thrown, and the client needs to be instructed to create a new reCAPTCHA challenge.

We do so in the client’s registration error-handler, by invoking reset on the library’s grecaptcha widget:

register(event){
    event.preventDefault();

    var formData= $('form').serialize();
    $.post(serverContext + "user/registration", formData, function(data){
        if(data.message == "success") {
            // success handler
        }
    })
    .fail(function(data) {
        grecaptcha.reset();
        ...
        
        if(data.responseJSON.error == "InvalidReCaptcha"){ 
            $("#captchaError").show().html(data.responseJSON.message);
        }
        ...
    });
}

4. Protecting Server Resources

Malicious clients do not need to obey the rules of the browser sandbox. So our security mindset should focus on the resources exposed and how they might be abused.

4.1. Attempts Cache

It is important to understand that by integrating reCAPTCHA, every request made will cause the server to create a socket to validate the request.

While a more layered approach would be needed for true DoS mitigation, we can implement an elementary cache that restricts a client to four failed captcha responses:

public class ReCaptchaAttemptService {
    private static final int MAX_ATTEMPT = 4;
    private LoadingCache<String, Integer> attemptsCache;

    public ReCaptchaAttemptService() {
        super();
        attemptsCache = CacheBuilder.newBuilder()
          .expireAfterWrite(4, TimeUnit.HOURS).build(new CacheLoader<String, Integer>() {
            @Override
            public Integer load(String key) {
                return 0;
            }
        });
    }

    public void reCaptchaSucceeded(String key) {
        attemptsCache.invalidate(key);
    }

    public void reCaptchaFailed(String key) {
        int attempts = attemptsCache.getUnchecked(key);
        attempts++;
        attemptsCache.put(key, attempts);
    }

    public boolean isBlocked(String key) {
        return attemptsCache.getUnchecked(key) >= MAX_ATTEMPT;
    }
}

4.2. Refactoring the Validation Service

The cache is incorporated by first aborting if the client has exceeded the attempt limit. Otherwise, when processing an unsuccessful GoogleResponse, we record attempts that contain an error with the client’s response. Successful validation clears the attempts cache:

public class CaptchaService implements ICaptchaService {

    @Autowired
    private ReCaptchaAttemptService reCaptchaAttemptService;

    ...

    @Override
    public void processResponse(String response) {

        ...

        if(reCaptchaAttemptService.isBlocked(getClientIP())) {
            throw new InvalidReCaptchaException("Client exceeded maximum number of failed attempts");
        }

        ...

        GoogleResponse googleResponse = ...

        if(!googleResponse.isSuccess()) {
            if(googleResponse.hasClientError()) {
                reCaptchaAttemptService.reCaptchaFailed(getClientIP());
            }
            throw new ReCaptchaInvalidException("reCaptcha was not successfully validated");
        }
        reCaptchaAttemptService.reCaptchaSucceeded(getClientIP());
    }
}

5. Conclusion

In this article we integrated Google’s reCAPTCHA library into our registration page and implemented a service to verify the captcha response with a server-side request.

The full implementation of this tutorial is available in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.



Jackson vs Gson



1. Introduction

In this article we’ll compare the Gson and Jackson APIs for serializing and deserializing JSON data to Java objects and vice-versa.

Gson and Jackson are complete libraries offering JSON data-binding support for Java. Each is an actively developed open-source project that offers handling of complex data types and support for Java generics.

And in most cases, both libraries can deserialize to an entity without modifying an entity class, which is important in cases where a developer doesn’t have access to the entity source code.

2. Gson Maven Dependency 

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>${gson.version}</version>
</dependency>

You can get the latest version of Gson here.

3. Gson Serialization

Serialization converts Java objects to JSON output. Consider the following entities:

public class ActorGson {
    private String imdbId;
    private Date dateOfBirth;
    private List<String> filmography;
    
    // getters and setters, default constructor and field constructor omitted
}

public class Movie {
    private String imdbId;
    private String director;
    private List<ActorGson> actors;
    
    // getters and setters, default constructor and field constructor omitted
}

3.1. Simple Serialization

Let’s start with an example of Java to JSON serialization:

SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");

ActorGson rudyYoungblood = new ActorGson(
  "nm2199632",
  sdf.parse("21-09-1982"), 
  Arrays.asList("Apocalypto",
  "Beatdown", "Wind Walkers")
);
Movie movie = new Movie(
  "tt0472043", 
  "Mel Gibson",
  Arrays.asList(rudyYoungblood));

String serializedMovie = new Gson().toJson(movie);

This will result in:

{
    "imdbId": "tt0472043",
    "director": "Mel Gibson",
    "actors": [{
        "imdbId": "nm2199632",
        "dateOfBirth": "Sep 21, 1982 12:00:00 AM",
        "filmography": ["Apocalypto", "Beatdown", "Wind Walkers"]
	}]
}

By default:

  • All properties are serialized because they have no null values
  • dateOfBirth field was translated with the default Gson date pattern
  • Output is not formatted and JSON property names correspond to the Java entities

3.2. Custom Serialization

Using a custom serializer allows us to modify the standard behavior. We can format the output with HTML, handle null values, exclude properties from the output, or add new output.

ActorGsonSerializer modifies generation of JSON code for the ActorGson element:

public class ActorGsonSerializer implements JsonSerializer<ActorGson> {
    private SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");
     
    @Override
    public JsonElement serialize(ActorGson actor, Type type,
        JsonSerializationContext jsonSerializationContext) {
        
    	JsonObject actorJsonObj = new JsonObject();
        
        actorJsonObj.addProperty("<strong>IMDB Code</strong>", actor.getImdbId());
        
        actorJsonObj.addProperty("<strong>Date Of Birth</strong>", 
          actor.getDateOfBirth() != null ? 
          sdf.format(actor.getDateOfBirth()) : null);
        
        actorJsonObj.addProperty("<strong>N° Film:</strong> ",  
          actor.getFilmography()  != null ?  
          actor.getFilmography().size() : null);
       
        actorJsonObj.addProperty("filmography", actor.getFilmography() != null ? 
          convertFilmography(actor.getFilmography()) : null);
        
        return actorJsonObj;
    }
 
    private String convertFilmography(List<String> filmography) {
        return filmography.stream()
          .collect(Collectors.joining("-"));
    }
}

In order to exclude the director property, the @Expose annotation is used for properties we want to consider:

public class MovieWithNullValue {
	
    @Expose
    private String imdbId;
    private String director;
    
    @Expose
    private List<ActorGson> actors;
}

Now we can proceed with Gson object creation using the GsonBuilder class:

Gson gson = new GsonBuilder()
  .setPrettyPrinting()
  .excludeFieldsWithoutExposeAnnotation()
  .serializeNulls()
  .disableHtmlEscaping()
  .registerTypeAdapter(ActorGson.class, new ActorGsonSerializer())
  .create();
 
SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");
 
ActorGson rudyYoungblood = new ActorGson("nm2199632",
  sdf.parse("21-09-1982"), Arrays.asList("Apocalypto","Beatdown", "Wind Walkers"));

MovieWithNullValue movieWithNullValue = new MovieWithNullValue(null,
  "Mel Gibson", Arrays.asList(rudyYoungblood));
 
String serializedMovie = gson.toJson(movieWithNullValue);

The result is the following:

{
  "imdbId": null,
  "actors": [
    {
      "<strong>IMDB Code</strong>": "nm2199632",
      "<strong>Date Of Birth</strong>": "21-09-1982",
      "<strong>N° Film:</strong> ": 3,
      "filmography": "Apocalypto-Beatdown-Wind Walkers"
    }
  ]
}

Notice that:

  • the output is formatted
  • some property names are changed and contain HTML
  • null values are included, and the director field is omitted
  • Date is now in the dd-MM-yyyy format
  • a new property is present – N° Film
  • filmography is a formatted property, not the default JSON list

4. Gson Deserialization

4.1. Simple Deserialization

Deserialization converts JSON input into Java objects. To illustrate the output, we implement the toString() method in both entity classes:

public class Movie {
    @Override
    public String toString() {
      return "Movie [imdbId=" + imdbId + ", director=" + director + ",
        actors=" + actors + "]";
    }
    ...
}

public class ActorGson {
    @Override
    public String toString() {
        return "ActorGson [imdbId=" + imdbId + ", dateOfBirth=" + dateOfBirth + ",
          filmography=" + filmography + "]";
    }
    ...
}

Then we utilize the serialized JSON and run it through standard Gson deserialization:

String jsonInput = "{\"imdbId\":\"tt0472043\",\"actors\":" +
  "[{\"imdbId\":\"nm2199632\",\"dateOfBirth\":\"1982-09-21T12:00:00+01:00\","+
  "\"filmography\":[\"Apocalypto\",\"Beatdown\",\"Wind Walkers\"]}]}";
		
Movie outputMovie = new Gson().fromJson(jsonInput, Movie.class);
outputMovie.toString();

The output is our entities, populated with the data from our JSON input:

Movie [imdbId=tt0472043, director=null, actors=[ActorGson 
  [imdbId=nm2199632, dateOfBirth=Tue Sep 21 04:00:00 PDT 1982, 
  filmography=[Apocalypto, Beatdown, Wind Walkers]]]]

As was the case with the simple serializer:

  • the JSON input names must correspond with the Java entity names, or they are set to null.
  • dateOfBirth field was translated with the default Gson date pattern, ignoring the time zone.

4.2. Custom Deserialization

Using a custom deserializer allows us to modify the standard deserializer behavior. In this case, we want the date to reflect the correct time zone for dateOfBirth.  We use a custom ActorGsonDeserializer on the ActorGson entity to achieve this:

public class ActorGsonDeserializer implements JsonDeserializer<ActorGson> {

    private SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");

    @Override
    public ActorGson deserialize(JsonElement json, Type type,
      JsonDeserializationContext jsonDeserializationContext) throws JsonParseException {
        
        JsonObject jsonObject = json.getAsJsonObject();

        JsonElement jsonImdbId = jsonObject.get("imdbId");
        JsonElement jsonDateOfBirth = jsonObject.get("dateOfBirth");
        JsonArray jsonFilmography = jsonObject.getAsJsonArray("filmography");

        ArrayList<String> filmList = new ArrayList<String>();
        if (jsonFilmography != null) {
            for (int i = 0; i < jsonFilmography.size(); i++) {
                filmList.add(jsonFilmography.get(i).getAsString());
            }
        }

        ActorGson actorGson = null;
        try {
            actorGson = new ActorGson(jsonImdbId.getAsString(),
              sdf.parse(jsonDateOfBirth.getAsString()), filmList);
        } catch (ParseException e) {
            // sdf.parse throws the checked ParseException, which deserialize can't declare
            throw new JsonParseException(e);
        }
        return actorGson;
    }
}

We employed a SimpleDateFormat parser to parse the input date, accounting for the time zone.

Note that we could have decided to simply write a custom deserializer for only the Date, but the ActorGsonDeserializer offers a more detailed view of the deserialization process.

Also note that the Gson approach does not require modifying the ActorGson entity, which is ideal as we may not always have access to the input entity. We use the custom deserializer here:

String jsonInput = "{\"imdbId\":\"tt0472043\",\"actors\":"
  + "[{\"imdbId\":\"nm2199632\",\"dateOfBirth\":\"1982-09-21T12:00:00+01:00\","
  + "\"filmography\":[\"Apocalypto\",\"Beatdown\",\"Wind Walkers\"]}]}";

Gson gson = new GsonBuilder()
  .registerTypeAdapter(ActorGson.class,new ActorGsonDeserializer())
  .create();

Movie outputMovie = gson.fromJson(jsonInput, Movie.class);
outputMovie.toString();

The output is similar to the simple deserializer result, except the date uses correct time zone:

Movie [imdbId=tt0472043, director=null, actors=[ActorGson
  [imdbId=nm2199632, dateOfBirth=Tue Sep 21 12:00:00 PDT 1982, 
  filmography=[Apocalypto, Beatdown, Wind Walkers]]]]

5. Jackson Maven Dependency

<dependency> 
    <groupId>com.fasterxml.jackson.core</groupId> 
    <artifactId>jackson-databind</artifactId>   
    <version>${jackson.version}</version> 
</dependency>

You can get the latest version of Jackson here.

6. Jackson Serialization

6.1. Simple Serialization

Here we will use Jackson to obtain the same serialized content we had with Gson using the following entities. Note that the entity’s getters/setters must be public:

public class ActorJackson {
    private String imdbId;
    private Date dateOfBirth;
    private List<String> filmography;
    
    // required getters and setters, default constructor 
    // and field constructor details omitted
}

public class Movie {
    private String imdbId;
    private String director;
    private List<ActorJackson> actors;
    
    // required getters and setters, default constructor 
    // and field constructor details omitted
}

Now for the serialization itself:

SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");
ActorJackson rudyYoungblood = new ActorJackson("nm2199632",sdf.parse("21-09-1982"),
  Arrays.asList("Apocalypto","Beatdown","Wind Walkers") ); 
Movie movie = new Movie("tt0472043","Mel Gibson", Arrays.asList(rudyYoungblood)); 
ObjectMapper mapper = new ObjectMapper(); 
String jsonResult = mapper.writeValueAsString(movie);

The output is as follows:

{"imdbId":"tt0472043","director":"Mel Gibson","actors":
[{"imdbId":"nm2199632","dateOfBirth":401439600000,
"filmography":["Apocalypto","Beatdown","Wind Walkers"]}]}

Some notes of interest:

  • ObjectMapper is our Jackson serializer/deserializer
  • The output JSON is not formatted
  • By default, a Java Date is translated to a long value (see the sketch below)
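If the long timestamp isn’t what we want, we can configure the date format on the mapper up front – a quick sketch using the same setDateFormat technique the article applies later for deserialization:

ObjectMapper mapper = new ObjectMapper();
mapper.setDateFormat(new SimpleDateFormat("dd-MM-yyyy"));
String jsonResult = mapper.writeValueAsString(movie);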

6.2. Custom Serialization

We can create a Jackson serializer for ActorJackson element generation by extending StdSerializer for our entity. Again note that the entity getters/setters must be public:

public class ActorJacksonSerializer extends StdSerializer<ActorJackson> {

    private SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");

    public ActorJacksonSerializer(Class t) {
        super(t);
    }

    @Override
    public void serialize(ActorJackson actor, JsonGenerator jsonGenerator,
      SerializerProvider serializerProvider) throws IOException {

        jsonGenerator.writeStartObject();
        jsonGenerator.writeStringField("imdbId", actor.getImdbId());
        jsonGenerator.writeObjectField("dateOfBirth",
          actor.getDateOfBirth() != null ?
          sdf.format(actor.getDateOfBirth()) : null);
	
        jsonGenerator.writeNumberField("N° Film: ",
          actor.getFilmography() != null ? actor.getFilmography().size() : null);
        jsonGenerator.writeStringField("filmography", actor.getFilmography()
          .stream().collect(Collectors.joining("-")));

        jsonGenerator.writeEndObject();
    }
}

We create a Movie entity to allow ignoring of the director field:

public class MovieWithNullValue {
	
    private String imdbId;
    
    @JsonIgnore
    private String director;
    
    private List<ActorJackson> actors;
    
    // required getters and setters, default constructor
    // and field constructor details omitted
}

Now we can proceed with a custom ObjectMapper creation and setup:

SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");

ActorJackson rudyYoungblood = new ActorJackson(
  "nm2199632", 
  sdf.parse("21-09-1982"), 
  Arrays.asList("Apocalypto", "Beatdown","Wind Walkers"));
MovieWithNullValue movieWithNullValue = 
  new MovieWithNullValue(null,"Mel Gibson", Arrays.asList(rudyYoungblood));

SimpleModule module = new SimpleModule();
module.addSerializer(new ActorJacksonSerializer(ActorJackson.class));
ObjectMapper mapper = new ObjectMapper();
String jsonResult = mapper.registerModule(module)
  .writer(new DefaultPrettyPrinter())
  .writeValueAsString(movieWithNullValue);

The output is formatted JSON that handles null values, formats the date, excludes the director field, and shows the new N° Film output:

{
  "actors" : [ {
    "imdbId" : "nm2199632",
    "dateOfBirth" : "21-09-1982",
    "N° Film: " : 3,
    "filmography" : "Apocalypto-Beatdown-Wind Walkers"
  } ],
  "imdbID" : null
}

7. Jackson Deserialization

7.1. Simple Deserialization

To illustrate the output, we implement the toString() method in both Jackson entity classes:

public class Movie {
    @Override
    public String toString() {
        return "Movie [imdbId=" + imdbId + ", director=" + director
          + ", actors=" + actors + "]";
    }
    ...
}

public class ActorJackson {
    @Override
    public String toString() {
        return "ActorJackson [imdbId=" + imdbId + ", dateOfBirth=" + dateOfBirth
          + ", filmography=" + filmography + "]";
    }
    ...
}

Then we utilize the serialized JSON and run it through Jackson deserialization:

String jsonInput = "{\"imdbId\":\"tt0472043\",\"actors\":"
  + "[{\"imdbId\":\"nm2199632\",\"dateOfBirth\":\"1982-09-21T12:00:00+01:00\","
  + "\"filmography\":[\"Apocalypto\",\"Beatdown\",\"Wind Walkers\"]}]}";
ObjectMapper mapper = new ObjectMapper();
Movie movie = mapper.readValue(jsonInput, Movie.class);

The output is our entities, populated with the data from our JSON input:

Movie [imdbId=tt0472043, director=null, actors=[ActorJackson 
  [imdbId=nm2199632, dateOfBirth=Tue Sep 21 04:00:00 PDT 1982, 
  filmography=[Apocalypto, Beatdown, Wind Walkers]]]]

As was the case with the simple serializer:

  • the JSON input names must correspond with the Java entity names, or they are set to null,
  • dateOfBirth field was translated with the default Jackson date pattern, ignoring the time zone.

7.2. Custom Deserialization

Using a custom deserializer allows us to modify the standard deserializer behavior.

In this case, we want the date to reflect the correct time zone for dateOfBirth, so we add a DateFormatter to our Jackson ObjectMapper:

String jsonInput = "{\"imdbId\":\"tt0472043\",\"director\":\"Mel Gibson\","
  + "\"actors\":[{\"imdbId\":\"nm2199632\",\"dateOfBirth\":\"1982-09-21T12:00:00+01:00\","
  + "\"filmography\":[\"Apocalypto\",\"Beatdown\",\"Wind Walkers\"]}]}";

ObjectMapper mapper = new ObjectMapper();
DateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
mapper.setDateFormat(df);
		
Movie movie = mapper.readValue(jsonInput, Movie.class);
movie.toString();

The output reflects the correct time zone with the date:

Movie [imdbId=tt0472043, director=Mel Gibson, actors=[ActorJackson 
  [imdbId=nm2199632, dateOfBirth=Tue Sep 21 12:00:00 PDT 1982, 
  filmography=[Apocalypto, Beatdown, Wind Walkers]]]]

This solution is clean and simple.

Alternatively, we could have created a custom deserializer for the ActorJackson class, registered this module with our ObjectMapper, and deserialized the date using the @JsonDeserialize annotation on the ActorJackson entity.

The disadvantage of that approach is the need to modify the entity, which may not be ideal for cases when we don’t have access to the input entity classes; a sketch of it follows.
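For completeness, here’s a hedged sketch of that alternative – a custom Date deserializer hooked in via @JsonDeserialize; the class name is illustrative:

public class CustomDateDeserializer extends JsonDeserializer<Date> {

    private SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");

    @Override
    public Date deserialize(JsonParser jsonParser, DeserializationContext context)
      throws IOException {
        try {
            return sdf.parse(jsonParser.getText());
        } catch (ParseException e) {
            throw new IOException(e);
        }
    }
}

public class ActorJackson {

    // this annotation is the entity modification the article warns about
    @JsonDeserialize(using = CustomDateDeserializer.class)
    private Date dateOfBirth;

    // rest of the entity unchanged
}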

8. Conclusion

Both Gson and Jackson are good options for serializing/deserializing JSON data, simple to use and well documented.

Advantages of Gson:

  • Simplicity of toJson/fromJson in the simple cases
  • For deserialization, do not need access to the Java entities

Advantages of Jackson:

  • Built into all JAX-RS implementations (Jersey, Apache CXF, RESTEasy, Restlet) and into the Spring framework
  • Extensive annotation support

You can find the code for Gson and Jackson on GitHub.



Introduction to JSF Expression Language 3.0



1. Overview

In this article, we’ll look at the latest features, improvements and compatibility issues of Expression Language, version 3.0 (EL 3.0).

This is the latest version as of the time of this writing and ships with more recent JavaEE application servers (JBoss EAP 7 and Glassfish 4 are good examples that have implemented support for it).

The article is focused on the developments in EL 3.0 only – to learn more about Expression Language in general, read the EL version 2.2 article first.

2. Prerequisites

The examples shown in the article have been tested against Tomcat 8 as well. In order to use EL 3.0, you must add the following dependency:

<dependency>
    <groupId>javax.el</groupId>
    <artifactId>javax.el-api</artifactId>
    <version>3.0.0</version>
</dependency>

You can always check the Maven repository for the latest dependency by following this link.

3. Lambda Expressions

The latest EL iteration provides very robust support for lambda expressions. Lambda expressions were introduced in Java SE 8, but support for them in EL comes with Java EE 7.

The implementation here is full-featured, allowing for a great deal of flexibility (and some implied risk) in EL use and evaluation.

3.1. Lambda EL Value Expressions

Basic use of this functionality allows us to specify a lambda expression as the value type in an EL value expression:

<h:outputText id="valueOutput" 
  value="#{(x->x*x*x);(ELBean.pageCounter)}"/>

Extending from that, one can name the lambda function in EL for reuse in compound statements, just like you would with a lambda expression in Java SE. Compound lambda expressions are separated by a semi-colon (;):

<h:outputText id="valueOutput" 
  value="#{cube=(x->x*x*x);cube(ELBean.pageCounter)}"/>

This snippet assigns the function to the cube identifier, which is then available for reuse immediately.

3.2. Passing Lambda Expressions to the Backing Bean

Let’s take this a little further: we can get a lot of flexibility by encapsulating logic in an EL expression (as a lambda) and passing it to the JSF backing bean:

<h:outputText id="valueOutput" 
  value="#{ELBean.multiplyValue(x->x*x*x)}"/>

This now allows us to process the whole lambda expression as an instance of javax.el.LambdaExpression:

public String multiplyValue(LambdaExpression expr){
    String theResult = (String) expr.invoke(FacesContext
      .getCurrentInstance().getELContext(), pageCounter);
    return theResult;
}

This is a very powerful feature that allows:

  • A clean way to package logic, providing for a very flexible functional programming paradigm. The backing bean logic above could be conditional based on values pulled in from different sources.
  • An easy way to introduce lambda support in pre-JDK 8 code-bases that might not be ready to upgrade.
  • A powerful tool in using the new Streams/Collections API.

4. Collections API Enhancements

The support for the Collections API in earlier versions of EL was somewhat lacking. EL 3.0 has introduced major API improvements in its support for the Java Collections, and just like the lambda expressions, EL 3.0 provides JDK 8 Streaming support within Java EE 7.

4.1. Dynamic Collections Definition

New in 3.0, we can now dynamically define ad-hoc data structures in EL:

  • Lists:
   <h:dataTable var="listItem" value="#{['1','2','3']}">
       <h:column id="nameCol">
           <h:outputText id="name" value="#{listItem}"/>
       </h:column>
   </h:dataTable>
  • Sets:
   <h:dataTable var="setResult" value="#{{'1','2','3'}}">
    ....
   </h:dataTable>

        Note: As with normal Java Sets, the order of the elements is unpredictable when listed

  • Maps:
   <h:dataTable var="mapResult" 
     value="#{{'one':'1','two':'2','three':'3'}}">
 

Tip: A common mistake in textbooks (and even in the official documentation from Oracle) when defining dynamic maps is to use double quotes (“) instead of single quotes for the Map keys – this will result in an EL compilation error.

4.2. Advanced Collection Operations

With EL 3.0, there is support for advanced query semantics that combine the power of lambda expressions, the new streaming API and SQL-like operations such as joins and grouping. We won't cover these in depth here as they are advanced topics, but let's look at a sample to demonstrate their power:

<h:dataTable var="streamResult" 
  value="#{['1','2','3'].stream().filter(x-> x>1).toList()}">
    <h:column id="nameCol">
        <h:outputText id="name" value="#{streamResult}"/>
    </h:column>
</h:dataTable>

The table above filters the backing list using the lambda expression passed to filter(). Next, we can also perform aggregate operations:

 <h:outputLabel id="avgLabel" for="avg" 
   value="Average of integer list value"/>
 <h:outputText id="avg" 
   value="#{['1','2','3'].stream().average().get()}"/>

The output text avg will compute the average of the numbers in the list. Both of these operations are null-safe by way of the new Optional API (another improvement on previous versions).

Remember that support for this doesn't require JDK 8, just Java EE 7/EL 3.0. What this means is that you're able to do most of the JDK 8 Stream operations in EL, but not in the backing bean Java code.

Tip: You can use the JSTL <c:set/> tag to declare your data structure as a page-level variable and manipulate that instead throughout the JSF page:

 <c:set var='pageLevelNumberList' value="#{[1,2,3]}"/>

You can now refer to “#{pageLevelNumberList}” throughout the page as if it were a bona fide JSF component or bean. This allows a significant amount of reuse throughout the page:

<h:outputText id="avg" 
  value="#{pageLevelNumberList.stream().average().get()}"/>

5. Static Fields and Methods

There was no support for static field, method or Enum access in previous versions of EL. Things have changed.

First we have to manually import the class containing the constants into the EL context. This is ideally done as early as possible. Here we're doing it in the @PostConstruct initializer of the JSF managed bean (a ServletContextListener is also a viable candidate):

 @PostConstruct
 public void init() {
     FacesContext.getCurrentInstance()
       .getApplication().addELContextListener(new ELContextListener() {
         @Override
         public void contextCreated(ELContextEvent evt) {
             evt.getELContext().getImportHandler()
              .importClass("com.baeldung.el.controllers.ELSampleBean");
         }
     });
 }

Then we define a String constant field (or an Enum if you choose) in the desired class:

public static final String constantField 
  = "THIS_IS_NOT_CHANGING_ANYTIME_SOON";

After which we can now access the variable in EL:

 <h:outputLabel id="staticLabel" 
   for="staticFieldOutput" value="Constant field access: "/>
 <h:outputText id="staticFieldOutput" 
   value="#{ELSampleBean.constantField}"/>

Per the EL 3.0 specification, any class outside of java.lang.* needs to be manually imported as shown. It’s only after doing this that the constants defined in a class are available in EL. The import is ideally done as part of the initialization of the JSF runtime.

A few notes are necessary here:

  • The syntax requires that the fields and methods be public, static (and final in the case of methods)
  • The syntax changed between the initial draft of the EL 3.0 specification and the release version. So in some textbooks, you might still find something that looks like:
    T(YourClass).yourStaticVariableOrMethod

    This won’t work in practice (a design change to simplify the syntax was decided late into the implementation cycle)

  • The final syntax that was released still came out with a bug, as seen in Glassfish (and a similar bug in Tomcat 8.0.9). It's important to be running the latest versions of these.

6. Conclusion

We’ve examined some of the highlights in the latest EL implementation. Major improvements were made to bring cool new features like lambda and streams flexibility to the API.

With the flexibility that we now have in EL, it’s important to remember one of the design objectives of the JSF framework: clean separation of concerns with the use of the MVC pattern.

So it’s worth noting that the latest improvements to the API may open us up to anti-patterns in JSF, because EL now has the capability to do real business logic – more so than before. And so it’s definitely important to have that in mind during a real-world implementation, to make sure responsibilities are neatly separated.

And of course, the examples from the article can be found on GitHub.


Introduction to Spring MVC HandlerInterceptor

1. Introduction

In this tutorial we’ll focus on understanding the Spring MVC HandlerInterceptor and how to use it correctly.

2. Spring MVC Handler

And in order to understand the interceptor, let's take a step back and look at the HandlerMapping. This maps a method to a URL, so that the DispatcherServlet will be able to invoke it when processing a request.

And the DispatcherServlet uses the HandlerAdapter to actually invoke the method.

Now that we understand the overall context – this is where the handler interceptor comes in. We’ll use the HandlerInterceptor to perform actions before handling, after handling or after completion (when the view is rendered) of a request.

The interceptor can be used for cross-cutting concerns and to avoid repetitive handler code like: logging, changing globally used parameters in Spring model etc.

In the next few sections that's exactly what we're going to be looking at – the differences between various interceptor implementations.

3. Maven Dependencies

In order to use interceptors, you need to include the following dependency in the dependencies section of your pom.xml file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>4.3.2.RELEASE</version>
</dependency>

The latest version can be found here.

4. Spring Handler Interceptor

Interceptors working with the framework's HandlerMapping must implement the HandlerInterceptor interface.

This interface contains three main methods:

  • preHandle() – called before the actual handler is executed, but the view is not generated yet
  • postHandle() – called after the handler is executed
  • afterCompletion() – called after the complete request has finished and the view has been generated

These three methods provide flexibility to do all kinds of pre- and post-processing.

And a quick note – the main difference between HandlerInterceptor and HandlerInterceptorAdapter is that in the first one we need to override all three methods: preHandle(), postHandle() and afterCompletion(), whereas in the second we may implement only the required methods.
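
For example, a minimal sketch of the adapter approach, overriding only the method we need (AuditInterceptor is a hypothetical name):

public class AuditInterceptor extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(
      HttpServletRequest request, HttpServletResponse response, Object handler) {
        // only preHandle() is overridden; postHandle() and afterCompletion()
        // fall back to the adapter's empty implementations
        return true;
    }
}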

A quick note before we go further – if you want to skip the theory and jump straight to examples, jump right into section 5.

Here’s what a simple preHandle() implementation will look like:

@Override
public boolean preHandle(
  HttpServletRequest request,
  HttpServletResponse response, 
  Object handler) throws Exception {
    // your code
    return true;
}

Notice the method returns a boolean value – which tells Spring if the request should be further processed by a handler (true) or not (false).

Next, we have an implementation of postHandle():

@Override
public void postHandle(
  HttpServletRequest request, 
  HttpServletResponse response,
  Object handler, 
  ModelAndView modelAndView) throws Exception {
    // your code
}

This method is called immediately after the request is processed by HandlerAdapter, but before generating a view.

And it can of course be used in many ways – for example, we may add an avatar of a logged-in user into the model.

The final method we need to implement in the custom HandlerInterceptor implementation is afterCompletion():

@Override
public void afterCompletion(
  HttpServletRequest request, 
  HttpServletResponse response,
  Object handler, Exception ex) {
    // your code
}

When the view is successfully generated, we can use this hook to do things like gather additional statistics related to the request.

A final note to remember is that a HandlerInterceptor is registered to the DefaultAnnotationHandlerMapping bean, which is responsible for applying interceptors to any class marked with a @Controller annotation. Moreover, you may specify any number of interceptors in your web application.

5. Custom Logger Interceptor

In this example we will focus on logging in our web application. First of all, our class needs to extend HandlerInterceptorAdapter:

public class LoggerInterceptor extends HandlerInterceptorAdapter {
    ...
}

We also need to enable logging in our interceptor:

private static Logger log = LoggerFactory.getLogger(LoggerInterceptor.class);

This allows Log4J to display logs, as well as indicate which class is currently logging information to the specified output.

Next, let’s focus on custom interceptor implementations:

5.1. Method preHandle()

This method is called before handling a request; it returns true to allow the framework to send the request further to the handler method (or to the next interceptor). If the method returns false, Spring assumes that the request has been handled and no further processing is needed.

We can use this hook to log information about the request's parameters: where the request comes from, etc.

In our example, we are logging this info using a simple Log4J logger:

@Override
public boolean preHandle(
  HttpServletRequest request,
  HttpServletResponse response, 
  Object handler) throws Exception {
    
    log.info("[preHandle][" + request + "]" + "[" + request.getMethod()
      + "]" + request.getRequestURI() + getParameters(request));
    
    return true;
}

As we can see, we’re logging some basic information about the request.

In case we run into a password here, we’ll need to make sure we don’t log that of course.

A simple option is to replace passwords, and any other sensitive type of data, with stars.

Here’s a quick implementation of how that can be done:

private String getParameters(HttpServletRequest request) {
    StringBuilder posted = new StringBuilder();
    Enumeration<String> e = request.getParameterNames();
    if (e != null) {
        posted.append("?");
        // iterate only when the enumeration exists, avoiding an NPE on a null result
        while (e.hasMoreElements()) {
            if (posted.length() > 1) {
                posted.append("&");
            }
            String curr = e.nextElement();
            posted.append(curr).append("=");
            if (curr.contains("password") 
              || curr.contains("pass")
              || curr.contains("pwd")) {
                // mask sensitive values
                posted.append("*****");
            } else {
                posted.append(request.getParameter(curr));
            }
        }
    }
    String ip = request.getHeader("X-FORWARDED-FOR");
    String ipAddr = (ip == null) ? getRemoteAddr(request) : ip;
    if (ipAddr != null && !ipAddr.equals("")) {
        posted.append("&_psip=").append(ipAddr); 
    }
    return posted.toString();
}

Finally, we’re aiming to get the source IP address of the HTTP request.

Here’s a simple implementation:

private String getRemoteAddr(HttpServletRequest request) {
    String ipFromHeader = request.getHeader("X-FORWARDED-FOR");
    if (ipFromHeader != null && ipFromHeader.length() > 0) {
        log.debug("ip from proxy - X-FORWARDED-FOR : " + ipFromHeader);
        return ipFromHeader;
    }
    return request.getRemoteAddr();
}

5.2. Method postHandle()

This hook runs after the HandlerAdapter has invoked the handler, but before the DispatcherServlet renders the view.

We can use this method to add additional attributes to the ModelAndView or to determine the time taken by the handler method to process a client's request.

In our case, we simply log the request just before the DispatcherServlet renders the view:

@Override
public void postHandle(
  HttpServletRequest request, 
  HttpServletResponse response,
  Object handler, 
  ModelAndView modelAndView) throws Exception {
    
    log.info("[postHandle][" + request + "]");
}

5.3. Method afterCompletion()

When a request is finished and the view is rendered, we may obtain request and response data, as well as information about exceptions, if any occurred:

@Override
public void afterCompletion(
  HttpServletRequest request, HttpServletResponse response,Object handler, Exception ex) 
  throws Exception {
    if (ex != null){
        ex.printStackTrace();
    }
    log.info("[afterCompletion][" + request + "][exception: " + ex + "]");
}

6. Configuration

To add our interceptors to the Spring configuration, we need to override the addInterceptors() method inside the WebConfig class that extends WebMvcConfigurerAdapter:

@Override
public void addInterceptors(InterceptorRegistry registry) {
    registry.addInterceptor(new LoggerInterceptor());
}

We may achieve the same configuration by editing our XML Spring configuration file:

<mvc:interceptors>
    <bean id="loggerInterceptor" class="org.baeldung.web.interceptor.LoggerInterceptor"/>
</mvc:interceptors>

With this configuration in place, the interceptor is active and all requests in the application are properly logged.

Note that if multiple Spring interceptors are configured, their preHandle() methods are executed in the order of configuration, whereas the postHandle() and afterCompletion() methods are invoked in reverse order.
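
For illustration, here's a minimal sketch with two interceptors (FirstInterceptor and SecondInterceptor are hypothetical names):

@Override
public void addInterceptors(InterceptorRegistry registry) {
    // preHandle(): FirstInterceptor runs first, then SecondInterceptor
    // postHandle()/afterCompletion(): SecondInterceptor runs first, then FirstInterceptor
    registry.addInterceptor(new FirstInterceptor());
    registry.addInterceptor(new SecondInterceptor());
}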

7. Conclusion

This tutorial is a quick introduction to intercepting HTTP requests using a Spring MVC HandlerInterceptor.

All examples and configurations are available here on GitHub.


Hibernate Second-Level Cache

1. Overview

One of the advantages of database abstraction layers such as ORM (object-relational mapping) frameworks is their ability to transparently cache data retrieved from the underlying store. This helps eliminate database-access costs for frequently accessed data.

Performance gains can be significant if read/write ratios of cached content are high, especially for entities which consist of large object graphs.

In this article we explore Hibernate second-level cache.

We explain some basic concepts and as always we illustrate everything with simple examples. We use JPA and fall back to Hibernate native API only for those features that are not standardized in JPA.

2. What Is a Second-Level Cache?

Like most other fully-equipped ORM frameworks, Hibernate has the concept of a first-level cache. This is a session-scoped cache which ensures that each entity instance is loaded only once in the persistent context.

Once the session is closed, the first-level cache is terminated as well. This is actually desirable, as it allows concurrent sessions to work with entity instances in isolation from each other.
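
To illustrate the session scope of the first-level cache, here's a minimal sketch (it assumes a Foo entity with the given id already exists):

// within a single persistence context, repeated lookups return the same instance
Foo first = entityManager.find(Foo.class, 1L);
Foo second = entityManager.find(Foo.class, 1L);

// only one SELECT is issued; both references point to the same object
assertSame(first, second);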

On the other hand, second-level cache is SessionFactory-scoped, meaning it is shared by all sessions created with the same session factory. When an entity instance is looked up by its id (either by application logic or by Hibernate internally, e.g. when it loads associations to that entity from other entities), and if second-level caching is enabled for that entity, the following happens:

  • If an instance is already present in the first-level cache, it is returned from there
  • If an instance is not found in the first-level cache, and the corresponding instance state is cached in the second-level cache, then the data is fetched from there and an instance is assembled and returned
  • Otherwise, the necessary data are loaded from the database and an instance is assembled and returned

Once the instance is stored in the persistence context (first-level cache), it is returned from there in all subsequent calls within the same session until the session is closed or the instance is manually evicted from the persistence context. Also, the loaded instance state is stored in L2 cache if it was not there already.

3. Region Factory

Hibernate second-level caching is designed to be unaware of the actual cache provider used. Hibernate only needs to be provided with an implementation of the org.hibernate.cache.spi.RegionFactory interface which encapsulates all details specific to actual cache providers. Basically, it acts as a bridge between Hibernate and cache providers.

In this article we use Ehcache as a cache provider, which is a mature and widely used cache. You can pick any other provider of course, as long as there is an implementation of a RegionFactory for it.

We add the Ehcache region factory implementation to the classpath with the following Maven dependency:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-ehcache</artifactId>
    <version>5.2.2.Final</version>
</dependency>

Take a look here for the latest version of hibernate-ehcache. However, make sure that the hibernate-ehcache version matches the Hibernate version you use in your project; e.g. if you use hibernate-ehcache 5.2.2.Final as in this example, then the version of Hibernate should also be 5.2.2.Final.

The hibernate-ehcache artifact has a dependency on the Ehcache implementation itself, which is thus transitively included in the classpath as well.

4. Enabling Second-Level Caching

With the following two properties we tell Hibernate that L2 caching is enabled and we give it the name of the region factory class:

hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory

For example, in persistence.xml it would look like:

<properties>
    ...
    <property name="hibernate.cache.use_second_level_cache" value="true"/>
    <property name="hibernate.cache.region.factory_class" 
      value="org.hibernate.cache.ehcache.EhCacheRegionFactory"/>
    ...
</properties>

To disable second-level caching (for debugging purposes for example), just set hibernate.cache.use_second_level_cache property to false.

5. Making an Entity Cacheable

In order to make an entity eligible for second-level caching, we annotate it with the Hibernate-specific @org.hibernate.annotations.Cache annotation and specify a cache concurrency strategy.

Some developers consider it a good convention to add the standard @javax.persistence.Cacheable annotation as well (although it's not required by Hibernate), so an entity class implementation might look like this:

@Entity
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Foo {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "ID")
    private long id;

    @Column(name = "NAME")
    private String name;
    
    // getters and setters
}

For each entity class, Hibernate will use a separate cache region to store the state of instances of that class. The region name is the fully qualified class name.

For example, Foo instances are stored in a cache named org.baeldung.persistence.model.Foo in Ehcache.

To verify that caching is working, we may write a quick test like this:

Foo foo = new Foo();
fooService.create(foo);
fooService.findOne(foo.getId());
int size = CacheManager.ALL_CACHE_MANAGERS.get(0)
  .getCache("org.baeldung.persistence.model.Foo").getSize();
assertThat(size, greaterThan(0));

Here we use Ehcache API directly to verify that org.baeldung.persistence.model.Foo cache is not empty after we load a Foo instance.

You could also enable logging of SQL generated by Hibernate and invoke fooService.findOne(foo.getId()) multiple times in the test to verify that the select statement for loading Foo is printed only once (the first time), meaning that in subsequent calls the entity instance is fetched from the cache.
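
For instance, with hibernate.show_sql=true, a sketch of such a test might look like this:

Foo foo = new Foo();
fooService.create(foo);

fooService.findOne(foo.getId()); // first call: a SELECT statement is logged
fooService.findOne(foo.getId()); // no SELECT: the instance state comes from the cache
fooService.findOne(foo.getId()); // no SELECT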

6. Cache Concurrency Strategy

Based on use cases, we are free to pick one of the following cache concurrency strategies:

  • READ_ONLY: Used only for entities that never change (an exception is thrown if an attempt to update such an entity is made). It is very simple and performant, and very suitable for static reference data that doesn't change (see the sketch after this list)
  • NONSTRICT_READ_WRITE: Cache is updated after a transaction that changed the affected data has been committed. Thus, strong consistency is not guaranteed and there is a small time window in which stale data may be obtained from cache. This kind of strategy is suitable for use cases that can tolerate eventual consistency
  • READ_WRITE: This strategy guarantees strong consistency which it achieves by using ‘soft’ locks: When a cached entity is updated, a soft lock is stored in the cache for that entity as well, which is released after the transaction is committed. All concurrent transactions that access soft-locked entries will fetch the corresponding data directly from database
  • TRANSACTIONAL: Cache changes are done in distributed XA transactions. A change in a cached entity is either committed or rolled back in both database and cache in the same XA transaction
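
For the READ_ONLY strategy mentioned above, a minimal sketch of a reference-data entity might look like this (Country is a hypothetical entity, not part of the example project):

@Entity
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class Country {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private String name;

    // getters and setters
}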

7. Cache Management

If expiration and eviction policies are not defined, the cache could grow indefinitely and eventually consume all available memory. In most cases, Hibernate leaves cache management duties like these to cache providers, as they are indeed specific to each cache implementation.

For example, we could define the following Ehcache configuration to limit the maximum number of cached Foo instances to 1000:

<ehcache>
    <cache name="org.baeldung.persistence.model.Foo" maxElementsInMemory="1000" />
</ehcache>

8. Collection Cache

Collections are not cached by default, and we need to explicitly mark them as cacheable. For example:

@Entity
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Foo {

    ...

    @Cacheable
    @org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    @OneToMany
    private Collection<Bar> bars;

    // getters and setters
}

9. Internal Representation of Cached State

Entities are not stored in second-level cache as Java instances, but rather in their disassembled (hydrated) state:

  • Id (primary key) is not stored (it is stored as part of the cache key)
  • Transient properties are not stored
  • Collections are not stored (see below for more details)
  • Non-association property values are stored in their original form
  • Only id (foreign key) is stored for ToOne associations

This reflects the general Hibernate second-level cache design, in which the cache model mirrors the underlying relational model – this is space-efficient and makes it easy to keep the two synchronized.

9.1. Internal Representation of Cached Collections

We already mentioned that we have to explicitly indicate that a collection (OneToMany or ManyToMany association) is cacheable, otherwise it is not cached.

Actually, Hibernate stores collections in separate cache regions, one for each collection. The region name is the fully qualified class name plus the name of the collection property, for example: org.baeldung.persistence.model.Foo.bars. This gives us the flexibility to define separate cache parameters for collections, e.g. an eviction/expiration policy.
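
For example, a sketch of an Ehcache entry for the bars collection region (the size limit is purely illustrative):

<ehcache>
    <cache name="org.baeldung.persistence.model.Foo.bars" maxElementsInMemory="500" />
</ehcache>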

Also, it is important to mention that only ids of entities contained in a collection are cached for each collection entry, which means that in most cases it is a good idea to make the contained entities cacheable as well.
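
Following that advice, a minimal sketch of making the contained Bar entity cacheable as well:

@Entity
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Bar {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    // getters and setters
}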

10. Cache Invalidation for HQL DML-Style Queries and Native Queries

When it comes to DML-style HQL (insert, update and delete HQL statements), Hibernate is able to determine which entities are affected by such operations:

entityManager.createQuery("update Foo set … where …").executeUpdate();

In this case all Foo instances are evicted from L2 cache, while other cached content remains unchanged.

However, when it comes to native SQL DML statements, Hibernate cannot guess what is being updated, so it invalidates the entire second-level cache:

session.createNativeQuery("update FOO set … where …").executeUpdate();

This is probably not what you want! The solution is to tell Hibernate which entities are affected by native DML statements, so that it can evict only entries related to Foo entities:

Query nativeQuery = entityManager.createNativeQuery("update FOO set ... where ...");
nativeQuery.unwrap(org.hibernate.SQLQuery.class).addSynchronizedEntityClass(Foo.class);
nativeQuery.executeUpdate();

We have to fall back to the Hibernate native SQLQuery API, as this feature is not (yet) defined in JPA.

Note that the above applies only for DML statements (insert, update, delete and native function/procedure calls). Native select queries do not invalidate cache.

11. Query Cache

Results of HQL queries can also be cached. This is useful if you frequently execute a query on entities that rarely change.

To enable query cache, set the value of hibernate.cache.use_query_cache property to true:

hibernate.cache.use_query_cache=true

Then, for each query you have to explicitly indicate that the query is cacheable (via an org.hibernate.cacheable query hint):

entityManager.createQuery("select f from Foo f")
  .setHint("org.hibernate.cacheable", true)
  .getResultList();

11.1. Query Cache Best Practices

Here are some guidelines and best practices related to query caching:

  • As is the case with collections, only the ids of entities returned by a cacheable query are cached, so it is strongly recommended that the second-level cache be enabled for such entities.
  • There is one cache entry per each combination of query parameter values (bind variables) for each query, so queries for which you expect lots of different combinations of parameter values are not good candidates for caching.
  • Queries that involve entity classes for which there are frequent changes in the database are not good candidates for caching either, because they will be invalidated whenever there is a change related to any of the entity classes participating in the query, regardless of whether the changed instances are cached as part of the query result or not.
  • By default, all query cache results are stored in the org.hibernate.cache.internal.StandardQueryCache region. As with entity/collection caching, you can customize cache parameters for this region to define eviction and expiration policies according to your needs. For each query you can also specify a custom region name in order to provide different settings for different queries (see the sketch after this list).
  • For all tables that are queried as part of cacheable queries, Hibernate keeps last update timestamps in a separate region named org.hibernate.cache.spi.UpdateTimestampsCache. Being aware of this region is very important if you use query caching, because Hibernate uses it to verify that cached query results are not stale. The entries in this cache must not be evicted/expired as long as there are cached query results for the corresponding tables in query results regions. It is best to turn off automatic eviction and expiration for this cache region, as it does not consume lots of memory anyway.
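
Regarding custom region names, here's a minimal sketch using the org.hibernate.cacheRegion query hint (the region name is illustrative):

entityManager.createQuery("select f from Foo f")
  .setHint("org.hibernate.cacheable", true)
  .setHint("org.hibernate.cacheRegion", "query.foo")
  .getResultList();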

12. Conclusion

In this article we looked at how to set up Hibernate second-level cache. We saw that it is fairly easy to configure and use, as Hibernate does all the heavy lifting behind the scenes making second-level cache utilization transparent to the application business logic.

The implementation of this Hibernate Second-Level Cache Tutorial is available on GitHub. This is a Maven-based project, so it should be easy to import and run as it is.

Guide To CompletableFuture

1. Introduction

This article is a guide to the functionality and use cases of the CompletableFuture class – introduced as a Java 8 Concurrency API improvement.

2. Asynchronous Computation in Java

Asynchronous computation is difficult to reason about. Usually we want to think of any computation as a series of steps, but in the case of asynchronous computation, actions represented as callbacks tend to be either scattered across the code or deeply nested inside each other. Things get even worse when we need to handle errors that might occur during one of the steps.

The Future interface was added in Java 5 to serve as a result of asynchronous computation, but it did not have any methods to combine these computations or handle possible errors.

In Java 8, the CompletableFuture class was introduced. Along with the Future interface, it also implements the CompletionStage interface. This interface defines the contract for an asynchronous computation step that can be combined with other steps.

CompletableFuture is at the same time a building block and a framework with about 50 different methods for composing, combining and executing asynchronous computation steps, as well as handling errors.

Such a large API can be overwhelming, but these methods mostly fall into several clear and distinct use cases.

3. Using CompletableFuture as a Simple Future

First of all, the CompletableFuture class implements the Future interface, so you can use it as a Future implementation, but with additional completion logic.

For example, you can create an instance of this class with the no-arg constructor to represent some future result, hand it out to the consumers, and complete it at some time in the future using the complete method. The consumers may use the get method to block the current thread until this result is provided.

In the example below we have a method that creates a CompletableFuture instance, then spins off some computation in another thread and returns the Future immediately.

When the computation is done, the method completes the Future by providing the result to the complete method:

public Future<String> calculateAsync() throws InterruptedException {
    CompletableFuture<String> completableFuture 
      = new CompletableFuture<>();

    Executors.newCachedThreadPool().submit(() -> {
        Thread.sleep(500);
        completableFuture.complete("Hello");
        return null;
    });

    return completableFuture;
}

To spin off the computation, we use the Executor API which is described in the article “Introduction to Thread Pools in Java”, but this method of creating and completing a CompletableFuture can be used together with any concurrency mechanism or API including raw threads.

Notice that the calculateAsync method returns a Future instance.

We simply call the method, receive the Future instance and call the get method on it when we're ready to block for the result.

Also notice that the get method throws some checked exceptions, namely ExecutionException (encapsulating an exception that occurred during a computation) and InterruptedException (an exception signifying that a thread executing a method was interrupted):

Future<String> completableFuture = calculateAsync();

// ... 

String result = completableFuture.get();
assertEquals("Hello", result);

If you already know the result of a computation, you can use the static completedFuture method with an argument that represents a result of this computation. Then the get method of the Future will never block, immediately returning this result instead.

Future<String> completableFuture = 
  CompletableFuture.completedFuture("Hello");

// ...

String result = completableFuture.get();
assertEquals("Hello", result);

As an alternative scenario, you may want to cancel the execution of a Future.

Suppose we didn't manage to find a result and decided to cancel an asynchronous execution altogether. This can be done with the Future's cancel method. This method receives a boolean argument mayInterruptIfRunning, but in the case of CompletableFuture it has no effect, as interrupts are not used to control processing for CompletableFuture.

Here’s a modified version of the asynchronous method:

public Future<String> calculateAsyncWithCancellation() throws InterruptedException {
    CompletableFuture<String> completableFuture = new CompletableFuture<>();

    Executors.newCachedThreadPool().submit(() -> {
        Thread.sleep(500);
        completableFuture.cancel(false);
        return null;
    });

    return completableFuture;
}

When we block on the result using the Future.get() method, it throws CancellationException if the future is cancelled:

Future<String> future = calculateAsyncWithCancellation();
future.get(); // CancellationException

4. CompletableFuture with Encapsulated Computation Logic

The code above allows us to pick any mechanism of concurrent execution, but what if we want to skip this boilerplate and simply execute some code asynchronously?

The static methods runAsync and supplyAsync allow us to create a CompletableFuture instance out of the Runnable and Supplier functional types respectively.

Both Runnable and Supplier are functional interfaces that allow passing their instances as lambda expressions thanks to the new Java 8 feature.

The Runnable interface is the same old interface that is used in threads and it does not allow returning a value.

The Supplier interface is a generic functional interface with a single method that has no arguments and returns a value of a parameterized type.

This allows us to provide an instance of the Supplier as a lambda expression that does the calculation and returns the result. This is as simple as:

CompletableFuture<String> future
  = CompletableFuture.supplyAsync(() -> "Hello");

// ...

assertEquals("Hello", future.get());

5. Processing Results of Asynchronous Computations

The most generic way to process the result of a computation is to feed it to a function. The thenApply method does exactly that: it accepts a Function instance, uses it to process the result and returns a Future that holds the value returned by the function:

CompletableFuture<String> completableFuture
  = CompletableFuture.supplyAsync(() -> "Hello");

CompletableFuture<String> future = completableFuture
  .thenApply(s -> s + " World");

assertEquals("Hello World", future.get());

If you don’t need to return a value down the Future chain, you can use an instance of the Consumer functional interface. Its single method takes a parameter and returns void.

There’s a method for this use case in the CompletableFuture — the thenAccept method receives a Consumer and passes it the result of the computation. The final future.get() call returns an instance of the Void type.

CompletableFuture<String> completableFuture
  = CompletableFuture.supplyAsync(() -> "Hello");

CompletableFuture<Void> future = completableFuture
  .thenAccept(s -> System.out.println("Computation returned: " + s));

future.get();

At last, if you neither need the value of the computation nor want to return some value at the end of the chain, then you can pass a Runnable lambda to the thenRun method. In the following example, after the future.get() method is called, we simply print a line in the console:

CompletableFuture<String> completableFuture 
  = CompletableFuture.supplyAsync(() -> "Hello");

CompletableFuture<Void> future = completableFuture
  .thenRun(() -> System.out.println("Computation finished."));

future.get();

6. Combining Futures

The best part of the CompletableFuture API is the ability to combine CompletableFuture instances in a chain of computation steps.

The result of this chaining is itself a CompletableFuture that allows further chaining and combining. This approach is ubiquitous in functional languages and is often referred to as a monadic design pattern.

In the following example we use the thenCompose method to chain two Futures sequentially.

Notice that this method takes a function that returns a CompletableFuture instance. The argument of this function is the result of the previous computation step. This allows us to use this value inside the next CompletableFuture‘s lambda:

CompletableFuture<String> completableFuture 
  = CompletableFuture.supplyAsync(() -> "Hello")
    .thenCompose(s -> CompletableFuture.supplyAsync(() -> s + " World"));

assertEquals("Hello World", completableFuture.get());

The thenCompose method together with thenApply implement basic building blocks of the monadic pattern. They closely relate to the map and flatMap methods of Stream and Optional classes also available in Java 8.

Both methods receive a function and apply it to the computation result, but the thenCompose (flatMap) method receives a function that returns another object of the same type. This functional structure allows us to compose the instances of these classes as building blocks.

If you want to execute two independent Futures and do something with their results, use the thenCombine method that accepts a Future and a Function with two arguments to process both results:

CompletableFuture<String> completableFuture 
  = CompletableFuture.supplyAsync(() -> "Hello")
    .thenCombine(CompletableFuture.supplyAsync(
      () -> " World"), (s1, s2) -> s1 + s2));

assertEquals("Hello World", completableFuture.get());

A simpler case is when you want to do something with two Futures‘ results, but don’t need to pass any resulting value down a Future chain. The thenAcceptBoth method is there to help:

CompletableFuture future = CompletableFuture.supplyAsync(() -> "Hello")
  .thenAcceptBoth(CompletableFuture.supplyAsync(() -> " World"),
    (s1, s2) -> System.out.println(s1 + s2));

7. Running Multiple Futures in Parallel

When we need to execute multiple Futures in parallel, we usually want to wait for all of them to execute and then process their combined results.

The CompletableFuture.allOf static method allows us to wait for the completion of all of the Futures provided as a var-arg:

CompletableFuture<String> future1  
  = CompletableFuture.supplyAsync(() -> "Hello");
CompletableFuture<String> future2  
  = CompletableFuture.supplyAsync(() -> "Beautiful");
CompletableFuture<String> future3  
  = CompletableFuture.supplyAsync(() -> "World");

CompletableFuture<Void> combinedFuture 
  = CompletableFuture.allOf(future1, future2, future3);

// ...

combinedFuture.get();

assertTrue(future1.isDone());
assertTrue(future2.isDone());
assertTrue(future3.isDone());

Notice that the return type of CompletableFuture.allOf() is CompletableFuture<Void>. The limitation of this method is that it does not return the combined results of all the Futures. Instead, you have to get the results from the Futures manually. Fortunately, the CompletableFuture.join() method and the Java 8 Streams API make this simple:

String combined = Stream.of(future1, future2, future3)
  .map(CompletableFuture::join)
  .collect(Collectors.joining(" "));

assertEquals("Hello Beautiful World", combined);

The CompletableFuture.join() method is similar to the get method, but it throws an unchecked exception in case the Future does not complete normally. This makes it possible to use it as a method reference in the Stream.map() method.

8. Handling Errors

For error handling in a chain of asynchronous computation steps, the throw/catch idiom had to be adapted in a similar fashion.

Instead of catching an exception in a syntactic block, the CompletableFuture class allows you to handle it in a special handle method. This method receives two parameters: a result of a computation (if it finished successfully) and the exception thrown (if some computation step did not complete normally).

In the following example we use the handle method to provide a default value when the asynchronous computation of a greeting was finished with an error because no name was provided:

String name = null;

// ...

CompletableFuture<String> completableFuture  
  =  CompletableFuture.supplyAsync(() -> {
      if (name == null) {
          throw new RuntimeException("Computation error!");
      }
      return "Hello, " + name;
  }).handle((s, t) -> s != null ? s : "Hello, Stranger!");

assertEquals("Hello, Stranger!", completableFuture.get());

As an alternative scenario, suppose we want to manually complete the Future with a value, as in the first example, but also to have the ability to complete it with an exception. The completeExceptionally method is intended for that. The completableFuture.get() method in the following example throws an ExecutionException with a RuntimeException as its cause:

CompletableFuture<String> completableFuture = new CompletableFuture<>();

// ...

completableFuture.completeExceptionally(
  new RuntimeException("Calculation failed!"));

// ...

completableFuture.get(); // ExecutionException

In the example above we could have handled the exception with the handle method asynchronously, but with the get method we can use the more typical approach of synchronous exception processing.
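
For instance, a sketch of handling it synchronously at the get call site:

try {
    completableFuture.get();
} catch (ExecutionException e) {
    // the RuntimeException passed to completeExceptionally is available as the cause
    assertEquals("Calculation failed!", e.getCause().getMessage());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}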

9. Async Methods

Most methods of the fluent API in CompletableFuture class have two additional variants with the Async postfix. These methods are usually intended for running a corresponding step of execution in another thread.

The methods without the Async postfix run the next execution stage using the calling thread. The Async method without the Executor argument runs a step using the common fork/join pool implementation of Executor that is accessed with the ForkJoinPool.commonPool() method. The Async method with an Executor argument runs a step using the passed Executor.

Here's a modified example that processes the result of a computation with a Function instance. The only visible difference is the thenApplyAsync method, but under the hood the application of the function is wrapped into a ForkJoinTask instance (for more information on the fork/join framework, see the article “Guide to the Fork/Join Framework in Java”). This allows us to parallelize the computation further and use system resources more efficiently:

CompletableFuture<String> completableFuture  
  = CompletableFuture.supplyAsync(() -> "Hello");

CompletableFuture<String> future = completableFuture
  .thenApplyAsync(s -> s + " World");

assertEquals("Hello World", future.get());
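
For completeness, here's a sketch of the Executor-accepting variants (the fixed-size pool is an arbitrary choice for illustration):

ExecutorService executor = Executors.newFixedThreadPool(2);

CompletableFuture<String> future = CompletableFuture
  .supplyAsync(() -> "Hello", executor)
  .thenApplyAsync(s -> s + " World", executor);

assertEquals("Hello World", future.get());
executor.shutdown();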

10. Conclusion

In this article we’ve described the methods and typical use cases of the CompletableFuture class.

The source code for the article is available on GitHub.

