
CORS in JAX-RS


1. Overview

In this quick article, we’ll learn how to enable CORS (Cross-Origin Resource Sharing) in a JAX-RS-based system. We’ll set up an application on top of JAX-RS to enable the CORS mechanism.

2. How to Enable CORS Mechanism

There are two ways to enable CORS in JAX-RS. The first and most basic way is to create a filter that injects the necessary response headers at runtime into every response. The other is to manually add the appropriate headers to each URL endpoint.

Ideally, the first solution should be used; however, when that’s not an option, the more manual approach is technically fine as well.

2.1. Using the Filter

JAX-RS provides the ContainerResponseFilter interface, implemented by container response filters. Typically, a filter instance is applied globally to any HTTP response.

We’ll implement this interface to create a custom filter that injects the Access-Control-Allow-* headers into each outgoing response, enabling the CORS mechanism:

@Provider
public class CorsFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext, 
      ContainerResponseContext responseContext) throws IOException {
          responseContext.getHeaders().add(
            "Access-Control-Allow-Origin", "*");
          responseContext.getHeaders().add(
            "Access-Control-Allow-Credentials", "true");
          responseContext.getHeaders().add(
           "Access-Control-Allow-Headers",
           "origin, content-type, accept, authorization");
          responseContext.getHeaders().add(
            "Access-Control-Allow-Methods", 
            "GET, POST, PUT, DELETE, OPTIONS, HEAD");
    }
}

A couple of points here:

  • Filters implementing ContainerResponseFilter must be explicitly annotated with @Provider to be discovered by the JAX-RS runtime
  • We’re setting the ‘Access-Control-Allow-Origin‘ header to ‘*’, which means any URL endpoint on this server instance can be accessed from any domain; if we want to restrict cross-domain access explicitly, we have to name that domain in this header, as sketched below
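
Note that browsers won’t honor the wildcard ‘*’ origin together with Access-Control-Allow-Credentials: true for credentialed requests; a concrete origin is required in that case. For example, a quick sketch of the same filter restricting access to a single, hypothetical origin:

responseContext.getHeaders().add(
  "Access-Control-Allow-Origin", "https://trusted.example.com");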

2.2. Using Header Modification in Each Endpoint

As stated earlier, we can also explicitly inject the ‘Access-Control-Allow-*‘ headers at the endpoint level:

@GET
@Path("/")
@Produces({MediaType.TEXT_PLAIN})
public Response index() {
    return Response
      .status(200)
      .header("Access-Control-Allow-Origin", "*")
      .header("Access-Control-Allow-Credentials", "true")
      .header("Access-Control-Allow-Headers",
        "origin, content-type, accept, authorization")
      .header("Access-Control-Allow-Methods", 
        "GET, POST, PUT, DELETE, OPTIONS, HEAD")
      .entity("")
      .build();
}

A point to note here: if we’re trying to enable CORS in a large application, we shouldn’t use this method, because we’d have to manually inject the headers into every URL endpoint, which introduces additional overhead.

However, this technique is useful in applications where we need to enable CORS for only some of the URL endpoints.

3. Testing

Once the application is up, we can test the headers using curl.
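
For instance, assuming a hypothetical local deployment at http://localhost:8080/cors-demo, we can send a request with an Origin header and print the response headers:

curl -i -H "Origin: http://example.com" http://localhost:8080/cors-demo/

A sample headers output should be something like below: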

HTTP/1.1 200 OK
Date: Tue, 13 May 2014 12:30:00 GMT
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: origin, content-type, accept, authorization
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS, HEAD
Transfer-Encoding: chunked

What’s more, we can create a simple AJAX function and check the cross-domain functionality:

function call(url, type, data) {
    var request = $.ajax({
      url: url,
      method: "GET",
      data: (data) ? JSON.stringify(data) : "",
      dataType: type
    });
 
    request.done(function(resp) {
      console.log(resp);
    });
 
    request.fail(function(jqXHR, textStatus) {
      console.log("Request failed: " + textStatus);
    });
};

Of course, in order to actually perform the check, we’ll have to run this on a different origin than the API we’re consuming.

You can do that locally quite easily by running a client app on a separate port – since the port does determine the origin.

4. Conclusion

In this article, we showed how to implement the CORS mechanism in JAX-RS-based applications.

Like always, the full source code is available over on GitHub.


Validating Input with Finite Automata in Java


1. Overview

If you’ve studied CS, you’ve undoubtedly taken a course about compilers or something similar; in these classes, the concept of Finite Automaton (also known as Finite State Machine) is taught. This is a way of formalizing the grammar rules of languages.

You can read more about the subject here and here.

So how can this forgotten concept be helpful to us, high-level programmers, who don’t need to worry about building a new compiler?

Well, it turns out that the concept can simplify a lot of business scenarios and give us the tools to reason about complex logic.

As a quick example, we can also validate input without an external, third-party library.

2. The Algorithm

In a nutshell, such a machine declares states and ways of getting from one state to another. If you put a stream through it, you can validate its format with the following algorithm (pseudocode):

for (char c in input) {
    if (automaton.accepts(c)) {
        automaton.switchState(c);
        input.pop(c);
    } else {
        break;
    }
}
if (automaton.canStop() && input.isEmpty()) {
    print("Valid");
} else {
    print("Invalid");
}

We say the automaton “accepts” the given char if there is an arrow going out of the current state that has the char on it. Switching states means that a pointer is followed and the current state is replaced with the state that the arrow points to.

Finally, when the loop is over, we check if the automaton “can stop” (the current state is double-circled) and that input has been exhausted.

3. An Example

Let’s write a simple validator for a JSON object, to see the algorithm in action. Here is the automaton that accepts an object:

Note that value can be one of the following: string, integer, boolean, null or another JSON object. For the sake of brevity, in our example, we’ll consider only strings.

3.1. The Code

Implementing a finite state machine is quite straightforward. We have the following:

public interface FiniteStateMachine {
    FiniteStateMachine switchState(CharSequence c);
    boolean canStop();
}
 
interface State {
    State with(Transition tr);
    State transit(CharSequence c);
    boolean isFinal();
}
 
interface Transition {
    boolean isPossible(CharSequence c);
    State state();
}

The relations between them are:

  • The state machine has one current State and tells us if it can stop or not (if the state is final or not)
  • A State has a list of Transitions which could be followed (outgoing arrows)
  • A Transition tells us if the character is accepted and gives us the next State

Here’s the FiniteStateMachine implementation:

public class RtFiniteStateMachine implements FiniteStateMachine {

    private State current;

    public RtFiniteStateMachine(State initial) {
        this.current = initial;
    }

    public FiniteStateMachine switchState(CharSequence c) {
        return new RtFiniteStateMachine(this.current.transit(c));
    }

    public boolean canStop() {
        return this.current.isFinal();
    }
}

Note that the FiniteStateMachine implementation is immutable. This is mainly so a single instance of it can be used multiple times.

Next, we have the RtState implementation. The with(Transition) method returns the instance after the transition is added, for fluency. A State also tells us if it’s final (double-circled) or not.

public class RtState implements State {

    private List<Transition> transitions;
    private boolean isFinal;

    public RtState() {
        this(false);
    }
    
    public RtState(boolean isFinal) {
        this.transitions = new ArrayList<>();
        this.isFinal = isFinal;
    }

    public State transit(CharSequence c) {
        return transitions
          .stream()
          .filter(t -> t.isPossible(c))
          .map(Transition::state)
          .findAny()
          .orElseThrow(() -> new IllegalArgumentException("Input not accepted: " + c));
    }

    public boolean isFinal() {
        return this.isFinal;
    }

    @Override
    public State with(Transition tr) {
        this.transitions.add(tr);
        return this;
    }
}

And finally, RtTransition which checks the transition rule and can give the next State:

public class RtTransition implements Transition {

    private String rule;
    private State next;

    public State state() {
        return this.next;
    }

    public boolean isPossible(CharSequence c) {
        return this.rule.equalsIgnoreCase(String.valueOf(c));
    }

    // standard constructors
}

The code above is here. With this implementation, you should be able to build any state machine. The algorithm described at the beginning is as straightforward as:

String json = "{\"key\":\"value\"}";
FiniteStateMachine machine = this.buildJsonStateMachine();
for (int i = 0; i < json.length(); i++) {
    machine = machine.switchState(String.valueOf(json.charAt(i)));
}
 
assertTrue(machine.canStop());

Check the test class RtFiniteStateMachineTest to see the buildJsonStateMachine() method. Note that it adds a few more states than the image above, to also catch the quotes that surround the Strings properly.
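
As a minimal sketch of wiring states together (assuming an RtTransition(rule, next) constructor among the standard constructors mentioned above), here’s a machine that accepts exactly the input “ab”:

State end = new RtState(true);
State middle = new RtState();
State start = new RtState();
middle.with(new RtTransition("b", end));
start.with(new RtTransition("a", middle));

FiniteStateMachine machine = new RtFiniteStateMachine(start);
machine = machine.switchState("a").switchState("b");
assertTrue(machine.canStop());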

4. Conclusion

Finite automata are great tools which you can use in validating structured data.

However, they’re not widely known because they can get complicated when it comes to complex input (since a transition can be used for only one character). Nevertheless, they are great when it comes to checking a simple set of rules.

Finally, if you want to do some more complicated work using finite state machines, StatefulJ and squirrel are two libraries worth looking into.

Introduction to TestNG


1. Overview

In this article, we’ll introduce the TestNG testing framework.

We’ll focus on framework setup, writing a simple test case and configuration, test execution, test report generation, and concurrent test execution.

2. Setup

Let’s start by adding the Maven dependency in our pom.xml file:

<dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>6.11</version>
    <scope>test</scope>
</dependency>

The latest version can be found in the Maven repository.

When using Eclipse, the TestNG plugin may be downloaded and installed from the Eclipse Marketplace.

3. Writing a Test Case

To write a test using TestNG, we just need to annotate the test method with the org.testng.annotations.Test annotation:

@Test
public void givenNumber_whenEven_thenTrue() {
    assertTrue(number % 2 == 0);
}

4. Test Configurations

While writing test cases, we often need to execute some configuration or initialization instructions before test execution, as well as some cleanup after the tests complete. TestNG provides a number of initialization and clean-up features at the method, class, group and suite levels:

@BeforeClass
public void setup() {
    number = 12;
}

@AfterClass
public void tearDown() {
    number = 0;
}

The setup() method annotated with @BeforeClass will be invoked before the execution of any method of that test class, and tearDown() after the execution of all its methods.

Similarly, we can use the @BeforeMethod, @AfterMethod, @Before/AfterGroups, @Before/AfterTest and @Before/AfterSuite annotations for configuration at the method, group, test and suite levels; a quick example follows.
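
For instance, a quick sketch of method-level configuration (the method names here are arbitrary):

@BeforeMethod
public void beforeEachTest() {
    // runs before every @Test method in the class
}

@AfterMethod
public void afterEachTest() {
    // runs after every @Test method in the class
}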

5. Test Execution

We can run the test cases with Maven’s test command; it will execute all the test cases annotated with @Test, putting them into a default test suite. We can also run test cases from TestNG test suite XML files by using the maven-surefire-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>
               src/test/resources/test_suite.xml
            </suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
</plugin>

Note that if we have multiple XML files, covering all test cases, we can add all of them in the suiteXmlFiles tag:

<suiteXmlFiles>
    <suiteXmlFile>
      src/test/resources/parametrized_test.xml
    </suiteXmlFile>
    <suiteXmlFile>
      src/test/resources/registration_test.xml
    </suiteXmlFile>
</suiteXmlFiles>

In order to run the tests standalone, we need the TestNG library and the compiled test classes on the classpath, along with the XML configuration file:

java -cp <classpath> org.testng.TestNG test_suite.xml

6. Grouping Tests

Tests can be run in groups; for example, out of 50 test cases, 15 can be grouped together and executed while leaving the others as they are.

In TestNG, grouping tests into suites is done using an XML file:

<suite name="suite">
    <test name="test suite">
        <classes>
            <class name="com.baeldung.RegistrationTest" />
            <class name="com.baeldung.SignInTest" />
        </classes>
    </test>
</suite>

Notice that both test classes, RegistrationTest and SignInTest, now belong to the same suite; once the suite is executed, the test cases in these classes will run.

Apart from test suites, we can also create test groups in TestNG, where methods, instead of test classes, are grouped together. To do that, add the groups parameter to the @Test annotation:

@Test(groups = "regression")
public void givenNegativeNumber_sumLessthanZero_thenCorrect() {
    int sum = numbers.stream().reduce(0, Integer::sum);
 
    assertTrue(sum < 0);
}

Let’s use an XML to execute the groups:

<test name="test groups">
    <groups>
        <run>
            <include name="regression" />
        </run>
    </groups>
    <classes>
        <class
          name="com.baeldung.SummationServiceTest" />
    </classes>
</test>

This will execute the test methods tagged with the group regression in the SummationServiceTest class.

7. Parameterized Tests

Parameterized unit tests are used for testing the same code under several conditions. With the help of parameterized unit tests, we can set up a test method that obtains data from some data source. The main idea is to make the unit test method reusable and to test with a different set of inputs.

In TestNG, we can parametrize tests using the @Parameters or @DataProvider annotation. When using the XML file, annotate the test method with @Parameters:

@Test
@Parameters({"value", "isEven"})
public void
  givenNumberFromXML_ifEvenCheckOK_thenCorrect(int value, boolean isEven) {
    
    assertEquals(isEven, value % 2 == 0);
}

And provide the data using the XML file:

<suite name="My test suite">
    <test name="numbersXML">
        <parameter name="value" value="1"/>
        <parameter name="isEven" value="false"/>
        <classes>
            <class name="baeldung.com.ParametrizedTests"/>
        </classes>
    </test>
</suite>

Using data from an XML file is useful, but we often need more complex data. The @DataProvider annotation is used to handle these scenarios; it can map complex parameter types to test methods.

@DataProvider for primitive data types:

@DataProvider(name = "numbers")
public static Object[][] evenNumbers() {
    return new Object[][]{{1, false}, {2, true}, {4, true}};
}
 
@Test(dataProvider = "numbers")
public void 
  givenNumberFromDataProvider_ifEvenCheckOK_thenCorrect(Integer number, boolean expected) {    
    assertEquals(expected, number % 2 == 0);
}

@DataProvider for objects:

@Test(dataProvider = "numbersObject")
public void 
  givenNumberObjectFromDataProvider_ifEvenCheckOK_thenCorrect(EvenNumber number) {  
    assertEquals(number.isEven(), number.getValue() % 2 == 0);
}
 
@DataProvider(name = "numbersObject")
public Object[][] parameterProvider() {
    return new Object[][]{{new EvenNumber(1, false)},
      {new EvenNumber(2, true)}, {new EvenNumber(4, true)}};
}

Using this, any object that has to be tested can be created and used in the test. This is mostly useful for integration test cases.
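
For reference, the EvenNumber bean used above isn’t shown here; a minimal sketch of it might look like:

public class EvenNumber {
    private int value;
    private boolean even;

    public EvenNumber(int value, boolean even) {
        this.value = value;
        this.even = even;
    }

    public int getValue() { return value; }
    public boolean isEven() { return even; }
}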

8. Ignoring Test Cases

We sometimes want to skip a certain test case temporarily during development. This can be done by adding enabled=false to the @Test annotation:

@Test(enabled=false)
public void givenNumbers_sumEquals_thenCorrect() { 
    int sum = numbers.stream().reduce(0, Integer::sum);
    assertEquals(6, sum);
}

9. Dependent Tests

Let’s consider a scenario where, if the initial test case fails, all subsequent test cases should not be executed but rather be marked as skipped. TestNG provides this feature with the dependsOnMethods parameter of the @Test annotation:

@Test
public void givenEmail_ifValid_thenTrue() {
    boolean valid = email.contains("@");
 
    assertEquals(valid, true);
}
 
@Test(dependsOnMethods = {"givenEmail_ifValid_thenTrue"})
public void givenValidEmail_whenLoggedIn_thenTrue() {
    LOGGER.info("Email {} valid >> logging in", email);
}

Notice that the login test case depends on the email validation test case. Thus, if email validation fails, the login test will be skipped.

10. Concurrent Test Execution

TestNG allows tests to run in parallel, or in multi-threaded mode, thus providing a way to test multi-threaded pieces of code.

You can configure methods, classes, and suites to run in their own threads, reducing the total execution time.

10.1. Classes and Methods in Parallel

To run test classes in parallel, set the parallel attribute in the suite tag of the XML configuration file to classes:

<suite name="suite" parallel="classes" thread-count="2">
    <test name="test suite">
        <classes>
	    <class name="baeldung.com.RegistrationTest" />
            <class name="baeldung.com.SignInTest" />
        </classes>
    </test>
</suite>

Note that if we have multiple test tags in the XML file, these tests can also be run in parallel by specifying parallel="tests". Also, to execute individual methods in parallel, specify parallel="methods".

10.2. Multi-Threaded Execution of Test Method

Let’s say we need to test the behavior of a piece of code when running in multiple threads. TestNG allows us to run a test method in multiple threads:

public class MultiThreadedTests {
    
    @Test(threadPoolSize = 5, invocationCount = 10, timeOut = 1000)
    public void givenMethod_whenRunInThreads_thenCorrect() {
        int count = Thread.activeCount();
 
        assertTrue(count > 1);
    }
}

The threadPoolSize indicates that the method will run in the given number of threads. invocationCount defines how many times the test will be invoked, and timeOut fails an invocation if it takes longer than the given time in milliseconds.

11. Functional Testing

TestNG comes with features which can be used for functional testing as well. In conjunction with Selenium, it can either be used to test functionalities of a web application or used for testing web services with HttpClient.

More details about functional testing with Selenium and TestNG are available here, along with some more material on integration testing in this article.

12. Conclusion

In this article, we had a quick look at how to set up TestNG, execute a simple test case, generate reports, run test cases concurrently, and write functional tests. For more features like dependent tests, ignoring test cases, test groups and suites, you can refer to our JUnit vs TestNG article here.

The implementation of all the code snippets can be found over on Github.

A Guide to the Java Web Start


1. Overview

This article explains what Java Web Start (JWS) is, how to configure it on the server side, and how to create a simple application.

2. Introduction

JWS is a runtime environment that comes with Java SE for the client’s web browser and has been around since Java version 5.

After downloading a JNLP (Java Network Launch Protocol) file from the web server, this environment allows us to run the JAR packages referenced by it remotely.

Simply put, the mechanism loads and runs Java classes on a client’s computer with a regular JRE installation. It allows some extra instructions from Java EE as well. However, security restrictions are strictly applied by the client’s JRE, usually warning the user about untrustworthy domains, a lack of HTTPS, and even unsigned JARs.

From a generic website, one can download a JNLP file to execute a JWS application. Once downloaded, it can be run directly from a desktop shortcut or the Java Cache Viewer. After that, it downloads and executes JAR files.

This mechanism can be very helpful to deliver a graphical interface that is not web-based (HTML free), such as a secure file transfer application, a scientific calculator, a secure keyboard, a local image browser and so on.

3. A Simple JNLP Application

A good approach is to write an application and package it into a WAR file for regular web servers. All we need is to write our desired application (usually with Swing) and package it into a JAR file. This JAR must then, in turn, be packaged into a WAR file together with a JNLP file that will reference, download and execute the application’s main class.

There is no difference with a regular web application packaged in a WAR file, except for the fact that we need a JNLP file to enable the JWS, as will be demonstrated below.

3.1. Java Application

Let’s start by writing a simple Java application:

public class Hello {
    public static void main(String[] args) {
        JFrame f = new JFrame("main");
        f.setSize(200, 100);
        f.setLocationRelativeTo(null);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        JLabel label = new JLabel("Hello World");
        f.add(label);
        f.setVisible(true);
    }
}

We can see that this is a pretty straightforward Swing class. Indeed, nothing was added to make it JWS compliant.

3.2. Web Application

All we need to do is package this example Swing class into a JAR inside a WAR file, along with the following JNLP file:

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" 
  codebase="http://localhost:8080/jnlp-example">
    <information>
        <title>Hello</title>
        <vendor>Example</vendor>
    </information>
    <resources>
        <j2se version="1.2+"/>
        <jar href="hello.jar" main="true" />
    </resources>
    <application-desc/>
</jnlp>

Let’s name it hello.jnlp and place it under any web folder of our WAR. Both the JAR and the WAR are downloadable, so we don’t need to worry about putting the JAR in a lib folder.

The URL address to our final JAR is hard coded in the JNLP file, which can cause some distribution problems. If we change deployment servers, the application won’t work anymore.

Let’s fix that with a proper servlet later in this article. For now, let’s just place the JAR file for download in the root folder, next to the index.html, and link it from an anchor element:

<a href="hello.jnlp">Launch</a>

Let’s also set the main class in our JAR Manifest. This can be achieved by configuring the JAR plugin in the pom.xml file. Similarly, we move the JAR file outside of the WEB-INF/lib, since it is meant for download only, i.e. not for the classloader:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    ...
    <executions>
        <execution>
            <phase>compile</phase>
            <goals>
                <goal>jar</goal>
            </goals>
            <configuration>
                <archive>
                    <manifest>
                        <mainClass>
                            com.example.Hello
                        </mainClass>
                    </manifest>
                </archive>
                <outputDirectory>
                    ${project.basedir}/target/jws
                </outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>

4. Special Configurations

4.1. Security Issues

To run an application, we need to sign the JAR. Creating a valid certificate and using the JAR Sign Maven Plugin goes beyond the scope of this article, but we can bypass this security policy for development purposes, or if we have administrative access to our user’s computer.

To do so, we need to add the local URL (for instance: http://localhost:8080) to the security exceptions list of the JRE installation on the computer where the application will be executed. We can find it by opening the Java Control Panel (on Windows, via the Control Panel) on the Security tab.

5. The JnlpDownloadServlet

5.1. Compression Algorithms

There is a special servlet that can be included in our WAR. It optimizes the download by looking for the most compressed compiled version of our JAR file, if available, and also fixes the hard-coded codebase value in the JNLP file.

Since our JAR will be available for download, it’s advisable to package it with a compression algorithm, such as Pack200, and deliver the regular JAR and any JAR.PACK.GZ or JAR.GZ compressed versions in the same folder, so that this servlet can choose the best option for each case.

Unfortunately, there is no stable Maven plugin yet for this compression algorithm, but we may work with the Pack200 executable that comes with the JRE (usually installed at {JAVA_SDK_HOME}/jre/bin/).
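
For example, assuming the pack200 and gzip tools are on the PATH and our hello.jar sits in the current directory, the compressed variants could be produced with:

pack200 hello.jar.pack.gz hello.jar
gzip -k hello.jar

The first command produces hello.jar.pack.gz; the second produces hello.jar.gz (the -k flag keeps the original JAR in place).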

Without changing our JNLP, and by placing the jar.gz and jar.pack.gz versions of the JAR in the same folder, the servlet picks the best one once it gets a call from a remote JNLP. This enhances the user experience and optimizes network traffic.

5.2. Codebase Dynamic Substitution

The servlet can also perform dynamic substitution of hardcoded URLs in the <jnlp spec="1.0+" codebase="http://localhost:8080/jnlp-example"> tag. By changing the JNLP to the wildcard <jnlp spec="1.0+" codebase="$$context">, it delivers the same final rendered tag.

The servlet also works with the wildcards $$codebase, $$hostname, $$name and $$site, which resolve to "http://localhost:8080/jnlp-example/", "localhost:8080", "hello.jnlp", and "http://localhost:8080" respectively.

5.3. Adding the Servlet to the Classpath

To add the servlet, let’s configure a normal servlet mapping for JAR and JNLP patterns to our web.xml:

<servlet>
    <servlet-name>JnlpDownloadServlet</servlet-name>
    <servlet-class>
        jnlp.sample.servlet.JnlpDownloadServlet
    </servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>JnlpDownloadServlet</servlet-name>
    <url-pattern>*.jar</url-pattern>
</servlet-mapping>
<servlet-mapping>
    <servlet-name>JnlpDownloadServlet</servlet-name>
    <url-pattern>*.jnlp</url-pattern>
</servlet-mapping>

The servlet itself comes in a set of JARs (jardiff.jar and jnlp-servlet.jar) that are nowadays located in the Demos & Samples section of the Java SDK download page.

In the GitHub example these files are included in the java-core-samples-lib folder and are included as web resources by the Maven WAR plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    ...
    <configuration>
        <webResources>
            <resource>
                <directory>
                    ${project.basedir}/java-core-samples-lib/
                </directory>
                <includes>
                    <include>**/*.jar</include>
                </includes>
                <targetPath>WEB-INF/lib</targetPath>
            </resource>
        </webResources>
    </configuration>
</plugin>

6. Final Thoughts

Java Web Start is a tool that may be used in (intranet) environments where there is no application server, as well as for applications that need to manipulate local user files.

An application is shipped to the end user by a simple download protocol, without any additional dependencies or configuration, except for some security concerns (HTTPS, signed JAR, etc.).

In the GitHub example, the full source code described in this article is available for download. We can download it directly from GitHub to an OS with Tomcat and Apache Maven. After the download, we need to run the mvn install command from the source directory and copy the generated jws.war file from target to the webapps folder of the Tomcat installation.

After that, we can start Tomcat as usual.

From a default Apache Tomcat installation, the example will be available at the URL http://localhost:8080/jws/index.html.

Introduction to the Functional Web Framework in Spring 5


1. Introduction

One of the main new features of Spring 5 will be a new Functional Web Framework built using reactive principles.

In this article, we’ll have a look at how it works in practice.

2. Maven Dependency

Let’s start by defining the Maven dependencies that we’re going to need:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.BUILD-SNAPSHOT</version>
    <relativePath/> 
</parent>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

As we are using a snapshot version of Spring Boot, remember to add Spring snapshot repository:

<repositories>
    <repository> 
        <id>repository.spring.snapshot</id> 
        <name>Spring Snapshot Repository</name> 
        <url>http://repo.spring.io/snapshot</url> 
    </repository>
</repositories>

3. Functional Web Framework

Before we start to look at what the new framework provides, it’s a good idea to read this article to brush up on the basics of the Reactive paradigm.

The framework introduces two fundamental components: HandlerFunction and RouterFunction, both located in the org.springframework.web.reactive.function.server package.

3.1. HandlerFunction

The HandlerFunction represents a function that handles incoming requests and generates responses:

@FunctionalInterface
public interface HandlerFunction<T extends ServerResponse> {
    Mono<T> handle(ServerRequest request);
}

This interface is primarily a Function<Request, Response<T>>, which is very much like a servlet. Compared to a standard servlet Servlet.service(ServletRequest req, ServletResponse res), a side-effect free HandlerFunction is naturally easier to test and reuse.
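
For instance, a minimal handler might look like this (a sketch, assuming static imports of ServerResponse.ok and BodyInserters.fromObject):

HandlerFunction<ServerResponse> hello =
  request -> ok().body(fromObject("hello"));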

3.2. RouterFunction

RouterFunction serves as an alternative to the @RequestMapping annotation. It’s used for routing incoming requests to handler functions:

@FunctionalInterface
public interface RouterFunction<T extends ServerResponse> {

    Mono<HandlerFunction<T>> route(ServerRequest request);

    // ...

}

Typically, we can import RouterFunctions.route(), a helper function for creating routes, instead of writing a complete router function. It allows us to route requests by applying a RequestPredicate. When the predicate matches, the second argument, the handler function, is returned:

public static <T extends ServerResponse> RouterFunction<T> route(
  RequestPredicate predicate,
  HandlerFunction<T> handlerFunction)

By returning a RouterFunction, route() can be chained and nested to build powerful and complex routing schemes.

3.3. A Quick Example

Here’s a simple example that serves requests to the root path “/” and returns a response with status 200:

RouterFunction<ServerResponse> routingFunction() {
    return route(path("/"), req -> ok().build());
}

The path(“/”) in the example above is a RequestPredicate. Besides matching a path, there’s a bunch of other commonly used predicates available in RequestPredicates, including HTTP methods, content type matching, etc.
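
As a quick sketch, predicates can also be combined; for instance, assuming static imports from RouterFunctions, RequestPredicates and BodyInserters:

RouterFunction<ServerResponse> jsonRoute = route(
  GET("/api/items").and(accept(MediaType.APPLICATION_JSON)),
  req -> ok().body(fromObject("[]")));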

3.4. Running on a Server

To run the functions in a server, we can wrap the RouterFunction in an HttpHandler for serving requests. HttpHandler is a reactive abstraction introduced in Spring 5.0 M1. With this abstraction, we can run our code in reactive runtimes such as Reactor Netty and Servlet 3.1+ containers.

With the following code, we can run the functions in an embedded Tomcat:

HttpHandler httpHandler = RouterFunctions.toHttpHandler(routingFunction());

Tomcat tomcat = new Tomcat();
Context rootContext = tomcat.addContext("", System.getProperty("java.io.tmpdir"));
ServletHttpHandlerAdapter servlet = new ServletHttpHandlerAdapter(httpHandler);
Tomcat.addServlet(rootContext, "httpHandlerServlet", servlet);
rootContext.addServletMappingDecoded("/", "httpHandlerServlet");
TomcatWebServer server = new TomcatWebServer(tomcat);
server.start();

We can also deploy the functions in a standalone Servlet container that supports Servlet 3.1+, such as Tomcat 9.

Similar to the code above, we can wrap the RouterFunction in a servlet that extends ServletHttpHandlerAdapter, say com.baeldung.functional.RootServlet:

public class RootServlet extends ServletHttpHandlerAdapter {

    public RootServlet() {
        this(WebHttpHandlerBuilder
          .webHandler(toHttpHandler(routingFunction()))
          .prependFilter(new IndexRewriteFilter())
          .build());
    }

    RootServlet(HttpHandler httpHandler) {
        super(httpHandler);
    }

    //...

}

Then register the Servlet as a bean:

@Bean
public ServletRegistrationBean servletRegistrationBean() throws Exception {
    HttpHandler httpHandler = WebHttpHandlerBuilder
      .webHandler(toHttpHandler(routingFunction()))
      .prependFilter(new IndexRewriteFilter())
      .build();
    ServletRegistrationBean registrationBean
      = new ServletRegistrationBean<>(new RootServlet(httpHandler), "/");
    registrationBean.setLoadOnStartup(1);
    registrationBean.setAsyncSupported(true);
    return registrationBean;
}

Or if you prefer a plain web.xml:

<servlet>
    <servlet-name>functional</servlet-name>
    <servlet-class>com.baeldung.functional.RootServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
    <async-supported>true</async-supported>
</servlet>
<servlet-mapping>
    <servlet-name>functional</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

As we’re using spring-boot-starter, we need to exclude the embedded Tomcat (the spring-boot-starter-tomcat module pulled in by spring-boot-starter-web) and add a provided dependency on javax.servlet:javax.servlet-api; then we’re good to go.

4. Testing

A new way of testing for the functional web framework is also introduced in Spring 5. WebTestClient serves as the basis of spring-webflux integration testing support.

With the help of bindToRouterFunction() provided by WebTestClient, we can test the functions without starting an actual server:

WebTestClient client = WebTestClient
  .bindToRouterFunction(routingFunction())
  .build();
client.get().uri("/").exchange().expectStatus().isOk();

If we already have a server running, use bindToServer() to test via socket:

WebTestClient client = WebTestClient
  .bindToServer()
  .baseUrl("http://localhost:8080")
  .build();
client.get().uri("/").exchange().expectStatus().isOk();

5. Conventional Web Request Patterns

Now that we have a basic understanding of the framework’s key components, let’s see how they would fit in conventional web request patterns.

5.1. A Simple GET API

A simple HTTP GET API that returns “hello world”:

RouterFunction router = route(GET("/test"), 
  request -> ok().body(fromObject("hello world")));

@Test
public void givenRouter_whenGetTest_thenGotHelloWorld() throws Exception {
    client.get().uri("/test").exchange()
      .expectStatus().isOk()
      .expectBody(String.class).value().isEqualTo("hello world");
}

5.2. POST a Form

Posting a login form:

RouterFunction router = route(POST("/login"), request -> 
  request.body(toFormData()).map(MultiValueMap::toSingleValueMap)
    .map(formData -> {
        if ("baeldung".equals(formData.get("user")) 
           "you_know_what_to_do".equals(formData.get("token"))) {

            return ok()
              .body(Mono.just("welcome back!"), String.class)
              .block();
        }

        return badRequest().build().block();
    }));

@Test
public void givenLoginForm_whenPostValidToken_thenSuccess() throws Exception {
    MultiValueMap<String, String> formData = new LinkedMultiValueMap<>(2);
    formData.add("user", "baeldung");
    formData.add("token", "you_know_what_to_do");

    client.post().uri("/login")
      .contentType(MediaType.APPLICATION_FORM_URLENCODED)
      .exchange(BodyInserters.fromFormData(formData))
      .expectStatus().isOk()
      .expectBody(String.class)
      .value().isEqualTo("welcome back!");
}

Then a form with multipart data:

RouterFunction router = route(POST("/upload"), request -> 
  request.body(toDataBuffers()).collectList()
    .map(dataBuffers -> {

        AtomicLong atomicLong = new AtomicLong(0);
        dataBuffers.forEach(d -> 
          atomicLong.addAndGet(d.asByteBuffer().array().length));

        return ok().body(fromObject(atomicLong.toString())).block();

    }));

@Test
public void givenUploadForm_whenRequestWithMultipartData_thenSuccess()
  throws Exception {

    Resource resource = new ClassPathResource("/baeldung-weekly.png");
    client.post().uri("/upload").contentType(MediaType.MULTIPART_FORM_DATA)
      .exchange(fromResource(resource))
      .expectStatus().isOk()
      .expectBody(String.class)
      .value().isEqualTo(String.valueOf(resource.contentLength()));
}

As you may notice, the multipart processing is blocking. For now, a non-blocking multipart implementation is still under investigation by the Spring team. You can track this issue for further updates.

5.3. RESTful API

We can also manipulate Resources in a RESTful API:

List<Actor> actors = new CopyOnWriteArrayList<>(
  Arrays.asList(BRAD_PITT, TOM_HANKS));

RouterFunction router = nest(path("/actor"),
  route(GET("/"), request -> 
    ok().body(Flux.fromIterable(actors), Actor.class))

  .andRoute(POST("/"), request -> 
    request.bodyToMono(Actor.class).doOnNext(actors::add)

  .then(ok().build())));

@Test
public void givenActors_whenAddActor_thenAdded() throws Exception {
    client.get().uri("/actor")
      .exchange()
      .expectStatus().isOk()
      .expectBody(Actor.class).list().hasSize(2);

    client.post().uri("/actor")
      .exchange(fromObject(new Actor("Clint", "Eastwood")))
      .expectStatus().isOk();

    client.get().uri("/actor")
      .exchange()
      .expectStatus().isOk()
      .expectBody(Actor.class).list().hasSize(3);
}

As mentioned previously, RouterFunctions and RouterFunction give us options for chaining and nesting route functions.

In this example, we nested two router functions to separately handle GET and POST requests under the path “/actor”.

5.4. Serve Static Resources

RouterFunctions also provides a shortcut to serve static files:

RouterFunction router = resources(
  "/files/**", new ClassPathResource("files/"));

@Test
public void givenResources_whenAccess_thenGotContentHello() throws Exception {
    this.client.get().uri("/files/hello.txt")
      .exchange()
      .expectStatus().isOk()
      .expectBody(String.class).value().isEqualTo("hello");
}

5.5. Filters

RouterFunction allows filtering handler functions:

router.filter((request, next) -> {
    System.out.println("handling: " + request.path());
    return next.handle(request);
});

However, the filter above only applies to the handler functions routed by this router. When a URL is not explicitly handled, requests to it will not go through the filter. Say we want to add URL rewriting in such cases; the router’s filter would be of no help.

Currently, if we want to rewrite URLs in filters, we have to do it in a WebFilter, instead of the router’s filter:

WebHandler webHandler = toHttpHandler(routingFunction());
HttpHandler httpHandler = WebHttpHandlerBuilder.webHandler(webHandler)
  .prependFilter(((serverWebExchange, webFilterChain) -> {

      ServerHttpRequest request = serverWebExchange.getRequest();
      if (request.getURI().getPath().equals("/")) {

          return webFilterChain.filter(
            serverWebExchange.mutate().request(builder -> 
                builder
                  .method(request.getMethod())
                  .contextPath(request.getContextPath())
                  .path("/test"))
                  .build());

      } else {
          return webFilterChain.filter(serverWebExchange);
      }
  }));

@Test
public void givenIndexFilter_whenRequestRoot_thenRewrittenToTest()
  throws Exception {
    this.client.get().uri("/").exchange()
      .expectStatus().isOk()
      .expectBody(String.class).value().isEqualTo("hello world");
}

6. Summary

In this article, we introduced the new functional web framework in Spring 5, with detailed examples showing how the framework works in typical scenarios.

Note that as of Spring 5.0.0.M5, @RequestMapping and RouterFunction cannot be mixed in the same application yet.

Laying its foundation on Reactor, the reactive framework would fully shine with reactive access to data stores. Unfortunately, most data stores do not provide such reactive access yet, except for a few NoSQL databases such as MongoDB.

As always, the full source code can be found over on Github.

Guide to Java 8 Comparator.comparing()


1. Overview

Java 8 introduced several enhancements to the Comparator interface, including a handful of static functions that are of great utility when coming up with a sort order for collections.

Java 8 lambdas can be leveraged effectively with the Comparator interface as well. A detailed explanation of lambdas and Comparator can be found here, and a chronicle on sorting and applications of Comparator can be found here.

In this tutorial, we will explore several functions introduced for the Comparator interface in Java 8.

2. Getting Started

2.1. Sample Bean Class

For the examples in this article, let’s create an Employee bean and use its fields for comparing and sorting purposes:

public class Employee {
    String name;
    int age;
    double salary;
    long mobile;

    // constructors, getters & setters
}

2.2. Our Testing Data

Let’s also create an array of employees that we’ll sort in various test cases throughout the article:

employees = new Employee[] { ... };

The initial ordering of elements of employees will be:

[Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

Throughout the article, we’ll be sorting the above Employee array using different functions.

For test assertions, we’ll use a set of pre-sorted arrays that we will compare to our sort results (i.e., the employees array) for different scenarios.

Let’s declare a few of these arrays:

@Before
public void initData() {
    sortedEmployeesByName = new Employee[] {...};
    sortedEmployeesByNameDesc = new Employee[] {...};
    sortedEmployeesByAge = new Employee[] {...};
    
    // ...
}

Like always, feel free to refer to our GitHub link for the complete code.

3. Using Comparator.comparing

This section covers variants of the Comparator.comparing static function.

3.1. Key Selector Variant

The Comparator.comparing static function accepts a sort key Function and returns a Comparator for the type which contains the sort key:

static <T,U extends Comparable<? super U>> Comparator<T> comparing(
   Function<? super T,? extends U> keyExtractor)

To see this in action, let’s use the name field in Employee as the sort key and pass its method reference as an argument of type Function. The returned Comparator is then used for sorting:

@Test
public void whenComparing_thenSortedByName() {
    Comparator<Employee> employeeNameComparator
      = Comparator.comparing(Employee::getName);
    
    Arrays.sort(employees, employeeNameComparator);
    
    assertTrue(Arrays.equals(employees, sortedEmployeesByName));
}

As you can see, the employees array values are sorted by name as a result of the sort:

[Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

3.2. Key Selector and Comparator Variant

There is another option that facilitates overriding the natural ordering of the sort key by providing the Comparator that creates a custom ordering for the sort key:

static <T,U> Comparator<T> comparing(
  Function<? super T,? extends U> keyExtractor,
    Comparator<? super U> keyComparator)

Let’s modify the test above, overriding the natural order of sorting by the name field by providing a Comparator for sorting the names in descending order as the second argument to Comparator.comparing:

@Test
public void whenComparingWithComparator_thenSortedByNameDesc() {
    Comparator<Employee> employeeNameComparator
      = Comparator.comparing(
        Employee::getName, (s1, s2) -> {
            return s2.compareTo(s1);
        });
    
    Arrays.sort(employees, employeeNameComparator);
    
    assertTrue(Arrays.equals(employees, sortedEmployeesByNameDesc));
}

As you can see, the results are sorted in descending order by name:

[Employee(name=Keith, age=35, salary=4000.0, mobile=3924401), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Ace, age=22, salary=2000.0, mobile=5924001)]

3.3. Using Comparator.reversed

When invoked on an existing Comparator, the instance method Comparator.reversed returns a new Comparator that reverses the sort order of the original.

Let’s use the Comparator that sorts the employees by name and reverse it so that employees are sorted in descending order of the name:

@Test
public void whenReversed_thenSortedByNameDesc() {
    Comparator<Employee> employeeNameComparator
      = Comparator.comparing(Employee::getName);
    Comparator<Employee> employeeNameComparatorReversed 
      = employeeNameComparator.reversed();
    Arrays.sort(employees, employeeNameComparatorReversed);
    assertTrue(Arrays.equals(employees, sortedEmployeesByNameDesc));
}

The results are sorted in descending order by name:

[Employee(name=Keith, age=35, salary=4000.0, mobile=3924401), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Ace, age=22, salary=2000.0, mobile=5924001)]

3.4. Using Comparator.comparingInt

There is also a function Comparator.comparingInt which does the same thing as Comparator.comparing, but it takes only int selectors. Let’s try this with an example where we order employees by age:

@Test
public void whenComparingInt_thenSortedByAge() {
    Comparator<Employee> employeeAgeComparator 
      = Comparator.comparingInt(Employee::getAge);
    
    Arrays.sort(employees, employeeAgeComparator);
    
    assertTrue(Arrays.equals(employees, sortedEmployeesByAge));
}

Let’s see how the employees array values are ordered after the sort:

[Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

3.5. Using Comparator.comparingLong

Similar to what we did for int keys, let’s see an example using Comparator.comparingLong to consider a sort key of type long by ordering the employees array by the mobile field:

@Test
public void whenComparingLong_thenSortedByMobile() {
    Comparator<Employee> employeeMobileComparator 
      = Comparator.comparingLong(Employee::getMobile);
    
    Arrays.sort(employees, employeeMobileComparator);
    
    assertTrue(Arrays.equals(employees, sortedEmployeesByMobile));
}

Let’s see how the employees array values are ordered after the sort with mobile as the key:

[Employee(name=Keith, age=35, salary=4000.0, mobile=3924401), 
Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001)]

3.6. Using Comparator.comparingDouble

Again, similar to what we did for int and long keys, let’s see an example using Comparator.comparingDouble to consider a sort key of type double by ordering the employees array by the salary field:

@Test
public void whenComparingDouble_thenSortedBySalary() {
    Comparator<Employee> employeeSalaryComparator
      = Comparator.comparingDouble(Employee::getSalary);
    
    Arrays.sort(employees, employeeSalaryComparator);
    
    assertTrue(Arrays.equals(employees, sortedEmployeesBySalary));
}

Let’s see how the employees array values are ordered after the sort with salary as the sort key:

[Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

4. Considering Natural Order in Comparator

The natural order is defined by the behavior of the Comparable interface implementation. More information about the difference between Comparator and uses of the Comparable interface can be found in this article.

Let’s implement Comparable in our Employee class so that we can try the naturalOrder and reverseOrder functions of the Comparator interface:

public class Employee implements Comparable<Employee>{
    // ...

    @Override
    public int compareTo(Employee argEmployee) {
        return name.compareTo(argEmployee.getName());
    }
}

4.1. Using Natural Order

The naturalOrder function returns the Comparator for the return type mentioned in the signature:

static <T extends Comparable<? super T>> Comparator<T> naturalOrder()

Given the above logic to compare employees based on the name field, let’s use this function to obtain a Comparator which sorts the employees array in natural order:

@Test
public void whenNaturalOrder_thenSortedByName() {
    Comparator<Employee> employeeNameComparator 
      = Comparator.<Employee> naturalOrder();
    
    Arrays.sort(employees, employeeNameComparator);
    
    assertTrue(Arrays.equals(employees, sortedEmployeesByName));
}

Let’s see how the employees array values are ordered after the sort:

[Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

4.2. Using Reverse Natural Order

Similar to naturalOrder, let’s use the reverseOrder method to generate a Comparator which will produce a reverse ordering of employees to the one in the naturalOrder example:

@Test
public void whenReverseOrder_thenSortedByNameDesc() {
    Comparator<Employee> employeeNameComparator 
      = Comparator.<Employee> reverseOrder();
    
    Arrays.sort(employees, employeeNameComparator);
    
    assertTrue(Arrays.equals(employees, sortedEmployeesByNameDesc));
}

Let’s see how the employees array values are ordered after the sort:

[Employee(name=Keith, age=35, salary=4000.0, mobile=3924401), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Ace, age=22, salary=2000.0, mobile=5924001)]

5. Considering Null Values in Comparator

This section covers functions nullsFirst and nullsLast, which consider null values in ordering and keep the null values at the beginning or end of the ordering sequence.

5.1. Considering Null First

Let’s randomly insert null values in employees array:

[Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
null, 
Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
null, 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

The nullsFirst function will return a Comparator that keeps all nulls at the beginning of the ordering sequence:

@Test
public void whenNullsFirst_thenSortedByNameWithNullsFirst() {
    Comparator<Employee> employeeNameComparator
      = Comparator.comparing(Employee::getName);
    Comparator<Employee> employeeNameComparator_nullFirst
      = Comparator.nullsFirst(employeeNameComparator);
  
    Arrays.sort(employeesArrayWithNulls, 
      employeeNameComparator_nullFirst);
  
    assertTrue(Arrays.equals(
      employeesArrayWithNulls,
      sortedEmployeesArray_WithNullsFirst));
}

Let’s see how the employees array values are ordered after the sort:

[null, 
null, 
Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

5.2. Considering Null Last

The nullsLast function will return a Comparator that keeps all nulls at the end of the ordering sequence:

@Test
public void whenNullsLast_thenSortedByNameWithNullsLast() {
    Comparator<Employee> employeeNameComparator
      = Comparator.comparing(Employee::getName);
    Comparator<Employee> employeeNameComparator_nullLast
      = Comparator.nullsLast(employeeNameComparator);
  
    Arrays.sort(employeesArrayWithNulls, employeeNameComparator_nullLast);
  
    assertTrue(Arrays.equals(
      employeesArrayWithNulls, sortedEmployeesArray_WithNullsLast));
}

Let’s see how the employees array values are ordered after the sort:

[Employee(name=Ace, age=22, salary=2000.0, mobile=5924001), 
Employee(name=John, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401), 
null, 
null]

6. Using Comparator.thenComparing

The thenComparing function lets us set up lexicographical ordering of values by providing multiple sort keys in a particular sequence.

Let’s consider another array of Employee class:

someMoreEmployees = new Employee[] { ... };

Consider the following sequence of elements in the above array:

[Employee(name=Jake, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Jake, age=22, salary=2000.0, mobile=5924001), 
Employee(name=Ace, age=22, salary=3000.0, mobile=6423001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

Let’s write a sequence of comparisons as age followed by the name and see the ordering of this array:

@Test
public void whenThenComparing_thenSortedByAgeName(){
    Comparator<Employee> employee_Age_Name_Comparator
      = Comparator.comparing(Employee::getAge)
        .thenComparing(Employee::getName);
  
    Arrays.sort(someMoreEmployees, employee_Age_Name_Comparator);
  
    assertTrue(Arrays.equals(someMoreEmployees, sortedEmployeesByAgeName));
}

Here the ordering will be done by age, and for the values with the same age, ordering will be done by name. Let’s observe this in the sequence we receive after sorting:

[Employee(name=Ace, age=22, salary=3000.0, mobile=6423001), 
Employee(name=Jake, age=22, salary=2000.0, mobile=5924001), 
Employee(name=Jake, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

Let’s use another version of thenComparing, namely thenComparingInt, by changing the lexicographical sequence to name followed by age:

@Test
public void whenThenComparing_thenSortedByNameAge() {
    Comparator<Employee> employee_Name_Age_Comparator
      = Comparator.comparing(Employee::getName)
        .thenComparingInt(Employee::getAge);
  
    Arrays.sort(someMoreEmployees, employee_Name_Age_Comparator);
  
    assertTrue(Arrays.equals(someMoreEmployees, 
      sortedEmployeesByNameAge));
}

Let’s see how the employees array values are ordered after the sort:

[Employee(name=Ace, age=22, salary=3000.0, mobile=6423001), 
Employee(name=Jake, age=22, salary=2000.0, mobile=5924001), 
Employee(name=Jake, age=25, salary=3000.0, mobile=9922001), 
Employee(name=Keith, age=35, salary=4000.0, mobile=3924401)]

Similarly, there are functions thenComparingLong and thenComparingDouble for using long and double sorting keys.
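
For instance, here’s a quick sketch combining a String key with a double key:

Comparator<Employee> byNameThenSalary = Comparator
  .comparing(Employee::getName)
  .thenComparingDouble(Employee::getSalary);

Arrays.sort(employees, byNameThenSalary);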

7. Conclusion

This article is a guide to several features introduced in Java 8 for the Comparator interface.

As usual, the source code can be found over on Github.

Comprehensive Guide to Null Safety in Kotlin


1. Overview

In this article, we’ll be looking at the null safety features built into the Kotlin language. Kotlin provides comprehensive, native handling of nullable fields; no additional libraries are needed.

2. Maven Dependency

To get started, you’ll need to add the kotlin-stdlib Maven dependency to your pom.xml:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-stdlib</artifactId>
    <version>1.1.1</version>
</dependency>

You can find the latest version on Maven Central.

3. Nullable and Non-Nullable Reference Types

Kotlin has two types of references that are interpreted by the compiler to give the programmer information about the correctness of a program at compile time: those that are nullable and those that are not.

By default, Kotlin assumes that a value cannot be null:

var a: String = "value"

assertEquals(a.length, 5)

We cannot assign null to the reference a; trying to do so causes a compiler error.

If we want to create a nullable reference, we need to append a question mark (?) to the type definition:

var b: String? = "value"

After that, we can assign null to it:

b = null

When we want to access the reference, we must handle the null case explicitly to avoid a compilation error because Kotlin knows that this variable can hold null:

if (b != null) {
    println(b.length)
} else {
    assertNull(b)
}

4. Safe Calls

Handling every nullable reference in such a way could be cumbersome. Fortunately, Kotlin has a syntax for “safe calls” – this syntax allows programmers to execute an action only when the specific reference holds a non-null value.

Let’s define two data classes to illustrate this feature:

data class Person(val country: Country?)

data class Country(val code: String?)

Note that the country and code fields are of nullable reference type.

To access those fields in a fluent way, we can use the safe call syntax:

val p: Person? = Person(Country("ENG"))

val res = p?.country?.code

assertEquals(res, "ENG")

Should the variable hold a null, the safe calls syntax will return a null result:

val p: Person? = Person(Country(null))

val res = p?.country?.code

assertNull(res)

4.1. The let() method

To execute an action only when a reference holds a non-null value, we can use the let operator.

Let’s say that we have a list of values and there is also a null value in that list:

val firstName = "Tom"
val secondName = "Michael"
val names: List<String?> = listOf(firstName, null, secondName)

Next, we can execute an action on every non-null element of the names list by using the let function:

var res = listOf<String?>()
for (item in names) {
    item?.let { res = res.plus(it) }
}

assertEquals(2, res.size)
assertTrue { res.contains(firstName) }
assertTrue { res.contains(secondName) }

4.2. The also() method

If we want to apply some additional operation, for example logging every non-null value, we can use the also() method and chain it with let():

var res = listOf<String?>()
for (item in names) {
    item?.let { res = res.plus(it); it }
        ?.also { println("non nullable value: $it") }
}

It will print out every element that is not null:

non nullable value: Tom
non nullable value: Michael

4.3. The run() method

Kotlin has a run() method to execute some operation on a nullable reference. It’s very similar to let(), but inside the function body, run() operates on this reference instead of a function parameter:

var res = listOf<String?>()
for (item in names) {
    item?.run{res = res.plus(this)}
}

5. Elvis Operator

Sometimes, when we have a reference, we want to return some default value from the operation if the reference holds null. To achieve that, we can use the Elvis (?:) operator. It’s the equivalent of orElse/orElseGet from Java’s Optional class:

val value: String? = null

val res = value?.length ?: -1

assertEquals(res, -1)

When the value reference holds a non-null value, the length method will be invoked:

val value: String? = "name"

val res = value?.length ?: -1

assertEquals(res, 4)
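
Since throw is an expression in Kotlin, the right-hand side of the Elvis operator can also raise an exception instead of supplying a default value – a small sketch of this pattern:

val value: String? = null

assertFailsWith<IllegalArgumentException> {
    value?.length ?: throw IllegalArgumentException("value must not be null")
}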

6. Nullable Unsafe Get

Kotlin also has an unsafe operator to get a value of a nullable field without handling absence logic explicitly, but it should be used very carefully.

The double exclamation mark operator (!!) takes a value from a nullable reference and throws a NullPointerException if it holds null. It’s the equivalent of the Optional.get() operation:

var b: String? = "value"
b = null

assertFailsWith<NullPointerException> {
    b!!.length
}

If the nullable reference holds a non-null value, the action on that value will be executed successfully:

val b: String? = "value"

assertEquals(b!!.length, 5)

7. Filtering Null Values From a List

The List class in Kotlin has a utility method filterNotNull() that returns only the non-null values from a list that holds nullable references:

val list: List<String?> = listOf("a", null, "b")

val res = list.filterNotNull()

assertEquals(res.size, 2)
assertTrue { res.contains("a") }
assertTrue { res.contains("b") }

This is a very useful construct that encapsulates the logic that we would otherwise need to implement ourselves.
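
For comparison, here’s a rough sketch of the manual filtering we would otherwise have to write:

// collect only the non-null elements by hand
val res = mutableListOf<String>()
for (item in list) {
    if (item != null) {
        res.add(item)
    }
}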

8. Conclusion

In this article, we explored Kotlin’s null safety features in depth. We saw the types of references that can hold null values and those that cannot. We implemented fluent null-handling logic by using the “safe call” syntax and the Elvis operator.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Java Web Weekly, Issue 170

Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Java Finalization to be Deprecated? [infoq.com]

It looks like Object.finalize() might be getting deprecated.

>> Java’s Ternary Operator in Three Minutes [sitepoint.com]

A short but comprehensive guide to the ternary operator (condition ? … :) in Java.

>> Object Deserialisation Filters Backported from Java 9 [infoq.com]

JEP-290 (filtering incoming data in an object input stream) was backported to Java 6, 7, and 8. Very nice.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Cloud offerings free tier – Amazon vs Google [frankel.ch]

A realistic comparison of what we can expect from Amazon and Google Cloud without paying a single penny.

>> Distributed Stream Processing Frameworks for Fast & Big Data [codecentric.de]

A short introduction to the basics of stream processing.

>> How I would approach creating automated user interface-driven tests [ontestautomation.com]

One of the ways you could approach building interface-driven tests.

>> Modules vs. microservice [oreilly.com]

An interesting, balanced take on modularizing the architecture of a system.

Also worth reading:

3. Musings

>> Improving your craftsmanship through conferences [ontestautomation.com]

Attending conferences is a great way to find inspiration and learn from others. The next step is to start speaking, which boosts your self-confidence, helps build a personal brand, and forces you to master the topic.

>> How to Perform Effective Team Code Reviews [daedtech.com]

It’s important to not get lost in code reviews and not fixate (too much) on trivial stuff. You should also make sure that code reviews do not become toxic and are not a source of conflict in a team.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Shallow breathing [dilbert.com]

>> Fake websites and SEO [dilbert.com]

>> Forklift jousting [dilbert.com]

5. Pick of the Week

>> Living without expectations [m.signalvnoise.com]


A Guide to GemFire with Spring Data

1. Overview

GemFire is a high-performance, distributed data management infrastructure that sits between the application cluster and back-end data sources.

With GemFire, data can be managed in-memory, which makes access faster. Spring Data provides easy configuration of, and access to, GemFire from a Spring application.

In this article, we’ll take a look at how we can use GemFire to meet our application’s caching requirements.

2. Maven Dependencies

To make use of the Spring Data GemFire support, we first need to add the following dependency in our pom.xml:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-gemfire</artifactId>
    <version>1.9.1.RELEASE</version>
</dependency>

The latest version of this dependency can be found here.

3. GemFire Basic Features

3.1. Cache

The cache in GemFire provides the essential data management services and manages connectivity to other peers.

The cache configuration (cache.xml) describes how the data will be distributed among different nodes:

<cache>
    <region name="region">
        <region-attributes>
            <cache-listener>
                <class-name>
                ...
                </class-name>
            </cache-listener>
        </region-attributes>
    </region>
    ...
</cache>

3.2. Regions

Data regions are a logical grouping within a cache for a single data set.

Simply put, a region lets us store data in multiple VMs in the system without regard to which node within the cluster the data is stored on.

Regions are classified into three broad categories:

  • Replicated region holds the complete set of data on each node. It gives high read performance. Write operations are slower, as each data update needs to be propagated to every node:
    <region name="myRegion" refid="REPLICATE"/>
  • Partitioned region distributes the data so that each node stores only a part of the region contents. A copy of the data is stored on one of the other nodes. It provides good write performance.
    <region name="myRegion" refid="PARTITION"/>
  • Local region resides on the defining member node. There is no connectivity with other nodes within the cluster.
    <region name="myRegion" refid="LOCAL"/>

3.3. Query the Cache

GemFire provides a query language called OQL (Object Query Language) that allows us to refer to the objects stored in GemFire data regions. It’s very similar to SQL in syntax. Let’s see what a very basic query looks like:

SELECT DISTINCT * FROM exampleRegion

GemFire’s QueryService provides methods to create the query object.
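
As a rough sketch, assuming an existing Cache instance with a region named exampleRegion, creating and running such a query looks like this:

// obtain the QueryService from an existing Cache instance (assumed to be configured)
QueryService queryService = cache.getQueryService();

// create and run the OQL query against the region
Query query = queryService.newQuery("SELECT DISTINCT * FROM /exampleRegion");
SelectResults<?> results = (SelectResults<?>) query.execute();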

3.4. Data Serialization

To manage data serialization and deserialization, GemFire provides options other than Java serialization that give higher performance, greater flexibility for data storage and transfer, and support for different languages.

With that in mind, GemFire has defined the Portable Data eXchange (PDX) data format. PDX is a cross-language data format that provides faster serialization and deserialization by storing data in named fields, which can be accessed directly without fully deserializing the object.
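
For instance, PDX serialization can be enabled for whole packages of domain classes when creating the cache – a brief sketch, where the package pattern is hypothetical:

// register a reflection-based PDX serializer for all matching domain classes
Cache cache = new CacheFactory()
  .setPdxSerializer(new ReflectionBasedAutoSerializer("com.baeldung.*"))
  .create();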

3.5. Function Execution

In GemFire, a function can reside on a server and can be invoked from a client application or another server without the need to send the function code itself.

The caller can direct a data-dependent function to operate on a particular data set, or can direct a data-independent function to run on a particular server, member, or member group.
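
As a rough illustration using GemFire’s native API (the function id and the argument here are hypothetical), an execution against a region looks something like:

// build an execution scoped to a region, with a (hypothetical) argument
Execution execution = FunctionService.onRegion(region)
  .withArgs("some-argument");

// invoke the function registered under the (hypothetical) id "greeting"
ResultCollector<?, ?> results = execution.execute("greeting");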

3.6. Continuous Querying

With continuous querying, clients subscribe to server-side events by using SQL-type query filtering. The server sends all the events that modify the query results. The continuous querying event delivery uses the client/server subscription framework.

The syntax for a continuous query is similar to basic queries written in OQL. For example, a query which provides the latest stock data from Stock region can be written as:

SELECT * from StockRegion s where s.stockStatus='active';

To get status updates from this query, an implementation of CqListener needs to be attached to the StockRegion:

<cache>
    <region name="StockRegion">
        <region-attributes refid="REPLICATE">
            ...
            <cache-listener>
                <class-name>...</class-name>
            </cache-listener>
        ...
        </region-attributes>
    </region>
</cache>
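
Alternatively, a continuous query can be registered programmatically through the client’s QueryService – a rough sketch, where StockCqListener is a hypothetical CqListener implementation:

// attach a (hypothetical) listener that will receive the CQ events
CqAttributesFactory cqAttributesFactory = new CqAttributesFactory();
cqAttributesFactory.addCqListener(new StockCqListener());
CqAttributes cqAttributes = cqAttributesFactory.create();

// register and start the continuous query
CqQuery stockTracker = queryService.newCq("stockTracker",
  "SELECT * from /StockRegion s where s.stockStatus='active'", cqAttributes);
stockTracker.execute();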

4. Spring Data GemFire Support

4.1. Java Configuration

To simplify configuration, Spring Data GemFire provides various annotations for configuring core GemFire components:

@Configuration
public class GemfireConfiguration {

    @Bean
    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("name","SpringDataGemFireApplication");
        gemfireProperties.setProperty("mcast-port", "0");
        gemfireProperties.setProperty("log-level", "config");
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache() {
        CacheFactoryBean gemfireCache = new CacheFactoryBean();
        gemfireCache.setClose(true);
        gemfireCache.setProperties(gemfireProperties());
        return gemfireCache;
    }

    @Bean(name="employee")
    LocalRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache) {
        LocalRegionFactoryBean<String, Employee> employeeRegion = new LocalRegionFactoryBean();
        employeeRegion.setCache(cache);
        employeeRegion.setName("employee");
        // ...
        return employeeRegion;
    }
}

To set up the GemFire cache and region, we first have to set up a few specific properties. Here, mcast-port is set to zero, which indicates that multicast discovery and distribution are disabled for this GemFire node. These properties are then passed to CacheFactoryBean to create a GemFireCache instance.

Using the GemFireCache bean, an instance of LocalRegionFactoryBean is created, which represents the region within the cache for the Employee instances.

4.2. Entity Mapping

The library provides support to map objects to be stored in the GemFire grid. The mapping metadata is defined by using annotations on the domain classes:

@Region("employee")
public class Employee {

    @Id
    public String name;
    public double salary;

    @PersistenceConstructor
    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    // standard getters/setters
}

In the example above, we used the following annotations:

  • @Region, to specify the region instance of the Employee class
  • @Id, to annotate the property that shall be utilized as a cache key
  • @PersistenceConstructor, which marks the constructor to be used to create entities, in case multiple constructors are available

4.3. GemFire Repositories

Next, let’s have a look at a central component in Spring Data – the repository:

@Configuration
@EnableGemfireRepositories(basePackages
  = "com.baeldung.spring.data.gemfire.repository")
public class GemfireConfiguration {

    @Autowired
    EmployeeRepository employeeRepository;
    
    // ...
}

4.4. OQL Query Support

The repositories allow the definition of query methods to efficiently run the OQL queries against the region the managed entity is mapped to:

@Repository
public interface EmployeeRepository extends   
  CrudRepository<Employee, String> {

    Employee findByName(String name);

    Iterable<Employee> findBySalaryGreaterThan(double salary);

    Iterable<Employee> findBySalaryLessThan(double salary);

    Iterable<Employee> 
      findBySalaryGreaterThanAndSalaryLessThan(double salary1, double salary2);
}
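
With the repository wired up, saving and querying employees is straightforward – a quick sketch using the entity defined earlier:

// persist an entity to the mapped "employee" region
Employee employee = new Employee("John", 1500.0);
employeeRepository.save(employee);

// run the derived OQL query
Employee result = employeeRepository.findByName("John");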

4.5. Function Execution Support

We also have annotation support available – to simplify working with GemFire function execution.

There are two concerns to address when we make use of functions, the implementation, and the execution.

Let’s see how a POJO can be exposed as a GemFire function using Spring Data annotations:

@Component
public class FunctionImpl {

    @GemfireFunction
    public void greeting(String message){
        // some logic
    }
 
    // ...
}

We need to activate the annotation processing explicitly for @GemfireFunction to work:

@Configuration
@EnableGemfireFunctions
public class GemfireConfiguration {
    // ...
}

For function execution, a process invoking a remote function needs to provide the calling arguments, a function id, and the execution target (onServer, onRegion, onMember, etc.):

@OnRegion(region="employee")
public interface FunctionExecution {
 
    @FunctionId("greeting")
    public void execute(String message);
    
    // ...
}

To enable the function execution annotation processing, we need to activate it using Spring’s component scanning capabilities:

@Configuration
@EnableGemfireFunctionExecutions(
  basePackages = "com.baeldung.spring.data.gemfire.function")
public class GemfireConfiguration {
    // ...
}

5. Conclusion

In this article, we’ve explored GemFire’s essential features and examined how the APIs provided by Spring Data make it easy to work with.

The complete code for this article is available over on GitHub.

Exploring the Spring Boot TestRestTemplate

1. Overview

This article explores the Spring Boot TestRestTemplate. It can be treated as a follow-up of The Guide to RestTemplate, which we firmly recommend reading before focusing on TestRestTemplate. TestRestTemplate can be considered an attractive alternative to RestTemplate.

2. Maven Dependencies

To use TestRestTemplate, you need to add the appropriate dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-test</artifactId>
    <version>1.5.2.RELEASE</version>
</dependency>

You can find the latest version on Maven Central.

3. TestRestTemplate and RestTemplate 

Both of these clients are quite suitable for writing integration tests and can handle communicating with HTTP APIs very well.

For example, they provide us with the same standard methods, headers, and other HTTP constructs.

And all of these operations are well described in The Guide to RestTemplate, so we won’t revisit them here.

Here’s a simple GET request example:

TestRestTemplate testRestTemplate = new TestRestTemplate();
ResponseEntity<String> response = testRestTemplate.
  getForEntity(FOO_RESOURCE_URL + "/1", String.class);
 
assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));

Despite the fact that both classes are very similar, TestRestTemplate does not extend RestTemplate and does offer a few very exciting new features.

4. What’s New in TestRestTemplate?

4.1. Constructor with Basic Auth Credentials

TestRestTemplate provides a constructor with which we can create a template with specified credentials for basic authentication.

All requests performed using this instance will be authenticated using provided credentials:

TestRestTemplate testRestTemplate
 = new TestRestTemplate("user", "passwd");
ResponseEntity<String> response = testRestTemplate.
  getForEntity(URL_SECURED_BY_AUTHENTICATION, String.class);
 
assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));

4.2. Constructor with HttpClientOption

TestRestTemplate also enables us to customize the underlying Apache HTTP client using the HttpClientOption which is an enum in TestRestTemplate with the following options: ENABLE_COOKIES, ENABLE_REDIRECTS, and SSL.

Let’s see a quick example:

TestRestTemplate testRestTemplate = new TestRestTemplate("user", 
  "passwd", TestRestTemplate.HttpClientOption.ENABLE_COOKIES);
ResponseEntity<String> response = testRestTemplate.
  getForEntity(URL_SECURED_BY_AUTHENTICATION, String.class);
 
assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));

In the above example, we’re using the options together with Basic Authentication.

If we don’t need authentication, we can still create a template with a simple constructor:

new TestRestTemplate(TestRestTemplate.HttpClientOption.ENABLE_COOKIES);

4.3. New Method

Constructors aren’t the only way to create a template with specified credentials; we can also add credentials after our template is created. TestRestTemplate gives us the withBasicAuth() method, which adds credentials to an already existing template:

TestRestTemplate testRestTemplate = new TestRestTemplate();
ResponseEntity<String> response = testRestTemplate.withBasicAuth(
  "user", "passwd").getForEntity(URL_SECURED_BY_AUTHENTICATION, 
  String.class);
 
assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));

5. Using Both TestRestTemplate and RestTemplate

TestRestTemplate can work as a wrapper for RestTemplate, e.g. if we are forced to use it because we are dealing with legacy code. You can see below how to create such a simple wrapper:

TestRestTemplate testRestTemplate = new TestRestTemplate(restTemplate);
ResponseEntity<String> response = testRestTemplate.getForEntity(
  FOO_RESOURCE_URL + "/1", String.class);
 
assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));

A slightly more complicated wrapper, using a RestTemplateBuilder, looks like this:

RestTemplateBuilder restTemplateBuilder = new RestTemplateBuilder();
TestRestTemplate testRestTemplate = new TestRestTemplate(
  restTemplateBuilder);
ResponseEntity<String> response = testRestTemplate.
  getForEntity(FOO_RESOURCE_URL + "/1", String.class);
assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));

6. Conclusion

TestRestTemplate is not an extension of RestTemplate, but rather an alternative that simplifies integration testing and facilitates authentication during tests. It helps with customization of the Apache HTTP client, and it can also be used as a wrapper of RestTemplate.

You can check out the examples provided in this article over on GitHub.

Map Serialization and Deserialization with Jackson

1. Overview

In this article, we’ll look at serialization and deserialization of Java maps using Jackson.

We’ll illustrate how to serialize and deserialize Map<String, String>, Map<Object, String>, and Map<Object, Object> to and from JSON-formatted Strings.

2. Maven Configuration

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.8.7</version>
</dependency>

You can get the latest version of Jackson here.

3. Serialization

Serialization converts a Java object into a stream of bytes, which can be persisted or shared as needed. Java Maps are collections which map a key Object to a value Object and are often the least intuitive objects to serialize.

3.1. Map<String, String> Serialization

For the simple case, let’s create a Map<String, String> and serialize it to JSON:

Map<String, String> map = new HashMap<>();
map.put("key", "value");

ObjectMapper mapper = new ObjectMapper();
String jsonResult = mapper.writerWithDefaultPrettyPrinter()
  .writeValueAsString(map);

ObjectMapper is Jackson’s serialization mapper; it allows us to serialize our map and write it out as a pretty-printed JSON String:

{
  "key" : "value"
}

3.2. Map<Object, String> Serialization

You can serialize a map containing a custom Java class with a few extra steps. Let’s create a MyPair class to represent a pair of related String objects.

Note: the getters/setters should be public, and we annotate toString() with @JsonValue to ensure Jackson uses this custom toString() when serializing:

public class MyPair {

    private String first;
    private String second;
    
    @Override
    @JsonValue
    public String toString() {
        return first + " and " + second;
    }
 
    // standard getter, setters, equals, hashCode, constructors
}

Now let’s tell Jackson how to serialize MyPair by extending Jackson’s JsonSerializer:

public class MyPairSerializer extends JsonSerializer<MyPair> {

    private ObjectMapper mapper = new ObjectMapper();

    @Override
    public void serialize(MyPair value, 
      JsonGenerator gen,
      SerializerProvider serializers) 
      throws IOException, JsonProcessingException {
 
        StringWriter writer = new StringWriter();
        mapper.writeValue(writer, value);
        gen.writeFieldName(writer.toString());
    }
}

JsonSerializer, as the name suggests, serializes MyPair to JSON using MyPair‘s toString() method. Jackson provides many Serializer classes to fit your serialization requirements.

We apply MyPairSerializer to our Map<MyPair, String> with the @JsonSerialize annotation. Note that we’ve only told Jackson how to serialize MyPair because it already knows how to serialize String:

@JsonSerialize(keyUsing = MyPairSerializer.class) 
Map<MyPair, String> map;

Let’s test our map serialization:

map = new HashMap<>();
MyPair key = new MyPair("Abbott", "Costello");
map.put(key, "Comedy");

String jsonResult = mapper.writerWithDefaultPrettyPrinter()
  .writeValueAsString(map);

The serialized JSON output is:

{
  "Abbott and Costello" : "Comedy"
}

3.3. Map<Object, Object> Serialization

The most complex case is serializing a Map<Object, Object>, but most of the work is already done. Let’s use Jackson’s MapSerializer for our map, and MyPairSerializer from the previous section for the map’s key and value types:

@JsonSerialize(keyUsing = MapSerializer.class)
Map<MyPair, MyPair> map;
	
@JsonSerialize(keyUsing = MyPairSerializer.class)
MyPair mapKey;

@JsonSerialize(keyUsing = MyPairSerializer.class)
MyPair mapValue;

Let’s test out serializing our Map<MyPair, MyPair>:

mapKey = new MyPair("Abbott", "Costello");
mapValue = new MyPair("Comedy", "1940s");
map.put(mapKey, mapValue);

String jsonResult = mapper.writerWithDefaultPrettyPrinter()
  .writeValueAsString(map);

The serialized JSON output, using MyPair‘s toString() method, is:

{
  "Abbott and Costello" : "Comedy and 1940s"
}

4. Deserialization

Deserialization converts a stream of bytes into a Java object that we can use in code. In this section, we’ll deserialize JSON input into Maps of different signatures.

4.1. Map<String, String> Deserialization

For the simple case, let’s take a JSON-formatted input string and convert it to a Map<String, String> Java collection:

String jsonInput = "{\"key\": \"value\"}";
TypeReference<HashMap<String, String>> typeRef 
  = new TypeReference<HashMap<String, String>>() {};
Map<String, String> map = mapper.readValue(jsonInput, typeRef);

We use Jackson’s ObjectMapper as we did for serialization, using readValue() to process the input. Also, note our use of Jackson’s TypeReference, which we’ll use in all of our deserialization examples, to describe the type of our destination Map. Here is the toString() representation of our map:

{key=value}

4.2. Map<Object, String> Deserialization

Now, let’s change our input JSON and the TypeReference of our destination to Map<MyPair, String>:

String jsonInput = "{\"Abbott and Costello\" : \"Comedy\"}";

TypeReference<HashMap<MyPair, String>> typeRef 
  = new TypeReference<HashMap<MyPair, String>>() {};
Map<MyPair,String> map = mapper.readValue(jsonInput, typeRef);

We need to create a constructor for MyPair that takes a String with both elements and parses them to the MyPair elements:

public MyPair(String both) {
    String[] pairs = both.split("and");
    this.first = pairs[0].trim();
    this.second = pairs[1].trim();
}

And the toString() of our Map<MyPair,String> object is:

{Abbott and Costello=Comedy}

There is another option for the case when we deserialize into a Java class that contains a Map — we can use Jackson’s KeyDeserializer class, one of many Deserialization classes that Jackson offers. We annotate our ClassWithAMap with @JsonCreator, @JsonProperty, and @JsonDeserialize:

public class ClassWithAMap {

  @JsonProperty("map")
  @JsonDeserialize(keyUsing = MyPairDeserializer.class)
  private Map<MyPair, String> map;

  @JsonCreator
  public ClassWithAMap(Map<MyPair, String> map) {
    this.map = map;
  }
 
  // public getters/setters omitted
}

We are telling Jackson to deserialize the Map<MyPair, String> contained in ClassWithAMap, so we need to extend KeyDeserializer to describe how to deserialize the map’s key, a MyPair object, from an input String:

public class MyPairDeserializer extends KeyDeserializer {

  @Override
  public MyPair deserializeKey(
    String key, 
    DeserializationContext ctxt) throws IOException, 
    JsonProcessingException {
      
      return new MyPair(key);
    }
}

We test the deserialization out using readValue:

String jsonInput = "{\"Abbott and Costello\":\"Comedy\"}";

ClassWithAMap classWithMap = mapper.readValue(jsonInput,
  ClassWithAMap.class);

Again, the toString() method of our ClassWithAMap’s map gives us the output we expect:

{Abbott and Costello=Comedy}

4.3. Map<Object,Object> Deserialization

Lastly, let’s change our input JSON and the TypeReference of our destination to Map<MyPair, MyPair>:

String jsonInput = "{\"Abbott and Costello\" : \"Comedy and 1940s\"}";
TypeReference<HashMap<MyPair, MyPair>> typeRef 
  = new TypeReference<HashMap<MyPair, MyPair>>() {};
Map<MyPair,MyPair> map = mapper.readValue(jsonInput, typeRef);

And the toString() of our Map<MyPair, MyPair> object is:

{Abbott and Costello=Comedy and 1940s}

5. Conclusion

In this quick tutorial, we’ve seen how to serialize and deserialize Java Maps to and from JSON-formatted Strings.

As always, you can check out the example provided in this article in the Github repository.

A CLI with Spring Shell

1. Overview

Simply put, the Spring Shell project provides an interactive shell for processing commands and building a full-featured CLI using the Spring programming model.

In this article, we’ll explore its features, key classes, and annotations, and implement several custom commands and customizations.

2. Maven Dependency

First, we need to add the spring-shell dependency to our pom.xml:

<dependency>
    <groupId>org.springframework.shell</groupId>
    <artifactId>spring-shell</artifactId>
    <version>1.2.0.RELEASE</version>
</dependency>

The latest version of this artifact can be found here.

3. Accessing the Shell

There are two main ways to access the shell in our applications.

The first is to bootstrap the shell in the entry point of our application and let the user enter the commands:

public static void main(String[] args) throws IOException {
    Bootstrap.main(args);
}

The second is to obtain a JLineShellComponent and execute the commands programmatically:

Bootstrap bootstrap = new Bootstrap();
JLineShellComponent shell = bootstrap.getJLineShellComponent();
shell.executeCommand("help");

We’re going to use the first approach since it’s best suited for the examples in this article; however, in the source code you can find test cases that use the second form.

4. Commands

There are already several built-in commands in the shell, such as clear, help, exit, etc., that provide the standard functionality of every CLI.

Custom commands can be exposed by adding methods marked with the @CliCommand annotation inside a Spring component implementing the CommandMarker interface.

Every argument of that method must be marked with a @CliOption annotation; if we fail to do this, we’ll encounter several errors when trying to execute the command.

4.1. Adding Commands to the Shell

First, we need to let the shell know where our commands are. For this, it requires the file META-INF/spring/spring-shell-plugin.xml to be present in our project; there, we can use the component scanning functionality of Spring:

<beans ... >
    <context:component-scan base-package="org.baeldung.shell.simple" />
</beans>

Once the components are registered and instantiated by Spring, they are registered with the shell parser, and their annotations are processed.

Let’s create two simple commands, one to grab the contents of a URL and display them, and another to save those contents to a file:

@Component
public class SimpleCLI implements CommandMarker {

    @CliCommand(value = { "web-get", "wg" })
    public String webGet(
      @CliOption(key = "url") String url) {
        return getContentsOfUrlAsString(url);
    }
    
    @CliCommand(value = { "web-save", "ws" })
    public String webSave(
      @CliOption(key = "url") String url,
      @CliOption(key = { "out", "file" }) String file) throws IOException {
        String contents = getContentsOfUrlAsString(url);
        try (PrintWriter out = new PrintWriter(file)) {
            out.write(contents);
        }
        return "Done.";
    }
}

Note that we can pass more than one string to the value and key attributes of @CliCommand and @CliOption respectively; this permits us to expose several commands and arguments that behave the same.

Now, let’s check if everything is working as expected:

spring-shell>web-get --url https://www.google.com
<!doctype html ... 
spring-shell>web-save --url https://www.google.com --out contents.txt
Done.

4.2. Availability of Commands

We can use the @CliAvailabilityIndicator annotation on a method returning a boolean to change, at runtime, whether a command should be exposed to the shell.

First, let’s create a method to modify the availability of the web-save command:

private boolean adminEnableExecuted = false;

@CliAvailabilityIndicator(value = "web-save")
public boolean isAdminEnabled() {
    return adminEnableExecuted;
}

Now, let’s create a command to change the adminEnableExecuted variable:

@CliCommand(value = "admin-enable")
public String adminEnable() {
    adminEnableExecuted = true;
    return "Admin commands enabled.";
}

Finally, let’s verify it:

spring-shell>web-save --url https://www.google.com --out contents.txt
Command 'web-save --url https://www.google.com --out contents.txt'
  was found but is not currently available
  (type 'help' then ENTER to learn about this command)
spring-shell>admin-enable
Admin commands enabled.
spring-shell>web-save --url https://www.google.com --out contents.txt
Done.

4.3. Required Arguments

By default, all command arguments are optional. However, we can make them required with the mandatory attribute of the @CliOption annotation:

@CliOption(key = { "out", "file" }, mandatory = true)

Now, we can check that omitting it results in an error:

spring-shell>web-save --url https://www.google.com
You should specify option (--out) for this command

4.4. Default Arguments

An empty key value for a @CliOption makes that argument the default one; it will receive the values introduced in the shell that are not part of any named argument:

@CliOption(key = { "", "url" })

Now, let’s check that it works as expected:

spring-shell>web-get https://www.google.com
<!doctype html ...

4.5. Helping Users

The @CliCommand and @CliOption annotations provide a help attribute that allows us to guide our users when using the built-in help command or when tabbing to get auto-completion.

Let’s modify our web-get to add custom help messages:

@CliCommand(
  // ...
  help = "Displays the contents of a URL")
public String webGet(
  @CliOption(
    // ...
    help = "URL whose contents will be displayed."
  ) String url) {
    // ...
}

Now, the user can know exactly what our command does:

spring-shell>help web-get
Keyword:                    web-get
Keyword:                    wg
Description:                Displays the contents of a URL.
  Keyword:                  ** default **
  Keyword:                  url
    Help:                   URL whose contents will be displayed.
    Mandatory:              false
    Default if specified:   '__NULL__'
    Default if unspecified: '__NULL__'

* web-get - Displays the contents of a URL.
* wg - Displays the contents of a URL.

5. Customization

There are three ways to customize the shell: by implementing the BannerProvider, PromptProvider, and HistoryFileNameProvider interfaces, all of them with default implementations already provided.

Also, we need to use the @Order annotation to allow our providers to take precedence over those implementations.

Let’s create a new banner to begin our customization:

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class SimpleBannerProvider extends DefaultBannerProvider {

    public String getBanner() {
        StringBuffer buf = new StringBuffer();
        buf.append("=======================================")
            .append(OsUtils.LINE_SEPARATOR);
        buf.append("*          Baeldung Shell             *")
            .append(OsUtils.LINE_SEPARATOR);
        buf.append("=======================================")
            .append(OsUtils.LINE_SEPARATOR);
        buf.append("Version:")
            .append(this.getVersion());
        return buf.toString();
    }

    public String getVersion() {
        return "1.0.1";
    }

    public String getWelcomeMessage() {
        return "Welcome to Baeldung CLI";
    }

    public String getProviderName() {
        return "Baeldung Banner";
    }
}

Note that we can also change the version number and welcome message.

Now, let’s change the prompt:

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class SimplePromptProvider extends DefaultPromptProvider {

    public String getPrompt() {
        return "baeldung-shell";
    }

    public String getProviderName() {
        return "Baeldung Prompt";
    }
}

Finally, let’s modify the name of the history file:

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class SimpleHistoryFileNameProvider
  extends DefaultHistoryFileNameProvider {

    public String getHistoryFileName() {
        return "baeldung-shell.log";
    }

    public String getProviderName() {
        return "Baeldung History";
    }

}

The history file will record all commands executed in the shell and will be put alongside our application.

With everything in place, we can call our shell and see it in action:

=======================================
*          Baeldung Shell             *
=======================================
Version:1.0.1
Welcome to Baeldung CLI
baeldung-shell>

6. Converters

So far, we’ve only used simple types as arguments to our commands. Common types such as Integer, Date, Enum, File, etc., have a default converter already registered.

By implementing the Converter interface, we can also add our converters to receive custom objects.

Let’s create a converter that can transform a String into a URL:

@Component
public class SimpleURLConverter implements Converter<URL> {

    public URL convertFromText(
      String value, Class<?> requiredType, String optionContext) {
        try {
            return new URL(value);
        } catch (MalformedURLException e) {
            // the checked exception can't propagate from the overridden method
            throw new IllegalArgumentException("Invalid URL: " + value, e);
        }
    }

    public boolean getAllPossibleValues(
      List<Completion> completions,
      Class<?> requiredType,
      String existingData,
      String optionContext,
      MethodTarget target) {
        return false;
    }

    public boolean supports(Class<?> requiredType, String optionContext) {
        return URL.class.isAssignableFrom(requiredType);
    }
}

Finally, let’s modify our web-get and web-save commands:

public String webGet(... URL url) {
    // ...
}

public String webSave(... URL url) {
    // ...
}

As you may have guessed, the commands behave the same.

7. Conclusion

In this article, we had a brief look at the core features of the Spring Shell project. We contributed our own commands, customized the shell with our providers, changed the availability of commands according to different runtime conditions, and created a simple type converter.

Complete source code for this article can be found over on Github.

Implementing a Custom Spring AOP Annotation

1. Introduction

In this article, we’ll implement a custom AOP annotation using the AOP support in Spring.

First, we’ll give a high-level overview of AOP, explaining what it is and its advantages. Following this, we’ll implement our annotation step by step, gradually building up a more in-depth understanding of AOP concepts as we go.

The outcome will be a better understanding of AOP and the ability to create our custom Spring annotations in the future.

2. What is an AOP Annotation?

To quickly summarize, AOP stands for aspect-oriented programming. Essentially, it is a way of adding behavior to existing code without modifying that code.

For a detailed introduction to AOP, there are articles on AOP pointcuts and advice. This article assumes we have a basic knowledge already.

The type of AOP that we will be implementing in this article is annotation driven. We may be familiar with this already if we’ve used the Spring @Transactional annotation:

@Transactional
public void orderGoods(Order order) {
   // A series of database calls to be performed in a transaction
}

The key here is non-invasiveness. By using annotation meta-data, our core business logic isn’t polluted with our transaction code. This makes it easier to reason about, refactor, and to test in isolation.

Sometimes, people developing Spring applications can see this as ‘Spring Magic’, without thinking in much detail about how it’s working. In reality, what’s happening isn’t particularly complicated. However, once we’ve completed the steps in this article, we will be able to create our own custom annotation in order to understand and leverage AOP.

3. Maven Dependency

First, let’s add our Maven dependencies.

For this example, we’ll be using Spring Boot, as its convention over configuration approach lets us get up and running as quickly as possible:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.2.RELEASE</version>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
</dependencies>

Note that we’ve included the AOP starter, which pulls in the libraries we need to start implementing aspects.

4. Creating our Custom Annotation

The annotation we are going to create is one which will be used to log the amount of time it takes a method to execute. Let’s create our annotation:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface LogExecutionTime {

}

Although a relatively simple implementation, it’s worth noting what the two meta-annotations are used for.

The @Target annotation tells us where our annotation will be applicable. Here we are using ElementType.METHOD, which means it will only work on methods. If we tried to use the annotation anywhere else, then our code would fail to compile. This behavior makes sense, as our annotation will be used for logging method execution time.

And @Retention just states whether the annotation will be available to the JVM at runtime or not. By default it is not, so Spring AOP would not be able to see the annotation. This is why it’s been reconfigured.

5. Creating our Aspect

Now we have our annotation, let’s create our aspect. This is just the module that will encapsulate our cross-cutting concern, which in our case is method execution time logging. It’s simply a class annotated with @Aspect:

@Aspect
@Component
public class ExampleAspect {

}

We’ve also included the @Component annotation, as our class also needs to be a Spring bean to be detected. Essentially, this is the class where we will implement the logic that we want our custom annotation to inject.

6. Creating our Pointcut and Advice

Now, let’s create our pointcut and advice. This will be an annotated method that lives in our aspect:

@Around("@annotation(LogExecutionTime)")
public Object logExecutionTime(ProceedingJoinPoint joinPoint) throws Throwable {
    return joinPoint.proceed();
}

Technically this doesn’t change the behavior of anything yet, but there’s still quite a lot going on that needs analysis.

First, we have annotated our method with @Around. This is our advice, and around advice means we are adding extra code both before and after method execution. There are other types of advice, such as before and after, but they will be left out of scope for this article.

Next, our @Around annotation has a pointcut argument. Our pointcut just says, ‘Apply this advice to any method which is annotated with @LogExecutionTime.’ There are lots of other types of pointcuts, but they will again be left out of scope.

The method logExecutionTime() itself is our advice. There is a single argument, ProceedingJoinPoint. In our case, this will be an executing method which has been annotated with @LogExecutionTime. 

Finally, when our annotated method ends up being called, what will happen is our advice will be called first. Then it’s up to our advice to decide what to do next. In our case, our advice is doing nothing other than calling proceed(), which just calls the original annotated method.

7. Logging our Execution Time

Now we have our skeleton in place, all we need to do is add some extra logic to our advice. This will be what logs the execution time in addition to calling the original method. Let’s add this extra behavior to our advice:

@Around("@annotation(LogExecutionTime)")
public Object logExecutionTime(ProceedingJoinPoint joinPoint) throws Throwable {
    long start = System.currentTimeMillis();

    Object proceed = joinPoint.proceed();

    long executionTime = System.currentTimeMillis() - start;

    System.out.println(joinPoint.getSignature() + " executed in " + executionTime + "ms");
    return proceed;
}

Again, we’ve not done anything particularly complicated here. We’ve just recorded the current time, executed the method, then printed the amount of time it took to the console. We’re also logging the method signature, which is provided to us by the joinPoint instance. We would also be able to gain access to other bits of information if we wanted to, such as method arguments.

Now, let’s try annotating a method with @LogExecutionTime, and then executing it to see what happens. Note that this must be a Spring Bean to work correctly:

@LogExecutionTime
public void serve() throws InterruptedException {
    Thread.sleep(2000);
}

After execution, we should see the following logged to the console:

void org.baeldung.Service.serve() executed in 2030ms

8. Conclusion

In this article, we’ve leveraged Spring Boot AOP to create our custom annotation, which we can apply to Spring beans to inject extra behavior to them at runtime.

The source code for our application is available over on GitHub; this is a Maven project which should be able to run as is.

Ratpack Google Guice Integration

1. Overview

In our earlier article, we’ve shown what building scalable applications with Ratpack looks like.

In this tutorial, we’ll discuss how to use Google Guice with Ratpack as the dependency management engine.

2. Why Google Guice?

Google Guice is an open source software framework for the Java platform released by Google under the Apache License.

It’s an extremely lightweight dependency management module that is easy to configure. Moreover, for the sake of usability, it only allows constructor-level dependency injection.

More details about Guice can be found here.

3. Using Guice with Ratpack

3.1. Maven Dependency

Ratpack has first-class support for Guice dependency management. Therefore, we don’t have to manually add any external dependency for Guice; it already comes pre-built with Ratpack. More details on Ratpack‘s Guice support can be found here.

Hence, we just need to add the following core Ratpack dependency to the pom.xml:

<dependency>
    <groupId>io.ratpack</groupId>
    <artifactId>ratpack-core</artifactId>
    <version>1.4.5</version>
</dependency>

You can check the latest version on Maven Central.

3.2. Building Service Modules

Once done with the Maven configuration, we’ll build a service and make good use of some simple dependency injection in our example here.

Let’s create one service interface and one service class:

public interface DataPumpService {
    String generate();
}

This is the service interface whose implementation will be injected. Now, we have to build the service class that implements it and defines the service method generate():

public class DataPumpServiceImpl implements DataPumpService {

    @Override
    public String generate() {
        return UUID.randomUUID().toString();
    }

}

An important point to note here is that since we are using Ratpack’s Guice module, we don’t need to use Guice‘s @ImplementedBy or @Inject annotation to manually inject the service class.

3.3. Dependency Management

There are two ways to perform dependency management with Google Guice.

The first is to use Guice‘s AbstractModule, and the other is to use Guice’s instance binding mechanism:

public class DependencyModule extends AbstractModule {

    @Override
    public void configure() {
        bind(DataPumpService.class).to(DataPumpServiceImpl.class)
          .in(Scopes.SINGLETON);
    }

}

A few points to note here:

  • by extending AbstractModule we are overriding the default configure() method
  • we are mapping the DataPumpServiceImpl class to the DataPumpService interface, which is the service layer built earlier
  • we have also bound the dependency in singleton scope

3.4. Integration with the Existing Application

Since the dependency management configuration is ready, let’s now integrate it:

public class Application {

    public static void main(String[] args) throws Exception {

      RatpackServer
          .start(server -> server.registry(Guice
            .registry(bindings -> bindings.module(DependencyModule.class)))
            .handlers(chain -> chain.get("randomString", ctx -> {
                DataPumpService dataPumpService = ctx.get(DataPumpService.class);
                ctx.render(dataPumpService.generate().length());
            })));
    }
}

Here, with registry(), we’ve bound the DependencyModule class which extends AbstractModule. Ratpack’s Guice module will internally do the rest of the needful and inject the service into the application Context.

Since it’s available in the application-context, we can now fetch the service instance from anywhere in the application. Here, we have fetched the DataPumpService instance from the current context and mapped the /randomString URL with the service’s generate() method.

As a result, whenever the /randomString URL is hit, it will return random string fragments.

3.5. Runtime Instance Binding

As mentioned earlier, we will now use Guice’s instance binding mechanism to do the dependency management at runtime. It’s almost the same as the previous technique, apart from using Guice’s bindInstance() method instead of an AbstractModule to inject the dependency:

public class Application {

    public static void main(String[] args) throws Exception {

      RatpackServer.start(server -> server
        .registry(Guice.registry(bindings -> bindings
        .bindInstance(DataPumpService.class, new DataPumpServiceImpl())))
        .handlers(chain -> chain.get("randomString", ctx -> {
            DataPumpService dataPumpService = ctx.get(DataPumpService.class);
            ctx.render(dataPumpService.generate());
        })));
    }
}

Here, by using bindInstance(), we’re performing instance binding, i.e. binding the DataPumpService interface to a DataPumpServiceImpl instance.

In this way, we can inject the service instance to the application-context as we did in the earlier example.

Although we can use any of the two techniques for dependency management, it’s always better to use AbstractModule since it’ll completely separate the dependency management module from the application code. This way the code will be a lot cleaner and easier to maintain in the future.

3.6. Factory Binding

There’s one more way to manage dependencies, called factory binding. It’s not directly related to Guice’s implementation, but it can work in parallel with Guice as well.

A factory class decouples the client from the implementation. A simple factory uses static methods to get and set mock implementations for interfaces.

We can use already created service classes to enable factory bindings. We just need to create one factory class just like DependencyModule (which extends Guice’s AbstractModule class) and bind the instances via static methods:

public class ServiceFactory {

    private static DataPumpService instance;

    public static void setInstance(DataPumpService dataPumpService) {
        instance = dataPumpService;
    }

    public static DataPumpService getInstance() {
        if (instance == null) {
            return new DataPumpServiceImpl();
        }
        return instance;
    }
}

Here, we’re statically injecting the service interface into the factory class. Hence, only one instance of that interface will be available to the factory class at a time. We’ve then created normal getter/setter methods to set and fetch the service instance.

Note that in the getter method we’ve made an explicit null check; only if the instance is null do we create an instance of the implementing service class and return it.

Thereafter, we can use this factory instance in the application chain:

.get("factory", ctx -> ctx.render(ServiceFactory.getInstance().generate()))

4. Testing

We will use Ratpack‘s MainClassApplicationUnderTest to test our application with the help of Ratpack‘s internal JUnit testing framework. We have to add the necessary dependency (ratpack-test) for it.

The point to note here is that since the URL content is dynamic, we can’t predict it while writing the test case. Hence, we match the content length of the /randomString URL endpoint in the test case:

@RunWith(JUnit4.class)
public class ApplicationTest {

    MainClassApplicationUnderTest appUnderTest
      = new MainClassApplicationUnderTest(Application.class);

    @Test
    public void givenStaticUrl_getDynamicText() {
        assertEquals(21, appUnderTest.getHttpClient()
          .getText("/randomString").length());
    }
	
    @After
    public void shutdown() {
        appUnderTest.close();
    }
}

5. Conclusion

In this quick article, we’ve shown how to use Google Guice with Ratpack.

As always, the full source code is available over on GitHub.

Introduction to Vert.x

1. Overview

In this article, we’ll discuss Vert.x, cover its core concepts, and create a simple RESTful web service with it.

We’ll start by covering the foundational concepts of the toolkit, slowly move forward to an HTTP server, and then build the RESTful service.

2. About Vert.x

Vert.x is an open source, reactive and polyglot software development toolkit from the developers of Eclipse.

Reactive programming is a programming paradigm, associated with asynchronous streams, which respond to any changes or events.

Similarly, Vert.x uses an event bus to communicate with different parts of the application, and passes events asynchronously to handlers when they are available.

We call it polyglot due to its support for multiple JVM and non-JVM languages like Java, Groovy, Ruby, Python, and JavaScript.

3. Setup

To use Vert.x we need to add the Maven dependency:

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-core</artifactId>
    <version>3.4.1</version>
</dependency>

The latest version of the dependency can be found here.

4. Verticles

Verticles are the pieces of code that the Vert.x engine executes. The toolkit provides us with an abstract verticle class, which can be extended and implemented as we want.

Being polyglot, verticles can be written in any of the supported languages. An application would typically be composed of multiple verticles running in the same Vert.x instance and communicate with each other using events via the event bus.

To create a verticle in Java, the class must implement the io.vertx.core.Verticle interface, or one of its subclasses.

5. Event Bus

It is the nervous system of any Vert.x application.

Being reactive, verticles remain dormant until they receive a message or event. Verticles communicate with each other through the event bus. The message can be anything from a string to a complex object.

Message handling is ideally asynchronous, messages are queued to the event bus, and control is returned to the sender. Later it’s dequeued to the listening verticle. The response is sent using Future and callback methods.
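
As a quick illustration, here’s a minimal sketch of a consumer and a sender on the event bus (the address name here is just an example we made up):

// register a consumer on an example address
vertx.eventBus().consumer("com.baeldung.greeting", message -> {
    message.reply("Hello from the consumer");
});

// send a message and handle the asynchronous reply
vertx.eventBus().send("com.baeldung.greeting", "Hi", reply -> {
    if (reply.succeeded()) {
        LOGGER.info("Reply: " + reply.result().body());
    }
});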

6. Simple Vert.x Application

Let’s create a simple application with a verticle and deploy it using a vertx instance.

To create our verticle, we’ll extend the io.vertx.core.AbstractVerticle class and override the start() method:

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start(Future<Void> future) {
        LOGGER.info("Welcome to Vertx");
    }
}

The start() method will be invoked by the vertx instance when the verticle is deployed. The method takes io.vertx.core.Future as a parameter, which can be used to discover the status of an asynchronous deployment of the verticle.

Now let’s deploy the verticle:

public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new HelloVerticle());
}

Similarly, we can override the stop() method from the AbstractVerticle class, which will be invoked while shutting down the verticle:

@Override
public void stop() {
    LOGGER.info("Shutting down application");
}

7. HTTP Server

Now let’s spin up an HTTP server using a verticle:

@Override
public void start(Future<Void> future) {
    vertx.createHttpServer()
      .requestHandler(r -> r.response().end("Welcome to Vert.x Intro"))
      .listen(config().getInteger("http.port", 9090), 
        result -> {
          if (result.succeeded()) {
              future.complete();
          } else {
              future.fail(result.cause());
          }
      });
}

We have overridden the start() method to create an HTTP server and attached a request handler to it. The requestHandler() method is called every time the server receives a request.

Finally, the server is bound to a port, and an AsyncResult<HttpServer> handler is passed to the listen() method. Depending on whether the server startup succeeded, we call future.complete(), or future.fail() in the case of any errors.

Note that the config().getInteger() method reads the value of the HTTP port configuration, which is loaded from an external conf.json file.
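One possible way to wire that up is to read conf.json and pass it along via DeploymentOptions when deploying the verticle – a minimal sketch, assuming the file sits in the working directory:

Vertx vertx = Vertx.vertx();
vertx.fileSystem().readFile("conf.json", read -> {
    if (read.succeeded()) {
        // parse the file contents and hand them to the verticle as its config()
        JsonObject config = new JsonObject(read.result().toString());
        vertx.deployVerticle(new SimpleServerVerticle(),
          new DeploymentOptions().setConfig(config));
    }
});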

Let’s test our server:

@Test
public void whenReceivedResponse_thenSuccess(TestContext testContext) {
    Async async = testContext.async();

    vertx.createHttpClient()
      .getNow(port, "localhost", "/", response -> {
        response.handler(responseBody -> {
          testContext.assertTrue(responseBody.toString().contains("Welcome"));
          async.complete();
        });
      });
}

For the test, let’s use vertx-unit along with JUnit:

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-unit</artifactId>
    <version>3.4.1</version>
    <scope>test</scope>
</dependency>

We can get the latest version here.

The verticle is deployed in a vertx instance in the setup() method of the unit test:

@Before
public void setup(TestContext testContext) {
    vertx = Vertx.vertx();

    vertx.deployVerticle(SimpleServerVerticle.class.getName(), 
      testContext.asyncAssertSuccess());
}

Similarly, the vertx instance is closed in the @After tearDown() method:

@After
public void tearDown(TestContext testContext) {
    vertx.close(testContext.asyncAssertSuccess());
}

Notice that the @Before setup() method takes a TestContext argument. This helps us control and test the asynchronous behavior of the test. For example, the verticle deployment is async, so basically we can’t test anything unless it’s deployed correctly.

We have a second parameter to the deployVerticle() method, testContext.asyncAssertSuccess(). This is used to know whether the server deployed correctly or any failures occurred. It waits for the future.complete() or future.fail() in the server verticle to be called. In the case of a failure, it fails the test.
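One detail not shown above: for the TestContext parameters to be injected, the test class has to run with the vertx-unit JUnit runner. A minimal sketch (the class name is just an example):

@RunWith(VertxUnitRunner.class)
public class SimpleServerVerticleTest {

    private Vertx vertx;

    // the setup(), tearDown() and test methods shown above go here
}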

8. RESTful Web Service

We have created an HTTP server; let’s now use it to host a RESTful web service. To do so, we will need another Vert.x module called vertx-web, which provides a lot of additional features for web development on top of vertx-core.

Let’s add the dependency to our pom.xml:

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
    <version>3.4.1</version>
</dependency>

We can find the latest version here.

8.1. Router and Routes

Let’s create a router for our web service. This router defines a simple GET route and its handler method getArticles():

Router router = Router.router(vertx);
router.get("/api/baeldung/articles/article/:id")
  .handler(this::getArticles);

The getArticles() method is a simple method that returns a new Article object:

private void getArticles(RoutingContext routingContext) {
    String articleId = routingContext.request()
      .getParam("id");
    Article article = new Article(articleId, 
      "This is an intro to vertx", "baeldung", "01-02-2017", 1578);

    routingContext.response()
      .putHeader("content-type", "application/json")
      .setStatusCode(200)
      .end(Json.encodePrettily(article));
}

When the router receives a request, it looks for the matching route and passes the request on. Each route has a handler method associated with it that does something with the request.

In our case, the handler invokes the getArticles() method. It receives the RoutingContext object as an argument, derives the path parameter id, and creates an Article object with it.

In the last part of the method, we invoke the response() method on the RoutingContext object, put the headers, set the HTTP response code, and end the response with the JSON-encoded article object.
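The Article class itself isn’t shown here; a minimal sketch based on the constructor call above might look like the following – apart from id, the field names are assumptions:

public class Article {

    private String id;
    private String content;
    private String author;
    private String datePublished;
    private int wordCount;

    public Article(String id, String content, String author,
      String datePublished, int wordCount) {
        this.id = id;
        this.content = content;
        this.author = author;
        this.datePublished = datePublished;
        this.wordCount = wordCount;
    }

    // getters and setters omitted for brevity – Json.encodePrettily() relies on the getters
}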

8.2. Adding Router to Server

Now let’s add the router created in the previous section to the HTTP server:

vertx.createHttpServer()
  .requestHandler(router::accept)
  .listen(config().getInteger("http.port", 8080), 
    result -> {
      if (result.succeeded()) {
          future.complete();
      } else {
          future.fail(result.cause());
      }
});

Notice that we have added requestHandler(router::accept) to the server. This instructs the server to invoke the accept() method of the router object when any request is received.

Now let’s test our WebService:

@Test
public void givenId_whenReceivedArticle_thenSuccess(TestContext testContext) {
    Async async = testContext.async();

    vertx.createHttpClient()
      .getNow(8080, "localhost", "/api/baeldung/articles/article/12345", 
        response -> {
            response.handler(responseBody -> {
            testContext.assertTrue(
              responseBody.toString().contains("\"id\" : \"12345\""));
            async.complete();
        });
      });
}

9. Packaging Vert.x Application

To package the application as a deployable Java Archive (.jar), let’s use the Maven Shade plugin with the following configuration in the execution tag:

<configuration>
    <transformers>
        <transformer 
          implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <manifestEntries>
                <Main-Class>io.vertx.core.Starter</Main-Class>
                <Main-Verticle>com.baeldung.SimpleServerVerticle</Main-Verticle>
            </manifestEntries>
        </transformer>
    </transformers>
    <artifactSet />
    <outputFile>
        ${project.build.directory}/${project.artifactId}-${project.version}-app.jar
    </outputFile>
</configuration>

In the manifestEntries, Main-Verticle indicates the starting point of the application, and Main-Class is a Vert.x class which creates the vertx instance and deploys the Main-Verticle.

10. Conclusion

In this introductory article, we discussed the Vert.x toolkit and its fundamental concepts. We saw how to create an HTTP server and a RESTful web service with Vert.x, and showed how to test them using vertx-unit.

Finally, we packaged the application as an executable jar.

The complete implementation of the code snippets is available over on GitHub.


Create a Custom FailureAnalyzer with Spring Boot

1. Overview

A FailureAnalyzer in Spring Boot offers a way to intercept exceptions that occur during the startup of an application causing an application startup failure.

The FailureAnalyzer replaces the stack trace of the exception with a more readable message represented by a FailureAnalysis object that contains a description of the error and a suggested course of action.

Boot contains a series of analyzers for common startup exceptions such as PortInUseException, NoUniqueBeanDefinitionException, and UnsatisfiedDependencyException. These can be found in the org.springframework.boot.diagnostics package.

In this quick tutorial, we’re going to take a look at how we can add our own custom FailureAnalyzer to the existing ones.

2. Creating a Custom FailureAnalyzer

To create a custom FailureAnalyzer, we’ll simply extend the abstract class AbstractFailureAnalyzer – which intercepts a specified exception type – and implement the analyze() API.

The framework provides a BeanNotOfRequiredTypeFailureAnalyzer implementation that deals with the exception BeanNotOfRequiredTypeException only if the bean being injected is of a dynamic proxy class.

Let’s create a custom FailureAnalyzer that deals with all exceptions of type BeanNotOfRequiredTypeException. Our class intercepts the exception and creates a FailureAnalysis object with helpful description and action messages:

public class MyBeanNotOfRequiredTypeFailureAnalyzer 
  extends AbstractFailureAnalyzer<BeanNotOfRequiredTypeException> {

    @Override
    protected FailureAnalysis analyze(Throwable rootFailure, 
      BeanNotOfRequiredTypeException cause) {
        return new FailureAnalysis(getDescription(cause), getAction(cause), cause);
    }

    private String getDescription(BeanNotOfRequiredTypeException ex) {
        return String.format("The bean %s could not be injected as %s "
          + "because it is of type %s",
          ex.getBeanName(),
          ex.getRequiredType().getName(),
          ex.getActualType().getName());
    }

    private String getAction(BeanNotOfRequiredTypeException ex) {
        return String.format("Consider creating a bean with name %s of type %s",
          ex.getBeanName(),
          ex.getRequiredType().getName());
    }
}

3. Registering the Custom FailureAnalyzer

For the custom FailureAnalyzer to be considered by Spring Boot, it is mandatory to register it in a standard resources/META-INF/spring.factories file that contains the org.springframework.boot.diagnostics.FailureAnalyzer key, with the fully qualified name of our class as the value:

org.springframework.boot.diagnostics.FailureAnalyzer=\
  com.baeldung.failureanalyzer.MyBeanNotOfRequiredTypeFailureAnalyzer

4. Custom FailureAnalyzer in Action

Let’s create a very simple example in which we attempt to inject a bean of an incorrect type to see how our custom FailureAnalyzer behaves.

Let’s create two classes MyDAO and MySecondDAO and annotate the second class as a bean called myDAO:

public class MyDAO { }

@Repository("myDAO")
public class MySecondDAO { }

Next, in a MyService class, we will attempt to inject the myDAO bean, which is of type MySecondDAO, into a variable of type MyDAO:

@Service
public class MyService {

    @Resource(name = "myDAO")
    private MyDAO myDAO;
}

Upon running the Spring Boot application, the startup will fail with the following console output:

***************************
APPLICATION FAILED TO START
***************************

Description:

The bean myDAO could not be injected as com.baeldung.failureanalyzer.MyDAO 
  because it is of type com.baeldung.failureanalyzer.MySecondDAO$$EnhancerBySpringCGLIB$$d902559e

Action:

Consider creating a bean with name myDAO of type com.baeldung.failureanalyzer.MyDAO

5. Conclusion

In this quick tutorial, we’ve focused on how to implement a custom Spring Boot FailureAnalyzer.

As always, the full source code of the example can be found over on GitHub.

List of In-Memory Databases

1. Overview

In-memory databases rely on system memory as opposed to disk space for storage of data. Because memory access is faster than disk access, these databases are naturally faster.

Of course, we can only use an in-memory database in applications and scenarios where data doesn’t need to be persisted, or for the purpose of executing tests faster. They’re often run as embedded databases, which means they are created when a process starts and discarded when the process ends – very convenient for testing, because we don’t need to set up an external database.

In the following sections, we will take a look at some of the most commonly used in-memory databases for the Java environment and the configuration necessary for each of them.

2. H2 Database

H2 is an open source database written in Java that supports standard SQL for both embedded and standalone databases. It is very fast and contained within a JAR of only around 1.5 MB.

2.1. Maven Dependency

To use H2 databases in an application, we need to add the following dependency:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.194</version>
</dependency>

The latest version of the H2 database can be downloaded from Maven Central.

2.2. Configuration

To connect to an H2 in-memory database, we can use a connection string with the protocol mem, followed by the database name. The driverClassName, URL, username, and password properties can be placed in a .properties file to be read by our application:

driverClassName=org.h2.Driver
url=jdbc:h2:mem:myDb;DB_CLOSE_DELAY=-1
username=sa
password=sa

These properties ensure the myDb database is created automatically on startup of the application.

By default, when a connection to the database is closed, the database is closed as well. If we want the database to last for as long as the JVM is running, we can specify the property DB_CLOSE_DELAY=-1.
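To make this concrete, here’s a minimal sketch of opening a plain JDBC connection with the properties above; the table and values are just examples:

// the JDBC 4 DriverManager auto-loads the H2 driver from the classpath
try (Connection conn = DriverManager.getConnection(
       "jdbc:h2:mem:myDb;DB_CLOSE_DELAY=-1", "sa", "sa");
     Statement stmt = conn.createStatement()) {
    stmt.execute("CREATE TABLE city(id INT PRIMARY KEY, name VARCHAR(255))");
    stmt.execute("INSERT INTO city VALUES (1, 'Bucharest')");
}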

If we are using the database with Hibernate, we also need to specify the Hibernate dialect:

hibernate.dialect=org.hibernate.dialect.H2Dialect

The H2 database is regularly maintained and provides detailed documentation on h2database.com.

3. HSQLDB (HyperSQL Database)

HSQLDB is an open source project, also written in Java, representing a relational database. It follows the SQL and JDBC standards and supports SQL features such as stored procedures and triggers.

It can be used in the in-memory mode, or it can be configured to use disk storage.

3.1. Maven Dependency

To develop an application using HSQLDB, we need the Maven dependency:

<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.3.4</version>
</dependency>

You can find the latest version of HSQLDB on Maven Central.

3.2. Configuration

The connection properties we need have the following format:

driverClassName=org.hsqldb.jdbc.JDBCDriver
url=jdbc:hsqldb:mem:myDb
username=sa
password=sa

This ensures that the database will be created automatically on startup, reside in memory for the duration of the application, and be deleted when the process ends.

The Hibernate dialect property for HSQLDB is:

hibernate.dialect=org.hibernate.dialect.HSQLDialect

The JAR file also contains a Database Manager with a GUI. More information can be found on the hsqldb.org website.

4. Apache Derby Database

Apache Derby is another open source project containing a relational database management system created by the Apache Software Foundation.

Derby is based on SQL and JDBC standards and is mainly used as an embedded database, but it can also be run in client-server mode by using the Derby Network Server framework.

4.1. Maven Dependency

To use a Derby database in an application, we need to add the following Maven dependency:

<dependency>
    <groupId>org.apache.derby</groupId>
    <artifactId>derby</artifactId>
    <version>10.13.1.1</version>
</dependency>

The latest version of Derby database can be found on Maven Central.

4.2. Configuration

The connection string uses the memory protocol:

driverClassName=org.apache.derby.jdbc.EmbeddedDriver
url=jdbc:derby:memory:myDb;create=true
username=sa
password=sa

For the database to be created automatically on startup, we have to specify create=true in the connection string. The database is closed and dropped by default on JVM exit.

If using the database with Hibernate, we need to define the dialect:

hibernate.dialect=org.hibernate.dialect.DerbyDialect

You can read more about Derby database at db.apache.org/derby.

5. SQLite Database

SQLite is a SQL database that runs only in embedded mode, either in memory or saved as a file. It is written in the C language but can also be used with Java.

5.1. Maven Dependency

To use an SQLite database, we need to add the JDBC driver JAR:

<dependency>
    <groupId>org.xerial</groupId>
    <artifactId>sqlite-jdbc</artifactId>
    <version>3.16.1</version>
</dependency>

The sqlite-jdbc dependency can be downloaded from Maven Central.

5.2. Configuration

The connection properties use the org.sqlite.JDBC driver class and the memory protocol for the connection string:

driverClassName=org.sqlite.JDBC
url=jdbc:sqlite:memory:myDb
username=sa
password=sa

This will create the myDb database automatically if it does not exist.

Currently, Hibernate does not provide a dialect for SQLite, although it very likely will in the future. If you want to use SQLite with Hibernate, you have to create your own Hibernate dialect class.

To find out more about SQLite, go to sqlite.org.

6. In-Memory Databases in Spring Boot

Spring Boot makes it especially easy to use an in-memory database – because it can create the configuration automatically for H2, HSQLDB, and Derby.

All we need to do to use a database of one of the three types in Spring Boot is add its dependency to the pom.xml. When the framework encounters the dependency on the classpath, it will configure the database automatically.
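For example, with the h2 dependency shown earlier and a JDBC or JPA starter on the classpath, Boot auto-configures an in-memory DataSource that we can simply inject – a minimal sketch under those assumptions (class and test names are just examples):

@RunWith(SpringRunner.class)
@SpringBootTest
public class InMemoryDbIntegrationTest {

    @Autowired
    private DataSource dataSource;

    @Test
    public void whenBootStarts_thenInMemoryDataSourceIsConfigured() throws SQLException {
        // Boot's default H2 URL has the form jdbc:h2:mem:<name>
        try (Connection conn = dataSource.getConnection()) {
            assertTrue(conn.getMetaData().getURL().startsWith("jdbc:h2:mem"));
        }
    }
}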

7. Conclusion

In this article, we had a quick look at the most commonly used in-memory databases in the Java ecosystem, along with their basic configurations. Although they are useful for testing, keep in mind that in many cases they do not provide exactly the same functionality as the original standalone ones.

You can find the code examples used in this article over on Github.

Hibernate Tips Book Excerpt: How to Map an Inheritance Hierarchy to One Table

1. Introduction

Inheritance is one of the key concepts in Java. So, it’s no surprise that most domain models use it. But unfortunately, this concept doesn’t exist in relational databases, and you need to find a way to map the inheritance hierarchy to a relational table model.

JPA and Hibernate support different strategies that map the inheritance hierarchy to various table models. Let’s take a look at a chapter of my new book Hibernate Tips – More than 70 solutions to common Hibernate problems in which I explain the SingleTable strategy. It maps all classes of the inheritance hierarchy to the same database table.

I explain Hibernate’s other inheritance mapping strategies in my Hibernate Tips book. It’s a cookbook with more than 70 ready-to-use recipes for topics like basic and advanced mappings, logging, Java 8 support, caching and statically and dynamically defined queries. You can get it this week on Amazon at a special launch price of just $2.99.


2. Hibernate Tips – How to map an inheritance hierarchy to one table

2.1. Problem

My database contains one table, which I want to map to an inheritance hierarchy of entities. How do I define such a mapping?

2.2. Solution

JPA and Hibernate support different inheritance strategies which allow you to map the entities to different table structures. The SingleTable strategy is one of them and maps an inheritance hierarchy of entities to a single database table.

Let’s have a look at the entity model before I explain the details of the SingleTable strategy. Authors can write different kinds of Publications, like Books and BlogPosts. The Publication class is the superclass of the Book and BlogPost classes.

The SingleTable strategy maps the three entities of the inheritance hierarchy to the publication table.

If you want to use this inheritance strategy, you need to annotate the superclass with an @Inheritance annotation and provide the InheritanceType.SINGLE_TABLE as the value of the strategy attribute.

You can also annotate the superclass with a @DiscriminatorColumn annotation to define the name of the discriminator column. Hibernate uses this value to determine the entity to which it has to map a database record. If you don’t define a discriminator column, as in the following code snippet, Hibernate and all other JPA implementations use the column DTYPE.

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public abstract class Publication {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", updatable = false, nullable = false)
    private Long id;

    @Version
    private int version;

    private String title;

    private LocalDate publishingDate;
	
    @ManyToMany
    @JoinTable(
      name="PublicationAuthor",
      joinColumns={@JoinColumn(name="publicationId", referencedColumnName="id")},
      inverseJoinColumns={@JoinColumn(name="authorId", referencedColumnName="id")})
    private Set<Author> authors = new HashSet<Author>();

    ...
}

The subclasses need to extend the superclass, and you need to annotate them with an @Entity annotation.

The JPA specification also recommends annotating each subclass with a @DiscriminatorValue annotation to define the discriminator value for the entity class. If you don’t provide this annotation, your JPA implementation generates a discriminator value.

But the JPA specification doesn’t define how to generate the discriminator value, and your application might not be portable to other JPA implementations. Hibernate uses the simple entity name as the discriminator.

@Entity
@DiscriminatorValue("Book")
public class Book extends Publication {

    private int numPages;

    ...
}
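The BlogPost entity follows the same pattern; here’s a sketch based on the url column that shows up in the generated SQL below:

@Entity
@DiscriminatorValue("BlogPost")
public class BlogPost extends Publication {

    private String url;

    ...
}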

The SingleTable strategy doesn’t require Hibernate to generate any complex queries if you want to select a specific entity, perform a polymorphic query or traverse a polymorphic association.

Author a = em.find(Author.class, 1L);
List<Publication> publications = a.getPublications();

All entities are stored in the same table, and Hibernate can select them from there without an additional JOIN clause.

15:41:28,379 DEBUG [org.hibernate.SQL] - 
    select
        author0_.id as id1_0_0_,
        author0_.firstName as firstNam2_0_0_,
        author0_.lastName as lastName3_0_0_,
        author0_.version as version4_0_0_ 
    from
        Author author0_ 
    where
        author0_.id=?
15:41:28,384 DEBUG [org.hibernate.SQL] - 
    select
        publicatio0_.authorId as authorId2_2_0_,
        publicatio0_.publicationId as publicat1_2_0_,
        publicatio1_.id as id2_1_1_,
        publicatio1_.publishingDate as publishi3_1_1_,
        publicatio1_.title as title4_1_1_,
        publicatio1_.version as version5_1_1_,
        publicatio1_.numPages as numPages6_1_1_,
        publicatio1_.url as url7_1_1_,
        publicatio1_.DTYPE as DTYPE1_1_1_ 
    from
        PublicationAuthor publicatio0_ 
    inner join
        Publication publicatio1_ 
            on publicatio0_.publicationId=publicatio1_.id 
    where
        publicatio0_.authorId=?

2.3. Source Code

You can find a download link for a project with executable test cases for this Hibernate tip in the book.

2.4. Learn More

You can also map the entities of the inheritance hierarchy to multiple database tables. I show you how to do that in the chapter How to map an inheritance hierarchy to multiple tables.

3. Summary

As you have seen in this Hibernate Tip, JPA and Hibernate provide an easy option to map an inheritance hierarchy to a single database table. You just have to annotate the superclass with @Inheritance(strategy = InheritanceType.SINGLE_TABLE), and you should also annotate each subclass with a @DiscriminatorValue annotation, e.g. @DiscriminatorValue("Book").

You can get more recipes like this in my new book Hibernate Tips: More than 70 solutions to common Hibernate problems. It gives you more than 70 ready-to-use recipes for topics like basic and advanced mappings, logging, Java 8 support, caching, and statically and dynamically defined queries. For just a few days, you can get the ebook for $2.99 and the paperback for $12.99 on Amazon.

An Intro to the Spring State Machine Project

1. Introduction

This article is focused on Spring’s State Machine project – which can be used to represent workflows or any other kind of finite state automata representation problems.

2. Maven Dependency

To get started, we need to add the main Maven dependency:

<dependency>
    <groupId>org.springframework.statemachine</groupId>
    <artifactId>spring-statemachine-core</artifactId>
    <version>1.2.3.RELEASE</version>
</dependency>

The latest version of this dependency may be found here.

3. State Machine Configuration

Now, let’s get started by defining a simple state machine:

@Configuration
@EnableStateMachine
public class SimpleStateMachineConfiguration 
  extends StateMachineConfigurerAdapter<String, String> {

    @Override
    public void configure(StateMachineStateConfigurer<String, String> states) 
      throws Exception {
 
        states
          .withStates()
          .initial("SI")
          .end("SF")
          .states(
            new HashSet<String>(Arrays.asList("S1", "S2", "S3")));

    }

    @Override
    public void configure(
      StateMachineTransitionConfigurer<String, String> transitions) 
      throws Exception {
 
        transitions.withExternal()
          .source("SI").target("S1").event("E1").and()
          .withExternal()
          .source("S1").target("S2").event("E2").and()
          .withExternal()
          .source("S2").target("SF").event("end");
    }
}

Note that this class is annotated as a conventional Spring configuration as well as a state machine. It also needs to extend StateMachineConfigurerAdapter so that various initialization methods can be invoked. In one of the configuration methods, we define all the possible states of the state machine; in the other, how events change the current state.

The configuration above sets out a pretty simple, straight-line transition state machine which should be easy enough to follow.

Now we need to start a Spring context and obtain a reference to the state machine defined by our configuration:

@Autowired
private StateMachine<String, String> stateMachine;

Once we have the state machine, it needs to be started:

stateMachine.start();

Now that our machine is in the initial state, we can send events and thus trigger transitions:

stateMachine.sendEvent("E1");

We can always check the current state of the state machine:

stateMachine.getState();

4. Actions

Let us add some actions to be executed around state transitions. First, we define our action as a Spring bean in the same configuration file:

@Bean
public Action<String, String> initAction() {
    return ctx -> System.out.println(ctx.getTarget().getId());
}

Then we can register the above-created action on the transition in our configuration class:

@Override
public void configure(
  StateMachineTransitionConfigurer<String, String> transitions)
  throws Exception {
 
    transitions.withExternal()
      .source("SI").target("S1")
      .event("E1").action(initAction());
}

This action will be executed when the transition from SI to S1 via event E1 occurs. Actions can also be attached to states themselves:

@Bean
public Action<String, String> executeAction() {
    return ctx -> System.out.println("Do" + ctx.getTarget().getId());
}

states
  .withStates()
  .state("S3", executeAction(), errorAction());

This state definition function accepts an operation to be executed when the machine is in the target state and, optionally, an error action handler.

An error action handler is not much different from any other action, but it will be invoked if an exception is thrown at any time during the evaluation of the state’s actions:

@Bean
public Action<String, String> errorAction() {
    return ctx -> System.out.println(
      "Error " + ctx.getSource().getId() + ctx.getException());
}

It is also possible to register individual actions for the entry, do, and exit state transitions:

@Bean
public Action<String, String> entryAction() {
    return ctx -> System.out.println(
      "Entry " + ctx.getTarget().getId());
}

@Bean
public Action<String, String> executeAction() {
    return ctx -> 
      System.out.println("Do " + ctx.getTarget().getId());
}

@Bean
public Action<String, String> exitAction() {
    return ctx -> System.out.println(
      "Exit " + ctx.getSource().getId() + " -> " + ctx.getTarget().getId());
}
states
  .withStates()
  .stateEntry("S3", entryAction())
  .stateDo("S3", executeAction())
  .stateExit("S3", exitAction());

Respective actions will be executed on the corresponding state transitions. For example, we might want to verify some pre-conditions at the time of entry or trigger some reporting at the time of exit.

5. Global Listeners

Global event listeners can be defined for the state machine. These listeners will be invoked any time a state transition occurs and can be utilized for things such as logging or security.

First, we need to add another configuration method – one that does not deal with states or transitions but with the config for the state machine itself.

We need to define a listener by extending StateMachineListenerAdapter:

public class StateMachineListener extends StateMachineListenerAdapter {
 
    @Override
    public void stateChanged(State from, State to) {
        System.out.printf("Transitioned from %s to %s%n", from == null ? 
          "none" : from.getId(), to.getId());
    }
}

Here we only overrode stateChanged(), though many other event hooks are available.
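To actually register the listener, we can add the configuration method mentioned above – a minimal sketch (autoStartup is optional):

@Override
public void configure(StateMachineConfigurationConfigurer<String, String> config)
  throws Exception {

    config.withConfiguration()
      .autoStartup(true)
      .listener(new StateMachineListener());
}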

6. Extended State

Spring State Machine keeps track of its state, but to keep track of our application state, be it some computed values, entries from admins or responses from calling external systems, we need to use what is called an extended state.

Suppose we want to make sure that an account application goes through two levels of approval. We can keep track of the approval count using an integer stored in the extended state:

@Bean
public Action<String, String> executeAction() {
    return ctx -> {
        int approvals = (int) ctx.getExtendedState().getVariables()
          .getOrDefault("approvalCount", 0);
        approvals++;
        ctx.getExtendedState().getVariables()
          .put("approvalCount", approvals);
    };
}

7. Guards

A guard can be used to validate some data before a transition to a state is executed. A guard looks very similar to an action:

@Bean
public Guard<String, String> simpleGuard() {
    return ctx -> (int) ctx.getExtendedState()
      .getVariables()
      .getOrDefault("approvalCount", 0) > 0;
}

The noticeable difference here is that a guard returns a true or false which will inform the state machine whether the transition should be allowed to occur.
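To put the guard to work, we attach it to a transition; here’s a minimal sketch reusing the states and events from our earlier configuration:

transitions.withExternal()
  .source("S2").target("SF")
  .event("end")
  .guard(simpleGuard());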

Support for SpEL expressions as guards also exists. The example above could also have been written as:

.guardExpression("extendedState.variables.approvalCount > 0")

8. State Machine from a Builder

StateMachineBuilder can be used to create a state machine without using Spring annotations or creating a Spring context:

StateMachineBuilder.Builder<String, String> builder 
  = StateMachineBuilder.builder();
builder.configureStates().withStates()
  .initial("SI")
  .state("S1")
  .end("SF");

builder.configureTransitions()
  .withExternal()
  .source("SI").target("S1").event("E1")
  .and().withExternal()
  .source("S1").target("SF").event("E2");

StateMachine<String, String> machine = builder.build();

9. Hierarchical States

Hierarchical states can be configured by using multiple withStates() in conjunction with parent():

states
  .withStates()
    .initial("SI")
    .state("SI")
    .end("SF")
    .and()
  .withStates()
    .parent("SI")
    .initial("SUB1")
    .state("SUB2")
    .end("SUBEND");

This kind of setup allows the state machine to have multiple states, so a call to getState() will produce multiple IDs. For example, immediately after startup the following expression results in:

stateMachine.getState().getIds()
["SI", "SUB1"]

10. Junctions (Choices)

So far, we’ve created state transitions which were linear by nature. Not only is this rather uninteresting, but it also does not reflect the real-life use cases that a developer will be asked to implement. The odds are conditional paths will need to be implemented, and Spring State Machine’s junctions (or choices) allow us to do just that.

First, we need to mark a state as a junction (choice) in the state definition:

states
  .withStates()
  .junction("SJ")

Then in the transitions, we define first/then/last options which correspond to an if-then-else structure:

.withJunction()
  .source("SJ")
  .first("high", highGuard())
  .then("medium", mediumGuard())
  .last("low")

first() and then() take a second argument – a regular guard – which will be invoked to find out which path to take:

@Bean
public Guard<String, String> mediumGuard() {
    return ctx -> false;
}

@Bean
public Guard<String, String> highGuard() {
    return ctx -> false;
}

Note that a transition does not stop at a junction node but will immediately execute defined guards and go to one of the designated routes.

In the example above, instructing the state machine to transition to SJ will result in the actual state becoming low, as both guards just return false.

A final note is that the API provides both junctions and choices. However, functionally they are identical in every aspect.

11. Fork

Sometimes it becomes necessary to split the execution into multiple independent execution paths. This can be achieved using the fork functionality.

First, we need to designate a node as a fork node and create hierarchical regions into which the state machine will perform the split:

states
  .withStates()
  .initial("SI")
  .fork("SFork")
  .and()
  .withStates()
    .parent("SFork")
    .initial("Sub1-1")
    .end("Sub1-2")
  .and()
  .withStates()
    .parent("SFork")
    .initial("Sub2-1")
    .end("Sub2-2");

Then define fork transition:

.withFork()
  .source("SFork")
  .target("Sub1-1")
  .target("Sub2-1");

12. Join

The complement of the fork operation is the join. It allows us to define a state, the transition into which depends on the completion of some other states.

As with forking, we need to designate a join node in the state definition:

states
  .withStates()
  .join("SJoin")

Then in transitions, we define which states need to complete to enable our join state:

transitions
  .withJoin()
    .source("Sub1-2")
    .source("Sub2-2")
    .target("SJoin");

That’s it! With this configuration, when both Sub1-2 and Sub2-2 are reached, the state machine will transition to SJoin.

13. Enums Instead of Strings

In the examples above, we have used string constants to define states and events for clarity and simplicity. On a real-world production system, we would probably want to use Java’s enums to avoid spelling errors and gain more type safety.

First, we need to define all possible states and events in our system:

public enum ApplicationReviewStates {
    PEER_REVIEW, PRINCIPAL_REVIEW, APPROVED, REJECTED
}

public enum ApplicationReviewEvents {
    APPROVE, REJECT
}

We also need to pass our enums as generic parameters when we extend the configuration:

public class SimpleEnumStateMachineConfiguration 
  extends StateMachineConfigurerAdapter
  <ApplicationReviewStates, ApplicationReviewEvents>

Once defined, we can use our enum constants instead of strings. For example, to define a transition:

transitions.withExternal()
  .source(ApplicationReviewStates.PEER_REVIEW)
  .target(ApplicationReviewStates.PRINCIPAL_REVIEW)
  .event(ApplicationReviewEvents.APPROVE)

14. Conclusion

This article explored some of the features of the Spring state machine.

As always you can find the sample source code over on GitHub.

Spring MVC Custom Validation

1. Overview

Generally, when we need to validate user input, Spring MVC offers standard predefined validators.

However, when we need to validate a more particular type of input, we have the possibility of creating our own custom validation logic.

In this article, we’ll do just that – we’ll create a custom validator to validate a form with a phone number field.

2. Setup

To benefit from the API, add the dependency to your pom.xml file:

<dependency>
    <groupId>javax.validation</groupId>
    <artifactId>validation-api</artifactId>
    <version>1.1.0.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.4.1.Final</version>
</dependency>

The latest version of the dependency can be checked here and here.

3. Custom Validation

Creating a custom validator entails us rolling out our own annotation and using it in our model to enforce the validation rules.

So, let’s create our custom validator – which checks phone numbers. The phone number must consist of digits only, with more than eight but fewer than 14 digits.

4. The New Annotation

Let’s create a new @interface to define our annotation:

@Documented
@Constraint(validatedBy = ContactNumberValidator.class)
@Target( { ElementType.METHOD, ElementType.FIELD })
@Retention(RetentionPolicy.RUNTIME)
public @interface ContactNumberConstraint {
    String message() default "Invalid phone number";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

With the @Constraint annotation, we defined the class that is going to validate our field; message() is the error message that is shown in the user interface; the additional code is mostly boilerplate that conforms to the Spring standards.

5. Creating a Validator

Let’s now create a validator class that enforces rules of our validation:

public class ContactNumberValidator implements 
  ConstraintValidator<ContactNumberConstraint, String> {

    @Override
    public void initialize(ContactNumberConstraint contactNumber) {
    }

    @Override
    public boolean isValid(String contactField,
      ConstraintValidatorContext cxt) {
        return contactField != null && contactField.matches("[0-9]+")
          && (contactField.length() > 8) && (contactField.length() < 14);
    }

}

The validation class implements the ConstraintValidator interface and must implement the isValid method; it’s in this method that we defined our validation rules.

Naturally, we’re going with a simple validation rule here to show how the validator works.

ConstraintValidator defines the logic to validate a given constraint for a given object. Implementations must comply with the following restrictions:

  • the object must resolve to a non-parametrized type
  • generic parameters of the object must be unbounded wildcard types

6. Applying Validation Annotation

In our case, we’ve created a simple class with one field to apply the validation rules. Here, we’re setting up our annotated field to be validated:

@ContactNumberConstraint
private String phone;

We defined a string field and annotated it with our custom annotation @ContactNumberConstraint.
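For reference, here’s a minimal sketch of the backing model class; everything beyond the annotated phone field is an assumption:

public class ValidatedPhone {

    @ContactNumberConstraint
    private String phone;

    public String getPhone() {
        return phone;
    }

    public void setPhone(String phone) {
        this.phone = phone;
    }

    // toString() omitted for brevity
}

In our controller, we created our mappings and handled the error, if any: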

@Controller
@EnableWebMvc
public class ValidatedPhoneController {
 
    @GetMapping("/validatePhone")
    public String loadFormPage(Model m) {
        m.addAttribute("validatedPhone", new ValidatedPhone());
        return "phoneHome";
    }
    
    @PostMapping("/addValidatePhone")
    public String submitForm(@Valid ValidatedPhone validatedPhone,
      BindingResult result, Model m) {
        if(result.hasErrors()) {
            return "phoneHome";
        }
        m.addAttribute("message", "Successfully saved phone: "
          + validatedPhone.toString());
        return "phoneHome";
    }   
}

We defined this simple controller that has a single JSP page, and we use the submitForm method to enforce the validation of our phone number.

7. The View

Our view is a basic JSP page with a form that has a single field. When the user submits the form, the field gets validated by our custom validator, and the same page is re-rendered with a message indicating successful or failed validation:

<form:form 
  action="/${pageContext.request.contextPath}/addValidatePhone"
  modelAttribute="validatedPhone">
    <label for="phoneInput">Phone: </label>
    <form:input path="phone" id="phoneInput" />
    <form:errors path="phone" cssClass="error" />
    <input type="submit" value="Submit" />
</form:form>

8. Tests

Let’s now test our controller and check if it’s giving us the appropriate response and view:

@Test
public void givenPhonePageUri_whenMockMvc_thenReturnsPhonePage(){
    this.mockMvc.
      perform(get("/validatePhone")).andExpect(view().name("phoneHome"));
}

Also, let’s test that our field is validated, based on user input:

@Test
public void 
  givenPhoneURIWithPostAndFormData_whenMockMVC_thenVerifyErrorResponse() {
 
    this.mockMvc.perform(MockMvcRequestBuilders.post("/addValidatePhone").
      accept(MediaType.TEXT_HTML).
      param("phoneInput", "123")).
      andExpect(model().attributeHasFieldErrorCode(
          "validatedPhone","phone","ContactNumberConstraint")).
      andExpect(view().name("phoneHome")).
      andExpect(status().isOk()).
      andDo(print());
}

In the test, we’re providing a user input of “123”, and – as we expected – everything’s working, and we’re seeing the error on the client side.

9. Summary

In this quick article, we created a custom validator to verify a field and wired that into Spring MVC.

As always, you can find the code from the article over on Github.
