
Guide to JMapper

1. Overview

In this tutorial, we'll explore JMapper – a fast and easy-to-use mapping framework.

We’ll discuss different ways to configure JMapper, how to perform custom conversions, as well as relational mapping.

2. Maven Configuration

First, we need to add the JMapper dependency to our pom.xml:

<dependency>
    <groupId>com.googlecode.jmapper-framework</groupId>
    <artifactId>jmapper-core</artifactId>
    <version>1.6.0.1</version>
</dependency>

3. Source and Destination Models

Before we get to the configuration, let’s take a look at the simple beans we’re going to use throughout this tutorial.

First, here’s our source bean – a basic User:

public class User {
    private long id;    
    private String email;
    private LocalDate birthDate;
}

And our destination bean, UserDto:

public class UserDto {
    private long id;
    private String username;
}

We’ll use the library to map attributes from our source bean User to our destination bean UserDto.

There are three ways to configure JMapper: by using the API, annotations and XML configuration.

In the following sections, we’ll go over each of these.

4. Using the API

Let’s see how to configure JMapper using the API.

Here, we don’t need to add any configuration to our source and destination classes. Instead, all the configuration can be done using JMapperAPI, which makes it the most flexible configuration method:

@Test
public void givenUser_whenUseApi_thenConverted(){
    JMapperAPI jmapperApi = new JMapperAPI() 
      .add(mappedClass(UserDto.class)
        .add(attribute("id").value("id"))
        .add(attribute("username").value("email")));

    JMapper<UserDto, User> userMapper = new JMapper<>
      (UserDto.class, User.class, jmapperApi);
    User user = new User(1L,"john@test.com", LocalDate.of(1980,8,20));
    UserDto result = userMapper.getDestination(user);

    assertEquals(user.getId(), result.getId());
    assertEquals(user.getEmail(), result.getUsername());
}

Here, we use the mappedClass() method to define our mapped class UserDto, and the attribute() method to define each attribute and its mapped value.

Next, we created a JMapper object based on the configuration and used its getDestination() method to obtain the UserDto result.

5. Using Annotations

Let’s see how we can use the @JMap annotation to configure our mapping:

public class UserDto {  
    @JMap
    private long id;

    @JMap("email")
    private String username;
}

And here’s how we’ll use our JMapper:

@Test
public void givenUser_whenUseAnnotation_thenConverted(){
    JMapper<UserDto, User> userMapper = new JMapper<>(UserDto.class, User.class);
    User user = new User(1L,"john@test.com", LocalDate.of(1980,8,20));
    UserDto result = userMapper.getDestination(user);

    assertEquals(user.getId(), result.getId());
    assertEquals(user.getEmail(), result.getUsername());        
}

Note that for the id attribute we didn't need to provide a target field name, since it's the same as in the source bean, while for the username field we specify that it corresponds to the email field in the User class.

Then, we only need to pass source and destination beans to our JMapper – no further configuration needed.

Overall, this method is convenient as it uses the least amount of code.

6. Using XML Configuration

We can also use XML configuration to define our mapping.

Here’s our sample XML configuration at user_jmapper.xml:

<jmapper>
  <class name="com.baeldung.jmapper.UserDto">
    <attribute name="id">
      <value name="id"/>
    </attribute>
    <attribute name="username">
      <value name="email"/>
    </attribute>
  </class>
</jmapper>

And we need to pass our XML configuration to JMapper:

@Test
public void givenUser_whenUseXml_thenConverted(){
    JMapper<UserDto, User> userMapper = new JMapper<>
      (UserDto.class, User.class,"user_jmapper.xml");
    User user = new User(1L,"john@test.com", LocalDate.of(1980,8,20));
    UserDto result = userMapper.getDestination(user);

    assertEquals(user.getId(), result.getId());
    assertEquals(user.getEmail(), result.getUsername());            
}

We can also pass the XML configuration as a String directly to JMapper instead of a file name.

7. Global Mapping

We can take advantage of global mapping if we have multiple fields with the same name in both the source and destination beans.

For example, if we have a UserDto1 which has two fields, id and email:

public class UserDto1 {  
    private long id;
    private String email;
    
    // standard constructor, getters, setters
}

Global mapping is easier to use here, as these fields map to fields with the same name in the User source bean.

7.1. Using the API

For JMapperAPI configuration, we’ll use global():

@Test
public void givenUser_whenUseApiGlobal_thenConverted() {
    JMapperAPI jmapperApi = new JMapperAPI()
      .add(mappedClass(UserDto1.class).add(global()));
    JMapper<UserDto1, User> userMapper1 = new JMapper<>
      (UserDto1.class, User.class,jmapperApi);
    User user = new User(1L,"john@test.com", LocalDate.of(1980,8,20));
    UserDto1 result = userMapper1.getDestination(user);

    assertEquals(user.getId(), result.getId());
    assertEquals(user.getEmail(), result.getEmail());
}

7.2. Using Annotations

For the annotation configuration, we’ll use @JGlobalMap at class level:

@JGlobalMap
public class UserDto1 {  
    private long id;
    private String email;
}

And here’s a simple test:

@Test
public void whenUseGlobalMapAnnotation_thenConverted(){
    JMapper<UserDto1, User> userMapper= new JMapper<>(
      UserDto1.class, User.class);
    User user = new User(
      1L,"john@test.com", LocalDate.of(1980,8,20));
    UserDto1 result = userMapper.getDestination(user);

    assertEquals(user.getId(), result.getId());
    assertEquals(user.getEmail(), result.getEmail());        
}

7.3. XML Configuration

And for the XML configuration, we have the <global/> element:

<jmapper>
  <class name="com.baeldung.jmapper.UserDto1">
    <global/>
  </class>
</jmapper>

And then pass the XML file name:

@Test
public void givenUser_whenUseXmlGlobal_thenConverted(){
    JMapper<UserDto1, User> userMapper = new JMapper<>
      (UserDto1.class, User.class,"user_jmapper1.xml");
    User user = new User(1L,"john@test.com", LocalDate.of(1980,8,20));
    UserDto1 result = userMapper.getDestination(user);

    assertEquals(user.getId(), result.getId());
    assertEquals(user.getEmail(), result.getEmail());            
}

8. Custom Conversions

Now, let’s see how to apply a custom conversion using JMapper.

We have a new field age in our UserDto, which we need to calculate from the User birthDate attribute:

public class UserDto {
    @JMap
    private long id;

    @JMap("email")
    private String username;
    
    @JMap("birthDate")
    private int age;

    @JMapConversion(from={"birthDate"}, to={"age"})
    public int conversion(LocalDate birthDate){
        return Period.between(birthDate, LocalDate.now())
          .getYears();
    }
}

So, we used @JMapConversion to apply a complex conversion from the User's birthDate to the UserDto's age attribute. Therefore, the age field will be calculated when we map a User to a UserDto:

@Test
public void whenUseAnnotationExplicitConversion_thenConverted(){
    JMapper<UserDto, User> userMapper = new JMapper<>(
      UserDto.class, User.class);
    User user = new User(
      1L,"john@test.com", LocalDate.of(1980,8,20));
    UserDto result = userMapper.getDestination(user);

    assertEquals(user.getId(), result.getId());
    assertEquals(user.getEmail(), result.getUsername());     
    assertTrue(result.getAge() > 0);
}

9. Relational Mapping

Finally, we'll discuss relational mapping. With the approaches so far, we need to define a JMapper for each target class.

If we already know the target classes, we can define them for each mapped field and use RelationalJMapper.

In this example, we have one source bean User:

public class User {
    private long id;    
    private String email;
}

And two destination beans UserDto1:

public class UserDto1 {  
    private long id;
    private String username;
}

And UserDto2:

public class UserDto2 {
    private long id;
    private String email;
}

Let’s see how to take advantage of our RelationalJMapper.

9.1. Using the API

For our API configuration, we can define target classes for each attribute using targetClasses():

@Test
public void givenUser_whenUseApi_thenConverted(){
    JMapperAPI jmapperApi = new JMapperAPI()
      .add(mappedClass(User.class)
      .add(attribute("id")
        .value("id")
        .targetClasses(UserDto1.class,UserDto2.class))
      .add(attribute("email")
        .targetAttributes("username","email")
        .targetClasses(UserDto1.class,UserDto2.class)));
    
    RelationalJMapper<User> relationalMapper = new RelationalJMapper<>
      (User.class,jmapperApi);
    User user = new User(1L,"john@test.com");
    UserDto1 result1 = relationalMapper
      .oneToMany(UserDto1.class, user);
    UserDto2 result2 = relationalMapper
      .oneToMany(UserDto2.class, user);

    assertEquals(user.getId(), result1.getId());
    assertEquals(user.getEmail(), result1.getUsername());
    assertEquals(user.getId(), result2.getId());
    assertEquals(user.getEmail(), result2.getEmail());            
}

Note that for each target class, we need to define the target attribute name.

The RelationalJMapper only takes one class – the mapped class.

9.2. Using Annotations

For the annotation approach, we’ll define the classes as well:

public class User {
    @JMap(classes = {UserDto1.class, UserDto2.class})
    private long id;    
    
    @JMap(
      attributes = {"username", "email"}, 
      classes = {UserDto1.class, UserDto2.class})
    private String email;
}

As usual, no further configuration is needed when we use annotations:

@Test
public void givenUser_whenUseAnnotation_thenConverted(){
    RelationalJMapper<User> relationalMapper
      = new RelationalJMapper<>(User.class);
    User user = new User(1L,"john@test.com");
    UserDto1 result1 = relationalMapper
      .oneToMany(UserDto1.class, user);
    UserDto2 result2= relationalMapper
      .oneToMany(UserDto2.class, user);

    assertEquals(user.getId(), result1.getId());
    assertEquals(user.getEmail(), result1.getUsername());  
    assertEquals(user.getId(), result2.getId());
    assertEquals(user.getEmail(), result2.getEmail());          
}

9.3. XML Configuration

For the XML configuration, we use <classes> to define the target classes for each attribute.

Here’s our user_jmapper2.xml:

<jmapper>
  <class name="com.baeldung.jmapper.relational.User">
    <attribute name="id">
      <value name="id"/>
      <classes>
        <class name="com.baeldung.jmapper.relational.UserDto1"/>
        <class name="com.baeldung.jmapper.relational.UserDto2"/>
      </classes>
    </attribute>
    <attribute name="email">
      <attributes>
        <attribute name="username"/>
        <attribute name="email"/>
      </attributes>
      <classes>
        <class name="com.baeldung.jmapper.relational.UserDto1"/>
        <class name="com.baeldung.jmapper.relational.UserDto2"/>
      </classes>      
    </attribute>
  </class>
</jmapper>

And then pass the XML configuration file to RelationalJMapper:

@Test
public void givenUser_whenUseXml_thenConverted(){
    RelationalJMapper<User> relationalMapper
     = new RelationalJMapper<>(User.class,"user_jmapper2.xml");
    User user = new User(1L,"john@test.com");
    UserDto1 result1 = relationalMapper
      .oneToMany(UserDto1.class, user);
    UserDto2 result2 = relationalMapper
      .oneToMany(UserDto2.class, user);

    assertEquals(user.getId(), result1.getId());
    assertEquals(user.getEmail(), result1.getUsername());
    assertEquals(user.getId(), result2.getId());
    assertEquals(user.getEmail(), result2.getEmail());         
}

10. Conclusion

In this tutorial, we learned different ways to configure JMapper and how to perform a custom conversion.

The full source code for the examples can be found over on GitHub.


Test a REST API with curl

1. Overview

This tutorial gives a brief overview of testing a REST API using curl.

curl is a command-line tool for transferring data and supports about 22 protocols including HTTP. This combination makes it a very good ad-hoc tool for testing our REST services.

2. Command-line Options

curl supports over 200 command-line options, and we can pass zero or more of them along with the URL in a command.

But before we use it for our purposes, let's take a look at two options that will make our lives easier.

2.1. Verbose

When we are testing, it’s a good idea to set the verbose mode on:

curl -v http://www.example.com/

As a result, the command provides helpful information such as the resolved IP address, the port we're trying to connect to, and the headers.

2.2. Output

By default, curl outputs the response body to standard output. Optionally, we can provide the output option to save to a file:

curl -o out.json http://www.example.com/index.html

This is especially helpful when the response size is large.

3. HTTP Methods with curl

Every HTTP request contains a method. The most commonly used methods are GET, POST, PUT and DELETE.

3.1. GET

This is the default method when making HTTP calls with curl. In fact, the examples previously shown were plain GET calls.

While running a local instance of a service at port 8082, we’d use something like this command to make a GET call:

curl -v http://localhost:8082/spring-rest/foos/9

And since we have the verbose mode on, we’d get a little more information along with the response body:

*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8082 (#0)
> GET /spring-rest/foos/9 HTTP/1.1
> Host: localhost:8082
> User-Agent: curl/7.60.0
> Accept: */*
>
< HTTP/1.1 200
< X-Application-Context: application:8082
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Sun, 15 Jul 2018 11:55:26 GMT
<
{
  "id" : 9,
  "name" : "TuwJ"
}* Connection #0 to host localhost left intact

3.2. POST

We use this method to send data to a receiving service. And for that, we use the data option.

The simplest way of doing this is to embed the data in the command:

curl -d 'id=9&name=baeldung' http://localhost:8082/spring-rest/foos/new

or, pass a file containing the request body to the data option like this:

curl -d @request.json -H "Content-Type: application/json" 
  http://localhost:8082/spring-rest/foos/new

By using the above commands as they are, we may run into error messages like the following one:

{
  "timestamp" : "15-07-2018 05:57",
  "status" : 415,
  "error" : "Unsupported Media Type",
  "exception" : "org.springframework.web.HttpMediaTypeNotSupportedException",
  "message" : "Content type 'application/x-www-form-urlencoded;charset=UTF-8' not supported",
  "path" : "/spring-rest/foos/new"
}

This is because curl adds the following default header to all POST requests:

Content-Type: application/x-www-form-urlencoded

This is also what browsers use for a plain POST. In our case, we'd usually want to customize the headers depending on our needs.

For instance, if our service expects a JSON content type, then we can use the -H option to modify our original POST request:

curl -d '{"id":9,"name":"baeldung"}' -H 'Content-Type: application/json' 
  http://localhost:8082/spring-rest/foos/new

The Windows command prompt doesn't support single quotes the way Unix-like shells do.

As a result, we'd need to replace the single quotes with double quotes, escaping the inner ones where necessary:

curl -d "{\"id\":9,\"name\":\"baeldung\"}" -H "Content-Type: application/json" 
  http://localhost:8082/spring-rest/foos/new

Besides, when we want to send a somewhat larger amount of data, it is usually a good idea to use a data file.

3.3. PUT

This method is very similar to POST. But we use it when we want to send a new version of an existing resource. In order to do this, we use the -X option.

Without any mention of a request method type, curl defaults to using GET. Therefore, we explicitly mention the method type in case of PUT:

curl -d @request.json -H 'Content-Type: application/json' 
  -X PUT http://localhost:8082/spring-rest/foos/9

3.4. DELETE

Again, we specify that we want to use DELETE by using the -X option:

curl -X DELETE http://localhost:8082/spring-rest/foos/9

4. Custom Headers

We can replace the default headers or add our own headers.

For instance, to change the Host header, we do this:

curl -H "Host: com.baeldung" http://example.com/

To switch off the User-Agent header, we put in an empty value:

curl -H "User-Agent:" http://example.com/

The most common scenario while testing is changing the Content-Type and Accept headers. We just have to prefix each header with the -H option:

curl -d @request.json -H "Content-Type: application/json" 
  -H "Accept: application/json" http://localhost:8082/spring-rest/foos/new

5. Authentication

A service that requires authentication would send back a 401 – Unauthorized HTTP response code and an associated WWW-Authenticate header.

For basic authentication, we can simply embed the username and password combination inside our request using the user option:

curl --user baeldung:secretPassword http://example.com/

However, if we want to use OAuth2 for authentication, we’d first need to get the access_token from our authorization service.
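
For example, a password-grant token request might look like the following; the endpoint path, client credentials and user credentials here are purely hypothetical:

curl -X POST --user clientId:clientSecret -d "grant_type=password&username=john&password=123" 
  http://localhost:8083/auth/oauth/token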

The service response would contain the access_token:

{
  "access_token": "b1094abc0-54a4-3eab-7213-877142c33fh3",
  "token_type": "bearer",
  "refresh_token": "253begef-868c-5d48-92e8-448c2ec4bd91",
  "expires_in": 31234
}

Now, we can use the token in our Authorization header:

curl -H "Authorization: Bearer b1094abc0-54a4-3eab-7213-877142c33fh3" http://example.com/

6. Conclusion

We looked at using the bare-minimum functionality of curl to test our REST services. Although it can do much more than what has been discussed here, for our purpose, this much should suffice.

Feel free to type curl --help on the command line to check out all the available options. The REST service used for the demonstration is available here on GitHub.

Build a Jar with Maven and Ignore the Test Results

1. Introduction

This quick tutorial shows how to build a jar with Maven while ignoring the test results.

By default, Maven runs unit tests automatically while building the project. However, there are rare cases when the tests can be skipped and we need to build the project regardless of the test results.

2. Building the Project

Let’s create a simple project where we also include a small test case:

public class TestFail {
    @Test
    public void whenMessageAssigned_thenItIsNotNull() {
        String message = "hello there";
        assertNotNull(message);
    }
}

Let’s build a jar file by executing the following Maven command:

mvn package

This compiles the sources and generates a maven-0.0.1-SNAPSHOT.jar file under the /target directory.

Now, let's change the test a bit so that it starts to fail:

@Test
public void whenMessageAssigned_thenItIsNotNull() {
    String message = null;
    assertNotNull(message);
}

This time, when we try to run the mvn package command again, the build fails and the maven-0.0.1-SNAPSHOT.jar file isn’t created.

This means, if we have a failing test in our application, we can’t provide an executable file unless we fix the test.

So how can we solve this problem?

3. Maven Arguments

Maven has its own arguments to deal with this issue:

  • -Dmaven.test.failure.ignore=true ignores any failure that occurs during test execution
  • -Dmaven.test.skip=true skips compiling and running the tests entirely

Let’s run the mvn package -Dmaven.test.skip=true command and see the results:

[INFO] Tests are skipped.
[INFO] BUILD SUCCESS

This means the project will be built without compiling the tests.

Now let’s run the mvn package -Dmaven.test.failure.ignore=true command:

[INFO] Running testfail.TestFail
[ERROR] whenMessageAssigned_thenItIsNotNull java.lang.AssertionError
[INFO] BUILD SUCCESS

Our unit test fails on assertion but the build is successful.

4. Maven Surefire Plugin

Another convenient way to achieve our goal is to use Maven’s Surefire plugin.

For an extended overview of the Surefire plugin, refer to this article here.

To ignore test failures, we can simply set the testFailureIgnore property to true:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${maven.surefire.version}</version>
    <configuration>
        <includes>
            <include>TestFail.java</include>
        </includes>
        <testFailureIgnore>true</testFailureIgnore>
    </configuration>
</plugin>

Now, let’s see the output of the package command:

[INFO]  T E S T S
[INFO] Running testfail.TestFail
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, <<< FAILURE! - in testfail.TestFail

From the test output, we can see the TestFail class is failing. But looking further, we see that the BUILD SUCCESS message is also there and the maven-0.0.1-SNAPSHOT.jar file is still created.

Depending on our needs, we can skip running the tests altogether. For that, we can replace the testFailureIgnore line with:

<skipTests>true</skipTests>

Or set the command line argument -DskipTests. This will compile the test classes but skip test execution entirely.
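
For example, to build the jar without executing any tests at all:

mvn package -DskipTests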

5. Conclusion

In this article, we learned how to build our project with Maven regardless of the test results. We went through the practical examples of skipping the failing tests or excluding compilation of the tests entirely.

As usual, the complete code for this article is available over on GitHub.

Spring Webflux and CORS

1. Overview

In a previous post, we learned about Cross-Origin Resource Sharing (CORS) specification and how to use it within Spring.

In this quick tutorial, we'll set up a similar CORS configuration using Spring 5's WebFlux framework.

First of all, we’ll see how we can enable the mechanism on annotation-based APIs.

Then, we’ll analyze how to enable it on the whole project as a global configuration, or by using a special WebFilter.

2. Enabling CORS on Annotated Elements

Spring provides the @CrossOrigin annotation to enable CORS requests on controller classes and/or handler methods.

2.1. Using @CrossOrigin on a Request Handler Method

Let’s add this annotation to our mapped request method:

@CrossOrigin
@PutMapping("/cors-enabled-endpoint")
public Mono<String> corsEnabledEndpoint() {
    // ...
}

We'll use a WebTestClient (as we explained in section '4. Testing' of this post) to analyze the response we get from this endpoint:

ResponseSpec response = webTestClient.put()
  .uri("/cors-enabled-endpoint")
  .header("Origin", "http://any-origin.com")
  .exchange();

response.expectHeader()
  .valueEquals("Access-Control-Allow-Origin", "*");

In addition, we can try out a preflight request to make sure the CORS configuration is working as expected:

ResponseSpec response = webTestClient.options()
  .uri("/cors-enabled-endpoint")
  .header("Origin", "http://any-origin.com")
  .header("Access-Control-Request-Method", "PUT")
  .exchange();

response.expectHeader()
  .valueEquals("Access-Control-Allow-Origin", "*");
response.expectHeader()
  .valueEquals("Access-Control-Allow-Methods", "PUT");
response.expectHeader()
  .exists("Access-Control-Max-Age");

The @CrossOrigin annotation has the following default configuration:

  • Allows all origins (that explains the ‘*’ value in the response header)
  • Allows all headers
  • All HTTP methods mapped by the handler method are allowed
  • Credentials are not enabled
  • The 'max-age' value is 1800 seconds (30 minutes)

However, any of these values can be overridden using the annotation’s parameters.

2.2. Using @CrossOrigin on the Controller

This annotation is also supported at a class level, and it will affect all its methods.

In case the class-level configuration isn’t suitable for all our methods, we can annotate both elements to get the desired result:

@CrossOrigin(value = { "http://allowed-origin.com" },
  allowedHeaders = { "Baeldung-Allowed" },
  maxAge = 900
)
@RestController
public class CorsOnClassController {

    @PutMapping("/cors-enabled-endpoint")
    public Mono<String> corsEnabledEndpoint() {
        // ...
    }

    @CrossOrigin({ "http://another-allowed-origin.com" })
    @PutMapping("/endpoint-with-extra-origin-allowed")
    public Mono<String> corsEnabledWithExtraAllowedOrigin() {
        // ...
    }

    // ...
}

3. Enabling CORS on the Global Configuration

We can also define a global CORS configuration by overriding the addCorsMappings() method of a WebFluxConfigurer implementation.

In addition, the implementation needs the @EnableWebFlux annotation to import the Spring WebFlux configuration:

@Configuration
@EnableWebFlux
public class CorsGlobalConfiguration implements WebFluxConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry corsRegistry) {
        corsRegistry.addMapping("/**")
          .allowedOrigins("http://allowed-origin.com")
          .allowedMethods("PUT")
          .maxAge(3600);
    }
}

As a result, we are enabling cross-origin request handling for that particular path pattern.

The default configuration is similar to the @CrossOrigin one, but with only the GET, HEAD, and POST methods allowed.

We can also combine this configuration with a local one:

  • For the multiple-value attributes, the resulting CORS configuration will be the addition of each specification
  • On the other hand, the local values will have precedence over the global ones for the single-value ones

Using this approach is not effective for functional endpoints, though.

4. Enabling CORS with a WebFilter

The best way to enable CORS on functional endpoints is by using a WebFilter.

As we’ve seen in this post, we can use WebFilters to modify requests and responses, while keeping the endpoint’s implementation intact.

Spring provides the built-in CorsWebFilter to deal with cross-origin configuration easily:

@Bean
CorsWebFilter corsWebFilter() {
    CorsConfiguration corsConfig = new CorsConfiguration();
    corsConfig.setAllowedOrigins(Arrays.asList("http://allowed-origin.com"));
    corsConfig.setMaxAge(8000L);
    corsConfig.addAllowedMethod("PUT");
    corsConfig.addAllowedHeader("Baeldung-Allowed");

    UrlBasedCorsConfigurationSource source =
      new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", corsConfig);

    return new CorsWebFilter(source);
}

This is also effective for annotated handlers, but it can’t be combined with a more fine-grained @CrossOrigin configuration.

We have to keep in mind that the CorsConfiguration doesn’t have a default configuration.

Thus, unless we specify all the relevant attributes, the CORS implementation will be pretty restrictive.

A simple way of setting the default values is by using the applyPermitDefaultValues() method on the object.
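
For instance, here's a minimal sketch of a filter bean built on those permissive defaults (all origins and headers, plus the GET, HEAD and POST methods); the bean name here is just illustrative:

@Bean
CorsWebFilter permissiveCorsWebFilter() {
    // applyPermitDefaultValues() allows all origins and headers,
    // the GET, HEAD and POST methods, and sets a max-age of 30 minutes
    CorsConfiguration corsConfig = new CorsConfiguration().applyPermitDefaultValues();

    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", corsConfig);

    return new CorsWebFilter(source);
}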

5. Conclusion

In conclusion, we learned, with very short examples, how to enable CORS in our WebFlux-based service.

We saw different approaches, so all we have to do now is analyze which one best suits our requirements.

We can find plenty of examples in our GitHub repo, together with test cases where we analyze most of the edge cases regarding this topic.

Spring Webflux with Kotlin

1. Overview

In this tutorial, we'll demonstrate how to use the Spring WebFlux module with the Kotlin programming language.

We'll illustrate how to use both the annotation-based and the lambda-based approaches to define the endpoints.

2. Spring WebFlux and Kotlin

The release of Spring 5 introduced two big new features: native support for the reactive programming paradigm and the possibility of using the Kotlin programming language.

Throughout this tutorial, we assume that we have already configured the environment (consult one of our tutorials on this issue) and understand the Kotlin language syntax (another tutorial on the topic).

3. Annotation-Based Approach

WebFlux allows us to define endpoints that handle incoming requests in the well-known Spring MVC fashion, using annotations like @RequestMapping and @PathVariable, or convenience annotations like @RestController and @GetMapping.

Even though the annotation names are the same, the WebFlux versions make the methods non-blocking.

For example, this endpoint:

@GetMapping(path = ["/numbers"],
  produces = [MediaType.APPLICATION_STREAM_JSON_VALUE])
@ResponseBody
fun getNumbers() = Flux.range(1, 100)

produces a stream of the first 100 integers. If the server runs on localhost:8080, then connecting to it by means of the command:

curl localhost:8080/numbers

will print out the requested numbers.

4. Lambda-based Approach

A newer approach to defining endpoints is by means of lambda expressions, which have been present in Java since version 1.8. With the help of Kotlin, lambda expressions can be used even with earlier Java versions.

In WebFlux, router functions are determined by a RequestPredicate (in other words, who should manage the request) and a HandlerFunction (in other words, how the request should be handled).

A handler function accepts a ServerRequest instance and produces a Mono<ServerResponse> one.

In Kotlin, if the last function argument is a lambda expression, it can be placed outside the parentheses.

Such syntax allows us to highlight the split between the request predicate and the handler function in router functions:

router {
    GET("/route") { _ -> ServerResponse.ok().body(fromObject(arrayOf(1, 2, 3))) }
}

The function has a clear human-readable format: once a request of type GET arrives at /route, then construct a response (ignoring the content of the request – hence the symbol underscore) with the HTTP status OK and with a body constructed from the given object.

Now, in order to make it work in WebFlux, we should place the router function in a class:

@Configuration
class SimpleRoute {
 
    @Bean
    fun route() = router {
        GET("/route") { _ -> ServerResponse.ok().body(fromObject(arrayOf(1, 2, 3))) }
    }
}

Often, the logic of our applications requires that we should construct more sophisticated router functions.

In WebFlux, Kotlin's router function DSL defines a variety of functions such as accept, and, or, nest, invoke, GET, and POST by means of extension functions, which allows us to construct composite router functions:

router {
    accept(TEXT_HTML).nest {
        (GET("/device/") or GET("/devices/")).invoke(handler::getAllDevices)
    }
}

The handler variable should be an instance of a class implementing a method getAllDevices() with the standard HandlerFunction signature:

fun getAllDevices(request: ServerRequest): Mono<ServerResponse>

as we have mentioned above.

In order to maintain a proper separation of concerns, we may place the definitions of non-related router functions in separate classes:

@Configuration
class HomeSensorsRouters(private val handler: HomeSensorsHandler) {
    @Bean
    fun roomsRouter() = router {
        (accept(TEXT_HTML) and "/room").nest {
            GET("/light", handler::getLightReading)
            POST("/light", handler::setLight)
        }
    }
    // other router function definitions could go here
}

We may access the path variables by means of the String-valued method pathVariable():

val id = request.pathVariable("id")

while access to the body of the ServerRequest is achieved by means of the bodyToMono and bodyToFlux methods, for example:

val device: Mono<Device> = request
  .bodyToMono(Device::class.java)

5. Testing

In order to test the router functions, we should bind a WebTestClient instance to the router SimpleRoute().route() that we want to test:

var client = WebTestClient.bindToRouterFunction(SimpleRoute().route()).build()

Now we are ready to test whether the router’s handler function returns status OK:

client.get()
  .uri("/route")
  .exchange()
  .expectStatus()
  .isOk

The WebTestClient interface defines the methods that allow us to test all HTTP request methods like GET, POST and PUT even without running the server. 

In order to test the content of the response body, we may want to use the json() method:

client.get()
  .uri("/route")
  .exchange()
  .expectBody()
  .json("[1, 2, 3]")

6. Conclusion

In this article, we demonstrated how to use the basic features of the WebFlux framework using Kotlin.

We briefly mentioned the well-known annotation-based approach in defining the endpoints and dedicated more time to illustrate how to define them using router functions in a lambda-based style approach.

All code snippets may be found in our repository over on GitHub.

Guide to Apache Avro

1. Overview

Data serialization is a technique of converting data into binary or text format. There are multiple systems available for this purpose. Apache Avro is one of those data serialization systems.

Avro is a language-independent, schema-based data serialization library. It uses a schema to perform serialization and deserialization. Moreover, Avro uses a JSON format to specify the data structure, which makes it more powerful.

In this tutorial, we’ll explore more about Avro setup, the Java API to perform serialization and a comparison of Avro with other data serialization systems.

We’ll focus primarily on schema creation which is the base of the whole system.

2. Apache Avro

Avro is a language-independent serialization library. To achieve this, Avro uses a schema, which is one of its core components. It stores the schema in a file for further data processing.

Avro is the best fit for Big Data processing. It's quite popular in the Hadoop and Kafka world for its fast processing.

Avro creates a data file where it keeps data along with schema in its metadata section. Above all, it provides a rich data structure which makes it more popular than other similar solutions.

To use Avro for serialization, we need to follow the steps mentioned below.

3. Problem Statement

Let's start by defining a class called AvroHttpRequest that we'll use for our examples. The class contains primitive as well as complex type attributes:

class AvroHttpRequest {
    
    private long requestTime;
    private ClientIdentifier clientIdentifier;
    private List<String> employeeNames;
    private Active active;
}

Here, requestTime is a primitive value. ClientIdentifier is another class, which represents a complex type. We also have employeeNames, which is again a complex type. Active is an enum that describes whether the given list of employees is active or not.

Our objective is to serialize and de-serialize the AvroHttpRequest class using Apache Avro.
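
For reference, here's a minimal sketch of those supporting types, matching the schema we'll define below (constructors, getters and setters omitted):

class ClientIdentifier {
    private String hostName;
    private String ipAddress;
}

enum Active {
    YES, NO
}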

4. Avro Data Types

Before proceeding further, let’s discuss the data types supported by Avro.

Avro supports two types of data:

  • Primitive type: Avro supports all the primitive types. We use the primitive type name to define the type of a given field. For example, a value which holds a String should be declared as {"type": "string"} in the schema
  • Complex type: Avro supports six kinds of complex types: records, enums, arrays, maps, unions and fixed

For example, in our problem statement, ClientIdentifier is a record.

In that case schema for ClientIdentifier should look like:

{
   "type":"record",
   "name":"ClientIdentifier",
   "namespace":"com.baeldung.avro",
   "fields":[
      {
         "name":"hostName",
         "type":"string"
      },
      {
         "name":"ipAddress",
         "type":"string"
      }
   ]
}

5. Using Avro

To start with, let’s add the Maven dependencies we’ll need to our pom.xml file.

We should include the following dependencies:

  • Apache Avro – core components
  • Compiler – Apache Avro compilers for Avro IDL and Avro Specific Java API
  • Tools – which includes Apache Avro command-line tools and utilities
  • Apache Avro Maven Plugin for Maven projects

We’re using version 1.8.2 for this tutorial.

However, it’s always advised to find the latest version on Maven Central:

<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-compiler</artifactId>
    <version>1.8.2</version>
</dependency>
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-maven-plugin</artifactId>
    <version>1.8.2</version>
</dependency>

After adding the Maven dependencies, the next steps will be:

  • Schema creation
  • Reading the schema in our program
  • Serializing our data using Avro
  • Finally, de-serialize the data

6. Schema Creation

Avro describes its Schema using a JSON format. There are mainly four attributes for a given Avro Schema:

  • Type – describes whether the schema is of a complex type or a primitive value
  • Namespace – describes the namespace the given schema belongs to
  • Name – the name of the schema
  • Fields – tells us about the fields associated with a given schema; fields can be of a primitive as well as a complex type

One way of creating the schema is to write the JSON representation, as we saw in the previous sections.

We can also create a schema using SchemaBuilder, which is undeniably a better and more efficient way to create it.

6.1. SchemaBuilder Utility

The class org.apache.avro.SchemaBuilder is useful for creating the Schema.

First of all, let’s create the schema for ClientIdentifier:

Schema clientIdentifier = SchemaBuilder.record("ClientIdentifier")
  .namespace("com.baeldung.avro")
  .fields().requiredString("hostName").requiredString("ipAddress")
  .endRecord();

Now, let’s use this for creating an avroHttpRequest schema:

Schema avroHttpRequest = SchemaBuilder.record("AvroHttpRequest")
  .namespace("com.baeldung.avro")
  .fields().requiredLong("requestTime")
  .name("clientIdentifier")
    .type(clientIdentifier)
    .noDefault()
  .name("employeeNames")
    .type()
    .array()
    .items()
    .stringType()
    .arrayDefault(null)
  .name("active")
    .type()
    .enumeration("Active")
    .symbols("YES","NO")
    .noDefault()
  .endRecord();

It's important to note here that we've assigned clientIdentifier as the type for the clientIdentifier field. In this case, the clientIdentifier used to define the type is the same schema we created before.

Later we can apply the toString method to get the JSON structure of Schema.

Schema files are saved using the .avsc extension. Let’s save our generated schema to the “src/main/resources/avroHttpRequest-schema.avsc” file.
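
For example, here's a minimal sketch of writing the generated schema out to that file (error handling omitted, as in the other snippets):

String schemaJson = avroHttpRequest.toString(true); // pretty-printed JSON
Files.write(Paths.get("src/main/resources/avroHttpRequest-schema.avsc"),
  schemaJson.getBytes(StandardCharsets.UTF_8));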

7. Reading the Schema

Reading a schema is more or less about creating Avro classes for the given schema. Once Avro classes are created we can use them to serialize and deserialize objects.

There are two ways to create Avro classes:

  • Programmatically generating Avro classes: Classes can be generated using SchemaCompiler. There are a couple of APIs which we can use for generating Java classes. We can find the code for generating classes on GitHub.
  • Using Maven to generate classes

We have a Maven plugin that does the job well. We need to include the plugin and run mvn clean install.

Let’s add the plugin to our pom.xml file:

<plugin>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-maven-plugin</artifactId>
    <version>${avro.version}</version>
        <executions>
            <execution>
                <id>schemas</id>
                <phase>generate-sources</phase>
                <goals>
                    <goal>schema</goal>
                    <goal>protocol</goal>
                    <goal>idl-protocol</goal>
                </goals>
                <configuration>
                    <sourceDirectory>${project.basedir}/src/main/resources/</sourceDirectory>
                    <outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
                </configuration>
            </execution>
        </executions>
</plugin>

8. Serialization and Deserialization with Avro

As we’re done with generating the schema let’s continue exploring the serialization part.

There are two data serialization formats which Avro supports: JSON format and Binary format.

First, we’ll focus on the JSON format and then we’ll discuss the Binary format.

Before proceeding further, we should go through a few key interfaces. We can use the interfaces and classes below for serialization:

DatumWriter<T>: We should use this to write data for a given schema. We'll be using the SpecificDatumWriter implementation in our example; however, DatumWriter has other implementations as well: GenericDatumWriter, Json.Writer, ProtobufDatumWriter, ReflectDatumWriter, and ThriftDatumWriter.

Encoder: The Encoder is used for defining the format, as previously mentioned. EncoderFactory provides two types of encoders: a binary encoder and a JSON encoder.

DatumReader<D>: A single interface for de-serialization. Again, it has multiple implementations, but we'll be using SpecificDatumReader in our example. Other implementations are GenericDatumReader, Json.ObjectReader, Json.Reader, ProtobufDatumReader, ReflectDatumReader, and ThriftDatumReader.

Decoder: The Decoder is used while de-serializing the data. DecoderFactory provides two types of decoders: a binary decoder and a JSON decoder.

Next, let’s see how serialization and de-serialization happen in Avro.

8.1. Serialization

We’ll take the example of AvroHttpRequest class and try to serialize it using Avro.

First of all, let’s serialize it in JSON format:

public byte[] serealizeAvroHttpRequestJSON(
  AvroHttpRequest request) {
 
    DatumWriter<AvroHttpRequest> writer = new SpecificDatumWriter<>(
      AvroHttpRequest.class);
    byte[] data = new byte[0];
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    Encoder jsonEncoder = null;
    try {
        jsonEncoder = EncoderFactory.get().jsonEncoder(
          AvroHttpRequest.getClassSchema(), stream);
        writer.write(request, jsonEncoder);
        jsonEncoder.flush();
        data = stream.toByteArray();
    } catch (IOException e) {
        logger.error("Serialization error:" + e.getMessage());
    }
    return data;
}

Let’s have a look at a test case for this method:

@Test
public void whenSerialized_UsingJSONEncoder_ObjectGetsSerialized(){
    byte[] data = serealizer.serealizeAvroHttpRequestJSON(request);
    assertTrue(Objects.nonNull(data));
    assertTrue(data.length > 0);
}

Here we've used the jsonEncoder method, passing the schema and the output stream to it.

If we want to use a binary encoder, we need to replace the jsonEncoder() method with binaryEncoder():

Encoder jsonEncoder = EncoderFactory.get().binaryEncoder(stream,null);

8.2. Deserialization

To do this, we’ll be using the above-mentioned DatumReader and Decoder interfaces.

As we used EncoderFactory to get an Encoder, similarly we’ll use DecoderFactory to get a Decoder object.

Let’s de-serialize the data using JSON format:

public AvroHttpRequest deSerealizeAvroHttpRequestJSON(byte[] data) {
    DatumReader<AvroHttpRequest> reader
     = new SpecificDatumReader<>(AvroHttpRequest.class);
    Decoder decoder = null;
    try {
        decoder = DecoderFactory.get().jsonDecoder(
          AvroHttpRequest.getClassSchema(), new String(data));
        return reader.read(null, decoder);
    } catch (IOException e) {
        logger.error("Deserialization error:" + e.getMessage());
    }
    return null;
}

And let’s see the test case:

@Test
public void whenDeserializeUsingJSONDecoder_thenActualAndExpectedObjectsAreEqual(){
    byte[] data = serealizer.serealizeAvroHttpRequestJSON(request);
    AvroHttpRequest actualRequest = deSerealizer
      .deSerealizeAvroHttpRequestJSON(data);
    assertEquals(actualRequest,request);
    assertTrue(actualRequest.getRequestTime()
      .equals(request.getRequestTime()));
}

Similarly, we can use a binary decoder:

Decoder decoder = DecoderFactory.get().binaryDecoder(data, null);

9. Conclusion

Apache Avro is especially useful while dealing with big data. It offers data serialization in binary as well as JSON format which can be used as per the use case.

The Avro serialization process is faster, and it’s space efficient as well. Avro does not keep the field type information with each field; instead, it creates metadata in a schema.

Last but not least, Avro has great bindings with a wide range of programming languages, which gives it an edge.

As always, the code can be found over on GitHub.

Initializing HashSet at the Time of Construction

1. Overview

In this quick tutorial, we'll introduce various methods of initializing a HashSet with values at the time of its construction.

If you’re instead looking to explore the features of HashSet, refer to this core article here.

We'll dive into the built-in methods available since Java 5 and before, followed by the new mechanism introduced in Java 8. We'll also see a custom utility method and finally explore the features provided by 3rd party collection libraries, Google Guava in particular.

If you're lucky enough to have migrated to JDK 9+ already, you can simply use collection factory methods.
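
For example, with the Java 9 collection factory methods, this becomes a one-liner that produces an immutable Set:

Set<String> set = Set.of("a", "b", "c");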

2. Java Built-in Methods

Let's begin by examining three built-in mechanisms available since Java 5 or before.

2.1. Using Another Collection Instance

We can pass an existing instance of another collection to initialize the Set. In the example below, we use an inline-created List:

Set<String> set = new HashSet<>(Arrays.asList("a", "b", "c"));

2.2. Using Anonymous Class

In yet another approach, we can use an anonymous class to add elements to a HashSet.

Note the use of double curly braces. This approach is technically very expensive, as it creates an anonymous class each time it's called.

So, depending on how frequently we need to initialize a Set, we can try to avoid using this approach:

Set<String> set = new HashSet<String>(){{
    add("a");
    add("b");
    add("c");
}};

2.3. Using Collections Utility Method Since Java 5

Java's Collections utility class provides the method named singleton to create a Set with a single element. The Set instance created with the singleton method is immutable, meaning that we cannot add more values to it.

There are situations, especially in unit testing, where we need to create a Set with a single value:

Set<String> set = Collections.singleton("a");

3. Defining Custom Utility Method

We can define a static final method as below. The method accepts variable arguments.

Using Collections.addAll, which accepts the collection object and an array of values, is the best choice among others because of the low overhead of copying values.

The method uses generics, so we can pass values of any type:

public static final <T> Set<T> newHashSet(T... objs) {
    Set<T> set = new HashSet<T>();
    Collections.addAll(set, objs);
    return set;
}

The utility method can be used in our code as below:

Set<String> set = newHashSet("a","b","c");

4. Using Stream since Java 8

With the introduction of the Stream API in Java 8, we have additional options. We can use Stream with Collectors as shown in the code below:

Set<String> set = Stream.of("a", "b", "c")
  .collect(Collectors.toCollection(HashSet::new));


5. Using 3rd Party Collection Library

There are multiple 3rd party collections libraries including Google Guava, Apache Commons Collections, and Eclipse Collections just to name a few.

These libraries provide convenient utility methods to initialize collections like Set. Since Google Guava is one of the most commonly used, here's an example from it. Guava has convenient methods for creating mutable and immutable Set objects:

Set<String> set = Sets.newHashSet("a", "b", "c");

Similarly, Guava has a utility class for creating immutable Set instances, as we can see in the example below.

Set<String> set = ImmutableSet.of("a", "b", "c");

6. Conclusion

In conclusion, we saw multiple ways in which a HashSet can be initialized while it's constructed. These approaches don't cover all possible ways by any means; it was just an attempt to showcase the most common ones.

One such approach not covered here could be using the object builder to construct a Set.

As always working code example is available over on GitHub.

How to Trigger and Stop a Scheduled Spring Batch Job

1. Overview

In this tutorial, we’ll investigate and compare different ways to trigger and stop a scheduled Spring Batch job for any required business cases.

If you need an introduction to Spring Batch and the Spring Scheduler, please refer to our Spring Batch and Spring Scheduler articles.

2. Trigger a Scheduled Spring Batch Job

Firstly, we have a class SpringBatchScheduler to configure scheduling and the batch job. A method launchJob() will be registered as a scheduled task.

Further, to trigger the scheduled Spring Batch job in the most intuitive way, let's add a conditional flag to fire the job only when the flag is set to true:

private AtomicBoolean enabled = new AtomicBoolean(true);

private AtomicInteger batchRunCounter = new AtomicInteger(0);

@Scheduled(fixedRate = 2000)
public void launchJob() throws Exception {
    if (enabled.get()) {
        Date date = new Date();
        JobExecution jobExecution = jobLauncher()
          .run(job(), new JobParametersBuilder()
            .addDate("launchDate", date)
            .toJobParameters());
        batchRunCounter.incrementAndGet();
    }
}

// stop, start functions (changing the flag of enabled)

The variable batchRunCounter will be used in integration tests to verify if the batch job has been stopped.
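
The stop and start functions referenced in the snippet above simply flip the flag; here's a minimal sketch (the method names are ours):

public void stop() {
    enabled.set(false);
}

public void start() {
    enabled.set(true);
}

public AtomicInteger getBatchRunCounter() {
    return batchRunCounter;
}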

3. Stop a Scheduled Spring Batch Job

With the above conditional flag, we're able to trigger the scheduled Spring Batch job while keeping the scheduled task alive.

If we don’t need to resume the job, then we can actually stop the scheduled task to save resources.

Let’s take a look at two options in the next two subsections.

3.1. Using Scheduler Post Processor

Since we're scheduling a method using the @Scheduled annotation, a bean post processor, ScheduledAnnotationBeanPostProcessor, will have been registered first.

We can explicitly call the postProcessBeforeDestruction() to destroy the given scheduled bean:

@Test
public void stopJobSchedulerWhenSchedulerDestroyed() throws Exception {
    ScheduledAnnotationBeanPostProcessor bean = context
      .getBean(ScheduledAnnotationBeanPostProcessor.class);
    SpringBatchScheduler schedulerBean = context
      .getBean(SpringBatchScheduler.class);
    await().untilAsserted(() -> Assert.assertEquals(
      2, 
      schedulerBean.getBatchRunCounter().get()));
    bean.postProcessBeforeDestruction(
      schedulerBean, "SpringBatchScheduler");
    await().atLeast(3, SECONDS);

    Assert.assertEquals(
      2, 
      schedulerBean.getBatchRunCounter().get());
}

Considering multiple schedulers, it's better to keep each scheduler in its own class, so we can stop a specific scheduler as needed.

3.2. Canceling the Scheduled Future

Another way to stop the scheduler would be manually canceling its Future.

Here’s a custom task scheduler for capturing Future map:

@Bean
public TaskScheduler poolScheduler() {
    return new CustomTaskScheduler();
}

private class CustomTaskScheduler 
  extends ThreadPoolTaskScheduler {

    // scheduledTasks is a Map<Object, ScheduledFuture<?>> (e.g. an IdentityHashMap)
    // declared in the enclosing class to keep track of each task's Future

    @Override
    public ScheduledFuture<?> scheduleAtFixedRate(
      Runnable task, long period) {
        ScheduledFuture<?> future = super
          .scheduleAtFixedRate(task, period);

        ScheduledMethodRunnable runnable = (ScheduledMethodRunnable) task;
        scheduledTasks.put(runnable.getTarget(), future);

        return future;
    }
}

Then we iterate the Future map and cancel the Future for our batch job scheduler:

public void cancelFutureSchedulerTasks() {
    scheduledTasks.forEach((k, v) -> {
        if (k instanceof SpringBatchScheduler) {
            v.cancel(false);
        }
    });
}

In cases with multiple scheduled tasks, we can maintain the Future map inside the custom scheduler pool and cancel the corresponding scheduled Future based on the scheduler class.

4. Conclusion

In this quick article, we tried three different ways to trigger or stop a scheduled Spring Batch job.

When we need to restart the batch job, using a conditional flag to manage job running would be a flexible solution. Otherwise, we can follow the other two options to stop the scheduler completely.

As usual, all the code samples used in the article are available over on GitHub.


A Guide to Message Driven Beans in EJB

1. Introduction

Simply put, an Enterprise JavaBean (EJB) is a JEE component that runs on an application server.

In this tutorial, we’ll discuss Message Driven Beans (MDB), responsible for handling message processing in an asynchronous context.

MDBs have been part of JEE since the EJB 2.0 specification; EJB 3.0 introduced the use of annotations, making it easier to create those objects. Here, we'll focus on annotations.

2. Some Background

Before we dive into the Message Driven Beans details, let’s review some concepts related to messaging.

2.1. Messaging

Messaging is a communication mechanism. By using messaging, programs can exchange data even if they're written in different programming languages or reside on different operating systems.

It offers a loosely coupled solution; neither the producer nor the consumer of the information needs to know details about the other.

Therefore, they don’t even have to be connected to the messaging system at the same time (asynchronous communication).

2.2. Synchronous and Asynchronous Communication

During synchronous communication, the requester waits until the response is back. In the meantime, the requester process stays blocked.

In asynchronous communication, on the other hand, the requester initiates the operation but isn’t blocked by it; the requester can move on to other tasks and receive the response later.

2.3. JMS

Java Message Service (JMS) is a Java API that supports messaging.

JMS provides point-to-point and publish/subscribe messaging models.

3. Message Driven Beans

An MDB is a component invoked by the container every time a message arrives on the messaging system. As a result, this event triggers the code inside this bean.

We can perform a lot of tasks inside an MDB's onMessage() method, such as showing the received data in a browser or parsing it and saving it to a database.

Another example is submitting data to another queue after some processing. It all comes down to our business rules.

3.1. Message Driven Beans Lifecycle

An MDB has only two states:

  1. It doesn’t exist on the container
  2. Created and ready to receive messages

The dependencies, if present, are injected right after the MDB is created.

To execute instructions before receiving messages, we need to annotate a method with @javax.ejb.PostConstruct.

Both dependency injection and @javax.ejb.PostConstruct execution happen only once.

After that, the MDB is ready to receive messages.
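
For example, a minimal sketch of such an initialization method inside an MDB (the method name is ours):

@PostConstruct
public void init() {
    // executed exactly once, after dependency injection
    // and before any message is delivered to onMessage()
    System.out.println("MDB created and ready to receive messages");
}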

3.2. Transaction

A message can be delivered to an MDB inside a transaction context.

Meaning that all operations within the onMessage() method are part of a single transaction.

Therefore, if a rollback happens, the messaging system redelivers the data.

4. Working With Message Driven Beans

4.1. Creating the Consumer

To create a Message Driven Bean, we use the @javax.ejb.MessageDriven annotation on the class declaration.

To handle the incoming message, we must implement the onMessage() method of the MessageListener interface:

@MessageDriven(activationConfig = { 
    @ActivationConfigProperty(
      propertyName = "destination", 
      propertyValue = "tutorialQueue"), 
    @ActivationConfigProperty(
      propertyName = "destinationType", 
      propertyValue = "javax.jms.Queue")
})
public class ReadMessageMDB implements MessageListener {

    public void onMessage(Message message) {
        TextMessage textMessage = (TextMessage) message;
        try {
            System.out.println("Message received: " + textMessage.getText());
        } catch (JMSException e) {
            System.out.println(
              "Error while trying to consume messages: " + e.getMessage());
        }
    }
}

Since this article focuses on annotations instead of .xml descriptors, we'll use @ActivationConfigProperty rather than <activation-config-property>.

@ActivationConfigProperty is a key-value property that represents that configuration. We’ll use two properties inside activationConfig, setting the queue and the type of object the MDB will consume.

Inside the onMessage() method, we can cast the message parameter to TextMessage, BytesMessage, MapMessage, StreamMessage, or ObjectMessage.

However, for this article, we’ll only look at the message content on standard output.

4.2. Creating the Producer

As covered in section 2.1, producer and consumer services are completely independent and can even be written in different programming languages!

We’ll produce our messages using Java Servlets:

@Override
protected void doGet(
  HttpServletRequest req, 
  HttpServletResponse res) 
  throws ServletException, IOException {
 
    String text = req.getParameter("text") != null ? req.getParameter("text") : "Hello World";

    try {
        Context ic = new InitialContext();

        ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
        Queue queue = (Queue) ic.lookup("queue/tutorialQueue");

        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(
              false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer publisher = session
              .createProducer(queue);

            connection.start();

            TextMessage message = session.createTextMessage(text);
            publisher.send(message);
        }
    } catch (NamingException | JMSException e) {
        res.getWriter()
          .println("Error while trying to send <" + text + "> message: " + e.getMessage());
    }

    res.getWriter()
      .println("Message sent: " + text);
}

After obtaining the ConnectionFactory and Queue instances, we must create a Connection and Session.

To create a session, we call the createSession method.

The first parameter in createSession is a boolean which defines whether the session is part of a transaction or not.

The second parameter is only used when the first is false. It allows us to describe the acknowledgment method that applies to incoming messages and takes the values of Session.AUTO_ACKNOWLEDGE, Session.CLIENT_ACKNOWLEDGE and Session.DUPS_OK_ACKNOWLEDGE.

We can now start the connection, create a text message on the session object and send our message.

A consumer, bound to the same queue will receive a message and perform its asynchronous task.

Also, after the JNDI lookups, the try-with-resources block makes sure the Connection is closed even if a JMSException occurs, for example when trying to connect to a non-existing queue or specifying a wrong port number.

5. Testing the Message Driven Bean

Send a message through the GET method on SendMessageServlet, as in:

http://127.0.0.1:8080/producer/SendMessageServlet?text=Text to send

Also, the servlet sends “Hello World” to the queue if we don’t send any parameters, as in http://127.0.0.1:8080/producer/SendMessageServlet.

6. Conclusion

Message Driven Beans allow the simple creation of a queue-based application.

Therefore, MDBs let us decouple our applications into smaller services with localized responsibilities, resulting in a much more modular and incremental system that can recover from system failures.

As always the code is over on GitHub.

A Guide to Eclipse JNoSQL

1. Overview

Eclipse JNoSQL is a set of APIs and implementations that simplify the interaction of Java applications with NoSQL databases.

In this article, we’ll learn how to set up and configure JNoSQL to interact with a NoSQL database. We’ll work with both the Communication and Mapping layer.

2. Eclipse JNoSQL Communication Layer

Technically speaking, the communication layer consists of two modules: Diana API and a driver.

While the API defines an abstraction to NoSQL database types, the driver provides implementations for most known databases.

We can compare this with the JDBC API and JDBC driver in relational databases.

2.1. Eclipse JNoSQL Diana API

Simply put, there are four basic types of NoSQL databases: Key-Value, Column, Document, and Graph.

And the Eclipse JNoSQL Diana API defines three modules:

  1. diana-key-value
  2. diana-column
  3. diana-document

The NoSQL graph type isn't covered by the API, because it is already covered by Apache TinkerPop.

The API is based on a core module, diana-core, and defines an abstraction to common concepts, such as Configuration, Factory, Manager, Entity, and Value.

To work with the API, we need to provide the dependency of the corresponding module to our NoSQL database type.

Thus, for a document-oriented database, we’ll need the diana-document dependency:

<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>diana-document</artifactId>
    <version>0.0.5</version>
</dependency>

Similarly, we should use the diana-key-value module if the working NoSQL database is key-value oriented:

<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>diana-key-value</artifactId>
    <version>0.0.5</version>
</dependency>

And finally, the diana-column module if it’s a column-oriented:

<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>diana-column</artifactId>
    <version>0.0.5</version>
</dependency>

The most recent versions can be found on Maven Central.

2.2. Eclipse JNoSQL Diana Driver

The driver is a set of implementations of the API for the most common NoSQL databases.

There’s one implementation per NoSQL database. If the database is multi-model, the driver should implement all supported APIs.

For example, the couchbase-driver implements both diana-document and diana-key-value because Couchbase is both document and key-value oriented.

Unlike with relational databases, where the driver is typically provided by the database vendor, here the driver is provided by Eclipse JNoSQL. In most cases, this driver is a wrapper around the official vendor library.

To get started with the driver, we should include the API and the corresponding implementation for the chosen NoSQL database.

For MongoDB, for example, we need to include the following dependencies:

<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>diana-document</artifactId>
    <version>0.0.5</version>
</dependency>
<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>mongodb-driver</artifactId>
    <version>0.0.5</version>
</dependency>

The process behind working with the Driver is simple.

First, we need a Configuration bean. By reading a configuration file from the classpath or using hardcoded values, the Configuration is able to create a Factory. We then use the Factory to create a Manager.

Finally, the Manager is responsible for pushing and retrieving the Entity to and from the NoSQL database.

In the next subsections, we’ll explain this process for each NoSQL database type.

2.3. Working with a Document-Oriented Database

In this example, we’ll be using an embedded MongoDB as it’s simple to get started with and doesn’t require installation. It’s document-oriented and the following instructions are applicable to any other document-oriented NoSQL database.

At the very beginning, we should provide all necessary settings needed by the application to properly interact with the database. In its most elementary form, we should provide the host and the port of a running instance of the MongoDB.

We can provide these settings either in the mongodb-driver.properties located on the classpath:

#Define Host and Port
mongodb-server-host-1=localhost:27017

Or as hardcoded values:

Map<String, Object> map = new HashMap<>();
map.put("mongodb-server-host-1", "localhost:27017");

Next, we create the Configuration bean for the document type:

DocumentConfiguration configuration = new MongoDBDocumentConfiguration();

From this Configuration bean, we are able to create a ManagerFactory:

DocumentCollectionManagerFactory managerFactory = configuration.get();

Implicitly, the get() method of the Configuration bean uses settings from the properties file. We can also obtain this factory from hardcoded values:

DocumentCollectionManagerFactory managerFactory 
  = configuration.get(Settings.of(map));

The ManagerFactory has a simple get() method that takes the database name as a parameter and creates the Manager:

DocumentCollectionManager manager = managerFactory.get("my-db");

And finally, we’re ready. The Manager provides all the necessary methods to interact with the underlying NoSQL database through the DocumentEntity.

So, we could, for example, insert a document:

DocumentEntity documentEntity = DocumentEntity.of("books");
documentEntity.add(Document.of("_id", "100"));
documentEntity.add(Document.of("name", "JNoSQL in Action"));
documentEntity.add(Document.of("pages", "620"));
DocumentEntity saved = manager.insert(documentEntity);

We could also search for documents:

DocumentQuery query = select().from("books").where("_id").eq(100).build();
List<DocumentEntity> entities = manager.select(query);

And in a similar manner, we could update an existing document:

saved.add(Document.of("author", "baeldung"));
DocumentEntity updated = manager.update(saved);

And finally, we can delete a stored document:

DocumentDeleteQuery deleteQuery = delete().from("books").where("_id").eq("100").build();
manager.delete(deleteQuery);

To run the sample, we just need to access the jnosql-diana module and run the DocumentApp application.

We should see the output in the console:

DefaultDocumentEntity{documents={pages=620, name=JNoSQL in Action, _id=100}, name='books'}
DefaultDocumentEntity{documents={pages=620, author=baeldung, name=JNoSQL in Action, _id=100}, name='books'}
[]

2.4. Working with a Column-Oriented Database

For the purpose of this section, we’ll use an embedded version of the Cassandra database, so no installation is needed.

The process for working with a Column-oriented database is very similar. First of all, we add the Cassandra driver and the column API to the pom:

<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>diana-column</artifactId>
    <version>0.0.5</version>
</dependency>
<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>cassandra-driver</artifactId>
    <version>0.0.5</version>
</dependency>

Next, we need the configuration settings specified in the configuration file, diana-cassandra.properties, on the classpath. Alternatively, we could also use hardcoded configuration values.

Then, with a similar approach, we’ll create a ColumnFamilyManager and start manipulating the ColumnEntity:

ColumnConfiguration configuration = new CassandraConfiguration();
ColumnFamilyManagerFactory managerFactory = configuration.get();
ColumnFamilyManager entityManager = managerFactory.get("my-keySpace");

So to create a new entity, let’s invoke the insert() method:

ColumnEntity columnEntity = ColumnEntity.of("books");
Column key = Columns.of("id", 10L);
Column name = Columns.of("name", "JNoSQL in Action");
columnEntity.add(key);
columnEntity.add(name);
ColumnEntity saved = entityManager.insert(columnEntity);

To run the sample and see the output in the console, run the ColumnFamilyApp application.

2.5. Working with a Key-Value Oriented Database

In this section, we'll use Hazelcast, a key-value oriented NoSQL database. For more information on Hazelcast, you can check this link.

The process for working with the key-value oriented type is also similar. We start by adding these dependencies to the pom:

<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>diana-key-value</artifactId>
    <version>0.0.5</version>
</dependency>
<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>hazelcast-driver</artifactId>
    <version>0.0.5</version>
</dependency>

Then we need to provide the configuration settings. Next, we can obtain a BucketManager and then manipulate the KeyValueEntity:

KeyValueConfiguration configuration = new HazelcastKeyValueConfiguration();
BucketManagerFactory managerFactory = configuration.get();
BucketManager entityManager = managerFactory.getBucketManager("books");

Let’s say we want to save the following Book model:

public class Book implements Serializable {

    private String isbn;
    private String name;
    private String author;
    private int pages;

    // standard constructor
    // standard getters and setters
}

So we create a Book instance and then we save it by invoking the put() method:

Book book = new Book(
  "12345", "JNoSQL in Action", 
  "baeldung", 420);
KeyValueEntity keyValueEntity = KeyValueEntity.of(
  book.getIsbn(), book);
entityManager.put(keyValueEntity);

Then to retrieve the saved Book instance:

Optional<Value> optionalValue = entityManager.get("12345");
Value value = optionalValue.get(); // or any other adequate Optional handling
Book savedBook = value.get(Book.class);

To run the sample and see the output in the console, run the KeyValueApp application.

3. Eclipse JNoSQL Mapping Layer

The mapping layer, the Artemis API, is a set of APIs that help map annotated Java objects to NoSQL databases. It's based on the Diana API and CDI (Contexts and Dependency Injection).

We can consider this API as a JPA/ORM-like layer for the NoSQL world. It also provides an API for each NoSQL type and one core API for common features.

In this section, we’re going to work with the MongoDB document-oriented database.

3.1. Required Dependencies

To enable Artemis in the application, we need to add the artemis-configuration dependency. Since MongoDB is document-oriented, the dependency artemis-document is also needed.

For other types of NoSQL databases, we would use artemis-column, artemis-key-value and artemis-graph.

The Diana driver for MongoDB is also needed:

<dependency>
    <groupId>org.jnosql.artemis</groupId>
    <artifactId>artemis-configuration</artifactId>
    <version>0.0.5</version>
</dependency>
<dependency>
    <groupId>org.jnosql.artemis</groupId>
    <artifactId>artemis-document</artifactId>
    <version>0.0.5</version>
</dependency>
<dependency>
    <groupId>org.jnosql.diana</groupId>
    <artifactId>mongodb-driver</artifactId>
    <version>0.0.5</version>
</dependency>

Artemis is based on CDI, so we also need to provide this Maven dependency:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-web-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

3.2. The Document Configuration File

A configuration is a set of properties for a given database that lets us provide settings outside the code. By default, we need to supply the jnosql.json file under the META-INF resource.

This is an example of the configuration file:

[
    {
        "description": "The mongodb document configuration",
        "name": "document",
        "provider": "org.jnosql.diana.mongodb.document.MongoDBDocumentConfiguration",
        "settings": {
            "mongodb-server-host-1":"localhost:27019"
        }
    }
]

We will need to specify the configuration name above by setting the name attribute in our ConfigurationUnit. If the configuration is in a different file, it can be specified by using the fileName attribute.

Given this configuration, we create a factory:

@Inject
@ConfigurationUnit(name = "document")
private DocumentCollectionManagerFactory<MongoDBDocumentCollectionManager> managerFactory;

And from this factory, we can create a DocumentCollectionManager:

@Produces
public MongoDBDocumentCollectionManager getEntityManager() {
    return managerFactory.get("todos");
}

The DocumentCollectionManager is a CDI-enabled bean, and it is used in both the Template and the Repository.

3.3. Mapping

The mapping is an annotation-driven process by which the Entity model is converted to the Diana EntityValue.

Let’s start by defining a Todo model:

@Entity
public class Todo implements Serializable {

    @Id("id")
    public long id;

    @Column
    public String name;

    @Column
    public String description;

    // standard constructor
    // standard getters and setters
}

As shown above, we have the basic mapping annotations: @Entity, @Id, and @Column.

Now to manipulate this model, we need either a Template class or a Repository interface.

3.4. Working with the Template

The template is the bridge between the entity model and the Diana API. For a document-oriented database, we start by injecting the DocumentTemplate bean:

@Inject
DocumentTemplate documentTemplate;

And then, we can manipulate the Todo Entity. For example, we can create a Todo:

public Todo add(Todo todo) {
    return documentTemplate.insert(todo);
}

Or we can retrieve a Todo by id:

public Todo get(String id) {
    Optional<Todo> todo = documentTemplate
      .find(Todo.class, id);
    return todo.get(); // or any other proper Optional handling
}

To select all entities, we build a DocumentQuery and then we invoke the select() method:

public List<Todo> getAll() {
    DocumentQuery query = select().from("Todo").build();
    return documentTemplate.select(query);
}

And finally we can delete a Todo entity by id:

public void delete(String id) {
    documentTemplate.delete(Todo.class, id);
}

3.5. Working with the Repository

In addition to the Template class, we can also manage entities through the Repository interface which has methods for creating, updating, deleting and retrieving information.

To use the Repository interface, we just provide a subinterface of the Repository:

public interface TodoRepository extends Repository<Todo, String> {
    List<Todo> findByName(String name);
    List<Todo> findAll();
}

By following the method and parameter naming conventions, an implementation of this interface is provided at runtime as a CDI bean.

In this example, all Todo entities with a matching name are retrieved by the findByName() method.

We can now use it:

@Inject
TodoRepository todoRepository;
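
For instance, we could fetch all Todo entities with a given name (the value here is purely illustrative):

List<Todo> namedTodos = todoRepository.findByName("task120");
List<Todo> allTodos = todoRepository.findAll();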

The Database qualifier lets us work with more than one NoSQL database in the same application. It comes with two attributes, the type and the provider.

If the database is multi-model, then we need to specify which model we are working with:

@Inject
@Database(value = DatabaseType.DOCUMENT)
TodoRepository todoRepository;

Additionally, if we have more than one database of the same model, we need to specify the provider:

@Inject
@Database(value = DatabaseType.DOCUMENT, provider="org.jnosql.diana.mongodb.document.MongoDBDocumentConfiguration")
TodoRepository todoRepository;

To run the sample, just access the jnosql-artemis module and invoke this command:

mvn package liberty:run

This command builds, deploys, and starts the Open Liberty server thanks to the liberty-maven-plugin.

3.6. Testing the Application

As the application exposes a REST endpoint, we can use any REST client for our tests. Here we used the curl tool.

So to save a Todo class:

curl -d '{"id":"120", "name":"task120", "description":"Description 120"}' -H "Content-Type: application/json" -X POST http://localhost:9080/jnosql-artemis/todos

and to get all Todo:

curl -H "Accept: application/json" -X GET http://localhost:9080/jnosql-artemis/todos

Or to get just one Todo:

curl -H "Accept: application/json" -X GET http://localhost:9080/jnosql-artemis/todos/120

4. Conclusion

In this tutorial, we have explored how JNoSQL is able to abstract the interaction with a NoSQL database.

First, we have used JNoSQL Diana API to interact with the database with low-level code. Then, we used the JNoSQL Artemis API to work with friendly Java annotated Models.

As usual, we can find the code used in this article over on Github.

Getting a File’s Mime Type in Java

1. Overview

In this tutorial, we’ll take a look at various strategies for getting MIME types of a file. We’ll look at ways to extend the MIME types available to the strategies, wherever applicable.

We’ll also point out where we should favor one strategy over the other.

2. Using Java 7

Let’s start with Java 7 – which provides the method Files.probeContentType(path) for resolving the MIME type:

@Test
public void whenUsingJava7_thenSuccess() throws IOException {
    Path path = new File("product.png").toPath();
    String mimeType = Files.probeContentType(path);
 
    assertEquals(mimeType, "image/png");
}

This method makes use of the installed FileTypeDetector implementations to probe the MIME type. It invokes the probeContentType of each implementation to resolve the type.

Now, if the file is recognized by any of the implementations, the content type is returned. However, if that doesn’t happen, a system-default file type detector is invoked.

However, the default implementations are OS specific and might fail depending on the OS that we are using.

In addition to that, it’s also important to note that the strategy will fail if the file isn’t present in the filesystem. Furthermore, if the file doesn’t have an extension, it will result in failure.
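
If the defaults don't cover our files, we can plug in our own detector. Below is a minimal sketch of a custom FileTypeDetector (the class name and the naive extension check are our own assumptions); for the JDK to pick it up, it also has to be registered as a provider in META-INF/services/java.nio.file.spi.FileTypeDetector:

public class WebpFileTypeDetector extends FileTypeDetector {

    @Override
    public String probeContentType(Path path) throws IOException {
        // naive, extension-based detection, just for illustration
        return path.toString().toLowerCase().endsWith(".webp")
          ? "image/webp"
          : null; // null means "not recognized", so other detectors can try
    }
}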

3. Using URLConnection

URLConnection provides several APIs for detecting MIME types of a file. Let’s briefly explore each of them.

3.1. Using getContentType()

We can use getContentType() method of URLConnection to retrieve a file’s MIME type:

@Test
public void whenUsingGetContentType_thenSuccess() throws IOException {
    File file = new File("product.png");
    URLConnection connection = file.toURI().toURL().openConnection();
    String mimeType = connection.getContentType();
 
    assertEquals(mimeType, "image/png");
}

However, a major drawback of this approach is that it’s very slow.

3.2. Using guessContentTypeFromName()

Next, let’s see how we can make use of the guessContentTypeFromName() for the purpose:

@Test
public void whenUsingGuessContentTypeFromName_thenSuccess(){
    File file = new File("product.png");
    String mimeType = URLConnection.guessContentTypeFromName(file.getName());
 
    assertEquals(mimeType, "image/png");
}

This method makes use of the internal FileNameMap to resolve the MIME type from the extension.

We also have the option of using guessContentTypeFromStream() instead, which uses the first few characters of the input stream, to determine the type.
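
A minimal sketch of that variant could look as follows; note that the stream has to support mark/reset, which is why we wrap it in a BufferedInputStream:

@Test
public void whenUsingGuessContentTypeFromStream_thenSuccess() throws IOException {
    try (InputStream is = new BufferedInputStream(new FileInputStream("product.png"))) {
        String mimeType = URLConnection.guessContentTypeFromStream(is);

        assertEquals(mimeType, "image/png");
    }
}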

3.3. Using getFileNameMap()

A faster way to obtain the MIME type using URLConnection is using the getFileNameMap() method:

@Test
public void whenUsingGetFileNameMap_thenSuccess(){
    File file = new File("product.png");
    FileNameMap fileNameMap = URLConnection.getFileNameMap();
    String mimeType = fileNameMap.getContentTypeFor(file.getName());
 
    assertEquals(mimeType, "image/png");
}

The method returns the table of MIME types used by all instances of URLConnection. This table is then used to resolve the input file type.

The built-in table of MIME types is very limited when it comes to URLConnection.

By default, the class uses content-types.properties file in JRE_HOME/lib. We can, however, extend it, by specifying a user-specific table using the content.types.user.table property:

System.setProperty("content.types.user.table","<path-to-file>");

4. Using MimeTypesFileTypeMap

MimetypesFileTypeMap resolves MIME types by using a file's extension. This class came with Java 6, and hence comes in very handy when we're working with JDK 1.6.

Now let’s see how to use it:

@Test
public void whenUsingMimeTypesFileTypeMap_thenSuccess() {
    File file = new File("product.png");
    MimetypesFileTypeMap fileTypeMap = new MimetypesFileTypeMap();
    String mimeType = fileTypeMap.getContentType(file.getName());
 
    assertEquals(mimeType, "image/png");
}

Here, we can either pass the name of the file or the File instance itself as the parameter to the function. However, the function with File instance as the parameter internally calls the overloaded method that accepts the filename as the parameter.

Internally, this method looks up a file called mime.types for the type resolution. It’s very important to note that the method searches for the file in a specific order:

  1. Programmatically added entries to the MimetypesFileTypeMap instance (illustrated in the sketch below)
  2. .mime.types in the user’s home directory
  3. <java.home>/lib/mime.types
  4. resources named META-INF/mime.types
  5. resource named META-INF/mimetypes.default (usually found only in the activation.jar file)

However, if no file is found, it will return application/octet-stream as the response.
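
As a quick illustration of the first lookup option above, we can register extra mappings programmatically; the image/webp entry here is just an example value:

MimetypesFileTypeMap fileTypeMap = new MimetypesFileTypeMap();
// each entry follows the mime.types syntax: <mime-type> <extension> [<extension> ...]
fileTypeMap.addMimeTypes("image/webp webp");

assertEquals(fileTypeMap.getContentType("product.webp"), "image/webp");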

5. Using jMimeMagic

jMimeMagic is a restrictively licensed library that we can use to obtain the MIME type of a file.

Let’s start by configuring the Maven dependency:

<dependency>
    <groupId>net.sf.jmimemagic</groupId>
    <artifactId>jmimemagic</artifactId>
    <version>0.1.5</version>
</dependency>

We can find the latest version of this library on Maven Central.

Next, we’ll explore how to work with the library:

@Test    
public void whenUsingJmimeMagic_thenSuccess() throws Exception {
    File file = new File("product.png");
    Magic magic = new Magic();
    MagicMatch match = magic.getMagicMatch(file, false);
 
    assertEquals(match.getMimeType(), "image/png");
}

This library can work with a stream of data and hence doesn’t require the file to be present in the file system.

6. Using Apache Tika

Apache Tika is a toolset that detects and extracts metadata and text from a variety of files. It has a rich and powerful API and comes with tika-core which we can make use of, for detecting MIME type of a file.

Let’s begin by configuring the Maven dependency:

<dependency>
    <groupId>org.apache.tika</groupId>
    <artifactId>tika-core</artifactId>
    <version>1.18</version>
</dependency>

Next, we’ll make use of the detect() method to resolve the type:

@Test
public void whenUsingTika_thenSuccess() throws IOException {
    File file = new File("product.png");
    Tika tika = new Tika();
    String mimeType = tika.detect(file);
 
    assertEquals(mimeType, "image/png");
}

The library relies on magic markers in the stream prefix, for type resolution.

7. Conclusion

In this article, we’ve looked at the various strategies of obtaining the MIME type of a file. Furthermore, we have also analyzed the tradeoffs of the approaches. We have also pointed out the scenarios where we should favor one strategy over the other.

The full source code that is used in this article is available over at GitHub, as always.

How to Convert List to Map in Java

1. Overview

Converting List to Map is a common task. In this tutorial, we’ll cover several ways to do this.

We’ll assume that each element of the List has an identifier which will be used as a key in the resulting Map.

2. Sample Data Structure

Firstly, let’s model the element:

public class Animal {
    private int id;
    private String name;

    //  constructor/getters/setters
}

The id field is unique, hence we can make it the key.

Let’s start converting with the traditional way.

3. Before Java 8

Evidently, we can convert a List to a Map using core Java methods:

public Map<Integer, Animal> convertListBeforeJava8(List<Animal> list) {
    Map<Integer, Animal> map = new HashMap<>();
    for (Animal animal : list) {
        map.put(animal.getId(), animal);
    }
    return map;
}

Let’s test the conversion:

@Test
public void whenConvertBeforeJava8_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService
      .convertListBeforeJava8(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

4. With Java 8

Starting with Java 8 we can convert a List into a Map using streams and Collectors:

public Map<Integer, Animal> convertListAfterJava8(List<Animal> list) {
    Map<Integer, Animal> map = list.stream()
      .collect(Collectors.toMap(Animal::getId, animal -> animal));
    return map;
}

Again, let’s make sure the conversion is done correctly:

@Test
public void whenConvertAfterJava8_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService.convertListAfterJava8(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

5. Using the Guava Library

Besides core Java, we can use 3rd party libraries for the conversion.

5.1. Maven Configuration

Firstly, we need to add the following dependency to our pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>23.6.1-jre</version>
</dependency>

The latest version of this library can always be found here.

5.2. Conversion With Maps.uniqueIndex()

Secondly, let’s use Maps.uniqueIndex() method to convert a List into a Map:

public Map<Integer, Animal> convertListWithGuava(List<Animal> list) {
    Map<Integer, Animal> map = Maps
      .uniqueIndex(list, Animal::getId);
    return map;
}

Finally, let’s test the conversion:

@Test
public void whenConvertWithGuava_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService
      .convertListWithGuava(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

6. Using Apache Commons Library

We can also make the conversion using the Apache Commons Collections library.

6.1. Maven Configuration

Firstly, let’s include Maven dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.2</version>
</dependency>

The latest version of this dependency is available here.

6.2. MapUtils

Secondly, we’ll make the conversion using MapUtils.populateMap():

public Map<Integer, Animal> convertListWithApacheCommons2(List<Animal> list) {
    Map<Integer, Animal> map = new HashMap<>();
    MapUtils.populateMap(map, list, Animal::getId);
    return map;
}

Lastly, let’s make sure it works as expected:

@Test
public void whenConvertWithApacheCommons2_thenReturnMapWithTheSameElements() {
    Map<Integer, Animal> map = convertListService
      .convertListWithApacheCommons2(list);
    
    assertThat(
      map.values(), 
      containsInAnyOrder(list.toArray()));
}

7. Conflict of Values

Let’s check what happens if the id field isn’t unique.

7.1. List of Animals With Duplicated ids

Firstly, let’s create a List of Animals with non-unique ids:

@Before
public void init() {

    this.duplicatedIdList = new ArrayList<>();

    Animal cat = new Animal(1, "Cat");
    duplicatedIdList.add(cat);
    Animal dog = new Animal(2, "Dog");
    duplicatedIdList.add(dog);
    Animal pig = new Animal(3, "Pig");
    duplicatedIdList.add(pig);
    Animal cow = new Animal(4, "Cow");
    duplicatedIdList.add(cow);
    Animal goat= new Animal(4, "Goat");
    duplicatedIdList.add(goat);
}

As shown above, the cow and the goat have the same id.

7.2. Checking the Behavior

Java Map‘s put() method is implemented so that the latest added value overwrites the previous one with the same key.

For this reason, the traditional conversion and Apache Commons MapUtils.populateMap() behave in the same way:

@Test
public void whenConvertBeforeJava8_thenReturnMapWithRewrittenElement() {

    Map<Integer, Animal> map = convertListService
      .convertListBeforeJava8(duplicatedIdList);

    assertThat(map.values(), hasSize(4));
    assertThat(map.values(), hasItem(duplicatedIdList.get(4)));
}

@Test
public void whenConvertWithApacheCommons_thenReturnMapWithRewrittenElement() {

    Map<Integer, Animal> map = convertListService
      .convertListWithApacheCommons2(duplicatedIdList);

    assertThat(map.values(), hasSize(4));
    assertThat(map.values(), hasItem(duplicatedIdList.get(4)));
}

As can be seen, the goat overwrites the cow with the same id.

In contrast, Collectors.toMap() and Guava's Maps.uniqueIndex() throw IllegalStateException and IllegalArgumentException respectively:

@Test(expected = IllegalStateException.class)
public void givenADupIdList_whenConvertAfterJava8_thenException() {

    convertListService.convertListAfterJava8(duplicatedIdList);
}

@Test(expected = IllegalArgumentException.class)
public void givenADupIdList_whenConvertWithGuava_thenException() {

    convertListService.convertListWithGuava(duplicatedIdList);
}
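
If we'd rather have the Java 8 approach tolerate duplicate keys instead of throwing, we can pass a merge function as the third argument to Collectors.toMap(). Here's a minimal sketch that keeps the latest element, mirroring the put() behavior (the method name is our own):

public Map<Integer, Animal> convertListAfterJava8WithMergeFunction(List<Animal> list) {
    return list.stream()
      .collect(Collectors.toMap(
        Animal::getId,
        animal -> animal,
        (existing, replacement) -> replacement));
}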

8. Conclusion

In this quick article, we’ve covered various ways of converting a List to a Map, giving examples with core Java as well as some popular third-party libraries.

As usual, the complete source code is available over on GitHub.

Query Entities by Dates and Times with Spring Data JPA

1. Introduction

In this quick tutorial, we’ll see how to query entities by dates with Spring Data JPA.

We’ll first refresh our memory about how to map dates and times with JPA.

Then, we’ll create an entity with date and time fields as well as a Spring Data repository to query those entities.

2. Mapping Dates and Times with JPA

For starters, we’ll review a bit of theory about mapping dates with JPA. The thing to know is that we need to decide whether we want to represent:

  • A date only
  • A time only
  • Or both

In addition to the (optional) @Column annotation, we’ll need to add the @Temporal annotation to specify what the field represents.

This annotation takes a parameter which is a value of TemporalType enum:

  • TemporalType.DATE
  • TemporalType.TIME
  • TemporalType.TIMESTAMP

A detailed article about dates and times mapping with JPA can be found here.

3. In Practice

In practice, once our entities are correctly set up, there is not much work to do to query them using Spring Data JPA. We just have to use query methods or the @Query annotation.

Every Spring Data JPA mechanism will work just fine.

Let's see a couple of examples of entities queried by dates and times with Spring Data JPA.

3.1. Set Up an Entity

For starters, let’s say we have an Article entity, with a publication date, a publication time and a creation date and time:

@Entity
public class Article {

    @Id
    @GeneratedValue
    Integer id;
 
    @Temporal(TemporalType.DATE)
    Date publicationDate;
 
    @Temporal(TemporalType.TIME)
    Date publicationTime;
 
    @Temporal(TemporalType.TIMESTAMP)
    Date creationDateTime;
}

We split the publication date and time into two fields for demonstration purposes. That way, all three temporal types are represented.

3.2. Query the Entities

Now that our entity is all set up, let’s create a Spring Data repository to query those articles.

We’ll create three methods, using several Spring Data JPA features:

public interface ArticleRepository 
  extends JpaRepository<Article, Integer> {

    List<Article> findAllByPublicationDate(Date publicationDate);

    List<Article> findAllByPublicationTimeBetween(
      Date publicationTimeStart,
      Date publicationTimeEnd);

    @Query("select a from Article a where a.creationDateTime <= :creationDateTime")
    List<Article> findAllWithCreationDateTimeBefore(
      @Param("creationDateTime") Date creationDateTime);

}

So we defined three methods:

  • findAllByPublicationDate which retrieves articles published on a given date
  • findAllByPublicationTimeBetween which retrieves articles published between two given hours
  • and findAllWithCreationDateTimeBefore which retrieves articles created before a given date and time

The two first methods rely on Spring Data query methods mechanism and the last on @Query annotation.

In the end, that doesn’t change the way dates will be treated. The first method will only consider the date part of the parameter.

The second will only consider the time of the parameters. And the last will use both date and time.

3.3. Test the Queries

The last thing we have to do is set up some tests to check that these queries work as expected.

We’ll first import a few data into our database and then we’ll create the test class which will check each method of the repository:

@RunWith(SpringRunner.class)
@DataJpaTest
public class ArticleRepositoryIntegrationTest {

    @Autowired
    private ArticleRepository repository;

    @Test
    public void whenFindByPublicationDate_thenArticles1And2Returned() throws ParseException {
        List<Article> result = repository.findAllByPublicationDate(new SimpleDateFormat("yyyy-MM-dd").parse("2018-01-01"));

        assertEquals(2, result.size());
        assertTrue(result.stream()
          .map(Article::getId)
          .allMatch(id -> Arrays.asList(1, 2).contains(id)));
    }

    @Test
    public void whenFindByPublicationTimeBetween_thenArticles2And3Returned() throws ParseException {
        List<Article> result = repository.findAllByPublicationTimeBetween(
          new SimpleDateFormat("HH:mm").parse("15:15"),
          new SimpleDateFormat("HH:mm").parse("16:30"));

        assertEquals(2, result.size());
        assertTrue(result.stream()
          .map(Article::getId)
          .allMatch(id -> Arrays.asList(2, 3).contains(id)));
    }

    @Test
    public void givenArticlesWhenFindWithCreationDateThenArticles2And3Returned() throws ParseException {
        List<Article> result = repository.findAllWithCreationDateTimeBefore(
          new SimpleDateFormat("yyyy-MM-dd HH:mm").parse("2017-12-15 10:00"));

        assertEquals(2, result.size());
        assertTrue(result.stream()
          .map(Article::getId)
          .allMatch(id -> Arrays.asList(2, 3).contains(id)));
    }
}

Each test verifies that only the articles matching the conditions are retrieved.

4. Conclusion

In this short article, we’ve seen how to query entities using their dates and times fields with Spring Data JPA.

We learned a bit of theory before using Spring Data mechanisms to query the entities. We saw those mechanisms work the same way with dates and times as they do with other types of data.

The source code for this article is available over on GitHub.

Guide to Java Instrumentation

1. Introduction

In this tutorial, we’re going to talk about Java Instrumentation API. It provides the ability to add byte-code to existing compiled Java classes.

We’ll also talk about java agents and how we use them to instrument our code.

2. Setup

Throughout the article, we’ll build an app using instrumentation.

Our application will consist of two modules:

  1. An ATM app that allows us to withdraw money
  2. And a Java agent that will allow us to measure the performance of our ATM by measuring the time each withdrawal takes

The Java agent will modify the ATM byte-code allowing us to measure withdrawal time without having to modify the ATM app.

Our project will have the following structure:

<groupId>com.baeldung.instrumentation</groupId>
<artifactId>base</artifactId>
<version>1.0.0</version>
<packaging>pom</packaging>
<modules>
    <module>agent</module>
    <module>application</module>
</modules>

Before getting too much into the details of instrumentation, let’s see what a java agent is.

3. What is a Java Agent

In general, a java agent is just a specially crafted jar file. It utilizes the Instrumentation API that the JVM provides to alter existing byte-code that is loaded in a JVM.

For an agent to work, we need to define two methods:

  • premain – will statically load the agent using -javaagent parameter at JVM startup
  • agentmain – will dynamically load the agent into the JVM using the Java Attach API

An interesting concept to keep in mind is that a JVM implementation, like Oracle, OpenJDK, and others, can provide a mechanism to start agents dynamically, but it is not a requirement.

First, let’s see how we’d use an existing Java agent.

After that, we’ll look at how we can create one from scratch to add the functionality we need in our byte-code.

4. Loading a Java Agent

To be able to use the Java agent, we must first load it.

We have two types of load:

  • static – makes use of the premain to load the agent using -javaagent option
  • dynamic – makes use of the agentmain to load the agent into the JVM using the Java Attach API

Next, we’ll take a look at each type of load and explain how it works.

4.1. Static Load

Loading a Java agent at application startup is called static load. Static load modifies the byte-code at startup time before any code is executed.

Keep in mind that static load uses the premain method, which will run before any application code runs. To get it running, we can execute:

java -javaagent:agent.jar -jar application.jar

It's important to note that we should always put the -javaagent parameter before the -jar parameter.

Below are the logs for our command:

22:24:39.296 [main] INFO - [Agent] In premain method
22:24:39.300 [main] INFO - [Agent] Transforming class MyAtm
22:24:39.407 [main] INFO - [Application] Starting ATM application
22:24:41.409 [main] INFO - [Application] Successful Withdrawal of [7] units!
22:24:41.410 [main] INFO - [Application] Withdrawal operation completed in:2 seconds!
22:24:53.411 [main] INFO - [Application] Successful Withdrawal of [8] units!
22:24:53.411 [main] INFO - [Application] Withdrawal operation completed in:2 seconds!

We can see when the premain method ran and when MyAtm class was transformed. We also see the two ATM withdrawal transactions logs which contain the time it took each operation to complete.

Remember that in our original application we didn't have this completion time for a transaction; it was added by our Java agent.

4.2. Dynamic Load

The procedure of loading a Java agent into an already running JVM is called dynamic load. The agent is attached using the Java Attach API.

A more complex scenario is when we already have our ATM application running in production and we want to add the total time of transactions dynamically without downtime for our application.

Let’s write a small piece of code to do just that and we’ll call this class AgentLoader. For simplicity, we’ll put this class in the application jar file. So our application jar file can both start our application, and attach our agent to the ATM application:

VirtualMachine jvm = VirtualMachine.attach(jvmPid);
jvm.loadAgent(agentFile.getAbsolutePath());
jvm.detach();
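
If we don't know the PID of the target JVM up front, the Attach API can also list the running JVMs for us; here's a small sketch (filtering by the display name is our own assumption):

String jvmPid = VirtualMachine.list().stream()
  .filter(descriptor -> descriptor.displayName().contains("application.jar"))
  .map(VirtualMachineDescriptor::id)
  .findFirst()
  .orElseThrow(() -> new IllegalStateException("Target JVM not found"));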

Now that we have our AgentLoader, we start our application making sure that in the ten-second pause between transactions, we’ll attach our Java agent dynamically using the AgentLoader.

Let’s also add the glue that will allow us to either start the application or load the agent.

We’ll call this class Launcher and it will be our main jar file class:

public class Launcher {
    public static void main(String[] args) throws Exception {
        if(args[0].equals("StartMyAtmApplication")) {
            new MyAtmApplication().run(args);
        } else if(args[0].equals("LoadAgent")) {
            new AgentLoader().run(args);
        }
    }
}

Starting the Application

java -jar application.jar StartMyAtmApplication
22:44:21.154 [main] INFO - [Application] Starting ATM application
22:44:23.157 [main] INFO - [Application] Successful Withdrawal of [7] units!

Attaching Java Agent

After the first operation, we attach the java agent to our JVM:

java -jar application.jar LoadAgent
22:44:27.022 [main] INFO - Attaching to target JVM with PID: 6575
22:44:27.306 [main] INFO - Attached to target JVM and loaded Java agent successfully

Check Application Logs

Now that we attached our agent to the JVM we’ll see that we have the total completion time for the second ATM withdrawal operation.

This means that we added our functionality on the fly, while our application was running:

22:44:27.229 [Attach Listener] INFO - [Agent] In agentmain method
22:44:27.230 [Attach Listener] INFO - [Agent] Transforming class MyAtm
22:44:33.157 [main] INFO - [Application] Successful Withdrawal of [8] units!
22:44:33.157 [main] INFO - [Application] Withdrawal operation completed in:2 seconds!

5. Creating a Java Agent

After learning how to use an agent, let’s see how we can create one. We’ll look at how to use Javassist to change byte-code and we’ll combine this with some instrumentation API methods.

Since a java agent makes use of the Java Instrumentation API, before getting too deep into creating our agent, let’s see some of the most used methods in this API and a short description of what they do:

  • addTransformer – adds a transformer to the instrumentation engine
  • getAllLoadedClasses – returns an array of all classes currently loaded by the JVM
  • retransformClasses – facilitates the instrumentation of already loaded classes by adding byte-code
  • removeTransformer – unregisters the supplied transformer
  • redefineClasses – redefine the supplied set of classes using the supplied class files, meaning that the class will be fully replaced, not modified as with retransformClasses

5.1. Create the Premain and Agentmain Methods

We know that every Java agent needs at least one of the premain or agentmain methods. The latter is used for dynamic load, while the former is used to statically load a java agent into a JVM.

Let’s define both of them in our agent so that we’re able to load this agent both statically and dynamically:

public static void premain(
  String agentArgs, Instrumentation inst) {
 
    LOGGER.info("[Agent] In premain method");
    String className = "com.baeldung.instrumentation.application.MyAtm";
    transformClass(className,inst);
}
public static void agentmain(
  String agentArgs, Instrumentation inst) {
 
    LOGGER.info("[Agent] In agentmain method");
    String className = "com.baeldung.instrumentation.application.MyAtm";
    transformClass(className,inst);
}

In each method, we declare the class that we want to change and then dig down to transform that class using the transformClass method.

Below is the code for the transformClass method that we defined to help us transform MyAtm class.

In this method, we find the class we want to transform and then transform it using the transform method, which also adds the transformer to the instrumentation engine:

private static void transformClass(
  String className, Instrumentation instrumentation) {
    Class<?> targetCls = null;
    ClassLoader targetClassLoader = null;
    // see if we can get the class using forName
    try {
        targetCls = Class.forName(className);
        targetClassLoader = targetCls.getClassLoader();
        transform(targetCls, targetClassLoader, instrumentation);
        return;
    } catch (Exception ex) {
        LOGGER.error("Class [{}] not found with Class.forName", className);
    }
    // otherwise iterate all loaded classes and find what we want
    for(Class<?> clazz: instrumentation.getAllLoadedClasses()) {
        if(clazz.getName().equals(className)) {
            targetCls = clazz;
            targetClassLoader = targetCls.getClassLoader();
            transform(targetCls, targetClassLoader, instrumentation);
            return;
        }
    }
    throw new RuntimeException(
      "Failed to find class [" + className + "]");
}

private static void transform(
  Class<?> clazz, 
  ClassLoader classLoader,
  Instrumentation instrumentation) {
    AtmTransformer dt = new AtmTransformer(
      clazz.getName(), classLoader);
    instrumentation.addTransformer(dt, true);
    try {
        instrumentation.retransformClasses(clazz);
    } catch (Exception ex) {
        throw new RuntimeException(
          "Transform failed for: [" + clazz.getName() + "]", ex);
    }
}

With this out of the way, let’s define the transformer for MyAtm class.

5.2. Defining our Transformer

A class transformer must implement ClassFileTransformer and implement the transform method.

We'll use Javassist to add byte-code to the MyAtm class and add a log with the total ATM withdrawal transaction time:

public class AtmTransformer implements ClassFileTransformer {
    @Override
    public byte[] transform(
      ClassLoader loader, 
      String className, 
      Class<?> classBeingRedefined, 
      ProtectionDomain protectionDomain, 
      byte[] classfileBuffer) {
        byte[] byteCode = classfileBuffer;
        String finalTargetClassName = this.targetClassName
          .replaceAll("\\.", "/"); 
        if (!className.equals(finalTargetClassName)) {
            return byteCode;
        }

        if (className.equals(finalTargetClassName) 
              && loader.equals(targetClassLoader)) {
 
            LOGGER.info("[Agent] Transforming class MyAtm");
            try {
                ClassPool cp = ClassPool.getDefault();
                CtClass cc = cp.get(targetClassName);
                CtMethod m = cc.getDeclaredMethod(
                  WITHDRAW_MONEY_METHOD);
                m.addLocalVariable(
                  "startTime", CtClass.longType);
                m.insertBefore(
                  "startTime = System.currentTimeMillis();");

                StringBuilder endBlock = new StringBuilder();

                m.addLocalVariable("endTime", CtClass.longType);
                m.addLocalVariable("opTime", CtClass.longType);
                endBlock.append(
                  "endTime = System.currentTimeMillis();");
                endBlock.append(
                  "opTime = (endTime-startTime)/1000;");

                endBlock.append(
                  "LOGGER.info(\"[Application] Withdrawal operation completed in:" +
                                "\" + opTime + \" seconds!\");");

                m.insertAfter(endBlock.toString());

                byteCode = cc.toBytecode();
                cc.detach();
            } catch (NotFoundException | CannotCompileException | IOException e) {
                LOGGER.error("Exception", e);
            }
        }
        return byteCode;
    }
}

5.3. Creating an Agent Manifest File

Finally, in order to get a working Java agent, we’ll need a manifest file with a couple of attributes.

We can find the full list of manifest attributes in the official Instrumentation package documentation.

In the final Java agent jar file, we will add the following lines to the manifest file:

Agent-Class: com.baeldung.instrumentation.agent.MyInstrumentationAgent
Can-Redefine-Classes: true
Can-Retransform-Classes: true
Premain-Class: com.baeldung.instrumentation.agent.MyInstrumentationAgent
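
Instead of maintaining the manifest by hand, we can let Maven generate these entries at build time. Here's a sketch of a maven-jar-plugin configuration that would produce the same attributes (the exact plugin setup may differ from the project's actual build):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <archive>
            <manifestEntries>
                <Premain-Class>com.baeldung.instrumentation.agent.MyInstrumentationAgent</Premain-Class>
                <Agent-Class>com.baeldung.instrumentation.agent.MyInstrumentationAgent</Agent-Class>
                <Can-Redefine-Classes>true</Can-Redefine-Classes>
                <Can-Retransform-Classes>true</Can-Retransform-Classes>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>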

Our Java instrumentation agent is now complete. To run it, please refer to Loading a Java Agent section of this article.

6. Conclusion

In this article, we talked about the Java Instrumentation API. We looked at how to load a Java agent into a JVM both statically and dynamically.

We also looked at how we would go about creating our own Java agent from scratch.

As always, the full implementation of the example can be found over on Github.

A Guide to SqlResultSetMapping

1. Introduction

In this guide, we’ll take a look at SqlResultSetMapping, out of the Java Persistence API (JPA).

The core functionality here involves mapping result sets from database SQL statements into Java objects.

2. Setup

Before we look at its usage, let’s do some setup.

2.1. Maven Dependency

Our required Maven dependencies are Hibernate and the H2 database. Hibernate gives us an implementation of the JPA specification, and we use H2 as an in-memory database.
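
For reference, the dependencies look something like this (the versions are only illustrative; the latest ones are available on Maven Central):

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.2.17.Final</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
</dependency>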

2.2. Database

Next, we’ll create two tables as seen here:

CREATE TABLE EMPLOYEE
(id BIGINT,
 name VARCHAR(10));

The EMPLOYEE table stores our Employee entity records, while SCHEDULE_DAYS contains records linked to the EMPLOYEE table by the employeeId column:

CREATE TABLE SCHEDULE_DAYS
(id IDENTITY,
 employeeId BIGINT,
 dayOfWeek  VARCHAR(10));

A script for data creation can be found in the code for this guide.

2.3. Entity Objects

Our Entity objects should look similar:

@Entity
public class Employee {
    @Id
    private Long id;
    private String name;
}

Entity objects might be named differently than database tables. We can annotate the class with @Table to explicitly map them:

@Entity
@Table(name = "SCHEDULE_DAYS")
public class ScheduledDay {

    @Id
    @GeneratedValue
    private Long id;
    private Long employeeId;
    private String dayOfWeek;
}

3. Scalar Mapping

Now that we have data we can start mapping query results.

3.1. ColumnResult

While SqlResultSetMapping and Query annotations work on Repository classes as well, we use the annotations on an Entity class in this example.

Every SqlResultSetMapping annotation requires only one property, name. However, without one of the member types, nothing will be mapped. The member types are ColumnResult, ConstructorResult, and EntityResult.

In this case, ColumnResult maps any column to a scalar result type:

@SqlResultSetMapping(
  name="FridayEmployeeResult",
  columns={@ColumnResult(name="employeeId")})

The ColumnResult property name identifies the column in our query:

@NamedNativeQuery(
  name = "FridayEmployees",
  query = "SELECT employeeId FROM schedule_days WHERE dayOfWeek = 'FRIDAY'",
  resultSetMapping = "FridayEmployeeResult")

Note that the value of resultSetMapping in our NamedNativeQuery annotation is important because it matches the name property from our ResultSetMapping declaration.

As a result, the NamedNativeQuery result set is mapped as expected. Likewise, StoredProcedure API requires this association.

3.2. ColumnResult Test

We’ll need some Hibernate specific objects to run our code:

@BeforeAll
public static void setup() {
    emFactory = Persistence.createEntityManagerFactory("java-jpa-scheduled-day");
    em = emFactory.createEntityManager();
}

Finally, we call the named query to run our test:

@Test
public void whenNamedQuery_thenColumnResult() {
    List<Long> employeeIds = em.createNamedQuery("FridayEmployees").getResultList();
    assertEquals(2, employeeIds.size());
}

4. Constructor Mapping

Let’s take a look at when we need to map a result set to an entire object.

4.1. ConstructorResult

Similarly to our ColumnResult example, we will add the SqlResultMapping annotation on our Entity class, ScheduledDay. However, in order to map using a constructor, we need to create one:

public ScheduledDay (
  Long id, Long employeeId, String dayOfWeek) {
    this.id = id;
    this.employeeId = employeeId;
    this.dayOfWeek = dayOfWeek;
}

Also, the mapping specifies the target class and columns (both required):

@SqlResultSetMapping(
    name="ScheduleResult",
    classes={
      @ConstructorResult(
        targetClass=com.baeldung.sqlresultsetmapping.ScheduledDay.class,
        columns={
          @ColumnResult(name="id", type=Long.class),
          @ColumnResult(name="employeeId", type=Long.class),
          @ColumnResult(name="dayOfWeek")})})

The order of the ColumnResults is very important. If columns are out of order the constructor will fail to be identified. In our example, the ordering matches the table columns, so it would actually not be required.

@NamedNativeQuery(name = "Schedules",
  query = "SELECT * FROM schedule_days WHERE employeeId = 8",
  resultSetMapping = "ScheduleResult")

Another unique aspect of ConstructorResult is that the resulting object is instantiated as either “new” or “detached”. The mapped Entity will be in the detached state when a matching primary key exists in the EntityManager; otherwise, it will be new.

Sometimes we may encounter runtime errors because of mismatching SQL datatypes to Java datatypes. Therefore, we can explicitly declare it with type.

4.2. ConstructorResult Test

Let’s test the ConstructorResult in a unit test:

@Test
public void whenNamedQuery_thenConstructorResult() {
  List<ScheduledDay> scheduleDays
    = Collections.checkedList(
      em.createNamedQuery("Schedules", ScheduledDay.class).getResultList(), ScheduledDay.class);
    assertEquals(3, scheduleDays.size());
    assertTrue(scheduleDays.stream().allMatch(c -> c.getEmployeeId().longValue() == 8));
}

5. Entity Mapping

Finally, for a simple entity mapping with less code, let’s have a look at EntityResult.

5.1. Single Entity

EntityResult requires us to specify the entity class, Employee. We use the optional fields property for more control. Combined with FieldResult, we can map aliases and fields that do not match:

@SqlResultSetMapping(
  name="EmployeeResult",
  entities={
    @EntityResult(
      entityClass = com.baeldung.sqlresultsetmapping.Employee.class,
        fields={
          @FieldResult(name="id",column="employeeNumber"),
          @FieldResult(name="name", column="name")})})

Now our query should include the aliased column:

@NamedNativeQuery(
  name="Employees",
  query="SELECT id as employeeNumber, name FROM EMPLOYEE",
  resultSetMapping = "EmployeeResult")

Similarly to ConstructorResult, EntityResult requires a constructor. However, a default one works here.

5.2. Multiple Entities

Mapping multiple entities is pretty straightforward once we have mapped a single Entity:

@SqlResultSetMapping(
  name = "EmployeeScheduleResults",
  entities = {
    @EntityResult(entityClass = com.baeldung.sqlresultsetmapping.Employee.class),
    @EntityResult(entityClass = com.baeldung.sqlresultsetmapping.ScheduledDay.class)})

5.3. EntityResult Tests

Let’s have a look at EntityResult in action:

@Test
public void whenNamedQuery_thenSingleEntityResult() {
    List<Employee> employees = Collections.checkedList(
      em.createNamedQuery("Employees").getResultList(), Employee.class);
    assertEquals(3, employees.size());
    assertTrue(employees.stream().allMatch(c -> c.getClass() == Employee.class));
}

Since the multiple-entity mapping joins two entities, putting the query annotation on only one of the classes would be confusing.

For that reason, we define the query in the test:

@Test
public void whenNamedQuery_thenMultipleEntityResult() {
    Query query = em.createNativeQuery(
      "SELECT e.id, e.name, d.id, d.employeeId, d.dayOfWeek "
        + " FROM employee e, schedule_days d "
        + " WHERE e.id = d.employeeId", "EmployeeScheduleResults");
    
    List<Object[]> results = query.getResultList();
    assertEquals(4, results.size());
    assertTrue(results.get(0).length == 2);

    Employee emp = (Employee) results.get(1)[0];
    ScheduledDay day = (ScheduledDay) results.get(1)[1];

    assertTrue(day.getEmployeeId() == emp.getId());
}

6. Conclusion

In this guide, we looked at different options for using the SqlResultSetMapping annotation. SqlResultSetMapping is a key part of the Java Persistence API.

Code snippets can be found over on GitHub.


Mockito.mock() vs @Mock vs @MockBean


1. Overview

In this quick tutorial, we’ll look at three different ways of creating mock objects and how they differ from each other – with Mockito and with the Spring mocking support.

2. Mockito.mock()

The Mockito.mock() method allows us to create a mock object of a class or an interface.

Then, we can use the mock to stub return values for its methods and verify if they were called.

Let’s look at an example:

@Test
public void givenCountMethodMocked_WhenCountInvoked_ThenMockedValueReturned() {
    UserRepository localMockRepository = Mockito.mock(UserRepository.class);
    Mockito.when(localMockRepository.count()).thenReturn(111L);

    long userCount = localMockRepository.count();

    Assert.assertEquals(111L, userCount);
    Mockito.verify(localMockRepository).count();
}

This method doesn’t need anything else to be done before it can be used. We can use it to create mock class fields as well as local mocks in a method.

3. Mockito’s @Mock Annotation

This annotation is a shorthand for the Mockito.mock() method. Note that we should only use it in a test class. Unlike with the mock() method, we also need to enable Mockito annotations in order to use it.

We can do this either by using the MockitoJUnitRunner to run the test or calling the MockitoAnnotations.initMocks() method explicitly.

Let’s look at an example using MockitoJUnitRunner:

@RunWith(MockitoJUnitRunner.class)
public class MockAnnotationUnitTest {
    
    @Mock
    UserRepository mockRepository;
    
    @Test
    public void givenCountMethodMocked_WhenCountInvoked_ThenMockValueReturned() {
        Mockito.when(mockRepository.count()).thenReturn(123L);

        long userCount = mockRepository.count();

        Assert.assertEquals(123L, userCount);
        Mockito.verify(mockRepository).count();
    }
}
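Alternatively, if we don't want to use the runner, here's a minimal sketch of enabling the annotations explicitly with MockitoAnnotations.initMocks():

public class MockAnnotationInitUnitTest {

    @Mock
    UserRepository mockRepository;

    @Before
    public void init() {
        // initializes every @Mock field in this test instance
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void givenCountMethodMocked_WhenCountInvoked_ThenMockValueReturned() {
        Mockito.when(mockRepository.count()).thenReturn(123L);

        Assert.assertEquals(123L, mockRepository.count());
    }
}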

Apart from making the code more readable, @Mock makes it easier to find the problem mock in case of a failure, as the name of the field appears in the failure message:

Wanted but not invoked:
mockRepository.count();
-> at org.baeldung.MockAnnotationTest.testMockAnnotation(MockAnnotationTest.java:22)
Actually, there were zero interactions with this mock.

  at org.baeldung.MockAnnotationTest.testMockAnnotation(MockAnnotationTest.java:22)

Also, when used in conjunction with @InjectMocks, it can reduce the amount of setup code significantly.
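As a brief illustration, assuming a hypothetical UserService that delegates to the repository (the service and its countUsers() method are not part of the sample code above):

@RunWith(MockitoJUnitRunner.class)
public class UserServiceUnitTest {

    @Mock
    UserRepository mockRepository;

    // Mockito builds the service and injects mockRepository into it for us
    @InjectMocks
    UserService userService;

    @Test
    public void givenRepositoryMocked_WhenServiceCounts_ThenStubbedValueReturned() {
        Mockito.when(mockRepository.count()).thenReturn(5L);

        Assert.assertEquals(5L, userService.countUsers());
    }
}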

4. Spring Boot’s @MockBean Annotation

We can use the @MockBean to add mock objects to the Spring application context. The mock will replace any existing bean of the same type in the application context.

If no bean of the same type is defined, a new one will be added. This annotation is useful in integration tests where a particular bean – for example, an external service – needs to be mocked.

To use this annotation, we have to use SpringRunner to run the test:

@RunWith(SpringRunner.class)
public class MockBeanAnnotationIntegrationTest {
    
    @MockBean
    UserRepository mockRepository;
    
    @Autowired
    ApplicationContext context;
    
    @Test
    public void givenCountMethodMocked_WhenCountInvoked_ThenMockValueReturned() {
        Mockito.when(mockRepository.count()).thenReturn(123L);

        UserRepository userRepoFromContext = context.getBean(UserRepository.class);
        long userCount = userRepoFromContext.count();

        Assert.assertEquals(123L, userCount);
        Mockito.verify(mockRepository).count();
    }
}

When we use the annotation on a field, the mock will be injected into the field, in addition to being registered in the application context.

This is evident in the code above. Here, we have used the injected UserRepository mock to stub the count method. We have then used the bean from the application context to verify that it is indeed the mocked bean.

5. Conclusion

In this article, we saw how the three methods for creating mock objects differ and how each can be used.

The source code that accompanies this article is available over on GitHub.

Overriding System Time for Testing in Java


1. Overview

In this quick tutorial, we’ll focus on different ways to override the system time for testing.

Sometimes there’s a logic around the current date in our code. Maybe some function calls such as new Date() or Calendar.getInstance(), which eventually are going to call System.CurrentTimeMillis.

For an introduction to the use of Java Clock, please refer to this article here. Or, to the use of AspectJ, here.

2. Using Clock in java.time

The java.time package in Java 8 includes an abstract class java.time.Clock with the purpose of allowing alternate clocks to be plugged in as and when required. With that, we can plug our own implementation or find one that is already made to satisfy our needs.

To accomplish our goal, the above class includes static methods that yield special implementations. We're going to use two of them, both of which return an immutable, thread-safe, and serializable implementation.

The first one is fixed. From it, we can obtain a Clock that always returns the same Instant, ensuring that the tests aren't dependent on the current clock.

To use it, we need an Instant and a ZoneOffset:

Instant.now(Clock.fixed( 
  Instant.parse("2018-08-22T10:00:00Z"),
  ZoneOffset.UTC))
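For this fixed clock to matter, the code under test should accept a Clock rather than calling the no-argument now() methods; here's a minimal sketch, assuming a hypothetical OrderService:

public class OrderService {

    private final Clock clock;

    public OrderService(Clock clock) {
        this.clock = clock;
    }

    public Instant orderTimestamp() {
        // tests pass Clock.fixed(...), production passes Clock.systemUTC()
        return Instant.now(clock);
    }
}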

The second static method is offset. Here, a clock wraps another clock, making the returned object capable of obtaining instants that are later or earlier by the specified duration.

In other words, it's possible to simulate running in the future, in the past, or at any arbitrary point in time:

Clock constantClock = Clock.fixed(Instant.ofEpochMilli(0), ZoneId.systemDefault());

// go to the future:
Clock clock = Clock.offset(constantClock, Duration.ofSeconds(10));
        
// rewind back with a negative value:
clock = Clock.offset(constantClock, Duration.ofSeconds(-5));
 
// with a zero duration, we get back the same clock:
clock = Clock.offset(constantClock, Duration.ZERO);

With the Duration class, it’s possible to manipulate from nanoseconds to days. Also, we can negate a duration, which means to get a copy of this duration with the length negated.
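For instance, here's a small sketch (reusing the constantClock from above) that shows how negating a Duration lets us step the same base clock backward as well as forward:

Duration twoHours = Duration.ofHours(2);

// same length, opposite sign
Duration backTwoHours = twoHours.negated();

// jump forward and backward from the same base clock
Clock future = Clock.offset(constantClock, twoHours);
Clock past = Clock.offset(constantClock, backTwoHours);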


3. Using Aspect-Oriented Programming

Another way to override the system time is by AOP. With this approach, we’re able to weave the System class to return a predefined value which we can set within our test cases.

Also, it’s possible to weave the application classes to redirect the call to System.currentTimeMillis() or to new Date() to another utility class of our own.

One way to implement this is through the use of AspectJ:

public aspect ChangeCallsToCurrentTimeInMillisMethod {
    long around(): 
      call(public static native long java.lang.System.currentTimeMillis()) 
        && within(user.code.base.pckg.*) {
          return 0;
      }
}

In the above example, we’re catching every call to System.currentTimeMillis() inside a specified package, which in this case is user.code.base.pckg.*, and returning zero every time that this event happens.

It’s in this place where we can declare our own implementation to obtain the desired time in milliseconds.

One advantage of using AspectJ is that it operates directly at the bytecode level, so it doesn't need the original source code to work.

For that reason, we wouldn’t need to recompile it.

4. Conclusion

In this quick article, we've explored different ways to override the system time for testing: first with the native java.time package and its Clock class, and then by applying an aspect that weaves the System class.

As always, code samples can be found over on GitHub.

Java Weekly, Issue 239


Here we go…

1. Spring and Java

>> Refining functional Spring [blog.frankel.ch]

A quick writeup touching on a few nuances of writing handlers and routes in this exciting new functional approach to Spring Boot.

>> Improve Application Performance with These Advanced GC Techniques [blog.takipi.com]

A solid primer on how garbage collection works in the JVM and a few tricks you can use to improve your application’s performance. Good stuff.

>> How to query parent rows when all children must match the filtering criteria with SQL and Hibernate [vladmihalcea.com]

A nice tutorial that gradually builds an optimal solution to this problem, first in a native SQL query, and then in a JPQL criteria-based query. Very cool.

>> Only modified files in Jenkins [blog.code-cop.org]

And, an interesting approach that uses a Groovy script to identify all files that have been changed since the last green build.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Update your Database Schema Without Downtime [thoughts-on-java.org]

If you absolutely can’t afford downtime, here are some great strategies to use when rolling out non-backwards-compatible schema updates in conjunction with an application update.

>> The future of WebAssembly – A look at upcoming features and proposals [blog.scottlogic.com]

Looks like some essential enhancements are coming soon to this browser-based VM, including reference types, exception handling, and garbage collection.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> How to Become an Engineer [dilbert.com]

>> Dilbert Joins MENSA [dilbert.com]

>> Upgrades Can Be Risky [dilbert.com]

4. Pick of the Week

>> Why Json Isn’t A Good Configuration Language [lucidchart.com]

Comparing Embedded Servlet Containers in Spring Boot


1. Introduction

The rising popularity of cloud-native applications and microservices generates an increased demand for embedded servlet containers. Spring Boot allows developers to easily build applications or services using the three most mature containers available: Tomcat, Undertow, and Jetty.

In this tutorial, we’ll demonstrate a way to quickly compare container implementations using metrics obtained at startup and under some load.

2. Dependencies

Our setup for each available container implementation will always require that we declare a dependency on spring-boot-starter-web in our pom.xml.

In general, we want to specify our parent as spring-boot-starter-parent and then include the starters we need:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.3.RELEASE</version>
    <relativePath/>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

2.1. Tomcat

No further dependencies are required for Tomcat because it's included by default with spring-boot-starter-web.

2.2. Jetty

In order to use Jetty, we first need to exclude spring-boot-starter-tomcat from spring-boot-starter-web.

Then, we simply declare a dependency on spring-boot-starter-jetty:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>

2.3. Undertow

Setting up for Undertow is identical to Jetty, except that we use spring-boot-starter-undertow as our dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
</dependency>

2.4. Actuator

We’ll use Spring Boot’s Actuator as a convenient way to both stress the system and query for metrics.

Check out this article for details on Actuator. We simply add a dependency in our pom to make it available:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

2.5. Apache Bench

Apache Bench is an open source load testing utility that comes bundled with the Apache web server.

Windows users can download Apache from one of the 3rd party vendors linked here. If Apache is already installed on your Windows machine, you should be able to find ab.exe in your apache/bin directory.

If you are on a Linux machine, ab can be installed using apt-get with:

$ apt-get install apache2-utils

3. Startup Metrics

3.1. Collection

In order to collect our startup metrics, we’ll register an event handler to fire on Spring Boot’s ApplicationReadyEvent.

We’ll programmatically extract the metrics we’re interested in by directly working with the MeterRegistry used by the Actuator component:

@Component
public class StartupEventHandler {

    // logger, constructor
    
    private String[] METRICS = {
      "jvm.memory.used", 
      "jvm.classes.loaded", 
      "jvm.threads.live"};
    private String METRIC_MSG_FORMAT = "Startup Metric >> {}={}";
    
    private MeterRegistry meterRegistry;

    @EventListener
    public void getAndLogStartupMetrics(
      ApplicationReadyEvent event) {
        Arrays.asList(METRICS)
          .forEach(this::processMetric);
    }

    private void processMetric(String metric) {
        Meter meter = meterRegistry.find(metric).meter();
        Map<Statistic, Double> stats = getSamples(meter);
 
        logger.info(METRIC_MSG_FORMAT, metric, stats.get(Statistic.VALUE).longValue());
    }

    // other methods
}
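The getSamples() helper is omitted above; here's a minimal sketch of one possible implementation that simply collects each Measurement by its Statistic (an assumption for illustration, not necessarily the sample project's exact code):

private Map<Statistic, Double> getSamples(Meter meter) {
    Map<Statistic, Double> samples = new LinkedHashMap<>();
    // Meter.measure() exposes the current values keyed by their Statistic
    meter.measure().forEach(
      measurement -> samples.put(measurement.getStatistic(), measurement.getValue()));
    return samples;
}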

We avoid the need to manually query Actuator REST endpoints or to run a standalone JMX console by logging interesting metrics on startup within our event handler.

3.2. Selection

There are a large number of metrics that Actuator provides out of the box. We selected 3 metrics that help to get a high-level overview of key runtime characteristics once the server is up:

  • jvm.memory.used – the total memory used by the JVM since startup
  • jvm.classes.loaded – the total number of classes loaded
  • jvm.threads.live – the total number of active threads. In our test, this value can be viewed as the thread count “at rest”
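Once the server is running, each of these can also be checked manually against the Actuator (assuming the metrics endpoint is exposed, as it is for the benchmark below), for example:

curl http://localhost:8080/actuator/metrics/jvm.memory.used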

4. Runtime Metrics

4.1. Collection

In addition to providing startup metrics, we’ll use the /metrics endpoint exposed by the Actuator as the target URL when we run Apache Bench in order to put the application under load.

In order to test a real application under load, we might instead use endpoints provided by our application.

Once the server has started, we’ll get a command prompt and execute ab:

ab -n 10000 -c 10 http://localhost:8080/actuator/metrics

In the command above, we’ve specified a total of 10,000 requests using 10 concurrent threads.

4.2. Selection

Apache Bench is able to very quickly give us some useful information including connection times and the percentage of requests that are served within a certain time.

For our purposes, we focused on requests-per-second and time-per-request (mean).

5. Results

On startup, we found that the memory footprint of Tomcat, Jetty, and Undertow was comparable, with Tomcat requiring slightly more memory than the other two and Jetty requiring the smallest amount.

For our benchmark, we found that the performance of Tomcat, Jetty, and Undertow was comparable, but that Undertow was the fastest, with Jetty only slightly behind:

Metric                          Tomcat   Jetty    Undertow
jvm.memory.used (MB)            168      155      164
jvm.classes.loaded              9869     9784     9787
jvm.threads.live                25       17       19
Requests per second             1542     1627     1650
Average time per request (ms)   6.483    6.148    6.059

Note that the metrics are, naturally, representative of a bare-bones project; the metrics of your own application will most certainly be different.

6. Benchmark Discussion

Developing appropriate benchmark tests to perform thorough comparisons of server implementations can get complicated. In order to extract the most relevant information, it’s critical to have a clear understanding of what’s important for the use case in question.

It’s important to note that the benchmark measurements collected in this example were taken using a very specific workload consisting of HTTP GET requests to an Actuator endpoint.

It’s expected that different workloads would likely result in different relative measurements across container implementations. If more robust or precise measurements were required, it would be a very good idea to set up a test plan that more closely matched the production use case.

In addition, a more sophisticated benchmarking solution such as JMeter or Gatling would likely yield more valuable insights.

7. Choosing a Container

Selecting the right container implementation should likely be based on many factors that can’t be neatly summarized with a handful of metrics alone. Comfort level, features, available configuration options, and policy are often equally important, if not more so.

8. Conclusion

In this article, we looked at the Tomcat, Jetty, and Undertow embedded servlet container implementations. We examined the runtime characteristics of each container at startup with the default configurations by looking at metrics exposed by the Actuator component.

We executed a contrived workload against the running system and then measured performance using Apache Bench.

Lastly, we discussed the merits of this strategy and mentioned a few things to keep in mind when comparing implementation benchmarks. As always, all source code can be found over on GitHub.

Spring @Primary Annotation


1. Overview

In this quick tutorial, we’ll discuss Spring’s @Primary annotation which was introduced with version 3.0 of the framework.

Simply put, we use @Primary to give higher preference to a bean when there are multiple beans of the same type.

Let’s describe the problem in detail.

2. Why is @Primary Needed?

In some cases, we need to register more than one bean of the same type.

In this example, we have the JohnEmployee() and TonyEmployee() beans, both of type Employee:

@Configuration
public class Config {

    @Bean
    public Employee JohnEmployee() {
        return new Employee("John");
    }

    @Bean
    public Employee TonyEmployee() {
        return new Employee("Tony");
    }
}

Spring throws NoUniqueBeanDefinitionException if we try to run the application.

To access beans of the same type, we usually use the @Qualifier(“beanName”) annotation.

We apply it at the injection point along with @Autowired. In our case, we select the beans at the configuration phase, so @Qualifier can't be applied here. We can learn more about the @Qualifier annotation by following the link.
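For contrast, here's a minimal sketch of how @Qualifier would normally be applied at an injection point (the field and bean names are just illustrative):

@Autowired
@Qualifier("TonyEmployee")
private Employee employee;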

To resolve this issue, Spring offers the @Primary annotation.

3. Use @Primary with @Bean

Let’s have a look at configuration class:

@Configuration
public class Config {

    @Bean
    public Employee JohnEmployee() {
        return new Employee("John");
    }

    @Bean
    @Primary
    public Employee TonyEmployee() {
        return new Employee("Tony");
    }
}

We've marked the TonyEmployee() bean with @Primary. Spring will inject the TonyEmployee() bean in preference to JohnEmployee().

Now, let’s start the application context and get the Employee bean from it:

AnnotationConfigApplicationContext context
  = new AnnotationConfigApplicationContext(Config.class);

Employee employee = context.getBean(Employee.class);
System.out.println(employee);

After we run the application:

Employee{name='Tony'}

From the output, we can see that the TonyEmployee() instance has a preference while autowiring.

4. Use @Primary with @Component

We can use @Primary directly on the beans. Let’s have a look at the following scenario:

public interface Manager {
    String getManagerName();
}

We have a Manager interface and two beans that implement it. First, the DepartmentManager bean:

@Component
public class DepartmentManager implements Manager {
    @Override
    public String getManagerName() {
        return "Department manager";
    }
}

And the GeneralManager bean:

@Component
@Primary
public class GeneralManager implements Manager {
    @Override
    public String getManagerName() {
        return "General manager";
    }
}

They both override the getManagerName() of the Manager interface. Also, note that we mark the GeneralManager bean with @Primary.

This time, @Primary only makes sense when we enable the component scan:

@Configuration
@ComponentScan(basePackages="org.baeldung.primary")
public class Config {
}

Let’s create a service to use dependency injection while finding the right bean:

@Service
public class ManagerService {

    @Autowired
    private Manager manager;

    public Manager getManager() {
        return manager;
    }
}

Here, both beans DepartmentManager and GeneralManager are eligible for autowiring.

As we marked GeneralManager bean with @Primary, it will be selected for dependency injection:

ManagerService service = context.getBean(ManagerService.class);
Manager manager = service.getManager();
System.out.println(manager.getManagerName());

The output is “General manager”.

5. Conclusion

In this article, we learned about Spring's @Primary annotation. With the code examples, we demonstrated the need for @Primary and its use cases.

As usual, the complete code for this article is available over on GitHub.
