
Processing JSON with Kotlin and Klaxon


1. Overview

Klaxon is one of the open source libraries that we can use to parse JSON in Kotlin.

In this tutorial, we’re going to look at its features.

2. Maven Dependency

First, we’ll need to add the library dependency to our Maven project:

<dependency>
    <groupId>com.beust</groupId>
    <artifactId>klaxon</artifactId>
    <version>3.0.4</version>
</dependency>

The latest version can be found at jcenter or in the Spring Plugins Repository.

3. API Features

Klaxon has four APIs to work with JSON documents. We’ll explore these in the following sections.

4. Object Binding API

With this API, we can bind JSON documents to Kotlin objects and vice-versa.
To start, let’s define the following JSON document:

{
    "name": "HDD"
}

Next, we’ll create the Product class for binding:

class Product(val name: String)

Now, we can test serialization:

@Test
fun givenProduct_whenSerialize_thenGetJsonString() {
    val product = Product("HDD")
    val result = Klaxon().toJsonString(product)

    assertThat(result).isEqualTo("""{"name" : "HDD"}""")
}

And we can test deserialization:

@Test
fun givenJsonString_whenDeserialize_thenGetProduct() {
    val result = Klaxon().parse<Product>(
    """
        {
            "name" : "RAM"
        }
    """)

    assertThat(result?.name).isEqualTo("RAM")
}

This API also supports working with data classes, as well as mutable and immutable classes.

Klaxon allows us to customize the mapping process with the @Json annotation. This annotation has two properties:

  • name – for setting a different name for a field
  • ignored – for excluding a field from the mapping process

Let’s create a CustomProduct class to see how these work:

class CustomProduct(
    @Json(name = "productName")
    val name: String,
    @Json(ignored = true)
    val id: Int)

Now, let’s verify it with a test:

@Test
fun givenCustomProduct_whenSerialize_thenGetJsonString() {
    val product = CustomProduct("HDD", 1)
    val result = Klaxon().toJsonString(product)

    assertThat(result).isEqualTo("""{"productName" : "HDD"}""")
}

As we can see, the name property is serialized as productName, and the id property is ignored.

5. Streaming API

With the Streaming API, we can handle huge JSON documents by reading from a stream. This feature allows our code to process JSON values while the document is still being read.

We need to use the JsonReader class from the API to read a JSON stream. This class has two special functions to handle streaming:

  • beginObject() – makes sure that the next token is the beginning of an object
  • beginArray() – makes sure that the next token is the beginning of an array

With these functions, we can be sure the stream is correctly positioned and that it’s closed after consuming the object or array.

Let’s test the streaming API against an array of the following ProductData class:

data class ProductData(val name: String, val capacityInGb: Int)

@Test
fun givenJsonArray_whenStreaming_thenGetProductArray() {
    val jsonArray = """
    [
        { "name" : "HDD", "capacityInGb" : 512 },
        { "name" : "RAM", "capacityInGb" : 16 }
    ]"""
    val expectedArray = arrayListOf(
      ProductData("HDD", 512),
      ProductData("RAM", 16))
    val klaxon = Klaxon()
    val productArray = arrayListOf<ProductData>()
    JsonReader(StringReader(jsonArray)).use { 
        reader -> reader.beginArray {
            while (reader.hasNext()) {
                val product = klaxon.parse<ProductData>(reader)
                productArray.add(product!!)
            }
        }
    }

    assertThat(productArray).hasSize(2).isEqualTo(expectedArray)
}

6. JSON Path Query API

Klaxon supports the element location feature from the JSON Path specification. With this API, we can define path matchers to locate specific entries in our documents.

Note that this API is streaming, too, and we’ll be notified after an element is found and parsed.

We need to use the PathMatcher interface. Klaxon invokes it whenever the JSON path matches the regular expression we provide.

To use this, we need to implement its methods:

  • pathMatches() – returns true if we want to observe this path
  • onMatch() – fired when the path is found; note that the value can only be a basic type (e.g., int, String) and never JsonObject or JsonArray

Let’s make a test to see it in action.

First, let’s define an inventory JSON document as a source of data:

{
    "inventory" : {
        "disks" : [
            {
                "type" : "HDD",
                "sizeInGb" : 1000
            },
            {
                "type" : "SDD",
                "sizeInGb" : 512
            }
        ]
    }
}

Now, we implement the PathMatcher interface as follows:

val pathMatcher = object : PathMatcher {
    override fun pathMatches(path: String)
      = Pattern.matches(".*inventory.*disks.*type.*", path)

    override fun onMatch(path: String, value: Any) {
        when (path) {
            "$.inventory.disks[0].type"
              -> assertThat(value).isEqualTo("HDD")
            "$.inventory.disks[1].type"
              -> assertThat(value).isEqualTo("SDD")
        }
    }
}

Note that we defined the regex to match the disk types of our inventory document.

Now, we are ready to define our test:

@Test
fun givenDiskInventory_whenRegexMatches_thenGetTypes() {
    val jsonString = """..."""
    val pathMatcher = //...
    Klaxon().pathMatcher(pathMatcher)
      .parseJsonObject(StringReader(jsonString))
}

7. Low-Level API

With Klaxon, we can process JSON documents like a Map or a List. To do this, we can use the classes JsonObject and JsonArray from the API.

Let’s make a test to see the JsonObject in action:

@Test
fun givenJsonString_whenParser_thenGetJsonObject() {
    val jsonString = StringBuilder("""
        {
            "name" : "HDD",
            "capacityInGb" : 512,
            "sizeInInch" : 2.5
        }
    """)
    val parser = Parser()
    val json = parser.parse(jsonString) as JsonObject

    assertThat(json)
      .hasSize(3)
      .containsEntry("name", "HDD")
      .containsEntry("capacityInGb", 512)
      .containsEntry("sizeInInch", 2.5)
}

Now, let’s make a test to see the JsonArray functionality:

@Test
fun givenJsonStringArray_whenParser_thenGetJsonArray() {
    val jsonString = StringBuilder("""
    [
        { "name" : "SDD" },
        { "madeIn" : "Taiwan" },
        { "warrantyInYears" : 5 }
    ]""")
    val parser = Parser()
    val json = parser.parse(jsonString) as JsonArray<JsonObject>

    assertSoftly({
        softly ->
            softly.assertThat(json).hasSize(3)
            softly.assertThat(json[0]["name"]).isEqualTo("SDD")
            softly.assertThat(json[1]["madeIn"]).isEqualTo("Taiwan")
            softly.assertThat(json[2]["warrantyInYears"]).isEqualTo(5)
    })
}

As we can see, in both cases we made the conversions without defining specific classes.

8. Conclusion

In this article, we explored the Klaxon library and its APIs to handle JSON documents.

As always, the source code is available over on GitHub.


Guide to the this Java Keyword


1. Introduction

In this tutorial, we’ll take a look at the this Java keyword.

In Java, the this keyword is a reference to the current object whose method is being called.

Let’s explore how and when we can use the keyword.

2. Disambiguating Field Shadowing

The keyword is useful for disambiguating instance variables from local parameters. The most common reason is when we have constructor parameters with the same name as instance fields:

public class KeywordTest {

    private String name;
    private int age;
    
    public KeywordTest(String name, int age) {
        this.name = name;
        this.age = age;
    }
}

As we can see here, we’re using this with the name and age instance fields – to distinguish them from parameters.

Another use is with parameter hiding or shadowing in the local scope. An example can be found in the Variable and Method Hiding article.

3. Referencing Constructors of the Same Class

From a constructor, we can use this() to call a different constructor of the same class. Here, we use this() for constructor chaining to reduce code duplication.

The most common use case is to call a default constructor from the parameterized constructor:

public KeywordTest(String name, int age) {
    this();
    
    // the rest of the code
}

Or, we can call the parameterized constructor from the no-argument constructor and pass some arguments:

public KeywordTest() {
    this("John", 27);
}

Note that this() must be the first statement in the constructor; otherwise, a compilation error will occur.

4. Passing this as a Parameter

Here, we have the printInstance() method, which takes a parameter of the enclosing type:

public KeywordTest() {
    printInstance(this);
}

public void printInstance(KeywordTest thisKeyword) {
    System.out.println(thisKeyword);
}

Inside the constructor, we invoke the printInstance() method and pass this, a reference to the current instance.

5. Returning this

We can also use the this keyword to return the current class instance from a method.

To avoid duplicating the code here, a full practical example of how it’s implemented can be found in the builder design pattern article.
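
As a minimal sketch, a fluent setter on our KeywordTest class can return this to enable method chaining (the withName() and withAge() methods here are hypothetical additions):

public class KeywordTest {

    private String name;
    private int age;

    public KeywordTest withName(String name) {
        this.name = name;
        // returning the current instance lets calls be chained
        return this;
    }

    public KeywordTest withAge(int age) {
        this.age = age;
        return this;
    }
}

This allows chained calls such as new KeywordTest().withName("John").withAge(27).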

6. The this Keyword Within the Inner Class

We also use this to access the outer class instance from within the inner class:

public class KeywordTest {

    private String name;

    class ThisInnerClass {

        boolean isInnerClass = true;

        public ThisInnerClass() {
            KeywordTest thisKeyword = KeywordTest.this;
            String outerString = KeywordTest.this.name;
        }
    }
}

Here, inside the constructor, we can get a reference to the KeywordTest instance with the KeywordTest.this call. We can go even deeper and access instance variables such as the KeywordTest.this.name field.

7. Conclusion

In this article, we explored the this keyword in Java.

As usual, the complete code is available over on GitHub.

Guide to the super Java Keyword


1. Introduction

In this quick tutorial, we’ll take a look at the super Java keyword.

Simply put, we can use the super keyword to access the parent class. 

Let’s explore the applications of this core keyword in the language.

2. The super Keyword with Constructors

We can use super() to call the parent default constructor. It must be the first statement in a constructor.

In our example, we use super(message) with the String argument:

public class SuperSub extends SuperBase {

    public SuperSub(String message) {
        super(message);
    }
}

Let’s create a child class instance and see what’s happening behind the scenes:

SuperSub child = new SuperSub("message from the child class");

The new keyword invokes the constructor of the SuperSub, which itself calls the parent constructor first and passes the String argument to it.
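
For this to compile, the parent class needs a matching constructor; it isn’t shown at this point in the article, but a minimal sketch of SuperBase would be:

public class SuperBase {

    String message;

    public SuperBase(String message) {
        this.message = message;
    }
}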

3. Accessing Parent Class Variables

Let’s create a parent class with the message instance variable:

public class SuperBase {
    String message = "super class";
}

Now, we create a child class with a variable of the same name:

public class SuperSub extends SuperBase {

    String message = "child class";

    public void getParentMessage() {
        System.out.println(super.message);
    }
}

We can access the parent variable from the child class by using the super keyword.

4. The super Keyword with Method Overriding

Before going further, we advise reviewing our method overriding guide.

Let’s add an instance method to our parent class:

public class SuperBase {

    String message = "super class";

    public void printMessage() {
        System.out.println(message);
    }
}

And override the printMessage() method in our child class:

public class SuperSub extends SuperBase {

    String message = "child class";

    public SuperSub() {
        super.printMessage();
        printMessage();
    }

    public void printMessage() {
        System.out.println(message);
    }
}

We can use super to access the overridden method from the child class. The super.printMessage() call in the constructor invokes the parent method from SuperBase.

5. Conclusion

In this article, we explored the super keyword.

As usual, the complete code is available over on GitHub.

Returning a JSON Response from a Servlet


1. Introduction

In this quick tutorial, we’ll create a small web application and explore how to return a JSON response from a Servlet.

2. Maven

For our web application, we’ll include javax.servlet-api and Gson dependencies in our pom.xml:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>${javax.servlet.version}</version>
</dependency>
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>${gson.version}</version>
</dependency>

The latest versions of the dependencies can be found here: javax.servlet-api and gson.

We also need to configure a Servlet container to deploy our application to. This article is a good place to start on how to deploy a WAR on Tomcat.

3. Creating an Entity

Let’s create an Employee entity which will later be returned from the Servlet as JSON:

public class Employee {
	
    private int id;
    
    private String name;
    
    private String department;
   
    private long salary;

    // constructors
    // standard getters and setters.
}

4. Entity to JSON

To send a JSON response from the Servlet, we first need to convert the Employee object into its JSON representation.

There are many Java libraries available to convert an object to its JSON representation and vice versa. The most prominent of them are the Gson and Jackson libraries. To learn about the differences between Gson and Jackson, have a look at this article.

A quick sample for converting an object to JSON representation with Gson would be:

String employeeJsonString = new Gson().toJson(employee);
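
And, in the opposite direction, Gson can deserialize the JSON string back into an Employee object:

Employee employee = new Gson().fromJson(employeeJsonString, Employee.class);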

5. Response and Content Type

For HTTP Servlets, the correct procedure for populating the response is:

  1. Retrieve an output stream from the response
  2. Fill in the response headers
  3. Write content to the output stream
  4. Commit the response

The Content-Type response header tells the client the media type of the content actually returned.

For a JSON response, the content type should be application/json:

PrintWriter out = response.getWriter();
response.setContentType("application/json");
response.setCharacterEncoding("UTF-8");
out.print(employeeJsonString);
out.flush();

Response headers must always be set before the response is committed. The web container will ignore any attempt to set or add headers after the response is committed.

Calling flush() on the PrintWriter commits the response.

6. Example Servlet

Now let’s see an example Servlet that returns a JSON response:

@WebServlet(name = "EmployeeServlet", urlPatterns = "/employeeServlet")
public class EmployeeServlet extends HttpServlet {

    private Gson gson = new Gson();

    @Override
    protected void doGet(
      HttpServletRequest request, 
      HttpServletResponse response) throws IOException {
        
        Employee employee = new Employee(1, "Karan", "IT", 5000);
        String employeeJsonString = this.gson.toJson(employee);

        PrintWriter out = response.getWriter();
        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");
        out.print(employeeJsonString);
        out.flush();   
    }
}

7. Conclusion

This article showcased how to return a JSON response from a Servlet. This is helpful in web applications that use Servlets to implement REST Services.

All code samples shown here can be found on GitHub.

Spring Data Reactive Repositories with MongoDB


1. Introduction

In this tutorial, we’re going to see how to configure and implement database operations using Reactive Programming through Spring Data Reactive Repositories with MongoDB.

We’ll go over the basic usages of ReactiveCrudRepository, ReactiveMongoRepository, as well as ReactiveMongoTemplate.

Even though these implementations use reactive programming, that isn’t the primary focus of this tutorial.

2. Environment

In order to use Reactive MongoDB, we need to add the dependency to our pom.xml.

We’ll also add an embedded MongoDB for testing:

<dependencies>
    // ...
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
    </dependency>
    <dependency>
        <groupId>de.flapdoodle.embed</groupId>
        <artifactId>de.flapdoodle.embed.mongo</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

3. Configuration

In order to activate the reactive support, we need to use the @EnableReactiveMongoRepositories annotation alongside some infrastructure setup:

@EnableReactiveMongoRepositories
public class MongoReactiveApplication
  extends AbstractReactiveMongoConfiguration {

    @Bean
    public MongoClient mongoClient() {
        return MongoClients.create();
    }

    @Override
    protected String getDatabaseName() {
        return "reactive";
    }
}

Note that the above would be necessary if we were using a standalone MongoDB installation. But, as we’re using Spring Boot with embedded MongoDB in our example, the above configuration is not necessary.

4. Creating a Document

For the examples below, let’s create an Account class and annotate it with @Document to use it in the database operations:

@Document
public class Account {
 
    @Id
    private String id;
    private String owner;
    private Double value;
 
    // getters and setters
}

5. Using Reactive Repositories

We are already familiar with the repositories programming model, with the CRUD methods already defined plus support for some other common operations as well.

Now with the Reactive model, we get the same set of methods and specifications, except that we’ll deal with the results and parameters in a reactive way.

5.1. ReactiveCrudRepository

We can use this repository the same way as the blocking CrudRepository:

@Repository
public interface AccountCrudRepository 
  extends ReactiveCrudRepository<Account, String> {
 
    Flux<Account> findAllByValue(Double value);
    Mono<Account> findFirstByOwner(Mono<String> owner);
}

We can pass different types of arguments like plain (String), wrapped (Optional, Stream), or reactive (Mono, Flux) as we can see in the findFirstByOwner() method.

5.2. ReactiveMongoRepository

There’s also the ReactiveMongoRepository interface, which inherits from ReactiveCrudRepository and adds some new query methods:

@Repository
public interface AccountReactiveRepository 
  extends ReactiveMongoRepository<Account, String> { }

Using the ReactiveMongoRepository, we can query by example:

Flux<Account> accountFlux = repository
  .findAll(Example.of(new Account(null, "owner", null)));

As a result, we’ll get every Account that matches the given example.

Our repositories also come with predefined methods for common database operations that we don’t need to implement:

Mono<Account> accountMono 
  = repository.save(new Account(null, "owner", 12.3));
Mono<Account> accountMono2 = repository
  .findById("123456");

5.3. RxJava2CrudRepository

With RxJava2CrudRepository, we have the same behavior as the ReactiveCrudRepository, but with the results and parameter types from RxJava:

@Repository
public interface AccountRxJavaRepository 
  extends RxJava2CrudRepository<Account, String> {
 
    Observable<Account> findAllByValue(Double value);
    Single<Account> findFirstByOwner(Single<String> owner);
}

5.4. Testing Our Basic Operations

In order to test our repository methods, we’ll use StepVerifier:

@Test
public void givenValue_whenFindAllByValue_thenFindAccount() {
    repository.save(new Account(null, "Bill", 12.3)).block();
    Flux<Account> accountFlux = repository.findAllByValue(12.3);

    StepVerifier
      .create(accountFlux)
      .assertNext(account -> {
          assertEquals("Bill", account.getOwner());
          assertEquals(Double.valueOf(12.3) , account.getValue());
          assertNotNull(account.getId());
      })
      .expectComplete()
      .verify();
}

@Test
public void givenOwner_whenFindFirstByOwner_thenFindAccount() {
    repository.save(new Account(null, "Bill", 12.3)).block();
    Mono<Account> accountMono = repository
      .findFirstByOwner(Mono.just("Bill"));

    StepVerifier
      .create(accountMono)
      .assertNext(account -> {
          assertEquals("Bill", account.getOwner());
          assertEquals(Double.valueOf(12.3) , account.getValue());
          assertNotNull(account.getId());
      })
      .expectComplete()
      .verify();
}

@Test
public void givenAccount_whenSave_thenSaveAccount() {
    Mono<Account> accountMono = repository.save(new Account(null, "Bill", 12.3));

    StepVerifier
      .create(accountMono)
      .assertNext(account -> assertNotNull(account.getId()))
      .expectComplete()
      .verify();
}

6. ReactiveMongoTemplate

Besides the repositories approach, we have the ReactiveMongoTemplate.

First of all, we need to register ReactiveMongoTemplate as a bean:

@Configuration
public class ReactiveMongoConfig {
 
    @Autowired
    MongoClient mongoClient;

    @Bean
    public ReactiveMongoTemplate reactiveMongoTemplate() {
        return new ReactiveMongoTemplate(mongoClient, "test");
    }
}

And then, we can inject this bean into our service to perform the database operations:

@Service
public class AccountTemplateOperations {
 
    @Autowired
    ReactiveMongoTemplate template;

    public Mono<Account> findById(String id) {
        return template.findById(id, Account.class);
    }
 
    public Flux<Account> findAll() {
        return template.findAll(Account.class);
    } 
    public Mono<Account> save(Mono<Account> account) {
        return template.save(account);
    }
}

ReactiveMongoTemplate also has a number of methods that don’t relate to our domain; we can check them out in the documentation.

7. Conclusion

In this brief tutorial, we’ve covered the use of repositories and templates with reactive programming in MongoDB through the Spring Data Reactive Repositories framework.

The full source code for the examples is available over on GitHub.

Visitor Design Pattern in Java


1. Overview

In this tutorial, we’ll introduce one of the behavioral GoF design patterns – the Visitor.

First, we’ll explain its purpose and the problem it tries to solve.

Next, we’ll have a look at the Visitor’s UML diagram and the implementation of a practical example.

2. Visitor Design Pattern

The purpose of a Visitor pattern is to define a new operation without modifying an existing object structure.

Imagine that we have a composite object which consists of components. The object’s structure is fixed – we either can’t change it, or we don’t plan to add new types of elements to the structure.

Now, how could we add new functionality to our code without modification of existing classes?

The Visitor design pattern might be an answer. Simply put, all we’ll have to do is add a function that accepts the visitor class to each element of the structure.

That way our components will allow the visitor implementation to “visit” them and perform any required action on that element.

In other words, we’ll extract the algorithm which will be applied to the object structure from the classes.

Consequently, we’ll make good use of the Open/Closed principle as we won’t modify the code, but we’ll still be able to extend the functionality by providing a new Visitor implementation.

3. UML Diagram

In the UML diagram above, we have two implementation hierarchies: specialized visitors and concrete elements.

First of all, the client uses a Visitor implementation and applies it to the object structure. The composite object iterates over its components and applies the visitor to each of them.

Now, especially relevant is that concrete elements (ConcreteElementA and ConcreteElementB) are accepting a Visitor, simply allowing it to visit them.

Lastly, this method is the same for all elements in the structure: it performs double dispatch by passing itself (via the this keyword) to the visitor’s visit method.

4. Implementation

Our example will be a custom Document object that consists of JSON and XML concrete elements; the elements have a common abstract superclass, the Element.

The Document class:

public class Document extends Element {

    List<Element> elements = new ArrayList<>();

    // ...

    @Override
    public void accept(Visitor v) {
        for (Element e : this.elements) {
            e.accept(v);
        }
    }
}

The Element class has an abstract method which accepts the Visitor interface:

public abstract void accept(Visitor v);

Therefore, when creating a new element (let’s name it JsonElement), we’ll have to provide an implementation of this method.

However, due to the nature of the Visitor pattern, the implementation will be the same, so in most cases we can copy-paste the boilerplate code from another, already existing element:

public class JsonElement extends Element {

    // ...

    public void accept(Visitor v) {
        v.visit(this);
    }
}

Since our elements can be visited by any visitor, let’s say that we want to process our Document elements, but each of them in a different way, depending on its class type.

Therefore, our visitor will have a separate method for the given type:

public class ElementVisitor implements Visitor {

    @Override
    public void visit(XmlElement xe) {
        System.out.println(
          "processing an XML element with uuid: " + xe.uuid);
    }

    @Override
    public void visit(JsonElement je) {
        System.out.println(
          "processing a JSON element with uuid: " + je.uuid);
    }
}

Here, our concrete visitor implements two methods, one per each concrete type of Element.

This gives us access to the particular object of the structure on which we can perform necessary actions.
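
The Visitor interface itself isn’t shown above; a minimal sketch, assuming one visit() overload per concrete element type, would be:

public interface Visitor {
    void visit(XmlElement xe);
    void visit(JsonElement je);
}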

5. Testing

For testing purposes, let’s have a look at the VisitorDemo class:

public class VisitorDemo {

    public static void main(String[] args) {

        Visitor v = new ElementVisitor();

        Document d = new Document(generateUuid());
        d.elements.add(new JsonElement(generateUuid()));
        d.elements.add(new JsonElement(generateUuid()));
        d.elements.add(new XmlElement(generateUuid()));

        d.accept(v);
    }

    // ...
}

First, we create an ElementVisitor; it holds the algorithm we will apply to our elements.

Next, we set up our Document with proper components and apply the visitor which will be accepted by every element of an object structure.

The output would be like this:

processing a JSON element with uuid: fdbc75d0-5067-49df-9567-239f38f01b04
processing a JSON element with uuid: 81e6c856-ddaf-43d5-aec5-8ef977d3745e
processing an XML element with uuid: 091bfcb8-2c68-491a-9308-4ada2687e203

It shows that the visitor has visited each element of our structure; depending on the Element type, it dispatched the processing to the appropriate method and could retrieve the data from every underlying object.

6. Downsides

Like every design pattern, the Visitor has its downsides; in particular, its usage makes it more difficult to maintain the code if we need to add new elements to the object structure.

For example, if we add a new YamlElement, then we need to update all existing visitors with a new method for processing this element. Following this further, if we have ten or more concrete visitors, it might be cumbersome to update all of them.

Other than this, when using this pattern, the business logic related to one particular object gets spread over all visitor implementations.

7. Conclusion

The Visitor pattern is great for separating the algorithm from the classes on which it operates. Besides that, it makes adding a new operation easier, just by providing a new implementation of the Visitor.

Furthermore, we don’t depend on component interfaces, and if they are different, that’s fine, since we have a separate algorithm for processing each concrete element.

Moreover, the Visitor can eventually aggregate data based on the element it traverses.

To see a more specialized version of the Visitor design pattern, check out the visitor pattern in Java NIO – the usage of the pattern in the JDK.

As usual, the complete code is available on the GitHub project.

The Thread.join() Method in Java


1. Overview

In this tutorial, we’ll discuss the different join() methods in the Thread class. We’ll go into the details of these methods and some example code.

Like the wait() and notify() methods, join() is another mechanism of inter-thread synchronization.

You can have a quick look at this tutorial to read more about wait() and notify().

2. The Thread.join() Method

The join method is defined in the Thread class:

public final void join() throws InterruptedException
Waits for this thread to die.

When we invoke the join() method on a thread, the calling thread goes into a waiting state. It remains in a waiting state until the referenced thread terminates.

We can see this behavior in the following code:

class SampleThread extends Thread {
    public int processingCount = 0;

    SampleThread(int processingCount) {
        this.processingCount = processingCount;
        LOGGER.info("Thread Created");
    }

    @Override
    public void run() {
        LOGGER.info("Thread " + this.getName() + " started");
        while (processingCount > 0) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                LOGGER.info("Thread " + this.getName() + " interrupted");
            }
            processingCount--;
        }
        LOGGER.info("Thread " + this.getName() + " exiting");
    }
}

@Test
public void givenStartedThread_whenJoinCalled_waitsTillCompletion() 
  throws InterruptedException {
    Thread t2 = new SampleThread(1);
    t2.start();
    LOGGER.info("Invoking join");
    t2.join();
    LOGGER.info("Returned from join");
    assertFalse(t2.isAlive());
}

We should expect results similar to the following when executing the code:

INFO: Thread Created
INFO: Invoking join
INFO: Thread Thread-1 started
INFO: Thread Thread-1 exiting
INFO: Returned from join

The join() method may also exit before the referenced thread terminates, if the calling thread is interrupted. In this case, the method throws an InterruptedException.

Finally, if the referenced thread was already terminated or hasn’t been started, the call to join() method returns immediately.

Thread t1 = new SampleThread(0);
t1.join();  //returns immediately

3. Thread.join() Methods with Timeout

The join() method will keep waiting if the referenced thread is blocked or is taking too long to process. This can become an issue as the calling thread will become non-responsive. To handle these situations, we use overloaded versions of the join() method that allow us to specify a timeout period.

There are two timed versions which overload the join() method:

“public final void join(long millis) throws InterruptedException
Waits at most millis milliseconds for this thread to die. A timeout of 0 means to wait forever.”

“public final void join(long millis,int nanos) throws InterruptedException
Waits at most millis milliseconds plus nanos nanoseconds for this thread to die.”

We can use the timed join() as below:

@Test
public void givenStartedThread_whenTimedJoinCalled_waitsUntilTimedout()
  throws InterruptedException {
    Thread t3 = new SampleThread(10);
    t3.start();
    t3.join(1000);
    assertTrue(t3.isAlive());
}

In this case, the calling thread waits for roughly 1 second for the thread t3 to finish. If the thread t3 does not finish in this time period, the join() method returns control to the calling method.

Timed join() is dependent on the OS for timing. So, we cannot assume that join() will wait exactly as long as specified.

4. Thread.join() Methods and Synchronization

In addition to waiting until termination, calling the join() method has a synchronization effect. join() creates a happens-before relationship:

“All actions in a thread happen-before any other thread successfully returns from a join() on that thread.”

This means that when a thread t1 calls t2.join(), then all changes done by t2 are visible in t1 on return. However, if we do not invoke join() or use other synchronization mechanisms, we do not have any guarantee that changes in the other thread will be visible to the current thread even if the other thread has completed.

Hence, even though the join() method call to a thread in the terminated state returns immediately, we still need to call it in some situations.

We can see an example of improperly synchronized code below:

SampleThread t4 = new SampleThread(10);
t4.start();
// not guaranteed to stop even if t4 finishes.
do {
       
} while (t4.processingCount > 0);

To properly synchronize the above code, we can add a timed t4.join() inside the loop or use some other synchronization mechanism.
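
A minimal sketch of the corrected loop, assuming the surrounding method declares throws InterruptedException:

SampleThread t4 = new SampleThread(10);
t4.start();
do {
    // the timed join() throttles the loop and, on each return,
    // establishes a happens-before edge that makes t4's writes visible
    t4.join(100);
} while (t4.processingCount > 0);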

5. Conclusion

The join() method is quite useful for inter-thread synchronization. In this article, we discussed the join() methods and their behavior. We also reviewed code using the join() method.

As always, the full source code can be found over on GitHub.

Method Parameter Reflection in Java


1. Overview

Method Parameter Reflection support was added in Java 8. Simply put, it provides support for getting the names of parameters at runtime.

In this quick tutorial, we’ll take a look at how to access parameter names for constructors and methods at runtime – using reflection.

2. Compiler Argument 

In order to be able to get access to method parameter name information, we must opt in explicitly.

To do this, we specify the -parameters option during compilation.
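
For example, when compiling directly with javac, we pass the flag on the command line:

javac -parameters Person.java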

For a Maven project, we can declare this option in the pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.1</version>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
    <compilerArgument>-parameters</compilerArgument>
  </configuration>
</plugin>

3. Example Class

We’ll use a contrived Person class with a single property called fullName to demonstrate:

public class Person {

    private String fullName;

    public Person(String fullName) {
        this.fullName = fullName;
    }

    public void setFullName(String fullName) {
        this.fullName = fullName;
    }

    // other methods
}

4. Usage

The Parameter class is new in Java 8 and has a variety of interesting methods. If the -parameters compiler option was provided, the isNamePresent() method will return true.

To access the name of a parameter, we can simply call getName():

@Test
public void whenGetConstructorParams_thenOk() 
  throws NoSuchMethodException, SecurityException {
 
    List<Parameter> parameters 
        = Arrays.asList(Person.class.getConstructor(String.class).getParameters());
    Optional<Parameter> parameter 
        = parameters.stream().filter(Parameter::isNamePresent).findFirst();
    assertThat(parameter.get().getName()).isEqualTo("fullName");
}

@Test
public void whenGetMethodParams_thenOk() 
  throws NoSuchMethodException, SecurityException {
 
    List<Parameter> parameters = Arrays.asList(
      Person.class.getMethod("setFullName", String.class).getParameters());
    Optional<Parameter> parameter= parameters.stream()
      .filter(Parameter::isNamePresent)
      .findFirst();
 
    assertThat(parameter.get().getName()).isEqualTo("fullName");
}

5. Conclusion

In this quick article, we looked at the new reflection support for parameter names that became available in Java 8.

The most obvious use case for this information is to help implement auto-complete support within editors.

As always, the source code can be found over on GitHub.


Performance of Java Mapping Frameworks


1. Introduction

Creating large Java applications composed of multiple layers requires using multiple models, such as a persistence model, a domain model, or so-called DTOs. Using multiple models for different application layers requires us to provide a way of mapping between beans.

Doing this manually can quickly create much boilerplate code and consume a lot of time. Luckily for us, there are multiple object mapping frameworks for Java.

In this tutorial, we’re going to compare the performance of the most popular Java mapping frameworks.

2. Mapping Frameworks

2.1. Dozer

Dozer is a mapping framework that uses recursion to copy data from one object to another. The framework can not only copy properties between beans but also automatically convert between different types.

To use the Dozer framework, we need to add the following dependency to our project:

<dependency>
    <groupId>net.sf.dozer</groupId>
    <artifactId>dozer</artifactId>
    <version>5.5.1</version>
</dependency>

More information about the usage of the Dozer framework can be found in this article.

The documentation of the framework can be found here.

2.2. Orika

Orika is a bean to bean mapping framework that recursively copies data from one object to another.

The general principle of how Orika works is similar to Dozer. The main difference between the two is the fact that Orika uses bytecode generation, which allows generating faster mappers with minimal overhead.

To use it, we need to add the following dependency to our project:

<dependency>
    <groupId>ma.glasnost.orika</groupId>
    <artifactId>orika-core</artifactId>
    <version>1.5.2</version>
</dependency>

More detailed information about the usage of Orika can be found in this article.

The actual documentation of the framework can be found here.

2.3. MapStruct

MapStruct is a code generator that generates bean mapper classes automatically.

MapStruct also has the ability to convert between different data types. More information on how to use it can be found in this article.

To add MapStruct to our project, we need to include the following dependency:

<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct-processor</artifactId>
    <version>1.2.0.Final</version>
</dependency>

The documentation of the framework can be found here.

2.4. ModelMapper

ModelMapper is a framework that aims to simplify object mapping by determining how objects map to each other based on conventions. It provides a type-safe and refactoring-safe API.

More information about the framework can be found in the documentation.

To include ModelMapper in our project, we need to add the following dependency:

<dependency>
  <groupId>org.modelmapper</groupId>
  <artifactId>modelmapper</artifactId>
  <version>1.1.0</version>
</dependency>

2.5. JMapper

JMapper is a mapping framework that aims to provide easy-to-use, high-performance mapping between Java beans.

The framework aims to apply the DRY principle using annotations and relational mapping.

The framework allows for different ways of configuration: annotation-based, XML or API-based.

More information about the framework can be found in its documentation.

To include the JMapper in our project we need to add its dependency:

<dependency>
    <groupId>com.googlecode.jmapper-framework</groupId>
    <artifactId>jmapper-core</artifactId>
    <version>1.6.0.1</version>
</dependency>

3. Testing Model

To be able to test mapping properly, we need source and target models. We’ve created two testing models.

The first one is just a simple POJO with one String field; this allows us to compare frameworks in simpler cases and check whether anything changes when we use more complicated beans.

The simple source model looks like below:

public class SourceCode {
    String code;
    // getter and setter
}

And its destination is quite similar:

public class DestinationCode {
    String code;
    // getter and setter
}

The real-life source bean example looks like this:

public class SourceOrder {
    private String orderFinishDate;
    private PaymentType paymentType;
    private Discount discount;
    private DeliveryData deliveryData;
    private User orderingUser;
    private List<Product> orderedProducts;
    private Shop offeringShop;
    private int orderId;
    private OrderStatus status;
    private LocalDate orderDate;
    // standard getters and setters
}

And the target class looks like below:

public class Order {
    private User orderingUser;
    private List<Product> orderedProducts;
    private OrderStatus orderStatus;
    private LocalDate orderDate;
    private LocalDate orderFinishDate;
    private PaymentType paymentType;
    private Discount discount;
    private int shopId;
    private DeliveryData deliveryData;
    private Shop offeringShop;
    // standard getters and setters
}

The whole model structure can be found here.

4. Converters

To simplify the design of the testing setup, we’ve created the Converter interface which looks like below:

public interface Converter {
    Order convert(SourceOrder sourceOrder);
    DestinationCode convert(SourceCode sourceCode);
}

And all our custom mappers will implement this interface.

4.1. OrikaConverter

Orika allows for full API-based configuration, which greatly simplifies the creation of the mapper:

public class OrikaConverter implements Converter{
    private MapperFacade mapperFacade;

    public OrikaConverter() {
        MapperFactory mapperFactory = new DefaultMapperFactory
          .Builder().build();

        mapperFactory.classMap(Order.class, SourceOrder.class)
          .field("orderStatus", "status").byDefault().register();
        mapperFacade = mapperFactory.getMapperFacade();
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
        return mapperFacade.map(sourceOrder, Order.class);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return mapperFacade.map(sourceCode, DestinationCode.class);
    }
}

4.2. DozerConverter

Dozer requires an XML mapping file with the following sections:

<mappings xmlns="http://dozer.sourceforge.net"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://dozer.sourceforge.net
  http://dozer.sourceforge.net/schema/beanmapping.xsd">

    <mapping>
        <class-a>com.baeldung.performancetests.model.source.SourceOrder</class-a>
        <class-b>com.baeldung.performancetests.model.destination.Order</class-b>
        <field>
            <a>status</a>
            <b>orderStatus</b>
        </field>
    </mapping>
    <mapping>
        <class-a>com.baeldung.performancetests.model.source.SourceCode</class-a>
        <class-b>com.baeldung.performancetests.model.destination.DestinationCode</class-b>
    </mapping>
</mappings>

After defining the XML mapping, we can use it from code:

public class DozerConverter implements Converter {
    private final Mapper mapper;

    public DozerConverter() {
        DozerBeanMapper mapper = new DozerBeanMapper();
        mapper.addMapping(
          DozerConverter.class.getResourceAsStream("/dozer-mapping.xml"));
        this.mapper = mapper;
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
        return mapper.map(sourceOrder,Order.class);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return mapper.map(sourceCode, DestinationCode.class);
    }
}

4.3. MapStructConverter

The MapStruct definition is quite simple, as it’s entirely based on code generation:

@Mapper
public interface MapStructConverter extends Converter {
    MapStructConverter MAPPER = Mappers.getMapper(MapStructConverter.class);

    @Mapping(source = "status", target = "orderStatus")
    @Override
    Order convert(SourceOrder sourceOrder);

    @Override
    DestinationCode convert(SourceCode sourceCode);
}

4.4. JMapperConverter

JMapperConverter requires more work. After implementing the interface:

public class JMapperConverter implements Converter {
    JMapper realLifeMapper;
    JMapper simpleMapper;
 
    public JMapperConverter() {
        JMapperAPI api = new JMapperAPI()
          .add(JMapperAPI.mappedClass(Order.class));
        realLifeMapper = new JMapper(Order.class, SourceOrder.class, api);
        JMapperAPI simpleApi = new JMapperAPI()
          .add(JMapperAPI.mappedClass(DestinationCode.class));
        simpleMapper = new JMapper(
          DestinationCode.class, SourceCode.class, simpleApi);
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
        return (Order) realLifeMapper.getDestination(sourceOrder);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return (DestinationCode) simpleMapper.getDestination(sourceCode);
    }
}

We also need to add @JMap annotations to each field of the target class. Also, JMapper can’t convert between enum types on its own, so it requires us to create custom mapping functions:

@JMapConversion(from = "paymentType", to = "paymentType")
public PaymentType conversion(com.baeldung.performancetests.model.source.PaymentType type) {
    PaymentType paymentType = null;
    switch(type) {
        case CARD:
            paymentType = PaymentType.CARD;
            break;

        case CASH:
            paymentType = PaymentType.CASH;
            break;

        case TRANSFER:
            paymentType = PaymentType.TRANSFER;
            break;
    }
    return paymentType;
}
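
As for the @JMap annotations mentioned above, here’s a minimal sketch of how they could look on the target Order class, assuming the source field name is passed as the annotation value where the names differ:

public class Order {

    @JMap
    private User orderingUser;

    // maps this field from the source field named "status"
    @JMap("status")
    private OrderStatus orderStatus;

    // remaining fields are annotated analogously
}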

4.5. ModelMapperConverter

ModelMapperConverter only requires us to provide the classes that we want to map:

public class ModelMapperConverter implements Converter {
    private ModelMapper modelMapper;

    public ModelMapperConverter() {
        modelMapper = new ModelMapper();
    }

    @Override
    public Order convert(SourceOrder sourceOrder) {
       return modelMapper.map(sourceOrder, Order.class);
    }

    @Override
    public DestinationCode convert(SourceCode sourceCode) {
        return modelMapper.map(sourceCode, DestinationCode.class);
    }
}

5. Simple Model Testing

For the performance testing, we can use the Java Microbenchmark Harness (JMH); more information about how to use it can be found in this article.

We’ve created a separate benchmark for each Converter, specifying BenchmarkMode as Mode.All.
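
A benchmark for the simple model could look roughly like the sketch below; the class name and setup are assumptions, as the actual benchmark classes live in the linked repository:

import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class SimpleModelBenchmark {

    private Converter converter;
    private SourceCode sourceCode;

    @Setup
    public void setUp() {
        converter = new OrikaConverter();
        sourceCode = new SourceCode();
        sourceCode.setCode("some-code");
    }

    @Benchmark
    @BenchmarkMode(Mode.All)
    public DestinationCode benchmarkSimpleMapping() {
        // each invocation maps one SourceCode instance to a DestinationCode
        return converter.convert(sourceCode);
    }
}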

5.1. AverageTime

JMH returned the following results for average running time (less is better):

This benchmark shows clearly that MapStruct has by far the best average running time.

5.2. Throughput

In this mode, the benchmark returns the number of operations per second. We received the following results (more is better):

Again, MapStruct was the fastest among all the frameworks.

5.3. SingleShotTime

This mode allows measuring the time of a single operation from beginning to end. The benchmark gave the following result (less is better):

Again, MapStruct was the fastest, however, ModelMapper gave better results than in previous tests.

5.4. SampleTime

This mode allows sampling the time of each operation. The results for three different percentiles look like below:

All benchmarks have shown that MapStruct has the best performance, JMapper is also quite a good choice, although it gave significantly worse results for SingleShotTime.

6. Real Life Model Testing

We used the same JMH setup as for the simple model: a separate benchmark for each Converter, with BenchmarkMode set to Mode.All.

6.1. AverageTime

JMH returned the following results for average running time (less is better):

6.2. Throughput

In this mode, the benchmark returns the number of operations per second. For each of the mappers, we received the following results (more is better):

6.3. SingleShotTime

This mode allows measuring the time of a single operation from beginning to end. The benchmark gave the following results (less is better):

6.4. SampleTime

This mode allows sampling the time of each operation. Sampling results are split into percentiles; we’ll present results for three different percentiles: p0.90, p0.999, and p1.00:

While the exact results of the simple example and the real-life example were clearly different, they do follow the same trend. Both examples gave similar results in terms of which algorithm is the fastest and which is the slowest.

The best performance clearly belongs to MapStruct and the worst to Orika.

7. Summary

In this article, we’ve conducted performance tests of five popular Java bean mapping frameworks: ModelMapper, MapStruct, Orika, Dozer, and JMapper.

As always, code samples can be found over on GitHub.

Spring – Injecting Collections


1. Introduction

In this tutorial, we’re going to show how to inject Java collections using the Spring framework.

Simply put, we’ll demonstrate examples with the List, Map, and Set collection interfaces.

2. List with @Autowired

Let’s create an example bean:

public class CollectionsBean {

    @Autowired
    private List<String> nameList;

    public void printNameList() {
        System.out.println(nameList);
    }
}

Here, we declared the nameList property to hold a List of String values.

In this example, we use field injection for nameList. Therefore, we put the @Autowired annotation.

To learn more about the dependency injection or different ways to implement it, check out this guide.

Afterward, we register the CollectionsBean in the configuration setup class:

@Configuration
public class CollectionConfig {

    @Bean
    public CollectionsBean getCollectionsBean() {
        return new CollectionsBean();
    }

    @Bean
    public List<String> nameList() {
        return Arrays.asList("John", "Adam", "Harry");
    }
}

Besides registering the CollectionsBean, we also inject a new list by explicitly initializing and returning it as a separate @Bean configuration.

Now, we can test the results:

ApplicationContext context = new AnnotationConfigApplicationContext(CollectionConfig.class);
CollectionsBean collectionsBean = context.getBean(
  CollectionsBean.class);
collectionsBean.printNameList();

The output of printNameList() method:

[John, Adam, Harry]

3. Set with Constructor Injection

To set up the same example with the Set collection, let’s modify the CollectionsBean class:

public class CollectionsBean {

    private Set<String> nameSet;

    public CollectionsBean(Set<String> strings) {
        this.nameSet = strings;
    }

    public void printNameSet() {
        System.out.println(nameSet);
    }
}

This time, we want to use constructor injection for initializing the nameSet property. This also requires changes in the configuration class:

@Bean
public CollectionsBean getCollectionsBean() {
    return new CollectionsBean(new HashSet<>(Arrays.asList("John", "Adam", "Harry")));
}

4. Map with Setter Injection

Following the same logic, let’s add the nameMap field to demonstrate the map injection:

public class CollectionsBean {

    private Map<Integer, String> nameMap;

    @Autowired
    public void setNameMap(Map<Integer, String> nameMap) {
        this.nameMap = nameMap;
    }

    public void printNameMap() {
        System.out.println(nameMap);
    }
}

This time, we have a setter method in order to use setter-based dependency injection. We also need to add the Map initializing code in the configuration class:

@Bean
public Map<Integer, String> nameMap(){
    Map<Integer, String>  nameMap = new HashMap<>();
    nameMap.put(1, "John");
    nameMap.put(2, "Adam");
    nameMap.put(3, "Harry");
    return nameMap;
}

The results after invoking the printNameMap() method:

{1=John, 2=Adam, 3=Harry}

5. Injecting Bean References

Let’s look at an example where we inject bean references as elements of the collection.

First, let’s create the bean:

public class BaeldungBean {

    private String name;

    // constructor
}

And add a List of BaeldungBean as a property to the CollectionsBean class:

public class CollectionsBean {

    @Autowired(required = false)
    private List<BaeldungBean> beanList;

    public void printBeanList() {
        System.out.println(beanList);
    }
}

Next, we add the Java configuration factory methods for each BaeldungBean element:

@Configuration
public class CollectionConfig {

    @Bean
    public BaeldungBean getElement() {
        return new BaeldungBean("John");
    }

    @Bean
    public BaeldungBean getAnotherElement() {
        return new BaeldungBean("Adam");
    }

    @Bean
    public BaeldungBean getOneMoreElement() {
        return new BaeldungBean("Harry");
    }

    // other factory methods
}

The Spring container injects the individual beans of the BaeldungBean type into one collection.

To test this, we invoke the collectionsBean.printBeanList() method. The output shows the bean names as list elements:

[John, Harry, Adam]

Now, let’s consider the scenario where no BaeldungBean is registered in the application context. In that case, Spring will throw an exception because the required dependency is missing.

We can use @Autowired(required = false) to mark the dependency as optional. Instead of throwing an exception, the beanList won’t be initialized and its value will stay null.

If we need an empty list instead of null, we can initialize beanList with a new ArrayList:

@Autowired(required = false)
private List<BaeldungBean> beanList = new ArrayList<>();

5.1. Using @Order to Sort Beans

We can specify the order of the beans while injecting into the collection.

For that purpose, we use the @Order annotation and specify the index:

@Configuration
public class CollectionConfig {

    @Bean
    @Order(2)
    public BaeldungBean getElement() {
        return new BaeldungBean("John");
    }

    @Bean
    @Order(3)
    public BaeldungBean getAnotherElement() {
        return new BaeldungBean("Adam");
    }

    @Bean
    @Order(1)
    public BaeldungBean getOneMoreElement() {
        return new BaeldungBean("Harry");
    }
}

The Spring container will first inject the bean with the name “Harry”, as it has the lowest order value.

It will then inject the “John”, and finally, the “Adam” bean:

[Harry, John, Adam]

Learn more about @Order in this guide.

5.2. Using @Qualifier to Select Beans

We can use the @Qualifier to select the beans to be injected into the specific collection that matches the @Qualifier name.

Here’s how we use it for the injection point:

@Autowired
@Qualifier("CollectionsBean")
private List<BaeldungBean> beanList;

Then, we mark with the same @Qualifier the beans that we want to inject into the List:

@Configuration
public class CollectionConfig {

    @Bean
    @Qualifier("CollectionsBean")
    public BaeldungBean getElement() {
        return new BaeldungBean("John");
    }

    @Bean
    public BaeldungBean getAnotherElement() {
        return new BaeldungBean("Adam");
    }

    @Bean
    public BaeldungBean getOneMoreElement() {
        return new BaeldungBean("Harry");
    }

    // other factory methods
}

In this example, we specify that the bean with the name “John” will be injected into the List qualified with “CollectionsBean”. Then, we test the results:

ApplicationContext context = new AnnotationConfigApplicationContext(CollectionConfig.class);
CollectionsBean collectionsBean = context.getBean(CollectionsBean.class);
collectionsBean.printBeanList();

From the output, we see that our collection has only one element:

[John]

6. Summary

With this guide, we learned how to inject different types of Java collections using the Spring framework.

We also examined injection with reference types and how to select or order them inside of the collection.

As usual, the complete code is available in the GitHub project.

Difference Between JVM, JRE, and JDK


1. Overview

In this article, we’ll discuss differences between JVM, JRE, and JDK by considering their components and uses.

2. JVM

Java Virtual Machine (JVM) is an implementation of a virtual machine which executes a Java program.

The JVM first interprets the bytecode. It then stores the class information in the memory area. Finally, it executes the bytecode generated by the Java compiler.

It is an abstract computing machine with its own instruction set and manipulates various memory areas at runtime.

Components of the JVM are:

  • Class Loaders
  • Run-Time Data Areas
  • Execution Engine

2.1. Class Loaders

The initial tasks of the JVM include loading, verifying, and linking the bytecode. Class loaders handle these tasks.

We have a detailed article specifically on class loaders.
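
As a quick illustration, we can inspect the class loader hierarchy at runtime; JvmDemo is a hypothetical class used only for this example:

public class JvmDemo {

    public static void main(String[] args) {
        // the application class loader that loaded this class
        ClassLoader cl = JvmDemo.class.getClassLoader();
        System.out.println(cl);
        // its parent, typically the extension/platform class loader
        System.out.println(cl.getParent());
    }
}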

2.2. Run-Time Data Areas

The JVM defines various memory areas to execute a Java program. These are used during runtime and are known as run-time data areas. Some of these areas are created on JVM start-up and destroyed when the JVM exits, while others are created when a thread is created and destroyed when the thread exits.

Let’s consider these areas one by one:

Method Area

Basically, the method area is analogous to the storage area for compiled code. It stores structures such as the run-time constant pool, field and method data, the code for methods and constructors, as well as fully qualified class names. The JVM stores these structures for each and every class.

The method area, also known as permanent generation space (PermGen), is created when the JVM starts up. The memory for this area does not need to be contiguous. All the JVM threads share this memory area.

Heap Area

The JVM allocates the memory for all the class instances and arrays from this area.

The Garbage Collector (GC) reclaims heap memory from objects. Basically, GC has three phases to reclaim memory from objects, viz. two minor GC phases and one major GC phase.

The heap memory has three portions:

  • Eden Space – it’s a part of Young Generation space. When we create an object, the JVM allocates memory from this space
  • Survivor Space – it’s also a part of Young Generation space. Survivor space contains existing objects which have survived the minor GC phases of GC
  • Tenured Space – this is also known as the Old Generation space. It holds long surviving objects. Basically, a threshold is set for Young Generation objects and when this threshold is met, these objects are moved to tenured space.

The JVM creates the heap area as soon as it starts up. All the threads of the JVM share this area. The memory for the heap area does not need to be contiguous.

Stack Area

The stack area stores data as frames, and each frame stores local variables, partial results, and nested method calls. The JVM creates the stack area whenever it creates a new thread. This area is private to each thread.

Each entry in the stack is called a Stack Frame or Activation Record. Each frame contains three parts:

  • Local Variable Array – contains all the local variables and parameters of the method
  • Operand Stack – used as a workspace for storing intermediate calculation results
  • Frame Data – used to store partial results, return values for methods, and reference to the Exception table which provides corresponding catch block information in case of exceptions

The memory for the JVM stack does not need to be contiguous.

PC Registers

Each JVM thread has a separate PC Register which stores the address of the currently executing instruction. If the currently executing instruction is part of a native method, then this value is undefined.

Native Method Stacks

Native methods are those which are written in languages other than Java.

The JVM provides capabilities to call these native methods. Native method stacks are also known as “C stacks”. They store the native method information. Whenever native methods are compiled into machine code, they usually use a native method stack to keep track of their state.

The JVM creates these stacks whenever it creates a new thread. Thus, JVM threads don’t share this area.

2.3. Execution Engine

The execution engine executes the instructions using information present in the memory areas. It has three parts:

Interpreter

Once class loaders load and verify the bytecode, the interpreter executes it line by line. This execution is quite slow. The disadvantage of the interpreter is that when one method is called multiple times, a new interpretation is required every time.

However, the JVM uses JIT Compiler to mitigate this disadvantage.

Just-In-Time (JIT) Compiler

JIT compiler compiles the bytecode of the often-called methods into native code at run-time. Hence it is responsible for the optimization of the Java programs.

The JVM automatically monitors which methods are being executed. Once a method becomes eligible for JIT compilation, it is scheduled for compilation into machine code; such a method is then known as a hot method. This compilation into machine code happens on a separate JVM thread.

As a result, it does not interrupt the execution of the current program. After compilation into machine code, it runs faster.
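As a rough sketch, a method invoked in a tight loop will typically become hot and get compiled; we could observe this by running the following hypothetical class with the -XX:+PrintCompilation flag:

public class JitDemo {

    // called often enough, this small method becomes "hot" and gets JIT-compiled
    static long square(int i) {
        return (long) i * i;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}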

Garbage Collector

Java takes care of memory management using Garbage Collection. It’s a process of looking at heap memory, identifying which objects are in use and which are not, and finally deleting unused objects.

GC is a daemon thread. It can be requested explicitly using the System.gc() method; however, it won’t necessarily execute immediately, as the JVM decides when to invoke GC.
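For example, here’s a minimal sketch showing that System.gc() is only a request:

public class GcDemo {

    public static void main(String[] args) {
        byte[] buffer = new byte[10_000_000];
        buffer = null; // the array becomes eligible for collection

        // only a hint: the JVM decides if and when GC actually runs
        System.gc();
    }
}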

2.4. Java Native Interface

It acts as an interface between the Java code and the native (C/C++) libraries.

There are situations in which Java alone doesn’t meet the needs of our application, for example, implementing a platform-dependent feature.

In those cases, we can use JNI to enable the code running in the JVM to call native libraries. Conversely, it enables native methods to call the code running in the JVM.

2.5. Native Libraries

These are platform-specific libraries that contain the implementation of native methods.

3. JRE

Java Runtime Environment (JRE) is a bundle of software components used to run Java applications.

Core components of the JRE include:

  • An implementation of a Java Virtual Machine (JVM)
  • Classes required to run the Java programs
  • Property Files

We discussed the JVM in the above section. Here we will focus on the core classes and support files.

3.1. Bootstrap Classes

We’ll find bootstrap classes under jre/lib/. This path is also known as the bootstrap classpath. It includes:

  • Runtime classes in rt.jar
  • Internationalization classes in i18n.jar
  • Character conversion classes in charsets.jar
  • Others

Bootstrap ClassLoader loads these classes when the JVM starts up.
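For example, bootstrap-loaded classes report a null ClassLoader when queried (App here stands for a hypothetical application class):

// classes from rt.jar are loaded by the Bootstrap ClassLoader, represented as null
System.out.println(String.class.getClassLoader());

// application classes are loaded by the application class loader instead
System.out.println(App.class.getClassLoader());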

3.2. Extension Classes

We can find extension classes in jre/lib/ext/, which acts as a directory for extensions to the Java platform. This path is also known as the extension classpath.

It contains JavaFX runtime libraries in jfxrt.jar and locale data for java.text and java.util packages in localedata.jar. Users can also add custom jars into this directory.

3.3. Property Settings

The Java platform uses these property settings to maintain its configuration. Depending on their usage, they are located in different folders inside /jre/lib/. These include:

  • Calendar configurations in the calendar.properties
  • Logging configurations in logging.properties
  • Networking configurations in net.properties
  • Deployment properties in /jre/lib/deploy/
  • Management properties in /jre/lib/management/

3.4. Other Files

Apart from the above-mentioned files and classes, the JRE also contains files for other concerns:

  • Security management at jre/lib/security
  • The directory for placing support classes for applets at jre/lib/applet
  • Font related files at jre/lib/fonts and others

4. JDK

Java Development Kit (JDK) provides the environment and tools for developing, compiling, debugging, and executing a Java program.

Core components of JDK include:

  • JRE
  • Development Tools

We discussed the JRE in the above section.

Now, we’ll focus on various development tools. Let’s categorize these tools based on their usage:

4.1. Basic Tools

These tools lay the foundation of the JDK and are used to create and build Java applications. Among these tools, we can find utilities for compiling, debugging, archiving, generating Javadocs, etc.

They include:

  • javac – reads class and interface definitions and compiles them into class files
  • java – launches the Java application
  • javadoc – generates HTML pages of API documentation from Java source files
  • apt – finds and executes annotation processors based on the annotations present in the set of specified source files
  • appletviewer – enables us to run Java applets without a web browser
  • jar – packages Java applets or applications into a single archive
  • jdb – a command-line debugging tool used to find and fix bugs in Java applications
  • javah – produces C header and source files from a Java class
  • javap – disassembles the class files and displays information about fields, constructors, and methods present in a class file
  • extcheck – detects version conflicts between target Java Archive (JAR) file and currently installed extension JAR files

4.2. Security Tools

These include key and certificate management tools that are used to manipulate Java Keystores.

A Java Keystore is a container for authorization certificates or public key certificates. Consequently, it is often used by Java-based applications for encryption, authentication, and serving over HTTPS.

Also, they help to set the security policies on our system and create applications which can work within the scope of these policies in the production environment. These include:

  • keytool – helps in managing keystore entries, namely, cryptographic keys and certificates
  • jarsigner – generates digitally signed JAR files by using keystore information
  • policytool – enables us to manage the external policy configuration files that define the installation’s security policy

Some security tools also help in managing Kerberos tickets.

Kerberos is a network authentication protocol.

It works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner:

  • kinit – used to obtain and cache Kerberos ticket-granting tickets
  • ktab – manages principal names and key pairs in the key table
  • klist – displays entries in the local credentials cache and key table

4.3. Internationalization Tool

Internationalization is the process of designing an application so that it can be adapted to various languages and regions without engineering changes.

For this purpose, the JDK brings native2ascii. This tool converts a file with characters supported by the JRE into a file encoded in ASCII or Unicode escapes (for example, café becomes caf\u00e9).

4.4. Remote Method Invocation (RMI) Tools

RMI tools enable remote communication between Java applications thus providing scope for development of distributed applications.

RMI enables an object running in one JVM to invoke methods on an object running in another JVM. These tools include:

  • rmic – generates stub, skeleton, and tie classes for remote objects using the Java Remote Method Protocol (JRMP) or Internet Inter-Orb Protocol (IIOP)
  • rmiregistry – creates and starts remote object registry
  • rmid – starts the activation system daemon. This allows objects to be registered and activated in a Java Virtual Machine
  • serialver – returns serial version UID for specified classes

4.5. Java IDL and RMI-IIOP Tools

Java Interface Definition Language (IDL) adds Common Object-Based Request Broker Architecture (CORBA) capability to the Java platform.

These tools enable distributed Java web applications to invoke operations on remote network services using industry standard Object Management Group (OMG) – IDL.

Likewise, we could use Internet InterORB Protocol (IIOP).

RMI-IIOP, i.e., RMI over IIOP, enables programming of CORBA servers and applications via the RMI API, thus enabling a connection between two applications written in any CORBA-compliant language via the Internet InterORB Protocol (IIOP).

These tools include:

  • tnameserv – transient Naming Service which provides a tree-structured directory for object references
  • idlj – the IDL-to-Java Compiler for generating the Java bindings for a specified IDL file
  • orbd – enables clients to transparently locate and invoke persistent objects on the server in a CORBA environment
  • servertool – provides a command-line interface to register or unregister a persistent server with the ORB Daemon (orbd), start and shut down a persistent server registered with the ORB Daemon, etc.

4.6. Java Deployment Tools

These tools help in deploying Java applications and applets on the web. They include:

  • pack200 – transforms a JAR file into a pack200 file using the Java gzip compressor
  • unpack200 – transforms pack200 file into a JAR file

4.7. Java Plug-in Tool

The JDK provides us with htmlconverter, which is used in conjunction with the Java Plug-in.

On the one hand, Java Plug-in establishes a connection between popular browsers and the Java platform. As a result of this connection, applets on the website can run within a browser.

On the other hand, htmlconverter is a utility for converting an HTML page containing applets to a format for Java Plug-in.

4.8. Java Web Start Tool

JDK brings javaws. We can use it in conjunction with the Java Web Start.

This tool allows us to download and launch Java applications with a single click from the browser. Hence, there is no need to run any installation process.

4.9. Monitoring and Management Tools

These are great tools that we can use to monitor JVM performance and resource consumption. Here are a few of them:

  • jconsole – provides a graphical console that lets you monitor and manage Java applications
  • jps – lists the instrumented JVMs on the target system
  • jstat – monitors JVM statistics
  • jstatd – monitors creation and termination of instrumented JVMs

4.10. Troubleshooting Tools

These are experimental tools that we can leverage for troubleshooting tasks:

  • jinfo – generates configuration information for a specified Java process
  • jmap – prints shared object memory maps or heap memory details of a specified process
  • jsadebugd – attaches to a Java process and acts as a debug server
  • jstack – prints Java stack traces of Java threads for a given Java process

5. Conclusion

In this article, we identified that the basic difference between JVM, JRE, and JDK lies in their usage.

First, we described how the JVM is an abstract computing machine that actually executes the Java bytecode.

Then, we explained that to simply run Java applications, we use the JRE.

And finally, we understood that to develop Java applications, we use the JDK.

We also took some time to dig into the tools and fundamental concepts of these components.

Mockito ArgumentMatchers


1. Overview

This tutorial shows how to use the ArgumentMatcher and how it differs from the ArgumentCaptor.

For an introduction to the Mockito framework, please refer to this article.

2. Maven Dependencies

We need to add a single artifact:

<dependency>
    <groupId>org.mockito</groupId> 
    <artifactId>mockito-core</artifactId>
    <version>2.18.3</version> 
    <scope>test</scope>
</dependency>

The latest version of Mockito can be found on Maven Central.

3. ArgumentMatchers

We can configure a mocked method in various ways. One of them is to return fixed values:

doReturn("Flower").when(flowerService).analyze("poppy");

In the above example, the String “Flower” is returned only when the analyze method of flowerService receives the String “poppy”.

But maybe we need to respond to a wider range of values, or to values that are unknown beforehand.

In all these scenarios, we can configure our mocked methods with argument matchers:

when(flowerService.analyze(anyString())).thenReturn("Flower");

Now, because of the anyString argument matcher, the result will be the same no matter what value we pass to analyze. ArgumentMatchers thus allow us flexible verification and stubbing.

If a method has more than one argument, it isn’t possible to use ArgumentMatchers for only some of the arguments. Mockito requires us to provide all arguments either by matchers or by exact values.

The next example shows an incorrect approach:

abstract class FlowerService {
    public abstract boolean isABigFlower(String name, int petals);
}

FlowerService mock = mock(FlowerService.class);

when(mock.isABigFlower("poppy", anyInt())).thenReturn(true);

To fix it and keep the String name “poppy” as it’s desired, we’ll use eq matcher:

when(mock.isABigFlower(eq("poppy"), anyInt())).thenReturn(true);

There are two more points to take care of when matchers are used:

  • We can’t use them as a return value; an exact value is required when stubbing calls
  • We can’t use argument matchers outside of verification or stubbing

In the last case, Mockito will detect the misplaced argument and throw an InvalidUseOfMatchersException.

A bad example could be:

String orMatcher = or(eq("poppy"), endsWith("y"));
verify(mock).analyze(orMatcher);

The correct way to write the above code is:

verify(mock).analyze(or(eq("poppy"), endsWith("y")));

Mockito also provides AdditionalMatchers to implement common logical operations (‘not’, ‘and’, ‘or’) on ArgumentMatchers that match both primitive and non-primitive types:

verify(mock).analyze(or(eq("poppy"), endsWith("y")));
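Along the same lines, and assuming analyze was called with “poppy”, we could combine or negate matchers with and() and not() from AdditionalMatchers:

verify(mock).analyze(and(startsWith("p"), endsWith("y")));
verify(mock).analyze(not(eq("rose")));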

4. Custom Argument Matcher

Creating our own matcher lets us select the best possible approach for a given scenario and produce high-quality tests that are clean and maintainable.

For instance, we could have a MessageController that delivers messages. It’ll receive a MessageDTO, and from that, it’ll create a Message to be delivered by MessageService.

Our verification will be simple: verify that we called the MessageService exactly 1 time with any Message:

verify(messageService, times(1)).deliverMessage(any(Message.class));

Because the Message is constructed inside the method under test, we’re forced to use any as the matcher.

This approach doesn’t let us validate the data inside the Message, which can be different compared to the data inside MessageDTO.

For that reason, we’re going to implement a custom argument matcher:

public class MessageMatcher implements ArgumentMatcher<Message> {

    private Message left;

    // constructors

    @Override
    public boolean matches(Message right) {
        return left.getFrom().equals(right.getFrom()) &&
          left.getTo().equals(right.getTo()) &&
          left.getText().equals(right.getText());
    }
}

To use our matcher, we need to modify our test and replace any by argThat:

verify(messageService, times(1)).deliverMessage(argThat(new MessageMatcher(message)));

Now we know our Message instance will have the same data as our MessageDTO.

5. Custom Argument Matcher vs. ArgumentCaptor

Both techniques, custom argument matchers and ArgumentCaptor, can be used to make sure certain arguments are passed to mocks.

However, ArgumentCaptor may be a better fit if we need to assert on argument values to complete the verification, or if our custom argument matcher isn’t likely to be reused.
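For illustration, here’s a minimal sketch of the captor-based approach, assuming the same messageService mock and a messageDTO holding the expected data:

ArgumentCaptor<Message> messageCaptor = ArgumentCaptor.forClass(Message.class);
verify(messageService, times(1)).deliverMessage(messageCaptor.capture());

// now we can assert directly on the captured argument
Message delivered = messageCaptor.getValue();
assertEquals(messageDTO.getFrom(), delivered.getFrom());
assertEquals(messageDTO.getTo(), delivered.getTo());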

Custom argument matchers via ArgumentMatcher are usually better for stubbing.

6. Conclusion

In this article, we’ve explored a feature of Mockito, ArgumentMatcher and its difference with ArgumentCaptor.

As always, the full source code of the examples is available over on GitHub.

Spring Core Annotations


1. Overview

We can leverage the capabilities of the Spring DI engine using the annotations in the org.springframework.beans.factory.annotation and org.springframework.context.annotation packages.

We often call these “Spring core annotations” and we’ll review them in this tutorial.

2. DI-Related Annotations

2.1. @Autowired

We can use the @Autowired to mark a dependency which Spring is going to resolve and inject. We can use this annotation with a constructor, setter, or field injection.

Constructor injection:

class Car {
    Engine engine;

    @Autowired
    Car(Engine engine) {
        this.engine = engine;
    }
}

Setter injection:

class Car {
    Engine engine;

    @Autowired
    void setEngine(Engine engine) {
        this.engine = engine;
    }
}

Field injection:

class Car {
    @Autowired
    Engine engine;
}

@Autowired has a boolean argument called required with a default value of true. It tunes Spring’s behavior when it doesn’t find a suitable bean to wire. When true, an exception is thrown; otherwise, nothing is wired.
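For example, here’s a minimal sketch in which the Engine dependency is optional:

class Car {

    // if no Engine bean is defined, the field simply stays null
    @Autowired(required = false)
    Engine engine;
}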

Note that if we use constructor injection, all constructor arguments are mandatory.

Starting with version 4.3, we don’t need to annotate constructors with @Autowired explicitly unless we declare at least two constructors.

For more details visit our articles about @Autowired and constructor injection.

2.2. @Bean

@Bean marks a factory method which instantiates a Spring bean:

@Bean
Engine engine() {
    return new Engine();
}

Spring calls these methods when a new instance of the return type is required.

The resulting bean has the same name as the factory method. If we want to name it differently, we can do so with the name or the value arguments of this annotation (the argument value is an alias for the argument name):

@Bean("engine")
Engine getEngine() {
    return new Engine();
}

Note that all methods annotated with @Bean must be in @Configuration classes.

2.3. @Qualifier

We use @Qualifier along with @Autowired to provide the bean id or bean name we want to use in ambiguous situations.

For example, the following two beans implement the same interface:

class Bike implements Vehicle {}

class Car implements Vehicle {}

If Spring needs to inject a Vehicle bean, it ends up with multiple matching definitions. In such cases, we can provide a bean’s name explicitly using the @Qualifier annotation.

Using constructor injection:

@Autowired
Biker(@Qualifier("bike") Vehicle vehicle) {
    this.vehicle = vehicle;
}

Using setter injection:

@Autowired
void setVehicle(@Qualifier("bike") Vehicle vehicle) {
    this.vehicle = vehicle;
}

Alternatively:

@Autowired
@Qualifier("bike")
void setVehicle(Vehicle vehicle) {
    this.vehicle = vehicle;
}

Using field injection:

@Autowired
@Qualifier("bike")
Vehicle vehicle;

For a more detailed description, please read this article.

2.4. @Required

We use @Required on setter methods to mark dependencies that we want to populate through XML:

@Required
void setColor(String color) {
    this.color = color;
}

<bean class="com.baeldung.annotations.Bike">
    <property name="color" value="green" />
</bean>

Otherwise, BeanInitializationException will be thrown.

2.5. @Value

We can use @Value for injecting property values into beans. It’s compatible with constructor, setter, and field injection.

Constructor injection:

Engine(@Value("8") int cylinderCount) {
    this.cylinderCount = cylinderCount;
}

Setter injection:

@Autowired
void setCylinderCount(@Value("8") int cylinderCount) {
    this.cylinderCount = cylinderCount;
}

Alternatively:

@Value("8")
void setCylinderCount(int cylinderCount) {
    this.cylinderCount = cylinderCount;
}

Field injection:

@Value("8")
int cylinderCount;

Of course, injecting static values isn’t useful. Therefore, we can use placeholder strings in @Value to wire values defined in external sources, for example, in .properties or .yaml files.

Let’s assume the following .properties file:

engine.fuelType=petrol

We can inject the value of engine.fuelType with the following:

@Value("${engine.fuelType}")
String fuelType;

We can use @Value even with SpEL. More advanced examples can be found in our article about @Value.

2.6. @DependsOn

We can use this annotation to make Spring initialize other beans before the annotated one. Usually, this behavior is automatic, based on the explicit dependencies between beans.

We only need this annotation when the dependencies are implicit, for example, JDBC driver loading or static variable initialization.

We can use @DependsOn on the dependent class specifying the names of the dependency beans. The annotation’s value argument needs an array containing the dependency bean names:

@DependsOn("engine")
class Car implements Vehicle {}

Alternatively, if we define a bean with the @Bean annotation, the factory method should be annotated with @DependsOn:

@Bean
@DependsOn("fuel")
Engine engine() {
    return new Engine();
}

2.7. @Lazy

We use @Lazy when we want to initialize our bean lazily. By default, Spring creates all singleton beans eagerly at the startup/bootstrapping of the application context.

However, there are cases when we need to create a bean when we request it, not at application startup.

This annotation behaves differently depending on where we exactly place it. We can put it on:

  • a @Bean annotated bean factory method, to delay the method call (hence the bean creation)
  • a @Configuration class, in which case all contained @Bean methods will be affected
  • a @Component class which is not a @Configuration class, in which case this bean will be initialized lazily
  • an @Autowired constructor, setter, or field, to load the dependency itself lazily (via proxy)

This annotation has an argument named value with the default value of true. It’s useful for overriding the default behavior.

For example, we can mark beans to be eagerly loaded when the global setting is lazy, or configure specific @Bean methods for eager loading in a @Configuration class marked with @Lazy:

@Configuration
@Lazy
class VehicleFactoryConfig {

    @Bean
    @Lazy(false)
    Engine engine() {
        return new Engine();
    }
}

For further reading, please visit this article.

2.8. @Lookup

A method annotated with @Lookup tells Spring to return an instance of the method’s return type when we invoke it.
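For example, here’s a minimal sketch, reusing the Engine type from above and assuming it’s declared as a prototype-scoped bean:

@Component
class EngineProvider {

    @Lookup
    Engine getEngine() {
        return null; // Spring overrides this method to return a fresh Engine bean
    }
}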

Detailed information about the annotation can be found in this article.

2.9. @Primary

Sometimes we need to define multiple beans of the same type. In these cases, the injection will be unsuccessful because Spring has no clue which bean we need.

We already saw an option to deal with this scenario: marking all the wiring points with @Qualifier and specifying the name of the required bean.

However, most of the time we need a specific bean and rarely the others. We can use @Primary to simplify this case: if we mark the most frequently used bean with @Primary, it will be chosen at unqualified injection points:

@Component
@Primary
class Car implements Vehicle {}

@Component
class Bike implements Vehicle {}

@Component
class Driver {
    @Autowired
    Vehicle vehicle;
}

@Component
class Biker {
    @Autowired
    @Qualifier("bike")
    Vehicle vehicle;
}

In the previous example, Car is the primary vehicle. Therefore, in the Driver class, Spring injects a Car bean. Of course, in the Biker bean, the value of the field vehicle will be a Bike object because it’s qualified.

2.10. @Scope

We use @Scope to define the scope of a @Component class or a @Bean definition. It can be either singleton, prototype, request, session, globalSession or some custom scope.

For example:

@Component
@Scope("prototype")
class Engine {}

3. Context Configuration Annotations

We can configure the application context with the annotations described in this section.

3.1. @Profile

If we want Spring to use a @Component class or a @Bean method only when a specific profile is active, we can mark it with @Profile. We can configure the name of the profile with the value argument of the annotation:

@Component
@Profile("sportDay")
class Bike implements Vehicle {}

You can read more about profiles in this article.

3.2. @Import

With this annotation, we can use specific @Configuration classes without component scanning. We can provide those classes with @Import‘s value argument:

@Import(VehiclePartSupplier.class)
class VehicleFactoryConfig {}

3.3. @ImportResource

We can import XML configurations with this annotation. We can specify the XML file locations with the locations argument, or with its alias, the value argument:

@Configuration
@ImportResource("classpath:/annotations.xml")
class VehicleFactoryConfig {}

3.4. @PropertySource

With this annotation, we can define property files for application settings:

@Configuration
@PropertySource("classpath:/annotations.properties")
class VehicleFactoryConfig {}

@PropertySource leverages the Java 8 repeating annotations feature, which means we can mark a class with it multiple times:

@Configuration
@PropertySource("classpath:/annotations.properties")
@PropertySource("classpath:/vehicle-factory.properties")
class VehicleFactoryConfig {}

3.5. @PropertySources

We can use this annotation to specify multiple @PropertySource configurations:

@Configuration
@PropertySources({ 
    @PropertySource("classpath:/annotations.properties"),
    @PropertySource("classpath:/vehicle-factory.properties")
})
class VehicleFactoryConfig {}

Note that since Java 8 we can achieve the same with the repeating annotations feature, as described above.

4. Conclusion

In this article, we saw an overview of the most common Spring core annotations. We saw how to configure bean wiring and application context, and how to mark classes for component scanning.

As usual, the examples are available over on GitHub.

Java 9 java.lang.Module API


1. Introduction

Following A Guide to Java 9 Modularity, in this article, we’re going to explore the java.lang.Module API that was introduced alongside the Java Platform Module System.

This API provides a way to access a module programmatically, to retrieve specific information from a module, and generally to work with it and its ModuleDescriptor.

2. Reading Module Information

The Module class represents both named and unnamed modules. Named modules have a name and are constructed by the Java Virtual Machine when it creates a module layer, using a graph of modules as a definition.

An unnamed module doesn’t have a name, and there is one for each ClassLoader. All types that aren’t in a named module are members of the unnamed module related to their class loader.

The interesting part of the Module class is that it exposes methods that allow us to retrieve information from the module, like the module name, the module classloader and the packages within the module.

Let’s see how it’s possible to find out if a module is named or unnamed.

2.1. Named or Unnamed

Using the isNamed() method we can identify whether a module is named or not.

Let’s see how we can see if a given class, like HashMap, is part of a named module and how we can retrieve its name:

Class<HashMap> hashMapClass = HashMap.class;
Module javaBaseModule = hashMapClass.getModule();

assertThat(javaBaseModule.isNamed(), is(true));
assertThat(javaBaseModule.getName(), is("java.base"));

Let’s now define a Person class:

public class Person {
    private String name;

    // constructor, getters and setters
}

In the same way as we did for the HashMap class, we can check whether the Person class is part of a named module:

Class<Person> personClass = Person.class;
Module module = personClass.getModule();

assertThat(module.isNamed(), is(false));
assertThat(module.getName(), is(nullValue()));

2.2. Packages

When working with a module, it might be important to know which packages are available within the module.

Let’s see how we can check if a given package, for example, java.lang.annotation, is contained in a given module:

assertTrue(javaBaseModule.getPackages().contains("java.lang.annotation"));
assertFalse(javaBaseModule.getPackages().contains("java.sql"));

2.3. Annotations

In the same way as for the packages, it’s possible to retrieve the annotations that are present in the module using the getAnnotations() method.

If there are no annotations present in a named module, the method will return an empty array.

Let’s see how many annotations are present in the java.base module:

assertThat(javaBaseModule.getAnnotations().length, is(0));

When invoked on an unnamed module, the getAnnotations() method will return an empty array.
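We can verify this with the module of our Person class:

assertThat(module.getAnnotations().length, is(0));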

2.4. ClassLoader

Thanks to the getClassLoader() method available within the Module class, we can retrieve the ClassLoader for a given module:

assertThat(
  module.getClassLoader().getClass().getName(), 
  is("jdk.internal.loader.ClassLoaders$AppClassLoader")
);

2.5. Layer

Another valuable piece of information that can be extracted from a module is the ModuleLayer, which represents a layer of modules in the Java virtual machine.

A module layer informs the JVM about the classes that may be loaded from the modules. In this way, the JVM knows exactly which module each class is a member of.

A ModuleLayer contains information related to its configuration, the parent layer and the set of modules available within the layer.

Let’s see how to retrieve the ModuleLayer of a given module:

ModuleLayer javaBaseModuleLayer = javaBaseModule.getLayer();

Once we have retrieved the ModuleLayer, we can access its information:

assertTrue(javaBaseModuleLayer.configuration().findModule("java.base").isPresent());
assertThat(javaBaseModuleLayer.configuration().modules().size(), is(78));

A special case is the boot layer, created when the Java Virtual Machine starts. The boot layer is the only layer that contains the java.base module.

3. Dealing with ModuleDescriptor

A ModuleDescriptor describes a named module and defines methods to obtain each of its components.

ModuleDescriptor objects are immutable and safe for use by multiple concurrent threads.

Let’s start by looking at how we can retrieve a ModuleDescriptor.

3.1. Retrieving a ModuleDescriptor

Since the ModuleDescriptor is tightly connected to a Module, it’s possible to retrieve it directly from a Module:

ModuleDescriptor moduleDescriptor = javaBaseModule.getDescriptor();

3.2. Creating a ModuleDescriptor

It’s also possible to create a module descriptor using the ModuleDescriptor.Builder class or by reading the binary form of a module declaration, the module-info.class.
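As a quick sketch of the second option (the path to the compiled module-info.class is an assumption):

try (InputStream is = Files.newInputStream(Paths.get("target/classes/module-info.class"))) {
    ModuleDescriptor moduleDescriptor = ModuleDescriptor.read(is);
}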

Let’s see how we create a module descriptor using the ModuleDescriptor.Builder API:

ModuleDescriptor.Builder moduleBuilder = ModuleDescriptor
  .newModule("baeldung.base");

ModuleDescriptor moduleDescriptor = moduleBuilder.build();

assertThat(moduleDescriptor.name(), is("baeldung.base"));

With this, we created a normal module, but if we want to create an open module or an automatic one, we can use the newOpenModule() or the newAutomaticModule() method, respectively.
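For instance, here’s a minimal sketch of building an open module descriptor:

ModuleDescriptor openDescriptor = ModuleDescriptor
  .newOpenModule("baeldung.open")
  .build();

assertTrue(openDescriptor.isOpen());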

3.3. Classifying a Module

A module descriptor describes a normal, open, or automatic module.

Thanks to the method available within the ModuleDescriptor, it’s possible to identify the type of the module:

ModuleDescriptor moduleDescriptor = javaBaseModule.getDescriptor();

assertFalse(moduleDescriptor.isAutomatic());
assertFalse(moduleDescriptor.isOpen());

3.4. Retrieving Requires

With a module descriptor, it’s possible to retrieve the set of Requires, representing the module dependencies.

This is possible using the requires() method:

Set<Requires> javaBaseRequires = javaBaseModule.getDescriptor().requires();
Set<Requires> javaSqlRequires = javaSqlModule.getDescriptor().requires();

Set<String> javaSqlRequiresNames = javaSqlRequires.stream()
  .map(Requires::name)
  .collect(Collectors.toSet());

assertThat(javaBaseRequires, empty());
assertThat(javaSqlRequires.size(), is(3));
assertThat(
  javaSqlRequiresNames, 
  containsInAnyOrder("java.base", "java.xml", "java.logging")
);

All modules, except java.base, have the java.base module as a dependency.

However, if the module is an automatic module, the set of dependencies will be empty except for the java.base one.

3.5. Retrieving Provides

With the provides() method it’s possible to retrieve the list of services that the module provides:

Set<Provides> javaBaseProvides = javaBaseModule.getDescriptor().provides();
Set<Provides> javaSqlProvides = javaSqlModule.getDescriptor().provides();

Set<String> javaBaseProvidesService = javaBaseProvides.stream()
  .map(Provides::service)
  .collect(Collectors.toSet());

assertThat(
  javaBaseProvidesService, 
  contains("java.nio.file.spi.FileSystemProvider")
);
assertThat(javaSqlProvides, empty());

3.6. Retrieving Exports

Using the exports() method, we can find out whether the module exports packages, and which ones in particular:

Set<Exports> javaBaseExports = javaBaseModule.getDescriptor().exports();
Set<Exports> javaSqlExports = javaSqlModule.getDescriptor().exports();

Set<String> javaSqlExportsSource = javaSqlExports.stream()
  .map(Exports::source)
  .collect(Collectors.toSet());

assertThat(javaBaseExports.size(), is(108));
assertThat(javaSqlExports.size(), is(3));
assertThat(
  javaSqlExportsSource, 
  containsInAnyOrder("java.sql","javax.transaction.xa", "javax.sql")
);

As a special case, if the module is an automatic one, the set of exported packages will be empty.

3.7. Retrieving Uses

With the uses() method, it’s possible to retrieve the set of service dependencies of the module:

Set<String> javaBaseUses = javaBaseModule.getDescriptor().uses();
Set<String> javaSqlUses = javaSqlModule.getDescriptor().uses();

assertThat(javaBaseUses.size(), is(34));
assertThat(javaSqlUses, contains("java.sql.Driver"));

In case the module is an automatic one, the set of dependencies will be empty.

3.8. Retrieving Opens

Whenever we want to retrieve the list of the open packages of a module, we can use the opens() method:

Set<Opens> javaBaseUses = javaBaseModule.getDescriptor().opens();
Set<Opens> javaSqlUses = javaSqlModule.getDescriptor().opens();

assertThat(javaBaseUses, empty());
assertThat(javaSqlUses, empty());

The set will be empty if the module is an open or an automatic one.

4. Dealing with Modules

Other than reading information from a module, the Module API also lets us update a module definition.

4.1. Adding Exports

Let’s see how we can update a module, exporting the given package from a given module:

Module updatedModule = module.addExports(
  "com.baeldung.java9.modules", javaSqlModule);

assertTrue(updatedModule.isExported("com.baeldung.java9.modules"));

This can be done only if the caller’s module is the module the code is a member of.

As a side note, there are no effects if the package is already exported by the module or if the module is an open one.

4.2. Adding Reads

When we want to update a module to read a given module, we can use the addReads() method:

Module updatedModule = module.addReads(javaSqlModule);

assertTrue(updatedModule.canRead(javaSqlModule));

This method does nothing if we add the module itself since all modules read themselves.

In the same way, this method does nothing if the module is an unnamed module or this module already reads the other.

4.3. Adding Opens

When we want to update a module that has opened a package to at least the caller module, we can use addOpens() to open the package to another module:

Module updatedModule = module.addOpens(
  "com.baeldung.java9.modules", javaSqlModule);

assertTrue(updatedModule.isOpen("com.baeldung.java9.modules", javaSqlModule));

This method has no effect if the package is already open to the given module.

4.4. Adding Uses

Whenever we want to update a module adding a service dependency, the method addUses() is our choice:

Module updatedModule = module.addUses(Driver.class);

assertTrue(updatedModule.canUse(Driver.class));

This method does nothing when invoked on an unnamed module or an automatic module.

5. Conclusion

In this article, we explored the use of the java.lang.Module API where we learned how to retrieve information of a module, how to use a ModuleDescriptor to access additional information regarding a module and how to manipulate it.

As always, all code examples in this article can be found over on GitHub.

Configure a RestTemplate with RestTemplateBuilder


1. Introduction

In this quick tutorial, we’re going to look at how to configure a Spring RestTemplate bean.

Let’s start by discussing the three main configuration types:

  • using the default RestTemplateBuilder
  • using a RestTemplateCustomizer
  • creating our own RestTemplateBuilder

To be able to test this easily, please follow the guide on how to set up a simple Spring Boot application.

2. Configuration Using the Default RestTemplateBuilder

To configure a RestTemplate this way, we need to inject the default RestTemplateBuilder bean provided by Spring Boot into our classes:

private RestTemplate restTemplate;

@Autowired
public HelloController(RestTemplateBuilder builder) {
    this.restTemplate = builder.build();
}

The RestTemplate bean created with this method has its scope limited to the class in which we build it.

3. Configuration Using a RestTemplateCustomizer

With this approach, we can create an application-wide, additive customization.

This is a slightly more complicated approach. For this, we need to create a class that implements RestTemplateCustomizer and define it as a bean:

public class CustomRestTemplateCustomizer implements RestTemplateCustomizer {
    @Override
    public void customize(RestTemplate restTemplate) {
        restTemplate.getInterceptors().add(new CustomClientHttpRequestInterceptor());
    }
}

The CustomClientHttpRequestInterceptor interceptor does basic logging of the request:

public class CustomClientHttpRequestInterceptor implements ClientHttpRequestInterceptor {
    private static Logger LOGGER = LoggerFactory
      .getLogger(CustomClientHttpRequestInterceptor.class);

    @Override
    public ClientHttpResponse intercept(
      HttpRequest request, byte[] body, 
      ClientHttpRequestExecution execution) throws IOException {
 
        logRequestDetails(request);
        return execution.execute(request, body);
    }
    private void logRequestDetails(HttpRequest request) {
        LOGGER.info("Headers: {}", request.getHeaders());
        LOGGER.info("Request Method: {}", request.getMethod());
        LOGGER.info("Request URI: {}", request.getURI());
    }
}

Now, we define CustomRestTemplateCustomizer as a bean in a configuration class or in our Spring Boot application class:

@Bean
public CustomRestTemplateCustomizer customRestTemplateCustomizer() {
    return new CustomRestTemplateCustomizer();
}

With this configuration, every RestTemplate that we’ll use in our application will have the custom interceptor set on it.

4. Configuration by Creating Our Own RestTemplateBuilder

This is the most extreme approach to customizing a RestTemplate. It disables the default auto-configuration of RestTemplateBuilder, so we need to define it ourselves:

@Bean
@DependsOn(value = {"customRestTemplateCustomizer"})
public RestTemplateBuilder restTemplateBuilder() {
    return new RestTemplateBuilder(customRestTemplateCustomizer());
}

After this, we can inject the custom builder into our classes like we’d do with a default RestTemplateBuilder and create a RestTemplate as usual:

private RestTemplate restTemplate;

@Autowired
public HelloController(RestTemplateBuilder builder) {
    this.restTemplate = builder.build();
}

5. Conclusion

We’ve seen how to configure a RestTemplate with the default RestTemplateBuilder, building our own RestTemplateBuilder, or using a RestTemplateCustomizer bean.

As always, the full codebase for this example can be found in our GitHub repository.


JUnit5 Programmatic Extension Registration with @RegisterExtension


1. Overview

JUnit 5 provides multiple methods for registering extensions. For an overview of some of these methods, refer to our Guide to JUnit 5 Extensions.

In this quick tutorial, we’ll focus on programmatic registration of JUnit 5 extensions, using the @RegisterExtension annotation.

2. @RegisterExtension

We can apply this annotation to fields in test classes. One advantage of this method is that we can access the extension as an object in the test code directly.

JUnit will call the extension methods at appropriate stages.

For example, if an extension implements BeforeEachCallback, JUnit will call its corresponding interface methods before executing a test method.

3. Using @RegisterExtension with Static Fields

When used with static fields, JUnit will apply the methods of this extension after the class-level @ExtendWith based extensions have been applied.

Also, JUnit will invoke both class-level and method-level callbacks of the extension.

For example, the following extension features both a beforeAll and a beforeEach implementation:

public class LoggingExtension implements 
  BeforeAllCallback, BeforeEachCallback {

    // logger, constructor etc

    @Override
    public void beforeAll(ExtensionContext extensionContext) 
      throws Exception {
        logger.info("Type {} In beforeAll : {}", 
          type, extensionContext.getDisplayName());
    }

    @Override
    public void beforeEach(ExtensionContext extensionContext) throws Exception {
        logger.info("Type {} In beforeEach : {}",
          type, extensionContext.getDisplayName());
    }

    public String getType() {
        return type;
    }
}

Let’s apply this extension to a static field of a test:

public class RegisterExtensionTest {

    @RegisterExtension
    static LoggingExtension staticExtension = new LoggingExtension("static version");

    @Test
    public void demoTest() {
        // assertions
    }
}

The output shows messages from both the beforeAll and beforeEach methods:

Type static version In beforeAll : RegisterExtensionTest
Type static version In beforeEach : demoTest()

4. Using @RegisterExtension with Instance Fields

If we use @RegisterExtension with non-static fields, JUnit will only apply the extension after processing all TestInstancePostProcessor callbacks.

In this case, JUnit will ignore class-level callbacks like beforeAll.

In the above example, let’s remove the static modifier from LoggingExtension:

@RegisterExtension
LoggingExtension instanceLevelExtension = new LoggingExtension("instance version");

Now JUnit will only invoke the beforeEach method, as seen in the output:

Type instance version In beforeEach : demoTest()

5. Conclusion

In this article, we did an overview of programmatically registering JUnit 5 extensions with @RegisterExtension.

We also covered the difference between applying the extension to static fields vs. instance fields.

As usual, code examples can be found at our Github repository.

Java Weekly, Issue 232


Here we go…

1. Spring and Java

>> Truth First, or Why You Should Mostly Implement Database First Designs [blog.jooq.org]

Some solid points to consider when thinking about where the source of truth in your system is, and how to make sure your architecture reflects that.

>> Java Collections Are Evolving [dzone.com]

The highly useful new functionality the last couple of JDK releases have brought to the Java Collection Framework. Really good stuff.

>> Zip Slip Directory Traversal Vulnerability Impacts Multiple Java Projects [infoq.com]

A quick but interesting write-up, all about the new “Zip Slip” vulnerability – along with a few practical examples, if you’re curious.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Storing Encrypted Credentials in GIT [techblog.bozho.net]

Storing credentials correctly isn’t necessarily easy, but it’s highly important that you understand how to do that well.

>> “Should that be a Microservice?” Part 4: Independent Scalability [content.pivotal.io]

Microservices can be a useful architectural choice… but not always. Better think twice.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Motivational Speaker [dilbert.com]

>> Decentralization Changes Everything [dilbert.com]

>> Boiling an Ocean [dilbert.com]

4. Pick of the Week

>> Why the best things in life are all backwards [markmanson.net]

Spring Security OAuth Guides


REST Query Language Over Multiple Tables with Querydsl Web Support


1. Overview

In this tutorial, we’ll continue with the second part of Spring Data Querydsl Web Support. Here, we’ll focus on associated entities and how to create queries over HTTP.

Following the same configuration used in part one, we’ll create a Maven-based project. Please refer to the original article to check how to set up the basics.

2. Entities

First, let’s add a new entity (Address), creating a relationship between the user and her address. We’ve used a OneToOne relationship to keep it simple.

Consequently, we’ll have the following classes:

@Entity 
public class User {

    @Id 
    @GeneratedValue
    private Long id;

    private String name;

    @OneToOne(fetch = FetchType.LAZY, mappedBy = "user") 
    private Address address;

    // getters & setters 
}
@Entity 
public class Address {

    @Id 
    @GeneratedValue
    private Long id;

    private String address;

    private String country;

    @OneToOne(fetch = FetchType.LAZY) 
    @JoinColumn(name = "user_id") 
    private User user;

    // getters & setters
}

3. Spring Data Repositories

At this point, we have to create the Spring Data repositories, as usual, one for each entity. Note that these repositories will have the Querydsl configuration.

Let’s see the AddressRepository and explain how the framework configuration works:

public interface AddressRepository extends JpaRepository<Address, Long>, 
  QueryDslPredicateExecutor<Address>, QuerydslBinderCustomizer<QAddress> {
 
    @Override 
    default void customize(QuerydslBindings bindings, QAddress root) {
        bindings.bind(String.class)
          .first((SingleValueBinding<StringPath, String>) StringExpression::eq);
    }
}

We’re overriding the customize() method to configure the default binding. In this case, we’ll customize the default binding to use equals for all String properties.

Once the repository is all set, we just have to add a @RestController to manage the HTTP queries.

4. Query Rest Controller

In part one, we explained the query @RestController over the user repository; here, we’ll just reuse it.

Also, we may want to query the address table; so for this, we’ll just add a similar method:

@GetMapping(value = "/addresses", produces = MediaType.APPLICATION_JSON_VALUE)
public Iterable<Address> queryOverAddress(
  @QuerydslPredicate(root = Address.class) Predicate predicate) {
    BooleanBuilder builder = new BooleanBuilder();
    return addressRepository.findAll(builder.and(predicate));
}

Let’s create some tests to see how this works.

5. Integration Testing

We’ve included a test to show how Querydsl works. For this, we’re using the MockMvc framework to simulate HTTP queries over User, joining this entity with the new Address one. Therefore, we’re now able to make queries filtering on address attributes.

Let’s retrieve all users living in Spain:

/users?address.country=Spain 

@Test
public void givenRequest_whenQueryUserFilteringByCountrySpain_thenGetJohn() throws Exception {
    mockMvc.perform(get("/users?address.country=Spain")).andExpect(status().isOk()).andExpect(content()
      .contentType(contentType))
      .andExpect(jsonPath("$", hasSize(1)))
      .andExpect(jsonPath("$[0].name", is("John")))
      .andExpect(jsonPath("$[0].address.address", is("Fake Street 1")))
      .andExpect(jsonPath("$[0].address.country", is("Spain")));
}

As a result, Querydsl will map the predicate sent over HTTP and generate the following SQL script:

select user0_.id as id1_1_, 
       user0_.name as name2_1_ 
from user user0_ 
      cross join address address1_ 
where user0_.id=address1_.user_id 
      and address1_.country='Spain'

6. Conclusion

To sum up, we’ve seen that Querydsl offers web clients a very simple alternative for creating dynamic queries; this is another powerful use of the framework.

In part one, we saw how to retrieve data from one table; now, we can add queries joining several tables, offering web clients a better experience filtering directly over the HTTP requests they make.

The implementation of this example can be checked in the GitHub project – this is a Maven-based project, so it should be easy to import and run as it is.

Jagged Arrays In Java


1. Overview

A jagged array in Java is a multi-dimensional array comprising arrays of varying sizes as its elements. It’s also referred to as “an array of arrays” or “ragged array”.

In this quick tutorial, we’ll look more in-depth into defining and working with jagged arrays.

2. Creating a Jagged Array

Let’s start by looking at ways in which we can create a jagged array:

2.1. Using the Shorthand Form

An easy way to define a jagged array would be:

int[][] jaggedArr = {{1, 2}, {3, 4, 5}, {6, 7, 8, 9}};

Here, we’ve declared and initialized jaggedArr in a single step.

2.2. Declaration and then Initialization

We start by declaring a jagged array of size three:

int[][] jaggedArr = new int[3][];

Here, we’ve omitted the second dimension since it will vary.

Next, let’s go further by both declaring and initializing the respective elements within jaggedArr:

jaggedArr[0] = new int[] {1, 2};
jaggedArr[1] = new int[] {3, 4, 5};
jaggedArr[2] = new int[] {6, 7, 8, 9};

We can also simply declare its elements without initializing them:

jaggedArr[0] = new int[2];
jaggedArr[1] = new int[3];
jaggedArr[2] = new int[4];

These can then later be initialized, for example by using user inputs.

3. Memory Representation

What will the memory representation of our jaggedArr look like?

As we know, an array in Java is nothing but an object, the elements of which could be either primitives or references. So, a two-dimensional array in Java can be thought of as an array of one-dimensional arrays.

Our jaggedArr in memory would look similar to the following sketch:
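jaggedArr --> [ ref0 | ref1 | ref2 ]
                 |      |      |
                 v      v      v
               [1, 2] [3, 4, 5] [6, 7, 8, 9]

Here, ref0, ref1, and ref2 stand for the references stored in the outer array.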

Clearly, jaggedArr[0] holds a reference to a one-dimensional array of size 2, jaggedArr[1] holds a reference to another one-dimensional array of size 3, and so on.

This way Java makes it possible for us to define and use jagged arrays.

4. Iterating Elements

We can iterate a jagged array much like any other multi-dimensional array in Java.

Let’s try iterating and initializing the jaggedArr elements using user inputs:

void initializeElements(int[][] jaggedArr) {
    Scanner sc = new Scanner(System.in);
    for (int outer = 0; outer < jaggedArr.length; outer++) {
        for (int inner = 0; inner < jaggedArr[outer].length; inner++) {
            jaggedArr[outer][inner] = sc.nextInt();
        }
    }
}

Here, jaggedArr[outer].length is the length of an array at an index outer in jaggedArr.

It helps us to ensure that we are looking for elements only within a valid range of each sub-array, thereby avoiding an ArrayIndexOutOfBoundsException.

5. Printing Elements

What if we want to print the elements of our jagged array?

One obvious way would be to use the iteration logic we’ve already covered. This involves iterating through each item within our jagged array, which itself is an array, and then iterating over that child array – one element at a time.

Another option we have is to use the java.util.Arrays.toString() helper method:

void printElements(int[][] jaggedArr) {
    for (int index = 0; index < jaggedArr.length; index++) {
        System.out.println(Arrays.toString(jaggedArr[index]));
    }
}

And we end up with clean and simple code. The generated console output would look like:

[1, 2]
[3, 4, 5]
[6, 7, 8, 9]

6. Conclusion

In this article, we looked at what jagged arrays are, how they look in-memory and the ways in which we can define and use them.

As always, the source code of the examples presented can be found over on Github.
