
Using NullAway to Avoid NullPointerExceptions


1. Overview

Over the years, we've adopted numerous strategies, from Elvis operators to Optional, to help remove NullPointerExceptions from our apps. In this tutorial, we'll learn about Uber's contribution to the conversation, NullAway, and how to use it.

2. NullAway

NullAway is a build tool that helps us to eliminate NullPointerExceptions (NPEs) in our Java code.

This tool performs a series of type-based, local checks to ensure that any pointer that gets dereferenced in our code cannot be null. It has low build-time overhead and can be configured to run on every build of our code.

3. Installation

Let's take a look at how to install NullAway and its dependencies. In this example, we're going to configure NullAway using Gradle.

NullAway depends on Error Prone. Therefore, we'll add the errorprone plugin:

plugins {
  id "net.ltgt.errorprone" version "1.1.1"
}

We'll also add four dependencies in different scopes: annotationProcessor, compileOnly, errorprone, and errorproneJavac:

dependencies {
  annotationProcessor "com.uber.nullaway:nullaway:0.7.9"
  compileOnly "com.google.code.findbugs:jsr305:3.0.2"
  errorprone "com.google.errorprone:error_prone_core:2.3.4"
  errorproneJavac "com.google.errorprone:javac:9+181-r4173-1"
}

Finally, we'll add the Gradle task that configures how NullAway works during the compilation:

import net.ltgt.gradle.errorprone.CheckSeverity

tasks.withType(JavaCompile) {
    options.errorprone {
        check("NullAway", CheckSeverity.ERROR)
        option("NullAway:AnnotatedPackages", "com.baeldung")
    }
}

The above task sets the NullAway severity to the error level, which means NullAway will stop the build with an error when it finds a problem. By default, NullAway only warns the user at compile time.

Additionally, the task sets the package to be checked for null dereferences.

And that's it, we're ready now to use the tool in our Java code.

Similarly, we can use other build systems, Maven or Bazel, to integrate the tool.
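For instance, with Maven, a sketch of the equivalent setup wires Error Prone and NullAway into the compiler plugin. The flags below follow the pattern documented for Error Prone; treat the exact arguments as an assumption to verify against the official docs:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <compilerArgs>
            <arg>-XDcompilePolicy=simple</arg>
            <arg>-Xplugin:ErrorProne -Xep:NullAway:ERROR -XepOpt:NullAway:AnnotatedPackages=com.baeldung</arg>
        </compilerArgs>
        <annotationProcessorPaths>
            <path>
                <groupId>com.google.errorprone</groupId>
                <artifactId>error_prone_core</artifactId>
                <version>2.3.4</version>
            </path>
            <path>
                <groupId>com.uber.nullaway</groupId>
                <artifactId>nullaway</artifactId>
                <version>0.7.9</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>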

4. Usage

Let's say we have a Person class, containing an age attribute. Additionally, we have a getAge method that takes a Person instance as a parameter:

Integer getAge(Person person) {
    return person.getAge();
}
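
For reference, here's a minimal sketch of the Person class we're assuming:

public class Person {

    private Integer age;

    public Integer getAge() {
        return age;
    }

    // setter omitted for brevity
}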

At this point, we can see that getAge will throw a NullPointerException if person is null.

NullAway assumes that every method parameter, return value, and field is non-null. Therefore, it will expect the person parameter to be non-null.

And let's also say that there is someplace in our code that is, indeed, passing a null reference into getAge:

Integer yearsToRetirement() {
    Person p = null;
    // ... p never gets set correctly...
    return 65 - getAge(p);
}

Then, running a build will produce the following error:

error: [NullAway] passing @Nullable parameter 'null' where @NonNull is required
    getAge(p);

We can fix this error by adding a @Nullable annotation to our parameter:

Integer getAge(@Nullable Person person) { 
    // ... same as earlier
}

Now when we run a build, we'll see a new error:

error: [NullAway] dereferenced expression person is @Nullable
    return person.getAge();
            ^

This is telling us that the person instance has the possibility of being null. We can fix this by adding a standard null check:

Integer getAge(@Nullable Person person) {
    if (person != null) {
        return person.getAge();
    } else {
        return 0;
    }
}

5. Conclusion

In this tutorial, we've looked at how we can use NullAway to reduce the risk of encountering NullPointerExceptions.

As always all source code is available on GitHub.


Delete Everything in Redis


1. Overview

When caching in Redis, it can be useful to clear the whole cache when it becomes invalid.

In this short tutorial, we'll learn how to delete all the keys present in Redis, both in specific databases, and across all databases.

First, we'll look at the command line. Then, we'll see how to accomplish the same thing using the APIs and the Java client.

2. Redis Commands

Let's start with the Redis commands to delete everything.

There are two major commands to delete the keys present in Redis: FLUSHDB and FLUSHALL. We can use the Redis CLI to execute these commands.

The FLUSHDB command deletes the keys in a database. And the FLUSHALL command deletes all keys in all databases.

We can execute these operations in a background thread using the ASYNC option. This is useful if the flush takes a long time, as making the command ASYNC stops it from blocking until it's complete.

We should note that the ASYNC option is available from Redis 4.0.0.
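
For example, from the Redis CLI:

127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> FLUSHDB ASYNC
OK
127.0.0.1:6379> FLUSHALL
OK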

3. Delete Using the Java Client

Now let's try to delete the keys using the Java client.

We'll make use of an embedded Redis server and see how to clear keys.
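
As a sketch, one convenient option for the embedded server is the embedded-redis library (we're assuming the commonly used com.github.kstyrc:embedded-redis artifact and its redis.embedded.RedisServer class):

// start an embedded Redis server on port 6379 before our tests
RedisServer redisServer = new RedisServer(6379);
redisServer.start();

// ... run the tests against localhost:6379 ...

redisServer.stop();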

3.1. Clearing a Single Database

Let's put some data into a database and then clear it using the flushDb method.

We interact with our server through the RedisCallback interface:

ValueOperations<String, String> simpleValues = flushRedisTemplate.opsForValue();
String key = "key";
String value = "value";
simpleValues.set(key, value);
assertThat(simpleValues.get(key)).isEqualTo(value);

flushRedisTemplate.execute(new RedisCallback<Void>() {
    @Override
    public Void doInRedis(RedisConnection connection) throws DataAccessException {
        connection.flushDb();
        return null;
    }
});

assertThat(simpleValues.get(key)).isNull();

First, we used set to add a key to our database.

Then we used the flushRedisTemplate and called flushDb within the callback to clear the database.

Finally, after clearing the database, attempting to retrieve the value for the key returned null.

3.2. Clearing All Databases

In order to clear all the databases, we use the above technique with the flushAll method:

public Void doInRedis(RedisConnection connection) throws DataAccessException {
    connection.flushAll();
    return null;
}

4. Time Complexity

Redis is a fast data store which scales well. However, flush operations can take longer when there is more data.

The time complexity of the FLUSHDB operation is O(N), where N is the number of keys in the database. If we use the FLUSHALL command, the time complexity is again O(N), but here, N is the number of keys in all the databases.

5. Conclusion

In this tutorial, we learned how to delete everything in Redis using the command line and the Java client.

As always, the source code for this article is available over on GitHub.

Open/Closed Principle in Java


1. Overview

In this tutorial, we'll discuss the Open/Closed Principle (OCP) as one of the SOLID principles of object-oriented programming.

Overall, we'll go into detail on what this principle is and how to implement it when designing our software.

2. Open/Closed Principle

As the name suggests, this principle states that software entities should be open for extension, but closed for modification. As a result, when business requirements change, the entity can be extended, but not modified.

For the illustration below, we'll focus on how interfaces are one way to follow OCP.

2.1. Non-Compliant

Let's consider we're building a calculator app that might have several operations, such as addition and subtraction.

First of all, we'll define a top-level interface – CalculatorOperation:

public interface CalculatorOperation {}

Let's define an Addition class, which adds two numbers and implements CalculatorOperation:

public class Addition implements CalculatorOperation {
    private double left;
    private double right;
    private double result = 0.0;

    public Addition(double left, double right) {
        this.left = left;
        this.right = right;
    }

    // getters and setters

}

As of now, we only have one class, Addition, so let's define another class named Subtraction:

public class Subtraction implements CalculatorOperation {
    private double left;
    private double right;
    private double result = 0.0;

    public Subtraction(double left, double right) {
        this.left = left;
        this.right = right;
    }

    // getters and setters
}

Let's now define our main class, which will perform our calculator operations: 

public class Calculator {

    public void calculate(CalculatorOperation operation) {
        if (operation == null) {
            throw new InvalidParameterException("Can not perform operation");
        }

        if (operation instanceof Addition) {
            Addition addition = (Addition) operation;
            addition.setResult(addition.getLeft() + addition.getRight());
        } else if (operation instanceof Subtraction) {
            Subtraction subtraction = (Subtraction) operation;
            subtraction.setResult(subtraction.getLeft() - subtraction.getRight());
        }
    }
}

Although this may seem fine, it's not a good example of the OCP. When a new requirement to add multiplication or division functionality comes in, we have no option but to change the calculate method of the Calculator class.

Hence, we can say this code is not OCP compliant.

2.2. OCP Compliant

As we've seen, our calculator app is not yet OCP compliant. The code in the calculate method will change with every request to support a new operation. So, we need to extract this code and put it in an abstraction layer.

One solution is to delegate each operation to its respective class:

public interface CalculatorOperation {
    void perform();
}

As a result, the Addition class could implement the logic of adding two numbers:

public class Addition implements CalculatorOperation {
    private double left;
    private double right;
    private double result;

    // constructor, getters and setters

    @Override
    public void perform() {
        result = left + right;
    }
}

Likewise, an updated Subtraction class would have similar logic. And when a new requirement for division arrives, we can simply implement it in a new Division class:

public class Division implements CalculatorOperation {
    private double left;
    private double right;
    private double result;

    // constructor, getters and setters
    @Override
    public void perform() {
        if (right != 0) {
            result = left / right;
        }
    }
}

And finally, our Calculator class doesn't need any new logic as we introduce new operations:

public class Calculator {

    public void calculate(CalculatorOperation operation) {
        if (operation == null) {
            throw new InvalidParameterException("Cannot perform operation");
        }
        operation.perform();
    }
}

That way, the class is closed for modification but open for extension.
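
To illustrate, here's a quick usage sketch; we're assuming Division exposes the same constructor shape and getResult accessor as Addition:

Calculator calculator = new Calculator();
Division division = new Division(10.0, 2.0);

// the Calculator doesn't need to know anything about Division
calculator.calculate(division);
assertEquals(5.0, division.getResult(), 0.0001);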

3. Conclusion

In this tutorial, we've learned what the OCP is by definition, then elaborated on that definition. We then saw an example of a simple calculator application that was flawed in its design. Lastly, we improved the design by making it adhere to the OCP.

As always, the code is available over on GitHub.

How to Call Python From Java


1. Overview

Python is an increasingly popular programming language, particularly in the scientific community, due to its rich variety of numerical and statistical packages. Therefore, it's not an uncommon requirement to be able to invoke Python code from our Java applications.

In this tutorial, we'll take a look at some of the most common ways of calling Python code from Java.

2. A Simple Python Script

Throughout this tutorial, we'll use a very simple Python script which we'll define in a dedicated file called hello.py:

print("Hello Baeldung Readers!!")

Assuming we have a working Python installation, when we run our script we should see the message printed:

$ python hello.py 
Hello Baeldung Readers!!

3. Core Java

In this section, we'll take a look at two different options we can use to invoke our Python script using core Java.

3.1. Using ProcessBuilder

Let's first take a look at how we can use the ProcessBuilder API to create a native operating system process to launch python and execute our simple script:

@Test
public void givenPythonScript_whenPythonProcessInvoked_thenSuccess() throws Exception {
    ProcessBuilder processBuilder = new ProcessBuilder("python", resolvePythonScriptPath("hello.py"));
    processBuilder.redirectErrorStream(true);

    Process process = processBuilder.start();
    List<String> results = readProcessOutput(process.getInputStream());

    assertThat("Results should not be empty", results, is(not(empty())));
    assertThat("Results should contain output of script: ", results, hasItem(
      containsString("Hello Baeldung Readers!!")));

    int exitCode = process.waitFor();
    assertEquals("No errors should be detected", 0, exitCode);
}

In this first example, we're running the python command with one argument, which is the absolute path to our hello.py script. We can find it in our test/resources folder.

To summarize, we create our ProcessBuilder object, passing the command and argument values to the constructor. It's also important to mention the call to redirectErrorStream(true). In case of any errors, the error output will be merged with the standard output.

This is useful as it means we can read any error messages from the corresponding output when we call the getInputStream() method on the Process object. If we don't set this property to true, then we'll need to read output from two separate streams, using the getInputStream() and the getErrorStream() methods.

Now, we start the process using the start() method to get a Process object. Then we read the process output and verify that the content is what we expect.

As previously mentioned, we've made the assumption that the python command is available via the PATH variable.
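
The test also uses two small helpers that aren't shown above. Here's a minimal sketch of what they might look like, under the assumption that the script lives in src/test/resources:

private List<String> readProcessOutput(InputStream inputStream) throws IOException {
    try (BufferedReader output = new BufferedReader(new InputStreamReader(inputStream))) {
        return output.lines()
          .collect(Collectors.toList());
    }
}

private String resolvePythonScriptPath(String filename) {
    File file = new File("src/test/resources/" + filename);
    return file.getAbsolutePath();
}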

3.2. Working With the JSR-223 Scripting Engine

JSR-223, which was first introduced in Java 6, defines a set of scripting APIs that provide basic scripting functionality. These methods provide mechanisms for executing scripts and for sharing values between Java and a scripting language. The main objective of this standard was to try to bring some uniformity to interoperating with different scripting languages from Java.

We can use the pluggable script engine architecture for any dynamic language, provided it has a JVM implementation, of course. Jython is the Java platform implementation of Python, which runs on the JVM.

Assuming that we have Jython on the CLASSPATH, the framework should automatically discover that we have the possibility of using this scripting engine and enable us to ask for the Python script engine directly.

Since Jython is available from Maven Central, we can just include it in our pom.xml:

<dependency>
    <groupId>org.python</groupId>
    <artifactId>jython</artifactId>
    <version>2.7.2</version>
</dependency>

Likewise, it can also be downloaded and installed directly.

Let's list out all the scripting engines that we have available to us:

ScriptEngineManagerUtils.listEngines();
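
ScriptEngineManagerUtils is a small helper class of our own; a plausible sketch simply iterates over the discovered engine factories:

public static void listEngines() {
    ScriptEngineManager manager = new ScriptEngineManager();
    for (ScriptEngineFactory engine : manager.getEngineFactories()) {
        System.out.println("Engine name: " + engine.getEngineName());
        System.out.println("Version: " + engine.getEngineVersion());
        System.out.println("Language: " + engine.getLanguageName());
        System.out.println("Short Names:");
        for (String name : engine.getNames()) {
            System.out.println(name);
        }
    }
}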

If we have the possibility of using Jython, we should see the appropriate scripting engine displayed:

...
Engine name: jython
Version: 2.7.2
Language: python
Short Names:
python
jython

Now that we know we can use the Jython scripting engine, let's go ahead and see how to call our hello.py script:

@Test
public void givenPythonScriptEngineIsAvailable_whenScriptInvoked_thenOutputDisplayed() throws Exception {
    StringWriter writer = new StringWriter();
    ScriptContext context = new SimpleScriptContext();
    context.setWriter(writer);

    ScriptEngineManager manager = new ScriptEngineManager();
    ScriptEngine engine = manager.getEngineByName("python");
    engine.eval(new FileReader(resolvePythonScriptPath("hello.py")), context);
    assertEquals("Should contain script output: ", "Hello Baeldung Readers!!", writer.toString().trim());
}

As we can see, it is pretty simple to work with this API. First, we begin by setting up a ScriptContext which contains a StringWriter. This will be used to store the output from the script we want to invoke.

We then use the getEngineByName method of the ScriptEngineManager class to look up and create a ScriptEngine for a given short name. In our case, we can pass python or jython which are the two short names associated with this engine.

As before, the final step is to get the output from our script and check it matches what we were expecting.

4. Jython

Continuing with Jython, we also have the possibility of embedding Python code directly into our Java code. We can do this using the PythonInterpreter class:

@Test
public void givenPythonInterpreter_whenPrintExecuted_thenOutputDisplayed() {
    try (PythonInterpreter pyInterp = new PythonInterpreter()) {
        StringWriter output = new StringWriter();
        pyInterp.setOut(output);

        pyInterp.exec("print('Hello Baeldung Readers!!')");
        assertEquals("Should contain script output: ", "Hello Baeldung Readers!!", output.toString()
          .trim());
    }
}

Using the PythonInterpreter class allows us to execute a string of Python source code via the exec method. As before, we use a StringWriter to capture the output of this execution.

Now let's see an example where we add two numbers together:

@Test
public void givenPythonInterpreter_whenNumbersAdded_thenOutputDisplayed() {
    try (PythonInterpreter pyInterp = new PythonInterpreter()) {
        pyInterp.exec("x = 10+10");
        PyObject x = pyInterp.get("x");
        assertEquals("x: ", 20, x.asInt());
    }
}

In this example, we see how we can use the get method to access the value of a variable.

In our final Jython example, we'll see what happens when an error occurs:

try (PythonInterpreter pyInterp = new PythonInterpreter()) {
    pyInterp.exec("import syds");
}

When we run this code a PyException is thrown and we'll see the same error as if we were working with native Python:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named syds

A few points we should note:

  • As PythonInterpreter implements AutoCloseable, it's good practice to use try-with-resources when working with this class
  • The PythonInterpreter class name does not imply that our Python code is interpreted. Python programs in Jython run on the JVM and are compiled to Java bytecode before execution
  • Although Jython is the Python implementation for Java, it may not contain all the same sub-packages as native Python

5. Apache Commons Exec

Another third-party library that we could consider using is Apache Commons Exec, which attempts to overcome some of the shortcomings of the Java Process API.

The commons-exec artifact is available from Maven Central:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-exec</artifactId>
    <version>1.3</version>
</dependency>

Now let's see how we can use this library:

@Test
public void givenPythonScript_whenPythonProcessExecuted_thenSuccess() 
  throws ExecuteException, IOException {
    String line = "python " + resolvePythonScriptPath("hello.py");
    CommandLine cmdLine = CommandLine.parse(line);
        
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    PumpStreamHandler streamHandler = new PumpStreamHandler(outputStream);
        
    DefaultExecutor executor = new DefaultExecutor();
    executor.setStreamHandler(streamHandler);

    int exitCode = executor.execute(cmdLine);
    assertEquals("No errors should be detected", 0, exitCode);
    assertEquals("Should contain script output: ", "Hello Baeldung Readers!!", outputStream.toString()
      .trim());
}

This example is not too dissimilar to our first example using ProcessBuilder. We create a CommandLine object for our given command. Next, we set up a stream handler to use for capturing the output from our process before executing our command.

To summarize, the main philosophy behind this library is to offer a process execution package aimed at supporting a wide range of operating systems through a consistent API.

6. Utilizing HTTP for Interoperability

Let's take a step back for a moment and, instead of trying to invoke Python directly, consider using a well-established protocol like HTTP as an abstraction layer between the two languages.

In actual fact, Python ships with a simple built-in HTTP server which we can use for sharing content or files over HTTP:

python -m http.server 9000

If we now go to http://localhost:9000, we'll see the contents listed for the directory where we launched the previous command.

Some other popular frameworks we could consider using for creating more robust Python-based web services or applications are Flask and Django.

Once we have an endpoint we can access, we can use any one of several Java HTTP libraries to invoke our Python web service/application implementation.
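
For instance, here's a minimal sketch using core Java's HttpURLConnection to call the simple server we started above (URL and port as per the previous command):

URL url = new URL("http://localhost:9000");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("GET");

// read the response body line by line
try (BufferedReader reader = new BufferedReader(
  new InputStreamReader(connection.getInputStream()))) {
    reader.lines().forEach(System.out::println);
}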

7. Conclusion

In this tutorial, we've learned about some of the most popular technologies for calling Python code from Java.

As always, the full source code of the article is available over on GitHub.

Clicking Elements in Selenium using JavaScript


1. Introduction

In this short tutorial, we're going to take a look at a simple example of how to click an element in Selenium WebDriver using JavaScript.

For our demo, we'll use JUnit and Selenium to open https://baeldung.com and search for “Selenium” articles.

2. Dependencies

First, we add the selenium-java and junit dependencies to our project in the pom.xml:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.141.59</version>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13</version>
    <scope>test</scope>
</dependency>

3. Configuration

Next, we need to configure WebDriver. In this example, we'll use its Chrome implementation:

@Before
public void setUp() {
    System.setProperty("webdriver.chrome.driver","path to chromedriver.exe");
    driver = new ChromeDriver();
}

We're using a method annotated with @Before to do the initial setup before each test. Inside, we set the webdriver.chrome.driver property to the Chrome driver location. After that, we instantiate the WebDriver object.

When the test finishes we should close the browser window. We can do it by placing the driver.close() statement in a method annotated with @After. This makes sure it'll be executed even if the test fails:

@After
public void cleanUp() {
    driver.close();
}

4. Opening the Browser

Now, we can create a test case that will do our first step – open the website:

@Test
public void whenSearchForSeleniumArticles_thenReturnNotEmptyResults() {
    driver.get("https://baeldung.com");
    String title = driver.getTitle();
    assertEquals("Baeldung | Java, Spring and Web Development tutorials", title);
}

Here, we use the driver.get() method to load the webpage. Next, we verify its title to make sure we're in the right place.

5. Clicking an Element Using JavaScript

Selenium comes with a handy WebElement#click method that invokes a click event on a given element. But in some cases, the click action isn't possible.

One example is if we want to click a disabled element. In that case, WebElement#click throws an IllegalStateException. Instead, we can use Selenium's JavaScript support.

To do this, the first thing that we'll need is the JavascriptExecutor. Since we are using the ChromeDriver implementation, we can simply cast it to what we need:

JavascriptExecutor executor = (JavascriptExecutor) driver;

After getting the JavascriptExecutor, we can use its executeScript method. The arguments are the script itself and an array of script parameters. In our case, we invoke the click method on the first argument:

executor.executeScript("arguments[0].click();", element);

Now, let's put it all together into a single method that we'll call clickElement:

private void clickElement(WebElement element) {
    JavascriptExecutor executor = (JavascriptExecutor) driver;
    executor.executeScript("arguments[0].click();", element);
}

And finally, we can add this to our test:

@Test
public void whenSearchForSeleniumArticles_thenReturnNotEmptyResults() {
    // ... load https://baeldung.com
    WebElement searchButton = driver.findElement(By.className("menu-search"));
    clickElement(searchButton);

    WebElement searchInput = driver.findElement(By.id("search"));
    searchInput.sendKeys("Selenium");

    WebElement seeSearchResultsButton = driver.findElement(By.className("btn-search"));
    clickElement(seeSearchResultsButton);
}

6. Non-Clickable Elements

One of the most common problems occurring while clicking an element using JavaScript is executing the click script before the element is clickable. In this situation, the click action won't happen but the code will continue to execute.

To overcome this issue, we have to hold back execution until the element is clickable. We can use WebDriverWait#until to wait until the button is rendered.

First, the WebDriverWait object requires two parameters: the driver and a timeout in seconds:

WebDriverWait wait = new WebDriverWait(driver, 5);

Then, we call until, giving the expected elementToBeClickable condition:

wait.until(ExpectedConditions.elementToBeClickable(By.className("menu-search")));

And once that returns successfully, we know we can proceed:

WebElement searchButton = driver.findElement(By.className("menu-search"));
clickElement(searchButton);

For more available condition methods refer to the official documentation.

7. Conclusion

In this tutorial, we've learned how to click an element in Selenium using JavaScript. As always, the source for the article is available over on GitHub.

@PropertySource with YAML Files in Spring Boot


1. Overview

In this quick tutorial, we'll show how to read a YAML properties file using the @PropertySource annotation in Spring Boot.

2. @PropertySource and YAML Format

Spring Boot has great support for externalized configuration. Also, out of the box, we can use different ways and formats to read properties in a Spring Boot application.

However, by default, @PropertySource doesn't load YAML files. This fact is explicitly mentioned in the official documentation.

So, if we want to use the @PropertySource annotation in our application, we need to stick with the standard properties files. Or we can implement the missing puzzle piece ourselves!

3. Custom PropertySourceFactory

As of Spring 4.3, @PropertySource comes with the factory attribute. We can make use of it to provide our custom implementation of the PropertySourceFactory, which will handle the YAML file processing.

This is easier than it sounds! Let's see how to do this:

public class YamlPropertySourceFactory implements PropertySourceFactory {

    @Override
    public PropertySource<?> createPropertySource(String name, EncodedResource encodedResource) 
      throws IOException {
        YamlPropertiesFactoryBean factory = new YamlPropertiesFactoryBean();
        factory.setResources(encodedResource.getResource());

        Properties properties = factory.getObject();

        return new PropertiesPropertySource(encodedResource.getResource().getFilename(), properties);
    }
}

As we can see, it's enough to implement a single createPropertySource method.

In our custom implementation, first, we used the YamlPropertiesFactoryBean to convert the YAML resource into a java.util.Properties object.

Then, we simply returned a new instance of the PropertiesPropertySource, which is a wrapper that allows Spring to read the parsed properties.

4. @PropertySource and YAML in Action

Let's now put all the pieces together and see how to use them in practice.

First, let's create a simple YAML file – foo.yml:

yaml:
  name: foo
  aliases:
    - abc
    - xyz

Next, let's create a properties class with @ConfigurationProperties and use our custom YamlPropertySourceFactory:

@Configuration
@ConfigurationProperties(prefix = "yaml")
@PropertySource(value = "classpath:foo.yml", factory = YamlPropertySourceFactory.class)
public class YamlFooProperties {

    private String name;

    private List<String> aliases;

    // standard getter and setters
}

And finally, let's verify that the properties are properly injected:

@RunWith(SpringRunner.class)
@SpringBootTest
public class YamlFooPropertiesIntegrationTest {

    @Autowired
    private YamlFooProperties yamlFooProperties;

    @Test
    public void whenFactoryProvidedThenYamlPropertiesInjected() {
        assertThat(yamlFooProperties.getName()).isEqualTo("foo");
        assertThat(yamlFooProperties.getAliases()).containsExactly("abc", "xyz");
    }
}

5. Conclusion

To sum up, in this quick tutorial, we first showed how easy it is to create a custom PropertySourceFactory. After that, we presented how to pass this custom implementation to the @PropertySource using its factory attribute.

Consequently, we were able to successfully load the YAML properties file into our Spring Boot application.

As usual, all the code examples are available over on GitHub.

CQRS and Event Sourcing in Java


1. Introduction

In this tutorial, we'll explore the basic concepts of Command Query Responsibility Segregation (CQRS) and Event Sourcing design patterns.

While often cited as complementary patterns, we'll try to understand them separately and finally see how they complement each other. There are several tools and frameworks to help adopt these patterns, but we'll create a simple application in Java to understand the basics.

2. Basic Concepts

We'll first understand these patterns theoretically before we attempt to implement them. Also, as they stand quite well as individual patterns, we'll try to understand them without mixing them.

Please note that these patterns are often used together in an enterprise application. In this regard, they also benefit from several other enterprise architecture patterns. We'll discuss some of them as we go along.

2.1. Event Sourcing

Event Sourcing gives us a new way of persisting application state as an ordered sequence of events. We can selectively query these events and reconstruct the state of the application at any point in time. Of course, to make this work, we need to reimagine every change to the state of the application as events:

These events are facts that have happened and cannot be altered — in other words, they must be immutable. Recreating the application state is just a matter of replaying all the events.

Note that this also opens up the possibility to replay events selectively, replay some events in reverse, and much more. As a consequence, we can treat the application state itself as a second-class citizen, with the event log as our primary source of truth.

2.2. CQRS

Put simply, CQRS is about segregating the command and query sides of the application architecture. CQRS is based on the Command Query Separation (CQS) principle, which was suggested by Bertrand Meyer. CQS suggests that we divide the operations on domain objects into two distinct categories: Queries and Commands:

Queries return a result and do not change the observable state of a system. Commands change the state of the system but do not necessarily return a value.

We achieve this by cleanly separating the Command and Query sides of the domain model. We can take this a step further, splitting the write and read sides of the data store as well, of course, by introducing a mechanism to keep them in sync.

3. A Simple Application

We'll begin by describing a simple application in Java that builds a domain model.

The application will offer CRUD operations on the domain model and will also provide persistence for the domain objects. CRUD stands for Create, Read, Update, and Delete, which are the basic operations that we can perform on a domain object.

We'll use the same application to introduce Event Sourcing and CQRS in later sections.

In the process, we'll leverage some of the concepts from Domain-Driven Design (DDD) in our example.

DDD addresses the analysis and design of software that relies on complex domain-specific knowledge. It builds upon the idea that software systems need to be based on a well-developed model of a domain. DDD was first prescribed by Eric Evans as a catalog of patterns. We'll be using some of these patterns to build our example.

3.1. Application Overview

Creating a user profile and managing it is a typical requirement in many applications. We'll define a simple domain model capturing the user profile along with a persistence:

As we can see, our domain model is normalized and exposes several CRUD operations. These operations are just for demonstration and can be simple or complex depending upon the requirements. Moreover, the persistence repository here can be in-memory or use a database instead.

3.2. Application Implementation

First, we'll have to create Java classes representing our domain model. This is a fairly simple domain model and may not even require the complexities of design patterns like Event Sourcing and CQRS. However, we'll keep this simple to focus on understanding the basics:

public class User {
    private String userid;
    private String firstName;
    private String lastName;
    private Set<Contact> contacts;
    private Set<Address> addresses;
    // getters and setters
}

public class Contact {
    private String type;
    private String detail;
    // getters and setters
}

public class Address {
    private String city;
    private String state;
    private String postcode;
    // getters and setters
}

Also, we'll define a simple in-memory repository for the persistence of our application state. Of course, this does not add any value but suffices for our demonstration later:

public class UserRepository {
    private Map<String, User> store = new HashMap<>();
}
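
The UserService below calls addUser and getUser on this repository; a minimal sketch of those accessors over the map-backed store:

public void addUser(String userId, User user) {
    store.put(userId, user);
}

public User getUser(String userId) {
    return store.get(userId);
}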

Now, we'll define a service to expose typical CRUD operations on our domain model:

public class UserService {
    private UserRepository repository;
    public UserService(UserRepository repository) {
        this.repository = repository;
    }

    public void createUser(String userId, String firstName, String lastName) {
        User user = new User(userId, firstName, lastName);
        repository.addUser(userId, user);
    }

    public void updateUser(String userId, Set<Contact> contacts, Set<Address> addresses) {
        User user = repository.getUser(userId);
        user.setContacts(contacts);
        user.setAddresses(addresses);
        repository.addUser(userId, user);
    }

    public Set<Contact> getContactByType(String userId, String contactType) {
        User user = repository.getUser(userId);
        Set<Contact> contacts = user.getContacts();
        return contacts.stream()
          .filter(c -> c.getType().equals(contactType))
          .collect(Collectors.toSet());
    }

    public Set<Address> getAddressByRegion(String userId, String state) {
        User user = repository.getUser(userId);
        Set<Address> addresses = user.getAddresses();
        return addresses.stream()
          .filter(a -> a.getState().equals(state))
          .collect(Collectors.toSet());
    }
}

That's pretty much what we have to do to set up our simple application. This is far from being production-ready code, but it exposes some of the important points that we're going to deliberate on later in this tutorial.

3.3. Problems in This Application

Before we proceed any further in our discussion with Event Sourcing and CQRS, it's worthwhile to discuss the problems with the current solution. After all, we'll be addressing the same problems by applying these patterns!

Out of the many problems that we may notice here, we'd like to focus on just two of them:

  • Domain Model: The read and write operations are happening over the same domain model. While this is not a problem for a simple domain model like this, it may worsen as the domain model gets complex. We may need to optimize our domain model and the underlying storage for them to suit the individual needs of the read and write operations.
  • Persistence: The persistence we have for our domain objects stores only the latest state of the domain model. While this is sufficient for most situations, it makes some tasks challenging. For instance, if we have to perform a historical audit of how the domain object has changed state, it's not possible here. We have to supplement our solution with some audit logs to achieve this.

4. Introducing CQRS

We'll begin addressing the first problem we discussed in the last section by introducing the CQRS pattern in our application. As part of this, we'll separate the domain model and its persistence to handle write and read operations. Let's see how the CQRS pattern restructures our application:

The diagram here explains how we intend to cleanly separate our application architecture into write and read sides. However, we've introduced quite a few new components that we must understand better. Please note that these are not strictly related to CQRS, but CQRS greatly benefits from them:

  • Aggregate/Aggregator:

Aggregate is a pattern described in Domain-Driven Design (DDD) that logically groups different entities by binding entities to an aggregate root. The aggregate pattern provides transactional consistency between the entities.

CQRS naturally benefits from the aggregate pattern, which groups the write domain model, providing transactional guarantees. Aggregates normally hold a cached state for better performance but can work perfectly without it.

  • Projection/Projector:

Projection is another important pattern which greatly benefits CQRS. Projection essentially means representing domain objects in different shapes and structures.

These projections of original data are read-only and highly optimized to provide an enhanced read experience. We may again decide to cache projections for better performance, but that's not a necessity.

4.1. Implementing Write Side of Application

Let's first implement the write side of the application.

We'll begin by defining the required commands. A command is an intent to mutate the state of the domain model. Whether it succeeds or not depends on the business rules that we configure.

Let's see our commands:

public class CreateUserCommand {
    private String userId;
    private String firstName;
    private String lastName;
}

public class UpdateUserCommand {
    private String userId;
    private Set<Address> addresses;
    private Set<Contact> contacts;
}

These are pretty simple classes that hold the data we intend to mutate.

Next, we define an aggregate that's responsible for taking commands and handling them. Aggregates may accept or reject a command:

public class UserAggregate {
    private UserWriteRepository writeRepository;
    public UserAggregate(UserWriteRepository repository) {
        this.writeRepository = repository;
    }

    public User handleCreateUserCommand(CreateUserCommand command) {
        User user = new User(command.getUserId(), command.getFirstName(), command.getLastName());
        writeRepository.addUser(user.getUserid(), user);
        return user;
    }

    public User handleUpdateUserCommand(UpdateUserCommand command) {
        User user = writeRepository.getUser(command.getUserId());
        user.setAddresses(command.getAddresses());
        user.setContacts(command.getContacts());
        writeRepository.addUser(user.getUserid(), user);
        return user;
    }
}

The aggregate uses a repository to retrieve the current state and persist any changes to it. Moreover, it may store the current state locally to avoid the round-trip cost to a repository while processing every command.

Finally, we need a repository to hold the state of the domain model. This will typically be a database or other durable store, but here we'll simply replace it with an in-memory data structure:

public class UserWriteRepository {
    private Map<String, User> store = new HashMap<>();
    // accessors and mutators
}

This concludes the write side of our application.

4.2. Implementing Read Side of Application

Let's switch over to the read side of the application now. We'll begin by defining the read side of the domain model:

public class UserAddress {
    private Map<String, Set<Address>> addressByRegion = new HashMap<>();
}

public class UserContact {
    private Map<String, Set<Contact>> contactByType = new HashMap<>();
}

If we recall our read operations, it's not difficult to see that these classes map perfectly well to handling them. That is the beauty of creating a domain model centered around the queries we have.

Next, we'll define the read repository. Again, we'll just use an in-memory data structure, even though this will be a more durable data store in real applications:

public class UserReadRepository {
    private Map<String, UserAddress> userAddress = new HashMap<>();
    private Map<String, UserContact> userContact = new HashMap<>();
    // accessors and mutators
}

Now, we'll define the required queries we have to support. A query is an intent to get data — it may not necessarily result in data.

Let's see our queries:

public class ContactByTypeQuery {
    private String userId;
    private String contactType;
}

public class AddressByRegionQuery {
    private String userId;
    private String state;
}

Again, these are simple Java classes holding the data to define a query.

What we need now is a projection that can handle these queries:

public class UserProjection {
    private UserReadRepository readRepository;
    public UserProjection(UserReadRepository readRepository) {
        this.readRepository = readRepository;
    }

    public Set<Contact> handle(ContactByTypeQuery query) {
        UserContact userContact = readRepository.getUserContact(query.getUserId());
        return userContact.getContactByType()
          .get(query.getContactType());
    }

    public Set<Address> handle(AddressByRegionQuery query) {
        UserAddress userAddress = readRepository.getUserAddress(query.getUserId());
        return userAddress.getAddressByRegion()
          .get(query.getState());
    }
}

The projection here uses the read repository we defined earlier to address the queries we have. This pretty much concludes the read side of our application as well.

4.3. Synchronizing Read and Write Data

One piece of this puzzle is still unsolved: there's nothing to synchronize our write and read repositories.

This is where we'll need something known as a projector. A projector has the logic to project the write domain model into the read domain model.

There are much more sophisticated ways to handle this, but we'll keep it relatively simple:

public class UserProjector {
    UserReadRepository readRepository = new UserReadRepository();
    public UserProjector(UserReadRepository readRepository) {
        this.readRepository = readRepository;
    }

    public void project(User user) {
        UserContact userContact = Optional.ofNullable(
          readRepository.getUserContact(user.getUserid()))
            .orElse(new UserContact());
        Map<String, Set<Contact>> contactByType = new HashMap<>();
        for (Contact contact : user.getContacts()) {
            Set<Contact> contacts = Optional.ofNullable(
              contactByType.get(contact.getType()))
                .orElse(new HashSet<>());
            contacts.add(contact);
            contactByType.put(contact.getType(), contacts);
        }
        userContact.setContactByType(contactByType);
        readRepository.addUserContact(user.getUserid(), userContact);

        UserAddress userAddress = Optional.ofNullable(
          readRepository.getUserAddress(user.getUserid()))
            .orElse(new UserAddress());
        Map<String, Set<Address>> addressByRegion = new HashMap<>();
        for (Address address : user.getAddresses()) {
            Set<Address> addresses = Optional.ofNullable(
              addressByRegion.get(address.getState()))
                .orElse(new HashSet<>());
            addresses.add(address);
            addressByRegion.put(address.getState(), addresses);
        }
        userAddress.setAddressByRegion(addressByRegion);
        readRepository.addUserAddress(user.getUserid(), userAddress);
    }
}

This is a rather crude way of doing it, but it gives us enough insight into what's needed for CQRS to work. Moreover, it's not necessary to have the read and write repositories sitting in different physical stores. A distributed system has its own share of problems!
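
To make the flow concrete, here's a sketch of how these pieces might be wired together, using the constructors shown above (the command and query values, and the assumed command/query constructors, are purely illustrative):

UserWriteRepository writeRepository = new UserWriteRepository();
UserReadRepository readRepository = new UserReadRepository();

UserAggregate userAggregate = new UserAggregate(writeRepository);
UserProjector userProjector = new UserProjector(readRepository);
UserProjection userProjection = new UserProjection(readRepository);

// write side: handle a command, then project the new state to the read side
User user = userAggregate.handleCreateUserCommand(
  new CreateUserCommand("user1", "Tom", "Sawyer"));
userProjector.project(user);

// read side: serve a query from the read model
Set<Contact> contacts = userProjection.handle(
  new ContactByTypeQuery("user1", "EMAIL"));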

Please note that it's not convenient to project the current state of the write domain into different read domain models. The example we've taken here is fairly simple, hence we don't see the problem.

However, as the write and read models get more complex, it'll get increasingly difficult to project. We can address this through event-based projection instead of state-based projection with Event Sourcing. We'll see how to achieve this later in the tutorial.

4.4. Benefits and Drawbacks of CQRS

We discussed the CQRS pattern and learned how to introduce it in a typical application. We've categorically tried to address the issue related to the rigidity of the domain model in handling both read and write.

Let's now discuss some of the other benefits that CQRS brings to an application architecture:

  • CQRS provides us a convenient way to select separate domain models appropriate for write and read operations; we don't have to create a complex domain model supporting both
  • It helps us to select repositories that are individually suited for handling the complexities of the read and write operations, like high throughput for writing and low latency for reading
  • It naturally complements event-based programming models in a distributed architecture by providing a separation of concerns as well as simpler domain models

However, this does not come for free. As is evident from this simple example, CQRS adds considerable complexity to the architecture. It may not be suitable or worth the pain in many scenarios:

  • Only a complex domain model can benefit from the added complexity of this pattern; a simple domain model can be managed without all this
  • Naturally leads to code duplication to some extent, which is an acceptable evil compared to the gain it leads us to; however, individual judgment is advised
  • Separate repositories lead to problems of consistency, and it's difficult to keep the write and read repositories in perfect sync always; we often have to settle for eventual consistency

5. Introducing Event Sourcing

Next, we'll address the second problem we discussed in our simple application. If we recall, it was related to our persistence repository.

We'll introduce Event Sourcing to address this problem. Event Sourcing dramatically changes the way we think about storing application state.

Let's see how it changes our repository:

Here, we've structured our repository to store an ordered list of domain events. Every change to the domain object is considered an event. How coarse- or fine-grained an event should be is a matter of domain design. The important things to consider here are that events have a temporal order and are immutable.

5.1. Implementing Events and Event Store

The fundamental objects in event-driven applications are events, and event sourcing is no different. As we've seen earlier, events represent a specific change in the state of the domain model at a specific point in time. So, we'll begin by defining the base event for our simple application:

public abstract class Event {
    public final UUID id = UUID.randomUUID();
    public final Date created = new Date();
}

This just ensures that every event we generate in our application gets a unique identifier and a creation timestamp. These are necessary to process events further.

Of course, there can be several other attributes that may interest us, like an attribute to establish the provenance of an event.

Next, let's create some domain-specific events inheriting from this base event:

public class UserCreatedEvent extends Event {
    private String userId;
    private String firstName;
    private String lastName;
}

public class UserContactAddedEvent extends Event {
    private String contactType;
    private String contactDetails;
}

public class UserContactRemovedEvent extends Event {
    private String contactType;
    private String contactDetails;
}

public class UserAddressAddedEvent extends Event {
    private String city;
    private String state;
    private String postCode;
}

public class UserAddressRemovedEvent extends Event {
    private String city;
    private String state;
    private String postCode;
}

These are simple POJOs in Java containing the details of the domain event. However, the important thing to note here is the granularity of events.

We could've created a single event for user updates, but instead we decided to create separate events for the addition and removal of addresses and contacts. The choice depends on what makes it most efficient to work with the domain model.

Now, naturally, we need a repository to hold our domain events:

public class EventStore {
    private Map<String, List<Event>> store = new HashMap<>();
}

This is a simple in-memory data structure to hold our domain events. In reality, there are several solutions specially created to handle event data, like Apache Druid. There are also many general-purpose distributed data stores capable of handling event sourcing, including Kafka and Cassandra.
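
The service in the next section appends to and reads from this store; a minimal sketch of the accessors we're assuming:

public void addEvent(String userId, Event event) {
    store.computeIfAbsent(userId, id -> new ArrayList<>())
      .add(event);
}

public List<Event> getEvents(String userId) {
    return store.getOrDefault(userId, new ArrayList<>());
}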

5.2. Generating and Consuming Events

Now, our service that handled all the CRUD operations will change. Instead of updating a moving domain state, it will append domain events. It will also use the same domain events to respond to queries.

Let's see how we can achieve this:

public class UserService {
    private EventStore repository;
    public UserService(EventStore repository) {
        this.repository = repository;
    }

    public void createUser(String userId, String firstName, String lastName) {
        repository.addEvent(userId, new UserCreatedEvent(userId, firstName, lastName));
    }

    public void updateUser(String userId, Set<Contact> contacts, Set<Address> addresses) {
        User user = UserUtility.recreateUserState(repository, userId);
        user.getContacts().stream()
          .filter(c -> !contacts.contains(c))
          .forEach(c -> repository.addEvent(
            userId, new UserContactRemovedEvent(c.getType(), c.getDetail())));
        contacts.stream()
          .filter(c -> !user.getContacts().contains(c))
          .forEach(c -> repository.addEvent(
            userId, new UserContactAddedEvent(c.getType(), c.getDetail())));
        user.getAddresses().stream()
          .filter(a -> !addresses.contains(a))
          .forEach(a -> repository.addEvent(
            userId, new UserAddressRemovedEvent(a.getCity(), a.getState(), a.getPostcode())));
        addresses.stream()
          .filter(a -> !user.getAddresses().contains(a))
          .forEach(a -> repository.addEvent(
            userId, new UserAddressAddedEvent(a.getCity(), a.getState(), a.getPostcode())));
    }

    public Set<Contact> getContactByType(String userId, String contactType) {
        User user = UserUtility.recreateUserState(repository, userId);
        return user.getContacts().stream()
          .filter(c -> c.getType().equals(contactType))
          .collect(Collectors.toSet());
    }

    public Set<Address> getAddressByRegion(String userId, String state) throws Exception {
        User user = UserUtility.recreateUserState(repository, userId);
        return user.getAddresses().stream()
          .filter(a -> a.getState().equals(state))
          .collect(Collectors.toSet());
    }
}

Please note that we're generating several events as part of handling the update user operation here. Also, it's interesting to note how we are generating the current state of the domain model by replaying all the domain events generated so far.

Of course, in a real application, this is not a feasible strategy, and we'll have to maintain a local cache to avoid generating the state every time. There are other strategies like snapshots and roll-up in the event repository that can speed up the process.
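
UserUtility.recreateUserState isn't shown in the listing above. Here's a minimal sketch of how it might replay the events; it assumes getters on the event classes, a User constructor like the one used earlier, and equals/hashCode implementations on Contact and Address:

public class UserUtility {

    public static User recreateUserState(EventStore store, String userId) {
        User user = null;
        for (Event event : store.getEvents(userId)) {
            if (event instanceof UserCreatedEvent) {
                UserCreatedEvent e = (UserCreatedEvent) event;
                user = new User(e.getUserId(), e.getFirstName(), e.getLastName());
                user.setContacts(new HashSet<>());
                user.setAddresses(new HashSet<>());
            } else if (event instanceof UserContactAddedEvent) {
                UserContactAddedEvent e = (UserContactAddedEvent) event;
                user.getContacts().add(new Contact(e.getContactType(), e.getContactDetails()));
            } else if (event instanceof UserContactRemovedEvent) {
                UserContactRemovedEvent e = (UserContactRemovedEvent) event;
                // removal relies on Contact implementing equals and hashCode
                user.getContacts().remove(new Contact(e.getContactType(), e.getContactDetails()));
            } else if (event instanceof UserAddressAddedEvent) {
                UserAddressAddedEvent e = (UserAddressAddedEvent) event;
                user.getAddresses().add(new Address(e.getCity(), e.getState(), e.getPostCode()));
            } else if (event instanceof UserAddressRemovedEvent) {
                UserAddressRemovedEvent e = (UserAddressRemovedEvent) event;
                // removal relies on Address implementing equals and hashCode
                user.getAddresses().remove(new Address(e.getCity(), e.getState(), e.getPostCode()));
            }
        }
        return user;
    }
}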

This concludes our effort to introduce event sourcing in our simple application.

5.3. Benefits and Drawbacks of Event Sourcing

Now we've successfully adopted an alternate way of storing domain objects using event sourcing. Event sourcing is a powerful pattern and brings a lot of benefits to an application architecture if used appropriately:

  • Makes write operations much faster as there is no read, update, and write required; write is merely appending an event to a log
  • Removes the object-relational impedance mismatch and, hence, the need for complex mapping tools; of course, we still need to recreate the objects from the events
  • Happens to provide an audit log as a by-product, which is completely reliable; we can debug exactly how the state of a domain model has changed
  • It makes it possible to support temporal queries and achieve time-travel (the domain state at a point in the past)!
  • It's a natural fit for designing loosely coupled components in a microservices architecture that communicate asynchronously by exchanging messages

However, as always, even event sourcing is not a silver bullet. It does force us to adopt a dramatically different way to store data. This may not prove to be useful in several cases:

  • There's a learning curve associated and a shift in mindset required to adopt event sourcing; it's not intuitive, to begin with
  • It makes it rather difficult to handle typical queries as we need to recreate the state unless we keep the state in the local cache
  • Although it can be applied to any domain model, it's more appropriate for the event-based model in an event-driven architecture

6. CQRS with Event Sourcing

Now that we have seen how to individually introduce Event Sourcing and CQRS to our simple application, it's time to bring them together. It should be fairly intuitive now that these patterns can greatly benefit from each other. However, we'll make it more explicit in this section.

Let's first see how the application architecture brings them together:

This should come as no surprise by now. We've replaced the write-side repository with an event store, while the read-side repository stays the same.

Please note that this is not the only way to use Event Sourcing and CQRS in the application architecture. We can be quite innovative and use these patterns together with other patterns and come up with several architecture options.

What's important here is to ensure that we use them to manage complexity, not simply to increase it further!

6.1. Bringing CQRS and Event Sourcing Together

Having implemented Event Sourcing and CQRS individually, it should not be that difficult to understand how we can bring them together.

We'll begin with the application where we introduced CQRS and just make relevant changes to bring event sourcing into the fold. We'll also leverage the same events and event store that we defined in our application where we introduced event sourcing.

There are just a few changes. We'll begin by changing the aggregate to generate events instead of updating state:

public class UserAggregate {
    private EventStore writeRepository;
    public UserAggregate(EventStore repository) {
        this.writeRepository = repository;
    }

    public List<Event> handleCreateUserCommand(CreateUserCommand command) {
        UserCreatedEvent event = new UserCreatedEvent(command.getUserId(), 
          command.getFirstName(), command.getLastName());
        writeRepository.addEvent(command.getUserId(), event);
        return Arrays.asList(event);
    }

    public List<Event> handleUpdateUserCommand(UpdateUserCommand command) {
        User user = UserUtility.recreateUserState(writeRepository, command.getUserId());
        List<Event> events = new ArrayList<>();

        List<Contact> contactsToRemove = user.getContacts().stream()
          .filter(c -> !command.getContacts().contains(c))
          .collect(Collectors.toList());
        for (Contact contact : contactsToRemove) {
            UserContactRemovedEvent contactRemovedEvent = new UserContactRemovedEvent(contact.getType(), 
              contact.getDetail());
            events.add(contactRemovedEvent);
            writeRepository.addEvent(command.getUserId(), contactRemovedEvent);
        }
        List<Contact> contactsToAdd = command.getContacts().stream()
          .filter(c -> !user.getContacts().contains(c))
          .collect(Collectors.toList());
        for (Contact contact : contactsToAdd) {
            UserContactAddedEvent contactAddedEvent = new UserContactAddedEvent(contact.getType(), 
              contact.getDetail());
            events.add(contactAddedEvent);
            writeRepository.addEvent(command.getUserId(), contactAddedEvent);
        }

        // similarly process addressesToRemove
        // similarly process addressesToAdd

        return events;
    }
}

The only other change required is in the projector, which now needs to process events instead of domain object states:

public class UserProjector {
    private UserReadRepository readRepository;

    public UserProjector(UserReadRepository readRepository) {
        this.readRepository = readRepository;
    }

    public void project(String userId, List<Event> events) {
        for (Event event : events) {
            if (event instanceof UserAddressAddedEvent)
                apply(userId, (UserAddressAddedEvent) event);
            if (event instanceof UserAddressRemovedEvent)
                apply(userId, (UserAddressRemovedEvent) event);
            if (event instanceof UserContactAddedEvent)
                apply(userId, (UserContactAddedEvent) event);
            if (event instanceof UserContactRemovedEvent)
                apply(userId, (UserContactRemovedEvent) event);
        }
    }

    public void apply(String userId, UserAddressAddedEvent event) {
        Address address = new Address(
          event.getCity(), event.getState(), event.getPostCode());
        UserAddress userAddress = Optional.ofNullable(
          readRepository.getUserAddress(userId))
            .orElse(new UserAddress());
        Set<Address> addresses = Optional.ofNullable(userAddress.getAddressByRegion()
          .get(address.getState()))
          .orElse(new HashSet<>());
        addresses.add(address);
        userAddress.getAddressByRegion()
          .put(address.getState(), addresses);
        readRepository.addUserAddress(userId, userAddress);
    }

    public void apply(String userId, UserAddressRemovedEvent event) {
        Address address = new Address(
          event.getCity(), event.getState(), event.getPostCode());
        UserAddress userAddress = readRepository.getUserAddress(userId);
        if (userAddress != null) {
            Set<Address> addresses = userAddress.getAddressByRegion()
              .get(address.getState());
            if (addresses != null)
                addresses.remove(address);
            readRepository.addUserAddress(userId, userAddress);
        }
    }

    public void apply(String userId, UserContactAddedEvent event) {
        // Similarly handle UserContactAddedEvent event
    }

    public void apply(String userId, UserContactRemovedEvent event) {
        // Similarly handle UserContactRemovedEvent event
    }
}

If we recall the problems we discussed while handling state-based projection, event-based projection is a potential solution to them.

The event-based projection is rather convenient and easier to implement. All we have to do is process all occurring domain events and apply them to all read domain models. Typically, in an event-based application, the projector would listen to domain events it's interested in and would not rely on someone calling it directly.

This is pretty much all we have to do to bring Event Sourcing and CQRS together in our simple application.
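To make the flow concrete, here's a minimal sketch of how the pieces could be wired together; the CreateUserCommand constructor and the no-argument EventStore and UserReadRepository constructors are assumptions based on the snippets above:

EventStore writeRepository = new EventStore();
UserReadRepository readRepository = new UserReadRepository();

UserAggregate aggregate = new UserAggregate(writeRepository);
UserProjector projector = new UserProjector(readRepository);

// handle a command on the write side, then feed the resulting
// events to the projector to update the read side
String userId = UUID.randomUUID().toString();
List<Event> events = aggregate.handleCreateUserCommand(
  new CreateUserCommand(userId, "Tom", "Sawyer"));
projector.project(userId, events);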

7. Conclusion

In this tutorial, we discussed the basics of Event Sourcing and CQRS design patterns. We developed a simple application and applied these patterns individually to it.

In the process, we understood the advantages they bring and the drawbacks they present. Finally, we understood why and how to incorporate both of these patterns together in our application.

The simple application we've discussed in this tutorial does not even come close to justifying the need for CQRS and Event Sourcing. Our focus was to understand the basic concepts, hence, the example was trivial. But as mentioned before, the benefit of these patterns can only be realized in applications that have a reasonably complex domain model.

As usual, the source code for this article can be found over on GitHub.

Introduction to Exchanger in Java


1. Overview

In this tutorial, we'll look into java.util.concurrent.Exchanger<T>. This works as a common point for two threads in Java to exchange objects between them.

2. Introduction to Exchanger

The Exchanger class in Java can be used to share objects of type T between two threads. The class provides only a single overloaded method, exchange(T t).

When invoked, exchange waits for the other thread in the pair to call it as well. At this point, the second thread finds that the first thread is waiting with its object. The threads exchange the objects they are holding, the exchange is signaled, and both calls return.

Let's look into an example to understand message exchange between two threads with Exchanger:

@Test
public void givenThreads_whenMessageExchanged_thenCorrect() {
    Exchanger<String> exchanger = new Exchanger<>();

    Runnable taskA = () -> {
        try {
            String message = exchanger.exchange("from A");
            assertEquals("from B", message);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    };

    Runnable taskB = () -> {
        try {
            String message = exchanger.exchange("from B");
            assertEquals("from A", message);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    };
    CompletableFuture.allOf(
      runAsync(taskA), runAsync(taskB)).join();
}

Here, we have the two threads exchanging messages between each other using the common exchanger. Let's see an example where we exchange an object from the main thread with a new thread:

@Test
public void givenThread_WhenExchangedMessage_thenCorrect() throws InterruptedException {
    Exchanger<String> exchanger = new Exchanger<>();

    Runnable runner = () -> {
        try {
            String message = exchanger.exchange("from runner");
            assertEquals("to runner", message);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    };
    CompletableFuture<Void> result 
      = CompletableFuture.runAsync(runner);
    String msg = exchanger.exchange("to runner");
    assertEquals("from runner", msg);
    result.join();
}

Note that we need to start the runner thread first and only later call exchange() in the main thread.

Also, note that the first thread's call may time out if the second thread does not reach the exchange point in time. How long the first thread should wait can be controlled using the overloaded exchange(T t, long timeout, TimeUnit timeUnit).
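For instance, here's a minimal sketch of the timed variant, assuming JUnit 5's assertThrows is statically imported; the exchange fails with a TimeoutException because no partner thread ever arrives:

@Test
public void givenNoPartnerThread_whenTimedExchange_thenTimesOut() {
    Exchanger<String> exchanger = new Exchanger<>();

    // no second thread calls exchange, so the timed call gives up
    assertThrows(TimeoutException.class,
      () -> exchanger.exchange("from main", 100, TimeUnit.MILLISECONDS));
}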

3. No GC Data Exchange

Exchanger can be used to create pipeline-style patterns, passing data from one thread to another. In this section, we'll create a simple pipeline of threads that continuously pass data between each other:

@Test
public void givenData_whenPassedThrough_thenCorrect() throws InterruptedException {

    Exchanger<Queue<String>> readerExchanger = new Exchanger<>();
    Exchanger<Queue<String>> writerExchanger = new Exchanger<>();

    Runnable reader = () -> {
        Queue<String> readerBuffer = new ConcurrentLinkedQueue<>();
        while (true) {
            readerBuffer.add(UUID.randomUUID().toString());
            if (readerBuffer.size() >= BUFFER_SIZE) {
                readerBuffer = readerExchanger.exchange(readerBuffer);
            }
        }
    };

    Runnable processor = () -> {
        Queue<String> processorBuffer = new ConcurrentLinkedQueue<>();
        Queue<String> writerBuffer = new ConcurrentLinkedQueue<>();
        processorBuffer = readerExchanger.exchange(processorBuffer);
        while (true) {
            writerBuffer.add(processorBuffer.poll());
            if (processorBuffer.isEmpty()) {
                processorBuffer = readerExchanger.exchange(processorBuffer);
                writerBuffer = writerExchanger.exchange(writerBuffer);
            }
        }
    };

    Runnable writer = () -> {
        Queue<String> writerBuffer = new ConcurrentLinkedQueue<>();
        writerBuffer = writerExchanger.exchange(writerBuffer);
        while (true) {
            System.out.println(writerBuffer.poll());
            if (writerBuffer.isEmpty()) {
                writerBuffer = writerExchanger.exchange(writerBuffer);
            }
        }
    };
    CompletableFuture.allOf(
      runAsync(reader), 
      runAsync(processor),
      runAsync(writer)).join();
}

Here, we have three threads: reader, processor, and writer. Together, they work as a single pipeline exchanging data between them.

The readerExchanger is shared between the reader and the processor thread, while the writerExchanger is shared between the processor and the writer thread.

Note that the example here is only for demonstration. We must be careful while creating infinite loops with while(true). Also, to keep the code readable, we've omitted some exception handling.

This pattern of exchanging data while reusing the buffers results in less garbage collection. The exchange method returns the same queue instances, so no garbage is created for these objects. Unlike a blocking queue, the exchanger does not create any nodes or objects to hold and share data.

Creating such a pipeline is similar to the Disruptor pattern, with a key difference: the Disruptor pattern supports multiple producers and consumers, while an exchanger can only be used between one pair of a producer and a consumer.

4. Conclusion

So, we have learned what Exchanger<T> is in Java, how it works, and we have seen how to use the Exchanger class. Also, we created a pipeline and demonstrated GC-less data exchange between threads.

As always, the code is available over on GitHub.


LinkedBlockingQueue vs ConcurrentLinkedQueue


1. Introduction

LinkedBlockingQueue and ConcurrentLinkedQueue are the two most frequently used concurrent queues in Java. Although both queues are often used as concurrent data structures, there are subtle differences in their characteristics and behavior.

In this short tutorial, we'll discuss both of these queues and explain their similarities and differences.

2. LinkedBlockingQueue

The LinkedBlockingQueue is an optionally-bounded blocking queue implementation, meaning that the queue size can be specified if needed.

Let's create a LinkedBlockingQueue which can contain up to 100 elements:

BlockingQueue<Integer> boundedQueue = new LinkedBlockingQueue<>(100);

We can also create an unbounded LinkedBlockingQueue just by not specifying the size:

BlockingQueue<Integer> unboundedQueue = new LinkedBlockingQueue<>();

An unbounded queue implies that the size of the queue is not specified while creating it. Therefore, the queue can grow dynamically as elements are added. However, if there is no memory left, the queue throws a java.lang.OutOfMemoryError.

We can create a LinkedBlockingQueue from an existing collection as well:

Collection<Integer> listOfNumbers = Arrays.asList(1,2,3,4,5);
BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(listOfNumbers);

The LinkedBlockingQueue class implements the BlockingQueue interface, which provides the blocking nature to it.

A blocking queue blocks the accessing thread if it is full (when the queue is bounded) or empty. If the queue is full, adding a new element blocks the accessing thread until space becomes available. Similarly, if the queue is empty, retrieving an element blocks the calling thread:

ExecutorService executorService = Executors.newFixedThreadPool(1);
LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
executorService.submit(() -> {
    try {
        queue.take();
    } catch (InterruptedException e) {
        // exception handling
    }
});

In the above code snippet, we are accessing an empty queue. Therefore, the take method blocks the calling thread.

The blocking feature of the LinkedBlockingQueue comes with a cost: every put or take operation is lock-contended between the producer and consumer threads. Therefore, in scenarios with many producers and consumers, put and take actions could be slower.
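For example, here's a minimal sketch, inside a method that declares throws InterruptedException, of a put blocking on a full bounded queue:

BlockingQueue<Integer> boundedQueue = new LinkedBlockingQueue<>(1);
boundedQueue.put(1); // fills the queue to capacity

ExecutorService executorService = Executors.newFixedThreadPool(1);
executorService.submit(() -> {
    try {
        boundedQueue.put(2); // blocks until a slot frees up
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});

boundedQueue.take(); // frees a slot, letting the pending put complete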

3. ConcurrentLinkedQueue

A ConcurrentLinkedQueue is an unbounded, thread-safe, and non-blocking queue.

Let's create an empty ConcurrentLinkedQueue:

ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();

We can create a ConcurrentLinkedQueue from an existing collection as well:

Collection<Integer> listOfNumbers = Arrays.asList(1,2,3,4,5);
ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>(listOfNumbers);

Unlike a LinkedBlockingQueue, a ConcurrentLinkedQueue is a non-blocking queue. Thus, it does not block a thread once the queue is empty. Instead, it returns null. Since it's unbounded, it'll throw a java.lang.OutOfMemoryError if there's no extra memory to add new elements.

Apart from being non-blocking, a ConcurrentLinkedQueue has additional functionality.

In any producer-consumer scenario, consumers will not contend with producers; however, multiple producers will contend with one another:

int element = 1;
ExecutorService executorService = Executors.newFixedThreadPool(2);
ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();

Runnable offerTask = () -> queue.offer(element);

Callable<Integer> pollTask = () -> {
  while (queue.peek() != null) {
    return queue.poll().intValue();
  }
  return null;
};

executorService.submit(offerTask);
Future<Integer> returnedElement = executorService.submit(pollTask);
assertThat(returnedElement.get().intValue(), is(equalTo(element)));

The first task, offerTask, adds an element to the queue, and the second task, pollTask, retrieves an element from the queue. The poll task additionally checks the queue for an element first, as ConcurrentLinkedQueue is non-blocking and can return a null value.

4. Similarities

Both LinkedBlockingQueue and the ConcurrentLinkedQueue are queue implementations and share some common characteristics. Let's discuss the similarities of these two queues:

  1. Both implement the Queue interface
  2. They both use linked nodes to store their elements
  3. Both are suitable for concurrent access scenarios

5. Differences

Although both of these queues have certain similarities, there are substantial differences in their characteristics, too:

  • Blocking Nature: LinkedBlockingQueue is a blocking queue and implements the BlockingQueue interface, while ConcurrentLinkedQueue is a non-blocking queue and does not implement the BlockingQueue interface
  • Queue Size: LinkedBlockingQueue is an optionally bounded queue, so the queue size can be defined during creation, while ConcurrentLinkedQueue is an unbounded queue with no provision to specify a size during creation
  • Locking Nature: LinkedBlockingQueue is a lock-based queue, while ConcurrentLinkedQueue is a lock-free queue
  • Algorithm: LinkedBlockingQueue implements its locking based on the two-lock queue algorithm, while ConcurrentLinkedQueue relies on the Michael & Scott algorithm for non-blocking, lock-free queues
  • Implementation: In the two-lock queue algorithm, LinkedBlockingQueue uses two different locks – the putLock and the takeLock; the put/offer operations use the putLock, and the take/poll operations use the takeLock. ConcurrentLinkedQueue instead uses CAS (compare-and-swap) for its operations
  • Blocking Behavior: LinkedBlockingQueue blocks the accessing thread when the queue is empty, while ConcurrentLinkedQueue returns null

6. Conclusion

In this article, we learned about LinkedBlockingQueue and ConcurrentLinkedQueue.

First, we individually discussed these two queue implementations and some of their characteristics. Then, we saw the similarities between these two queue implementations. Finally, we explored the differences between these two queue implementations.

As always, the source code of the examples is available over on GitHub.

Spring REST Docs vs OpenAPI


1. Overview

Spring REST Docs and OpenAPI 3.0 are two ways to create API documentation for a REST API.

In this tutorial, we'll examine their relative advantages and disadvantages.

2. A Brief Summary of Origins

Spring REST Docs is a framework developed by the Spring community in order to create accurate documentation for RESTful APIs. It takes a test-driven approach, wherein the documentation is written as Spring MVC tests, with Spring WebFlux's WebTestClient, or with REST Assured.

The output of running the tests is created as AsciiDoc files which can be put together using Asciidoctor to generate an HTML page describing our APIs. Since it follows the TDD method, Spring REST Docs automatically brings in all its advantages such as less error-prone code, reduced rework, and faster feedback cycles, to name a few.

OpenAPI, on the other hand, is a specification born out of Swagger 2.0. Its latest version as of this writing is 3.0, and it has many known implementations.

As any other specification would, OpenAPI lays out certain ground rules for its implementations to follow. Simply put, all OpenAPI implementations are supposed to produce the documentation as a JSON object, serialized in either JSON or YAML format.

There also exist many tools that take this JSON/YAML in and spit out a UI to visualize and navigate the API. This comes in handy during acceptance testing, for example. In our code samples here, we'll be using springdoc – a library for OpenAPI 3 with Spring Boot.

Before looking at the two in detail, let's quickly set up an API to be documented.

3. The REST API

Let's put together a basic CRUD API using Spring Boot.

3.1. The Repository

Here, the repository that we'll be using is a bare-bones PagingAndSortingRepository interface, with the model Foo:

@Repository
public interface FooRepository extends PagingAndSortingRepository<Foo, Long>{}

@Entity
public class Foo {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;
    
    @Column(nullable = false)
    private String title;
  
    @Column()
    private String body;

    // constructor, getters and setters
}

We'll also load the repository using a schema.sql and a data.sql.

3.2. The Controller

Next, let's look at the controller, skipping its implementation details for brevity:

@RestController
@RequestMapping("/foo")
public class FooController {

    @Autowired
    FooRepository repository;

    @GetMapping
    public ResponseEntity<List<Foo>> getAllFoos() {
        // implementation
    }

    @GetMapping(value = "{id}")
    public ResponseEntity<Foo> getFooById(@PathVariable("id") Long id) {
        // implementation
    }

    @PostMapping
    public ResponseEntity<Foo> addFoo(@RequestBody @Valid Foo foo) {
        // implementation
    }

    @DeleteMapping("/{id}")
    public ResponseEntity<Void> deleteFoo(@PathVariable("id") long id) {
        // implementation
    }

    @PutMapping("/{id}")
    public ResponseEntity<Foo> updateFoo(@PathVariable("id") long id, @RequestBody Foo foo) {
        // implementation
    }
}

3.3. The Application

And finally, the Boot App:

@SpringBootApplication()
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

4. OpenAPI / Springdoc

Now let's see how springdoc can add documentation to our Foo REST API.

Recall that it'll generate a JSON object and a UI visualization of the API based on that object.

4.1. Basic UI

To begin with, we'll just add a couple of Maven dependencies – springdoc-openapi-data-rest for generating the JSON, and springdoc-openapi-ui for rendering the UI.
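For reference, the dependency declarations could look like the following; the version shown is illustrative and should be checked against Maven Central:

<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-data-rest</artifactId>
    <version>1.2.32</version>
</dependency>
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.2.32</version>
</dependency>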

The tool will introspect the code for our API, and read the controller methods' annotations. On that basis, it'll generate the API JSON which will be live at http://localhost:8080/api-docs/. It'll also serve a basic UI at http://localhost:8080/swagger-ui-custom.html:

As we can see, without adding any code at all, we obtained a beautiful visualization of our API, right down to the Foo schema. Using the Try it out button, we can even execute the operations and view the results.

Now, what if we wanted to add some real documentation to the API: what it's all about, what its operations mean, what the inputs should be, and what responses to expect?

We'll look at this in the next section.

4.2. Detailed UI

Let's first see how to add a general description to the API.

For that, we'll add an OpenAPI bean to our Boot App:

@Bean
public OpenAPI customOpenAPI(@Value("${springdoc.version}") String appVersion) {
    return new OpenAPI().info(new Info()
      .title("Foobar API")
      .version(appVersion)
      .description("This is a sample Foobar server created using springdocs - " + 
        "a library for OpenAPI 3 with spring boot.")
      .termsOfService("http://swagger.io/terms/")
      .license(new License().name("Apache 2.0")
      .url("http://springdoc.org")));
}

Next, to add some information to our API operations, we'll decorate our mappings with a few OpenAPI-specific annotations.

Let's see how we can describe getFooById. We'll do this inside another controller, FooBarController, which is similar to our FooController:

@RestController
@RequestMapping("/foobar")
@Tag(name = "foobar", description = "the foobar API with documentation annotations")
public class FooBarController {
    @Autowired
    FooRepository repository;

    @Operation(summary = "Get a foo by foo id")
    @ApiResponses(value = {
      @ApiResponse(responseCode = "200", description = "found the foo", content = { 
        @Content(mediaType = "application/json", schema = @Schema(implementation = Foo.class))}),
      @ApiResponse(responseCode = "400", description = "Invalid id supplied", content = @Content), 
      @ApiResponse(responseCode = "404", description = "Foo not found", content = @Content) })
    @GetMapping(value = "{id}")
    public ResponseEntity<Foo> getFooById(@Parameter(description = "id of foo to be searched") 
      @PathVariable("id") String id) {
        // implementation omitted for brevity
    }
    // other mappings, similarly annotated with @Operation and @ApiResponses
}

Now let's see the effect on the UI:

So with these minimal configurations, the user of our API can now see what it's about, how to use it, and what results to expect. All we had to do was compile the code and run the Boot App.

5. Spring REST Docs

Spring REST Docs is a totally different take on API documentation. As described earlier, the process is test-driven, and the output is in the form of a static HTML page.

In our example here, we'll be using Spring MVC Tests to create documentation snippets.

At the outset, we'll need to add the spring-restdocs-mockmvc dependency and the asciidoctor-maven-plugin to our pom.
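One possible minimal setup, with versions omitted for brevity, looks like this:

<dependency>
    <groupId>org.springframework.restdocs</groupId>
    <artifactId>spring-restdocs-mockmvc</artifactId>
    <scope>test</scope>
</dependency>

<plugin>
    <groupId>org.asciidoctor</groupId>
    <artifactId>asciidoctor-maven-plugin</artifactId>
    <executions>
        <execution>
            <phase>prepare-package</phase>
            <goals>
                <goal>process-asciidoc</goal>
            </goals>
        </execution>
    </executions>
</plugin>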

5.1. The JUnit5 Test

Now let's have a look at the JUnit5 test which includes our documentation:

@ExtendWith({ RestDocumentationExtension.class, SpringExtension.class })
@SpringBootTest(classes = Application.class)
public class SpringRestDocsIntegrationTest {
    private MockMvc mockMvc;
    
    @Autowired
    private ObjectMapper objectMapper;

    @BeforeEach
    public void setup(WebApplicationContext webApplicationContext, 
      RestDocumentationContextProvider restDocumentation) {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext)
          .apply(documentationConfiguration(restDocumentation))
          .build();
    }

    @Test
    public void whenGetFooById_thenSuccessful() throws Exception {
        ConstraintDescriptions desc = new ConstraintDescriptions(Foo.class);
        this.mockMvc.perform(get("/foo/{id}", 1))
          .andExpect(status().isOk())
          .andDo(document("getAFoo", preprocessRequest(prettyPrint()), 
            preprocessResponse(prettyPrint()), 
            pathParameters(parameterWithName("id").description("id of foo to be searched")),
            responseFields(fieldWithPath("id")
              .description("The id of the foo" + 
                collectionToDelimitedString(desc.descriptionsForProperty("id"), ". ")),
              fieldWithPath("title").description("The title of the foo"), 
              fieldWithPath("body").description("The body of the foo"))));
    }

    // more test methods to cover other mappings

}

After running this test, we get several files in our target/generated-snippets directory with information about the given API operation. Particularly, whenGetFooById_thenSuccessful will give us eight adocs in a getAFoo folder in the directory.

Here's a sample http-response.adoc, of course containing the response body:

[source,http,options="nowrap"]
----
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 60

{
  "id" : 1,
  "title" : "Foo 1",
  "body" : "Foo body 1"
}
----

5.2. fooapi.adoc

Now we need a master file that will weave all these snippets together to form a well-structured HTML.

Let's call it fooapi.adoc and see a small portion of it:

=== Accessing the foo GET
A `GET` request is used to retrieve a foo.

==== Request structure
include::{snippets}/getAFoo/http-request.adoc[]

==== Path Parameters
include::{snippets}/getAFoo/path-parameters.adoc[]

==== Example response
include::{snippets}/getAFoo/http-response.adoc[]

==== CURL request
include::{snippets}/getAFoo/curl-request.adoc[]

After executing the asciidoctor-maven-plugin, we get the final HTML file fooapi.html in the target/generated-docs folder.

And this is how it'll look when opened in a browser:

6. Key Takeaways

Now that we've looked at both the implementations, let's summarize the advantages and disadvantages.

With springdoc, the annotations we had to use cluttered our REST controller's code and reduced its readability. Also, the documentation was tightly coupled to the code and would make its way into production.

Needless to say, maintaining the documentation is another challenge here – if something in the API changed, would the programmer always remember to update the corresponding OpenAPI annotation?

On the other hand, REST Docs neither looks as catchy as the other UI did nor can it be used for acceptance testing. But it has its advantages.

Notably, the successful completion of the Spring MVC test not only gives us the snippets but also verifies our API as any other unit test would. This forces us to make documentation changes corresponding to API modifications if any. Also, the documentation code is completely separate from the implementation.

But again, on the flip side, we had to write more code to generate the documentation. First, the test itself which is arguably as verbose as the OpenAPI annotations, and second, the master adoc.

It also needs more steps to generate the final HTML – running the test first and then the plugin. Springdoc only required us to run the Boot App.

7. Conclusion

In this tutorial, we looked at the differences between the OpenAPI based springdoc and Spring REST Docs. We also saw how to implement the two to generate documentation for a basic CRUD API.

In summary, both have their pros and cons, and the decision of using one over the other is subject to our specific requirements.

As always, source code is available over on GitHub.

Univocity Parsers


1. Introduction

In this tutorial, we'll take a quick look at Univocity Parsers, a library for parsing CSV, TSV, and fixed-width files in Java.

We'll start with the basics of reading and writing files before moving on to reading and writing files to and from Java beans. Then, we'll take a quick look at the configuration options before wrapping up.

2. Setup

To use the parsers, we need to add the latest Maven dependency to our project pom.xml file:

<dependency>
    <groupId>com.univocity</groupId>
    <artifactId>univocity-parsers</artifactId>
    <version>2.8.4</version>
</dependency>

3. Basic Usage

3.1. Reading

In Univocity, we can quickly parse an entire file into a collection of String arrays that represent each line in the file.

First, let's parse a CSV file by providing a Reader over our CSV file to a CsvParser with default settings:

try (Reader inputReader = new InputStreamReader(new FileInputStream(
  new File("src/test/resources/productList.csv")), "UTF-8")) {
    CsvParser parser = new CsvParser(new CsvParserSettings());
    List<String[]> parsedRows = parser.parseAll(inputReader);
    return parsedRows;
} catch (IOException e) {
    // handle exception
}

We can easily switch this logic to parse a TSV file by switching to TsvParser and providing it with a TSV file.

It's only slightly more complicated to process a fixed-width file. The primary difference is that we need to provide our field widths in the parser settings.

Let's read a fixed-width file by providing a FixedWidthFields object to our FixedWidthParserSettings:

try (Reader inputReader = new InputStreamReader(new FileInputStream(
  new File("src/test/resources/productList.txt")), "UTF-8")) {
    FixedWidthFields fieldLengths = new FixedWidthFields(8, 30, 10);
    FixedWidthParserSettings settings = new FixedWidthParserSettings(fieldLengths);

    FixedWidthParser parser = new FixedWidthParser(settings);
    List<String[]> parsedRows = parser.parseAll(inputReader);
    return parsedRows;
} catch (IOException e) {
    // handle exception
}

3.2. Writing

Now that we've covered reading files with the parsers, let's learn how to write them.

Writing files is very similar to reading them in that we provide a Writer along with our desired settings to the parser that matches our file type.

Let's create a method to write files in all three possible formats:

public boolean writeData(List<Object[]> products, OutputType outputType, String outputPath) {
    try (Writer outputWriter = new OutputStreamWriter(new FileOutputStream(new File(outputPath)), "UTF-8")) {
        switch (outputType) {
            case CSV:
                CsvWriter csvWriter = new CsvWriter(outputWriter, new CsvWriterSettings());
                csvWriter.writeRowsAndClose(products);
                break;
            case TSV:
                TsvWriter tsvWriter = new TsvWriter(outputWriter, new TsvWriterSettings());
                tsvWriter.writeRowsAndClose(products);
                break;
            case FIXED_WIDTH:
                FixedWidthFields fieldLengths = new FixedWidthFields(8, 30, 10);
                FixedWidthWriterSettings settings = new FixedWidthWriterSettings(fieldLengths);
                FixedWidthWriter fixedWidthWriter = new FixedWidthWriter(outputWriter, settings);
                fixedWidthWriter.writeRowsAndClose(products);
                break;
            default:
                logger.warn("Invalid OutputType: " + outputType);
                return false;
        }
        return true;
    } catch (IOException e) {
        // handle exception
        return false;
    }
}

As with reading files, writing CSV files and TSV files are nearly identical. For fixed-width files, we have to provide the field width to our settings.

3.3. Using Row Processors

Univocity provides a number of row processors we can use and also provides the ability for us to create our own.

To get a feel for using row processors, let's use the BatchedColumnProcessor to process a larger CSV file in batches of five rows:

try (Reader inputReader = new InputStreamReader(new FileInputStream(new File(relativePath)), "UTF-8")) {
    CsvParserSettings settings = new CsvParserSettings();
    settings.setProcessor(new BatchedColumnProcessor(5) {
        @Override
        public void batchProcessed(int rowsInThisBatch) {}
    });
    CsvParser parser = new CsvParser(settings);
    List<String[]> parsedRows = parser.parseAll(inputReader);
    return parsedRows;
} catch (IOException e) {
    // handle exception
}

To use this row processor, we define it in our CsvParserSettings and then all we have to do is call parseAll.

3.4. Reading and Writing into Java Beans

The list of String arrays is alright, but we're often working with data in Java beans. Univocity also allows for reading and writing into specially annotated Java beans.

Let's define a Product bean with the Univocity annotations:

public class Product {

    @Parsed(field = "product_no")
    private String productNumber;
    
    @Parsed
    private String description;
    
    @Parsed(field = "unit_price")
    private float unitPrice;

    // getters and setters
}

The main annotation is the @Parsed annotation.

If our column heading matches the field name, we can use @Parsed without any values specified. If our column heading differs from the field name, we can specify the column heading using the field property.

Now that we've defined our Product bean, let's read our CSV file into it:

try (Reader inputReader = new InputStreamReader(new FileInputStream(
  new File("src/test/resources/productList.csv")), "UTF-8")) {
    BeanListProcessor<Product> rowProcessor = new BeanListProcessor<Product>(Product.class);
    CsvParserSettings settings = new CsvParserSettings();
    settings.setHeaderExtractionEnabled(true);
    settings.setProcessor(rowProcessor);
    CsvParser parser = new CsvParser(settings);
    parser.parse(inputReader);
    return rowProcessor.getBeans();
} catch (IOException e) {
    // handle exception
}

We first constructed a special row processor, BeanListProcessor, with our annotated class. Then, we provided that to the CsvParserSettings and used it to read in a list of Products.

Next, let's write our list of Products out to a fixed-width file:

try (Writer outputWriter = new OutputStreamWriter(new FileOutputStream(new File(outputPath)), "UTF-8")) {
    BeanWriterProcessor<Product> rowProcessor = new BeanWriterProcessor<Product>(Product.class);
    FixedWidthFields fieldLengths = new FixedWidthFields(8, 30, 10);
    FixedWidthWriterSettings settings = new FixedWidthWriterSettings(fieldLengths);
    settings.setHeaders("product_no", "description", "unit_price");
    settings.setRowWriterProcessor(rowProcessor);
    FixedWidthWriter writer = new FixedWidthWriter(outputWriter, settings);
    writer.writeHeaders();
    for (Product product : products) {
        writer.processRecord(product);
    }
    writer.close();
    return true;
} catch (IOException e) {
    // handle exception
}

The notable difference is that we're specifying our column headers in our settings.

4. Settings

Univocity has a number of settings we can apply to the parsers. As we saw earlier, we can use settings to apply a row processor to the parsers.

There are many other settings that can be changed to suit our needs. Although many of the configurations are common across the three file types, each parser also has format-specific settings.

Let's adjust our CSV parser settings to put some limits on the data we're reading:

CsvParserSettings settings = new CsvParserSettings();
settings.setMaxCharsPerColumn(100);
settings.setMaxColumns(50);
CsvParser parser = new CsvParser(settings);

5. Conclusion

In this quick tutorial, we learned the basics of parsing files using the Univocity library.

We learned how to read and write files both into lists of string arrays and Java beans. Before we got into Java beans, we took a quick look at using different row processors. Finally, we briefly touched on how to customize the settings.

As always, the source code is available over on GitHub.

An Introduction to Invoke Dynamic in the JVM


1. Overview

Invoke Dynamic (also known as Indy) was part of JSR 292, which was intended to enhance JVM support for dynamically typed languages. Since its first release in Java 7, the invokedynamic opcode has been used quite extensively by dynamic JVM-based languages like JRuby, and even by statically typed languages like Java.

In this tutorial, we're going to demystify invokedynamic and see how it can help library and language designers to implement many forms of dynamicity.

2. Meet Invoke Dynamic

Let's start with a simple chain of Stream API calls:

public class Main { 

    public static void main(String[] args) {
        long lengthyColors = List.of("Red", "Green", "Blue")
          .stream().filter(c -> c.length() > 3).count();
    }
}

At first, we might think that Java creates an anonymous inner class deriving from Predicate and then passes that instance to the filter method. But, we'd be wrong.

2.1. The Bytecode

To check this assumption, we can take a peek at the generated bytecode:

javap -c -p Main
// truncated
// class names are simplified for the sake of brevity 
// for instance, Stream is actually java/util/stream/Stream
0: ldc               #7             // String Red
2: ldc               #9             // String Green
4: ldc               #11            // String Blue
6: invokestatic      #13            // InterfaceMethod List.of:(LObject;LObject;)LList;
9: invokeinterface   #19,  1        // InterfaceMethod List.stream:()LStream;
14: invokedynamic    #23,  0        // InvokeDynamic #0:test:()LPredicate;
19: invokeinterface  #27,  2        // InterfaceMethod Stream.filter:(LPredicate;)LStream;
24: invokeinterface  #33,  1        // InterfaceMethod Stream.count:()J
29: lstore_1
30: return

Despite what we thought, there's no anonymous inner class, and certainly, nobody is passing an instance of such a class to the filter method.

Surprisingly, the invokedynamic instruction is somehow responsible for creating the Predicate instance.

2.2. Lambda Specific Methods

Additionally, the Java compiler also generated the following funny-looking static method:

private static boolean lambda$main$0(java.lang.String);
    Code:
       0: aload_0
       1: invokevirtual #37                 // Method java/lang/String.length:()I
       4: iconst_3
       5: if_icmple     12
       8: iconst_1
       9: goto          13
      12: iconst_0
      13: ireturn

This method takes a String as the input and then performs the following steps:

  • Computing the input length (invokevirtual on length)
  • Comparing the length with the constant 3 (if_icmple and iconst_3)
  • Returning false if the length is less than or equal to 3, and true otherwise

Interestingly, this is actually equivalent to the lambda we passed to the filter method:

c -> c.length() > 3

So instead of an anonymous inner class, Java creates a special static method and somehow invokes that method via invokedynamic. 

Over the course of this article, we're going to see how this invocation works internally. But, first, let's define the problem that invokedynamic is trying to solve.

2.3. The Problem

Before Java 7, the JVM only had four method invocation types: invokevirtual to call normal class methods, invokestatic to call static methods, invokeinterface to call interface methods, and invokespecial to call constructors or private methods.

Despite their differences, all these invocations share one simple trait: They have a few predefined steps to complete each method call, and we can't enrich these steps with our custom behaviors.

There are two main workarounds for this limitation: one at compile-time and the other at runtime. The former is usually used by languages like Scala or Kotlin, and the latter is the solution of choice for JVM-based dynamic languages like JRuby.

The runtime approach is usually reflection-based and consequently, inefficient.

On the other hand, the compile-time solution usually relies on code generation. This approach is more efficient at runtime. However, it's somewhat brittle and also may cause a slower startup time as there's more bytecode to process.

Now that we've got a better understanding of the problem, let's see how the solution works internally.

3. Under the Hood

invokedynamic lets us bootstrap the method invocation process in any way we want. That is, when the JVM sees an invokedynamic opcode for the first time, it calls a special method known as the bootstrap method to initialize the invocation process:

The bootstrap method is a normal piece of Java code that we've written to set up the invocation process. Therefore, it can contain any logic.

Once the bootstrap method completes normally, it should return an instance of CallSite. This CallSite encapsulates the following pieces of information:

  • A pointer to the actual logic that JVM should execute. This should be represented as a MethodHandle
  • A condition representing the validity of the returned CallSite.

From now on, every time the JVM sees this particular opcode again, it will skip the slow path and directly call the underlying executable. Moreover, the JVM will continue to skip the slow path until the condition in the CallSite changes.

As opposed to the Reflection API, the JVM can completely see through MethodHandles and will try to optimize them, hence the better performance.
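To get a feel for MethodHandles themselves, here's a small self-contained sketch that looks up and invokes String.length() – the same machinery a bootstrap method uses to point at the target logic:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MethodHandleDemo {
    public static void main(String[] args) throws Throwable {
        // look up a handle to String.length()
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle lengthHandle = lookup.findVirtual(
          String.class, "length", MethodType.methodType(int.class));

        // invoke it with the exact (String)int signature
        int length = (int) lengthHandle.invokeExact("Baeldung");
        System.out.println(length); // prints 8
    }
}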

3.1. Bootstrap Method Table

Let's take another look at the generated invokedynamic bytecode:

14: invokedynamic #23,  0  // InvokeDynamic #0:test:()Ljava/util/function/Predicate;

This means that this particular instruction should call the first bootstrap method (#0 part) from the bootstrap method table. Also, it mentions some of the arguments to pass to the bootstrap method:

  • The test method is the only abstract method in the Predicate interface
  • The ()Ljava/util/function/Predicate represents a method signature in the JVM – the method takes nothing as input and returns an instance of the Predicate interface

In order to see the bootstrap method table for the lambda example, we should pass -v option to javap:

javap -c -p -v Main
// truncated
// added new lines for brevity
BootstrapMethods:
  0: #55 REF_invokeStatic java/lang/invoke/LambdaMetafactory.metafactory:
    (Ljava/lang/invoke/MethodHandles$Lookup;
     Ljava/lang/String;
     Ljava/lang/invoke/MethodType;
     Ljava/lang/invoke/MethodType;
     Ljava/lang/invoke/MethodHandle;
     Ljava/lang/invoke/MethodType;)Ljava/lang/invoke/CallSite;
    Method arguments:
      #62 (Ljava/lang/Object;)Z
      #64 REF_invokeStatic Main.lambda$main$0:(Ljava/lang/String;)Z
      #67 (Ljava/lang/String;)Z

The bootstrap method for all lambdas is the metafactory static method in the LambdaMetafactory class.

Similar to all other bootstrap methods, this one takes at least three arguments as follows:

  • The Ljava/lang/invoke/MethodHandles$Lookup argument represents the lookup context for the invokedynamic
  • The Ljava/lang/String represents the method name in the call site – in this example, the method name is test
  • The Ljava/lang/invoke/MethodType is the dynamic method signature of the call site – in this case, it's ()Ljava/util/function/Predicate

In addition to these three arguments, bootstrap methods also can optionally accept one or more extra parameters. In this example, these are the extra ones:

  • The (Ljava/lang/Object;)Z is an erased method signature accepting an instance of Object and returning a boolean.
  • The REF_invokeStatic Main.lambda$main$0:(Ljava/lang/String;)Z is the MethodHandle pointing to the actual lambda logic.
  • The (Ljava/lang/String;)Z is a non-erased method signature accepting one String and returning a boolean.

Put simply, the JVM will pass all the required information to the bootstrap method. Bootstrap method will, in turn, use that information to create an appropriate instance of Predicate. Then, the JVM will pass that instance to the filter method.

3.2. Different Types of CallSites

Once the JVM sees invokedynamic in this example for the first time, it calls the bootstrap method. As of writing this article, the lambda bootstrap method will use the InnerClassLambdaMetafactory to generate an inner class for the lambda at runtime.

Then the bootstrap method encapsulates the generated inner class inside a special type of CallSite known as ConstantCallSite. This type of CallSite never changes after setup. Therefore, after the first setup for each lambda, the JVM will always use the fast path to directly call the lambda logic.

Although this is the most efficient type of invokedynamic, it's certainly not the only available option. As a matter of fact, Java provides MutableCallSite and VolatileCallSite to accommodate more dynamic requirements.

3.3. Advantages

So, in order to implement lambda expressions, instead of creating anonymous inner classes at compile-time, Java creates them at runtime via invokedynamic.

One might argue against deferring inner class generation until runtime. However, the invokedynamic approach has a few advantages over the simple compile-time solution.

First, the JVM does not generate the inner class until the first use of lambda. Hence, we won't pay for the extra footprint associated with the inner class before the first lambda execution.

Additionally, much of the linkage logic is moved out from the bytecode to the bootstrap method. Therefore, the invokedynamic bytecode is usually much smaller than alternative solutions. The smaller bytecode can boost startup speed.

Suppose a newer version of Java comes with a more efficient bootstrap method implementation. Then our invokedynamic bytecode can take advantage of this improvement without recompiling. This way we can achieve some sort of forward binary compatibility. Basically, we can switch between different strategies without recompilation.

Finally, writing the bootstrap and linkage logic in Java is usually easier than traversing an AST to generate a complex piece of bytecode. So, invokedynamic can be (subjectively) less brittle.

4. More Examples

Lambda expressions are not the only feature, and Java is not certainly the only language using invokedynamic. In this section, we're going to get familiar with a few other examples of dynamic invocation.

4.1. Java 14: Records

Records are a new preview feature in Java 14 providing a nice concise syntax to declare classes that are supposed to be dumb data holders.

Here's a simple record example:

public record Color(String name, int code) {}

Given this simple one-liner, the Java compiler generates appropriate implementations for the accessor methods, toString, equals, and hashCode.

In order to implement toString, equals, and hashCode, Java uses invokedynamic. For instance, the bytecode for equals is as follows:

public final boolean equals(java.lang.Object);
    Code:
       0: aload_0
       1: aload_1
       2: invokedynamic #27,  0  // InvokeDynamic #0:equals:(LColor;Ljava/lang/Object;)Z
       7: ireturn

The alternative solution is to find all record fields and generate the equals logic based on those fields at compile-time. The more fields we have, the longer the bytecode.

On the contrary, Java calls a bootstrap method to link the appropriate implementation at runtime. Therefore, the bytecode length would remain constant regardless of the number of fields.

Looking more closely at the bytecode shows that the bootstrap method is ObjectMethods#bootstrap:

BootstrapMethods:
  0: #42 REF_invokeStatic java/lang/runtime/ObjectMethods.bootstrap:
    (Ljava/lang/invoke/MethodHandles$Lookup;
     Ljava/lang/String;
     Ljava/lang/invoke/TypeDescriptor;
     Ljava/lang/Class;
     Ljava/lang/String;
     [Ljava/lang/invoke/MethodHandle;)Ljava/lang/Object;
    Method arguments:
      #8 Color
      #49 name;code
      #51 REF_getField Color.name:Ljava/lang/String;
      #52 REF_getField Color.code:I

4.2. Java 9: String Concatenation

Prior to Java 9, non-trivial string concatenations were implemented using StringBuilder. As part of JEP 280, string concatenation now uses invokedynamic. For instance, let's concatenate a constant string with a random variable:

"random-" + ThreadLocalRandom.current().nextInt();

Here's what the bytecode looks like for this example:

0: invokestatic  #7          // Method ThreadLocalRandom.current:()LThreadLocalRandom;
3: invokevirtual #13         // Method ThreadLocalRandom.nextInt:()I
6: invokedynamic #17,  0     // InvokeDynamic #0:makeConcatWithConstants:(I)LString;

Moreover, the bootstrap methods for string concatenation reside in the StringConcatFactory class:

BootstrapMethods:
  0: #30 REF_invokeStatic java/lang/invoke/StringConcatFactory.makeConcatWithConstants:
    (Ljava/lang/invoke/MethodHandles$Lookup;
     Ljava/lang/String;
     Ljava/lang/invoke/MethodType;
     Ljava/lang/String;
     [Ljava/lang/Object;)Ljava/lang/invoke/CallSite;
    Method arguments:
      #36 random-\u0001

5. Conclusion

In this article, we first got familiar with the problems indy is trying to solve.

Then, by walking through a simple lambda expression example, we saw how invokedynamic works internally.

Finally, we enumerated a few other examples of indy in recent versions of Java.

A Guide to the Hibernate Types Library


1. Overview

In this tutorial, we'll take a look at Hibernate Types. This library provides us with a few types that are not native in the core Hibernate ORM.

2. Dependencies

To enable Hibernate Types we'll just add the appropriate hibernate-types dependency:

<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>hibernate-types-52</artifactId>
    <version>2.9.7</version>
</dependency>

This will work with Hibernate versions 5.4, 5.3, and 5.2.

In the event that the version of Hibernate is older, the artifactId value above will be different. For versions 5.1 and 5.0, we can use hibernate-types-51. Similarly, version 4.3 requires hibernate-types-43, and versions 4.2 and 4.1 require hibernate-types-4.

The examples in this tutorial require a database. Using Docker we've provided a database container. Therefore, we'll need a working copy of Docker.

So, to run and create our database we only need to execute:

$ ./create-database.sh

3. Supported Databases

We can use our types with Oracle, SQL Server, PostgreSQL, and MySQL databases. Therefore, the mapping of types in Java to database column types will vary depending on the database we use. In our case, we will use MySQL and map the JsonBinaryType to a JSON column type.

Documentation on the supported mappings can be found over on the Hibernate Types repository.

4. Data Model

The data model for this tutorial will allow us to store information about albums and songs. An album has cover art and one or more songs. A song has an artist and length. The cover art has two image URLs and a UPC code. Finally, an artist has a name, a country, and a musical genre.

In the past, we'd have created tables to represent all the data in our model. But now that we have these types available to us, we can very easily store some of the data as JSON instead.

For this tutorial, we'll only create tables for the albums and the songs:

@Entity
public class Album extends BaseEntity {
    @Type(type = "json")
    @Column(columnDefinition = "json")
    private CoverArt coverArt;

    @OneToMany(fetch = FetchType.EAGER)
    private List<Song> songs;

   // other class members
}

@Entity
public class Song extends BaseEntity {

    private Long length = 0L;

    @Type(type = "json")
    @Column(columnDefinition = "json")
    private Artist artist;

    // other class members
}

Using the JsonStringType, we'll represent the cover art and artists as JSON columns in those tables:

public class Artist implements Serializable {
 
    private String name;
    private String country;
    private String genre;

    // other class members
}

public class CoverArt implements Serializable {

    private String frontCoverArtUrl;
    private String backCoverArtUrl;
    private String upcCode;

    // other class members
}

It's important to note that the Artist and CoverArt classes are POJOs and not entities. Furthermore, they are members of our database entity classes, defined with the @Type(type = "json") annotation.

4.1. Storing JSON Types

We defined our album and songs models to contain members the database will store as JSON. This is due to using the provided json type. In order to have that type available for us to use we must define it using a type definition:

@TypeDefs({
  @TypeDef(name = "json", typeClass = JsonStringType.class),
  @TypeDef(name = "jsonb", typeClass = JsonBinaryType.class)
})
@MappedSuperclass
public class BaseEntity {
  // class members
}

The @TypeDef entries for JsonStringType and JsonBinaryType make the types json and jsonb available.

The latest MySQL versions support JSON as a column type. Consequently, JDBC processes any JSON read from, or any object saved to, a column of this type as a String. This means that to map to the column correctly, we must use JsonStringType in our type definition.

4.2. Hibernate

Ultimately, our types will automatically translate to SQL using JDBC and Hibernate. So now we can create a few song objects and an album object and persist them to the database. Subsequently, Hibernate generates the following SQL statements:

insert into song (name, artist, length, id) values ('A Happy Song', '{"name":"Superstar","country":"England","genre":"Pop"}', 240, 3);
insert into song (name, artist, length, id) values ('A Sad Song', '{"name":"Superstar","country":"England","genre":"Pop"}', 120, 4);
insert into song (name, artist, length, id) values ('A New Song', '{"name":"Newcomer","country":"Jamaica","genre":"Reggae"}', 300, 6);
insert into album (name, cover_art, id) values ('Album 0', '{"frontCoverArtUrl":"http://fakeurl-0","backCoverArtUrl":"http://fakeurl-1","upcCode":"b2b9b193-ee04-4cdc-be8f-3a276769ab5b"}', 7);

As expected, our json type Java objects are all translated by Hibernate and stored as well-formed JSON in our database.
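For reference, here's a minimal sketch of persistence code that could produce statements like the ones above; the constructors, setters, and the EntityManager setup are assumptions for illustration:

Artist artist = new Artist();
artist.setName("Superstar");
artist.setCountry("England");
artist.setGenre("Pop");

Song song = new Song();
song.setName("A Happy Song");
song.setLength(240L);
song.setArtist(artist); // stored as a JSON column

entityManager.persist(song);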

5. Storing Generic Types

Besides supporting JSON-based columns, the library also adds a few generic types: YearMonth, Year, and Month from the java.time package.

Now, we can map these types that are not natively supported by Hibernate or JPA. Also, we now have the ability to store them as an Integer, String, or Date column.

For example, let's say we want to add the recorded date of a song to our song model and store it as an Integer in our database. We can use the YearMonthIntegerType in our Song entity class definition:

@TypeDef(
  typeClass = YearMonthIntegerType.class,
  defaultForType = YearMonth.class
)
public class Song extends BaseEntity {
    @Column(
      name = "recorded_on",
      columnDefinition = "mediumint"
    )
    private YearMonth recordedOn = YearMonth.now();

    // other class members  
}

Our recordedOn property value is translated to the typeClass we provided. As a result, a pre-defined converter will persist the value in our database as an Integer.

6. Other Utility Classes

Hibernate Types has a few helper classes that further improve the developer experience when using Hibernate.

The CamelCaseToSnakeCaseNamingStrategy maps camel-cased properties in our Java classes to snake-cased columns in our database.

The ClassImportIntegrator allows us to use simple (unqualified) Java DTO class names in JPA constructor expression queries.

There are also the ListResultTransformer and the MapResultTransformer classes providing cleaner implementations of the result objects used by JPA. In addition, they support the use of lambdas and provide backward compatibility with older JPA versions.

7. Conclusion

In this tutorial, we introduced the Hibernate Types Java library and the new types it adds to Hibernate and JPA. We also looked at some of the utilities and generic types provided by the library.

The implementation of the examples and code snippets are available over on GitHub.

Comparing Objects in Java


1. Introduction

Comparing objects is an essential feature of object-oriented programming languages.

In this tutorial, we're going to look at some of the features of the Java language that allow us to compare objects. Additionally, we'll look at such features in external libraries.

2. == and != Operators

Let's begin with the == and != operators that can tell if two Java objects are the same or not, respectively.

2.1. Primitives

For primitive types, being the same means having equal values:

assertThat(1 == 1).isTrue();

Thanks to auto-unboxing, this also works when comparing a primitive value with its wrapper type counterpart:

Integer a = new Integer(1);
assertThat(1 == a).isTrue();

If two integers have different values, the == operator would return false, while the != operator would return true.

2.2. Objects

Let's say we want to compare two Integer wrapper types with the same value:

Integer a = new Integer(1);
Integer b = new Integer(1);

assertThat(a == b).isFalse();

When we compare two objects with ==, we're not comparing their values (both hold 1); rather, we're comparing their references, which are different since both objects were created using the new operator. If we had assigned a to b, then we would've had a different result:

Integer a = new Integer(1);
Integer b = a;

assertThat(a == b).isTrue();

Now, let's see what happens when we're using the Integer#valueOf factory method:

Integer a = Integer.valueOf(1);
Integer b = Integer.valueOf(1);

assertThat(a == b).isTrue();

In this case, they are considered the same. This is because the valueOf() method stores small Integer values (by default, -128 to 127) in a cache to avoid creating too many wrapper objects with the same value. Therefore, the method returns the same Integer instance for both calls.

Java also does this for String:

assertThat("Hello!" == "Hello!").isTrue();

However, if they were created using the new operator, then they would not be the same.
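For instance, two Strings created with the new operator refer to different instances, even though they contain the same characters:

assertThat(new String("Hello!") == new String("Hello!")).isFalse();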

Finally, two null references are considered to be the same, while any non-null object will be considered different from null:

assertThat(null == null).isTrue();

assertThat("Hello!" == null).isFalse();

Of course, the behavior of the equality operators can be limiting. What if we want to compare two objects mapped to different addresses and yet have them considered equal based on their internal states? We'll see how in the next sections.

3. Object#equals Method

Now, let's talk about a broader concept of equality with the equals() method.

This method is defined in the Object class so that every Java object inherits it. By default, its implementation compares object memory addresses, so it works the same as the == operator. However, we can override this method in order to define what equality means for our objects.

First, let's see how it behaves for existing objects like Integer:

Integer a = new Integer(1);
Integer b = new Integer(1);

assertThat(a.equals(b)).isTrue();

This time, the method returns true because the two objects have the same value, even though they aren't the same instance.

We should note that we can pass a null object as the argument of the method, but of course, not as the object we call the method upon.
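For instance, passing null as the argument simply yields false:

Integer a = new Integer(1);

assertThat(a.equals(null)).isFalse();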

We can use the equals() method with an object of our own. Let's say we have a Person class:

public class Person {
    private String firstName;
    private String lastName;

    public Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // fluent accessors like firstName() and lastName(), used later on, omitted for brevity
}

We can override the equals() method for this class so that we can compare two Persons based on their internal details:

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    Person that = (Person) o;
    return firstName.equals(that.firstName) &&
      lastName.equals(that.lastName);
}
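Let's also remember that whenever we override equals(), we should override hashCode() too, so that equal objects produce the same hash code. A minimal version based on the same fields could look like this:

@Override
public int hashCode() {
    return Objects.hash(firstName, lastName);
}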

For more information, see our article about this topic.

4. Objects#equals Static Method

Let's now look at the Objects#equals static method. We mentioned earlier that we can't use null as the value of the first object otherwise a NullPointerException would be thrown.

The equals() method of the Objects helper class solves that problem. It takes two arguments and compares them, also handling null values.

Let's compare Person objects again with:

Person joe = new Person("Joe", "Portman");
Person joeAgain = new Person("Joe", "Portman");
Person natalie = new Person("Natalie", "Portman");

assertThat(Objects.equals(joe, joeAgain)).isTrue();
assertThat(Objects.equals(joe, natalie)).isFalse();

As we said, the method handles null values. Therefore, if both arguments are null it will return true, and if only one of them is null, it will return false.

This can be really handy. Let's say we want to add an optional birth date to our Person class:

public Person(String firstName, String lastName, LocalDate birthDate) {
    this(firstName, lastName);
    this.birthDate = birthDate;
}

Then, we'd have to update our equals() method but with null handling. We could do this by adding this condition to our equals() method:

birthDate == null ? that.birthDate == null : birthDate.equals(that.birthDate);

However, if we add many nullable fields to our class, it can become really messy. Using the Objects#equals method in our equals() implementation is much cleaner, and improves readability:

Objects.equals(birthDate, that.birthDate);
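Putting it all together, a null-safe equals() for the extended class might then look like this (a sketch rather than the only possible implementation):

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    Person that = (Person) o;
    return Objects.equals(firstName, that.firstName)
      && Objects.equals(lastName, that.lastName)
      && Objects.equals(birthDate, that.birthDate);
}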

5. Comparable Interface

Comparison logic can also be used to place objects in a specific order. The Comparable interface allows us to define an ordering between objects by determining whether an object is greater than, equal to, or less than another.

The Comparable interface is generic and has only one method, compareTo(), which takes an argument of the generic type and returns an int. The returned value is negative if this is lower than the argument, 0 if they are equal, and positive otherwise.

Let's say, in our Person class, we want to compare Person objects by their last name:

public class Person implements Comparable<Person> {
    //...

    @Override
    public int compareTo(Person o) {
        return this.lastName.compareTo(o.lastName);
    }
}

The compareTo() method will return a negative int if called with a Person having a greater last name than this, zero if the same last name, and positive otherwise.
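For example, since these two Persons share the last name Portman, they compare as equal under this ordering:

Person natalie = new Person("Natalie", "Portman");
Person joe = new Person("Joe", "Portman");

assertThat(natalie.compareTo(joe)).isZero();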

For more information, take a look at our article about this topic.

6. Comparator Interface

The Comparator interface is generic and has a compare method that takes two arguments of that generic type and returns an integer. We already saw that pattern earlier with the Comparable interface.

Comparator is similar; however, it's separate from the definition of the class. Therefore, we can define as many Comparators as we want for a class, whereas we can only provide one Comparable implementation.

Let's imagine we have a web page displaying people in a table view, and we want to offer the user the possibility to sort them by first names rather than last names. It isn't possible with Comparable if we also want to keep our current implementation, but we could implement our own Comparators.

Let's create a Person Comparator that will compare them only by their first names:

Comparator<Person> compareByFirstNames = Comparator.comparing(Person::firstName);

Let's now sort a List of people using that Comparator:

Person joe = new Person("Joe", "Portman");
Person allan = new Person("Allan", "Dale");

List<Person> people = new ArrayList<>();
people.add(joe);
people.add(allan);

people.sort(compareByFirstNames);

assertThat(people).containsExactly(allan, joe);

There are other methods on the Comparator interface we can use in our compareTo() implementation:

@Override
public int compareTo(Person o) {
    return Comparator.comparing(Person::lastName)
      .thenComparing(Person::firstName)
      .thenComparing(Person::birthDate, Comparator.nullsLast(Comparator.naturalOrder()))
      .compare(this, o);
}

In this case, we first compare last names, then first names. Then we compare birth dates, but since they're nullable, we must say how to handle that: we pass a second argument stating that they should be compared according to their natural order, with null values going last.

7. Apache Commons

Let's now take a look at the Apache Commons library. First of all, let's import the Maven dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.10</version>
</dependency>

7.1. ObjectUtils#notEqual Method

First, let's talk about the ObjectUtils#notEqual method. It takes two Object arguments, to determine if they are not equal, according to their own equals() method implementation. It also handles null values.

Let's reuse our String examples:

String a = new String("Hello!");
String b = new String("Hello World!");

assertThat(ObjectUtils.notEqual(a, b)).isTrue();

It should be noted that ObjectUtils also has an equals() method. However, that's been deprecated since Java 7, when Objects#equals appeared.

7.2. ObjectUtils#compare Method

Now, let's compare object order with the ObjectUtils#compare method. It's a generic method that takes two Comparable arguments of that generic type and returns an int.

Let's see that using Strings again:

String first = new String("Hello!");
String second = new String("How are you?");

assertThat(ObjectUtils.compare(first, second)).isNegative();

By default, the method handles null values by considering them lesser than non-null values. It also offers an overloaded version that takes a boolean argument to invert that behavior and consider them greater.

8. Guava

Now, let's take a look at Guava. First of all, let's import the dependency:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>29.0-jre</version>
</dependency>

8.1. Objects#equal Method

Similar to the Apache Commons library, Google provides us with a method to determine if two objects are equal, Objects#equal. Though they have different implementations, they return the same results:

String a = new String("Hello!");
String b = new String("Hello!");

assertThat(Objects.equal(a, b)).isTrue();

Though it's not marked as deprecated, the JavaDoc of this method says that it should be considered as deprecated since Java 7 provides the Objects#equals method.

8.2. Comparison Methods

Now, the Guava library doesn't offer a method to compare two objects (we'll see in the next section what we can do to achieve that though), but it does provide us with methods to compare primitive values. Let's take the Ints helper class and see how its compare() method works:

assertThat(Ints.compare(1, 2)).isNegative();

As usual, it returns an integer that may be negative, zero, or positive if the first argument is lesser, equal, or greater than the second, respectively. Similar methods exist for all the primitive types, except for bytes.

8.3. ComparisonChain Class

Finally, the Guava library offers the ComparisonChain class that allows us to compare two objects through a chain of comparisons. We can easily compare two Person objects by the first and last names:

Person natalie = new Person("Natalie", "Portman");
Person joe = new Person("Joe", "Portman");

int comparisonResult = ComparisonChain.start()
  .compare(natalie.lastName(), joe.lastName())
  .compare(natalie.firstName(), joe.firstName())
  .result();

assertThat(comparisonResult).isPositive();

The underlying comparison is achieved using the compareTo() method, so the arguments passed to the compare() methods must either be primitives or Comparables.

9. Conclusion

In this article, we looked at the different ways to compare objects in Java. We examined the difference between sameness, equality, and ordering. We also had a look at the corresponding features in the Apache Commons and Guava libraries.

As usual, the full code for this article can be found over on GitHub.

Inject Arrays and Lists From Spring Properties Files


1. Overview

In this quick tutorial, we're going to learn how to inject values into an array or List from a Spring properties file.

2. Default Behavior

We're going to start with a simple application.properties file:

arrayOfStrings=Baeldung,dot,com

Let's see how Spring behaves when we set our variable type to String[]:

@Value("${arrayOfStrings}")
private String[] arrayOfStrings;
@Test
void whenContextIsInitialized_thenInjectedArrayContainsExpectedValues() {
    assertArrayEquals(new String[] {"Baeldung", "dot", "com"}, arrayOfStrings);
}

We can see that Spring correctly assumes our delimiter is a comma and initializes the array accordingly.

We should also note that, by default, injecting an array works correctly only when we have comma-separated values.
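To illustrate, with a hypothetical property using a different separator, Spring wouldn't split the value, and we'd end up with a single-element array:

// given: wrongDelimiter=Baeldung;dot;com
@Value("${wrongDelimiter}")
private String[] unsplitArray; // would contain one element: "Baeldung;dot;com"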

3. Injecting Lists

If we try to inject a List in the same way, we'll get a surprising result:

@Value("${arrayOfStrings}")
private List<String> unexpectedListOfStrings;
@Test
void whenContextIsInitialized_thenInjectedListContainsUnexpectedValues() {
    assertEquals(Collections.singletonList("Baeldung,dot,com"), unexpectedListOfStrings);
}

Our List contains a single element, which is equal to the value we set in our properties file.

In order to properly inject a List, we need to use a special syntax called Spring Expression Language (SpEL):

@Value("#{'${arrayOfStrings}'.split(',')}")
private List<String> listOfStrings;
@Test
void whenContextIsInitialized_thenInjectedListContainsExpectedValues() {
    assertEquals(Arrays.asList("Baeldung", "dot", "com"), listOfStrings);
}

We can see that our expression starts with # instead of the $ that we're used to with @Value.

We should also note that we're invoking a split method, which makes the expression a bit more complex than a usual injection.

If we'd like to keep our expression a bit simpler, we can declare our property in a special format:

listOfStrings={'Baeldung','dot','com'}

Spring will recognize this format, and we'll be able to inject our List using a somewhat simpler expression:

@Value("#{${listOfStrings}}")
private List<String> listOfStringsV2;
@Test
void whenContextIsInitialized_thenInjectedListV2ContainsExpectedValues() {
    assertEquals(Arrays.asList("Baeldung", "dot", "com"), listOfStringsV2);
}

4. Using Custom Delimiters

Let's create a similar property, but this time, we're going to use a different delimiter:

listOfStringsWithCustomDelimiter=Baeldung;dot;com

As we've seen when injecting Lists, we can use a special expression where we can specify our desired delimiter:

@Value("#{'${listOfStringsWithCustomDelimiter}'.split(';')}")
private List<String> listOfStringsWithCustomDelimiter;
@Test
void whenContextIsInitialized_thenInjectedListWithCustomDelimiterContainsExpectedValues() {
    assertEquals(Arrays.asList("Baeldung", "dot", "com"), listOfStringsWithCustomDelimiter);
}

5. Injecting Other Types

Let's take a look at the following properties:

listOfBooleans=false,false,true
listOfIntegers=1,2,3,4
listOfCharacters=a,b,c

We can see that Spring supports basic types out-of-the-box, so we don't need to do any special parsing:

@Value("#{'${listOfBooleans}'.split(',')}")
private List<Boolean> listOfBooleans;

@Value("#{'${listOfIntegers}'.split(',')}")
private List<Integer> listOfIntegers;

@Value("#{'${listOfCharacters}'.split(',')}")
private List<Character> listOfCharacters;
@Test
void whenContextIsInitialized_thenInjectedListOfBasicTypesContainsExpectedValues() {
    assertEquals(Arrays.asList(false, false, true), listOfBooleans);
    assertEquals(Arrays.asList(1, 2, 3, 4), listOfIntegers);
    assertEquals(Arrays.asList('a', 'b', 'c'), listOfCharacters);
}

This is only supported via SpEL, so we can't inject an array in the same way.

6. Reading Properties Programmatically

In order to read properties programmatically, we first need to get the instance of our Environment object:

@Autowired
private Environment environment;

Then, we can simply use the getProperty method to read any property by specifying its key and expected type:

@Test
void whenReadingFromSpringEnvironment_thenPropertiesHaveExpectedValues() {
    String[] arrayOfStrings = environment.getProperty("arrayOfStrings", String[].class);
    List<String> listOfStrings = (List<String>)environment.getProperty("arrayOfStrings", List.class);

    assertArrayEquals(new String[] {"Baeldung", "dot", "com"}, arrayOfStrings);
    assertEquals(Arrays.asList("Baeldung", "dot", "com"), listOfStrings);
}

7. Conclusion

In this quick tutorial, we've learned how to easily inject arrays and Lists through quick and practical examples.

As always, the code is available over on GitHub.


The “Cannot find symbol” Compilation Error


1. Overview

In this tutorial, we'll review what compilation errors are, and then specifically explain what the “cannot find symbol” error is and how it's caused.

2. Compile Time Errors

During compilation, the compiler analyzes and verifies the code for numerous things: reference types, type casts, and method declarations, to name a few. This part of the compilation process is important since this is where we'll get compilation errors.

Basically, there are three types of compile-time errors:

  • We can have syntax errors. One of the most common mistakes any programmer can make is forgetting to put the semicolon at the end of the statement; some others are forgetting imports, mismatching parentheses, or omitting the return statement
  • Next, there are type-checking errors. This is a process of verifying type safety in our code. With this check, we're making sure that we have consistent types of expressions. For example, if we define a variable of type int, we should never assign a double or String value to it
  • Meanwhile, there is a possibility that the compiler crashes. This is very rare but it can happen. In this case, it's good to know that our code might not be a problem, but that it's rather an external issue

3. The “cannot find symbol” Error

The “cannot find symbol” error comes up mainly when we try to use a variable that is not defined or declared in our program.

When our code compiles, the compiler needs to verify all the identifiers we have. The error “cannot find symbol” means we're referring to something that the compiler doesn't know about.

3.1. What Can Cause a “cannot find symbol” Error?

Really, there's only one cause: The compiler couldn't find the definition of a variable we're trying to reference.

But, there are many reasons why this happens. To help us understand why, let's remind ourselves what Java code consists of.

Our Java source code consists of:

  • Keywords: true, false, class, while
  • Literals: numbers and text
  • Operators and other non-alphanumeric tokens: -, /, +, =, {
  • Identifiers: main, Reader, i, toString, etc.
  • Comments and whitespace

4. Misspelling

The most common issues are all spelling-related. If we recall that all Java identifiers are case-sensitive, we can see that:

  • StringBiulder
  • stringBuilder
  • String_Builder

would all be different ways to incorrectly refer to the StringBuilder class.
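For instance, a single typo is enough to trigger the error at compile time:

StringBiulder sb = new StringBiulder(); // error: cannot find symbol: class StringBiulder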

5. Instance Scope

This error can also be caused when using something that was declared outside of the scope of the class.

For example, let's say we have an Article class that calls a generateId method:

public class Article {
    private int length;
    private long id;

    public Article(int length) {
        this.length = length;
        this.id = generateId();
    }
}

But, we declare the generateId method in a separate class:

public class IdGenerator {
    public long generateId() {
        Random random = new Random();
        return random.nextInt();
    }
}

With this setup, the compiler will give a “cannot find symbol” error for generateId on line 7 of the Article snippet. The reason is that the syntax of line 7 implies that the generateId method is declared in Article.

Like in all mature languages, there's more than one way to address this issue. But, one way would be to construct IdGenerator in the Article class and then call the method:

public class Article {
    private int length;
    private long id;

    public Article(int length) {
        this.length = length;
        this.id = new IdGenerator().generateId();
    }
}

6. Undefined Variables

Sometimes we forget to declare the variable. As we can see from the snippet below, we're trying to manipulate a variable that we haven't declared; in this case, text:

public class Article {
    private int length;

    // ...

    public void setText(String newText) {
        this.text = newText; // text variable was never defined
    }
}

We solve this problem by declaring the variable text of type String:

public class Article {
    private int length;
    private String text;
    // ...

    public void setText(String newText) {
        this.text = newText;
    }
}

7. Variable Scope

When a variable declaration is out of scope at the point where we try to use it, it'll cause an error during compilation. This typically happens when we work with loops.

Variables inside the loop aren't accessible outside the loop:

public boolean findLetterB(String text) {
    for (int i=0; i < text.length(); i++) {
        Character character = text.charAt(i);
        if (String.valueOf(character).equals("b")) {
            return true;
        }
        return false;
    }

    if (character == "a") {  // <-- error!
        ...
    }
}

The if statement should go inside the for loop if we need to examine more characters:

public boolean findLetterB(String text) {
    for (int i = 0; i < text.length(); i++) {
        Character character = text.charAt(i);
        if (String.valueOf(character).equals("b")) {
            return true;
        } else if (String.valueOf(character).equals("a")) {
            ...
        }
        return false;
    }
}

8. Invalid Use of Methods or Fields

The “cannot find symbol” error will also occur if we use a field as a method or vice versa:

public class Article {
    private int length;
    private long id;
    private List<String> texts;

    public Article(int length) {
        this.length = length;
    }
    // getters and setters
}

Now, if we try to refer to the Article's texts field as if it were a method:

Article article = new Article(300);
List<String> texts = article.texts();

then we'd see the error.

That's because the compiler is looking for a method called texts, and there isn't one.

Actually, there's a getter method that we can use instead:

Article article = new Article(300);
List<String> texts = article.getTexts();

Mistakenly operating on an array rather than an array element is also an issue:

for (String text : texts) {
    String firstLetter = texts.charAt(0); // it should be text.charAt(0)
}

And so is forgetting the new keyword as in:

String s = String(); // should be 'new String()'

9. Package and Class Imports

Another problem is forgetting to import the class or package. For example, using a List object without importing java.util.List:

// missing import statement: 
// import java.util.List

public class Article {
    private int length;
    private long id;
    private List<String> texts; // <-- error!

    public Article(int length) {
        this.length = length;
    }
}

This code would not compile since the program doesn't know what List is.
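Adding the missing import resolves the error:

import java.util.List;

public class Article {
    private int length;
    private long id;
    private List<String> texts;

    public Article(int length) {
        this.length = length;
    }
}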

10. Wrong Imports

Importing the wrong type, due to IDE completion or auto-correction is also a common issue.

Think of the situation when we want to use dates in Java. Many times, we could import the wrong Date class, which doesn't provide the methods and functionality of the other date classes we might need:

Date date = new Date();
int year, month, day;

To get the year, month, or day from a java.util.Date, we also need to import the Calendar class and extract the information from there.

Simply invoking the deprecated getters on java.util.Date won't give us the values we'd expect:

...
date.getDay();
date.getMonth();
date.getYear();

Instead, we use the Calendar object:

...
Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("Europe/Paris"));
cal.setTime(date);
year = cal.get(Calendar.YEAR);
month = cal.get(Calendar.MONTH);
day = cal.get(Calendar.DAY_OF_MONTH);

However, if we import the LocalDate class, we don't need additional code to get the information we need:

...
LocalDate localDate = date.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
year = localDate.getYear();
month = localDate.getMonthValue();
day = localDate.getDayOfMonth();

11. Conclusion

Compilers work on a fixed set of rules that are language-specific. If our code doesn't stick to these rules, the compiler can't perform the conversion process, which results in a compilation error. When we face the “Cannot find symbol” compilation error, the key is to identify the cause.

From the error message, we can find the line of code where the error occurs and which element is wrong. Knowing the most common issues that cause this error will make solving it quick and easy.

What is [Ljava.lang.Object;?


1. Overview

In this tutorial, we'll learn what [Ljava.lang.Object; means and how to access the actual values of the object.

2. Java Object Class

In Java, if we want to print a value directly from an object, the first thing that we could try is to call its toString method:

Object[] arrayOfObjects = { "John", 2, true };
assertTrue(arrayOfObjects.toString().startsWith("[Ljava.lang.Object;"));

If we run the test, it will be successful, but usually, it's not a very useful result.

What we want to do is print the values inside the array. Instead, we got [Ljava.lang.Object; followed by a hash code, which is the default toString implementation defined in the Object class:

getClass().getName() + '@' + Integer.toHexString(hashCode())

When we get the class name directly from the object, we're getting the JVM's internal name for its type. That's why we have extra characters like [ and L; they represent the array and ClassName types, respectively.
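We can verify a couple of these internal names directly:

assertEquals("[Ljava.lang.Object;", new Object[0].getClass().getName());
assertEquals("[I", new int[0].getClass().getName());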

3. Printing Meaningful Values

To be able to print the result correctly, we can use some classes from the java.util package.

3.1. Arrays

For example, we can use two of the methods in the Arrays class to deal with the conversion.

With one-dimensional arrays, we can use the toString method:

Object[] arrayOfObjects = { "John", 2, true };
assertEquals(Arrays.toString(arrayOfObjects), "[John, 2, true]");

For deeper arrays, we have the deepToString method:

Object[] innerArray = { "We", "Are", "Inside" };
Object[] arrayOfObjects = { "John", 2, innerArray };
assertEquals(Arrays.deepToString(arrayOfObjects), "[John, 2, [We, Are, Inside]]");

3.2. Streaming

One of the significant new features in JDK 8 was the introduction of Java streams, which provide classes for processing sequences of elements:

Object[] arrayOfObjects = { "John", 2, true };
List<String> listOfString = Stream.of(arrayOfObjects)
  .map(Object::toString)
  .collect(Collectors.toList());
assertEquals(listOfString.toString(), "[John, 2, true]");

First, we've created a stream using the helper method of. We've converted all the objects inside the array to String using map, and then we've collected them into a List using collect so that we can print the values.

4. Conclusion

In this tutorial, we've seen how we can print meaningful information from an array and avoid the default [Ljava.lang.Object;.

We can always find the source code for this article over on GitHub.

HTTP Server with Netty


1. Overview

In this tutorial, we're going to implement a simple upper-casing server over HTTP with Netty, an asynchronous framework that gives us the flexibility to develop network applications in Java.

2. Server Bootstrapping

Before we start, we should be aware of the basic concepts of Netty, such as channel, handler, encoder, and decoder.

Here we'll jump straight into bootstrapping the server, which is mostly the same as a simple protocol server:

public class HttpServer {

    private int port;
    private static Logger logger = LoggerFactory.getLogger(HttpServer.class);

    // constructor

    // main method, same as simple protocol server

    public void run() throws Exception {
        ...
        ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup)
          .channel(NioServerSocketChannel.class)
          .handler(new LoggingHandler(LogLevel.INFO))
          .childHandler(new ChannelInitializer() {
            @Override
            protected void initChannel(SocketChannel ch) throws Exception {
                ChannelPipeline p = ch.pipeline();
                p.addLast(new HttpRequestDecoder());
                p.addLast(new HttpResponseEncoder());
                p.addLast(new CustomHttpServerHandler());
            }
          });
        ...
    }
}

So, here only the childHandler differs as per the protocol we want to implement, which is HTTP for us.

We're adding three handlers to the server's pipeline:

  1. Netty's HttpRequestDecoder – for deserialization of inbound requests
  2. Netty's HttpResponseEncoder – for serialization of outbound responses
  3. Our own CustomHttpServerHandler – for defining our server's behavior

Let's look at the last handler in detail next.

3. CustomHttpServerHandler

Our custom handler's job is to process inbound data and send a response.

Let's break it down to understand its working.

3.1. Structure of the Handler

CustomHttpServerHandler extends Netty's abstract SimpleChannelInboundHandler and implements its lifecycle methods:

public class CustomHttpServerHandler extends SimpleChannelInboundHandler {
    private HttpRequest request;
    StringBuilder responseData = new StringBuilder();

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
       // implementation to follow
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

As the method name suggests, channelReadComplete flushes the handler context after the last message in the channel has been consumed, so that it's ready for the next incoming message. The exceptionCaught method handles exceptions, if any.

So far, all we've seen is boilerplate code.

Now let's get on with the interesting stuff, the implementation of channelRead0.

3.2. Reading the Channel

Our use case is simple: the server will transform the request body and query parameters, if any, to uppercase. A word of caution here on reflecting request data in the response – we're doing this only for demonstration purposes, to understand how we can use Netty to implement an HTTP server.

Here, we'll consume the message or request, and set up its response as recommended by the protocol (note that RequestUtils is something we'll write in just a moment):

if (msg instanceof HttpRequest) {
    HttpRequest request = this.request = (HttpRequest) msg;

    if (HttpUtil.is100ContinueExpected(request)) {
        writeResponse(ctx);
    }
    responseData.setLength(0);            
    responseData.append(RequestUtils.formatParams(request));
}
responseData.append(RequestUtils.evaluateDecoderResult(request));

if (msg instanceof HttpContent) {
    HttpContent httpContent = (HttpContent) msg;
    responseData.append(RequestUtils.formatBody(httpContent));
    responseData.append(RequestUtils.evaluateDecoderResult(request));

    if (msg instanceof LastHttpContent) {
        LastHttpContent trailer = (LastHttpContent) msg;
        responseData.append(RequestUtils.prepareLastResponse(request, trailer));
        writeResponse(ctx, trailer, responseData);
    }
}

As we can see, when our channel receives an HttpRequest, it first checks whether the request expects a 100 Continue status. In that case, we immediately write back an empty response with a status of CONTINUE:

private void writeResponse(ChannelHandlerContext ctx) {
    FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, CONTINUE, 
      Unpooled.EMPTY_BUFFER);
    ctx.write(response);
}

After that, the handler initializes a string to be sent as the response, adding to it the request's query parameters converted to uppercase.

Let's now define the method formatParams and place it in a RequestUtils helper class to do that:

StringBuilder formatParams(HttpRequest request) {
    StringBuilder responseData = new StringBuilder();
    QueryStringDecoder queryStringDecoder = new QueryStringDecoder(request.uri());
    Map<String, List<String>> params = queryStringDecoder.parameters();
    if (!params.isEmpty()) {
        for (Entry<String, List<String>> p : params.entrySet()) {
            String key = p.getKey();
            List<String> vals = p.getValue();
            for (String val : vals) {
                responseData.append("Parameter: ").append(key.toUpperCase()).append(" = ")
                  .append(val.toUpperCase()).append("\r\n");
            }
        }
        responseData.append("\r\n");
    }
    return responseData;
}

Next, on receiving an HttpContent, we take the request body and convert it to upper case:

StringBuilder formatBody(HttpContent httpContent) {
    StringBuilder responseData = new StringBuilder();
    ByteBuf content = httpContent.content();
    if (content.isReadable()) {
        responseData.append(content.toString(CharsetUtil.UTF_8).toUpperCase())
          .append("\r\n");
    }
    return responseData;
}

Also, if the received HttpContent is a LastHttpContent, we add a goodbye message and trailing headers, if any:

StringBuilder prepareLastResponse(HttpRequest request, LastHttpContent trailer) {
    StringBuilder responseData = new StringBuilder();
    responseData.append("Good Bye!\r\n");

    if (!trailer.trailingHeaders().isEmpty()) {
        responseData.append("\r\n");
        for (CharSequence name : trailer.trailingHeaders().names()) {
            for (CharSequence value : trailer.trailingHeaders().getAll(name)) {
                responseData.append("P.S. Trailing Header: ");
                responseData.append(name).append(" = ").append(value).append("\r\n");
            }
        }
        responseData.append("\r\n");
    }
    return responseData;
}

3.3. Writing the Response

Now that our data to be sent is ready, we can write the response to the ChannelHandlerContext:

private void writeResponse(ChannelHandlerContext ctx, LastHttpContent trailer,
  StringBuilder responseData) {
    boolean keepAlive = HttpUtil.isKeepAlive(request);
    FullHttpResponse httpResponse = new DefaultFullHttpResponse(HTTP_1_1, 
      ((HttpObject) trailer).decoderResult().isSuccess() ? OK : BAD_REQUEST,
      Unpooled.copiedBuffer(responseData.toString(), CharsetUtil.UTF_8));
    
    httpResponse.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");

    if (keepAlive) {
        httpResponse.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, 
          httpResponse.content().readableBytes());
        httpResponse.headers().set(HttpHeaderNames.CONNECTION, 
          HttpHeaderValues.KEEP_ALIVE);
    }
    ctx.write(httpResponse);

    if (!keepAlive) {
        ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
    }
}

In this method, we create a FullHttpResponse using HTTP version 1.1 and add the data we prepared earlier.

If the connection is to be kept alive, or in other words, if it's not to be closed, we set the response's Connection header to keep-alive. Otherwise, we close the connection.

4. Testing the Server

To test our server, let's send some cURL commands and look at the responses.

Of course, we need to start the server by running the class HttpServer before this.

4.1. GET Request

Let's first invoke the server, providing a query parameter with the request:

curl http://127.0.0.1:8080?param1=one

As a response, we get:

Parameter: PARAM1 = ONE

Good Bye!

We can also hit http://127.0.0.1:8080?param1=one from any browser to see the same result.

4.2. POST Request

As our second test, let's send a POST with body sample content:

curl -d "sample content" -X POST http://127.0.0.1:8080

Here's the response:

SAMPLE CONTENT
Good Bye!

This time, since our request contained a body, the server sent it back in uppercase.

5. Conclusion

In this tutorial, we saw how to implement the HTTP protocol, particularly an HTTP server using Netty.

HTTP/2 in Netty demonstrates a client-server implementation of the HTTP/2 protocol.

As always, source code is available over on GitHub.

The Constructor Return Type in Java


1. Overview

In this quick tutorial, we're going to focus on the return type for a constructor in Java.

First, we'll get familiar with how object initialization works in Java and the JVM. Then, we'll dig deeper to see how object initialization and assignment work under-the-hood.

2. Instance Initialization

Let's start with an empty class:

public class Color {}

Here, we're going to create an instance from this class and assign it to some variable:

Color color = new Color();

After compiling this simple Java snippet, let's take a peek at its bytecode via the javap -c command:

0: new           #7                  // class Color
3: dup
4: invokespecial #9                  // Method Color."<init>":()V
7: astore_1

When we instantiate an object in Java, the JVM performs the following operations:

  1. First, it finds a place in its process space for the new object.
  2. Then, the JVM performs the system initialization process. In this step, it creates the object in its default state. The new opcode in the bytecode is actually responsible for this step.
  3. Finally, it initializes the object with the constructor and other initializer blocks. In this case, the invokespecial opcode calls the constructor.

As shown above, the method signature for the default constructor is:

Method Color."<init>":()V

The <init> is the name of instance initialization methods in the JVM. In this case, the <init> is a function that:

  • takes nothing as the input (empty parentheses after the method name)
  • returns nothing (V stands for void)

Therefore, the return type of a constructor in Java and JVM is void.
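Incidentally, this is also why adding an explicit return type doesn't declare a constructor at all; the compiler treats it as an ordinary method that merely shares the class's name:

public class Color {
    public Color() {}      // a constructor: no return type, invoked via new

    public void Color() {} // not a constructor, just a confusingly named method
}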

Taking another look at our simple assignment:

Color color = new Color();

Now that we know the constructor returns void, let's see how the assignment works.

3. How Assignment Works

The JVM is a stack-based virtual machine. Each stack consists of stack frames. Put simply, each stack frame corresponds to a method call. In fact, the JVM creates a frame with each new method call and destroys it as the method finishes its job.

Each stack frame uses an array to store local variables and an operand stack to store partial results. Given that, let's take another look at the bytecode:

0: new           #7                // class Color
3: dup
4: invokespecial #9               // Method Color."<init>":()V
7: astore_1

Here's how the assignment works:

  • The new instruction creates an instance of Color and pushes its reference onto the operand stack
  • The dup opcode duplicates the last item on the operand stack
  • The invokespecial takes the duplicated reference and consumes it for initialization. After this, only the original reference remains on the operand stack
  • The astore_1 stores the original reference to index 1 of the local variables array. The prefix “a” means that the item to be stored is an object reference, and the “1” is the array index

From now on, the second item (index 1) in the local variables array is a reference to the newly created object. Therefore, we don't lose the reference, and the assignment actually works — even when the constructor returns nothing!

4. Conclusion

In this quick tutorial, we learned how the JVM creates and initializes our class instances. Moreover, we saw how instance initialization works under-the-hood.

For an even more detailed understanding of the JVM, it's always a good idea to check out its specification.

Java Weekly, Issue 337


1. Spring and Java

>> Fault-tolerant and reliable messaging with Kafka and Spring Boot [arnoldgalovics.com]

A comprehensive example showcasing how to use Kafka for DLQ processing, retries, and manual commits.

>> Distributed Cache with Hazelcast and Spring [reflectoring.io]

Why caching is important and how it fits into modern software architecture, using Hazelcast as an example. Cool stuff.

>> Why do the Gradle/Maven wrappers matter? [andresalmiray.com]

Use wrappers and have one less thing to worry about.

Also worth reading:

Webinars and presentations:

2. Technical

>> Writing my first game in Unity [beust.com]

It's always refreshing to see how to start working with a technology that we don't touch on a daily basis.

Also worth reading:

3. Musings

>> K-19 or How to not run a team [blog.scottlogic.com]

Lessons learned while building a Soviet submarine 🙂

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Hate Edits [dilbert.com]

>> Shocking Fake Video [dilbert.com]

>> Disbanding Task Force [dilbert.com]

5. Pick of the Week

>> How to be more productive by working less [markmanson.net]
