
InvalidAlgorithmParameterException: Wrong IV Length


1. Overview

The Advanced Encryption Standard (AES) is a widely used symmetric block cipher. The Initialization Vector (IV) plays an important role in the AES algorithm.

In this tutorial, we'll explain how to generate an IV in Java. We'll also describe how to avoid an InvalidAlgorithmParameterException when generating the IV and using it in a cipher algorithm.

2. Initialization Vector

The AES algorithm usually has three inputs: the plaintext, a secret key, and an IV. It supports secret keys of 128, 192, and 256 bits to encrypt and decrypt data in blocks of 128 bits.

The goal of the IV is to augment the encryption process. The IV is used in conjunction with the secret key in some AES modes of operation, such as Cipher Block Chaining (CBC).

In general, the IV is a pseudo-random value chosen by the sender. The same IV used for encryption must be used for decryption.

The IV has the same size as the cipher's block. Therefore, for AES, the size of the IV is 16 bytes, or 128 bits.

3. Generating the IV

It's recommended to use the java.security.SecureRandom class instead of java.util.Random to generate a random IV. In addition, it's a best practice for the IV to be unpredictable, and we should never hard-code it in the source code.

To use the IV in a cipher, we use the IvParameterSpec class. Let’s create a method for generating the IV:

public static IvParameterSpec generateIv() {
    byte[] iv = new byte[16];
    new SecureRandom().nextBytes(iv);
    return new IvParameterSpec(iv);
}

4. Exception

AES in CBC mode requires the IV to be exactly 16 bytes (128 bits). So, if we provide an IV of any other size, an InvalidAlgorithmParameterException will be thrown.

To solve this issue, we have to use an IV with a size of 16 bytes. A sample code snippet on the use of the IV in AES CBC mode can be found in this article.
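
As a minimal sketch, assuming the generateIv() method from above and a plainText String, we can wire the IV into an AES/CBC cipher like this:

SecretKey key = KeyGenerator.getInstance("AES").generateKey();
IvParameterSpec ivParameterSpec = generateIv(); // 16 bytes, so no exception is thrown
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.ENCRYPT_MODE, key, ivParameterSpec);
byte[] cipherText = cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));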

5. Conclusion

In summary, we've learned how to generate an Initialization Vector (IV) in Java. We've also described the exception thrown when the IV has the wrong length. The source code used in this tutorial is available over on GitHub.

 


Writing byte[] to a File in Java


1. Overview

In this quick tutorial, we're going to learn several different ways to write a Java byte array to a file. We'll start at the beginning, using the Java IO package. Next, we'll look at an example using Java NIO. After that, we'll use Google Guava and Apache Commons IO.

2. Java IO

Java's IO package has been around since JDK 1.0 and provides a collection of classes and interfaces for reading and writing data.

Let's use a FileOutputStream to write the image to a file:

File outputFile = tempFolder.newFile("outputFile.jpg");
try (FileOutputStream outputStream = new FileOutputStream(outputFile)) {
    outputStream.write(dataForWriting);
}

We open an output stream to our destination file, and then we can simply pass our byte[] dataForWriting to the write method. Note that we're using a try-with-resources block here to ensure that we close the OutputStream in case an IOException is thrown.

3. Java NIO

The Java NIO package was introduced in Java 1.4, and the file system API for NIO was introduced as an extension in Java 7. Java NIO uses buffering and is non-blocking, whereas Java IO uses blocking streams. The syntax for creating file resources is more succinct in the java.nio.file package.

We can write our byte[] in one line using the Files class:

Files.write(outputFile.toPath(), dataForWriting);

Our example either creates a file or truncates an existing file and opens it for writing. We can also use Paths.get("path/to/file") or Paths.get("path", "to", "file") to construct the Path that describes where our file will be stored. Path is the Java NIO native way of expressing paths.

If we need to override the default file opening behavior, we can also provide OpenOptions to the write method.
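
For instance, a minimal sketch (reusing outputFile and dataForWriting from above) that appends to the file instead of truncating it:

Files.write(outputFile.toPath(), dataForWriting,
  StandardOpenOption.CREATE, StandardOpenOption.APPEND);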

4. Google Guava

Guava is a library by Google that provides a variety of types for performing common operations in Java, including IO.

Let's import Guava into our pom.xml file:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>30.0-jre</version>
</dependency>

4.1. Guava Files

As with the Java NIO package, we can write our byte[] in one line:

Files.write(dataForWriting, outputFile);

Guava's Files.write method uses the same defaults as java.nio.file.Files.write, creating the file or truncating it before writing.

There's a catch here though: The Guava Files.write method is marked with the @Beta annotation. According to the documentation, that means it can change at any time and so is not recommended for use in libraries.

So, if we're writing a library project, we should consider using a ByteSink.

4.2. ByteSink

We can also create a ByteSink to write our byte[]:

ByteSink byteSink = Files.asByteSink(outputFile);
byteSink.write(dataForWriting);

The ByteSink is a destination to which we can write bytes. It supplies an OutputStream to the destination.
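
If we'd rather manage the stream ourselves, ByteSink can also open it for us; a minimal sketch:

try (OutputStream outputStream = byteSink.openStream()) {
    outputStream.write(dataForWriting);
}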

If we need to use a java.nio.file.Path or to supply a special OpenOption, we can get our ByteSink using the MoreFiles class:

ByteSink byteSink = MoreFiles.asByteSink(outputFile.toPath(), 
    StandardOpenOption.CREATE, 
    StandardOpenOption.WRITE);
byteSink.write(dataForWriting);

5. Apache Commons IO

Apache Commons IO provides some common file tasks.

Let's import the commons-io dependency:

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>1.3.2</version>
</dependency>

Now, let's write our byte[] using the FileUtils class:

FileUtils.writeByteArrayToFile(outputFile, dataForWriting);

The FileUtils.writeByteArrayToFile method is similar to the other methods that we've used in that we give it a File representing our desired destination and the binary data we're writing. If our destination file or any of the parent directories don't exist, they'll be created.

6. Conclusion

In this short tutorial, we learned how to write binary data from a byte[] to a file using plain Java and two popular Java utility libraries: Google Guava and Apache Commons IO.

As always, the example code is available over on GitHub.


Configuring a Project to Exclude Certain Sonar Violations


1. Overview

During our builds, we can use various tools to report on the quality of our source code. One such tool is SonarQube, which performs static code analysis.

Sometimes we may disagree with the results returned. We may, therefore, wish to exclude some code that has been incorrectly flagged by SonarQube.

In this short tutorial, we'll look at how to disable Sonar checks. While it's possible to change the ruleset on the SonarQube's server, we'll focus only on how to control individual checks within the source code and configuration of our project.

2. Violation Example

Let's look at an example:

public void printStringToConsoleWithDate(String str) {
    System.out.println(LocalDateTime.now().toString() + " " + str);
}

By default, SonarQube reports this code as a Code Smell due to the java:S106 rule violation.

However, let's imagine that for this particular class, we've decided that logging with System.out is valid. Maybe this is a lightweight utility that will run in a container and does not need a whole logging library just to log to stdout.

We should note that it's also possible to mark a violation as a false-positive within the SonarQube user interface. However, if the code is analyzed on multiple servers, or if the line moves to another class after refactoring, then the violation will re-appear.

Sometimes we want to make our exclusions within the source code repository so that they persist.

So, let's see how we can exclude this code from the SonarQube report by configuring the project.

3. Using //NOSONAR

We can disable a single line of code by putting a //NOSONAR at the end:

System.out.println(
  LocalDateTime.now()
    .toString() + " " + str); //NOSONAR lightweight logging

The //NOSONAR tag at the end of the line suppresses all issues that might be raised on it. This approach works for most languages supported by SonarQube.

We're also allowed to put some additional comments after NOSONAR explaining why we have disabled the check.

Let's move forward and take a look at a Java-specific way to disable checks.

4. Using @SuppressWarnings

4.1. Annotating the Code

In Java, we can exclude Sonar checks using the built-in @SuppressWarnings annotation.

We can annotate the function:

@SuppressWarnings("java:S106")
public void printStringToConsoleWithDate(String str) {
    System.out.println(LocalDateTime.now().toString() + " " + str);
}

This works exactly the same way as suppressing compiler warnings. All we have to do is specify the rule identifier, in this case java:S106.

4.2. How to Get the Identifier

We can get the rule identifier from the SonarQube user interface. When we're looking at the violation, we can click Why is this an issue? to open the rule's definition, where the rule identifier appears in the top right corner.

5. Using sonar-project.properties

We can also define exclusion rules in the sonar-project.properties file using analysis properties.

Let's define and add the sonar-project.properties file to our resource dir:

sonar.issue.ignore.multicriteria=e1
sonar.issue.ignore.multicriteria.e1.ruleKey=java:S106
sonar.issue.ignore.multicriteria.e1.resourceKey=**/SonarExclude.java

We've just declared our very first multicriteria, named e1, which excludes the java:S106 rule for the SonarExclude class. Each definition pairs a rule identifier with a file matching pattern, set respectively in the ruleKey and resourceKey properties prefixed by the e1 name tag.

Using this approach, we can build a complex configuration that excludes particular rules across multiple files:

sonar.issue.ignore.multicriteria=e1,e2
# Console usage - ignore a single class
sonar.issue.ignore.multicriteria.e1.ruleKey=java:S106
sonar.issue.ignore.multicriteria.e1.resourceKey=**/SonarExclude.java
# Too many parameters - ignore the whole package
sonar.issue.ignore.multicriteria.e2.ruleKey=java:S107
sonar.issue.ignore.multicriteria.e2.resourceKey=com/baeldung/sonar/*.java

We've extended our configuration with a second definition named e2, and combined both rules in a single set by listing their names separated by a comma.

6. Disable Using Maven

All analysis properties can also be applied as Maven properties. A similar mechanism is available in Gradle.
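
For instance, a minimal Gradle sketch, assuming the org.sonarqube plugin is applied in build.gradle:

sonarqube {
    properties {
        property "sonar.issue.ignore.multicriteria", "e1"
        property "sonar.issue.ignore.multicriteria.e1.ruleKey", "java:S106"
        property "sonar.issue.ignore.multicriteria.e1.resourceKey", "**/SonarExclude.java"
    }
}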

6.1. Multicriteria in Maven

Returning to the example, let's modify our pom.xml:

<properties>
    <sonar.issue.ignore.multicriteria>e1</sonar.issue.ignore.multicriteria>
    <sonar.issue.ignore.multicriteria.e1.ruleKey>java:S106</sonar.issue.ignore.multicriteria.e1.ruleKey>
    <sonar.issue.ignore.multicriteria.e1.resourceKey>
      **/SonarExclude.java
    </sonar.issue.ignore.multicriteria.e1.resourceKey>
</properties>

This configuration works exactly the same as if it were used in a sonar-project.properties file.

6.2. Narrowing the Focus

Sometimes, an analyzed project may contain some generated code that we want to exclude and narrow the focus of SonarQube checks.

Let's exclude our class by defining sonar.exclusions in our pom.xml:

<properties>
    <sonar.exclusions>**/SonarExclude.java</sonar.exclusions>
</properties>

In that case, we've excluded a single file by its name. Checks will be performed for all files except that one.

We can also use file matching patterns. Let's exclude the whole package by defining:

<properties>
    <sonar.exclusions>com/baeldung/sonar/*.java</sonar.exclusions>
</properties>

On the other hand, by using the sonar.inclusions property, we can ask SonarQube to analyze only a particular subset of the project's files:

<properties>
    <sonar.inclusions>com/baeldung/sonar/*.java</sonar.inclusions>
</properties>

This snippet defines analysis only for Java files from the com.baeldung.sonar package.

Finally, we can also define the sonar.skip value:

<properties>
    <sonar.skip>true</sonar.skip>
</properties>

This excludes the whole Maven module from SonarQube checks.

7. Conclusion

In this article, we discussed different ways to suppress specific SonarQube checks on our code.

We started by excluding checks on individual lines. Then, we talked about the built-in @SuppressWarnings annotation and exclusion by a specific rule, which requires us to find the rule's identifier.

We also looked at configuring the analysis properties. We tried multicriteria and the sonar-project.properties file.

Finally, we moved our properties to the pom.xml and reviewed other ways to narrow the focus.


Difference Between COPY and ADD in a Dockerfile


1. Introduction

When creating Dockerfiles, it's often necessary to transfer files from the host system into the Docker image. These could be property files, native libraries, or other static content that our applications will require at runtime.

The Dockerfile specification provides two ways to copy files from the source system into an image: the COPY and ADD directives.

In this article, we'll look at the differences between them and when it makes sense to use each one.

2. Difference Between COPY and ADD

At first glance, the COPY and ADD directives look the same. They have the same syntax:

COPY <source> <destination>
ADD <source> <destination>

And both copy files from the host system to the Docker image.

So what's the difference? In short, the ADD directive is more capable than COPY.

While functionally similar, the ADD directive is more powerful in two ways:

  • It can handle remote URLs
  • It can auto-extract tar files

Let's look at these more closely.

First, the ADD directive can accept a remote URL for its source argument. The COPY directive, on the other hand, can only accept local files.

Note that using ADD to fetch and copy remote files is typically not ideal, because the downloaded file becomes part of an image layer and increases the overall Docker image size. Instead, we should use curl or wget to fetch remote files and remove them when no longer needed.
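
For example, a minimal sketch (the URL is hypothetical) that downloads, extracts, and cleans up an archive within a single RUN instruction, so the archive never persists in a layer:

RUN curl -sSL -o /tmp/app.tar.gz https://example.com/app.tar.gz \
  && tar -xzf /tmp/app.tar.gz -C /opt \
  && rm /tmp/app.tar.gz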

Second, the ADD directive will automatically expand tar files into the image file system. While this can reduce the number of Dockerfile steps required to build an image, it may not be desired in all cases.

Note that the auto-expansion only occurs when the source file is local to the host system.
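
A minimal sketch contrasting the two directives, assuming a local app.tar.gz in the build context:

# ADD auto-extracts the archive into the target directory
ADD app.tar.gz /opt/app/

# COPY transfers the archive as-is, without extracting it
COPY app.tar.gz /opt/app/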

3. When to Use ADD or COPY

According to the Dockerfile best practices guide, we should always prefer COPY over ADD unless we specifically need one of the two additional features of ADD.

As noted above, using ADD to copy remote files into a Docker image creates an extra layer and increases the file size. If we use wget or curl instead, we can remove the files afterward, and they don't remain a permanent part of the Docker image.

Additionally, since the ADD command automatically expands tar files and certain compressed formats, it can lead to unexpected files being written to the file system in our images.

4. Conclusion

In this quick tutorial, we've seen the two primary ways to copy files into a Docker image: ADD and COPY. While functionally similar, the COPY directive is preferred for most cases. This is because the ADD directive provides additional functionality that should be used with caution and only when needed.


Viewing Contents of a JAR File


1. Overview

In a previous tutorial, we learned how to get the names of the classes inside a JAR file in a Java application.

In this tutorial, we'll learn another way to list a JAR file's content from the command-line.

We'll also see several GUI tools for viewing more detailed contents of a JAR file — for example, the Java source code.

2. Example JAR File

In this tutorial, we'll again use the stripe-0.0.1-SNAPSHOT.jar file as an example for examining the contents of a JAR file.

3. Reviewing the jar Command

We've learned that we can use the jar command shipped with the JDK to check the content of a JAR file:

$ jar tf stripe-0.0.1-SNAPSHOT.jar 
META-INF/
META-INF/MANIFEST.MF
...
templates/result.html
templates/checkout.html
application.properties
com/baeldung/stripe/StripeApplication.class
com/baeldung/stripe/ChargeRequest.class
com/baeldung/stripe/StripeService.class
com/baeldung/stripe/ChargeRequest$Currency.class

If we want to filter the output to get only the information we want, for example, class names or properties files, we can pipe the output to filter tools such as grep.
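
For instance, a quick sketch that keeps only the compiled classes:

$ jar tf stripe-0.0.1-SNAPSHOT.jar | grep '\.class$'
com/baeldung/stripe/StripeApplication.class
com/baeldung/stripe/ChargeRequest.class
...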

The jar command is pretty convenient to use if our system has a JDK installed.

However, sometimes, we want to examine a JAR file's content on a system without a JDK installed. In this case, the jar command is not available.

We'll take a look at this next.

4. Using the unzip Command

JAR files are packaged in the ZIP file format. In other words, if a utility can read a ZIP file, we can use it to view a JAR file as well.

The unzip command is a commonly used utility for working with ZIP files from the Linux command-line.

Therefore, we can use the -l option of the unzip command to list the content of a JAR file without extracting it:

$ unzip -l stripe-0.0.1-SNAPSHOT.jar
Archive:  stripe-0.0.1-SNAPSHOT.jar
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2020-10-16 20:53   META-INF/
...
      137  2020-10-16 20:53   static/index.html
      677  2020-10-16 20:53   templates/result.html
     1323  2020-10-16 20:53   templates/checkout.html
       37  2020-10-16 20:53   application.properties
      715  2020-10-16 20:53   com/baeldung/stripe/StripeApplication.class
     3375  2020-10-16 20:53   com/baeldung/stripe/ChargeRequest.class
     2033  2020-10-16 20:53   com/baeldung/stripe/StripeService.class
     1146  2020-10-16 20:53   com/baeldung/stripe/ChargeRequest$Currency.class
     2510  2020-10-16 20:53   com/baeldung/stripe/ChargeController.class
     1304  2020-10-16 20:53   com/baeldung/stripe/CheckoutController.class
...
---------                     -------
    15394                     23 files

Thanks to the unzip command, we can view the content of a JAR file without the JDK.

The output above is pretty clear. It lists the files in the JAR file in a tabular format.

5. Exploring JAR Files Using GUI Utilities

Both the jar and the unzip commands are handy, but they only list the filenames in a JAR file.

Sometimes, we would like to know more information about files in the JAR file, for example, examining the Java source code of a class.

In this section, we'll introduce several platform-independent GUI tools to help us look at files inside a JAR file.

5.1. Using JD-GUI

First, let's have a look at JD-GUI.

JD-GUI is a nice open-source GUI utility for exploring Java source code decompiled by the Java decompiler JD-Core.

JD-GUI ships as a JAR file. We can start the utility using the java command with the -jar option, for instance:

$ java -jar jd-gui-1.6.6.jar

When we see the main window of JD-GUI, we can either open our JAR file by navigating the menu "File -> Open File…" or just drag and drop the JAR file into the window.

Once we open a JAR file, all the classes in the JAR file will be decompiled.

Then we can select the files we're interested in on the left side to examine their source code.

In the outline on the left side, the classes and the members of each class, such as methods and fields, are listed, just like we usually see in an IDE.

It's pretty handy to locate methods or fields, particularly when we need to check some classes with many lines of code.

When we click through different classes on the left side, each class will be opened in a tab on the right side.

The tab feature is helpful if we need to switch among several classes.

5.2. Using Jar Explorer

Jar Explorer is another open-source GUI tool for viewing the contents of JAR files. It ships as a JAR file with a start script, Jar Explorer.sh. It also supports drag-and-drop, making opening a JAR file pretty easy.

Another nice feature provided by Jar Explorer is that it supports three different Java decompilers: JD-Core, Procyon, and Fernflower.

We can switch among the decompilers as we examine source code.

Jar Explorer is pretty easy to use. The decompiler switching feature is nice, too. However, the outline on the left side stops at the class level.

Also, since Jar Explorer doesn't provide the tab feature, we can only open a single file at a time.

Moreover, every time we select a class on the left side, the class will be decompiled by the currently selected decompiler.

5.3. Using Luyten

Luyten is a nice open-source GUI utility for the Java decompiler Procyon. It provides downloads for different platforms, for example, in .exe and JAR formats.

Once we've downloaded the JAR file, we can start Luyten using the java -jar command:

$ java -jar luyten-0.5.4.jar 

We can drag and drop our JAR file into Luyten and explore the contents of the JAR file.

Using Luyten, we cannot choose among different Java decompilers, but it provides various options for decompiling. Also, we can open multiple files in tabs.

Apart from that, Luyten supports a nice theme system, and we can choose a comfortable theme while examining the source code.

However, Luyten lists the structure of the JAR file only to the file level.

6. Conclusion

In this article, we've learned how to list the files in a JAR file from the command-line. We've also seen three GUI utilities for viewing more detailed contents of a JAR file.

If we want to decompile the classes and examine the JAR file's source code, picking a GUI tool may be the most straightforward approach.


Scheduled WebSocket Push with Spring Boot


1. Overview

In this tutorial, we'll see how to send scheduled messages from a server to the browser using WebSockets. An alternative would be using Server-Sent Events (SSE), but we won't be covering that in this article.

Spring provides a variety of scheduling options. First, we'll be covering the @Scheduled annotation. Then, we'll see an example with the Flux::interval method provided by Project Reactor. This library is available out-of-the-box for WebFlux applications, and it can be used as a standalone library in any Java project.

Also, more advanced mechanisms exist, like the Quartz scheduler, but we won't be covering them.

2. A Simple Chat Application

In a previous article, we used WebSockets to build a chat application. Let's extend it with a new feature: chatbots. Those bots are the server-side components that push scheduled messages to the browser.

2.1. Maven Dependencies

Let's start by setting the necessary dependencies in Maven. To build this project, our pom.xml should have:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
</dependency>
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
</dependency>
<dependency>
    <groupId>com.github.javafaker</groupId>
    <artifactId>javafaker</artifactId>
    <version>1.0.2</version>
</dependency>
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
</dependency>

2.2. JavaFaker Dependency

We'll be using the JavaFaker library to generate our bots' messages. This library is often used to generate test data. Here, we'll add a guest named “Chuck Norris” to our chat room.

Let's see the code:

Faker faker = new Faker();
ChuckNorris chuckNorris = faker.chuckNorris();
String messageFromChuck = chuckNorris.fact();

The Faker provides factory methods for various data generators. We'll be using the ChuckNorris generator. A call to chuckNorris.fact() returns a random sentence from a list of predefined messages.

2.3. Data Model

The chat application uses a simple POJO as the message wrapper:

public class OutputMessage {
    private String from;
    private String text;
    private String time;
   // standard constructors, getters/setters, equals and hashcode
}

Putting it all together, here's an example of how we create a chat message:

OutputMessage message = new OutputMessage(
  "Chatbot 1", "Hello there!", new SimpleDateFormat("HH:mm").format(new Date()));

2.4. Client-Side

Our chat client is a simple HTML page. It uses a SockJS client and the STOMP message protocol.

Let's see how the client subscribes to a topic:

<html>
<head>
    <script src="./js/sockjs-0.3.4.js"></script>
    <script src="./js/stomp.js"></script>
    <script type="text/javascript">
        // ...
        stompClient = Stomp.over(socket);
	
        stompClient.connect({}, function(frame) {
            // ...
            stompClient.subscribe('/topic/pushmessages', function(messageOutput) {
                showMessageOutput(JSON.parse(messageOutput.body));
            });
        });
        // ...
    </script>
</head>
<!-- ... -->
</html>

First, we created a Stomp client over the SockJS protocol. Then, the topic subscription serves as the communication channel between the server and the connected clients.

In our repository, this code is in webapp/bots.html. We access it when running locally at http://localhost:8080/bots.html. Of course, we need to adjust the host and port depending on how we deploy the application.

2.5. Server-Side

We've seen how to configure WebSockets in Spring in a previous article. Let's modify that configuration a little bit:

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {
    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic");
        config.setApplicationDestinationPrefixes("/app");
    }
    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // ...
        registry.addEndpoint("/chatwithbots");
        registry.addEndpoint("/chatwithbots").withSockJS();
    }
}

To push our messages, we use the utility class SimpMessagingTemplate. By default, it's made available as a @Bean in the Spring Context. We can see how it's declared through autoconfiguration when the AbstractMessageBrokerConfiguration is in the classpath. Therefore, we can inject it in any Spring component.

Following that, we use it to publish messages to the topic /topic/pushmessages. We assume our class has that bean injected in a variable named simpMessagingTemplate:

simpMessagingTemplate.convertAndSend("/topic/pushmessages", 
  new OutputMessage("Chuck Norris", faker.chuckNorris().fact(), time));

As shown previously in our client-side example, the client subscribes to that topic to process messages as they arrive.

3. Scheduling Push Messages

In the Spring ecosystem, we can choose from a variety of scheduling methods. If we use Spring MVC, the @Scheduled annotation comes as a natural choice for its simplicity. If we use Spring WebFlux, we can also use Project Reactor's Flux::interval method. We'll see one example of each.

3.1. Configuration

Our chatbots will use the JavaFaker's Chuck Norris generator. We'll configure it as a bean so we can inject it where we need it.

@Configuration
class AppConfig {
    @Bean
    public ChuckNorris chuckNorris() {
        return (new Faker()).chuckNorris();
    }
}

3.2. Using @Scheduled

Our example bots are scheduled methods. When they run, they send our OutputMessage POJOs through a WebSocket using SimpMessagingTemplate.

As its name implies, the @Scheduled annotation allows the repeated execution of methods. With it, we can use simple rate-based scheduling or more complex “cron” expressions.

Let's code our first chatbot:

@Service
public class ScheduledPushMessages {

    private final SimpMessagingTemplate simpMessagingTemplate;
    private final ChuckNorris chuckNorris;

    public ScheduledPushMessages(SimpMessagingTemplate simpMessagingTemplate, ChuckNorris chuckNorris) {
        this.simpMessagingTemplate = simpMessagingTemplate;
        this.chuckNorris = chuckNorris;
    }

    @Scheduled(fixedRate = 5000)
    public void sendMessage() {
        String time = new SimpleDateFormat("HH:mm").format(new Date());
        simpMessagingTemplate.convertAndSend("/topic/pushmessages",
          new OutputMessage("Chuck Norris (@Scheduled)", chuckNorris.fact(), time));
    }
}

We annotate the sendMessage method with @Scheduled(fixedRate = 5000). This makes sendMessage run every five seconds. Then, we use the simpMessagingTemplate instance to send an OutputMessage to the topic. Since Spring requires @Scheduled methods to take no arguments, the simpMessagingTemplate and chuckNorris instances are injected through the constructor.

3.3. Using Flux::interval()

If we use WebFlux, we can use the Flux::interval operator. It will publish an infinite stream of Long items separated by a chosen Duration.

Now, let's use Flux with our previous example. The goal will be to send a quote from Chuck Norris every five seconds. First, we need to implement the InitializingBean interface to subscribe to the Flux at application startup:

@Service
public class ReactiveScheduledPushMessages implements InitializingBean {
    private SimpMessagingTemplate simpMessagingTemplate;
    private ChuckNorris chuckNorris;
    @Autowired
    public ReactiveScheduledPushMessages(SimpMessagingTemplate simpMessagingTemplate, ChuckNorris chuckNorris) {
        this.simpMessagingTemplate = simpMessagingTemplate;
        this.chuckNorris = chuckNorris;
    }
    @Override
    public void afterPropertiesSet() throws Exception {
        Flux.interval(Duration.ofSeconds(5L))
            // discard the incoming Long, replace it by an OutputMessage
            .map((n) -> new OutputMessage("Chuck Norris (Flux::interval)", 
                              chuckNorris.fact(), 
                              new SimpleDateFormat("HH:mm").format(new Date()))) 
            .subscribe(message -> simpMessagingTemplate.convertAndSend("/topic/pushmessages", message));
    }
}

Here, we use constructor injection to set the simpMessagingTemplate and chuckNorris instances. This time, the scheduling logic is in afterPropertiesSet(), which we override when implementing InitializingBean. The method will run as soon as the service starts up.

The interval operator emits a Long every five seconds. Then, the map operator discards that value and replaces it with our message. Finally, we subscribe to the Flux to trigger our logic for each message.

4. Conclusion

In this tutorial, we've seen that the utility class SimpMessagingTemplate makes it easy to push server messages through a WebSocket. In addition, we've seen two ways of scheduling the execution of a piece of code.

As always, the source code for the examples is available over on GitHub.


Java Weekly, Issue 364


1. Spring and Java

>> HotSpot Intrinsics [alidg.me]

What you see isn't what you get: an introduction to how compiler intrinsics work on the HotSpot JVM!

>> Smaller, Faster-starting Container Images With jlink and AppCDS [morling.dev]

Application Class Data Sharing or AppCDS meets jlink: faster startup times with AppCDS in custom runtime images!

>> Announcing gRPC Kotlin 1.0 for Android and Cloud [developers.googleblog.com]

Better asynchrony with gRPC and coroutines: high-performance RPC framework with Kotlin, gRPC, and of course, CSP style concurrency.


2. Technical

>> How Netflix Scales its API with GraphQL Federation [netflixtechblog.com]

Flexible and complex schema, observability, and security: GraphQL federation at Netflix scale!


3. Musings

>> Apple's M1 Chip Benchmarks focused on the real-world programming [tech.ssut.me]

ARM vs x86: Apple's M1 chip shines on some famous benchmark suites for Java, JavaScript, Python, and Go!


4. Comics

And my favorite Dilberts of the week:

>> Pick Midpoint [dilbert.com]

>> Assigning Dilbert To Project [dilbert.com]

>> Ted Reimagined More [dilbert.com]

5. Pick of the Week

And a view from the “other side”:

>> How to hire a programmer to make your ideas happen [sive.rs]


Get list of JSON objects with Spring RestTemplate


1. Overview

Our services often have to communicate with other REST services in order to fetch information.

In Spring, we can use RestTemplate to perform synchronous HTTP requests. The data is usually returned as JSON, and RestTemplate can convert it for us.

In this tutorial, we're going to explore how we can convert a JSON array into three different object structures in Java: an array of Object, an array of POJO, and a List of POJO.

2. JSON, POJO and Service

Let's imagine that we have an endpoint http://localhost:8080/users returning a list of users as the following JSON:

[{
  "id": 1,
  "name": "user1"
}, {
  "id": 2,
  "name": "user2"
}]

We'll require the corresponding User class to process data:

public class User {
    private int id;
    private String name;
    // getters and setters..
}

For our interface implementation, we write a UserConsumerServiceImpl with RestTemplate as its dependency:

public class UserConsumerServiceImpl implements UserConsumerService {
    private final RestTemplate restTemplate;
    public UserConsumerServiceImpl(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }
...
}

3. Mapping a List of JSON objects

When the response to a REST request is a JSON array, there are a few ways we can convert it to a Java collection. Let's look at the options and see how easily they allow us to process the data that is returned. We'll look at extracting the usernames of some user objects returned by a REST service.

3.1. RestTemplate with Object Array

First, let's make the call with RestTemplate.getForEntity and use a ResponseEntity of type Object[] to collect the response:

ResponseEntity<Object[]> responseEntity =
   restTemplate.getForEntity(BASE_URL, Object[].class);

Next, we can extract the body into our array of Object:

Object[] objects = responseEntity.getBody();

The actual Object here is just some arbitrary structure that contains our data, but doesn't use our User type. Let's convert it into our User objects.

For this, we'll need an ObjectMapper:

ObjectMapper mapper = new ObjectMapper();

We can declare it inline, though this is usually done as a private static final member of the class.

Lastly, we're ready to extract the usernames:

return Arrays.stream(objects)
  .map(object -> mapper.convertValue(object, User.class))
  .map(User::getName)
  .collect(Collectors.toList());

With this method, we can essentially read an array of anything into an Object array in Java. This can be handy if we only wanted to count the results, for instance. However, it doesn't lend itself well to further processing. We had to put extra effort into converting it to a type we could work with.

The Jackson deserializer actually deserializes the JSON into a series of LinkedHashMap objects when we ask it to produce Object as the target type. Post-processing with convertValue is an inefficient overhead.

We can avoid it if we provide our desired type to Jackson in the first place.

3.2. RestTemplate with User Array

We can provide User[] to RestTemplate instead of Object[]:

ResponseEntity<User[]> responseEntity =
  restTemplate.getForEntity(BASE_URL, User[].class);
User[] userArray = responseEntity.getBody();
return Arrays.stream(userArray)
  .map(User::getName)
  .collect(Collectors.toList());

We can see that we no longer need the ObjectMapper.convertValue. The ResponseEntity has User objects inside it. However, we still need some extra conversions to use the Java Stream API and for our code to work with a List.

3.3. RestTemplate with User List and ParameterizedTypeReference

If we need the convenience of Jackson producing a List of Users instead of an array, we need to describe the List we want to create. To do this, we have to use RestTemplate.exchange. This method takes a ParameterizedTypeReference produced by an anonymous inner class:

ResponseEntity<List<User>> responseEntity = 
  restTemplate.exchange(
    BASE_URL,
    HttpMethod.GET,
    null,
    new ParameterizedTypeReference<List<User>>() {}
  );
List<User> users = responseEntity.getBody();
return users.stream()
  .map(User::getName)
  .collect(Collectors.toList());

This produces the List that we want to use.

Let's have a closer look at why we need to use the ParameterizedTypeReference.

In the first two examples, Spring can easily deserialize the JSON into the User.class type token, where the type information is fully available at runtime.

With generics, however, type erasure occurs if we try to use List<User>.class. So, Jackson would not be able to determine the type inside the <>.

We can overcome this by using a super type token called ParameterizedTypeReference. Instantiating it as an anonymous inner class – new ParameterizedTypeReference<List<User>>() {} – exploits the fact that subclasses of generic classes contain compile-time type information that is not subject to type erasure and can be consumed through reflection.

4. Summary

In this article, we saw three different ways of processing JSON objects using RestTemplate. We saw how to specify the types of arrays of Object and our own custom classes.

Then, we learned how to provide the type information to produce a List by using the ParameterizedTypeReference.

As always, the code for this article is available over on GitHub.


Unmarshalling a JSON Array Using camel-jackson


1. Overview

Apache Camel is a powerful open-source integration framework implementing a number of the known Enterprise Integration Patterns.

Typically when working with message routing using Camel, we'll want to use one of the many supported pluggable data formats. Given that JSON is popular in most modern APIs and data services, it becomes an obvious choice.

In this tutorial, we'll take a look at a couple of ways we can unmarshal a JSON Array into a list of Java objects using the camel-jackson component.

2. Dependencies

First, let's add the camel-jackson dependency to our pom.xml:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson</artifactId>
    <version>3.6.0</version>
</dependency>

Then, we'll also add the camel-test dependency specifically for our unit tests, which is available from Maven Central as well:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-test</artifactId>
    <version>3.6.0</version>
</dependency>

3. Fruit Domain Classes

Throughout this tutorial, we'll use a couple of light POJO objects to model our fruit domain.

Let's go ahead and define a class with an id and a name to represent a fruit:

public class Fruit {
    private String name;
    private int id;
    // standard getter and setters
}

Next, we'll define a container to hold a list of Fruit objects:

public class FruitList {
    private List<Fruit> fruits;
    public List<Fruit> getFruits() {
        return fruits;
    }
    public void setFruits(List<Fruit> fruits) {
        this.fruits = fruits;
    }
}

In the next couple of sections, we'll see how to unmarshal a JSON string representing a list of fruits into these domain classes. Ultimately what we are looking for is a variable of type List<Fruit> that we can work with.

4. Unmarshalling a JSON FruitList

In this first example, we're going to represent a simple list of fruit using JSON format:

{
    "fruits": [
        {
            "id": 100,
            "name": "Banana"
        },
        {
            "id": 101,
            "name": "Apple"
        }
    ]
}

Above all, we should emphasize that this JSON represents an object containing a property called fruits, which holds our array.

Now let's set up our Apache Camel route to perform the deserialization:

@Override
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("direct:jsonInput")
              .unmarshal(new JacksonDataFormat(FruitList.class))
              .to("mock:marshalledObject");
        }
    };
}

In this example, we use a direct endpoint named jsonInput. Next, we call the unmarshal method, which unmarshals the message body on our Camel exchange using the specified data format.

We're using the JacksonDataFormat class with a custom unmarshal type of FruitList. This is essentially a simple wrapper around the Jackson ObjectMapper that lets us marshal to and from JSON.

Finally, we send the result of the unmarshal method to a mock endpoint called marshalledObject. As we're going to see, this is how we'll test our route to see if it is working correctly.

With that in mind, let's go ahead and write our first unit test:

public class FruitListJacksonUnmarshalUnitTest extends CamelTestSupport {
    @Test
    public void givenJsonFruitList_whenUnmarshalled_thenSuccess() throws Exception {
        MockEndpoint mock = getMockEndpoint("mock:marshalledObject");
        mock.expectedMessageCount(1);
        mock.message(0).body().isInstanceOf(FruitList.class);
        String json = readJsonFromFile("/json/fruit-list.json");
        template.sendBody("direct:jsonInput", json);
        assertMockEndpointsSatisfied();
        FruitList fruitList = mock.getReceivedExchanges().get(0).getIn().getBody(FruitList.class);
        assertNotNull("Fruit lists should not be null", fruitList);
        List<Fruit> fruits = fruitList.getFruits();
        assertEquals("There should be two fruits", 2, fruits.size());
        Fruit fruit = fruits.get(0);
        assertEquals("Fruit name", "Banana", fruit.getName());
        assertEquals("Fruit id", 100, fruit.getId());
        fruit = fruits.get(1);
        assertEquals("Fruit name", "Apple", fruit.getName());
        assertEquals("Fruit id", 101, fruit.getId());
    }
}

Let's walk through the key parts of our test to understand what is going on:

  • First things first, we start by extending the CamelTestSupport class – a useful testing utility base class
  • Then we set up our test expectations. Our mock variable should have one message, and the message type should be a FruitList
  • Now we're ready to send our JSON input file as a String to the direct endpoint we defined earlier
  • After we check that our mock expectations have been satisfied, we are free to retrieve the FruitList and check that the contents are as expected

This test confirms that our route is working properly and our JSON is being unmarshalled as expected. Awesome!

5. Unmarshalling a JSON Fruit Array

On the other hand, we may want to avoid using a container object to hold our Fruit objects. Instead, we can modify our JSON to hold the fruit array directly:

[
    {
        "id": 100,
        "name": "Banana"
    },
    {
        "id": 101,
        "name": "Apple"
    }
]

This time around, our route is almost identical, but we set it up to work specifically with a JSON array:

@Override
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("direct:jsonInput")
              .unmarshal(new ListJacksonDataFormat(Fruit.class))
              .to("mock:marshalledObject");
        }
    };
}

As we can see, the only difference to our previous example is that we're using the ListJacksonDataFormat class with a custom unmarshal type of Fruit. This is a Jackson data format type prepared directly to work with lists.

Likewise, our unit test is very similar:

@Test
public void givenJsonFruitArray_whenUnmarshalled_thenSuccess() throws Exception {
    MockEndpoint mock = getMockEndpoint("mock:marshalledObject");
    mock.expectedMessageCount(1);
    mock.message(0).body().isInstanceOf(List.class);
    String json = readJsonFromFile("/json/fruit-array.json");
    template.sendBody("direct:jsonInput", json);
    assertMockEndpointsSatisfied();
    @SuppressWarnings("unchecked")
    List<Fruit> fruitList = mock.getReceivedExchanges().get(0).getIn().getBody(List.class);
    assertNotNull("Fruit lists should not be null", fruitList);
    // more standard assertions
}

However, there are two subtle differences with respect to the test we saw in the previous section:

  • We're first setting up our mock expectation to contain a body with a List.class directly
  • When we retrieve the message body as a List.class, we'll get a standard warning about type safety – hence the use of @SuppressWarnings("unchecked")

6. Conclusion

In this short article, we've seen two simple approaches for unmarshalling JSON arrays using camel message routing and the camel-jackson component.

As always, the full source code of the article is available over on GitHub.


Redis vs MongoDB


1. Overview

Often, we find it challenging to decide on a non-relational database as a primary data store for our applications.

In this article, we'll explore two popular non-relational databases, Redis and MongoDB.

First, we'll take a quick look at the features offered by Redis and MongoDB. Then, we'll discuss when to use Redis or MongoDB by comparing them against each other.

2. Redis

Redis is an in-memory data structure store that offers a rich set of features. It's useful as a cache, message broker, and queue.

2.1. Features

  • In-memory data structure store supporting strings, hashes, lists, sets, and sorted sets
  • Key expiration via commands like EXPIRE, EXPIREAT, and PEXPIRE
  • Pub/sub messaging with pattern matching
  • Optional persistence of data to disk
  • Geospatial indexes through geo commands like GEOADD, GEOPOS, and GEORADIUS
  • Spring Data support
  • Runs on ARM processors and the Raspberry Pi

2.2. Installation

We can download the latest Redis server from the official website and install it:

$ wget http://download.redis.io/releases/redis-6.0.9.tar.gz
$ tar xzf redis-6.0.9.tar.gz
$ cd redis-6.0.9
$ make

3. MongoDB

MongoDB is a NoSQL document database that stores information in a JSON-like document structure. It's useful as a schemaless data store for rapidly changing applications, prototyping, and startups in the design and implementation phase.

3.1. Features

  • Offers an interactive command-line interface MongoDB Shell (mongosh) to perform administrative operations and query/update data
  • JSON-based query structure with support for joins
  • Supports various types of searches like geo-based search, graph search, and text search
  • Supports multi-document ACID transactions
  • Spring Data support
  • Available in community, enterprise, and cloud (MongoDB Atlas) editions
  • Various drivers for major technologies like C++, Java, Go, Python, Rust, and Scala
  • Provides GUI to explore and manipulate data through MongoDB Compass
  • Offers a visual representation of data using MongoDB Charts
  • MongoDB BI Connector provides connections to BI and analytics platforms

3.2. Installation

We can download the latest MongoDB server or, if using macOS, we can install the community edition directly using Homebrew:

brew tap mongodb/brew
brew install mongodb-community@4.4

4. When to Use Redis?

4.1. Caching

Redis provides best-in-class caching performance by providing sub-millisecond response time on frequently requested items.

Furthermore, it allows setting expiration time on keys using commands like EXPIRE, EXPIREAT, and PEXPIRE.

At the same time, we can use the PERSIST command to remove the timeout and persist the key-value pair, making it ideal for caching.
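
A quick redis-cli sketch of this expiration workflow (user:42 is a hypothetical key):

127.0.0.1:6379> SET user:42 "cached-profile"
OK
127.0.0.1:6379> EXPIRE user:42 60
(integer) 1
127.0.0.1:6379> PERSIST user:42
(integer) 1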

4.2. Flexible Data Storage

Redis provides various data structures like string, list, set, and hash to decide how to store and organize our data. Hence, Redis gives us full freedom over the implementation of the database structures.

However, it may also require a long time to think through the DB design. Similarly, it can be challenging to build and maintain the inner structure of the schema using Redis.

4.3. Complex Data Storage

Similarly, with the combination of the list, set, and hash, we can implement complex data structures like queues, arrays, sorted sets, and graphs for our storage.

4.4. Chat, Queue, and Message Broker

Redis can publish and subscribe to messages using pub/sub message queues with pattern matching. Thus, Redis can support real-time chat and social-media feed applications.

Similarly, we can implement a lightweight queue using the list data structure. Furthermore, Redis's list supports atomic operations and offers blocking capabilities, making it suitable for implementing a message broker.
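
As a sketch, a producer and a consumer can share a list-backed queue using LPUSH and the blocking BRPOP (jobs is a hypothetical key; the consumer waits up to five seconds for an element):

127.0.0.1:6379> LPUSH jobs "job:1"
(integer) 1
127.0.0.1:6379> BRPOP jobs 5
1) "jobs"
2) "job:1"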

4.5. Session Store

Redis provides an in-memory data store with persistence capabilities, making it a good candidate to store and manage sessions for web/mobile applications.

4.6. IoT and Embedded Systems

As per Redis's official documentation, newer versions, starting with 4 and 5, support the ARM processor and the Raspberry Pi.

Also, it runs on Android, and efforts are in place to include Android as an officially supported platform.

So, Redis looks ideal for IoT and embedded systems, benefiting from its small memory footprint and low CPU requirements.

4.7. Real-Time Processing

As a blazing-fast in-memory data store, Redis is a natural fit for real-time processing applications.

For instance, Redis can efficiently serve applications that offer features like stock price alerts, leaderboards, and real-time analytics.

4.8. Geospatial Apps

Redis offers a purpose-built in-memory data structure, the Geo Set (built on the sorted set), for managing geospatial indexes. Also, it provides specific geo commands like GEOADD, GEOPOS, and GEORADIUS to add, read, and analyze geospatial data.

Therefore, we can build real-time geospatial applications with location-based features like drive time and drive distance using Redis.
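
As a sketch, borrowing the coordinates from the Redis documentation examples, we can index two hypothetical drivers and find those within 200 km of a point:

127.0.0.1:6379> GEOADD drivers 13.361389 38.115556 "driver:1"
(integer) 1
127.0.0.1:6379> GEOADD drivers 15.087269 37.502669 "driver:2"
(integer) 1
127.0.0.1:6379> GEORADIUS drivers 15 37 200 km
1) "driver:1"
2) "driver:2"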

5. When to Use MongoDB?

5.1. Dynamic Queries

MongoDB offers a powerful set of query tools. Also, it provides a wide range of flexible query schemes like geo-based search, graph search, and text search for efficient data retrieval.

At the same time, with the support of JSON-structured queries, MongoDB looks to be a better choice for scenarios where data search and analytics are daily activities.

5.2. Rapidly Changing Schema

MongoDB can be helpful in the design and early implementation phases, where we require quick changes to our schema. At the same time, it doesn't make assumptions on the underlying data, and it optimizes itself without needing a schema.

5.3. Prototyping and Hackathons

By following the JSON-like document structure, MongoDB allows for rapid prototyping, quick integrations with front-end channels, and hackathons.

At the same time, it can be useful for junior teams that don't want to deal with the complexities of an RDBMS.

5.4. Catalogs

By providing a dynamic schema that is self-describing, MongoDB makes it easier to add products, features, and recommendations for catalogs like e-commerce, asset management, and inventory.

We can also use expressive queries in MongoDB for features like advanced search and analytics by indexing a field or set of fields of the JSON-structured document.

5.5. Mobile Apps

MongoDB's JSON document structure allows storing different types of data from various devices along with geospatial indexes.

Besides, horizontal scalability with native sharding allows easy scaling of a mobile app. Therefore, MongoDB can serve tons of users, process petabytes of data, and support hundreds of thousands of operations per second, making it a worthy choice for backing mobile apps.

5.6. Content-Rich Apps

It's not easy to incorporate various content in RDBMS for modern content-rich apps. On the other hand, MongoDB allows storing and serving rich content like text, audio, and video.

Also, we can store files larger than 16MB efficiently using MongoDB GridFS. It allows accessing a portion of large files without loading the entire file into memory.

Additionally, it automatically syncs our files and metadata across all servers. As a result, MongoDB looks to be a more suitable choice for supporting content-rich apps.

5.7. Gaming Apps

Similar to mobile and content-rich apps, gaming also requires massive scaling and dynamic data structures. Thus, MongoDB can be a promising choice for gaming apps.

5.8. Global Cloud Database Service

MongoDB Atlas is available across multiple cloud services like AWS, Google Cloud, and Azure. In addition, with built-in replication and failover mechanism, it offers a highly available distributed system. Therefore, we can quickly deploy and manage the database and use it as a global cloud database service.

6. Conclusion

In this article, we explored Redis and MongoDB as choices for a non-relational database.

First, we looked at the features offered by both databases. Then, we explored scenarios where one of them is better than the other.

We can certainly conclude Redis looks promising as a better solution for caching, message broker, and queue. At the same time, it can prove worthy in real-time processing, geospatial apps, and embedded systems.

On the other hand, MongoDB is a solid choice for storing JSON-like objects. As a result, MongoDB can be best suited for schema-less architecture for prototyping, modern-day content-rich, mobile, and gaming applications.


Java 14 – New Features


1. Overview

Java 14 was released on March 17, 2020, exactly six months after its previous version, as per Java's new release cadence.

In this tutorial, we'll look at a summary of new and deprecated features of version 14 of the language.

We also have more detailed articles on Java 14 that offer an in-depth view of the new features.

2. Features Carried Over From Earlier Versions

A few features have been carried over in Java 14 from the previous version. Let's look at them one by one.

2.1. Switch Expressions (JEP 361)

These were first introduced as a preview feature in JDK 12, and even in Java 13, they continued as preview features only. But now, switch expressions have been standardized so that they are part and parcel of the development kit.

What this effectively means is that this feature can now be used in production code, and not just in the preview mode to be experimented with by developers.

As a simple example, let's consider a scenario where we'd designate days of the week as either weekday or weekend.

Prior to this enhancement, we'd have written it as:

boolean isTodayHoliday;
switch (day) {
    case "MONDAY":
    case "TUESDAY":
    case "WEDNESDAY":
    case "THURSDAY":
    case "FRIDAY":
        isTodayHoliday = false;
        break;
    case "SATURDAY":
    case "SUNDAY":
        isTodayHoliday = true;
        break;
    default:
        throw new IllegalArgumentException("What's a " + day);
}

With switch expressions, we can write the same thing more succinctly:

boolean isTodayHoliday = switch (day) {
    case "MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY" -> false;
    case "SATURDAY", "SUNDAY" -> true;
    default -> throw new IllegalArgumentException("What's a " + day);
};

2.2. Text Blocks (JEP 368)

Text blocks continue their journey to getting a mainstream upgrade and are still available as preview features.

In addition to the capabilities from JDK 13 to make multiline strings easier to use, in their second preview, text blocks now have two new escape sequences:

  • \: to indicate the end of the line, so that a new line character is not introduced
  • \s: to indicate a single space

For example:

String multiline = "A quick brown fox jumps over a lazy dog; the lazy dog howls loudly.";

can now be written as:

String multiline = """
    A quick brown fox jumps over a lazy dog; \
    the lazy dog howls loudly.""";

This improves the readability of the sentence to the human eye but does not add a new line after dog;.
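
Similarly, a small sketch of the \s escape, mirroring the example from JEP 368: the translated space acts as a fence against stripping, so each line below keeps its trailing spaces and ends up exactly six characters long:

String colors = """
    red  \s
    green\s
    blue \s
    """;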

3. New Preview Features

3.1. Pattern Matching for instanceof (JEP 305)

JDK 14 has introduced pattern matching for instanceof with the aim of eliminating boilerplate code and making the developer's life a little easier.

To understand this, let's consider a simple example.

Before this feature, we wrote:

if (obj instanceof String) {
    String str = (String) obj;
    int len = str.length();
    // ...
}

Now, we don't need as much code to start using obj as String:

if (obj instanceof String str) {
    int len = str.length();
    // ...
}

In future releases, Java is going to come up with pattern matching for other constructs such as switch.

3.2. Records (JEP 359)

Records were introduced to reduce repetitive boilerplate code in data model POJOs. They simplify day-to-day development, improve efficiency, and greatly minimize the risk of human error.

For example, a data model for a User with an id and password can be simply defined as:

public record User(int id, String password) { }

As we can see, we are making use of a new keyword, record, here. This simple declaration will automatically add a constructor, getters, equals, hashCode and toString methods for us.

Let's see this in action with a JUnit:

private User user1 = new User(0, "UserOne");
@Test
public void givenRecord_whenObjInitialized_thenValuesCanBeFetchedWithGetters() {
    assertEquals(0, user1.id());
    assertEquals("UserOne", user1.password());
}
@Test
public void whenRecord_thenEqualsImplemented() {
    User user2 = new User(0, "UserOne");
    assertEquals(user1, user2);
}
@Test
public void whenRecord_thenToStringImplemented() {
    assertTrue(user1.toString().contains("UserOne"));
}

4. New Production Features

Along with the two new preview features, Java 14 has also shipped a concrete production-ready one.

4.1. Helpful NullPointerExceptions (JEP 358)

Previously, the stack trace for a NullPointerException didn't have much of a story to tell except that some value was null at a given line in a given file.

Though useful, this information only suggested a line to debug instead of painting the whole picture for a developer to understand, just by looking at the log.

Now Java has made this easier by adding the capability to point out what exactly was null in a given line of code.

For example, consider this simple snippet:

int[] arr = null;
arr[0] = 1;

Earlier, on running this code, the log would say:

Exception in thread "main" java.lang.NullPointerException
at com.baeldung.MyClass.main(MyClass.java:27)

But now, given the same scenario, the log might say:

java.lang.NullPointerException: Cannot store to int array because "arr" is null

As we can see, now we know precisely which variable caused the exception.
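One caveat: in JDK 14, these detailed messages are disabled by default, so we have to opt in with a JVM flag when running our code (here, MyClass simply stands in for whichever class contains the snippet above):

java -XX:+ShowCodeDetailsInExceptionMessages MyClass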

5. Incubating Features

These are the non-final APIs and tools that the Java team comes up with, and provides us for experimentation. They are different from preview features and are provided as separate modules in the package jdk.incubator.

5.1. Foreign Memory Access API (JEP 370)

This is a new API to allow Java programs to access foreign memory, such as native memory, outside the heap in a safe and efficient manner.

Many Java libraries, such as MapDB and memcached, already access foreign memory, and it was high time the Java API itself offered a cleaner solution. With this intention, the team came up with this JEP as an alternative to the existing ways to access non-heap memory – the ByteBuffer API and the sun.misc.Unsafe API.

Built upon three main abstractions of MemorySegment, MemoryAddress and MemoryLayout, this API is a safe way to access both heap and non-heap memory.
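To get a feel for these abstractions, here's a minimal sketch using the JDK 14 incubator API (MemorySegment, MemoryAddress, and MemoryHandles live in the jdk.incubator.foreign module, and VarHandle in java.lang.invoke); it assumes we run with that module added, and the exact method names may change in later releases:

// a VarHandle that views memory as int values in native byte order
VarHandle intHandle = MemoryHandles.varHandle(int.class, ByteOrder.nativeOrder());

// allocate 100 bytes off-heap; try-with-resources frees the segment deterministically
try (MemorySegment segment = MemorySegment.allocateNative(100)) {
    MemoryAddress base = segment.baseAddress();
    for (int i = 0; i < 25; i++) {
        intHandle.set(base.addOffset(i * 4), i); // store 25 ints into native memory
    }
}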

5.2. Packaging Tool (JEP 343)

Traditionally, to deliver Java code, an application developer would simply send out a JAR file that the user was supposed to run inside their own JVM.

However, users rather expected an installer that they'd double click to install the package on their native platform, such as Windows or macOS.

This JEP aims to do precisely that. Developers can use jlink to condense the JDK down to the minimum required modules, and then use this packaging tool to create a lightweight image that can be installed as an exe on Windows or a dmg on macOS.
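As a rough sketch, assuming our build has already produced mydemo.jar under target (both names are hypothetical), a jpackage invocation could look like:

jpackage --name mydemo --input target --main-jar mydemo.jar --type dmg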

6. JVM/HotSpot Features

6.1. ZGC on Windows (JEP 365) and macOS (JEP 364) – Experimental

The Z Garbage Collector, a scalable, low-latency garbage collector, was first introduced in Java 11 as an experimental feature. But initially, the only supported platform was Linux/x64.

After receiving positive feedback on ZGC for Linux, Java 14 has ported its support to Windows and macOS as well. Though still an experimental feature, it's all set to become production-ready in the next JDK release.

6.2. NUMA-Aware Memory Allocation for G1 (JEP 345)

Non-uniform memory access (NUMA) was not implemented so far for the G1 garbage collector, unlike the Parallel collector.

Looking at the performance improvement that it offers to run a single JVM across multiple sockets, this JEP was introduced to make the G1 collector NUMA-aware as well.

At this point, there's no plan to replicate the same to other HotSpot collectors.

6.3. JFR Event Streaming (JEP 349)

With this enhancement, JDK's flight recorder data is now exposed so that it can be continuously monitored. This involves modifications to the package jdk.jfr.consumer so that users can now read or stream the recording data directly.

7. Deprecated or Removed Features

Java 14 has deprecated a couple of features:

  • Solaris and SPARC Ports (JEP 362) – because this Unix operating system and RISC processor have not been in active development for the past few years
  • ParallelScavenge + SerialOld GC Combination (JEP 366) – since this is a rarely used combination of GC algorithms, and requires significant maintenance effort

There are a couple of removals as well:

  • Concurrent Mark Sweep (CMS) Garbage Collector (JEP 363) – deprecated by Java 9, this GC has been succeeded by G1 as the default GC. Also, there are other more performant alternatives to use now, such as ZGC and Shenandoah, hence the removal
  • Pack200 Tools and API (JEP 367) – these were deprecated for removal in Java 11, and now removed

8. Conclusion

In this tutorial, we looked at the various JEPs of Java 14.

In all, there are 16 major features in this release of the language, including preview features, incubators, deprecations, and removals. We looked at all of them one by one, illustrating the language features with examples.

As always, the source code is available over on GitHub.

The post Java 14 - New Features first appeared on Baeldung.

        

Behavioral Patterns in Core Java


1. Introduction

Recently we looked at Creational Design Patterns and where to find them within the JVM and other core libraries. Now we're going to look at Behavioral Design Patterns. These focus on how our objects interact with each other or how we interact with them.

2. Chain of Responsibility

The Chain of Responsibility pattern allows for objects to implement a common interface and for each implementation to delegate on to the next one if appropriate. This then allows us to build a chain of implementations, where each one performs some actions before or after the call to the next element in the chain:

interface ChainOfResponsibility {
    void perform();
}
class LoggingChain implements ChainOfResponsibility {
    private ChainOfResponsibility delegate;
    public void perform() {
        System.out.println("Starting chain");
        delegate.perform();
        System.out.println("Ending chain");
    }
}

Here we can see an example where our implementation prints out before and after the delegate call.

We aren't required to call on to the delegate. We could decide that we shouldn't do so and instead terminate the chain early. For example, if there were some input parameters, we could have validated them and terminated early if they were invalid.
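A minimal sketch of that early-termination idea, with a made-up validation rule:

class ValidatingChain implements ChainOfResponsibility {
    private ChainOfResponsibility delegate;
    private String input;

    public void perform() {
        if (input == null || input.isEmpty()) {
            System.out.println("Invalid input, stopping chain"); // terminate early
            return;
        }
        delegate.perform(); // otherwise pass control down the chain
    }
}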

2.1. Examples in the JVM

Servlet Filters are an example from the JEE ecosystem that works in this way. A single instance receives the servlet request and response, and a FilterChain instance represents the entire chain of filters. Each one should then perform its work and then either terminate the chain or else call chain.doFilter() to pass control on to the next filter:

public class AuthenticatingFilter implements Filter {
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) 
      throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        if (!"MyAuthToken".equals(httpRequest.getHeader("X-Auth-Token")) {
             return;
        }
        chain.doFilter(request, response);
    }
}

3. Command

The Command pattern allows us to encapsulate some concrete behaviors – or commands – behind a common interface, such that they can be correctly triggered at runtime.

Typically we'll have a Command interface, a Receiver instance that receives the command instance, and an Invoker that is responsible for calling the correct command instance. We can then define different instances of our Command interface to perform different actions on the receiver:

interface DoorCommand {
    void perform(Door door);
}
class OpenDoorCommand implements DoorCommand {
    public void perform(Door door) {
        door.setState("open");
    }
}

Here, we have a command implementation that will take a Door as the receiver and will cause the door to become “open”. Our invoker can then call this command when it wishes to open a given door, and the command encapsulates how to do this.

In the future, we might need to change our OpenDoorCommand to check that the door isn't locked first. This change will be entirely within the command, and the receiver and invoker classes don't need to have any changes.
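From the invoker's side, using the command might look like this minimal sketch (the door variable is assumed to already exist):

DoorCommand openCommand = new OpenDoorCommand();
openCommand.perform(door); // the invoker only knows about the DoorCommand interface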

3.1. Examples in the JVM

A very common example of this pattern is the Action class within Swing:

Action saveAction = new SaveAction();
JButton button = new JButton(saveAction);

Here, SaveAction is the command, the Swing JButton component that uses this class is the invoker, and the Action implementation is called with an ActionEvent as the receiver.

4. Iterator

The Iterator pattern allows us to work across the elements in a collection and interact with each in turn. We use this to write functions taking an arbitrary iterator over some elements without regard to where they are coming from. The source could be an ordered list, an unordered set, or an infinite stream:

<T> void printAll(Iterator<T> iter) {
    while (iter.hasNext()) {
        System.out.println(iter.next());
    }
}

4.1. Examples in the JVM

All of the JVM standard collections implement the Iterator pattern by exposing an iterator() method that returns an Iterator<T> over the elements in the collection. Streams also implement the same method, except in this case, it might be an infinite stream, so the iterator might never terminate.
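For example, assuming the printAll() method above, we can hand it an iterator from any standard collection (the names below are sample values):

List<String> names = List.of("Alice", "Bob", "Carol");
printAll(names.iterator());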

5. Memento

The Memento pattern allows us to write objects that are able to change state, and then revert back to their previous state. Essentially an “undo” function for object state.

This can be implemented relatively easily by storing the previous state any time a setter is called:

class Undoable {
    private String value;
    private String previous;
    public void setValue(String newValue) {
        this.previous = this.value;
        this.value = newValue;
    }
    public void restoreState() {
        if (this.previous != null) {
            this.value = this.previous;
            this.previous = null;
        }
    }
}

This gives the ability to undo the last change that was made to the object.

This is often implemented by wrapping the entire object state in a single object, known as the Memento. This allows the entire state to be saved and restored in a single action, instead of having to save every field individually.
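A minimal sketch of that variant, with a hypothetical TextEditor whose whole state lives in one snapshot object:

class TextEditor {
    private String content = "";

    // the memento: an immutable snapshot of the editor's state
    static class Snapshot {
        private final String content;
        Snapshot(String content) { this.content = content; }
    }

    Snapshot save() {
        return new Snapshot(content);
    }

    void restore(Snapshot snapshot) {
        this.content = snapshot.content;
    }

    void type(String text) {
        this.content += text;
    }
}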

5.1. Examples in the JVM

JavaServer Faces provides an interface called StateHolder that allows implementers to save and restore their state. There are several standard components that implement this, consisting of individual components – for example, HtmlInputFile, HtmlInputText, or HtmlSelectManyCheckbox – as well as composite components such as HtmlForm.

6. Observer

The Observer pattern allows for an object to indicate to others that changes have happened. Typically we'll have a Subject – the object emitting events, and a series of Observers – the objects receiving these events. The observers will register with the subject that they want to be informed about changes. Once this has happened, any changes that happen in the subject will cause the observers to be informed:

class Observable {
    private String state;
    private Set<Consumer<String>> listeners = new HashSet<>();
    public void addListener(Consumer<String> listener) {
        this.listeners.add(listener);
    }
    public void setState(String newState) {
        this.state = newState;
        for (Consumer<String> listener : listeners) {
            listener.accept(newState);
        }
    }
}

This takes a set of event listeners and calls each one every time the state changes with the new state value.

6.1. Examples in the JVM

Java has a standard pair of classes that allow us to do exactly this – java.beans.PropertyChangeSupport and java.beans.PropertyChangeListener.

PropertyChangeSupport acts as a class that can have observers added and removed from it and can notify them all of any state changes. PropertyChangeListener is then an interface that our code can implement to receive any changes that have happened:

PropertyChangeSupport observable = new PropertyChangeSupport(this); // 'this' is the bean being observed
// Add some observers to be notified when the value changes
observable.addPropertyChangeListener(evt -> System.out.println("Value changed: " + evt));
// Indicate that the value has changed and notify observers of the new value
observable.firePropertyChange("field", "old value", "new value");

Note that there's another pair of classes that might seem a better fit – java.util.Observer and java.util.Observable. However, these were deprecated in Java 9 due to being inflexible and unreliable.

7. Strategy

The Strategy pattern allows us to write generic code and then plug specific strategies into it to give us the specific behavior needed for our exact cases.

This will typically be implemented by having an interface representing the strategy. The client code is then able to write concrete classes implementing this interface as needed for the exact cases. For example, we might have a system where we need to notify end-users and implement the notification mechanisms as pluggable strategies:

interface NotificationStrategy {
    void notify(User user, Message message);
}
class EmailNotificationStrategy implements NotificationStrategy {
    ....
}
class SMSNotificationStrategy implements NotificationStrategy {
    ....
}

We can then decide at runtime exactly which of these strategies to actually use to send this message to this user. We can also write new strategies to use with minimal impact on the rest of the system.
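For instance, assuming the interface above and a hypothetical prefersSms() flag on the user, selecting a strategy at runtime is straightforward:

NotificationStrategy strategy = user.prefersSms()
  ? new SMSNotificationStrategy()
  : new EmailNotificationStrategy();
strategy.notify(user, message); // the caller doesn't care which implementation runs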

7.1. Examples in the JVM

The standard Java libraries use this pattern extensively, often in ways that may not seem obvious at first. For example, the Streams API introduced in Java 8 makes extensive use of this pattern. The lambdas provided to map(), filter(), and other methods are all pluggable strategies that are provided to the generic method.

Examples go back even further, though. The Comparator interface introduced in Java 1.2 is a strategy that can be provided to sort elements within a collection as required. We can provide different instances of the Comparator to sort the same list in different ways as desired:

// Sort by name
Collections.sort(users, new UsersNameComparator());
// Sort by ID
Collections.sort(users, new UsersIdComparator());

8. Template Method

The Template Method pattern is used when we want to orchestrate several different methods working together. We'll define a base class with the template method and a set of one or more abstract methods – either unimplemented or else implemented with some default behavior. The template method then calls these abstract methods in a fixed pattern. Our code then implements a subclass of this class and implements these abstract methods as needed:

abstract class Component {
    public void render() {
        doRender();
        addEventListeners();
        syncData();
    }
    protected abstract void doRender();
    protected void addEventListeners() {}
    protected void syncData() {}
}

Here, we have some arbitrary UI component. Our subclasses will implement the doRender() method to actually render the component. We can also optionally implement the addEventListeners() and syncData() methods. When our UI framework renders this component it will guarantee that all three get called in the correct order.

8.1. Examples in the JVM

The AbstractList, AbstractSet, and AbstractMap used by Java Collections have many examples of this pattern. For example, the indexOf() and lastIndexOf() methods both work in terms of the listIterator() method, which has a default implementation but which gets overridden in some subclasses. Equally, the add(T) and addAll(int, T) methods both work in terms of the add(int, T) method which doesn't have a default implementation and needs to be implemented by the subclass.

Java IO also makes use of this pattern within InputStream, OutputStream, Reader, and Writer. For example, the InputStream class has several methods that work in terms of the abstract read() method, which the subclass needs to implement.

9. Visitor

The Visitor pattern allows our code to handle various subclasses in a typesafe way, without needing to resort to instanceof checks. We'll have a visitor interface with one method for each concrete subclass that we need to support. Our base class will then have an accept(Visitor) method. The subclasses will each call the appropriate method on this visitor, passing itself in. This then allows us to implement concrete behaviour in each of these methods, each knowing that it will be working with the concrete type:

interface UserVisitor<T> {
    T visitStandardUser(StandardUser user);
    T visitAdminUser(AdminUser user);
    T visitSuperuser(Superuser user);
}
class StandardUser {
    public <T> T accept(UserVisitor<T> visitor) {
        return visitor.visitStandardUser(this);
    }
}

Here we have our UserVisitor interface with three different visitor methods on it. Our example StandardUser calls the appropriate method, and the same will be done in AdminUser and Superuser. We can then write our visitors to work with these as needed:

class AuthenticatingVisitor implements UserVisitor<Boolean> {
    public Boolean visitStandardUser(StandardUser user) {
        return false;
    }
    public Boolean visitAdminUser(AdminUser user) {
        return user.hasPermission("write");
    }
    public Boolean visitSuperuser(Superuser user) {
        return true;
    }
}

Our StandardUser never has permission, our Superuser always has permission, and our AdminUser might have permission but this needs to be looked up in the user itself.

9.1. Examples in the JVM

The Java NIO2 framework uses this pattern with Files.walkFileTree(). This takes an implementation of FileVisitor that has methods to handle various different aspects of walking the file tree. Our code can then use this for searching files, printing out matching files, processing many files in a directory, or lots of other things that need to work within a directory:

Files.walkFileTree(startingDir, new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
        System.out.println("Found file: " + file);
        return FileVisitResult.CONTINUE;
    }
    @Override
    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
        System.out.println("Found directory: " + dir);
        return FileVisitResult.CONTINUE;
    }
});

10. Conclusion

In this article, we've had a look at various design patterns used for the behaviour of objects. We've also looked at examples of these patterns as used within the core JVM as well, so we can see them in use in a way that many applications already benefit from.

The post Behavioral Patterns in Core Java first appeared on Baeldung.

        

Spring Boot: Customize the Jackson ObjectMapper


1. Overview

When using JSON format, Spring Boot will use an ObjectMapper instance to serialize responses and deserialize requests. In this tutorial, we'll take a look at the most common ways to configure the serialization and deserialization options.

To learn more about Jackson, be sure to check out our Jackson tutorial.

2. Default Configuration

By default, the Spring Boot configuration will:

  • Disable MapperFeature.DEFAULT_VIEW_INCLUSION
  • Disable DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES
  • Disable SerializationFeature.WRITE_DATES_AS_TIMESTAMPS

Let's start with a quick example:

  • The client will send a GET request to our /coffee?name=Lavazza
  • The controller will return a new Coffee object
  • Spring will use ObjectMapper to serialize our POJO to JSON

We'll exemplify the customization options by using String and LocalDateTime objects:

public class Coffee {
    private String name;
    private String brand;
    private LocalDateTime date;
   //getters and setters
}

We'll also define a simple REST controller to demonstrate the serialization:

@GetMapping("/coffee")
public Coffee getCoffee(
        @RequestParam(required = false) String brand,
        @RequestParam(required = false) String name) {
    return new Coffee()
      .setBrand(brand)
      .setDate(FIXED_DATE)
      .setName(name);
}

By default, the response when calling GET http://localhost:8080/coffee?brand=Lavazza will be:

{
  "name": null,
  "brand": "Lavazza",
  "date": "2020-11-16T10:21:35.974"
}

We would like to exclude null values and to have a custom date format (dd-MM-yyyy HH:mm). The final response will be:

{
  "brand": "Lavazza",
  "date": "04-11-2020 10:34"
}

When using Spring Boot, we have the option to customize the default ObjectMapper or to override it. We'll cover both of these options in the next sections.

3. Customizing the Default ObjectMapper

In this section, we'll see how to customize the default ObjectMapper that Spring Boot uses.

3.1. Application Properties and Custom Jackson Module

The simplest way to configure the mapper is via application properties. The general structure of the configuration is:

spring.jackson.<category_name>.<feature_name>=true,false

As an example, if we want to disable SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, we'll add:

spring.jackson.serialization.write-dates-as-timestamps=false

Besides the mentioned feature categories, we can also configure property inclusion:

spring.jackson.default-property-inclusion=always, non_null, non_absent, non_default, non_empty

Configuring via the application properties is the simplest approach. The downside of this approach is that we can't customize advanced options, such as a custom date format for LocalDateTime. At this point, we'll obtain this result:

{
  "brand": "Lavazza",
  "date": "2020-11-16T10:35:34.593"
}

In order to achieve our goal, we'll register a new JavaTimeModule with our custom date format:

@Configuration
@PropertySource("classpath:coffee.properties")
public class CoffeeRegisterModuleConfig {
    @Bean
    public Module javaTimeModule() {
        JavaTimeModule module = new JavaTimeModule();
        module.addSerializer(LOCAL_DATETIME_SERIALIZER);
        return module;
    }
}
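The LOCAL_DATETIME_SERIALIZER constant used above is a Jackson LocalDateTimeSerializer built around our target pattern; a minimal definition would be:

private static final LocalDateTimeSerializer LOCAL_DATETIME_SERIALIZER =
  new LocalDateTimeSerializer(DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm"));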

Also, the configuration properties file coffee.properties will contain:

spring.jackson.default-property-inclusion=non_null

Spring Boot will automatically register any bean of type com.fasterxml.jackson.databind.Module. The final result will be:

{
  "brand": "Lavazza",
  "date": "16-11-2020 10:43"
}

3.2. Jackson2ObjectMapperBuilderCustomizer

The purpose of this functional interface is to allow us to create configuration beans. They will be applied to the default ObjectMapper created via Jackson2ObjectMapperBuilder:

@Bean
public Jackson2ObjectMapperBuilderCustomizer jsonCustomizer() {
    return builder -> builder.serializationInclusion(JsonInclude.Include.NON_NULL)
      .serializers(LOCAL_DATETIME_SERIALIZER);
}

The configuration beans are applied in a specific order, which we can control using the @Order annotation. This elegant approach is suitable if we want to configure the ObjectMapper from different configurations or modules.

4. Overriding the Default Configuration

If we want to have full control over the configuration, there are several options that will disable the auto-configuration and allow only our custom configuration to be applied. Let's take a close look at these options.

4.1. ObjectMapper

The simplest way to override the default configuration is to define an ObjectMapper bean and to mark it as @Primary:

@Bean
@Primary
public ObjectMapper objectMapper() {
    JavaTimeModule module = new JavaTimeModule();
    module.addSerializer(LOCAL_DATETIME_SERIALIZER);
    return new ObjectMapper()
      .setSerializationInclusion(JsonInclude.Include.NON_NULL)
      .registerModule(module);
}

We should use this approach when we want to have full control over the serialization process and we don't want to allow external configuration.

4.2. Jackson2ObjectMapperBuilder

Another clean approach is to define a Jackson2ObjectMapperBuilder bean. Actually, Spring Boot is using this builder by default when building the ObjectMapper and will automatically pick up the defined one:

@Bean
public Jackson2ObjectMapperBuilder jackson2ObjectMapperBuilder() {
    return new Jackson2ObjectMapperBuilder().serializers(LOCAL_DATETIME_SERIALIZER)
      .serializationInclusion(JsonInclude.Include.NON_NULL);
}

It will configure two options by default:

  • Disable MapperFeature.DEFAULT_VIEW_INCLUSION
  • Disable DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES

According to the Jackson2ObjectMapperBuilder documentation, it will also register some modules if they're present on the classpath:

  • jackson-datatype-jdk8: support for other Java 8 types like Optional
  • jackson-datatype-jsr310: support for Java 8 Date and Time API types
  • jackson-datatype-joda: support for Joda-Time types
  • jackson-module-kotlin: support for Kotlin classes and data classes

The advantage of this approach is that the Jackson2ObjectMapperBuilder offers a simple and intuitive way to build an ObjectMapper.

4.3. MappingJackson2HttpMessageConverter

We can just define a bean with type MappingJackson2HttpMessageConverter, and Spring Boot will automatically use it:

@Bean
public MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() {
    Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder().serializers(LOCAL_DATETIME_SERIALIZER)
      .serializationInclusion(JsonInclude.Include.NON_NULL);
    return new MappingJackson2HttpMessageConverter(builder.build());
}

Be sure to check out our Spring Http Message Converters article to learn more.

5. Testing the Configuration

To test our configuration, we'll use TestRestTemplate and serialize the objects as String. In this way, we can validate that our Coffee object is serialized without null values and with the custom date format:

@Test
public void whenGetCoffee_thenSerializedWithDateAndNonNull() {
    String formattedDate = DateTimeFormatter.ofPattern(CoffeeConstants.dateTimeFormat).format(FIXED_DATE);
    String brand = "Lavazza";
    String url = "/coffee?brand=" + brand;
    
    String response = restTemplate.getForObject(url, String.class);
    
    assertThat(response).isEqualTo("{\"brand\":\"" + brand + "\",\"date\":\"" + formattedDate + "\"}");
}

6. Conclusion

In this tutorial, we took a look at several methods to configure the JSON serialization options when using Spring Boot.

We saw two different approaches: configuring the default options or overriding the default configuration.

As always, the full source code of the article is available over on GitHub.

The post Spring Boot: Customize the Jackson ObjectMapper first appeared on Baeldung.

        

Collections.synchronizedMap vs. ConcurrentHashMap


1. Overview

In this tutorial, we'll discuss the differences between Collections.synchronizedMap() and ConcurrentHashMap.

Additionally, we'll look at the performance outputs of the read and write operations for each.

2. The Differences

Collections.synchronizedMap() and ConcurrentHashMap both provide thread-safe operations on collections of data.

The Collections utility class provides polymorphic algorithms that operate on collections and return wrapped collections. Its synchronizedMap() method provides thread-safe functionality.

As the name implies, synchronizedMap() returns a synchronized Map backed by the Map that we provide in the parameter. To provide thread-safety, synchronizedMap() allows all accesses to the backing Map via the returned Map.

ConcurrentHashMap was introduced in JDK 1.5 as an enhancement of HashMap that supports high concurrency for retrievals as well as updates. HashMap isn't thread-safe, so it might lead to incorrect results during thread contention.

The ConcurrentHashMap class is thread-safe. Therefore, multiple threads can operate on a single object with no complications.

In ConcurrentHashMap, read operations are non-blocking, whereas write operations take a lock on a particular segment or bucket. The default bucket or concurrency level is 16, which means 16 threads can write at any instant after taking a lock on a segment or bucket.
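If we expect more writer threads, we can pass a higher estimate when constructing the map; here's a minimal sketch with arbitrary sizing values:

// initial capacity 64, load factor 0.75, roughly 32 concurrently updating threads
Map<String, Integer> tuned = new ConcurrentHashMap<>(64, 0.75f, 32);

It's worth noting that since Java 8, the concurrency-level argument is only used as a sizing hint, as segment locking was replaced with finer-grained, per-bin locking.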

2.1. ConcurrentModificationException

For objects like HashMap, performing concurrent operations is not allowed. Therefore, if we try to update a HashMap while iterating over it, we will receive a ConcurrentModificationException. This will also occur when using synchronizedMap():

@Test(expected = ConcurrentModificationException.class)
public void whenRemoveAndAddOnHashMap_thenConcurrentModificationError() {
    Map<Integer, String> map = new HashMap<>();
    map.put(1, "baeldung");
    map.put(2, "HashMap");
    Map<Integer, String> synchronizedMap = Collections.synchronizedMap(map);
    Iterator<Entry<Integer, String>> iterator = synchronizedMap.entrySet().iterator();
    while (iterator.hasNext()) {
        synchronizedMap.put(3, "Modification");
        iterator.next();
    }
}

However, this is not the case with ConcurrentHashMap:

Map<Integer, String> map = new ConcurrentHashMap<>();
map.put(1, "baeldung");
map.put(2, "HashMap");
 
Iterator<Entry<Integer, String>> iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
    synchronizedMap.put(3, "Modification");
    iterator.next()
}
 
Assert.assertEquals(3, map.size());

2.2. null Support

Collections.synchronizedMap() and ConcurrentHashMap handle null keys and values differently.

ConcurrentHashMap doesn't allow null in keys or values:

@Test(expected = NullPointerException.class)
public void allowNullKey_In_ConcurrentHasMap() {
    Map<String, Integer> map = new ConcurrentHashMap<>();
    map.put(null, 1);
}

However, when using Collections.synchronizedMap(), null support depends on the input Map. We can have one null as a key and any number of null values when Collections.synchronizedMap() is backed by HashMap or LinkedHashMap, whereas if we're using TreeMap, we can have null values but not null keys.

Let's assert that we can use a null key for Collections.synchronizedMap() backed by a HashMap:

Map<String, Integer> map = Collections
  .synchronizedMap(new HashMap<String, Integer>());
map.put(null, 1);
Assert.assertTrue(map.get(null).equals(1));

Similarly, we can validate null support in values for both Collections.synchronizedMap() and ConcurrentHashMap.

3. Performance Comparison

Let's compare the performances of ConcurrentHashMap versus Collections.synchronizedMap(). In this case, we're using the open-source framework Java Microbenchmark Harness (JMH) to compare the performances of the methods in nanoseconds.

We ran the comparison for random read and write operations on these maps. Let's take a quick look at our JMH benchmark code:

@Benchmark
public void randomReadAndWriteSynchronizedMap() {
    Map<String, Integer> map = Collections.synchronizedMap(new HashMap<String, Integer>());
    performReadAndWriteTest(map);
}
@Benchmark
public void randomReadAndWriteConcurrentHashMap() {
    Map<String, Integer> map = new ConcurrentHashMap<>();
    performReadAndWriteTest(map);
}
private void performReadAndWriteTest(final Map<String, Integer> map) {
    for (int i = 0; i < TEST_NO_ITEMS; i++) {
        Integer randNumber = (int) Math.ceil(Math.random() * TEST_NO_ITEMS);
        map.get(String.valueOf(randNumber));
        map.put(String.valueOf(randNumber), randNumber);
    }
}

We ran our performance benchmarks using 5 iterations with 10 threads for 1,000 items. Let's see the benchmark results:

Benchmark                                                     Mode  Cnt        Score        Error  Units
MapPerformanceComparison.randomReadAndWriteConcurrentHashMap  avgt  100  3061555.822 ±  84058.268  ns/op
MapPerformanceComparison.randomReadAndWriteSynchronizedMap    avgt  100  3234465.857 ±  60884.889  ns/op
MapPerformanceComparison.randomReadConcurrentHashMap          avgt  100  2728614.243 ± 148477.676  ns/op
MapPerformanceComparison.randomReadSynchronizedMap            avgt  100  3471147.160 ± 174361.431  ns/op
MapPerformanceComparison.randomWriteConcurrentHashMap         avgt  100  3081447.009 ±  69533.465  ns/op
MapPerformanceComparison.randomWriteSynchronizedMap           avgt  100  3385768.422 ± 141412.744  ns/op

The above results show that ConcurrentHashMap performs better than Collections.synchronizedMap().

4. When to Use

We should favor Collections.synchronizedMap() when data consistency is of utmost importance, and we should choose ConcurrentHashMap for performance-critical applications where there are far more write operations than there are read operations.

5. Conclusion

In this article, we've demonstrated the differences between ConcurrentHashMap and Collections.synchronizedMap(). We've also shown the performances of both of them using a simple JMH benchmark.

As always, the code samples are available over on GitHub.

The post Collections.synchronizedMap vs. ConcurrentHashMap first appeared on Baeldung.

        

Java Weekly, Issue 365


1. Spring and Java

>> Troubleshooting Native Memory Leaks in Java Applications [blogs.oracle.com]

Native out-of-memory errors – different approaches to identifying native memory leaks and diagnosing them in the JVM.

>> Helidon 2.2.0 Released [medium.com]

Integration with Project Loom, GraphQL, Micronaut, and extended GraalVM support, all in a new Helidon version.

>> Implementing a Circuit Breaker with Resilience4j [reflectoring.io]

Resilient integration with remote services – a practical guide on how to use Circuit Breakers for more reliable remote calls. 

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Infinite Precision [alidg.me]

All benchmarks are wrong, but some are useful – how measurement uncertainties might affect our benchmarks!

Also worth reading: 

3. Musings

>> From Schooling to Space: Eight Predictions on How Technology Will Continue to Change Our Lives in the Coming Year [allthingsdistributed.com]

The future is here: ubiquitous cloud, the internet of machine learning, remote learning, quantum computing, and many more transformations!

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Wally Does Three Jobs [dilbert.com]

>> Ethics Class [dilbert.com]

>> Bad Attitude [dilbert.com]

5. Pick of the Week

And, an interesting read about pricing and company financials:

>> Kung Fu [asmartbear.com]

The post Java Weekly, Issue 365 first appeared on Baeldung.

        

Integration Tests With Spring Cloud Netflix and Feign


1. Overview

In this article, we're going to explore the integration testing of a Feign Client.

We'll create a basic Open Feign Client for which we'll write a simple integration test with the help of WireMock.

After that, we'll add a Ribbon configuration to our client and also build an integration test for it. And finally, we'll configure a Eureka test container and test this setup to make sure our entire configuration works as expected.

2. The Feign Client

To set up our Feign Client, we should first add the Spring Cloud OpenFeign Maven dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

After that, let's create a Book class for our model:

public class Book {
    private String title;
    private String author;
}

And finally, let's create our Feign Client interface:

@FeignClient(value="simple-books-client", url="${book.service.url}")
public interface BooksClient {
    @RequestMapping("/books")
    List<Book> getBooks();
}

Now, we have a Feign Client that retrieves a list of Books from a REST service. Now, let's move forward and write some integration tests.

3. WireMock

3.1. Setting up the WireMock Server

If we want to test our BooksClient, we need a mock service that provides the /books endpoint. Our client will make calls against this mock service. For this purpose, we'll use WireMock.

So, let's add the WireMock Maven dependency:

<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock</artifactId>
    <scope>test</scope>
</dependency>

and configure the mock server:

@TestConfiguration
public class WireMockConfig {
    @Autowired
    private WireMockServer wireMockServer;
    @Bean(initMethod = "start", destroyMethod = "stop")
    public WireMockServer mockBooksService() {
        return new WireMockServer(9561);
    }
}

We now have a running mock server accepting connections on port 9561.

3.2. Setting up the Mock

Let's add the property book.service.url to our application-test.yml pointing to the WireMockServer port:

book:
  service:
    url: http://localhost:9561

And let's also prepare a mock response get-books-response.json for the /books endpoint:

[
  {
    "title": "Dune",
    "author": "Frank Herbert"
  },
  {
    "title": "Foundation",
    "author": "Isaac Asimov"
  }
]

Let's now configure the mock response for a GET request on the /books endpoint:

public class BookMocks {
    public static void setupMockBooksResponse(WireMockServer mockService) throws IOException {
        mockService.stubFor(WireMock.get(WireMock.urlEqualTo("/books"))
          .willReturn(WireMock.aResponse()
            .withStatus(HttpStatus.OK.value())
            .withHeader("Content-Type", MediaType.APPLICATION_JSON_VALUE)
            .withBody(
              copyToString(
                BookMocks.class.getClassLoader().getResourceAsStream("payload/get-books-response.json"),
                defaultCharset()))));
    }
}

At this point, all the required configuration is in place. Let's go ahead and write our first test.

4. Our First Integration Test

Let's create an integration test BooksClientIntegrationTest:

@SpringBootTest
@ActiveProfiles("test")
@EnableConfigurationProperties
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = { WireMockConfig.class })
class BooksClientIntegrationTest {
    @Autowired
    private WireMockServer mockBooksService;
    @Autowired
    private BooksClient booksClient;
    @BeforeEach
    void setUp() throws IOException {
        BookMocks.setupMockBooksResponse(mockBooksService);
    }
    // ...
}

At this point, we have a SpringBootTest configured with a WireMockServer ready to return a predefined list of Books when the /books endpoint is invoked by the BooksClient.

And finally, let's add our test methods:

@Test
public void whenGetBooks_thenBooksShouldBeReturned() {
    assertFalse(booksClient.getBooks().isEmpty());
}
@Test
public void whenGetBooks_thenTheCorrectBooksShouldBeReturned() {
    assertTrue(booksClient.getBooks()
      .containsAll(asList(
        new Book("Dune", "Frank Herbert"),
        new Book("Foundation", "Isaac Asimov"))));
}

5. Integrating with Ribbon

Now let's improve our client by adding the load-balancing capabilities provided by Ribbon.

All we need to do in the client interface is to remove the hard-coded service URL and instead refer to the service by the service name books-service:

@FeignClient("books-service")
public interface BooksClient {
...

Next, add the Netflix Ribbon Maven dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>

And finally, in the application-test.yml file, we should now remove the book.service.url and instead define the Ribbon listOfServers:

books-service:
  ribbon:
    listOfServers: http://localhost:9561

Let's now run the BooksClientIntegrationTest again. It should pass, confirming the new setup works as expected.

5.1. Dynamic Port Configuration

If we don't want to hard-code the server's port, we can configure WireMock to use a dynamic port at startup.

For this, let's create another test configuration, RibbonTestConfig:

@TestConfiguration
@ActiveProfiles("ribbon-test")
public class RibbonTestConfig {
    @Autowired
    private WireMockServer mockBooksService;
    @Autowired
    private WireMockServer secondMockBooksService;
    @Bean(initMethod = "start", destroyMethod = "stop")
    public WireMockServer mockBooksService() {
        return new WireMockServer(options().dynamicPort());
    }
    @Bean(name="secondMockBooksService", initMethod = "start", destroyMethod = "stop")
    public WireMockServer secondBooksMockService() {
        return new WireMockServer(options().dynamicPort());
    }
    @Bean
    public ServerList<Server> ribbonServerList() {
        return new StaticServerList<>(
          new Server("localhost", mockBooksService.port()),
          new Server("localhost", secondMockBooksService.port()));
    }
}

This configuration sets up two WireMock servers, each running on a different port dynamically assigned at runtime. Moreover, it also configures the Ribbon server list with the two mock servers.

5.2. Load Balancing Testing

Now that we have our Ribbon load balancer configured, let's make sure our BooksClient correctly alternates between the two mock servers:

@SpringBootTest
@ActiveProfiles("ribbon-test")
@EnableConfigurationProperties
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = { RibbonTestConfig.class })
class LoadBalancerBooksClientIntegrationTest {
    @Autowired
    private WireMockServer mockBooksService;
    @Autowired
    private WireMockServer secondMockBooksService;
    @Autowired
    private BooksClient booksClient;
    @BeforeEach
    void setUp() throws IOException {
        setupMockBooksResponse(mockBooksService);
        setupMockBooksResponse(secondMockBooksService);
    }
    @Test
    void whenGetBooks_thenRequestsAreLoadBalanced() {
        for (int k = 0; k < 10; k++) {
            booksClient.getBooks();
        }
        mockBooksService.verify(
          moreThan(0), getRequestedFor(WireMock.urlEqualTo("/books")));
        secondMockBooksService.verify(
          moreThan(0), getRequestedFor(WireMock.urlEqualTo("/books")));
    }
    @Test
    public void whenGetBooks_thenTheCorrectBooksShouldBeReturned() {
        assertTrue(booksClient.getBooks()
          .containsAll(asList(
            new Book("Dune", "Frank Herbert"),
            new Book("Foundation", "Isaac Asimov"))));
    }
}

6. Eureka Integration

We have seen, so far, how to test a client that uses Ribbon for load balancing. But what if our setup uses a service discovery system like Eureka? We should write an integration test that makes sure that our BooksClient works as expected in such a context as well.

For this purpose, we'll run a Eureka server as a test container. Then we start up and register a mock books-service with our Eureka container. And finally, once this setup is up, we can run our test against it.

Before moving further, let's add the Testcontainers and Netflix Eureka Client Maven dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <scope>test</scope>
</dependency>

6.1. TestContainer Setup

Let's create a TestContainer configuration that will spin up our Eureka server:

public class EurekaContainerConfig {
    public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        public static GenericContainer eurekaServer = 
          new GenericContainer("springcloud/eureka").withExposedPorts(8761);
        @Override
        public void initialize(@NotNull ConfigurableApplicationContext configurableApplicationContext) {
            Startables.deepStart(Stream.of(eurekaServer)).join();
            TestPropertyValues
              .of("eureka.client.serviceUrl.defaultZone=http://localhost:" 
                + eurekaServer.getFirstMappedPort().toString() 
                + "/eureka")
              .applyTo(configurableApplicationContext);
        }
    }
}

As we can see, the initializer above starts the container. Then it exposes port 8761, on which the Eureka server is listening.

And finally, after the Eureka service has started, we need to update the eureka.client.serviceUrl.defaultZone property. This defines the address of the Eureka server used for service discovery.

6.2. Register Mock Server

Now that our Eureka server is up and running we need to register a mock books-service. We do this by simply creating a RestController:

@Configuration
@RestController
@ActiveProfiles("eureka-test")
public class MockBookServiceConfig {
    @RequestMapping("/books")
    public List<Book> getBooks() {
        return Collections.singletonList(new Book("Hitchhiker's Guide to the Galaxy", "Douglas Adams"));
    }
}

All we have to do now, in order to register this controller, is to make sure the spring.application.name property in our application-eureka-test.yml is books-service, the same as the service name used in the BooksClient interface.

Note: Now that the netflix-eureka-client library is in our list of dependencies, Eureka will be used by default for service discovery. So, if we want our previous tests, that don't use Eureka, to keep passing, we'll need to manually set eureka.client.enabled to false. In that way, even if the library is on the path, the BooksClient will not try to use Eureka for locating the service, but instead use the Ribbon configuration.

6.3. Integration Test

Once again, we have all the needed configuration pieces, so let's put them all together in a test:

@ActiveProfiles("eureka-test")
@EnableConfigurationProperties
@ExtendWith(SpringExtension.class)
@SpringBootTest(classes = Application.class, webEnvironment =  SpringBootTest.WebEnvironment.RANDOM_PORT)
@ContextConfiguration(classes = { MockBookServiceConfig.class }, 
  initializers = { EurekaContainerConfig.Initializer.class })
class ServiceDiscoveryBooksClientIntegrationTest {
    @Autowired
    private BooksClient booksClient;
    @Lazy
    @Autowired
    private EurekaClient eurekaClient;
    @BeforeEach
    void setUp() {
        await().atMost(60, SECONDS).until(() -> eurekaClient.getApplications().size() > 0);
    }
    @Test
    public void whenGetBooks_thenTheCorrectBooksAreReturned() {
        List<Book> books = booksClient.getBooks();
        assertEquals(1, books.size());
        assertEquals(
          new Book("Hitchhiker's guide to the galaxy", "Douglas Adams"), 
          books.stream().findFirst().get());
    }
}

There are a few things happening in this test. Let's look at them one by one.

Firstly, the context initializer inside EurekaContainerConfig starts the Eureka service.

Then, the SpringBootTest starts the books-service application that exposes the controller defined in MockBookServiceConfig.

Because the startup of the Eureka container and the web application can take a few seconds, we need to wait until the books-service gets registered. This happens in the setUp of the test.

And finally, the test method verifies that the BooksClient indeed works correctly in combination with the Eureka configuration.

7. Conclusion

In this article, we've explored the different ways we can write integration tests for a Spring Cloud Feign Client. We started with a basic client which we tested with the help of WireMock. After that, we moved on to adding load balancing with Ribbon. We wrote an integration test and made sure our Feign Client works correctly with the client-side load balancing provided by Ribbon. And finally, we added Eureka service discovery to the mix. And again, we made sure our client still works as expected.

As always, the complete code is available over on GitHub.

The post Integration Tests With Spring Cloud Netflix and Feign first appeared on Baeldung.

        

Character#isAlphabetic vs. Character#isLetter


1. Overview

In this tutorial, we'll start by briefly going through some general category types for every defined Unicode code point or character range to understand the difference between letters and alphabetic characters.

Further, we'll look at the isAlphabetic() and isLetter() methods of the Character class in Java. Finally, we'll cover the similarities and distinctions between these methods.

2. General Category Types of Unicode Characters

The Unicode Character Set (UCS) contains 1,114,112 code points: U+0000—U+10FFFF. Characters and code point ranges are grouped by categories.

The Character class provides two overloaded versions of the getType() method that returns a value indicating the character's general category type.

Let's look at the signature of the first method:

public static int getType(char ch)

This method cannot handle supplementary characters. To handle all Unicode characters, including supplementary characters, Java's Character class provides an overloaded getType method which has the following signature:

public static int getType(int codePoint)

Next, let's start looking at some general category types.

2.1. UPPERCASE_LETTER

The UPPERCASE_LETTER general category type represents upper-case letters.

When we call the Character#getType method on an upper-case letter, for example, ‘U‘, the method returns the value 1, which is equivalent to the UPPERCASE_LETTER enum value:

assertEquals(Character.UPPERCASE_LETTER, Character.getType('U'));

2.2. LOWERCASE_LETTER

The LOWERCASE_LETTER general category type is associated with lower-case letters.

When calling the Character#getType method on a lower-case letter, for instance, ‘u‘, the method will return the value 2, which is the same as the enum value of LOWERCASE_LETTER:

assertEquals(Character.LOWERCASE_LETTER, Character.getType('u'));

2.3. TITLECASE_LETTER

Next, the TITLECASE_LETTER general category represents title case characters.

Some characters look like pairs of Latin letters. When we call the Character#getType method on such Unicode characters, this will return the value 3, which is equal to the TITLECASE_LETTER enum value:

assertEquals(Character.TITLECASE_LETTER, Character.getType('\u01f2'));

Here, the Unicode character ‘\u01f2‘ represents the Latin capital letter ‘D‘ followed by a small ‘Z‘ with a caron.

2.4. MODIFIER_LETTER

A modifier letter, in the Unicode Standard, is “a letter or symbol typically written next to another letter that it modifies in some way”.

The MODIFIER_LETTER general category type represents such modifier letters.

For example, the modifier letter small H, ‘ʰ‘, when passed to Character#getType method returns the value of 4, which is the same as the enum value of MODIFIER_LETTER:

assertEquals(Character.MODIFIER_LETTER, Character.getType('\u02b0'));

The Unicode character ‘\u02b0‘ represents the modifier letter small H.

2.5. OTHER_LETTER

The OTHER_LETTER general category type represents an ideograph or a letter in a unicase alphabet. An ideograph is a graphic symbol representing an idea or a concept, independent of any particular language.

A unicase alphabet has just one case for its letters. For example, Hebrew is a unicase writing system.

Let's look at an example of a Hebrew letter Alef, ‘א‘, when we pass it to the Character#getType method, it returns the value of 5, which is equal to the enum value of OTHER_LETTER:

assertEquals(Character.OTHER_LETTER, Character.getType('\u05d0'));

The Unicode character ‘\u05d0‘ represents the Hebrew letter Alef.

2.6. LETTER_NUMBER

Finally, the LETTER_NUMBER category is associated with numerals composed of letters or letterlike symbols.

For example, the Roman numerals come under LETTER_NUMBER general category. When we call the Character#getType method with Roman Numeral Five, ‘Ⅴ', it returns the value 10, which is equal to the enum LETTER_NUMBER value:

assertEquals(Character.LETTER_NUMBER, Character.getType('\u2164'));

The Unicode character ‘\u2164‘ represents the Roman Numeral Five.

Next, let's look at the Character#isAlphabetic method.

3. Character#isAlphabetic

First, let's look at the signature of the isAlphabetic method:

public static boolean isAlphabetic(int codePoint)

This takes the Unicode code point as the input parameter and returns true if the specified Unicode code point is alphabetic and false otherwise.

A character is alphabetic if its general category type is any of the following:

  • UPPERCASE_LETTER
  • LOWERCASE_LETTER
  • TITLECASE_LETTER
  • MODIFIER_LETTER
  • OTHER_LETTER
  • LETTER_NUMBER

Additionally, a character is alphabetic if it has contributory property Other_Alphabetic as defined by the Unicode Standard.

Let's look at a few examples of characters that are alphabets:

assertTrue(Character.isAlphabetic('A'));
assertTrue(Character.isAlphabetic('\u01f2'));

In the above examples, we pass the UPPERCASE_LETTER ‘A' and TITLECASE_LETTER ‘\u01f2' which represents the Latin capital letter ‘D‘ followed by a small ‘Z‘ with a caron to the isAlphabetic method and it returns true.

4. Character#isLetter

Java's Character class provides the isLetter() method to determine if a specified character is a letter. Let's look at the method signature:

public static boolean isLetter(char ch)

It takes a character as an input parameter and returns true if the specified character is a letter and false otherwise.

A character is considered to be a letter if its general category type, provided by Character#getType method, is any of the following:

  • UPPERCASE_LETTER
  • LOWERCASE_LETTER
  • TITLECASE_LETTER
  • MODIFIER_LETTER
  • OTHER_LETTER

However, this method cannot handle supplementary characters. To handle all Unicode characters, including supplementary characters, Java's Character class provides an overloaded version of the isLetter() method:

public static boolean isLetter(int codePoint)

This method can handle all the Unicode characters as it takes a Unicode code point as the input parameter. Furthermore, it returns true if the specified Unicode code point is a letter as we defined earlier.
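For instance, the code point 0x10400 (the Deseret capital letter Long I) lies outside the Basic Multilingual Plane, so it can't be passed as a single char, yet the overload recognizes it as a letter:

assertTrue(Character.isLetter(0x10400)); // a supplementary code point in the UPPERCASE_LETTER category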

Let's look at a few examples of characters that are letters:

assertTrue(Character.isLetter('a'));
assertTrue(Character.isLetter('\u02b0'));

In the above examples, we input the LOWERCASE_LETTER ‘a' and MODIFIER_LETTER ‘\u02b0' which represents the modifier letter small H to the isLetter method and it returns true.

5. Compare and Contrast

Finally, we can see that all letters are alphabetic characters, but not all alphabetic characters are letters.

In other words, the isAlphabetic method returns true if a character is a letter or has the general category LETTER_NUMBER. Besides, it also returns true if the character has the Other_Alphabetic property defined by the Unicode Standard.

First, let's look at an example of a character that is both a letter and an alphabetic character – the character ‘a‘:

assertTrue(Character.isLetter('a')); 
assertTrue(Character.isAlphabetic('a'));

The character ‘a‘, when passed to both isLetter() as well as isAlphabetic() methods as an input parameter, returns true.

Next, let's look at an example of a character that is an alphabet but not a letter. In this case, we'll use the Unicode character ‘\u2164‘, which represents the Roman Numeral Five:

assertFalse(Character.isLetter('\u2164'));
assertTrue(Character.isAlphabetic('\u2164'));

The Unicode character ‘\u2164‘ when passed to the isLetter() method returns false. On the other hand, when passed to the isAlphabetic() method, it returns true.

Certainly, for the English language, the distinction makes no difference, since all the letters of the English language are also alphabetic. On the other hand, some characters in other languages might exhibit the distinction.

6. Conclusion

In this article, we learned about the different general categories of the Unicode code point. Moreover, we covered the similarities and differences between the isAlphabetic() and isLetter() methods.

As always, all these code samples are available over on GitHub.

The post Character#isAlphabetic vs. Character#isLetter first appeared on Baeldung.
        

Java Weekly, Issue 366


1. Spring and Java

>> Do Loom’s Claims Stack Up? Part 1: Millions of Threads? [webtide.com]

Gym membership and millions of Loom virtual threads – evaluating the effect of GC and deep stacks on virtual threads!

>> Do Loom's Claims Stack Up? Part 2: Thread Pools? [webtide.com]

Cheap threads causing expensive things: should we still use thread pools for better resource management?

>> What's New in MicroProfile 4.0 [infoq.com]

Configuration properties, improved fault tolerance, readiness probes, more metrics, and much more, all in a new MicroProfile version!

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Don't use Protobuf for Telemetry [richardstartin.github.io]

Protocol Buffers, Java, and low-overhead serialization – why Protobuf isn't a great fit for tracers and telemetry!

Also worth reading:

3. Musings

>> Musings around a Dockerfile for Jekyll [blog.frankel.ch]

Static site generation on Docker – dockerizing Jekyll, image size reduction, and image squashing!

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Can't Tell When He Is Joking [dilbert.com]

>> Dogbert The Watcher [dilbert.com]

>> Important Context [dilbert.com]

5. Pick of the Week

>> Shields Down [randsinrepose.com]

The post Java Weekly, Issue 366 first appeared on Baeldung.
        

Difference Between spring-boot:repackage and Maven package


1. Overview

Apache Maven is a widely used project dependency management and build tool.

Over the last few years, Spring Boot has become quite a popular framework for building applications. There's also the Spring Boot Maven Plugin, which provides Spring Boot support in Apache Maven.

We know that when we want to package our application into a JAR or WAR artifact using Maven, we can use mvn package. However, the Spring Boot Maven Plugin ships with a repackage goal, which is invoked from an mvn command as well.

Sometimes, the two mvn commands are confusing. In this tutorial, we'll discuss the difference between mvn package and spring-boot:repackage.

2. A Spring Boot Application Example

First of all, we'll create a straightforward Spring Boot application as an example:

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

To verify if our application is up and running, let's create a simple REST endpoint:

@RestController
public class DemoRestController {
    @GetMapping(value = "/welcome")
    public ResponseEntity welcomeEndpoint() {
        return ResponseEntity.ok("Welcome to Baeldung Spring Boot Demo!");
    }
}

3. Maven's package Goal

We only need the spring-boot-starter-web dependency to build our Spring Boot application:

<artifactId>spring-boot-artifacts-2</artifactId>
<packaging>jar</packaging>
...
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
...

Maven's package goal will take the compiled code and package it in its distributable format, which in this case is the JAR format:

$ mvn package
[INFO] Scanning for projects...
[INFO] ------< com.baeldung.spring-boot-modules:spring-boot-artifacts-2 >------
[INFO] Building spring-boot-artifacts-2 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
 ... 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ spring-boot-artifacts-2 ---
[INFO] Building jar: /home/kent ... /target/spring-boot-artifacts-2.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
 ...

After executing the mvn package command, we can find the built JAR file spring-boot-artifacts-2.jar under the target directory. Let's check the content of the created JAR file:

$ jar tf target/spring-boot-artifacts-2.jar
META-INF/
META-INF/MANIFEST.MF
com/
com/baeldung/
com/baeldung/demo/
application.yml
com/baeldung/demo/DemoApplication.class
com/baeldung/demo/DemoRestController.class
META-INF/maven/...

As we can see in the output above, the JAR file created by the mvn package command contains only the resources and compiled Java classes from our project's source.

We can use this JAR file as a dependency in another project. However, we cannot execute the JAR file using java -jar JAR_FILE even if it's a Spring Boot application. This is because the runtime dependencies are not bundled. For example, we don't have a servlet container to start the web context.
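
For instance, if we try, we'll most likely see the JVM refuse to start, since the manifest created by the maven-jar-plugin defines no entry point (the exact message may vary by JDK):

$ java -jar target/spring-boot-artifacts-2.jar
no main manifest attribute, in target/spring-boot-artifacts-2.jar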

To start our Spring Boot application using the simple java -jar command, we need to build a fat JAR. The Spring Boot Maven Plugin can help us with that.

4. The Spring Boot Maven Plugin's repackage Goal

Now, let's figure out what spring-boot:repackage does.

4.1. Adding Spring Boot Maven Plugin

To execute the repackage goal, we need to add the Spring Boot Maven Plugin in our pom.xml:

<build>
    <finalName>${project.artifactId}</finalName>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

4.2. Executing the spring-boot:repackage Goal

Now, let's clean the previously built JAR file and give spring-boot:repackage a try:

$ mvn clean spring-boot:repackage     
 ...
[INFO] --- spring-boot-maven-plugin:2.3.3.RELEASE:repackage (default-cli) @ spring-boot-artifacts-2 ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
...
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.3.RELEASE:repackage (default-cli) 
on project spring-boot-artifacts-2: Execution default-cli of goal 
org.springframework.boot:spring-boot-maven-plugin:2.3.3.RELEASE:repackage failed: Source file must not be null -> [Help 1]
...

Oops, it doesn't work. This is because the spring-boot:repackage goal takes the existing JAR or WAR archive as the source and repackages all the project runtime dependencies inside the final artifact together with project classes. In this way, the repackaged artifact is executable using the command line java -jar JAR_FILE.jar.

Therefore, we need to first build the JAR file before executing the spring-boot:repackage goal:

$ mvn clean package spring-boot:repackage
 ...
[INFO] Building spring-boot-artifacts-2 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
 ...
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ spring-boot-artifacts-2 ---
[INFO] Building jar: /home/kent/.../target/spring-boot-artifacts-2.jar
[INFO] 
[INFO] --- spring-boot-maven-plugin:2.3.3.RELEASE:repackage (default-cli) @ spring-boot-artifacts-2 ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
 ...

4.3. The Content of the Repackaged JAR File

Now, if we check the target directory, we'll see the repackaged JAR file and the original JAR file:

$ ls -1 target/*jar*
target/spring-boot-artifacts-2.jar
target/spring-boot-artifacts-2.jar.original

Let's check the content of the repackaged JAR file:

$ jar tf target/spring-boot-artifacts-2.jar 
META-INF/
META-INF/MANIFEST.MF
 ...
org/springframework/boot/loader/JarLauncher.class
 ...
BOOT-INF/classes/com/baeldung/demo/
BOOT-INF/classes/application.yml
BOOT-INF/classes/com/baeldung/demo/DemoApplication.class
BOOT-INF/classes/com/baeldung/demo/DemoRestController.class
META-INF/maven/com.baeldung.spring-boot-modules/spring-boot-artifacts-2/pom.xml
META-INF/maven/com.baeldung.spring-boot-modules/spring-boot-artifacts-2/pom.properties
BOOT-INF/lib/
BOOT-INF/lib/spring-boot-starter-web-2.3.3.RELEASE.jar
...
BOOT-INF/lib/spring-boot-starter-tomcat-2.3.3.RELEASE.jar
BOOT-INF/lib/tomcat-embed-core-9.0.37.jar
BOOT-INF/lib/jakarta.el-3.0.3.jar
BOOT-INF/lib/tomcat-embed-websocket-9.0.37.jar
BOOT-INF/lib/spring-web-5.2.8.RELEASE.jar
...
BOOT-INF/lib/httpclient-4.5.12.jar
...

As the output above shows, the repackaged JAR contains much more than the JAR file built by the mvn package command alone.

Here, in the repackaged JAR file, we have not only the compiled Java classes from our project but also all the runtime libraries that are needed to start our Spring Boot application. For example, the embedded Tomcat libraries are packaged into the BOOT-INF/lib directory.
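
We can also peek at the repackaged manifest to see how java -jar knows what to run. Typically, the Main-Class entry points to Spring Boot's launcher rather than to our own main class, while a Start-Class entry records the application class (output abbreviated; the exact entries depend on the plugin version):

$ unzip -p target/spring-boot-artifacts-2.jar META-INF/MANIFEST.MF
Main-Class: org.springframework.boot.loader.JarLauncher
Start-Class: com.baeldung.demo.DemoApplication
...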

Next, let's start our application and check if it works:

$ java -jar target/spring-boot-artifacts-2.jar 
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
2020-12-22 23:36:32.704  INFO 115154 [main] com.baeldung.demo.DemoApplication      : Starting DemoApplication on YK-Arch with PID 11515...
...
2020-12-22 23:36:34.070  INFO 115154 [main] o.s.b.w.embedded.tomcat.TomcatWebServer: Tomcat started on port(s): 8080 (http) ...
2020-12-22 23:36:34.078  INFO 115154 [main] com.baeldung.demo.DemoApplication      : Started DemoApplication in 1.766 seconds ...

Our Spring Boot application is up and running. Now, let's verify it by calling our /welcome endpoint:

$ curl http://localhost:8080/welcome
Welcome to Baeldung Spring Boot Demo!

Great! We've got the expected response. Our application is running properly.

4.4. Executing spring-boot:repackage Goal During Maven's package Lifecycle

We can configure the Spring Boot Maven Plugin in our pom.xml to repackage the artifact during the package phase of the Maven lifecycle. In other words, when we execute mvn package, the spring-boot:repackage goal will be executed automatically.

The configuration is pretty straightforward. We just add the repackage goal to an execution element:

<build>
    <finalName>${project.artifactId}</finalName>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Now, let's run mvn clean package once again:

$ mvn clean package
 ...
[INFO] Building spring-boot-artifacts-2 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
...
[INFO] --- spring-boot-maven-plugin:2.3.3.RELEASE:repackage (default) @ spring-boot-artifacts-2 ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
 ...

The output shows the repackage goal has been executed. If we check the file system, we'll find the repackaged JAR file is created:

$ ls -lh target/*jar*
-rw-r--r-- 1 kent kent  29M Dec 22 23:56 target/spring-boot-artifacts-2.jar
-rw-r--r-- 1 kent kent 3.6K Dec 22 23:56 target/spring-boot-artifacts-2.jar.original
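
As a side note, if we'd rather keep the original JAR as the main build artifact, the plugin's classifier configuration parameter attaches the repackaged archive under a separate name instead of replacing the original. A minimal sketch, where the classifier name exec is just a convention:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <classifier>exec</classifier>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>repackage</goal>
            </goals>
        </execution>
    </executions>
</plugin>

With this configuration, mvn package produces spring-boot-artifacts-2-exec.jar next to the plain spring-boot-artifacts-2.jar, so downstream projects can still consume the original artifact as a dependency.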

5. Conclusion

In this article, we've discussed the difference between mvn package and spring-boot:repackage.

Also, we've learned how to execute spring-boot:repackage during the package phase of the Maven lifecycle.

As always, the code in this write-up is all available over on GitHub.

The post Difference Between spring-boot:repackage and Maven package first appeared on Baeldung.
        

New Features in Java 11


1. Overview

Oracle released Java 11 in September 2018, only 6 months after its predecessor, version 10.

Java 11 is the first long-term support (LTS) release after Java 8. Oracle also stopped providing free public updates of Java 8 for commercial use in January 2019. As a consequence, many of us will upgrade to Java 11.

In this tutorial, we'll take a look at our options for choosing a Java 11 JDK. Then, we'll explore new features, removed features, and performance enhancements introduced in Java 11.

2. Oracle vs. Open JDK

Java 10 was the last free Oracle JDK release that we could use commercially without a license. Starting with Java 11, there's no free long-term support (LTS) from Oracle.

Thankfully, Oracle continues to provide Open JDK releases, which we can download and use without charge.

Besides Oracle, there are other Open JDK providers that we may consider.

3. Developer Features

Let's take a look at changes to the common APIs, as well as a few other features useful for developers.

3.1. New String Methods

Java 11 adds a few new methods to the String class: isBlank, lines, strip, stripLeading, stripTrailing, and repeat.

Let's check how we can make use of the new methods to extract non-blank, stripped lines from a multiline string:

String multilineString = "Baeldung helps \n \n developers \n explore Java.";
List<String> lines = multilineString.lines()
  .filter(line -> !line.isBlank())
  .map(String::strip)
  .collect(Collectors.toList());
assertThat(lines).containsExactly("Baeldung helps", "developers", "explore Java.");

These methods can reduce the amount of boilerplate involved in manipulating string objects and save us from having to import third-party libraries.
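
For instance, repeat and the one-sided strip variants replace hand-rolled loops or helper utilities. A couple of quick, illustrative assertions:

assertThat("ab".repeat(3)).isEqualTo("ababab");
assertThat("  baeldung  ".stripLeading()).isEqualTo("baeldung  ");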

The strip methods provide functionality similar to the more familiar trim method, but with finer control and Unicode support.
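
To illustrate the difference, here's a small sketch using U+2000 (EN QUAD), a Unicode space character that trim ignores because it only removes code points up to U+0020:

String padded = "\u2000baeldung\u2000";
assertThat(padded.strip()).isEqualTo("baeldung");
assertThat(padded.trim()).isNotEqualTo("baeldung");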

3.2. New File Methods

It's now easier to read and write Strings from files.

We can use the new readString and writeString static methods from the Files class:

Path filePath = Files.writeString(Files.createTempFile(tempDir, "demo", ".txt"), "Sample text");
String fileContent = Files.readString(filePath);
assertThat(fileContent).isEqualTo("Sample text");

3.3. Collection to an Array

The java.util.Collection interface contains a new default toArray method which takes an IntFunction argument.

This makes it easier to create an array of the right type from a collection:

List<String> sampleList = Arrays.asList("Java", "Kotlin");
String[] sampleArray = sampleList.toArray(String[]::new);
assertThat(sampleArray).containsExactly("Java", "Kotlin");

3.4. The Not Predicate Method

A static not method has been added to the Predicate interface. We can use it to negate an existing predicate, much like the negate method:

List<String> sampleList = Arrays.asList("Java", "\n \n", "Kotlin", " ");
List<String> withoutBlanks = sampleList.stream()
  .filter(Predicate.not(String::isBlank))
  .collect(Collectors.toList());
assertThat(withoutBlanks).containsExactly("Java", "Kotlin");

While not(isBlank) reads more naturally than isBlank.negate(), the big advantage is that we can also use not with method references, like not(String::isBlank).

3.5. Local-Variable Syntax for Lambda

Support for using the local variable syntax (var keyword) in lambda parameters was added in Java 11.

We can make use of this feature to apply modifiers to our local variables, like defining a type annotation:

List<String> sampleList = Arrays.asList("Java", "Kotlin");
String resultString = sampleList.stream()
  .map((@Nonnull var x) -> x.toUpperCase())
  .collect(Collectors.joining(", "));
assertThat(resultString).isEqualTo("JAVA, KOTLIN");

3.6. HTTP Client

The new HTTP client from the java.net.http package was introduced in Java 9. It has now become a standard feature in Java 11.

The new HTTP API improves overall performance and provides support for both HTTP/1.1 and HTTP/2:

HttpClient httpClient = HttpClient.newBuilder()
  .version(HttpClient.Version.HTTP_2)
  .connectTimeout(Duration.ofSeconds(20))
  .build();
HttpRequest httpRequest = HttpRequest.newBuilder()
  .GET()
  .uri(URI.create("http://localhost:" + port))
  .build();
HttpResponse<String> httpResponse = httpClient.send(httpRequest, HttpResponse.BodyHandlers.ofString());
assertThat(httpResponse.body()).isEqualTo("Hello from the server!");

3.7. Nest Based Access Control

Java 11 introduces the notion of nestmates and the associated access rules within the JVM.

A nest of classes in Java implies both the outer/main class and all its nested classes:

assertThat(MainClass.class.isNestmateOf(MainClass.NestedClass.class)).isTrue();

The outer class lists its nested classes in the NestMembers attribute, while each nested class points back to it through the NestHost attribute:

assertThat(MainClass.NestedClass.class.getNestHost()).isEqualTo(MainClass.class);

JVM access rules allow access to private members between nestmates. However, in previous Java versions, the reflection API denied the same access.

Java 11 fixes this issue and provides means to query the new class file attributes using the reflection API:

Set<String> nestedMembers = Arrays.stream(MainClass.NestedClass.class.getNestMembers())
  .map(Class::getName)
  .collect(Collectors.toSet());
assertThat(nestedMembers).contains(MainClass.class.getName(), MainClass.NestedClass.class.getName());

3.8. Running Java Files

A major change in this version is that we don't need to compile the Java source files with javac explicitly anymore:

$ javac HelloWorld.java
$ java HelloWorld 
Hello Java 8!

Instead, we can directly run the file using the java command:

$ java HelloWorld.java
Hello Java 11!

4. Performance Enhancements

Let's now take a look at a couple of new features whose main purpose is improving performance.

4.1. Dynamic Class-File Constants

Java class-file format is extended to support a new constant-pool form, named CONSTANT_Dynamic.

Loading a new constant-pool entry delegates its creation to a bootstrap method, just as linking an invokedynamic call site delegates linkage to a bootstrap method.

This feature enhances performance and targets language designers and compiler implementers.

4.2. Improved Aarch64 Intrinsics

Java 11 optimizes the existing string and array intrinsics on AArch64 (also known as ARM64) processors. Also, new intrinsics are implemented for the sin, cos, and log methods of java.lang.Math.

We use an intrinsic function like any other. However, the intrinsic function gets handled in a special way by the compiler. It leverages CPU architecture-specific assembly code to boost performance.
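
There's nothing special to write on our side. A plain call like the one below is a candidate for intrinsification; whether the JIT actually substitutes AArch64-specific code depends on the JVM and its flags:

// An ordinary library call; on AArch64 the JIT may replace it
// with an architecture-specific implementation.
double result = Math.sin(Math.PI / 4);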

4.3. A No-Op Garbage Collector

A new garbage collector called Epsilon is available for use in Java 11 as an experimental feature.

It's called a No-Op (no operations) collector because it allocates memory but does not actually collect any garbage. Thus, Epsilon is useful for simulating out-of-memory errors.

Obviously, Epsilon won't be suitable for a typical production Java application. However, there are a few specific use-cases where it could be useful:

  • Performance testing
  • Memory pressure testing
  • VM interface testing
  • Extremely short-lived jobs

In order to enable it, we use the -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC flags.
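
For example, a minimal allocation loop like the illustrative sketch below will end with an OutOfMemoryError under Epsilon once the heap fills up, since nothing is ever reclaimed:

import java.util.ArrayList;
import java.util.List;

public class EpsilonDemo {
    public static void main(String[] args) {
        // Run with: java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx64m EpsilonDemo
        // Epsilon allocates but never collects, so this throws OutOfMemoryError.
        List<byte[]> junk = new ArrayList<>();
        while (true) {
            junk.add(new byte[1024 * 1024]); // allocate 1 MB per iteration
        }
    }
}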

4.4. Flight Recorder

Java Flight Recorder (JFR) is now open-source in Open JDK, where it used to be a commercial product in Oracle JDK. JFR is a profiling tool that we can use to gather diagnostics and profiling data from a running Java application.

To start a 120-second JFR recording, we can use the following parameter:

-XX:StartFlightRecording=duration=120s,settings=profile,filename=java-demo-app.jfr

We can use JFR in production since its performance overhead is usually below 1%. Once the time elapses, we can access the recorded data saved in a JFR file.

However, in order to analyze and visualize the data, we need to make use of another tool called JDK Mission Control (JMC).
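
Alternatively, if the application is already running, we can attach a recording with the jcmd tool. A sketch, assuming 12345 is the application's PID:

$ jcmd 12345 JFR.start duration=120s settings=profile filename=java-demo-app.jfr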

5. Removed and Deprecated Modules

As Java evolves, we can no longer use any of its removed features, and should stop using any deprecated features. Let's take a quick look at the most notable ones.

5.1. Java EE and CORBA

Standalone versions of the Java EE technologies are available on third-party sites. Therefore, there is no need for Java SE to include them.

Java 9 had already deprecated selected Java EE and CORBA modules. Release 11 now removes them completely:

  • Java API for XML-Based Web Services (java.xml.ws)
  • Java Architecture for XML Binding (java.xml.bind)
  • JavaBeans Activation Framework (java.activation)
  • Common Annotations (java.xml.ws.annotation)
  • Common Object Request Broker Architecture (java.corba)
  • Java Transaction API (java.transaction)

5.2. JMC and JavaFX

JDK Mission Control (JMC) is no longer included in the JDK. A standalone version of JMC is now available as a separate download.

The same is true for JavaFX modules. JavaFX will be available as a separate set of modules outside of the JDK.

5.3. Deprecated Modules

Furthermore, Java 11 deprecated the following modules:

  • Nashorn JavaScript engine, including the jjs tool
  • Pack200 compression scheme for JAR files

6. Miscellaneous Changes

Java 11 introduced a few more changes which are important to mention:

  • New ChaCha20 and ChaCha20-Poly1305 cipher implementations replace the insecure RC4 stream cipher (see the sketch after this list)
  • Support for cryptographic key agreement with Curve25519 and Curve448 replaces the existing ECDH scheme
  • An upgrade of Transport Layer Security (TLS) to version 1.3 brings security and performance improvements
  • The introduction of ZGC, an experimental low-latency garbage collector
  • Support for Unicode 10 brings more characters, symbols, and emojis
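
As a quick taste of the new ciphers, here's a minimal, illustrative round of ChaCha20-Poly1305 encryption. The algorithm names and the 96-bit nonce size match what Java 11 ships, but the snippet is a sketch rather than production-ready crypto code:

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class ChaCha20Demo {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("ChaCha20").generateKey();

        byte[] nonce = new byte[12]; // ChaCha20-Poly1305 uses a 96-bit nonce
        new SecureRandom().nextBytes(nonce);

        Cipher cipher = Cipher.getInstance("ChaCha20-Poly1305");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(nonce));
        byte[] cipherText = cipher.doFinal("Hello Java 11!".getBytes());

        System.out.println("Encrypted " + cipherText.length + " bytes");
    }
}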

7. Conclusion

In this article, we explored some new features of Java 11.

We covered the differences between Oracle and Open JDK. Also, we reviewed API changes, as well as other useful development features, performance enhancements, and removed or deprecated modules.

As always, the source code is available over on GitHub.

The post New Features in Java 11 first appeared on Baeldung.
        