
Java IndexOutOfBoundsException “Source Does Not Fit in Dest”


1. Overview

In Java, making a copy of a List can sometimes produce an IndexOutOfBoundsException: “Source does not fit in dest”. In this short tutorial, we're going to look at why we get this error when using the Collections.copy method and how it can be solved. We'll also look at alternatives to Collections.copy to make a copy of the list.

2. Reproducing the Problem

Let's start with a method to create a copy of a List using the Collections.copy method:

static List<Integer> copyList(List<Integer> source) {
    List<Integer> destination = new ArrayList<>(source.size());
    Collections.copy(destination, source);
    return destination;
}

Here, the copyList method creates a new list with an initial capacity equal to the size of the source list. Then it tries to copy the elements of the source list to the destination list:

List<Integer> source = Arrays.asList(1, 2, 3, 4, 5);
List<Integer> copy = copyList(source);

However, once we make a call to the copyList method, it throws java.lang.IndexOutOfBoundsException: Source does not fit in dest.

3. Cause of the Exception

Let's try to understand what went wrong. According to the documentation for the Collections.copy method:

The destination list must be at least as long as the source list. If it's longer, the remaining elements in the destination list are unaffected.

In our example, we've created a new List using a constructor with an initial capacity equal to the size of the source list. It simply allocates enough memory and doesn't actually define elements. The size of the new list remains zero because the capacity and the size are different attributes of the List.

Therefore, when the Collections.copy method tries to copy the source list into the destination list, it throws java.lang.IndexOutOfBoundsException.
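To see the difference between capacity and size in isolation, here's a quick sketch (the variable name is just illustrative; the behavior is a fact of ArrayList):

List<Integer> destination = new ArrayList<>(5);

System.out.println(destination.size()); // prints 0: the constructor argument only pre-allocates capacity
destination.get(0);                     // throws IndexOutOfBoundsException, since there are no elements yet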

4. Solutions

4.1. Collections.copy

Let's look at a working example to copy a List to another List, using the Collections.copy method:

List<Integer> destination = Arrays.asList(1, 2, 3, 4, 5);
List<Integer> source = Arrays.asList(11, 22, 33);
Collections.copy(destination, source);

In this case, we're copying all three elements of the source list to the destination list. The Arrays.asList method initializes the list with elements, not just a size; therefore, we're able to copy the source list to the destination list successfully.

If we just swap the arguments of the Collections.copy method, it will throw java.lang.IndexOutOfBoundsException because the size of the source list is less than the size of the destination list.

After this copy operation, the destination list looks like:

[11, 22, 33, 4, 5]
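If we'd rather keep the original copyList method from section 2 working with Collections.copy, one option is to pre-fill the destination with placeholder elements so that its size, not just its capacity, matches the source. Here's a sketch of that variant:

static List<Integer> copyList(List<Integer> source) {
    // Collections.nCopies produces a list whose size equals source.size()
    List<Integer> destination = new ArrayList<>(Collections.nCopies(source.size(), 0));
    Collections.copy(destination, source);
    return destination;
}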

Along with the Collections.copy method, there are other ways in Java to make a copy of List. Let's take a look at some of them.

4.2. ArrayList Constructor

The simplest approach to copy a List is using a constructor that takes a Collection parameter:

List<Integer> source = Arrays.asList(11, 22, 33);
List<Integer> destination = new ArrayList<>(source);

Here, we simply pass the source list to the constructor of the destination list, which creates a shallow copy of the source list.

The destination list is a new, independent list, but its elements are the same object references held by the source list. So, a change made to a shared element through either list is visible via both.

Therefore, the copy constructor is a good option when the list holds immutable objects like Integers and Strings.

4.3. addAll

Another simple way is to use the addAll method of List:

List<Integer> destination = new ArrayList<>();
destination.addAll(source);

The addAll method will copy all the elements of the source list to the destination list.

There are a couple of points to note regarding this approach:

  1. It creates a shallow copy of the source list.
  2. The elements of the source list are appended to the destination list.

4.4. Java 8 Streams

Java 8 has introduced the Stream API, which is a great tool for working with Java Collections.

Using the stream() method, we make a copy of the list using Stream API:

List<Integer> copy = source.stream()
  .collect(Collectors.toList());

4.5. Java 10

Copying a List is even simpler in Java 10. Using the copyOf() method allows us to create an immutable list containing the elements of the given Collection:

List<Integer> destination = List.copyOf(sourceList);

If we want to go with this approach, we need to make sure the input List isn't null and that it doesn't contain any null elements.
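Since the returned list is immutable, any attempt to modify it will fail at runtime. A quick sketch of that behavior:

List<Integer> source = Arrays.asList(11, 22, 33);
List<Integer> destination = List.copyOf(source);

destination.add(44); // throws UnsupportedOperationException: the copy is unmodifiable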

5. Conclusion

In this article, we looked at how and why the Collections.copy method throws an IndexOutOfBoundsException “Source does not fit in dest”. Along the way, we also explored different ways to copy a List to another List.

Both the pre-Java-10 examples and the Java 10 examples can be found over on GitHub.


Localizing Exception Messages in Java


1. Overview

Exceptions in Java are used to signal that something has gone wrong in a program. In addition to throwing the exception, we can even add a message to provide additional information.

In this article, we'll take advantage of the getLocalizedMessage method to provide exception messages in both English and French.

2. Resource Bundle

We need a way to look up messages using a messageKey to identify the message and a Locale to identify which translation will provide the value for that messageKey. We'll create a simple class to abstract access to our ResourceBundle for retrieving English and French message translations:

public class Messages {
    public static String getMessageForLocale(String messageKey, Locale locale) {
        return ResourceBundle.getBundle("messages", locale)
          .getString(messageKey);
    }
}

Our Messages class uses ResourceBundle to load the properties files, located at the root of our classpath, into our bundle. We have two files, one for our English messages and one for our French messages:

# messages.properties
message.exception = I am an exception.
# messages_fr.properties
message.exception = Je suis une exception.

3. Localized Exception Class

Our Exception subclass will use the default Locale to determine which translation to use for our messages. We'll get the default Locale using Locale#getDefault.

If our application were running on a server, we would use the HTTP request headers to identify the Locale to use instead of setting the default. For this purpose, we'll create a constructor to accept a Locale.

Let's create our Exception subclass. For this, we could extend either RuntimeException or Exception. Let's extend Exception and override getLocalizedMessage:

public class LocalizedException extends Exception {
    private final String messageKey;
    private final Locale locale;
    public LocalizedException(String messageKey) {
        this(messageKey, Locale.getDefault());
    }
    public LocalizedException(String messageKey, Locale locale) {
        this.messageKey = messageKey;
        this.locale = locale;
    }
    public String getLocalizedMessage() {
        return Messages.getMessageForLocale(messageKey, locale);
    }
}

4. Putting It All Together

Let's create some unit tests to verify that everything works. We'll create tests for English and French translations to verify passing a custom Locale to the exception during construction:

@Test
public void givenUsEnglishProvidedLocale_whenLocalizingMessage_thenMessageComesFromDefaultMessage() {
    LocalizedException localizedException = new LocalizedException("message.exception", Locale.US);
    String usEnglishLocalizedExceptionMessage = localizedException.getLocalizedMessage();
    assertThat(usEnglishLocalizedExceptionMessage).isEqualTo("I am an exception.");
}
@Test
public void givenFranceFrenchProvidedLocale_whenLocalizingMessage_thenMessageComesFromFrenchTranslationMessages() {
    LocalizedException localizedException = new LocalizedException("message.exception", Locale.FRANCE);
    String franceFrenchLocalizedExceptionMessage = localizedException.getLocalizedMessage();
    assertThat(franceFrenchLocalizedExceptionMessage).isEqualTo("Je suis une exception.");
}

Our exception can use the default Locale as well. Let's create two more tests to verify the default Locale functionality works:

@Test
public void givenUsEnglishDefaultLocale_whenLocalizingMessage_thenMessageComesFromDefaultMessages() {
    Locale.setDefault(Locale.US);
    LocalizedException localizedException = new LocalizedException("message.exception");
    String usEnglishLocalizedExceptionMessage = localizedException.getLocalizedMessage();
    assertThat(usEnglishLocalizedExceptionMessage).isEqualTo("I am an exception.");
}
@Test
public void givenFranceFrenchDefaultLocale_whenLocalizingMessage_thenMessageComesFromFrenchTranslationMessages() {
    Locale.setDefault(Locale.FRANCE);
    LocalizedException localizedException = new LocalizedException("message.exception");
    String franceFrenchLocalizedExceptionMessage = localizedException.getLocalizedMessage();
    assertThat(franceFrenchLocalizedExceptionMessage).isEqualTo("Je suis une exception.");
}

5. Caveats

5.1. Logging Throwables

We'll need to keep in mind which logging framework we're using when we send Exception instances to the log.

Log4J, Log4J2, and Logback use getMessage to retrieve the message to write to the log appender. If we use java.util.logging, the content comes from getLocalizedMessage.

We might want to consider overriding getMessage to invoke getLocalizedMessage so we won't have to worry about which logging implementation is used.
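A minimal sketch of that override inside our LocalizedException class:

@Override
public String getMessage() {
    // delegate to the localized lookup so every logging framework gets the translated message
    return getLocalizedMessage();
}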

5.2. Server-Side Applications

When we localize our exception messages for client applications, we only need to worry about one system's current Locale. However, if we want to localize exception messages in a server-side application, we should keep in mind that switching the default Locale will affect all requests within our application server.

Should we decide to localize exception messages, we'll create a constructor on our exception to accept the Locale. This will give us the ability to localize our messages without updating the default Locale.

6. Summary

Localizing exception messages is fairly straightforward. All we need to do is create a ResourceBundle for our messages, then implement getLocalizedMessage in our Exception subclasses.

As usual, the examples are available over on GitHub.


Ignoring Fields With the JPA @Transient Annotation


1. Introduction

When persisting Java objects into database records using an Object-Relational Mapping (ORM) framework, we often want to ignore certain fields. If the framework is compliant with the Java Persistence API (JPA), we can add the @Transient annotation to these fields.

In this tutorial, we'll demonstrate proper usage of the @Transient annotation. We'll also look at its relationship with Java's built-in transient keyword.

2. @Transient Annotation vs. transient Keyword

There is generally some confusion over the relationship between the @Transient annotation and Java's built-in transient keyword. The transient keyword is primarily meant for ignoring fields during Java object serialization, but it also prevents these fields from being persisted when using a JPA framework.

In other words, the transient keyword has the same effect as the @Transient annotation when saving to a database. However, the @Transient annotation does not affect Java object serialization.

3. JPA @Transient Example

Let's say we have a User class, which is a JPA entity that maps to a Users table in our database. When a user logs in, we retrieve their record from the Users table, and then we set some additional fields on the User entity afterwards. These extra fields don't correspond to any columns in the Users table because we don't want to save these values.

For example, we'll set a timestamp on the User entity that represents when the user logged in to their current session:

@Entity
@Table(name = "Users")
public class User {
    @Id
    private Integer id;
 
    private String email;
 
    private String password;
 
    @Transient
    private Date loginTime;
    
    // getters and setters
}

When we save this User object to the database using a JPA provider like Hibernate, the provider ignores the loginTime field because of the @Transient annotation.

If we serialize this User object and pass it to another service in our system, the loginTime field will be included in the serialization. If we didn't want to include this field, we could replace the @Transient annotation with the transient keyword instead:

@Entity
@Table(name = "Users")
public class User implements Serializable {
    @Id
    private Integer id;
 
    private String email;
 
    private String password;
 
    private transient Date loginTime;
    //getters and setters
}

Now, the loginTime field is ignored during database persistence and object serialization.
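To see the serialization side of this in action, here's a sketch of a round-trip check. It assumes User has a no-arg constructor and the getters and setters indicated above, and it would live inside a test method that declares throws Exception:

User user = new User();
user.setLoginTime(new Date());

ByteArrayOutputStream bytes = new ByteArrayOutputStream();
try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
    out.writeObject(user);
}

User deserializedUser;
try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
    deserializedUser = (User) in.readObject();
}

// the transient field was skipped during serialization, so it comes back as null
assertNull(deserializedUser.getLoginTime());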

4. Conclusion

In this article, we have investigated how to properly use the JPA @Transient annotation in a typical use case. Be sure to check out other articles on JPA to learn more about persistence.

As always, the full source code of the article is available over on GitHub.


Sealed Classes and Interfaces in Java 15


1. Overview

The release of Java SE 15 introduces sealed classes (JEP 360) as a preview feature.

This feature is about enabling more fine-grained inheritance control in Java. Sealing allows classes and interfaces to define their permitted subtypes.

In other words, a class or an interface can now define which classes can implement or extend it. It is a useful feature for domain modeling and increasing the security of libraries.

2. Motivation

A class hierarchy enables us to reuse code via inheritance. However, the class hierarchy can also have other purposes. Code reuse is great but is not always our primary goal.

2.1. Modeling Possibilities

An alternative purpose of a class hierarchy can be to model various possibilities that exist in a domain.

As an example, imagine a business domain that only works with cars and trucks, not motorcycles. When creating the Vehicle abstract class in Java, we should be able to allow only Car and Truck classes to extend it. In that way, we want to ensure that there will be no misuse of the Vehicle abstract class within our domain.

In this example, we are more interested in the clarity of code handling known subclasses than in defending against all unknown subclasses.

Before version 15, Java assumed that code reuse is always a goal. Every class was extendable by any number of subclasses.

2.2. The Package-Private Approach

In earlier versions, Java provided limited options in the area of inheritance control.

A final class can have no subclasses. A package-private class can only have subclasses in the same package.

With the package-private approach, users cannot access the abstract class unless we also allow them to extend it:

public class Vehicles {
    abstract static class Vehicle {
        private final String registrationNumber;
        public Vehicle(String registrationNumber) {
            this.registrationNumber = registrationNumber;
        }
        public String getRegistrationNumber() {
            return registrationNumber;
        }
    }
    public static final class Car extends Vehicle {
        private final int numberOfSeats;
        public Car(int numberOfSeats, String registrationNumber) {
            super(registrationNumber);
            this.numberOfSeats = numberOfSeats;
        }
        public int getNumberOfSeats() {
            return numberOfSeats;
        }
    }
    public static final class Truck extends Vehicle {
        private final int loadCapacity;
        public Truck(int loadCapacity, String registrationNumber) {
            super(registrationNumber);
            this.loadCapacity = loadCapacity;
        }
        public int getLoadCapacity() {
            return loadCapacity;
        }
    }
}

2.3. Superclass Accessible, Not Extensible

A superclass that is developed with a set of its subclasses should be able to document its intended usage, not constrain its subclasses. Also, having restricted subclasses should not limit the accessibility of its superclass.

Thus, the main motivation behind sealed classes is to have the possibility for a superclass to be widely accessible but not widely extensible.

3. Creation

The sealed feature introduces a couple of new modifiers and clauses in Java: sealed, non-sealed, and permits.

3.1. Sealed Interfaces

To seal an interface, we can apply the sealed modifier to its declaration. The permits clause then specifies the classes that are permitted to implement the sealed interface:

public sealed interface Service permits Car, Truck {
    int getMaxServiceIntervalInMonths();
    default int getMaxDistanceBetweenServicesInKilometers() {
        return 100000;
    }
}

3.2. Sealed Classes

Similar to interfaces, we can seal classes by applying the same sealed modifier. The permits clause should be defined after any extends or implements clauses:

public abstract sealed class Vehicle permits Car, Truck {
    protected final String registrationNumber;
    public Vehicle(String registrationNumber) {
        this.registrationNumber = registrationNumber;
    }
    public String getRegistrationNumber() {
        return registrationNumber;
    }
}

A permitted subclass must define a modifier. It may be declared final to prevent any further extensions:

public final class Truck extends Vehicle implements Service {
    private final int loadCapacity;
    public Truck(int loadCapacity, String registrationNumber) {
        super(registrationNumber);
        this.loadCapacity = loadCapacity;
    }
    public int getLoadCapacity() {
        return loadCapacity;
    }
    @Override
    public int getMaxServiceIntervalInMonths() {
        return 18;
    }
}

A permitted subclass may also be declared sealed. However, if we declare it non-sealed, then it is open for extension:

public non-sealed class Car extends Vehicle implements Service {
    private final int numberOfSeats;
    public Car(int numberOfSeats, String registrationNumber) {
        super(registrationNumber);
        this.numberOfSeats = numberOfSeats;
    }
    public int getNumberOfSeats() {
        return numberOfSeats;
    }
    @Override
    public int getMaxServiceIntervalInMonths() {
        return 12;
    }
}

3.3. Constraints

A sealed class imposes three important constraints on its permitted subclasses:

  1. All permitted subclasses must belong to the same module as the sealed class.
  2. Every permitted subclass must explicitly extend the sealed class.
  3. Every permitted subclass must define a modifier: final, sealed, or non-sealed, as the sketch after this list shows.
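For example, if we drop the modifier from our permitted Car subclass, the compiler rejects the declaration. A hypothetical sketch (the exact error message depends on the compiler):

// Car is listed in the permits clause of Vehicle, but without final, sealed,
// or non-sealed this declaration does not compile
public class Car extends Vehicle implements Service {
    // ...
}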

4. Usage

4.1. The Traditional Way

When sealing a class, we enable the client code to reason clearly about all permitted subclasses.

The traditional way to reason about subclasses is to use a set of if-else statements and instanceof checks:

if (vehicle instanceof Car) {
    return ((Car) vehicle).getNumberOfSeats();
} else if (vehicle instanceof Truck) {
    return ((Truck) vehicle).getLoadCapacity();
} else {
    throw new RuntimeException("Unknown instance of Vehicle");
}

4.2. Pattern Matching

By applying pattern matching, we can avoid the additional class cast, but we still need a set of if-else statements:

if (vehicle instanceof Car car) {
    return car.getNumberOfSeats();
} else if (vehicle instanceof Truck truck) {
    return truck.getLoadCapacity();
} else {
    throw new RuntimeException("Unknown instance of Vehicle");
}

Using if-else makes it difficult for the compiler to determine that we covered all permitted subclasses. For that reason, we are throwing a RuntimeException.

In future versions of Java, the client code will be able to use a switch statement instead of if-else (JEP 375).

By using type test patterns, the compiler will be able to check that every permitted subclass is covered. Thus, there will be no more need for a default clause/case.
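Purely as an illustration of where this is heading, based on the pattern matching for switch proposals rather than anything available in Java 15, such a switch might look roughly like this:

// illustrative sketch only; not valid syntax in Java SE 15
return switch (vehicle) {
    case Car car -> car.getNumberOfSeats();
    case Truck truck -> truck.getLoadCapacity();
};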

5. Compatibility

Let's now take a look at the compatibility of sealed classes with other Java language features like records and the reflection API.

5.1. Records

Sealed classes work very well with records. Since records are implicitly final, the sealed hierarchy is even more concise. Let's try to rewrite our class example using records:

public sealed interface Vehicle permits Car, Truck {
    String getRegistrationNumber();
}
public record Car(int numberOfSeats, String registrationNumber) implements Vehicle {
    @Override
    public String getRegistrationNumber() {
        return registrationNumber;
    }
    public int getNumberOfSeats() {
        return numberOfSeats;
    }
}
public record Truck(int loadCapacity, String registrationNumber) implements Vehicle {
    @Override
    public String getRegistrationNumber() {
        return registrationNumber;
    }
    public int getLoadCapacity() {
        return loadCapacity;
    }
}

5.2. Reflection

Sealed classes are also supported by the reflection API, where two public methods have been added to the java.lang.Class:

  • The isSealed method returns true if the given class or interface is sealed.
  • The permittedSubclasses method returns an array of ClassDesc objects representing all the permitted subclasses.

We can make use of these methods to create assertions that are based on our example:

Assertions.assertThat(truck.getClass().isSealed()).isEqualTo(false);
Assertions.assertThat(truck.getClass().getSuperclass().isSealed()).isEqualTo(true);
Assertions.assertThat(truck.getClass().getSuperclass().permittedSubclasses())
  .contains(ClassDesc.of(truck.getClass().getCanonicalName()));

6. Conclusion

In this article, we explored sealed classes and interfaces, a preview feature in Java SE 15. We covered the creation and usage of sealed classes and interfaces, as well as their constraints and compatibility with other language features.

In the examples, we covered the creation of a sealed interface and a sealed class, the usage of the sealed class (with and without pattern matching), and the compatibility of sealed classes with records and the reflection API.

As always, the complete source code is available over on GitHub.


Creating a Discord Bot with Discord4J + Spring Boot


1. Overview

Discord4J is an open-source Java library that can primarily be used to quickly access the Discord Bot API. It heavily integrates with Project Reactor to provide a completely non-blocking reactive API.

We'll use Discord4J in this tutorial to create a simple Discord bot capable of responding to a predefined command. We'll build the bot on top of Spring Boot to demonstrate how easy it would be to scale our bot across many other features enabled by Spring Boot.

When we're finished, this bot will be able to listen for a command called “!todo” and will print out a statically defined to-do list.

2. Create a Discord Application

For our bot to receive updates from Discord and post responses in channels, we'll need to create a Discord Application in the Discord Developer Portal and set it up to be a bot. This is a simple process. Since Discord allows the creation of multiple applications or bots under a single developer account, feel free to try this multiple times with different settings.

Here are the steps to create a new application:

  • Log in to the Discord Developer Portal
  • In the Applications tab, click “New Application”
  • Enter a name for our bot and click “Create”
  • Upload an App Icon and a description and click “Save Changes”

Now that an application exists, we simply need to add bot functionality to it. This will generate the bot token that Discord4J requires.

Here are the steps to transform an application into a bot:

  • In the Applications tab, select our application (if it is not already selected).
  • In the Bot tab, click “Add Bot” and confirm that we want to do it.

Now that our application has become a real bot, copy the token so that we can add it to our application properties. Be careful not to share this token publicly since someone else would be able to execute malicious code while impersonating our bot.

We're now ready to write some code!

3. Create a Spring Boot App

After constructing a new Spring Boot app, we need to be sure to include the Discord4J core dependency:

<dependency>
    <groupId>com.discord4j</groupId>
    <artifactId>discord4j-core</artifactId>
    <version>3.1.1</version>
</dependency>

Discord4J works by initializing a GatewayDiscordClient with the bot token we created earlier. This client object allows us to register event listeners and configure many things, but at a minimum, we must call the login() method. This will display our bot as being online.

First, let's add our bot token to our application.yml file:

token: 'our-token-here'

Next, let's inject it into a @Configuration class where we can instantiate our GatewayDiscordClient:

@Configuration
public class BotConfiguration {
    @Value("${token}")
    private String token;
    @Bean
    public GatewayDiscordClient gatewayDiscordClient() {
        GatewayDiscordClient client = DiscordClientBuilder.create(token)
          .build()
          .login()
          .block();
        client.onDisconnect().block();
        return client;
    }
}

The block() calls are necessary to keep the program from exiting immediately.

At this point, our bot would be seen as online, but it doesn't do anything yet. Let's add some functionality.

4. Add Event Listeners

The most common feature of a chatbot is the command. This is an abstraction seen in CLIs where a user types some text to trigger certain functions. We can achieve this in our Discord bot by listening for new messages that users send and replying with intelligent responses when appropriate.

There are many types of events for which we can listen. However, registering a listener is the same for all of them, so let's first create an interface for all of our event listeners:

import discord4j.core.event.domain.Event;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;

public interface EventListener<T extends Event> {
    Logger LOG = LoggerFactory.getLogger(EventListener.class);
    
    Class<T> getEventType();
    Mono<Void> execute(T event);
    
    default Mono<Void> handleError(Throwable error) {
        LOG.error("Unable to process " + getEventType().getSimpleName(), error);
        return Mono.empty();
    }
}

Now we can implement this interface for as many discord4j.core.event.domain.Event extensions as we want.

Before we implement our first event listener, let's modify our client @Bean configuration to expect a list of EventListener beans so that it can register every one found in the Spring ApplicationContext:

@Bean
public <T extends Event> GatewayDiscordClient gatewayDiscordClient(List<EventListener<T>> eventListeners) {
    GatewayDiscordClient client = DiscordClientBuilder.create(token)
      .build()
      .login()
      .block();
    for(EventListener<T> listener : eventListeners) {
        client.on(listener.getEventType())
          .flatMap(listener::execute)
          .onErrorResume(listener::handleError)
          .subscribe();
    }
    client.onDisconnect().block();
    return client;
}

Now, all we have to do to register event listeners is to implement our interface and annotate it with Spring's @Component-based stereotype annotations. The registration will now happen automatically for us!

We could have chosen to register each event separately and explicitly. However, it is generally better to take a more modular approach for better code scalability.

Our event listener setup is now complete, but the bot still doesn't do anything yet, so let's add some events to listen to.

4.1. Command Processing

To receive a user's command, we can listen to two different event types: MessageCreateEvent for new messages and MessageUpdateEvent for updated messages. We may only want to listen for new messages, but as a learning opportunity, let's assume we want to support both kinds of events for our bot. This will provide an extra layer of robustness that our users may appreciate.

Both event objects contain all the relevant information about each event. In particular, we're interested in the message contents, the author of the message, and the channel it was posted to. Luckily, all of these data points live in the Message object that both of these event types provide.

Once we have the Message, we can check the author to make sure it is not a bot, we can check the message contents to make sure it matches our command, and we can use the message's channel to send a response.

Since we can fully operate from both events through their Message objects, let's put all downstream logic into a common location so that both event listeners can use it:

import discord4j.core.object.entity.Message;
import reactor.core.publisher.Mono;
public abstract class MessageListener {
    public Mono<Void> processCommand(Message eventMessage) {
        return Mono.just(eventMessage)
          .filter(message -> message.getAuthor().map(user -> !user.isBot()).orElse(false))
          .filter(message -> message.getContent().equalsIgnoreCase("!todo"))
          .flatMap(Message::getChannel)
          .flatMap(channel -> channel.createMessage("Things to do today:\n - write a bot\n - eat lunch\n - play a game"))
          .then();
    }
}

A lot is going on here, but this is the most basic form of a command and response. This approach uses a reactive functional design, but it is possible to write this in a more traditional imperative way using block().

Scaling across multiple bot commands, invoking different services or data repositories, or even using Discord roles as authorization for certain commands are common parts of a good bot command architecture. Since our listeners are Spring-managed @Services, we could easily inject other Spring-managed beans to take care of those tasks. However, we won't tackle any of that in this article.

4.2. EventListener<MessageCreateEvent>

To receive new messages from a user, we must listen to the MessageCreateEvent. Since the command processing logic already lives in MessageListener, we can extend it to inherit that functionality. Also, we need to implement our EventListener interface to comply with our registration design:

@Service
public class MessageCreateListener extends MessageListener implements EventListener<MessageCreateEvent> {
    @Override
    public Class<MessageCreateEvent> getEventType() {
        return MessageCreateEvent.class;
    }
    @Override
    public Mono<Void> execute(MessageCreateEvent event) {
        return processCommand(event.getMessage());
    }
}

Through inheritance, the message is passed off to our processCommand() method where all verification and responses occur.

At this point, our bot will receive and respond to the “!todo” command. However, if a user corrects their mistyped command, the bot would not respond. Let's support this use case with another event listener.

4.3. EventListener<MessageUpdateEvent>

The MessageUpdateEvent is emitted when a user edits a message. We can listen for this event to recognize commands, much like how we listen for the MessageCreateEvent.

For our purposes, we only care about this event if the message contents were changed. We can ignore other instances of this event. Fortunately, we can use the isContentChanged() method to filter out such instances:

@Service
public class MessageUpdateListener extends MessageListener implements EventListener<MessageUpdateEvent> {
    
    @Override
    public Class<MessageUpdateEvent> getEventType() {
        return MessageUpdateEvent.class;
    }
    @Override
    public Mono<Void> execute(MessageUpdateEvent event) {
        return Mono.just(event)
          .filter(MessageUpdateEvent::isContentChanged)
          .flatMap(MessageUpdateEvent::getMessage)
          .flatMap(super::processCommand);
    }
}

In this case, since getMessage() returns Mono<Message> instead of a raw Message, we need to use flatMap() to send it to our superclass.

5. Test Bot in Discord

Now that we have a functioning Discord bot, we can invite it to a Discord server and test it.

To create an invite link, we must specify which permissions the bot requires to function properly. A popular third-party Discord Permissions Calculator is often used to generate an invite link with the needed permissions. Although it's not recommended for production, we can simply choose “Administrator” for testing purposes and not worry about the other permissions. Simply supply the Client ID for our bot (found in the Discord Developer Portal) and use the generated link to invite our bot to a server.

If we do not grant Administrator permissions to the bot, we might need to tweak channel permissions so that the bot can read and write in a channel.

The bot now responds to the “!todo” command, both when a new message is posted and when an existing message is edited to say “!todo”.

6. Conclusion

This tutorial described all the necessary steps for creating a Discord bot using the Discord4J library and Spring Boot. Finally, it described how to set up a basic scalable command and response structure for the bot.

For a complete and working bot, view the source code over on GitHub. A valid bot token is required to run it.


Accessing Spring Boot Logs in Docker


1. Overview

In this tutorial, we'll explain how to access Spring Boot logs in Docker, from local development to sustainable multi-container solutions.

2. Basic Console Output

To begin with, let's build our Spring Boot Docker image from our previous article:

$> mvn spring-boot:build-image

Then, when we run our container, we can immediately see STDOUT logs in the console:

$> docker run --name=demo-container docker.io/library/spring-boot-docker:0.0.1-SNAPSHOT
Setting Active Processor Count to 1
WARNING: Container memory limit unset. Configuring JVM for 1G container.

Running the container attached like this follows the log output, much like the Linux tail -f command.

Now, let's configure our Spring Boot application with a log file appender by adding a line to the application.properties file:

logging.file.path=logs

Then, we can obtain the same result by running the tail -f command in our running container:

$> docker exec -it demo-container tail -f /workspace/logs/spring.log
Setting Active Processor Count to 1
WARNING: Container memory limit unset. Configuring JVM for 1G container.

That's it for single-container solutions. In the next chapters, we'll learn how to analyze log history and log output from composed containers.

3. Docker Volume for Log Files

If we must access log files from the host filesystem, we have to create a Docker volume.

To do this, we can run our application container with the command:

$> docker run --name=demo-container -v /path-to-host:/workspace/logs docker.io/library/spring-boot-docker:0.0.1-SNAPSHOT

Then, we can see the spring.log file in the /path-to-host directory.

As we saw in our previous article on Docker Compose, we can run multiple containers from a single Docker Compose file.

If we're using a Docker Compose file, we should add the volumes configuration:

network-example-service-available-to-host-on-port-1337:
  image: karthequian/helloworld:latest
  container_name: network-example-service-available-to-host-on-port-1337
  volumes:
    - /path-to-host:/workspace/logs

Then, let's run that article's Compose file:

$> docker-compose up

The log files are available in the /path-to-host directory.

Now that we've reviewed the basic solutions, let's explore the more advanced docker logs command.

In the following chapters, we assume that our Spring Boot application is configured to print logs to STDOUT.

4. Docker Logs for Multiple Containers

As soon as we run multiple containers at once, reading their mixed log output from a single console quickly becomes impractical.

We can find in the Docker Compose documentation that containers are set up by default with the json-file log driver, which supports the docker logs command.

Let's see how it works with our Docker Compose example.

First, let's find our container id:

$> docker ps
CONTAINER ID        IMAGE                           COMMAND                  
877bb028a143        karthequian/helloworld:latest   "/runner.sh nginx"       

Then, we can display our container logs with the docker logs -f command. We can see that, despite the json-file driver, the output is still plain text — JSON is only used internally by Docker:

$> docker logs -f 877bb028a143
172.27.0.1 - - [22/Oct/2020:11:19:52 +0000] "GET / HTTP/1.1" 200 4369 "
172.27.0.1 - - [22/Oct/2020:11:19:52 +0000] "GET / HTTP/1.1" 200 4369 "

The -f option behaves like the tail -f shell command: it echoes the log output as it's produced.

Note that if we're running our containers in Swarm mode, we should use the docker service ps and docker service logs commands instead.

In the documentation, we can see that the docker logs command only works with a limited set of logging drivers: json-file, local, and journald.

5. Docker Drivers for Log Aggregation Services

The docker logs command is especially useful for instant watching: it doesn't provide complex filters or long-term statistics.

For that purpose, Docker supports several log aggregation service drivers. As we studied Graylog in a previous article, we'll configure the appropriate driver for this platform.

This configuration can be global for the host in the daemon.json file. It's located in /etc/docker on Linux hosts or C:\ProgramData\docker\config on Windows servers.

Note that we should create the daemon.json file if it doesn't exist:

{ 
    "log-driver": "gelf",
    "log-opts": {
        "gelf-address": "udp://1.2.3.4:12201"
    }
}

The Graylog driver is called GELF — we simply specify the IP address of our Graylog instance.

We can also override this configuration when running a single container:

$> docker run \
      --log-driver gelf --log-opt gelf-address=udp://1.2.3.4:12201 \
      alpine echo hello world

6. Conclusion

In this article, we've reviewed different ways to access Spring Boot logs in Docker.

Logging to STDOUT makes log watching quite easy from a single-container execution.

However, using file appenders isn't the best option if we want to benefit from the Docker logging features, as containers don't have the same constraints as proper servers.


Setting Memory And CPU Limits In Docker


1. Overview

There are many cases in which we need to limit the usage of resources on the docker host machine.

In this tutorial, we'll learn how to set the memory and CPU limit for docker containers.

2. Setting Resources Limit With docker run

We can set the resource limits directly using the docker run command. It's a simple solution. However, the limit will apply only to one specific execution of the image.

2.1. Memory

For instance, let's limit the memory that the container can use to 512 megabytes. To constrain memory, we need to use the -m parameter:

$ docker run -m 512m nginx

We can also set a soft limit called a reservation. It's activated when docker detects low memory on the host machine:

$ docker run -m 512m --memory-reservation=256m nginx

2.2. CPU

By default, access to the computing power of the host machine is unlimited. We can set a CPU limit using the --cpus parameter. For example, let's constrain our container to use at most two CPUs:

$ docker run --cpus=2 nginx

We can also specify the priority of CPU allocation with the --cpu-shares parameter. The default is 1024; higher numbers mean higher priority:

$ docker run --cpus=2 --cpu-shares=2000 nginx

Similarly to the memory reservation, CPU shares play the main role when computing power is scarce and needs to be divided between competing processes.

3. Setting Memory Limit With the docker-compose File

We can achieve similar results using docker-compose files. Keep in mind that the format and possibilities vary between versions of docker-compose.

3.1. Versions 3 and Newer With docker swarm

Let's give the Nginx service a limit of half a CPU and 512 megabytes of memory, and a reservation of a quarter of a CPU and 128 megabytes of memory. We need to create the “deploy” and then “resources” segments in our service configuration:

services:
  service:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M

To take advantage of the deploy segment in a docker-compose file, we need to use the docker stack command. To deploy a stack to the swarm, we run the deploy command:

$ docker stack deploy --compose-file docker-compose.yml bael_stack

3.2. Version 2 With docker-compose

In older versions of docker-compose, we can put resource limits on the same level as the service's main properties. They also have slightly different names:

version: "2.4"
services:
  service:
    image: nginx
    mem_limit: 512m
    mem_reservation: 128M
    cpus: 0.5
    ports:
      - "80:80"

To create configured containers, we need to run the docker-compose command:

$ docker-compose up

4. Verifying Resources Usage

After we set the limits, we can verify them using the docker stats command:

$ docker stats
CONTAINER ID        NAME                                             CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
8ad2f2c17078        bael_stack_service.1.jz2ks49finy61kiq1r12da73k   0.00%               2.578MiB / 512MiB   0.50%               936B / 0B           0B / 0B             2

5. Summary

In this tutorial, we explored ways of limiting a container's access to the Docker host's resources. We looked at usage with the docker run and docker-compose commands. Finally, we verified the resulting resource consumption with docker stats.


JPA CascadeType.REMOVE vs orphanRemoval


1. Overview

In this tutorial, we'll be discussing the difference between two of the options we have for removing entities from our databases when working with JPA.

First, we'll start with CascadeType.REMOVE, which is a way to delete a child entity or entities when their parent is deleted. Then we'll take a look at the orphanRemoval attribute, which was introduced in JPA 2.0 and gives us a way to delete orphaned entities from the database.

Throughout the tutorial, we'll be using a simple online store domain to demonstrate our examples.

2. Domain Model

As mentioned earlier, this article makes use of a simple online store domain, wherein an OrderRequest has a ShipmentInfo and a list of LineItem objects.

Given that, let's consider:

  • For the removal of ShipmentInfo, when the deletion of an OrderRequest happens, we'll use CascadeType.REMOVE
  • For the removal of a LineItem from an OrderRequest, we'll use orphanRemoval

First, let's create a ShipmentInfo entity:

@Entity
public class ShipmentInfo {
    
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String name;
    // constructors
}

Next, let's create a LineItem entity:

@Entity
public class LineItem {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String name;
    @ManyToOne
    private OrderRequest orderRequest;
    // constructors, equals, hashCode
}

Lastly, let's put it all together by creating an OrderRequest entity:

@Entity
public class OrderRequest {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @OneToOne(cascade = { CascadeType.REMOVE, CascadeType.PERSIST })
    private ShipmentInfo shipmentInfo;
    @OneToMany(orphanRemoval = true, cascade = CascadeType.PERSIST, mappedBy = "orderRequest")
    private List<LineItem> lineItems;
    // constructors
    public void removeLineItem(LineItem lineItem) {
        lineItems.remove(lineItem);
    }
}

It's worth highlighting the removeLineItem method, which detaches a LineItem from an OrderRequest.

3. CascadeType.REMOVE

As stated earlier, marking a reference field with CascadeType.REMOVE is a way to delete a child entity or entities whenever the deletion of its parent happens.

In our case, an OrderRequest has a ShipmentInfo, and the reference to it is marked with CascadeType.REMOVE.

To verify the deletion of ShipmentInfo from the database when the deletion of an OrderRequest happens, let's create a simple integration test:

@Test
public void whenOrderRequestIsDeleted_thenDeleteShipmentInfo() {
    createOrderRequestWithShipmentInfo();
    OrderRequest orderRequest = entityManager.find(OrderRequest.class, 1L);
    entityManager.getTransaction().begin();
    entityManager.remove(orderRequest);
    entityManager.getTransaction().commit();
    Assert.assertEquals(0, findAllOrderRequest().size());
    Assert.assertEquals(0, findAllShipmentInfo().size());
}
private void createOrderRequestWithShipmentInfo() {
    ShipmentInfo shipmentInfo = new ShipmentInfo("name");
    OrderRequest orderRequest = new OrderRequest(shipmentInfo);
    entityManager.getTransaction().begin();
    entityManager.persist(orderRequest);
    entityManager.getTransaction().commit();
    Assert.assertEquals(1, findAllOrderRequest().size());
    Assert.assertEquals(1, findAllShipmentInfo().size());
}

From the assertions, we can see that the deletion of OrderRequest resulted in the successful deletion of the related ShipmentInfo as well.

4. orphanRemoval 

As stated earlier, orphanRemoval is used to delete orphaned entities from the database. An entity that is no longer attached to its parent is, by definition, an orphan.

In our case, an OrderRequest has a collection of LineItem objects where we use the @OneToMany annotation to identify the relationship. This is where we also set the orphanRemoval attribute to true. To detach a LineItem from an OrderRequest, we can use the removeLineItem method that we previously created.

With everything in place, once we use the removeLineItem method and save the OrderRequest, the orphaned LineItem should be deleted from the database.

To verify the deletion of the orphaned LineItem from the database, let's create another integration test:

@Test
public void whenLineItemIsRemovedFromOrderRequest_thenDeleteOrphanedLineItem() {
    createOrderRequestWithLineItems();
    OrderRequest orderRequest = entityManager.find(OrderRequest.class, 1L);
    LineItem lineItem = entityManager.find(LineItem.class, 2L);
    orderRequest.removeLineItem(lineItem);
    entityManager.getTransaction().begin();
    entityManager.merge(orderRequest);
    entityManager.getTransaction().commit();
    Assert.assertEquals(1, findAllOrderRequest().size());
    Assert.assertEquals(2, findAllLineItem().size());
}
private void createOrderRequestWithLineItems() {
    List<LineItem> lineItems = new ArrayList<>();
    lineItems.add(new LineItem("line item 1"));
    lineItems.add(new LineItem("line item 2"));
    lineItems.add(new LineItem("line item 3"));
    OrderRequest orderRequest = new OrderRequest(lineItems);
    entityManager.getTransaction().begin();
    entityManager.persist(orderRequest);
    entityManager.getTransaction().commit();
    Assert.assertEquals(1, findAllOrderRequest().size());
    Assert.assertEquals(3, findAllLineItem().size());
}

Again, from the assertions, it shows that we have successfully deleted the orphaned LineItem from the database.

Additionally, it's worth mentioning that the removeLineItem method modifies the list of LineItem instead of reassigning a value to it. Doing the latter will lead to a PersistenceException.

To verify the stated behavior, let's create a final integration test:

@Test(expected = PersistenceException.class)
public void whenLineItemsIsReassigned_thenThrowAnException() {
    createOrderRequestWithLineItems();
    OrderRequest orderRequest = entityManager.find(OrderRequest.class, 1L);
    orderRequest.setLineItems(new ArrayList<>());
    entityManager.getTransaction().begin();
    entityManager.merge(orderRequest);
    entityManager.getTransaction().commit();
}

5. Conclusion

In this article, we've explored the difference between CascadeType.REMOVE and orphanRemoval using a simple online store domain. Also, in order to verify the entities were deleted correctly from our database, we created several integration tests.

As always, the full source code of the article is available over on GitHub.


Java Weekly, Issue 359


1. Spring and Java

>> R2DBC joins Reactive Foundation [r2dbc.io]

A good day for open standards: Reactive Relational Database Connectivity (R2DBC) joins the Reactive Foundation!

>> The Reactive Principles [reactive.foundation]

Design Principles for Distributed Applications: a set of best practices to design and implement highly efficient, performant, scalable, and resilient distributed systems.

>> NUMA-Aware Memory Allocations for G1 GC [sangheon.github.io]

How NUMA-awareness affects heap initialization, allocation, and logging in G1 GC.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> How Netflix Scales its API with GraphQL Federation [netflixtechblog.com]

GraphQL Federation: providing unified APIs with distributed ownership and implementation!

Also worth reading:

3. Musings

>> Write Angry! [zachholman.com]

An unorthodox approach to making more efficient processes – being angry and extremely opinionated about the current way of doing things!

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Coffee Productivity [dilbert.com]

>> Banana Is Not An Apple [dilbert.com]

>> No Performance Reviews [dilbert.com]

5. Pick of the Week

>> 5 Common Beliefs that Can Subtly Screw You Over [markmanson.net]


DispatcherServlet and web.xml in Spring Boot


1. Overview

The DispatcherServlet is the front controller in Spring web applications. It's used to create web applications and REST services in Spring MVC. In a traditional Spring web application, this servlet is defined in the web.xml file.

In this tutorial, we'll migrate code from a web.xml file to DispatcherServlet in a Spring Boot application. Also, we'll map Filter, Servlet, and Listener classes from web.xml to the Spring Boot application.

2. Maven Dependency

First, we have to add the spring-boot-starter-web Maven dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

3. DispatcherServlet

DispatcherServlet receives all of the HTTP requests and delegates them to controller classes.

Before the Servlet 3.x specification, DispatcherServlet would be registered in the web.xml file for a Spring MVC application. Since the Servlet 3.x specification, we can register servlets programmatically using ServletContainerInitializer.

Let's see a DispatcherServlet example configuration in the web.xml file:

<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>
        org.springframework.web.servlet.DispatcherServlet
    </servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

Spring Boot provides the spring-boot-starter-web library for developing web applications using Spring MVC. One of the main features of Spring Boot is autoconfiguration. The Spring Boot autoconfiguration registers and configures the DispatcherServlet automatically. Therefore, we don’t need to register the DispatcherServlet manually.

By default, the spring-boot-starter-web starter configures DispatcherServlet to the URL pattern “/”. So, we don't need to complete any additional configuration for the above DispatcherServlet example in the web.xml file. However, we can customize the URL pattern using server.servlet.* in the application.properties file:

server.servlet.context-path=/demo
spring.mvc.servlet.path=/baeldung

With these customizations, DispatcherServlet is configured to handle the URL pattern /baeldung and the root contextPath will be /demo. Thus, DispatcherServlet listens at http://localhost:8080/demo/baeldung/.

4. Application Configuration

Spring MVC web applications use the web.xml file as a deployment descriptor file. Also, it defines mappings between URL paths and the servlets in the web.xml file.

This is no longer the case with Spring Boot. The web.xml file used to declare filters, servlets, and listeners; in a Spring Boot application, if we need a special filter, we can register it in a Java configuration class instead.

When we want to migrate from a traditional Spring MVC to a modern Spring Boot application, how can we port our web.xml to a new Spring Boot application? In Spring Boot applications, we can add these concepts in several ways.

4.1. Registering a Filter

Let's create a filter by implementing the Filter interface:

@Component
public class CustomFilter implements Filter {
    Logger logger = LoggerFactory.getLogger(CustomFilter.class);
    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException {
        logger.info("CustomFilter is invoked");
        chain.doFilter(request, response);
    }
    // other methods 
}

Without Spring Boot, we would configure our CustomFilter in the web.xml file:

<filter>
    <filter-name>customFilter</filter-name>
    <filter-class>CustomFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>customFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

In order for Spring Boot to recognize a filter, we just need to define it as a bean with the @Component annotation.
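If we also need to control the URL pattern, as the <url-pattern> element in web.xml did, we can register the filter with a FilterRegistrationBean instead. This is a sketch; with this approach we'd drop the @Component annotation so the filter isn't registered twice:

@Bean
public FilterRegistrationBean<CustomFilter> customFilterBean() {
    FilterRegistrationBean<CustomFilter> bean = new FilterRegistrationBean<>(new CustomFilter());
    // mirrors the /* url-pattern from the web.xml example
    bean.addUrlPatterns("/*");
    return bean;
}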

4.2. Registering a Servlet

Let's define a servlet by extending the HttpServlet class:

public class CustomServlet extends HttpServlet {
    Logger logger = LoggerFactory.getLogger(CustomServlet.class);
    @Override
    protected void doGet(
        HttpServletRequest req,
        HttpServletResponse resp) throws ServletException, IOException {
            logger.info("CustomServlet doGet() method is invoked");
            super.doGet(req, resp);
    }
    @Override
    protected void doPost(
        HttpServletRequest req,
        HttpServletResponse resp) throws ServletException, IOException {
            logger.info("CustomServlet doPost() method is invoked");
            super.doPost(req, resp);
    }
}

Without Spring Boot, we would configure our CustomServlet in the web.xml file:

<servlet>
    <servlet-name>customServlet</servlet-name>
    <servlet-class>CustomServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>customServlet</servlet-name>
    <url-pattern>/servlet</url-pattern>
</servlet-mapping>

In a Spring Boot application, the servlet is registered either as a Spring @Bean or by scanning the @WebServlet annotated classes with an embedded container.

With the Spring @Bean approach, we can use the ServletRegistrationBean class to register the servlet.

So, we'll define CustomServlet as a bean with the ServletRegistrationBean class:

@Bean
public ServletRegistrationBean<CustomServlet> customServletBean() {
    ServletRegistrationBean<CustomServlet> bean = new ServletRegistrationBean<>(new CustomServlet(), "/servlet");
    return bean;
}
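Alternatively, as mentioned above, we could annotate the servlet with @WebServlet and enable scanning for such classes with @ServletComponentScan on the main application class. Here's a sketch; the DemoApplication class name is just a placeholder for our Spring Boot application class:

@WebServlet(urlPatterns = "/servlet")
public class CustomServlet extends HttpServlet {
    // doGet() and doPost() as before
}

@ServletComponentScan
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}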

4.3. Registering a Listener

Let's define a listener by implementing the ServletContextListener interface:

public class CustomListener implements ServletContextListener {
    Logger logger = LoggerFactory.getLogger(CustomListener.class);
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        logger.info("CustomListener is initialized");
    }
    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        logger.info("CustomListener is destroyed");
    }
}

Without Spring Boot, we would configure our CustomListener in the web.xml file:

<listener>
    <listener-class>CustomListener</listener-class>
</listener>

To define a listener in a Spring Boot application, we can use either the @Bean or @WebListener annotations.

With the Spring @Bean approach, we can use the ServletListenerRegistrationBean class to register the Listener.

So, let's define CustomListener as a bean with the ServletListenerRegistrationBean class:

@Bean
public ServletListenerRegistrationBean<ServletContextListener> customListenerBean() {
    ServletListenerRegistrationBean<ServletContextListener> bean = new ServletListenerRegistrationBean();
    bean.setListener(new CustomListener());
    return bean;
}
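The @WebListener route mentioned above is even shorter: we annotate the listener class itself and rely on @ServletComponentScan, just like in the servlet example. A minimal sketch:

@WebListener
public class CustomListener implements ServletContextListener {
    // contextInitialized() and contextDestroyed() as before
}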

Upon starting our application, we can check the log output to see confirmation that the listener has been successfully initialized:

2020-09-28 08:50:30.872 INFO 19612 --- [main] c.baeldung.demo.listener.CustomListener: CustomListener is initialized

5. Conclusion

In this quick tutorial, we saw how to define DispatcherServlet and web.xml elements including filter, servlet, and listener in a Spring Boot application. And, as always, the source code for the above example can be found over on GitHub.

The post DispatcherServlet and web.xml in Spring Boot first appeared on Baeldung.

        

Maven Packaging Types


1. Overview

The packaging type is an important aspect of any Maven project. It specifies the type of artifact the project produces. Generally, a build produces an artifact such as a jar, a war, or a pom.

Maven offers many default packaging types and also provides the flexibility to define a custom one.

In this tutorial, we'll take a deep dive into Maven packaging types. First, we'll look at the build lifecycles in Maven. Then, we'll discuss each packaging type, what they represent, and their effect on the project's lifecycle. In the end, we'll see how to define a custom packaging type.

2. Default Packaging Types

Maven offers many default packaging types that include a jar, war, ear, pom, rar, ejb, and maven-plugin. Each packaging type follows a build lifecycle that consists of phases. Usually, every phase is a sequence of goals and performs a specific task.

Different packaging types may have a different goal in a particular phase. For example, in the package phase of the jar packaging type, maven-jar-plugin's jar goal is executed. Conversely, for a war project, maven-war-plugin's war goal is executed in the same phase.

2.1. jar

Java archive – or jar – is one of the most popular packaging types. Projects with this packaging type produce a compressed zip file with the .jar extension. It may include pure Java classes, interfaces, resources, and metadata files.

To begin with, let's look at some of the default goal-to-build-phase bindings for the jar:

  • resources: resources
  • compiler: compile
  • resources: testResources
  • compiler: testCompile
  • surefire: test
  • jar: jar
  • install: install
  • deploy: deploy

Without delay, let's define the packaging type of a jar project:

<packaging>jar</packaging>

If nothing has been specified, Maven assumes the packaging type is a jar.

2.2. war

Simply put, a web application archive – or war – contains all files related to a web application. It may include Java servlets, JSPs, HTML pages, a deployment descriptor, and related resources. Overall, war has the same goal bindings as a jar, but with one exception: the package phase of the war has a different goal, which is war.

Without a doubt, jar and war are the most popular packaging types in the Java community. A detailed difference between these two might be an interesting read.

Let's define the packaging type of a web application:

<packaging>war</packaging>

The other packaging types ejb, par, and rar also have similar lifecycles, but each executes a different goal in the package phase: ejb:ejb, par:par, or rar:rar.

2.3. ear

Enterprise application archive – or ear – is a compressed file that contains a J2EE application. It consists of one or more modules that can be either web modules (packaged as a war file) or EJB modules (packaged as a jar file) or both of them.

To put it differently, the ear is a superset of jars and wars and requires an application server to run the application, whereas war requires only a web container or webserver to deploy it. The aspects that distinguish a web server from an application server, and what those popular servers are in Java, are important concepts for a Java developer.

Let's define the default goal bindings for the ear:

  • ear: generate-application-xml
  • resources: resources
  • ear: ear
  • install: install
  • deploy: deploy

Here's how we can define the packaging type of such projects:

<packaging>ear</packaging>

2.4. pom

Among all packaging types, pom is the simplest one. It helps to create aggregator and parent projects.

An aggregator or multi-module project assembles submodules coming from different sources. These submodules are regular Maven projects and follow their own build lifecycles. The aggregator POM has all the references of submodules under the modules element.

A parent project allows you to define the inheritance relationship between POMs.  The parent POM shares certain configurations, plugins, and dependencies, along with their versions. Most elements from the parent are inherited by its children — exceptions include artifactId, name, and prerequisites.

Since there are no resources to process and no code to compile or test, a pom project generates itself as the artifact rather than any executable.

Let's define the packaging type of a multi-module project:

<packaging>pom</packaging>

Such projects have the simplest lifecycle that consists of only two steps: install and deploy.

2.5. maven-plugin

Maven offers a variety of useful plugins. However, there might be cases when the default plugins are not sufficient. In such cases, the tool provides the flexibility to create a maven-plugin according to the project's needs.

To create a plugin, set the packaging type of the project:

<packaging>maven-plugin</packaging>

The maven-plugin has a lifecycle similar to jar's lifecycle, but with two exceptions:

  • plugin: descriptor is bound to the generate-resources phase
  • plugin: addPluginArtifactMetadata is added to the package phase

For this type of project, a maven-plugin-api dependency is required.
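As an illustration, a plugin project typically contains at least one Mojo, a class that implements a single goal. The following is only a minimal sketch; the goal name, class name, and log message are made up for the example, and the @Mojo annotation itself comes from the maven-plugin-annotations artifact:

@Mojo(name = "greet")
public class GreetingMojo extends AbstractMojo {

    @Override
    public void execute() throws MojoExecutionException {
        // runs when the hypothetical "greet" goal is executed
        getLog().info("Hello from our custom Maven plugin!");
    }
}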

2.6. ejb

Enterprise Java Beans – or ejb – help to create scalable, distributed server-side applications. EJBs often provide the business logic of an application. A typical EJB architecture consists of three components: Enterprise Java Beans (EJBs), the EJB container, and an application server.

Now, let's define the packaging type of the EJB project:

<packaging>ejb</packaging>

The ejb packaging type also has a similar lifecycle as jar packaging, but with a different package goal. The package goal for this type of project is ejb:ejb.

The project, with ejb packaging type, requires a maven-ejb-plugin to execute lifecycle goals. Maven provides support for EJB 2 and 3. If no version is specified, then default version 2 is used.

2.7. rar

Resource adapter – or rar – is an archive file that serves as a valid format for the deployment of resource adapters to an application server. Basically, it is a system-level driver that connects a Java application to an enterprise information system (EIS).

Here's the declaration of packaging type for a resource adapter:

<packaging>rar</packaging>

Every resource adapter archive consists of two parts: a jar file that contains source code and a ra.xml that serves as a deployment descriptor.

Again, the lifecycle phases are the same as for jar or war packaging, with one exception: the package phase executes the maven-rar-plugin's rar goal to package the archive.

3. Other Packaging Types

So far, we've looked at various packaging types that Maven offers as default. Now, let's imagine we want our project to produce an artifact with a .zip extension. In this case, the default packaging types can't help us.

Maven also provides some more packaging types through plugins. With the help of these plugins, we can define a custom packaging type and its build lifecycle. Some of these types are:

  • msi
  • rpm
  • tar
  • tar.bz2
  • tar.gz
  • tbz
  • zip

To define a custom type, we have to define its packaging type and the phases of its lifecycle. For this, we create a components.xml file under the src/main/resources/META-INF/plexus directory:

<component>
   <role>org.apache.maven.lifecycle.mapping.LifecycleMapping</role>
   <role-hint>zip</role-hint>
   <implementation>org.apache.maven.lifecycle.mapping.DefaultLifecycleMapping</implementation>
   <configuration>
      <phases>
         <process-resources>org.apache.maven.plugins:maven-resources-plugin:resources</process-resources>
         <package>com.baeldung.maven.plugins:maven-zip-plugin:zip</package>
         <install>org.apache.maven.plugins:maven-install-plugin:install</install>
         <deploy>org.apache.maven.plugins:maven-deploy-plugin:deploy</deploy>
      </phases>
   </configuration>
</component>

At this point, Maven doesn't know anything about our new packaging type and its lifecycle. To make it visible, let's add the plugin to the pom file of the project and set extensions to true:

<plugins>
    <plugin>
        <groupId>com.baeldung.maven.plugins</groupId>
        <artifactId>maven-zip-plugin</artifactId>
        <extensions>true</extensions>
    </plugin>
</plugins>

Now, Maven will treat the plugin as a build extension and look into its components.xml file, too.

Other than all these types, Maven offers a lot of other packaging types through external projects and plugins. For example, nar (native archive), swf, and swc are packaging types for the projects that produce Adobe Flash and Flex content. For such projects, we need a plugin that defines custom packaging and a repository that contains the plugin.

4. Conclusion

In this article, we looked at various packaging types available in Maven. Also, we got familiar with what these types represent and how they differ in their lifecycles. In the end, we also learned how to define a custom packaging type and customize the default build lifecycle.

All code examples on Baeldung are built using Maven. Be sure to check out our various Maven configurations over on GitHub.

The post Maven Packaging Types first appeared on Baeldung.

        

Defining Indexes in JPA


1. Introduction

In this tutorial, we'll discuss defining indexes using JPA's @Index annotation. Through examples, we'll learn how to define our first index using JPA and Hibernate. After that, we're going to modify the definition showing additional ways to customize the index.

2. @Index Annotation

Let's begin by making a quick recap. The database index is a data structure that improves the speed of data retrieval operations on a table at the cost of additional writes and storage space. Mostly, it's a copy of selected columns of data from a single table. We should create indexes to increase performance on our persistence layer.

JPA allows us to achieve that by defining indexes from our code using @Index. This annotation is interpreted by the schema generation process, creating artifacts automatically. Note that it's not necessary to specify any index for our entities.

Now, let's take a look at the definition.

2.1. javax.persistence.Index

Index support was finally added in the JPA 2.1 specification via javax.persistence.Index. This annotation lets us define an index for our table and customize it accordingly:

@Target({})
@Retention(RUNTIME)
public @interface Index {
    String name() default "";
    String columnList();
    boolean unique() default false;
}

As we can see, only the columnList attribute is mandatory, which we have to define. We'll take a better look at each of the parameters later, going through examples.

2.2. JPA vs. Hibernate

We know that JPA is only a specification. To work correctly, we also need to specify a persistence provider. By default, Hibernate is the JPA implementation delivered by Spring. You can read more about it here.

We should remember that index support was added to JPA quite late. Before that, many ORM frameworks supported indexes by introducing their own custom implementations, which might work differently. The Hibernate Framework also did this and introduced the org.hibernate.annotations.Index annotation. While working with Hibernate, we must be careful: that annotation has been deprecated since JPA 2.1 support was added, and we should use the JPA one instead.

Now that we have some technical background, we can go through examples and define our first index in JPA.

3. Defining the @Index

In this section, we're implementing our index. Later, we'll try to modify it, presenting different customization possibilities.

Before we start, we need to initialize our project properly and define a model.

Let's implement a Student entity:

@Entity
@Table
public class Student implements Serializable {
    @Id
    @GeneratedValue
    private Long id;
    private String firstName;
    private String lastName;
    // getters, setters
}

When we have our model, let's implement the first index. All we have to do is add an @Index annotation. We do that in the @Table annotation under the indexes attribute. Let's remember to specify the name of the column:

@Table(indexes = @Index(columnList = "firstName"))

We've declared the very first index using the firstName column. When we execute the schema creation process, we can validate it:

[main] DEBUG org.hibernate.SQL -
  create index IDX2gdkcjo83j0c2svhvceabnnoh on Student (firstName)

Now, it's time to modify our declaration showing additional features.

3.1. @Index Name

As we can see, our index must have a name. By default, if we don't specify one, it's a provider-generated value. When we want a custom label, we simply add the name attribute:

@Index(name = "fn_index", columnList = "firstName")

This variant creates an index with a user-defined name:

[main] DEBUG org.hibernate.SQL -
  create index fn_index on Student (firstName)

Moreover, we can create our index in a different schema by prefixing the schema's name to the index name:

@Index(name = "schema2.fn_index", columnList = "firstName")

3.2. Multicolumn @Index

Now, let's take a closer look at the columnList syntax:

column ::= index_column [,index_column]*
index_column ::= column_name [ASC | DESC]

As we already know, we can specify the column names to be included in the index. Of course, we can specify multiple columns for a single index. We do that by separating the names with commas:

@Index(name = "mulitIndex1", columnList = "firstName, lastName")
@Index(name = "mulitIndex2", columnList = "lastName, firstName")
[main] DEBUG org.hibernate.SQL -
  create index mulitIndex1 on Student (firstName, lastName)
   
[main] DEBUG org.hibernate.SQL -
  create index mulitIndex2 on Student (lastName, firstName)

Note that the persistence provider must observe the specified ordering of the columns. In our example, indexes are slightly different, even if they specify the same set of columns.

3.3. @Index Order

As we saw in the syntax in the previous section, we can also specify ASC (ascending) and DESC (descending) values after the column_name. We use them to set the sort order of the values in the indexed column:

@Index(name = "mulitSortIndex", columnList = "firstName, lastName DESC")
[main] DEBUG org.hibernate.SQL -
  create index mulitSortIndex on Student (firstName, lastName desc)

We can specify the order for each column. If we don't, the ascending order is assumed.

3.4. @Index Uniqueness

The last optional parameter is a unique attribute, which defines whether the index is unique. A unique index ensures that the indexed fields don't store duplicate values. By default, it's false. If we want to change it, we can declare:

@Index(name = "uniqueIndex", columnList = "firstName", unique = true)
[main] DEBUG org.hibernate.SQL -
  alter table Student add constraint uniqueIndex unique (firstName)

When we create an index in this way, we add a uniqueness constraint on our column, similar to what the unique attribute on the @Column annotation does. @Index has an advantage over @Column because it can declare a multi-column unique constraint:

@Index(name = "uniqueMulitIndex", columnList = "firstName, lastName", unique = true)

3.5. Multiple @Index on a Single Entity

So far, we've implemented different variants of the index. Of course, we're not limited to declaring a single index on an entity. Let's collect our declarations and specify all the indexes at once. We do that by repeating the @Index annotation inside braces, separated by commas:

@Entity
@Table(indexes = {
  @Index(columnList = "firstName"),
  @Index(name = "fn_index", columnList = "firstName"),
  @Index(name = "mulitIndex1", columnList = "firstName, lastName"),
  @Index(name = "mulitIndex2", columnList = "lastName, firstName"),
  @Index(name = "mulitSortIndex", columnList = "firstName, lastName DESC"),
  @Index(name = "uniqueIndex", columnList = "firstName", unique = true),
  @Index(name = "uniqueMulitIndex", columnList = "firstName, lastName", unique = true)
})
public class Student implements Serializable

What is more, we can also create multiple indexes for the same set of columns.

3.6. Primary Key

When we talk about indexes, we have to stop for a while at primary keys. As we know, every entity managed by the EntityManager must specify an identifier that is mapped into the primary key.

Generally, the primary key is a specific type of unique index. It's worth adding that we don't have to declare the definition of this key in the way presented before. Everything is done automatically by the @Id annotation.

3.7. Non-entity @Index

After we've learned different ways to implement indexes, we should mention that @Table isn't the only place to specify them. In the same way, we can declare indexes in @SecondaryTable, @CollectionTable, @JoinTable, @TableGenerator annotations. Those examples aren't covered in this article. For more details, please check the javax.persistence JavaDoc.
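As a quick illustration only, here's a sketch of how an index could be declared on an element collection table, assuming our Student entity had a collection of nicknames:

@ElementCollection
@CollectionTable(
  name = "student_nickname",
  indexes = @Index(name = "nickname_index", columnList = "nickname"))
@Column(name = "nickname")
private List<String> nicknames;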

4. Conclusion

In this article, we discussed declaring indexes using JPA. We started by reviewing the general knowledge about them. Later we implemented our first index and, through examples, learned how to customize it by changing name, included columns, order, and uniqueness. In the end, we talked about primary keys and additional ways and places where we can declare them.

As always, the examples from the article are available over on GitHub.

The post Defining Indexes in JPA first appeared on Baeldung.

        

Testing Kafka and Spring Boot


1. Overview

Apache Kafka is a powerful, distributed, fault-tolerant stream processing system. In a previous tutorial, we learned how to work with Spring and Kafka.

In this tutorial, we'll build on the previous one and learn how to write reliable, self-contained integration tests that don't rely on an external Kafka server running.

First, we'll start by looking at how to use and configure an embedded instance of Kafka. Then we'll see how we can make use of the popular Testcontainers framework from our tests.

2. Dependencies

Of course, we'll need to add the standard spring-kafka dependency to our pom.xml:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.6.3.RELEASE</version>
</dependency>

Then we'll need two more dependencies specifically for our tests. First, we'll add the spring-kafka-test artifact:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka-test</artifactId>
    <version>2.6.3.RELEASE</version>
    <scope>test</scope>
</dependency>

And finally, we'll add the Testcontainers Kafka dependency, which is also available over on Maven Central:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>kafka</artifactId>
    <version>1.15.0</version>
    <scope>test</scope>
</dependency>

Now that we have all the necessary dependencies configured, we can write a simple Spring Boot application using Kafka.

3. A Simple Kafka Producer-Consumer Application

Throughout this tutorial, the focus of our tests will be a simple producer-consumer Spring Boot Kafka application.

Let's start by defining our application entry point:

@SpringBootApplication
@EnableAutoConfiguration
public class KafkaProducerConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(KafkaProducerConsumerApplication.class, args);
    }
}

As we can see, this is a standard Spring Boot application. Where possible, we want to make use of default configuration values. With this in mind, we make use of the @EnableAutoConfiguration annotation to auto-config our application.

3.1. Producer Setup

Next, let's consider a producer bean that we'll use to send messages to a given Kafka topic:

@Component
public class KafkaProducer {
    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducer.class);
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;
    public void send(String topic, String payload) {
        LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
        kafkaTemplate.send(topic, payload);
    }
}

Our KafkaProducer bean defined above is merely a wrapper around the KafkaTemplate class. This class provides high-level thread-safe operations, such as sending data to the provided topic, which is exactly what we do in our send method.

3.2. Consumer Setup

Likewise, we'll now define a simple consumer bean that listens to a Kafka topic and receives messages:

@Component
public class KafkaConsumer {
    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConsumer.class);
    private CountDownLatch latch = new CountDownLatch(1);
    private String payload = null;
    @KafkaListener(topics = "${test.topic}")
    public void receive(ConsumerRecord<?, ?> consumerRecord) {
        LOGGER.info("received payload='{}'", consumerRecord.toString());
        setPayload(consumerRecord.toString());
        latch.countDown();
    }
    public CountDownLatch getLatch() {
        return latch;
    }
    public String getPayload() {
        return payload;
    }
}

Our simple consumer uses the @KafkaListener annotation on the receive method to listen to messages on a given topic. We'll see later how we configure the test.topic from our tests.

Furthermore, the receive method stores the message content in our bean and decrements the count of the latch variable. This variable is a simple thread-safe counter field that we'll use later from our tests to ensure we successfully received a message.

Now that we have our simple Spring Boot Kafka application implemented, let's see how we can write integration tests.

4. A Word on Testing

In general, when writing clean integration tests, we shouldn't depend on external services that we might not be able to control or might suddenly stop working. This could have adverse effects on our test results.

Similarly, if we're dependent on an external service, in this case, a running Kafka broker, we likely won't be able to set it up, control it and tear it down in the way we want from our tests.

4.1. Application Properties

We're going to use a very light set of application configuration properties from our tests. We'll define these properties in our src/test/resources/application.yml file:

spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      group-id: baeldung
test:
  topic: embedded-test-topic

This is the minimum set of properties that we need when working with an embedded instance of Kafka or a local broker.

Most of these are self-explanatory, but the one of particular importance is the consumer property auto-offset-reset: earliest. This property ensures that our consumer group gets the messages we send, because the container might start after the sends have completed.

Additionally, we configure a topic property with the value embedded-test-topic, which is the topic we'll use from our tests.

5. Testing Using Embedded Kafka

In this section, we'll take a look at how to use an in-memory Kafka instance to run our tests against. This is also known as Embedded Kafka.

The dependency spring-kafka-test we added previously contains some useful utilities to assist with testing our application. Most notably, it contains the EmbeddedKafkaBroker class.

With that in mind, let's go ahead and write our first integration test:

@SpringBootTest
@DirtiesContext
@EmbeddedKafka(partitions = 1, brokerProperties = { "listeners=PLAINTEXT://localhost:9092", "port=9092" })
class EmbeddedKafkaIntegrationTest {
    @Autowired
    private KafkaConsumer consumer;
    @Autowired
    private KafkaProducer producer;
    @Value("${test.topic}")
    private String topic;
    @Test
    public void givenEmbeddedKafkaBroker_whenSendingtoSimpleProducer_thenMessageReceived() 
      throws Exception {
        producer.send(topic, "Sending with own simple KafkaProducer");
        consumer.getLatch().await(10000, TimeUnit.MILLISECONDS);
        
        assertThat(consumer.getLatch().getCount(), equalTo(0L));
        assertThat(consumer.getPayload(), containsString("embedded-test-topic"));
    }
}

Let's walk through the key parts of our test. First, we start by decorating our test class with two pretty standard Spring annotations:

  • The @SpringBootTest annotation will ensure that our test bootstraps the Spring application context
  • We also use the @DirtiesContext annotation, which will make sure this context is cleaned and reset between different tests

Here comes the crucial part: we use the @EmbeddedKafka annotation to inject an instance of an EmbeddedKafkaBroker into our tests. Moreover, there are several properties available that we can use to configure the embedded Kafka node:

  • partitions – this is the number of partitions used per topic. To keep things nice and simple, we only want one to be used from our tests
  • brokerProperties – additional properties for the Kafka broker. Again we keep things simple and specify a plain text listener and a port number

Next, we auto-wire our consumer and producer classes and configure a topic to use the value from our application.yml.

For the final piece of the jigsaw, we simply send a message to our test topic and verify that the message has been received and contains the name of our test topic.

When we run our test, we'll see amongst the verbose Spring output:

...
12:45:35.099 [main] INFO  c.b.kafka.embedded.KafkaProducer -
  sending payload='Sending with our own simple KafkaProducer' to topic='embedded-test-topic'
...
12:45:35.103 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
  INFO  c.b.kafka.embedded.KafkaConsumer - received payload=
  'ConsumerRecord(topic = embedded-test-topic, partition = 0, leaderEpoch = 0, offset = 1,
  CreateTime = 1605267935099, serialized key size = -1, 
  serialized value size = 41, headers = RecordHeaders(headers = [], isReadOnly = false),
  key = null, value = Sending with our own simple KafkaProducer)'

This confirms that our test is working properly. Awesome! We now have a way to write self-contained, independent integration tests using an in-memory Kafka broker.

6. Testing Kafka With TestContainers

Sometimes we might see small differences between a real external service and an embedded in-memory instance of a service that has been specifically provided for testing purposes. Although unlikely, it could also be that the port used by our test is already occupied, causing a failure.

With that in mind, in this section, we'll see a variation on our previous approach to testing using the Testcontainers framework. We'll see how to instantiate and manage an external Apache Kafka broker hosted inside a Docker container from our integration test.

Let's define another integration test which will be quite similar to the one we saw in the previous section:

@RunWith(SpringRunner.class)
@Import(com.baeldung.kafka.testcontainers.KafkaTestContainersIntegrationTest.KafkaTestContainersConfiguration.class)
@SpringBootTest(classes = KafkaProducerConsumerApplication.class)
@DirtiesContext
public class KafkaTestContainersIntegrationTest {
    @ClassRule
    public static KafkaContainer kafka = 
      new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.4.3"));
    @Autowired
    private KafkaConsumer consumer;
    @Autowired
    private KafkaProducer producer;
    @Value("${test.topic}")
    private String topic;
    @Test
    public void givenKafkaDockerContainer_whenSendingtoSimpleProducer_thenMessageReceived() 
      throws Exception {
        producer.send(topic, "Sending with own controller");
        consumer.getLatch().await(10000, TimeUnit.MILLISECONDS);
        
        assertThat(consumer.getLatch().getCount(), equalTo(0L));
        assertThat(consumer.getPayload(), containsString("embedded-test-topic"));
    }
}

Let's take a look at the differences this time around. We're declaring the kafka field, which is a standard JUnit @ClassRule. This field is an instance of the KafkaContainer class that will prepare and manage the lifecycle of our container running Kafka.

To avoid port clashes, Testcontainers allocates a port number dynamically when our docker container starts. For this reason, we provide a custom consumer and producer factory configuration using the class KafkaTestContainersConfiguration:

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "baeldung");
    // more standard configuration
    return props;
}
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
    // more standard configuration
    return new DefaultKafkaProducerFactory<>(configProps);
}
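The rest of that configuration class follows the usual Spring Kafka pattern. Roughly, and only as a sketch, it wires the consumer properties into a listener container factory and exposes a KafkaTemplate built on the producer factory:

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    // reuse the dynamically-configured consumer properties from above
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}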

We then reference this configuration via the @Import annotation at the beginning of our test.

The reason for this is that we need a way to inject the server address into our application, which as previously mentioned, is generated dynamically. We achieve this by calling the getBootstrapServers() method, which will return the bootstrap server location:

bootstrap.servers = [PLAINTEXT://localhost:32789]

Now when we run our test, we should see that Testcontainers does several things:

  • Checks our local Docker setup
  • Pulls the confluentinc/cp-kafka:5.4.3 docker image if necessary
  • Starts a new container and waits for it to be ready
  • Finally, shuts down and deletes the container after our test finishes

Again, this is confirmed by inspecting the test output:

13:33:10.396 [main] INFO  🐳 [confluentinc/cp-kafka:5.4.3]
  - Creating container for image: confluentinc/cp-kafka:5.4.3
13:33:10.454 [main] INFO  🐳 [confluentinc/cp-kafka:5.4.3]
  - Starting container with ID: b22b752cee2e9e9e6ade38e46d0c6d881ad941d17223bda073afe4d2fe0559c3
13:33:10.785 [main] INFO  🐳 [confluentinc/cp-kafka:5.4.3]
  - Container confluentinc/cp-kafka:5.4.3 is starting: b22b752cee2e9e9e6ade38e46d0c6d881ad941d17223bda073afe4d2fe0559c3

Presto! A working integration test using a Kafka docker container.

7. Conclusion

In this article, we've learned about a couple of approaches for testing Kafka applications with Spring Boot. In the first approach, we saw how to configure and use a local in-memory Kafka broker.

Then we saw how to use Testcontainers to set up an external Kafka broker running inside a docker container from our tests.

As always, the full source code of the article is available over on GitHub.

The post Testing Kafka and Spring Boot first appeared on Baeldung.

        

Reusing Docker Layers with Spring Boot


1. Introduction

Docker is the de facto standard for creating self-contained applications. From version 2.3.0, Spring Boot includes several enhancements to help us create efficient Docker Images. Thus, it allows the decomposition of the application into different layers.

In other words, the source code resides in its own layer. Therefore, it can be independently rebuilt, improving efficiency and start-up time. In this tutorial, we'll see how to exploit the new capabilities of Spring Boot to reuse Docker layers.

2. Layered Jars in Docker

Docker containers consist of a base image and additional layers. Once the layers are built, they'll remain cached, so subsequent builds will be much faster.

Changes in the lower-level layers also rebuild the upper-level ones. Thus, the infrequently changing layers should remain at the bottom, and the frequently changing ones should be placed on top.

In the same way, Spring Boot allows mapping the content of the artifact into layers. In the default mapping, the application sits in its own layer, so when we modify the source code, only that layer is rebuilt. The loader and the dependencies remain cached, reducing Docker image creation and startup time. Let's see how to do it with Spring Boot!

3. Creating Efficient Docker Images with Spring Boot

In the traditional way of building Docker images, Spring Boot uses the fat jar approach. As a result, a single artifact embeds all the dependencies and the application source code. So, any change in our source code forces the rebuilding of the entire layer.

3.1. Layers Configuration with Spring Boot

Spring Boot version 2.3.0 introduces two new features to improve the Docker image generation:

  • Buildpack support provides the Java runtime for the application, so it's now possible to skip the Dockerfile and build the Docker image automatically
  • Layered jars help us to get the most out of the Docker layer generation

In this tutorial, we'll extend the layered jar approach.

Initially, we'll set up the layered jar in Maven. When packaging the artifact, we'll generate the layers. Let's inspect the jar file:

jar tf target/spring-boot-docker-0.0.1-SNAPSHOT.jar

As we can see, a new layers.idx file is created in the BOOT-INF folder inside the fat jar. It maps dependencies, resources, and application source code to independent layers:

BOOT-INF/layers.idx

Likewise, the content of the file breaks down the different layers stored:

- "dependencies":
  - "BOOT-INF/lib/"
- "spring-boot-loader":
  - "org/"
- "snapshot-dependencies":
- "application":
  - "BOOT-INF/classes/"
  - "BOOT-INF/classpath.idx"
  - "BOOT-INF/layers.idx"
  - "META-INF/"

3.2. Interacting with Layers

Let's list the layers inside the artifact:

java -Djarmode=layertools -jar target/docker-spring-boot-0.0.1.jar list

The result provides a simplified view of the content of the layers.idx file:

dependencies
spring-boot-loader
snapshot-dependencies
application

We can also extract the layers into folders:

java -Djarmode=layertools -jar target/docker-spring-boot-0.0.1.jar extract

Then, we can reuse the folders inside the Dockerfile as we'll see in the next section:

$ ls
application/
snapshot-dependencies/
dependencies/
spring-boot-loader/

3.3. Dockerfile Configuration

To get the most out of the Docker capabilities, we need to add the layers to our image.

First, let's add the fat jar file to the base image:

FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application.jar

Second, let's extract the layers of the artifact:

RUN java -Djarmode=layertools -jar application.jar extract

Finally, let's copy the extracted folders to add the corresponding Docker layers:

FROM adoptopenjdk:11-jre-hotspot
COPY --from=builder dependencies/ ./
COPY --from=builder snapshot-dependencies/ ./
COPY --from=builder spring-boot-loader/ ./
COPY --from=builder application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]

With this configuration, when we change our source code, we'll only rebuild the application layer. The rest will remain cached.

4. Custom Layers

It seems everything is working like a charm. But if we look carefully, the dependency layer is not shared between our builds: all dependencies end up in a single layer, even our own internal ones. Therefore, if we change a class in an internal library, we'll rebuild the whole dependency layer again.

4.1. Custom Layers Configuration with Spring Boot

In Spring Boot, it's possible to tune custom layers through a separate configuration file:

<layers xmlns="http://www.springframework.org/schema/boot/layers"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/boot/layers
                     https://www.springframework.org/schema/boot/layers/layers-2.3.xsd">
    <application>
        <into layer="spring-boot-loader">
            <include>org/springframework/boot/loader/**</include>
        </into>
        <into layer="application" />
    </application>
    <dependencies>
        <into layer="snapshot-dependencies">
            <include>*:*:*SNAPSHOT</include>
        </into>
        <into layer="dependencies" />
    </dependencies>
    <layerOrder>
        <layer>dependencies</layer>
        <layer>spring-boot-loader</layer>
        <layer>snapshot-dependencies</layer>
        <layer>application</layer>
    </layerOrder>
</layers>

As we can see, we're mapping and ordering the dependencies and resources into layers. Furthermore, we can add as many custom layers as we want.

Let's name our file layers.xml. Then, in Maven, we can configure this file to customize the layers:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <layers>
            <enabled>true</enabled>
            <configuration>${project.basedir}/src/layers.xml</configuration>
        </layers>
    </configuration>
</plugin>

If we package the artifact, the result will be similar to the default behavior.

4.2. Adding New Layers

Let's create an internal dependency adding our application classes:

<into layer="internal-dependencies">
    <include>com.baeldung.docker:*:*</include>
</into>

In addition, we'll order the new layer:

<layerOrder>
    <layer>internal-dependencies</layer>
</layerOrder>

As a result, if we list the layers inside the fat jar, the new internal dependency appears:

dependencies
spring-boot-loader
internal-dependencies
snapshot-dependencies
application

4.3. Dockerfile Configuration

Once extracted, we can add the new internal layer to our Docker image:

COPY --from=builder internal-dependencies/ ./

So, if we generate the image, we'll see how Docker builds the internal dependency as a new layer:

$ mvn package
$ docker build -f src/main/docker/Dockerfile . --tag spring-docker-demo
....
Step 8/11 : COPY --from=builder internal-dependencies/ ./
 ---> 0e138e074118
.....

After that, we can check in the history the composition of layers in the Docker image:

$ docker history --format "{{.ID}} {{.CreatedBy}} {{.Size}}" spring-docker-demo
c0d77f6af917 /bin/sh -c #(nop)  ENTRYPOINT ["java" "org.s… 0B
762598a32eb7 /bin/sh -c #(nop) COPY dir:a87b8823d5125bcc4… 7.42kB
80a00930350f /bin/sh -c #(nop) COPY dir:3875f37b8a0ed7494… 0B
0e138e074118 /bin/sh -c #(nop) COPY dir:db6f791338cb4f209… 2.35kB
e079ad66e67b /bin/sh -c #(nop) COPY dir:92a8a991992e9a488… 235kB
77a9401bd813 /bin/sh -c #(nop) COPY dir:f0bcb2a510eef53a7… 16.4MB
2eb37d403188 /bin/sh -c #(nop)  ENV JAVA_HOME=/opt/java/o… 0B

As we can see, the layer now includes the internal dependencies of the project.

5. Conclusion

In this tutorial, we showed how to generate efficient Docker images. In short, we used the new Spring Boot features to create layered jars. For simple projects, we can use the default configuration. We also demonstrated a more advanced configuration to reuse the layers.

As always, the code is available over on GitHub.

The post Reusing Docker Layers with Spring Boot first appeared on Baeldung.

        

Creating a Generic Array in Java


1. Introduction

We may wish to use arrays as part of classes or functions that support generics. Due to the way Java handles generics, this can be difficult.

In this tutorial, we'll understand the challenges of using generics with arrays. Then, we'll create an example of a generic array.

We'll also look at where the Java API has solved a similar problem.

2. Considerations When Using Generic Arrays

An important difference between arrays and generics is how they enforce type checking. Specifically, arrays store and check type information at runtime. Generics, however, check for type errors at compile-time and don't have type information at runtime.

Java's syntax suggests we might be able to create a new generic array:

T[] elements = new T[size];

But, if we attempted this, we'd get a compile error.

To understand why, let's consider the following:

public <T> T[] getArray(int size) {
    T[] genericArray = new T[size]; // suppose this is allowed
    return genericArray;
}

As an unbound generic type T resolves to Object, our method at runtime will be:

public Object[] getArray(int size) {
    Object[] genericArray = new Object[size];
    return genericArray;
}

Then, if we call our method and store the result in a String array:

String[] myArray = getArray(5);

The code will compile fine but fail at runtime with a ClassCastException. This is because we've just assigned an Object[] to a String[] reference. Specifically, an implicit cast by the compiler would fail to convert Object[] to our required type String[].

Although we can't initialize generic arrays directly, it is still possible to achieve the equivalent operation if the precise type information is provided by the calling code.

3. Creating a Generic Array

For our example, let's consider a bounded stack data structure MyStack, where the capacity is fixed to a certain size. Also, as we'd like the stack to work with any type, a reasonable implementation choice would be a generic array.

First, let's create a field to store the elements of our stack, which is a generic array of type E:

private E[] elements;

Second, let's add a constructor:

public MyStack(Class<E> clazz, int capacity) {
    elements = (E[]) Array.newInstance(clazz, capacity);
}

Notice how we use java.lang.reflect.Array#newInstance to initialize our generic array, which requires two parameters. The first parameter specifies the type of object inside the new array. The second parameter specifies how much space to create for the array. As the result of Array#newInstance is of type Object, we need to cast it to E[] to create our generic array.

We should also note the convention of naming a type parameter clazz rather than class, which is a reserved word in Java.
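To make this concrete, here's a brief usage sketch from the calling code's point of view, assuming MyStack also exposes the usual push and pop methods:

MyStack<String> stack = new MyStack<>(String.class, 2);
stack.push("hello");
stack.push("world");

// no cast needed: the backing array was created as a String[]
String top = stack.pop();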

4. Considering ArrayList

4.1. Using ArrayList in Place of an Array

It's often easier to use a generic ArrayList in place of a generic array. Let's see how we can change MyStack to use an ArrayList.

First, let's create a field to store our elements:

private List<E> elements;

Secondly, in our stack constructor, we can initialize the ArrayList with an initial capacity:

elements = new ArrayList<>(capacity);

It makes our class simpler, as we don't have to use reflection. Also, we aren't required to pass in a class literal when creating our stack. Finally, as we can set the initial capacity of an ArrayList, we can get the same benefits as an array.

Therefore, we only need to construct arrays of generics in rare situations or when we're interfacing with some external library that requires an array.

4.2. ArrayList Implementation

Interestingly, ArrayList itself is implemented using generic arrays. Let's peek inside ArrayList to see how.

First, let's see the list elements field:

transient Object[] elementData;

Notice ArrayList uses Object as the element type. As our generic type is not known until runtime, Object is used as the superclass of any type.

It's worth noting that nearly all the operations in ArrayList can use this generic array as they don't need to provide a strongly typed array to the outside world, except for one method – toArray!

5. Building an Array from a Collection

5.1. LinkedList Example

Let's look at using generic arrays in the Java Collections API, where we'll build a new array from a collection.

First, let's create a new LinkedList with a type argument String and add items to it:

List<String> items = new LinkedList<>();
items.add("first item");
items.add("second item");

Second, let's build an array of the items we've just added:

String[] itemsAsArray = items.toArray(new String[0]);

To build our array, the List.toArray method requires an input array. It uses this array purely to get the type information to create a return array of the right type.

In our example above, we've used new String[0] as our input array to build the resulting String array.

5.2. LinkedList.toArray Implementation

Let's take a peek inside LinkedList.toArray, to see how it's implemented in the Java JDK.

First, let's look at the method signature:

public <T> T[] toArray(T[] a)

Second, let's see how a new array is created when required:

a = (T[])java.lang.reflect.Array.newInstance(a.getClass().getComponentType(), size);

Notice how it makes use of Array#newInstance to build a new array, like in our stack example earlier. Also, notice how parameter a is used to provide a type to Array#newInstance. Finally, the result from Array#newInstance is cast to T[] to create a generic array.

6. Conclusion

In this article, we first looked at differences between arrays and generics, followed by an example of creating a generic array. Then, we showed how using an ArrayList may be easier than using a generic array. Finally, we also looked at the use of a generic array in the Collections API.

As always, the example code is available over on GitHub.

The post Creating a Generic Array in Java first appeared on Baeldung.

        

Explanation of ClassCastException in Java


1. Overview

In this short tutorial, we'll focus on ClassCastException, a common Java exception.

ClassCastException is an unchecked exception that signals the code has attempted to cast a reference to a type of which it's not a subtype.

Let's look at some scenarios that lead to this exception being thrown and how we can avoid them.

2. Explicit Casting

For our next experiments, let's consider the following classes:

public interface Animal {
    String getName();
}
public class Mammal implements Animal {
    @Override
    public String getName() {
        return "Mammal";
    }
}
public class Amphibian implements Animal {
    @Override
    public String getName() {
        return "Amphibian";
    }
}
public class Frog extends Amphibian {
    @Override
    public String getName() {
        return super.getName() + ": Frog";
    }
}

2.1. Casting Classes

By far, the most common scenario for encountering a ClassCastException is explicitly casting to an incompatible type.

For example, let's try to cast a Frog to a Mammal:

Frog frog = new Frog();
Mammal mammal = (Mammal) frog;

We might expect a ClassCastException here, but in fact, we get a compilation error: “incompatible types: Frog cannot be converted to Mammal”. However, the situation changes when we use the common super-type:

Animal animal = new Frog();
Mammal mammal = (Mammal) animal;

Now, we get a ClassCastException from the second line:

Exception in thread "main" java.lang.ClassCastException: class Frog cannot be cast to class Mammal (Frog and Mammal are in unnamed module of loader 'app') 
at Main.main(Main.java:9)

The checked downcast to Mammal fails at runtime because the object is actually a Frog, which is not a subtype of Mammal. In this case, the compiler cannot help us, as the Animal variable may hold a reference to a compatible type.

It's interesting to note that the compilation error only occurs when we attempt to cast to an unequivocally incompatible class. The same is not true for interfaces because Java supports multiple interface inheritance, but only single inheritance for classes. Thus, the compiler can't determine if the reference type implements a specific interface or not. Let's exemplify:

Animal animal = new Frog();
Serializable serial = (Serializable) animal;

We get a ClassCastException on the second line instead of a compilation error:

Exception in thread "main" java.lang.ClassCastException: class Frog cannot be cast to class java.io.Serializable (Frog is in unnamed module of loader 'app'; java.io.Serializable is in module java.base of loader 'bootstrap') 
at Main.main(Main.java:11)

2.2. Casting Arrays

We've seen how classes handle casting, now let's look at arrays. Array casting works the same as class casting. However, we might get confused by autoboxing and type-promotion, or lack thereof.

Thus, let's see what happens for primitive arrays when we attempt the following cast:

Object primitives = new int[1];
Integer[] integers = (Integer[]) primitives;

The second line throws a ClassCastException as autoboxing doesn't work for arrays.

How about type promotion? Let's try the following:

Object primitives = new int[1];
long[] longs = (long[]) primitives;

We also get a ClassCastException because type promotion doesn't work for entire arrays.

2.3. Safe Casting

In the case of explicit casting, it is highly recommended to check the compatibility of the types with the instanceof operator before attempting the cast.

Let's look at a safe cast example:

Mammal mammal;
if (animal instanceof Mammal) {
    mammal = (Mammal) animal;
} else {
    // handle exceptional case
}

3. Heap Pollution

As per the Java Specification: “Heap pollution can only occur if the program performed some operation involving a raw type that would give rise to a compile-time unchecked warning”.

For our experiment, let's consider the following generic class:

public static class Box<T> {
    private T content;
    public T getContent() {
        return content;
    }
    public void setContent(T content) {
        this.content = content;
    }
}

We will now attempt to pollute the heap as follows:

Box<Long> originalBox = new Box<>();
Box raw = originalBox;
raw.setContent(2.5);
Box<Long> bound = (Box<Long>) raw;
Long content = bound.getContent();

The last line will throw a ClassCastException as it cannot transform a Double reference to Long.

4. Generic Types

When using generics in Java, we must be wary of type erasure, which can lead to ClassCastException as well in some conditions.

Let's consider the following generic method:

public static <T> T convertInstanceOfObject(Object o) {
    try {
        return (T) o;
    } catch (ClassCastException e) {
        return null;
    }
}

And now let's call it:

String shouldBeNull = convertInstanceOfObject(123);

At first glance, we can reasonably expect a null reference returned from the catch block. However, at runtime, due to type erasure, the cast inside the method becomes a cast to Object instead of String, so it succeeds. Instead, the compiler inserts an implicit cast to String at the assignment of the result, and that's where the ClassCastException is thrown, outside our catch block.
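One way to make the conversion fail inside the method, where our catch block can actually handle it, is to pass the target type explicitly and use Class#cast, which performs a runtime-checked cast. This is only a sketch of that alternative:

public static <T> T convertInstanceOfObject(Object o, Class<T> clazz) {
    try {
        // clazz.cast() throws ClassCastException right here if o is not a T
        return clazz.cast(o);
    } catch (ClassCastException e) {
        return null;
    }
}

String shouldBeNull = convertInstanceOfObject(123, String.class); // now really returns null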

5. Conclusion

In this article, we have looked at a series of common scenarios for inappropriate casting.

Whether implicit or explicit, casting Java references to another type can lead to ClassCastException unless the target type is the same or a descendent of the actual type.

The code used in this article can be found over on GitHub.

The post Explanation of ClassCastException in Java first appeared on Baeldung.

        

Performance Difference Between save() and saveAll() in Spring Data


1. Overview

In this quick tutorial, we'll learn about the performance difference between save() and saveAll() methods in Spring Data.

2. Application

In order to test the performance, we'll need a Spring application with an entity and a repository.

Let's create a book entity:

@Entity
public class Book {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;
    private String title;
    private String author;
    // constructors, standard getters and setters
}

In addition, let's create a repository for it:

public interface BookRepository extends JpaRepository<Book, Long> {
}

3. Performance

To test the performance, we'll save 10,000 books using both methods.

First, we'll use the save() method:

for(int i = 0; i < bookCount; i++) {
    bookRepository.save(new Book("Book " + i, "Author " + i));
}

Then, we'll create a list of books and use the saveAll() method to save all of them at once:

List<Book> bookList = new ArrayList<>();
for (int i = 0; i < bookCount; i++) {
    bookList.add(new Book("Book " + i, "Author " + i));
}
bookRepository.saveAll(bookList);

In our tests, we noticed that the first method took around 2 seconds, and the second one took approximately 0.3 seconds.

Furthermore, when we enabled JPA Batch Inserts, we observed a decrease of up to 10% in the performance of the save() method, and an increase of up to 60% on the saveAll() method.

4. Differences

Looking into the implementation of the two methods, we can see that saveAll() iterates over each element and uses the save() method in each iteration. This implies that there shouldn't be such a big performance difference.

Looking more closely, we observe that both methods are annotated with @Transactional.

Furthermore, the default transaction propagation type is REQUIRED, which means that, if not provided, a new transaction is created each time the methods are called.

In our case, each time we call the save() method, a new transaction is created, whereas when we call saveAll(), only one transaction is created, and it's reused later by save().

This overhead translates into the performance difference that we noticed earlier.

Finally, the overhead is even bigger when batching is enabled, because the batching is done at the transaction level.
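To confirm that the per-call transactions are indeed the dominant cost, we can wrap the save() loop in a single transaction ourselves, for example in a service method like the following sketch (the BookService class and saveBooks() method are made up for the example); with this in place, the loop's runtime should move much closer to that of saveAll():

@Service
public class BookService {

    @Autowired
    private BookRepository bookRepository;

    // every save() call below joins this one transaction instead of opening its own
    @Transactional
    public void saveBooks(List<Book> books) {
        books.forEach(bookRepository::save);
    }
}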

5. Conclusion

In this article, we've learned about the performance difference between the save() and saveAll() methods in Spring Data.

Ultimately, choosing whether to use one method over another can have a big performance impact on the application.

As always, the code for these examples is available over on GitHub.

The post Performance Difference Between save() and saveAll() in Spring Data first appeared on Baeldung.

        

The Capacity of an ArrayList vs the Size of an Array in Java


1. Overview

Java allows us to create arrays of fixed size or use collection classes to do a similar job.

In this tutorial, we're going to look at the difference between the capacity of an ArrayList and the size of an Array.

We'll also look at examples of when we should initialize ArrayList with a capacity and the benefits and disadvantages in terms of memory usage.

2. Example

To understand the differences, let's first try both options.

2.1. Size of an Array

In Java, it's mandatory to specify the size of an array when creating a new instance of it:

Integer[] array = new Integer[100]; 
System.out.println("Size of an array:" + array.length);

Here, we created an Integer array of size 100, which results in the following output:

Size of an array:100

2.2. Capacity of an ArrayList

Now, let's create an ArrayList with an initial capacity of 100:

List<Integer> list = new ArrayList<>(100);
System.out.println("Size of the list is :" + list.size());
Size of the list is :0

As no elements have been added yet, the size is zero.

Now, let's add an element to the list and check the size of it:

list.add(10);
System.out.println("Size of the list is :" + list.size());
Size of the list is :1
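The distinction matters in practice: the initial capacity only reserves room, it doesn't create elements, so accessing an index beyond the current size still fails. A quick illustration:

List<Integer> list = new ArrayList<>(100);
list.add(10);

list.get(0);  // returns 10
list.get(50); // throws IndexOutOfBoundsException: capacity is 100, but size is only 1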

3. Size in Arrays vs. ArrayList

Below are some major differences between the size of an array and the capacity of an ArrayList.

3.1. Modification of Size

Arrays are fixed size. Once we initialize the array with some int value as its size, it can't change. The size and capacity are equal to each other too.

ArrayList's size and capacity are not fixed. The logical size of the list changes based on the insertion and removal of elements in it. This is managed separately from its physical storage size. Also, when the threshold of the ArrayList capacity is reached, it increases its capacity to make room for more elements.

3.2. Memory Allocation

Array memory is allocated on creation. When we initialize an array, it allocates the memory according to the size and type of an array. It initializes all the elements with a null value for reference types and the default value for primitive types.

ArrayList changes the memory allocation as it grows. When we specify the capacity while initializing the ArrayList, it allocates enough memory to store objects up to that capacity. The logical size remains 0. When it is time to expand the capacity, a new, larger array is created, and the values are copied to it.

We should note that there's a special singleton 0-sized array for empty ArrayList objects, making them very cheap to create. It's also worth noting that ArrayList internally uses an array of Object references.

4. When to Initialize ArrayList with Capacity

We may expect to initialize the capacity of an ArrayList when we know its required size before we create it, but it's not usually necessary. However, there are a few reasons why this may be the best option.

4.1. Building a Large ArrayList

It is good to initialize a list with an initial capacity when we know that it will get large. This prevents some costly grow operations as we add elements.

Similarly, if the list is very large, the automatic grow operations may allocate more memory than necessary for the exact maximum size. This is because each grow operation increases the capacity by a proportion (roughly half) of the current capacity, so with large lists this can waste a significant amount of memory.
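
As a rough sketch (the expected size here is just an illustrative figure), pre-sizing the list means a single allocation instead of repeated grow-and-copy steps:

int expectedSize = 1_000_000; // hypothetical size, known before we build the list
List<Integer> results = new ArrayList<>(expectedSize);
for (int i = 0; i < expectedSize; i++) {
    results.add(i); // no intermediate resizing of the backing array
}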

4.2. Building Multiple Small ArrayLists

If we have a lot of small collections, then the automatic capacity of an ArrayList may waste a large percentage of memory. Once the first element is added, an ArrayList allocates space for 10 elements by default, but we may only store 2 or 3 of them. That means roughly 70% of the allocated slots are wasted, which might matter if we have a huge number of these lists.

Setting the capacity upfront can avoid this situation.
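
For example, if we know each list will only ever hold two or three entries, we can size it exactly:

List<String> tags = new ArrayList<>(3); // backing array holds exactly 3 slots
tags.add("java");
tags.add("arraylist");
tags.add("capacity");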

5. Avoiding Waste

We should note that ArrayList is a good solution for a flexibly-sized container of objects that supports random access. It consumes slightly more memory than an array but provides a richer set of operations.

In some use cases, especially around large collections of primitive values, the standard array may be faster and use less memory.

Similarly, for storing a variable number of elements that do not need to be accessed by index, LinkedList can sometimes be more performant. It doesn't reserve unused capacity, although each element does carry some per-node overhead.

6. Summary

In this short article, we saw the difference between the capacity of an ArrayList and the size of an array. We also looked at when we should initialize an ArrayList with a capacity, and its benefits with regard to memory usage and performance.

As always, the example code is available over on GitHub.

The post The Capacity of an ArrayList vs the Size of an Array in Java first appeared on Baeldung.

        

Java Weekly, Issue 360


1. Spring and Java

>> From Spring Boot to Quarkus [blog.frankel.ch]

The new kid on the block: a practical guide on how to migrate a typical Spring Boot app to Quarkus!

>> New language features since Java 8 to 15 [advancedweb.hu]

Java evolution for people in a hurry: an anthology of improvements and new features available in modern Java.

>> Getting Started with Spring Data Specifications [reflectoring.io]

Flexing data access with Specifications: a more maintainable approach for ad-hoc queries with a myriad of custom filters.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> On distributed systems and elliptic curve cryptography [martin.kleppmann.com]

Going all academic on distributed systems and elliptic curve cryptography: a rigorous yet practical take on both.

Also worth reading:

3. Musings

>> The No-Excuses Guide to Innovation with APIs [blog.scottlogic.com]

Reinventing digital services by thinking and acting more strategically instead of pursuing one-off solutions.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Information From Carl [dilbert.com]

>> Rewriting Shakespeare [dilbert.com]

>> Real Men Multitask [dilbert.com]

5. Pick of the Week

Datadog is back, with full visibility into an ever-growing number of solid, native integrations – both in and out of the Java ecosystem:

>> End-to-end visibility into performance and code efficiency with Datadog APM

Simply put, you can drill in starting with the browser all the way down to individual DB queries – with no sampling, which is a little bit crazy to think about 🙂

The post Java Weekly, Issue 360 first appeared on Baeldung.

        

What’s New in Java 15


1. Introduction

Java 15 reached general availability in September 2020 and is the next short-term release for the JDK platform. It builds on several features from earlier releases and also provides some new enhancements.

In this post, we'll look at some of the new features of Java 15, as well as other changes that are of interest to Java developers.

2. Records (JEP 384)

The record is a new type of class in Java that makes it easy to create immutable data objects.

Records were originally introduced in Java 14 as an early preview, and Java 15 aims to refine a few aspects before they become an official product feature.

Let's look at an example using current Java and how it could change with records.

2.1. Without Records

Prior to records, we would create an immutable data transfer object (DTO) as:

public class Person {
    private final String name;
    private final int age;
    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }
    public String getName() {
        return name;
    }
    public int getAge() {
        return age;
    }
}

Notice that there's a lot of code here to create an immutable object that really just holds state. All of our fields are explicitly defined using final, we have a single all-arguments constructor, and we have an accessor method for every field. In some cases, we might even declare the class itself as final to prevent any sub-classing.

In many cases, we would also go a step further and override the toString method to provide meaningful logging output. We would probably also want to override the equals and hashCode methods to avoid unexpected consequences when comparing two instances of these objects.

2.2. With Records

Using the new record class, we can define the same immutable data object in a much more compact way:

public record Person(String name, int age) {
}

A few things have happened here. First and foremost, the class definition has a new syntax that is specific for records. This header is where we provide the details about the fields inside the record.

Using this header, the compiler can infer the internal fields. This means we don't need to define specific member variables and accessors, as they're provided by default. We also don't have to provide a constructor.

Additionally, the compiler provides sensible implementations for the toString, equals, and hashCode methods.
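
As a quick sketch (remember that records are still a preview feature in Java 15, so --enable-preview is needed), the generated accessors, toString, and equals look like this:

Person person = new Person("John", 30);

System.out.println(person.name());                          // John
System.out.println(person.age());                           // 30
System.out.println(person);                                 // Person[name=John, age=30]
System.out.println(person.equals(new Person("John", 30)));  // true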

While records eliminate a lot of boilerplate code, they do allow us to override some of the default behaviors. For example, we could define a canonical constructor that does some validation:

public record Person(String name, int age) {
    public Person {
        if(age < 0) {
            throw new IllegalArgumentException("Age cannot be negative");
        }
    }
}

It's worth mentioning that records do have some restrictions. Among other things, they are always final, they cannot be declared abstract, and they can't use native methods.

3. Sealed Classes (JEP 360)

Currently, Java provides no fine-grained control over inheritance. Access modifiers such as public, protected, private, as well as the default package-private, provide very coarse-grained control.

To that end, the goal of sealed classes is to allow individual classes to declare which types may be used as sub-types. This also applies to interfaces and determining which types can implement them.

Sealed classes involve two new keywords — sealed and permits:

public abstract sealed class Person
    permits Employee, Manager {
 
    //...
}

In this example, we've declared an abstract class named Person. We've also specified that the only classes that can extend it are Employee and Manager. Extending the sealed class is done just as it is today in Java, using the extends keyword:

public final class Employee extends Person {
}
public non-sealed class Manager extends Person {
}

It's important to note that any class that extends a sealed class must itself be declared sealed, non-sealed, or final. This ensures the class hierarchy remains finite and known by the compiler.

This finite and exhaustive hierarchy is one of the great benefits of using sealed classes. Let's see an example of this in action:

if (person instanceof Employee) {
    return ((Employee) person).getEmployeeId();
} 
else if (person instanceof Manager) {
    return ((Manager) person).getSupervisorId();
}

Without a sealed class, the compiler can't reasonably determine that all possible sub-classes are covered with our if-else statements. Without an else clause at the end, the compiler would likely issue a warning indicating our logic doesn't cover every case.
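
Interfaces can be sealed in the same way. Here's a minimal sketch with hypothetical Vehicle, Car, and Truck types (like the rest of this feature, it requires --enable-preview in Java 15):

public sealed interface Vehicle permits Car, Truck {
    String registrationNumber();
}
public final class Car implements Vehicle {
    public String registrationNumber() {
        return "CAR-001";
    }
}
public final class Truck implements Vehicle {
    public String registrationNumber() {
        return "TRK-001";
    }
}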

4. Hidden Classes (JEP 371)

A new feature being introduced in Java 15 is known as hidden classes. While most developers won't find a direct benefit from them, anyone who works with dynamic bytecode or JVM languages will likely find them useful.

The goal of hidden classes is to allow the runtime creation of classes that are not discoverable. This means they cannot be linked by other classes, nor can they be discovered via reflection. Classes such as these typically have a short lifecycle, and thus, hidden classes are designed to be efficient with both loading and unloading.

Note that current versions of Java do allow for the creation of anonymous classes similar to hidden classes. However, they rely on the Unsafe API. Hidden classes have no such dependency.

5. Pattern Matching Type Checks (JEP 375)

The pattern matching feature was previewed in Java 14, and Java 15 aims to continue its preview status with no new enhancements.

As a review, the goal of this feature is to remove a lot of boilerplate code that typically comes with the instanceof operator:

if (person instanceof Employee) {
    Employee employee = (Employee) person;
    Date hireDate = employee.getHireDate();
    //...
}

This is a very common pattern in Java. Whenever we check if a variable is a certain type, we almost always follow it with a cast to that type.

The pattern matching feature simplifies this by introducing a new binding variable:

if (person instanceof Employee employee) {
    Date hireDate = employee.getHireDate();
    //...
}

Notice how we provide a new variable name, employee, as part of the type check. If the type check is true, then the JVM automatically casts the variable for us and assigns the result to the new binding variable.

We can also combine the new binding variable with conditional statements:

if (person instanceof Employee employee && employee.getYearsOfService() > 5) {
    //...
}

In future Java versions, the goal is to expand pattern matching to other language features such as switch statements.

6. Foreign Memory API (JEP 383)

Foreign memory access is already an incubating feature of Java 14. In Java 15, the goal is to continue its incubation status while adding several new features:

  • A new VarHandle API, to customize memory access var handles
  • Support for parallel processing of a memory segment using the Spliterator interface
  • Enhanced support for mapped memory segments
  • Ability to manipulate and dereference addresses coming from things like native calls

Foreign memory generally refers to memory that lives outside the managed JVM heap. Because of this, it's not subject to garbage collection and can typically handle incredibly large memory segments.

While these new APIs likely won't impact most developers directly, they will provide a lot of value to third-party libraries that deal with foreign memory. This includes distributed caches, denormalized document stores, large arbitrary byte buffers, memory-mapped files, and more.

7. Garbage Collectors

In Java 15, both ZGC (JEP 377) and Shenandoah (JEP 379) are no longer experimental. Both are supported configurations that teams can opt to use, while the G1 collector remains the default.

Both were previously available behind experimental feature flags. This approach allowed developers to test the new garbage collectors and submit feedback without downloading a separate JDK or add-on.

One note on Shenandoah: it isn't available from all vendor JDKs; most notably, Oracle JDK doesn't include it.

8. Other Changes

There are several other noteworthy changes in Java 15.

After multiple rounds of previews in Java 13 and 14, text blocks will be a fully supported product feature in Java 15.
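
For example, a text block lets us embed multi-line JSON without escape sequences or string concatenation:

String json = """
        {
            "name": "Baeldung",
            "type": "blog"
        }
        """;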

Helpful null pointer exceptions, originally delivered in Java 14 under JEP 358, are now enabled by default.
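
As a quick illustration (the exact message wording can vary, and the variable name only appears when local variable debug information is available), the improved messages pinpoint what was null:

String name = null;
int length = name.length();
// java.lang.NullPointerException:
//     Cannot invoke "String.length()" because "name" is null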

The legacy DatagramSocket API has been rewritten. This is a follow-on to a rewrite in Java 14 of the Socket API. While it won't impact most developers, it is interesting as it's a prerequisite for Project Loom.

Also of note, Java 15 includes cryptographic support for the Edwards-Curve Digital Signature Algorithm (EdDSA). EdDSA is a modern elliptic curve signature scheme that has several advantages over the existing signature schemes in the JDK.
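
As a brief sketch of the new algorithm in action, generating an Ed25519 key pair and signing a message goes through the standard JCA APIs:

byte[] message = "Hello, Baeldung".getBytes(StandardCharsets.UTF_8);

KeyPairGenerator kpg = KeyPairGenerator.getInstance("Ed25519");
KeyPair keyPair = kpg.generateKeyPair();

Signature signature = Signature.getInstance("Ed25519");
signature.initSign(keyPair.getPrivate());
signature.update(message);
byte[] signed = signature.sign();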

Finally, several things have been deprecated or removed in Java 15. Biased locking has been disabled and deprecated, the Solaris/SPARC ports have been removed, and RMI Activation has been deprecated for removal in a future release.

Of note, the Nashorn JavaScript engine, originally introduced in Java 8, has now been removed. With the recent introduction of GraalVM and other VM technologies, it's clear that Nashorn no longer has a place in the JDK ecosystem.

9. Conclusion

Java 15 builds on several features of past releases, including records, text blocks, new garbage collection algorithms, and more. It also adds sealed classes as a new preview feature, along with hidden classes.

As Java 15 is not a long-term-support release, we can expect support for it to end in March 2021. At that time, we can look forward to Java 16, followed soon after by a new long-term-support release, Java 17.

The post What's New in Java 15 first appeared on Baeldung.

        