
A Guide to Java 9 Modularity


1. Overview

Java 9 introduces a new level of abstraction above packages, formally known as the Java Platform Module System (JPMS), or “Modules” for short.

In this tutorial, we’ll go through the new system and discuss its various aspects.

We’ll also build a simple project to demonstrate all concepts we’ll be learning in this guide.

2. What’s a Module?

First of all, we need to understand what a module is before we can understand how to use them.

A Module is a group of closely related packages and resources along with a new module descriptor file.

In other words, it’s a “package of Java Packages” abstraction that allows us to make our code even more reusable.

2.1. Packages

The packages inside a module are identical to the Java packages we’ve been using since the inception of Java.

When we create a module, we organize the code internally in packages just like we previously did with any other project.

Aside from organizing our code, packages are used to determine what code is publicly accessible outside of the module. We’ll spend more time talking about this later in the article.

2.2. Resources

Each module is responsible for its resources, like media or configuration files.

Previously we’d put all resources into the root level of our project and manually manage which resources belonged to different parts of the application.

With modules, we can ship required images and XML files with the module that needs them, making our projects much easier to manage.

2.3. Module Descriptor

When we create a module, we include a descriptor file that defines several aspects of our new module:

  • Name – the name of our module
  • Dependencies – a list of other modules that this module depends on
  • Public Packages – a list of all packages we want accessible from outside the module
  • Services Offered – we can provide service implementations that can be consumed by other modules
  • Services Consumed – allows the current module to be a consumer of a service
  • Reflection Permissions – explicitly allows other classes to use reflection to access the private members of a package

The module naming rules are similar to how we name packages (dots are allowed, dashes are not). It’s very common to do either project-style (my.module) or Reverse-DNS (com.baeldung.mymodule) style names. We’ll use project-style in this guide.

We need to list all packages we want to be public because by default all packages are module private.

The same is true for reflection. By default, we cannot use reflection on classes we import from another module.

Later in the article, we’ll look at examples of how to use the module descriptor file.

2.4. Module Types

There are four types of modules in the new module system:

  • System Modules – These are the modules listed when we run the java --list-modules command shown below. They include the Java SE and JDK modules.
  • Application Modules – These modules are what we usually want to build when we decide to use Modules. They are named and defined in the compiled module-info.class file included in the assembled JAR.
  • Automatic Modules – We can include unofficial modules by adding existing JAR files to the module path. The name of the module will be derived from the name of the JAR. Automatic modules will have full read access to every other module loaded by the path.
  • Unnamed Module – When a class or JAR is loaded onto the classpath, but not the module path, it’s automatically added to the unnamed module. It’s a catch-all module to maintain backward compatibility with previously-written Java code.

2.5. Distribution

Modules can be distributed in one of two ways: as a JAR file or as an “exploded” compiled project. This, of course, is the same as any other Java project so it should come as no surprise.

We can create multi-module projects comprised of a “main application” and several library modules.

We have to be careful though because we can only have one module per JAR file.

When we set up our build file, we need to make sure to bundle each module in our project as a separate jar.

3. Default Modules

When we install Java 9, we can see that the JDK now has a new structure.

They have taken all the original packages and moved them into the new module system.

We can see what these modules are by typing into the command line:

java --list-modules

These modules are split into four major groups: java, javafx, jdk, and Oracle.

java modules are the implementation classes for the core SE Language Specification.

javafx modules are the FX UI libraries.

Anything needed by the JDK itself is kept in the jdk modules.

And finally, anything that is Oracle-specific is in the oracle modules.

4. Module Declarations

To set up a module, we need to put a special file at the root of our packages named module-info.java.

This file is known as the module descriptor and contains all of the data needed to build and use our new module.

We construct the module with a declaration whose body is either empty or made up of module directives:

module myModuleName {
    // all directives are optional
}

We start the module declaration with the module keyword, and we follow that with the name of the module.

The module will work with this declaration, but we’ll commonly need more information.

That is where the module directives come in.

4.1. Requires

Our first directive is requires. This module directive allows us to declare module dependencies:

module my.module {
    requires module.name;
}

Now, my.module has both a runtime and a compile-time dependency on module.name.

And all public types exported from a dependency are accessible by our module when we use this directive.


4.2. Requires Static

Sometimes we write code that references another module, but that users of our library will never want to use.

For instance, we might write a utility function that pretty-prints our internal state when another logging module is present. But, not every consumer of our library will want this functionality, and they don’t want to include an extra logging library.

In these cases, we want to use an optional dependency. By using the requires static directive, we create a compile-time-only dependency:

module my.module {
    requires static module.name;
}

4.3. Requires Transitive

We commonly work with libraries to make our lives easier.

But, we need to make sure that any module that brings in our code will also bring in these extra ‘transitive’ dependencies or they won’t work.

Luckily, we can use the requires transitive directive to force any downstream consumers also to read our required dependencies:

module my.module {
    requires transitive module.name;
}

Now, when a developer requires my.module, they won’t also have to say requires module.name for our module to still work.

4.4. Exports

By default, a module doesn’t expose any of its API to other modules. This strong encapsulation was one of the key motivators for creating the module system in the first place.

Our code is significantly more secure, but now we need to explicitly open our API up to the world if we want it to be usable.

We use the exports directive to expose all public members of the named package:

module my.module {
    exports com.my.package.name;
}

Now, when someone does requires my.module, they will have access to the public types in our com.my.package.name package, but not any other package.

4.5. Exports … To

We can use exports to open up our public classes to the world.

But, what if we don’t want the entire world to access our API?

We can restrict which modules have access to our APIs using the exports…to directive.

Similar to the exports directive, we declare a package as exported. But, we also list which modules we are allowing to import this package as a requires. Let’s see what this looks like:

module my.module {
    exports com.my.package.name to com.specific.package;
}

4.6. Uses

A service is an implementation of a specific interface or abstract class that can be consumed by other classes.

We designate the services our module consumes with the uses directive.

Note that the class name we use is either the interface or abstract class of the service, not the implementation class:

module my.module {
    uses class.name;
}

We should note here that there’s a difference between a requires directive and the uses directive.

We might require a module that provides a service we want to consume, but that service implements an interface from one of its transitive dependencies.

Instead of forcing our module to require all transitive dependencies just in case, we use the uses directive to add the required interface to the module path.

4.7. Provides … With

A module can also be a service provider that other modules can consume.

The first part of the directive is the provides keyword. Here is where we put the interface or abstract class name.

Next, we have the with directive where we provide the implementation class name that either implements the interface or extends the abstract class.

Here’s what it looks like put together:

module my.module {
    provides MyInterface with MyInterfaceImpl;
}

4.8. Open

We mentioned earlier that encapsulation was a driving motivator for the design of this module system.

Before Java 9, it was possible to use reflection to examine every type and member in a package, even the private ones. Nothing was truly encapsulated, which can open up all kinds of problems for developers of the libraries.

Because Java 9 enforces strong encapsulation, we now have to explicitly grant permission for other modules to reflect on our classes.

If we want to continue to allow full reflection as older versions of Java did, we can simply open the entire module up:

open module my.module {
}

4.9. Opens

If we need to allow reflection of private types, but we don’t want all of our code exposed, we can use the opens directive to expose specific packages.

But remember, this will open the package up to the entire world, so make sure that is what you want:

module my.module {
  opens com.my.package;
}

4.10. Opens … To

Okay, so reflection is great sometimes, but we still want as much security as we can get from encapsulation. We can selectively open our packages to a pre-approved list of modules, in this case, using the opens…to directive:

module my.module {
    opens com.my.package to moduleOne, moduleTwo, etc.;
}

5. Command Line Options

By now, support for Java 9 modules has been added to Maven and Gradle, so you won’t need to do a lot of manual building of your projects. However, it’s still valuable to know how to use the module system from the command line.

We’ll be using the command line for our full example down below to help solidify how the entire system works in our minds.

  • module-path – We use the --module-path option to specify the module path. This is a list of one or more directories that contain our modules.
  • add-reads – Instead of relying on the module declaration file, we can use the command line equivalent of the requires directive: --add-reads.
  • add-exports – Command line replacement for the exports directive.
  • add-opens – Replaces the opens directive in the module declaration file.
  • add-modules – Adds the list of modules into the default set of modules.
  • list-modules – Prints a list of all modules and their version strings.
  • patch-module – Adds or overrides classes in a module.
  • illegal-access=permit|warn|deny – Relaxes strong encapsulation: permit shows a single global warning, warn shows a warning for each violation, and deny fails with an error. The default is permit.
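For instance, a run that pulls an extra module into the graph and adds a readability edge on the command line might look roughly like this (all module names here are placeholders):

java --module-path mods \
  --add-modules extra.module \
  --add-reads my.module=extra.module \
  -m my.module/com.my.package.MainClass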

6. Visibility

We should spend a little time talking about the visibility of our code.

A lot of libraries depend on reflection to work their magic (JUnit and Spring come to mind).

By default in Java 9, we will only have access to public classes, methods, and fields in our exported packages. Even if we use reflection to get access to non-public members and call setAccessible(true), we won’t be able to access these members.

We can use the open, opens, and opens…to directives to grant runtime-only access for reflection. Note, this is runtime-only!

We won’t be able to compile against private types, and we should never need to anyway.

If we must have access to a module for reflection, and we’re not the owner of that module (i.e., we can’t use the opens…to directive), then it’s possible to use the command line --add-opens option to allow our own modules reflection access to the locked-down module at runtime.

The only caveat here is that we need to have access to the command line arguments that are used to run the module for this to work.
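For example, granting one of our own modules reflective access to an internal package of a locked-down module at launch time might look roughly like this (the module and package names are placeholders):

java --module-path mods \
  --add-opens locked.module/com.locked.internal=my.module \
  -m my.module/com.my.package.MainClass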

7. Putting It All Together

Now that we know what a module is and how to use one, let’s go ahead and build a simple project to demonstrate all the concepts we just learned.

To keep things simple, we won’t be using Maven or Gradle. Instead, we’ll rely on the command line tools to build our modules.

7.1. Setting Up Our Project

First, we need to set up our project structure. We’ll create several directories to organize our files.

Start by creating the project folder:

mkdir module-project
cd module-project

This is the base of our whole project, so add files in here such as Maven or Gradle build files, other source directories, and resources.

We’ll also add a directory to hold all our project-specific modules.

Next, we create a module directory:

mkdir simple-modules

Here’s what our project structure will look like:

module-project
|- // src if we use the default package
|- // build files also go at this level
|- simple-modules
  |- hello.modules
    |- com
      |- baeldung
        |- modules
          |- hello
  |- main.app
    |- com
      |- baeldung
        |- modules
          |- main

7.2. Our First Module

Now that we have the basic structure in place, let’s add our first module.

Under the simple-modules directory, create a new directory called hello.modules.

We can name this anything we want, but we should follow package naming rules (i.e., periods to separate words, etc.). We can even use the name of our main package as the module name if we want, but usually, we want to stick to the same name we would use to create a JAR of this module.

Under our new module, we can create the packages we want. In our case, we are going to create one package structure:

com.baeldung.modules.hello

Next, create a new class called HelloModules.java in this package. We will keep the code simple:

package com.baeldung.modules.hello;

public class HelloModules {
    public static void doSomething() {
        System.out.println("Hello, Modules!");
    }
}

And finally, in the hello.modules root directory, add in our module descriptor, module-info.java:

module hello.modules {
    exports com.baeldung.modules.hello;
}

To keep this example simple, all we are doing is exporting all public members of the com.baeldung.modules.hello package.

7.3. Our Second Module

Our first module is great, but it doesn’t do anything.

We can create a second module that uses it now.

Under our simple-modules directory, create another module directory called main.app. We are going to start with the module descriptor this time:

module main.app {
    requires hello.modules;
}

We don’t need to expose anything to the outside world. Instead, all we need to do is depend on our first module, so we have access to the public classes it exports.

Now we can create an application that uses it.

Create a new package structure: com.baeldung.modules.main.

Now, create a new class file called MainApp.java.

package com.baeldung.modules.main;

import com.baeldung.modules.hello.HelloModules;

public class MainApp {
    public static void main(String[] args) {
        HelloModules.doSomething();
    }
}

And that is all the code we need to demonstrate modules. Our next step is to build and run this code from the command line.

7.4. Building Our Modules

To build our project, we can create a simple bash script and place it at the root of our project.

Create a file called compile-simple-modules.sh:

#!/usr/bin/env bash
javac -d outDir --module-source-path simple-modules $(find simple-modules -name "*.java")

There are two parts to this command, the javac and find commands.

The find command is simply outputting a list of all .java files under our simple-modules directory. We can then feed that list directly into the Java compiler.

The only thing we have to do differently from older versions of Java is to provide the --module-source-path parameter to inform the compiler that it’s building modules.

Once we run this command, we will have an outDir folder with two compiled modules inside.

7.5. Running Our Code

And now we can finally run our code to verify modules are working correctly.

Create another file in the root of the project: run-simple-module-app.sh.

#!/usr/bin/env bash
java --module-path outDir -m main.app/com.baeldung.modules.main.MainApp

To run a module, we must provide at least the module-path and the main class. If all works, you should see:

>$ ./run-simple-module-app.sh 
Hello, Modules!

7.6. Adding a Service

Now that we have a basic understanding of how to build a module, let’s make it a little more complicated.

We’re going to see how to use the provides…with and uses directives.

Start by defining a new file in the hello.modules module named HelloInterface.java:

package com.baeldung.modules.hello;

public interface HelloInterface {
    void sayHello();
}

To make things easy, we’re going to implement this interface with our existing HelloModules.java class:

public class HelloModules implements HelloInterface {
    public static void doSomething() {
        System.out.println("Hello, Modules!");
    }

    public void sayHello() {
        System.out.println("Hello!");
    }
}

That is all we need to do to create a service.

Now, we need to tell the world that our module provides this service.

Add the following to our module-info.java:

provides com.baeldung.modules.hello.HelloInterface with com.baeldung.modules.hello.HelloModules;

As we can see, we declare the interface and which class implements it.

Next, we need to consume this service. In our main.app module, let’s add the following to our module-info.java:

uses com.baeldung.modules.hello.HelloInterface;

Finally, in our main method we can use this service like this:

HelloModules module = new HelloModules();
module.sayHello();

Compile and run:

#> ./run-simple-module-app.sh 
Hello, Modules!
Hello!
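As a side note, the uses directive really pays off when we look the service up through java.util.ServiceLoader instead of instantiating the implementation directly. Here’s a minimal sketch, assuming the module descriptors shown above:

// resolves any provider of HelloInterface registered via 'provides ... with ...'
HelloInterface service = ServiceLoader.load(HelloInterface.class)
  .findFirst()
  .orElseThrow(() -> new IllegalStateException("No HelloInterface provider found"));
service.sayHello();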

We use these directives to be much more explicit about how our code is to be used.

We could keep the implementation in a package that isn’t exported while exposing the interface in a public package.

This makes our code much more secure with very little extra overhead.

Go ahead and try out some of the other directives to learn more about modules and how they work.

8. Conclusion

In this extensive guide, we focused on and covered the basics of the new Java 9 Module system.

We started by talking about what a module is.

Next, we talked about how to discover which modules are included in the JDK.

We also covered the module declaration file in detail.

We rounded out the theory by talking about the various command line arguments we’ll need to build our modules.

Finally, we put all our previous knowledge into practice and created a simple application built on top of the module system.

To see this code and more, be sure to check it out over on GitHub.


A Guide to Apache Ignite


1. Introduction

Apache Ignite is an open-source, memory-centric distributed platform. We can use it as a database, a caching system, or for in-memory data processing.

The platform uses memory as a storage layer and therefore has an impressive performance rate. Simply put, this is one of the fastest atomic data processing platforms currently in production use.

2. Installation and Setup

To begin, check out the getting started page for the initial setup and installation instructions.

The Maven dependencies for the application we are going to build:

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
</dependency>

ignite-core is the only mandatory dependency for the project. Since we also want to interact with SQL, ignite-indexing is included as well. ${ignite.version} is the latest version of Apache Ignite.

As the last step, we start the Ignite node:

Ignite node started OK (id=53c77dea)
Topology snapshot [ver=1, servers=1, clients=0, CPUs=4, offheap=1.2GB, heap=1.0GB]
Data Regions Configured:
^-- default [initSize=256.0 MiB, maxSize=1.2 GiB, persistenceEnabled=false]

The console output above shows that we’re ready to go.

3. Memory Architecture

The platform is based on the Durable Memory Architecture. This enables storing and processing data both on disk and in memory. It increases performance by using the RAM resources of the cluster effectively.

The data in memory and on the disk has the same binary representation. This means no additional conversion of the data while moving from one layer to another.

The durable memory is split into fixed-size blocks called pages. Pages are stored outside of the Java heap and organized in RAM. Each page has a unique identifier: FullPageId.

Pages interact with the memory using the PageMemory abstraction.

It helps to read and write a page, as well as to allocate a page id. Inside the memory, Ignite associates pages with Memory Buffers.

4. Memory Pages

A Page can have the following states:

  • Unloaded – no page buffer loaded in memory
  • Clear – the page buffer is loaded and synchronized with the data on disk
  • Dirty – the page buffer holds data that is different from the data on disk
  • Dirty in checkpoint – another modification starts before the first one is persisted to disk. Here a checkpoint starts, and PageMemory keeps two memory buffers for each Page.

Durable memory allocates a local memory segment called a Data Region. By default, it has a capacity of 20% of the cluster memory. Configuring multiple regions allows keeping the relevant data in memory.

The maximum capacity of a region is a Memory Segment. It’s a physical memory or a contiguous byte array.

To avoid memory fragmentation, a single page holds multiple key-value entries. Every new entry will be added to the most optimal page. If the key-value pair size exceeds the maximum capacity of the page, Ignite stores the data in more than one page. The same logic applies to updating the data.

SQL and cache indexes are stored in structures known as B+ Trees. Cache keys are ordered by their key values.

5. Lifecycle

Each Ignite node runs in a single JVM instance. However, it’s also possible to configure multiple Ignite nodes to run in a single JVM process.

Let’s go through the lifecycle event types:

  • BEFORE_NODE_START – before the Ignite node startup
  • AFTER_NODE_START – fires just after the Ignite node start
  • BEFORE_NODE_STOP – before initiating the node stop
  • AFTER_NODE_STOP – after the Ignite node stops

To start a default Ignite node:

Ignite ignite = Ignition.start();

Or from a configuration file:

Ignite ignite = Ignition.start("config/example-cache.xml");

In case we need more control over the initialization process, there is another way with the help of LifecycleBean interface:

public class CustomLifecycleBean implements LifecycleBean {
 
    @Override
    public void onLifecycleEvent(LifecycleEventType lifecycleEventType) 
      throws IgniteException {
 
        if(lifecycleEventType == LifecycleEventType.AFTER_NODE_START) {
            // ...
        }
    }
}

Here, we can use the lifecycle event types to perform actions before or after the node starts/stops.

For that purpose, we pass the configuration instance with the CustomLifecycleBean to the start method:

IgniteConfiguration configuration = new IgniteConfiguration();
configuration.setLifecycleBeans(new CustomLifecycleBean());
Ignite ignite = Ignition.start(configuration);

6. In-Memory Data Grid

The Ignite data grid is a distributed key-value store, very similar to a partitioned HashMap. It scales horizontally: the more cluster nodes we add, the more data is cached or stored in memory.

It can provide a significant performance improvement to third-party software, like NoSQL and RDBMS databases, as an additional caching layer.

6.1. Caching Support

The data access API is based on JCache JSR 107 specification.

As an example, let’s create a cache using a template configuration:

IgniteCache<Integer, Employee> cache = ignite.getOrCreateCache(
  "baeldungCache");

Let’s see what’s happening here in more detail. First, Ignite finds the memory region where the cache is stored.

Then, the B+ tree index Page will be located based on the key hash code. If the index exists, a data Page of the corresponding key will be located.

When the index is NULL, the platform creates the new data entry by using the given key.

Next, let’s add some Employee objects:

cache.put(1, new Employee(1, "John", true));
cache.put(2, new Employee(2, "Anna", false));
cache.put(3, new Employee(3, "George", true));

Again, the durable memory will look for the memory region where the cache belongs. Based on the cache key, the index page will be located in a B+ tree structure.

When the index page doesn’t exist, a new one is requested and added to the tree.

Next, a data page is assigned to the index page.

To read the employee from the cache, we just use the key value:

Employee employee = cache.get(1);
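For reference, here’s a minimal sketch of the Employee class these examples assume; the actual class in the project may carry additional fields or query annotations:

public class Employee {

    private Integer id;
    private String name;
    private boolean isEmployed;

    public Employee(Integer id, String name, boolean isEmployed) {
        this.id = id;
        this.name = name;
        this.isEmployed = isEmployed;
    }

    public Integer getId() {
        return id;
    }

    public void setEmployed(boolean isEmployed) {
        this.isEmployed = isEmployed;
    }

    // remaining getters and setters omitted for brevity
}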

6.2. Streaming Support

In-memory data streaming provides an alternative approach to disk- and file-system-based data processing applications. The Streaming API splits the high-load data flow into multiple stages and routes them for processing.

We can modify our example and stream the data from the file. First, we define a data streamer:

IgniteDataStreamer<Integer, Employee> streamer = ignite
  .dataStreamer(cache.getName());

Next, we can register a stream transformer to mark the received employees as employed:

streamer.receiver(StreamTransformer.from((e, arg) -> {
    Employee employee = e.getValue();
    employee.setEmployed(true);
    e.setValue(employee);
    return employee;
}));

As a final step, we iterate over the employees.txt file lines and convert them into Java objects:

Path path = Paths.get(IgniteStream.class.getResource("employees.txt")
  .toURI());
Gson gson = new Gson();
Files.lines(path)
  .forEach(l -> {
      Employee employee = gson.fromJson(l, Employee.class);
      streamer.addData(employee.getId(), employee);
  });

With the call to streamer.addData(), we put the employee objects into the stream.

7. SQL Support

The platform provides a memory-centric, fault-tolerant SQL database.

We can connect either with the pure SQL API or with JDBC. The SQL syntax here is ANSI-99, so the standard aggregation functions in queries, as well as DML and DDL operations, are supported.

7.1. JDBC

To get more practical, let’s create a table of employees and add some data to it.

For that purpose, we register a JDBC driver and open a connection as a next step:

Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");

With the help of a standard DDL command, we create the Employee table:

conn.createStatement().executeUpdate("CREATE TABLE Employee (" +
  " id LONG PRIMARY KEY, name VARCHAR, isEmployed tinyint(1)) " +
  " WITH \"template=replicated\"");

After the WITH keyword, we can set the cache configuration template. Here we use REPLICATED. By default, the template mode is PARTITIONED. To specify the number of copies of the data, we can also specify the BACKUPS parameter here, which is 0 by default.
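For instance, a hypothetical partitioned table that keeps one backup copy of each entry could be created roughly like this:

conn.createStatement().executeUpdate("CREATE TABLE City (" +
  " id LONG PRIMARY KEY, name VARCHAR) " +
  " WITH \"template=partitioned,backups=1\"");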

Then, let’s add some data using an INSERT DML statement:

PreparedStatement sql = conn.prepareStatement(
  "INSERT INTO Employee (id, name, isEmployed) VALUES (?, ?, ?)");

sql.setLong(1, 1);
sql.setString(2, "James");
sql.setBoolean(3, true);
sql.executeUpdate();

// add the rest

Afterward, we select the records:

ResultSet rs 
  = conn.createStatement().executeQuery("SELECT e.name, e.isEmployed " 
    + " FROM Employee e " 
    + " WHERE e.isEmployed = TRUE ");

7.2. Query the Objects

It’s also possible to perform a query over the Java objects stored in the cache. Ignite treats each Java object as a separate SQL record:

IgniteCache<Integer, Employee> cache = ignite.cache("baeldungCache");

SqlFieldsQuery sql = new SqlFieldsQuery(
  "select name from Employee where isEmployed = 'true'");

QueryCursor<List<?>> cursor = cache.query(sql);

for (List<?> row : cursor) {
    // do something with the row
}

8. Summary

In this tutorial, we had a quick look at the Apache Ignite project. This guide highlighted the advantages of the platform over other similar products, such as performance gains, durability, and lightweight APIs.

As a result, we learned how to use the SQL language and the Java API to store, retrieve, and stream data in the persistent or in-memory data grid.

As usual, the complete code for this article is available over on GitHub.

Filtering Kotlin Collections


1. Overview

Kotlin collections are powerful data structures with many beneficial methods that put them over and beyond Java collections.

We’re going to cover a handful of filtering methods available in enough detail to be able to utilize all of the others that we don’t explicitly cover in this article.

All of these methods return a new collection, leaving the original collection unmodified.

We’ll be using lambda expressions to perform some of the filters. To read more about lambdas, have a look at our Kotlin Lambda article here.

2. Drop

We’ll start with a basic way of trimming down a collection. Dropping allows us to take a portion of the collection and return a new List that is missing the given number of elements from the front:

@Test
fun whenDroppingFirstTwoItemsOfArray_thenTwoLess() {
    val array = arrayOf(1, 2, 3, 4)
    val result = array.drop(2)
    val expected = listOf(3, 4)

    assertIterableEquals(expected, result)
}

On the other hand, if we want to drop the last n elements, we call dropLast:

@Test
fun givenArray_whenDroppingLastElement_thenReturnListWithoutLastElement() {
    val array = arrayOf("1", "2", "3", "4")
    val result = array.dropLast(1)
    val expected = listOf("1", "2", "3")

    assertIterableEquals(expected, result)
}

Now we’re going to look at our first filter condition which contains a predicate.

This function takes a predicate and works backward through the list, dropping elements until it reaches one that does not meet the condition:

@Test
fun whenDroppingLastUntilPredicateIsFalse_thenReturnSubsetListOfFloats() {
    val array = arrayOf(1f, 1f, 1f, 1f, 1f, 2f, 1f, 1f, 1f)
    val result = array.dropLastWhile { it == 1f }
    val expected = listOf(1f, 1f, 1f, 1f, 1f, 2f)

    assertIterableEquals(expected, result)
}

dropLastWhile removed the final three 1fs from the list as the method cycled through each item from the end until the first instance where an array element did not equal 1f.

The method stops removing elements as soon as an element does not meet the condition of the predicate.

dropWhile is another filter that takes a predicate but dropWhile works from index 0 -> n and dropLastWhile works from index n -> 0.

If we try to drop more elements than the collection contains, we’ll just be left with an empty List.

3. Take

Very similar to drop, take will keep the elements up to the given index or predicate:

@Test
fun `when predicating on 'is String', then produce list of array up until predicate is false`() {
    val originalArray = arrayOf("val1", 2, "val3", 4, "val5", 6)
    val actualList = originalArray.takeWhile { it is String }
    val expectedList = listOf("val1")

    assertIterableEquals(expectedList, actualList)
}

The difference between drop and take is that drop removes the items, whereas take keeps the items.

Attempting to take more items than are available in the collection will just return a List that is the same size as the original collection.

An important note here is that takeIf is NOT a collection method. takeIf uses a predicate to determine whether to return a null value or not – think Optional#filter.

Although it may seem that it’d fit the function name pattern to take all of the items that match the predicate into the returned List, we use filter to perform that action.

4. Filter

filter creates a new List containing only the elements that match the provided predicate:

@Test
fun givenAscendingValueMap_whenFilteringOnValue_ThenReturnSubsetOfMap() {
    val originalMap = mapOf("key1" to 1, "key2" to 2, "key3" to 3)
    val filteredMap = originalMap.filter { it.value < 2 }
    val expectedMap = mapOf("key1" to 1)

    assertTrue { expectedMap == filteredMap }
}

When filtering, we have a function that allows us to accumulate the results of filtering different collections. It is called filterTo and copies the matching elements into a given mutable collection.

This allows us to take several collections and filter them into a single, accumulative collection.

This example takes an array, a sequence, and a list.

It then applies the same predicate to all three to filter the prime numbers contained in each collection:

@Test
fun whenFilteringToAccumulativeList_thenListContainsAllContents() {
    val array1 = arrayOf(90, 92, 93, 94, 92, 95, 93)
    val array2 = sequenceOf(51, 31, 83, 674_506_111, 256_203_161, 15_485_863)
    val list1 = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
    val primes = mutableListOf<Int>()
    
    val expected = listOf(2, 3, 5, 7, 31, 83, 15_485_863, 256_203_161, 674_506_111)

    val primeCheck = { num: Int -> Primes.isPrime(num) }

    array1.filterTo(primes, primeCheck)
    list1.filterTo(primes, primeCheck)
    array2.filterTo(primes, primeCheck)

    primes.sort()

    assertIterableEquals(expected, primes)
}

Filters with or without predicate also work well with Maps:

val originalMap = mapOf("key1" to 1, "key2" to 2, "key3" to 3)
val filteredMap = originalMap.filter { it.value < 2 }

A very beneficial pair of filter methods is filterNotNull and filterNotNullTo which will just filter out all null elements.

Lastly, if we ever need to use the index of the collection item, filterIndexed and filterIndexedTo provide the ability to use a predicate lambda with both the element and its position index.

5. Slice

We may also use a range to perform slicing. To perform a slice, we just define the Range of elements that we want to extract:

@Test
fun whenSlicingAnArrayWithDotRange_ThenListEqualsTheSlice() {
    val original = arrayOf(1, 2, 3, 2, 1)
    val actual = original.slice(3 downTo 1)
    val expected = listOf(2, 3, 2)

    assertIterableEquals(expected, actual)
}

The slice can go either up or down.

When using Ranges, we may also set the range step size.

Using a range without a step and slicing beyond the bounds of a collection, we will create null objects in our resulting List.

However, stepping beyond the bounds of a collection using a Range with steps can trigger an ArrayIndexOutOfBoundsException:

@Test
fun whenSlicingBeyondRangeOfArrayWithStep_thenOutOfBoundsException() {
    assertThrows(ArrayIndexOutOfBoundsException::class.java) {
        val original = arrayOf(12, 3, 34, 4)
        original.slice(3..8 step 2)
    }
}

6. Distinct

Another filter we’re going to look at in this article is distinct. We can use this method to collect unique objects from our list:

@Test
fun whenApplyingDistinct_thenReturnListOfNoDuplicateValues() {
    val array = arrayOf(1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 5, 6, 7, 8, 9)
    val result = array.distinct()
    val expected = listOf(1, 2, 3, 4, 5, 6, 7, 8, 9)

    assertIterableEquals(expected, result)
}

We also have the option of using a selector function. The selector returns the value we’re going to evaluate for uniqueness.

We’ll implement a small data class called SmallClass to explore working with an object within the selector:

data class SmallClass(val key: String, val num: Int)

Using an array of SmallClass:

val original = arrayOf(
  SmallClass("key1", 1),
  SmallClass("key2", 2),
  SmallClass("key3", 3),
  SmallClass("key4", 3),
  SmallClass("er", 9),
  SmallClass("er", 10),
  SmallClass("er", 11))

We can use various fields within the distinctBy:

val actual = original.distinctBy { it.key }
val expected = listOf(
  SmallClass("key1", 1),
  SmallClass("key2", 2),
  SmallClass("key3", 3),
  SmallClass("key4", 3),
  SmallClass("er", 9))

The function doesn’t need to directly return a property; we can also perform calculations to determine our distinct values.

For example, to get distinct numbers for each range of 10 (0 – 9, 10 – 19, 20 – 29, etc.), we can round down to the nearest 10, and that is the value our selector returns:

val actual = original.distinctBy { Math.floor(it.num / 10.0) }

7. Chunked

One interesting feature of Kotlin 1.2 is chunked. Chunking is taking a single Iterable collection and creating a new List of chunks matching the defined size. This doesn’t work with Arrays; only Iterables.

We can chunk by passing simply the size of the chunk to extract:

@Test
fun givenDNAFragmentString_whenChunking_thenProduceListOfChunks() {
    val dnaFragment = "ATTCGCGGCCGCCAA"

    val fragments = dnaFragment.chunked(3)

    assertIterableEquals(listOf("ATT", "CGC", "GGC", "CGC", "CAA"), fragments)
}

Or a size and a transformer:

@Test
fun givenDNAString_whenChunkingWithTransformer_thenProduceTransformedList() {
    val codonTable = mapOf(
      "ATT" to "Isoleucine", 
      "CAA" to "Glutamine", 
      "CGC" to "Arginine", 
      "GGC" to "Glycine")
    val dnaFragment = "ATTCGCGGCCGCCAA"

    val proteins = dnaFragment.chunked(3) { codon ->
        codonTable[codon.toString()] ?: error("Unknown codon")
    }

    assertIterableEquals(listOf(
      "Isoleucine", "Arginine", 
      "Glycine", "Arginine", "Glutamine"), proteins)
}

The above selector example of DNA Fragments is extracted from the Kotlin documentation on chunked available here.

When passing chunked a size that is not a divisor of our collection size, the last element in our list of chunks will simply be a smaller list.

Be careful not to assume that every chunk is the full size and encounter an ArrayIndexOutOfBoundsException!

8. Conclusion

All of the Kotlin filters allow us to apply lambdas to determine whether an item should be filtered or not. Not all of these functions can be used on Maps, however, all filter functions that work on Maps will work on Arrays.

The Kotlin collections documentation gives us information on whether we can use a filter function on only arrays or both. The documentation can be found here.

As always, all of the examples are available over on GitHub.

Maven Resources Plugin


1. Overview

This tutorial describes the resources plugin, one of the core plugins of the Maven build tool.

For an overview of the other core plugins, refer to this article.

2. Plugin Goals

The resources plugin copies files from input resource directories to an output directory. This plugin has three goals, which are different only in how the resources and output directories are specified.

The three goals of this plugin are:

  • resources – copy resources that are part of the main source code to the main output directory
  • testResources – copy resources that are part of the test source code to the test output directory
  • copy-resources – copy arbitrary resource files to an output directory, requiring us to specify the input files and the output directory

Let’s take a look at the resources plugin in the pom.xml:

<plugin>
    <artifactId>maven-resources-plugin</artifactId>
    <version>3.0.2</version>
    <configuration>
        ...
    </configuration>
</plugin>

We can find the latest version of this plugin here.

3. Example

Assume we want to copy resource files from the directory input-resources to the directory output-resources and we want to exclude all files ending with the extension .png.

These requirements are satisfied with this configuration:

<configuration>
    <outputDirectory>output-resources</outputDirectory>
    <resources>
        <resource>
            <directory>input-resources</directory>
            <excludes>
                <exclude>*.png</exclude>
            </excludes>
            <filtering>true</filtering>
        </resource>
    </resources>
</configuration>

The configuration applies to all executions of the resources plugin.

For example, when the resources goal of this plugin is executed with the command mvn resources:resources, all resources from the input-resources directory, except for PNG files, will be copied to output-resources.

Since, by default, the resources goal is bound to the process-resources phase in the Maven default lifecycle, we can execute this goal and all the preceding phases by running the command mvn process-resources.

In the given configuration, there’s a parameter named filtering with the value of true. The filtering parameter is used to replace placeholder variables in the resource files.

For instance, if we have a property in the POM:

<properties>
    <resources.name>Baeldung</resources.name>
</properties>

and one of the resource files contains:

Welcome to ${resources.name}!

then the variable will be evaluated in the output resource, and the resulting file will contain:

Welcome to Baeldung!

4. Conclusion

In this quick article, we went over the resources plugin and gave instructions on using and customizing it.

The complete source code for this tutorial can be found over on GitHub.

Maven Compiler Plugin


1. Overview

This quick tutorial introduces the compiler plugin, one of the core plugins of the Maven build tool.

For an overview of the other core plugins, refer to this article.

2. Plugin Goals

The compiler plugin is used to compile the source code of a Maven project. This plugin has two goals, which are already bound to specific phases of the default lifecycle:

  • compile – compile main source files
  • testCompile – compile test source files

Here’s the compiler plugin in the POM:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.7.0</version>
    <configuration>
        ...
    </configuration>
</plugin>

We can find the latest version of this plugin here.

3. Configuration

By default, the compiler plugin compiles source code compatible with Java 5, and the generated classes also work with Java 5 regardless of the JDK in use. We can modify these settings in the configuration element:

<configuration>
    <source>1.8</source>
    <target>1.8</target>
    <!-- other customizations -->
</configuration>

For convenience, we can set the Java version as properties of the POM:

<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>

Sometimes we want to pass arguments to the javac compiler. This is where the compilerArgs parameter comes in handy.

For instance, we can specify the following configuration for the compiler to warn about unchecked operations:

<configuration>
    <!-- other configuration -->
    <compilerArgs>
        <arg>-Xlint:unchecked</arg>
    </compilerArgs>
</configuration>

When compiling this class:

public class Data {
    List<String> textList = new ArrayList();

    public void addText(String text) {
        textList.add(text);
    }

    public List getTextList() {
        return this.textList;
    }
}

we’ll see an unchecked warning on the console:

[WARNING] ... Data.java:[7,29] unchecked conversion
  required: java.util.List<java.lang.String>
  found:    java.util.ArrayList

As both goals of the compiler plugin are automatically bound to phases in the Maven default lifecycle, we can execute these goals with the commands mvn compile and mvn test-compile.

4. Conclusion

In this article, we went over the compiler plugin and described how to use it.

The complete source code for this tutorial can be found over on GitHub.

Quick Guide to the Maven Surefire Plugin


1. Overview

This tutorial demonstrates the surefire plugin, one of the core plugins of the Maven build tool. For an overview of the other core plugins, refer to this article.

2. Plugin Goal

We can run the tests of a project using the surefire plugin. By default, this plugin generates XML reports in the directory target/surefire-reports.

This plugin has only one goal, test. This goal is bound to the test phase of the default build lifecycle, and the command mvn test will execute it.
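For example, to run only a single test class instead of the whole suite, we can typically pass the test parameter on the command line (the class name here is hypothetical):

mvn test -Dtest=DataCheckTest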

3. Configuration

The surefire plugin can work with the test frameworks JUnit and TestNG. No matter which framework we use, the behavior of surefire is the same.

By default, surefire automatically includes all test classes whose name starts with Test, or ends with Test, Tests or TestCase.

We can change this configuration using the excludes and includes parameters, however:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.21.0</version>
    <configuration>
        <excludes>
            <exclude>DataTest.java</exclude>
        </excludes>
        <includes>
            <include>DataCheck.java</include>
        </includes>
    </configuration>
</plugin>

With this configuration, test cases in the DataCheck class are executed while the ones in DataTest aren’t.

We can find the latest version of the plugin here.

4. Conclusion

In this quick article, we went through the surefire plugin, describing its only goal as well as how to configure it.

As always, the complete source code for this tutorial can be found over on GitHub.

The Maven Failsafe Plugin


1. Overview

This to-the-point tutorial describes the failsafe plugin, one of the core plugins of the Maven build tool.

For an overview of the other core plugins, refer to this article.

2. Plugin Goals

The failsafe plugin is used for integration tests of a project. It has two goals:

  • integration-test – run integration tests; this goal is bound to the integration-test phase by default
  • verify – verify that the integration tests passed; this goal is bound to the verify phase by default

3. Goal Execution

This plugin runs methods in test classes just like the surefire plugin. We can configure both plugins in similar ways. However, there’re some crucial differences between them.

First, unlike surefire (see this article) which is included in the super pom.xml, the failsafe plugin with its goals must be explicitly specified in the pom.xml to be part of a build lifecycle:

<plugin>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.21.0</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
            <configuration>
                ...
            </configuration>
        </execution>
    </executions>
</plugin>

The newest version of this plugin is here.

Second, the failsafe plugin runs and verifies tests using different goals. A test failure in the integration-test phase doesn’t fail the build straight away, allowing the phase post-integration-test to execute, where clean-up operations are performed.

Failed tests, if any, are only reported during the verify phase, after the integration test environment has been torn down properly.
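As a point of reference, failsafe picks up integration tests by naming convention; by default, class names matching IT*.java, *IT.java or *ITCase.java are included. A bare-bones integration test might look like this hypothetical JUnit example:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DataIT {

    @Test
    public void whenEnvironmentIsUp_thenIntegrationScenarioPasses() {
        // exercise the deployed application here and verify the outcome
        assertTrue(true);
    }
}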

4. Conclusion

In this article, we introduced the failsafe plugin, comparing it with the surefire plugin, another popular plugin used for testing.

The complete source code for this tutorial can be found over on GitHub.

Quick Guide to the Maven Install Plugin


1. Overview

This article describes the install plugin, one of the core plugins of the Maven build tool.

For an overview of the other core plugins, refer to this article.

2. Plugin Goals

We use the install plugin to add artifacts to the local repository. This plugin is included in the super POM, therefore a POM doesn’t need to explicitly include it.

The most noteworthy goal of this plugin is install, which is bound to the install phase by default.

Other goals are install-file, used to install external artifacts into the local repository, and help, which shows help information on the plugin itself.
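For example, installing a standalone JAR into the local repository with install-file might look like this; the file path and coordinates are placeholders:

mvn install:install-file -Dfile=path/to/external-library.jar \
  -DgroupId=com.example -DartifactId=external-library \
  -Dversion=1.0 -Dpackaging=jar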

In most cases, the install plugin doesn’t need any custom configuration. That’s why we won’t dive deeper into this plugin.

3. Conclusion

This post gave a brief introduction to the install plugin.

We can find more information about this plugin on the Maven website.


The Maven Deploy Plugin


1. Overview

This tutorial introduces the deploy plugin, one of the core plugins of the Maven build tool.

For a quick overview of the other core plugins, refer to this article.

2. Plugin Goals

We use the deploy plugin during the deploy phase, pushing artifacts to a remote repository to share with other developers.

In addition to the artifact itself, this plugin ensures that all associated information, such as POMs, metadata or hash values, is correct.
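The remote repository itself is typically declared in the distributionManagement section of the POM, roughly like this (the ids and URLs are placeholders):

<distributionManagement>
    <repository>
        <id>releases</id>
        <url>https://repo.example.com/releases</url>
    </repository>
    <snapshotRepository>
        <id>snapshots</id>
        <url>https://repo.example.com/snapshots</url>
    </snapshotRepository>
</distributionManagement>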

The deploy plugin is specified in the super POM, hence it’s not necessary to add this plugin to the POM.

The most noticeable goal of this plugin is deploy, bound to the deploy phase by default. This tutorial covers the deploy plugin in detail, hence we won’t go any further.

The deploy plugin has another goal named deploy-file, deploying an artifact in a remote repository. This goal isn’t in common use though.

3. Conclusion

This article gave a brief description of the deploy plugin.

We can find more information about this plugin on the Maven website.

The Maven Clean Plugin


1. Overview

This quick tutorial describes the clean plugin, one of the core plugins of the Maven build tool.

For an overview of the other core plugins, refer to this article.

2. Plugin Goal

The clean lifecycle has only one phase named clean that is automatically bound to the only goal of the plugin with the same name. This goal can, therefore, be executed with the command mvn clean.

The clean plugin is already included in the super POM, thus we can use it without specifying anything in the project’s POM.

This plugin, as its name implies, cleans the files and directories generated during the previous build. By default, the plugin removes the target directory.

3. Configuration

We can add directories to be cleaned using the filesets parameter:

<plugin>
    <artifactId>maven-clean-plugin</artifactId>
    <version>3.0.0</version>
    <configuration>
        <filesets>
            <fileset>
                <directory>output-resources</directory>
            </fileset>
        </filesets>
    </configuration>
</plugin>

The latest version of this plugin is listed here.

If the output-resources directory contains some generated resources, it cannot be removed with the default settings. The change we’ve just made instructs the clean plugin to delete that directory in addition to the default one.

4. Conclusion 

In this article, we went over the clean plugin and showed how to customize it.

The complete source code for this tutorial can be found over on GitHub.

The Maven Verifier Plugin


1. Overview

This tutorial introduces the verifier plugin, one of the core plugins of the Maven build tool.

For an overview of the other core plugins, refer to this overview article.

2. Plugin Goal

The verifier plugin has only one goal – verify. This goal verifies the existence or non-existence of files and directories, optionally checking file content against a regular expression.

Despite its name, the verify goal is bound to the integration-test phase by default rather than the verify phase.

3. Configuration

The verifier plugin is triggered only if it’s explicitly added to the pom.xml:

<plugin>
    <artifactId>maven-verifier-plugin</artifactId>
    <version>1.1</version>
    <configuration>
        <verificationFile>input-resources/verifications.xml</verificationFile>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

This link shows the newest version of the plugin.

The default location of the verification file is src/test/verifier/verifications.xml. We must set a value for the verificationFile parameter if we want to use another file.

Here’s the content of the verification file shown in the given configuration:

<verifications 
  xmlns="http://maven.apache.org/verifications/1.0.0" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/verifications/1.0.0 
  http://maven.apache.org/xsd/verifications-1.0.0.xsd">
    <files>
        <file>
            <location>input-resources/baeldung.txt</location>
            <contains>Welcome</contains>
        </file>
    </files>
</verifications>

This verification file confirms that a file named input-resources/baeldung.txt exists and that it contains the word Welcome. We’ve already added such a file before, thus the goal execution succeeds.

4. Conclusion

In this article, we walked through the verifier plugin and described how to customize it.

The complete source code for this tutorial can be found over on GitHub.

The Maven Site Plugin


1. Overview

This tutorial introduces the site plugin, one of the core plugins of the Maven build tool.

For an overview of the other core plugins, refer to this tutorial.

2. Plugin Goals

The Maven site lifecycle has two phases bound to goals of the site plugin by default: the site phase is bound to the site goal, and the site-deploy phase is bound to the deploy goal.

Here are the descriptions of those goals:

  • site – generate a site for a single project; the generated site only shows information about the artifacts specified in the POM
  • deploy – deploy the generated site to the URL specified in the distributionManagement element of the POM

In addition to site and deploy, the site plugin has several other goals to customize the content of the generated files and to control the deployment process.
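As mentioned above, the deploy goal reads its target location from the distributionManagement element; a minimal configuration might look like this (the id and URL are placeholders):

<distributionManagement>
    <site>
        <id>project-site</id>
        <url>scp://www.example.com/www/docs/project/</url>
    </site>
</distributionManagement>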

3. Goal Execution

We can use this plugin without adding it to the POM as the super POM already includes it.

To generate a site, just run mvn site:site or mvn site.

To view the generated site on a local machine, run mvn site:run. This command will deploy the site to a Jetty web server at the address localhost:8080.

The run goal of this plugin isn’t implicitly bound to a phase in the site lifecycle, hence we need to call it directly.

If we want to stop the server, we can simply hit Ctrl + C.

4. Conclusion

This article covered the site plugin and how to execute its goals.

We can find more information about this plugin on the Maven website.

Guide to the Core Maven Plugins


1. Overview

Maven is the most commonly used build tool in the Java world. Mainly, it’s just a plugin execution framework in which all jobs are implemented by plugins.

In this tutorial, we’ll give an introduction to the core Maven plugins, providing links to other tutorials focusing on what these plugins can do and how their goals are bound to the build lifecycles.

2. Maven Build Lifecycles

Core plugins closely relate to the build lifecycles.

Maven defines three build lifecycles:  default, site and clean. Each lifecycle is composed of multiple phases, which run in order up to the phase specified in the mvn command.

The most important lifecycle is default, responsible for all steps in the build process, from project validation to package deployment.

The site lifecycle is in charge of building a site, showing Maven related information of the project, whereas the clean lifecycle takes care of removing files generated in the previous build.

Many phases in all three lifecycles are automatically bound to the goals of core plugins. The referenced articles will go over these goals and the built-in bindings in detail.

All plugins are enclosed in a build element of the POM:

<build>
    <plugins>
        <!-- plugins go here -->
    </plugins>
</build>

3. Plugins Bound to the Default Lifecycle 

The built-in bindings of the default lifecycle are dependent on the value of the POM’s packaging element. For the sake of brevity, we’ll go over bindings of the most common packaging types:  jar and war.

Here’s a list of the goals that are bound to each phase of the default lifecycle in the format “phase -> plugin:goal”:

  • process-resources -> resources:resources
  • compile -> compiler:compile
  • process-test-resources -> resources:testResources
  • test-compile -> compiler:testCompile
  • test -> surefire:test
  • package -> ejb:ejb or ejb3:ejb3 or jar:jar or par:par or rar:rar or war:war
  • install -> install:install
  • deploy -> deploy:deploy

The goals above are contained in the following plugins. Follow the links for an article on each of the plugins:

4. Other Plugins

In addition to the plugins mentioned in the previous section, there are two other core plugins, the site plugin and the clean plugin, whose goals are bound to phases of the site and clean lifecycles.

5. Conclusion

In this article, we went over Maven build lifecycles and provided references to tutorials covering the core plugins of the Maven build tool in detail.

The code examples of most of the referenced articles can be found over on GitHub.

Java Weekly, Issue 225


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free

>> Event Storming and Spring with a Splash of DDD [spring.io]

Event Storming is a powerful technique that can speed up the understanding of the business domain we’re working on.

By the way, welcome to the Pivotal advocacy team, Jakub!

>> Multiple modules in Spring Boot apps [blog.frankel.ch]

Although not always super popular, Spring Boot applications can be modularized quite easily.

>> Chaos Monkey for Spring Boot [codecentric.github.io]

Chaos Monkey allows you to easily abuse your Spring application and see how it performs under that kind of impact. Super cool.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> FP vs. OO [blog.cleancoder.com]

FP and OOP work great together – there’s no need to pick one approach exclusively 🙂

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Dumb Question [dilbert.com]

>> Terrible Personality [dilbert.com]

>> The Losing Team [dilbert.com]

4. Pick of the Week

>> Productivity [blog.samaltman.com]

Measure Elapsed Time in Java


1. Overview

In this article, we’re going to have a look at how to measure elapsed time in Java. While this may sound easy, there are a few pitfalls we must be aware of.

We’ll explore standard Java classes and external packages that provide functionality to measure elapsed time.

2. Simple Measurements

2.1. currentTimeMillis()

When we encounter a requirement to measure elapsed time in Java, we may try to do it like:

long start = System.currentTimeMillis();
// ...
long finish = System.currentTimeMillis();
long timeElapsed = finish - start;

At first glance, this makes perfect sense: we take a timestamp at the start, take another one when the code finishes, and the elapsed time is the difference between the two values.

However, the result may well be inaccurate because System.currentTimeMillis() measures wall-clock time. Wall-clock time can change for many reasons; for example, adjusting the system time affects the result, and so does a leap second.

2.2. nanoTime()

Another method in java.lang.System class is nanoTime(). If we look at the Java documentation, we’ll find the following statement:

“This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.”

Let’s use it:

long start = System.nanoTime();
// ...
long finish = System.nanoTime();
long timeElapsed = finish - start;

The code is basically the same as before. The only difference is the method used to get timestamps – nanoTime() instead of currentTimeMillis().

Let’s also note that nanoTime() returns time in nanoseconds, so if we want the elapsed time in a different unit, we must convert it accordingly.

For example, to convert to milliseconds, we divide the result in nanoseconds by 1,000,000.
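
Rather than dividing by hand, we can also let the JDK do the conversion. One option, sketched here, uses java.util.concurrent.TimeUnit:

long timeElapsedMillis = TimeUnit.NANOSECONDS.toMillis(timeElapsed);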

Another pitfall with nanoTime() is that even though it provides nanosecond precision, it doesn’t guarantee nanosecond resolution (i.e. how often the value is updated).

However, it does guarantee that the resolution will be at least as good as that of currentTimeMillis().

3. Java 8

If we’re using Java 8, we can try the new java.time.Instant and java.time.Duration classes. Both are immutable, thread-safe and use their own time-scale, the Java Time-Scale, as do all classes within the new java.time API.

3.1. Java Time-Scale

The traditional way of measuring time is to divide a day into 24 hours of 60 minutes of 60 seconds, which gives 86,400 seconds a day. However, solar days are not always equally long.

The UTC time-scale actually allows a day to have 86,399 or 86,401 SI seconds. An SI second is the scientific “Standard International second”, defined by periods of radiation of the cesium-133 atom. This is required to keep the day aligned with the Sun.

The Java Time-Scale divides each calendar day into exactly 86,400 subdivisions, known as seconds. There are no leap seconds.

3.2. Instant Class

The Instant class represents an instant on the timeline. Basically, it is a numeric timestamp since the standard Java epoch of 1970-01-01T00:00:00Z.

In order to get the current timestamp, we can use the Instant.now() static method. This method allows passing in an optional Clock parameter. If omitted, it uses the system clock in the default time zone.

We can store start and finish times in two variables, as in previous examples. Next, we can calculate time elapsed between both instants.

We can additionally use the Duration class and its between() method to obtain the duration between two Instant objects. Finally, we need to convert the Duration to milliseconds:

Instant start = Instant.now();
// CODE HERE        
Instant finish = Instant.now();
long timeElapsed = Duration.between(start, finish).toMillis();

4. StopWatch

Moving on to libraries, Apache Commons Lang provides the StopWatch class that can be used to measure elapsed time.

4.1. Maven Dependency

We can get the latest version by updating the pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.7</version>
</dependency>

The latest version of the dependency can be checked here.

4.2. Measuring Elapsed Time with StopWatch

First of all, we need to get an instance of the class and then we can simply measure the elapsed time:

StopWatch watch = new StopWatch();
watch.start();

Once we have a watch running, we can execute the code we want to benchmark and then at the end, we simply call the stop() method. Finally, to get the actual result, we call getTime():

watch.stop();
System.out.println("Time Elapsed: " + watch.getTime()); // Prints: Time Elapsed: 2501

StopWatch has a few additional helper methods that we can use in order to pause or resume our measurement. This may be helpful if we need to make our benchmark more complex.
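
For instance, if we want to exclude a section from the measurement, a sketch using the suspend() and resume() methods could look like this:

StopWatch watch = new StopWatch();
watch.start();
// code we want to measure ...
watch.suspend();
// code we don't want to include in the measurement ...
watch.resume();
// more code we want to measure ...
watch.stop();
System.out.println("Time Elapsed: " + watch.getTime());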

Finally, let’s note that the class is not thread-safe.

5. Conclusion

There are many ways to measure time in Java. We’ve covered a very “traditional” (and inaccurate) way by using currentTimeMillis(). Additionally, we checked Apache Commons’ StopWatch and looked at the new classes available in Java 8.

Overall, for simple and correct measurements of the time elapsed, the nanoTime() method is sufficient. It is also shorter to type than currentTimeMillis().

Let’s note, however, that for proper benchmarking, instead of measuring time manually, we can use a framework like the Java Microbenchmark Harness (JMH). This topic goes beyond the scope of this article but we explored it here.

Finally, as always, the code used during the discussion can be found over on GitHub.


Introduction to Kubernetes


1. Overview

In this tutorial, we’ll have a brief theoretical introduction to Kubernetes. In particular, we’ll discuss the following topics:

  • Need for a container orchestration tool
  • Features of Kubernetes
  • Kubernetes architecture
  • Kubernetes API

For a more in-depth understanding, we can also have a look at the official documentation.

2. Container Orchestration

In this previous article, we already discussed some Docker basics, as well as how to package and deploy custom applications.

In a nutshell, Docker is a container runtime: it provides features for packaging, shipping and running single instances of an app in a standardized way, also known as a container.

However, as complexity increases, new needs appear: automated deployment, orchestration of containers, scheduling of apps, ensuring high availability, managing a cluster of several app instances, and so on.

There are quite a few tools available on the market. However, Kubernetes is increasingly establishing itself as a substantial competitor.

3. Kubernetes Features

Kubernetes, in short, is a system for orchestration of containerized applications across a cluster of nodes, including networking and storage infrastructure. Some of the most important features are:

  • Resource scheduling: it ensures that Pods are distributed optimally over all available nodes
  • Auto-scaling: with increasing load, the cluster can dynamically allocate additional nodes, and deploy new Pods on them
  • Self-healing: the cluster supervises containers and restarts them, if required, based on defined policies
  • Service-discovery: Pods and Services are registered and published via DNS
  • Rolling updates/rollbacks: supports rolling updates based on sequential redeployment of Pods and containers
  • Secret/configuration management: supports secure handling of sensitive data like passwords or API keys
  • Storage orchestration: several 3rd party storage solutions are supported, which can be used as external volumes to persist data

4. Understanding Kubernetes

The Master maintains the desired state of a cluster. When we interact with our cluster, e.g. by using the kubectl command-line interface, we’re always communicating with our cluster’s master.

Nodes in a cluster are the machines (VMs, physical servers, etc.) that run our applications. The Master controls each node.

A node needs a container runtime. Docker is the most common runtime used with Kubernetes.

Minikube is a Kubernetes distribution, which enables us to run a single-node cluster inside a VM on a workstation for development and testing.

The Kubernetes API provides an abstraction of the Kubernetes concepts by wrapping them into objects (we’ll have a look in the following section).

kubectl is a command-line tool that we can use to create, update, delete, and inspect these API objects.

5. Kubernetes API Objects

An API object is a “record of intent” – once we create the object, the cluster system will continuously work to ensure that object exists.

Every object consists of two parts: the object spec and the object status. The spec describes the desired state for the object. The status describes the actual state of the object and is supplied and updated by the cluster.

In the following section, we’ll discuss the most important objects. After that, we’ll look at an example of how spec and status look in practice.

5.1. Basic Objects

A Pod is a basic unit that Kubernetes deals with. It encapsulates one or more closely related containers, storage resources, a unique network IP, and configurations on how the container(s) should run, and thereby represents a single instance of an application.

Service is an abstraction which groups together logical collections of Pods and defines how to access them. Services are an interface to a group of containers so that consumers do not have to worry about anything beyond a single access location.

Using Volumes, containers can access external storage resources (as their file system is ephemeral), and they can read files or store them permanently. Volumes also support the sharing of files between containers. A long list of Volume types is supported.

With Namespaces, Kubernetes provides the possibility to run multiple virtual clusters on one physical cluster. Namespaces provide scope for names of resources, which have to be unique within a namespace.

5.2. Controllers

Additionally, there are some higher-level abstractions, called controllers. Controllers build on the basic objects and provide additional functionality:

A Deployment controller provides declarative updates for Pods and ReplicaSets. We describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired.

A ReplicaSet ensures that a specified number of Pod replicas are running at any given time.

With StatefulSet, we can run stateful applications: different from a Deployment, Pods will have a unique and persistent identity. Using StatefulSet, we can implement applications with unique network identifiers, or persistent storage, and can guarantee ordered, graceful deployment, scaling, deletion, and termination, as well as ordered and automated rolling updates.

With DaemonSet, we can ensure that all or a specific set of nodes in our cluster run one copy of a specific Pod. This might be helpful if we need a daemon running on each node, e.g. for application monitoring, or for collecting logs.

A GarbageCollection makes sure that certain objects are deleted, which once had an owner, but no longer have one. This helps to save resources by deleting objects, which are not needed anymore.

A Job creates one or more Pods, makes sure that a specific number of them terminates successfully, and tracks the successful completions. Jobs are helpful for parallel processing of a set of independent but related work items, like sending emails, rendering frames, transcoding files, and so on.

5.3. Object Metadata

Metadata are attributes that provide additional information about objects.

Mandatory attributes are:

  • Each object must have a Namespace (we already discussed that before). If not specified explicitly, an object belongs to the default Namespace.
  • A Name is a unique identifier for an object in its Namespace.
  • A Uid is a value unique in time and space. It helps to distinguish between objects, which have been deleted and recreated.

There are also optional metadata attributes. Some of the most important are:

  • Labels are key/value pairs that can be attached to objects to categorize them. They help us identify a collection of objects that satisfy a specific condition and let us map our organizational structures onto objects in a loosely coupled way.
  • Label selectors help us to identify a set of objects by their labels.
  • Annotations are key/value pairs, too. In contrast to labels, they are not used to identify objects. Instead, they can hold information about their respective object, like build, release, or image information.

5.4. Example

After having discussed the Kubernetes API in theory, we’ll now have a look at an example.

API objects can be specified as JSON or YAML files. However, the documentation recommends YAML for manual configuration.

In the following, we’ll define the spec part for the Deployment of a stateless application. After that, we’ll have a look at how a status returned from the cluster might look.

The specification for an application called demo-backend could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-backend
spec:
  selector:
      matchLabels:
        app: demo-backend
        tier: backend
  replicas: 3
  template:
    metadata:
      labels:
        app: demo-backend
        tier: backend
    spec:
      containers:
        - name: demo-backend
          image: demo-backend:latest
          ports:
            - containerPort: 8080

As we can see, we specify a Deployment object, called demo-backend. The spec: part below actually is a nested structure, and contains the following API objects discussed in the previous sections:

  • replicas: 3 specifies a ReplicaSet with a replication factor of 3 (i.e. we’ll have three Pod replicas)
  • template: specifies one Pod
  • Within this Pod, we can use spec: containers: to assign one or more containers to our Pod. In this case, we have one container called demo-backend, which is instantiated from an image also called demo-backend, version latest, and it listens on port 8080
  • We also attach labels to our pod: app: demo-backend and tier: backend
  • With selector: matchLabels:, we link our Pod to the Deployment controller (mapping to labels app: demo-backend and tier: backend)
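
Assuming we save the specification above in a file called demo-backend.yaml (the filename is just an example), we can submit it to the cluster and then inspect the resulting Deployment with kubectl:

kubectl apply -f demo-backend.yaml
kubectl describe deployment demo-backend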

If we query the state of our Deployment from the cluster, the response will look something like this:

Name:                   demo-backend
Namespace:              default
CreationTimestamp:      Thu, 22 Mar 2018 18:58:32 +0100
Labels:                 app=demo-backend
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=demo-backend
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=demo-backend
  Containers:
   demo-backend:
    Image:        demo-backend:latest
    Port:         8080/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   demo-backend-54d955ccf (3/3 replicas created)
Events:          <none>

As we can see, the deployment seems to be up and running, and we can recognize most of the elements from our specification.

We have a Deployment with the replication factor of 3, with one pod containing one container, instantiated from image demo-backend:latest.

All attributes, which are present in the response but weren’t defined in our specification, are default values.

6. Getting Started With Kubernetes

We can run Kubernetes on various platforms: from our laptop to VMs on a cloud provider, or a rack of bare metal servers.

To get started, Minikube might be the easiest choice: it enables us to run a single-node cluster on a local workstation for development and testing.
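
Once Minikube is installed, spinning up a local cluster and checking that its single node is ready takes just a couple of commands, for example:

minikube start
kubectl get nodes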

Have a look at the official documentation for further local-machine solutions, hosted solutions, distributions to be run on IaaS clouds, and some more.

7. Conclusion

In this article, we had a quick look at some Kubernetes basics.

Simply put, we covered the following aspects:

  • Why we might need a container orchestration tool
  • Some of the most important features of Kubernetes
  • Kubernetes architecture and its most important components
  • Kubernetes API and how we can use it to specify the desired state of our cluster

EasyMock Argument Matchers


1. Overview

In this tutorial, we’ll explore EasyMock argument matchers. We’ll discuss different types of predefined matchers and how to create a custom matcher as well.

We already covered EasyMock basics in the introduction to EasyMock article, so you may want to read it first to get familiar with EasyMock.

2. Simple Mocking Example

Before we start exploring different matchers, let’s take a look at our context. Throughout this tutorial, we’ll use a pretty basic user service in our examples.

Here’s our simple IUserService interface:

public interface IUserService {
    public boolean addUser(User user);
    public List<User> findByEmail(String email);
    public List<User> findByAge(double age);  
}

And the related User model:

public class User {
    private long id;
    private String firstName;
    private String lastName;
    private double age;
    private String email;

    // standard constructor, getters, setters
}

So, we’ll start by merely mocking our IUserService to use it in our examples:

private IUserService userService = mock(IUserService.class);

Now, let’s explore the EasyMock argument matchers.
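
The test snippets that follow assume the usual EasyMock and JUnit static imports, roughly:

import static org.easymock.EasyMock.*;
import static org.junit.Assert.*;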

3. Equality Matchers

First, we’ll use the eq() matcher to match the newly added User:

@Test
public void givenUserService_whenAddNewUser_thenOK() {        
    expect(userService.addUser(eq(new User()))).andReturn(true);
    replay(userService);

    boolean result = userService.addUser(new User());
    verify(userService);
    assertTrue(result);
}

This matcher is available for both primitives and objects, and it uses the equals() method for objects.

Similarly, we can use same() matcher for matching a specific User:

@Test
public void givenUserService_whenAddSpecificUser_thenOK() {
    User user = new User();
    
    expect(userService.addUser(same(user))).andReturn(true);
    replay(userService);

    boolean result = userService.addUser(user);
    verify(userService);
    assertTrue(result);
}

The same() matcher compares arguments using “==”, meaning it compares User instances in our case.

If we don’t use any matchers, arguments are compared by default using equals().

For arrays, we also have the aryEq() matcher which is based on the Arrays.equals() method.

4. Any Matchers

There are multiple any matchers, like anyInt(), anyBoolean(), and anyDouble(). These specify that the argument should have the given type.

Let’s see an example of using anyString() to match the expected email to be any String value:

@Test
public void givenUserService_whenSearchForUserByEmail_thenFound() {
    expect(userService.findByEmail(anyString()))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByEmail("test@example.com");
    verify(userService);
    assertEquals(0,result.size());
}

We can also use isA() to match an argument to be an instance of a specific class:

@Test
public void givenUserService_whenAddUser_thenOK() {
    expect(userService.addUser(isA(User.class))).andReturn(true);
    replay(userService);

    boolean result = userService.addUser(new User());
    verify(userService);
    assertTrue(result);
}

Here, we expect the addUser() method parameter to be of type User.

5. Null Matchers

Next, we can use the isNull() and notNull() matchers to match null values.

In the following example, we’ll use the isNull() matcher to match if the added User value is null:

@Test
public void givenUserService_whenAddNull_thenFail() {
    expect(userService.addUser(isNull())).andReturn(false);
    replay(userService);

    boolean result = userService.addUser(null);
    verify(userService);
    assertFalse(result);
}

Similarly, we can use notNull() to match a non-null added User value:

@Test
public void givenUserService_whenAddNotNull_thenOK() {
    expect(userService.addUser(notNull())).andReturn(true);
    replay(userService);

    boolean result = userService.addUser(new User());
    verify(userService);
    assertTrue(result);
}

6. String Matchers

There are multiple useful matchers that we can use with String arguments.

First, we’ll use the startsWith() matcher to match a user’s email prefix:

@Test
public void whenSearchForUserByEmailStartsWith_thenFound() {        
    expect(userService.findByEmail(startsWith("test")))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByEmail("test@example.com");
    verify(userService);
    assertEquals(0,result.size());
}

Similarly, we’ll use the endsWith() matcher for the email suffix:

@Test
public void givenUserService_whenSearchForUserByEmailEndsWith_thenFound() {        
    expect(userService.findByEmail(endsWith(".com")))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByEmail("test@example.com");
    verify(userService);
    assertEquals(0,result.size());
}

More generally, we can use contains() to match the email with a given substring:

@Test
public void givenUserService_whenSearchForUserByEmailContains_thenFound() {        
    expect(userService.findByEmail(contains("@")))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByEmail("test@example.com");
    verify(userService);
    assertEquals(0,result.size());
}

Or even match our email to a specific regex using matches():

@Test
public void givenUserService_whenSearchForUserByEmailMatches_thenFound() {        
    expect(userService.findByEmail(matches(".+\\@.+\\..+")))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByEmail("test@example.com");
    verify(userService);
    assertEquals(0,result.size());
}

7. Number Matchers

We also have a few matchers for numeric values that we can use.

Let’s see an example of using the lt() matcher to match the age argument to be less than 100:

@Test
public void givenUserService_whenSearchForUserByAgeLessThan_thenFound() {    
    expect(userService.findByAge(lt(100.0)))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByAge(20);        
    verify(userService);
    assertEquals(0,result.size());
}

Similarly, we can also use geq() to match the age argument to be greater than or equal to 10:

@Test
public void givenUserService_whenSearchForUserByAgeGreaterThan_thenFound() {    
    expect(userService.findByAge(geq(10.0)))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByAge(20);        
    verify(userService);
    assertEquals(0,result.size());
}

The available number matchers are:

  • lt() – less than the given value
  • leq() – less than or equal
  • gt() – greater than
  • geq() – greater than or equal

8. Combine Matchers

We can also combine multiple matchers using and(), or() and not() matchers.

Let’s see how we can combine two matchers to verify that the age value is both greater than 10 and less than 100:

@Test
public void givenUserService_whenSearchForUserByAgeRange_thenFound() {
    expect(userService.findByAge(and(gt(10.0),lt(100.0))))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByAge(20);        
    verify(userService);
    assertEquals(0,result.size());
}

Another example we can look at is combining not() with endsWith() to match emails that don’t end with “.com”:

@Test
public void givenUserService_whenSearchForUserByEmailNotEndsWith_thenFound() {
    expect(userService.findByEmail(not(endsWith(".com"))))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByEmail("test@example.org");
    verify(userService);
    assertEquals(0,result.size());
}

9. Custom Matcher

Finally, we’ll discuss how to create a custom EasyMock matcher.

The goal is to create a simple minCharCount() matcher to match strings with a length greater than or equal to the given value:

@Test
public void givenUserService_whenSearchForUserByEmailCharCount_thenFound() {        
    expect(userService.findByEmail(minCharCount(5)))
      .andReturn(Collections.emptyList());
    replay(userService);

    List<User> result = userService.findByEmail("test@example.com");
    verify(userService);
    assertEquals(0,result.size());
}

To create a custom argument matcher, we need to:

  • create a new class that implements the IArgumentMatcher interface
  •  create a static method with the new matcher name and register an instance of the class above using reportMatcher()

Let’s see both steps in our minCharCount() method that declares an anonymous class within it:

public static String minCharCount(int value){
    EasyMock.reportMatcher(new IArgumentMatcher() {
        @Override
        public boolean matches(Object argument) {
            return argument instanceof String 
              && ((String) argument).length() >= value;
        }
 
        @Override
        public void appendTo(StringBuffer buffer) {
            buffer.append("charCount(\"" + value + "\")");
        }
    });    
    return null;
}

Also, note that the IArgumentMatcher interface has two methods: matches() and appendTo(). 

The first method contains the argument validation and logic for our matcher, while the second is used to append the matcher String representation to be printed in case of failure.

10. Conclusion

We covered EasyMock predefined argument matchers for different data types and how to create our custom matcher.

The full source code for the examples is available over on GitHub.

Double-Checked Locking with Singleton


1. Introduction

In this tutorial, we’ll talk about the double-checked locking design pattern. This pattern reduces the number of lock acquisitions by simply checking the locking condition beforehand. As a result of this, there’s usually a performance boost.

Let’s take a deeper look at how it works.

2. Implementation

To begin with, let’s consider a simple singleton with draconian synchronization:

public class DraconianSingleton {
    private static DraconianSingleton instance;
    public static synchronized DraconianSingleton getInstance() {
        if (instance == null) {
            instance = new DraconianSingleton();
        }
        return instance;
    }

    // private constructor and other methods ...
}

Despite this class being thread-safe, we can see that there’s a clear performance drawback: each time we want to get the instance of our singleton, we need to acquire a potentially unnecessary lock.

To fix that, we could instead start by verifying if we need to create the object in the first place and only in that case we would acquire the lock.

Going further, we want to perform the same check again as soon as we enter the synchronized block, in order to keep the operation atomic:

public class DclSingleton {
    private static volatile DclSingleton instance;
    public static DclSingleton getInstance() {
        if (instance == null) {
            synchronized (DclSingleton.class) {
                if (instance == null) {
                    instance = new DclSingleton();
                }
            }
        }
        return instance;
    }

    // private constructor and other methods...
}

One thing to keep in mind with this pattern is that the field needs to be volatile to prevent cache incoherence issues. In fact, the Java memory model allows the publication of partially initialized objects, and this may, in turn, lead to subtle bugs.

3. Alternatives

Even though the double-checked locking can potentially speed things up, it has at least two issues:

  • since it requires the volatile keyword to work properly, it’s not compatible with Java 1.4 and lower versions
  • it’s quite verbose and it makes the code difficult to read

For these reasons, let’s look into some other options without these flaws. All of the following methods delegate the synchronization task to the JVM.

3.1. Early Initialization

The easiest way to achieve thread safety is to inline the object creation or to use an equivalent static block. This takes advantage of the fact that static fields and blocks are initialized one after another (Java Language Specification 12.4.2):

public class EarlyInitSingleton {
    private static final EarlyInitSingleton INSTANCE = new EarlyInitSingleton();
    public static EarlyInitSingleton getInstance() {
        return INSTANCE;
    }
    
     // private constructor and other methods...
}

3.2. Initialization on Demand

Additionally, since we know from the Java Language Specification reference in the previous paragraph that a class initialization occurs the first time we use one of its methods or fields, we can use a nested static class to implement lazy initialization:

public class InitOnDemandSingleton {
    private static class InstanceHolder {
        private static final InitOnDemandSingleton INSTANCE = new InitOnDemandSingleton();
    }
    public static InitOnDemandSingleton getInstance() {
        return InstanceHolder.INSTANCE;
    }

     // private constructor and other methods...
}

In this case, the InstanceHolder class initializes its INSTANCE field the first time it’s accessed, which happens when getInstance() is invoked.

3.3. Enum Singleton

The last solution comes from the Effective Java book (Item 3) by Joshua Bloch and uses an enum instead of a class. At the time of writing, this is considered to be the most concise and safe way to write a singleton:

public enum EnumSingleton {
    INSTANCE;

    // other methods...
}
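
Clients then obtain the instance simply by referring to the enum constant, for example:

// the JVM guarantees the constant is instantiated exactly once
EnumSingleton singleton = EnumSingleton.INSTANCE;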

4. Conclusion

To sum up, this quick article went through the double-checked locking pattern, its limits and some alternatives.

In practice, the excessive verbosity and lack of backward compatibility make this pattern error-prone and thus we should avoid it. Instead, we should use an alternative that lets the JVM do the synchronizing.

As always, the code of all examples is available on GitHub.

Apache Ignite with Spring Data


1. Overview

In this quick guide, we’re going to focus on how to integrate the Spring Data API with the Apache Ignite platform.

To learn about Apache Ignite check out our previous guide.

2. Maven Setup

In addition to the existing dependencies, we have to enable Spring Data support:

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring-data</artifactId>
    <version>${ignite.version}</version>
</dependency>

The ignite-spring-data artifact can be downloaded from Maven Central.

3. Model and Repository

To demonstrate the integration, we’ll build an application which stores employees into Ignite’s cache by using a Spring Data API.

The POJO of the EmployeeDTO will look like this:

public class EmployeeDTO implements Serializable {
 
    @QuerySqlField(index = true)
    private Integer id;
    
    @QuerySqlField(index = true)
    private String name;
    
    @QuerySqlField(index = true)
    private boolean isEmployed;

    // getters, setters
}

Here, the @QuerySqlField annotation enables querying the fields using SQL.

Next, we’ll create the repository to persist the Employee objects:

@RepositoryConfig(cacheName = "baeldungCache")
public interface EmployeeRepository 
  extends IgniteRepository<EmployeeDTO, Integer> {
    EmployeeDTO getEmployeeDTOById(Integer id);
}

Apache Ignite uses its own IgniteRepository which extends from Spring Data’s CrudRepository. It also enables access to the SQL grid from Spring Data. 

It supports the standard CRUD methods, except for a few that don’t require an id. We’ll take a closer look at why in the testing section.

The @RepositoryConfig annotation maps the EmployeeRepository to Ignite’s baeldungCache.

4. Spring Configuration

Let’s now create our Spring configuration class.

We’ll use the @EnableIgniteRepositories annotation to add support for Ignite repositories:

@Configuration
@EnableIgniteRepositories
public class SpringDataConfig {

    @Bean
    public Ignite igniteInstance() {
        IgniteConfiguration config = new IgniteConfiguration();

        CacheConfiguration cache = new CacheConfiguration("baeldungCache");
        cache.setIndexedTypes(Integer.class, EmployeeDTO.class);

        config.setCacheConfiguration(cache);
        return Ignition.start(config);
    }
}

Here, the igniteInstance() method creates and passes the Ignite instance to IgniteRepositoryFactoryBean in order to get access to the Apache Ignite cluster.

We’ve also defined and set the baeldungCache configuration. The setIndexedTypes() method sets the SQL schema for the cache.

5. Testing the Repository

To test the application, let’s register the SpringDataConfig class in the application context and get the EmployeeRepository from it:

AnnotationConfigApplicationContext context
 = new AnnotationConfigApplicationContext();
context.register(SpringDataConfig.class);
context.refresh();

EmployeeRepository repository = context.getBean(EmployeeRepository.class);

Then, we want to create the EmployeeDTO instance and save it in the cache:

EmployeeDTO employeeDTO = new EmployeeDTO();
employeeDTO.setId(1);
employeeDTO.setName("John");
employeeDTO.setEmployed(true);

repository.save(employeeDTO.getId(), employeeDTO);

Here we used the save(key, value) method of IgniteRepository. The reason for this is that the standard CrudRepository save(entity), save(entities), delete(entity) operations aren’t supported yet.

The issue behind this is that the IDs generated by the CrudRepository.save() method are not unique in the cluster.

Instead, we have to use the save(key, value), save(Map<ID, Entity> values), deleteAll(Iterable<ID> ids) methods.

After, we can get the employee object from the cache by using Spring Data’s getEmployeeDTOById() method:

EmployeeDTO employee = repository.getEmployeeDTOById(employeeDTO.getId());
System.out.println(employee);

The output shows that we successfully fetched the initial object:

EmployeeDTO{id=1, name='John', isEmployed=true}

Alternatively, we can retrieve the same object using the IgniteCache API:

IgniteCache<Integer, EmployeeDTO> cache = ignite.cache("baeldungCache");
EmployeeDTO employeeDTO = cache.get(employeeId);

Or by using the standard SQL:

SqlFieldsQuery sql = new SqlFieldsQuery(
  "select * from EmployeeDTO where isEmployed = 'true'");

6. Summary

This short tutorial shows how to integrate the Spring Data Framework with the Apache Ignite project. With the help of the practical example, we learned to work with the Apache Ignite cache by using the Spring Data API.

As usual, the complete code for this article is available in the GitHub project.

Spring Assert Statements


1. Overview

In this tutorial, we’ll focus on and describe the purpose of the Spring Assert class and demonstrate how to use it.

2. Purpose of the Assert Class

The Spring Assert class helps us validate arguments. By using methods of the Assert class, we can write assumptions which we expect to be true. And if they aren’t met, a runtime exception is thrown.

Each of Assert’s methods can be compared with the Java assert statement, which throws an Error at runtime if its condition fails. The interesting fact is that those assertions can be disabled.

Here are some characteristics of the Spring Assert’s methods:

  • Assert’s methods are static
  • They throw either IllegalArgumentException or IllegalStateException
  • The first parameter is usually an argument for validation or a logical condition to check
  • The last parameter is usually an exception message which is displayed if the validation fails
  • The message can be passed either as a String parameter or as a Supplier<String> parameter

Also note that despite the similar name, Spring assertions have nothing in common with the assertions of JUnit and other testing frameworks. Spring assertions aren’t for testing, but for debugging.

3. Example of Use

Let’s define a Car class with a public method drive():

public class Car {
    private String state = "stop";

    public void drive(int speed) {
        Assert.isTrue(speed > 0, "speed must be positive");
        this.state = "drive";
        // ...
    }
}

We can see that speed must be a positive number. The line above is a short way to check the condition and throw an exception if it fails:

if (!(speed > 0)) {
    throw new IllegalArgumentException("speed must be positive");
}

Each Assert’s public method contains roughly this code – a conditional block with a runtime exception from which the application is not expected to recover.

If we try to call the drive() method with a negative argument, an IllegalArgumentException exception will be thrown:

Exception in thread "main" java.lang.IllegalArgumentException: speed must be positive
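
Since Spring 5, most Assert methods also accept the message as a Supplier<String>, so the message is only constructed if the assertion actually fails. Here’s a small sketch of the same check using this overload:

Assert.isTrue(speed > 0, () -> "speed must be positive, but was " + speed);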

4. Logical Assertions

4.1. isTrue()

This assertion was discussed above. It accepts a boolean condition and throws an IllegalArgumentException when the condition is false.

4.2. state()

The state() method has the same signature as isTrue() but throws the IllegalStateException.

As the name suggests, it should be used when the method mustn’t be continued because of an illegal state of the object.

Imagine that we can’t call the fuel() method if the car is running. Let’s use the state() assertion in this case:

public void fuel() {
    Assert.state(this.state.equals("stop"), "car must be stopped");
    // ...
}

Of course, we can validate everything using logical assertions. But for better readability, we can use additional assertions which make our code more expressive.

5. Object and Type Assertions

5.1. notNull()

We can assume that an object is not null by using the notNull() method:

public void changeOil(String oil) {
    Assert.notNull(oil, "oil mustn't be null");
    // ...
}

5.2. isNull()

On the other hand, we can check if an object is null using the isNull() method:

public void replaceBattery(CarBattery carBattery) {
    Assert.isNull(
      carBattery.getCharge(), 
      "to replace battery the charge must be null");
    // ...
}

5.3. isInstanceOf()

To check that an object is an instance of a specific type, we can use the isInstanceOf() method:

public void changeEngine(Engine engine) {
    Assert.isInstanceOf(ToyotaEngine.class, engine);
    // ...
}

In our example, the check passes successfully as ToyotaEngine is a subclass of Engine.

5.4. isAssignable()

To check types, we can use Assert.isAssignable():

public void repairEngine(Engine engine) {
    Assert.isAssignable(Engine.class, ToyotaEngine.class);
    // ...
}

The two assertions above represent an is-a relationship.

6. Text Assertions

Text assertions are used to perform checks on String arguments.

6.1. hasLength()

We can check that a String isn’t empty, meaning it contains at least one character (even just whitespace), by using the hasLength() method:

public void startWithHasLength(String key) {
    Assert.hasLength(key, "key must not be null and must not be empty");
    // ...
}

6.2. hasText()

We can strengthen the condition and check if a String contains at least one non-whitespace character, by using the hasText() method:

public void startWithHasText(String key) {
    Assert.hasText(
      key, 
      "key must not be null and must contain at least one non-whitespace  character");
    // ...
}

6.3. doesNotContain()

We can determine if a String argument doesn’t contain a specific substring by using the doesNotContain() method:

public void startWithNotContain(String key) {
    Assert.doesNotContain(key, "123", "key mustn't contain 123");
    // ...
}

7. Collection and Map Assertions

7.1. notEmpty() for collections

As the name says, the notEmpty() method asserts that a collection is not empty, meaning that it’s not null and contains at least one element:

public void repair(Collection<String> repairParts) {
    Assert.notEmpty(
      repairParts, 
      "collection of repairParts mustn't be empty");
    // ...
}

7.2. notEmpty() for maps

The same method is overloaded for maps, and we can check if a map is not empty and contains at least one entry:

public void repair(Map<String, String> repairParts) {
    Assert.notEmpty(
      repairParts, 
      "map of repairParts mustn't be empty");
    // ...
}

8. Array Assertions

8.1. notEmpty() for arrays

Finally, we can check if an array is not empty and contains at least one element by using the notEmpty() method:

public void repair(String[] repairParts) {
    Assert.notEmpty(
      repairParts, 
      "array of repairParts mustn't be empty");
    // ...
}

8.2. noNullElements()

We can verify that an array doesn’t contain null elements by using the noNullElements() method:

public void repairWithNoNull(String[] repairParts) {
    Assert.noNullElements(
      repairParts, 
      "array of repairParts mustn't contain null elements");
    // ...
}

Note that this check still passes if the array is empty, as long as there are no null elements in it.

9. Conclusion

In this article, we explored the Assert class. This class is widely used within the Spring framework, but we could easily write more robust and expressive code taking advantage of it.

As always, the complete code for this article can be found in the GitHub project.
