Channel: Baeldung

Publish and Receive Messages with the NATS Java Client


1. Overview

In this tutorial, we’ll use the Java Client for NATS to connect to a NATS server and publish and receive messages.

NATS offers three primary modes of message exchange. Publish/Subscribe delivers messages to all subscribers of a topic. Request/Reply messaging sends requests via topics and routes responses back to the requestor.

Subscribers can also join message queue groups when they subscribe to a topic. Messages sent to the associated topic are only delivered to one subscriber in the queue group.

2. Setup

2.1. Maven Dependency

First, we need to add the NATS library to our pom.xml:

<dependency>
    <groupId>io.nats</groupId>
    <artifactId>jnats</artifactId>
    <version>1.0</version>
</dependency>

The latest version of the library can be found here, and the GitHub project is here.

2.2. NATS Server

Second, we’ll need a NATS server for exchanging messages. There are instructions for all major platforms here.

We assume that there’s a server running on localhost:4222.

3. Connect and Exchange Messages

3.1. Connect to NATS

The static connect() method in the Nats class creates Connections.

If we want to use a connection with default options and listening at localhost on port 4222, we can use the default method:

Connection natsConnection = Nats.connect();

But Connections have many configurable options, a few of which we want to override.

We’ll create an Options object and pass it to Nats:

private Connection initConnection() {
    Options options = new Options.Builder()
      .errorCb(ex -> log.error("Connection Exception: ", ex))
      .disconnectedCb(event -> log.error("Channel disconnected: {}", event.getConnection()))
      .reconnectedCb(event -> log.error("Reconnected to server: {}", event.getConnection()))
      .build();

    return Nats.connect(uri, options);
}

NATS Connections are durable. The API will attempt to reconnect a lost connection.

We’ve installed callbacks to notify us of when a disconnect occurs and when the connection is restored. In this example, we’re using lambdas, but for applications that need to do more than simply log the event, we can install objects that implement the required interfaces.

We can run a quick test. Create a connection and add a sleep for 60 seconds to keep the process running:

Connection natsConnection = initConnection();
Thread.sleep(60000);

Run this. Then stop and start your NATS server:

[jnats-callbacks] ERROR com.baeldung.nats.NatsClient 
  - Channel disconnected: io.nats.client.ConnectionImpl@79428dc1
[reconnect] WARN io.nats.client.ConnectionImpl 
  - couldn't connect to nats://localhost:4222 (nats: connection read error)
[jnats-callbacks] ERROR com.baeldung.nats.NatsClient 
  - Reconnected to server: io.nats.client.ConnectionImpl@79428dc1

We can see the callbacks log the disconnection and reconnect.

3.2. Subscribe to Messages

Now that we have a connection, we can work on message processing.

A NATS Message is a container for a byte array. In addition to the expected setData(byte[]) and byte[] getData() methods, there are methods for setting and getting the message destination and reply-to topics.

We subscribe to topics, which are Strings.

NATS supports both synchronous and asynchronous subscriptions.

Let’s take a look at an asynchronous subscription:

AsyncSubscription subscription = natsConnection
  .subscribe(topic, msg -> log.info("Received message on {}", msg.getSubject()));

The API delivers Messages to our MessageHandler on its own thread.

Some applications may want to control the thread that processes messages instead:

SyncSubscription subscription = natsConnection.subscribeSync("foo.bar");
Message message = subscription.nextMessage(1000);

SyncSubscription has a blocking nextMessage() method that will block for the specified number of milliseconds. We’ll use synchronous subscriptions for our tests to keep the test cases simple.

AsyncSubscription and SyncSubscription both have an unsubscribe() method that we can use to close the subscription.

subscription.unsubscribe();

3.3. Publish Messages

Publishing Messages can be done in several ways.

The simplest method requires only a topic String and the message bytes:

natsConnection.publish("foo.bar", "Hi there!".getBytes());

If a publisher wishes a response, or wants to provide specific information about the source of a message, it may also send a message with a reply-to topic:

natsConnection.publish("foo.bar", "bar.foo", "Hi there!".getBytes());

There are also overloads for a few other combinations such as passing in a Message instead of bytes.

3.4. A Simple Message Exchange

Given a valid Connection, we can write a test that verifies message exchange:

SyncSubscription fooSubscription = natsConnection.subscribe("foo.bar");
SyncSubscription barSubscription = natsConnection.subscribe("bar.foo");
natsConnection.publish("foo.bar", "bar.foo", "hello there".getBytes());

Message message = fooSubscription.nextMessage();
assertNotNull("No message!", message);
assertEquals("hello there", new String(message.getData()));

natsConnection
  .publish(message.getReplyTo(), message.getSubject(), "hello back".getBytes());

message = barSubscription.nextMessage();
assertNotNull("No message!", message);
assertEquals("hello back", new String(message.getData()));

We start by subscribing to two topics with synchronous subscriptions since they work much better inside a JUnit test. Then we send a message to one of them, specifying the other as a replyTo address.

After reading the message from the first destination we “flip” the topics to send a response.

3.5. Wildcard Subscriptions

The NATS server supports topic wildcards.

Wildcards operate on topic tokens that are separated by the ‘.’ character. The asterisk character ‘*’ matches an individual token. The greater-than symbol ‘>’ is a wildcard match for the remainder of a topic, which may be more than one token.

For example:

  • foo.* matches foo.bar, foo.requests, but not foo.bar.requests
  • foo.> matches foo.bar, foo.requests, foo.bar.requests, foo.bar.baeldung, etc.
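To make these rules concrete, here’s a small, self-contained sketch of the token-matching logic (the SubjectMatcher class is our own illustration, not part of the jnats API):

```java
// Toy illustration of NATS subject wildcard matching; not part of the jnats API.
class SubjectMatcher {

    static boolean matches(String pattern, String subject) {
        String[] p = pattern.split("\\.");
        String[] s = subject.split("\\.");
        for (int i = 0; i < p.length; i++) {
            if (p[i].equals(">")) {
                return s.length > i;      // '>' needs at least one remaining token
            }
            if (i >= s.length) {
                return false;             // subject ran out of tokens
            }
            if (!p[i].equals("*") && !p[i].equals(s[i])) {
                return false;             // a literal token must match exactly
            }
        }
        return p.length == s.length;      // extra subject tokens need a '>'
    }
}
```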

Let’s try a few tests:

SyncSubscription fooSubscription = client.subscribeSync("foo.*");

client.publishMessage("foo.bar", "bar.foo", "hello there");

Message message = fooSubscription.nextMessage(200);
assertNotNull("No message!", message);
assertEquals("hello there", new String(message.getData()));

client.publishMessage("foo.bar.plop", "bar.foo", "hello there");
message = fooSubscription.nextMessage(200);
assertNull("Got message!", message);

SyncSubscription barSubscription = client.subscribeSync("foo.>");

client.publishMessage("foo.bar.plop", "bar.foo", "hello there");

message = barSubscription.nextMessage(200);
assertNotNull("No message!", message);
assertEquals("hello there", new String(message.getData()));

4. Request/Reply Messaging

Our message exchange test resembled a common idiom in pub/sub messaging systems: request/reply. NATS has explicit support for this request/reply messaging.

Publishers can install a handler for requests using the asynchronous subscription method we used above:

AsyncSubscription subscription = natsConnection
  .subscribe("foo.bar.requests", new MessageHandler() {
    @Override
    public void onMessage(Message msg) {
        natsConnection.publish(msg.getReplyTo(), reply.getBytes());
    }
});

Or they can respond to requests as they arrive.

The API provides a request() method:

Message reply = natsConnection.request("foo.bar.requests", request.getBytes(), 100);

This method creates a temporary mailbox for the response and writes the reply-to address for us.

request() returns the response, or null if the request times out. The last argument is the number of milliseconds to wait.

We can modify our test for request/reply:

natsConnection.subscribe("salary.requests", message -> {
    natsConnection.publish(message.getReplyTo(), "denied!".getBytes());
});
Message reply = natsConnection.request("salary.requests", "I need a raise.".getBytes(), 100);
assertNotNull("No message!", reply);
assertEquals("denied!", new String(reply.getData()));

5. Message Queues

Subscribers may specify queue groups at subscription time. When a message is published to the group’s topic, NATS delivers it to one, and only one, subscriber in the group.

Queue groups do not persist messages. If no listeners are available, the message is discarded.
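This delivery rule can be pictured with a tiny toy model (our own illustration, not how the NATS server is actually implemented): the group holds its subscribers in a list, and each message goes to exactly one member, or is dropped when the group is empty:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a queue group: each message is delivered to exactly one member.
// Our own illustration; the real NATS server chooses the receiver internally.
class QueueGroup {
    private final List<String> members = new ArrayList<>();
    private int next = 0;

    void join(String subscriber) {
        members.add(subscriber);
    }

    // Returns the single subscriber that receives the message,
    // or null when no listener is available (the message is discarded).
    String deliver() {
        if (members.isEmpty()) {
            return null;
        }
        return members.get(next++ % members.size());
    }
}
```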

5.1. Subscribing to Queues

Subscribers specify a queue group name as a String:

SyncSubscription subscription = natsConnection.subscribe("topic", "queue name");

There is also an asynchronous version, of course:

AsyncSubscription subscription = natsConnection
  .subscribe("topic", "queue name", new MessageHandler() {
    @Override
    public void onMessage(Message msg) {
        log.info("Received message on {}", msg.getSubject());
    }
});

The subscription creates the queue on the NATS server.

5.2. Publishing to Queues

Publishing a message to a queue group simply requires publishing to the associated topic:

natsConnection.publish("foo", "queue message".getBytes());

The NATS server will route the message to the queue and select a message receiver.

We can verify this with a test:

SyncSubscription queue1 = natsConnection.subscribe("foo", "queue name");
SyncSubscription queue2 = natsConnection.subscribe("foo", "queue name");

natsConnection.publish("foo", "foobar".getBytes());

List<Message> messages = new ArrayList<>();

Message message = queue1.nextMessage(200);
if (message != null) messages.add(message);

message = queue2.nextMessage(200);
if (message != null) messages.add(message);

assertEquals(1, messages.size());

We only receive one message.

If we change the first two lines to normal subscriptions:

SyncSubscription queue1 = natsConnection.subscribe("foo");
SyncSubscription queue2 = natsConnection.subscribe("foo");

The test fails because the message is delivered to both subscribers.

6. Conclusion

In this brief introduction, we connected to a NATS server and sent both pub/sub messages and load-balanced queue messages. We looked at NATS support for wildcard subscriptions. We also used request/reply messaging.

Code samples, as always, can be found over on GitHub.


Integration Testing with a Local DynamoDB Instance


1. Overview

If we develop an application which uses Amazon’s DynamoDB, it can be tricky to develop integration tests without having a local instance.

In this tutorial, we’ll explore multiple ways of configuring, starting and stopping a local DynamoDB for our integration tests.

This tutorial also complements our existing DynamoDB article.

2. Configuration

2.1. Maven Setup

DynamoDB Local is a tool developed by Amazon which supports all the DynamoDB APIs. It doesn’t manipulate the actual DynamoDB tables in production; instead, all operations are performed locally.

First, we add the DynamoDB Local dependency to the list of dependencies in our Maven configuration:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>DynamoDBLocal</artifactId>
    <version>1.11.86</version>
    <scope>test</scope>
</dependency>

Next, we also need to add the Amazon DynamoDB repository, since the dependency doesn’t exist in the Maven Central repository.

We can select the closest Amazon server to our current IP address geolocation:

<repository>
    <id>dynamodb-local</id>
    <name>DynamoDB Local Release Repository</name>
    <url>https://s3-us-west-2.amazonaws.com/dynamodb-local/release</url>
</repository>

2.2. Add SQLite4Java Dependency

DynamoDB Local uses the SQLite4Java library internally; thus, we also need to include the library files when we run the tests. The SQLite4Java library files depend on the environment where the test is running, but Maven can pull them transitively once we declare the DynamoDBLocal dependency.

Next, we need to add a new build step to copy native libraries into a specific folder that we’ll define in the JVM system property later on.

Let’s copy the transitively-pulled SQLite4Java library files to a folder named native-libs:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>2.10</version>
    <executions>
        <execution>
            <id>copy</id>
            <phase>test-compile</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <includeScope>test</includeScope>
                <includeTypes>so,dll,dylib</includeTypes>
                <outputDirectory>${project.basedir}/native-libs</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>

2.3. Set the SQLite4Java System Property

Now, we’ll reference the previously created folder (where the SQLite4Java libraries are located), using a JVM system property named sqlite4java.library.path:

System.setProperty("sqlite4java.library.path", "native-libs");

In order to successfully run the test later, it’s mandatory to have all the SQLite4Java libraries in the folder defined by the sqlite4java.library.path system property. We must run Maven test-compile (mvn test-compile) at least once to fulfill the prerequisite.

3. Setting up the Test Database’s Lifecycle

We can define the code to create and start the local DynamoDB server in a setup method annotated with @BeforeClass; and, symmetrically, stop the server in a teardown method annotated with @AfterClass.

In the following example, we’ll start up the local DynamoDB server on port 8000 and make sure it’s stopped again after running our tests:

public class ProductInfoDAOIntegrationTest {
    private static DynamoDBProxyServer server;

    @BeforeClass
    public static void setupClass() throws Exception {
        System.setProperty("sqlite4java.library.path", "native-libs");
        String port = "8000";
        server = ServerRunner.createServerFromCommandLineArgs(
          new String[]{"-inMemory", "-port", port});
        server.start();
        //...
    }

    @AfterClass
    public static void teardownClass() throws Exception {
        server.stop();
    }

    //...
}

We can also run the local DynamoDB server on any available port instead of a fixed port using java.net.ServerSocket. In this case, we must also configure the test to set the endpoint to the correct DynamoDB port:

public String getAvailablePort() throws IOException {
    // try-with-resources closes the socket so the port is free for DynamoDB Local to bind
    try (ServerSocket serverSocket = new ServerSocket(0)) {
        return String.valueOf(serverSocket.getLocalPort());
    }
}

4. Alternative Approach: Using @ClassRule

We can wrap the previous logic in a JUnit rule which performs the same action:

public class LocalDbCreationRule extends ExternalResource {
    private DynamoDBProxyServer server;

    public LocalDbCreationRule() {
        System.setProperty("sqlite4java.library.path", "native-libs");
    }

    @Override
    protected void before() throws Exception {
        String port = "8000";
        server = ServerRunner.createServerFromCommandLineArgs(
          new String[]{"-inMemory", "-port", port});
        server.start();
    }

    @Override
    protected void after() {
        this.stopUnchecked(server);
    }

    protected void stopUnchecked(DynamoDBProxyServer dynamoDbServer) {
        try {
            dynamoDbServer.stop();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }    
    }
}

To use our custom rule, we’ll have to create and annotate an instance with @ClassRule as shown below. Again, the test will create and start the local DynamoDB server prior to the test class initialization.

Note that the access modifier of the test rule must be public in order to run the test:

public class ProductInfoRepositoryIntegrationTest {
    @ClassRule
    public static LocalDbCreationRule dynamoDB = new LocalDbCreationRule();

    //...
}


Before wrapping up, a very quick note – since DynamoDB Local uses the SQLite database internally, its performance doesn’t reflect the real performance in production.

5. Conclusion

In this article, we’ve seen how to set up and configure DynamoDB Local to run integration tests.

As always, the source code and the configuration example can be found over on GitHub.

Find Sum and Average in a Java Array


1. Introduction

In this quick tutorial, we’ll cover how to calculate the sum and average of an array’s elements using both Java standard loops and the Stream API.

2. Find Sum of Array Elements

2.1. Sum Using a For Loop

In order to find the sum of all elements in an array, we can simply iterate over the array and add each element to a sum accumulator variable.

This very simply starts with a sum of 0 and adds each item in the array as we go:

public static int findSumWithoutUsingStream(int[] array) {
    int sum = 0;
    for (int value : array) {
        sum += value;
    }
    return sum;
}

2.2. Sum With the Java Stream API

We can use the Stream API for achieving the same result:

public static int findSumUsingStream(int[] array) {
    return Arrays.stream(array).sum();
}

It’s important to know that the sum() method only supports primitive type streams.

If we want to use a stream over boxed Integer values, we must first convert the stream into an IntStream using the mapToInt method.

After that, we can apply the sum() method to our newly converted IntStream:

public static int findSumUsingStream(Integer[] array) {
    return Arrays.stream(array)
      .mapToInt(Integer::intValue)
      .sum();
}

You can read a lot more about the Stream API here.

3. Find Average in a Java Array

3.1. Average Without the Stream API

Once we know how to calculate the sum of array elements, finding the average is pretty easy – as Average = Sum of Elements / Number of Elements:

public static double findAverageWithoutUsingStream(int[] array) {
    int sum = findSumWithoutUsingStream(array);
    return (double) sum / array.length;
}

Notes:

  1. Dividing an int by another int returns an int result. To get an accurate average, we first cast sum to double.
  2. Java Array has a length field which stores the number of elements in the array.
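The first note is worth demonstrating, since forgetting the cast is a classic bug (AverageDemo is a throwaway name used only for this illustration):

```java
class AverageDemo {

    // Integer division truncates first: 7 / 2 evaluates to 3 before any widening.
    static double wrongAverage(int sum, int count) {
        return sum / count;
    }

    // Casting one operand to double keeps the fractional part.
    static double correctAverage(int sum, int count) {
        return (double) sum / count;
    }
}
```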

3.2. Average Using the Java Stream API

public static double findAverageUsingStream(int[] array) {
    return Arrays.stream(array).average().orElse(Double.NaN);
}

IntStream.average() returns an OptionalDouble, which may not contain a value and needs special handling.
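For instance, averaging an empty array yields an empty OptionalDouble, and orElse() supplies the fallback value:

```java
import java.util.Arrays;
import java.util.OptionalDouble;

class OptionalAverageDemo {

    static double average(int[] array) {
        // average() returns an empty OptionalDouble for an empty stream;
        // orElse() lets us substitute a sentinel value instead of throwing.
        OptionalDouble avg = Arrays.stream(array).average();
        return avg.orElse(Double.NaN);
    }
}
```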

Read more about Optionals in this article and about the OptionalDouble class in the Java 8 Documentation.

4. Conclusion

In this article, we explored how to find sum/average of int array elements.

As always, the code is available over on GitHub.

Handling Cookies and a Session in a Java Servlet


1. Overview

In this tutorial, we’ll cover the handling of cookies and sessions in Java, using Servlets.

Additionally, we’ll shortly describe what a cookie is, and explore some sample use cases for it.

2. Cookie Basics

Simply put, a cookie is a small piece of data stored on the client-side which servers use when communicating with clients.

They’re used to identify a client when sending subsequent requests. They can also be used for passing data from one servlet to another.

For more details, please refer to this article.

2.1. Create a Cookie

The Cookie class is defined in the javax.servlet.http package.

To send it to the client, we need to create one and add it to the response:

Cookie uiColorCookie = new Cookie("color", "red");
response.addCookie(uiColorCookie);

However, its API is a lot broader – let’s explore it.

2.2. Set the Cookie Expiration Date

We can set the max age (with the setMaxAge(int) method), which defines how many seconds a given cookie should be valid for:

uiColorCookie.setMaxAge(60 * 60);

We set the max age to one hour. After this time, the cookie won’t be sent by the client (browser) with requests, and it should also be removed from the browser cache.
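Since setMaxAge() expects seconds, a TimeUnit conversion can make the intent more readable than the bare 60 * 60 (a small sketch; CookieAgeDemo is our own helper name, not part of the Servlet API):

```java
import java.util.concurrent.TimeUnit;

class CookieAgeDemo {

    // setMaxAge(int) takes seconds; TimeUnit spells out the unit conversion.
    static int oneHourInSeconds() {
        return (int) TimeUnit.HOURS.toSeconds(1);
    }
}
```

We could then pass CookieAgeDemo.oneHourInSeconds() to setMaxAge() instead of the raw arithmetic.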

2.3. Set the Cookie Domain

Another useful method in the Cookie API is setDomain(String).

This allows us to specify the domain names to which the cookie should be delivered by the client. Delivery also depends on whether we specify the domain name explicitly or not.

Let’s set the domain for a cookie:

uiColorCookie.setDomain("example.com");

The cookie will be delivered with each request made to example.com and its subdomains.

If we don’t specify a domain explicitly, it will be set to the domain name which created a cookie.

For example, if we create a cookie from example.com and leave the domain name empty, then it’ll be delivered to www.example.com (without subdomains).

Along with a domain name, we can also specify a path. Let’s have a look at that next.

2.4. Set the Cookie Path

The path specifies where a cookie will be delivered.

If we specify a path explicitly, then a Cookie will be delivered to the given URL and all its subdirectories:

uiColorCookie.setPath("/welcomeUser");

Implicitly, it’ll be set to the URL which created a cookie and all its subdirectories.

Now let’s focus on how we can retrieve their values inside a Servlet.

2.5. Read Cookies in the Servlet

Cookies are added to the request by the client. The client checks each cookie’s parameters and decides whether it can deliver it to the current URL.

We can get all cookies by calling getCookies() on the request (HttpServletRequest) passed to the Servlet.

We can iterate through this array and search for the one we need, e.g., by comparing their names:

public Optional<String> readCookie(String key) {
    return Arrays.stream(request.getCookies())
      .filter(c -> key.equals(c.getName()))
      .map(Cookie::getValue)
      .findAny();
}

2.6. Remove a Cookie

To remove a cookie from a browser, we have to add a new one to the response with the same name, but with a maxAge value set to 0:

Cookie userNameCookieRemove = new Cookie("userName", "");
userNameCookieRemove.setMaxAge(0);
response.addCookie(userNameCookieRemove);

A sample use case for removing cookies is a user logout action – we may need to remove some data which was stored for an active user session.

Now we know how we can handle cookies inside a Servlet.

Next, we’ll cover another important object which we access very often from a Servlet – a Session object.

3. HttpSession Object

The HttpSession is another option for storing user-related data across different requests. A session is a server-side storage holding contextual data.

Data isn’t shared between different session objects (a client can access data from its own session only). It also contains key-value pairs, but in comparison to a cookie, a session can contain an object as a value. The storage implementation mechanism is server-dependent.

A session is matched with a client by a cookie or request parameters. More info can be found here.

3.1. Getting a Session

We can obtain an HttpSession straight from a request:

HttpSession session = request.getSession();

The above code will create a new session in case it doesn’t exist. We can achieve the same by calling:

request.getSession(true)

In case we just want to obtain an existing session and not create a new one, we need to use:

request.getSession(false)

If we access the JSP page for the first time, then a new session gets created by default. We can disable this behavior by setting the session attribute to false:

<%@ page contentType="text/html;charset=UTF-8" session="false" %>

In most cases, a web server uses cookies for session management. When a session object is created, the server creates a cookie with the JSESSIONID key, whose value identifies the session.

3.2. Session Attributes

The session object provides a bunch of methods for accessing (create, read, modify, remove) attributes created for a given user session:

  • setAttribute(String, Object) which creates or replaces a session attribute with a key and a new value
  • getAttribute(String) which reads an attribute value with a given name (key)
  • removeAttribute(String) which removes an attribute with a given name

We can also easily check already existing session attributes by calling getAttributeNames().

As we already mentioned, we can retrieve a session object from a request. Once we have it, we can quickly invoke the methods mentioned above.

We can create an attribute:

HttpSession session = request.getSession();
session.setAttribute("attributeKey", "Sample Value");

The attribute value can be obtained by its key (name):

session.getAttribute("attributeKey");

We can remove an attribute when we don’t need it anymore:

session.removeAttribute("attributeKey");

A well-known use case for a user session is invalidating all the data it stores when a user logs out of our website. The session object provides a solution for it:

session.invalidate();

This method removes the whole session from the web server, so we cannot access its attributes anymore.

The HttpSession object has more methods, but the ones we mentioned are the most common.

4. Conclusion

In this article, we covered two mechanisms which allow us to store user data between subsequent requests to the server – the cookie and the session.

Keep in mind that the HTTP protocol is stateless, so these mechanisms are a must for maintaining state across requests.

As always, code snippets are available over on GitHub.

Shutdown a Spring Boot Application


1. Overview

Managing the lifecycle of a Spring Boot application is very important for a production-ready system. The Spring container handles the creation, initialization, and destruction of all the beans with the help of the ApplicationContext.

The emphasis of this write-up is the destruction phase of the lifecycle. More specifically, we’ll have a look at different ways to shut down a Spring Boot application.

To learn more about how to set up a project using Spring Boot, check out the Spring Boot Starter article, or go over the Spring Boot Configuration.

2. Shutdown Endpoint

By default, all the endpoints are enabled in a Spring Boot application except /shutdown; this is, naturally, part of the Actuator endpoints.

Here’s the Maven dependency to set these up:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

And, if we want to also set up security support, we need:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Lastly, we enable the shutdown endpoint in application.properties file:

management.endpoint.shutdown.enabled=true
endpoints.shutdown.enabled=true

To shut down the Spring Boot application, we simply call a POST method like this:

curl -X POST localhost:port/shutdown

And a quick test fragment, using the Spring MVC testing support:

mockMvc.perform(post("/shutdown")).andExpect(status().isOk());

3. Close Application Context

We can also call the close() method directly using the application context:

ConfigurableApplicationContext ctx = new SpringApplicationBuilder(Application.class)
  .web(false)
  .run();
System.out.println("Spring Boot application started");
ctx.getBean(TerminateBean.class);
ctx.close();

This destroys all the beans, releases the locks, then closes the bean factory. To verify the application shutdown, we use Spring’s standard lifecycle callback with the @PreDestroy annotation:

@PreDestroy
public void onDestroy() throws Exception {
    System.out.println("Spring Container is destroyed!");
}

Here’s the output after running this example:

Spring Boot application started
Closing AnnotationConfigApplicationContext@39b43d60
DefaultLifecycleProcessor - Stopping beans in phase 0
Unregistering JMX-exposed beans on shutdown
Spring Container is destroyed!

The important thing here to keep in mind: while closing the application context, the parent context isn’t affected due to separate lifecycles.

4. Exit SpringApplication

SpringApplication registers a shutdown hook with the JVM to make sure the application exits appropriately.

Beans may implement the ExitCodeGenerator interface to return a specific error code:

ConfigurableApplicationContext ctx = new SpringApplicationBuilder(Application.class)
  .web(false)
  .run();

int exitCode = SpringApplication.exit(ctx, new ExitCodeGenerator() {
    @Override
    public int getExitCode() {
        // return the error code
        return 0;
    }
});

System.exit(exitCode);

The same code with the application of Java 8 lambdas:

SpringApplication.exit(ctx, () -> 0);

After calling System.exit(exitCode), the program terminates with a 0 return code:

Process finished with exit code 0

5. Kill the App Process

Finally, we can also shut down a Spring Boot application from outside the application by using a bash script. Our first step for this option is to have the application write its PID into a file:

SpringApplicationBuilder app = new SpringApplicationBuilder(Application.class).web(false);
app.build().addListeners(new ApplicationPidFileWriter("./bin/shutdown.pid"));
app.run();

Next, create a shutdown.sh file with the following content:

kill $(cat ./bin/shutdown.pid)

The execution of shutdown.sh extracts the process ID from the shutdown.pid file and uses the kill command to terminate the Boot application.

6. Conclusion

In this quick write-up, we’ve covered a few simple methods that can be used to shut down a running Spring Boot application.

While it’s up to the developer to choose an appropriate method, all of these methods should be used by design and on purpose.

For example, .exit() is preferred when we need to pass an error code to another environment, say the JVM, for further actions. Using the application PID gives more flexibility, as we can also start or restart the application with the use of a bash script.

Finally, /shutdown is here to make it possible to terminate the applications externally via HTTP. For all the other cases .close() will work perfectly.

As usual, the complete code for this article is available over on the GitHub project.

Creating and Deploying Smart Contracts with Solidity


1. Overview

The ability to run smart contracts is what has made the Ethereum blockchain so popular and disruptive.

Before we explain what a smart contract is, let’s start with a definition of blockchain:

Blockchain is a public database that keeps a permanent record of digital transactions. It operates as a trustless transactional system, a framework in which individuals can make peer-to-peer transactions without needing to trust a third party or one another.

Let’s see how we can create smart contracts on Ethereum with Solidity.

2. Ethereum

Ethereum is a platform that allows people to write decentralized applications using blockchain technology efficiently.

A decentralized application (Dapp) is a tool that lets people and organizations on different sides of an interaction come together without any centralized intermediary. Early examples of Dapps include BitTorrent (file sharing) and Bitcoin (currency).

We can describe Ethereum as a blockchain with a built-in programming language.

2.1. Ethereum Virtual Machine (EVM)

From a practical standpoint, the EVM can be thought of as a large, decentralized system containing millions of objects, called accounts, which can maintain an internal database, execute code and talk to each other.

The first type of account is probably the most familiar for the average user who uses the network. Its name is EOA (Externally Owned Account); it is used to transmit value (such as Ether) and is controlled by a private key.

On the other hand, there is another type of account: the contract. Let’s go ahead and see what this is about.

3. What is a Smart Contract?

In simple terms, we can see a smart contract as a collection of code stored in the blockchain network that defines conditions all parties using the contract agree upon.

This enables developers to create things that haven’t been invented yet. Think about it for a second – there is no need for a middleman, and there is also no counterparty risk. We can create new markets, store registries of debts or promises, and rest assured that we have the consensus of the network validating the transactions.

Anyone can deploy a smart contract to the decentralized database for a fee proportional to the storage size of the containing code. Nodes wishing to use the smart contract must somehow indicate the result of their participation to the rest of the network.

3.1. Solidity

The main language used in Ethereum is Solidity – a JavaScript-like language developed specifically for writing smart contracts. Solidity is statically typed and supports inheritance, libraries, and complex user-defined types, among other features.

The Solidity compiler turns code into EVM bytecode, which can then be sent to the Ethereum network as a deployment transaction. Such deployments have more substantial transaction fees than smart contract interactions and must be paid by the owner of the contract.

4. Creating a Smart Contract with Solidity

The first line in a Solidity contract sets the source code version. This ensures that the contract doesn’t suddenly behave differently with a new compiler version.

pragma solidity ^0.4.0;

For our example, the name of the contract is Greeting, and as we can see, its creation is similar to declaring a class in Java or another object-oriented programming language:

contract Greeting {
    address creator;
    string message;

    // functions that interact with state variables
}

In this example, we declared two state variables: creator and message. In Solidity, we use the data type named address to store addresses of accounts.

Next, we need to initialize both variables in the constructor.

4.1. Constructor

We declare a constructor by using the function keyword followed by the name of the contract (just like in Java).

The constructor is a special function that is invoked only once when a contract is first deployed to the Ethereum blockchain. We can only declare a single constructor for a contract:

function Greeting(string _message) {
    message = _message;
    creator = msg.sender;
}

We also inject the initial string _message as a parameter into the constructor and assign it to the message state variable.

In the second line of the constructor, we initialize the creator variable with a value called msg.sender. There’s no need to inject msg into the constructor because msg is a global variable that provides specific information about the message, such as the address of the account sending it.

We could potentially use this information to implement access control for certain functions.
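
For instance, a hypothetical onlyCreator modifier (an illustration only, not part of the article’s contract) could compare msg.sender with the stored creator address – a sketch in the Solidity 0.4.x style used here:

```
// hypothetical sketch: only the account that deployed the contract passes this check
modifier onlyCreator() {
    require(msg.sender == creator);
    _;
}

// applying the modifier restricts who may update the greeting
function setGreeting(string _message) onlyCreator {
    message = _message;
}
```

Any other caller would have its transaction reverted by the require check.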

4.2. Setter and Getter Methods

Finally, we implement the setter and getter methods for the message:

function greet() constant returns (string) {
    return message;
}

function setGreeting(string _message) {
    message = _message;
}

Invoking the function greet will simply return the currently saved message. We use the constant keyword to specify that this function doesn’t modify the contract state and doesn’t trigger any writes to the blockchain.

We can now change the value of the state in the contract by calling the function setGreeting. Anyone can alter the value just by calling this function. The method doesn’t have a return type, but it does take a string as a parameter.

Now that we’ve created our first smart contract, the next step is to deploy it to the Ethereum blockchain so everybody can use it. We can use Remix, which is currently the best online IDE and is effortless to use.

5. Interacting with a Smart Contract

To interact with a smart contract in the decentralized network (blockchain) we need to have access to one of the clients.

There are two ways to do this: running our own Ethereum client, or using a service such as Infura.

Infura is the most straightforward option, so we’ll request a free access token. Once we sign up, we need to pick the URL of the Rinkeby test network: “https://rinkeby.infura.io/<token>”.

To be able to transact with the smart contract from Java, we need to use a library called Web3j. Here is the Maven dependency:

<dependency>
    <groupId>org.web3j</groupId>
    <artifactId>core</artifactId>
    <version>3.3.1</version>
</dependency>

And in Gradle:

compile ('org.web3j:core:3.3.1')

Before starting to write code, there are some things that we need to do first.

5.1. Creating a Wallet

Web3j allow us to use some of its functionality from the command line:

  • Wallet creation
  • Wallet password management
  • Transfer of funds from one wallet to another
  • Generate Solidity smart contract function wrappers

Command line tools can be obtained as a zip file/tarball from the releases page of the project repository, under the downloads section, or for OS X users via homebrew:

brew tap web3j/web3j
brew install web3j

To generate a new Ethereum wallet we simply type the following on the command line:

$ web3j wallet create

It will ask us for a password and a location where we can save our wallet. The file is in JSON format, and the main thing to keep in mind is the Ethereum address.

We’ll use it in the next step to request Ether.

5.2. Requesting Ether in the Rinkeby Testnet

We can request free Ether here. To prevent malicious actors from exhausting all available funds, they ask us to provide a public link to one social media post with our Ethereum address.

This is a very simple step; the Ether is provided almost instantly so we can run the tests.

5.3. Generating the Smart Contract Wrapper

Web3j can auto-generate smart contract wrapper code to deploy and interact with smart contracts without leaving the JVM.

To generate the wrapper code, we need to compile our smart contract. We can find the instructions to install the compiler here. From there, we type the following on the command line:

$ solc Greeting.sol --bin --abi --optimize -o <output_dir>/

This will create two files: Greeting.bin and Greeting.abi. Now, we can generate the wrapper code using web3j’s command line tools:

$ web3j solidity generate /path/to/Greeting.bin 
  /path/to/Greeting.abi -o /path/to/src/main/java -p com.your.organisation.name

With this, we’ll now have the Java class to interact with the contract in our main code.

6. Interacting with the Smart Contract

In our main class, we start by creating a new web3j instance to connect to remote nodes on the network:

Web3j web3j = Web3j.build(
  new HttpService("https://rinkeby.infura.io/<your_token>"));

We then need to load our Ethereum wallet file:

Credentials credentials = WalletUtils.loadCredentials(
  "<password>",
  "/path/to/<walletfile>");

Now let’s deploy our smart contract:

Greeting contract = Greeting.deploy(
  web3j, credentials,
  ManagedTransaction.GAS_PRICE, Contract.GAS_LIMIT,
  "Hello blockchain world!").send();

Deploying the contract may take a while, depending on the load on the network. Once it’s deployed, we might want to store the address where the contract lives. We can obtain the address this way:

String contractAddress = contract.getContractAddress();

All the transactions made with the contract can be seen at the URL: “https://rinkeby.etherscan.io/address/<contract_address>”.

On the other hand, we can modify the value stored in the smart contract by performing a transaction:

TransactionReceipt transactionReceipt = contract.setGreeting("Hello again").send();

Finally, if we want to view the new value stored, we can simply write:

String newValue = contract.greet().send();

7. Conclusion

In this tutorial, we saw that Solidity is a statically-typed programming language designed for developing smart contracts that run on the EVM.

We also created a straightforward contract with this language and saw that it’s very similar to other programming languages.

The smart contract is just a phrase used to describe computer code that can facilitate the exchange of value. When running on the blockchain, a smart contract becomes a self-operating computer program that automatically executes when specific conditions are met.

We saw in this article that the ability to run code in the blockchain is the main differentiation in Ethereum because it allows developers to build a new type of applications that go way beyond anything we have seen before.

As always, code samples can be found over on GitHub.

Introduction to Atlassian Fugue


1. Introduction

Fugue is a Java library by Atlassian; it’s a collection of utilities supporting Functional Programming.

In this write-up, we’ll focus on and explore Fugue’s most important APIs.

2. Getting Started with Fugue

To start using Fugue in our projects, we need to add the following dependency:

<dependency>
    <groupId>io.atlassian.fugue</groupId>
    <artifactId>fugue</artifactId>
    <version>4.5.1</version>
</dependency>

We can find the most recent version of Fugue over on Maven Central.

3. Option

Let’s start our journey by looking at the Option class which is Fugue’s answer to java.util.Optional.

As we can guess from the name, Option is a container representing a potentially absent value.

In other words, an Option is either Some value of a certain type or None:

Option<Object> none = Option.none();
assertFalse(none.isDefined());

Option<String> some = Option.some("value");
assertTrue(some.isDefined());
assertEquals("value", some.get());

Option<Integer> maybe = Option.option(someInputValue);

3.1. The map Operation

One of the standard Functional Programming APIs is the map() method which allows applying a provided function to underlying elements.

The method applies the provided function to the Option‘s value if it’s present:

Option<String> some = Option.some("value") 
  .map(String::toUpperCase);
assertEquals("VALUE", some.get());

3.2. Option and a Null Value

Besides naming differences, Atlassian did make a few design choices for Option that differ from Optional; let’s now look at them.

We cannot directly create a non-empty Option holding a null value:

Option.some(null);

The above throws an exception.

However, we can get one as a result of using the map() operation:

Option<Object> some = Option.some("value")
  .map(x -> null);
assertNull(some.get());

This isn’t possible when simply using java.util.Optional.

3.3. Option is Iterable

Option can be treated as a collection holding at most one element, so it makes sense for it to implement the Iterable interface.

This greatly increases interoperability when working with collections/streams.

An Option can now, for example, be concatenated with another collection:

Option<String> some = Option.some("value");
Iterable<String> strings = Iterables
  .concat(some, Arrays.asList("a", "b", "c"));

3.4. Converting Option to Stream

Since an Option is an Iterable, it can be converted to a Stream easily too.

After converting, the Stream instance will have exactly one element if the option is present, or zero otherwise:

assertEquals(0, Option.none().toStream().count());
assertEquals(1, Option.some("value").toStream().count());
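
For comparison, java.util.Optional gained an equivalent stream() method in Java 9, with the same zero-or-one-element semantics:

```java
import java.util.Optional;

public class OptionalStreamDemo {

    static long countOf(Optional<String> opt) {
        // an empty Optional yields an empty Stream, a present one a single-element Stream
        return opt.stream().count();
    }

    public static void main(String[] args) {
        System.out.println(countOf(Optional.empty()));
        System.out.println(countOf(Optional.of("value")));
    }
}
```
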

3.5. java.util.Optional Interoperability

If we need a standard Optional implementation, we can obtain it easily using the toOptional() method:

Optional<Object> optional = Option.none()
  .toOptional();
assertTrue(Option.fromOptional(optional)
  .isEmpty());

3.6. The Options Utility Class

Finally, Fugue provides some utility methods for working with Options in the aptly named Options class.

It features methods such as filterNone for removing empty Options from a collection, and flatten for turning a collection of Options into a collection of enclosed objects, filtering out empty Options.
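
The flatten operation has a direct standard-library counterpart: streaming a collection of java.util.Optional values and keeping only the present ones – a rough sketch of the same idea:

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class FlattenDemo {

    // stdlib analogue of Options.flatten: drop empty Optionals, unwrap the rest
    static List<String> flatten(List<Optional<String>> options) {
        return options.stream()
          .flatMap(Optional::stream)
          .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(flatten(
          List.of(Optional.of("a"), Optional.empty(), Optional.of("b"))));
    }
}
```
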

Additionally, it features several variants of the lift method that lifts a Function<A,B> into a Function<Option<A>, Option<B>>:

Function<Integer, Integer> f = (Integer x) -> x > 0 ? x + 1 : null;
Function<Option<Integer>, Option<Integer>> lifted = Options.lift(f);

assertEquals(2, (long) lifted.apply(Option.some(1)).get());
assertTrue(lifted.apply(Option.none()).isEmpty());

This is useful when we want to pass a function which is unaware of Option to some method that uses Option.

Note that, just like the map method, lift doesn’t map null to None:

assertEquals(null, lifted.apply(Option.some(0)).get());
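
For comparison, lifting the same function with java.util.Optional.map behaves differently on exactly this point: a null result becomes an empty Optional rather than a present one:

```java
import java.util.Optional;
import java.util.function.Function;

public class LiftDemo {

    // lifts f into the Optional "container" via map
    static Function<Optional<Integer>, Optional<Integer>> lift(
      Function<Integer, Integer> f) {
        return opt -> opt.map(f);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> f = x -> x > 0 ? x + 1 : null;
        Function<Optional<Integer>, Optional<Integer>> lifted = lift(f);

        System.out.println(lifted.apply(Optional.of(1)));
        // unlike Fugue's Options.lift, a null result maps to an empty Optional
        System.out.println(lifted.apply(Optional.of(0)));
    }
}
```
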

4. Either for Computations With Two Possible Outcomes

As we’ve seen, the Option class allows us to deal with the absence of a value in a functional manner.

However, sometimes we need to return more information than “no value”; for example, we might want to return either a legitimate value or an error object.

The Either class covers that use case.

An instance of Either can be a Right or a Left but never both at the same time.

By convention, the right is the result of a successful computation, while the left is the exceptional case.

4.1. Constructing an Either

We can obtain an Either instance by calling one of its two static factory methods.

We call right if we want an Either containing the Right value:

Either<Integer, String> right = Either.right("value");

Otherwise, we call left:

Either<Integer, String> left = Either.left(-1);

Here, our computation can either return a String or an Integer.

4.2. Using an Either

When we have an Either instance, we can check whether it’s left or right and act accordingly:

if (either.isRight()) {
    ...
}

More interestingly, we can chain operations using a functional style:

either
  .map(String::toUpperCase)
  .getOrNull();

4.3. Projections

The main thing that differentiates Either from other monadic tools like Option and Try is the fact that it’s often unbiased. Simply put, if we call the map() method, Either doesn’t know whether to work with the Left or the Right side.

This is where projections come in handy.

Left and right projections are mirror-image views of an Either that focus on the left or right value, respectively:

either.left()
  .map(x -> decodeSQLErrorCode(x));

In the above code snippet, if Either is a Left, decodeSQLErrorCode() will get applied to the underlying element. If Either is a Right, it won’t. The same holds the other way around when using the right projection.
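
To make the projection idea concrete without the library, here’s a minimal hand-rolled sketch (for illustration only, not Fugue’s actual API) in which map only ever touches the Right value:

```java
import java.util.function.Function;

public class EitherSketch {

    // minimal hand-rolled Either; Fugue's real class is far richer
    static final class Either<L, R> {
        final L left;
        final R right;
        final boolean isRight;

        private Either(L left, R right, boolean isRight) {
            this.left = left;
            this.right = right;
            this.isRight = isRight;
        }

        static <L, R> Either<L, R> left(L l) {
            return new Either<>(l, null, false);
        }

        static <L, R> Either<L, R> right(R r) {
            return new Either<>(null, r, true);
        }

        // right-projection style map: only a Right value is touched
        <R2> Either<L, R2> map(Function<R, R2> f) {
            if (isRight) {
                return Either.right(f.apply(right));
            }
            return Either.left(left);
        }
    }

    public static void main(String[] args) {
        Either<Integer, String> ok = Either.right("value");
        Either<Integer, String> err = Either.left(-1);

        System.out.println(ok.map(String::toUpperCase).right);
        System.out.println(err.map(String::toUpperCase).left);
    }
}
```
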

4.4. Utility Methods

As with Options, Fugue provides a class full of utilities for Eithers, as well, and it’s called just like that: Eithers.

It contains methods for filtering, casting and iterating over collections of Eithers.

5. Exception Handling with Try

We conclude our tour of either-this-or-that data types in Fugue with another variation called Try.

Try is similar to Either, but it differs in that it’s dedicated to working with exceptions.

Like Option and unlike Either, Try is parameterized over a single type, because the “other” type is fixed to Exception (while for Option it’s implicitly Void).

So, a Try can be either a Success or a Failure:

assertTrue(Try.failure(new Exception("Fail!")).isFailure());
assertTrue(Try.successful("OK").isSuccess());

5.1. Instantiating a Try

Often, we won’t be creating a Try explicitly as a success or a failure; rather, we’ll create one from a method call.

Checked.of calls a given function and returns a Try encapsulating its return value or any thrown exception:

assertTrue(Checked.of(() -> "ok").isSuccess());
assertTrue(Checked.of(() -> { throw new Exception("ko"); }).isFailure());

Another method, Checked.lift, takes a potentially throwing function and lifts it to a function returning a Try:

Checked.Function<String, Object, Exception> throwException = (String x) -> {
    throw new Exception(x);
};
        
assertTrue(Checked.lift(throwException).apply("ko").isFailure());

5.2. Working with Try

Once we have a Try, the three most common things we might ultimately want to do with it are:

  1. extracting its value
  2. chaining some operation to the successful value
  3. handling the exception with a function

Obviously, besides discarding the Try or passing it along to other methods, the above three aren’t the only options we have, but all the other built-in methods are just a convenience over these three.

5.3. Extracting the Successful Value

To extract the value, we use the getOrElse method:

assertEquals(42, failedTry.getOrElse(() -> 42));

It returns the successful value if present, or some computed value otherwise.

There is no getOrThrow or similar, but since getOrElse doesn’t catch any exception, we can easily write it:

someTry.getOrElse(() -> {
    throw new NoSuchElementException("Nothing to get");
});

5.4. Chaining Calls After Success

In a functional style, we can apply a function to the success value (if present) without extracting it explicitly first.

This is the typical map method we find in Option, Either and most other containers and collections:

Try<Integer> aTry = Try.successful(42).map(x -> x + 1);

It returns a Try so we can chain further operations.

Of course, we also have the flatMap variety:

Try.successful(42).flatMap(x -> Try.successful(x + 1));

5.5. Recovering From Exceptions

We have analogous mapping operations that work with the exception of a Try (if present), rather than its successful value.

However, those methods differ in that their meaning is to recover from the exception, i.e. to produce a successful Try in the default case.

Thus, we can produce a new value with recover:

Try<Object> recover = Try
  .failure(new Exception("boo!"))
  .recover((Exception e) -> e.getMessage() + " recovered.");

assertTrue(recover.isSuccess());
assertEquals("boo! recovered.", recover.getOrElse(() -> null));

As we can see, the recovery function takes the exception as its only argument.

If the recovery function itself throws, the result is another failed Try:

Try<Object> failure = Try.failure(new Exception("boo!")).recover(x -> {
    throw new RuntimeException(x);
});

assertTrue(failure.isFailure());

The analogous to flatMap is called recoverWith:

Try<Object> recover = Try
  .failure(new Exception("boo!"))
  .recoverWith((Exception e) -> Try.successful("recovered again!"));

assertTrue(recover.isSuccess());
assertEquals("recovered again!", recover.getOrElse(() -> null));
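
These recovery semantics may look familiar: the standard library’s CompletableFuture offers a similar hook, exceptionally, which turns a failed future back into a successful one:

```java
import java.util.concurrent.CompletableFuture;

public class RecoverLikeDemo {

    static String recoverToMessage(CompletableFuture<String> future) {
        // exceptionally plays a role similar to Try.recover:
        // it maps the failure into a successful result
        return future
          .exceptionally(e -> e.getMessage() + " recovered.")
          .join();
    }

    public static void main(String[] args) {
        CompletableFuture<String> failed =
          CompletableFuture.failedFuture(new Exception("boo!"));

        System.out.println(recoverToMessage(failed));
    }
}
```
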

6. Other Utilities

Let’s now have a quick look at some of the other utilities in Fugue, before we wrap it up.

6.1. Pairs

A Pair is a really simple and versatile data structure, made of two equally important components, which Fugue calls left and right:

Pair<Integer, String> pair = Pair.pair(1, "a");
        
assertEquals(1, (int) pair.left());
assertEquals("a", pair.right());

Fugue doesn’t provide many built-in methods on Pairs, besides mapping and the applicative functor pattern.

However, Pairs are used throughout the library and they are readily available for user programs.

The next poor person’s implementation of Lisp is just a few keystrokes away!
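
For simple cases where the library isn’t available, the standard Map.entry factory (Java 9+) offers a rough two-component analogue, albeit with key/value naming rather than left/right:

```java
import java.util.Map;

public class PairDemo {

    public static void main(String[] args) {
        // Map.Entry as a poor man's Pair: getKey ~ left, getValue ~ right
        Map.Entry<Integer, String> pair = Map.entry(1, "a");

        System.out.println(pair.getKey());
        System.out.println(pair.getValue());
    }
}
```
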

6.2. Unit

Unit is an enum with a single value which is meant to represent “no value”.

It’s a replacement for the void return type and Void class, that does away with null:

Unit doSomething() {
    System.out.println("Hello! Side effect");
    return Unit();
}

Quite surprisingly, however, Option doesn’t understand Unit, treating it like some value instead of none.

6.3. Static Utilities

We have a few classes packed full of static utility methods that we won’t have to write and test.

The Functions class offers methods that use and transform functions in various ways: composition, application, currying, partial functions using Option, weak memoization et cetera.

The Suppliers class provides a similar, but more limited, collection of utilities for Suppliers, that is, functions of no arguments.

Iterables and Iterators, finally, contain a host of static methods for manipulating those two widely used standard Java interfaces.

7. Conclusion

In this article, we’ve given an overview of the Fugue library by Atlassian.

We haven’t touched the algebra-heavy classes like Monoid and Semigroups because they don’t fit in a generalist article.

However, you can read about them and more in the Fugue javadocs and source code.

We also haven’t touched on any of the optional modules, that offer for example integrations with Guava and Scala.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as is.

Java Weekly, Issue 223


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> The 30-seconds “State of Java in 2018” Survey [docs.google.com]

I’m running the annual Java survey, to get a clear idea of the state of the Java ecosystem right now.

If you haven’t – definitely take the 30 seconds and fill it in. Thanks.

>> Java 10: Parallel Full GC in G1GC [javaspecialists.eu]

JDK 10 finally fixed the problem with G1, which used to perform the full GC using a single thread.

>> Why I Moved Back from Gradle to Maven [blog.philipphauer.de]

Just like any tool out there, Gradle isn’t flaw-free. It’s always a good idea to weigh and understand the tool before committing to it for your project.

>> CountDownLatch vs Phaser [javaspecialists.eu]

Definitely, Phaser is harder to understand but easier to use 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Why I Deleted My IDE and How It Changed My Life For the Better [blog.takipi.com]

Sometimes it can be beneficial to ditch the technology and go back to basics. Or try a better IDE 🙂

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Spare Time [dilbert.com]

>> Anyone Fired Lately [dilbert.com]

>> Meetings [dilbert.com]

4. Pick of the Week

>> The Mistakes I Made As a Beginner Programmer [medium.com]


New Password Storage In Spring Security 5


1. Introduction

With the latest Spring Security release, a lot has changed. One of those changes is how we can handle password encoding in our applications.

In this tutorial, we’re going to explore some of these changes.

Later, we’ll see how to configure the new delegation mechanism and how to update our existing password encoding, without our users noticing it.

2. Relevant Changes in Spring Security 5.x

The Spring Security team declared the PasswordEncoder in org.springframework.security.authentication.encoding as deprecated. It was a logical move, as the old interface wasn’t designed for a randomly generated salt. Consequently, version 5 removed this interface.

Additionally, Spring Security changes the way it handles encoded passwords. In previous versions, each application employed one password encoding algorithm only.

By default, StandardPasswordEncoder dealt with that. It used SHA-256 for the encoding. By changing the password encoder, we could switch to another algorithm. But our application had to stick to exactly one algorithm.

Version 5.0 introduces the concept of password encoding delegation. Now, we can use different encodings for different passwords. Spring recognizes the algorithm by an identifier prefixing the encoded password.

Here’s an example of a bcrypt encoded password:

{bcrypt}$2b$12$FaLabMRystU4MLAasNOKb.HUElBAabuQdX59RWHq5X.9Ghm692NEi

Note how bcrypt is specified in curly braces at the very beginning.
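
We can sketch how such a prefix is picked apart – the extractId method below is a hypothetical helper mirroring, in simplified form, what the delegating mechanism does internally; it is not Spring API:

```java
public class PasswordPrefixDemo {

    // hypothetical helper: find the algorithm id in a prefix-encoded hash
    static String extractId(String prefixEncodedPassword) {
        if (prefixEncodedPassword == null || !prefixEncodedPassword.startsWith("{")) {
            return null; // no prefix: the default encoder will handle it
        }
        int end = prefixEncodedPassword.indexOf('}');
        return end < 0 ? null : prefixEncodedPassword.substring(1, end);
    }

    public static void main(String[] args) {
        System.out.println(extractId("{bcrypt}$2b$12$FaLabMRystU4MLAasNOKb."));
        System.out.println(extractId("5f4dcc3b5aa765d61d8327deb882cf99"));
    }
}
```
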

3. Delegation Configuration

If the password hash has no prefix, the delegation process uses a default encoder. Hence, by default, we get the StandardPasswordEncoder.

That makes it compatible with the default configuration of previous Spring Security versions.

With version 5, Spring Security introduces PasswordEncoderFactories.createDelegatingPasswordEncoder(). This factory method returns a configured instance of DelegatingPasswordEncoder.

For passwords without a prefix, that instance ensures the default behavior just mentioned. And for password hashes that contain a prefix, the delegation is done accordingly.

The Spring Security team lists the supported algorithms in the latest version of the corresponding JavaDoc.

Of course, Spring lets us configure this behavior.

Let’s assume we want to support:

  • bcrypt as our new default
  • scrypt as an alternative
  • SHA-256 as the currently used algorithm.

The configuration for this set-up will look like this:

@Bean
public PasswordEncoder delegatingPasswordEncoder() {
    PasswordEncoder defaultEncoder = new StandardPasswordEncoder();
    Map<String, PasswordEncoder> encoders = new HashMap<>();
    encoders.put("bcrypt", new BCryptPasswordEncoder());
    encoders.put("scrypt", new SCryptPasswordEncoder());

    DelegatingPasswordEncoder passwordEncoder = new DelegatingPasswordEncoder(
      "bcrypt", encoders);
    passwordEncoder.setDefaultPasswordEncoderForMatches(defaultEncoder);

    return passwordEncoder;
}

4. Migrating the Password Encoding Algorithm

In the previous section, we explored how to configure password encoding according to our needs. Therefore, now we’ll work on how to switch an already encoded password to a new algorithm.

Let’s imagine we want to change the encoding from SHA-256 to bcrypt; however, we don’t want our users to change their passwords.

One possible solution is to use the login request. At this point, we can access the credentials in plain text. That is the moment we can take the current password and re-encode it.

Consequently, we can use Spring’s AuthenticationSuccessEvent for that. This event fires after a user successfully logs into our application.

Here is the example code:

@Bean
public ApplicationListener<AuthenticationSuccessEvent>
  authenticationSuccessListener(PasswordEncoder encoder) {
    return (AuthenticationSuccessEvent event) -> {
        Authentication auth = event.getAuthentication();

        if (auth instanceof UsernamePasswordAuthenticationToken
          && auth.getCredentials() != null) {

            CharSequence clearTextPass = (CharSequence) auth.getCredentials();
            String newPasswordHash = encoder.encode(clearTextPass);

            // [...] Update user's password

            ((UsernamePasswordAuthenticationToken) auth).eraseCredentials();
        }
    };
}

In the previous snippet:

  • We retrieved the user password in clear text from the provided authentication details
  • We created a new password hash with the new algorithm
  • We removed the clear text password from the authentication token

By default, extracting the password in clear text wouldn’t be possible because Spring Security deletes it as soon as possible.

Hence, we need to configure Spring so that it keeps the cleartext version of the password.

Additionally, we need to register our encoding delegation:

@Configuration
public class PasswordStorageWebSecurityConfigurer
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth.eraseCredentials(false)
          .passwordEncoder(delegatingPasswordEncoder());
    }

    // ...
}

5. Conclusion

In this quick article, we talked about some new password encoding features available in 5.x.

We also saw how to configure multiple password encoding algorithms to encode our passwords. Furthermore, we explored a way to change the password encoding, without breaking the existing one.

Lastly, we described how to use Spring events to update encrypted user passwords transparently, allowing us to seamlessly change our encoding strategy without disclosing that to our users.

As always, all code examples are available in our GitHub repository.

Introduction to EasyMock


1. Introduction

In the past, we’ve talked extensively about JMockit and Mockito.

In this tutorial, we’ll give an introduction to another mocking tool – EasyMock.

2. Maven Dependencies

Before we dive in, let’s add the following dependency to our pom.xml:

<dependency>
    <groupId>org.easymock</groupId>
    <artifactId>easymock</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>

The latest version can always be found here.

3. Core Concepts

When generating a mock, we can simulate the target object, specify its behavior, and finally verify whether it’s used as expected.

Working with EasyMock’s mocks involves four steps:

  1. creating a mock of the target class
  2. recording its expected behavior, including the action, result, exceptions, etc.
  3. using mocks in tests
  4. verifying if it’s behaving as expected

After our recording finishes, we switch it to “replay” mode, so that the mock behaves as recorded when collaborating with any object that will be using it.

Eventually, we verify if everything goes as expected.

The four steps mentioned above relate to methods in org.easymock.EasyMock:

  1. mock(…): generates a mock of the target class, be it a concrete class or an interface. Once created, a mock is in “recording” mode, meaning that EasyMock will record any action the Mock Object takes, and replay them in the “replay” mode
  2. expect(…): with this method, we can set expectations, including calls, results, and exceptions, for associated recording actions
  3. replay(…): switches a given mock to “replay” mode. Then, any action triggering previously recorded method calls will replay “recorded results”
  4. verify(…): verifies that all expectations were met and that no unexpected call was performed on a mock

In the next section, we’ll show how these steps work in action, using real-world examples.
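
To build some intuition for the record/replay split first, here’s a toy sketch (in no way EasyMock’s actual implementation) that uses a dynamic proxy to replay canned answers:

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class RecordReplaySketch {

    public interface ArticleReader {
        String next();
    }

    // "record" phase happened elsewhere: canned results keyed by method name
    static ArticleReader replayMock(Map<String, Object> recorded) {
        return (ArticleReader) Proxy.newProxyInstance(
          ArticleReader.class.getClassLoader(),
          new Class<?>[] { ArticleReader.class },
          // "replay" phase: each call is answered with its recorded result
          (proxy, method, args) -> recorded.get(method.getName()));
    }

    public static void main(String[] args) {
        Map<String, Object> recorded = new HashMap<>();
        recorded.put("next", "stubbed article");

        System.out.println(replayMock(recorded).next());
    }
}
```

A real mocking library adds the crucial missing piece: recording expectations fluently and verifying them afterwards.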

4. A Practical Example of Mocking

Before we continue, let’s take a look at the example context: say we have a reader of the Baeldung blog who likes to browse articles on the website and then tries to write articles.

Let’s start by creating the following model:

public class BaeldungReader {

    private ArticleReader articleReader;
    private IArticleWriter articleWriter;

    // constructors

    public BaeldungArticle readNext(){
        return articleReader.next();
    }

    public List<BaeldungArticle> readTopic(String topic){
        return articleReader.ofTopic(topic);
    }

    public String write(String title, String content){
        return articleWriter.write(title, content);
    }
}

In this model, we have two private members: the articleReader (a concrete class) and the articleWriter (an interface).

Next, we’ll mock them to verify BaeldungReader‘s behavior.

5. Mock with Java Code

Let’s begin with mocking an ArticleReader.

5.1. Typical Mocking

We expect the articleReader.next() method to be called when a reader skips an article:

@Test
public void whenReadNext_thenNextArticleRead(){
    ArticleReader mockArticleReader = mock(ArticleReader.class);
    BaeldungReader baeldungReader
      = new BaeldungReader(mockArticleReader);

    expect(mockArticleReader.next()).andReturn(null);
    replay(mockArticleReader);

    baeldungReader.readNext();

    verify(mockArticleReader);
}

In the sample code above, we stick strictly to the 4-step procedure and mock the ArticleReader class.

Although we really don’t care what mockArticleReader.next() returns, we still need to specify a return value for mockArticleReader.next() by using expect(…).andReturn(…).

With expect(…), EasyMock is expecting the method to return a value or throw an Exception.

If we simply do:

mockArticleReader.next();
replay(mockArticleReader);

EasyMock will complain about this, as it requires a call on expect(…).andReturn(…) if the method returns anything.

If it’s a void method, we can expect its action using expectLastCall() like this:

mockArticleReader.someVoidMethod();
expectLastCall();
replay(mockArticleReader);

5.2. Replay Order

If we need actions to be replayed in a specific order, we can be more strict:

@Test
public void whenReadNextAndSkimTopics_thenAllAllowed(){
    ArticleReader mockArticleReader
      = strictMock(ArticleReader.class);
    BaeldungReader baeldungReader
      = new BaeldungReader(mockArticleReader);

    expect(mockArticleReader.next()).andReturn(null);
    expect(mockArticleReader.ofTopic("easymock")).andReturn(null);
    replay(mockArticleReader);

    baeldungReader.readNext();
    baeldungReader.readTopic("easymock");

    verify(mockArticleReader);
}

In this snippet, we use strictMock(…) to check the order of method calls. For mocks created by mock(…) and strictMock(…), any unexpected method calls would cause an AssertionError.

To allow any method call for the mock, we can use niceMock(…):

@Test
public void whenReadNextAndOthers_thenAllowed(){
    ArticleReader mockArticleReader = niceMock(ArticleReader.class);
    BaeldungReader baeldungReader = new BaeldungReader(mockArticleReader);

    expect(mockArticleReader.next()).andReturn(null);
    replay(mockArticleReader);

    baeldungReader.readNext();
    baeldungReader.readTopic("easymock");

    verify(mockArticleReader);
}

Here we didn’t expect baeldungReader.readTopic(…) to be called, but EasyMock won’t complain. With niceMock(…), EasyMock only cares whether the target object performed the expected actions or not.

5.3. Mocking Exception Throws

Now, let’s continue with mocking the interface IArticleWriter, and how to handle expected Throwables:

@Test
public void whenWriteMaliciousContent_thenArgumentIllegal() {
    // mocking and initialization

    expect(mockArticleWriter
      .write("easymock","<body onload=alert('baeldung')>"))
      .andThrow(new IllegalArgumentException());
    replay(mockArticleWriter);

    // write malicious content and capture exception as expectedException

    verify(mockArticleWriter);
    assertEquals(
      IllegalArgumentException.class, 
      expectedException.getClass());
}

In the snippet above, we expect articleWriter to be robust enough to detect XSS (Cross-Site Scripting) attacks.

So when the reader tries to inject malicious code into the article content, the writer should throw an IllegalArgumentException. We recorded this expected behavior using expect(…).andThrow(…).
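For completeness, the commented-out parts of that test could look something like this; treat it as a sketch, since the IArticleWriter interface and the local variable names are taken from the surrounding examples rather than a full listing:

```java
IArticleWriter mockArticleWriter = mock(IArticleWriter.class);

expect(mockArticleWriter
  .write("easymock", "<body onload=alert('baeldung')>"))
  .andThrow(new IllegalArgumentException());
replay(mockArticleWriter);

// write malicious content and capture the exception
Exception expectedException = null;
try {
    mockArticleWriter.write("easymock", "<body onload=alert('baeldung')>");
} catch (IllegalArgumentException e) {
    expectedException = e;
}

verify(mockArticleWriter);
assertEquals(IllegalArgumentException.class, expectedException.getClass());
```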

6. Mock with Annotation

EasyMock also supports injecting mocks using annotations. To use them, we need to run our unit tests with EasyMockRunner so that it processes @Mock and @TestSubject annotations.

Let’s rewrite previous snippets:

@RunWith(EasyMockRunner.class)
public class BaeldungReaderAnnotatedTest {

    @Mock
    ArticleReader mockArticleReader;

    @TestSubject
    BaeldungReader baeldungReader = new BaeldungReader();

    @Test
    public void whenReadNext_thenNextArticleRead() {
        expect(mockArticleReader.next()).andReturn(null);
        replay(mockArticleReader);
        baeldungReader.readNext();
        verify(mockArticleReader);
    }
}

Equivalent to mock(…), a mock will be injected into fields annotated with @Mock. And these mocks will be injected into fields of the class annotated with @TestSubject.

In the snippet above, we didn’t explicitly initialize the articleReader field in baeldungReader. When calling baeldungReader.readNext(), we can inter that implicitly called mockArticleReader.

That was because mockArticleReader was injected to the articleReader field.

Note that if we want to use another test runner instead of EasyMockRunner, we can use the JUnit test rule EasyMockRule:

public class BaeldungReaderAnnotatedWithRuleTest {

    @Rule
    public EasyMockRule mockRule = new EasyMockRule(this);

    //...

    @Test
    public void whenReadNext_thenNextArticleRead(){
        expect(mockArticleReader.next()).andReturn(null);
        replay(mockArticleReader);
        baeldungReader.readNext();
        verify(mockArticleReader);
    }

}

7. Mock with EasyMockSupport

Sometimes we need to introduce multiple mocks in a single test, and we have to repeat manually:

replay(A);
replay(B);
replay(C);
//...
verify(A);
verify(B);
verify(C);

This is ugly, and we need an elegant solution.

Luckily, we have a class EasyMockSupport in EasyMock to help deal with this. It helps keep track of mocks, such that we can replay and verify them in a batch like this:

//...
public class BaeldungReaderMockSupportTest extends EasyMockSupport{

    //...

    @Test
    public void whenReadAndWriteSequencially_thenWorks(){
        expect(mockArticleReader.next()).andReturn(null)
          .times(2).andThrow(new NoSuchElementException());
        expect(mockArticleWriter.write("title", "content"))
          .andReturn("BAEL-201801");
        replayAll();

        // execute read and write operations consecutively
 
        verifyAll();
 
        assertEquals(
          NoSuchElementException.class, 
          expectedException.getClass());
        assertEquals("BAEL-201801", articleId);
    }

}

Here we mocked both articleReader and articleWriter. When setting these mocks to “replay” mode, we used a static method replayAll() provided by EasyMockSupport, and used verifyAll() to verify their behaviors in batch.

We also introduced the times(…) method in the expect phase. It helps specify how many times we expect the method to be called, so that we can avoid introducing duplicate code.

We can also use EasyMockSupport through delegation:

EasyMockSupport easyMockSupport = new EasyMockSupport();

@Test
public void whenReadAndWriteSequencially_thenWorks(){
    ArticleReader mockArticleReader = easyMockSupport
      .createMock(ArticleReader.class);
    IArticleWriter mockArticleWriter = easyMockSupport
      .createMock(IArticleWriter.class);
    BaeldungReader baeldungReader = new BaeldungReader(
      mockArticleReader, mockArticleWriter);

    expect(mockArticleReader.next()).andReturn(null);
    expect(mockArticleWriter.write("title", "content"))
      .andReturn("");
    easyMockSupport.replayAll();

    baeldungReader.readNext();
    baeldungReader.write("title", "content");

    easyMockSupport.verifyAll();
}

Previously, we used static methods or annotations to create and manage mocks. Under the hood, these static and annotated mocks are controlled by a global EasyMockSupport instance.

Here, we explicitly instantiated it and took all these mocks under our own control through delegation. This may help avoid confusion if there are any name conflicts between our test code and EasyMock, or in other similar cases.

8. Conclusion

In this article, we briefly introduced the basic usage of EasyMock, about how to generate mock objects, record and replay their behaviors, and verify if they behaved correctly.

In case you may be interested, check out this article for a comparison of EasyMock, Mockito, and JMockit.

As always, the full implementation can be found over on Github.

Filtering Observables in RxJava


1. Introduction

Following the Introduction to RxJava, we’re going to look at the filtering operators.

In particular, we’re going to focus on filtering, skipping, time-filtering, and some more advanced filtering operations.

2. Filtering

When working with Observable, sometimes it’s useful to select only a subset of emitted items. For this purpose, RxJava offers various filtering capabilities.

Let’s start looking at the filter method.

2.1. The filter Operator

Simply put, the filter operator filters an Observable, making sure that the emitted items match a specified condition, which comes in the form of a Predicate.

Let’s see how we can filter only the odd values from those emitted:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable
  .filter(i -> i % 2 != 0);

filteredObservable.subscribe(subscriber);

subscriber.assertValues(1, 3, 5, 7, 9);

2.2. The take Operator

When filtering with take, the logic results in the emission of the first n items while ignoring the remaining items.

Let’s see how we can filter the sourceObservable and emit only the first three items:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable.take(3);

filteredObservable.subscribe(subscriber);

subscriber.assertValues(1, 2, 3);

2.3. The takeWhile Operator

When using takeWhile, the filtered Observable will keep emitting items until it encounters a first element that doesn’t match the Predicate.

Let’s see how we can use the takeWhile – with a filtering Predicate:

Observable<Integer> sourceObservable = Observable.just(1, 2, 3, 4, 3, 2, 1);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable
  .takeWhile(i -> i < 4);

filteredObservable.subscribe(subscriber);

subscriber.assertValues(1, 2, 3);

2.4. The takeFirst Operator

Whenever we want to emit only the first item matching a given condition, we can use takeFirst().

Let’s have a quick look at how we can emit the first item that is greater than 5:

Observable<Integer> sourceObservable = Observable
  .just(1, 2, 3, 4, 5, 7, 6);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable
  .takeFirst(x -> x > 5);

filteredObservable.subscribe(subscriber);

subscriber.assertValue(7);

2.5. first and firstOrDefault Operators

A similar behavior can be achieved using the first API:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable.first();

filteredObservable.subscribe(subscriber);

subscriber.assertValue(1);

However, in case we want to specify a default value, if no items are emitted, we can use firstOrDefault:

Observable<Integer> sourceObservable = Observable.empty();

Observable<Integer> filteredObservable = sourceObservable.firstOrDefault(-1);

filteredObservable.subscribe(subscriber);

subscriber.assertValue(-1);

2.6. The takeLast Operator

Next, if we want to emit only the last n items emitted by an Observable, we can use takeLast.

Let’s see how it’s possible to emit only the last three items:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable.takeLast(3);

filteredObservable.subscribe(subscriber);

subscriber.assertValues(8, 9, 10);

We have to remember that this delays the emission of any item from the source Observable until it completes.

2.7. last and lastOrDefault

If we want to emit only the last element, other than using takeLast(1), we can use last.

This filters the Observable, emitting only the last element, which optionally verifies a filtering Predicate:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable
  .last(i -> i % 2 != 0);

filteredObservable.subscribe(subscriber);

subscriber.assertValue(9);

In case the Observable is empty, we can use lastOrDefault, which filters the Observable, emitting the default value.

The default value is also emitted if the lastOrDefault operator is used and there aren’t any items that verify the filtering condition:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = 
  sourceObservable.lastOrDefault(-1, i -> i > 10);

filteredObservable.subscribe(subscriber);

subscriber.assertValue(-1);

2.8. elementAt and elementAtOrDefault Operators

With the elementAt operator, we can pick a single item emitted by the source Observable, specifying its index:

Observable<Integer> sourceObservable = Observable
  .just(1, 2, 3, 5, 7, 11);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable.elementAt(4);

filteredObservable.subscribe(subscriber);

subscriber.assertValue(7);

However, elementAt will throw an IndexOutOfBoundsException if the specified index exceeds the number of items emitted.

To avoid this situation, it’s possible to use elementAtOrDefault – which will return a default value in case the index is out of range:

Observable<Integer> sourceObservable = Observable
  .just(1, 2, 3, 5, 7, 11);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable
 = sourceObservable.elementAtOrDefault(7, -1);

filteredObservable.subscribe(subscriber);

subscriber.assertValue(-1);

2.9. The ofType Operator

Whenever the Observable emits Object items, it’s possible to filter them based on their type.

Let’s see how we can filter only the String items emitted:

Observable sourceObservable = Observable.just(1, "two", 3, "five", 7, 11);
TestSubscriber subscriber = new TestSubscriber();

Observable filteredObservable = sourceObservable.ofType(String.class);

filteredObservable.subscribe(subscriber);

subscriber.assertValues("two", "five");

3. Skipping

On the other hand, when we want to filter out or skip some of the items emitted by an Observable, RxJava offers a few operators as counterparts of the filtering ones that we’ve previously discussed.

Let’s start looking at the skip operator, the counterpart of take.

3.1. The skip Operator

When an Observable emits a sequence of items, it’s possible to filter out or skip some of the first emitted items using skip.

For example, let’s see how it’s possible to skip the first four elements:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable.skip(4);

filteredObservable.subscribe(subscriber);

subscriber.assertValues(5, 6, 7, 8, 9, 10);

3.2. The skipWhile Operator

Whenever we want to filter out all the first values emitted by an Observable that fail a filtering predicate, we can use the skipWhile operator:

Observable<Integer> sourceObservable = Observable
  .just(1, 2, 3, 4, 5, 4, 3, 2, 1);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable
  .skipWhile(i -> i < 4);

filteredObservable.subscribe(subscriber);

subscriber.assertValues(4, 5, 4, 3, 2, 1);

3.3. The skipLast Operator

The skipLast operator allows us to skip the final items emitted by the Observable accepting only those emitted before them.

With this, we can, for example, skip the last five items:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = sourceObservable.skipLast(5);

filteredObservable.subscribe(subscriber);

subscriber.assertValues(1, 2, 3, 4, 5);

3.4. distinct and distinctUntilChanged Operators

The distinct operator returns an Observable that emits all the items emitted by the sourceObservable that are distinct:

Observable<Integer> sourceObservable = Observable
  .just(1, 1, 2, 2, 1, 3, 3, 1);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> distinctObservable = sourceObservable.distinct();

distinctObservable.subscribe(subscriber);

subscriber.assertValues(1, 2, 3);

However, if we want to obtain an Observable that emits all the items emitted by the sourceObservable that are distinct from their immediate predecessor, we can use the distinctUntilChanged operator:

Observable<Integer> sourceObservable = Observable
  .just(1, 1, 2, 2, 1, 3, 3, 1);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> distinctObservable = sourceObservable.distinctUntilChanged();

distinctObservable.subscribe(subscriber);

subscriber.assertValues(1, 2, 1, 3, 1);

3.5. The ignoreElements Operator

Whenever we want to ignore all the elements emitted by the sourceObservable, we can simply use the ignoreElements operator:

Observable<Integer> sourceObservable = Observable.range(1, 10);
TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> ignoredObservable = sourceObservable.ignoreElements();

ignoredObservable.subscribe(subscriber);

subscriber.assertNoValues();

4. Time Filtering Operators

When working with an observable sequence, the time axis is unknown, but sometimes getting timely data from a sequence could be useful.

For this purpose, RxJava offers a few methods that allow us to work with an Observable along the time axis as well.

Before moving on to the first one, let’s define a timed Observable that will emit an item every second:

TestScheduler testScheduler = new TestScheduler();

Observable<Integer> timedObservable = Observable
  .just(1, 2, 3, 4, 5, 6)
  .zipWith(Observable.interval(
    0, 1, TimeUnit.SECONDS, testScheduler), (item, time) -> item);

The TestScheduler is a special scheduler that allows advancing the clock manually at whatever pace we prefer.

4.1. sample and throttleLast Operators

The sample operator filters the timedObservable, returning an Observable that emits the most recent item emitted by the source within periodic time intervals.

Let’s see how we can sample the timedObservable, filtering only the last emitted item every 2.5 seconds:

TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> sampledObservable = timedObservable
  .sample(2500L, TimeUnit.MILLISECONDS, testScheduler);

sampledObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertValues(3, 5, 6);

This kind of behavior can also be achieved using the throttleLast operator.
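As a quick sketch, the same assertion can be written with throttleLast, reusing the timedObservable and testScheduler defined earlier:

```java
TestSubscriber<Integer> subscriber = new TestSubscriber();

// throttleLast emits the most recent item seen in each 2.5-second window
Observable<Integer> throttledObservable = timedObservable
  .throttleLast(2500L, TimeUnit.MILLISECONDS, testScheduler);

throttledObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertValues(3, 5, 6);
```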

4.2. The throttleFirst Operator

The throttleFirst operator differs from throttleLast/sample since it emits the first item emitted by the timedObservable in each sampling period instead of the most recently emitted one.

Let’s see how we can emit the first items, using a sampling period of 4 seconds:

TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = timedObservable
  .throttleFirst(4100L, TimeUnit.MILLISECONDS, testScheduler);

filteredObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertValues(1, 6);

4.3. debounce and throttleWithTimeout Operators

With the debounce operator, it’s possible to emit only an item if a particular timespan has passed without emitting another item.

Therefore, if we select a timespan that is greater than the time interval between the emitted items of the timedObservable, it will only emit the last one. On the other hand, if it’s smaller, it will emit all the items emitted by the timedObservable.

Let’s see what happens in the first scenario:

TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = timedObservable
  .debounce(2000L, TimeUnit.MILLISECONDS, testScheduler);

filteredObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertValue(6);

This kind of behavior can also be achieved using throttleWithTimeout.
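A sketch of the equivalent test with throttleWithTimeout, again reusing timedObservable and testScheduler:

```java
TestSubscriber<Integer> subscriber = new TestSubscriber();

// with a 2-second timeout and items arriving every second,
// only the final item survives the debouncing
Observable<Integer> filteredObservable = timedObservable
  .throttleWithTimeout(2000L, TimeUnit.MILLISECONDS, testScheduler);

filteredObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertValue(6);
```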

4.4. The timeout Operator

The timeout operator mirrors the source Observable, but issues an error notification, aborting the emission of items, if the source Observable fails to emit any items during a specified time interval.

Let’s see what happens if we specify a timeout of 500 milliseconds to our timedObservable:

TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = timedObservable
  .timeout(500L, TimeUnit.MILLISECONDS, testScheduler);

filteredObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertError(TimeoutException.class);
subscriber.assertValues(1);

5. Multiple Observable Filtering

When working with an Observable, it’s definitely possible to decide whether to filter or skip items based on a second Observable.

Before moving on, let’s define a delayedObservable, that will emit only 1 item after 3 seconds:

Observable<Integer> delayedObservable = Observable.just(1)
  .delay(3, TimeUnit.SECONDS, testScheduler);

Let’s start with takeUntil operator.

5.1. The takeUntil Operator

The takeUntil operator discards any item emitted by the source Observable (timedObservable) after a second Observable (delayedObservable) emits an item or terminates:

TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = timedObservable
  .takeUntil(delayedObservable);

filteredObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertValues(1, 2, 3);

5.2. The skipUntil Operator

On the other hand, skipUntil discards any item emitted by the source Observable (timedObservable) until a second Observable (delayedObservable) emits an item:

TestSubscriber<Integer> subscriber = new TestSubscriber();

Observable<Integer> filteredObservable = timedObservable
  .skipUntil(delayedObservable);

filteredObservable.subscribe(subscriber);

testScheduler.advanceTimeBy(7, TimeUnit.SECONDS);

subscriber.assertValues(4, 5, 6);

6. Conclusion

In this extensive tutorial, we explored the different filtering operators available within RxJava, providing a simple example of each one.

As always, all the code examples in this article can be found over on Github.

Objects in Kotlin


1. Introduction

Kotlin borrowed many ideas from other languages; one of such constructs is the object.

In this quick article, we’ll see what objects are and how they can be used.

2. Objects in Kotlin

In Kotlin, as in almost all JVM languages, there’s the concept of a class as the core of the Object-Oriented Programming model. Kotlin introduces the concept of an object on top of that.

Whereas a class describes structures that can be instantiated as and when desired and allows for as many instances as needed, an object instead represents a single static instance, and can never have any more or any less than this one instance.

This is useful for various techniques, including singleton objects and simple packaging up of functionality for encapsulation:

object SimpleSingleton {
    val answer = 42
    fun greet(name: String) = "Hello, $name!"
}

assertEquals(42, SimpleSingleton.answer)
assertEquals("Hello, world!", SimpleSingleton.greet("world"))

Objects also offer full support for visibility modifiers, allowing for data hiding and encapsulation as with any other class:

object Counter {
    private var count: Int = 0

    fun currentCount() = count

    fun increment() {
        ++count
    }
}
Counter.increment()
println(Counter.currentCount())
println(Counter.count) // this will fail to compile

In addition, objects can extend classes and implement interfaces. In doing so, they are effectively singleton instances of parent classes, exactly as expected.

This can be very useful for cases where we have a stateless implementation and there’s no need for creating a new instance every time — e.g. Comparator:

object ReverseStringComparator : Comparator<String> {
    override fun compare(o1: String, o2: String) = o1.reversed().compareTo(o2.reversed())
}

val strings = listOf("Hello", "World")
val sortedStrings = strings.sortedWith(ReverseStringComparator)

3. What is a Companion Object?

Companion objects are essentially the same as a standard object definition, only with a couple of additional features to make development easier.

A companion object is always declared inside of another class. Whilst it can have a name, it doesn’t need to have one, in which case it automatically has the name Companion:

class OuterClass {
    companion object { // Equivalent to "companion object Companion"
    }
}

Companion objects allow their members to be accessed from inside the companion class without specifying the name.

At the same time, visible members can be accessed from outside the class when prefixed by the class name:

class OuterClass {
    companion object {
        private val secret = "You can't see me"
        val public = "You can see me"
    }

    fun getSecretValue() = secret
}

assertEquals("You can see me", OuterClass.public)
assertEquals("You can't see me", OuterClass.secret) // Cannot access 'secret'

4. Static Fields

The main use for companion objects is to replace static fields/methods known from Java. However, these fields aren’t automatically generated as such in the resulting class file.

If we need them to be generated, we need to use the @JvmStatic annotation on the field instead, which will then generate the bytecode as expected:

class StaticClass {
    companion object {
        @JvmStatic
        val staticField = 42
    }
}

Without doing this, the static field staticField isn’t easily accessible from Java code.

Adding this annotation generates the field exactly as needed for a standard static field, allowing for full interoperability from Java if necessary.

This means that the above generates a static method getStaticField() on the StaticClass class.

5. Conclusion

Objects in Kotlin add a whole extra layer that we can use, further streamlining our code and making it easier to develop.

Companion objects then take this even further, allowing for cleaner code that is easier to maintain and work with.

As always, code snippets can be found over on GitHub.

Hamcrest Custom Matchers


1. Introduction

As well as built-in matchers, Hamcrest also provides support for creating custom matchers.

In this tutorial, we’ll take a closer look at how to create and use them. To get a sneak peek on the available matchers, refer to this article.

2. Custom Matchers Setup

To get Hamcrest, we need to add the following Maven dependency to our pom.xml:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>java-hamcrest</artifactId>
    <version>2.0.0.0</version>
    <scope>test</scope>
</dependency>

The latest Hamcrest version can be found on Maven Central.

3. Introducing TypeSafeMatcher

Before starting with our examples, it’s important to understand the class TypeSafeMatcher. We’ll have to extend this class to create a matcher of our own.

TypeSafeMatcher is an abstract class, so all subclasses have to implement the following methods:

  • matchesSafely(T t): contains our matching logic 
  • describeTo(Description description): customizes the message the client will get when our matching logic is not fulfilled

As we may see in the first method, TypeSafeMatcher is parametrized, so we’ll have to declare a type when we use it. That will be the type of the object we’re testing.

Let’s make this clearer by looking at our first example in the next section.

4. Creating the onlyDigits Matcher

For our first use case, we’ll create a matcher that returns true if a certain String contains only digits.

So, onlyDigits applied to “123” should return true while “hello1” and “bye” should return false.

Let’s get started!

4.1. Matcher Creation

To start with our matcher, we’ll create a class that extends TypeSafeMatcher:

public class IsOnlyDigits extends TypeSafeMatcher<String> {
   
    @Override
    protected boolean matchesSafely(String s) {
        // ...
    }

    @Override
    public void describeTo(Description description) {
        // ...
    }
}

Please note that as the object we’ll test is a text, we’re parametrizing our subclass of TypeSafeMatcher with the class String. 

Now we’re ready to add our implementation:

public class IsOnlyDigits extends TypeSafeMatcher<String> {

    @Override
    protected boolean matchesSafely(String s) {
        try {
            Integer.parseInt(s);
            return true;
        } catch (NumberFormatException nfe){
            return false;
        }
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("only digits");
    }
}

As we can see, matchesSafely tries to parse our input String into an Integer. If it succeeds, it returns true. If it fails, it returns false. This satisfies our use case.

On the other side, describeTo is attaching a text that represents our expectations. We’ll see how this shows next when we use our matcher.

We only need one more thing to complete our matcher: a static method to access it, so it behaves as the rest of the built-in matchers.

So, we’ll add something like this:

public static Matcher<String> onlyDigits() {
    return new IsOnlyDigits();
}

And we’re done! Let’s see how to use this matcher in the next section.

4.2. Matcher Usage

To use our brand new matcher, we’ll create a test:

@Test
public void givenAString_whenIsOnlyDigits_thenCorrect() {
    String digits = "1234";

    assertThat(digits, onlyDigits());
}

And that’s it. This test will pass because the input String contains only digits. Remember that, to make it a little more legible, we can use the is matcher, which acts as a wrapper over any other matcher:

assertThat(digits, is(onlyDigits()));

Finally, if we ran the same test but with the input “123ABC”, the output message would be:

java.lang.AssertionError: 
Expected: only digits
     but: was "123ABC"

This is where we see the text that we appended to the describeTo method. As we may have noticed, it’s important to create a proper description of what’s expected in the test.

5. divisibleBy

So, what if we wanted to create a matcher that defines if a number is divisible by another number? For that scenario, we’ll have to store one of the parameters somewhere.

Let’s see how we can do that:

public class IsDivisibleBy extends TypeSafeMatcher<Integer> {

    private Integer divider;

    // constructors

    @Override
    protected boolean matchesSafely(Integer dividend) {
        if (divider == 0) {
            return false;
        }
        return ((dividend % divider) == 0);
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("divisible by " + divider);
    }

    public static Matcher<Integer> divisibleBy(Integer divider) {
        return new IsDivisibleBy(divider);
    }
}

Simple enough, we just added a new attribute to our class and assigned it during construction. Then, we just passed it as a parameter to our static method:

@Test
public void givenAnEvenInteger_whenDivisibleByTwo_thenCorrect() {
    Integer ten = 10;
    Integer two = 2;

    assertThat(ten, is(divisibleBy(two)));
}

@Test
public void givenAnOddInteger_whenNotDivisibleByTwo_thenCorrect() {
    Integer eleven = 11;
    Integer two = 2;

    assertThat(eleven, is(not(divisibleBy(two))));
}

And that’s it! We already have our matcher using more than one input!

6. Conclusion

Hamcrest provides matchers that cover most use cases a developer usually has to deal with when creating assertions.

What’s more, if any specific case isn’t covered, Hamcrest also gives support to create custom matchers to be used under specific scenarios – as we’ve explored here. They’re simple to create, and they are used exactly like the ones included in the library.

To get the complete implementation of these examples, please refer to the GitHub Project.

Class Loaders in Java


1. Introduction to Class Loaders

Class loaders are responsible for loading Java classes dynamically to the JVM (Java Virtual Machine) during runtime. They’re also part of the JRE (Java Runtime Environment). Hence, thanks to class loaders, the JVM doesn’t need to know about the underlying files or file systems in order to run Java programs.

Also, these Java classes aren’t loaded into memory all at once, but when required by an application. This is where class loaders come into the picture. They are responsible for loading classes into memory.

In this tutorial, we’re going to talk about different types of built-in class loaders, how they work and an introduction to our own custom implementation.

2. Types of Built-in Class Loaders

Let’s start by learning how different classes are loaded by various class loaders, using a simple example:

public void printClassLoaders() throws ClassNotFoundException {

    System.out.println("Classloader of this class:"
        + PrintClassLoader.class.getClassLoader());

    System.out.println("Classloader of Logging:"
        + Logging.class.getClassLoader());

    System.out.println("Classloader of ArrayList:"
        + ArrayList.class.getClassLoader());
}

When executed, the above method prints:

Classloader of this class:sun.misc.Launcher$AppClassLoader@18b4aac2
Classloader of Logging:sun.misc.Launcher$ExtClassLoader@3caeaf62
Classloader of ArrayList:null

As we can see, there are three different class loaders here: application, extension, and bootstrap (displayed as null).

The application class loader loads the class where the example method is contained. An application or system class loader loads our own files in the classpath.

Next, the extension one loads the Logging class. Extension class loaders load classes that are an extension of the standard core Java classes.

Finally, the bootstrap one loads the ArrayList class. A bootstrap or primordial class loader is the parent of all the others.

However, we can see that the last output, for the ArrayList, displays null. This is because the bootstrap class loader is written in native code, not Java – so it doesn’t show up as a Java class. Due to this reason, the behavior of the bootstrap class loader will differ across JVMs.

Let’s now discuss more in detail about each of these class loaders.

2.1. Bootstrap Class Loader

Java classes are loaded by an instance of java.lang.ClassLoader. However, class loaders are classes themselves. Hence, the question is, who loads the java.lang.ClassLoader itself? 

This is where the bootstrap or primordial class loader comes into the picture.

It’s mainly responsible for loading JDK internal classes, typically rt.jar and other core libraries located in $JAVA_HOME/jre/lib directory. Additionally, Bootstrap class loader serves as a parent of all the other ClassLoader instances.

This bootstrap class loader is part of the core JVM and is written in native code as pointed out in the above example. Different platforms might have different implementations of this particular class loader.
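We can observe this directly: classes loaded by the bootstrap loader report a null class loader on the Java side. Here's a minimal, self-contained check (the class name is ours, for illustration):

```java
public class BootstrapCheck {

    // Returns true if the given class was loaded by the bootstrap loader,
    // which the Java API represents as null
    static boolean isBootstrapLoaded(Class<?> clazz) {
        return clazz.getClassLoader() == null;
    }

    public static void main(String[] args) {
        System.out.println(isBootstrapLoaded(String.class));         // a core class
        System.out.println(isBootstrapLoaded(BootstrapCheck.class)); // our own class
    }
}
```

Running this prints true for String and false for our own class, which lives on the application classpath.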

2.2. Extension Class Loader

The extension class loader is a child of the bootstrap class loader and takes care of loading the extensions of the standard core Java classes so that they’re available to all applications running on the platform.

Extension class loader loads from the JDK extensions directory, usually $JAVA_HOME/lib/ext directory or any other directory mentioned in the java.ext.dirs system property.

2.3. System Class Loader

The system or application class loader, on the other hand, takes care of loading all the application-level classes into the JVM. It loads files found in the classpath environment variable, or in the -classpath or -cp command line option. Also, it’s a child of the extension class loader.
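The parent relationship between these loaders can be made visible by walking getParent() from our own class's loader upward. A small sketch (note the loader names differ between Java 8, which has an extension loader, and Java 9+, which has a platform loader instead):

```java
import java.util.ArrayList;
import java.util.List;

public class LoaderChain {

    // Walks getParent() from the loader of the given class up to the
    // bootstrap loader, which terminates the chain as null
    static List<ClassLoader> chainOf(Class<?> clazz) {
        List<ClassLoader> chain = new ArrayList<>();
        for (ClassLoader cl = clazz.getClassLoader(); cl != null; cl = cl.getParent()) {
            chain.add(cl);
        }
        return chain;
    }

    public static void main(String[] args) {
        // prints the application loader first, then its parent(s)
        chainOf(LoaderChain.class).forEach(System.out::println);
    }
}
```

For a core class like String, the chain is empty, since its loader is already the bootstrap loader (null).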

3. How do Class Loaders Work?

Class loaders are part of the Java Runtime Environment. When the JVM requests a class, the class loader tries to locate the class and load the class definition into the runtime using the fully qualified class name.

The java.lang.ClassLoader.loadClass() method is responsible for loading the class definition into runtime. It tries to load the class based on a fully qualified name.

If the class isn’t already loaded, it delegates the request to the parent class loader. This process happens recursively.

Eventually, if the parent class loader doesn’t find the class, the child class loader invokes the java.net.URLClassLoader.findClass() method to look for classes in the file system itself.

If the last child class loader isn’t able to load the class either, it throws java.lang.NoClassDefFoundError or java.lang.ClassNotFoundException.

Let’s look at an example of output when ClassNotFoundException is thrown.

java.lang.ClassNotFoundException: com.baeldung.classloader.SampleClassLoader    
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)    
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)    
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)    
    at java.lang.Class.forName0(Native Method)    
    at java.lang.Class.forName(Class.java:348)

If we go through the sequence of events right from calling java.lang.Class.forName(), we can see that it first tries to load the class through the parent class loader and then invokes java.net.URLClassLoader.findClass() to look for the class itself.

When it still doesn’t find the class, it throws a ClassNotFoundException.

There are three important features of class loaders.

3.1. Delegation Model

Class loaders follow the delegation model where on request to find a class or resource, a ClassLoader instance will delegate the search of the class or resource to the parent class loader.

Let’s say we have a request to load an application class into the JVM. The system class loader first delegates the loading of that class to its parent extension class loader which in turn delegates it to the bootstrap class loader.

Only if the bootstrap and then the extension class loader is unsuccessful in loading the class, the system class loader tries to load the class itself.

3.2. Unique Classes

As a consequence of the delegation model, it’s easy to ensure unique classes as we always try to delegate upwards.

If the parent class loader isn’t able to find the class, only then the current instance would attempt to do so itself. 

3.3. Visibility

In addition, classes loaded by child class loaders have visibility into classes loaded by their parent class loaders.

For instance, classes loaded by the system class loader have visibility into classes loaded by the extension and Bootstrap class loaders but not vice-versa.

To illustrate this, if Class A is loaded by an application class loader and class B is loaded by the extensions class loader, then both A and B classes are visible as far as other classes loaded by Application class loader are concerned.

Class B, nonetheless, is the only class visible as far as other classes loaded by the extension class loader are concerned.

4. Custom ClassLoader

The built-in class loaders suffice in most cases, where the files are already in the file system.

However, in scenarios where we need to load classes from outside the local file system, over a network for example, we may need to make use of custom class loaders.

In this section, we’ll cover some other use cases for custom class loaders, and we’ll demonstrate how to create one.

4.1. Custom Class Loaders Use-Cases

Custom class loaders are helpful for more than just loading the class during runtime. A few use cases include:

  1. Helping in modifying the existing bytecode, e.g. weaving agents
  2. Creating classes dynamically suited to the user’s needs, e.g. in JDBC, switching between different driver implementations is done through dynamic class loading.
  3. Implementing a class versioning mechanism while loading different bytecodes for classes with the same names and packages. This can be done either through a URL class loader (load jars via URLs) or custom class loaders.

There are more concrete examples where custom class loaders might come in handy.

Browsers, for instance, use a custom class loader to load executable content from a website. A browser can load applets from different web pages using separate class loaders. The applet viewer, which is used to run applets, contains a ClassLoader that accesses a website on a remote server instead of looking in the local file system.

It then loads the raw bytecode files via HTTP and turns them into classes inside the JVM. Even if these applets have the same name, they are considered different components if loaded by different class loaders.

Now that we understand why custom class loaders are relevant, let’s implement a subclass of ClassLoader to extend the default class-loading behavior.

4.2. Creating our Custom Class Loader

For illustration purposes, let’s say we need to load classes over FTP; loading such a class isn’t possible through the built-in class loaders since it isn’t present on the classpath:

public class CustomClassLoader extends ClassLoader {
    public CustomClassLoader(ClassLoader parent) {
        super(parent);
    }
    public Class<?> getClass(String name) throws ClassNotFoundException {
        byte[] b = loadClassFromFTP(name);
        return defineClass(name, b, 0, b.length);
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {

        if (name.startsWith("com.baeldung")) {
            System.out.println("Loading Class from Custom Class Loader");
            return getClass(name);
        }
        return super.loadClass(name);
    }

    private byte[] loadClassFromFTP(String fileName)  {
        // Returns a byte array from specified file.
    }
}

In the above example, we defined a custom class loader that extends the default class loader and loads files from the com.baeldung package.

We set the parent class loader in the constructor. Then we load the class using FTP specifying a fully qualified class name as an input.
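Since loadClassFromFTP() is only a stub here, we can still exercise the delegation half of the loader. The sketch below uses a simplified, hypothetical pass-through variant of the loader above to show that a name outside our package falls through to the parent and yields the very same Class instance the JVM already knows:

```java
public class CustomLoaderUsage {

    // Simplified stand-in for CustomClassLoader: everything delegates to
    // the parent, which is what happens in the example above for any
    // class outside the com.baeldung package
    static class PassThroughLoader extends ClassLoader {
        PassThroughLoader(ClassLoader parent) {
            super(parent);
        }

        @Override
        public Class<?> loadClass(String name) throws ClassNotFoundException {
            // a real implementation would fetch bytes (e.g. over FTP)
            // for its own packages before delegating
            return super.loadClass(name);
        }
    }

    // Loads the named class through our loader and checks that delegation
    // yields the exact Class instance the parent already defined
    static boolean loadsViaParent(String name) {
        try {
            PassThroughLoader loader = new PassThroughLoader(ClassLoader.getSystemClassLoader());
            return loader.loadClass(name) == Class.forName(name);
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(loadsViaParent("java.util.ArrayList"));
    }
}
```

Because the parent defines the class, two loaders that both delegate upward agree on the same Class object; only classes we define ourselves via defineClass() would be distinct.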

5. Understanding java.lang.ClassLoader

Let’s discuss a few essential methods from the java.lang.ClassLoader class to get a clearer picture of how it works.

5.1. The loadClass() Method

public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException

This method is responsible for loading the class given a name parameter. The name parameter refers to the fully qualified class name.

The Java Virtual Machine invokes loadClass() method to resolve class references setting resolve to true. However, it isn’t always necessary to resolve a class. If we only need to determine if the class exists or not, then resolve parameter is set to false.

This method serves as an entry point for the class loader.

We can try to understand the internal working of the loadClass() method from the source code of java.lang.ClassLoader:

protected Class<?> loadClass(String name, boolean resolve)
  throws ClassNotFoundException {

    synchronized (getClassLoadingLock(name)) {
        // First, check if the class has already been loaded
        Class<?> c = findLoadedClass(name);
        if (c == null) {
            long t0 = System.nanoTime();
            try {
                if (parent != null) {
                    c = parent.loadClass(name, false);
                } else {
                    c = findBootstrapClassOrNull(name);
                }
            } catch (ClassNotFoundException e) {
                // ClassNotFoundException thrown if class not found
                // from the non-null parent class loader
            }

            if (c == null) {
                // If still not found, then invoke findClass in order
                // to find the class.
                c = findClass(name);
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}

The default implementation of the method searches for classes in the following order:

  1. Invokes the findLoadedClass(String) method to see if the class is already loaded.
  2. Invokes the loadClass(String) method on the parent class loader.
  3. Invokes the findClass(String) method to find the class.

5.2. The defineClass() Method

protected final Class<?> defineClass(
  String name, byte[] b, int off, int len) throws ClassFormatError

This method is responsible for the conversion of an array of bytes into an instance of a class. And before we use the class, we need to resolve it.

In case the data didn’t contain a valid class, it throws a ClassFormatError.

Also, we can’t override this method since it’s marked as final.

5.3. The findClass() Method

protected Class<?> findClass(
  String name) throws ClassNotFoundException

This method finds the class with the fully qualified name as a parameter. We need to override this method in custom class loader implementations that follow the delegation model for loading classes.

Also, loadClass() invokes this method if the parent class loader couldn’t find the requested class.

The default implementation throws a ClassNotFoundException if no parent of the class loader finds the class.

5.4. The getParent() Method

public final ClassLoader getParent()

This method returns the parent class loader for delegation.

Some implementations like the one seen before in Section 2. use null to represent the bootstrap class loader.

5.5. The getResource() Method

public URL getResource(String name)

This method tries to find a resource with the given name.

It will first delegate to the parent class loader for the resource. If the parent is null, the path of the class loader built into the virtual machine is searched.

If that fails, then the method will invoke findResource(String) to find the resource. The resource name specified as an input can be relative or absolute to the classpath.

It returns a URL object for reading the resource, or null if the resource could not be found or if the invoker doesn’t have adequate privileges to return the resource.

It’s important to note that Java loads resources from the classpath.

Finally, resource loading in Java is considered location-independent as it doesn’t matter where the code is running as long as the environment is set to find the resources.

6. Context Classloaders

In general, context class loaders provide an alternative method to the class-loading delegation scheme introduced in J2SE.

Like we’ve learned before, classloaders in a JVM follow a hierarchical model such that every class loader has a single parent with the exception of the bootstrap class loader.

However, sometimes when JVM core classes need to dynamically load classes or resources provided by application developers, we might encounter a problem.

For example, in JNDI the core functionality is implemented by bootstrap classes in rt.jar. But these JNDI classes may load JNDI providers implemented by independent vendors (deployed in the application classpath). This scenario calls for the bootstrap class loader (a parent) to load classes that are only visible to the application class loader (a child).

J2SE delegation doesn’t work here and to get around this problem, we need to find alternative ways of class loading. And it can be achieved using thread context loaders.

The java.lang.Thread class has a method getContextClassLoader() that returns the ContextClassLoader for the particular thread. The ContextClassLoader is set by the creator of the thread to be used when loading classes and resources.

If the value isn’t set, then it defaults to the class loader context of the parent thread.
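This inheritance is easy to observe. The sketch below captures the context class loader seen inside a freshly created thread; since the main thread's context loader defaults to the system class loader, the new thread inherits it:

```java
public class ContextLoaderDemo {

    // Captures the context class loader observed inside a newly
    // created thread
    static ClassLoader contextLoaderOfNewThread() {
        ClassLoader[] seen = new ClassLoader[1];
        Thread t = new Thread(() -> seen[0] = Thread.currentThread().getContextClassLoader());
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        // by default the value is inherited from the creating thread,
        // which for the main thread is the system class loader
        System.out.println(contextLoaderOfNewThread() == ClassLoader.getSystemClassLoader());
    }
}
```

A framework that needs different behavior can call Thread.setContextClassLoader() before handing resources to library code.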

7. Conclusion

Class loaders are essential to execute a Java program. We’ve provided a good introduction as part of this article.

We talked about different types of class loaders namely – Bootstrap, Extensions and System class loaders. Bootstrap serves as a parent for all of them and is responsible for loading the JDK internal classes. Extensions and system, on the other hand, loads classes from the Java extensions directory and classpath respectively.

Then we talked about how class loaders work and we discussed some features such as delegation, visibility, and uniqueness followed by a brief explanation of how to create a custom one. Finally, we provided an introduction to Context class loaders.

Code samples, as always, can be found over on GitHub.

Spring Security with Thymeleaf


1. Overview

In this quick tutorial, we’ll focus on Spring Security with Thymeleaf. We’re going to create a Spring Boot application where we’ll demonstrate the usage of security dialect.

Our choice for frontend technology is Thymeleaf – a modern, server-side web templating engine, with good integration with Spring MVC framework. For more details, please look at our intro article on it.

Lastly, the Spring Security Dialect is a Thymeleaf extras module which, naturally, helps integrate both of these together.

We’re going to be using the simple project we built in our Spring Boot tutorial article; we also have a Thymeleaf tutorial with Spring, where the standard Thymeleaf configuration can be found.

2. Dependencies

First of all, let’s add the new dependency to our Maven pom.xml:

<dependency>
    <groupId>org.thymeleaf.extras</groupId>
    <artifactId>thymeleaf-extras-springsecurity4</artifactId>
</dependency>

It’s recommended always to use the latest version – which we can get over on Maven Central.

3. Spring Security Configuration

Next, let’s define the configuration for Spring Security.

We also need at least two different users to demonstrate the security dialect usage:

@Configuration
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    // [...] 
    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth
          .inMemoryAuthentication()
          .withUser("user").password("password").roles("USER")
          .and()
          .withUser("admin").password("admin").roles("ADMIN");
    }
}

As we can see, in configureGlobal(AuthenticationManagerBuilder auth) we define two users with username and password. We can use these to access our application.

Our users have different roles: USER and ADMIN respectively, so that we can present specific content to them based on their roles.

4. Security Dialect

The Spring Security dialect allows us to conditionally display content based on user roles, permissions or other security expressions. It also gives us access to the Spring Authentication object.

Let’s look at the index page, which contains examples of security dialect:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org"
      xmlns:sec="http://www.thymeleaf.org/extras/spring-security">
    <head>
        <title>Welcome to Spring Security Thymeleaf tutorial</title>
    </head>
    <body>
        <h2>Welcome</h2>
        <p>Spring Security Thymeleaf tutorial</p>
        <div sec:authorize="hasRole('USER')">Text visible to user.</div>
        <div sec:authorize="hasRole('ADMIN')">Text visible to admin.</div>
        <div sec:authorize="isAuthenticated()">
            Text visible only to authenticated users.
        </div>
        Authenticated username:
        <div sec:authentication="name"></div>
        Authenticated user roles:
        <div sec:authentication="principal.authorities"></div>
    </body>
</html>

We can see the attributes specific to the Spring Security Dialect: sec:authorize and sec:authentication.

Let’s discuss these, one by one.

4.1. Understanding sec:authorize

Simply put, we use sec:authorize attribute to control displayed content.

For example, if we want to only show content to a user with the role USER – we can do: <div sec:authorize="hasRole('USER')">.

And, if we want to broaden the access to all authenticated users we can use the following expression:

<div sec:authorize="isAuthenticated()">.

4.2. Understanding sec:authentication

The Spring Security Authentication interface exposes useful methods concerning the authenticated principal or authentication request.

To access an authentication object within Thymeleaf, we can simply use <div sec:authentication="name"> or <div sec:authentication="principal.authorities">.

The former gives us access to the name of the authenticated user, while the latter allows us to access the roles of the authenticated user.

5. Summary

In this article, we used the Spring Security support in Thymeleaf, in a simple Spring Boot application.

As always, a working version of the code shown in this article is available in our GitHub repository.


Introduction to JavaFX


1. Introduction

JavaFX is a library for building rich client applications with Java. It provides an API for designing GUI applications that run on almost every device with Java support.

In this tutorial, we’re going to cover some of its key capabilities and functionality.

2. JavaFX API

Starting with Java 8, no additional setup is necessary to start working with the JavaFX library.

2.1. Architecture

JavaFX uses a hardware-accelerated graphics pipeline for rendering, known as Prism. What’s more, to fully accelerate the graphics usage, it leverages either a software or hardware rendering mechanism, by internally using DirectX and OpenGL.

JavaFX has a platform-dependent Glass windowing toolkit layer to connect to the native operating system. It uses the operating system’s event queue to schedule thread usage. Also, it asynchronously handles windows, events, and timers.

The Media and Web engines enable media playback and HTML/CSS support.

Let’s see what the main structure of a JavaFX application looks like:

Here, we notice two main containers:

  • Stage is the main container and the entry point of the application. It represents the main window and is passed as an argument of the start() method.
  • Scene is a container for holding the UI elements, such as Image Views, Buttons, Grids, TextBoxes.

The Scene can be replaced or switched to another Scene. It represents a graph of hierarchical objects, which is known as a Scene Graph. Each element in that hierarchy is called a node, and each node has its own ID, style, effects, event handlers, and state.

Additionally, the Scene also contains the layout containers, images, media.

2.2. Threads

At the system level, the JVM creates separate threads for running and rendering the application:

  • Prism rendering thread – responsible for rendering the Scene Graph separately.
  • Application thread – is the main thread of any JavaFX application. All the live nodes and components are attached to this thread.

2.3. Lifecycle

The javafx.application.Application class has the following lifecycle methods:

  • init() – is called after the application instance is created. At this point, the JavaFX API isn’t ready yet, so we can’t create graphical components here.
  • start(Stage stage) – all the graphical components are created here. Also, the main thread for the graphical activities starts here.
  • stop() – is called before the application shutdown; for example, when a user closes the main window. It’s useful to override this method for some cleanup before the application termination.

The static launch() method starts the JavaFX application.

2.4. FXML

JavaFX uses a special FXML markup language to create the view interfaces.

This provides an XML based structure for separating the view from the business logic. XML is more suitable here, as it’s able to quite naturally represent a Scene Graph hierarchy.

Finally, to load up the .fxml file, we use the FXMLLoader class, which results in the object graph of the scene hierarchy.

3. Getting Started

To get practical, let’s build a small application that allows searching through a list of people.

First, let’s add a Person model class – to represent our domain:

public class Person {
    private SimpleIntegerProperty id;
    private SimpleStringProperty name;
    private SimpleBooleanProperty isEmployed;

    // getters, setters
}

Notice how, to wrap up the int, String and boolean values, we’re using the SimpleIntegerProperty, SimpleStringProperty, SimpleBooleanProperty classes in the javafx.beans.property package.


Next, let’s create the Main class that extends the Application abstract class:

public class Main extends Application {

    @Override
    public void start(Stage primaryStage) throws Exception {
        FXMLLoader loader = new FXMLLoader(
          Main.class.getResource("/SearchController.fxml"));
        AnchorPane page = (AnchorPane) loader.load();
        Scene scene = new Scene(page);

        primaryStage.setTitle("Title goes here");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

Our main class overrides the start() method, which is the entry point for the program.

Then, the FXMLLoader loads up the object graph hierarchy from SearchController.fxml into the AnchorPane.

After creating a new Scene, we set it on the primary Stage. We also set the title for our window and show() it.

Note that it’s useful to include the main() method to be able to run the JAR file without the JavaFX Launcher.

3.1. FXML View

Let’s now dive deeper into the SearchController XML file.

For our searching application, we’ll add a text field to enter the keyword and the search button:

<AnchorPane 
  xmlns:fx="http://javafx.com/fxml"
  xmlns="http://javafx.com/javafx"
  fx:controller="com.baeldung.view.SearchController">
    <children>

        <HBox id="HBox" alignment="CENTER" spacing="5.0">
            <children>
                <Label text="Search Text:"/>
                <TextField fx:id="searchField"/>
                <Button fx:id="searchButton"/>
            </children>
        </HBox>

    </children>
</AnchorPane>

AnchorPane is the root container here, and the first node of the graph hierarchy. While resizing the window, it will reposition the child nodes relative to their anchor points. The fx:controller attribute wires the Java class with the markup.

There are some other built-in layouts available:

  • BorderPane – divides the layout into five sections: top, right, bottom, left, center
  • HBox – arrange the child components in a horizontal panel
  • VBox – the child nodes are arranged in a vertical column
  • GridPane – useful for creating a grid with rows and columns

In our example, inside of the horizontal HBox panel, we used a Label to place text, a TextField for the input, and a Button. With fx:id we mark the elements so that we can use them later in the Java code.

Then, to map them to the Java fields – we use the @FXML annotation:

public class SearchController {
 
    @FXML
    private TextField searchField;
    @FXML
    private Button searchButton;
    
    @FXML
    private void initialize() {
        // search panel
        searchButton.setText("Search");
        searchButton.setOnAction(event -> loadData());
        searchButton.setStyle("-fx-background-color: #457ecd; -fx-text-fill: #ffffff;");
    }
}

After populating the @FXML annotated fields, initialize() will be called automatically. Here, we’re able to perform further actions over the UI components – like registering event listeners, adding style or changing the text property.

Finally, all of this logic described here will produce the following window:

4. Binding API

Now that the visual aspects are handled, let’s start looking at binding data. 

The binding API provides some interfaces that notify objects when a value change of another object occurs.

We can bind a value using the bind() method or by adding listeners.

Unidirectional binding provides a binding for one direction only:

searchLabel.textProperty().bind(searchField.textProperty());

Here, any change in the search field will update the text value of the label.

By comparison, bidirectional binding synchronizes the values of two properties in both directions.

An alternative way of binding the fields is to use ChangeListeners:

searchField.textProperty().addListener((observable, oldValue, newValue) -> {
    searchLabel.setText(newValue);
});

The Observable interface allows observing the value of the object for changes.

To exemplify this, one of the most commonly used Observable subtypes is the javafx.collections.ObservableList<T> interface:

ObservableList<Image> masterData = FXCollections.observableArrayList();

Here, any model change like insertion, update or removal of the elements, will notify the UI controls immediately.

5. Concurrency

Working with the UI components in a scene graph isn’t thread-safe; they may only be accessed from the Application thread. The javafx.concurrent package is here to help with multithreading.

Let’s see how we can perform the data search in the background thread:

Task<ObservableList<Person>> task = new Task<ObservableList<Person>>() {
    @Override
    protected ObservableList<Person> call() throws Exception {
        updateMessage("Loading data");
        return FXCollections.observableArrayList(masterData
          .stream()
          .filter(value -> value.getName().toLowerCase().contains(searchText))
          .collect(Collectors.toList()));
    }
};

Here, we create a one-time task javafx.concurrent.Task object and override the call() method.

The call() method runs entirely on the background thread and returns the result to the Application thread. This means any manipulation of the UI components within this method will throw a runtime exception.

However, updateProgress(), updateMessage() can be called to update Application thread items. When the task state transitions to SUCCEEDED state, the onSucceeded() event handler is called from the Application thread:

task.setOnSucceeded(event -> {
    masterData = task.getValue();
    // update other UI components
});

The Task is Runnable, so to start it we just need to start a new Thread with the task as its parameter:

Thread th = new Thread(task);
th.setDaemon(true);
th.start();

The setDaemon(true) flag indicates that the thread will terminate after finishing the work.

6. Event Handling

We can describe an event as an action that might be interesting to the application.

For example, user actions like mouse clicks, key presses, and window resizing are represented by the javafx.event.Event class or one of its subclasses.

Also, we distinguish three types of events:

  • InputEvent – all the types of key and mouse actions like KEY_PRESSED, KEY_TYPED, KEY_RELEASED or MOUSE_PRESSED, MOUSE_RELEASED
  • ActionEvent – represents a variety of actions like firing a Button or finishing a KeyFrame
  • WindowEvent – WINDOW_SHOWING, WINDOW_SHOWN

To demonstrate, the code fragment below catches the event of pressing the Enter key over the searchField:

searchField.setOnKeyPressed(event -> {
    if (event.getCode().equals(KeyCode.ENTER)) {
        //search and load some data
    }
});

7. Style

We can change the UI of the JavaFX application by applying a custom design to it.

By default, JavaFX uses modena.css as a CSS resource for the whole application. This is a part of the jfxrt.jar.

To override the default style, we can add a stylesheet to the scene:

scene.getStylesheets().add("/search.css");

We can also use inline style; for example, to set a style property for a specific node:

searchButton.setStyle("-fx-background-color: slateblue; -fx-text-fill: white;");

8. Conclusion

This brief write-up covers the basics of JavaFX API. We went through the internal structure and introduced key capabilities of its architecture, lifecycle, and components.

As a result, we learned and are now able to create a simple GUI application.

And, as always, the full source code of the tutorial is available over on GitHub.

Show Hibernate/JPA SQL Statements from Spring Boot


1. Overview

Spring JDBC and JPA provide abstractions over native JDBC APIs allowing developers to do away with native SQL queries. However, often we need to see those auto-generated SQL queries and the order in which they were executed for debugging purposes.

In this quick tutorial, we’re going to look at different ways of logging these SQL queries in Spring Boot.

2. Logging JPA Queries

2.1. To Standard Output

The simplest way to dump the queries to standard out is to add the following to application.properties:

spring.jpa.show-sql=true

To beautify or pretty print the SQL, we can add:

spring.jpa.properties.hibernate.format_sql=true

While this is extremely simple, it’s not recommended as it directly unloads everything to standard output without any optimizations of a logging framework.

Moreover, it doesn’t log the parameters of prepared statements.

2.2. Via Loggers

Now, let’s see how we can log the SQL statements by configuring loggers in the properties file:

logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE

The first line logs the SQL queries, and the second statement logs the prepared statement parameters.

The pretty print property will work in this configuration as well.

By setting these properties, logs will be sent to the configured appender. By default, Spring Boot uses logback with a standard out appender.
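If we'd rather keep SQL output out of the console entirely, we can route those two loggers to their own appender. A minimal logback-spring.xml sketch (the appender name and file name here are illustrative, not prescribed by Spring Boot):

```xml
<!-- logback-spring.xml -->
<configuration>
    <appender name="SQL_FILE" class="ch.qos.logback.core.FileAppender">
        <file>sql.log</file>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- route Hibernate's SQL and bind-parameter loggers to their own file -->
    <logger name="org.hibernate.SQL" level="DEBUG">
        <appender-ref ref="SQL_FILE"/>
    </logger>
    <logger name="org.hibernate.type.descriptor.sql.BasicBinder" level="TRACE">
        <appender-ref ref="SQL_FILE"/>
    </logger>
</configuration>
```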

3. Logging JdbcTemplate Queries

To configure statement logging when using JdbcTemplate, we need the following properties:

logging.level.org.springframework.jdbc.core.JdbcTemplate=DEBUG
logging.level.org.springframework.jdbc.core.StatementCreatorUtils=TRACE

Similar to the JPA logging configuration, the first line is for logging statements and the second one is to log parameters of prepared statements.

4. How Does it Work?

The Spring / Hibernate classes, which generate SQL statements and set the parameters, already contain the code for logging them.

However, the level of those log statements is set to DEBUG and TRACE respectively, which is lower than the default level in Spring Boot – INFO.

By adding these properties, we are just setting those loggers to the required level.

5. Conclusion

In this short article, we’ve looked at the ways to log SQL queries in Spring Boot.

If we choose to configure multiple appenders, we can also separate SQL statements and other log statements into different log files to keep things clean.
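For instance, a logback-spring.xml fragment along these lines could route Hibernate’s SQL logger to its own file. Note that the file path and appender name here are illustrative, not prescribed by Spring Boot:

```xml
<configuration>
    <!-- hypothetical file appender dedicated to SQL statements -->
    <appender name="SQL_FILE" class="ch.qos.logback.core.FileAppender">
        <file>logs/sql.log</file>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- additivity="false" keeps SQL statements out of the root appender -->
    <logger name="org.hibernate.SQL" level="DEBUG" additivity="false">
        <appender-ref ref="SQL_FILE"/>
    </logger>
</configuration>
```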

Inheritance and Composition (Is-a vs Has-a relationship) in Java


1. Overview

Inheritance and composition — along with abstraction, encapsulation, and polymorphism — are cornerstones of object-oriented programming (OOP).

In this tutorial, we’ll cover the basics of inheritance and composition, and we’ll focus strongly on spotting the differences between the two types of relationships.

2. Inheritance’s Basics

Inheritance is a powerful yet overused and misused mechanism.

Simply put, with inheritance, a base class (a.k.a. base type) defines the state and behavior common for a given type and lets the subclasses (a.k.a. subtypes) provide specialized versions of that state and behavior.

To have a clear idea on how to work with inheritance, let’s create a naive example: a base class Person that defines the common fields and methods for a person, while the subclasses Waitress and Actress provide additional, fine-grained method implementations.

Here’s the Person class:

public class Person {
    private final String name;

    // other fields, standard constructors, getters
}

And these are the subclasses:

public class Waitress extends Person {

    public String serveStarter(String starter) {
        return "Serving a " + starter;
    }
    
    // additional methods/constructors
}
public class Actress extends Person {
    
    public String readScript(String movie) {
        return "Reading the script of " + movie;
    } 
    
    // additional methods/constructors
}

In addition, let’s create a unit test to verify that instances of the Waitress and Actress classes are also instances of Person, thus showing that the “is-a” condition is met at the type level:

@Test
public void givenWaitressInstance_whenCheckedType_thenIsInstanceOfPerson() {
    assertThat(new Waitress("Mary", "mary@domain.com", 22))
      .isInstanceOf(Person.class);
}
    
@Test
public void givenActressInstance_whenCheckedType_thenIsInstanceOfPerson() {
    assertThat(new Actress("Susan", "susan@domain.com", 30))
      .isInstanceOf(Person.class);
}

It’s important to stress here the semantic facet of inheritance. Aside from reusing the implementation of the Person class, we’ve created a well-defined “is-a” relationship between the base type Person and the subtypes Waitress and Actress. Waitresses and actresses are, effectively, persons.

This may cause us to ask: in which use cases is inheritance the right approach to take?

If subtypes fulfill the “is-a” condition and mainly provide additive functionality further down the classes hierarchy, then inheritance is the way to go.

Of course, method overriding is allowed as long as the overridden methods preserve the base type/subtype substitutability promoted by the Liskov Substitution Principle.

Additionally, we should keep in mind that the subtypes inherit the base type’s API, which in some cases may be overkill or merely undesirable.

Otherwise, we should use composition instead.
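To make the substitutability point concrete, here’s a minimal, self-contained sketch. The class and method names are ours, chosen for illustration rather than taken from the article’s example:

```java
// Minimal sketch: an override that preserves base-type substitutability.
class Employee {
    public String describe() {
        return "an employee";
    }
}

class Manager extends Employee {
    // The override specializes behavior without breaking the base contract:
    // callers holding an Employee reference still get a valid description.
    @Override
    public String describe() {
        return "an employee who manages a team";
    }
}

public class LspDemo {
    public static void main(String[] args) {
        Employee e = new Manager(); // the subtype substitutes for the base type
        System.out.println(e.describe());
    }
}
```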

3. Inheritance in Design Patterns

While the consensus is that we should favor composition over inheritance whenever possible, there are a few typical use cases where inheritance has its place.

3.1. The Layer Supertype Pattern

In this case, we use inheritance to move common code to a base class (the supertype), on a per-layer basis.

Here’s a basic implementation of this pattern in the domain layer:

public class Entity {
    
    protected long id;
    
    // setters
}
public class User extends Entity {
    
    // additional fields and methods   
}

We can apply the same approach to the other layers in the system, such as the service and persistence layers.

3.2. The Template Method Pattern

In the template method pattern, we can use a base class to define the invariant parts of an algorithm, and then implement the variant parts in the subclasses:

public abstract class ComputerBuilder {
    
    public final void buildComputer() {
        addProcessor();
        addMemory();
    }
    
    public abstract void addProcessor();
    
    public abstract void addMemory();
}
public class StandardComputerBuilder extends ComputerBuilder {

    @Override
    public void addProcessor() {
        // method implementation
    }
    
    @Override
    public void addMemory() {
        // method implementation
    }
}
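A quick usage sketch shows the fixed algorithm in the base class driving the subclass steps. The printed messages are illustrative stand-ins for the omitted method implementations:

```java
// Self-contained sketch of the template method pattern described above.
abstract class ComputerBuilder {

    // The invariant part of the algorithm: the order of steps is fixed here.
    public final void buildComputer() {
        addProcessor();
        addMemory();
    }

    public abstract void addProcessor();

    public abstract void addMemory();
}

class StandardComputerBuilder extends ComputerBuilder {

    @Override
    public void addProcessor() {
        System.out.println("Adding a standard processor");
    }

    @Override
    public void addMemory() {
        System.out.println("Adding standard memory");
    }
}

public class TemplateMethodDemo {
    public static void main(String[] args) {
        // The subclass supplies the steps; the base class controls the order.
        new StandardComputerBuilder().buildComputer();
    }
}
```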

4. Composition’s Basics

Composition is another mechanism provided by OOP for reusing implementation.

In a nutshell, composition allows us to model objects that are made up of other objects, thus defining a “has-a” relationship between them.

Furthermore, composition is the strongest form of association, which means that the objects contained by another object are destroyed too when their containing object is destroyed.

To better understand how composition works, let’s suppose that we need to work with objects that represent computers.

A computer is composed of different parts, including the microprocessor, the memory, a sound card and so forth, so we can model both the computer and each of its parts as individual classes.

Here’s how a simple implementation of the Computer class might look:

public class Computer {

    private Processor processor;
    private Memory memory;
    private SoundCard soundCard;

    // standard getters/setters/constructors
    
    public Optional<SoundCard> getSoundCard() {
        return Optional.ofNullable(soundCard);
    }
}

The following classes model a microprocessor, the memory, and a sound card (interfaces are omitted for brevity’s sake):

public class StandardProcessor implements Processor {

    private String model;
    
    // standard getters/setters
}
public class StandardMemory implements Memory {
    
    private String brand;
    private String size;
    
    // standard constructors, getters, toString
}
public class StandardSoundCard implements SoundCard {
    
    private String brand;

    // standard constructors, getters, toString
}

It’s easy to understand the motivations behind favoring composition over inheritance. In every scenario where it’s possible to establish a semantically correct “has-a” relationship between a given class and others, composition is the right choice to make.

In the above example, Computer meets the “has-a” condition with the classes that model its parts.

It’s also worth noting that in this case, the containing Computer object has ownership of the contained objects if and only if the objects can’t be reused within another Computer object. If they can, we’d be using aggregation, rather than composition, where ownership isn’t implied.
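Putting the pieces together, here’s a condensed, self-contained version of the design above, with the fields trimmed to a single part for brevity:

```java
// Condensed "has-a" sketch: a Computer owns a Processor passed via its constructor.
interface Processor {
    String getModel();
}

class StandardProcessor implements Processor {
    private final String model;

    StandardProcessor(String model) {
        this.model = model;
    }

    @Override
    public String getModel() {
        return model;
    }
}

class Computer {
    private final Processor processor; // the contained part

    Computer(Processor processor) {
        this.processor = processor;
    }

    String describe() {
        return "Computer with processor " + processor.getModel();
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        // Computer depends on the Processor interface, not a concrete class.
        Computer computer = new Computer(new StandardProcessor("Intel I3"));
        System.out.println(computer.describe());
    }
}
```

Because Computer only knows the Processor interface, we could hand it any other implementation, or a mock in a unit test, without touching the class itself.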

5. Composition without Abstraction

Alternatively, we could’ve defined the composition relationship by hard-coding the dependencies of the Computer class, instead of declaring them in the constructor:

public class Computer {

    private StandardProcessor processor
      = new StandardProcessor("Intel I3");
    private StandardMemory memory
      = new StandardMemory("Kingston", "1TB");
    
    // additional fields / methods
}

Of course, this would be a rigid, tightly-coupled design, as we’d be making Computer strongly dependent on specific implementations of Processor and Memory.

We wouldn’t be taking advantage of the level of abstraction provided by interfaces and dependency injection.

With the initial design based on interfaces, we get a loosely-coupled design, which is also easier to test.

6. Conclusion

In this article, we learned the fundamentals of inheritance and composition in Java, and we explored in depth the differences between the two types of relationships (“is-a” vs. “has-a”).

As always, all the code samples shown in this tutorial are available over on GitHub.

Spring Boot Gradle Plugin


1. Overview

The Spring Boot Gradle plugin helps us manage Spring Boot dependencies, as well as package and run our application when using Gradle as a build tool.

In this tutorial, we’ll discuss how we can add and configure the plugin, and then we’ll see how to build and run a Spring Boot project.

2. Build File Configuration

First, we need to add the Spring Boot plugin to our build.gradle file by including it in our plugins section:

plugins {
    id "org.springframework.boot" version "2.0.1.RELEASE"
}

If we’re using a Gradle version earlier than 2.1 or we need dynamic configuration, we can add it like this instead:

buildscript {
    ext {
        springBootVersion = '2.0.1.RELEASE'
    }
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath(
          "org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
    }
}

apply plugin: 'org.springframework.boot'

3. Packaging our Application

We can package our application to an executable archive (jar or war file) by building it using the build command:

./gradlew build

As a result, the generated executable archive will be placed in the build/libs directory.

If we want to generate an executable jar file, then we also need to apply the java plugin:

apply plugin: 'java'

On the other hand, if we need a war file, we’ll apply the war plugin:

apply plugin: 'war'

Building the application will generate executable archives for both Spring Boot 1.x and 2.x. However, for each version, Gradle triggers different tasks.

Next, let’s have a closer look at the build process for each Boot version.

3.1. Spring Boot 2.x

In Boot 2.x, the bootJar and bootWar tasks are responsible for packaging the application.

The bootJar task is responsible for creating the executable jar file. This is created automatically once the java plugin is applied.

Let’s see how we can execute the bootJar task directly:

./gradlew bootJar

Similarly, bootWar generates an executable war file and gets created once the war plugin is applied.

We can execute the bootWar task using:

./gradlew bootWar

Note that for Spring Boot 2.x, we need to use Gradle 4.0 or later.

We can also configure both tasks. For example, let’s set the main class by using the mainClassName property:

bootJar {
    mainClassName = 'com.baeldung.Application'
}

Alternatively, we can use the same property from the Spring Boot DSL:

springBoot {
    mainClassName = 'com.baeldung.Application'
}
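For reference, a minimal complete build.gradle for a Boot 2.x project combining the snippets above might look like the following. The dependency line is illustrative, not part of the original example:

```groovy
plugins {
    id 'java'
    id 'org.springframework.boot' version '2.0.1.RELEASE'
}

repositories {
    mavenCentral()
}

bootJar {
    mainClassName = 'com.baeldung.Application'
}

dependencies {
    // illustrative starter; its version is managed automatically
    // if the io.spring.dependency-management plugin is also applied
    compile 'org.springframework.boot:spring-boot-starter-web'
}
```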

3.2. Spring Boot 1.x

With Spring Boot 1.x, bootRepackage is responsible for creating the executable archive (jar or war file, depending on the configuration).

We can execute the bootRepackage task directly using:

./gradlew bootRepackage

Similar to the Boot 2.x version, we can add configurations to the bootRepackage task in our build.gradle:

bootRepackage {
    mainClass = 'com.example.demo.Application'
}

We can also disable the bootRepackage task by setting the enabled option to false:

bootRepackage {
    enabled = false
}

4. Running our Application

After building the application, we can just run it by using the java -jar command on the generated executable jar file:

java -jar build/libs/demo.jar

Spring Boot Gradle plugin also provides us with the bootRun task which enables us to run the application without the need to build it first:

./gradlew bootRun

The bootRun task can be simply configured in build.gradle.

For example, we can define the main class:

bootRun {
    main = 'com.example.demo.Application'
}

5. Relation with Other Plugins

5.1. Dependency Management Plugin

Spring Boot 1.x used to apply the dependency management plugin automatically. This would import the Spring Boot dependencies BOM and act similarly to dependency management for Maven.

But since Spring Boot 2.x, we need to apply it explicitly in our build.gradle if we need this functionality:

apply plugin: 'io.spring.dependency-management'

5.2. Java Plugin

When we apply the java plugin, the Spring Boot Gradle plugin takes multiple actions like:

  • creating a bootJar task, which we can use to generate an executable jar file
  • creating a bootRun task, which we can use to run our application directly
  • disabling the jar task

5.3. War Plugin

Similarly, when we apply the war plugin, that results in:

  • creating the bootWar task, which we can use to generate an executable war file
  • disabling the war task

6. Conclusion

In this quick tutorial, we learned about the Spring Boot Gradle Plugin and its different tasks.

Also, we discussed how it interacts with other plugins.

Reading from a File in Kotlin


1. Overview

In this quick tutorial, we’ll learn about the various ways of reading a file in Kotlin.

We’ll cover both use cases of reading the entire file as a String, as well as reading it into a list of individual lines.

2. Reading a File

Let’s first create an input file that will be read by Kotlin. We create a file called Kotlin.in and place it in a directory that can be accessed by our code.

The file’s contents could be:

Hello to Kotlin. It's:
1. Concise
2. Safe
3. Interoperable
4. Tool-friendly

Now let’s look at the different ways in which we can read this file. We should pass the full path of the file created above as the input to each of the following methods.

2.1. forEachLine

Reads a file line by line using the specified charset (default is UTF-8) and calls an action for each line:

fun readFileLineByLineUsingForEachLine(fileName: String) 
  = File(fileName).forEachLine { println(it) }

2.2. useLines

Calls a given block callback, giving it a sequence of all the lines in a file.

Once the processing is complete, the file gets closed:

fun readFileAsLinesUsingUseLines(fileName: String): List<String>
  = File(fileName).useLines { it.toList() }

2.3. bufferedReader

Returns a new BufferedReader for reading the content of the file.

Once we have a BufferedReader, we can read all the lines in it:

fun readFileAsLinesUsingBufferedReader(fileName: String): List<String>
  = File(fileName).bufferedReader().readLines()

2.4. readLines

Directly reads the contents of the file as a list of lines:

fun readFileAsLinesUsingReadLines(fileName: String): List<String> 
  = File(fileName).readLines()

This method isn’t recommended for huge files.

2.5. inputStream

Constructs a new FileInputStream for the file and returns it as a result.

Once we have the input stream, we can convert that into bytes, and then into a complete String.

fun readFileAsTextUsingInputStream(fileName: String)
  = File(fileName).inputStream().readBytes().toString(Charsets.UTF_8)

2.6. readText

Reads the entire content of the file as a String using the specified charset (default is UTF-8):

fun readFileDirectlyAsText(fileName: String): String 
  = File(fileName).readText(Charsets.UTF_8)

This method isn’t recommended for huge files and has an internal limitation of 2 GB file size.

3. Conclusion

In this article, we saw the various ways of reading a file in Kotlin. Depending on the use case, we can either opt for reading the file line-by-line or reading it entirely as a text.

The source code for this article can be found in the following GitHub repo.
