
Spring Remoting with RMI


1. Overview

Java Remote Method Invocation (RMI) allows invoking methods on an object residing in a different Java Virtual Machine. It is a well-established technology, yet a little cumbersome to use, as we can see in the official Oracle trail dedicated to the subject.

In this quick article, we’ll explore how Spring Remoting allows us to leverage RMI in an easier and cleaner way.

This article also completes the overview of Spring Remoting. You can find details about other supported technologies in the previous installments: HTTP Invokers, JMS, AMQP, Hessian, and Burlap.

2. Maven Dependencies

As we did in our previous articles, we’re going to set up a couple of Spring Boot applications: a server that exposes the remote callable object and a client that invokes the exposed service.

Everything we need is in the spring-context jar, so we can bring it in using whatever Spring Boot helper we prefer, since our main goal is just to have the main libraries available.

Let’s now go forward with the usual spring-boot-starter-web, remembering to remove the Tomcat dependency to exclude the embedded web server:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>

3. Server Application

We’ll start by declaring an interface that defines a service to book a cab ride, which will eventually be exposed to clients:

public interface CabBookingService {
    Booking bookRide(String pickUpLocation) throws BookingException;
}

Then we’ll define a bean that implements the interface. This is the bean that will actually execute the business logic on the server:

@Bean 
CabBookingService bookingService() {
    return new CabBookingServiceImpl();
}

Let’s continue by declaring the exporter that makes the service available to clients. In this case, we’ll use the RmiServiceExporter:

@Bean 
RmiServiceExporter exporter(CabBookingService implementation) {
    Class<CabBookingService> serviceInterface = CabBookingService.class;
    RmiServiceExporter exporter = new RmiServiceExporter();
    exporter.setServiceInterface(serviceInterface);
    exporter.setService(implementation);
    exporter.setServiceName(serviceInterface.getSimpleName());
    exporter.setRegistryPort(1099); 
    return exporter;
}

Through setServiceInterface() we provide a reference to the interface that will be made remotely callable.

We should also provide a reference to the object actually executing the method with setService(). We could then provide the port of the RMI registry available on the machine where the server runs, if we don’t want to use the default port 1099.

We should also set a service name that allows identifying the exposed service in the RMI registry.

With the given configuration, the client will be able to contact the CabBookingService at the following URL: rmi://HOST:1099/CabBookingService.

Let’s finally start the server. We don’t even need to start the RMI registry ourselves, because Spring will do that automatically for us if such a registry is not available.
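For completeness, here’s a minimal sketch of what the server bootstrap could look like; the RmiServer class name and the placement of the beans are our own assumptions, not something Spring prescribes:

@SpringBootApplication
public class RmiServer {

    // the CabBookingService and RmiServiceExporter beans shown above
    // can be declared here or in an imported @Configuration class

    public static void main(String[] args) {
        SpringApplication.run(RmiServer.class, args);
    }
}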

4. Client Application

Let’s now write the client application.

We start by declaring the RmiProxyFactoryBean, which will create a bean that has the same interface exposed by the service running on the server side and that will transparently route the invocations it receives to the server:

@Bean 
RmiProxyFactoryBean service() {
    RmiProxyFactoryBean rmiProxyFactory = new RmiProxyFactoryBean();
    rmiProxyFactory.setServiceUrl("rmi://localhost:1099/CabBookingService");
    rmiProxyFactory.setServiceInterface(CabBookingService.class);
    return rmiProxyFactory;
}

Let’s then write some simple code that starts up the client application and uses the proxy defined in the previous step:

public static void main(String[] args) throws BookingException {
    CabBookingService service = SpringApplication
      .run(RmiClient.class, args).getBean(CabBookingService.class);
    Booking bookingOutcome = service
      .bookRide("13 Seagate Blvd, Key Largo, FL 33037");
    System.out.println(bookingOutcome);
}

It is now enough to launch the client to verify that it invokes the service exposed by the server.

5. Conclusion

In this tutorial, we saw how we can use Spring Remoting to ease the use of RMI, which would otherwise require a series of tedious tasks: among others, spinning up a registry and defining services using interfaces that make heavy use of checked exceptions.

As usual, you’ll find the sources over on GitHub.


Intro to Security and WebSockets


1. Introduction

In a previous article, we showed how to add WebSockets to a Spring MVC project.

Here, we’ll describe how to add security to Spring WebSockets in Spring MVC. Before continuing, make sure you already have basic Spring MVC Security coverage in place – if not, check out this article.

2. Maven Dependencies

There are two main groups of Maven dependencies we need for our WebSocket implementation.

First, let’s specify the overarching versions of the Spring Framework and Spring Security that we will be using:

<properties>
    <springframework.version>4.3.8.RELEASE</springframework.version>
    <springsecurity.version>4.2.3.RELEASE</springsecurity.version>
</properties>

Second, let’s add the core Spring MVC and Spring Security libraries required to implement basic authentication and authorization:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>${springframework.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>${springframework.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>${springframework.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>${springsecurity.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>${springsecurity.version}</version>
</dependency>

The latest versions of spring-core, spring-web, spring-webmvc, spring-security-web, spring-security-config can be found on Maven Central.

Lastly, let’s add required dependencies:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-websocket</artifactId>
    <version>${springframework.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-messaging</artifactId>
    <version>${springframework.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-messaging</artifactId>
    <version>${springsecurity.version}</version>
</dependency>

You can find the latest version of spring-websocket, spring-messaging, and spring-security-messaging on Maven Central.

3. Basic WebSocket Security

WebSocket-specific security using the spring-security-messaging library centers on the AbstractSecurityWebSocketMessageBrokerConfigurer class and its implementation within your project:

@Configuration
public class SocketSecurityConfig 
  extends AbstractSecurityWebSocketMessageBrokerConfigurer {
      //...
}

The AbstractSecurityWebSocketMessageBrokerConfigurer class provides additional security coverage on top of that provided by WebSecurityConfigurerAdapter.

The spring-security-messaging library is not the only way to implement security for WebSockets. If we stick with the ordinary spring-websocket library, we can implement the WebSocketConfigurer interface and attach security interceptors to our socket handlers.
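For illustration, here’s a rough sketch of that alternative approach; the ChatHandler class is a hypothetical handler (e.g., extending TextWebSocketHandler), not part of the example project:

@Configuration
@EnableWebSocket
public class PlainSocketConfig implements WebSocketConfigurer {

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        // HttpSessionHandshakeInterceptor copies HTTP session attributes,
        // including the Spring Security context, into the socket session
        registry.addHandler(new ChatHandler(), "/secured/chat")
          .addInterceptors(new HttpSessionHandshakeInterceptor());
    }
}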

Since we are using the spring-security-messaging library, we will use the AbstractSecurityWebSocketMessageBrokerConfigurer approach.

3.1. Implementing configureInbound()

The implementation of configureInbound() is the most important step in configuring your AbstractSecurityWebSocketMessageBrokerConfigurer subclass:

@Override 
protected void configureInbound(
  MessageSecurityMetadataSourceRegistry messages) { 
    messages
      .simpDestMatchers("/secured/**").authenticated()
      .anyMessage().authenticated(); 
}

Whereas the WebSecurityConfigurerAdapter lets you specify various application-wide authorization requirements for different routes, AbstractSecurityWebSocketMessageBrokerConfigurer allows you to specify the specific authorization requirements for socket destinations.

3.2. Type and Destination Matching

MessageSecurityMetadataSourceRegistry allows us to specify security constraints like paths, user roles, and which messages are allowed.

Type matchers constrain which SimpMessageType are allowed and in what way:

.simpTypeMatchers(CONNECT, UNSUBSCRIBE, DISCONNECT).permitAll()

Destination matchers constrain which endpoint patterns are accessible and in what way:

.simpDestMatchers("/app/**").hasRole("ADMIN")

Subscribe destination matchers map a List of SimpDestinationMessageMatcher instances that match on SimpMessageType.SUBSCRIBE:

.simpSubscribeDestMatchers("/topic/**").authenticated()

The complete list of available methods for type and destination matching can be found in the Spring Security reference documentation.
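To give an idea of how these compose, here’s a hypothetical configureInbound() combining the matchers above and denying anything left unmatched:

@Override
protected void configureInbound(
  MessageSecurityMetadataSourceRegistry messages) {
    messages
      .simpTypeMatchers(CONNECT, UNSUBSCRIBE, DISCONNECT).permitAll()
      .simpDestMatchers("/app/**").hasRole("ADMIN")
      .simpSubscribeDestMatchers("/topic/**").authenticated()
      .anyMessage().denyAll();
}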

4. Securing Socket Routes

Now that we’ve been introduced to basic socket security and type matching configuration, we can combine socket security, views, STOMP (a text-messaging protocol), message brokers, and socket controllers to enable secure WebSockets within our Spring MVC application.

First, let’s set up our socket views and controllers for basic Spring Security coverage:

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
@EnableWebSecurity
@ComponentScan("com.baeldung.springsecuredsockets")
public class SecurityConfig extends WebSecurityConfigurerAdapter { 
    @Override 
    protected void configure(HttpSecurity http) throws Exception { 
        http
          .authorizeRequests()
          .antMatchers("/", "/index", "/authenticate").permitAll()
          .antMatchers(
            "/secured/**/**",
            "/secured/success",
            "/secured/socket").authenticated()
          .anyRequest().authenticated()
          .and()
          .formLogin()
          .loginPage("/login").permitAll()
          .usernameParameter("username")
          .passwordParameter("password")
          .loginProcessingUrl("/authenticate")
          //...
    }
}

Second, let’s set up the actual message destination with authentication requirements:

@Configuration
public class SocketSecurityConfig 
  extends AbstractSecurityWebSocketMessageBrokerConfigurer {
    @Override
    protected void configureInbound(MessageSecurityMetadataSourceRegistry messages) {
        messages
          .simpDestMatchers("/secured/**").authenticated()
          .anyMessage().authenticated();
    }   
}

Now, in our AbstractWebSocketMessageBrokerConfigurer, we can register the actual message and STOMP endpoints:

@Configuration
@EnableWebSocketMessageBroker
public class SocketBrokerConfig 
  extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/secured/history");
        config.setApplicationDestinationPrefixes("/spring-security-mvc-socket");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/secured/chat")
          .withSockJS();
    }
}

Let’s define an example socket controller and endpoint, for which we provided security coverage above:

@Controller
public class SocketController {
 
    @MessageMapping("/secured/chat")
    @SendTo("/secured/history")
    public OutputMessage send(Message msg) throws Exception {
        return new OutputMessage(
           msg.getFrom(),
           msg.getText(), 
           new SimpleDateFormat("HH:mm").format(new Date())); 
    }
}

5. Same Origin Policy

The Same Origin Policy requires that all interactions with an endpoint must come from the same domain where the interaction was initiated.

For example, suppose your WebSockets implementation is hosted at foo.com, and you are enforcing same origin policy. If a user connects to your client hosted at foo.com and then opens another browser to bar.com, then bar.com will not have access to your WebSocket implementation.

5.1. Overriding the Same Origin Policy

Spring WebSockets enforce the Same Origin Policy out of the box, while ordinary WebSockets do not.

In fact, Spring Security requires a CSRF (Cross Site Request Forgery) token for any valid CONNECT message type:

@Controller
public class CsrfTokenController {
    @GetMapping("/csrf")
    public @ResponseBody String getCsrfToken(HttpServletRequest request) {
        CsrfToken csrf = (CsrfToken) request.getAttribute(CsrfToken.class.getName());
        return csrf.getToken();
    }
}

By calling the endpoint at /csrf, a client can acquire the token and authenticate through the CSRF security layer.

However, the Same Origin Policy for Spring can be overridden by adding the following configuration to your AbstractSecurityWebSocketMessageBrokerConfigurer:

@Override
protected boolean sameOriginDisabled() {
    return true;
}

5.2. STOMP, SockJS Support, and Frame Options

It is common to use STOMP along with SockJS to implement client-side support for Spring WebSockets.

SockJS is configured to disallow transports through HTML iframe elements by default. This is to prevent the threat of clickjacking.

However, there are certain use-cases where allowing iframes to leverage SockJS transports can be beneficial. To do so, you can override the default configuration in WebSecurityConfigurerAdapter:

@Override
protected void configure(HttpSecurity http) 
  throws Exception {
    http
      .csrf()
        //...
        .and()
      .headers()
        .frameOptions().sameOrigin()
      .and()
        .authorizeRequests();
}

Note that in this example, we follow the Same Origin Policy despite allowing transports through iframes.

6. OAuth2 Coverage

OAuth2-specific support for Spring WebSockets is made possible by implementing OAuth2 security coverage in addition to, and by extending, your standard WebSecurityConfigurerAdapter coverage. Here’s an example of how to implement OAuth2.

To authenticate and gain access to a WebSocket endpoint, you can pass an OAuth2 access_token in a query parameter when connecting from your client to your back-end WebSocket.

Here’s an example demonstrating that concept using SockJS and STOMP:

var endpoint = '/ws/?access_token=' + auth.access_token;
var socket = new SockJS(endpoint);
var stompClient = Stomp.over(socket);

7. Conclusion

In this brief tutorial, we have shown how to add security to Spring WebSockets. Take a look at Spring’s WebSocket and WebSocket Security reference documentation if you are looking to learn more about this integration.

As always, check our GitHub project for examples used in this article.

A Guide to Apache Commons Collections CollectionUtils


1. Overview

Simply put, the Apache Commons CollectionUtils class provides utility methods for common operations covering a wide range of use cases and helping us avoid writing boilerplate code. The library targets older JVM releases because, currently, similar functionality is provided by Java 8’s Stream API.

2. Maven Dependencies

We need to add the following dependency to get going with CollectionUtils:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.1</version>
</dependency>

The latest version of the library can be found here.

3. Setup

Let’s add Customer and Address classes:

public class Customer {
    private Integer id;
    private String name;
    private Address address;

    // standard getters and setters
}

public class Address {
    private String locality;
    private String city;
   
    // standard getters and setters
}

We will also keep the following Customer and List instances handy to test our implementation:

Customer customer1 = new Customer(1, "Daniel", "locality1", "city1");
Customer customer2 = new Customer(2, "Fredrik", "locality2", "city2");
Customer customer3 = new Customer(3, "Kyle", "locality3", "city3");
Customer customer4 = new Customer(4, "Bob", "locality4", "city4");
Customer customer5 = new Customer(5, "Cat", "locality5", "city5");
Customer customer6 = new Customer(6, "John", "locality6", "city6");

List<Customer> list1 = Arrays.asList(customer1, customer2, customer3);
List<Customer> list2 = Arrays.asList(customer4, customer5, customer6);
List<Customer> list3 = Arrays.asList(customer1, customer2);

List<Customer> linkedList1 = new LinkedList<>(list1);

4. CollectionUtils

Let’s go through some of the most used methods in Apache Commons CollectionUtils class.

4.1. Adding Only Non-Null Elements

We can use CollectionUtils’s addIgnoreNull method to add only non-null elements to a provided collection.

The first argument to this method is the collection to which we want to add the element and the second argument is the element that we want to add:

@Test
public void givenList_whenAddIgnoreNull_thenNoNullAdded() {
    CollectionUtils.addIgnoreNull(list1, null);
 
    assertFalse(list1.contains(null));
}

Notice that the null was not added to the list.

4.2. Collating Lists

We can use the collate() method to collate two already-sorted lists. This method takes both lists that we want to merge as arguments and returns a single sorted list:

@Test
public void givenTwoSortedLists_whenCollated_thenSorted() {
    List<Customer> sortedList = CollectionUtils.collate(list1, list2);

    assertEquals(6, sortedList.size()); 
    assertTrue(sortedList.get(0).getName().equals("Bob"));
    assertTrue(sortedList.get(2).getName().equals("Daniel"));
}

4.3. Transforming Objects

We can use the collect() method to transform objects of class A into different objects of class B. This method takes a list of objects of class A and a Transformer as arguments.

The result of this operation is a list of objects of class B:

@Test
public void givenListOfCustomers_whenTransformed_thenListOfAddress() {
    Collection<Address> addressCol = CollectionUtils.collect(list1, 
      new Transformer<Customer, Address>() {
        public Address transform(Customer customer) {
            return customer.getAddress();
        }
    });
    
    List<Address> addressList = new ArrayList<>(addressCol);
    assertTrue(addressList.size() == 3);
    assertTrue(addressList.get(0).getLocality().equals("locality1"));
}

4.4. Filtering Objects

Using filter we can remove objects which do not satisfy a given condition from a list. The method takes the list as the first argument and a Predicate as its second argument.

The filterInverse method does the opposite. It removes objects from the list when the Predicate returns true.

Both filter and filterInverse return true if the input list was modified, i.e. if at least one object was filtered out from the list:

@Test
public void givenCustomerList_WhenFiltered_thenCorrectSize() {
    
    boolean isModified = CollectionUtils.filter(linkedList1, 
      new Predicate<Customer>() {
        public boolean evaluate(Customer customer) {
            return Arrays.asList("Daniel","Kyle").contains(customer.getName());
        }
    });
     
    assertTrue(linkedList1.size() == 2);
}

We can use select and selectRejected if we want the resulting collection to be returned rather than a boolean flag.
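As a quick sketch of that variant, using a lambda for the Predicate (it’s a single-method interface, so this works on Java 8):

Collection<Customer> selected = CollectionUtils.select(linkedList1,
  customer -> Arrays.asList("Daniel", "Kyle").contains(customer.getName()));
// 'selected' holds the matching customers; the input collection is left untouched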

4.5. Checking For Non-Empty

The isNotEmpty method is quite handy when we want to check if there is at least a single element in a list. The other way of checking the same is:

boolean isNotEmpty = (list != null && list.size() > 0);

Though the above line of code does the same, CollectionUtils.isNotEmpty keeps our code cleaner:

@Test
public void givenNonEmptyList_whenCheckedIsNotEmpty_thenTrue() {
    assertTrue(CollectionUtils.isNotEmpty(list1));
}

The isEmpty method does the opposite. It checks whether the given list is null or has zero elements:

List<Customer> emptyList = new ArrayList<>();
List<Customer> nullList = null;
 
assertTrue(CollectionUtils.isEmpty(nullList));
assertTrue(CollectionUtils.isEmpty(emptyList));

4.6. Checking Inclusion

We can use isSubCollection to check if a collection is contained in another collection. isSubCollection takes two collections as arguments and returns true if the first collection is a subcollection of the second collection:

@Test
public void givenCustomerListAndASubcollection_whenChecked_thenTrue() {
    assertTrue(CollectionUtils.isSubCollection(list3, list1));
}

A collection is a sub-collection of another collection if the number of times an object occurs in the first collection is less than or equal to the number of times it occurs in the second collection.
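For instance, with a small hypothetical string example:

assertTrue(CollectionUtils.isSubCollection(
  Arrays.asList("a", "b"), Arrays.asList("a", "a", "b")));
// "a" occurs 3 times in the first collection but only 2 in the second
assertFalse(CollectionUtils.isSubCollection(
  Arrays.asList("a", "a", "a"), Arrays.asList("a", "a", "b")));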

4.7. Intersection Of Collections

We can use the CollectionUtils.intersection method to get the intersection of two collections. This method takes two collections and returns a collection of the elements that are common to both input collections:

@Test
public void givenTwoLists_whenIntersected_thenCheckSize() {
    Collection<Customer> intersection = CollectionUtils.intersection(list1, list3);
    assertTrue(intersection.size() == 2);
}

The number of times an element occurs in the resulting collection is the minimum of the number of times it occurs in each of the given collections.
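To illustrate this rule with a small hypothetical example on strings:

Collection<String> common = CollectionUtils.intersection(
  Arrays.asList("a", "a", "b"), Arrays.asList("a", "b", "b"));
// "a" occurs min(2, 1) = 1 time and "b" occurs min(1, 2) = 1 time,
// so 'common' contains exactly one "a" and one "b"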

4.8. Subtracting Collections

CollectionUtils.subtract takes two collections as input and returns a collection which contains elements which are there in the first collection but not in the second collection:

@Test
public void givenTwoLists_whenSubtracted_thenCheckElementNotPresentInA() {
    Collection<Customer> result = CollectionUtils.subtract(list1, list3);
    assertFalse(result.contains(customer1));
}

The number of times an element occurs in the result is the number of times it occurs in the first collection minus the number of times it occurs in the second collection.

4.9. Union Of Collections

CollectionUtils.union performs the union of two collections and returns a collection containing all the elements that are present in either the first or the second collection:

@Test
public void givenTwoLists_whenUnioned_thenCheckElementPresentInResult() {
    Collection<Customer> union = CollectionUtils.union(list1, list2);
 
    assertTrue(union.contains(customer1));
    assertTrue(union.contains(customer4));
}

The number of times an element occurs in the resulting collection is the maximum of the number of times it occurs in each of the given collections.

5. Conclusion

And we’re done.

We went through some of the commonly used methods of the CollectionUtils class, which are very useful for avoiding boilerplate when we’re working with collections in our Java projects.

As usual, the code is available over on GitHub.

Introduction to Minimax Algorithm


1. Overview

In this article, we’re going to discuss the Minimax algorithm and its applications in AI. As it’s a game theory algorithm, we’ll implement a simple game using it.

We’ll also discuss the advantages of using the algorithm and see how it can be improved.

2. Introduction

Minimax is a decision-making algorithm, typically used in turn-based, two-player games. The goal of the algorithm is to find the optimal next move.

In the algorithm, one player is called the maximizer, and the other player is the minimizer. If we assign an evaluation score to the game board, one player tries to choose a game state with the maximum score, while the other chooses a state with the minimum score.

In other words, the maximizer works to get the highest score, while the minimizer tries to get the lowest score by countering the maximizer’s moves.

It is based on the zero-sum game concept. In a zero-sum game, the total utility score is divided among the players. An increase in one player’s score results in a decrease in another player’s score. So, the total score is always zero. For one player to win, the other has to lose. Examples of such games are chess, poker, checkers, and tic-tac-toe.

An interesting fact: in 1997, IBM’s chess-playing computer Deep Blue (built with Minimax) defeated Garry Kasparov, the world chess champion.

3. Minimax Algorithm

Our goal is to find the best move for the player. To do so, we can just choose the node with the best evaluation score. To make the process smarter, we can also look ahead and evaluate the potential opponent’s moves.

For each move, we can look ahead as many moves as our computing power allows. The algorithm assumes that the opponent is playing optimally.

Technically, we start with the root node and choose the best possible node. We evaluate nodes based on their evaluation scores. In our case, the evaluation function can assign scores only to result nodes (leaves). Therefore, we recursively reach leaves, score them, and back-propagate the scores.

Consider the below game tree:

The maximizer starts with the root node and chooses the move with the maximum score. Unfortunately, only leaves have evaluation scores, and hence the algorithm has to reach leaf nodes recursively. In the given game tree, it’s currently the minimizer’s turn to choose a move from the leaf nodes, so the nodes with minimum scores (here, nodes 3 and 4) will get selected. It keeps picking the best nodes similarly, until it reaches the root node.

Now, let’s formally define steps of the algorithm:

  1. Construct the complete game tree
  2. Evaluate scores for leaves using the evaluation function
  3. Back-up scores from leaves to root, considering the player type:
    • For max player, select the child with the maximum score
    • For min player, select the child with the minimum score
  4. At the root node, choose the node with max value and perform the corresponding move

4. Implementation

Now, let’s implement a game.

In the game, we have a heap with n bones. Both players have to pick up 1, 2, or 3 bones on their turn. A player who cannot take any bones loses the game. Each player plays optimally. Given the value of n, let’s write an AI.

To define the rules of the game, we will implement GameOfBones class:

class GameOfBones {
    static List<Integer> getPossibleStates(int noOfBonesInHeap) {
        return IntStream.rangeClosed(1, 3).boxed()
          .map(i -> noOfBonesInHeap - i)
          .filter(newHeapCount -> newHeapCount >= 0)
          .collect(Collectors.toList());
    }
}

Furthermore, we also need implementations for the Node and Tree classes:

public class Node {
    int noOfBones;
    boolean isMaxPlayer;
    int score;
    List<Node> children;
    // setters and getters
}
public class Tree {
    Node root;
    // setters and getters
}

Now we’ll implement the algorithm. It requires a game tree to look ahead and find the best move. Let’s implement that:

public class MiniMax {
    Tree tree;

    public void constructTree(int noOfBones) {
        tree = new Tree();
        Node root = new Node(noOfBones, true);
        tree.setRoot(root);
        constructTree(root);
    }

    private void constructTree(Node parentNode) {
        List<Integer> listofPossibleHeaps 
          = GameOfBones.getPossibleStates(parentNode.getNoOfBones());
        boolean isChildMaxPlayer = !parentNode.isMaxPlayer();
        listofPossibleHeaps.forEach(n -> {
            Node newNode = new Node(n, isChildMaxPlayer);
            parentNode.addChild(newNode);
            if (newNode.getNoOfBones() > 0) {
                constructTree(newNode);
            }
        });
    }
}

Now, we’ll implement the checkWin method, which will simulate a playout by selecting optimal moves for both players. It sets the score to:

  • +1, if maximizer wins
  • -1, if minimizer wins

The checkWin method will return true if the first player (in our case, the maximizer) wins:

public boolean checkWin() {
    Node root = tree.getRoot();
    checkWin(root);
    return root.getScore() == 1;
}

private void checkWin(Node node) {
    List<Node> children = node.getChildren();
    boolean isMaxPlayer = node.isMaxPlayer();
    children.forEach(child -> {
        if (child.getNoOfBones() == 0) {
            child.setScore(isMaxPlayer ? 1 : -1);
        } else {
            checkWin(child);
        }
    });
    Node bestChild = findBestChild(isMaxPlayer, children);
    node.setScore(bestChild.getScore());
}

Here, the findBestChild method finds the node with the maximum score if a player is a maximizer. Otherwise, it returns the child with the minimum score:

private Node findBestChild(boolean isMaxPlayer, List<Node> children) {
    Comparator<Node> byScoreComparator = Comparator.comparing(Node::getScore);
    return children.stream()
      .max(isMaxPlayer ? byScoreComparator : byScoreComparator.reversed())
      .orElseThrow(NoSuchElementException::new);
}

Finally, let’s implement a test case with some values of n (the number of bones in a heap):

@Test
public void givenMiniMax_whenCheckWin_thenComputeOptimal() {
    miniMax.constructTree(6);
    boolean result = miniMax.checkWin();
 
    assertTrue(result);
 
    miniMax.constructTree(8);
    result = miniMax.checkWin();
 
    assertFalse(result);
}

5. Improvement

For most problems, it is not feasible to construct an entire game tree. In practice, we can develop a partial tree (constructing the tree only up to a predefined number of levels).

Then, we will have to implement an evaluation function, which should be able to decide how good the current state is, for the player.

Even if we don’t build complete game trees, it can be time-consuming to compute moves for games with a high branching factor.

Fortunately, there is an option to find the optimal move without exploring every node of the game tree. We can skip some branches by following some rules, and it won’t affect the final result. This process is called pruning. Alpha-beta pruning is a prevalent variant of the minimax algorithm.
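Although a full treatment of pruning is out of scope here, the following sketch shows the idea applied to our Node type, assuming leaf nodes already carry their evaluation scores (our simplification, not part of the implementation above):

private int alphaBeta(Node node, int alpha, int beta) {
    if (node.getChildren() == null || node.getChildren().isEmpty()) {
        return node.getScore();
    }
    if (node.isMaxPlayer()) {
        int best = Integer.MIN_VALUE;
        for (Node child : node.getChildren()) {
            best = Math.max(best, alphaBeta(child, alpha, beta));
            alpha = Math.max(alpha, best);
            if (beta <= alpha) {
                break; // prune: the minimizer would never allow this branch
            }
        }
        return best;
    } else {
        int best = Integer.MAX_VALUE;
        for (Node child : node.getChildren()) {
            best = Math.min(best, alphaBeta(child, alpha, beta));
            beta = Math.min(beta, best);
            if (beta <= alpha) {
                break; // prune: the maximizer would never allow this branch
            }
        }
        return best;
    }
}

A call such as alphaBeta(tree.getRoot(), Integer.MIN_VALUE, Integer.MAX_VALUE) then returns the same value as a plain minimax traversal, while visiting fewer nodes.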

6. Conclusion

The Minimax algorithm is one of the most popular algorithms for computer board games. It is widely applied in turn-based games. It can be a good choice when players have complete information about the game.

It may not be the best choice for games with an exceptionally high branching factor (e.g., the game of Go). Nonetheless, given a proper implementation, it can be a pretty smart AI.

As always, the complete code for the algorithm can be found over on GitHub.

Introduction to Vavr’s Either


1. Overview

Vavr is an open-source, object-functional language extension library for Java 8+. It helps reduce the amount of code and increase robustness.

In this article, we’ll learn about Vavr‘s tool called Either. If you want to learn more about the Vavr library, check this article.

2. What is Either

In the functional programming world, functional values or objects can’t be modified; in Java terminology, they’re known as immutable variables.

Either represents a value of two possible data types. An Either is either a Left or a Right. By convention, the Left signifies a failure case result and the Right signifies a success.

3. Maven Dependencies

We need to add the following dependency in the pom.xml:

<dependency>
    <groupId>io.vavr</groupId>
    <artifactId>vavr</artifactId>
    <version>0.9.0</version>
</dependency>

The latest version of Vavr is available in the Central Maven Repository.

4. Use Cases

Let’s consider a use case where we need to create a method which takes an input and, based on the input, we’ll return either a String or an Integer.

4.1. Plain Java

We can implement this in two ways. Either our method can return a map with the key representing the success/failure result, or it could return a fixed-size List/Array where the position denotes the result type.

This is how this could look:

public static Map<String, Object> computeWithoutEitherUsingMap(int marks) {
    Map<String, Object> results = new HashMap<>();
    if (marks < 85) {
        results.put("FAILURE", "Marks not acceptable");
    } else {
        results.put("SUCCESS", marks);
    }
    return results;
}

public static void main(String[] args) {
    Map<String, Object> results = computeWithoutEitherUsingMap(8);

    String error = (String) results.get("FAILURE");
    int marks = (int) results.get("SUCCESS");
}

For the second approach, we could use the following code:

public static Object[] computeWithoutEitherUsingArray(int marks) {
    Object[] results = new Object[2];
    if (marks < 85) {
        results[0] = "Marks not acceptable";
    } else {
        results[1] = marks;
    }
    return results;
}
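For completeness, here’s a sketch of how a caller might consume the array-based result; the index convention is the one defined above:

Object[] results = computeWithoutEitherUsingArray(8);
if (results[0] != null) {
    String error = (String) results[0];   // failure message at index 0
} else {
    int marks = (int) results[1];         // success value at index 1
}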

As we can see, both ways require quite a lot of work, and the final result is neither aesthetically appealing nor safe to use.

4.2. With Either

Now let’s see how we can utilize Vavr‘s Either utility to achieve the same result:

private static Either<String, Integer> computeWithEither(int marks) {
    if (marks < 85) {
        return Either.left("Marks not acceptable");
    } else {
        return Either.right(marks);
    }
}

No explicit type-casting, null checking, or unused object creation is required.
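A minimal sketch of consuming such a result could look like this; get() returns the Right value and getLeft() the Left one:

Either<String, Integer> result = computeWithEither(90);
if (result.isRight()) {
    Integer marks = result.get();      // the success value
    System.out.println("Marks: " + marks);
} else {
    String error = result.getLeft();   // the failure message
    System.out.println("Error: " + error);
}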

Moreover, Either provides a very handy monadic-like API for dealing with both cases:

computeWithEither(80)
  .right()
  .filter(...)
  .map(...)
  // ...

By convention, Either’s Left attribute represents a failure case and the Right one represents a success. However, based on our needs we can change this using projections – Either in Vavr is not biased towards Left or Right.

If we project to the Right, operations like filter() and map() will have no effect if the Either was a Left.

For example, let’s create the Right projection and define some operations on it:

computeWithEither(90).right()
  .filter(...)
  .map(...)
  .getOrElse(Collections::emptyList);

If it turns out that we projected a Left to the Right, we will get an empty list immediately.

We can interact with the Left projection in a similar way:

computeWithEither(9).left()
  .map(FetchError::getMsg)
  .forEach(System.out::println);

4.3. Additional Features

There are plenty of Either utilities available; let’s have a look at some of them.

We can check whether an Either contains a Left or a Right using the isLeft and isRight methods:

result.isLeft();
result.isRight();

We can check if Either contains a given Right value:

result.contains(100);

We can fold Left and Right to one common type:

Either<String, Integer> either = Either.right(42);
String result = either.fold(i -> i, Object::toString);

or… even swap sides:

Either<String, Integer> either = Either.right(42);
Either<Integer, String> swap = either.swap();

5. Conclusion

In this quick tutorial, we’ve learned about using the Either utility of the Vavr framework. More details on Either can be found here.

As always, the full source code is available over on GitHub.

Overview of Kotlin Collections API


1. Overview

In this quick tutorial, we’ll introduce Kotlin’s Collections API, discuss the different collection types in Kotlin, and cover some common operations on collections.

2. Collection vs. Mutable Collection

First, let’s take a look at different types of collections in Kotlin. We will see how to initialize basic types of collections.

The Collection interface supports read-only methods, while MutableCollection supports read/write methods.

2.1. List

We can create a simple read-only List using the listOf() method and a read-write MutableList using mutableListOf():

val theList = listOf("one", "two", "three")    

val theMutableList = mutableListOf("one", "two", "three")

2.2. Set

Similarly, we can create a read-only Set using the setOf() method and a read-write MutableSet using mutableSetOf():

val theSet = setOf("one", "two", "three")  

val theMutableSet = mutableSetOf("one", "two", "three")

2.3. Map

We can also create a read-only Map using the mapOf() method and a read-write MutableMap using mutableMapOf():

val theMap = mapOf(1 to "one", 2 to "two", 3 to "three")

val theMutableMap = mutableMapOf(1 to "one", 2 to "two", 3 to "three")

3. Useful Operators

Kotlin’s Collections API is much richer than the one we can find in Java – it comes with a set of overloaded operators.

3.1. The “in” Operator

We can use the expression “x in collection”, which translates to collection.contains(x):

@Test
fun whenSearchForExistingItem_thenFound () {
    val theList = listOf("one", "two", "three")

    assertTrue("two" in theList)        
}

3.2. The “+” Operator

We can add an element or an entire collection to another using the “+” operator:

@Test
fun whenJoinTwoCollections_thenSuccess () {
    val firstList = listOf("one", "two", "three")
    val secondList = listOf("four", "five", "six")
    val resultList = firstList + secondList

    assertEquals(6, resultList.size)   
    assertTrue(resultList.contains("two"))        
    assertTrue(resultList.contains("five"))        
}

3.3. The “-” Operator

Similarly, we can remove an element or multiple elements using the “-” operator:

@Test
fun whenExcludeItems_thenRemoved () {
    val firstList = listOf("one", "two", "three")
    val secondList = listOf("one", "three")
    val resultList = firstList - secondList

    assertEquals(1, resultList.size)        
    assertTrue(resultList.contains("two"))        
}

4. Other Methods

Finally, we will explore some common methods for collections. In Java, if we wanted to leverage advanced methods, we would need to use the Stream API.

In Kotlin, we can find similar methods available in the Collections API.

We can obtain a sublist from a given List:

@Test
fun whenSliceCollection_thenSuccess () {
    val theList = listOf("one", "two", "three")
    val resultList = theList.slice(1..2)

    assertEquals(2, resultList.size)        
    assertTrue(resultList.contains("two"))        
}

We can easily remove all nulls from a List:

@Test
fun whenFilterNullValues_thenSuccess () {
    val theList = listOf("one", null, "two", null, "three")
    val resultList = theList.filterNotNull()

    assertEquals(3, resultList.size)        
}

We can filter collection items easily using filter(), which works similarly to the filter() method from the Java Stream API:

@Test
fun whenFilterNonPositiveValues_thenSuccess () {
    val theList = listOf(1, 2, -3, -4, 5, -6)
    val resultList = theList.filter{ it > 0}

    assertEquals(3, resultList.size)  
    assertTrue(resultList.contains(1))
    assertFalse(resultList.contains(-4))      
}

We can drop the first N items:

@Test
fun whenDropFirstItems_thenRemoved () {
    val theList = listOf("one", "two", "three", "four")
    val resultList = theList.drop(2)

    assertEquals(2, resultList.size)        
    assertFalse(resultList.contains("one"))        
    assertFalse(resultList.contains("two"))        
}

We can drop the first few items while they satisfy a given condition:

@Test
fun whenDropFirstItemsBasedOnCondition_thenRemoved () {
    val theList = listOf("one", "two", "three", "four")
    val resultList = theList.dropWhile{ it.length < 4 }

    assertEquals(2, resultList.size)        
    assertFalse(resultList.contains("one"))        
    assertFalse(resultList.contains("two"))        
}

We can group elements:

@Test
fun whenGroupItems_thenSuccess () {
    val theList = listOf(1, 2, 3, 4, 5, 6)
    val resultMap = theList.groupBy{ it % 3}

    assertEquals(3, resultMap.size)  
    
    assertTrue(resultMap[1]!!.contains(1))
    assertTrue(resultMap[2]!!.contains(5))      
}

We can map all elements using the provided function:

@Test
fun whenApplyFunctionToAllItems_thenSuccess () {
    val theList = listOf(1, 2, 3, 4, 5, 6)
    val resultList = theList.map{ it * it }
    
    assertEquals(4, resultList[1])
    assertEquals(9, resultList[2])
}

We can use flatMap() to flatten nested collections. Here, we are converting each String to a List of characters and avoiding ending up with a List of Lists:

@Test
fun whenApplyMultiOutputFunctionToAllItems_thenSuccess () {
    val theList = listOf("John", "Tom")
    val resultList = theList.flatMap{ it.toLowerCase().toList() }
    
    assertEquals(7, resultList.size)
}

We can perform a fold/reduce operation:

@Test
fun whenApplyFunctionToAllItemsWithStartingValue_thenSuccess () {
    val theList = listOf(1, 2, 3, 4, 5, 6)
    val finalResult = theList.fold(0, {acc, i -> acc + (i * i)})
    
    assertEquals(91, finalResult)
}

5. Conclusion

We explored Kotlin’s Collections API and some of the most interesting methods.

And, as always, the full source code can be found over on GitHub.

A Guide to JUnit 5 Extensions


1. Overview

In this article, we’re going to take a look at the extension model in the JUnit 5 testing library. As the name suggests, the purpose of JUnit 5 extensions is to extend the behavior of test classes or methods, and these can be reused across multiple tests.

Before JUnit 5, the JUnit 4 version of the library used two types of components for extending a test: test runners and rules. By comparison, JUnit 5 simplifies the extension mechanism by introducing a single concept: the Extension API.

2. JUnit 5 Extension Model

JUnit 5 extensions are related to a certain event in the execution of a test, referred to as an extension point. When a certain life cycle phase is reached, the JUnit engine calls registered extensions.

Five main types of extension points can be used:

  • test instance post-processing
  • conditional test execution
  • life-cycle callbacks
  • parameter resolution
  • exception handling

We’ll go through each of these in more detail in the following sections.

3. Maven Dependencies

First, let’s add the project dependencies we will need for our examples. The main JUnit 5 library we’ll need is junit-jupiter-engine:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.0.0-M5</version>
    <scope>test</scope>
</dependency>

Also, let’s add two helper libraries to use in our examples:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.8.2</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
</dependency>

The latest versions of junit-jupiter-engine, h2 and log4j-core can be downloaded from Maven Central.

4. Creating JUnit 5 Extensions

To create a JUnit 5 extension, we need to define a class which implements one or more interfaces corresponding to the JUnit 5 extension points. All of these interfaces extend the main Extension interface, which is only a marker interface.

4.1. TestInstancePostProcessor Extension

This type of extension is executed after an instance of a test has been created. The interface to implement is TestInstancePostProcessor which has a postProcessTestInstance() method to override.

A typical use case for this extension is injecting dependencies into the instance. For example, let’s create an extension which instantiates a logger object, then calls the setLogger() method on the test instance:

public class LoggingExtension implements TestInstancePostProcessor {

    @Override
    public void postProcessTestInstance(Object testInstance, 
      ExtensionContext context) throws Exception {
        Logger logger = LogManager.getLogger(testInstance.getClass());
        testInstance.getClass()
          .getMethod("setLogger", Logger.class)
          .invoke(testInstance, logger);
    }
}

As can be seen above, the postProcessTestInstance() method provides access to the test instance and calls the setLogger() method of the test class using the mechanism of reflection.

4.2. Conditional Test Execution

JUnit 5 provides a type of extension that can control whether or not a test should be run. This is defined by implementing the ExecutionCondition interface.

Let’s create an EnvironmentExtension class which implements this interface and overrides the evaluateExecutionCondition() method.

The method verifies if a property representing the current environment name equals “qa” and disables the test in this case:

public class EnvironmentExtension implements ExecutionCondition {

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(
      ExtensionContext context) {
        
        Properties props = new Properties();
        try {
            props.load(EnvironmentExtension.class
              .getResourceAsStream("application.properties"));
        } catch (IOException e) {
            // loading the file may fail with a checked IOException
            throw new UncheckedIOException(e);
        }
        String env = props.getProperty("env");
        if ("qa".equalsIgnoreCase(env)) {
            return ConditionEvaluationResult
              .disabled("Test disabled on QA environment");
        }
        
        return ConditionEvaluationResult.enabled(
          "Test enabled on non-QA environment");
    }
}

As a result, tests that register this extension will not be run on the “qa” environment.

If we do not want a condition to be validated, we can deactivate it by setting the junit.conditions.deactivate configuration key to a pattern that matches the condition.

This can be achieved by starting the JVM with the -Djunit.conditions.deactivate=<pattern> property, or by adding a configuration parameter to the LauncherDiscoveryRequest:

public class TestLauncher {
    public static void main(String[] args) {
        LauncherDiscoveryRequest request
          = LauncherDiscoveryRequestBuilder.request()
          .selectors(selectClass("com.baeldung.EmployeesTest"))
          .configurationParameter(
            "junit.conditions.deactivate", 
            "com.baeldung.extensions.*")
          .build();

        Launcher launcher = LauncherFactory.create();
        TestPlan plan = launcher.discover(request);
        SummaryGeneratingListener summaryGeneratingListener
          = new SummaryGeneratingListener();
        launcher.execute(
          request, 
          new TestExecutionListener[] { summaryGeneratingListener });
 
        System.out.println(summaryGeneratingListener.getSummary());
    }
}

4.3. Lifecycle Callbacks

This set of extensions is related to events in a test’s lifecycle and can be defined by implementing the following interfaces:

  • BeforeAllCallback and AfterAllCallback – executed before and after all the test methods are executed
  • BeforeEachCallBack and AfterEachCallback – executed before and after each test method
  • BeforeTestExecutionCallback and AfterTestExecutionCallback – executed immediately before and immediately after a test method

If the test also defines its lifecycle methods, the order of execution is:

  1. BeforeAllCallback
  2. BeforeAll
  3. BeforeEachCallback
  4. BeforeEach
  5. BeforeTestExecutionCallback
  6. Test
  7. AfterTestExecutionCallback
  8. AfterEach
  9. AfterEachCallback
  10. AfterAll
  11. AfterAllCallback

For our example, let’s define a class which implements some of these interfaces and controls the behavior of a test that accesses a database using JDBC.

First, let’s create a simple Employee entity:

public class Employee {

    private long id;
    private String firstName;
    // constructors, getters, setters
}

We will also need a utility class that creates a Connection based on a .properties file:

public class JdbcConnectionUtil {

    private static Connection con;

    public static Connection getConnection() {
        if (con == null) {
            // create the connection from the .properties file, wrapping any
            // checked exception in a RuntimeException so that callers can
            // use this method in field initializers
        }
        return con;
    }
}

Finally, let’s add a simple JDBC-based DAO that manipulates Employee records:

public class EmployeeJdbcDao {
    private Connection con;

    public EmployeeJdbcDao(Connection con) {
        this.con = con;
    }

    public void createTable() throws SQLException {
        // create employees table
    }

    public void add(Employee emp) throws SQLException {
       // add employee record
    }

    public List<Employee> findAll() throws SQLException {
       // query all employee records
    }
}

Let’s create our extension which implements some of the lifecycle interfaces:

public class EmployeeDatabaseSetupExtension implements 
  BeforeAllCallback, AfterAllCallback, BeforeEachCallback, AfterEachCallback {
    //...
}

Each of these interfaces contains a method we need to override.

For the BeforeAllCallback interface, we will override the beforeAll() method and add the logic to create our employees table before any test method is executed:

private EmployeeJdbcDao employeeDao
  = new EmployeeJdbcDao(JdbcConnectionUtil.getConnection());

@Override
public void beforeAll(ExtensionContext context) throws SQLException {
    employeeDao.createTable();
}

Next, we will make use of the BeforeEachCallback and AfterEachCallback to wrap each test method in a transaction. The purpose of this is to roll back any changes to the database executed in the test method so that the next test will run on a clean database.

In the beforeEach() method, we will create a savepoint to which we can later roll back the state of the database:

private Connection con = JdbcConnectionUtil.getConnection();
private Savepoint savepoint;

@Override
public void beforeEach(ExtensionContext context) throws SQLException {
    con.setAutoCommit(false);
    savepoint = con.setSavepoint("before");
}

Then, in the afterEach() method, we’ll roll back the database changes made during the execution of a test method:

@Override
public void afterEach(ExtensionContext context) throws SQLException {
    con.rollback(savepoint);
}

To close the connection, we’ll make use of the afterAll() method, executed after all the tests have finished:

@Override
public void afterAll(ExtensionContext context) throws SQLException {
    if (con != null) {
        con.close();
    }
}

4.4. Parameter Resolution

If a test constructor or method receives a parameter, this must be resolved at runtime by a ParameterResolver.

Let’s define our own custom ParameterResolver that resolves parameters of type EmployeeJdbcDao:

public class EmployeeDaoParameterResolver implements ParameterResolver {

    @Override
    public boolean supportsParameter(ParameterContext parameterContext, 
      ExtensionContext extensionContext) throws ParameterResolutionException {
        return parameterContext.getParameter().getType()
          .equals(EmployeeJdbcDao.class);
    }

    @Override
    public Object resolveParameter(ParameterContext parameterContext, 
      ExtensionContext extensionContext) throws ParameterResolutionException {
        return new EmployeeJdbcDao(JdbcConnectionUtil.getConnection());
    }
}

Our resolver implements the ParameterResolver interface and overrides the supportsParameter() and resolveParameter() methods. The first of these verifies the type of the parameter, while the second defines the logic for obtaining a parameter instance.

4.5. Exception Handling

Last but not least, the TestExecutionExceptionHandler interface can be used to define the behavior of a test when encountering certain types of exceptions.

For example, we can create an extension which will log and ignore all exceptions of type FileNotFoundException, while rethrowing any other type:

public class IgnoreFileNotFoundExceptionExtension 
  implements TestExecutionExceptionHandler {

    Logger logger = LogManager
      .getLogger(IgnoreFileNotFoundExceptionExtension.class);
    
    @Override
    public void handleTestExecutionException(ExtensionContext context,
      Throwable throwable) throws Throwable {

        if (throwable instanceof FileNotFoundException) {
            logger.error("File not found:" + throwable.getMessage());
            return;
        }
        throw throwable;
    }
}

5. Registering Extensions

Now that we have defined our test extensions, we need to register them with a JUnit 5 test. To achieve this, we can make use of the @ExtendWith annotation.

The annotation can be added multiple times to a test, or it can receive a list of extensions as a parameter:

@ExtendWith({ EnvironmentExtension.class, 
  EmployeeDatabaseSetupExtension.class, EmployeeDaoParameterResolver.class })
@ExtendWith(LoggingExtension.class)
@ExtendWith(IgnoreFileNotFoundExceptionExtension.class)
public class EmployeesTest {
    private EmployeeJdbcDao employeeDao;
    private Logger logger;

    public EmployeesTest(EmployeeJdbcDao employeeDao) {
        this.employeeDao = employeeDao;
    }

    @Test
    public void whenAddEmployee_thenGetEmployee() throws SQLException {
        Employee emp = new Employee(1, "john");
        employeeDao.add(emp);
        assertEquals(1, employeeDao.findAll().size());   
    }
    
    @Test
    public void whenGetEmployees_thenEmptyList() throws SQLException {
        assertEquals(0, employeeDao.findAll().size());   
    }

    public void setLogger(Logger logger) {
        this.logger = logger;
    }
}

We can see our test class has a constructor with an EmployeeJdbcDao parameter, which will be resolved by the EmployeeDaoParameterResolver extension.

By adding the EnvironmentExtension, our test will only be executed in an environment different than “qa”.

Our test will also have the employees table created and each method wrapped in a transaction by adding the EmployeeDatabaseSetupExtension. Even if the whenAddEmployee_thenGetEmployee() test is executed first, which adds one record to the table, the second test will find 0 records in the table.

A logger instance will be added to our class by using the LoggingExtension.

Finally, our test class will ignore all FileNotFoundException instances, since it registers the corresponding extension.

5.1. Automatic Extension Registration

If we want to register an extension for all tests in our application, we can do so by adding the fully qualified name to the /META-INF/services/org.junit.jupiter.api.extension.Extension file:

com.baeldung.extensions.LoggingExtension

For this mechanism to be enabled, we also need to set the junit.extensions.autodetection.enabled configuration key to true. This can be done by starting the JVM with the -Djunit.extensions.autodetection.enabled=true property, or by adding a configuration parameter to the LauncherDiscoveryRequest:

LauncherDiscoveryRequest request
  = LauncherDiscoveryRequestBuilder.request()
  .selectors(selectClass("com.baeldung.EmployeesTest"))
  .configurationParameter("junit.extensions.autodetection.enabled", "true")
  .build();

6. Conclusion

In this tutorial, we have shown how we can make use of the JUnit 5 extension model to create custom test extensions.

The full source code of the examples can be found over on GitHub.

Drools Spring Integration


1. Introduction

In this quick tutorial, we’re going to integrate Drools with Spring. If you’re just getting started with Drools, check out this intro article.

2. Maven Dependencies

Let’s start by adding the following dependencies to our pom.xml file:

<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-core</artifactId>
    <version>7.0.0.Final</version>
</dependency>
<dependency>
    <groupId>org.kie</groupId>
    <artifactId>kie-spring</artifactId>
    <version>7.0.0.Final</version>
</dependency>

The latest versions can be found here for drools-core and here for kie-spring.

3. Initial Data

Let’s now define the data which will be used in our example. We’re going to calculate the fare of a ride based on the distance traveled and the night surcharge flag.

Here’s a simple object which will be used as a Fact:

public class TaxiRide {
    private Boolean isNightSurcharge;
    private Long distanceInMile;
    
    // standard constructors, getters/setters
}

Let’s also define another business object which will be used for representing fares:

public class Fare {
    private Long nightSurcharge;
    private Long rideFare;
    
    // standard constructors, getters/setters
}

Now, let’s define a business rule for calculating taxi fares:

global com.baeldung.spring.drools.model.Fare rideFare;
dialect  "mvel"

rule "Calculate Taxi Fare - Scenario 1"
    when
        taxiRideInstance:TaxiRide(isNightSurcharge == false && distanceInMile < 10);
    then
      	rideFare.setNightSurcharge(0);
       	rideFare.setRideFare(70);
end

As we can see, a rule is defined to calculate the total fare of the given TaxiRide.

This rule accepts a TaxiRide object and checks whether the isNightSurcharge attribute is false and the distanceInMile attribute value is less than 10. If so, it calculates the fare as 70 and sets the nightSurcharge property to 0.

The calculated output is set on the Fare object for further use.

4. Spring Integration

4.1. Spring Bean Configuration

Now, let’s move on to the Spring integration.

We’re going to define a Spring bean configuration class – which will be responsible for instantiating the TaxiFareCalculatorService bean and its dependencies:

@Configuration
@ComponentScan("com.baeldung.spring.drools.service")
public class TaxiFareConfiguration {
    private static final String drlFile = "TAXI_FARE_RULE.drl";

    @Bean
    public KieContainer kieContainer() {
        KieServices kieServices = KieServices.Factory.get();

        KieFileSystem kieFileSystem = kieServices.newKieFileSystem();
        kieFileSystem.write(ResourceFactory.newClassPathResource(drlFile));
        KieBuilder kieBuilder = kieServices.newKieBuilder(kieFileSystem);
        kieBuilder.buildAll();
        KieModule kieModule = kieBuilder.getKieModule();

        return kieServices.newKieContainer(kieModule.getReleaseId());
    }
}

KieServices is a singleton which acts as a single point of entry to all the services provided by Kie; it is retrieved using KieServices.Factory.get().

Next, we need the KieContainer, which is a placeholder for all the objects that we need to run the rule engine.

KieContainer is built with the help of other beans including KieFileSystem, KieBuilder, and KieModule.

Let’s proceed to create a KieModule, which is a container of all the resources required to define the rule knowledge, known as the KieBase:

KieModule kieModule = kieBuilder.getKieModule();

KieBase is a repository which contains all knowledge related to the application such as rules, processes, functions, type models and it is hidden inside KieModule. The KieBase can be obtained from the KieContainer.

Once KieModule is created, we can proceed to create KieContainer – which contains the KieModule where the KieBase has been defined. The KieContainer is created using a module:

KieContainer kContainer = kieServices.newKieContainer(kieModule.getReleaseId());

4.2. Spring Service

Let’s define a service class which executes the actual business logic by passing the Fact object to the engine for processing the result:

@Service
public class TaxiFareCalculatorService {

    @Autowired
    private KieContainer kieContainer;

    public Long calculateFare(TaxiRide taxiRide, Fare rideFare) {
        KieSession kieSession = kieContainer.newKieSession();
        kieSession.setGlobal("rideFare", rideFare);
        kieSession.insert(taxiRide);
        kieSession.fireAllRules();
        kieSession.dispose();
        return rideFare.getTotalFare();
    }
}

A KieSession is created from the KieContainer instance. The KieSession is where input data is inserted; it interacts with the engine to process the actual business logic defined in the rule, based on the inserted Facts.

A global (much like a global variable) is used to pass information into the engine. We can set a global using setGlobal("key", value); in this example, we have set the Fare object as a global to store the calculated taxi fare.

As discussed above, a rule requires data to operate on, so we insert the Fact into the session using kieSession.insert(taxiRide);

Once we’re done setting up the input Fact, we can ask the engine to execute the business logic by calling fireAllRules().

Finally, we need to clean up the session by calling the dispose() method, to avoid memory leaks.

5. Example in Action

Now, we can wire up a Spring context and see in action that Drools works as expected:

@Test
public void whenNightSurchargeFalseAndDistLessThan10_thenFixWithoutNightSurcharge() {
    TaxiRide taxiRide = new TaxiRide();
    taxiRide.setIsNightSurcharge(false);
    taxiRide.setDistanceInMile(9L);
    Fare rideFare = new Fare();
    Long totalCharge = taxiFareCalculatorService.calculateFare(taxiRide, rideFare);
 
    assertNotNull(totalCharge);
    assertEquals(Long.valueOf(70), totalCharge);
}
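For reference, here’s a minimal sketch of how such a test class could be wired up – the class name and annotation choices here are assumptions for illustration, not taken from the article’s source:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TaxiFareConfiguration.class)
public class TaxiFareCalculatorServiceIntegrationTest {

    // TaxiFareConfiguration is the configuration class from Section 4.1
    @Autowired
    private TaxiFareCalculatorService taxiFareCalculatorService;

    // test methods, such as the one above, go here
}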

6. Conclusion

In this article, we learned about Drools Spring integration with a simple use case.

As always, the implementation of the example and code snippets are available over on GitHub.


Lazy Initialization in Kotlin

1. Overview

In this article, we’ll be looking at one of the most interesting features in the Kotlin syntax – the lazy keyword – used for creating lazy-initialized objects.

We’ll also be looking at the lateinit keyword, which allows us to trick the compiler and initialize non-null fields in the body of a class – instead of in the constructor.

2. Lazy Initialization Pattern in Java

Sometimes we need to construct objects that have a heavy initialization process. Also, we often cannot be sure that an object whose initialization cost we pay at the start of our program will ever be used at all.

The concept of ‘lazy initialization’ was designed to prevent unnecessary initialization of objects. In Java, creating an object in a lazy and thread-safe way is not an easy thing to do. Patterns like Singleton have major flaws in multi-threading, testing, etc – and they’re now widely known as anti-patterns to be avoided.

Alternatively, we can leverage the static initialization of an inner holder class in Java to achieve laziness:

public class ClassWithHeavyInitialization {
 
    private ClassWithHeavyInitialization() {
    }

    private static class LazyHolder {
        public static final ClassWithHeavyInitialization INSTANCE = new ClassWithHeavyInitialization();
    }

    public static ClassWithHeavyInitialization getInstance() {
        return LazyHolder.INSTANCE;
    }
}

Notice that only when we call the getInstance() method on ClassWithHeavyInitialization will the static LazyHolder class be loaded, creating the new instance of ClassWithHeavyInitialization and assigning it to the static final INSTANCE reference.

We can test that the getInstance() is actually returning the same instance every time it is called:

@Test
public void giveHeavyClass_whenInitLazy_thenShouldReturnInstanceOnFirstCall() {
    // when
    ClassWithHeavyInitialization classWithHeavyInitialization 
      = ClassWithHeavyInitialization.getInstance();
    ClassWithHeavyInitialization classWithHeavyInitialization2 
      = ClassWithHeavyInitialization.getInstance();

    // then
    assertTrue(classWithHeavyInitialization == classWithHeavyInitialization2);
}

That’s technically OK but of course a little bit too complicated for such a simple concept.

3. Lazy Initialization in Kotlin

We can see that using the lazy initialization pattern in Java is quite cumbersome; we need to write a lot of boilerplate code to achieve our goal. Luckily, the Kotlin language has built-in support for lazy initialization.

To create an object that will be initialized on first access, we can use the lazy keyword:

@Test
fun givenLazyValue_whenGetIt_thenShouldInitializeItOnlyOnce() {
    // given
    val numberOfInitializations: AtomicInteger = AtomicInteger()
    val lazyValue: ClassWithHeavyInitialization by lazy {
        numberOfInitializations.incrementAndGet()
        ClassWithHeavyInitialization()
    }
    // when
    println(lazyValue)
    println(lazyValue)

    // then
    assertEquals(numberOfInitializations.get(), 1)
}

As we can see, the lambda passed to the lazy function was executed only once.

When we accessed lazyValue for the first time, the actual initialization happened and the returned instance of the ClassWithHeavyInitialization class was assigned to the lazyValue reference. Subsequent accesses to lazyValue returned the previously initialized object.
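Outside of a test, a minimal sketch of a lazily initialized property could look like this (the Config class is a hypothetical example):

class Config {
    // the lambda runs only on the first access to settings
    val settings: Map<String, String> by lazy {
        println("loading settings")
        mapOf("env" to "dev")
    }
}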

We can pass a LazyThreadSafetyMode as an argument to the lazy function. The default mode is SYNCHRONIZED, meaning that only a single thread can initialize the given object.

We can also pass PUBLICATION as the mode – which allows every thread to initialize the given property. The object assigned to the reference will be the first value returned – so the first thread wins.

Let’s have a look at that scenario:

@Test
fun givenLazyValue_whenGetItUsingPublication_thenCouldInitializeItMoreThanOnce() {
    // given
    val numberOfInitializations: AtomicInteger = AtomicInteger()
    val lazyValue: ClassWithHeavyInitialization by lazy(LazyThreadSafetyMode.PUBLICATION) {
        numberOfInitializations.incrementAndGet()
        ClassWithHeavyInitialization()
    }
    val executorService = Executors.newFixedThreadPool(2)
    val countDownLatch = CountDownLatch(1)
    // when
    executorService.submit { countDownLatch.await(); println(lazyValue) }
    executorService.submit { countDownLatch.await(); println(lazyValue) }
    countDownLatch.countDown()

    // then
    executorService.awaitTermination(1, TimeUnit.SECONDS)
    executorService.shutdown()
    assertEquals(numberOfInitializations.get(), 2)
}

We can see that starting two threads at the same time causes the initialization of ClassWithHeavyInitialization to happen twice.

There’s also a third mode – NONE – but it shouldn’t be used in a multi-threaded environment, as its behavior is undefined.

4. Kotlin’s lateinit

In Kotlin, every non-null variable declared in a class needs to be initialized in the constructor; otherwise, we’ll get a compiler error. On the other hand, there are cases in which the variable is assigned dynamically – for example, by dependency injection.

To defer initialization of the variable, we can mark the field as lateinit. We’re informing the compiler that this variable will be assigned later, freeing the compiler from the responsibility of making sure it gets initialized:

lateinit var a: String
 
@Test
fun givenLateInitProperty_whenAccessItAfterInit_thenPass() {
    // when
    a = "it"
    println(a)

    // then not throw
}

If we forget to initialize the lateinit property, we’ll get an UninitializedPropertyAccessException:

@Test(expected = UninitializedPropertyAccessException::class)
fun givenLateInitProperty_whenAccessItWithoutInit_thenThrow() {
    // when
    println(a)
}

5. Conclusion

In this quick tutorial, we looked at the lazy initialization of objects.

Firstly, we saw how to create thread-safe lazy initialization in Java and how cumbersome it is, requiring a lot of boilerplate code.

Next, we delved into the Kotlin lazy keyword used for lazy initialization of properties. Finally, we saw how to defer assigning variables using the lateinit keyword.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Java Weekly, Issue 185

Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Introducing Spring Cloud Function [spring.io]

Spring has always adopted POJO-based approaches, now it’s time to focus on functional approaches. Spring Cloud is getting enhanced with a possibility of defining beans from function implementations – everything well integrated with Reactor.

>> How much projections can help? [blog.arnoldgalovics.com]

Using projection instead of entity-based fetching can significantly improve overall performance – which is not a surprise.

>> From Microservices to Service Blocks using Spring Cloud Function and AWS Lambda [kennybastani.com]

A practical look at Service Blocks using Spring Cloud Function and AWS Lambda. If you’re interested in seeing Spring Cloud Function in action – definitely have a look.

>> Scala vs Kotlin: Multiple Inheritance and the Diamond problem [blog.frankel.ch]

Scala and Kotlin have their own solutions to problems caused by multiple inheritance – worth having a look.

>> Mocking HTTP, Mockito style [specto.io]

When working with microservices, we often need to mock/stub HTTP endpoints – Hoverfly is one of the better tools for doing that.

>> Support for Java 9 in IntelliJ IDEA 2017.2 [jetbrains.com]

Java 9 will be (hopefully) released soon and IDE providers are coming up with new features for their tools – this time, we can have a look at new support in Intellij IDEA.

>> 5 Things You Need to Know When Using Hibernate with Mysql [thoughts-on-java.org]

Hibernate already supports most of MySQL’s features, but there are still a few things to remember that are not entirely abstracted away.

>> An alternative to passing through [ibm.com]

A very detailed guide to using method references in Java 8.

Also worth reading:

 Webinars and presentations:

Time to upgrade:

2. Technical

>> Project Package Organization [dolszewski.com]

Package structure in Java projects is often neglected or applied mindlessly – here we can see a comparison of the two most popular approaches: package-by-layer vs. package-by-feature.

>> Converting Queries to Commands [michaelfeathers.silvrback.com]

Raising the abstraction level and passing commands to objects can result in better decoupling – and Java 8 Lambda Expressions make it much easier and concise.

Also worth reading:

3. Musings

>> How to Write Test Cases [daedtech.com]

There is no universal answer to this problem – pick one of the scientific methods, follow it, and use the best tools possible.

>> Why Expert Developers Still Make Mistakes [daedtech.com]

We should make mistakes – those expose lacks in our knowledge that we can eventually fix.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Working sixty hours a week [dilbert.com]

>> You don’t take pride in your work [dilbert.com]

>> Leave early today [dilbert.com]

5. Pick of the Week

>> A Look at JUnit 5’s Core Features & New Testing Functionality [stackify.com]

Vavr (ex-Javaslang) Support in Spring Data

1. Overview

In this quick tutorial, we’re going to take a look at the support for Vavr in Spring Data – which was added in the 2.0.0 Spring build snapshot.

More specifically, we’re going to show an example of using Vavr Option and Vavr collections as return types of a Spring Data JPA repository.

2. Maven Dependencies

First, let’s set up a Spring Boot project – since it makes configuring Spring Data much quicker – by adding spring-boot-starter-parent to the pom.xml:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.BUILD-SNAPSHOT</version>
    <relativePath />
</parent>

Since we are using a BUILD-SNAPSHOT, and not a RELEASE version, this cannot be found in the Maven Central repository, so we need to add the Spring snapshots repository to the pom.xml file:

<repositories>
    <repository>
        <id>spring-snapshot</id>
        <name>Spring Snapshot Repository</name>
        <url>https://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

Evidently, we also need the vavr dependency, as well as a few other dependencies for Spring Data and testing:

<dependency>
    <groupId>io.vavr</groupId>
    <artifactId>vavr</artifactId>
    <version>0.9.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
</dependency>

The latest versions of vavr, spring-boot-starter-data-jpa, spring-boot-starter-test and h2 can be downloaded from Maven Central.

In this example, we’re only using Spring Boot because it provides Spring Data auto-configuration. If you are working in a non-Boot project, you can add the spring-data-commons dependency with Vavr support directly:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-commons</artifactId>
    <version>2.0.0.BUILD-SNAPSHOT</version>
</dependency>

3. Spring Data JPA Repository with Vavr

Spring Data now contains support for defining repository query methods using Vavr’s Option, and the Vavr collections Seq, Set, and Map, as return types.

First, let’s create a simple entity class to manipulate:

@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private String name;
    
    // standard constructor, getters, setters
}

Next, let’s create the JPA repository by implementing the Repository interface and defining two query methods:

public interface VavrUserRepository extends Repository<User, Long> {

    Option<User> findById(long id);

    Seq<User> findByName(String name);

    User save(User user);
}

Here, we have made use of Vavr Option for a method returning zero or one result, and Vavr Seq for a query method which returns multiple User records.
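As a usage note, Option lets the caller handle the empty case explicitly – for example, a sketch using Vavr’s getOrElse (assuming User has a no-arg constructor):

User user = userRepository.findById(42L)
  .getOrElse(User::new);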

We also need a main Spring Boot class to auto-configure Spring Data and bootstrap our application:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Since we have added the h2 dependency, Spring Boot will auto-configure a DataSource using an in-memory H2 database.

4. Testing the JPA Repository

Let’s add a JUnit test to verify our repository methods:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = Application.class)
public class VavrRepositoryIntegrationTest {

    @Autowired
    private VavrUserRepository userRepository;

    @Before
    public void setup() {
        User user1 = new User();
        user1.setName("John");
        User user2 = new User();
        user2.setName("John");

        userRepository.save(user1);
        userRepository.save(user2);
    }

    @Test
    public void whenAddUsers_thenGetUsers() {
        Option<User> user = userRepository.findById(1L);
        assertFalse(user.isEmpty());
        assertTrue(user.get().getName().equals("John"));

        Seq<User> users = userRepository.findByName("John");
        assertEquals(2, users.size());
    }
}

In the above test, we first add two user records to the database, then call the repository’s query methods. As you can see, the methods return the correct Vavr objects.

5. Conclusion

In this quick example, we have shown how we can define a Spring Data repository using Vavr types.

As always, the full source code can be found over on GitHub.

Introduction to Chronicle Queue

1. Overview

Chronicle Queue persists every single message using a memory-mapped file. This allows us to share messages between processes.

It stores data directly in off-heap memory, making it free of GC overhead. It is designed to provide a low-latency messaging framework for high-performance applications.

In this quick article, we will look into the basic set of operations.

2. Maven Dependencies

We need to add the following dependency:

<dependency>
    <groupId>net.openhft</groupId>
    <artifactId>chronicle</artifactId>
    <version>3.6.4</version>
</dependency>

We can always check for the latest version hosted on Maven Central.

3. Building Blocks

There are three concepts characteristic of Chronicle Queue:

  • Excerpt – a data container
  • Appender – used for writing data
  • Tailer – used for sequentially reading data

We’ll reserve a portion of memory for read-write operations using the Chronicle interface.

Here is the example code for creating an instance:

File queueDir = Files.createTempDirectory("chronicle-queue").toFile();
Chronicle chronicle = ChronicleQueueBuilder.indexed(queueDir).build();

We will need a base directory where the queue will persist records in memory-mapped files.

ChronicleQueueBuilder class provides different types of queues. In this case, we used IndexedChronicleQueue which uses the sequential index to maintain memory offsets of records in a queue.

4. Writing to the Queue

To write items to a queue, we’ll need to create an ExcerptAppender object from the Chronicle instance. Here is example code for writing messages to the queue:

ExcerptAppender appender = chronicle.createAppender();
appender.startExcerpt();

String stringVal = "Hello World";
int intVal = 101;
long longVal = System.currentTimeMillis();
double doubleVal = 90.00192091d;

appender.writeUTF(stringVal);
appender.writeInt(intVal);
appender.writeLong(longVal);
appender.writeDouble(doubleVal);
appender.finish();

After creating the appender, we will start the appender using a startExcerpt method. It starts an Excerpt with the default message capacity of 128K. We can use an overloaded version of startExcerpt to provide a custom capacity.
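For example, a sketch of starting an excerpt with a custom capacity – the 256 KB value here is just an assumption:

// reserve 256 KB for this excerpt instead of the default 128 KB
appender.startExcerpt(256 * 1024);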

Once started, we can write any literal or object value to the queue using a wide range of write methods provided by the library.

Finally, when we’re done with writing, we’ll finish the excerpt, saving the data to the queue, and later to disk.

5. Reading from the Queue

Reading the values from the queue can easily be done using the ExcerptTailer instance.

It is just like an iterator we use to traverse a collection in Java.

Let’s read values from the queue:

ExcerptTailer tailer = chronicle.createTailer();
while (tailer.nextIndex()) {
    tailer.readUTF();
    tailer.readInt();
    tailer.readLong();
    tailer.readDouble();
}
tailer.finish();

After creating the tailer, we use the nextIndex method to check if there is a new excerpt to read.

Once ExcerptTailer has a new Excerpt to read, we can read messages from it using a range of read methods for literal and object type values.

Finally, we finish the reading with the finish API.

6. Conclusion

In this tutorial, we gave a brief introduction to the Chronicle Queue and its building blocks. We saw how to create a queue, write and read data. Using it offers many benefits including low latency, durable interprocess communication (IPC) as well as no Garbage Collection overhead.

The solution provides data persistence through memory mapped files – with no data loss. It also allows concurrent read-writes from multiple processes; however, writes are handled synchronously.

As always, all code snippets can be found over on GitHub.

Introduction to Awaitility

1. Introduction

A common problem with asynchronous systems is that it’s hard to write readable tests for them that are focused on business logic and are not polluted with synchronizations, timeouts, and concurrency control.

In this article, we are going to take a look at Awaitility — a library which provides a simple domain-specific language (DSL) for asynchronous systems testing.

With Awaitility, we can express our expectations from the system in an easy-to-read DSL.

2. Dependencies

We need to add Awaitility dependencies to our pom.xml.

The awaitility library will be sufficient for most use cases. In case we want to use proxy-based conditions, we also need to provide the awaitility-proxy library:

<dependency>
    <groupId>org.awaitility</groupId>
    <artifactId>awaitility</artifactId>
    <version>3.0.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.awaitility</groupId>
    <artifactId>awaitility-proxy</artifactId>
    <version>3.0.0</version>
    <scope>test</scope>
</dependency>

You can find the latest version of the awaitility and awaitility-proxy libraries on Maven Central.

3. Creating an Asynchronous Service

Let’s write a simple asynchronous service and test it:

public class AsyncService {
    private final int DELAY = 1000;
    private final int INIT_DELAY = 2000;

    private AtomicLong value = new AtomicLong(0);
    private Executor executor = Executors.newFixedThreadPool(4);
    private volatile boolean initialized = false;

    void initialize() {
        executor.execute(() -> {
            sleep(INIT_DELAY);
            initialized = true;
        });
    }

    boolean isInitialized() {
        return initialized;
    }

    void addValue(long val) {
        throwIfNotInitialized();
        executor.execute(() -> {
            sleep(DELAY);
            value.addAndGet(val);
        });
    }

    public long getValue() {
        throwIfNotInitialized();
        return value.longValue();
    }

    private void sleep(int delay) {
        try {
            Thread.sleep(delay);
        } catch (InterruptedException e) {
        }
    }

    private void throwIfNotInitialized() {
        if (!initialized) {
            throw new IllegalStateException("Service is not initialized");
        }
    }
}

4. Testing with Awaitility

Now, let’s create the test class:

public class AsyncServiceTest {
    private AsyncService asyncService;

    @Before
    public void setUp() {
        asyncService = new AsyncService();
    }
    
    //...
}

Our test checks whether initialization of our service occurs within a specified timeout period (default 10s) after calling the initialize method.

This test case merely waits for the service initialization state to change or throws a ConditionTimeoutException if the state change does not occur.

The status is obtained by a Callable that polls our service at defined intervals (100ms default) after a specified initial delay (default 100ms). Here we are using the default settings for the timeout, interval, and delay:

asyncService.initialize();
await()
  .until(asyncService::isInitialized);

Here, we use await — one of the static methods of the Awaitility class. It returns an instance of a ConditionFactory class. We can also use other methods like given for the sake of increasing readability.

The default timing parameters can be changed using static methods from the Awaitility class:

Awaitility.setDefaultPollInterval(10, TimeUnit.MILLISECONDS);
Awaitility.setDefaultPollDelay(Duration.ZERO);
Awaitility.setDefaultTimeout(Duration.ONE_MINUTE);

Here we can see the use of the Duration class, which provides useful constants for the most frequently used time periods.

We can also provide custom timing values for each await call. Here we expect that initialization will occur at most after five seconds and at least after 100ms with polling intervals of 100ms:

asyncService.initialize();
await()
  .atLeast(Duration.ONE_HUNDRED_MILLISECONDS)
  .atMost(Duration.FIVE_SECONDS)
  .with()
  .pollInterval(Duration.ONE_HUNDRED_MILLISECONDS)
  .until(asyncService::isInitialized);

It’s worth mentioning that the ConditionFactory contains additional methods like with, then, and, and given. These methods don’t do anything and just return this, but they can be useful for enhancing the readability of test conditions.

5. Using Matchers

Awaitility also allows the use of hamcrest matchers to check the result of an expression. For example, we can check that our long value is changed as expected after calling the addValue method:

asyncService.initialize();
await()
  .until(asyncService::isInitialized);
long value = 5;
asyncService.addValue(value);
await()
  .until(asyncService::getValue, equalTo(value));

Note that in this example, we used the first await call to wait until the service is initialized. Otherwise, the getValue method would throw an IllegalStateException.

6. Ignoring Exceptions

Sometimes, we have a situation where a method throws an exception before an asynchronous job is done. In our service, it can be a call to the getValue method before the service is initialized.

Awaitility provides the possibility of ignoring this exception without failing a test.

For example, let’s check that the getValue result is equal to zero right after initialization, ignoring IllegalStateException:

asyncService.initialize();
given().ignoreException(IllegalStateException.class)
  .await().atMost(Duration.FIVE_SECONDS)
  .atLeast(Duration.FIVE_HUNDRED_MILLISECONDS)
  .until(asyncService::getValue, equalTo(0L));

7. Using Proxy

As described in section 2, we need to include awaitility-proxy to use proxy-based conditions. The idea of proxying is to provide real method calls for conditions without implementation of a Callable or lambda expression.

Let’s use the AwaitilityClassProxy.to static method to check that AsyncService is initialized:

asyncService.initialize();
await()
  .untilCall(to(asyncService).isInitialized(), equalTo(true));

8. Accessing Fields

Awaitility can even access private fields to perform assertions on them. In the following example, we can see another way to get the initialization status of our service:

asyncService.initialize();
await()
  .until(fieldIn(asyncService)
  .ofType(boolean.class)
  .andWithName("initialized"), equalTo(true));

9. Conclusion

In this quick tutorial, we introduced the Awaitility library, got acquainted with its basic DSL for the testing of asynchronous systems, and saw some advanced features which make the library flexible and easy to use in real projects.

As always, all code examples are available on Github.

Migrating from JUnit 4 to JUnit 5

1. Overview

In this article, we’ll see how we can migrate from JUnit 4 to the latest JUnit 5 release – with an overview of the differences between the two versions of the library.

For the general guidelines on using JUnit 5, see our article here.

2. JUnit 5 Advantages

Let’s start with the previous version – JUnit 4 has some clear limitations:

  • The entire framework was contained in a single jar library. The whole library needs to be imported even when only a particular feature is required. In JUnit 5, we get more granularity and can import only what is necessary
  • In JUnit 4, only one test runner can execute the tests at a time (e.g. SpringJUnit4ClassRunner or Parameterized). JUnit 5 allows multiple extensions to work simultaneously
  • JUnit 4 never advanced beyond Java 7, missing out on a lot of features from Java 8. JUnit 5 makes good use of Java 8 features

The idea behind JUnit 5 was to completely rewrite JUnit 4 to solve most of these drawbacks.


3. Differences

JUnit 5 is divided into several modules:

  • JUnit Platform – this module serves as the foundation for launching testing frameworks on the JVM and defines the APIs for test execution, discovery, and reporting
  • JUnit Jupiter – this module provides the new programming and extension models for writing tests in JUnit 5
  • JUnit Vintage – this module allows backward compatibility with JUnit 4 or even JUnit 3

3.1. Annotations

JUnit 5 comes with important changes within its annotations. The most important one is that we can no longer use @Test annotation for specifying expectations.

The expected parameter in JUnit 4:

@Test(expected = Exception.class)
public void shouldRaiseAnException() throws Exception {
    // ...
}

Now, we can use the assertThrows method:

@Test
public void shouldRaiseAnException() throws Exception {
    Assertions.assertThrows(Exception.class, () -> {
        //...
    });
}

The timeout attribute in JUnit 4:

@Test(timeout = 1)
public void shouldFailBecauseTimeout() throws InterruptedException {
    Thread.sleep(10);
}

Now, the assertTimeout method in JUnit 5:

@Test
public void shouldFailBecauseTimeout() throws InterruptedException {
    Assertions.assertTimeout(Duration.ofMillis(1), () -> Thread.sleep(10));
}

Other annotations that were changed within JUnit 5 (a short before/after sketch follows this list):

  • @Before annotation is renamed to @BeforeEach
  • @After annotation is renamed to @AfterEach
  • @BeforeClass annotation is renamed to @BeforeAll
  • @AfterClass annotation is renamed to @AfterAll
  • @Ignore annotation is renamed to @Disabled
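As a minimal sketch of these renames in practice (the setUp() method is a hypothetical example):

// JUnit 4
@Before
public void setUp() { /* ... */ }

// JUnit 5
@BeforeEach
void setUp() { /* ... */ }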

3.2. Assertions

We can now write assertion messages in a lambda in JUnit 5, allowing the lazy evaluation to skip complex message construction until needed:

@Test
public void shouldFailBecauseTheNumbersAreNotEqual_lazyEvaluation() {
    Assertions.assertTrue(
      2 == 3, 
      () -> "Numbers " + 2 + " and " + 3 + " are not equal!");
}

We can also group assertions in JUnit 5:

@Test
public void shouldAssertAllTheGroup() {
    List<Integer> list = Arrays.asList(1, 2, 4);
    Assertions.assertAll("List is not incremental",
        () -> Assertions.assertEquals(list.get(0).intValue(), 1),
        () -> Assertions.assertEquals(list.get(1).intValue(), 2),
        () -> Assertions.assertEquals(list.get(2).intValue(), 3));
}

3.3. Assumptions

The new Assumptions class is now in org.junit.jupiter.api.Assumptions. JUnit 5 fully supports the existing assumption methods of JUnit 4 and also adds a set of new methods that allow running some assertions only under specific scenarios:

@Test
public void whenEnvironmentIsWeb_thenUrlsShouldStartWithHttp() {
    assumingThat("WEB".equals(System.getenv("ENV")),
      () -> {
          assertTrue("http".startsWith(address));
      });
}

3.4. Tagging And Filtering

In JUnit 4 we could group tests by using the @Category annotation. With JUnit 5, the @Category annotation gets replaced with the @Tag annotation:

@Tag("annotations")
@Tag("junit5")
@RunWith(JUnitPlatform.class)
public class AnnotationTestExampleTest {
    /*...*/
}

We can include/exclude particular tags using the maven-surefire-plugin:

<build>
    <plugins>
        <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <properties>
                    <includeTags>junit5</includeTags>
                </properties>
            </configuration>
        </plugin>
    </plugins>
</build>

3.5. New Annotations for Running Tests

In JUnit 4, the @RunWith annotation was used to integrate the test context with other frameworks or to change the overall execution flow of the test cases.

With JUnit 5, we can now use the @ExtendWith annotation to provide similar functionality.

As an example, to use the Spring features in JUnit 4:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  {"/app-config.xml", "/test-data-access-config.xml"})
public class SpringExtensionTest {
    /*...*/
}

Now, in JUnit 5 it is a simple extension:

@ExtendWith(SpringExtension.class)
@ContextConfiguration(
  { "/app-config.xml", "/test-data-access-config.xml" })
public class SpringExtensionTest {
    /*...*/
}

3.6. New Test Rules Annotations

In JUnit 4, the @Rule and @ClassRule annotations were used to add special functionality to tests.

In JUnit 5, we can reproduce the same logic using the @ExtendWith annotation.

For example, say we have a custom rule in JUnit 4 to write log traces before and after a test:

public class TraceUnitTestRule implements TestRule {
 
    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // Before and after an evaluation tracing here 
                ...
            }
        };
    }
}

And we implement it in a test suite:

@Rule
public TraceUnitTestRule traceRuleTests = new TraceUnitTestRule();

In JUnit 5, we can write the same in a much more intuitive manner:

public class TraceUnitExtension implements AfterEachCallback, BeforeEachCallback {

    @Override
    public void beforeEach(TestExtensionContext context) throws Exception {
        // ...
    }

    @Override
    public void afterEach(TestExtensionContext context) throws Exception {
        // ...
    }
}

Using JUnit 5’s AfterEachCallback and BeforeEachCallback interfaces available in the package org.junit.jupiter.api.extension, we easily implement this rule in the test suite:

@RunWith(JUnitPlatform.class)
@ExtendWith(TraceUnitExtension.class)
public class RuleExampleTest {
 
    @Test
    public void whenTracingTests() {
        /*...*/
    }
}

3.7. JUnit 5 Vintage

JUnit Vintage aids in the migration of JUnit tests by running JUnit 3 or JUnit 4 tests within the JUnit 5 context.

We can use it by importing the JUnit Vintage Engine:

<dependency>
    <groupId>org.junit.vintage</groupId>
    <artifactId>junit-vintage-engine</artifactId>
    <version>${junit5.vintage.version}</version>
    <scope>test</scope>
</dependency>

4. Conclusion

As we’ve seen in this article, JUnit 5 is a modular and modern take on the JUnit 4 framework. We have introduced the major differences between these two versions and hinted at how to migrate from one to the other.

The full implementation of this tutorial can be found over on GitHub.

Guide to the HyperLogLog Algorithm

1. Overview

The HyperLogLog (HLL) data structure is a probabilistic data structure used to estimate the cardinality of a data set.

Suppose that we have millions of users and we want to calculate the number of distinct visits to our web page. A naive implementation would be to store each unique user id in a set, and then the size of the set would be our cardinality.

When we are dealing with very large volumes of data, counting cardinality this way will be very inefficient because the data set will take up a lot of memory.

But if we are fine with an estimation within a few percent and don’t need the exact number of unique visits, then we can use the HLL, as it was designed for exactly such a use case – estimating the count of millions or even billions of distinct values.

2. Maven Dependency

To get started we’ll need to add the Maven dependency for the hll library:

<dependency>
    <groupId>net.agkn</groupId>
    <artifactId>hll</artifactId>
    <version>1.6.0</version>
</dependency>

3. Estimating Cardinality Using HLL

Jumping right in – the HLL constructor has two arguments that we can tweak according to our needs:

  • log2m – this is the log base 2 of the number of registers used internally by HLL (i.e. the HLL uses 2^log2m registers)
  • regwidth – this is the number of bits used per register

If we want a higher accuracy, we need to set these to higher values. Such a configuration will have additional overhead because our HLL will occupy more memory. If we’re fine with lower accuracy, we can lower those parameters, and our HLL will occupy less memory.

Let’s create an HLL to count distinct values for a data set with 100 million entries. We will set the log2m parameter equal to 14 and regwidth equal to 5 – reasonable values for a data set of this size.

When each new element is inserted to the HLL, it needs to be hashed beforehand. We will be using Hashing.murmur3_128() from the Guava library (included with the hll dependency) because it is both accurate and fast.

HashFunction hashFunction = Hashing.murmur3_128();
long numberOfElements = 100_000_000;
long toleratedDifference = 1_000_000;
HLL hll = new HLL(14, 5);

Choosing those parameters should give us an error rate below one percent (1,000,000 elements). We will be testing this in a moment.

Next, let’s insert the 100 million elements:

LongStream.range(0, numberOfElements).forEach(element -> {
    long hashedValue = hashFunction.newHasher().putLong(element).hash().asLong();
    hll.addRaw(hashedValue);
  }
);

Finally, we can test that the cardinality returned by the HLL is within our desired error threshold:

long cardinality = hll.cardinality();
assertThat(cardinality)
  .isCloseTo(numberOfElements, Offset.offset(toleratedDifference));

4. Memory Size of HLL

We can calculate how much memory our HLL from the previous section will take by using the following formula: numberOfBits = 2 ^ log2m * regwidth.

In our example, that will be 2 ^ 14 * 5 = 81,920 bits (roughly 10,240 bytes). So estimating the cardinality of a 100-million member set using HLL occupies only about 10 KB of memory.
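To make the arithmetic explicit, a quick sketch:

long numberOfBits = (1L << 14) * 5;    // 2^14 registers * 5 bits = 81,920 bits
long numberOfBytes = numberOfBits / 8; // 10,240 bytes, roughly 10 KB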

Let’s compare this with a naive set implementation. In such an implementation, we need to have a Set of 100 million Long values, which would occupy 100,000,000 * 8 bytes = 800,000,000 bytes.

We can see the difference is astonishingly high. Using HLL, we need only about 10 kilobytes, whereas using the naive Set implementation we would need roughly 800 megabytes.

When we consider bigger data sets, the difference between HLL and the naive Set implementation becomes even higher.

5. Union of Two HLLs

HLL has one beneficial property when performing unions. When we take the union of two HLLs created from distinct data sets and measure its cardinality, we will get the same error threshold for the union that we would get if we had used a single HLL and calculated the hash values for all elements of both data sets from the beginning.

Note that when we union two HLLs, both should have the same log2m and regwidth parameters to yield proper results.

Let’s test that property by creating two HLLs – one is populated with values from 0 to 100 million, and the second is populated with values from 100 million to 200 million:

HashFunction hashFunction = Hashing.murmur3_128();
long numberOfElements = 100_000_000;
long toleratedDifference = 1_000_000;
HLL firstHll = new HLL(15, 5);
HLL secondHLL = new HLL(15, 5);

LongStream.range(0, numberOfElements).forEach(element -> {
    long hashedValue = hashFunction.newHasher()
      .putLong(element)
      .hash()
      .asLong();
    firstHll.addRaw(hashedValue);
    }
);

LongStream.range(numberOfElements, numberOfElements * 2).forEach(element -> {
    long hashedValue = hashFunction.newHasher()
      .putLong(element)
      .hash()
      .asLong();
    secondHLL.addRaw(hashedValue);
    }
);

Please note that we tuned the configuration parameters of the HLLs, increasing the log2m parameter from 14, as seen in the previous section, to 15 for this example, since the resulting HLL union will contain twice as many elements.

Next, let’s union the firstHll and secondHll using the union() method. As you can see, the estimated cardinality is within an error threshold as if we had taken the cardinality from one HLL with 200 million elements:

firstHll.union(secondHLL);
long cardinality = firstHll.cardinality();
assertThat(cardinality)
  .isCloseTo(numberOfElements * 2, Offset.offset(toleratedDifference * 2));

6. Conclusion

In this tutorial, we had a look at the HyperLogLog algorithm.

We saw how to use the HLL to estimate the cardinality of a set. We also saw that HLL is very space-efficient compared to the naive solution. And we performed the union operation on two HLLs and verified that the union behaves in the same way as a single HLL.

The implementation of all these examples and code snippets can be found in the GitHub project; this is a Maven project, so it should be easy to import and run as it is.


Kotlin with Mockito

1. Introduction

Kotlin and Java walk hand in hand. This means we can leverage the vast number of existent Java libraries in our Kotlin projects.

In this short article, we’ll see how we can mock using Mockito in Kotlin. If you want to learn more about the library, check this article.

2. Setup

First of all, let’s create a Maven project and add JUnit and Mockito dependencies in the pom.xml:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-all</artifactId>
    <version>2.0.2-beta</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>

We also need to tell Maven that we’re working with Kotlin so that it compiles the source code for us. Check out the official Kotlin documentation for more information on how to configure that in the pom.xml.
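As a rough sketch – with versions and fine-tuning omitted – that configuration boils down to registering the kotlin-maven-plugin with its compile and test-compile goals:

<plugin>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-maven-plugin</artifactId>
    <version>${kotlin.version}</version>
    <executions>
        <execution>
            <id>compile</id>
            <goals><goal>compile</goal></goals>
        </execution>
        <execution>
            <id>test-compile</id>
            <goals><goal>test-compile</goal></goals>
        </execution>
    </executions>
</plugin>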

3. Using Mockito with Kotlin

Suppose we have an implementation we want to test – LendBookManager. This class has a dependency on a service, called BookService, which is not yet implemented:

interface BookService {
    fun inStock(bookId: Int): Boolean
    fun lend(bookId: Int, memberId: Int)
}

The BookService is injected during the instantiation of LendBookManager and is used twice throughout the checkout method, which is the method we need to write our test for:

class LendBookManager(val bookService:BookService) {
    fun checkout(bookId: Int, memberId: Int) {
        if(bookService.inStock(bookId)) {
            bookService.lend(bookId, memberId)
        } else {
            throw IllegalStateException("Book is not available")
        }
    }
}

It would be tough to write unit tests for that method without having the ability to mock BookService – which is where Mockito comes in handy.

We can, with just two lines of code, create a mock of the BookService interface and instruct it to return a fixed value when the inStock() method is called:

val mockBookService = Mockito.mock(BookService::class.java)
Mockito.`when`(mockBookService.inStock(100)).thenReturn(true)

This will force the mockBookService instance to return true whenever the inStock() method is called with the argument 100 (notice that we had to escape the when() method using the backtick; this is required since when is a reserved keyword in the Kotlin language).

We can then pass this mocked instance to LendBookManager during instantiation, invoke the method we want to test, and verify that the lend() method was called as a result of the operation:

val manager = LendBookManager(mockBookService)
manager.checkout(100, 1)
Mockito.verify(mockBookService).lend(100, 1)

We can quickly test the other logical path of our method’s implementation, which should throw an exception if the desired book is not in stock:

@Test(expected = IllegalStateException::class)
fun whenBookIsNotAvailable_thenAnExceptionIsThrown() {
    val mockBookService = Mockito.mock(BookService::class.java)
    Mockito.`when`(mockBookService.inStock(100)).thenReturn(false)
    val manager = LendBookManager(mockBookService)
    manager.checkout(100, 1)
}

Notice that, for this test, we told mockBookService to return false when asked if the book with id 100 was in stock. This should cause the checkout() invocation to throw an IllegalStateException.

We use the expected property on the @Test annotation, indicating that we expect this test to throw an exception.

4. Mockito Kotlin library

We can make our code look more Kotlin-like by using an open-source library called mockito-kotlin. This library wraps some of Mockito’s functionality around its methods, providing a simpler API:

@Test
fun whenBookIsAvailable_thenLendMethodIsCalled() {
    val mockBookService : BookService = mock()
    whenever(mockBookService.inStock(100)).thenReturn(true)
    val manager = LendBookManager(mockBookService)
    manager.checkout(100, 1)
    verify(mockBookService).lend(100, 1)
}

It also provides its version of the mock() method. When using this method, we can leverage type inference so we can call the method without passing any additional parameters.

Finally, this library exposes a new whenever() method that can be used freely, without the need for the backticks we had to use with Mockito’s native when() method.

Check their wiki for a complete list of enhancements.

5. Conclusion

In this quick tutorial, we had a look at how to set up our project to use Mockito and Kotlin together, and how we can leverage this combination to create mocks and write effective unit tests.

As always, you can check out the complete source on the GitHub repo.

Data Modeling in Cassandra

1. Overview

Cassandra is a NoSQL database that provides high availability and horizontal scalability without compromising performance.

To get the best performance out of Cassandra, we need to carefully design the schema around query patterns specific to the business problem at hand.

In this article, we will review some of the key concepts around how to approach data modeling in Cassandra.

Before proceeding, you can go through our Cassandra with Java article to understand the basics and how to connect to Cassandra using Java.

2. Partition Key

Cassandra is a distributed database in which data is partitioned and stored across multiple nodes within a cluster.

The partition key is made up of one or more data fields and is used by the partitioner to generate a token via hashing to distribute the data uniformly across a cluster.

3. Clustering Key

A clustering key is made up of one or more fields and is used to cluster, or group together, rows with the same partition key and store them in sorted order.

Let’s say that we are storing time-series data in Cassandra and we want to retrieve the data in chronological order. A clustering key that includes time-series data fields will be very helpful for efficient retrieval of data for this use case.

Note: The combination of partition key and clustering key makes up the primary key and uniquely identifies any record in the Cassandra cluster.
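For instance, in a hypothetical time-series table, the first column of the primary key acts as the partition key and the remaining column acts as the clustering key:

CREATE TABLE sensor_readings (
  sensor_id uuid,          // partition key
  reading_time timestamp,  // clustering key
  value double,
  PRIMARY KEY (sensor_id, reading_time))
WITH CLUSTERING ORDER BY (reading_time DESC);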

4. Guidelines Around Query Patterns

Before starting with data modeling in Cassandra, we should identify the query patterns and ensure that they adhere to the following guidelines:

  1. Each query should fetch data from a single partition
  2. We should keep track of how much data is getting stored in a partition, as Cassandra has limits around the number of columns that can be stored in a single partition
  3. It is OK to denormalize and duplicate the data to support different kinds of query patterns over the same data

Based on the above guidelines, let’s look at some real-world use cases and how we would model the Cassandra data models for them.

5. Real World Data Modeling Examples

5.1. Facebook Posts

Suppose that we are storing Facebook posts of different users in Cassandra. One of the common query patterns will be fetching the top ‘N‘ posts made by a given user.

Thus, we need to store all data for a particular user on a single partition as per the above guidelines.

Also, using the post timestamp as the clustering key will be helpful for retrieving the top ‘N‘ posts more efficiently.

Let’s define the Cassandra table schema for this use case:

CREATE TABLE posts_facebook (
  user_id uuid,
  post_id timeuuid, 
  content text,
  PRIMARY KEY (user_id, post_id) )
WITH CLUSTERING ORDER BY (post_id DESC);

Now, let’s write a query to find the top 20 posts for the user Anna:

SELECT content FROM posts_facebook WHERE user_id = 'Anna_id' LIMIT 20

5.2. Gyms Across the Country

Suppose that we are storing the details of different partner gyms across the different cities and states of many countries and we would like to fetch the gyms for a given city.

Also, let’s say we need to return the results having gyms sorted by their opening date.

Based on the above guidelines, we should store the gyms located in a given city of a specific state and country on a single partition and use the opening date and gym name as a clustering key.

Let’s define the Cassandra table schema for this example:

CREATE TABLE gyms_by_city (
  country_code text,
  state text,
  city text,
  gym_name text,
  opening_date timestamp,
  PRIMARY KEY (
    (country_code, state, city),
    opening_date, gym_name))
WITH CLUSTERING ORDER BY (opening_date ASC, gym_name ASC);

Now, let’s look at a query that fetches the first ten gyms by their opening date for the city of Phoenix within the U.S. state of Arizona:

SELECT * FROM gyms_by_city
  WHERE country_code = 'us' AND state = 'Arizona' AND city = 'Phoenix'
  LIMIT 10

Next, let’s see a query that fetches the ten most recently-opened gyms in Phoenix:

SELECT * FROM gyms_by_city
  WHERE country_code = 'us' AND state = 'Arizona' AND city = 'Phoenix'
  ORDER BY opening_date DESC
  LIMIT 10

Note: As the last query’s sort order is opposite of the sort order defined during the table creation, the query will run slower as Cassandra will first fetch the data and then sort it in memory.

5.3. E-commerce Customers and Products

Let’s say we are running an e-commerce store and that we are storing the Customer and Product information within Cassandra. Let’s look at some of the common query patterns around this use case:

  1. Get Customer info
  2. Get Product info
  3. Get all Customers who like a given Product
  4. Get all Products a given Customer likes

We will start by using separate tables for storing the Customer and Product information. However, we need to introduce a fair amount of denormalization to support the 3rd and 4th queries shown above.

We will create two more tables to achieve this – “Customer_by_Product” and “Product_by_Customer“.

Let’s look at the Cassandra table schema for this example:

CREATE TABLE Customer (
  cust_id text,
  first_name text, 
  last_name text,
  registered_on timestamp, 
  PRIMARY KEY (cust_id));

CREATE TABLE Product (
  prdt_id text,
  title text,
  PRIMARY KEY (prdt_id));

CREATE TABLE Customer_By_Liked_Product (
  liked_prdt_id text,
  liked_on timestamp,
  title text,
  cust_id text,
  first_name text,
  last_name text,
  PRIMARY KEY (liked_prdt_id, liked_on));

CREATE TABLE Product_Liked_By_Customer (
  cust_id text, 
  first_name text,
  last_name text,
  liked_prdt_id text, 
  liked_on timestamp,
  title text,
  PRIMARY KEY (cust_id, liked_on));

Note: To support both queries – recently-liked products by a given customer and customers who recently liked a given product – we have used the “liked_on” column as a clustering key.

Let’s look at the query to find the ten Customers who most recently liked the product “Pepsi“:

SELECT * FROM Customer_By_Liked_Product WHERE title = 'Pepsi' LIMIT 10

And let’s see the query that finds the recently-liked products (up to ten) by a customer named “Anna“:

SELECT * FROM Product_Liked_By_Customer
  WHERE first_name = 'Anna' LIMIT 10

6. Inefficient Query Patterns

Due to the way that Cassandra stores data, some query patterns are not at all efficient, including the following:

  • Fetching data from multiple partitions – this will require a coordinator to fetch the data from multiple nodes, store it temporarily in heap, and then aggregate the data before returning results to the user
  • Join-based queries – due to its distributed nature, Cassandra does not support table joins in queries the same way a relational database does, and as a result, queries with joins will be slower and can also lead to inconsistency and availability issues

7. Conclusion

In this tutorial, we have covered several best practices around how to approach data modeling in Cassandra.

Understanding the core concepts and identifying the query patterns in advance is necessary for designing a correct data model that gets the best performance from a Cassandra cluster.

An Introduction to Atomic Variables in Java

1. Introduction

Simply put, shared state very easily leads to problems when concurrency is involved. If access to shared mutable objects is not managed properly, applications can quickly become prone to some hard-to-detect concurrency errors.

In this article, we’ll revisit the use of locks to handle concurrent access, explore some of the disadvantages associated with locks and finally, introduce atomic variables as an alternative.

2. Locks

Let’s have a look at the Counter class:

public class Counter {
    int counter; 
 
    public void increment() {
        counter++;
    }
}

In the case of a single threaded environment, this works perfectly; however, as soon as we allow more than one thread to write, we start getting inconsistent results.

This is because of the simple increment operation (counter++), which may look like an atomic operation, but in fact is a combination of three operations: obtaining the value, incrementing, and writing the updated value back.

If two threads try to get and update the value at the same time, it may result in lost updates.

One of the ways to manage access to an object is to use locks. This can be achieved by using the synchronized keyword in the increment method signature. The synchronized keyword ensures that only one thread can enter the method at one time (to learn more about Locking and Synchronization refer to – Guide to Synchronized Keyword in Java):

public class SafeCounterWithLock {
    private volatile int counter;
 
    public synchronized void increment() {
        counter++;
    }
}

Additionally, we need to add the volatile keyword to ensure proper reference visibility among threads.

Using locks solves the problem. However, performance takes a hit.

When multiple threads attempt to acquire a lock, one of them wins, while the rest of the threads are either blocked or suspended.

The process of suspending and then resuming a thread is very expensive and affects the overall efficiency of the system.

In a small program, such as the counter, the time spent in context switching may become much more than actual code execution, thus greatly reducing overall efficiency.

3. Atomic Operations

There is a branch of research focused on creating non-blocking algorithms for concurrent environments. These algorithms exploit low-level atomic machine instructions such as compare-and-swap (CAS), to ensure data integrity.

A typical CAS operation works on three operands:

  1. The memory location on which to operate (M)
  2. The existing expected value (A) of the variable
  3. The new value (B) which needs to be set

The CAS operation atomically updates the value in M to B, but only if the existing value in M matches A; otherwise, no action is taken.

In both cases, the existing value in M is returned. This combines three steps – getting the value, comparing the value and updating the value – into a single machine level operation.
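For illustration, here’s a short sketch of these semantics using AtomicInteger, which is introduced formally in the next section:

AtomicInteger atomicValue = new AtomicInteger(5);

boolean first = atomicValue.compareAndSet(5, 6);  // true - expected value matched, value is now 6
boolean second = atomicValue.compareAndSet(5, 7); // false - value is 6, not 5, so nothing changes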

When multiple threads attempt to update the same value through CAS, one of them wins and updates the value. However, unlike in the case of locks, no other thread gets suspended; instead, they’re simply informed that they did not manage to update the value. The threads can then proceed to do further work and context switches are completely avoided.

One other consequence is that the core program logic becomes more complex. This is because we have to handle the scenario when the CAS operation didn’t succeed. We can retry it again and again till it succeeds, or we can do nothing and move on depending on the use case.

4. Atomic Variables in Java

The most commonly used atomic variable classes in Java are AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference. These classes represent an int, a long, a boolean, and an object reference, respectively, which can be atomically updated. The main methods exposed by these classes are:

  • get() – gets the value from the memory, so that changes made by other threads are visible; equivalent to reading a volatile variable
  • set() – writes the value to memory, so that the change is visible to other threads; equivalent to writing a volatile variable
  • lazySet() – eventually writes the value to memory, and may be reordered with subsequent relevant memory operations. One use case is nullifying a reference that will never be accessed again, for the sake of garbage collection; here, better performance is achieved by delaying the null volatile write (see the sketch after this list)
  • compareAndSet() – performs the CAS operation described in section 3; returns true when it succeeds, false otherwise
  • weakCompareAndSet() – same as compareAndSet(), but weaker in the sense that it does not create happens-before orderings; this means it may not necessarily see updates made to other variables
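
As a quick illustration of lazySet(), here’s a sketch of our own for discarding a buffer that no other thread needs to observe promptly:

import java.util.concurrent.atomic.AtomicReference;

public class LazySetExample {
    private final AtomicReference<byte[]> buffer
      = new AtomicReference<>(new byte[1024]);

    public void discardBuffer() {
        // no thread depends on seeing the null immediately, so the cheaper
        // lazySet() is sufficient; set() would issue a full volatile write
        buffer.lazySet(null);
    }
}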

A thread-safe counter implemented with AtomicInteger is shown in the example below:

public class SafeCounterWithoutLock {
    private final AtomicInteger counter = new AtomicInteger(0);
    
    public int getValue() {
        return counter.get();
    }
    public void increment() {
        while(true) {
            int existingValue = getValue();
            int newValue = existingValue + 1;
            if(counter.compareAndSet(existingValue, newValue)) {
                return;
            }
        }
    }
}

As you can see, we retry the compareAndSet operation again and again on failure, since we want to guarantee that a call to the increment method always increases the value by 1.
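
It’s worth noting that for a simple increment like this, AtomicInteger already provides convenience methods that encapsulate the same retry logic, so the method body can be reduced to a single call:

public void increment() {
    // atomically adds 1 and returns the updated value
    counter.incrementAndGet();
}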

5. Conclusion

In this quick tutorial, we described an alternate way of handling concurrency where disadvantages associated with locking can be avoided. We also looked at the main methods exposed by the atomic variable classes in Java.

As always, the examples are all available over on GitHub.

To explore more classes which internally use non-blocking algorithms refer to a guide to ConcurrentMap.

Apache Commons Collections MapUtils

1. Introduction

MapUtils is one of the tools available in the Apache Commons Collections project.

Simply put, it provides utility methods and decorators to work with java.util.Map and java.util.SortedMap instances.

2. Setup

Let’s start by adding the dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.1</version>
</dependency>

3. Utility Methods

3.1. Creating a Map from an Array

Now, let’s set up arrays that we will use for creating a map:

public class MapUtilsTest {
    private String[][] color2DArray = new String[][] {
        {"RED", "#FF0000"},
        {"GREEN", "#00FF00"},
        {"BLUE", "#0000FF"}
    };
    private String[] color1DArray = new String[] {
        "RED", "#FF0000",
        "GREEN", "#00FF00",
        "BLUE", "#0000FF"
    };
    private Map<String, String> colorMap;

    //...
}

Let’s see how we can create a map from a two-dimensional array:

@Test
public void whenCreateMapFrom2DArray_theMapIsCreated() {
    this.colorMap = MapUtils.putAll(
      new HashMap<>(), this.color2DArray);

    assertThat(
      this.colorMap, 
      is(aMapWithSize(this.color2DArray.length)));
    
    assertThat(this.colorMap, hasEntry("RED", "#FF0000"));
    assertThat(this.colorMap, hasEntry("GREEN", "#00FF00"));
    assertThat(this.colorMap, hasEntry("BLUE", "#0000FF"));
}

We could also use a one-dimensional array. In that case, the array is treated as keys and values in alternate indices:

@Test
public void whenCreateMapFrom1DArray_theMapIsCreated() {
    this.colorMap = MapUtils.putAll(
      new HashMap<>(), this.color1DArray);
    
    assertThat(
      this.colorMap, 
      is(aMapWithSize(this.color1DArray.length / 2)));

    assertThat(this.colorMap, hasEntry("RED", "#FF0000"));
    assertThat(this.colorMap, hasEntry("GREEN", "#00FF00"));
    assertThat(this.colorMap, hasEntry("BLUE", "#0000FF"));
}

3.2. Printing the Content of a Map

Often, while debugging or writing debug logs, we’d like to print the entire map:

@Test
public void whenVerbosePrintMap_thenMustPrintFormattedMap() {
    MapUtils.verbosePrint(System.out, "Optional Label", this.colorMap);
}

And the result:

Optional Label = 
{
    RED = #FF0000
    BLUE = #0000FF
    GREEN = #00FF00
}

We can also use debugPrint() which additionally prints the data types of the values.
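
The invocation mirrors verbosePrint(); with the same colorMap, each value in the output is additionally annotated with its runtime type:

MapUtils.debugPrint(System.out, "Optional Label", this.colorMap);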

3.3. Getting Values

MapUtils provides some methods for extracting value from a map for a given key in a null-safe manner.

For example, getString() gets a String from the Map. The String value is obtained via toString(). We can optionally specify a default value to be returned if the value is null or if the conversion fails:

@Test
public void whenGetKeyNotPresent_thenMustReturnDefaultValue() {
    String defaultColorStr = "COLOR_NOT_FOUND";
    String color = MapUtils
      .getString(this.colorMap, "BLACK", defaultColorStr);
    
    assertEquals(color, defaultColorStr);
}

Note that these methods are null-safe, i.e., they can safely handle a null map parameter:

@Test
public void whenGetOnNullMap_thenMustReturnDefaultValue() {
    String defaultColorStr = "COLOR_NOT_FOUND";
    String color = MapUtils.getString(null, "RED", defaultColorStr);
    
    assertEquals(color, defaultColorStr);
}

Here, color gets the value COLOR_NOT_FOUND even though the map is null.

3.4. Inverting the Map

We can also easily invert a map:

@Test
public void whenInvertMap_thenMustReturnInvertedMap() {
    Map<String, String> invColorMap = MapUtils.invertMap(this.colorMap);

    int size = invColorMap.size();
    Assertions.assertThat(invColorMap)
      .hasSameSizeAs(colorMap)
      .containsKeys(this.colorMap.values().toArray(new String[] {}))
      .containsValues(this.colorMap.keySet().toArray(new String[] {}));
}

This would invert the colorMap to:

{
    #00FF00 = GREEN
    #FF0000 = RED
    #0000FF = BLUE
}

If the source map associates the same value with multiple keys, then after inversion that value becomes a single key, and which of the original keys survives as its value is undefined.

3.5. Null and Empty Checks

The isEmpty() method returns true if a Map is null or empty.

The safeAddToMap() method prevents the addition of null values to a Map by replacing them with an empty String.
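
Here’s a short sketch of both helpers; per the documented behavior, safeAddToMap() stores an empty String in place of a null value:

@Test
public void whenUsingNullSafeMethods_thenNoExceptionIsThrown() {
    assertTrue(MapUtils.isEmpty(null));
    assertTrue(MapUtils.isEmpty(new HashMap<>()));

    Map<String, Object> map = new HashMap<>();
    MapUtils.safeAddToMap(map, "BLACK", null);

    // the null value was converted to an empty String
    assertEquals("", map.get("BLACK"));
}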

4. Decorators

These methods add additional functionality to a Map.

In most cases, it’s good practice not to keep a reference to the decorated Map, but to access the data only through the decorator.

4.1. Fixed-Size Map

fixedSizeMap() returns a fixed-size map backed by the given map. Elements can be changed but not added or removed:

@Test(expected = IllegalArgumentException.class)
public void whenCreateFixedSizedMapAndAdd_thenMustThrowException() {
    Map<String, String> rgbMap = MapUtils
      .fixedSizeMap(MapUtils.putAll(new HashMap<>(), this.color1DArray));
    
    rgbMap.put("ORANGE", "#FFA500");
}

4.2. Predicated Map

The predicatedMap() method returns a Map that ensures all held elements match the provided predicate:

@Test(expected = IllegalArgumentException.class)
public void whenAddDuplicate_thenThrowException() {
    Map<String, String> uniqValuesMap 
      = MapUtils.predicatedMap(this.colorMap, null, 
        PredicateUtils.uniquePredicate());
    
    uniqValuesMap.put("NEW_RED", "#FF0000");
}

Here, we specified the predicate for values using PredicateUtils.uniquePredicate(). Any attempt to insert a duplicate value into this map will result in java.lang.IllegalArgumentException.

We can implement custom predicates by implementing the Predicate interface.
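
For instance, here’s a sketch of our own using a lambda for the single-method Predicate interface, admitting only values formatted as hex color codes:

@Test(expected = IllegalArgumentException.class)
public void whenAddNonHexValue_thenThrowException() {
    Predicate<String> hexColorPredicate
      = value -> value != null && value.matches("#[0-9A-F]{6}");

    Map<String, String> hexColorMap = MapUtils
      .predicatedMap(this.colorMap, null, hexColorPredicate);

    // "not-a-color" fails the value predicate, so put() throws
    hexColorMap.put("ORANGE", "not-a-color");
}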

4.3. Lazy Map

lazyMap() returns a map where values are initialized when requested.

If a key passed to this map’s Map.get(Object) method is not present in the map, the Transformer instance will be used to create a new object that will be associated with the requested key:

@Test
public void whenCreateLazyMap_theMapIsCreated() {
    Map<Integer, String> intStrMap = MapUtils.lazyMap(
      new HashMap<>(),
      TransformerUtils.stringValueTransformer());
    
    assertThat(intStrMap, is(anEmptyMap()));
    
    intStrMap.get(1);
    intStrMap.get(2);
    intStrMap.get(3);
    
    assertThat(intStrMap, is(aMapWithSize(3)));
}

5. Conclusion

In this quick tutorial, we have explored the Apache Commons Collections MapUtils class and we looked at various utility methods and decorators that can simplify various common map operations.

As usual, the code is available over on GitHub.

Microbenchmarking with the Java Microbenchmark Harness

1. Introduction

This quick article is focused on JMH (the Java Microbenchmark Harness), a benchmarking toolkit developed as part of the OpenJDK project.

Simply put, JMH takes care of things like JVM warm-up and code-optimization paths, making benchmarking as simple as possible.

2. Getting Started

To get started, we can actually keep working with Java 8 and simply define the dependencies:

<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.19</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.19</version>
</dependency>

The latest versions of the JMH Core and JMH Annotation Processor can be found in Maven Central.

Next, create a simple benchmark by utilizing @Benchmark annotation (in any public class):

@Benchmark
public void init() {
    // Do nothing
}

Then we add the main class that starts the benchmarking process:

public class BenchmarkRunner {
    public static void main(String[] args) throws Exception {
        org.openjdk.jmh.Main.main(args);
    }
}

Now running BenchmarkRunner will execute our arguably somewhat useless benchmark. Once the run is complete, a summary table is presented:

# Run complete. Total time: 00:06:45
Benchmark      Mode  Cnt Score            Error        Units
BenchMark.init thrpt 200 3099210741.962 ± 17510507.589 ops/s

3. Types of Benchmarks

JMH supports several benchmark modes: Throughput, AverageTime, SampleTime, and SingleShotTime. These can be configured via the @BenchmarkMode annotation:

@Benchmark
@BenchmarkMode(Mode.AverageTime)
public void init() {
    // Do nothing
}

The resulting table will have an average time metric (instead of throughput):

# Run complete. Total time: 00:00:40
Benchmark       Mode  Cnt   Score   Error  Units
BenchMark.init  avgt   20  ≈ 10⁻⁹          s/op

4. Configuring Warmup and Execution

By using the @Fork annotation, we can set up how benchmark execution happens: the value parameter controls how many times the benchmark will be executed, and the warmups parameter controls how many times a benchmark will dry run before results are collected, for example:

@Benchmark
@Fork(value = 1, warmups = 2)
@BenchmarkMode(Mode.Throughput)
public void init() {
    // Do nothing
}

This instructs JMH to run two warm-up forks and discard their results before moving on to the real timed benchmarking.

Also, the @Warmup annotation can be used to control the number of warmup iterations. For example, @Warmup(iterations = 5) tells JMH that five warm-up iterations will suffice, as opposed to the default 20.
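
Combining the two annotations, a benchmark that forks once, performs two warm-up forks, and runs five warm-up iterations per fork would look like this:

@Benchmark
@Fork(value = 1, warmups = 2)
@Warmup(iterations = 5)
@BenchmarkMode(Mode.Throughput)
public void init() {
    // Do nothing
}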

5. State

Let’s now examine how a less trivial and more indicative task – benchmarking a hashing algorithm – can be performed by utilizing State. Suppose we decide to add extra protection from dictionary attacks on a password database by hashing the password a few hundred times.

We can explore performance impact by using a State object:

@State(Scope.Benchmark)
public class ExecutionPlan {

    @Param({ "100", "200", "300", "500", "1000" })
    public int iterations;

    public Hasher murmur3;

    public String password = "4v3rys3kur3p455w0rd";

    @Setup(Level.Invocation)
    public void setUp() {
        murmur3 = Hashing.murmur3_128().newHasher();
    }
}

Our benchmark method then will look like:

@Fork(value = 1, warmups = 1)
@Benchmark
@BenchmarkMode(Mode.Throughput)
public void benchMurmur3_128(ExecutionPlan plan) {

    for (int i = plan.iterations; i > 0; i--) {
        plan.murmur3.putString(plan.password, Charset.defaultCharset());
    }

    plan.murmur3.hash();
}

Here, the iterations field will be populated with the appropriate values from the @Param annotation by JMH when it is passed to the benchmark method. The @Setup-annotated method is invoked before each invocation of the benchmark and creates a new Hasher, ensuring isolation.

When the execution is finished, we’ll get a result similar to the one below:

# Run complete. Total time: 00:06:47

Benchmark                   (iterations)   Mode  Cnt      Score      Error  Units
BenchMark.benchMurmur3_128           100  thrpt   20  92463.622 ± 1672.227  ops/s
BenchMark.benchMurmur3_128           200  thrpt   20  39737.532 ± 5294.200  ops/s
BenchMark.benchMurmur3_128           300  thrpt   20  30381.144 ±  614.500  ops/s
BenchMark.benchMurmur3_128           500  thrpt   20  18315.211 ±  222.534  ops/s
BenchMark.benchMurmur3_128          1000  thrpt   20   8960.008 ±  658.524  ops/s

6. Conclusion

This tutorial focused on and showcased JMH, the Java Microbenchmark Harness.

As always, code examples can be found on GitHub.
