Channel: Baeldung

Java Weekly, Issue 183


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> What’s new in JPA 2.2 – Java 8 Date and Time Types [vladmihalcea.com]

JPA 2.2 finally has support for java.time.

>> Oracle Defends the Java Module System [infoq.com]

Oracle officially responds to the JPMS controversy; the recent vote passed unanimously.

>> Kotlin’s hidden costs – Benchmarks [sites.google.com]

Kotlin does have some additional overhead over core Java but surprisingly, some results are actually better than Java alternatives.

Also worth reading:

Time to upgrade:

2. Technical

>> Get the Most out of Git Aliases [blog.codecentric.de]

Leveraging aliases in Git can drastically increase productivity 🙂

>> Getting Started with Contract Tests [blog.thecodewhisperer.com]

Long gone are the days when it was enough to write only a couple of types of tests. The testing ecosystem is now a lot more mature and fleshed out.

This is a good place to start understanding and getting into contract-testing.

Also worth reading:

3. Musings

>> Exploring the Tech Debt In Your Codebase [daedtech.com]

Sitting down and calculating the technical debt of your codebase is a very worthwhile exercise to get some meaningful insight into the actual condition of the project.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Don’t hold back anything [dilbert.com]

>> Did you just forget to do it? [dilbert.com]

>> Describe our company culture [dilbert.com]

5. Pick of the Week

>> Goal Setting: A Scientific Guide to Setting and Achieving Goals [jamesclear.com]


Introduction to Kotlin Coroutines


1. Overview

In this article, we’ll be looking at coroutines from the Kotlin language. Simply put, coroutines allow us to create asynchronous programs in a very fluent way, and they’re based on the concept of Continuation-passing style programming.

The Kotlin language gives us only basic constructs, but we can get access to more useful coroutines with the kotlinx-coroutines-core library. We’ll be looking at this library once we understand the basic building blocks of the Kotlin language.

2. Creating a Coroutine with buildSequence

Let’s create a first coroutine using the buildSequence function.

And let’s implement a Fibonacci sequence generator using this function:

val fibonacciSeq = buildSequence {
    var a = 0
    var b = 1

    yield(1)

    while (true) {
        yield(a + b)

        val tmp = a + b
        a = b
        b = tmp
    }
}

The signature of the yield function is:

public abstract suspend fun yield(value: T)

The suspend keyword means that this function can be blocking. Such a function can suspend the buildSequence coroutine.

Suspending functions can be created as standard Kotlin functions, but we need to be aware that we can only call them from within a coroutine. Otherwise, we’ll get a compiler error.

If we have a suspending call within buildSequence, that call will be transformed into a dedicated state in the underlying state machine. A coroutine can be passed around and assigned to a variable like any other function.

In the fibonacciSeq coroutine, we have two suspension points: first, when we call yield(1), and second, when we call yield(a + b).

If that yield function results in some blocking call, the current thread will not block on it. It will be able to execute some other code. Once the suspended function finishes its execution, the thread can resume execution of the fibonacciSeq coroutine.

We can test our code by taking some elements from the Fibonacci sequence:

val res = fibonacciSeq
  .take(5)
  .toList()

assertEquals(res, listOf(1, 1, 2, 3, 5))

3. Adding the Maven Dependency for kotlinx-coroutines

Let’s look at the kotlinx-coroutines library, which has useful constructs built on top of basic coroutines.

Let’s add the dependency to the kotlinx-coroutines-core library. Note that we also need to add the jcenter repository:

<dependency>
    <groupId>org.jetbrains.kotlinx</groupId>
    <artifactId>kotlinx-coroutines-core</artifactId>
    <version>0.16</version>
</dependency>

<repositories>
    <repository>
        <id>jcenter</id>
        <url>http://jcenter.bintray.com</url>
     </repository>
</repositories>

4. Asynchronous Programming Using the launch() Coroutine

The kotlinx-coroutines library adds a lot of useful constructs that allow us to create asynchronous programs. Let’s say that we have an expensive computation function that is appending a String to the input list:

suspend fun expensiveComputation(res: MutableList<String>) {
    delay(1000L)
    res.add("word!")
}

We can use a launch coroutine that will execute that suspend function in a non-blocking way – we need to pass a thread pool as an argument to it.

The launch function returns a Job instance, on which we can call the join() method to wait for the results:

@Test
fun givenAsyncCoroutine_whenStartIt_thenShouldExecuteItInTheAsyncWay() {
    // given
    val res = mutableListOf<String>()

    // when
    runBlocking<Unit> {
        val promise = launch(CommonPool) { 
          expensiveComputation(res) 
        }
        res.add("Hello,")
        promise.join()
    }

    // then
    assertEquals(res, listOf("Hello,", "word!"))
}

To be able to test our code, we pass all the logic into the runBlocking coroutine, which is a blocking call. Therefore, our assertEquals() executes synchronously after the code inside the runBlocking() block.

Note that in this example, although the launch() method is triggered first, it is a delayed computation. The main thread will proceed by appending the “Hello,” String to the result list.

After the one second delay that is introduced in the expensiveComputation() function, the “word!” String will be appended to the result.

5. Coroutines are Very Lightweight

Let’s imagine a situation in which we want to perform 100,000 operations asynchronously. Spawning such a high number of threads would be very costly and could even result in an OutOfMemoryError.

Fortunately, when using coroutines, this is not the case. We can execute as many blocking operations as we want; under the hood, those operations will be handled by a fixed number of threads without excessive thread creation:

@Test
fun givenHugeAmountOfCoroutines_whenStartIt_thenShouldExecuteItWithoutOutOfMemory() {
    runBlocking<Unit> {
        // given
        val counter = AtomicInteger(0)
        val numberOfCoroutines = 100_000

        // when
        val jobs = List(numberOfCoroutines) {
            launch(CommonPool) {
                delay(1000L)
                counter.incrementAndGet()
            }
        }
        jobs.forEach { it.join() }

        // then
        assertEquals(counter.get(), numberOfCoroutines)
    }
}

Note that we’re executing 100,000 coroutines, each of which runs with a substantial delay. Nevertheless, there is no need to create too many threads, because those operations are executed asynchronously using threads from the CommonPool.

6. Cancellation and Timeouts

Sometimes, after we have triggered some long-running asynchronous computation, we want to cancel it because we’re no longer interested in the result.

When we start our asynchronous action with the launch() coroutine, we can examine the isActive flag. This flag is set to false whenever the main thread invokes the cancel() method on the instance of the Job:

@Test
fun givenCancellableJob_whenRequestForCancel_thenShouldQuit() {
    runBlocking<Unit> {
        // given
        val job = launch(CommonPool) {
            while (isActive) {
                println("is working")
            }
        }

        delay(1300L)

        // when
        job.cancel()

        // then cancel successfully

    }
}

This is a very elegant and easy way to use the cancellation mechanism. In the asynchronous action, we only need to check the isActive flag and stop our processing once it becomes false.

When we’re requesting some processing and are not sure how much time that computation will take, it’s advisable to set the timeout on such an action. If the processing does not finish within the given timeout, we’ll get an exception, and we can react to it appropriately.

For example, we can retry the action:

@Test(expected = CancellationException::class)
fun givenAsyncAction_whenDeclareTimeout_thenShouldFinishWhenTimedOut() {
    runBlocking<Unit> {
        withTimeout(1300L) {
            repeat(1000) { i ->
                println("Some expensive computation $i ...")
                delay(500L)
            }
        }
    }
}

If we do not define a timeout, it’s possible that our thread will be blocked forever because that computation will hang. We cannot handle that case in our code if the timeout is not defined.

7. Running Asynchronous Actions Concurrently

Let’s say that we need to start two asynchronous actions concurrently and wait for their results afterward. If our processing takes one second and we need to execute that processing twice, the runtime of synchronous blocking execution will be two seconds.

It would be better if we could run both those actions in separate threads and wait for those results in the main thread.

We can leverage the async() coroutine to achieve this by starting processing in two separate threads concurrently:

@Test
fun givenHaveTwoExpensiveAction_whenExecuteThemAsync_thenTheyShouldRunConcurrently() {
    runBlocking<Unit> {
        val delay = 1000L
        val time = measureTimeMillis {
            // given
            val one = async(CommonPool) { 
                someExpensiveComputation(delay) 
            }
            val two = async(CommonPool) { 
                someExpensiveComputation(delay) 
            }

            // when
            runBlocking {
                one.await()
                two.await()
            }
        }

        // then
        assertTrue(time < delay * 2)
    }
}

After we submit the two expensive computations, we suspend the coroutine by executing the runBlocking() call. Once results one and two are available, the coroutine will resume, and the results are returned. Executing two tasks in this way should take around one second.

We can pass CoroutineStart.LAZY as the second argument to the async() method, but this will mean the asynchronous computation will not be started until requested. Because we are requesting computation in the runBlocking coroutine, it means the call to two.await() will be made only once the one.await() has finished:

@Test
fun givenTwoExpensiveAction_whenExecuteThemLazy_thenTheyShouldNotConcurrently() {
    runBlocking<Unit> {
        val delay = 1000L
        val time = measureTimeMillis {
            // given
            val one 
              = async(CommonPool, CoroutineStart.LAZY) {
                someExpensiveComputation(delay) 
              }
            val two 
              = async(CommonPool, CoroutineStart.LAZY) { 
                someExpensiveComputation(delay) 
            }

            // when
            runBlocking {
                one.await()
                two.await()
            }
        }

        // then
        assertTrue(time > delay * 2)
    }
}

The laziness of the execution in this particular example causes our code to run synchronously. That happens because when we call await(), the main thread is blocked, and only after task one finishes will task two be triggered.

We need to be careful when performing asynchronous actions in a lazy way, as they may end up running in a blocking, sequential manner.

8. Conclusion

In this article, we looked at the basics of Kotlin coroutines.

We saw that buildSequence is the main building block of every coroutine. We described how the flow of execution in this Continuation-passing programming style looks.

Finally, we looked at the kotlinx-coroutines library that ships a lot of very useful constructs for creating asynchronous programs.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Introduction to Apache Commons Text


1. Overview

Simply put, the Apache Commons Text library contains a number of useful utility methods for working with Strings, beyond what core Java offers.

In this quick introduction, we’ll see what Apache Commons Text is, and what it is used for, as well as some practical examples of using the library.

2. Maven Dependency

Let’s start by adding the following Maven dependency to our pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-text</artifactId>
    <version>1.1</version>
</dependency>

You can find the latest version of the library at the Maven Central Repository.

3. Package Structure

The root package org.apache.commons.text is divided into different sub-packages:

  • org.apache.commons.text.diff – diffs between Strings
  • org.apache.commons.text.similarity – similarities and distances between Strings
  • org.apache.commons.text.translate – translating text

Let’s see what each package can be used for – in more detail.

4. Handling Text

The org.apache.commons.text package contains multiple tools for working with Strings.

For instance, WordUtils has APIs capable of capitalizing the first letter of each word in a String, swapping the case of a String, and checking if a String contains all words in a given array.

Let’s see how we can capitalize the first letter of each word in a String:

@Test
public void whenCapitalized_thenCorrect() {
    String toBeCapitalized = "to be capitalized!";
    String result = WordUtils.capitalize(toBeCapitalized);
    
    assertEquals("To Be Capitalized!", result);
}

Here is how we can check if a string contains all words in an array:

@Test
public void whenContainsWords_thenCorrect() {
    boolean containsWords = WordUtils
      .containsAllWords("String to search", "to", "search");
    
    assertTrue(containsWords);
}

StrSubstitutor provides a convenient way of building Strings from templates:

@Test
public void whenSubstituted_thenCorrect() {
    Map<String, String> substitutes = new HashMap<>();
    substitutes.put("name", "John");
    substitutes.put("college", "University of Stanford");
    String templateString = "My name is ${name} and I am a student at the ${college}.";
    StrSubstitutor sub = new StrSubstitutor(substitutes);
    String result = sub.replace(templateString);
    
    assertEquals("My name is John and I am a student at the University of Stanford.", result);
}

StrBuilder is an alternative to java.lang.StringBuilder. It provides some new features that StringBuilder does not.

For example, we can replace all occurrences of a String in another String or clear a String without assigning a new object to its reference.

Here’s a quick example to replace part of a String:

@Test
public void whenReplaced_thenCorrect() {
    StrBuilder strBuilder = new StrBuilder("example StrBuilder!");
    strBuilder.replaceAll("example", "new");
   
    assertEquals(new StrBuilder("new StrBuilder!"), strBuilder);
}

To clear a String, we can simply do that by calling the clear() method on the builder:

strBuilder.clear();

5. Calculating the Diff between Strings

The package org.apache.commons.text.diff implements the Myers algorithm for calculating diffs between two Strings.

The diff between two Strings is defined by a sequence of modifications that can convert one String to another.

There are three types of commands that can be used to convert a String to another – InsertCommand, KeepCommand, and DeleteCommand. 

An EditScript object holds the script that should be run in order to convert a String to another. Let’s calculate the number of single-char modifications that should be made in order to convert a String to another:

@Test
public void whenEditScript_thenCorrect() {
    StringsComparator cmp = new StringsComparator("ABCFGH", "BCDEFG");
    EditScript<Character> script = cmp.getScript();
    int mod = script.getModifications();
    
    assertEquals(4, mod);
}

6. Similarities and Distances between Strings

The org.apache.commons.text.similarity package contains algorithms useful for finding similarities and distances between Strings.

For example, LongestCommonSubsequence can be used to find the number of common characters in two Strings:

@Test
public void whenCompare_thenCorrect() {
    LongestCommonSubsequence lcs = new LongestCommonSubsequence();
    int countLcs = lcs.apply("New York", "New Hampshire");
    
    assertEquals(5, countLcs);
}

Similarly, LongestCommonSubsequenceDistance can be used to find the number of different characters in two Strings:

@Test
public void whenCalculateDistance_thenCorrect() {
    LongestCommonSubsequenceDistance lcsd = new LongestCommonSubsequenceDistance();
    int countLcsd = lcsd.apply("New York", "New Hampshire");
    
    assertEquals(11, countLcsd);
}
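For intuition, the length that LongestCommonSubsequence computes can be reproduced with the classic dynamic-programming recurrence, and the distance is then simply len(a) + len(b) - 2 * lcs. The following plain-Java sketch is our own helper for illustration, not a Commons Text API:

```java
public class LcsSketch {

    // Classic O(n*m) dynamic-programming computation of the LCS length:
    // dp[i][j] holds the LCS length of the first i chars of a and first j chars of b
    static int lcsLength(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                dp[i][j] = a.charAt(i - 1) == b.charAt(j - 1)
                  ? dp[i - 1][j - 1] + 1
                  : Math.max(dp[i - 1][j], dp[i][j - 1]);
            }
        }
        return dp[a.length()][b.length()];
    }

    public static void main(String[] args) {
        int lcs = lcsLength("New York", "New Hampshire");
        System.out.println(lcs); // 5 ("New " plus the shared 'r')

        // distance = characters not part of the common subsequence
        System.out.println("New York".length() + "New Hampshire".length() - 2 * lcs); // 11
    }
}
```

This matches the library's results above: 5 common characters and a distance of 11.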

7. Text Translation

The org.apache.commons.text.translate package was initially created to allow us to customize the rules provided by StringEscapeUtils.

The package has a set of classes responsible for translating text into different character-escaping models, such as Unicode escapes and Numeric Character References. We can also create our own customized routines for translation.

Let’s see how we can convert a String to its equivalent Unicode text:

@Test
public void whenTranslate_thenCorrect() {
    UnicodeEscaper ue = UnicodeEscaper.above(0);
    String result = ue.translate("ABCD");
    
    assertEquals("\\u0041\\u0042\\u0043\\u0044", result);
}

Here, the argument to the above() method is the code point above which characters should be escaped.

LookupTranslator enables us to define our own lookup table where each character can have a corresponding value, and we can translate any text to its corresponding equivalent.
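To illustrate the lookup-table idea without the library, here is a hypothetical plain-Java sketch of the same mechanism; the translate helper below is our own, not a Commons Text API:

```java
import java.util.Map;

public class LookupSketch {

    // Scan the input; whenever a key from the table matches at the current
    // position, emit its replacement value, otherwise copy the character as-is.
    static String translate(String input, Map<String, String> table) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        outer:
        while (i < input.length()) {
            for (Map.Entry<String, String> e : table.entrySet()) {
                if (input.startsWith(e.getKey(), i)) {
                    out.append(e.getValue());
                    i += e.getKey().length();
                    continue outer;
                }
            }
            out.append(input.charAt(i++));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(translate("a&b", Map.of("&", "&amp;"))); // a&amp;b
    }
}
```

Commons Text's LookupTranslator packages this same idea behind a reusable, composable API.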

8. Conclusion

In this quick tutorial, we’ve seen an overview of what Apache Commons Text is all about and some of its common features.

The code samples can be found over on GitHub.

BigDecimal and BigInteger in Java


1. Overview

This is an introductory article on the BigDecimal & the BigInteger data types in the Java programming language.

In this write-up, we’ll show in what scenarios we can use these data types.

2. BigDecimal

BigDecimal is the preferred data type for code involving money and financial transactions, as well as for situations where precision is a must and we require granular control over how our calculations are rounded.

BigDecimal is an immutable data type which provides us with two essential features:

  1. The ability to specify the scale: the number of digits to the right of the decimal point.
  2. Control over rounding, via a configurable rounding mode.

These features cater to the custom requirements of developers writing code that deals with money.

A BigDecimal object can be created like so:

BigDecimal bigDecimal = new BigDecimal("81.9065");

or:

BigDecimal bigDecimal = BigDecimal.valueOf(81.9065);

An enum RoundingMode provides several rounding modes for the BigDecimal type:

  • CEILING: Rounding mode to round towards positive infinity
  • DOWN: Rounding mode to round towards zero
  • FLOOR: Rounding mode to round towards negative infinity
  • HALF_DOWN: Rounding mode to round towards “nearest neighbor” unless both neighbors are equidistant, in which case round down
  • HALF_EVEN: Rounding mode to round towards the “nearest neighbor” unless both neighbors are equidistant, in which case, round towards the even neighbor
  • HALF_UP: Rounding mode to round towards “nearest neighbor” unless both neighbors are equidistant, in which case round up
  • UNNECESSARY: Rounding mode to assert that the requested operation has an exact result, hence no rounding is necessary
  • UP: Rounding mode to round away from zero
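To make the tie-breaking behavior concrete, here is a minimal JDK-only snippet comparing a few of these modes on the value 2.5 (the class name is just for illustration):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingModesDemo {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("2.5");

        // HALF_UP breaks the tie away from zero; HALF_EVEN rounds to the even neighbor
        System.out.println(value.setScale(0, RoundingMode.HALF_UP));   // 3
        System.out.println(value.setScale(0, RoundingMode.HALF_EVEN)); // 2

        // FLOOR always rounds towards negative infinity, CEILING towards positive
        System.out.println(new BigDecimal("-2.5").setScale(0, RoundingMode.FLOOR));   // -3
        System.out.println(new BigDecimal("-2.5").setScale(0, RoundingMode.CEILING)); // -2
    }
}
```

HALF_EVEN (also known as banker's rounding) is often preferred for money because it does not systematically bias sums upward.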

Let’s go through a scenario where we set the scale and round off tax values by providing a RoundingMode parameter. Here we have two BigDecimal objects: serviceTax, with an initial value of 56.0084578639, and entertainmentTax, with an initial value of 23.00689.

Since our precision for the currency is two digits after the decimal point, we pass 2 as the first argument to the setScale method, which sets the scale for the tax amounts. Along with the scale argument, a rounding-mode argument is provided, which specifies the rule for rounding off the last digit:

BigDecimal serviceTax = new BigDecimal("56.0084578639");
serviceTax = serviceTax.setScale(2, RoundingMode.CEILING);

BigDecimal entertainmentTax = new BigDecimal("23.00689");
entertainmentTax = entertainmentTax.setScale(2,RoundingMode.FLOOR);

On printing the serviceTax and entertainmentTax values, we see:

serviceTax: 56.01
entertainmentTax: 23.00

The serviceTax amount, which was initially 56.0084578639, has been rounded to 56.01 as per the arguments in the setScale method, and the entertainmentTax, which was originally 23.00689, has been rounded to 23.00.

The core mathematical operators +, -, * and / are not overloaded in the BigDecimal class. It has methods like add(), multiply() and divide() to perform math operations. So, if we want to sum up serviceTax and entertainmentTax to get the totalTax, we have to use the add() method of the BigDecimal class:

BigDecimal totalTax = serviceTax.add(entertainmentTax);

The totalTax value we get is:

totalTax: 79.01
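One caveat worth noting when moving beyond add(): divide() throws an ArithmeticException when the exact quotient has a non-terminating decimal expansion, so a scale and rounding mode should be supplied. A minimal sketch (class name ours):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivisionDemo {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");

        // one.divide(three) alone would throw ArithmeticException,
        // because 1/3 has a non-terminating decimal expansion.
        BigDecimal third = one.divide(three, 2, RoundingMode.HALF_UP);
        System.out.println(third); // 0.33
    }
}
```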

This pretty much covers the fundamentals of the BigDecimal data type. To learn further about the API, do visit the BigDecimal doc here.

3. BigInteger

The BigInteger class in Java is used in mathematical operations where the integer value is so large that using the int data type would cause an overflow.

BigInteger is used in scenarios where an int just won’t work. For instance, the factorial of 50 comes out to 30414093201713378043612608166064768844377641568960512000000000000. This value is too big for an int (or even a long) to handle and can only be stored in a BigInteger variable.
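We can verify that value directly, since a factorial is just a chain of multiply() calls (the helper below is ours, for illustration):

```java
import java.math.BigInteger;

public class FactorialDemo {

    // Multiply 1 * 2 * ... * n using BigInteger, which never overflows
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        // 50! is far beyond the range of int or long
        System.out.println(factorial(50));
        // 30414093201713378043612608166064768844377641568960512000000000000
    }
}
```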

Let’s assume a large numeric value to process, 8731409320171337804361260816606476. Let’s say this number represents the count of stars in our galaxy.

We will now check if this value can be stored in an int variable:

int numStars = 8731409320171337804361260816606476;

As soon as we do this, the compiler complains about it:

“The literal 8731409320171337804361260816606476 of type int is out of range”.

This means the int variable cannot store such a big number. To deal with these scenarios, Java has the BigInteger data type which facilitates storage of such large values.

A BigInteger object can be created as below:

BigInteger numStars = new BigInteger("8731409320171337804361260816606476");

Just like BigDecimal, BigInteger is an immutable data type. The basic mathematical operators +, -, * and / are not overloaded, so we have to use methods like add(), multiply() and divide() to perform math operations.

As assumed above, the number of stars in our galaxy, the Milky Way, is 8731409320171337804361260816606476. Suppose there is another galaxy, Andromeda, which has 5379309320171337804361260816606476 stars in it. Let’s add up the stars of these two galaxies to find the total number of stars:

BigInteger numStarsMilkyWay = new BigInteger("8731409320171337804361260816606476");
BigInteger numStarsAndromeda = new BigInteger("5379309320171337804361260816606476");
BigInteger totalStars = numStarsMilkyWay.add(numStarsAndromeda);

On printing the variable totalStars:

totalStars: 14110718640342675608722521633212952

This sums up the basics of the BigInteger class. Please refer to the doc if you need to dig in further.

4. Conclusion

This quick tutorial demonstrated the scenarios where the BigDecimal and BigInteger data types come in handy.

Please find the code for the tutorial over on GitHub.

Apache Commons BeanUtils


1. Overview

Apache Commons BeanUtils contains all the tools necessary for working with Java beans.

Simply put, a bean is a simple Java class containing fields, getters/setters, and a no-argument constructor.

Java provides reflection and introspection capabilities to identify getter/setter methods and call them dynamically. However, these APIs can be difficult to learn and may require developers to write boilerplate code to perform the simplest operations.
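As an illustration of that boilerplate, here is what setting a single property looks like with only the JDK's java.beans introspection API; the Student class is a stand-in for comparison with the BeanUtils one-liners below:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class RawIntrospection {

    public static class Student {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        Student student = new Student();

        // Walk all property descriptors, find "name", and invoke its setter reflectively
        for (PropertyDescriptor pd : Introspector.getBeanInfo(Student.class).getPropertyDescriptors()) {
            if ("name".equals(pd.getName())) {
                pd.getWriteMethod().invoke(student, "Joe");
            }
        }
        System.out.println(student.getName()); // Joe
    }
}
```

BeanUtils collapses this loop-and-invoke dance into a single call such as PropertyUtils.setSimpleProperty, as we will see next.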

2. Maven Dependencies

Here is the Maven dependency that needs to be included in the POM file before using the library:

<dependency>
    <groupId>commons-beanutils</groupId>
    <artifactId>commons-beanutils</artifactId>
    <version>1.9.3</version>
</dependency>

The newest version can be found here.

3. Creating a Java Bean

Let’s create two bean classes Course and Student with typical getter and setter methods.

public class Course {
    private String name;
    private List<String> codes;
    private Map<String, Student> enrolledStudent = new HashMap<>();

    //  standard getters/setters
}

public class Student {
    private String name;

    //  standard getters/setters
}

We have a Course class that has a course name, course codes, and multiple enrolled students. Enrolled students are identified by a unique enrollment ID. The Course class maintains enrolled students in a Map, where the enrollment ID is the key and the student object is the value.

4. Property Access

Bean properties can be divided into three categories.

4.1. Simple Property

Single-value properties are also called simple or scalar.

Their value might be a primitive (such as int, float) or complex type objects. BeanUtils has a PropertyUtils class that allows us to modify simple properties in a Java Bean.

Here is the example code to set the properties:

Course course = new Course();
String name = "Computer Science";
List<String> codes = Arrays.asList("CS", "CS01");

PropertyUtils.setSimpleProperty(course, "name", name);
PropertyUtils.setSimpleProperty(course, "codes", codes);

4.2. Indexed Property

Indexed properties have a collection as a value, whose elements can be individually accessed using an index number. As an extension to the JavaBeans specification, BeanUtils considers values of type java.util.List to be indexed as well.

We can modify an individual value of an indexed property using PropertyUtils’ setIndexedProperty method.

Here is example code modifying an indexed property:

PropertyUtils.setIndexedProperty(course, "codes[1]", "CS02");

4.3. Mapped Property

Any property that has a java.util.Map as the underlying type is called a mapped property. BeanUtils allows us to update the individual value in a map using a String-valued key.

Here is the example code to modify the value in a mapped property:

Student student = new Student();
String studentName = "Joe";
student.setName(studentName);

PropertyUtils.setMappedProperty(course, "enrolledStudent(ST-1)", student);

5. Nested Property Access

If a property value is an object and we need to access a property value inside that object, we are accessing a nested property. PropertyUtils allows us to access and modify nested properties as well.

Assume we want to access the name property of Student class through Course object. We might write:

String name = course.getEnrolledStudent("ST-1").getName();

We can access the nested property values using getNestedProperty and modify the nested property using setNestedProperty methods in PropertyUtils. Here is the code:

Student student = new Student();
String studentName = "Joe";
student.setName(studentName);

String nameValue 
  = (String) PropertyUtils.getNestedProperty(
  course, "enrolledStudent(ST-1).name");

6. Copy Bean Properties

Copying the properties of one object to another is often tedious and error-prone for developers. The BeanUtils class provides a copyProperties method that copies the properties of the source object to the target object wherever the property name is the same in both objects.

Let’s create another bean class like the Course class above, with the same properties, except that it will not have an enrolledStudent property; instead, that property will be named students. Let’s name the class CourseEntity. The class would look like:

public class CourseEntity {
    private String name;
    private List<String> codes;
    private Map<String, Student> students = new HashMap<>();

    //  standard getters/setters
}

Now we will copy the properties of Course object to CourseEntity object:

Course course = new Course();
course.setName("Computer Science");
course.setCodes(Arrays.asList("CS"));
course.setEnrolledStudent("ST-1", new Student());

CourseEntity courseEntity = new CourseEntity();
BeanUtils.copyProperties(course, courseEntity);

Remember, this will copy only the properties with the same name. Therefore, it will not copy the enrolledStudent property of the Course class, because there is no property with the same name in the CourseEntity class.

7. Conclusion

In this quick article, we went over the utility classes provided by BeanUtils. We also looked into the different types of properties and how we can access and modify their values.

Finally, we looked into accessing nested property values and copying properties of one object to another object.

Of course, the reflection and introspection capabilities in the Java SDK also allow us to access properties dynamically, but they can be difficult to learn and require some boilerplate code. BeanUtils lets us access and modify these values with a single method call.

Code snippets can be found over on GitHub.

Spring 5 WebClient


1. Overview

In this article, we’re going to show the WebClient – a reactive web client that’s being introduced in Spring 5.

We’re going to have a look at the WebTestClient as well – which is a WebClient designed to be used in tests.

2. What Is the WebClient?

Simply put, WebClient is an interface representing the main entry point for performing web requests.

It has been created as a part of the Spring Web Reactive module and will be replacing the classic RestTemplate in these scenarios. The new client is a reactive, non-blocking solution that works over the HTTP/1.1 protocol.

Finally, the interface has a single implementation – the DefaultWebClient class – which we’ll be working with.

3. Dependencies

Since we are using a Spring Boot application, we need the spring-boot-starter-webflux dependency, as well as the Reactor project.

3.1. Building with Maven

Let’s add the following dependencies into the pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
    <groupId>org.projectreactor</groupId>
    <artifactId>reactor-spring</artifactId>
    <version>1.0.1.RELEASE</version>
</dependency>

3.2. Building with Gradle

With Gradle, we need to add the following entries to the build.gradle file:

dependencies {
    compile 'org.springframework.boot:spring-boot-starter-webflux'
    compile 'org.projectreactor:reactor-spring:1.0.1.RELEASE'
}

4. Working With the WebClient

To work properly with the client, we need to know how to:

  • create an instance
  • make a request
  • handle the response

4.1. Creating a WebClient Instance

There are three options to choose from. The first one is creating a WebClient object with default settings:

WebClient client1 = WebClient.create();

The second alternative allows initiating a WebClient instance with a given base URI:

WebClient client2 = WebClient.create("http://localhost:8080");

The last way (and the most advanced one) is building a client by using the DefaultWebClientBuilder class, which allows full customization:

WebClient client3 = WebClient
  .builder()
    .baseUrl("http://localhost:8080")
    .defaultCookie("cookieKey", "cookieValue")
    .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE) 
    .defaultUriVariables(Collections.singletonMap("url", "http://localhost:8080"))
  .build();

4.2. Preparing a Request

First, we need to specify an HTTP method of a request by invoking the method(HttpMethod method) or calling its shortcut methods such as get, post, delete:

WebClient.UriSpec<WebClient.RequestBodySpec> request1 = client3.method(HttpMethod.POST);
WebClient.UriSpec<WebClient.RequestBodySpec> request2 = client3.post();

The next move is to provide a URL. We can pass it to the uri API – as a String or a java.net.URL instance:

WebClient.RequestBodySpec uri1 = client3
  .method(HttpMethod.POST)
  .uri("/resource");

WebClient.RequestBodySpec uri2 = client3
  .post()
  .uri(URI.create("/resource"));

Moving on, we can set a request body, content type, length, cookies or headers – if we need to.

For example, if we want to set a request body – there are two available ways – filling it with a BodyInserter or delegating this work to a Publisher:

WebClient.RequestHeadersSpec requestSpec1 = WebClient
  .create()
  .method(HttpMethod.POST)
  .uri("/resource")
  .body(BodyInserters.fromPublisher(Mono.just("data")), String.class);

WebClient.RequestHeadersSpec<?> requestSpec2 = WebClient
  .create("http://localhost:8080")
  .post()
  .uri(URI.create("/resource"))
  .body(BodyInserters.fromObject("data"));

The BodyInserter is an interface responsible for populating a ReactiveHttpOutputMessage body with a given output message and a context used during the insertion. A Publisher is a reactive component that is in charge of providing a potentially unbounded number of sequenced elements.

The second way is the body method, which is a shortcut for the original body(BodyInserter inserter) method.

To alleviate the process of filling in a BodyInserter, there is a BodyInserters class with a bunch of useful utility methods:

BodyInserter<Publisher<String>, ReactiveHttpOutputMessage> inserter1 = BodyInserters
  .fromPublisher(Subscriber::onComplete, String.class);

It is also possible with a MultiValueMap:

LinkedMultiValueMap map = new LinkedMultiValueMap();

map.add("key1", "value1");
map.add("key2", "value2");

BodyInserter<MultiValueMap, ClientHttpRequest> inserter2
 = BodyInserters.fromMultipartData(map);

Or by using a single object:

BodyInserter<Object, ReactiveHttpOutputMessage> inserter3
 = BodyInserters.fromObject(new Object());

After we set the body, we can set headers, cookies, and acceptable media types. These values will be added to those that were set when instantiating the client.

Also, there is additional support for the most commonly used headers like “If-None-Match”, “If-Modified-Since”, “Accept”, “Accept-Charset”.

Here’s an example of how these values can be used:

WebClient.ResponseSpec response1 = uri1
  .body(inserter3)
    .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
    .accept(MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML)
    .acceptCharset(Charset.forName("UTF-8"))
    .ifNoneMatch("*")
    .ifModifiedSince(ZonedDateTime.now())
  .retrieve();

4.3. Getting a Response

The final stage is sending the request and receiving a response. This can be done with either the exchange or the retrieve methods.

They differ in return types; the exchange method provides a ClientResponse along with its status and headers, while the retrieve method is the shortest path to fetching a body directly:

String response2 = request1.exchange()
  .block()
  .bodyToMono(String.class)
  .block();
String response3 = request2
  .retrieve()
  .bodyToMono(String.class)
  .block();

Pay attention to the bodyToMono method, which will throw a WebClientException if the status code is 4xx (client error) or 5xx (server error). We used the block method on the Monos to subscribe and retrieve the actual data that was sent with the response.

5. Working With the WebTestClient

The WebTestClient is the main entry point for testing WebFlux server endpoints. It has a very similar API to the WebClient, and it delegates most of the work to an internal WebClient instance, focusing mainly on providing a test context. The DefaultWebTestClient class is the single implementation of the interface.

The client for testing can be bound to a real server or work with specific controllers or functions. To complete end-to-end integration tests with actual requests to a running server, we can use the bindToServer method:

WebTestClient testClient = WebTestClient
  .bindToServer()
  .baseUrl("http://localhost:8080")
  .build();

We can test a particular RouterFunction by passing it to the bindToRouterFunction method:

RouterFunction function = RouterFunctions.route(
  RequestPredicates.GET("/resource"),
  request -> ServerResponse.ok().build()
);

WebTestClient
  .bindToRouterFunction(function)
  .build().get().uri("/resource")
  .exchange()
  .expectStatus().isOk()
  .expectBody().isEmpty();

The same behavior can be achieved with the bindToWebHandler method which takes a WebHandler instance:

WebHandler handler = exchange -> Mono.empty();
WebTestClient.bindToWebHandler(handler).build();

A more interesting situation occurs when we’re using the bindToApplicationContext method. It takes an ApplicationContext, analyses the context for controller beans and @EnableWebFlux configurations.

If we inject an instance of the ApplicationContext, a simple code snippet may look like this:

@Autowired
private ApplicationContext context;

WebTestClient testClient = WebTestClient.bindToApplicationContext(context)
  .build();

A shorter approach would be providing an array of controllers we want to test to the bindToController method. Assuming we’ve got a Controller class and we’ve injected it into a needed class, we can write:

@Autowired
private Controller controller;

WebTestClient testClient = WebTestClient.bindToController(controller).build();

After building a WebTestClient object, all following operations in the chain are going to be similar to the WebClient up to the exchange method (one way to get a response), which provides the WebTestClient.ResponseSpec interface to work with useful methods like the expectStatus, expectBody, expectHeader:

WebTestClient
  .bindToServer()
    .baseUrl("http://localhost:8080")
    .build()
    .post()
    .uri("/resource")
  .exchange()
    .expectStatus().isCreated()
    .expectHeader().valueEquals("Content-Type", "application/json")
    .expectBody().isEmpty();

6. Conclusion

In this tutorial, we’ve considered a new enhanced Spring mechanism for making requests on the client side – the WebClient class.

Also, we have looked at the benefits it provides by going all the way through request processing.

All of the code snippets, mentioned in the article, can be found in our GitHub repository.

OAuth2 Remember Me with Refresh Token


1. Overview

In this article, we will add a “Remember Me” functionality to an OAuth 2 secured application, by leveraging the OAuth 2 Refresh Token.

This article is a continuation of our series on using OAuth 2 to secure a Spring REST API, which is accessed through an AngularJS Client. For setting up the Authorization Server, Resource Server, and front-end Client, you can follow the introductory article.

Then, you can continue with our article on handling the refresh token using a Zuul proxy.

2. OAuth 2 Access Token and Refresh Token

First, let’s do a quick recap on the OAuth 2 tokens and how they can be used.

On a first authentication attempt using the password grant type, the user needs to send a valid username and password, as well as the client id and secret. If the authentication request is successful, the server sends back a response of the form:

{
    "access_token": "2e17505e-1c34-4ea6-a901-40e49ba786fa",
    "token_type": "bearer",
    "refresh_token": "e5f19364-862d-4212-ad14-9d6275ab1a62",
    "expires_in": 59,
    "scope": "read write"
}

We can see the server response contains both an access token, as well as a refresh token. The access token will be used for subsequent API calls that require authentication, while the purpose of the refresh token is to obtain a new valid access token or just revoke the previous one.

To receive a new access token using the refresh_token grant type, the user no longer needs to enter their credentials, but only the client id, secret and of course the refresh token.

The goal of using two types of tokens is to enhance user security. Typically the access token has a shorter validity period, so that if an attacker obtains the access token, they have a limited time in which to use it. On the other hand, if the refresh token is compromised, it is useless on its own, as the client id and secret are also needed.

Another benefit of refresh tokens is that they allow revoking the access token, and not sending another one back, if the user displays unusual behavior such as logging in from a new IP.

3. Remember-Me Functionality with Refresh Tokens

Users usually find it useful to have the option to preserve their session, as they don’t need to enter their credentials every time they access the application.

Since the Access Token has a shorter validity time, we can instead make use of refresh tokens to generate new access tokens and avoid having to ask the user for their credentials every time an access token expires.

In the next sections, we’ll discuss two ways of implementing this functionality:

  • first, by intercepting any user request that returns a 401 status code, which means the access token is invalid. When this occurs, if the user has checked the “remember me” option, we’ll automatically issue a request for a new access token using refresh_token grant type, then execute the initial request again.
  • second, we can refresh the Access Token proactively – we’ll send a request to refresh the token a few seconds before it expires

The second option has the advantage that the user’s requests will not be delayed.

4. Storing the Refresh Token

In the previous article, we added a CustomPostZuulFilter which intercepts requests to the OAuth server, extracts the refresh token sent back on authentication, and stores it in a server-side cookie:

@Component
public class CustomPostZuulFilter extends ZuulFilter {

    @Override
    public Object run() {
        //...
        Cookie cookie = new Cookie("refreshToken", refreshToken);
        cookie.setHttpOnly(true);
        cookie.setPath(ctx.getRequest().getContextPath() + "/oauth/token");
        cookie.setMaxAge(2592000); // 30 days
        ctx.getResponse().addCookie(cookie);
        //...
    }
}

Next, let’s add a checkbox on our login form that has a data binding to the loginData.remember variable:

<input type="checkbox"  ng-model="loginData.remember" id="remember"/>
<label for="remember">Remember me</label>

Our login form will now display an additional checkbox.

The loginData object is sent with the authentication request, so it will include the remember parameter. Before the authentication request is sent, we will set a cookie named remember based on the parameter:

function obtainAccessToken(params){
    if (params.username != null){
        if (params.remember != null){
            $cookies.put("remember","yes");
        }
        else {
            $cookies.remove("remember");
        }
    }
    //...
}

As a consequence, we’ll check this cookie to determine whether we should attempt to refresh the access token, based on whether the user wishes to be remembered.

5. Refreshing Tokens by Intercepting 401 Responses

To intercept requests that come back with a 401 response, let’s modify our AngularJS application to add an interceptor with a responseError function:

app.factory('rememberMeInterceptor', ['$q', '$injector', '$httpParamSerializer', 
  function($q, $injector, $httpParamSerializer) {  
    var interceptor = {
        responseError: function(response) {
            if (response.status == 401){
                
                // refresh access token

                // make the backend call again and chain the request
                return deferred.promise.then(function() {
                    return $http(response.config);
                });
            }
            return $q.reject(response);
        }
    };
    return interceptor;
}]);

Our function checks if the status is 401 – which means the Access Token is invalid, and if so, attempts to use the Refresh Token in order to obtain a new valid Access Token.

If this is successful, the function continues to re-try the initial request which resulted in the 401 error. This ensures a seamless experience for the user.

Let’s take a closer look at the process of refreshing the access token. First, we will initialize the necessary variables:

var $http = $injector.get('$http');
var $cookies = $injector.get('$cookies');
var deferred = $q.defer();

var refreshData = {grant_type:"refresh_token"};
                
var req = {
    method: 'POST',
    url: "oauth/token",
    headers: {"Content-type": "application/x-www-form-urlencoded; charset=utf-8"},
    data: $httpParamSerializer(refreshData)
}

You can see the req variable which we will use to send a POST request to the /oauth/token endpoint, with parameter grant_type=refresh_token.

Next, let’s use the $http module we have injected to send the request. If the request is successful, we will set a new Authentication header with the new access token value, as well as a new value for the access_token cookie. If the request fails, which may happen if the refresh token also eventually expires, then the user is redirected to the login page:

$http(req).then(
    function(data){
        $http.defaults.headers.common.Authorization= 'Bearer ' + data.data.access_token;
        var expireDate = new Date (new Date().getTime() + (1000 * data.data.expires_in));
        $cookies.put("access_token", data.data.access_token, {'expires': expireDate});
        window.location.href="index";
    },function(){
        console.log("error");
        $cookies.remove("access_token");
        window.location.href = "login";
    }
);

The Refresh Token is added to the request by the CustomPreZuulFilter we implemented in the previous article:

@Component
public class CustomPreZuulFilter extends ZuulFilter {

    @Override
    public Object run() {
        //...
        String refreshToken = extractRefreshToken(req);
        if (refreshToken != null) {
            Map<String, String[]> param = new HashMap<String, String[]>();
            param.put("refresh_token", new String[] { refreshToken });
            param.put("grant_type", new String[] { "refresh_token" });

            ctx.setRequest(new CustomHttpServletRequest(req, param));
        }
        //...
    }
}

In addition to defining the interceptor, we need to register it with the $httpProvider:

app.config(['$httpProvider', function($httpProvider) {  
    $httpProvider.interceptors.push('rememberMeInterceptor');
}]);

6. Refreshing Tokens Proactively

Another way to implement the “remember-me” functionality is by requesting a new access token before the current one expires.

When receiving an access token, the JSON response contains an expires_in value that specifies the number of seconds that the token will be valid for.

Let’s save this value in a cookie for each authentication:

$cookies.put("validity", data.data.expires_in);

Then, to send a refresh request, let’s use the AngularJS $timeout service to schedule a refresh call 10 seconds before the token expires:

if ($cookies.get("remember") == "yes"){
    var validity = $cookies.get("validity");
    if (validity >10) validity -= 10;
    $timeout( function(){ $scope.refreshAccessToken(); }, validity * 1000);
}

7. Conclusion

In this tutorial, we’ve explored two ways we can implement “Remember Me” functionality with an OAuth2 application and an AngularJS front-end.

The full source code of the examples can be found over on GitHub. You can access the login page with “remember me” functionality at the URL /login_remember.

Semaphores in Java


1. Overview

In this quick tutorial, we’ll explore the basics of semaphores and mutexes in Java.

2. Semaphore

We’ll start with java.util.concurrent.Semaphore. We can use semaphores to limit the number of concurrent threads accessing a specific resource.

In the following example, we will implement a simple login queue to limit the number of users in the system:

class LoginQueueUsingSemaphore {

    private Semaphore semaphore;

    public LoginQueueUsingSemaphore(int slotLimit) {
        semaphore = new Semaphore(slotLimit);
    }

    boolean tryLogin() {
        return semaphore.tryAcquire();
    }

    void logout() {
        semaphore.release();
    }

    int availableSlots() {
        return semaphore.availablePermits();
    }

}

Notice how we used the following methods:

  • tryAcquire() – returns true and acquires a permit if one is available immediately, otherwise returns false; in contrast, acquire() acquires a permit, blocking until one becomes available
  • release() – releases a permit
  • availablePermits() – returns the number of permits currently available
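We can see these methods in action with a small, self-contained sketch (the permit count of 2 is just an arbitrary choice for illustration):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        // Two permits: at most two concurrent acquisitions are allowed
        Semaphore semaphore = new Semaphore(2);

        System.out.println(semaphore.tryAcquire());       // true
        System.out.println(semaphore.tryAcquire());       // true
        System.out.println(semaphore.tryAcquire());       // false - no permits left
        semaphore.release();                              // give one permit back
        System.out.println(semaphore.availablePermits()); // 1
    }
}
```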

To test our login queue, we will first try to reach the limit and check if the next login attempt will be blocked:

@Test
public void givenLoginQueue_whenReachLimit_thenBlocked() {
    int slots = 10;
    ExecutorService executorService = Executors.newFixedThreadPool(slots);
    LoginQueueUsingSemaphore loginQueue = new LoginQueueUsingSemaphore(slots);
    IntStream.range(0, slots)
      .forEach(user -> executorService.execute(loginQueue::tryLogin));
    executorService.shutdown();

    assertEquals(0, loginQueue.availableSlots());
    assertFalse(loginQueue.tryLogin());
}

Next, we will see if any slots are available after a logout:

@Test
public void givenLoginQueue_whenLogout_thenSlotsAvailable() {
    int slots = 10;
    ExecutorService executorService = Executors.newFixedThreadPool(slots);
    LoginQueueUsingSemaphore loginQueue = new LoginQueueUsingSemaphore(slots);
    IntStream.range(0, slots)
      .forEach(user -> executorService.execute(loginQueue::tryLogin));
    executorService.shutdown();
    assertEquals(0, loginQueue.availableSlots());
    loginQueue.logout();

    assertTrue(loginQueue.availableSlots() > 0);
    assertTrue(loginQueue.tryLogin());
}

3. Timed Semaphore

Next, we will discuss Apache Commons TimedSemaphore. A TimedSemaphore allows a number of permits like a simple Semaphore, but within a given period of time; after this period, the timer resets and all permits are released.
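The core idea can be approximated in plain Java – the following sketch (not the actual Commons implementation, just an illustration of the behavior) backs a Semaphore with a scheduled task that restores the full set of permits every period:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Rough approximation of TimedSemaphore: a Semaphore whose
// permits are restored to the limit on a fixed schedule.
class ResettingSemaphore {

    private final Semaphore semaphore;
    private final int limit;

    ResettingSemaphore(long period, TimeUnit unit, int limit) {
        this.limit = limit;
        this.semaphore = new Semaphore(limit);
        ScheduledExecutorService scheduler =
          Executors.newSingleThreadScheduledExecutor(runnable -> {
              Thread thread = new Thread(runnable);
              thread.setDaemon(true); // don't keep the JVM alive
              return thread;
          });
        scheduler.scheduleAtFixedRate(this::reset, period, period, unit);
    }

    private void reset() {
        // discard whatever is left and start the new period with full permits
        semaphore.drainPermits();
        semaphore.release(limit);
    }

    boolean tryAcquire() {
        return semaphore.tryAcquire();
    }

    int availablePermits() {
        return semaphore.availablePermits();
    }
}
```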

We can use TimedSemaphore to build a simple delay queue as follows:

class DelayQueueUsingTimedSemaphore {

    private TimedSemaphore semaphore;

    DelayQueueUsingTimedSemaphore(long period, int slotLimit) {
        semaphore = new TimedSemaphore(period, TimeUnit.SECONDS, slotLimit);
    }

    boolean tryAdd() {
        return semaphore.tryAcquire();
    }

    int availableSlots() {
        return semaphore.getAvailablePermits();
    }

}

When we use a delay queue with a period of one second, and all the slots are used within that second, none should be available:

@Test
public void givenDelayQueue_whenReachLimit_thenBlocked() {
    int slots = 50;
    ExecutorService executorService = Executors.newFixedThreadPool(slots);
    DelayQueueUsingTimedSemaphore delayQueue 
      = new DelayQueueUsingTimedSemaphore(1, slots);
    
    IntStream.range(0, slots)
      .forEach(user -> executorService.execute(delayQueue::tryAdd));
    executorService.shutdown();

    assertEquals(0, delayQueue.availableSlots());
    assertFalse(delayQueue.tryAdd());
}

But after sleeping for some time, the semaphore should reset and release the permits:

@Test
public void givenDelayQueue_whenTimePass_thenSlotsAvailable() throws InterruptedException {
    int slots = 50;
    ExecutorService executorService = Executors.newFixedThreadPool(slots);
    DelayQueueUsingTimedSemaphore delayQueue = new DelayQueueUsingTimedSemaphore(1, slots);
    IntStream.range(0, slots)
      .forEach(user -> executorService.execute(delayQueue::tryAdd));
    executorService.shutdown();

    assertEquals(0, delayQueue.availableSlots());
    Thread.sleep(1000);
    assertTrue(delayQueue.availableSlots() > 0);
    assertTrue(delayQueue.tryAdd());
}

4. Semaphore vs. Mutex

A mutex acts similarly to a binary semaphore; we can use it to implement mutual exclusion.

In the following example, we’ll use a simple binary semaphore to build a counter:

class CounterUsingMutex {

    private Semaphore mutex;
    private int count;

    CounterUsingMutex() {
        mutex = new Semaphore(1);
        count = 0;
    }

    void increase() throws InterruptedException {
        mutex.acquire();
        this.count = this.count + 1;
        Thread.sleep(1000);
        mutex.release();

    }

    int getCount() {
        return this.count;
    }

    boolean hasQueuedThreads() {
        return mutex.hasQueuedThreads();
    }
}

When a lot of threads try to access the counter at once, they’ll simply be blocked in a queue:

@Test
public void whenMutexAndMultipleThreads_thenBlocked()
 throws InterruptedException {
    int count = 5;
    ExecutorService executorService
     = Executors.newFixedThreadPool(count);
    CounterUsingMutex counter = new CounterUsingMutex();
    IntStream.range(0, count)
      .forEach(user -> executorService.execute(() -> {
          try {
              counter.increase();
          } catch (InterruptedException e) {
              e.printStackTrace();
          }
      }));
    executorService.shutdown();

    assertTrue(counter.hasQueuedThreads());
}

After we wait, all threads will have accessed the counter, and no threads will be left in the queue:

@Test
public void givenMutexAndMultipleThreads_ThenDelay_thenCorrectCount()
 throws InterruptedException {
    int count = 5;
    ExecutorService executorService
     = Executors.newFixedThreadPool(count);
    CounterUsingMutex counter = new CounterUsingMutex();
    IntStream.range(0, count)
      .forEach(user -> executorService.execute(() -> {
          try {
              counter.increase();
          } catch (InterruptedException e) {
              e.printStackTrace();
          }
      }));
    executorService.shutdown();

    assertTrue(counter.hasQueuedThreads());
    Thread.sleep(5000);
    assertFalse(counter.hasQueuedThreads());
    assertEquals(count, counter.getCount());
}

5. Conclusion

In this article, we explored the basics of semaphores in Java.

As always, the full source code is available over on GitHub.


Type Erasure in Java


1. Overview

In this quick article, we’ll discuss the basics of an important mechanism in Java’s generics known as type erasure.

2. What is Type Erasure?

Type erasure can be explained as the process of enforcing type constraints only at compile time and discarding the element type information at runtime.

For example:

public static  <E> boolean containsElement(E [] elements, E element){
    for (E e : elements){
        if(e.equals(element)){
            return true;
        }
    }
    return false;
}

When compiled, the unbound type E gets replaced with an actual type of Object:

public static  boolean containsElement(Object [] elements, Object element){
    for (Object e : elements){
        if(e.equals(element)){
            return true;
        }
    }
    return false;
}

The compiler ensures type safety of our code and prevents runtime errors.
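Because the type argument is discarded, the effect of erasure is observable at runtime – for instance, a List&lt;String&gt; and a List&lt;Integer&gt; share exactly the same class object:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> integers = new ArrayList<>();

        // Both type arguments were erased at compile time,
        // so only one runtime class exists: java.util.ArrayList
        System.out.println(strings.getClass() == integers.getClass()); // true
    }
}
```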

3. Types of Type Erasure

Type erasure can occur at class (or variable) and method levels.

3.1. Class Type Erasure

At the class level, the class’s type parameters are discarded during code compilation and replaced with their first bound, or with Object if a type parameter is unbound.

Let’s implement a Stack using an array:

public class Stack<E> {
    private E[] stackContent;

    public Stack(int capacity) {
        this.stackContent = (E[]) new Object[capacity];
    }

    public void push(E data) {
        // ..
    }

    public E pop() {
        // ..
    }
}

Upon compilation, the unbound type parameter E is replaced with Object:

public class Stack {
    private Object[] stackContent;

    public Stack(int capacity) {
        this.stackContent = (Object[]) new Object[capacity];
    }

    public void push(Object data) {
        // ..
    }

    public Object pop() {
        // ..
    }
}

In a case where the type parameter E is bound:

public class BoundStack<E extends Comparable<E>> {
    private E[] stackContent;

    public BoundStack(int capacity) {
        this.stackContent = (E[]) new Object[capacity];
    }

    public void push(E data) {
        // ..
    }

    public E pop() {
        // ..
    }
}

When compiled, the bound type parameter E is replaced with the first bound class, Comparable in this case:

public class BoundStack {
    private Comparable [] stackContent;

    public BoundStack(int capacity) {
        this.stackContent = (Comparable[]) new Object[capacity];
    }

    public void push(Comparable data) {
        // ..
    }

    public Comparable pop() {
        // ..
    }
}

3.2. Method Type Erasure

For method-level type erasure, the method’s type parameter is not stored but rather converted to its parent type Object if it’s unbound, or to its first bound class when it’s bound.

Let’s consider a method to display the contents of any given array:

public static <E> void printArray(E[] array) {
    for (E element : array) {
        System.out.printf("%s ", element);
    }
}

Upon compilation, the type parameter E is replaced with Object:

public static void printArray(Object[] array) {
    for (Object element : array) {
        System.out.printf("%s ", element);
    }
}

For a bound method type parameter:

public static <E extends Comparable<E>> void printArray(E[] array) {
    for (E element : array) {
        System.out.printf("%s ", element);
    }
}

We’ll have the type parameter erased and replaced with Comparable:

public static void printArray(Comparable[] array) {
    for (Comparable element : array) {
        System.out.printf("%s ", element);
    }
}

4. Edge Cases

Sometimes during the type erasure process, the compiler creates a synthetic method to differentiate similar methods. These may come from method signatures that extend the same first bound class.

Let’s create a new class that extends our previous implementation of Stack:

public class IntegerStack extends Stack<Integer> {

    public IntegerStack(int capacity) {
        super(capacity);
    }

    public void push(Integer value) {
        super.push(value);
    }
}

Now let’s look at the following code:

IntegerStack integerStack = new IntegerStack(5);
Stack stack = integerStack;
stack.push("Hello");
Integer data = integerStack.pop();

After type erasure, we have:

IntegerStack integerStack = new IntegerStack(5);
Stack stack = integerStack;
stack.push("Hello");
Integer data = (Integer) integerStack.pop();

Notice how we can push a String on the IntegerStack – because IntegerStack inherited push(Object) from the parent class Stack. This is, of course, incorrect – as it should be an integer since integerStack is a Stack<Integer> type. 

So, not surprisingly, an attempt to pop a String and assign it to an Integer causes a ClassCastException from a cast inserted during the push by the compiler.

4.1. Bridge Methods

To solve the edge case above, the compiler sometimes creates a bridge method. This is a synthetic method created by the Java compiler while compiling a class or interface that extends a parameterized class or implements a parameterized interface where method signatures may be slightly different or ambiguous.

In our example above, the Java compiler preserves polymorphism of generic types after erasure by ensuring no method signature mismatch between IntegerStack‘s push(Integer) method and Stack‘s push(Object) method.

Hence, the compiler creates a bridge method here:

public class IntegerStack extends Stack {
    // Bridge method generated by the compiler
    
    public void push(Object value) {
        push((Integer)value);
    }

    public void push(Integer value) {
        super.push(value);
    }
}

Consequently, after type erasure, the Stack class’s push method delegates to the original push method of the IntegerStack class.
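We can confirm that the compiler generated such a synthetic method by inspecting a subclass with reflection and checking Method.isBridge(). This sketch uses a minimal Box hierarchy (a stand-in for the article’s Stack, to keep the example self-contained):

```java
import java.lang.reflect.Method;
import java.util.Arrays;

public class BridgeDemo {
    static class Box<E> {
        public void set(E value) { }
    }

    static class IntegerBox extends Box<Integer> {
        @Override
        public void set(Integer value) { }
    }

    public static void main(String[] args) {
        // IntegerBox declares set(Integer) plus a synthetic bridge set(Object)
        Arrays.stream(IntegerBox.class.getDeclaredMethods())
          .forEach(m -> System.out.println(m.getName()
            + Arrays.toString(m.getParameterTypes())
            + " bridge=" + m.isBridge()));
    }
}
```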

5. Conclusion

In this tutorial, we’ve discussed the concept of type erasure with examples in type parameter variables and methods.

You can read more about these concepts:

As always, the source code that accompanies this article is available over on GitHub.

Java Weekly, Issue 184


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Are Java 8 Streams Truly Lazy? Not Completely! [blog.jooq.org]

It turns out that the Java 8 Streams API is not as lazy as you might think – the flatMap() operation evaluates the inner Stream eagerly – which is not the case when working with Scala or Vavr.

>> Simple Spring Boot Admin Setup [techblog.bozho.net]

The cool Spring Boot Admin dashboard setup can be slightly unintuitive – here’s a good overview of how to set it up.

>> What’s new in JPA 2.2 – Stream the result of a Query execution [vladmihalcea.com]

The new addition to JPA 2.2 – returning Query results as Stream – is an interesting addition but still not as efficient as a paginated ResultSet.

>> Why you should avoid CascadeType.REMOVE for to-many associations and what to do instead [thoughts-on-java.org]

Using CascadeType.REMOVE can be quite dangerous – besides generating way too many queries, it can also remove more than expected.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> A Basic Programming Pattern: Filter First, Map Later [jooq.org]

In order to leverage the laziness of the Stream API and keep the complexity of the operations down, it’s important to rely on well-placed limits as much as possible – although even this might not enforce laziness in all scenarios.

>> ORMs Should Update “Changed” Values, Not Just “Modified” Ones [jooq.org]

Many ORMs update values that were “touched” but not necessarily changed – which is not ideal. Read the whole article to dive deeper into the problem and a few possible solutions.

3. Musings

>> A Look at 5 NoSQL Solutions [daedtech.com]

A quick and practical introduction to NoSQL and the most popular solutions.

>> Stop waiting for perfection and learn from your mistakes [allthingsdistributed.com]

Errors/mistakes happen and we need to learn how to embrace them in order to improve and innovate because they are the part of the process.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> You have a low opinion of people [dilbert.com]

>> Brain stimulator [dilbert.com]

>> Updating my friend resource [dilbert.com]

5. Pick of the Week

>> Make Your Life Better by Saying Thank You in These 7 Situations [jamesclear.com]

Consumer Driven Contracts with Pact


1. Overview

In this quick article, we’ll be looking at the concept of Consumer-Driven Contracts.

We will be testing integration with some external REST service through a contract using the Pact library. That contract can be defined by the external service and shared with every client that needs to call it. We’ll create a test based on the contract in the client application.

2. What is Pact?

Using Pact, we can define consumer expectations for a given provider (which can be an HTTP REST service) in the form of a contract (hence the name of the library).

We’re going to set up this contract using the DSL provided by Pact. Once defined, we can test interactions between consumers and the provider using the mock service that is created based on the defined contract.

3. Maven Dependency

To get started, we’ll need to add the Maven dependency for the pact-jvm-consumer-junit_2.11 library:

<dependency>
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-consumer-junit_2.11</artifactId>
    <version>3.5.0</version>
    <scope>test</scope>
</dependency>

4. Defining a Contract

When we want to create a test using Pact, first we need to define a @Rule that will be used in our test:

@Rule
public PactProviderRuleMk2 mockProvider
  = new PactProviderRuleMk2("test_provider", "localhost", 8080, this);

We’re passing the provider name, host, and port on which the server mock (which is created from the contract) will be started.

Let’s say that service has defined the contract for two HTTP methods that it can handle.

The first method is a GET request that returns JSON with two fields. When the request succeeds, it returns a 200 HTTP response code and the content-type header for JSON.

Let’s define such a contract using Pact.

We need to use the @Pact annotation and pass the consumer name for which the contract is defined. Inside of the annotated method, we can define our GET contract:

@Pact(consumer = "test_consumer")
public RequestResponsePact createPact(PactDslWithProvider builder) {
    Map<String, String> headers = new HashMap<>();
    headers.put("Content-Type", "application/json");

    return builder
      .given("test GET")
        .uponReceiving("GET REQUEST")
        .path("/")
        .method("GET")
      .willRespondWith()
        .status(200)
        .headers(headers)
        .body("{\"condition\": true, \"name\": \"tom\"}")
        (...)
}

Using the Pact DSL we define that for a given GET request we want to return a 200 response with specific headers and body.

The second part of our contract is the POST method. When the client sends a POST request to the path /create with a proper JSON body, it returns a 201 HTTP response code and the proper header.

Let’s define such contract with Pact:

(...)
.given("test POST")
.uponReceiving("POST REQUEST")
  .method("POST")
  .headers(headers)
  .body("{\"name\": \"Michael\"}")
  .path("/create")
.willRespondWith()
  .status(201)
  .headers(headers)
  .body("")
.toPact();

Note that we need to call the toPact() method at the end of the contract to return an instance of RequestResponsePact.

5. Testing Interactions Using the Defined Contract

Once we've defined the contract, we can test interactions with the mock service that is created out of that contract. We can create a normal JUnit test, but we need to remember to put the @PactVerification annotation on the test.

Let’s write a test for the GET request:

@Test
@PactVerification()
public void givenGet_whenSendRequest_shouldReturn200WithProperHeaderAndBody() {
 
    // when
    ResponseEntity<String> response = new RestTemplate()
      .getForEntity(mockProvider.getUrl(), String.class);

    // then
    assertThat(response.getStatusCode().value()).isEqualTo(200);
    assertThat(response.getHeaders().get("Content-Type").contains("application/json")).isTrue();
    assertThat(response.getBody()).contains("condition", "true", "name", "tom");
}

The @PactVerification annotation takes care of starting the HTTP service. In the test, we only need to send the GET request and assert that our response complies with the contract.

Let’s add the test for the POST method call as well:

HttpHeaders httpHeaders = new HttpHeaders();
httpHeaders.setContentType(MediaType.APPLICATION_JSON);
String jsonBody = "{\"name\": \"Michael\"}";

// when
ResponseEntity<String> postResponse = new RestTemplate()
  .exchange(
    mockProvider.getUrl() + "/create",
    HttpMethod.POST,
    new HttpEntity<>(jsonBody, httpHeaders), 
    String.class
);

//then
assertThat(postResponse.getStatusCode().value()).isEqualTo(201);

As we can see, the response code for the POST request is equal to 201 – exactly as it was defined in the Pact contract.

As we were using the @PactVerification() annotation, the Pact library starts the web server based on the previously defined contract before our test case runs.

6. Conclusion

In this quick tutorial, we had a look at Consumer Driven Contracts.

We created a contract using the Pact library. Once we defined the contract, we were able to test interactions with the external service and assert that it complies with the specification.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Data Classes in Kotlin


1. Overview

The Kotlin language introduces the concept of Data Classes, which represent simple classes used as data containers and do not encapsulate any additional logic. Simply put, Kotlin’s solution enables us to avoid writing a lot of boilerplate code.

In this quick article, we’ll take a look at Data Classes in Kotlin and compare them with their Java counterparts.

2. Kotlin Setup

To get started setting up the Kotlin project, check our introduction to the Kotlin Language tutorial.

3. Data Classes in Java

If we wanted to create a Movie entry in Java, we’d need to write a lot of boilerplate code:

public class Movie {

    private String name;
    private String studio;
    private float rating;
    
    public Movie(String name, String studio, float rating) {
        this.name = name;
        this.studio = studio;
        this.rating = rating;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getStudio() {
        return studio;
    }

    public void setStudio(String studio) {
        this.studio = studio;
    }

    public float getRating() {
        return rating;
    }

    public void setRating(float rating) {
        this.rating = rating;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        
        result = prime * result + ((name == null) ? 0 : name.hashCode());
        result = prime * result + Float.floatToIntBits(rating);
        result = prime * result + ((studio == null) ? 0 : studio.hashCode());
        
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        
        if (obj == null)
            return false;
        
        if (getClass() != obj.getClass())
            return false;
        
        Movie other = (Movie) obj;
        
        if (name == null) {            
            if (other.name != null)
                return false;
            
        } else if (!name.equals(other.name))
            return false;
        
        if (Float.floatToIntBits(rating) != Float.floatToIntBits(other.rating))
            return false;
        
        if (studio == null) {
            if (other.studio != null)
                return false;
            
        } else if (!studio.equals(other.studio))
            return false;
        
        return true;
    }

    @Override
    public String toString() {
        return "Movie [name=" + name + ", studio=" + studio + ", rating=" + rating + "]";
    }
}

86 lines of code. That’s a lot to store only three fields in a simple class.

4. Kotlin Data Class

Now, we’ll create the same Movie class, with the same functionalities, using Kotlin:

data class Movie(var name: String, var studio: String, var rating: Float)

As we can see, that’s massively easier and cleaner. Constructor, toString(), equals(), hashCode(), and additional copy() and componentN() functions are generated automatically.

4.1. Usage

A data class is instantiated the same way as other classes:

val movie = Movie("Whiplash", "Sony Pictures", 8.5F)

Now, the object's properties and functions are available:

println(movie.name)   //Whiplash
println(movie.studio) //Sony Pictures
println(movie.rating) //8.5

movie.rating = 9F

println(movie.toString()) //Movie(name=Whiplash, studio=Sony Pictures, rating=9.0)

4.2. Copy Function

The copy() function is generated in case we need to copy an object, altering some of its properties while keeping the rest unchanged.

val betterRating = movie.copy(rating = 9.5F)
println(betterRating.toString()) // Movie(name=Whiplash, studio=Sony Pictures, rating=9.5)

Java doesn't provide a clear, native way of copying/cloning objects. We could use the Cloneable interface, SerializationUtils.clone(), or a copy constructor.

4.3. Destructuring Declarations

Destructuring Declarations allow us to treat an object's properties as individual values. For each property in our data class, a componentN() function is generated:

movie.component1() // name
movie.component2() // studio
movie.component3() // rating

We can also create multiple variables from the object or directly from a function – it's important to remember to use parentheses:

val(name, studio, rating) = movie

fun getMovieInfo() = movie
val(namef, studiof, ratingf) = getMovieInfo()

4.4. Data Class Requirements

In order to create a data class, we have to fulfill the following requirements:

  • The primary constructor needs to have at least one parameter
  • All primary constructor parameters need to be marked as val or var
  • Data classes cannot be abstract, open, sealed or inner
  • (Before 1.1) Data classes could only implement interfaces

Since 1.1, data classes may extend other classes.

If the generated class needs to have a parameterless constructor, default values for all properties have to be specified:

data class Movie(var name: String = "", var studio: String = "", var rating: Float = 0F)

5. Conclusion

We’ve seen Data Classes in Kotlin, their usage and requirements, the reduced amount of boilerplate code written, and comparisons with the same code in Java.

If you want to learn more about Kotlin, check articles such as Kotlin Java Interoperability and the already mentioned Introduction to the Kotlin Language.

The full implementation of these examples can be found in our GitHub project.

Converting a List to Map in Kotlin


1. Introduction

In this quick tutorial, we’ll see how we can convert a List to a Map in Kotlin.

2. Implementation

Kotlin offers the convenient toMap method which, given a list of complex objects, allows us to have the elements in our list mapped by any values we provide:

val user1 = User("John", 18, listOf("Hiking"))
val user2 = User("Sara", 25, listOf("Chess"))
val user3 = User("Dave", 34, listOf("Games"))

@Test
fun givenList_whenConvertToMap_thenResult() {
    val myList = listOf(user1, user2, user3)
    val myMap = myList.map { it.name to it.age }.toMap()

    assertTrue(myMap.get("John") == 18)
}

Keep in mind that the to infix function is being used here to create pairs of names and ages. This method returns a map that preserves the entry order of the elements in the original list:

{John=18, Sara=25, Dave=34}

The same would happen when we map a smaller list of Strings:

@Test
fun givenStringList_whenConvertToMap_thenResult() {
    val myList = listOf("a", "b", "c")
    val myMap = myList.map { it to it }.toMap()

    assertTrue(myMap.get("a") == "a")
}

The only difference is that we don't specify a separate value, since each element is simply mapped to itself.

A second alternative for converting a List to a Map is the associateBy method:

@Test
fun givenList_whenAssociatedBy_thenResult() {
    val myList = listOf(user1, user2, user3)
    val myMap = myList.associateBy({ it.name }, { it.hobbies })
    
    assertTrue(myMap.get("John")!!.contains("Hiking"))
}

We modified the test so that it uses a list as the value:

{
    John=[Hiking], 
    Sara=[Chess], 
    Dave=[Games]
}

3. Which One to Use?

If both methods essentially achieve the same functionality, which one should we use?

toMap, in terms of implementation, is more intuitive. However, using this method requires us to transform our list into Pairs first, which later have to be translated to our Map, so this operation is particularly useful if we're already operating on collections of Pairs.

For collections of other types, the associate API will be the best choice.

4. Mapping Using associate* Methods

In our previous example, we used the associateBy method; however, the Kotlin collections package has different versions of it for different use cases.

4.1. The associate() Method

We'll start by using the associate method – which simply returns a Map by applying a transform function to the elements of the list:

@Test
fun givenStringList_whenAssociate_thenResult() {
    val myList = listOf("a", "b", "c", "d")
    val myMap = myList.associate{ it to it }

    assertTrue(myMap.get("a") == "a")
}

4.2. The associateTo Method

Using this method, we can collect our elements to an already existing map:

@Test
fun givenStringList_whenAssociateTo_thenResult() {
    val myList = listOf("a", "b", "c", "c", "b")
    val myMap = mutableMapOf<String, String>()

    myList.associateTo(myMap) {it to it}

    assertTrue(myMap.get("a") == "a")
}

It’s important to remember to use the mutable Map – this example will not work with an immutable one.

4.3. The associateByTo Method

The associateByTo method gives us the most flexibility of the three, since we can pass the map that will be populated, a keySelector function, and optionally a valueTransform function. For each specified key, the associated value is the object the key was extracted from:

@Test
fun givenStringList_whenAssociateByToUser_thenResult() {
    val myList = listOf(user1, user2, user3, user4)
    val myMap = mutableMapOf<String, User>()

    myList.associateByTo(myMap) {it.name}

    assertTrue(myMap.get("Dave")!!.age == 34)
}

Or we can use a valueTransform function:

@Test
fun givenStringList_whenAssociateByTo_thenResult() {
    val myList = listOf(user1, user2, user3, user4)
    val myMap = mutableMapOf<String, Int>()

    myList.associateByTo(myMap, {it.name}, {it.age})

    assertTrue(myMap.get("Dave") == 34)
}

It’s important to remember that if key collisions happen, only the last added value is retained.
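For comparison, Java's Collectors.toMap behaves differently on key collisions: it throws an IllegalStateException unless we supply a merge function. Below is a small sketch (the ToMapCollision class and its keying scheme are made up for illustration) in which the merge function keeps the last value, mirroring Kotlin's behavior:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ToMapCollision {

    // keys by the first character; the third argument to toMap resolves
    // key collisions, and (a, b) -> b keeps the last value seen
    public static Map<String, String> lastWins(List<String> items) {
        return items.stream()
          .collect(Collectors.toMap(s -> s.substring(0, 1), s -> s, (a, b) -> b));
    }
}
```

Without the merge function, a list like ["ab", "ac"] would make the collector throw, whereas Kotlin's associate* methods silently keep the last entry.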

5. Conclusion

In this article, we explored different ways of converting a List to a Map in Kotlin.

As always, the implementation of all of these examples and snippets can be found over on GitHub. This is a Maven-based project so it should be easy to import and run.

What is the serialVersionUID?


1. Overview

Simply put, the serialVersionUID is a unique identifier for Serializable classes.

This is used during the deserialization of an object, to ensure that a loaded class is compatible with the serialized object. If no matching class is found, an InvalidClassException is thrown.

2. Example

Let’s start by creating a serializable class, and declare a serialVersionUID identifier:

public class AppleProduct implements Serializable {

    private static final long serialVersionUID = 1234567L;

    public String headphonePort;
    public String thunderboltPort;
    public String lighteningPort;
}

Next, we’ll need two utility classes: one to serialize an AppleProduct object into a String, and another to deserialize the object from that String:

public class SerializationUtility {

    public static void main(String[] args) throws IOException {
        AppleProduct macBook = new AppleProduct();
        macBook.headphonePort = "headphonePort2020";
        macBook.thunderboltPort = "thunderboltPort2020";

        String serializedObj = serializeObjectToString(macBook);
 
        System.out.println("Serialized AppleProduct object to string:");
        System.out.println(serializedObj);
    }

    public static String serializeObjectToString(Serializable o) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(baos);
        oos.writeObject(o);
        oos.close();
        
        return Base64.getEncoder().encodeToString(baos.toByteArray());
    }
}
public class DeserializationUtility {
 
    public static void main(String[] args) throws IOException, ClassNotFoundException {
 
        String serializedObj = ... // omitted for clarity
        System.out.println(
          "Deserializing AppleProduct...");
 
        AppleProduct deserializedObj = (AppleProduct) deSerializeObjectFromString(serializedObj);
 
        System.out.println(
          "Headphone port of AppleProduct:" + deserializedObj.headphonePort);
        System.out.println(
          "Thunderbolt port of AppleProduct:" + deserializedObj.thunderboltPort);
    }
 
    public static Object deSerializeObjectFromString(String s)
      throws IOException, ClassNotFoundException {
  
        byte[] data = Base64.getDecoder().decode(s);
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data));
        Object o = ois.readObject();
        ois.close();
        return o;
    }
}

We begin by running SerializationUtility.java, which saves (serializes) the AppleProduct object into a String instance, encoding the bytes using Base64.

Then, using that String as an argument for the deserialization method, we run DeserializationUtility.java, which reassembles (deserializes) the AppleProduct object from the given String.

The output generated should be similar to this:

Serialized AppleProduct object to string:
rO0ABXNyACljb20uYmFlbGR1bmcuZGVzZXJpYWxpemF0aW9uLkFwcGxlUHJvZHVjdAAAAAAAEta
HAgADTAANaGVhZHBob25lUG9ydHQAEkxqYXZhL2xhbmcvU3RyaW5nO0wADmxpZ2h0ZW5pbmdQb3
J0cQB+AAFMAA90aHVuZGVyYm9sdFBvcnRxAH4AAXhwdAARaGVhZHBob25lUG9ydDIwMjBwdAATd
Gh1bmRlcmJvbHRQb3J0MjAyMA==
Deserializing AppleProduct...
Headphone port of AppleProduct:headphonePort2020
Thunderbolt port of AppleProduct:thunderboltPort2020

Now, let's modify the serialVersionUID constant in AppleProduct.java, and attempt to deserialize the AppleProduct object again from the same String produced earlier. Re-running DeserializationUtility.java should generate this output:

Deserializing AppleProduct...
Exception in thread "main" java.io.InvalidClassException: com.baeldung.deserialization.AppleProduct; local class incompatible: stream classdesc serialVersionUID = 1234567, local class serialVersionUID = 7654321
	at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
	at com.baeldung.deserialization.DeserializationUtility.deSerializeObjectFromString(DeserializationUtility.java:24)
	at com.baeldung.deserialization.DeserializationUtility.main(DeserializationUtility.java:15)

By changing the serialVersionUID of the class, we modified its version/state. As a result, no compatible classes were found during deserialization, and an InvalidClassException was thrown.
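If a class doesn't declare serialVersionUID explicitly, the runtime derives a default value from the class's structure. We can inspect the effective value of any Serializable class with the JDK's ObjectStreamClass (the two nested classes below are illustrative):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class SerialUidInspector {

    static class WithExplicitUid implements Serializable {
        private static final long serialVersionUID = 42L;
    }

    static class WithDefaultUid implements Serializable {
        // no serialVersionUID declared; the JVM derives one from the class structure
    }

    // returns the serialVersionUID that will actually be used during serialization
    public static long uidOf(Class<?> clazz) {
        return ObjectStreamClass.lookup(clazz).getSerialVersionUID();
    }
}
```

For WithExplicitUid this returns the declared 42, while the value for WithDefaultUid can change whenever the class structure changes – which is exactly why declaring the constant explicitly is recommended.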

3. Conclusion

In this quick article, we demonstrated the use of the serialVersionUID constant to facilitate versioning of serialized data.

As always, the code samples used throughout this article can be found over on GitHub.

Quick Guide to the Guava RateLimiter


1. Overview

In this article, we’ll be looking at the RateLimiter class from the Guava library.

The RateLimiter class is a construct that allows us to regulate the rate at which some processing happens. If we create a RateLimiter with N permits, it means that the process can acquire at most N permits per second.
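One common way to model this kind of regulation is a token bucket that refills over time. The following simplified, non-blocking sketch (our own illustration, not Guava's actual implementation) shows the idea:

```java
public class SimpleTokenBucket {

    private final double permitsPerSecond;
    private final double maxPermits;
    private double availablePermits;
    private long lastRefillNanos;

    public SimpleTokenBucket(double permitsPerSecond, double maxPermits) {
        this.permitsPerSecond = permitsPerSecond;
        this.maxPermits = maxPermits;
        this.availablePermits = maxPermits;
        this.lastRefillNanos = System.nanoTime();
    }

    // refill permits proportionally to the elapsed time, then try to take one
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1_000_000_000.0;
        availablePermits = Math.min(maxPermits, availablePermits + elapsedSeconds * permitsPerSecond);
        lastRefillNanos = now;
        if (availablePermits >= 1) {
            availablePermits -= 1;
            return true;
        }
        return false;
    }
}
```

Guava's RateLimiter additionally smooths bursts and can block until a permit becomes available, which the sketch above deliberately omits.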

2. Maven Dependency

We’ll be using Guava’s library:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>22.0</version>
</dependency>

The latest version can be found here.

3. Creating and Using RateLimiter 

Let’s say that we want to limit the rate of execution of the doSomeLimitedOperation() to 2 times per second.

We can create a RateLimiter instance using its create() factory method:

RateLimiter rateLimiter = RateLimiter.create(2);

Next, in order to get an execution permit from the RateLimiter, we need to call the acquire() method:

rateLimiter.acquire(1);

In order to check that this works, we'll make two subsequent calls to the throttled method:

long startTime = ZonedDateTime.now().getSecond();
rateLimiter.acquire(1);
doSomeLimitedOperation();
rateLimiter.acquire(1);
doSomeLimitedOperation();
long elapsedTimeSeconds = ZonedDateTime.now().getSecond() - startTime;

To simplify our testing, let's assume that the doSomeLimitedOperation() method completes immediately.

In that case, neither invocation of the acquire() method should block, and the elapsed time should be less than or equal to one second, because both permits can be acquired immediately:

assertThat(elapsedTimeSeconds <= 1).isTrue();

Additionally, we can acquire all permits in one acquire() call:

@Test
public void givenLimitedResource_whenRequestOnce_thenShouldPermitWithoutBlocking() {
    // given
    RateLimiter rateLimiter = RateLimiter.create(100);

    // when
    long startTime = ZonedDateTime.now().getSecond();
    rateLimiter.acquire(100);
    doSomeLimitedOperation();
    long elapsedTimeSeconds = ZonedDateTime.now().getSecond() - startTime;

    // then
    assertThat(elapsedTimeSeconds <= 1).isTrue();
}

This can be useful if, for example, we need to send 100 bytes per second. We can send one hundred times one byte acquiring one permit at a time. On the other hand, we can send all 100 bytes at once acquiring all 100 permits in one operation.

4. Acquiring Permits in a Blocking Way

Now, let’s consider a slightly more complex example.

We’ll create a RateLimiter with 100 permits. Then we’ll execute an action that needs to acquire 1000 permits. According to the specification of the RateLimiter, such action will need at least 10 seconds to complete because we’re able to execute only 100 units of action per second:

@Test
public void givenLimitedResource_whenUseRateLimiter_thenShouldLimitPermits() {
    // given
    RateLimiter rateLimiter = RateLimiter.create(100);

    // when
    long startTime = ZonedDateTime.now().getSecond();
    IntStream.range(0, 1000).forEach(i -> {
        rateLimiter.acquire();
        doSomeLimitedOperation();
    });
    long elapsedTimeSeconds = ZonedDateTime.now().getSecond() - startTime;

    // then
    assertThat(elapsedTimeSeconds >= 10).isTrue();
}

Note how we're using the acquire() method here – it is a blocking method, and we should be cautious when using it. When acquire() gets called, it blocks the executing thread until a permit is available.

Calling acquire() without an argument is the same as calling it with one as the argument – it will try to acquire a single permit.

5. Acquiring Permits With a Timeout 

The RateLimiter API also has a very useful tryAcquire() method that accepts a timeout and a TimeUnit as arguments.

Calling this method when there are no available permits causes it to wait at most the specified time, timing out if not enough permits become available within that window.

When the acquisition times out, the method returns false; when it succeeds, it returns true:

@Test
public void givenLimitedResource_whenTryAcquire_shouldNotBlockIndefinitely() {
    // given
    RateLimiter rateLimiter = RateLimiter.create(1);

    // when
    rateLimiter.acquire();
    boolean result = rateLimiter.tryAcquire(2, 10, TimeUnit.MILLISECONDS);

    // then
    assertThat(result).isFalse();
}

We created a RateLimiter with one permit so trying to acquire two permits will always cause tryAcquire() to return false.

6. Conclusion

In this quick tutorial, we had a look at the RateLimiter construct from the Guava library.

We learned how to use the RateLimiter to limit the number of permits per second. We saw how to use its blocking API, and we also used an explicit timeout to acquire a permit.

As always, the implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.


Apache Commons Collections BidiMap


1. Overview

In this short article, we’ll be looking at an interesting data structure in the Apache Commons Collections library – the BidiMap.

The BidiMap adds a possibility of looking up the key using the corresponding value on top of the standard Map interface.

2. Dependencies

We need to include the following dependency in our project for us to use BidiMap and its implementations. For Maven based projects, we have to add the following dependency to our pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.1</version>
</dependency>

For Gradle-based projects, we have to add the same artifact to our build.gradle file:

compile 'org.apache.commons:commons-collections4:4.1'

The latest version of this dependency can be found on Maven Central.

3. Implementations and Instantiation

BidiMap itself is just an interface that defines behaviors unique to a bi-directional map – and there are of course multiple implementations available.

It’s important to understand that implementations of BidiMap do not allow key and value duplicates. When a BidiMap gets inverted, any duplicate values will be converted to duplicate keys and will violate the map contract. A map must always have unique keys.

Let’s look at different concrete implementations of this interface:

  • DualHashBidiMap: This implementation uses two HashMap instances to implement the BidiMap internally. It provides fast lookup of entries using either an entry's key or value. However, two instances of HashMap have to be maintained
  • DualLinkedHashBidiMap: This implementation uses two LinkedHashMap instances and consequently maintains the insertion order of map entries. If we don't need the insertion order of the map entries to be maintained, we can just use the less expensive DualHashBidiMap
  • TreeBidiMap: This implementation is efficient and is realized by a Red-Black tree. The keys and values of TreeBidiMap are guaranteed to be sorted in ascending order using the natural ordering of the keys and values
  • There is also DualTreeBidiMap, which uses two instances of TreeMap to achieve the same thing as TreeBidiMap. DualTreeBidiMap is obviously more expensive than TreeBidiMap

The BidiMap interface extends the java.util.Map interface and so can serve as a drop-in replacement for it. We can use the no-arg constructor of the concrete implementations to instantiate a concrete object instance.
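The dual-map idea behind DualHashBidiMap can be sketched with two plain HashMaps kept in sync. The TinyBidiMap class below is a simplified illustration (not the library's code) that also shows why an entry with a duplicate value must evict the old entry:

```java
import java.util.HashMap;
import java.util.Map;

public class TinyBidiMap<K, V> {

    private final Map<K, V> forward = new HashMap<>();
    private final Map<V, K> backward = new HashMap<>();

    // keys AND values must stay unique, so an entry with a duplicate
    // value evicts the previous entry, just like BidiMap.put()
    public void put(K key, V value) {
        K oldKey = backward.remove(value);
        if (oldKey != null) {
            forward.remove(oldKey);
        }
        V oldValue = forward.put(key, value);
        if (oldValue != null) {
            backward.remove(oldValue);
        }
        backward.put(value, key);
    }

    public V get(K key) {
        return forward.get(key);
    }

    // the reverse lookup that a plain java.util.Map cannot do in O(1)
    public K getKey(V value) {
        return backward.get(value);
    }
}
```

Keeping both maps consistent on every mutation is exactly the bookkeeping the library handles for us.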

4. Unique BidiMap Methods

Now that we’ve explored the different implementations let’s look at methods that are unique to the interface.

The put() inserts a new key-value entry into the map. Note that if the value of the new entry matches the value of any existing entry, the existing entry will be removed in favor of the new entry.

The method returns the removed old entry or null if there’s none:

BidiMap<String, String> map = new DualHashBidiMap<>();
map.put("key1", "value1");
map.put("key2", "value2");
assertEquals(map.size(), 2);

The inverseBidiMap() reverses the key-value pair of a BidiMap. This method returns a new BidiMap where the keys have become the values and vice-versa. This operation can be very useful in translation and dictionary applications:

BidiMap<String, String> rMap = map.inverseBidiMap();
assertTrue(rMap.containsKey("value1") && rMap.containsKey("value2"));

The removeValue() is used to remove a map entry by specifying a value, instead of a key. This is an addition to Map implementations found in the java.util package:

map.removeValue("value2");
assertFalse(map.containsKey("key2"));

We can get the key mapped to a particular value in BidiMap with the getKey(). The method returns null if no key is mapped onto the specified value:

assertEquals(map.getKey("value1"), "key1");

5. Conclusion

This quick tutorial provided a look into the Apache Commons Collections library – specifically at BidiMap, its implementations and idiosyncratic methods.

The most exciting and distinguishing feature of BidiMap is its ability to look up and manipulate entries via keys as well as values.

As always, code snippets are available over on GitHub.

Zipping Collections in Java


1. Introduction

In this tutorial, we’ll illustrate how to zip two collections into one logical collection.

The “zip” operation is slightly different from the standard “concat” or “merge”. While the “concat” or “merge” operations simply add the new collection at the end of the existing collection, the “zip” operation takes an element from each collection and combines them into a pair.

The core Java library does not support “zip” directly, but there are certainly third-party libraries which do feature this useful operation.

Consider two lists, one with the names of people, the other containing their ages.

List<String> names = new ArrayList<>(Arrays.asList("John", "Jane", "Jack", "Dennis"));

List<Integer> ages = new ArrayList<>(Arrays.asList(24, 25, 27));

After zipping, we end up with name-age pairs constructed from corresponding elements from those two collections.

2. Using Java 8 IntStream

Using core Java, we could generate indexes using IntStream and then use them to extract corresponding elements from two collections:

IntStream
  .range(0, Math.min(names.size(), ages.size()))
  .mapToObj(i -> names.get(i) + ":" + ages.get(i))
  .collect(Collectors.toList());

3. Using Guava Streams

Google Guava 21 provides a zip helper method in the Streams class. This removes all the fuss of creating and mapping indexes and reduces the syntax to inputs and operations:

Streams
  .zip(names.stream(), ages.stream(), (name, age) -> name + ":" + age)
  .collect(Collectors.toList());

4. Using jOOλ (jOOL)

jOOL also provides some fascinating additions on top of Java 8 lambdas, and with support for Tuple1 through Tuple16, the zip operation becomes much more interesting:

Seq
  .of("John","Jane", "Dennis")
  .zip(Seq.of(24,25,27));

This will produce a result of a Seq containing Tuples of zipped elements:

(tuple("John", 24), tuple("Jane", 25), tuple("Dennis", 27))

jOOL's zip method gives us the flexibility to provide a custom transformation function:

Seq
  .of(1, 2, 3)
  .zip(Seq.of("a", "b", "c"), (x, y) -> x + ":" + y);

If we wish to zip with the index only, we can use the zipWithIndex method provided by jOOL:

Seq.of("a", "b", "c").zipWithIndex();
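Without jOOL, zipping with the index can be approximated with an IntStream over the element positions. A plain-Java sketch, with a made-up ZipWithIndex helper class:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ZipWithIndex {

    // pairs each element with its position, formatted as "element:index"
    public static List<String> zipWithIndex(List<String> items) {
        return IntStream.range(0, items.size())
          .mapToObj(i -> items.get(i) + ":" + i)
          .collect(Collectors.toList());
    }
}
```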

5. Conclusion

In this quick tutorial, we had a look at how to perform the zip operation.

As always, the code examples in the article can be found over on GitHub.

Bloom Filter in Java using Guava


1. Overview

In this article, we’ll be looking at the Bloom filter construct from the Guava library. A Bloom filter is a memory-efficient, probabilistic data structure that we can use to answer the question of whether or not a given element is in a set.

There are no false negatives with a Bloom filter, so when it returns false, we can be 100% certain that the element is not in the set.

However, a Bloom filter can return false positives, so when it returns true, there is a high probability that the element is in the set, but we can not be 100% sure.

For a more in-depth analysis of how a Bloom filter works, you can go through this tutorial.
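To make the mechanism concrete, here is a deliberately minimal Bloom filter built on the JDK's BitSet – an illustration only, not Guava's implementation. Each put() sets k bit positions derived from the element's hash, and mightContain() reports true only if all k positions are set:

```java
import java.util.BitSet;

public class TinyBloomFilter {

    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public TinyBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // derive k pseudo-independent positions from the element's hash code
    private int position(Object element, int i) {
        int h = element.hashCode() * 31 + i * 0x9E3779B9;
        return Math.floorMod(h, size);
    }

    public void put(Object element) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(position(element, i));
        }
    }

    // false means "definitely absent"; true means "probably present"
    public boolean mightContain(Object element) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(position(element, i))) {
                return false;
            }
        }
        return true;
    }
}
```

Because bits are only ever set, an inserted element can never be reported absent; but two different elements can happen to set the same positions, which is where false positives come from.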

2. Maven Dependency

We will be using Guava’s implementation of the Bloom filter, so let’s add the guava dependency:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>22.0</version>
</dependency>

The latest version can be found on Maven Central.

3. Why Use Bloom Filter?

The Bloom filter is designed to be space-efficient and fast. When using it, we can specify the acceptable probability of false-positive responses and, according to that configuration, the Bloom filter will occupy as little memory as it can.

Due to this space-efficiency, the Bloom filter will easily fit in memory even for huge numbers of elements. Some databases, including Cassandra and Oracle, use this filter as the first check before going to disk or cache, for example, when a request for a specific ID comes in.

If the filter returns that the ID is not present, the database can stop further processing of the request and return to the client. Otherwise, it goes to the disk and returns the element if it is found on disk.

4. Creating a Bloom Filter

Suppose we want to create a Bloom filter for up to 500 Integers and that we can tolerate a one-percent (0.01) probability of false positives.

We can use the BloomFilter class from the Guava library to achieve this. We need to pass the number of elements that we expect to be inserted into the filter and the desired false positive probability:

BloomFilter<Integer> filter = BloomFilter.create(
  Funnels.integerFunnel(),
  500,
  0.01);

Now let’s add some numbers to the filter:

filter.put(1);
filter.put(2);
filter.put(3);

We added only three elements, and we defined that the maximum number of insertions will be 500, so our Bloom filter should yield very accurate results. Let’s test it using the mightContain() method:

assertThat(filter.mightContain(1)).isTrue();
assertThat(filter.mightContain(2)).isTrue();
assertThat(filter.mightContain(3)).isTrue();

assertThat(filter.mightContain(100)).isFalse();

As the name suggests, we cannot be 100% sure that a given element is actually in the filter when the method returns true.

When mightContain() returns true in our example, we can be 99% certain that the element is in the filter, and there is a one-percent probability that the result is a false positive. When the filter returns false, we can be 100% certain that the element is not present.

5. Over-Saturated Bloom Filter

When we design our Bloom filter, it is important that we provide a reasonably accurate value for the expected number of elements. Otherwise, our filter will return false positives at a much higher rate than desired. Let’s see an example.

Suppose that we created a filter with a desired false-positive probability of one percent and an expected number of elements equal to five, but then we inserted 100,000 elements:

BloomFilter<Integer> filter = BloomFilter.create(
  Funnels.integerFunnel(),
  5,
  0.01);

IntStream.range(0, 100_000).forEach(filter::put);

Because the expected number of elements is so small, the filter will occupy very little memory.

However, as we add more items than expected, the filter becomes over-saturated and has a much higher probability of returning false positive results than the desired one percent:

assertThat(filter.mightContain(1)).isTrue();
assertThat(filter.mightContain(2)).isTrue();
assertThat(filter.mightContain(3)).isTrue();
assertThat(filter.mightContain(1_000_000)).isTrue();

Note that the mightContain() method returned true even for a value that we didn’t insert into the filter previously.

6. Conclusion

In this quick tutorial, we looked at the probabilistic nature of the Bloom filter data structure – making use of the Guava implementation.

You can find the implementation of all these examples and code snippets in the GitHub project.

This is a Maven project, so it should be easy to import and run as it is.

TemporalAdjuster in Java


1. Overview

In this tutorial, we’ll have a quick look at the TemporalAdjuster and use it in a few practical scenarios.

Java 8 introduced a new library for working with dates and times – java.time and TemporalAdjuster is a part of it. If you want to read more about the java.time, check this introductory article.

Simply put, TemporalAdjuster is a strategy for adjusting a Temporal object. Before getting into the usage of TemporalAdjuster, let’s have a look at the Temporal interface itself.

2. Temporal

Temporal defines a representation of a date, time, or a combination of both, depending on the implementation we’re going to be using.

There are a number of implementations of the Temporal interface, including:

  • LocalDate – which represents a date without a timezone
  • LocalDateTime – which represents a date and time without a timezone
  • HijrahDate – which represents a date in the Hijrah calendar system
  • MinguoDate – which represents a date in the Minguo calendar system
  • ThaiBuddhistDate – which represents a date in the Thai Buddhist calendar system

3. TemporalAdjuster

One of the interfaces included in this new library is TemporalAdjuster.

TemporalAdjuster is a functional interface which has many predefined implementations in the TemporalAdjusters class. The interface has a single abstract method named adjustInto() which can be called in any of its implementations by passing a Temporal object to it.

TemporalAdjuster allows us to perform complex date manipulations. For example, we can obtain the date of the next Sunday, the last day of the current month or the first day of the next year. We can, of course, do this using the old java.util.Calendar.

However, the new API abstracts away the underlying logic using its predefined implementations. For more information, visit the Javadoc.
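For contrast, here is roughly what the “date of the next Sunday” logic looks like with the old java.util.Calendar API – the kind of manual loop the predefined adjusters save us from writing:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class NextSundayLegacy {

    // manually advance day by day until we hit a Sunday -- the loop that
    // TemporalAdjusters.next(DayOfWeek.SUNDAY) replaces
    static Calendar nextSunday(Calendar start) {
        Calendar result = (Calendar) start.clone();
        do {
            result.add(Calendar.DAY_OF_MONTH, 1);
        } while (result.get(Calendar.DAY_OF_WEEK) != Calendar.SUNDAY);
        return result;
    }

    static int nextSundayDayOfMonth(int year, int month, int day) {
        // Calendar months are zero-based, hence month - 1
        return nextSunday(new GregorianCalendar(year, month - 1, day))
          .get(Calendar.DAY_OF_MONTH);
    }

    public static void main(String[] args) {
        // 2017-07-08 is a Saturday; the Sunday after it is 2017-07-09
        System.out.println(nextSundayDayOfMonth(2017, 7, 8)); // 9
    }
}
```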

4. Predefined TemporalAdjusters

The class TemporalAdjusters has many predefined static methods that return a TemporalAdjuster object to adjust Temporal objects in many different ways, regardless of which implementation of Temporal they are.

Here’s a short list of these methods and a quick definition of them:

  • dayOfWeekInMonth() – an adjuster for the ordinal day-of-week. For example, the date of the second Tuesday in March
  • firstDayOfMonth() – an adjuster for the date of the first day of the current month
  • firstDayOfNextMonth() – an adjuster for the date of the first day of the next month
  • firstDayOfNextYear() – an adjuster for the date of the first day of the next year
  • firstDayOfYear() – an adjuster for the date of the first day of the current year
  • lastDayOfMonth() – an adjuster for the date of the last day of the current month
  • nextOrSame() – an adjuster for the date of the next occurrence of a specific day-of-week or the same day in case today matches the required day-of-week

As we can see, the methods’ names are pretty much self-explanatory. For more TemporalAdjusters, visit the Javadoc.

Let’s start with a simple example. Instead of a specific date, we could use LocalDate.now() to get the current date from the system clock.

But, for this tutorial, we’re going to use a fixed date so that the tests won’t fail later when the expected result changes. Let’s see how we can use the TemporalAdjusters class to obtain the date of the Sunday after 2017-07-08:

@Test
public void whenAdjust_thenNextSunday() {
    LocalDate localDate = LocalDate.of(2017, 07, 8);
    LocalDate nextSunday = localDate.with(TemporalAdjusters.next(DayOfWeek.SUNDAY));
    
    String expected = "2017-07-09";
    
    assertEquals(expected, nextSunday.toString());
}

Here’s how we can obtain the last day of the current month:

LocalDate lastDayOfMonth = localDate.with(TemporalAdjusters.lastDayOfMonth());
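A few more of the predefined adjusters from the list above, applied to the same fixed date (2017-07-08, which falls on a Saturday):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class PredefinedAdjustersDemo {

    static final LocalDate SATURDAY_JULY_8 = LocalDate.of(2017, 7, 8);

    // second Tuesday of July 2017
    static String secondTuesday() {
        return SATURDAY_JULY_8.with(
          TemporalAdjusters.dayOfWeekInMonth(2, DayOfWeek.TUESDAY)).toString();
    }

    // first day of the following month
    static String firstDayOfNextMonth() {
        return SATURDAY_JULY_8.with(
          TemporalAdjusters.firstDayOfNextMonth()).toString();
    }

    // 2017-07-08 is itself a Saturday, so nextOrSame returns it unchanged
    static String nextOrSameSaturday() {
        return SATURDAY_JULY_8.with(
          TemporalAdjusters.nextOrSame(DayOfWeek.SATURDAY)).toString();
    }

    public static void main(String[] args) {
        System.out.println(secondTuesday());        // 2017-07-11
        System.out.println(firstDayOfNextMonth());  // 2017-08-01
        System.out.println(nextOrSameSaturday());   // 2017-07-08
    }
}
```

The last call shows the “OrSame” semantics: since the starting date already matches the requested day-of-week, the adjuster returns it as-is rather than jumping a week ahead.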

5. Defining Custom TemporalAdjuster Implementations

We can also define our custom implementations for TemporalAdjuster. There are two different ways of doing this.

5.1. Using Lambda Expressions

Let’s see how we can obtain the date that’s 14 days after 2017-07-08 using the Temporal.with() method:

@Test
public void whenAdjust_thenFourteenDaysAfterDate() {
    LocalDate localDate = LocalDate.of(2017, 07, 8);
    TemporalAdjuster temporalAdjuster = t -> t.plus(Period.ofDays(14));
    LocalDate result = localDate.with(temporalAdjuster);
    
    String fourteenDaysAfterDate = "2017-07-22";
    
    assertEquals(fourteenDaysAfterDate, result.toString());
}

In this example, using a lambda expression, we set the temporalAdjuster object to add 14 days to the localDate object, which holds the date (2017-07-08).

Let’s see how we can obtain the date of the working day right after 2017-07-08, again by defining our own TemporalAdjuster with a lambda expression, but this time using the ofDateAdjuster() static factory method:

static TemporalAdjuster NEXT_WORKING_DAY = TemporalAdjusters.ofDateAdjuster(date -> {
    DayOfWeek dayOfWeek = date.getDayOfWeek();
    int daysToAdd;
    if (dayOfWeek == DayOfWeek.FRIDAY)
        daysToAdd = 3;
    else if (dayOfWeek == DayOfWeek.SATURDAY)
        daysToAdd = 2;
    else
        daysToAdd = 1;
    return date.plusDays(daysToAdd);
});

Testing our code:

@Test
public void whenAdjust_thenNextWorkingDay() {
    LocalDate localDate = LocalDate.of(2017, 07, 8);
    TemporalAdjuster temporalAdjuster = NEXT_WORKING_DAY;
    LocalDate result = localDate.with(temporalAdjuster);

    assertEquals("2017-07-10", result.toString());
}

5.2. By Implementing the TemporalAdjuster Interface

Let’s see how we can write a custom TemporalAdjuster that obtains the working day after 2017-07-08 by implementing the TemporalAdjuster interface:

public class CustomTemporalAdjuster implements TemporalAdjuster {

    @Override
    public Temporal adjustInto(Temporal temporal) {
        DayOfWeek dayOfWeek 
          = DayOfWeek.of(temporal.get(ChronoField.DAY_OF_WEEK));
        
        int daysToAdd;
        if (dayOfWeek == DayOfWeek.FRIDAY)
            daysToAdd = 3;
        else if (dayOfWeek == DayOfWeek.SATURDAY)
            daysToAdd = 2;
        else
            daysToAdd = 1;
        return temporal.plus(daysToAdd, ChronoUnit.DAYS);
    }
}

Now, let’s run our test:

@Test
public void whenAdjustAndImplementInterface_thenNextWorkingDay() {
    LocalDate localDate = LocalDate.of(2017, 07, 8);
    CustomTemporalAdjuster temporalAdjuster = new CustomTemporalAdjuster();
    LocalDate nextWorkingDay = localDate.with(temporalAdjuster);
    
    assertEquals("2017-07-10", nextWorkingDay.toString());
}

6. Conclusion

In this tutorial, we’ve shown what TemporalAdjuster is, predefined TemporalAdjusters, how they can be used, and how we can implement our custom TemporalAdjuster implementations in two different ways.

The full implementation of this tutorial can be found over on GitHub.

A Guide to Apache Commons DbUtils


1. Overview

Apache Commons DbUtils is a small library that makes working with JDBC a lot easier.

In this article, we’ll implement examples to showcase its features and capabilities.

2. Setup

2.1. Maven Dependencies

First, we need to add the commons-dbutils and h2 dependencies to our pom.xml:

<dependency>
    <groupId>commons-dbutils</groupId>
    <artifactId>commons-dbutils</artifactId>
    <version>1.6</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
</dependency>

You can find the latest version of commons-dbutils and h2 on Maven Central.

2.2. Test Database

With our dependencies in place, let’s create a script to create the tables and records we’ll use:

CREATE TABLE employee(
    id int NOT NULL PRIMARY KEY auto_increment,
    firstname varchar(255),
    lastname varchar(255),
    salary double,
    hireddate date
);

CREATE TABLE email(
    id int NOT NULL PRIMARY KEY auto_increment,
    employeeid int,
    address varchar(255)
);

INSERT INTO employee (firstname,lastname,salary,hireddate)
  VALUES ('John', 'Doe', 10000.10, to_date('01-01-2001','dd-mm-yyyy'));
-- ...
INSERT INTO email (employeeid,address)
  VALUES (1, 'john@baeldung.com');
-- ...

All example test cases in this article will use a newly created connection to an H2 in-memory database:

public class DbUtilsUnitTest {
    private Connection connection;

    @Before
    public void setupDB() throws Exception {
        Class.forName("org.h2.Driver");
        String db
          = "jdbc:h2:mem:;INIT=runscript from 'classpath:/employees.sql'";
        connection = DriverManager.getConnection(db);
    }

    @After
    public void closeBD() {
        DbUtils.closeQuietly(connection);
    }
    // ...
}

2.3. POJOs

Finally, we’ll need two simple classes:

public class Employee {
    private Integer id;
    private String firstName;
    private String lastName;
    private Double salary;
    private Date hiredDate;

    // standard constructors, getters, and setters
}

public class Email {
    private Integer id;
    private Integer employeeId;
    private String address;

    // standard constructors, getters, and setters
}

3. Introduction

The DbUtils library provides the QueryRunner class as the main entry point for most of the available functionality.

This class works by receiving a connection to the database, a SQL statement to be executed, and an optional list of parameters to supply values for the placeholders of the query.

As we’ll see later, a few methods also receive a ResultSetHandler implementation – which is responsible for transforming ResultSet instances into the objects our application expects.

Of course, the library already provides several implementations that handle the most common transformations, such as lists, maps, and JavaBeans.

4. Querying Data

Now that we know the basics, we’re ready to query our database.

Let’s start with a quick example of obtaining all the records in the database as a list of maps using a MapListHandler:

@Test
public void givenResultHandler_whenExecutingQuery_thenExpectedList()
  throws SQLException {
    MapListHandler beanListHandler = new MapListHandler();

    QueryRunner runner = new QueryRunner();
    List<Map<String, Object>> list
      = runner.query(connection, "SELECT * FROM employee", beanListHandler);

    assertEquals(list.size(), 5);
    assertEquals(list.get(0).get("firstname"), "John");
    assertEquals(list.get(4).get("firstname"), "Christian");
}

Next, here’s an example using a BeanListHandler to transform the results into Employee instances:

@Test
public void givenResultHandler_whenExecutingQuery_thenEmployeeList()
  throws SQLException {
    BeanListHandler<Employee> beanListHandler
      = new BeanListHandler<>(Employee.class);

    QueryRunner runner = new QueryRunner();
    List<Employee> employeeList
      = runner.query(connection, "SELECT * FROM employee", beanListHandler);

    assertEquals(employeeList.size(), 5);
    assertEquals(employeeList.get(0).getFirstName(), "John");
    assertEquals(employeeList.get(4).getFirstName(), "Christian");
}

For queries that return a single value, we can use a ScalarHandler:

@Test
public void givenResultHandler_whenExecutingQuery_thenExpectedScalar()
  throws SQLException {
    ScalarHandler<Long> scalarHandler = new ScalarHandler<>();

    QueryRunner runner = new QueryRunner();
    String query = "SELECT COUNT(*) FROM employee";
    long count
      = runner.query(connection, query, scalarHandler);

    assertEquals(count, 5);
}

To learn about all the ResultSetHandler implementations, you can refer to the ResultSetHandler documentation.

4.1. Custom Handlers

We can also create a custom handler to pass to QueryRunner’s methods when we need more control over how the results will be transformed into objects.

This can be done by either implementing the ResultSetHandler interface or extending one of the existing implementations provided by the library.

Let’s see how the second approach looks. First, let’s add another field to our Employee class:

public class Employee {
    private List<Email> emails;
    // ...
}

Now, let’s create a class that extends the BeanListHandler type and sets the email list for each employee:

public class EmployeeHandler extends BeanListHandler<Employee> {

    private Connection connection;

    public EmployeeHandler(Connection con) {
        super(Employee.class);
        this.connection = con;
    }

    @Override
    public List<Employee> handle(ResultSet rs) throws SQLException {
        List<Employee> employees = super.handle(rs);

        QueryRunner runner = new QueryRunner();
        BeanListHandler<Email> handler = new BeanListHandler<>(Email.class);
        String query = "SELECT * FROM email WHERE employeeid = ?";

        for (Employee employee : employees) {
            List<Email> emails
              = runner.query(connection, query, handler, employee.getId());
            employee.setEmails(emails);
        }
        return employees;
    }
}

Notice we are expecting a Connection object in the constructor so that we can execute the queries to get the emails.

Finally, let’s test our code to see if everything is working as expected:

@Test
public void
  givenResultHandler_whenExecutingQuery_thenEmailsSetted()
    throws SQLException {
    EmployeeHandler employeeHandler = new EmployeeHandler(connection);

    QueryRunner runner = new QueryRunner();
    List<Employee> employees
      = runner.query(connection, "SELECT * FROM employee", employeeHandler);

    assertEquals(employees.get(0).getEmails().size(), 2);
    assertEquals(employees.get(2).getEmails().size(), 3);
}

4.2. Custom Row Processors

In our examples, the column names of the employee table match the field names of our Employee class (the matching is case insensitive). However, that’s not always the case – for instance when column names use underscores to separate compound words.

In these situations, we can take advantage of the RowProcessor interface and its implementations to map the column names to the appropriate fields in our classes.

Let’s see how this looks. First, let’s create another table and insert some records into it:

CREATE TABLE employee_legacy (
    id int NOT NULL PRIMARY KEY auto_increment,
    first_name varchar(255),
    last_name varchar(255),
    salary double,
    hired_date date
);

INSERT INTO employee_legacy (first_name,last_name,salary,hired_date)
  VALUES ('John', 'Doe', 10000.10, to_date('01-01-2001','dd-mm-yyyy'));
-- ...

Now, let’s modify our EmployeeHandler class:

public class EmployeeHandler extends BeanListHandler<Employee> {
    // ...
    public EmployeeHandler(Connection con) {
        super(Employee.class,
          new BasicRowProcessor(new BeanProcessor(getColumnsToFieldsMap())));
        // ...
    }
    public static Map<String, String> getColumnsToFieldsMap() {
        Map<String, String> columnsToFieldsMap = new HashMap<>();
        columnsToFieldsMap.put("FIRST_NAME", "firstName");
        columnsToFieldsMap.put("LAST_NAME", "lastName");
        columnsToFieldsMap.put("HIRED_DATE", "hiredDate");
        return columnsToFieldsMap;
    }
    // ...
}

Notice we are using a BeanProcessor to do the actual mapping of columns to fields, and only for those columns that need custom handling.
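If many legacy columns follow the same underscore convention, the column-to-field map could also be generated rather than written by hand. Here is a hypothetical snake_case-to-camelCase helper (not part of DbUtils) sketching that idea:

```java
import java.util.Locale;

public class ColumnNameMapper {

    // convert an UPPER_SNAKE_CASE column name (e.g. FIRST_NAME) to a
    // camelCase field name (firstName); a hypothetical helper, not DbUtils API
    static String toCamelCase(String columnName) {
        StringBuilder field = new StringBuilder();
        boolean upperNext = false;
        for (char c : columnName.toLowerCase(Locale.ROOT).toCharArray()) {
            if (c == '_') {
                upperNext = true;
            } else {
                field.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return field.toString();
    }

    public static void main(String[] args) {
        System.out.println(toCamelCase("FIRST_NAME"));  // firstName
        System.out.println(toCamelCase("HIRED_DATE"));  // hiredDate
    }
}
```

Looping such a helper over the result of ResultSetMetaData would fill the map automatically, at the cost of hiding any column that needs a non-standard mapping.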

Finally, let’s test everything is ok:

@Test
public void
  givenResultHandler_whenExecutingQuery_thenAllPropertiesSetted()
    throws SQLException {
    EmployeeHandler employeeHandler = new EmployeeHandler(connection);

    QueryRunner runner = new QueryRunner();
    String query = "SELECT * FROM employee_legacy";
    List<Employee> employees
      = runner.query(connection, query, employeeHandler);

    assertEquals((int) employees.get(0).getId(), 1);
    assertEquals(employees.get(0).getFirstName(), "John");
}

5. Inserting Records

The QueryRunner class provides two approaches to creating records in a database.

The first one is to use the update() method and pass the SQL statement and an optional list of replacement parameters. The method returns the number of inserted records:

@Test
public void whenInserting_thenInserted() throws SQLException {
    QueryRunner runner = new QueryRunner();
    String insertSQL
      = "INSERT INTO employee (firstname,lastname,salary, hireddate) "
        + "VALUES (?, ?, ?, ?)";

    int numRowsInserted
      = runner.update(
        connection, insertSQL, "Leia", "Kane", 60000.60, new Date());

    assertEquals(numRowsInserted, 1);
}

The second one is to use the insert() method that, in addition to the SQL statement and replacement parameters, needs a ResultSetHandler to transform the resulting auto-generated keys. The return value will be what the handler returns:

@Test
public void
  givenHandler_whenInserting_thenExpectedId() throws SQLException {
    ScalarHandler<Integer> scalarHandler = new ScalarHandler<>();

    QueryRunner runner = new QueryRunner();
    String insertSQL
      = "INSERT INTO employee (firstname,lastname,salary, hireddate) "
        + "VALUES (?, ?, ?, ?)";

    int newId
      = runner.insert(
        connection, insertSQL, scalarHandler,
        "Jenny", "Medici", 60000.60, new Date());

    assertEquals(newId, 6);
}

6. Updating and Deleting

The update() method of the QueryRunner class can also be used to modify and erase records from our database.

Its usage is trivial. Here’s an example of how to update an employee’s salary:

@Test
public void givenSalary_whenUpdating_thenUpdated()
 throws SQLException {
    double salary = 35000;

    QueryRunner runner = new QueryRunner();
    String updateSQL
      = "UPDATE employee SET salary = salary * 1.1 WHERE salary <= ?";
    int numRowsUpdated = runner.update(connection, updateSQL, salary);

    assertEquals(numRowsUpdated, 3);
}

And here’s another to delete an employee with the given id:

@Test
public void whenDeletingRecord_thenDeleted() throws SQLException {
    QueryRunner runner = new QueryRunner();
    String deleteSQL = "DELETE FROM employee WHERE id = ?";
    int numRowsDeleted = runner.update(connection, deleteSQL, 3);

    assertEquals(numRowsDeleted, 1);
}

7. Asynchronous Operations

DbUtils provides the AsyncQueryRunner class to execute operations asynchronously. The methods of this class correspond to those of the QueryRunner class, except that they return a Future instance.

Here’s an example to obtain all employees in the database, waiting up to 10 seconds to get the results:

@Test
public void
  givenAsyncRunner_whenExecutingQuery_thenExpectedList() throws Exception {
    AsyncQueryRunner runner
      = new AsyncQueryRunner(Executors.newCachedThreadPool());

    EmployeeHandler employeeHandler = new EmployeeHandler(connection);
    String query = "SELECT * FROM employee";
    Future<List<Employee>> future
      = runner.query(connection, query, employeeHandler);
    List<Employee> employeeList = future.get(10, TimeUnit.SECONDS);

    assertEquals(employeeList.size(), 5);
}

8. Conclusion

In this tutorial, we explored the most notable features of the Apache Commons DbUtils library.

We queried data and transformed it into different object types, inserted records while obtaining the generated primary keys, and updated and deleted data based on given criteria. We also took advantage of the AsyncQueryRunner class to execute a query operation asynchronously.

And, as always, the complete source code for this article can be found over on GitHub.
