
A Guide to Jdbi


1. Introduction

In this article, we’re going to look at how to query a relational database with Jdbi.

Jdbi is an open source Java library (Apache license) that uses lambda expressions and reflection to provide a friendlier, higher level interface than JDBC to access the database.

Jdbi, however, isn’t an ORM; even though it has an optional SQL Object mapping module, it doesn’t have a session with attached objects, a database independence layer, or any of the other bells and whistles of a typical ORM.

2. Jdbi Setup

Jdbi is organized into a core and several optional modules.

To get started, we just have to include the core module in our dependencies:

<dependencies>
    <dependency>
        <groupId>org.jdbi</groupId>
        <artifactId>jdbi3-core</artifactId>
        <version>3.1.0</version>
    </dependency>
</dependencies>

Over the course of this article, we’ll show examples using the HSQL database:

<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.4.0</version>
    <scope>test</scope>
</dependency>

We can find the latest version of jdbi3-core, HSQLDB and the other Jdbi modules on Maven Central.

3. Connecting to the Database

First, we need to connect to the database. To do that, we have to specify the connection parameters.

The starting point is the Jdbi class:

Jdbi jdbi = Jdbi.create("jdbc:hsqldb:mem:testDB", "sa", "");

Here, we’re specifying the connection URL, a username, and, of course, a password.

3.1. Additional Parameters

If we need to provide other parameters, we use an overloaded method accepting a Properties object:

Properties properties = new Properties();
properties.setProperty("username", "sa");
properties.setProperty("password", "");
Jdbi jdbi = Jdbi.create("jdbc:hsqldb:mem:testDB", properties);

In these examples, we’ve saved the Jdbi instance in a local variable. That’s because we’ll use it to send statements and queries to the database.

In fact, merely calling create doesn’t establish any connection to the DB. It just saves the connection parameters for later.

3.2. Using a DataSource

If we connect to the database using a DataSource, as is usually the case, we can use the appropriate create overload:

Jdbi jdbi = Jdbi.create(datasource);

3.3. Working with Handles

Actual connections to the database are represented by instances of the Handle class.

The easiest way to work with handles, and have them automatically closed, is by using lambda expressions:

jdbi.useHandle(handle -> {
    doStuffWith(handle);
});

We call useHandle when we don’t have to return a value.

Otherwise, we use withHandle:

jdbi.withHandle(handle -> {
    return computeValue(handle);
});

It’s also possible, though not recommended, to manually open a connection handle; in that case, we have to close it when we’re done:

Jdbi jdbi = Jdbi.create("jdbc:hsqldb:mem:testDB", "sa", "");
try (Handle handle = jdbi.open()) {
    doStuffWith(handle);
}

Luckily, as we can see, Handle implements Closeable, so it can be used with try-with-resources.

4. Simple Statements

Now that we know how to obtain a connection, let’s see how to use it.

In this section, we’ll create a simple table that we’ll use throughout the article.

To send statements such as create table to the database, we use the execute method:

handle.execute(
  "create table project "
  + "(id integer identity, name varchar(50), url varchar(100))");

execute returns the number of rows that were affected by the statement:

int updateCount = handle.execute(
  "insert into project values "
  + "(1, 'tutorials', 'github.com/eugenp/tutorials')");

assertEquals(1, updateCount);

Actually, execute is just a convenience method.

We’ll look at more complex use cases in later sections, but before doing that, we need to learn how to extract results from the database.

5. Querying the Database

The most straightforward expression that produces results from the DB is a SQL query.

To issue a query with a Jdbi Handle, we have to, at least:

  1. create the query
  2. choose how to represent each row
  3. iterate over the results

We’ll now look at each of the points above.

5.1. Creating a Query

Unsurprisingly, Jdbi represents queries as instances of the Query class.

We can obtain one from a handle:

Query query = handle.createQuery("select * from project");

5.2. Mapping the Results

Jdbi abstracts away the JDBC ResultSet, which has quite a cumbersome API.

Therefore, it offers several possibilities to access the columns resulting from a query or some other statement that returns a result. We’ll now see the simplest ones.

We can represent each row as a map:

query.mapToMap();

The keys of the map will be the selected column names.

Or, when a query returns a single column, we can map it to the desired Java type:

handle.createQuery("select name from project").mapTo(String.class);

Jdbi has built-in mappers for many common classes. Those that are specific to some library or database system are provided in separate modules.

Of course, we can also define and register our mappers. We’ll talk about it in a later section.

Finally, we can map rows to a bean or some other custom class. Again, we’ll see the more advanced options in a dedicated section.
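
As a quick taste, here’s a minimal sketch of bean mapping, assuming a Project bean with id, name and url properties (BeanMapper is part of the jdbi3 core):

// register a bean mapper once, then map query results to the bean type
handle.registerRowMapper(BeanMapper.factory(Project.class));

List<Project> projects = handle
  .createQuery("select * from project")
  .mapTo(Project.class)
  .list();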

5.3. Iterating Over the Results

Once we’ve decided how to map the results by calling the appropriate method, we receive a ResultIterable object.

We can then use it to iterate over the results, one row at a time.

Here we’ll look at the most common options.

We can merely accumulate the results in a list:

List<Map<String, Object>> results = query.mapToMap().list();

Or to another Collection type:

Set<String> results = query.mapTo(String.class).collect(Collectors.toSet());

Or we can iterate over the results as a stream:

query.mapTo(String.class).useStream((Stream<String> stream) -> {
    doStuffWith(stream);
});

Here, we explicitly typed the stream variable for clarity, but it’s not necessary to do so.

5.4. Getting a Single Result

As a special case, when we expect or are interested in just one row, we have a couple of dedicated methods available.

If we want at most one result, we can use findFirst:

Optional<Map<String, Object>> first = query.mapToMap().findFirst();

As we can see, it returns an Optional value, which is only present if the query returns at least one result.

If the query returns more than one row, only the first is returned.

If instead, we want one and only one result, we use findOnly:

Date onlyResult = query.mapTo(Date.class).findOnly();

Finally, if there are zero results or more than one, findOnly throws an IllegalStateException.

6. Binding Parameters

Often, queries have a fixed portion and a parameterized portion. This has several advantages, including:

  • security: by avoiding string concatenation, we prevent SQL injection
  • ease: we don’t have to remember the exact syntax of complex data types such as timestamps
  • performance: the static portion of the query can be parsed once and cached

Jdbi supports both positional and named parameters.

We insert positional parameters as question marks in a query or statement:

Query positionalParamsQuery =
  handle.createQuery("select * from project where name = ?");

Named parameters, instead, start with a colon:

Query namedParamsQuery =
  handle.createQuery("select * from project where url like :pattern");

In either case, to set the value of a parameter, we use one of the variants of the bind method:

positionalParamsQuery.bind(0, "tutorials");
namedParamsQuery.bind("pattern", "%github.com/eugenp/%");

Note that, unlike JDBC, indexes start at 0.

6.1. Binding Multiple Named Parameters at Once

We can also bind multiple named parameters together using an object.

Let’s say we have this simple query:

Query query = handle.createQuery(
  "select id from project where name = :name and url = :url");
Map<String, String> params = new HashMap<>();
params.put("name", "REST with Spring");
params.put("url", "github.com/eugenp/REST-With-Spring");

Then, for example, we can use a map:

query.bindMap(params);

Or we can use an object in various ways. Here, for example, we bind an object that follows the JavaBean convention:

query.bindBean(paramsBean);

But we could also bind an object’s fields or methods; for all the supported options, see the Jdbi documentation.
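
Here, paramsBean could be any object whose readable properties match the named parameters; a minimal sketch might look like this:

public class ProjectParams {
    private String name;
    private String url;

    // standard constructors, getters and setters
}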

7. Issuing More Complex Statements

Now that we’ve seen queries, values, and parameters, we can go back to statements and apply the same knowledge.

Recall that the execute method we saw earlier is just a handy shortcut.

In fact, similarly to queries, DDL and DML statements are represented as instances of the class Update.

We can obtain one by calling the method createUpdate on a handle:

Update update = handle.createUpdate(
  "INSERT INTO PROJECT (NAME, URL) VALUES (:name, :url)");

Then, on an Update we have all the binding methods that we have in a Query, so Section 6 applies to updates as well.

Statements are executed when we call, surprise, execute:

int rows = update.execute();

As we have already seen, it returns the number of affected rows.

7.1. Extracting Auto-Increment Column Values

As a special case, when we have an insert statement with auto-generated columns (typically auto-increment or sequences), we may want to obtain the generated values.

Then, we don’t call execute, but executeAndReturnGeneratedKeys:

Update update = handle.createUpdate(
  "INSERT INTO PROJECT (NAME, URL) "
  + "VALUES ('tutorials', 'github.com/eugenp/tutorials')");
ResultBearing generatedKeys = update.executeAndReturnGeneratedKeys();

ResultBearing is the same interface implemented by the Query class that we’ve seen previously, so we already know how to use it:

generatedKeys.mapToMap()
  .findOnly().get("id");

8. Transactions

We need a transaction whenever we have to execute multiple statements as a single, atomic operation.

As with connection handles, we introduce a transaction by calling a method with a closure:

handle.useTransaction((Handle h) -> {
    haveFunWith(h);
});

And, as with handles, the transaction is automatically closed when the closure returns.

However, we must commit or roll back the transaction before returning:

handle.useTransaction((Handle h) -> {
    h.execute("...");
    h.commit();
});

If, however, an exception is thrown from the closure, Jdbi automatically rolls back the transaction.

As with handles, we have a dedicated method, inTransaction, if we want to return something from the closure:

handle.inTransaction((Handle h) -> {
    h.execute("...");
    h.commit();
    return true;
});

8.1. Manual Transaction Management

Although in the general case it’s not recommended, we can also begin and close a transaction manually:

handle.begin();
// ...
handle.commit();
handle.close();

9. Conclusions and Further Reading

In this tutorial, we’ve introduced the core of Jdbi: queries, statements, and transactions.

We’ve left out some advanced features, like custom row and column mapping and batch processing.

We also haven’t discussed any of the optional modules, most notably the SQL Object extension.

Everything is presented in detail in the Jdbi documentation.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as is.


The @JvmSynthetic Annotation in Kotlin


1. Introduction

Kotlin is a programming language for the JVM and compiles directly to Java Bytecode. However, it’s a lot more concise than Java, and certain JVM features don’t directly fit into the language.

Instead, Kotlin provides a set of annotations that we can apply to our code to trigger these features. These all exist in the kotlin.jvm package within kotlin-stdlib.

One of the more esoteric of these is the @JvmSynthetic annotation.

2. What Does @JvmSynthetic Do?

This annotation is applicable to methods, fields, getters, and setters — and it marks the appropriate element as synthetic in the generated class file.

We can use this annotation in our code exactly the same as any other annotation:

@JvmSynthetic
val syntheticField: String = "Field"

var syntheticAccessor: String
  @JvmSynthetic
  get() = "Accessor"
  
  @JvmSynthetic
  set(value) {
  }

@JvmSynthetic
fun syntheticMethod() {
}

When the above code is compiled, the compiler assigns the ACC_SYNTHETIC attribute to the corresponding elements in the class file:

private final java.lang.String syntheticField;
  descriptor: Ljava/lang/String;
  flags: ACC_PRIVATE, ACC_FINAL, ACC_SYNTHETIC
  ConstantValue: String Field
  RuntimeInvisibleAnnotations:
    0: #9()

public final void syntheticMethod();
  descriptor: ()V
  flags: ACC_PUBLIC, ACC_FINAL, ACC_SYNTHETIC
  Code:
    stack=0, locals=1, args_size=1
       0: return
    LocalVariableTable:
      Start  Length  Slot  Name   Signature
          0       1     0  this   Lcom/baeldung/kotlin/SyntheticTest;
    LineNumberTable:
      line 20: 0

3. What Is the Synthetic Attribute?

The ACC_SYNTHETIC attribute is used in JVM bytecode to indicate that an element wasn’t actually present in the original source code, but was instead generated by the compiler.

Its original intent was to support nested classes and interfaces in Java 1.1, but now we can apply it to any elements we may need it for.

Any element that the compiler marks as synthetic will be inaccessible from the Java language. This includes not being visible in any tooling, such as our IDE. However, our Kotlin code has no such restrictions and can both see and access these elements perfectly fine.

Note that if we have a Kotlin field annotated with @JvmSynthetic but not annotated with @JvmField, then the generated getter and setter are not considered synthetic methods and can be accessed just fine.

We can access synthetic elements from Java using the Reflection API if we’re able to locate them — for example, by name:

// assuming SyntheticClass is the Kotlin class declaring the synthetic method
SyntheticClass syntheticClass = new SyntheticClass();
Method syntheticMethod = SyntheticClass.class.getMethod("syntheticMethod");
syntheticMethod.invoke(syntheticClass);

4. What Can I Use This For?

The only real benefits of this are hiding code from Java developers and tools, and indicating to other developers the state of the code. It’s intended for working at a much lower level than most typical application code.

Its intention is to support code generation, allowing the compiler to generate fields and methods that shouldn’t be exposed to other developers but that are needed to support the actual exposed interface. We can think of it as a level of protection beyond private or internal.

Alternatively, we can use it to hide code from other tools, such as code coverage or static analysis.

However, there’s no guarantee that any given tool will honor this flag, so it might not always be useful here.

5. Conclusion

The @JvmSynthetic annotation may not be the most useful tool available, but it does have uses in certain situations, as we’ve seen here.

As always, even though we may only rarely use it, another tool in the developer toolbox can be quite beneficial. When the time comes that you need it, it’s well worth knowing how it works.

Assertions in JUnit 4 and JUnit 5


1. Introduction

In this article, we’re going to explore in detail the assertions available within JUnit.

Following the Migrating from JUnit 4 to JUnit 5 and A Guide to JUnit 5 articles, we’re now going into detail about the different assertions available in JUnit 4 and JUnit 5.

We’ll also highlight the enhancements made to the assertions in JUnit 5.

2. Assertions

Assertions are utility methods to support asserting conditions in tests; these methods are accessible through the Assert class, in JUnit 4, and the Assertions one, in JUnit 5.

In order to increase the readability of the test and of the assertions themselves, it’s always recommended to statically import the respective class. That way, we can refer directly to the assertion method itself without the representing class as a prefix.
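
For example, with these static imports in place we can write assertEquals(expected, actual) instead of Assert.assertEquals(expected, actual):

import static org.junit.Assert.*;                 // JUnit 4
import static org.junit.jupiter.api.Assertions.*; // JUnit 5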

Let’s start exploring the assertions available with JUnit 4.

3. Assertions in JUnit 4

In this version of the library, assertions are available for all primitive types, Objects, and arrays (either of primitives or Objects).

The order of the parameters within an assertion is the expected value followed by the actual value; optionally, the first parameter can be a String message that represents the message output of the evaluated condition.

The assertThat assertion is defined slightly differently, but we’ll cover it later on.

Let’s start with the assertEquals one.

3.1. assertEquals

The assertEquals assertion verifies that the expected and the actual values are equal:

@Test
public void whenAssertingEquality_thenEqual() {
    String expected = "Baeldung";
    String actual = "Baeldung";

    assertEquals(expected, actual);
}

It’s also possible to specify a message to display when the assertion fails:

assertEquals("failure - strings are not equal", expected, actual);

3.2. assertArrayEquals

If we want to assert that two arrays are equal, we can use assertArrayEquals:

@Test
public void whenAssertingArraysEquality_thenEqual() {
    char[] expected = {'J','u','n','i','t'};
    char[] actual = "Junit".toCharArray();
    
    assertArrayEquals(expected, actual);
}

If both arrays are null, the assertion will consider them equal:

@Test
public void givenNullArrays_whenAssertingArraysEquality_thenEqual() {
    int[] expected = null;
    int[] actual = null;

    assertArrayEquals(expected, actual);
}

3.3. assertNotNull and assertNull

When we want to test if an object is null, we can use the assertNull assertion:

@Test
public void whenAssertingNull_thenTrue() {
    Object car = null;
    
    assertNull("The car should be null", car);
}

Conversely, if we want to assert that an object should not be null, we can use the assertNotNull assertion.

3.4. assertNotSame and assertSame

With assertNotSame, it’s possible to verify if two variables don’t refer to the same object:

@Test
public void whenAssertingNotSameObject_thenDifferent() {
    Object cat = new Object();
    Object dog = new Object();

    assertNotSame(cat, dog);
}

Otherwise, when we want to verify that two variables refer to the same object, we can use the assertSame assertion.

3.5. assertTrue and assertFalse

In case we want to verify that a certain condition is true or false, we can respectively use the assertTrue assertion or the assertFalse one:

@Test
public void whenAssertingConditions_thenVerified() {
    assertTrue("5 is greater then 4", 5 > 4);
    assertFalse("5 is not greater then 6", 5 > 6);
}

3.6. fail

The fail assertion fails a test by throwing an AssertionError. It can be used to verify that an expected exception is actually thrown, or to make a test fail while it’s still under development.

Let’s see how we can use it in the first scenario:

@Test
public void whenCheckingExceptionMessage_thenEqual() {
    try {
        methodThatShouldThrowException();
        fail("Exception not thrown");
    } catch (UnsupportedOperationException e) {
        assertEquals("Operation Not Supported", e.getMessage());
    }
}
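
Here, methodThatShouldThrowException() stands for whatever method is under test; a trivial stand-in consistent with the assertion above would be:

private void methodThatShouldThrowException() {
    throw new UnsupportedOperationException("Operation Not Supported");
}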

3.7. assertThat

The assertThat assertion is the only one in JUnit 4 that has a reverse order of the parameters compared to the other assertions.

In this case, the assertion has an optional failure message, the actual value, and a Matcher object.

Let’s see how we can use this assertion to check if an array contains particular values:

@Test
public void testAssertThatHasItems() {
    assertThat(
      Arrays.asList("Java", "Kotlin", "Scala"), 
      hasItems("Java", "Kotlin"));
}

Additional information on the powerful use of the assertThat assertion with Matcher objects is available in Testing with Hamcrest.

4. JUnit 5 Assertions

JUnit 5 kept many of the assertion methods of JUnit 4 while adding a few new ones that take advantage of the Java 8 support.

Also in this version of the library, assertions are available for all primitive types, Objects, and arrays (either of primitives or Objects).

The order of the parameters of the assertions changed, moving the output message parameter to the last position. Thanks to the support of Java 8, the output message can be a Supplier, allowing its lazy evaluation.

Let’s start by reviewing the assertions also available in JUnit 4.

4.1. assertArrayEquals

The assertArrayEquals assertion verifies that the expected and the actual arrays are equal:

@Test
public void whenAssertingArraysEquality_thenEqual() {
    char[] expected = { 'J', 'u', 'p', 'i', 't', 'e', 'r' };
    char[] actual = "Jupiter".toCharArray();

    assertArrayEquals(expected, actual, "Arrays should be equal");
}

If the arrays aren’t equal, the message “Arrays should be equal” will be displayed as output.

4.2. assertEquals

In case we want to assert that two floats are equal, we can use the simple assertEquals assertion:

@Test
public void whenAssertingEquality_thenEqual() {
    float square = 2 * 2;
    float rectangle = 2 * 2;

    assertEquals(square, rectangle);
}

However, if we want to assert that the actual value differs by a predefined delta from the expected value, we can still use the assertEquals but we have to pass the delta value as the third parameter:

@Test
public void whenAssertingEqualityWithDelta_thenEqual() {
    float square = 2 * 2;
    float rectangle = 3 * 2;
    float delta = 2;

    assertEquals(square, rectangle, delta);
}

4.3. assertTrue and assertFalse

With the assertTrue assertion, it’s possible to verify the supplied conditions are true:

@Test
public void whenAssertingConditions_thenVerified() {
    assertTrue(5 > 4, "5 is greater than 4");
    assertTrue(null == null, "null is equal to null");
}

Thanks to the support of the lambda expression, it’s possible to supply a BooleanSupplier to the assertion instead of a boolean condition.

Let’s see how we can assert the correctness of a BooleanSupplier using the assertFalse assertion:

@Test
public void givenBooleanSupplier_whenAssertingCondition_thenVerified() {
    BooleanSupplier condition = () -> 5 > 6;

    assertFalse(condition, "5 is not greater than 6");
}

4.4. assertNull and assertNotNull

When we want to assert that an object is not null, we can use the assertNotNull assertion:

@Test
public void whenAssertingNotNull_thenTrue() {
    Object dog = new Object();

    assertNotNull(dog, () -> "The dog should not be null");
}

Conversely, we can use the assertNull assertion to check if the actual value is null:

@Test
public void whenAssertingNull_thenTrue() {
    Object cat = null;

    assertNull(cat, () -> "The cat should be null");
}

In both cases, the failure message will be retrieved in a lazy way since it’s a Supplier.

4.5. assertSame and assertNotSame

When we want to assert that the expected and the actual refer to the same Object, we must use the assertSame assertion:

@Test
public void whenAssertingSameObject_thenSuccessfull() {
    String language = "Java";
    Optional<String> optional = Optional.of(language);

    assertSame(language, optional.get());
}

Conversely, we can use the assertNotSame one.

4.6. fail

The fail assertion fails a test with the provided failure message, as well as the underlying cause. This can be useful to mark a test whose development is not yet complete:

@Test
public void whenFailingATest_thenFailed() {
    // Test not completed
    fail("FAIL - test not completed");
}

4.7. assertAll

One of the new assertions introduced in JUnit 5 is assertAll.

This assertion allows the creation of grouped assertions, where all the assertions are executed and their failures are reported together. In detail, this assertion accepts a heading that will be included in the message string for the MultipleFailuresError, and a Stream of Executable.

Let’s define a grouped assertion:

@Test
public void givenMultipleAssertion_whenAssertingAll_thenOK() {
    assertAll(
      "heading",
      () -> assertEquals(4, 2 * 2, "4 is 2 times 2"),
      () -> assertEquals("java", "JAVA".toLowerCase()),
      () -> assertEquals(null, null, "null is equal to null")
    );
}

The execution of a grouped assertion is interrupted only when one of the executables throws a blacklisted exception (OutOfMemoryError for example).

4.8. assertIterableEquals

The assertIterableEquals asserts that the expected and the actual iterables are deeply equal.

In order to be equal, both iterables must return equal elements in the same order; it isn’t required that the two iterables be of the same type.

With this consideration, let’s see how we can assert that two lists of different types (LinkedList and ArrayList for example) are equal:

@Test
public void givenTwoLists_whenAssertingIterables_thenEquals() {
    Iterable<String> al = new ArrayList<>(asList("Java", "Junit", "Test"));
    Iterable<String> ll = new LinkedList<>(asList("Java", "Junit", "Test"));

    assertIterableEquals(al, ll);
}

As with assertArrayEquals, if both iterables are null, they are considered equal.

4.9. assertLinesMatch

The assertLinesMatch asserts that the expected list of String matches the actual list.

This method differs from assertEquals and assertIterableEquals since, for each pair of expected and actual lines, it performs this algorithm:

  1. check if the expected line is equal to the actual one; if yes, continue with the next pair
  2. treat the expected line as a regular expression and check it with the String.matches() method; if it matches, continue with the next pair
  3. check if the expected line is a fast-forward marker; if yes, apply the fast-forward and repeat the algorithm from step 1

Let’s see how we can use this assertion to assert that two lists of String have matching lines:

@Test
public void whenAssertingEqualityListOfStrings_thenEqual() {
    List<String> expected = asList("Java", "\\d+", "JUnit");
    List<String> actual = asList("Java", "11", "JUnit");

    assertLinesMatch(expected, actual);
}

4.10. assertNotEquals

Complementary to assertEquals, the assertNotEquals assertion verifies that the expected and the actual values aren’t equal:

@Test
public void whenAssertingEquality_thenNotEqual() {
    Integer value = 5; // result of an algorithm
    
    assertNotEquals(0, value, "The result cannot be 0");
}

If both are null, the assertion fails.

4.11. assertThrows

In order to increase simplicity and readability, the new assertThrows assertion gives us a clear and simple way to assert whether an executable throws the specified exception type.

Let’s see how we can assert a thrown exception:

@Test
void whenAssertingException_thenThrown() {
    Throwable exception = assertThrows(
      IllegalArgumentException.class, 
      () -> {
          throw new IllegalArgumentException("Exception message");
      }
    );
    assertEquals("Exception message", exception.getMessage());
}

The assertion will fail if no exception is thrown, or if an exception of a different type is thrown.

4.12. assertTimeout and assertTimeoutPreemptively

In case we want to assert that the execution of a supplied Executable ends before a given Timeout, we can use the assertTimeout assertion:

@Test
public void whenAssertingTimeout_thenNotExceeded() {
    assertTimeout(
      ofSeconds(2), 
      () -> {
        // code that requires less than 2 seconds to execute
        Thread.sleep(1000);
      }
    );
}

However, with the assertTimeout assertion, the supplied executable is executed in the same thread as the calling code. Consequently, its execution won’t be preemptively aborted if the timeout is exceeded.

In case we want to be sure that execution of the executable will be aborted once it exceeds the timeout, we can use the assertTimeoutPreemptively assertion.

Both assertions can accept, instead of an Executable, a ThrowingSupplier, representing any generic block of code that returns an object and that can potentially throw a Throwable.
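
For instance, here’s a sketch of the ThrowingSupplier variant, which both times the block and hands back its result:

@Test
public void whenAssertingTimeoutWithSupplier_thenValueReturned() {
    String result = assertTimeout(ofSeconds(2), () -> "hello");

    assertEquals("hello", result);
}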

5. Conclusion

In this tutorial, we covered all the assertions available in both JUnit 4 and JUnit 5.

We briefly highlighted the improvements made in JUnit 5, with the introduction of new assertions and the support of lambdas.

As always, the complete source code for this article is available over on GitHub.

Maven Dependency Scopes


1. Introduction

Maven is one of the most popular build tools in the Java ecosystem, and one of its core features is dependency management.

In this article, we’re going to describe and explore the mechanism that helps in managing transitive dependencies in Maven projects – dependency scopes.

2. Transitive Dependency

Simply put, there are two types of dependencies in Maven: direct and transitive.

Direct dependencies are the ones that are explicitly included in the project. These can be included in the project using <dependency> tags:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>

Transitive dependencies, on the other hand, are dependencies required by our direct dependencies. Required transitive dependencies are automatically included in our project by Maven.

We can list all dependencies, including transitive ones, using the mvn dependency:tree command.
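
For the junit dependency above, the output would look something like this (the project coordinates here are made up for illustration):

$ mvn dependency:tree
[INFO] com.baeldung:scopes-demo:jar:1.0-SNAPSHOT
[INFO] \- junit:junit:jar:4.12:test
[INFO]    \- org.hamcrest:hamcrest-core:jar:1.3:test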

3. Dependency Scopes

Dependency scopes can help limit the transitivity of dependencies, and they modify the classpath for different build tasks. Maven has six default dependency scopes.

And it’s important to understand that each scope – except for import – does have an impact on transitive dependencies.

3.1. Compile

This is the default scope when no other scope is provided.

Dependencies with this scope are available on the classpath of the project in all build tasks and they’re propagated to the dependent projects.

More importantly, these dependencies are also transitive:

<dependency>
    <groupId>commons-lang</groupId>
    <artifactId>commons-lang</artifactId>
    <version>2.6</version>
</dependency>

3.2. Provided

This scope is used to mark dependencies that should be provided at runtime by the JDK or a container, hence the name.

A good use case for this scope would be a web application deployed in some container, where the container already provides some libraries itself.

For example, a web server already provides the Servlet API at runtime; thus, in our project, we can define those dependencies with the provided scope:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>servlet-api</artifactId>
    <version>2.5</version>
    <scope>provided</scope>
</dependency>

The provided dependencies are available only at compile-time and in the test classpath of the project; what’s more, they aren’t transitive.

3.3. Runtime

The dependencies with this scope are required at runtime, but they’re not needed for compilation of the project code. Because of that, dependencies marked with the runtime scope will be present in the runtime and test classpaths, but missing from the compile classpath.

A good example of dependencies that should use the runtime scope is a JDBC driver:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>6.0.6</version>
    <scope>runtime</scope>
</dependency>

3.4. Test

This scope is used to indicate that a dependency isn’t required at standard runtime of the application, but is used only for test purposes. Test dependencies aren’t transitive and are only present in the test compilation and execution classpaths.

The standard use case for this scope is adding a test library like JUnit to our application:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>

3.5. System

The system scope is very similar to the provided scope. The main difference between the two is that system requires us to point directly to a specific jar on the system.

The important thing to remember is that building the project with system scope dependencies may fail on different machines if dependencies aren’t present or are located in a different place than the one systemPath points to:

<dependency>
    <groupId>com.baeldung</groupId>
    <artifactId>custom-dependency</artifactId>
    <version>1.3.2</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/libs/custom-dependency-1.3.2.jar</systemPath>
</dependency>

3.6. Import

This scope was added in Maven 2.0.9 and it’s only available for the dependency type pom. We’re going to speak more about the type of the dependency in future articles.

Import indicates that this dependency should be replaced with all effective dependencies declared in its POM:

<dependency>
    <groupId>com.baeldung</groupId>
    <artifactId>custom-project</artifactId>
    <version>1.3.2</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
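
Note that the import scope only takes effect inside the <dependencyManagement> section; a typical usage sketch, importing a BOM-style POM, looks like this:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.baeldung</groupId>
            <artifactId>custom-project</artifactId>
            <version>1.3.2</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>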

4. Scope and Transitivity 

Each dependency scope affects transitive dependencies in its own way. This means that different transitive dependencies may end up in the project with different scopes.

However, dependencies with scopes provided and test will never be included in the main project.

Then:

  • For the compile scope, all transitive dependencies with the runtime scope will be pulled into the project with the runtime scope, and all transitive dependencies with the compile scope will be pulled in with the compile scope
  • For the provided scope, both runtime and compile scope transitive dependencies will be pulled in with the provided scope
  • For the test scope, both runtime and compile scope transitive dependencies will be pulled in with the test scope
  • For the runtime scope, both runtime and compile scope transitive dependencies will be pulled in with the runtime scope

5. Conclusion

In this quick tutorial, we focused on Maven dependency scopes, their purpose, and the details of how they operate.

If you want to dig deeper into Maven, the documentation is a great place to start.

A Guide to Unirest


1. Overview

Unirest is a lightweight HTTP client library from Mashape. Along with Java, it’s also available for Node.js, .Net, Python, Ruby, etc.

Before we jump in, note that we’ll use mocky.io for all our HTTP requests here.

2. Maven Setup

To get started, let’s add the necessary dependencies first:

<dependency>
    <groupId>com.mashape.unirest</groupId>
    <artifactId>unirest-java</artifactId>
    <version>1.4.9</version>
</dependency>

Check out the latest version here.

3. Simple Requests

Let’s send a simple HTTP request, to understand the semantics of the framework:

@Test
public void shouldReturnStatusOkay() throws UnirestException {
    HttpResponse<JsonNode> jsonResponse 
      = Unirest.get("http://www.mocky.io/v2/5a9ce37b3100004f00ab5154")
      .header("accept", "application/json").queryString("apiKey", "123")
      .asJson();

    assertNotNull(jsonResponse.getBody());
    assertEquals(200, jsonResponse.getStatus());
}

Notice that the API is fluent, efficient and quite easy to read.

We’re passing headers and parameters with the header() and queryString() APIs.

And the request gets invoked on the asJson() method call; we also have other options here, such as asBinary(), asString() and asObject().

To pass multiple headers or fields, we can create a map and pass them to .headers(Map<String, String> headers) and .fields(Map<String, Object> fields) respectively:

@Test
public void shouldReturnStatusAccepted() throws UnirestException {
    Map<String, String> headers = new HashMap<>();
    headers.put("accept", "application/json");
    headers.put("Authorization", "Bearer 5a9ce37b3100004f00ab5154");

    Map<String, Object> fields = new HashMap<>();
    fields.put("name", "Sam Baeldung");
    fields.put("id", "PSP123");

    HttpResponse<JsonNode> jsonResponse 
      = Unirest.put("http://www.mocky.io/v2/5a9ce7853100002a00ab515e")
      .headers(headers).fields(fields)
      .asJson();
 
    assertNotNull(jsonResponse.getBody());
    assertEquals(202, jsonResponse.getStatus());
}

3.1. Passing Query Params

To pass data as a query String, we’ll use the queryString() method:

HttpResponse<JsonNode> jsonResponse 
  = Unirest.get("http://www.mocky.io/v2/5a9ce37b3100004f00ab5154")
  .queryString("apiKey", "123")

3.2. Using Path Params

For passing any URL parameters, we can use the routeParam() method:

HttpResponse<JsonNode> jsonResponse 
  = Unirest.get("http://www.mocky.io/v2/5a9ce37b3100004f00ab5154/{userId}")
  .routeParam("userId", "123")

The parameter placeholder name must be the same as the first argument to the method.

3.3. Requests with Body

If our request requires a string/JSON body, we pass it using the body() method:

@Test
public void givenRequestBodyWhenCreatedThenCorrect() throws UnirestException {

    HttpResponse<JsonNode> jsonResponse 
      = Unirest.post("http://www.mocky.io/v2/5a9ce7663100006800ab515d")
      .body("{\"name\":\"Sam Baeldung\", \"city\":\"viena\"}")
      .asJson();
 
    assertEquals(201, jsonResponse.getStatus());
}

3.4. Object Mapper

In order to use the asObject() or body() in the request, we need to define our object mapper. For simplicity, we’ll use the Jackson object mapper.

Let’s first add the following dependencies to pom.xml:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.4</version>
</dependency>

Always use the latest version over on Maven Central.

Now let’s configure our mapper:

Unirest.setObjectMapper(new ObjectMapper() {
    com.fasterxml.jackson.databind.ObjectMapper mapper 
      = new com.fasterxml.jackson.databind.ObjectMapper();

    public String writeValue(Object value) {
        // Jackson throws checked exceptions, which the interface doesn't declare
        try {
            return mapper.writeValueAsString(value);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public <T> T readValue(String value, Class<T> valueType) {
        try {
            return mapper.readValue(value, valueType);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
});

Note that setObjectMapper() should only be called once, for setting the mapper; once the mapper instance is set, it will be used for all requests and responses.

Let’s now test the new functionality using a custom Article object:

@Test
public void givenArticleWhenCreatedThenCorrect() throws UnirestException {
    Article article 
      = new Article("ID1213", "Guide to Rest", "baeldung");
    HttpResponse<JsonNode> jsonResponse 
      = Unirest.post("http://www.mocky.io/v2/5a9ce7663100006800ab515d")
      .body(article)
      .asJson();
 
    assertEquals(201, jsonResponse.getStatus());
}
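
The Article class isn’t shown here; a minimal sketch consistent with the constructor call above (the field names are assumptions) would be:

public class Article {
    private String id;
    private String title;
    private String author;

    public Article(String id, String title, String author) {
        this.id = id;
        this.title = title;
        this.author = author;
    }

    // standard getters and setters
}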

4. Request Methods

Similar to any HTTP client, the framework provides separate methods for each HTTP verb:

POST:

Unirest.post("http://www.mocky.io/v2/5a9ce7663100006800ab515d")

PUT:

Unirest.put("http://www.mocky.io/v2/5a9ce7663100006800ab515d")

GET:

Unirest.get("http://www.mocky.io/v2/5a9ce7663100006800ab515d")

DELETE:

Unirest.delete("http://www.mocky.io/v2/5a9ce7663100006800ab515d")

PATCH:

Unirest.patch("http://www.mocky.io/v2/5a9ce7663100006800ab515d")

OPTIONS:

Unirest.options("http://www.mocky.io/v2/5a9ce7663100006800ab515d")

5. Response Methods

Once we get the response, let’s check the status code and the status message:

//...
jsonResponse.getStatus()

//...

Extract the headers:

//...
jsonResponse.getHeaders();
//...

Get the response body:

//...
jsonResponse.getBody();
jsonResponse.getRawBody();
//...

Notice that getRawBody() returns a stream of the unparsed response body, whereas getBody() returns the parsed body, using the object mapper defined in the earlier section.

6. Handling Asynchronous Requests

Unirest also has the capability to handle asynchronous requests – using java.util.concurrent.Future and callback methods:

@Test
public void whenAsyncRequestShouldReturnOk() throws ExecutionException, InterruptedException {
    Future<HttpResponse<JsonNode>> future = Unirest.post(
      "http://www.mocky.io/v2/5a9ce37b3100004f00ab5154?mocky-delay=10000ms")
      .header("accept", "application/json")
      .asJsonAsync(new Callback<JsonNode>() {

        public void failed(UnirestException e) {
            // Do something if the request failed
        }

        public void completed(HttpResponse<JsonNode> response) {
            // Do something if the request is successful
        }

        public void cancelled() {
            // Do something if the request is cancelled
        }
        });
 
    assertEquals(200, future.get().getStatus());
}

The com.mashape.unirest.http.async.Callback<T> interface provides three methods, failed(), cancelled() and completed(). 

Override the methods to perform the necessary operations depending on the response.

7. File Uploads

To upload or send a file as a part of the request, pass a java.io.File object as a field with name file:

@Test
public void givenFileWhenUploadedThenCorrect() throws UnirestException {

    HttpResponse<JsonNode> jsonResponse = Unirest.post(
      "http://www.mocky.io/v2/5a9ce7663100006800ab515d")
      .field("file", new File("/path/to/file"))
      .asJson();
 
    assertEquals(201, jsonResponse.getStatus());
}

We can also use ByteStream:

@Test
public void givenByteStreamWhenUploadedThenCorrect() throws IOException, UnirestException {
    try (InputStream inputStream = new FileInputStream(
      new File("/path/to/file/artcile.txt"))) {
        byte[] bytes = new byte[inputStream.available()];
        inputStream.read(bytes);
        HttpResponse<JsonNode> jsonResponse = Unirest.post(
          "http://www.mocky.io/v2/5a9ce7663100006800ab515d")
          .field("file", bytes, "article.txt")
          .asJson();
 
        assertEquals(201, jsonResponse.getStatus());
    }
}

Or use the input stream directly, passing ContentType.APPLICATION_OCTET_STREAM as the third argument to the field() method:

@Test
public void givenInputStreamWhenUploadedThenCorrect() throws IOException, UnirestException {
    try (InputStream inputStream = new FileInputStream(
      new File("/path/to/file/artcile.txt"))) {

        HttpResponse<JsonNode> jsonResponse = Unirest.post(
          "http://www.mocky.io/v2/5a9ce7663100006800ab515d")
          .field("file", inputStream, ContentType.APPLICATION_OCTET_STREAM, "article.txt").asJson();
 
        assertEquals(201, jsonResponse.getStatus());
    }
}

8. Unirest Configurations

The framework also supports typical configurations of an HTTP client, such as connection pooling, timeouts, and global headers.

Let’s set the maximum number of total connections and the maximum connections per route:

Unirest.setConcurrency(20, 5);

Configure connection and socket timeouts:

Unirest.setTimeouts(20000, 15000);

Note that the time values are in milliseconds.

Now let’s set default HTTP headers for all our requests:

Unirest.setDefaultHeader("X-app-name", "baeldung-unirest");
Unirest.setDefaultHeader("X-request-id", "100004f00ab5");

We can clear the global headers anytime:

Unirest.clearDefaultHeaders();

At some point, we might need to make requests through a proxy server:

Unirest.setProxy(new HttpHost("localhost", 8080));

One important aspect to be aware of is closing or exiting the application gracefully. Unirest spawns a background event loop to handle the operations; we need to shut down that loop before exiting our application:

Unirest.shutdown();

9. Conclusion

In this tutorial, we focused on the lightweight HTTP client framework Unirest. We worked through some simple examples, in both synchronous and asynchronous modes.

Finally, we also used several advanced configurations, such as connection pooling and proxy settings.

As usual, the source code is available over on GitHub.

Chain of Responsibility Design Pattern in Java


1. Introduction

In this article, we’re going to take a look at a widely used behavioral design pattern: Chain of Responsibility.

We can find more design patterns in our previous article.

2. Chain of Responsibility

Wikipedia defines Chain of Responsibility as a design pattern consisting of “a source of command objects and a series of processing objects”.

Each processing object in the chain is responsible for a certain type of command, and when the processing is done, it forwards the command to the next processor in the chain.

The Chain of Responsibility pattern is handy for:

  • Decoupling a sender and receiver of a command
  • Picking a processing strategy at processing-time

So, let’s see a simple example of the pattern.

3. Example

We’re going to use Chain of Responsibility to create a chain for handling authentication requests.

So, the input authentication provider will be the command, and each authentication processor will be a separate processor object.

Let’s first create an abstract base class for our processors:

public abstract class AuthenticationProcessor {

    public AuthenticationProcessor nextProcessor;
    
    // standard constructors

    public abstract boolean isAuthorized(AuthenticationProvider authProvider);
}

Next, let’s create concrete processors which extend AuthenticationProcessor:

public class OAuthProcessor extends AuthenticationProcessor {

    public OAuthProcessor(AuthenticationProcessor nextProcessor) {
        super(nextProcessor);
    }

    @Override
    public boolean isAuthorized(AuthenticationProvider authProvider) {
        if (authProvider instanceof OAuthTokenProvider) {
            return true;
        } else if (nextProcessor != null) {
            return nextProcessor.isAuthorized(authProvider);
        }
        
        return false;
    }
}
public class UsernamePasswordProcessor extends AuthenticationProcessor {

    public UsernamePasswordProcessor(AuthenticationProcessor nextProcessor) {
        super(nextProcessor);
    }

    @Override
    public boolean isAuthorized(AuthenticationProvider authProvider) {
        if (authProvider instanceof UsernamePasswordProvider) {
            return true;
        } else if (nextProcessor != null) {
            return nextProcessor.isAuthorized(authProvider);
        }
        return false;
    }
}

Here, we created two concrete processors for our incoming authorization requests: UsernamePasswordProcessor and OAuthProcessor.

For each one, we overrode the isAuthorized method.
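
The provider types aren’t shown here; a minimal sketch that satisfies the instanceof checks above is a simple marker hierarchy:

public abstract class AuthenticationProvider {
}

public class OAuthTokenProvider extends AuthenticationProvider {
}

public class UsernamePasswordProvider extends AuthenticationProvider {
}

public class SamlTokenProvider extends AuthenticationProvider {
}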

Now let’s create a couple of tests:

public class ChainOfResponsibilityTest {

    private static AuthenticationProcessor getChainOfAuthProcessor() {
        AuthenticationProcessor oAuthProcessor = new OAuthProcessor(null);
        return new UsernamePasswordProcessor(oAuthProcessor);
    }

    @Test
    public void givenOAuthProvider_whenCheckingAuthorized_thenSuccess() {
        AuthenticationProcessor authProcessorChain = getChainOfAuthProcessor();
        assertTrue(authProcessorChain.isAuthorized(new OAuthTokenProvider()));
    }

    @Test
    public void givenSamlProvider_whenCheckingAuthorized_thenSuccess() {
        AuthenticationProcessor authProcessorChain = getChainOfAuthProcessor();
 
        assertFalse(authProcessorChain.isAuthorized(new SamlTokenProvider()));
    }
}

The example above creates a chain of authentication processors: UsernamePasswordProcessor -> OAuthProcessor. In the first test, the authorization succeeds, and in the other, it fails.

First, UsernamePasswordProcessor checks to see if the authentication provider is an instance of UsernamePasswordProvider.

Not being the expected input, UsernamePasswordProcessor delegates to OAuthProcessor.

Last, the OAuthProcessor processes the command. In the first test, there is a match, and the authorization succeeds. In the second, there are no more processors in the chain, and, as a result, the authorization fails.

4. Implementation Principles

We need to keep a few important principles in mind while implementing Chain of Responsibility:

  • Each processor in the chain will have its implementation for processing a command
    • In our example above, all processors have their implementation of isAuthorized
  • Every processor in the chain should have reference to the next processor
    • Above, UsernamePasswordProcessor delegates to OAuthProcessor
  • Each processor is responsible for delegating to the next processor so beware of dropped commands
    • Again in our example, if the command is an instance of SamlProvider then the request may not get processed and will be unauthorized
  • Processors should not form a recursive cycle
    • In our example, we don’t have a cycle in our chain: UsernamePasswordProcessor -> OAuthProcessor. But, if we explicitly set UsernamePasswordProcessor as the next processor of OAuthProcessor, then we end up with a cycle in our chain: UsernamePasswordProcessor -> OAuthProcessor -> UsernamePasswordProcessor. Taking the next processor in the constructor can help with this
  • Only one processor in the chain handles a given command
    • In our example, if an incoming command contains an instance of  OAuthTokenProvider, then only OAuthProcessor will handle the command

5. Usage in the Real World

In the Java world, we benefit from Chain of Responsibility every day. One such classic example is Servlet Filters in Java that allow multiple filters to process an HTTP request. Though in that case, each filter invokes the chain instead of the next filter.

Let’s take a look at the code snippet below for better understanding of this pattern in Servlet Filters:

public class CustomFilter implements Filter {

    public void doFilter(
      ServletRequest request,
      ServletResponse response,
      FilterChain chain)
      throws IOException, ServletException {

        // process the request

        // pass the request (i.e. the command) along the filter chain
        chain.doFilter(request, response);
    }
}

As seen in the code snippet above, we need to invoke FilterChain’s doFilter method in order to pass the request on to the next processor in the chain.

6. Disadvantages

And now that we’ve seen how interesting Chain of Responsibility is, let’s keep in mind some drawbacks:

  • It can get broken easily:
    • if a processor fails to call the next processor, the command gets dropped
    • if a processor calls the wrong processor, it can lead to a cycle
  • It can create deep stack traces, which can affect performance
  • It can lead to duplicate code across processors, increasing maintenance

7. Conclusion

In this article, we talked about Chain of Responsibility and its strengths and weaknesses with the help of a chain to authorize incoming authentication requests.

And, as always, the source code can be found over on GitHub.

Java Weekly, Issue 220


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> Testing auto-configurations with Spring Boot 2.0 [spring.io]

Cool – Spring Boot 2.0 provides a suite of new test helpers for easily configuring an ApplicationContext to simulate auto-configuration test scenarios.

>> How to customize the Jackson ObjectMapper used by Hibernate-Types [vladmihalcea.com]

The new configuration mechanism allows customizing an ObjectMapper used by hibernate-types and some useful other behaviors.

>> Feature lifecycle in Java [blog.frankel.ch]

A quick reminder what feature lifecycle in Java looks like.

>> Improve Launch Times On Java 10 With Application Class-Data Sharing [blog.codefx.org]

Application Class-Data Sharing is another interesting feature of Java 10 which can turbocharge Java applications start-up time.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Team Building Lunch [dilbert.com]

>> Boss The Bottleneck [dilbert.com]

>> Workload [dilbert.com]

4. Pick of the Week

>> The Evolution of Code Deploys at Reddit [redditblog.com]

Hamcrest Text Matchers


1. Overview 

In this tutorial, we’ll explore Hamcrest Text Matchers.

We discussed Hamcrest Matchers in general before in Testing with Hamcrest; in this tutorial, we’ll focus on Text Matchers only.

2. Maven Configuration

First, we need to add the following dependency to our pom.xml:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>java-hamcrest</artifactId>
    <version>2.0.0.0</version>
    <scope>test</scope>
</dependency>

The latest version of java-hamcrest can be downloaded from Maven Central.

Now, we’ll dive right into Hamcrest Text Matchers.

3. Text Equality Matchers

We can, of course, check if two Strings are equal using the standard isEqual() matcher.

In addition, we have two matchers that are specific to String types: equalToIgnoringCase() and equalToIgnoringWhiteSpace().

Let’s check if two Strings are equal – ignoring case:

@Test
public void whenTwoStringsAreEqual_thenCorrect() {
    String first = "hello";
    String second = "Hello";

    assertThat(first, equalToIgnoringCase(second));
}

We can also check if two Strings are equal – ignoring leading and trailing whitespace:

@Test
public void whenTwoStringsAreEqualWithWhiteSpace_thenCorrect() {
    String first = "hello";
    String second = "   Hello   ";

    assertThat(first, equalToIgnoringWhiteSpace(second));
}

4. Empty Text Matchers

We can check if a String is blank, meaning it contains only whitespace, by using the blankString() and blankOrNullString() matchers:

@Test
public void whenStringIsBlank_thenCorrect() {
    String first = "  ";
    String second = null;
    
    assertThat(first, blankString());
    assertThat(first, blankOrNullString());
    assertThat(second, blankOrNullString());
}

On the other hand, if we want to verify that a String is empty, we can use the emptyString() and emptyOrNullString() matchers:

@Test
public void whenStringIsEmpty_thenCorrect() {
    String first = "";
    String second = null;

    assertThat(first, emptyString());
    assertThat(first, emptyOrNullString());
    assertThat(second, emptyOrNullString());
}

5. Pattern Matchers

We can also check if a given text matches a regular expression using the matchesPattern() function:

@Test
public void whenStringMatchPattern_thenCorrect() {
    String first = "hello";

    assertThat(first, matchesPattern("[a-z]+"));
}

6. Sub-String Matchers

We can determine if a text contains another sub-text by using the containsString() function or containsStringIgnoringCase():

@Test
public void whenVerifyStringContains_thenCorrect() {
    String first = "hello";

    assertThat(first, containsString("lo"));
    assertThat(first, containsStringIgnoringCase("EL"));
}

If we expect the sub-strings to be in a specific order, we can call the stringContainsInOrder() matcher:

@Test
public void whenVerifyStringContainsInOrder_thenCorrect() {
    String first = "hello";
    
    assertThat(first, stringContainsInOrder("e","l","o"));
}

Next, let’s see how to check that a String starts with a given String:

@Test
public void whenVerifyStringStartsWith_thenCorrect() {
    String first = "hello";

    assertThat(first, startsWith("he"));
    assertThat(first, startsWithIgnoringCase("HEL"));
}

And finally, we can check if a String ends with a specified String:

@Test
public void whenVerifyStringEndsWith_thenCorrect() {
    String first = "hello";

    assertThat(first, endsWith("lo"));
    assertThat(first, endsWithIgnoringCase("LO"));
}

7. Conclusion

In this quick tutorial, we explored Hamcrest Text Matchers.

As always, the full source code for the examples can be found over on GitHub.


Hamcrest File Matchers


1. Overview 

In this tutorial, we’ll discuss Hamcrest File Matchers.

We discussed Hamcrest Matchers in general in the previous Testing with Hamcrest article. In the next sections, we’ll focus only on File Matchers.

2. Maven Configuration

First, we need to add the following dependency to our pom.xml:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>java-hamcrest</artifactId>
    <version>2.0.0.0</version>
    <scope>test</scope>
</dependency>

The latest version of java-hamcrest can be downloaded from Maven Central.

Let’s continue with exploring the Hamcrest File Matchers.

3. File Properties

Hamcrest provides several matchers that verify commonly used File properties.

Let’s see how we can verify the File name using aFileNamed() combined with a String Matcher:

@Test
public void whenVerifyingFileName_thenCorrect() {
    File file = new File("src/test/resources/test1.in");
 
    assertThat(file, aFileNamed(equalToIgnoringCase("test1.in")));
}

We can also assess the file path – again in combination with a String Matcher:

@Test
public void whenVerifyingFilePath_thenCorrect() {
    File file = new File("src/test/resources/test1.in");
    
    assertThat(file, aFileWithCanonicalPath(containsString("src/test/resources")));
    assertThat(file, aFileWithAbsolutePath(containsString("src/test/resources")));
}

Let’s also see a file’s size – in bytes:

@Test
public void whenVerifyingFileSize_thenCorrect() {
    File file = new File("src/test/resources/test1.in");

    assertThat(file, aFileWithSize(11));
    assertThat(file, aFileWithSize(greaterThan(1L)));
}

Finally, we can check if a File is readable and writable:

@Test
public void whenVerifyingFileIsReadableAndWritable_thenCorrect() {
    File file = new File("src/test/resources/test1.in");

    assertThat(file, aReadableFile());
    assertThat(file, aWritableFile());        
}

4. Existing File Matcher

If we want to verify that a File or directory exists, we can use the anExistingFile() or anExistingDirectory() matchers:

@Test
public void whenVerifyingFileOrDirExist_thenCorrect() {
    File file = new File("src/test/resources/test1.in");
    File dir = new File("src/test/resources");
    
    assertThat(file, anExistingFile());
    assertThat(dir, anExistingDirectory());
    assertThat(file, anExistingFileOrDirectory());
    assertThat(dir, anExistingFileOrDirectory());
}

The anExistingFileOrDirectory() matcher that combines the two is also available.


5. Conclusion

In this quick article, we went through Hamcrest File Matchers and their use.

As always, the full source code for the examples is available over on GitHub.

Guide to Externalizable Interface

1. Introduction

In this tutorial, we’ll have a quick look at Java’s java.io.Externalizable interface. The main goal of this interface is to facilitate custom serialization and deserialization.

Before we go ahead, make sure you check out the serialization in Java article. The next section covers how to serialize a Java object with this interface.

After that, we’re going to discuss the key differences compared to the java.io.Serializable interface.

2. The Externalizable Interface

Externalizable extends the java.io.Serializable marker interface. Any class that implements the Externalizable interface must implement the writeExternal() and readExternal() methods. That way, we can change the JVM’s default serialization behavior.

2.1. Serialization

Let’s have a look at this simple example:

public class Country implements Externalizable {
  
    private static final long serialVersionUID = 1L;
  
    private String name;
    private int code;
  
    // getters, setters
  
    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(code);
    }
  
    @Override
    public void readExternal(ObjectInput in) 
      throws IOException, ClassNotFoundException {
        this.name = in.readUTF();
        this.code = in.readInt();
    }
}

Here, we’ve defined a class Country that implements the Externalizable interface and implements the two methods mentioned above.

In the writeExternal() method, we’re adding the object’s properties to the ObjectOutput stream. The stream provides standard methods like writeUTF() for String and writeInt() for int values.

Next, for deserializing the object, we’re reading from the ObjectInput stream using the readUTF() and readInt() methods, reading the properties in the exact same order in which they were written.

It’s a good practice to declare the serialVersionUID manually. If it’s absent, the JVM will generate one automatically.

The automatically generated number is compiler-dependent, which means it may cause an unexpected InvalidClassException.

Let’s test the behavior we implemented above:

@Test
public void whenSerializing_thenUseExternalizable() 
  throws IOException, ClassNotFoundException {
       
    Country c = new Country();
    c.setCode(374);
    c.setName("Armenia");
   
    FileOutputStream fileOutputStream
     = new FileOutputStream(OUTPUT_FILE);
    ObjectOutputStream objectOutputStream
     = new ObjectOutputStream(fileOutputStream);
    c.writeExternal(objectOutputStream);
   
    objectOutputStream.flush();
    objectOutputStream.close();
    fileOutputStream.close();
   
    FileInputStream fileInputStream
     = new FileInputStream(OUTPUT_FILE);
    ObjectInputStream objectInputStream
     = new ObjectInputStream(fileInputStream);
   
    Country c2 = new Country();
    c2.readExternal(objectInputStream);
   
    objectInputStream.close();
    fileInputStream.close();
   
    assertTrue(c2.getCode() == c.getCode());
    assertTrue(c2.getName().equals(c.getName()));
}

In this example, we’re first creating a Country object and writing it to a file. Then, we’re deserializing the object from the file and verifying the values are correct.

Printing the c2 object gives the following output:

Country{name='Armenia', code=374}

This shows we’ve successfully deserialized the object.
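
Note that the Country class shown earlier doesn’t include a toString(); the printed form above assumes an override along these lines (a minimal sketch):

@Override
public String toString() {
    return "Country{name='" + name + "', code=" + code + "}";
}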

2.2. Inheritance

When a class implements the Serializable interface, the JVM automatically collects the fields from its subclasses as well and makes them serializable.

Keep in mind that we can apply this to Externalizable as well. We just need to implement the read/write methods for every sub-class of the inheritance hierarchy.

Let’s look at the Region class below which extends our Country class from the previous section:

public class Region extends Country implements Externalizable {
 
    private static final long serialVersionUID = 1L;
 
    private String climate;
    private Double population;
 
    // getters, setters
 
    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        super.writeExternal(out);
        out.writeUTF(climate);
    }
 
    @Override
    public void readExternal(ObjectInput in) 
      throws IOException, ClassNotFoundException {
 
        super.readExternal(in);
        this.climate = in.readUTF();
    }
}

Here, we added two additional properties and serialized only the first one.

Note that we also called super.writeExternal(out), super.readExternal(in) within serializer methods to save/restore the parent class fields as well.

Let’s run the unit test with the following data:

Region r = new Region();
r.setCode(374);
r.setName("Armenia");
r.setClimate("Mediterranean");
r.setPopulation(120.000);

Here’s the deserialized object:

Region{
  country='Country{
    name='Armenia',
    code=374}'
  climate='Mediterranean', 
  population=null
}

Notice that since we didn’t serialize the population field in the Region class, the value of that property is null.

3. Externalizable vs Serializable

Let’s go through the key differences between the two interfaces:

  • Serialization Responsibility 

The key difference here is how we handle the serialization process. When a class implements the java.io.Serializable interface, the JVM takes full responsibility for serializing the class instance. With Externalizable, it’s the programmer who takes care of the entire serialization and deserialization process.

  • Use Case

If we need to serialize the entire object, the Serializable interface is a better fit. On the other hand, for custom serialization, we can control the process using Externalizable.

  • Performance

The java.io.Serializable interface uses reflection and metadata, which causes relatively slow performance. By comparison, the Externalizable interface gives us full control over the serialization process.

  • Reading Order

While using Externalizable, it’s mandatory to read all the field states in the exact order in which they were written. Otherwise, we’ll get an exception.

For example, if we change the reading order of the code and name properties in the Country class, a java.io.EOFException will be thrown.
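
To illustrate, here’s a hypothetical readExternal() with the reads swapped (a sketch, not the implementation from the earlier section):

@Override
public void readExternal(ObjectInput in) 
  throws IOException, ClassNotFoundException {
    this.code = in.readInt(); // consumes the UTF length prefix and the first bytes of the name
    this.name = in.readUTF(); // misreads leftover bytes as a length and runs off the stream
}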

Meanwhile, the Serializable interface doesn’t have that requirement.

  • Custom Serialization

We can achieve custom serialization with the Serializable interface by marking a field with the transient keyword. The JVM won’t serialize that field, but the deserialized object will still contain it, initialized with its type’s default value. That’s why it’s a good practice to use Externalizable for custom serialization.
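
For instance, in a hypothetical Serializable variant of our Country class, marking code as transient means the deserialized copy always comes back with the field’s default value:

public class Country implements Serializable {

    private String name;
    private transient int code; // skipped by default serialization; restored as 0

    // getters and setters
}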

4. Conclusion

In this short guide to the Externalizable interface, we discussed its key features and advantages and demonstrated simple examples of its use. We also compared it with the Serializable interface.

As usual, the full source code of the tutorial is available over on GitHub.

Session Attributes in Spring MVC

1. Overview

When developing web applications, we often need to refer to the same attributes in several views. For example, we may have shopping cart contents that need to be displayed on multiple pages.

A good location to store those attributes is in the user’s session.

In this tutorial, we’ll focus on a simple example and examine 2 different strategies for working with a session attribute:

  • Using a scoped proxy
  • Using the @SessionAttributes annotation

2. Maven Setup

We’ll use Spring Boot starters to bootstrap our project and bring in all necessary dependencies.

Our setup requires a parent declaration, web starter, and thymeleaf starter.

We’ll also include the spring test starter to provide some additional utility in our unit tests:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.RELEASE</version>
    <relativePath/>
</parent>
 
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
     </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

The most recent versions of these dependencies can be found on Maven Central.

3. Example Use Case

Our example will implement a simple “TODO” application. We’ll have a form for creating instances of TodoItem and a list view that displays all TodoItems.

If we create a TodoItem using the form, subsequent accesses of the form will be prepopulated with the values of the most recently added TodoItem. We’ll use this feature to demonstrate how to “remember” form values that are stored in session scope.

Our 2 model classes are implemented as simple POJOs:

public class TodoItem {

    private String description;
    private LocalDateTime createDate;

    // getters and setters
}

public class TodoList extends ArrayDeque<TodoItem> {

}

Our TodoList class extends ArrayDeque to give us convenient access to the most recently added item via the peekLast method.

We’ll need 2 controller classes: 1 for each of the strategies we’ll look at. They’ll have subtle differences but the core functionality will be represented in both. Each will have 3 @RequestMappings:

  • @GetMapping(“/form”) – This method will be responsible for initializing the form and rendering the form view. The method will prepopulate the form with the most recently added TodoItem if the TodoList is not empty.
  • @PostMapping(“/form”) – This method will be responsible for adding the submitted TodoItem to the TodoList and redirecting to the list URL.
  • @GetMapping(“/todos.html”) – This method will simply add the TodoList to the Model for display and render the list view.

4. Using a Scoped Proxy

4.1. Setup

In this setup, our TodoList is configured as a session-scoped @Bean that is backed by a proxy. The fact that the @Bean is a proxy means that we are able to inject it into our singleton-scoped @Controller.

Since there is no session when the context initializes, Spring will create a proxy of TodoList to inject as a dependency. The target instance of TodoList will be instantiated as needed when required by requests.

For a more in-depth discussion of bean scopes in Spring, refer to our article on the topic.

First, we define our bean within a @Configuration class:

@Bean
@Scope(
  value = WebApplicationContext.SCOPE_SESSION, 
  proxyMode = ScopedProxyMode.TARGET_CLASS)
public TodoList todos() {
    return new TodoList();
}

Next, we declare the bean as a dependency for the @Controller and inject it just as we would any other dependency:

@Controller
@RequestMapping("/scopedproxy")
public class TodoControllerWithScopedProxy {

    private TodoList todos;

    // constructor and request mappings
}

Finally, using the bean in a request simply involves calling its methods:

@GetMapping("/form")
public String showForm(Model model) {
    if (!todos.isEmpty()) {
        model.addAttribute("todo", todos.peekLast());
    } else {
        model.addAttribute("todo", new TodoItem());
    }
    return "scopedproxyform";
}
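
The corresponding @PostMapping isn’t shown here, but a minimal sketch is straightforward, since the session-scoped bean is already injected into the controller (the redirect URL is an assumption consistent with the mappings listed earlier):

@PostMapping("/form")
public RedirectView create(@ModelAttribute TodoItem todo) {
    todo.setCreateDate(LocalDateTime.now());
    todos.add(todo);
    return new RedirectView("/scopedproxy/todos.html");
}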

4.2. Unit Testing

In order to test our implementation using the scoped proxy, we first configure a SimpleThreadScope. This will ensure that our unit tests accurately simulate runtime conditions of the code we are testing.

First, we define a TestConfig and a CustomScopeConfigurer:

@Configuration
public class TestConfig {

    @Bean
    public CustomScopeConfigurer customScopeConfigurer() {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        configurer.addScope("session", new SimpleThreadScope());
        return configurer;
    }
}

Now we can start by testing that an initial request of the form contains an uninitialized TodoItem:

@RunWith(SpringRunner.class) 
@SpringBootTest
@AutoConfigureMockMvc
@Import(TestConfig.class) 
public class TodoControllerWithScopedProxyTest {

    // ...

    @Test
    public void whenFirstRequest_thenContainsUninitializedTodo() throws Exception {
        MvcResult result = mockMvc.perform(get("/scopedproxy/form"))
          .andExpect(status().isOk())
          .andExpect(model().attributeExists("todo"))
          .andReturn();

        TodoItem item = (TodoItem) result.getModelAndView().getModel().get("todo");
 
        assertTrue(StringUtils.isEmpty(item.getDescription()));
    }
}

We can also confirm that our submit issues a redirect and that a subsequent form request is prepopulated with the newly added TodoItem:

@Test
public void whenSubmit_thenSubsequentFormRequestContainsMostRecentTodo() throws Exception {
    mockMvc.perform(post("/scopedproxy/form")
      .param("description", "newtodo"))
      .andExpect(status().is3xxRedirection())
      .andReturn();

    MvcResult result = mockMvc.perform(get("/scopedproxy/form"))
      .andExpect(status().isOk())
      .andExpect(model().attributeExists("todo"))
      .andReturn();
    TodoItem item = (TodoItem) result.getModelAndView().getModel().get("todo");
 
    assertEquals("newtodo", item.getDescription());
}

4.3. Discussion

A key feature of using the scoped proxy strategy is that it has no impact on request mapping method signatures. This keeps readability at a very high level compared to the @SessionAttributes strategy.

It can be helpful to recall that controllers have singleton scope by default.

This is the reason why we must use a proxy instead of simply injecting a non-proxied session-scoped bean. We can’t inject a bean with a narrower scope into a bean with a wider scope.

Attempting to do so, in this case, would trigger an exception with a message containing: Scope ‘session’ is not active for the current thread.

If we’re willing to define our controller with session scope, we could avoid specifying a proxyMode. This can have disadvantages, especially if the controller is expensive to create because a controller instance would have to be created for each user session.

Note that TodoList is available to other components for injection. This may be a benefit or a disadvantage depending on the use case. If making the bean available to the entire application is problematic, the instance can be scoped to the controller instead using @SessionAttributes as we’ll see in the next example.

5. Using the @SessionAttributes Annotation

5.1. Setup

In this setup, we don’t define TodoList as a Spring-managed @Bean. Instead, we declare it as a @ModelAttribute and specify the @SessionAttributes annotation to scope it to the session for the controller.

The first time our controller is accessed, Spring will instantiate an instance and place it in the Model. Since we also declare the bean in @SessionAttributes, Spring will store the instance.

For a more in-depth discussion of @ModelAttribute in Spring, refer to our article on the topic.

First, we declare our bean by providing a method on the controller and we annotate the method with @ModelAttribute:

@ModelAttribute("todos")
public TodoList todos() {
    return new TodoList();
}

Next, we inform the controller to treat our TodoList as session-scoped by using @SessionAttributes:

@Controller
@RequestMapping("/sessionattributes")
@SessionAttributes("todos")
public class TodoControllerWithSessionAttributes {
    // ... other methods
}

Finally, to use the bean within a request, we provide a reference to it in the method signature of a @RequestMapping:

@GetMapping("/form")
public String showForm(
  Model model,
  @ModelAttribute("todos") TodoList todos) {
 
    if (!todos.isEmpty()) {
        model.addAttribute("todo", todos.peekLast());
    } else {
        model.addAttribute("todo", new TodoItem());
    }
    return "sessionattributesform";
}

In the @PostMapping method, we inject RedirectAttributes and call addFlashAttribute before returning our RedirectView. This is an important difference in implementation compared to our first example:

@PostMapping("/form")
public RedirectView create(
  @ModelAttribute TodoItem todo, 
  @ModelAttribute("todos") TodoList todos, 
  RedirectAttributes attributes) {
    todo.setCreateDate(LocalDateTime.now());
    todos.add(todo);
    attributes.addFlashAttribute("todos", todos);
    return new RedirectView("/sessionattributes/todos.html");
}

Spring uses a specialized RedirectAttributes implementation of Model for redirect scenarios to support the encoding of URL parameters. During a redirect, any attributes stored on the Model would normally only be available to the framework if they were included in the URL.

By using addFlashAttribute we are telling the framework that we want our TodoList to survive the redirect without needing to encode it in the URL.

5.2. Unit Testing

The unit testing of the form view controller method is identical to the test we looked at in our first example. The test of the @PostMapping, however, is a little different because we need to access the flash attributes in order to verify the behavior:

@Test
public void whenTodoExists_thenSubsequentFormRequestContainsMostRecentTodo() throws Exception {
    FlashMap flashMap = mockMvc.perform(post("/sessionattributes/form")
      .param("description", "newtodo"))
      .andExpect(status().is3xxRedirection())
      .andReturn().getFlashMap();

    MvcResult result = mockMvc.perform(get("/sessionattributes/form")
      .sessionAttrs(flashMap))
      .andExpect(status().isOk())
      .andExpect(model().attributeExists("todo"))
      .andReturn();
    TodoItem item = (TodoItem) result.getModelAndView().getModel().get("todo");
 
    assertEquals("newtodo", item.getDescription());
}

5.3. Discussion

The @ModelAttribute and @SessionAttributes strategy for storing an attribute in the session is a straightforward solution that requires no additional context configuration or Spring-managed @Beans.

Unlike our first example, it’s necessary to inject TodoList in the @RequestMapping methods.

In addition, we must make use of flash attributes for redirect scenarios.

6. Conclusion

In this article, we looked at using scoped proxies and @SessionAttributes as 2 strategies for working with session attributes in Spring MVC. Note that in this simple example, any attributes stored in session will only survive for the life of the session.

If we needed to persist attributes between server restarts or session timeouts, we could consider using Spring Session to transparently handle saving the information. Have a look at our article on Spring Session for more information.

As always, all code used in this article’s available over on GitHub.

Introduction to Apache Curator

1. Introduction

Apache Curator is a Java client for Apache Zookeeper, the popular coordination service for distributed applications.

In this tutorial, we’ll introduce some of the most relevant features provided by Curator:

  • Connection Management – managing connections and retry policies
  • Async – enhancing existing client by adding async capabilities and the use of Java 8 lambdas
  • Configuration Management – having a centralized configuration for the system
  • Strongly-Typed Models – working with typed models
  • Recipes – implementing leader election, distributed locks or counters

2. Prerequisites

To start with, it’s recommended to take a quick look at Apache Zookeeper and its features.

For this tutorial, we assume that there’s already a standalone Zookeeper instance running on 127.0.0.1:2181; here are instructions on how to install and run it, if you’re just getting started.

First, we’ll need to add the curator-x-async dependency to our pom.xml:

<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-x-async</artifactId>
    <version>4.0.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>

The latest version of Apache Curator 4.X.X has a hard dependency on Zookeeper 3.5.X, which is still in beta right now.

So, in this article, we’re going to use the latest stable Zookeeper 3.4.11 instead.

So we need to exclude the Zookeeper dependency and add the dependency for our Zookeeper version to our pom.xml:

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.11</version>
</dependency>

For more information about compatibility, please refer to this link.

3. Connection Management

The basic use case of Apache Curator is connecting to a running Apache Zookeeper instance.

The tool provides a factory to build connections to Zookeeper using retry policies:

int sleepMsBetweenRetries = 100;
int maxRetries = 3;
RetryPolicy retryPolicy = new RetryNTimes(
  maxRetries, sleepMsBetweenRetries);

CuratorFramework client = CuratorFrameworkFactory
  .newClient("127.0.0.1:2181", retryPolicy);
client.start();
 
assertThat(client.checkExists().forPath("/")).isNotNull();

In this quick example, we’ll retry 3 times and will wait 100 ms between retries in case of connectivity issues.

Once connected to Zookeeper using the CuratorFramework client, we can now browse paths, get/set data and essentially interact with the server.

4. Async

The Curator Async module wraps the above CuratorFramework client to provide non-blocking capabilities using the CompletionStage Java 8 API.

Let’s see what the previous example looks like using the Async wrapper:

int sleepMsBetweenRetries = 100;
int maxRetries = 3;
RetryPolicy retryPolicy 
  = new RetryNTimes(maxRetries, sleepMsBetweenRetries);

CuratorFramework client = CuratorFrameworkFactory
  .newClient("127.0.0.1:2181", retryPolicy);

client.start();
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);

AtomicBoolean exists = new AtomicBoolean(false);

async.checkExists()
  .forPath("/")
  .thenAcceptAsync(s -> exists.set(s != null));

await().until(() -> assertThat(exists.get()).isTrue());

Now, the checkExists() operation works in asynchronous mode, not blocking the main thread. We can also chain actions one after another using the thenAcceptAsync() method, which uses the CompletionStage API.

5. Configuration Management

In a distributed environment, one of the most common challenges is managing shared configuration among many applications. We can use Zookeeper as a data store to keep our configuration in.

Let’s see an example using Apache Curator to get and set data:

CuratorFramework client = newClient();
client.start();
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
String key = getKey();
String expected = "my_value";

client.create().forPath(key);

async.setData()
  .forPath(key, expected.getBytes());

AtomicBoolean isEquals = new AtomicBoolean();
async.getData()
  .forPath(key)
  .thenAccept(data -> isEquals.set(new String(data).equals(expected)));

await().until(() -> assertThat(isEquals.get()).isTrue());

In this example, we create the node path, set the data in Zookeeper, and then read it back, checking that the value is the same. The key field could be a node path like /config/dev/my_key.
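
Note that newClient() and getKey() are helper methods of our test class, not Curator APIs; a plausible sketch, reusing the connection settings from the connection management example:

private CuratorFramework newClient() {
    return CuratorFrameworkFactory.newClient(
      "127.0.0.1:2181", new RetryNTimes(3, 100));
}

private String getKey() {
    return "/config/dev/my_key";
}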

5.1. Watchers

Another interesting feature in Zookeeper is the ability to watch keys or nodes. It allows us to listen to changes in the configuration and update our applications without needing to redeploy.

Let’s see what the above example looks like when using watchers:

CuratorFramework client = newClient();
client.start();
AsyncCuratorFramework async = AsyncCuratorFramework.wrap(client);
String key = getKey();
String expected = "my_value";

async.create().forPath(key);

List<String> changes = new ArrayList<>();

async.watched()
  .getData()
  .forPath(key)
  .event()
  .thenAccept(watchedEvent -> {
    try {
        changes.add(new String(client.getData()
          .forPath(watchedEvent.getPath())));
    } catch (Exception e) {
        // fail ...
    }});

// Set data value for our key
async.setData()
  .forPath(key, expected.getBytes());

await()
  .until(() -> assertThat(changes.size()).isEqualTo(1));

We configure the watcher, set the data, and then confirm the watched event was triggered. We can watch one node or a set of nodes at once.

6. Strongly Typed Models

Zookeeper primarily works with byte arrays, so we need to serialize and deserialize our data. This allows us some flexibility to work with any serializable instance, but it can be hard to maintain.

To help here, Curator adds the concept of typed models which delegates the serialization/deserialization and allows us to work with our types directly. Let’s see how that works.

First, we need a serializer framework. Curator recommends using the Jackson implementation, so let’s add the Jackson dependency to our pom.xml:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.4</version>
</dependency>

Now, let’s try to persist our custom class HostConfig:

public class HostConfig {
    private String hostname;
    private int port;

    // constructor, getters and setters
}

We need to provide the model specification mapping from the HostConfig class to a path, and use the modeled framework wrapper provided by Apache Curator:

ModelSpec<HostConfig> mySpec = ModelSpec.builder(
  ZPath.parseWithIds("/config/dev"), 
  JacksonModelSerializer.build(HostConfig.class))
  .build();

CuratorFramework client = newClient();
client.start();

AsyncCuratorFramework async 
  = AsyncCuratorFramework.wrap(client);
ModeledFramework<HostConfig> modeledClient 
  = ModeledFramework.wrap(async, mySpec);

modeledClient.set(new HostConfig("host-name", 8080));

modeledClient.read()
  .whenComplete((value, e) -> {
     if (e != null) {
          fail("Cannot read host config", e);
     } else {
          assertThat(value).isNotNull();
          assertThat(value.getHostname()).isEqualTo("host-name");
          assertThat(value.getPort()).isEqualTo(8080);
     }
   });

When reading the path /config/dev, the whenComplete() callback will receive the HostConfig instance stored in Zookeeper.

7. Recipes

Zookeeper provides guidelines for implementing high-level solutions, or recipes, such as leader election, distributed locks or shared counters.

Apache Curator provides an implementation for most of these recipes. To see the full list, visit the Curator Recipes documentation.

All of these recipes are available in a separate module:

<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.1</version>
</dependency>

Let’s jump right in and start understanding these with some simple examples.

7.1. Leader Election

In a distributed environment, we may need one master or leader node to coordinate a complex job.

This is what the usage of the Leader Election recipe in Curator looks like:

CuratorFramework client = newClient();
client.start();
LeaderSelector leaderSelector = new LeaderSelector(client, 
  "/mutex/select/leader/for/job/A", 
  new LeaderSelectorListener() {
      @Override
      public void stateChanged(
        CuratorFramework client, 
        ConnectionState newState) {
      }

      @Override
      public void takeLeadership(
        CuratorFramework client) throws Exception {
      }
  });

// join the members group
leaderSelector.start();

// wait until the job A is done among all members
leaderSelector.close();

When we start the leader selector, our node joins a members group within the path /mutex/select/leader/for/job/A. Once our node becomes the leader, the takeLeadership method will be invoked, and, as the leader, we can carry out the job.

7.2. Shared Locks

The Shared Lock recipe is about having a fully distributed lock:

CuratorFramework client = newClient();
client.start();
InterProcessSemaphoreMutex sharedLock = new InterProcessSemaphoreMutex(
  client, "/mutex/process/A");

sharedLock.acquire();

// do process A

sharedLock.release();

When we acquire the lock, Zookeeper ensures that there’s no other application acquiring the same lock at the same time.
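
In real code, it’s safer to release the lock in a finally block, so an exception thrown inside the critical section can’t leave the lock held:

sharedLock.acquire();
try {
    // do process A
} finally {
    sharedLock.release();
}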

7.3. Counters

The Counters recipe coordinates a shared Integer among all the clients:

CuratorFramework client = newClient();
client.start();

SharedCount counter = new SharedCount(client, "/counters/A", 0);
counter.start();

counter.setCount(counter.getCount() + 1);

assertThat(counter.getCount()).isEqualTo(1);

In this example, Zookeeper stores the Integer value in the path /counters/A and initializes the value to 0 if the path has not been created yet.
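
Note that setCount() blindly overwrites whatever value is currently stored. When several clients may update the counter concurrently, the optimistic trySetCount() variant, which only applies the change if the value hasn’t moved in the meantime, is worth considering (a sketch):

VersionedValue<Integer> current = counter.getVersionedValue();
boolean updated = counter.trySetCount(current, current.getValue() + 1);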

8. Conclusion

In this article, we’ve seen how to use Apache Curator to connect to Apache Zookeeper and take benefit of its main features.

We’ve also introduced a few of the main recipes in Curator.

As usual, sources can be found over on GitHub.

How to Make a Deep Copy of an Object in Java

1. Introduction

When we want to copy an object in Java, there’re two possibilities that we need to consider — a shallow copy and a deep copy.

The shallow copy is the approach when we only copy field values and therefore the copy might be dependent on the original object. In the deep copy approach, we make sure that all the objects in the tree are deeply copied, so the copy isn’t dependent on any earlier existing object that might ever change.

In this article, we’ll compare these two approaches and learn four methods to implement the deep copy.

2. Maven Setup

We’ll use three Maven dependencies — Gson, Jackson, and Apache Commons Lang — to test different ways of performing a deep copy.

Let’s add these dependencies to our pom.xml:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.2</version>
</dependency>
<dependency>
    <groupId>commons-lang</groupId>
    <artifactId>commons-lang</artifactId>
    <version>2.6</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.3</version>
</dependency>

The latest versions of Gson, Jackson, and Apache Commons Lang can be found on Maven Central.

3. Model

To compare different methods to copy Java objects, we’ll need two classes to work on:

class Address {

    private String street;
    private String city;
    private String country;

    // standard constructors, getters and setters
}

class User {

    private String firstName;
    private String lastName;
    private Address address;

    // standard constructors, getters and setters
}

4. Shallow Copy

A shallow copy is one in which we only copy values of fields from one object to another:

@Test
public void whenShallowCopying_thenObjectsShouldNotBeSame() {

    Address address = new Address("Downing St 10", "London", "England");
    User pm = new User("Prime", "Minister", address);
    
    User shallowCopy = new User(
      pm.getFirstName(), pm.getLastName(), pm.getAddress());

    assertThat(shallowCopy)
      .isNotSameAs(pm);
}

In this case, pm != shallowCopy, which means that they’re different objects; the problem, however, is that when we change any of the original address’s properties, this will also affect shallowCopy’s address.

We wouldn’t worry about it if Address were immutable, but it’s not:

@Test
public void whenModifyingOriginalObject_ThenCopyShouldChange() {
 
    Address address = new Address("Downing St 10", "London", "England");
    User pm = new User("Prime", "Minister", address);
    User shallowCopy = new User(
      pm.getFirstName(), pm.getLastName(), pm.getAddress());

    address.setCountry("Great Britain");
    assertThat(shallowCopy.getAddress().getCountry())
      .isEqualTo(pm.getAddress().getCountry());
}

5. Deep Copy

A deep copy is an alternative that solves this problem. Its advantage is that at least each mutable object in the object graph is recursively copied.

Since the copy isn’t dependent on any mutable object that was created earlier, it won’t get modified by accident like we saw with the shallow copy.

In the following sections, we’ll show several deep copy implementations and demonstrate this advantage.

5.1. Copy Constructor

The first implementation is based on copy constructors:

public Address(Address that) {
    this(that.getStreet(), that.getCity(), that.getCountry());
}

public User(User that) {
    this(that.getFirstName(), that.getLastName(), new Address(that.getAddress()));
}

In the above implementation of the deep copy, we haven’t created new Strings in our copy constructor because String is an immutable class.

As a result, they can’t be modified by accident. Let’s see if this works:

@Test
public void whenModifyingOriginalObject_thenCopyShouldNotChange() {
    Address address = new Address("Downing St 10", "London", "England");
    User pm = new User("Prime", "Minister", address);
    User deepCopy = new User(pm);

    address.setCountry("Great Britain");
    assertNotEquals(
      pm.getAddress().getCountry(), 
      deepCopy.getAddress().getCountry());
}

5.2. Cloneable Interface

The second implementation is based on the clone method inherited from Object. It’s protected, but we need to override it as public.

We’ll also add a marker interface, Cloneable, to the classes to indicate that the classes are actually cloneable.

Let’s add the clone() method to the Address class:

@Override
public Object clone() {
    try {
        return (Address) super.clone();
    } catch (CloneNotSupportedException e) {
        return new Address(this.street, this.getCity(), this.getCountry());
    }
}

And now let’s implement clone() for the User class:

@Override
public Object clone() {
    User user = null;
    try {
        user = (User) super.clone();
    } catch (CloneNotSupportedException e) {
        user = new User(
          this.getFirstName(), this.getLastName(), this.getAddress());
    }
    user.address = (Address) this.address.clone();
    return user;
}

Note that the super.clone() call returns a shallow copy of an object, but we set deep copies of mutable fields manually, so the result is correct:

@Test
public void whenModifyingOriginalObject_thenCloneCopyShouldNotChange() {
    Address address = new Address("Downing St 10", "London", "England");
    User pm = new User("Prime", "Minister", address);
    User deepCopy = (User) pm.clone();

    address.setCountry("Great Britain");

    assertThat(deepCopy.getAddress().getCountry())
      .isNotEqualTo(pm.getAddress().getCountry());
}

6. External Libraries

The above examples look easy, but sometimes they don’t apply as a solution when we can’t add an additional constructor or override the clone method.

This might happen when we don’t own the code, or when the object graph is so complicated that we wouldn’t finish our project on time if we focused on writing additional constructors or implementing the clone method on all classes in the object graph.

What then? In this case, we can use an external library. To achieve a deep copy, we can serialize an object and then deserialize it to a new object.

Let’s look at a few examples.

6.1. Apache Commons Lang

Apache Commons Lang has SerializationUtils#clone, which performs a deep copy when all classes in the object graph implement the Serializable interface.

If the method encounters a class that isn’t serializable, it’ll fail and throw an unchecked SerializationException.

Because of that, we need to add the Serializable interface to our classes:

@Test
public void whenModifyingOriginalObject_thenCommonsCloneShouldNotChange() {
    Address address = new Address("Downing St 10", "London", "England");
    User pm = new User("Prime", "Minister", address);
    User deepCopy = (User) SerializationUtils.clone(pm);

    address.setCountry("Great Britain");

    assertThat(deepCopy.getAddress().getCountry())
      .isNotEqualTo(pm.getAddress().getCountry());
}

6.2. JSON Serialization with Gson

The other way to serialize is to use JSON serialization. Gson is a library that’s used for converting objects into JSON and vice versa.

Unlike Apache Commons Lang, Gson does not need the Serializable interface to make the conversions.

Let’s have a quick look at an example:

@Test
public void whenModifyingOriginalObject_thenGsonCloneShouldNotChange() {
    Address address = new Address("Downing St 10", "London", "England");
    User pm = new User("Prime", "Minister", address);
    Gson gson = new Gson();
    User deepCopy = gson.fromJson(gson.toJson(pm), User.class);

    address.setCountry("Great Britain");

    assertThat(deepCopy.getAddress().getCountry())
      .isNotEqualTo(pm.getAddress().getCountry());
}

6.3. JSON Serialization with Jackson

Jackson is another library that supports JSON serialization. This implementation will be very similar to the one using Gson, but we need to add the default constructor to our classes.
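
Concretely, that means User and Address each need a no-argument constructor; a sketch for User (Address is analogous):

public class User {

    public User() {
        // required by Jackson to instantiate the object during deserialization
    }

    // parameterized constructor, getters and setters as before
}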

Let’s see an example:

@Test
public void whenModifyingOriginalObject_thenJacksonCopyShouldNotChange() throws IOException {
    Address address = new Address("Downing St 10", "London", "England");
    User pm = new User("Prime", "Minister", address);
    ObjectMapper objectMapper = new ObjectMapper();
    
    User deepCopy = objectMapper
      .readValue(objectMapper.writeValueAsString(pm), User.class);

    address.setCountry("Great Britain");

    assertThat(deepCopy.getAddress().getCountry())
      .isNotEqualTo(pm.getAddress().getCountry());
}

7. Conclusion

Which implementation should we use when making a deep copy? The final decision will often depend on the classes we’ll copy and whether we own the classes in the object graph.

As always, the complete code samples for this tutorial can be found over on GitHub.

Introduction to Akka Actors in Java

1. Introduction

Akka is an open-source library that helps to easily develop concurrent and distributed applications using Java or Scala by leveraging the Actor Model.

In this tutorial, we’ll present the basic features like defining actors, how they communicate and how we can kill them. In the final notes, we’ll also note some best practices when working with Akka.

2. The Actor Model

The Actor Model isn’t new to the computer science community. It was first introduced by Carl Eddie Hewitt in 1973, as a theoretical model for handling concurrent computation.

It started to show its practical applicability when the software industry started to realize the pitfalls of implementing concurrent and distributed applications.

An actor represents an independent computation unit. Some important characteristics are:

  • an actor encapsulates its state and part of the application logic
  • actors interact only through asynchronous messages and never through direct method calls
  • each actor has a unique address and a mailbox in which other actors can deliver messages
  • the actor will process all the messages in the mailbox in sequential order (the default implementation of the mailbox being a FIFO queue)
  • the actor system is organized in a tree-like hierarchy
  • an actor can create other actors, can send messages to any other actor and stop itself or any actor it has created

2.1. Advantages

Developing concurrent applications is difficult because we need to deal with synchronization, locks and shared memory. By using Akka actors, we can easily write asynchronous code without the need for locks and synchronization.

One of the advantages of using messages instead of method calls is that the sender thread won’t block to wait for a return value when it sends a message to another actor. The receiving actor will respond with the result by sending a reply message to the sender.

Another great benefit of using messages is that we don’t have to worry about synchronization in a multi-threaded environment. This is because all the messages are processed sequentially.

Another advantage of the Akka actor model is error handling. By organizing the actors in a hierarchy, each actor can notify its parent of the failure, so it can act accordingly. The parent actor can decide to stop or restart the child actors.

3. Setup

To take advantage of the Akka actors we need to add the following dependency from Maven Central:

<dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-actor_2.12</artifactId>
    <version>2.5.11</version>
</dependency>

4. Creating an Actor

As mentioned, the actors are defined in a hierarchy system. All the actors that share a common configuration will be defined by an ActorSystem.

For now, we’ll simply define an ActorSystem with the default configuration and a custom name:

ActorSystem system = ActorSystem.create("test-system");

Even though we haven’t created any actors yet, the system will already contain 3 main actors:

  • the root guardian actor having the address “/” which, as the name states, represents the root of the actor system hierarchy
  • the user guardian actor having the address “/user”. This will be the parent of all the actors we define
  • the system guardian actor having the address “/system”. This will be the parent of all the actors defined internally by the Akka system

Any Akka actor will extend the AbstractActor abstract class and implement the createReceive() method for handling the incoming messages from other actors:

public class MyActor extends AbstractActor {
    public Receive createReceive() {
        return receiveBuilder().build();
    }
}

This is the most basic actor we can create. It can receive messages from other actors and will discard them because no matching message patterns are defined in the ReceiveBuilder. We’ll talk about message pattern matching later on in this article.

Now that we’ve created our first actor we should include it in the ActorSystem:

ActorRef myActorRef 
  = system.actorOf(Props.create(MyActor.class), "my-actor");

4.1. Actor Configuration

The Props class contains the actor configuration. We can configure things like the dispatcher, the mailbox or deployment configuration. This class is immutable, thus thread-safe, so it can be shared when creating new actors.

It’s highly recommended and considered a best-practice to define the factory methods inside the actor object that will handle the creation of the Props object.

To exemplify, let’s define an actor that will do some text processing. The actor will receive a String object on which it’ll do the processing:

public class ReadingActor extends AbstractActor {
    private String text;

    public static Props props(String text) {
        return Props.create(ReadingActor.class, text);
    }
    // ...
}

Now, to create an instance of this type of actor we just use the props() factory method to pass the String argument to the constructor:

ActorRef readingActorRef = system.actorOf(
  ReadingActor.props(TEXT), "readingActor");

Now that we know how to define an actor, let’s see how they communicate inside the actor system.

5. Actor Messaging

To interact with each other, actors can send and receive messages from any other actor in the system. These messages can be any type of object, on the condition that they’re immutable.

It’s a best practice to define the messages inside the actor class. This helps us write code that’s easy to understand and makes it clear what messages an actor can handle.
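
For instance, the ReadingActor.ReadLines message used below can be defined as a static inner class (a minimal sketch, assuming the message carries no data):

public class ReadingActor extends AbstractActor {
    // ...

    public static final class ReadLines {
        // no fields, so instances are trivially immutable
    }
}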

5.1. Sending Messages

Inside the Akka actor system, messages are sent using the following methods:

  • tell()
  • ask()
  • forward()

When we want to send a message and don’t expect a response, we can use the tell() method. This is the most efficient method from a performance perspective:

readingActorRef.tell(new ReadingActor.ReadLines(), ActorRef.noSender());

The first parameter represents the message we send to the actor address readingActorRef.

The second parameter specifies who the sender is. This is useful when the actor receiving the message needs to send a response to an actor other than the sender (for example the parent of the sending actor).

Usually, we can set the second parameter to null or ActorRef.noSender(), because we don’t expect a reply. When we need a response back from an actor, we can use the ask() method:

CompletableFuture<Object> future = ask(wordCounterActorRef, 
  new WordCounterActor.CountWords(line), 1000).toCompletableFuture();

When asking for a response from an actor, a CompletionStage object is returned, so the processing remains non-blocking.
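
Since the result is a CompletionStage, we can attach a callback rather than blocking on the future (a sketch; the value is the Integer reply sent by the actor below):

future.thenAccept(numberOfWords -> 
  System.out.println("Number of words: " + numberOfWords));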

A very important fact that we must pay attention to is error handling inside the actor that will respond. To return a Future object that contains the exception, we must send a Status.Failure message to the sender actor.

This isn’t done automatically. When an actor throws an exception while processing a message, the ask() call will time out, and no reference to the exception will appear in the logs:

@Override
public Receive createReceive() {
    return receiveBuilder()
      .match(CountWords.class, r -> {
          try {
              int numberOfWords = countWordsFromLine(r.line);
              getSender().tell(numberOfWords, getSelf());
          } catch (Exception ex) {
              getSender().tell(
                new akka.actor.Status.Failure(ex), getSelf());
              throw ex;
          }
    }).build();
}

We also have the forward() method which is similar to tell(). The difference is that the original sender of the message is kept when sending the message, so the actor forwarding the message only acts as an intermediary actor:

printerActorRef.forward(
  new PrinterActor.PrintFinalResult(totalNumberOfWords), getContext());

5.2. Receiving Messages

Each actor will implement the createReceive() method, which handles all incoming messages. The receiveBuilder() acts like a switch statement, trying to match the received message to the type of messages defined:

public Receive createReceive() {
    return receiveBuilder().matchEquals("printit", p -> {
        System.out.println("The address of this actor is: " + getSelf());
    }).build();
}

When received, a message is put into a FIFO queue, so the messages are handled sequentially.

6. Killing an Actor

When we’ve finished using an actor, we can stop it by calling the stop() method from the ActorRefFactory interface:

system.stop(myActorRef);

We can use this method to terminate any child actor or the actor itself. It’s important to note that stopping is done asynchronously and that the current message processing will finish before the actor is terminated. No more incoming messages will be accepted in the actor mailbox.

By stopping a parent actor, we’ll also send a kill signal to all of the child actors that were spawned by it.

When we don’t need the actor system anymore, we can terminate it to free up all the resources and prevent any memory leaks:

Future<Terminated> terminateResponse = system.terminate();

This will stop the system guardian actors, hence all the actors defined in this Akka system.

We could also send a PoisonPill message to any actor that we want to kill:

myActorRef.tell(PoisonPill.getInstance(), ActorRef.noSender());

The PoisonPill message will be received by the actor like any other message and put into the queue. The actor will process all the messages until it gets to the PoisonPill one. Only then will the actor begin the termination process.

Another special message used for killing an actor is the Kill message. Unlike the PoisonPill, the actor will throw an ActorKilledException when processing this message:

myActorRef.tell(Kill.getInstance(), ActorRef.noSender());

7. Conclusion

In this article, we presented the basics of the Akka framework. We showed how to define actors, how they communicate with each other and how to terminate them.

We’ll conclude with some best practices when working with Akka:

  • use tell() instead of ask() when performance is a concern
  • when using ask() we should always handle exceptions by sending a Failure message
  • actors should not share any mutable state
  • an actor shouldn’t be declared within another actor
  • actors aren’t stopped automatically when they are no longer referenced. We must explicitly destroy an actor when we don’t need it anymore to prevent memory leaks
  • messages used by actors should always be immutable

As always, the source code for the article is available over on GitHub.

Multipart Uploads in Amazon S3 with Java

1. Overview

In this tutorial, we’ll see how to handle multipart uploads in Amazon S3 with AWS Java SDK.

Simply put, in a multipart upload, we split the content into smaller parts and upload each part individually. All parts are re-assembled when received.

Multipart uploads offer the following advantages:

  • Higher throughput – we can upload parts in parallel
  • Easier error recovery – we need to re-upload only the failed parts
  • Pause and resume uploads – we can upload parts at any point in time. The whole process can be paused and remaining parts can be uploaded later

Note that when using multipart upload with Amazon S3, each part except the last part must be at least 5 MB in size.

2. Maven Dependencies

Before we begin, we need to add the AWS SDK dependency in our project:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.290</version>
</dependency>

To view the latest version, check out Maven Central.

3. Performing Multipart Upload

3.1. Creating Amazon S3 Client

First, we need to create a client for accessing Amazon S3. We’ll use the AmazonS3ClientBuilder for this purpose:

AmazonS3 amazonS3 = AmazonS3ClientBuilder
  .standard()
  .withCredentials(new DefaultAWSCredentialsProviderChain())
  .withRegion(Regions.DEFAULT_REGION)
  .build();

This creates a client using the default credential provider chain for accessing AWS credentials.

For more details on how the default credential provider chain works, please see the documentation. If you’re using a region other than the default (US West-2), make sure you replace Regions.DEFAULT_REGION with that custom region.

3.2. Creating TransferManager for Managing Uploads

We’ll use TransferManagerBuilder to create a TransferManager instance.

This class provides simple APIs to manage uploads and downloads with Amazon S3 and manages all related tasks:

TransferManager tm = TransferManagerBuilder.standard()
  .withS3Client(amazonS3)
  .withMultipartUploadThreshold((long) (5 * 1024 * 1024))
  .build();

The multipart upload threshold specifies the size, in bytes, above which an upload should be performed as a multipart upload.

Amazon S3 imposes a minimum part size of 5 MB (for parts other than the last one), so we have used 5 MB as the multipart upload threshold.

3.3. Uploading Object

To upload an object using TransferManager, we simply need to call its upload() method. It uploads the parts in parallel:

String bucketName = "baeldung-bucket";
String keyName = "my-picture.jpg";
File file = new File("documents/my-picture.jpg");
Upload upload = tm.upload(bucketName, keyName, file);

TransferManager.upload() returns an Upload object. This can be used to check the status of and manage uploads. We’ll do so in the next section.

3.4. Waiting For Upload to Complete

TransferManager.upload() is a non-blocking function; it returns immediately while the upload runs in the background.

We can use the returned Upload object to wait for the upload to complete before exiting the program:

try {
    upload.waitForCompletion();
} catch (AmazonClientException e) {
    // ...
}

3.5. Tracking the Upload Progress

Tracking the progress of the upload is quite a common requirement; we can do that with the help of a ProgressListener instance:

ProgressListener progressListener = progressEvent -> System.out.println(
  "Transferred bytes: " + progressEvent.getBytesTransferred());
PutObjectRequest request = new PutObjectRequest(
  bucketName, keyName, file);
request.setGeneralProgressListener(progressListener);
Upload upload = tm.upload(request);

The ProgressListener we created will simply continue to print the number of bytes transferred until the upload completes.

3.6. Controlling Upload Parallelism

By default, TransferManager uses a maximum of ten threads to perform multipart uploads.

We can, however, control this by specifying an ExecutorService while building TransferManager:

int maxUploadThreads = 5;
TransferManager tm = TransferManagerBuilder.standard()
  .withS3Client(amazonS3)
  .withMultipartUploadThreshold((long) (5 * 1024 * 1024))
  .withExecutorFactory(() -> Executors.newFixedThreadPool(maxUploadThreads))
  .build();

Here, we used a lambda to create a wrapper implementation of ExecutorFactory and passed it to the withExecutorFactory() method.
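
Once all transfers are done, it’s good practice to shut the TransferManager down so its thread pool doesn’t keep running:

// releases the TransferManager's threads and, by default, the wrapped S3 client
tm.shutdownNow();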

4. Conclusion

In this quick article, we learned how to perform multipart uploads using AWS SDK for Java, and we saw how to control some aspects of upload and to keep track of its progress.

As always, the complete code of this article is available over on GitHub.


Spring Data with Spring Security

1. Overview

Spring Security provides good support for integration with Spring Data. While the former handles the security aspects of our application, the latter provides convenient access to the database containing the application’s data.

In this article, we’ll discuss how Spring Security can be integrated with Spring Data to enable more user-specific queries.

2. Spring Security + Spring Data Configuration

In our introduction to Spring Data JPA, we saw how to set up Spring Data in a Spring project. To enable Spring Security and Spring Data, we can, as usual, adopt either the Java or XML-based configuration.

2.1. Java Configuration

Recall that from Spring Security Login Form (sections 4 & 5), we can add Spring Security to our project using the annotation based configuration:

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    // Bean definitions
}

Other configuration details would include the definition of filters, beans, and other security rules as required.

To enable Spring Data in Spring Security, we simply add this bean to WebSecurityConfig:

@Bean
public SecurityEvaluationContextExtension securityEvaluationContextExtension() {
    return new SecurityEvaluationContextExtension();
}

The above definition activates the automatic resolution of Spring Data-specific expressions annotated on classes.

2.2. XML Configuration

The XML-based configuration begins with the inclusion of the Spring Security namespace:

<beans:beans xmlns="http://www.springframework.org/schema/security"
  xmlns:beans="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
  http://www.springframework.org/schema/beans/spring-beans-4.3.xsd
  http://www.springframework.org/schema/security
  http://www.springframework.org/schema/security/spring-security.xsd">
...
</beans:beans>

Just like in the Java-based configuration, for the XML or namespace based configuration, we’d add SecurityEvaluationContextExtension bean to the XML configuration file:

<bean class="org.springframework.security.data.repository
  .query.SecurityEvaluationContextExtension"/>

Defining the SecurityEvaluationContextExtension makes all the common expressions in Spring Security available from within Spring Data queries.

Such common expressions include principal, authentication, isAnonymous(), hasRole([role]), isAuthenticated, etc.

3. Example Usage

Let’s consider some use cases of Spring Data and Spring Security.

3.1. Restrict AppUser Field Update

In this example, we’ll look at restricting updates of AppUser’s lastLogin field to the currently authenticated user.

By this, we mean that anytime the updateLastLogin method is triggered, it only updates the lastLogin field of the currently authenticated user.

To achieve this, we add the query below to our UserRepository interface:

@Query("UPDATE AppUser u SET u.lastLogin=:lastLogin WHERE" 
  +" u.username = ?#{ principal?.username }")
public void updateLastLogin (Date lastLogin);

Without Spring Data and Spring Security integration, we’d normally have to pass the username as an argument to updateLastLogin.
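
For comparison, a version without the integration might look like this hypothetical sketch, where the caller has to supply the username explicitly:

@Modifying
@Query("UPDATE AppUser u SET u.lastLogin = :lastLogin WHERE u.username = :username")
void updateLastLogin(@Param("lastLogin") Date lastLogin, @Param("username") String username);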

If the wrong user credentials are provided, the login process simply fails, so we don’t need to worry about validating access ourselves.

3.2. Fetch Specific AppUser’s Content with Pagination

Another scenario where Spring Data and Spring Security work perfectly hand-in-hand is a case where we need to retrieve content from our database that is owned by the currently authenticated user.

For instance, if we have a Twitter-like application, we may want to display the tweets created or liked by the current user on their personalized feed page.

Of course, this may involve writing queries to interact with one or more tables in our database. With Spring Data and Spring Security, this is as simple as writing:

public interface TweetRepository extends PagingAndSortingRepository<Tweet, Long> {
    @Query("select twt from Tweet twt  JOIN twt.likes as lk where lk ="+
      " ?#{ principal?.username } or twt.owner = ?#{ principal?.username }")
    Page<Tweet> getMyTweetsAndTheOnesILiked(Pageable pageable);
}

Because we want our results paginated, our TweetRepository extends PagingAndSortingRepository in the above interface definition.
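
Calling the query is then just a matter of supplying a Pageable. For instance, a hypothetical caller could fetch the first page of ten tweets like this (assuming Spring Data 2.x’s PageRequest.of factory; older versions would use new PageRequest(0, 10)):

Page<Tweet> firstPage = tweetRepository
  .getMyTweetsAndTheOnesILiked(PageRequest.of(0, 10));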

4. Conclusion

The Spring Data and Spring Security integration brings a lot of flexibility to managing authenticated state in Spring applications.

In this article, we’ve had a look at how to add Spring Security to Spring Data. More about other powerful features of Spring Data or Spring Security can be found in our collection of Spring Data and Spring Security articles.

As usual, code snippets can be found over on GitHub.

Java 8 Math New Methods

1. Introduction

Usually, when we think about the new features that came with version 8 of Java, functional programming and lambda expressions are the first things that come to mind.

Nevertheless, besides those big features, there are others with a smaller impact that are also interesting, yet often not well known or covered by any review.

In this tutorial, we’ll enumerate and give a little example of each of the new methods added to one of the core classes of the language: java.lang.Math.

2. New *exact() Methods

First, we have a group of new methods that extend some of the existing and most common arithmetic operations.

As we’ll see, they’re quite self-explanatory, as they have exactly the same functionality as the methods they derive from, with the addition of throwing an exception if the resulting value overflows the max or min value of its type.

We can use these methods with both int and long parameters.

2.1. addExact()

Adds the two parameters, throwing an ArithmeticException in case of overflow (which goes for all *Exact() methods) of the addition:

Math.addExact(100, 50);               // returns 150
Math.addExact(Integer.MAX_VALUE, 1);  // throws ArithmeticException

2.2. subtractExact()

Subtracts the value of the second parameter from the first one, throwing an ArithmeticException in case of overflow of the subtraction:

Math.subtractExact(100, 50);           // returns 50
Math.subtractExact(Long.MIN_VALUE, 1); // throws ArithmeticException

2.3. incrementExact()

Increments the parameter by one, throwing an ArithmeticException in case of overflow:

Math.incrementExact(100);               // returns 101
Math.incrementExact(Integer.MAX_VALUE); // throws ArithmeticException

2.4. decrementExact()

Decrements the parameter by one, throwing an ArithmeticException in case of overflow:

Math.decrementExact(100);            // returns 99
Math.decrementExact(Long.MIN_VALUE); // throws ArithmeticException

2.5. multiplyExact()

Multiplies the two parameters, throwing an ArithmeticException in case of overflow of the product:

Math.multiplyExact(100, 5);            // returns 500
Math.multiplyExact(Long.MAX_VALUE, 2); // throws ArithmeticException

2.6. negateExact()

Changes the sign of the parameter, throwing an ArithmeticException in case of overflow.

In this case, we have to think about the internal representation of the value in memory to understand why there’s an overflow, as it’s not as intuitive as the rest of the “exact” methods:

Math.negateExact(100);               // returns -100
Math.negateExact(Integer.MIN_VALUE); // throws ArithmeticException

The second example requires an explanation, as it’s not obvious: the overflow occurs because Integer.MIN_VALUE is -2,147,483,648 while Integer.MAX_VALUE is 2,147,483,647, so the negated value doesn’t fit into an int by one unit.
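
We can see the wrap-around directly by comparing the checked negation with the plain unary minus operator, which overflows silently:

int wrapped = -Integer.MIN_VALUE;                  // silently wraps: still -2147483648
int checked = Math.negateExact(Integer.MIN_VALUE); // throws ArithmeticException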

3. Other Methods

3.1. floorDiv()

Divides the first parameter by the second one and then performs a floor() operation over the result, returning the largest integer that is less than or equal to the exact quotient:

Math.floorDiv(7, 2);  // returns 3

The exact quotient is 3.5 so floor(3.5) == 3.

Let’s look at another example:

Math.floorDiv(-7, 2); // returns -4

The exact quotient is -3.5 so floor(-3.5) == -4.

3.2. floorMod()

This one is closely related to the previous floorDiv() method, but instead of the quotient, it returns the remainder that pairs with the floored quotient, that is, x – floorDiv(x, y) * y:

Math.floorMod(5, 3);  // returns 2

As we can see, floorMod() for two positive numbers is the same as the % operator. Let’s look at a different example:

Math.floorMod(-5, 3); // returns 1

It returns 1, and not the -2 that the % operator produces, because floorMod() is based on floorDiv(-5, 3), which is -2, rather than on truncating division, which would give -1.
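
The two methods are tied together by the identity x == floorDiv(x, y) * y + floorMod(x, y), which we can verify directly:

int quotient = Math.floorDiv(-5, 3);  // -2
int remainder = Math.floorMod(-5, 3); //  1
// -5 == (-2 * 3) + 1, whereas truncating division gives -5 % 3 == -2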

3.3. nextDown()

Returns the floating-point value adjacent to the parameter in the direction of negative infinity (it supports both float and double parameters):

float f = Math.nextDown(3);  // returns 2.9999998
double d = Math.nextDown(3); // returns 2.999999761581421

4. Conclusion

In this article, we’ve briefly described the functionality of all the new methods added to the java.lang.Math class in version 8 of the Java platform, and we’ve seen some examples of how to use them.

As always, the full source code is available over on GitHub.

REST-assured with Groovy

1. Overview

In this tutorial, we’ll take a look at using the REST-assured library with Groovy.

Since REST-assured uses Groovy under the hood, we actually have the opportunity to use raw Groovy syntax to create more powerful test cases. This is where the framework really comes to life.

For the setup necessary to use REST-assured, check out our previous article.

2. Groovy’s Collection API

Let’s start by taking a quick look at some basic Groovy concepts – with a few simple examples to equip us with just what we need.

2.1. The findAll method

In this example, we’ll just pay attention to methods, closures and the it implicit variable. Let’s first create a Groovy collection of words:

def words = ['ant', 'buffalo', 'cat', 'dinosaur']

Let’s now create another collection out of the above, containing only the words longer than four letters:

def wordsWithSizeGreaterThanFour = words.findAll { it.length() > 4 }

Here, findAll() is a method applied to the collection, and it takes a closure as an argument. The method defines what logic to apply to the collection, and the closure gives the method a predicate to customize that logic.

We’re telling Groovy to loop through the collection and find all words whose length is greater than four and return the result into a new collection.

2.2. The it variable

The implicit variable it holds the current word in the loop. The new collection wordsWithSizeGreaterThanFour will contain the words buffalo and dinosaur.

['buffalo', 'dinosaur']

Apart from findAll(), there are other Groovy methods.

2.3. The collect iterator

Finally, there’s collect, which calls the closure on each item in the collection and returns a new collection with the results. Let’s create a new collection out of the sizes of each item in the words collection:

def sizes = words.collect { it.length() }

The result:

[3, 7, 3, 8]

We use sum(), as the name suggests, to add up all the elements in the collection. We can sum the items in the sizes collection like so:

def charCount = sizes.sum()

and the result will be 21, the character count of all the items in the words collection.

2.4. The max/min operators

The max/min operators are intuitively named to find the maximum or minimum number in a collection:

def maximum = sizes.max()

The result should be obvious, 8.

2.5. The find iterator

We use find to search for the first collection value matching the closure predicate.

def greaterThanSeven = sizes.find { it > 7 }

The result is 8, the first collection item that meets the predicate.

3. Validate JSON with Groovy

Say we have a service at http://localhost:8080/odds that returns a list of odds for our favorite football matches, like this:

{
    "odds": [{
        "price": 1.30,
        "status": 0,
        "ck": 12.2,
        "name": "1"
    },
    {
        "price": 5.25,
        "status": 1,
        "ck": 13.1,
        "name": "X"
    },
    {
        "price": 2.70,
        "status": 0,
        "ck": 12.2,
        "name": "0"
    },
    {
        "price": 1.20,
        "status": 2,
        "ck": 13.1,
        "name": "2"
    }]
}

And if we want to verify that the odds with a status greater than zero have the prices 1.20 and 5.25, then we do this:

@Test
public void givenUrl_whenVerifiesOddPricesAccuratelyByStatus_thenCorrect() {
    get("/odds").then().body("odds.findAll { it.status > 0 }.price",
      hasItems(5.25f, 1.20f));
}

What’s happening here is this: we use Groovy syntax to load the JSON array under the key odds. Since it has more than one item, we obtain a Groovy collection. We then invoke the findAll method on this collection.

The closure predicate tells Groovy to create another collection with JSON objects where status is greater than zero.

We end our path with price, which tells Groovy to create another list containing only the prices of the odds in our previous list of JSON objects. We then apply the hasItems Hamcrest matcher to this list.

4. Validate XML with Groovy

Let’s assume we have a service at http://localhost:8080/teachers that returns a list of teachers with their id, department, and subjects taught, as below:

<teachers>
    <teacher department="science" id="309">
        <subject>math</subject>
        <subject>physics</subject>
    </teacher>
    <teacher department="arts" id="310">
        <subject>political education</subject>
        <subject>english</subject>
    </teacher>
</teachers>

Now we can verify that the science teacher returned in the response teaches both math and physics:

@Test
public void givenUrl_whenVerifiesScienceTeacherFromXml_thenCorrect() {
    get("/teachers").then().body(
      "teachers.teacher.find { it.@department == 'science' }.subject",
        hasItems("math", "physics"));
}

We have used the XML path teachers.teacher to get the list of teachers. We then call the find method on this list, filtering by the XML attribute department.

Our closure predicate to find ensures we end up with only teachers from the science department. Our XML path terminates at the subject tag.

Since there is more than one subject, we will get a list which we validate with the hasItems Hamcrest matcher.

5. Conclusion

In this article, we’ve seen how we can use the REST-assured library with the Groovy language.

For the full source code of the article, check out our GitHub project.

Mapping LOB Data in Hibernate

1. Overview

LOB or Large OBject refers to a variable length datatype for storing large objects.

The datatype has two variants:

  • CLOB – Character Large Object will store large text data
  • BLOB – Binary Large Object is for storing binary data like image, audio, or video

In this tutorial, we’ll show how we can utilize Hibernate ORM for persisting large objects.

2. Setup

In this example, we’ll use Hibernate 5 and the H2 database; therefore, we must declare them as dependencies in our pom.xml:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.2.12.Final</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
</dependency>

The latest version of the dependencies is in Maven Central Repositories.

For a more in-depth look at configuring Hibernate please refer to one of our introductory articles.

3. LOB Data Model

Our model “User” has id, name, and photo as properties. We’ll store an image in the User‘s photo property, and we will map it to a BLOB:

@Entity
@Table(name="user")
public class User {

    @Id
    private String id;
	
    @Column(name = "name", columnDefinition="VARCHAR(128)")
    private String name;
	
    @Lob
    @Column(name = "photo", columnDefinition="BLOB")
    private byte[] photo;

    // ...
}

The @Lob annotation specifies that the database should store the property as Large Object. The columnDefinition in the @Column annotation defines the column type for the property.

Since we’re going to save a byte array, we’re using BLOB.
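
For completeness, large text content would be mapped the same way with a CLOB; here’s a hypothetical field (not part of this article’s model) showing the idea:

@Lob
@Column(name = "biography", columnDefinition = "CLOB")
private String biography;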

4. Usage

4.1. Initiate Hibernate Session

session = HibernateSessionUtil
  .getSessionFactory("hibernate.properties")
  .openSession();

Using the helper class, we build the Hibernate Session from the database information provided in the hibernate.properties file.
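
The helper class itself isn’t shown here; a minimal sketch, assuming a standard Hibernate 5 bootstrap from a classpath resource, could look like this:

public class HibernateSessionUtil {

    public static SessionFactory getSessionFactory(String propertiesFile) {
        // Load the connection settings from the given classpath resource
        StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
          .loadProperties(propertiesFile)
          .build();

        // Register the annotated entity and build the factory
        return new MetadataSources(registry)
          .addAnnotatedClass(User.class)
          .buildMetadata()
          .buildSessionFactory();
    }
}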

4.2. Creating User Instance

Let’s assume the user uploads the photo as an image file:

User user = new User();
		
InputStream inputStream = this.getClass()
  .getClassLoader()
  .getResourceAsStream("profile.png");

if(inputStream == null) {
    fail("Unable to get resources");
}
user.setId("1");
user.setName("User");
user.setPhoto(IOUtils.toByteArray(inputStream));

We convert the image file into a byte array with the help of the Apache Commons IO library, and finally, we assign the byte array to the newly created User object.

4.3. Persisting Large Object

When we store the User using the Session, Hibernate converts the object into a database record:

session.persist(user);

Because of the @Lob annotation declared on the photo field, Hibernate understands it should store the property as the BLOB data type.

4.4. Data Validation

We’ll retrieve the data back from the database and use Hibernate to map it to a Java object, so we can compare it with the inserted data.

Since we know the inserted User‘s id, we can use it to retrieve the data from the database:

User result = session.find(User.class, "1");

Let’s compare the query’s result with the input User‘s data:

assertNotNull(
  "Query result is null", 
  result);
 
assertEquals(
  "User's name is invalid", 
  user.getName(), result.getName() );
 
assertTrue(
  "User's photo is corrupted", 
  Arrays.equals(user.getPhoto(), result.getPhoto()) );

Hibernate will map the data in the database to the Java object using the same mapping information on the annotations.

Therefore the retrieved User object will have the same information as the inserted data.

5. Conclusion

LOB is a datatype for storing large objects. It comes in two varieties: BLOB, for storing binary data, and CLOB, for storing text data.

Using Hibernate, we’ve demonstrated that it’s quite easy to map the data to and from Java objects, as long as we define the correct data model and the appropriate table structure in the database.

As always, the code for this article is available over on GitHub.

How to Find and Open a Class with Eclipse

1. Introduction

In this article, we’re going to take a look at a number of ways to find a class in Eclipse. All the examples are based on Eclipse Oxygen.

2. Overview

In Eclipse, we often need to look for a class or an interface. We have many ways to do that:

  • The Open Type dialog
  • The Open Resource dialog
  • The Package Explorer view
  • The Open Declaration function
  • The Type Hierarchy view

3. Open Type

One of the most powerful ways to do this is with the Open Type dialog.

3.1. Accessing the Tool

We can access it in three ways:

  1. Using the keyboard shortcut, which is Ctrl + Shift + T on a PC or Cmd + Shift + T on a Mac.
  2. Opening the menu under Navigate > Open Type
  3. Clicking on the icon in the main toolbar:

3.2. Using It to Find a Class

Once we have Open Type up, we simply need to start typing, and we’ll see results:

The results will contain classes in the build path of our open projects, which includes project classes, libraries, and the JRE itself.

In addition, it shows the package and its location in our environment.

As we can see in the image, the results are any classes whose name starts with what we typed. This type of search is not case sensitive.

We can search in camel case too. For example, to find the class ArraysParallelSortHelpers we could just type APSH or ArrayPSH. This type of search is case sensitive.

In addition, it’s also possible to use the wildcard characters “*” and “?” in the search text: “*” matches any string, including the empty string, while “?” matches exactly one character.

So, for example, let’s say we would like to find a class that we remember contains Linked, and then something else, and then Multi. “*” comes in handy:

Or if we add a “?”:

Since “?” must match exactly one character, LinkedMultiValueMap is removed from the results.

Note also that there is an implicit “*” at the end of every input, but not at the beginning.

4. Open Resource

Another simple way to find and open a class in Eclipse is Open Resource.

4.1. Accessing the Tool

We can access it in two ways:

  • Using the keyboard shortcut, which is Ctrl + Shift + R on a PC or Cmd + Shift + R on a Mac.
  • Opening the menu under Navigate > Open Resource

4.2. Using It to Find a Class

Once we have the dialog up, we simply need to start typing, and we’ll see results:

The results will contain classes as well as all other files in the build path of our open projects.

For usage details about wildcards and camel case search, check out the Open Type section above.

5. Package Explorer

When we know the package to which our class belongs, we can use Package Explorer.

5.1. Accessing the Tool

If it isn’t already visible, then we can open this Eclipse view through the menu under Window > Show View > Package Explorer.

5.2. Using the Tool to Find a Class

Here the classes are displayed in alphabetical order:

If the list is very long, we can use a trick: we click anywhere on the package tree and then we start typing the name of the class. We’ll see the selection scrolling automatically among the classes until it matches our class.

There’s also the Navigator view, which works nearly the same way.

The main difference is that while Package Explorer shows classes relative to packages, Navigator shows classes relative to the underlying file system.

To open this view, we can find it in the menu under Window > Show View > Navigator.

6. Open Declaration

In the case where we’re looking at code that references our class, Open Declaration is a very quick way to jump to it.

6.1. Accessing the Tool

There are three ways to access this function:

  1. Clicking anywhere on the class name that we want to open and pressing F3
  2. Clicking anywhere on the class name and going to the menu under Navigate > Open Declaration
  3. While keeping the Ctrl key pressed, mousing over the class name and then just clicking on it

6.2. Using It to Find a Class

Thinking about the screenshot below, if we press Ctrl and hover over ModelMap, then a link appears:

Notice that the color changed to light blue and it became underlined. This indicates that it is now available as a direct link to the class. If we click the link, Eclipse will open ModelMap in the editor.

7. Type Hierarchy

In an object-oriented language like Java, we can also think about types relative to their hierarchy of super- and subclasses.

Type Hierarchy is a view similar to Package Explorer and Navigator, but this time focused on the class hierarchy.

7.1. Accessing the Tool

We can access this view in three ways:

  1. Clicking anywhere in a class name and then pressing F4
  2. Clicking anywhere in a class name and going to the menu under Navigate > Open Type Hierarchy
  3. Using the Open Type in Hierarchy dialog

The Open Type in Hierarchy dialog behaves just like the Open Type dialog we saw in section 3.

To get there, we go to the menu under Navigate > Open Type in Hierarchy or we use the shortcut: Ctrl + Shift + H on a PC or Cmd + Shift + H on a Mac.

This dialog is similar to the Open Type dialog, except that this time, when we click on a class, we get the Type Hierarchy view.

7.2. Using the Tool to Find a Class

Once we know a superclass or subclass of the class we want to open, we can navigate through the hierarchy tree, and look for the class there:

If the list is very long, we can use the same trick we used with Package Explorer: we click anywhere on the tree and then we start typing the name of the class. We’ll see the selection scrolling automatically among the classes until it matches our class.

8. Conclusion

In this article, we looked at the most common ways to find and open a Java class with the Eclipse IDE including Open Type, Open Resource, Package Explorer, Open Declaration, and Type Hierarchy.
