
Java Web Weekly, Issue 131


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Java 9 Additions To Optional [codefx.org]

Some interesting stuff is definitely coming to Optional in the JDK.

>> 5 Common Hibernate Exceptions and How to Fix Them [takipi.com]

I like to go through these exception-focused articles – they usually have new insights I can glean for when I do get the exception.

>> Managing Secrets with Vault [spring.io]

Storing secret configuration data is almost always an important thing to get right in the overall architecture of a system.

It’s also one of the most common questions I get from readers when it comes to project configuration. So this writeup is an interesting solution to that question. Not the only solution, but certainly an interesting one.

>> Turn Around. Don’t Use JPA’s loadgraph and fetchgraph Hints. Use SQL Instead. [jooq.org]

A different perspective on picking the persistence solution of your next greenfield project, talking about preferring plain SQL over something higher level such as JPA.

>> 14 High-Performance Java Persistence Tips [vladmihalcea.com]

Some low-hanging fruit (and not so low-hanging) to improve the performance of a Hibernate implementation.

>> “Micro Profile in Enterprise Java” Announced! [antoniogoncalves.org] and >> The Enterprise Java Future Is Bright: Java EE 8 MicroProfile Launched [adam-bien.com]

Big announcements in the Java EE world (seems like every week now).

>> Close Encounters of The Java Memory Model Kind [shipilev.net]

A fantastic deep-dive into the JMM (still reading through it now). Definitely one to bookmark.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Code Review and How Enterprises Can Miss The Point [daedtech.com]

An insightful analysis of the motivations of different players in a large organization when it comes to code reviews and to getting something useful out of the practice. Well worth reading.

>> How I prepared for the NDC keynote (and other speaker tips) [troyhunt.com]

Some solid, to-the-point advice on speaking well.

I feel that speaking is a life-long journey and there’s always a lot to learn. And delivering a good presentation is such an important skill that it really makes sense to spend time and learn how to do it well, as much as possible.

>> Learning a Healthy Fear of Legacy Code [daedtech.com]

Here be dragons.

>> Expanding the Cloud: Introducing the AWS Asia Pacific (Mumbai) Region [allthingsdistributed.com]

Yeah, one more region to play with, after Frankfurt.

>> Special Skills [dandreamsofcoding.com]

There’s a time to study the foundations and there’s a time to specialize. And while foundations are important, specialization and niching down are more and more critical today.

>> Jepsen: Crate 0.54.9 version divergence [aphyr.com]

Who knew that the Elasticsearch data consistency problems (which are quite real) would go beyond the core product and spread to other solutions as well? It’s not that surprising though.

>> Amazon Elastic File System – Production-Ready in Three Regions [aws.amazon.com] and >> Elastic Network Adapter – High Performance Network Interface for Amazon EC2 [aws.amazon.com]

Two important announcements of new AWS goodness in a single week.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Nothing about you is normal [dilbert.com]

>> Two good ways to avoid listening to others [dilbert.com]

>> Did someone tell you Twitter was a video game? [dilbert.com]

4. Pick of the Week

>> This I Believe – 25 Thoughts for Life [conversionxl.com]



Introduction to the Java 8 Date/Time API


1. Overview

Java 8 introduced new APIs for Date and Time to address the shortcomings of the older java.util.Date and java.util.Calendar.

In this article, we’ll start with the issues in the existing Date and Calendar APIs and discuss how the new Java 8 Date and Time APIs address them.

We will also look at some of the core classes of the new API that are part of the java.time package, such as LocalDate, LocalTime, LocalDateTime, ZonedDateTime, Period and Duration, and their supported operations.

2. Issues with the Existing Date/Time APIs

  • Thread Safety – The Date and Calendar classes are not thread-safe, leaving developers to deal with hard-to-debug concurrency issues and to write additional code for thread safety. By contrast, the new Date and Time APIs introduced in Java 8 are immutable and thread-safe, taking that concurrency headache away from developers (see the sketch after this list).
  • API Design and Ease of Understanding – The Date and Calendar APIs are poorly designed, with inadequate methods to perform day-to-day operations. The new Date/Time API is ISO-centric and follows consistent domain models for dates, times, durations and periods. There is a wide variety of utility methods that support the most common operations.
  • ZonedDate and Time – Developers had to write additional logic to handle time zones with the old APIs, whereas with the new APIs, time zones can be handled with the Local and ZonedDate/Time APIs.
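
Since the new types are immutable, every modifying operation returns a new instance, which is what makes them safe to share across threads. A minimal sketch:

LocalDate date = LocalDate.of(2016, 6, 12);
LocalDate nextDay = date.plusDays(1); // returns a new instance

System.out.println(date);    // 2016-06-12 – the original is unchanged
System.out.println(nextDay); // 2016-06-13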

3. Using LocalDate, LocalTime and LocalDateTime

The most commonly used classes are LocalDate, LocalTime and LocalDateTime. As their names indicate, they represent the local Date/Time from the context of the observer.

These classes are mainly used when a time zone does not need to be explicitly specified in the context. In this section, we will cover their most commonly used APIs.

3.1. Working with LocalDate

The LocalDate represents a date in ISO format (yyyy-MM-dd) without time.

It can be used to store dates like birthdays and paydays.

An instance of current date can be created from the system clock as below:

LocalDate localDate = LocalDate.now();

The LocalDate representing a specific day, month and year can be obtained using the “of” method or the “parse” method. For example, the below code snippets represent the LocalDate for 20 February 2015:

LocalDate.of(2015, 2, 20);

LocalDate.parse("2015-02-20");

The LocalDate provides various utility methods to obtain a variety of information. Let’s have a quick peek at some of these API methods.

The following code snippet gets the current local date and adds one day:

LocalDate tomorrow = LocalDate.now().plusDays(1);

This example obtains the current date and subtracts one month. Note how it accepts an enum as the time unit:

LocalDate previousMonthSameDay = LocalDate.now().minus(1, ChronoUnit.MONTHS);

In the following two code examples we parse the date “2016-06-12” and get the day of the week and the day of the month respectively. Note the return values: the first is an object representing the DayOfWeek, while the second is an int representing the day of the month:

DayOfWeek sunday = LocalDate.parse("2016-06-12").getDayOfWeek();

int twelve = LocalDate.parse("2016-06-12").getDayOfMonth();

We can test whether a date occurs in a leap year. In this example we test whether the current date falls in a leap year:

boolean leapYear = LocalDate.now().isLeapYear();

We can also determine whether a date occurs before or after another date:

boolean notBefore = LocalDate.parse("2016-06-12")
  .isBefore(LocalDate.parse("2016-06-11"));

boolean isAfter = LocalDate.parse("2016-06-12").isAfter(LocalDate.parse("2016-06-11"));

Date boundaries can be obtained from a given date. In the following two examples we get the LocalDateTime that represents the beginning of the day (2016-06-12T00:00) of the given date and the LocalDate that represents the beginning of the month (2016-06-01) respectively:

LocalDateTime beginningOfDay = LocalDate.parse("2016-06-12").atStartOfDay();
LocalDate firstDayOfMonth = LocalDate.parse("2016-06-12")
  .with(TemporalAdjusters.firstDayOfMonth());

Now let’s have a look at how we work with local time.

3.2. Working with LocalTime

The LocalTime represents time without a date.

Similar to LocalDate, an instance of LocalTime can be created from the system clock or by using the “parse” and “of” methods. Let’s take a quick look at some of the commonly used APIs below.

An instance of current LocalTime can be created from the system clock as below:

LocalTime now = LocalTime.now();

In the below code sample, we create a LocalTime representing 06:30 AM by parsing a string representation:

LocalTime sixThirty = LocalTime.parse("06:30");

The factory method “of” can be used to create a LocalTime. For example, the below code creates a LocalTime representing 06:30 AM using the factory method:

LocalTime sixThirty = LocalTime.of(6, 30);

The below example creates a LocalTime by parsing a string and adds an hour to it by using the “plus” API. The result would be a LocalTime representing 07:30 AM:

LocalTime sevenThirty = LocalTime.parse("06:30").plus(1, ChronoUnit.HOURS);

Various getter methods are available to obtain specific units of time, such as the hour, minute and second, as below:

int six = LocalTime.parse("06:30").getHour();

We can also check if a specific time is before or after another specific time. The below code sample compares two LocalTime values, for which the result would be true:

boolean isBefore = LocalTime.parse("06:30").isBefore(LocalTime.parse("07:30"));

The max, min and noon times of a day can be obtained via constants in the LocalTime class. This is very useful when performing database queries to find records within a given span of time. For example, the below code represents 23:59:59.999999999:

LocalTime maxTime = LocalTime.MAX;

Now let’s dive into LocalDateTime.

3.3. Working with LocalDateTime

The LocalDateTime is used to represent a combination of date and time.

This is the most commonly used class when we need a combination of date and time. The class offers a variety of APIs and we will look at some of the most commonly used ones.

An instance of LocalDateTime can be obtained from the system clock similar to LocalDate and LocalTime:

LocalDateTime.now();

The below code samples explain how to create an instance using the factory “of” and “parse” methods. The result would be a LocalDateTime instance representing 20 February 2015, 06:30 AM:

LocalDateTime.of(2015, Month.FEBRUARY, 20, 06, 30);
LocalDateTime.parse("2015-02-20T06:30:00");

Utility APIs support the addition and subtraction of specific units of time such as days, months, years and minutes. The below code samples demonstrate the usage of the “plus” and “minus” methods. These APIs behave exactly like their counterparts in LocalDate and LocalTime:

localDateTime.plusDays(1);
localDateTime.minusHours(2);

Getter methods are available to extract specific units similar to the date and time classes. Given the above instance of LocalDateTime, the below code sample will return the month February:

localDateTime.getMonth();

4. Using ZonedDateTime API

Java 8 provides ZonedDateTime for when we need to deal with time-zone-specific dates and times. The ZoneId is an identifier used to represent different zones. There are several hundred zone IDs, and a ZoneId is used to represent each of them as follows.

In this code snippet we create a Zone for Paris:

ZoneId zoneId = ZoneId.of("Europe/Paris");

A set of all zone ids can be obtained as below:

Set<String> allZoneIds = ZoneId.getAvailableZoneIds();

The LocalDateTime can be converted to a specific zone:

ZonedDateTime zonedDateTime = ZonedDateTime.of(localDateTime, zoneId);

The ZonedDateTime provides a parse method to obtain a time-zone-specific date-time:

ZonedDateTime.parse("2015-05-03T10:15:30+01:00[Europe/Paris]");

Another way to work with time zones is by using OffsetDateTime. The OffsetDateTime is an immutable representation of a date-time with an offset. This class stores all date and time fields, to a precision of nanoseconds, as well as the offset from UTC/Greenwich.

An OffsetDateTime instance can be created as below using ZoneOffset. Here we create a LocalDateTime representing 6:30 AM on 20 February 2015:

LocalDateTime localDateTime = LocalDateTime.of(2015, Month.FEBRUARY, 20, 06, 30);

Then we attach a two-hour offset to it by creating a ZoneOffset and combining it with the localDateTime instance:

ZoneOffset offset = ZoneOffset.of("+02:00");

OffsetDateTime offSetByTwo = OffsetDateTime
  .of(localDateTime, offset);

We now have an OffsetDateTime of 2015-02-20T06:30+02:00. Now let’s move on to how to modify date and time values using the Period and Duration classes.

5. Using Period and Duration

The Period class represents a quantity of time in terms of years, months and days, and the Duration class represents a quantity of time in terms of seconds and nanoseconds.

5.1. Working with Period

The Period class is widely used to modify values of a given date or to obtain the difference between two dates:

LocalDate initialDate = LocalDate.parse("2007-05-10");

The Date can be manipulated using Period as shown in the following code snippet:

LocalDate finalDate = initialDate.plus(Period.ofDays(5));

The Period class has various getter methods such as getYears, getMonths and getDays to get values from a Period object. Note that Period.between takes the start date first; the below code example returns an int value of 5 as we get the difference in terms of days:

int five = Period.between(initialDate, finalDate).getDays();

The Period between two dates can be obtained in a specific unit such as days, months or years, using ChronoUnit.between:

long five = ChronoUnit.DAYS.between(initialDate, finalDate);

This code example returns five days. Let’s continue by taking a look at the Duration class.

5.2. Working with Duration

Similar to Period, the Duration class is used to deal with time. In the following code we create a LocalTime of 6:30 AM and then add a duration of 30 seconds to make a LocalTime of 06:30:30 AM:

LocalTime initialTime = LocalTime.of(6, 30, 0);

LocalTime finalTime = initialTime.plus(Duration.ofSeconds(30));

The Duration between two instants can be obtained either as a Duration or as a specific unit. In the first code snippet we use the between() method of the Duration class to find the time difference between initialTime and finalTime and return the difference in seconds:

long thirty = Duration.between(initialTime, finalTime).getSeconds();

In the second example we use the between() method of the ChronoUnit class to perform the same operation:

long thirty = ChronoUnit.SECONDS.between(initialTime, finalTime);

Now we will look at how to convert existing Date and Calendar to new Date/Time.

6. Compatibility with Date and Calendar

Java 8 has added the toInstant() method, which helps to convert existing Date and Calendar instances to the new Date/Time API, as in the following code snippet:

LocalDateTime.ofInstant(date.toInstant(), ZoneId.systemDefault());
LocalDateTime.ofInstant(calendar.toInstant(), ZoneId.systemDefault());

The LocalDateTime can be constructed from epoch seconds as below. The result of the below code would be a LocalDateTime representing 2016-06-13T11:34:50:

LocalDateTime.ofEpochSecond(1465817690, 0, ZoneOffset.UTC);

Now let’s move on to Date and Time formatting.

7. Date and Time Formatting

Java 8 provides APIs for the easy formatting of Date and Time:

LocalDateTime localDateTime = LocalDateTime.of(2015, Month.JANUARY, 25, 6, 30);

The below code passes an ISO date format to format the local date. Note that format returns a String; the result would be 2015-01-25:

String localDateString = localDateTime.format(DateTimeFormatter.ISO_DATE);

The DateTimeFormatter provides various standard formatting options. Custom patterns can be provided to the format method as well, like below, which would return the date as 2015/01/25:

localDateTime.format(DateTimeFormatter.ofPattern("yyyy/MM/dd"));

We can pass in a formatting style of SHORT, LONG or MEDIUM as part of the formatting options. The below code sample would give an output representing the LocalDateTime as 25-Jan-2015 06:30:00:

localDateTime.format(DateTimeFormatter
  .ofLocalizedDateTime(FormatStyle.MEDIUM)
  .withLocale(Locale.UK));

Let’s take a look at the alternatives available to the core Java 8 Date/Time APIs.

8. Backport and Alternate Options

8.1. Using Project Threeten

For organizations that are on the path of moving to Java 8 from Java 7 or Java 6 and want to use the date and time API, the ThreeTen project provides the backport capability. Developers can use the classes available in this project to achieve the same functionality as the new Java 8 Date and Time API, and once they move to Java 8, the packages can be switched. The artifact for the ThreeTen project can be found in the Maven Central repository:

<dependency>
    <groupId>org.threeten</groupId>
    <artifactId>threetenbp</artifactId>
    <version>1.3.1</version>
</dependency>
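
The backport mirrors the java.time API under the org.threeten.bp package, so code written against it looks nearly identical to the Java 8 version; a minimal sketch:

org.threeten.bp.LocalDate today = org.threeten.bp.LocalDate.now();
org.threeten.bp.LocalDate tomorrow = today.plusDays(1);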

8.2. Joda-Time Library

Another alternative to the Java 8 Date and Time library is the Joda-Time library. In fact, the Java 8 Date/Time API effort was led jointly by the author of the Joda-Time library (Stephen Colebourne) and Oracle. This library provides pretty much all of the capabilities supported in the Java 8 Date/Time project. The artifact can be found in Maven Central by including the below pom dependency in your project:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.9.4</version>
</dependency>

9. Conclusion

Java 8 provides a rich set of date and time APIs with a consistent design, which makes development easier.

The code samples for the above article can be found in the Java 8 Date/Time git repository.


Introduction to Spring Data Neo4j


1. Overview

This article is an introduction to Spring Data Neo4j, the Spring Data module for the popular Neo4j graph database.

Spring Data Neo4j enables POJO-based development for the Neo4j graph database, uses familiar Spring concepts such as template classes for core API usage, and provides an annotation-based programming model.

Also, a lot of developers don’t really know whether Neo4j will actually be a good match for their specific needs; here’s a solid overview on Stack Overflow discussing why to use Neo4j and its pros and cons.

2. Maven Dependencies

Let’s start by declaring the Spring Data Neo4j dependencies in the pom.xml. The below modules are required for Spring Data Neo4j:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-neo4j</artifactId>
    <version>${spring-data-neo4j.version}</version>
</dependency>
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-test</artifactId>
    <version>${neo4j-ogm-test.version}</version>
    <scope>test</scope>
</dependency>

These dependencies include the required modules for testing along with an embedded server as well.

Note that the last dependency is scoped as ‘test’. But also note that, in real-world application development, you’re more likely to have a full Neo4j server running.

3. Neo4j Configuration

The Neo4j configuration is very straightforward: it defines the connection settings for the application to connect to the server. Similar to most of the other Spring Data modules, this is a Spring configuration that can be defined as XML or Java configuration.

In this tutorial, we’ll use Java-based configuration only:

public static final String URL = 
  System.getenv("NEO4J_URL") != null ? 
  System.getenv("NEO4J_URL") : "http://neo4j:movies@localhost:7474";

@Bean
public org.neo4j.ogm.config.Configuration getConfiguration() {
    org.neo4j.ogm.config.Configuration config = new org.neo4j.ogm.config.Configuration();
    config.driverConfiguration().setDriverClassName(
      "org.neo4j.ogm.drivers.http.driver.HttpDriver").setURI(URL);
    return config;
}

@Override
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), 
      "com.baeldung.spring.data.neo4j.domain");
}

As mentioned above, the config is simple and contains only two settings. First, the SessionFactory references the models that we created to represent the data objects. Second, the connection properties with the server endpoints and access credentials.

Please note that in this example, the connection-related properties are configured directly in the code; however, in a production application, these should be properly externalized and part of the standard configuration of the project.

4. Neo4j Repositories

Aligning with the Spring Data framework, Neo4j supports the Spring Data repository abstraction. That means access to the underlying persistence mechanism is abstracted in the inbuilt GraphRepository, which a simple project can directly extend to use the provided operations out of the box.

The repositories are extensible with annotated, named or derived finder methods. Support for Spring Data Neo4j repositories is also based on Neo4jTemplate, so the underlying functionality is identical.

4.1. Creating the MovieRepository and PersonRepository

We use two repositories in this tutorial for data persistence:

@Repository
public interface MovieRepository extends GraphRepository<Movie> {

    Movie findByTitle(@Param("title") String title);

    @Query("MATCH (m:Movie) WHERE m.title =~ ('(?i).*'+{title}+'.*') RETURN m")
    Collection<Movie> 
      findByTitleContaining(@Param("title") String title);

    @Query("MATCH (m:Movie)<-[:ACTED_IN]-(a:Person) 
      RETURN m.title as movie, collect(a.name) as cast LIMIT {limit}")
    List<Map<String,Object>> graph(@Param("limit") int limit);
}

As you can see, the repository contains some custom operations as well as the standard ones inherited from the base class.

Next we have the simpler PersonRepository, which just has the standard operations:

@Repository
public interface PersonRepository extends GraphRepository<Person> {
    //
}

You may have already noticed that PersonRepository is just a standard Spring Data interface. This is because, in this simple example, the inbuilt operations are almost sufficient, as our operation set is related to the Movie entity. However, you can always add custom operations here, which may wrap single or multiple inbuilt operations, as in the sketch below.
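
For instance, a derived finder could be added without any implementation code; the findByName method here is a hypothetical example, assuming a name property on the Person entity:

@Repository
public interface PersonRepository extends GraphRepository<Person> {

    // derived finder: the query is generated from the method name
    Person findByName(String name);
}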

4.2. Configuring Neo4jRepositories

As the next step, we have to let Spring know about the relevant repositories by indicating them in the Neo4jConfiguration class created in section 3:

@Configuration
@ComponentScan("com.baeldung.spring.data.neo4j")
@EnableNeo4jRepositories(
  basePackages = "com.baeldung.spring.data.neo4j.repostory")
public class LibraryNeo4jConfiguration extends Neo4jConfiguration {
    //
}

5. The Full Data Model

We already started looking at the data model, so let’s now lay it all out – the full Movie, Role and Person. The Person entity references the Movie entity through the Role relationship.

@NodeEntity
public class Movie {

    @GraphId
    Long id;

    private String title;

    private int released;

    private String tagline;

    @Relationship(type="ACTED_IN", direction = Relationship.INCOMING)

    private List<Role> roles;

    // standard constructor, getters and setters 
}

Notice how we’ve annotated Movie with @NodeEntity to indicate that this class is directly mapped to a node in Neo4j.

@JsonIdentityInfo(generator=JSOGGenerator.class)
@NodeEntity
public class Person {

    @GraphId
    Long id;

    private String name;

    private int born;

    @Relationship(type = "ACTED_IN")
    private List<Movie> movies;

    // standard constructor, getters and setters 
}

@JsonIdentityInfo(generator=JSOGGenerator.class)
@RelationshipEntity(type = "ACTED_IN")
public class Role {

    @GraphId
    Long id;

    private Collection<String> roles;

    @StartNode
    private Person person;

    @EndNode
    private Movie movie;

    // standard constructor, getters and setters 
}

Of course, these last couple of classes are similarly annotated, and the movies reference links Person to the Movie class by the “ACTED_IN” relationship.

6. Data Access using MovieRepository

6.1. Saving a New Movie Object

Let’s save some data – first, a new Movie, then a Person and of course a Role – including all the relation data we have as well:

Movie italianJob = new Movie();
italianJob.setTitle("The Italian Job");
italianJob.setReleased(1999);
movieRepository.save(italianJob);

Person mark = new Person();
mark.setName("Mark Wahlberg");
personRepository.save(mark);

Role charlie = new Role();
charlie.setMovie(italianJob);
charlie.setPerson(mark);
Collection<String> roleNames = new HashSet<>();
roleNames.add("Charlie Croker");
charlie.setRoles(roleNames);
List<Role> roles = new ArrayList<>();
roles.add(charlie);
italianJob.setRoles(roles);
movieRepository.save(italianJob);

6.2. Retrieving an Existing Movie Object by Title

Let’s now verify the inserted movie by retrieving it using its title, which is a custom operation:

Movie result = movieRepository.findByTitle(title);

6.3. Retrieving an Existing Movie Object by a Part of the Title

It is possible to search for an existing movie using a part of the title:

Collection<Movie> result = movieRepository.findByTitleContaining("Italian");

6.4. Retrieving All the Movies

All the movies can be retrieved at once and then checked for the correct count:

Collection<Movie> result = (Collection<Movie>) movieRepository.findAll();

However, there are a number of find methods provided with default behavior that are useful for custom requirements; not all of them are described here.

6.5. Count the Existing Movie Objects

After inserting several movie objects, we can get the existing movie count:

long movieCount = movieRepository.count();

6.6. Deleting an Existing Movie

movieRepository.delete(movieRepository.findByTitle("The Italian Job"));

After deleting the inserted movie, we can search for the movie object and verify the result is null:

assertNull(movieRepository.findByTitle("The Italian Job"));

6.7. Delete all Inserted Data

It is possible to delete all the elements in the database, leaving the database empty:

movieRepository.deleteAll();

This operation removes all data from the database.

7. Conclusion

In this tutorial, we went through the basics of Spring Data Neo4j using a very simple example.

However, Neo4j is capable of catering to very advanced and complex applications with huge sets of relationships and networks. And Spring Data Neo4j also offers advanced features to map annotated entity classes to the Neo4j graph database.

The implementation of the above code snippets and examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

Testing with Hamcrest


1. Overview

Hamcrest is a well-known framework used for unit testing in the Java ecosystem. It’s bundled with JUnit and, simply put, it uses existing predicates – called matcher classes – for making assertions.

In this tutorial, we will explore the Hamcrest API and learn how to take advantage of it to write neater and more intuitive unit tests for our software.

2. Hamcrest Setup

We can use Hamcrest with maven by adding the following dependency to our pom.xml file:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-all</artifactId>
    <version>1.3</version>
</dependency>

The latest version of this library can always be found here.

3. An Example Test

Hamcrest is commonly used with JUnit and other testing frameworks for making assertions. Specifically, instead of using JUnit’s numerous assert methods, we only use the API’s single assertThat statement with appropriate matchers.

Let’s look at an example that tests two Strings for equality regardless of case. This should give us a clear idea of how Hamcrest fits into a testing method:

public class StringMatcherTest {
    
    @Test
    public void given2Strings_whenEqual_thenCorrect() {
        String a = "foo";
        String b = "FOO";
        assertThat(a, equalToIgnoringCase(b));
    }
}

In the following sections we shall take a look at several other common matchers Hamcrest offers.

4. The Object Matcher

Hamcrest provides matchers for making assertions on arbitrary Java objects.

To assert that the toString method of an Object returns a specified String:

@Test
public void givenBean_whenToStringReturnsRequiredString_thenCorrect(){
    Person person = new Person("Barrack", "Washington");
    String str = person.toString();
    assertThat(person, hasToString(str));
}

We can also check that one class is a sub-class of another:

@Test
public void given2Classes_whenOneInheritsFromOther_thenCorrect(){
    assertThat(Cat.class, typeCompatibleWith(Animal.class));
}

5. The Bean Matcher

We can use Hamcrest‘s Bean matcher to inspect properties of a Java bean.

Assume the following Person bean:

public class Person {
    String name;
    String address;

    public Person(String personName, String personAddress) {
        name = personName;
        address = personAddress;
    }
}

We can check if the bean has the property name, like so:

@Test
public void givenBean_whenHasValue_thenCorrect() {
    Person person = new Person("Baeldung", "New York");
    assertThat(person, hasProperty("name"));
}

We can also check if Person has the address property, initialized to New York:

@Test
public void givenBean_whenHasCorrectValue_thenCorrect() {
    Person person = new Person("Baeldung", "New York");
    assertThat(person, hasProperty("address", equalTo("New York")));
}

We can also check if two Person objects are constructed with the same values:

@Test
public void given2Beans_whenHavingSameValues_thenCorrect() {
    Person person1 = new Person("Baeldung", "New York");
    Person person2 = new Person("Baeldung", "New York");
    assertThat(person1, samePropertyValuesAs(person2));
}

6. The Collection Matcher

Hamcrest provides matchers for inspecting Collections.

Simple check to find out if a Collection is empty:

@Test
public void givenCollection_whenEmpty_thenCorrect() {
    List<String> emptyList = new ArrayList<>();
    assertThat(emptyList, empty());
}

To check the size of a Collection:

@Test
public void givenAList_whenChecksSize_thenCorrect() {
    List<String> hamcrestMatchers = Arrays.asList(
      "collections", "beans", "text", "number");
    assertThat(hamcrestMatchers, hasSize(4));
}

We can also use it to assert that an array has a required size:

@Test
public void givenArray_whenChecksSize_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers, arrayWithSize(4));
}

To check if a Collection contains given members, regardless of order:

@Test
public void givenAListAndValues_whenChecksListForGivenValues_thenCorrect() {
    List<String> hamcrestMatchers = Arrays.asList(
      "collections", "beans", "text", "number");
    assertThat(hamcrestMatchers,
      containsInAnyOrder("beans", "text", "collections", "number"));
}

To further assert that the Collection members are in given order:

@Test
public void givenAListAndValues_whenChecksListForGivenValuesWithOrder_thenCorrect() {
    List<String> hamcrestMatchers = Arrays.asList(
      "collections", "beans", "text", "number");
    assertThat(hamcrestMatchers,
      contains("collections", "beans", "text", "number"));
}

To check if an array has a single given element:

@Test
public void givenArrayAndValue_whenValueFoundInArray_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers, hasItemInArray("text"));
}

We can also use an alternative matcher for the same test:

@Test
public void givenValueAndArray_whenValueIsOneOfArrayElements_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat("text", isOneOf(hamcrestMatchers));
}

Or we can do the same with yet another matcher, like so:

@Test
public void givenValueAndArray_whenValueFoundInArray_thenCorrect() {
    String[] array = new String[] { "collections", "beans", "text",
      "number" };
    assertThat("beans", isIn(array));
}

We can also check if the array contains given elements regardless of order:

@Test
public void givenArrayAndValues_whenValuesFoundInArray_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers,
      arrayContainingInAnyOrder("beans", "collections", "number", "text"));
}

To check if the array contains given elements but in the given order:

@Test
public void givenArrayAndValues_whenValuesFoundInArrayInOrder_thenCorrect() {
    String[] hamcrestMatchers = { "collections", "beans", "text", "number" };
    assertThat(hamcrestMatchers,
      arrayContaining("collections", "beans", "text", "number"));
}

When our Collection is a Map, we can use the following matchers:

To check if it contains a given key:

@Test
public void givenMapAndKey_whenKeyFoundInMap_thenCorrect() {
    Map<String, String> map = new HashMap<>();
    map.put("blogname", "baeldung");
    assertThat(map, hasKey("blogname"));
}

and a given value:

@Test
public void givenMapAndValue_whenValueFoundInMap_thenCorrect() {
    Map<String, String> map = new HashMap<>();
    map.put("blogname", "baeldung");
    assertThat(map, hasValue("baeldung"));
}

and finally a given entry (key, value):

@Test
public void givenMapAndEntry_whenEntryFoundInMap_thenCorrect() {
    Map<String, String> map = new HashMap<>();
    map.put("blogname", "baeldung");
    assertThat(map, hasEntry("blogname", "baeldung"));
}

7. The Number Matcher

The Number matchers are used to perform assertions on variables of the Number class.

To check greaterThan condition:

@Test
public void givenAnInteger_whenGreaterThan0_thenCorrect() {
    assertThat(1, greaterThan(0));
}

To check greaterThan or equalTo condition:

@Test
public void givenAnInteger_whenGreaterThanOrEqTo5_thenCorrect() {
    assertThat(5, greaterThanOrEqualTo(5));
}

To check lessThan condition:

@Test
public void givenAnInteger_whenLessThan0_thenCorrect() {
    assertThat(-1, lessThan(0));
}

To check lessThan or equalTo condition:

@Test
public void givenAnInteger_whenLessThanOrEqTo5_thenCorrect() {
    assertThat(-1, lessThanOrEqualTo(5));
}

To check closeTo condition:

@Test
public void givenADouble_whenCloseTo_thenCorrect() {
    assertThat(1.2, closeTo(1, 0.5));
}

Let’s pay close attention to the last matcher, closeTo. The first argument, the operand, is the value to which the target is compared, and the second argument is the allowable deviation from the operand. This means the test will pass if the target lies anywhere between operand - deviation and operand + deviation.
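
Note that the deviation is inclusive, so a target sitting exactly on the boundary still passes; a quick sketch:

@Test
public void givenADouble_whenCloseToBoundary_thenCorrect() {
    // 1.5 equals operand (1) + deviation (0.5), so the assertion passes
    assertThat(1.5, closeTo(1, 0.5));
}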

8. The Text Matcher

Assertion on Strings is made easier, neater and more intuitive with Hamcrest‘s text matchers. We are going to take a look at them in this section.

To check if a String is empty:

@Test
public void givenString_whenEmpty_thenCorrect() {
    String str = "";
    assertThat(str, isEmptyString());
}

To check if a String is empty or null:

@Test
public void givenString_whenEmptyOrNull_thenCorrect() {
    String str = null;
    assertThat(str, isEmptyOrNullString());
}

To check for equality of two Strings while ignoring white space:

@Test
public void given2Strings_whenEqualRegardlessWhiteSpace_thenCorrect() {
    String str1 = "text";
    String str2 = " text ";
    assertThat(str1, equalToIgnoringWhiteSpace(str2));
}

We can also check for the presence of one or more sub-strings in a given String in a given order:

@Test
public void givenString_whenContainsGivenSubstring_thenCorrect() {
    String str = "calligraphy";
    assertThat(str, stringContainsInOrder(Arrays.asList("call", "graph")));
}

Finally, we can check for equality of two Strings regardless of case:

@Test
public void given2Strings_whenEqual_thenCorrect() {
    String a = "foo";
    String b = "FOO";
    assertThat(a, equalToIgnoringCase(b));
}

9. The Core API

The Hamcrest core API is to be used by third-party framework providers. However, it offers us some great constructs to make our unit tests more readable and also some core matchers that can be used just as easily.

Readability with the is construct on a matcher:

@Test
public void given2Strings_whenIsEqualRegardlessWhiteSpace_thenCorrect() {
    String str1 = "text";
    String str2 = " text ";
    assertThat(str1, is(equalToIgnoringWhiteSpace(str2)));
}

The is construct on a simple data type:

@Test
public void given2Strings_whenIsEqual_thenCorrect() {
    String str1 = "text";
    String str2 = "text";
    assertThat(str1, is(str2));
}

Negation with the not construct on a matcher:

@Test
public void given2Strings_whenIsNotEqualRegardlessWhiteSpace_thenCorrect() {
    String str1 = "text";
    String str2 = " texts ";
    assertThat(str1, not(equalToIgnoringWhiteSpace(str2)));
}

The not construct on a simple data type:

@Test
public void given2Strings_whenNotEqual_thenCorrect() {
    String str1 = "text";
    String str2 = "texts";
    assertThat(str1, not(str2));
}

Check if a String contains a given sub-string:

@Test
public void givenAStrings_whenContainsAnotherGivenString_thenCorrect() {
    String str1 = "calligraphy";
    String str2 = "call";
    assertThat(str1, containsString(str2));
}

Check if a String starts with given sub-string:

@Test
public void givenAString_whenStartsWithAnotherGivenString_thenCorrect() {
    String str1 = "calligraphy";
    String str2 = "call";
    assertThat(str1, startsWith(str2));
}

Check if a String ends with given sub-string:

@Test
public void givenAString_whenEndsWithAnotherGivenString_thenCorrect() {
    String str1 = "calligraphy";
    String str2 = "phy";
    assertThat(str1, endsWith(str2));
}

Check if two Objects are the same instance:

@Test
public void given2Objects_whenSameInstance_thenCorrect() {
    Cat cat=new Cat();
    assertThat(cat, sameInstance(cat));
}

Check if an Object is an instance of a given class:

@Test
public void givenAnObject_whenInstanceOfGivenClass_thenCorrect() {
    Cat cat=new Cat();
    assertThat(cat, instanceOf(Cat.class));
}

Check if all members of a Collection meet a condition:

@Test
public void givenList_whenEachElementGreaterThan0_thenCorrect() {
    List<Integer> list = Arrays.asList(1, 2, 3);
    int baseCase = 0;
    assertThat(list, everyItem(greaterThan(baseCase)));
}

Check that a String is not null:

@Test
public void givenString_whenNotNull_thenCorrect() {
    String str = "notnull";
    assertThat(str, notNullValue());
}

Chain conditions together; the test passes when the target meets any of the conditions, similar to logical OR:

@Test
public void givenString_whenMeetsAnyOfGivenConditions_thenCorrect() {
    String str = "calligraphy";
    String start = "call";
    String end = "foo";
    assertThat(str, anyOf(startsWith(start), containsString(end)));
}

Chain conditions together; the test passes only when the target meets all of the conditions, similar to logical AND:

@Test
public void givenString_whenMeetsAllOfGivenConditions_thenCorrect() {
    String str = "calligraphy";
    String start = "call";
    String end = "phy";
    assertThat(str, allOf(startsWith(start), endsWith(end)));
}

10. A Custom Matcher

We can define our own matcher by extending TypeSafeMatcher. In this section, we will create a custom matcher which allows a test to pass only when the target is a positive integer.

public class IsPositiveInteger extends TypeSafeMatcher<Integer> {

    public void describeTo(Description description) {
        description.appendText("a positive integer");
    }

    @Factory
    public static Matcher<Integer> isAPositiveInteger() {
        return new IsPositiveInteger();
    }

    @Override
    protected boolean matchesSafely(Integer integer) {
        return integer > 0;
    }

}

We only need to implement the matchesSafely method, which checks that the target is indeed a positive integer, and the describeTo method, which produces a failure message in case the test does not pass.

Here is a test that uses our new custom matcher:

@Test
public void givenInteger_whenAPositiveValue_thenCorrect() {
    int num = 1;
    assertThat(num, isAPositiveInteger());
}

and here is the failure message we get when we pass in a non-positive integer:

java.lang.AssertionError: Expected: a positive integer but: was <-1>
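
For reference, a failing assertion along these lines would produce that message:

@Test
public void givenInteger_whenANegativeValue_thenFailureMessage() {
    assertThat(-1, isAPositiveInteger());
}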

11. Conclusion

In this tutorial, we have explored the Hamcrest API and learned how to write better and more maintainable unit tests with it.

The full implementation of all these examples and code snippets can be found in my Hamcrest GitHub project – this is an Eclipse-based project, so it should be easy to import and run as is.


Introduction to Couchbase SDK for Java


1. Introduction

In this introduction to the Couchbase SDK for Java, we demonstrate how to interact with a Couchbase document database, covering basic concepts such as creating a Couchbase environment, connecting to a cluster, opening data buckets, using the basic persistence operations, and working with document replicas.

2. Maven Dependencies

If you are using Maven, add the following to your pom.xml file:

<dependency>
    <groupId>com.couchbase.client</groupId>
    <artifactId>java-client</artifactId>
    <version>2.2.6</version>
</dependency>

3. Getting Started

The SDK provides the CouchbaseEnvironment interface and an implementation class DefaultCouchbaseEnvironment containing default settings for managing access to clusters and buckets. The default environment settings can be overridden if necessary, as we will see in section 3.2.

Important: The official Couchbase SDK documentation cautions users to ensure that only one CouchbaseEnvironment is active in the JVM, since the use of two or more may result in unpredictable behavior.

3.1. Connecting to a Cluster with a Default Environment

To have the SDK automatically create a CouchbaseEnvironment with default settings and associate it with our cluster, we can connect to the cluster simply by providing the IP address or hostname of one or more nodes in the cluster.

In this example, we connect to a single-node cluster on our local workstation:

Cluster cluster = CouchbaseCluster.create("localhost");

To connect to a multi-node cluster, we would specify at least two nodes in case one of them is unavailable when the application attempts to establish the connection:

Cluster cluster = CouchbaseCluster.create("192.168.4.1", "192.168.4.2");

Note: It is not necessary to specify every node in the cluster when creating the initial connection. The CouchbaseEnvironment will query the cluster once the connection is established in order to discover the remaining nodes (if any).

3.2. Using a Custom Environment

If your application requires fine tuning of any of the settings provided by DefaultCouchbaseEnvironment, you can create a custom environment and then use that environment when connecting to your cluster.

Here’s an example that connects to a single-node cluster using a custom CouchbaseEnvironment with a ten-second connection timeout and a three-second key-value lookup timeout:

CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
  .connectTimeout(10000)
  .kvTimeout(3000)
  .build();
Cluster cluster = CouchbaseCluster.create(env, "localhost");

And to connect to a multi-node cluster with the custom environment:

Cluster cluster = CouchbaseCluster.create(env,
  "192.168.4.1", "192.168.4.2");

3.3. Opening a Bucket

Once you have connected to the Couchbase cluster, you can open one or more buckets.

When you first set up a Couchbase cluster, the installation package automatically creates a bucket named “default” with a blank password.

Here’s one way to open the “default” bucket when it has a blank password:

Bucket bucket = cluster.openBucket();

You can also specify the bucket name when opening it:

Bucket bucket = cluster.openBucket("default");

For any other bucket with a blank password, you must supply the bucket name:

Bucket myBucket = cluster.openBucket("myBucket");

To open a bucket that has a non-blank password, you must supply the bucket name and password:

Bucket bucket = cluster.openBucket("bucketName", "bucketPassword");

4. Persistence Operations

In this section, we show how to perform CRUD operations in Couchbase. In our examples, we will be working with simple JSON documents representing a person, as in this sample document:

{
  "name": "John Doe",
  "type": "Person",
  "email": "john.doe@mydomain.com",
  "homeTown": "Chicago"
}

The “type” attribute is not required; however, it is common practice to include an attribute specifying the document type in case one decides to store multiple types in the same bucket.

4.1. Document IDs

Each document stored in Couchbase is associated with an id that is unique to the bucket in which the document is being stored. The document id is analogous to the primary key column in a traditional relational database row.

Document id values must be UTF-8 strings of 250 or fewer bytes.

Since Couchbase does not provide a mechanism for automatically generating the id on insertion, we must provide our own.

Common strategies for generating ids include key-derivation using a natural key, such as the “email” attribute shown in our sample document, and the use of UUID strings.

For our examples, we will generate random UUID strings.
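
As a quick illustration of the natural-key strategy, the id could be derived from the document type and the email attribute; the "Person::" prefix below is just an illustrative convention, not something the SDK requires:

// natural-key id derived from the document's email attribute (hypothetical convention)
String naturalId = "Person::" + "john.doe@mydomain.com";
// random UUID id, as used in the rest of this article
String generatedId = UUID.randomUUID().toString();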

4.2. Inserting a Document

Before we can insert a new document into our bucket, we must first create an instance of JsonObject containing the document’s contents:

JsonObject content = JsonObject.empty()
  .put("name", "John Doe")
  .put("type", "Person")
  .put("email", "john.doe@mydomain.com")
  .put("homeTown", "Chicago");

Next, we create a JsonDocument object consisting of an id value and the JsonObject:

String id = UUID.randomUUID().toString();
JsonDocument document = JsonDocument.create(id, content);

To add a new document to the bucket, we use the insert method:

JsonDocument inserted = bucket.insert(document);

The JsonDocument returned contains all of the properties of the original document, plus a value known as the “CAS” (compare-and-swap) value that Couchbase uses for version tracking.
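
The CAS value can be read back from the returned document; a quick sketch:

long casValue = inserted.cas(); // server-assigned version used for optimistic locking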

If a document with the supplied id already exists in the bucket, Couchbase throws a DocumentAlreadyExistsException.

We can also use the upsert method, which will either insert the document (if the id is not found) or update the document (if the id is found):

JsonDocument upserted = bucket.upsert(document);

4.3. Retrieving a Document

To retrieve a document by its id, we use the get method:

JsonDocument retrieved = bucket.get(id);

If no document exists with the given id, the method returns null.

4.4. Updating or Replacing a Document

We can update an existing document using the upsert method:

JsonObject content = document.content();
content.put("homeTown", "Kansas City");
JsonDocument upserted = bucket.upsert(document);

As we mentioned in section 4.2, upsert will succeed whether a document with the given id was found or not.

If enough time has passed between the time we originally retrieved the document and our attempt to upsert the revised document, there is a possibility that the original document will have been deleted from the bucket by another process or user.

If we need to guard against this scenario in our application, we can instead use the replace method, which fails with a DocumentDoesNotExistException if a document with the given id is not found in Couchbase:

JsonDocument replaced = bucket.replace(document);

4.5. Deleting a Document

To delete a Couchbase document, we use the remove method:

JsonDocument removed = bucket.remove(document);

You may also remove by id:

JsonDocument removed = bucket.remove(id);

The JsonDocument object returned has only the id and CAS properties set; all other properties (including the JSON content) are removed from the returned object.

If no document exists with the given id, Couchbase throws a DocumentDoesNotExistException.

5. Working with Replicas

This section discusses Couchbase’s virtual bucket and replica architecture and introduces a mechanism for retrieving a replica of a document in the event that a document’s primary node is unavailable.

5.1. Virtual Buckets and Replicas

Couchbase distributes a bucket’s documents across a collection of 1024 virtual buckets, or vbuckets, using a hashing algorithm on the document id to determine the vbucket in which to store each document.

Each Couchbase bucket can also be configured to maintain one or more replicas of each vbucket. Whenever a document is inserted or updated and written to its vbucket, Couchbase initiates a process to replicate the new or updated document to its replica vbucket.

In a multi-node cluster, Couchbase distributes vbuckets and replica vbuckets among all the data nodes in the cluster. A vbucket and its replica vbucket are kept on separate data nodes in order to achieve a certain measure of high-availability.

5.2. Retrieving a Document From a Replica

When retrieving a document by its id, if the document’s primary node is down or otherwise unreachable due to a network error, Couchbase throws an exception.

You can have your application catch the exception and attempt to retrieve one or more replicas of the document using the getFromReplica method.

The following code would use the first replica found:

JsonDocument doc;
try {
    doc = bucket.get(id);
} catch (CouchbaseException e) {
    List<JsonDocument> list = bucket.getFromReplica(id, ReplicaMode.FIRST);
    if (!list.isEmpty()) {
        doc = list.get(0);
    }
}

Note that it is possible, when writing your application, to have write operations block until persistence and replication are complete. However, the more common practice, for reasons of performance, is to have the application return from writes immediately after writing to the memory of a document’s primary node, because disk writes are inherently slower than memory writes.

When using the latter approach, if a recently updated document’s primary node should fail or go offline before the updates have been fully replicated, replica reads may or may not return the latest version of the document.

It is also worth noting that Couchbase retrieves replicas (if any are found) asynchronously. Therefore if your bucket is configured for multiple replicas, there is no guarantee as to the order in which the SDK returns them, and you may want to loop through all the replicas found in order to ensure that your application has the latest replica version available:

JsonDocument doc = null;
long maxCasValue = -1;
for (JsonDocument replica : bucket.getFromReplica(id, ReplicaMode.ALL)) {
    if (replica.cas() > maxCasValue) {
        doc = replica;
        maxCasValue = replica.cas();
    }
}

6. Conclusion

We have introduced some basic usage scenarios that you will need in order to get started with the Couchbase SDK.

Code snippets presented in this tutorial can be found in the github project.

You can learn more about the SDK at the official Couchbase SDK developer documentation site.



JMockit 101


1. Introduction

With this article, we’ll be starting a new series centered around the mocking toolkit JMockit.

In this first installment we’ll talk about what JMockit is, its characteristics, and how mocks are created and used with it.

Later articles will focus on and go deeper into its capabilities.

2. JMockit

2.1. Introduction

First of all, let’s talk about what JMockit is: a Java framework for mocking objects in tests (you can use it for both JUnit and TestNG ones).

It uses Java’s instrumentation APIs to modify the classes’ bytecode during runtime in order to dynamically alter their behavior. Some of its strong points are its expressibility and its out-of-the-box ability to mock static and private methods.

Maybe you’re new to JMockit, but it’s definitely not because the toolkit is new. JMockit’s development started in June 2006 and its first stable release dates to December 2012, so it’s been around for some time now (the current version is 1.24 at the time of writing).

2.2. The Expressibility of JMockit

As mentioned before, one of the strongest points of JMockit is its expressibility. To create mocks and define their behavior, instead of calling methods from the mocking API, you just need to define them directly.

This means that you won’t do things like:

API.expect(mockInstance.method()).andThenReturn(value).times(2);

Instead, expect things like:

new Expectations() {{
    mockInstance.method();
    result = value;
    times = 2;
}};

It might seem like more code, but you could simply put all three lines on one. The really important part is that you don’t end up with a big “train” of chained method calls. Instead, you end up with a definition of how you want the mock to behave when called.

If you take into account that in the result = value part you could return anything (fixed values, dynamically generated values, exceptions, etc.), the expressiveness of JMockit becomes even more evident, as the sketch below shows.
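
A quick sketch of that flexibility, assuming a @Mocked field named mockInstance: consecutive assignments to result define consecutive outcomes, and assigning a Throwable makes the mocked method throw it:

new Expectations() {{
    mockInstance.method();
    result = "a fixed value";                   // first call returns this
    result = new IllegalStateException("boom"); // second call throws this
}};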

2.3. The Record-Replay-Verify Model

Tests using JMockit are divided into three differentiated stages: record, replay and verify.

  1. In the record phase, during test preparation and before the invocations of the methods we want to be executed, we define the expected behavior for all the mocks to be used during the next stage.
  2. The replay phase is the one in which the code under test is executed. The invocations of mocked methods/constructors recorded in the previous stage are now replayed.
  3. Lastly, in the verify phase, we assert that the result of the test was the one we expected (and that the mocks behaved and were used according to what was defined in the record phase).

With a code example, a wireframe for a test would look something like this:

@Test
public void testWireframe() {
   // preparation code not specific to JMockit, if any

   new Expectations() {{ 
       // define expected behaviour for mocks
   }};

   // execute code-under-test

   new Verifications() {{ 
       // verify mocks
   }};

   // assertions
}

3. Creating Mocks

3.1. JMockit’s Annotations

When using JMockit, the easiest way to use mocks is via annotations. There are three for creating mocks (@Mocked, @Injectable and @Capturing) and one to specify the class under test (@Tested).

When using the @Mocked annotation on a field, it will create mocked instances of each and every new object of that particular class.

On the other hand, with the @Injectable annotation, only one mocked instance will be created.

The last annotation, @Capturing, will behave like @Mocked, but will extend its reach to every subclass extending or implementing the annotated field’s type.

3.2. Passing Arguments to Tests

When using JMockit, it is possible to pass mocks as test parameters. This is quite useful for creating a mock just for one particular test, such as a complex model object that needs specific behavior for a single test. It would look something like this:

@RunWith(JMockit.class)
public class TestPassingArguments {
   
   @Injectable
   private Foo mockForEveryTest;

   @Tested
   private Bar bar;

   @Test
   public void testExample(@Mocked Xyz mockForJustThisTest) {
       new Expectations() {{
           mockForEveryTest.someMethod("foo");
           mockForJustThisTest.someOtherMethod();
       }};

       bar.codeUnderTest();
   }
}

This way of creating a mock by passing it as a parameter, instead of having to call some API method, again shows the expressibility we’ve been talking about since the beginning.

3.3. Complete Example

To end this article, we’ll be including a complete example of a test using JMockit.

In this example, we’ll be testing a Performer class that uses Collaborator in its perform() method. The perform() method receives a Model object as a parameter and calls its getInfo() method, which returns a String. That String is passed to the collaborate() method of Collaborator, which returns true for this particular test, and that value is in turn passed to the receive() method of Collaborator.

So, the tested classes will look like this:

public class Model {
    public String getInfo(){
        return "info";
    }
}

public class Collaborator {
    public boolean collaborate(String string){
        return false;
    }
    public void receive(boolean bool){
        // NOOP
    }
}

public class Performer {
    private Collaborator collaborator;

    public void perform(Model model) {
        boolean value = collaborator.collaborate(model.getInfo());
        collaborator.receive(value);
    }
}

And the test’s code will end up being like:

@RunWith(JMockit.class)
public class PerformerTest {

    @Injectable
    private Collaborator collaborator;

    @Tested
    private Performer performer;

    @Test
    public void testThePerformMethod(@Mocked Model model) {
        new Expectations() {{
            model.getInfo(); result = "bar";
            collaborator.collaborate("bar"); result = true;
        }};
        performer.perform(model);
        new Verifications() {{
            collaborator.receive(true);
        }};
    }
}

4. Conclusion

With this, we’ll wrap up our practical intro to JMockit. If you want to learn more about JMockit, stay tuned for future articles.

The full implementation of this tutorial can be found on the GitHub project.

Intro to QueryDSL

1. Introduction

This is an introductory article to get you up and running with the powerful QueryDSL API for data persistence.

The goal here is to give you the practical tools to add QueryDSL into your project, understand the structure and purpose of the generated classes, and get a basic understanding of how to write type-safe database queries for most common scenarios.

2. The Purpose of QueryDSL

Object-relational mapping frameworks are at the core of Enterprise Java. They compensate for the mismatch between the object-oriented approach and the relational database model, and they allow developers to write cleaner and more concise persistence code and domain logic.

However, one of the most difficult design choices for an ORM framework is the API for building correct and type-safe queries.

One of the most widely used Java ORM frameworks, Hibernate (and the closely related JPA standard), offers a string-based query language, HQL (JPQL), that is very similar to SQL. The obvious drawbacks of this approach are the lack of type safety and the absence of static query checking. Also, in more complex cases (for instance, when the query needs to be constructed at runtime depending on some conditions), building an HQL query typically involves concatenating strings, which is unsafe and error-prone.
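
As a hypothetical illustration of the problem, consider a JPQL string assembled at runtime; the misspelled loginn below compiles just fine and blows up only when the query is parsed:

// hypothetical example: the misspelled "loginn" is not caught at compile time
// and fails only at runtime, when the query string is parsed
String jpql = "SELECT u FROM User u WHERE u.loginn = :login";
List<User> users = em.createQuery(jpql, User.class)
  .setParameter("login", "David")
  .getResultList();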

The JPA 2.0 standard brought an improvement in the form of the Criteria Query API, a new and type-safe method of building queries that takes advantage of metamodel classes generated during annotation preprocessing. Unfortunately, though groundbreaking in essence, the Criteria Query API ended up very verbose and practically unreadable. Here’s an example from the Java EE tutorial for generating a query as simple as SELECT p FROM Pet p:

EntityManager em = ...;
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Pet> cq = cb.createQuery(Pet.class);
Root<Pet> pet = cq.from(Pet.class);
cq.select(pet);
TypedQuery<Pet> q = em.createQuery(cq);
List<Pet> allPets = q.getResultList();

No wonder the more ergonomic QueryDSL library soon emerged, based on the same idea of generated metadata classes, yet implemented with a fluent and readable API.

3. QueryDSL Class Generation

Let’s start with generating and exploring the magical metaclasses that account for the fluent API of QueryDSL.

3.1. Adding QueryDSL to Maven Build

Including QueryDSL in your project is as simple as adding several dependencies to your build file and configuring a plugin for processing JPA annotations. Let’s start with the dependencies. The version of QueryDSL libraries should be extracted to a separate property inside the <project><properties> section, as follows (for the latest version of QueryDSL libraries, check the Maven Central repository):

<properties>
    <querydsl.version>4.1.3</querydsl.version>
</properties>

Next, add the following dependencies to the <project><dependencies> section of your pom.xml file:

<dependencies>

    <dependency>
        <groupId>com.querydsl</groupId>
        <artifactId>querydsl-apt</artifactId>
        <version>${querydsl.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>com.querydsl</groupId>
        <artifactId>querydsl-jpa</artifactId>
        <version>${querydsl.version}</version>
    </dependency>

</dependencies>

The querydsl-apt dependency is an annotation processing tool (APT), an implementation of the corresponding Java API that allows processing of annotations in source files before they move on to the compilation stage. This tool generates the so-called Q-types, classes that relate directly to the entity classes of your application but are prefixed with the letter Q. For instance, if you have a User class marked with the @Entity annotation in your application, then the generated Q-type will reside in a QUser.java source file.

The provided scope of the querydsl-apt dependency means that this jar should be made available only at build time, but not included in the application artifact.

The querydsl-jpa library is QueryDSL itself, designed to be used together with a JPA application.

To configure the annotation processing plugin that takes advantage of querydsl-apt, add the following plugin configuration to your pom, inside the <project><build><plugins> element:

<plugin>
    <groupId>com.mysema.maven</groupId>
    <artifactId>apt-maven-plugin</artifactId>
    <version>1.1.3</version>
    <executions>
        <execution>
            <goals>
                <goal>process</goal>
            </goals>
            <configuration>
                <outputDirectory>target/generated-sources/java</outputDirectory>
                <processor>com.querydsl.apt.jpa.JPAAnnotationProcessor</processor>
            </configuration>
        </execution>
    </executions>
</plugin>

This plugin makes sure that the Q-types are generated during the process goal of the Maven build. The outputDirectory configuration property points to the directory where the Q-type source files will be generated. The value of this property will be useful later on, when you go exploring the Q-files.

You should also add this directory to the source folders of the project, if your IDE does not do this automatically — consult the documentation for your favorite IDE on how to do that.
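
Alternatively, one common way to register the folder in the build itself is the build-helper-maven-plugin; this is a sketch, not part of the QueryDSL setup, so adjust the version as needed:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <version>1.9.1</version>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>add-source</goal>
            </goals>
            <configuration>
                <sources>
                    <!-- same folder as the apt-maven-plugin outputDirectory above -->
                    <source>target/generated-sources/java</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>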

For this article we will use a simple JPA model of a blog service, consisting of Users and their BlogPosts with a one-to-many relationship between them:

@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String login;

    private Boolean disabled;

    @OneToMany(cascade = CascadeType.PERSIST, mappedBy = "user")
    private Set<BlogPost> blogPosts = new HashSet<>(0);

    // getters and setters

}

@Entity
public class BlogPost {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    private String body;

    @ManyToOne
    private User user;

    // getters and setters

}

To generate Q-types for your model, simply run:

mvn compile

3.2. Exploring Generated Classes

Now go to the directory specified in the outputDirectory property of apt-maven-plugin (target/generated-sources/java in our example). You will see a package and class structure that directly mirrors your domain model, except all the classes start with the letter Q (QUser and QBlogPost in our case).

Open the file QUser.java. This is your entry point to building all queries that have User as a root entity. The first thing you’ll notice is the @Generated annotation, which means that this file was automatically generated and should not be edited manually. Should you change any of your domain model classes, you will have to run mvn compile again to regenerate all of the corresponding Q-types.

Aside from several QUser constructors present in this file, you should also take notice of a public static final instance of the QUser class:

public static final QUser user = new QUser("user");

This is the instance that you can use in most of your QueryDSL queries to this entity, except when you need to write some more complex queries, like joining several different instances of a table in a single query.

The last thing to note is that for every field of the entity class there is a corresponding *Path field in the Q-type, like NumberPath id, StringPath login and SetPath blogPosts in the QUser class (notice that the name of the field corresponding to the Set is pluralized). These fields are used as parts of the fluent query API that we will encounter later on.

4. Querying with QueryDSL

4.1. Simple Querying and Filtering

To build a query, first we’ll need an instance of a JPAQueryFactory, which is the preferred way to start the building process. The only thing JPAQueryFactory needs is an EntityManager, which should already be available in your JPA application via an EntityManagerFactory.createEntityManager() call or @PersistenceContext injection.

EntityManagerFactory emf =
  Persistence.createEntityManagerFactory("org.baeldung.querydsl.intro");
EntityManager em = emf.createEntityManager();
JPAQueryFactory queryFactory = new JPAQueryFactory(em);

Now let’s create our first query:

QUser user = QUser.user;

User c = queryFactory.selectFrom(user)
  .where(user.login.eq("David"))
  .fetchOne();

Notice we’ve defined a local variable QUser user and initialized it with the QUser.user static instance. This is done purely for brevity; alternatively, you may statically import the QUser.user field.

The selectFrom method of the JPAQueryFactory starts building a query. We pass it the QUser instance and continue building the conditional clause of the query with the .where() method. The user.login is a reference to the StringPath field of the QUser class that we’ve seen before. The StringPath object also has the .eq() method, which lets us continue building the query fluently by specifying a field equality condition.

Finally, to fetch the value from the database into the persistence context, we end the building chain with a call to the fetchOne() method. This method returns null if the object can’t be found, but throws a NonUniqueResultException if there are multiple entities satisfying the .where() condition.
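
If multiple results are acceptable, the chain can instead end with fetch(), which returns a list (empty when nothing matches); a quick sketch with a hypothetical prefix condition:

List<User> users = queryFactory.selectFrom(user)
  .where(user.login.startsWith("David"))
  .fetch();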

4.2. Ordering and Grouping

Now let’s fetch all users in a list, sorted by login in ascending order.

List<User> c = queryFactory.selectFrom(user)
  .orderBy(user.login.asc())
  .fetch();

This syntax is possible because the *Path classes have the .asc() and .desc() methods. You can also specify several arguments for the .orderBy() method to sort by multiple fields.
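
For example, a query along these lines should sort by login ascending and break ties by id descending:

List<User> users = queryFactory.selectFrom(user)
  .orderBy(user.login.asc(), user.id.desc())
  .fetch();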

Now let’s try something more difficult. Suppose we need to group all posts by title and count duplicating titles. This is done with the .groupBy() clause. We’ll also want to order the titles by resulting occurrence count.

NumberPath<Long> count = Expressions.numberPath(Long.class, "c");

List<Tuple> userTitleCounts = queryFactory.select(
  blogPost.title, blogPost.id.count().as(count))
  .from(blogPost)
  .groupBy(blogPost.title)
  .orderBy(count.desc())
  .fetch();

We selected the blog post title and count of duplicates, grouping by title and then ordering by aggregated count. Notice we first created an alias for the count() field in the .select() clause, because we needed to reference it in the .orderBy() clause.

4.3. Complex Queries with Joins and Subqueries

Let’s find all users that wrote a post titled “Hello World!” For such a query, we can use an inner join. Notice we’ve created an alias blogPost for the joined table to reference it in the .on() clause:

QBlogPost blogPost = QBlogPost.blogPost;

List<User> users = queryFactory.selectFrom(user)
  .innerJoin(user.blogPosts, blogPost)
  .on(blogPost.title.eq("Hello World!"))
  .fetch();

Now let’s try to achieve the same with a subquery:

List<User> users = queryFactory.selectFrom(user)
  .where(user.id.in(
    JPAExpressions.select(blogPost.user.id)
      .from(blogPost)
      .where(blogPost.title.eq("Hello World!"))))
  .fetch();

As we can see, subqueries are very similar to queries, and they are also quite readable, but they start with JPAExpressions factory methods. To connect subqueries with the main query, as always, we reference the aliases defined and used earlier.

4.4. Modifying Data

JPAQueryFactory lets us not only construct queries but also modify and delete records. Let’s change a user’s login and disable the account:

queryFactory.update(user)
  .where(user.login.eq("Ash"))
  .set(user.login, "Ash2")
  .set(user.disabled, true)
  .execute();

We can chain as many .set() clauses as we want for different fields. The .where() clause is optional, so omitting it updates all records at once.

To delete the records matching a certain condition, we can use a similar syntax:

queryFactory.delete(user)
  .where(user.login.eq("David"))
  .execute();

The .where() clause is optional here too, but be careful: omitting it deletes all entities of that type.

You may wonder why JPAQueryFactory doesn’t have an .insert() method. This is a limitation of the JPA Query interface: the underlying javax.persistence.Query.executeUpdate() method is capable of executing update and delete statements, but not inserts. To insert data, you should simply persist the entities with the EntityManager.
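
In other words, inserting stays plain JPA; as a minimal sketch (assuming the usual setters on the User entity shown earlier):

User newUser = new User();
newUser.setLogin("David");
newUser.setDisabled(false);

em.getTransaction().begin();
em.persist(newUser);   // the actual INSERT happens on flush/commit
em.getTransaction().commit();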

If you still want to take advantage of a similar QueryDSL syntax for inserting data, you should use the SQLQueryFactory class, which resides in the querydsl-sql library.

5. Conclusion

In this article we’ve discovered a powerful and type-safe API for persistent object manipulation that is provided by QueryDSL.

We’ve learned how to add QueryDSL to a project and explored the generated Q-types. We’ve also covered some typical use cases and enjoyed their conciseness and readability.

All the source code for the examples can be found in the GitHub repository.

Finally, there are of course many more features that QueryDSL provides, including working with raw SQL, non-persistent collections, NoSQL databases and full-text search – and we’ll explore some of these in future articles.


Java Web Weekly, Issue 132

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Implementing HAL hypermedia REST API using Spring HATEOAS [opencredo.com]

I’ve been talking about HATEOAS for such a long time now and consistently see clients get value out of it for not a lot of effort. And so of course this writeup gets the first spot here in the review.

A solid, practical article detailing quite a bit of what you have to know when implementing a Hypermedia API with Spring.

>> Playing with HTTP/2 [kaczmarzyk.net]

Very nice primer on starting down the HTTP/2 path in the Java ecosystem, while we’re waiting for the long overdue Servlet 4 specification.

>> How I Caused Confusion about Spring Boot [codecentric.de]

A quick writeup going beyond the simple use case and discussing some good practices for how configuration should be handled with Spring Boot.

>> How Functional Programming will (Finally) do Away With the GoF Patterns [jooq.org]

There’s a quote I can’t place right now – that goes something like this: Design Patterns are missing language features.

Java 8 gave us a much more powerful language, which of course changed the landscape when it comes to needing patterns. So I fully expect to keep seeing this style of writeup as Java 8 gets adopted and understood more and more.

>> Tabs vs Spaces: How They Write Java at Google, Twitter, Mozilla and Pied Piper [takipi.com]

Yeah, you read that right – tabs vs spaces! Back to trolling basics 🙂 – it made me reconsider my life choices.

Joking aside, it’s a fun read.

>> Spring Sweets: Using Groovy Configuration As PropertySource [jdriven.com]

Some interesting Groovy alternative configuration for handling properties in Spring.

>> Java 9 on the Brink of a Delivery Date and Scope Review [infoq.com]

Looks like we’re close to getting the real release date for Java 9.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Goldilocks Microservices [vanilla-java.github.io]

Sizing your microservices right and keeping the overall architecture flexible can definitely make or break an implementation; this article is about making the pragmatic choices that make sense for your particular scenario.

>> Adding service virtualization to your Continuous Delivery pipeline [ontestautomation.com]

A quick intro to a highly useful technique and trend that’s been picking up lots of momentum lately, and for good reason – making heavy use of virtualization within a CD pipeline.

Also worth reading:

3. Musings

>> Security insanity: how we keep failing at the basics [troyhunt.com]

A fantastic deep-dive into broken password security rules.

>> Does Github Enhance the Need for Code Review? [daedtech.com]

A three-decade look at the proprietary vs open source software world from the vantage point of the seminal work The Cathedral and the Bazaar.

>> Surviving The Dreaded Company Framework [daedtech.com]

Internal frameworks are a pain point for so many developers, given that for every one that makes sense, a hundred that don’t get built. I cringed when I first read this title.

>> With Commercial Licensing, Invest in Innovation, not Protection [jooq.org]

That’s good advice, and also scary if you actually have a product that the advice applies to. It’s also worth mentioning that the advice comes out of practical experience and not just out of “thinking about it a bit”.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> I always plan my schedule around your incompetence [dilbert.com]

>> My productivity plunges whenever you learn new jargon [dilbert.com]

>> Yeah, that’s how it works [dilbert.com]

5. Pick of the Week

>> Don’t let anyone overpay you [m.signalvnoise.com]


Wiring in Spring: @Autowired, @Resource and @Inject

1. Overview

This Spring Framework article will demonstrate the use of annotations related to dependency injection, namely the @Resource, @Inject, and @Autowired annotations. These annotations provide classes with a declarative way to resolve dependencies. For example:

@Autowired 
ArbitraryClass arbObject;

as opposed to instantiating them directly (the imperative way), for example:

ArbitraryClass arbObject = new ArbitraryClass();

Two of the three annotations belong to the Java extension package: javax.annotation.Resource and javax.inject.Inject. The @Autowired annotation belongs to the org.springframework.beans.factory.annotation package.

Each of these annotations can resolve dependencies either by field injection or by setter injection. A simplified, but practical, example will be used to demonstrate the distinction between the three annotations, based on the execution paths taken by each annotation.

The examples will focus on how to use the three injection annotations during integration testing. The dependency required by the test can either be an arbitrary file or an arbitrary class.

2. The @Resource Annotation

The @Resource annotation is part of the JSR-250 annotation collection and is packaged with Java EE. This annotation has the following execution paths, listed by precedence:

  1. Match by Name
  2. Match by Type
  3. Match by Qualifier

These execution paths are applicable to both setter and field injection.

2.1. Field Injection

Resolving dependencies by field injection is achieved by annotating an instance variable with the @Resource annotation.

2.1.1. Match by Name

The integration test used to demonstrate match-by-name field injection is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceNameType.class)
public class FieldResourceInjectionTest {

    @Resource(name="namedFile")
    private File defaultFile;

    @Test
    public void givenResourceAnnotation_WhenOnField_ThenDependencyValid(){
        assertNotNull(defaultFile);
        assertEquals("namedFile.txt", defaultFile.getName());
    }
}

Let’s go through the code. In the FieldResourceInjectionTest integration test, the dependency is resolved by name by passing the bean name as an attribute value to the @Resource annotation:

@Resource(name="namedFile")
private File defaultFile;

This configuration will resolve dependencies using the match-by-name execution path. The bean namedFile must be defined in the ApplicationContextTestResourceNameType application context.

Note that the bean id and the corresponding reference attribute value must match:

@Configuration
public class ApplicationContextTestResourceNameType {

    @Bean(name="namedFile")
    public File namedFile() {
        File namedFile = new File("namedFile.txt");
        return namedFile;
    }
}

Failure to define the bean in the application context will result in an org.springframework.beans.factory.NoSuchBeanDefinitionException being thrown. This can be demonstrated by changing the attribute value passed to the @Bean annotation in the ApplicationContextTestResourceNameType application context, or by changing the attribute value passed to the @Resource annotation in the FieldResourceInjectionTest integration test.

2.1.2. Match by Type

To demonstrate the match-by-type execution path, just remove the attribute value from the @Resource annotation in the FieldResourceInjectionTest integration test so that it looks as follows:

@Resource
private File defaultFile;

and run the test again.

The test will still pass, because if the @Resource annotation does not receive a bean name as an attribute value, the Spring Framework will proceed with the next level of precedence, match-by-type, to try to resolve the dependency.

2.1.3. Match by Qualifier

To demonstrate the match-by-qualifier execution path, the integration testing scenario will be modified so that there are two beans defined in the ApplicationContextTestResourceQualifier application context:

@Configuration
public class ApplicationContextTestResourceQualifier {

    @Bean(name="defaultFile")
    public File defaultFile() {
        File defaultFile = new File("defaultFile.txt");
        return defaultFile;
    }

    @Bean(name="namedFile")
    public File namedFile() {
        File namedFile = new File("namedFile.txt");
        return namedFile;
    }
}

The QualifierResourceInjectionTest integration test will be used to demonstrate match-by-qualifier dependency resolution. In this scenario, a specific bean dependency needs to be injected into each reference variable:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceQualifier.class)
public class QualifierResourceInjectionTest {

    @Resource
    private File dependency1;
	
    @Resource
    private File dependency2;

    @Test
    public void givenResourceAnnotation_WhenField_ThenDependency1Valid(){
        assertNotNull(dependency1);
        assertEquals("defaultFile.txt", dependency1.getName());
    }

    @Test
    public void givenResourceQualifier_WhenField_ThenDependency2Valid(){
        assertNotNull(dependency2);
        assertEquals("namedFile.txt", dependency2.getName());
    }
}

Run the integration test, and an org.springframework.beans.factory.NoUniqueBeanDefinitionException is thrown. This happens because the application context has found two bean definitions of type File and does not know which bean should resolve the dependency.

To resolve this issue, look at the two fields of the QualifierResourceInjectionTest integration test:

@Resource
private File dependency1;

@Resource
private File dependency2;

and add the following lines of code:

@Qualifier("defaultFile")

@Qualifier("namedFile")

so that the code block looks as follows:

@Resource
@Qualifier("defaultFile")
private File dependency1;

@Resource
@Qualifier("namedFile")
private File dependency2;

Run the integration test again; this time it should pass. The objective of this test was to demonstrate that even if there are multiple beans defined in an application context, the @Qualifier annotation clears any confusion by allowing specific dependencies to be injected into a class.

2.2. Setter Injection

The execution paths taken when injecting dependencies into a field also apply to setter-based injection.

2.2.1. Match by Name

The only difference is the MethodResourceInjectionTest integration test has a setter method:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceNameType.class)
public class MethodResourceInjectionTest {

    private File defaultFile;

    @Resource(name="namedFile")
    protected void setDefaultFile(File defaultFile) {
        this.defaultFile = defaultFile;
    }

    @Test
    public void givenResourceAnnotation_WhenSetter_ThenDependencyValid(){
        assertNotNull(defaultFile);
        assertEquals("namedFile.txt", defaultFile.getName());
    }
}

Resolving dependencies by setter injection is done by annotating a reference variable’s corresponding setter method. Pass the name of the bean dependency as an attribute value to the @Resource annotation:

private File defaultFile;

@Resource(name="namedFile")
protected void setDefaultFile(File defaultFile) {
    this.defaultFile = defaultFile;
}

The namedFile bean dependency will be reused in this example. The bean name and the corresponding attribute value must match.

Run the integration test as-is and it will pass.

To see that the dependency was indeed resolved by the match-by-name execution path, change the attribute value passed to the @Resource annotation to a value of your choice and run the test again. This time, the test will fail with a NoSuchBeanDefinitionException.

2.2.2. Match by Type

To demonstrate setter-based, match-by-type execution, we will use the MethodByTypeResourceTest integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceNameType.class)
public class MethodByTypeResourceTest {

    private File defaultFile;

    @Resource
    protected void setDefaultFile(File defaultFile) {
        this.defaultFile = defaultFile;
    }

    @Test
    public void givenResourceAnnotation_WhenSetter_ThenValidDependency(){
        assertNotNull(defaultFile);
        assertEquals("namedFile.txt", defaultFile.getName());
    }
}

Run this test as-is, and it will pass.

In order to verify that the File dependency was indeed resolved by the match-by-type execution path, change the class type of the defaultFile variable to another class type like String. Execute the MethodByTypeResourceTest integration test again and this time a NoSuchBeanDefinitionException will be thrown.

The exception verifies that match-by-type was indeed used to resolve the File dependency. The NoSuchBeanDefinitionException confirms that the reference variable name does not need to match the bean name. Instead, dependency resolution depends on the bean’s class type matching the reference variable’s class type.

2.2.3. Match by Qualifier

The MethodByQualifierResourceTest integration test will be used to demonstrate the match-by-qualifier execution path:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestResourceQualifier.class)
public class MethodByQualifierResourceTest {

    private File arbDependency;
    private File anotherArbDependency;

    @Test
    public void givenResourceQualifier_WhenSetter_ThenValidDependencies(){
        assertNotNull(arbDependency);
        assertEquals("namedFile.txt", arbDependency.getName());
        assertNotNull(anotherArbDependency);
        assertEquals("defaultFile.txt", anotherArbDependency.getName());
    }

    @Resource
    @Qualifier("namedFile")
    public void setArbDependency(File arbDependency) {
        this.arbDependency = arbDependency;
    }

    @Resource
    @Qualifier("defaultFile")
    public void setAnotherArbDependency(File anotherArbDependency) {
        this.anotherArbDependency = anotherArbDependency;
    }
}

The objective of this test is to demonstrate that even if multiple bean implementations of a particular type are defined in an application context, a @Qualifier annotation can be used together with the @Resource annotation to resolve a dependency.

Similar to field-based dependency injection, if there are multiple beans defined in an application context, a NoUniqueBeanDefinitionException is thrown if no @Qualifier annotation is used to specify which bean should be used to resolve dependencies.

3. The @Inject Annotation

The @Inject annotation belongs to the JSR-330 annotations collection. This annotation has the following execution paths, listed by precedence:

  1. Match by Type
  2. Match by Qualifier
  3. Match by Name

These execution paths are applicable to both setter and field injection. In order to access the @Inject annotation, the javax.inject library has to be declared as a Gradle or Maven dependency.

For Gradle:

testCompile group: 'javax.inject', name: 'javax.inject', version: '1'

For Maven:

<dependency>
    <groupId>javax.inject</groupId>
    <artifactId>javax.inject</artifactId>
    <version>1</version>
</dependency>

3.1. Field Injection

3.1.1. Match by Type

The integration test example will be modified to use another type of dependency, namely the ArbitraryDependency class. The ArbitraryDependency class merely serves as a simple dependency and holds no further significance. It is listed as follows:

@Component
public class ArbitraryDependency {

    private final String label = "Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

The FieldInjectTest integration test in question is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestInjectType.class)
public class FieldInjectTest {

    @Inject
    private ArbitraryDependency fieldInjectDependency;

    @Test
    public void givenInjectAnnotation_WhenOnField_ThenValidDependency(){
        assertNotNull(fieldInjectDependency);
        assertEquals("Arbitrary Dependency",
          fieldInjectDependency.toString());
    }
}

Unlike the @Resource annotation, which resolves dependencies by name first, the default behavior of the @Inject annotation is to resolve dependencies by type.

This means that even if a class reference variable name differs from the bean name, the dependency will still be resolved, provided that the bean is defined in the application context. Note how the reference variable name in the following test:

@Inject
private ArbitraryDependency fieldInjectDependency;

differs from the bean name configured in the application context:

@Bean
public ArbitraryDependency injectDependency() {
    ArbitraryDependency injectDependency = new ArbitraryDependency();
    return injectDependency;
}

and when the test is executed, it is able to resolve the dependency.

3.1.2. Match by Qualifier

But what if there are multiple implementations of a particular class type, and a certain class requires a specific bean? Let us modify the integration testing example so that another dependency is required.

In this example, we subclass the ArbitraryDependency class, used in the match-by-type example, to create the AnotherArbitraryDependency class:

public class AnotherArbitraryDependency extends ArbitraryDependency {

    private final String label = "Another Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

The objective of each test case is to ensure that each dependency is injected correctly into each reference variable:

@Inject
private ArbitraryDependency defaultDependency;

@Inject
private ArbitraryDependency namedDependency;

The FieldQualifierInjectTest integration test used to demonstrate match by qualifier is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestInjectQualifier.class)
public class FieldQualifierInjectTest {

    @Inject
    private ArbitraryDependency defaultDependency;

    @Inject
    private ArbitraryDependency namedDependency;

    @Test
    public void givenInjectQualifier_WhenOnField_ThenDefaultFileValid(){
        assertNotNull(defaultDependency);
        assertEquals("Arbitrary Dependency",
          defaultDependency.toString());
    }

    @Test
    public void givenInjectQualifier_WhenOnField_ThenNamedFileValid(){
        assertNotNull(defaultDependency);
        assertEquals("Another Arbitrary Dependency",
          namedDependency.toString());
    }
}

If there are multiple implementations of a particular class in an application context and the FieldQualifierInjectTest integration test attempts to inject the dependencies in the manner listed below:

@Inject 
private ArbitraryDependency defaultDependency;

@Inject 
private ArbitraryDependency namedDependency;

a NoUniqueBeanDefinitionException will be thrown.

Throwing this exception is the Spring Framework’s way of pointing out that there are multiple implementations of a certain class and it cannot decide which one to use. To clear up the confusion, look at the two fields of the FieldQualifierInjectTest integration test:

@Inject
private ArbitraryDependency defaultDependency;

@Inject
private ArbitraryDependency namedDependency;

and pass the required bean name to the @Qualifier annotation, which is used together with the @Inject annotation. The code block should now look as follows:

@Inject
@Qualifier("defaultFile")
private ArbitraryDependency defaultDependency;

@Inject
@Qualifier("namedFile")
private ArbitraryDependency namedDependency;

The @Qualifier annotation expects a strict match when receiving a bean name. Ensure that the bean name is passed to the qualifier correctly; otherwise, a NoUniqueBeanDefinitionException will be thrown. Run the test again, and this time it should pass.

3.1.3. Match by Name

The FieldByNameInjectTest integration test used to demonstrate match by name is similar to the match-by-type execution path. The only difference is that now a specific bean is required, as opposed to a specific type. In this example, we subclass the ArbitraryDependency class again to produce the YetAnotherArbitraryDependency class:

public class YetAnotherArbitraryDependency extends ArbitraryDependency {

    private final String label = "Yet Another Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

In order to demonstrate the match-by-name execution path, we will use the following integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestInjectName.class)
public class FieldByNameInjectTest {

    @Inject
    @Named("yetAnotherFieldInjectDependency")
    private ArbitraryDependency yetAnotherFieldInjectDependency;

    @Test
    public void givenInjectQualifier_WhenSetOnField_ThenDependencyValid(){
        assertNotNull(yetAnotherFieldInjectDependency);
        assertEquals("Yet Another Arbitrary Dependency",
          yetAnotherFieldInjectDependency.toString());
    }
}

The application context is listed as follows:

@Configuration
public class ApplicationContextTestInjectName {

    @Bean
    public ArbitraryDependency yetAnotherFieldInjectDependency() {
        ArbitraryDependency yetAnotherFieldInjectDependency =
          new YetAnotherArbitraryDependency();
        return yetAnotherFieldInjectDependency;
    }
}

Run the integration test as-is, and it will pass.

In order to verify that the dependency was indeed injected by the match-by-name execution path, change the value yetAnotherFieldInjectDependency that was passed to the @Named annotation to another name of your choice. Run the test again; this time, a NoSuchBeanDefinitionException is thrown.

3.2. Setter Injection

Setter-based injection for the @Inject annotation is similar to the approach used for @Resource setter-based injection. Instead of annotating the reference variable, the corresponding setter method is annotated. The execution paths followed by field-based dependency injection also apply to setter based injection.
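
As a minimal sketch, a setter-based version of the earlier field injection example might look like this (reusing the ArbitraryDependency type from above):

private ArbitraryDependency setterInjectDependency;

// @Inject on the setter follows the same execution paths as field injection
@Inject
public void setSetterInjectDependency(ArbitraryDependency setterInjectDependency) {
    this.setterInjectDependency = setterInjectDependency;
}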

4. The @Autowired Annotation

The behavior of the @Autowired annotation is similar to that of the @Inject annotation. The only difference is that @Autowired is part of the Spring Framework. It has the same execution paths as @Inject, listed in order of precedence:

  1. Match by Type
  2. Match by Qualifier
  3. Match by Name

These execution paths are applicable to both setter and field injection.

4.1. Field Injection

4.1.1. Match by Type

The integration testing example used to demonstrate the @Autowired match-by-type execution path will be similar to the test used to demonstrate the @Inject match-by-type execution path. The FieldAutowiredTest integration test used to demonstrate match-by-type using the @Autowired annotation is listed as follows:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestAutowiredType.class)
public class FieldAutowiredTest {

    @Autowired
    private ArbitraryDependency fieldDependency;

    @Test
    public void givenAutowired_WhenSetOnField_ThenDependencyResolved() {
        assertNotNull(fieldDependency);
        assertEquals("Arbitrary Dependency", fieldDependency.toString());
    }
}

The application context for this integration test is listed as follows:

@Configuration
public class ApplicationContextTestAutowiredType {

    @Bean
    public ArbitraryDependency autowiredFieldDependency() {
        ArbitraryDependency autowiredFieldDependency =
          new ArbitraryDependency();
        return autowiredFieldDependency;
    }
}

The objective of the integration test is to demonstrate that match-by-type takes first precedence over the other execution paths. Notice how the reference variable name in the FieldAutowiredTest integration test:

@Autowired
private ArbitraryDependency fieldDependency;

differs from the bean name in the application context:

@Bean
public ArbitraryDependency autowiredFieldDependency() {
    ArbitraryDependency autowiredFieldDependency =
      new ArbitraryDependency();
    return autowiredFieldDependency;
}

When the test is run, it will pass.

In order to confirm that the dependency was indeed resolved using the match-by-type execution path, change the type of the fieldDependency reference variable and run the integration test again. This time, the FieldAutowiredTest integration test will fail with a NoSuchBeanDefinitionException, which verifies that match-by-type was used to resolve the dependency.

4.1.2. Match by Qualifier

What if faced with a situation where multiple bean implementations have been defined in the application context, like the one listed below:

@Configuration
public class ApplicationContextTestAutowiredQualifier {

    @Bean
    public ArbitraryDependency autowiredFieldDependency() {
        ArbitraryDependency autowiredFieldDependency =
          new ArbitraryDependency();
        return autowiredFieldDependency;
    }

    @Bean
    public ArbitraryDependency anotherAutowiredFieldDependency() {
        ArbitraryDependency anotherAutowiredFieldDependency =
          new AnotherArbitraryDependency();
        return anotherAutowiredFieldDependency;
    }
}

If the FieldQualifierAutowiredTest integration test, listed below, is executed:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestAutowiredQualifier.class)
public class FieldQualifierAutowiredTest {

    @Autowired
    private ArbitraryDependency fieldDependency1;

    @Autowired
    private ArbitraryDependency fieldDependency2;

    @Test
    public void givenAutowiredQualifier_WhenOnField_ThenDep1Valid(){
        assertNotNull(fieldDependency1);
        assertEquals("Arbitrary Dependency", fieldDependency1.toString());
    }

    @Test
    public void givenAutowiredQualifier_WhenOnField_ThenDep2Valid(){
        assertNotNull(fieldDependency2);
        assertEquals("Another Arbitrary Dependency",
          fieldDependency2.toString());
    }
}

a NoUniqueBeanDefinitionException will be thrown.

The exception is due to the ambiguity caused by the two beans defined in the application context. The Spring Framework does not know which bean dependency should be autowired to which reference variable. Resolve this issue by adding the @Qualifier annotation to the two fields of the FieldQualifierAutowiredTest integration test:

@Autowired
private ArbitraryDependency fieldDependency1;

@Autowired
private ArbitraryDependency fieldDependency2;

so that the code block looks as follows:

@Autowired
@Qualifier("autowiredFieldDependency")
private ArbitraryDependency fieldDependency1;

@Autowired
@Qualifier("anotherAutowiredFieldDependency")
private ArbitraryDependency fieldDependency2;

Run the test again, and this time it will pass.

4.1.3. Match by Name

The same integration test scenario will be used to demonstrate the match-by-name execution path when using the @Autowired annotation to inject a field dependency. When autowiring dependencies by name, the @ComponentScan annotation must be used with the application context, ApplicationContextTestAutowiredName:

@Configuration
@ComponentScan(basePackages={"com.baeldung.dependency"})
public class ApplicationContextTestAutowiredName {
}

The @ComponentScan annotation will search packages for Java classes that have been annotated with the @Component annotation. For example, in the application context, the com.baeldung.dependency package will be scanned for classes that have been annotated with the @Component annotation. In this scenario, the Spring Framework must detect the ArbitraryDependency class, which has the @Component annotation:

@Component(value="autowiredFieldDependency")
public class ArbitraryDependency {

    private final String label = "Arbitrary Dependency";

    public String toString() {
        return label;
    }
}

The attribute value, autowiredFieldDependency, passed into the @Component annotation, tells the Spring Framework that the ArbitraryDependency class is a component named autowiredFieldDependency. In order for the @Autowired annotation to resolve dependencies by name, the component name must correspond to the field name defined in the FieldAutowiredNameTest integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
  loader=AnnotationConfigContextLoader.class,
  classes=ApplicationContextTestAutowiredName.class)
public class FieldAutowiredNameTest {

    @Autowired
    private ArbitraryDependency autowiredFieldDependency;

    @Test
    public void givenAutowiredAnnotation_WhenOnField_ThenDepValid(){
        assertNotNull(autowiredFieldDependency);
        assertEquals("Arbitrary Dependency",
          autowiredFieldDependency.toString());
    }
}

When the FieldAutowiredNameTest integration test is run as-is, it will pass.

But how do we know that the @Autowired annotation really did invoke the match-by-name execution path? Change the name of the reference variable autowiredFieldDependency to another name of your choice, then run the test again.

This time, the test will fail and a NoUniqueBeanDefinitionException is thrown. A similar check would be to change the @Component attribute value, autowiredFieldDependency, to another value of your choice and run the test again. A NoUniqueBeanDefinitionException will also be thrown.

This exception is proof that if an incorrect bean name is used, no valid bean will be found. Therefore, the match-by-name execution path was invoked.

4.2. Setter Injection

Setter-based injection for the @Autowired annotation is similar to the approach demonstrated for @Resource setter-based injection. Instead of annotating the reference variable, the corresponding setter method is annotated with @Autowired. The execution paths followed by field-based dependency injection also apply to setter-based injection.
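
Again as a minimal sketch, reusing the ArbitraryDependency type:

private ArbitraryDependency setterDependency;

// @Autowired on the setter behaves like @Autowired on a field
@Autowired
public void setSetterDependency(ArbitraryDependency setterDependency) {
    this.setterDependency = setterDependency;
}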

5. Applying These Annotations

This raises the question, which annotation should be used and under what circumstances? The answer to these questions depends on the design scenario faced by the application in question, and how the developer wishes to leverage polymorphism based on the default execution paths of each annotation.

5.1. Application-Wide Use of Singletons Through Polymorphism

If the design is such that application behaviors are based on implementations of an interface or an abstract class, and these behaviors are used throughout the application, then use either the @Inject or @Autowired annotation.

The benefit of this approach is that when the application is upgraded, or a patch needs to be applied in order to fix a bug; then classes can be swapped out with minimal negative impact to the overall application behavior. In this scenario, the primary default execution path is match-by-type.

5.2. Fine-Grained Application Behavior Configuration Through Polymorphism

If the design is such that the application has complex behavior, each behavior is based on different interfaces/abstract classes, and usage of each of these implementations varies across the application, then use the @Resource annotation. In this scenario, the primary default execution path is match-by-name.

5.3. Dependency Injection Should be Handled Solely by the Java EE Platform

If there is a design mandate for all dependencies to be injected by the Java EE Platform and not Spring, then the choice is between the @Resource annotation and the @Inject annotation. You should narrow down the final decision between the two annotations, based on which default execution path is required.

5.4. Dependency Injection Should be Handled Solely by the Spring Framework

If the mandate is for all dependencies to be handled by the Spring Framework, the only choice is the @Autowired annotation.

5.5. Discussion Summary

The table below summarizes the discussion.

Scenario                                                                 @Resource   @Inject   @Autowired
Application-wide use of singletons through polymorphism                                 ✓           ✓
Fine-grained application behavior configuration through polymorphism        ✓
Dependency injection should be handled solely by the Java EE platform       ✓           ✓
Dependency injection should be handled solely by the Spring Framework                               ✓

6. Conclusion

The article aimed to provide a deeper insight into the behavior of each annotation. Understanding how each annotation behaves will contribute to better overall application design and maintenance.

The code used during the discussion can be found on GitHub.

Introduction to JSON Schema in Java

1. Overview

JSON Schema is a declarative language for validating the format and structure of a JSON Object. It allows us to specify a number of special primitives to describe exactly what a valid JSON Object will look like.

The JSON Schema specification is divided into three parts:

  • JSON Schema Core: The JSON Schema Core specification is where the terminology for a schema is defined.
  • JSON Schema Validation: The JSON Schema Validation specification is the document that defines the valid ways to define validation constraints. This document also defines a set of keywords that can be used to specify validations for a JSON API. In the examples that follow, we’ll be using some of these keywords.
  • JSON Hyper-Schema: This is another extension of the JSON Schema spec, wherein the hyperlink and hypermedia-related keywords are defined.

2. Defining a JSON Schema

Now that we have defined what a JSON Schema is used for, let’s create a JSON Object and the corresponding JSON Schema describing it.

The following is a simple JSON Object representing a product from a catalog:

{
    "id": 1,
    "name": "Lampshade",
    "price": 0
}

We could define its JSON Schema as follows:

{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "Product",
    "description": "A product from the catalog",
    "type": "object",
    "properties": {
        "id": {
            "description": "The unique identifier for a product",
            "type": "integer"
        },
        "name": {
            "description": "Name of the product",
            "type": "string"
        },
        "price": {
            "type": "number",
            "minimum": 0,
            "exclusiveMinimum": true
        }
    },
    "required": ["id", "name", "price"]
}

As we can see, a JSON Schema is itself a JSON document, and that document MUST be an object. Object members (or properties) defined by JSON Schema are called keywords.

Let’s explain the keywords that we have used in our sample:

  • The $schema keyword states that this schema is written according to the draft v4 specification.
  • The title and description keywords are descriptive only, in that they do not add constraints to the data being validated. The intent of the schema is stated with these two keywords: it describes a product.
  • The type keyword defines the first constraint on our JSON data: it has to be a JSON Object.

Also, a JSON Schema MAY contain properties which are not schema keywords. In our case, id, name and price are members (or properties) of the JSON Object.

For each property, we can define the type. We defined id as integer, name as string and price as number. In JSON Schema, a number can have a minimum. By default this minimum is inclusive, so we need to specify exclusiveMinimum.

Finally, the schema states that id, name and price are required. Note that the sample object at the top of this section, with its price of 0, actually violates the exclusiveMinimum constraint; we’ll use exactly that to demonstrate a validation failure in the next section.

3. Validation with JSON Schema

With our JSON Schema in place, we can validate our JSON Object.

There are many libraries to accomplish this task. For the purpose of our example, we have chosen the org.everit.json.schema library.

First of all, we need to add the following dependency to our pom.xml:

<dependency>
    <groupId>org.everit.json</groupId>
    <artifactId>org.everit.json.schema</artifactId>
    <version>1.3.0</version>
</dependency>

Finally, we can write a couple of simple test cases to validate our JSON Object:

@Test(expected = ValidationException.class)
public void givenInvalidInput_whenValidating_thenInvalid() {
    JSONObject jsonSchema = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/schema.json")));
    JSONObject jsonSubject = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/product_invalid.json")));

    Schema schema = SchemaLoader.load(jsonSchema);
    schema.validate(jsonSubject);
}

In this case, the thrown ValidationException will point to #/price. If you look at the console, it will print the following output:

#/price: 0.0 is not higher than 0

The second test looks like the following:

@Test
public void givenValidInput_whenValidating_thenValid() {
    JSONObject jsonSchema = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/schema.json")));
    JSONObject jsonSubject = new JSONObject(
      new JSONTokener(JSONSchemaTest.class.getResourceAsStream("/product_valid.json")));

    Schema schema = SchemaLoader.load(jsonSchema);
    schema.validate(jsonSubject);
}

Since we use a valid JSON Object, no validation error will be thrown.

4. Conclusion

In this article, we have defined what a JSON Schema is and covered some of the relevant keywords that help us define our own schema.

By coupling a JSON Schema with its corresponding JSON Object representation, we can perform validation tasks.

The simple test cases for this article can be found in the GitHub project.

AssertJ for Guava

1. Overview

This article focuses on AssertJ’s Guava-related assertions and is the second article in the AssertJ series. If you want some general info about AssertJ, have a look at the first article in the series, Introduction to AssertJ.

2. Maven Dependencies

In order to use AssertJ with Guava, you need to add the following dependency to your pom.xml:

<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-guava</artifactId>
    <version>3.0.0</version>
    <scope>test</scope>
</dependency>

You can find the latest version here.

And note that since version 3.0.0, AssertJ Guava relies on Java 8 and AssertJ Core 3.x.

3. Guava Assertions in Action

AssertJ has custom assertions for Guava types: ByteSource, Multimap, Optional, Range, RangeMap and Table.

3.1. ByteSource Assertions

Let’s start off by creating two empty temporary files:

File temp1 = File.createTempFile("bael", "dung1");
File temp2 = File.createTempFile("bael", "dung2");

and creating ByteSource instances from them:

ByteSource byteSource1 = Files.asByteSource(temp1);
ByteSource byteSource2 = Files.asByteSource(temp2);

Now we can write the following assertion:

assertThat(byteSource1)
  .hasSize(0)
  .hasSameContentAs(byteSource2);

3.2. Multimap Assertions

Multimaps are maps that can associate more than one value with a given key. The Multimap assertions work pretty similarly to normal Map implementations.

Let’s start by creating a Multimap instance and adding some entries:

Multimap<Integer, String> mmap = Multimaps
  .newMultimap(new HashMap<>(), Sets::newHashSet);
mmap.put(1, "one");
mmap.put(1, "1");

And now we can assert:

assertThat(mmap)
  .hasSize(2)
  .containsKeys(1)
  .contains(entry(1, "one"))
  .contains(entry(1, "1"));

There are also two additional assertions available, with a subtle difference between them:

  • containsAllEntriesOf and
  • hasSameEntriesAs.

Let’s have a look at these two assertions; we’ll start by defining a few maps:

Multimap<Integer, String> mmap1 = ArrayListMultimap.create();
mmap1.put(1, "one");
mmap1.put(1, "1");
mmap1.put(2, "two");
mmap1.put(2, "2");

Multimap<Integer, String> mmap1_clone = Multimaps
  .newSetMultimap(new HashMap<>(), HashSet::new);
mmap1_clone.put(1, "one");
mmap1_clone.put(1, "1");
mmap1_clone.put(2, "two");
mmap1_clone.put(2, "2");

Multimap<Integer, String> mmap2 = Multimaps
  .newSetMultimap(new HashMap<>(), HashSet::new);
mmap2.put(1, "one");
mmap2.put(1, "1");

As you can see, mmap1 and mmap1_clone contain exactly the same entries but are two different objects of two different Multimap types. The Multimap mmap2 contains only the entries for key 1, which are shared among all the maps. Now the following assertion is true:

assertThat(mmap1)
  .containsAllEntriesOf(mmap2)
  .containsAllEntriesOf(mmap1_clone)
  .hasSameEntriesAs(mmap1_clone);

3.3. Optional Assertions

Assertions for Guava’s Optional involve value presence checking and utilities for extracting the inner value.

Let’s start by creating an Optional instance:

Optional<String> something = Optional.of("something");

And now we can check the value’s presence and assert the Optional‘s content:

assertThat(something)
  .isPresent()
  .extractingValue()
  .isEqualTo("something");

3.4. Range Assertions

Assertions for Guava’s Range class involve checking Range‘s lower and upper bounds, and whether a certain value is within a given range.

Let’s define a simple range of strings:

Range<String> range = Range.openClosed("a", "g");

and now we can test:

assertThat(range)
  .hasOpenedLowerBound()
  .isNotEmpty()
  .hasClosedUpperBound()
  .contains("b");

3.5. Table Assertions

AssertJ’s Table-specific assertions allow checking the row and column count and the presence of a cell’s value.

Let’s create a simple Table instance:

Table<Integer, String, String> table = HashBasedTable.create(2, 2);
table.put(1, "A", "PRESENT");
table.put(1, "B", "ABSENT");

and now we can perform the following check:

assertThat(table)
  .hasRowCount(1)
  .containsValues("ABSENT")
  .containsCell(1, "B", "ABSENT");

4. Conclusion

In this article from the AssertJ series, we explored AssertJ’s Guava-related features.

The implementation of all the examples and code snippets can be found in a GitHub project.


Introduction to Java Logging

1. Overview

Logging is a powerful aid for understanding and debugging a program’s run-time behavior. Logs capture and persist important data and make it available for analysis at any point in time.

This article discusses the most popular Java logging frameworks, Log4j 2 and Logback, along with their predecessor Log4j, and briefly touches upon SLF4J, a logging facade that provides a common interface for different logging frameworks.

2. Enabling Logging

All the logging frameworks discussed in the article share the notion of loggers, appenders and layouts. Enabling logging inside the project follows three common steps:

  1. Adding needed libraries
  2. Configuration
  3. Placing log statements

The upcoming sections discuss the steps for each framework individually.

3. Log4j 2

Log4j 2 is a new and improved version of the Log4j logging framework. The most compelling improvement is the possibility of asynchronous logging. Log4j 2 requires the following libraries:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.6.1</version>
</dependency>

You can find the latest version of log4j-api here and log4j-core here.

3.1. Configuration

Configuring Log4j 2 is based on the main configuration file, log4j2.xml. The first things to configure are the appenders.

These determine where the log message will be routed. The destination can be the console, a file, a socket, etc.

Log4j 2 has many appenders for different purposes; you can find more information on the official Log4j 2 site.

Let’s take a look at a simple config example:

<Configuration status="debug" name="baeldung" packages="">
    <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %p %m%n"/>
        </Console>
    </Appenders>
</Configuration>

You can set a name for each appender; for example, use the name console instead of stdout.

Notice the PatternLayout element – this determines what the message should look like. In our example, the pattern is set based on the pattern param, where %d determines the date pattern, %p – the output of the log level, %m – the output of the logged message and %n – adds a new line symbol. You can find more info about patterns on the official Log4j 2 page.

Finally – to enable an appender (or multiple ones) you need to add it to the <Root> section:

<Root level="error">
    <AppenderRef ref="STDOUT"/>
</Root>

3.2. Logging to File

Sometimes you will need to log to a file, so we will add a file appender named fout to our configuration:

<Appenders>
    <File name="fout" fileName="baeldung.log" append="true">
        <PatternLayout>
            <Pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n</Pattern>
        </PatternLayout>
    </File>
</Appenders>

The File appender has several parameters that can be configured:

  • fileName – determines the name of the log file
  • append – The default value for this param is true, meaning that by default a File appender will append to an existing file and not truncate it.
  • PatternLayout – described in the previous example.

In order to enable the File appender, you need to add it to the <Root> section:

<Root level="INFO">
    <AppenderRef ref="stdout" />
    <AppenderRef ref="fout"/>
</Root>

3.3. Asynchronous Logging

If you want to make your Log4j 2 logging asynchronous, you need to add the LMAX Disruptor library to your pom.xml. The LMAX Disruptor is a lock-free inter-thread communication library.

Adding disruptor to pom.xml:

<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.4</version>
</dependency>

The latest version of the disruptor can be found here.

If you want to use the LMAX Disruptor, you need to use <AsyncRoot> instead of <Root> in your configuration:

<AsyncRoot level="DEBUG">
    <AppenderRef ref="stdout" />
    <AppenderRef ref="fout"/>
</AsyncRoot>

Or you can enable asynchronous logging by setting the system property Log4jContextSelector to org.apache.logging.log4j.core.async.AsyncLoggerContextSelector.
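
For instance – a minimal sketch with a class name of our own choosing – the property just has to be set before Log4j 2 is initialized, i.e. before the first logger is obtained:

public class AsyncLoggingExample {
    public static void main(String[] args) {
        // must run before the first LogManager.getLogger call
        System.setProperty("Log4jContextSelector",
          "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

        org.apache.logging.log4j.Logger logger =
          org.apache.logging.log4j.LogManager.getLogger(AsyncLoggingExample.class);
        logger.info("Logging asynchronously now");
    }
}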

You can of course read more about the configuration of the Log4j 2 async logger and see some performance diagrams on the official Log4j 2 page.

3.4. Usage

The following is a simple example that demonstrates the use of Log4j 2 for logging:

import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;

public class Log4jExample {

    private static Logger logger = LogManager.getLogger(Log4jExample.class);

    public static void main(String[] args) {
        logger.debug("Debug log message");
        logger.info("Info log message");
        logger.error("Error log message");
    }
}

After running, the application will log the following messages to both the console and the file named baeldung.log:

2016-06-16 17:02:13 INFO  Info log message
2016-06-16 17:02:13 ERROR Error log message

If you elevate the root log level to ERROR:

<Root level="ERROR">
    <AppenderRef ref="stdout" />
    <AppenderRef ref="fout"/>
</Root>

The output will look like the following:

2016-06-16 17:02:13 ERROR Error log message

As you can see, raising the log level causes messages with lower log levels not to be printed to the appenders.

The logger.error method can also be used to log an exception that occurred:

try {
    // Here some exception can be thrown
} catch (Exception e) {
    logger.error("Error log message", throwable);
}

3.5. Package Level Configuration

Let’s say you need to show messages with the log level TRACE – for example from a specific package such as com.baeldung.log4j2:

logger.trace("Trace log message");

For all other packages you want to continue logging only INFO messages.

Keep in mind that TRACE is lower than the root log level INFO that we specified in the configuration.

To enable logging for just one of the packages, you need to add the following section before <Root> in your log4j2.xml:

<Logger name="com.baeldung.log4j2" level="trace">
    <AppenderRef ref="stdout"/>
</Logger>

This will enable logging for the com.baeldung.log4j2 package, and your output will look like:

2016-06-16 17:02:13 TRACE Trace log message
2016-06-16 17:02:13 DEBUG Debug log message
2016-06-16 17:02:13 INFO  Info log message
2016-06-16 17:02:13 ERROR Error log message

4. Logback

Logback is meant to be an improved version of Log4j, developed by the same developer who made Log4j.

Logback also has a lot more features compared to Log4j, with many of them introduced into Log4j 2 as well. You can take a quick look at all of the advantages of Logback on the official site.

Let’s start by adding the following dependency to the pom.xml:

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>

This dependency will transitively pull in two other dependencies, logback-core and slf4j-api. Note that the latest version of Logback can be found here.

4.1. Configuration

Let’s now have a look at a Logback configuration example:

<configuration>
  <!-- Console appender -->
  <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.classic.PatternLayout">
      <!-- Pattern of log message for console appender -->
      <Pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n</Pattern>
    </layout>
  </appender>

  <!-- File appender -->
  <appender name="fout" class="ch.qos.logback.core.FileAppender">
    <file>baeldung.log</file>
    <append>false</append>
    <encoder>
      <!-- Pattern of log message for file appender -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n</pattern>
    </encoder>
  </appender>

  <!-- Override log level for the specified package -->
  <logger name="com.baeldung.log4j" level="TRACE"/>

  <root level="INFO">
    <appender-ref ref="stdout" />
    <appender-ref ref="fout" />
  </root>
</configuration>

Logback uses SLF4J as an interface, so you need to import SLF4J’s Logger and LoggerFactory.

4.2. SLF4J

SLF4J provides a common interface and abstraction for most of the Java logging frameworks. It acts as a facade and provides a standardized API for accessing the underlying features of the logging framework.
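
To illustrate the facade idea – this is not required by the examples below – routing SLF4J calls to, say, Log4j 1 is just a matter of putting the corresponding binding on the classpath; a sketch, with the version assumed:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.21</version>
</dependency>

The application code keeps using the SLF4J API, and only the binding decides which framework does the actual logging.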

Logback uses SLF4J as its native API. The following is an example of logging with Logback:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Log4jExample {

    private static Logger logger = LoggerFactory.getLogger(Log4jExample.class);

    public static void main(String[] args) {
        logger.debug("Debug log message");
        logger.info("Info log message");
        logger.error("Error log message");
    }
}

The output will remain the same as in previous examples.

5. Log4J

Finally, let’s have a look at the venerable Log4j logging framework.

At this point it’s of course outdated, but worth discussing as it laid the foundation for its more modern successors.

Many of the configuration details match those discussed in the Log4j 2 section.

5.1. Configuration

First of all, you need to add the Log4j library to your project’s pom.xml:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>

You should be able to find the latest version of Log4j here.

Let’s take a look at a complete example of a simple Log4j configuration with only one console appender:

<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd" >
<log4j:configuration debug="false">

    <!--Console appender-->
    <appender name="stdout" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" 
              value="%d{yyyy-MM-dd HH:mm:ss} %p %m%n" />
        </layout>
    </appender>

    <root>
        <level value="INFO" />
        <appender-ref ref="stdout" />
    </root>

</log4j:configuration>

<log4j:configuration debug="false"> is the opening tag of the whole configuration, and it has one property – debug. It determines whether you want to add Log4j debug information to logs.

5.2. Usage

After you have added the Log4j library and configuration, you can use the logger in your code. Let’s take a look at a simple example:

import org.apache.log4j.Logger;

public class Log4jExample {
    private static Logger logger = Logger.getLogger(Log4jExample.class);

    public static void main(String[] args) {
        logger.debug("Debug log message");
        logger.info("Info log message");
        logger.error("Error log message");
    }
}

6. Conclusion

This article shows very simple examples of how you can use different logging frameworks such as Log4j, Log4j 2 and Logback. It covers simple configuration examples for all of the mentioned frameworks.

The examples that accompany the article are available at GitHub.


Introduction to JSF EL 2


1. Introduction

Expression Language (EL) is a scripting language that’s seen adoption within many Java frameworks, such as Spring with SpEL and JBoss with JBoss EL.

In this article, we’ll focus on JSF’s implementation of this scripting language – Unified EL.

EL is currently in version 3.0, a major upgrade that allows the processing engine to be used in standalone mode – for example, on the Java SE platform. Prior versions were dependent on a JavaEE-compliant application server or web container. This article discusses EL version 2.2.

2. Immediate and Deferred Evaluation

The primary function of EL in JSF is to connect the JSF view (usually XHTML markup) and the Java-based back-end. The back-end can be user-created managed beans, or container-managed objects like the HTTP session.

We will be looking at EL 2.2. EL in JSF comes in two general forms, immediate syntax EL and deferred syntax EL.

2.1. Immediate Syntax EL

Otherwise known as JSP EL, this is a scripting format that’s a holdover from the JSP days of Java web application development.

The JSP EL expressions start with the dollar sign ($), followed by the left curly bracket ({), then the actual expression, and finally the closing right curly bracket (}):

${ELBean.value > 0}

This syntax:

  1. Is evaluated only once (at the beginning) in the lifecycle of a page. What this means is that the value that is being read by the expression in the example above must be set before the page is loaded.
  2. Provides read-only access to bean values.
  3. And as a result, requires adherence to the JavaBean naming convention.

For most uses, this form of EL is not very versatile.

2.2. Deferred Execution EL

Deferred Execution EL is the EL designed for JSF proper. Its major syntactical difference from JSP EL is that it’s marked with a “#” instead of a “$“:

#{ELBean.value > 0}

Deferred EL:

  1. Is in sync with the JSF lifecycle. This means that an EL expression in deferred EL is evaluated at different points in the rendering of a JSF page (at the beginning and the end).
  2. Provides read and write access to bean values. This allows one to set a value in a JSF backing-bean (or anywhere else) using EL.
  3. Allows a programmer to invoke arbitrary methods on an object and depending on the version of EL, pass arguments to such methods.

Unified EL is the specification that unifies both deferred EL and JSP EL, allowing both syntaxes on the same page.

3. Unified EL

Unified EL allows two general flavors of expressions, value expressions and method expressions.

And a quick note – the following sections will show some examples, which are all available in the app (see the Github link at the end) by navigating to:

http://localhost:8080/jsf/el_intro.jsf

3.1. Value Expressions

A value expression allows us to either read or set a managed bean property, depending on where it’s placed.

The following expression reads a managed bean property onto the page:

Hello, #{ELBean.firstName}

The following expression, however, allows us to set a value on the user object:

<h:inputText id="firstName" value="#{ELBean.firstName}" required="true"/>

The variable must follow the JavaBean naming convention to be eligible for this kind of treatment. For the value of the bean to be committed, the enclosing form just needs to be submitted.

3.2. Method Expressions

Unified EL provides method expressions to execute public, non-static methods from within a JSF page. The methods may or may not have return values.

Here’s a quick example:

<h:commandButton value="Save" action="#{ELBean.save}"/>

The save() method being referred to is defined on a backing bean named ELBean. 
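
For orientation, here is a minimal sketch of what such a backing bean might look like – the class, property and method names are simply assumed from the expressions used in this article:

import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

@ManagedBean(name = "ELBean")
@ViewScoped
public class ELBean {

    private String firstName;

    // invoked by the #{ELBean.save} method expression
    public void save() {
        // persist or otherwise process the submitted value
    }

    // invoked by the parameterized #{ELBean.saveFirstName(...)} expression below
    public void saveFirstName(String name) {
        this.firstName = name;
    }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}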

Starting from EL 2.2, you can also pass arguments to the method that’s accessed using EL. This can allow us to rewrite our example thus:

<h:inputText id="firstName" binding="#{firstName}" required="true"/>
<h:commandButton value="Save"
  action="#{ELBean.saveFirstName(firstName.value.toString().concat('(passed)'))}"/>

What we’ve done here is create a page-scoped binding expression for the inputText component and directly pass the value attribute to the method expression.

Note that the variable is passed to the method without any special notation, curly braces or escape characters.

3.3. Implicit EL Objects

The JSF EL engine provides access to several container-managed objects. Some of them are:

  • #{application}: also available as #{servletContext}, this is the object representing the web application instance
  • #{applicationScope}: a map of variables accessible web application-wide
  • #{cookie}: a map of the HTTP Cookie variables
  • #{facesContext}: the current instance of FacesContext
  • #{flash}: the JSF Flash scoped-object
  • #{header}: a map of the HTTP headers in the current request
  • #{initParam}: a map of the context initialization variables of the web application
  • #{param}: a map of the HTTP request query parameters
  • #{request}: the HttpServletRequest object
  • #{requestScope}: a request-scoped map of variables
  • #{sessionScope}: a session-scoped map of variables
  • #{session}: the HttpSession object
  • #{viewScope}: a view (page-) scoped map of variables

The following simple example lists all the request headers and values by accessing the header implicit object:

<c:forEach items="#{header}" var="header">
   <tr>
       <td>#{header.key}</td>
       <td>#{header.value}</td>
   </tr>
</c:forEach>

4. What You Can Do in EL

In its versatility, EL can be featured in Java code, XHTML markup, Javascript and even in JSF configuration files like the faces-config.xml file. Let’s examine some concrete use-cases.

4.1. Use EL in Page Markup

EL can be featured in standard HTML tags:

<meta name="description" content="#{ELBean.pageDescription}"/>

4.2. Use EL in JavaScript

EL will be interpreted when encountered in Javascript or <script> tags:

<script type="text/javascript"> var theVar = #{ELBean.firstName};</script>

A backing bean variable will be set as a Javascript variable here.

4.3. Evaluate Boolean Logic in EL Using Operators

EL supports fairly advanced comparison operators:

  • eq – equality operator, equivalent to “==”
  • lt – less than operator, equivalent to “<”
  • le – less than or equal to operator, equivalent to “<=”
  • gt – greater than operator, equivalent to “>”
  • ge – greater than or equal to operator, equivalent to “>=”
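
For example – a sketch reusing the ELBean from the earlier examples – these operators come in handy in attributes like rendered:

<h:outputText value="In range" rendered="#{ELBean.value gt 0 and ELBean.value le 100}"/>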

4.4. Evaluate EL in a Backing Bean

From within the backing bean code, one can evaluate an EL expression using the JSF Application. This opens up a world of possibilities, in connecting the JSF page with the backing bean. You could retrieve implicit EL objects, or retrieve actual HTML page components or their value easily from the backing bean:

FacesContext ctx = FacesContext.getCurrentInstance(); 
Application app = ctx.getApplication(); 
String firstName = app.evaluateExpressionGet(ctx, "#{firstName.value}", String.class); 
HtmlInputText firstNameTextBox = app.evaluateExpressionGet(ctx, "#{firstName}", HtmlInputText.class);

This allows the developer a great deal of flexibility in interacting with a JSF page.

5. What You Can Not Do in EL

EL < 3.0 does have some limitations. The following sections discuss some of them.

5.1. No Overloading

EL doesn’t support the use of overloading. So in a backing bean with the following methods:

public void save(User theUser);
public void save(String username);
public void save(Integer uid);

JSF EL will not be able to properly evaluate the following expression:

<h:commandButton value="Save" action="#{ELBean.save(firstName.value)}"/>

The JSF ELResolver will introspect the class definition of the bean and pick the first method returned by java.lang.Class#getMethods (a method that returns the methods available in a class). The order of the methods returned is not guaranteed, and this will inevitably result in undefined behavior.

5.2. No Enums or Constant Values

JSF EL < 3.0 doesn’t support the use of constant values or Enums in the script. So, having any of the following:

public static final String USER_ERROR_MESS = "No, you can’t do that";
enum Days { Sat, Sun, Mon, Tue, Wed, Thu, Fri };

means that you won’t be able to do the following:

<h:outputText id="message" value="#{ELBean.USER_ERROR_MESS}"/>
<h:commandButton id="saveButton" value="save" rendered="bean.offDay==Days.Sun"/>

5.3. No Built-in Null Safety

JSF EL < v3.0 doesn’t provide implicit null safe access, which some may find odd about a modern scripting engine.

So if person in the expression below is null, the entire expression fails with an unsightly NPE:

Hello Mr, #{ELBean.person.surname}
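
Until you can move to EL 3.0, a common workaround is to guard the expression yourself, for instance with the empty operator and the ternary operator:

Hello Mr, #{empty ELBean.person ? '' : ELBean.person.surname}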

6. Conclusion

We’ve examined some of the fundamentals of JSF EL, its strengths and its limitations.

This is largely a versatile scripting language with some room for improvement; it’s also the glue that binds the JSF view to the JSF model and controller.

The source code that accompanies this article is available at GitHub.


Hibernate: save, persist, update, merge, saveOrUpdate


1. Introduction

In this article we will discuss the differences between several methods of the Session interface: save, persist, update, merge, saveOrUpdate.

This is not an introduction to Hibernate and you should already know the basics of configuration, object-relational mapping and working with entity instances. For an introductory article to Hibernate, visit our tutorial on Hibernate 4 with Spring.

2. Session as a Persistence Context Implementation

The Session interface has several methods that eventually result in saving data to the database: persist, save, update, merge, saveOrUpdate. To understand the difference between these methods, we must first discuss the purpose of the Session as a persistence context and the difference between the states of entity instances in relation to the Session.

We should also understand the history of Hibernate development that led to some partly duplicated API methods.

2.1. Managing Entity Instances

Apart from object-relational mapping itself, one of the problems that Hibernate was intended to solve is the problem of managing entities during runtime. The notion of “persistence context” is Hibernate’s solution to this problem. Persistence context can be thought of as a container or a first-level cache for all the objects that you loaded or saved to a database during a session.

The session is a logical transaction whose boundaries are defined by your application’s business logic. When you work with the database through a persistence context, and all of your entity instances are attached to this context, you should always have a single entity instance for every database record that you’ve interacted with during the session.

In Hibernate, the persistence context is represented by an org.hibernate.Session instance. For JPA, it is the javax.persistence.EntityManager. When we use Hibernate as a JPA provider and operate via the EntityManager interface, the implementation of this interface basically wraps the underlying Session object. However, Hibernate Session provides a richer interface with more possibilities, so sometimes it is useful to work with the Session directly.

2.2. States of Entity Instances

Any entity instance in your application appears in one of the three main states in relation to the Session persistence context:

  • transient — this instance is not, and never was, attached to a Session; this instance has no corresponding rows in the database; it’s usually just a new object that you have created to save to the database;
  • persistent — this instance is associated with a unique Session object; upon flushing the Session to the database, this entity is guaranteed to have a corresponding consistent record in the database;
  • detached — this instance was once attached to a Session (in a persistent state), but now it’s not; an instance enters this state if you evict it from the context, clear or close the Session, or put the instance through serialization/deserialization process.

Here is a simplified state diagram with comments on Session methods that make the state transitions happen.

[State diagram: transient, persistent and detached states, with the Session methods that trigger the transitions]

When the entity instance is in the persistent state, all changes that you make to the mapped fields of this instance will be applied to the corresponding database records and fields upon flushing the Session. The persistent instance can be thought of as “online”, whereas the detached instance has gone “offline” and is not monitored for changes.

This means that when you change fields of a persistent object, you don’t have to call save, update or any of those methods to get these changes to the database: all you need is to commit the transaction, or flush or close the session, when you’re done with it.
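
For instance – a sketch that assumes an open transaction and an existing record identified by a hypothetical personId, using the Person entity defined in the next section:

Person person = (Person) session.get(Person.class, personId);
person.setName("Mary"); // the instance is persistent, so the change is tracked

session.getTransaction().commit(); // dirty checking issues the UPDATE, no explicit save/update call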

2.3. Conformity to JPA Specification

Hibernate was the most successful Java ORM implementation. No wonder the specification for the Java Persistence API (JPA) was heavily influenced by the Hibernate API. Unfortunately, there were also many differences: some major, some more subtle.

To act as an implementation of the JPA standard, Hibernate APIs had to be revised. Several methods were added to the Session interface to match the EntityManager interface. These methods serve the same purpose as the “original” methods, but conform to the specification and thus have some differences.

3. Differences Between the Operations

It is important to understand from the beginning that none of these methods (persist, save, update, merge, saveOrUpdate) immediately results in the corresponding SQL UPDATE or INSERT statements. The actual saving of data to the database occurs on committing the transaction or flushing the Session.

The mentioned methods basically manage the state of entity instances by transitioning them between different states along the lifecycle.

As an example entity, we will use a simple annotation-mapped entity Person:

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // ... getters and setters

}

3.1. Persist

The persist method is intended for adding a new entity instance to the persistence context, i.e. transitioning an instance from transient to persistent state.

We usually call it when we want to add a record to the database (persist an entity instance):

Person person = new Person();
person.setName("John");
session.persist(person);

What happens after the persist method is called? The person object has transitioned from transient to persistent state. The object is in the persistence context now, but not yet saved to the database. The generation of INSERT statements will occur only upon committing the transaction, flushing or closing the session.

Notice that the persist method has void return type. It operates on the passed object “in place”, changing its state. The person variable references the actual persisted object.

This method is a later addition to the Session interface. The main differentiating feature of this method is that it conforms to the JSR-220 specification (EJB persistence). The semantics of this method is strictly defined in the specification, which basically states that:

  • a transient instance becomes persistent (and the operation cascades to all of its relations with cascade=PERSIST or cascade=ALL),
  • if an instance is already persistent, then this call has no effect for this particular instance (but it still cascades to its relations with cascade=PERSIST or cascade=ALL),
  • if an instance is detached, you should expect an exception, either upon calling this method, or upon committing or flushing the session.

Notice that there is nothing here that concerns the identifier of an instance. The spec does not state that the id will be generated right away, regardless of the id generation strategy. The specification for the persist method allows the implementation to issue statements for generating id on commit or flush, and the id is not guaranteed to be non-null after calling this method, so you should not rely upon it.

You may call this method on an already persistent instance, and nothing happens. But if you try to persist a detached instance, the implementation is bound to throw an exception. In the following example we persist the entity, evict it from the context so it becomes detached, and then try to persist again. The second call to session.persist() causes an exception, so the following code will not work:

Person person = new Person();
person.setName("John");
session.persist(person);

session.evict(person);

session.persist(person); // PersistenceException!

3.2. Save

The save method is an “original” Hibernate method that does not conform to the JPA specification.

Its purpose is basically the same as persist, but it has different implementation details. The documentation for this method strictly states that it persists the instance, “first assigning a generated identifier”. The method is guaranteed to return the Serializable value of this identifier.

Person person = new Person();
person.setName("John");
Long id = (Long) session.save(person);

The effect of saving an already persisted instance is the same as with persist. The difference comes when you try to save a detached instance:

Person person = new Person();
person.setName("John");
Long id1 = (Long) session.save(person);

session.evict(person);
Long id2 = (Long) session.save(person);

The id2 variable will differ from id1. The call of save on a detached instance creates a new persistent instance and assigns it a new identifier, which results in a duplicate record in a database upon committing or flushing.

3.3. Merge

The main intention of the merge method is to update a persistent entity instance with new field values from a detached entity instance.

For instance, suppose you have a RESTful interface with a method for returning a JSON-serialized object to the caller by its id, and a method that receives an updated version of this object from the caller. An entity that has passed through such serialization/deserialization will appear in a detached state.

After deserializing this entity instance, you need to get a persistent entity instance from a persistence context and update its fields with new values from this detached instance. So the merge method does exactly that:

  • finds an entity instance by id taken from the passed object (either an existing entity instance from the persistence context is retrieved, or a new instance loaded from the database);
  • copies fields from the passed object to this instance;
  • returns the newly updated instance.

In the following example we evict (detach) the saved entity from context, change the name field, and then merge the detached entity.

Person person = new Person(); 
person.setName("John"); 
session.save(person);

session.evict(person);
person.setName("Mary");

Person mergedPerson = (Person) session.merge(person);

Note that the merge method returns an object — it is the mergedPerson object that was loaded into persistence context and updated, not the person object that you passed as an argument. Those are two different objects, and the person object usually needs to be discarded (anyway, don’t count on it being attached to persistence context).

As with persist method, the merge method is specified by JSR-220 to have certain semantics that you can rely upon:

  • if the entity is detached, it is copied onto an existing persistent entity;
  • if the entity is transient, it is copied onto a newly created persistent entity;
  • this operation cascades for all relations with cascade=MERGE or cascade=ALL mapping;
  • if the entity is persistent, then this method call does not have effect on it (but the cascading still takes place).

3.4. Update

As with persist and save, the update method is an “original” Hibernate method that was present long before the merge method was added. Its semantics differs in several key points:

  • it acts upon passed object (its return type is void); the update method transitions the passed object from detached to persistent state;
  • this method throws an exception if you pass it a transient entity.

In the following example we save the object, then evict (detach) it from the context, then change its name and call update. Notice that we don’t put the result of the update operation in a separate variable, because the update takes place on the person object itself. Basically we’re reattaching the existing entity instance to the persistence context — something the JPA specification does not allow us to do.

Person person = new Person();
person.setName("John");
session.save(person);
session.evict(person);

person.setName("Mary");
session.update(person);

Trying to call update on a transient instance will result in an exception. The following will not work:

Person person = new Person();
person.setName("John");
session.update(person); // PersistenceException!

3.5. SaveOrUpdate

This method appears only in the Hibernate API and does not have its standardized counterpart. Similar to update, it also may be used for reattaching instances.

Actually, the internal DefaultUpdateEventListener class that processes the update method is a subclass of DefaultSaveOrUpdateListener, just overriding some functionality. The main difference of the saveOrUpdate method is that it does not throw an exception when applied to a transient instance; instead, it makes this transient instance persistent. The following code will persist a newly created instance of Person:

Person person = new Person();
person.setName("John");
session.saveOrUpdate(person);

You may think of this method as a universal tool for making an object persistent regardless of its state, whether it is transient or detached.
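
And, mirroring the earlier update example, here is a short sketch of reattaching a detached instance with saveOrUpdate:

Person person = new Person();
person.setName("John");
session.save(person);

session.evict(person); // person is now detached

person.setName("Mary");
session.saveOrUpdate(person); // reattaches the instance and schedules an UPDATE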

4. What to Use?

If you don’t have any special requirements, as a rule of thumb, you should stick to the persist and merge methods, because they are standardized and guaranteed to conform to the JPA specification.

They are also portable in case you decide to switch to another persistence provider, but they may sometimes seem less convenient than the “original” Hibernate methods, save, update and saveOrUpdate.

5. Conclusion

We’ve discussed the purpose of different Hibernate Session methods in relation to managing persistent entities at runtime. We’ve learned how these methods transition entity instances through their lifecycles and why some of these methods have duplicated functionality.

The source code for the article is available on GitHub.



Minification of JS and CSS Assets with Maven


1. Overview

This article shows how to minify Javascript and CSS assets as a build step and serve the resulting files with Spring MVC.

We will use YUI Compressor as the underlying minification library and YUI Compressor Maven plugin to integrate it into our build process.

2. Maven Plugin Configuration

First, we need to declare that we will use the compressor plugin in our pom.xml file and execute the compress goal. This will compress all .js and .css files under src/main/webapp so that foo.js will be minified as foo-min.js and myCss.css will be minified as myCss-min.css:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Our src/main/webapp directory contains the following files:

js/
├── foo.js
├── jquery-1.11.1.min.js
resources/
└── myCss.css

After executing mvn clean package, the generated WAR file will contain the following files:

js/
├── foo.js
├── foo-min.js
├── jquery-1.11.1.min.js
├── jquery-1.11.1.min-min.js
resources/
├── myCss.css
└── myCss-min.css

3. Keeping the Filenames the Same

At this stage, when we execute mvn clean package, foo-min.js and myCss-min.css are created by the plugin. Since we have originally used foo.js and myCss.css when referring to the files, our page will still use the original non-minified files, as the minified files have different names than the original.

In order to prevent having both foo.js/foo-min.js and myCss.css/myCss-min.css and have the files minified without changing their names, we need to configure the plugin with the nosuffix option as follows:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <nosuffix>true</nosuffix>
    </configuration>
</plugin>

Now when we execute mvn clean package, we will have the following files in the generated WAR file:

js/
├── foo.js
├── jquery-1.11.1.min.js
resources/
└── myCss.css

4. WAR Plugin Configuration

Keeping the filenames the same has a side effect. It causes the WAR plugin to overwrite the minified foo.js and myCss.css files with the original files, so we don’t have the minified versions of the files in the final output. The foo.js file contains the following lines before minification:

function testing() {
    alert("Testing");
}

When we examine the contents of the foo.js file in the generated WAR file, we see that it has the original content instead of the minified content. To solve this problem, we need to specify a webappDirectory for the compressor plugin and reference it from within the WAR plugin configuration:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <nosuffix>true</nosuffix>
        <webappDirectory>${project.build.directory}/min</webappDirectory>
    </configuration>
</plugin>
<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <webResources>
            <resource>
                <directory>${project.build.directory}/min</directory>
            </resource>
        </webResources>
    </configuration>
</plugin>

Here we have specified the min directory as the output directory for the minified files and configured the WAR plugin to include this in the final output.

Now we have the minified files in the generated WAR file, with their original filenames foo.js and myCss.css. We can check foo.js to see that it has the following minified content now:

function testing(){alert("Testing")};

5. Excluding Already Minified Files

Third-party Javascript and CSS libraries may have minified versions available for download. If you happen to use one of these in your project, you don’t need to process them again.

Including already minified files produces warning messages when building the project.

For example, jquery-1.11.1.min.js is an already minified Javascript file and it causes warning messages similar to the following during the build:

[WARNING] .../src/main/webapp/js/jquery-1.11.1.min.js [-1:-1]: 
Using 'eval' is not recommended. Moreover, using 'eval' reduces the level of compression!
execScript||function(b){a. ---> eval <--- .call(a,b);})
[WARNING] ...jquery-1.11.1.min.js:line -1:column -1: 
Using 'eval' is not recommended. Moreover, using 'eval' reduces the level of compression!
execScript||function(b){a. ---> eval <--- .call(a,b);})

To exclude already minified files from the process, configure the compressor plugin with an excludes option as follows:

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <nosuffix>true</nosuffix>
        <webappDirectory>${project.build.directory}/min</webappDirectory>
        <excludes>
            <exclude>**/*.min.js</exclude>
        </excludes>
    </configuration>
</plugin>

This will exclude all files under all directories whose filenames end with min.js. Executing mvn clean package now produces no warning messages, and the build doesn’t try to minify already minified files.

6. Conclusion

In this article, we have described a nice way to integrate minification of Javascript and CSS files into your Maven workflow. To serve these static assets with your Spring MVC application, see our Serve Static Resources with Spring article.

You can find the code for this article on GitHub.



Binary Data Formats in a Spring REST API


1. Overview

While JSON and XML are widely popular data transfer formats when it comes to REST APIs, they’re not the only options available.

There exist many other formats with varying degrees of serialization speed and serialized data size.

In this article we explore how to configure a Spring REST mechanism to use binary data formats – which we illustrate with Kryo.

Moreover, we show how to support multiple data formats by adding support for Google Protocol Buffers.

2. HttpMessageConverter

The HttpMessageConverter interface is basically Spring’s public API for the conversion of REST data formats.

There are different ways to specify the desired converters. Here we extend WebMvcConfigurerAdapter and explicitly provide the converters we want to use in the overridden configureMessageConverters method:

@Configuration
@EnableWebMvc
@ComponentScan({ "org.baeldung.web" })
public class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
        super.configureMessageConverters(messageConverters);
    }
}

3. Kryo

3.1. Kryo Overview and Maven

Kryo is a binary encoding format that provides good serialization and deserialization speed and smaller transferred data size compared to text-based formats.

While in theory it can be used to transfer data between different kinds of systems, it is primarily designed to work with Java components.

We add the necessary Kryo libraries with the following Maven dependency:

<dependency>
    <groupId>com.esotericsoftware</groupId>
    <artifactId>kryo</artifactId>
    <version>4.0.0</version>
</dependency>

To check the latest version of Kryo, you can have a look here.

3.2. Kryo in Spring REST

In order to utilize Kryo as a data transfer format, we create a custom HttpMessageConverter and implement the necessary serialization and deserialization logic. Also, we define a custom media type for Kryo: application/x-kryo. Here is a full, simplified working example which we use for demonstration purposes:

public class KryoHttpMessageConverter extends AbstractHttpMessageConverter<Object> {

    public static final MediaType KRYO = new MediaType("application", "x-kryo");

    private static final ThreadLocal<Kryo> kryoThreadLocal = new ThreadLocal<Kryo>() {
        @Override
        protected Kryo initialValue() {
            Kryo kryo = new Kryo();
            kryo.register(Foo.class, 1);
            return kryo;
        }
    };

    public KryoHttpMessageConverter() {
        super(KRYO);
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return Object.class.isAssignableFrom(clazz);
    }

    @Override
    protected Object readInternal(
      Class<? extends Object> clazz, HttpInputMessage inputMessage) throws IOException {
        Input input = new Input(inputMessage.getBody());
        return kryoThreadLocal.get().readClassAndObject(input);
    }

    @Override
    protected void writeInternal(
      Object object, HttpOutputMessage outputMessage) throws IOException {
        Output output = new Output(outputMessage.getBody());
        kryoThreadLocal.get().writeClassAndObject(output, object);
        output.flush();
    }

    @Override
    protected MediaType getDefaultContentType(Object object) {
        return KRYO;
    }
}

The controller method is straightforward (note there is no need for any custom protocol-specific data types, we use plain Foo DTO):

@RequestMapping(method = RequestMethod.GET, value = "/foos/{id}")
@ResponseBody
public Foo findById(@PathVariable long id) {
    return fooRepository.findById(id);
}

And a quick test to prove that we have wired everything together correctly:

RestTemplate restTemplate = new RestTemplate();
restTemplate.setMessageConverters(Arrays.asList(new KryoHttpMessageConverter()));

HttpHeaders headers = new HttpHeaders();
headers.setAccept(Arrays.asList(KryoHttpMessageConverter.KRYO));
HttpEntity<String> entity = new HttpEntity<String>(headers);

ResponseEntity<Foo> response = restTemplate.exchange("http://localhost:8080/spring-rest/foos/{id}",
  HttpMethod.GET, entity, Foo.class, "1");
Foo resource = response.getBody();

assertThat(resource, notNullValue());

4. Supporting Multiple Data Formats

Often you would want to provide support for multiple data formats for the same service. The clients specify the desired data formats in the Accept HTTP header, and the corresponding message converter is invoked to serialize the data.

Usually, you just have to register another converter for things to work out of the box. Spring picks the appropriate converter automatically based on the value in the Accept header and the supported media types declared in the converters.

For example, to add support for both JSON and Kryo, register both KryoHttpMessageConverter and MappingJackson2HttpMessageConverter:

@Override
public void configureMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
    messageConverters.add(new MappingJackson2HttpMessageConverter());
    messageConverters.add(new KryoHttpMessageConverter());
    super.configureMessageConverters(messageConverters);
}

Now, let’s suppose that we want to add Google Protocol Buffer to the list as well. For this example, we assume there is a class FooProtos.Foo generated with the protoc compiler based on the following proto file:

package baeldung;
option java_package = "org.baeldung.web.dto";
option java_outer_classname = "FooProtos";
message Foo {
    required int64 id = 1;
    required string name = 2;
}

Spring comes with some built-in support for Protocol Buffer. All we need to make it work is to include ProtobufHttpMessageConverter in the list of supported converters:

@Override
public void configureMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
    messageConverters.add(new MappingJackson2HttpMessageConverter());
    messageConverters.add(new KryoHttpMessageConverter());
    messageConverters.add(new ProtobufHttpMessageConverter());
    super.configureMessageConverters(messageConverters);
}

However, we have to define a separate controller method that returns FooProtos.Foo instances (JSON and Kryo both deal with Foos, so no changes are needed in the controller to distinguish the two).

There are two ways to resolve the ambiguity about which method gets called. The first approach is to use different URLs for protobuf and other formats. For example, for protobuf:

@RequestMapping(method = RequestMethod.GET, value = "/fooprotos/{id}")
@ResponseBody
public FooProtos.Foo findProtoById(@PathVariable long id) { … }

and for the others:

@RequestMapping(method = RequestMethod.GET, value = "/foos/{id}")
@ResponseBody
public Foo findById(@PathVariable long id) { … }

Notice that for protobuf we use value = "/fooprotos/{id}" and for the other formats value = "/foos/{id}".

The second – and better – approach is to use the same URL, but to explicitly specify the produced data format in the request mapping for protobuf:

@RequestMapping(
  method = RequestMethod.GET, 
  value = "/foos/{id}", 
  produces = { "application/x-protobuf" })
@ResponseBody
public FooProtos.Foo findProtoById(@PathVariable long id) { … }

Note that by specifying the media type in the produces annotation attribute we give a hint to the underlying Spring mechanism about which mapping needs to be used based on the value in the Accept header provided by clients, so there is no ambiguity about which method needs to be invoked for the “foos/{id}” URL.

The second approach enables us to provide a uniform and consistent REST API to the clients for all data formats.
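
To verify the protobuf endpoint end to end, a test along the lines of the earlier Kryo one should do – a sketch reusing the URL and id from the examples above:

RestTemplate restTemplate = new RestTemplate();
restTemplate.setMessageConverters(Arrays.asList(new ProtobufHttpMessageConverter()));

HttpHeaders headers = new HttpHeaders();
headers.setAccept(Arrays.asList(MediaType.valueOf("application/x-protobuf")));
HttpEntity<String> entity = new HttpEntity<String>(headers);

ResponseEntity<FooProtos.Foo> response = restTemplate.exchange(
  "http://localhost:8080/spring-rest/foos/{id}",
  HttpMethod.GET, entity, FooProtos.Foo.class, "1");

assertThat(response.getBody(), notNullValue());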

Finally, if you’re interested in going deeper into using Protocol Buffers with a Spring REST API, have a look at the reference article.

5. Registering Extra Message Converters

It is very important to note that you lose all of the default message converters when you override the configureMessageConverters method. Only the ones you provide will be used.

While sometimes this is exactly what you want, in many cases you just want to add new converters, while still keeping the default ones which already take care of standard data formats like JSON. To achieve this, override the extendMessageConverters method:

@Configuration
@EnableWebMvc
@ComponentScan({ "org.baeldung.web" })
public class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void extendMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
        messageConverters.add(new ProtobufHttpMessageConverter());
        messageConverters.add(new KryoHttpMessageConverter());
        super.extendMessageConverters(messageConverters);
    }
}

6. Conclusion

In this tutorial, we looked at how easy it is to use any data transfer format in Spring MVC, and we examined this by using Kryo as an example.

We also showed how to add support for multiple formats so that different clients are able to use different formats.

The implementation of this Binary Data Formats in a Spring REST API Tutorial is of course on Github. This is a Maven based project, so it should be easy to import and run as it is.


AssertJ’s Java 8 Features


1. Overview

This article focuses on AssertJ‘s Java 8-related features and is the third article in the series.

If you’re looking for general information on its main features, have a look at the first article in the series, Introduction to AssertJ, and then at AssertJ for Guava.

2. Maven Dependencies

Java 8 support has been included in the main AssertJ Core module since version 3.5.1. In order to use it, you will need to include the following section in your pom.xml file:

<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>

This dependency covers only the basic Java assertions. If you want to use the advanced assertions, you will need to add additional modules separately.

The latest Core version can be found here.

3. Java 8 Features

AssertJ leverages Java 8 features by providing special helper methods and new assertions for Java 8 types.

3.1. Optional Assertions

Let’s create a simple Optional instance:

Optional<String> givenOptional = Optional.of("something");

We can now easily check if an Optional contains some value and what that contained value is:

assertThat(givenOptional)
  .isPresent()
  .hasValue("something");

3.2. Predicate Assertions

Let’s create a simple Predicate instance by checking the length of a String:

Predicate<String> predicate = s -> s.length() > 4;

Now you can easily check which Strings are rejected or accepted by the Predicate:

assertThat(predicate)
  .accepts("aaaaa", "bbbbb")
  .rejects("a", "b")
  .acceptsAll(asList("aaaaa", "bbbbb"))
  .rejectsAll(asList("a", "b"));

3.3. LocalDate Assertions

Let’s start by defining two LocalDate objects:

LocalDate givenLocalDate = LocalDate.of(2016, 7, 8);
LocalDate todayDate = LocalDate.now();

You can now easily check if a given date is before/after a given date, or today:

assertThat(givenLocalDate)
  .isBefore(LocalDate.of(2020, 7, 8))
  .isAfterOrEqualTo(LocalDate.of(1989, 7, 8));

assertThat(todayDate)
  .isAfter(LocalDate.of(1989, 7, 8))
  .isToday();

3.4. LocalDateTime Assertions

The LocalDateTime assertions work similarly to LocalDate‘s, but do not share the isToday method.

Let’s create an example LocalDateTime object:

LocalDateTime givenLocalDate = LocalDateTime.of(2016, 7, 8, 12, 0);

And now you can check:

assertThat(givenLocalDate)
  .isBefore(LocalDateTime.of(2020, 7, 8, 11, 2));

3.5. LocalTime Assertions

The LocalTime assertions work similarly to the other java.time assertions, but they do have one exclusive method: hasSameHourAs.

Let’s create an example LocalTime object:

LocalTime givenLocalTime = LocalTime.of(12, 15);

and now you can assert:

assertThat(givenLocalTime)
  .isAfter(LocalTime.of(1, 0))
  .hasSameHourAs(LocalTime.of(12, 0));

3.6. FlatExtracting Helper Method

The flatExtracting helper method is a special utility method that leverages Java 8 lambdas in order to extract properties from the elements of an Iterable.

Let’s create a simple List with LocalDate objects:

List<LocalDate> givenList = asList(ofYearDay(2016, 5), ofYearDay(2015, 6));

now we can easily check if this List contains at least one LocalDate object with the year 2015:

assertThat(givenList)
  .flatExtracting(LocalDate::getYear)
  .contains(2015);

The flatExtracting method does not limit us to field extraction. We can always provide it with any function:

assertThat(givenList)
  .flatExtracting(LocalDate::isLeapYear)
  .contains(true);

or even:

assertThat(givenList)
  .flatExtracting(Object::getClass)
  .contains(LocalDate.class);

You can also extract multiple properties at once:

assertThat(givenList)
  .flatExtracting(LocalDate::getYear, LocalDate::getDayOfMonth)
  .contains(2015, 6);

3.7. Satisfies Helper Method

The satisfies method allows you to quickly check if an object satisfies all provided assertions.

Let’s create an example String instance:

String givenString = "someString";

and now we can provide assertions as a lambda body:

assertThat(givenString)
  .satisfies(s -> {
    assertThat(s).isNotEmpty();
    assertThat(s).hasSize(10);
  });

3.8. HasOnlyOneElementSatisfying Helper Method

The hasOnlyOneElementSatisfying helper method allows checking if an Iterable instance contains exactly one element satisfying the provided assertions.

Let’s create an example List:

List<String> givenList = Arrays.asList("");

and now you can assert:

assertThat(givenList)
  .hasOnlyOneElementSatisfying(s -> assertThat(s).isEmpty());

3.9. Matches Helper Method

The matches helper method allows checking if a given object matches the given Predicate function.

Let’s take an empty String:

String emptyString = "";

and now we can check its state by providing an adequate Predicate lambda function:

assertThat(emptyString)
  .matches(String::isEmpty);

4. Conclusion

In this last article from the AssertJ series, we explored AssertJ’s advanced Java 8 features, which concludes the series.

The implementation of all the examples and code snippets can be found in the GitHub project.


Intro to Jedis – the Java Redis Client Library


1. Overview

This article is an introduction to Jedis, a client library in Java for Redis – the popular in-memory data structure store that can persist on disk as well. Redis uses a key-value data structure to persist data and can be used as a database, cache, message broker, etc.

First, we are going to explain in which kinds of situations Jedis is useful and what it is about. In the subsequent sections, we will elaborate on the various data structures and explain transactions, pipelining and the publish/subscribe feature. We conclude with connection pooling and Redis Cluster.

2. Why Jedis?

Redis lists the most well-known client libraries on their official site. There are multiple alternatives to Jedis, but only two more are currently worthy of their recommendation star: lettuce and Redisson.

These two clients do have some unique features like thread safety, transparent reconnection handling and an asynchronous API, all features that Jedis lacks. However, it is small and considerably faster than the other two. Besides, it is the client library of choice of the Spring Framework developers, and it has the biggest community of all three.

3. Maven Dependencies

Let’s start by declaring the only dependency we will need in the pom.xml:

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.8.1</version>
</dependency>

If you’re looking for the latest version of the library, check out this page.

4. Redis Installation

You will need to install and fire up one of the latest versions of Redis. We are running the latest stable version at this moment (3.2.1), but any post-3.x version should be okay. You can find more information about Redis for Linux and Macintosh here; they have very similar basic installation steps. Windows is not officially supported, but this port is well maintained.

Thereafter we can directly dive in and connect to it from our Java code:

Jedis jedis = new Jedis();

The default constructor will work just fine unless you have started the service on a non-default port or on a remote machine, in which case you can configure the client by passing the correct values as constructor parameters.
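
For example, assuming a Redis instance listening on a hypothetical remote host and custom port, the connection could be configured like this:

// hypothetical host and port, for illustration only
Jedis jedis = new Jedis("redis.example.com", 6390);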

5. Redis Data Structures

Most of the native operation commands are supported and, conveniently enough, they normally share the same method name.

5.1. Strings

Strings are the most basic kind of Redis value, useful for when you need to persist simple key-value data types:

jedis.set("events/city/rome", "32,15,223,828");
String cachedResponse = jedis.get("events/city/rome");

The variable cachedResponse will hold the value 32,15,223,828. Coupled with expiration support, shown below, this can work as a lightning-fast, simple-to-use cache layer for HTTP responses in your web application and for other caching requirements.
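
As a quick illustration of that expiration support, setex stores a value and sets its time-to-live in a single call; a minimal sketch with a hypothetical 60-second TTL:

// cache the value for 60 seconds; the key expires automatically afterwards
jedis.setex("events/city/rome", 60, "32,15,223,828");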

5.2. Lists

Redis Lists are simply lists of strings, sorted by insertion order, which makes them an ideal tool to implement, for instance, message queues:

jedis.lpush("queue#tasks", "firstTask");
jedis.lpush("queue#tasks", "secondTask");

String task = jedis.rpop("queue#tasks");

The variable task will hold the value firstTask. Remember that you can serialize any object and persist it as a string, so messages in the queue can carry more complex data when required.
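
If a consumer should wait for work instead of polling, the blocking variant brpop can be used; a minimal sketch with a hypothetical five-second timeout:

// blocks for up to 5 seconds; returns [key, value] or null on timeout
List<String> polled = jedis.brpop(5, "queue#tasks");
if (polled != null) {
    String nextTask = polled.get(1);
}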

5.3. Sets

Redis Sets are an unordered collection of Strings that come in handy when you want to exclude repeated members:

jedis.sadd("nicknames", "nickname#1");
jedis.sadd("nicknames", "nickname#2");
jedis.sadd("nicknames", "nickname#1");

Set<String> nicknames = jedis.smembers("nicknames");
boolean exists = jedis.sismember("nicknames", "nickname#1");

The Java Set nicknames will have a size of 2, since the second addition of nickname#1 was ignored. Also, the exists variable will have a value of true; the sismember method enables you to quickly check for the existence of a particular member.

5.4. Hashes

Redis Hashes are maps between string fields and string values:

jedis.hset("user#1", "name", "Peter");
jedis.hset("user#1", "job", "politician");
		
String name = jedis.hget("user#1", "name");
		
Map<String, String> fields = jedis.hgetAll("user#1");
String job = fields.get("job");

As you can see, hashes are a very convenient data type when you want to access an object's properties individually, since you do not need to retrieve the whole object.

5.5. Sorted Sets

Sorted Sets are like a Set where each member has an associated score that is used for sorting:

Map<String, Double> scores = new HashMap<>();

scores.put("PlayerOne", 3000.0);
scores.put("PlayerTwo", 1500.0);
scores.put("PlayerThree", 8200.0);

for (String player : scores.keySet()) {
    jedis.zadd("ranking", scores.get(player), player);
}
		
String player = jedis.zrevrange("ranking", 0, 1).iterator().next();
long rank = jedis.zrevrank("ranking", "PlayerOne");

The variable player will hold the value PlayerThree because we are retrieving the top player, and that is the one with the highest score. The rank variable will have a value of 1 because PlayerOne is second in the ranking and the ranking is zero-based.

6. Transactions

Transactions guarantee atomic and thread-safe operations, which means that requests from other clients will never be handled concurrently during a Redis transaction:

String friendsPrefix = "friends#";
String userOneId = "4352523";
String userTwoId = "5552321";

Transaction t = jedis.multi();
t.sadd(friendsPrefix + userOneId, userTwoId);
t.sadd(friendsPrefix + userTwoId, userOneId);
t.exec();

You can even make a transaction success dependent on a specific key by “watching” it right before you instantiate your Transaction:

jedis.watch("friends#deleted#" + userOneId);

If the value of that key changes before the transaction is executed, the transaction will not be completed successfully.
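
Putting the two together, here is a minimal sketch of a watched transaction, assuming the Jedis convention of returning a null result from exec when the transaction was discarded:

jedis.watch("friends#deleted#" + userOneId);

Transaction t = jedis.multi();
t.sadd(friendsPrefix + userOneId, userTwoId);
t.sadd(friendsPrefix + userTwoId, userOneId);
List<Object> results = t.exec();

if (results == null) {
    // the watched key changed and the transaction was discarded; retry or abort
}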

7. Pipelining

When we have to send multiple commands, we can pack them together in one request and save connection overhead by using pipelines; this is essentially a network optimization. As long as the operations are mutually independent, we can take advantage of this technique:

String userOneId = "4352523";
String userTwoId = "4849888";

Pipeline p = jedis.pipelined();
p.sadd("searched#" + userOneId, "paris");
p.zadd("ranking", 126, userOneId);
p.zadd("ranking", 325, userTwoId);
Response<Boolean> pipeExists = p.sismember("searched#" + userOneId, "paris");
Response<Set<String>> pipeRanking = p.zrange("ranking", 0, -1);
p.sync();

String exists = pipeExists.get();
Set<String> ranking = pipeRanking.get();

Notice that we do not get direct access to the command responses; instead, we are given a Response instance from which we can retrieve the underlying result after the pipeline has been synced.

8. Publish/Subscribe

We can use Redis' message broker functionality to send messages between the different components of our system. Make sure the subscriber and publisher threads do not share the same Jedis connection.

8.1. Subscriber

Subscribe and listen to messages sent to a channel:

Jedis jSubscriber = new Jedis();
jSubscriber.subscribe(new JedisPubSub() {
    @Override
    public void onMessage(String channel, String message) {
        // handle message
    }
}, "channel");

subscribe is a blocking method; you will need to unsubscribe from the JedisPubSub explicitly. We have overridden the onMessage method, but there are many more useful methods available to override.
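
Since subscribe blocks, a common approach is to run it on a dedicated thread and keep a reference to the JedisPubSub so another thread can unsubscribe later; a minimal sketch:

JedisPubSub listener = new JedisPubSub() {
    @Override
    public void onMessage(String channel, String message) {
        // handle message
    }
};

// the subscribe call blocks, so run it on its own thread
new Thread(() -> jSubscriber.subscribe(listener, "channel")).start();

// ... later, from another thread, stop listening:
listener.unsubscribe();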

8.2. Publisher

Then simply send messages to that same channel from the publisher’s thread:

Jedis jPublisher = new Jedis();
jPublisher.publish("channel", "test message");

9. Connection Pooling

It is important to know that the way we have been dealing with our Jedis instance is naive. In a real-world scenario, you do not want to use a single instance in a multi-threaded environment, as a single instance is not thread-safe.

Luckily, we can easily create a pool of connections to Redis to reuse on demand – a pool that is thread-safe and reliable, as long as you return the resource to the pool when you are done with it.

Let’s create the JedisPool:

final JedisPoolConfig poolConfig = buildPoolConfig();
JedisPool jedisPool = new JedisPool(poolConfig, "localhost");

private JedisPoolConfig buildPoolConfig() {
    final JedisPoolConfig poolConfig = new JedisPoolConfig();
    poolConfig.setMaxTotal(128);
    poolConfig.setMaxIdle(128);
    poolConfig.setMinIdle(16);
    poolConfig.setTestOnBorrow(true);
    poolConfig.setTestOnReturn(true);
    poolConfig.setTestWhileIdle(true);
    poolConfig.setMinEvictableIdleTimeMillis(Duration.ofSeconds(60).toMillis());
    poolConfig.setTimeBetweenEvictionRunsMillis(Duration.ofSeconds(30).toMillis());
    poolConfig.setNumTestsPerEvictionRun(3);
    poolConfig.setBlockWhenExhausted(true);
    return poolConfig;
}

Since the pool instance is thread-safe, you can store it somewhere statically, but you should take care of destroying the pool to avoid leaks when the application is shut down.
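
For example, the pool can be destroyed from a shutdown hook or a @PreDestroy method; a minimal sketch:

// e.g. invoked from a shutdown hook or @PreDestroy callback
jedisPool.destroy();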

Now we can make use of our pool from anywhere in the application when needed:

try (Jedis jedis = jedisPool.getResource()) {
    // do operations with jedis resource
}

We used the Java try-with-resources statement to avoid having to manually close the Jedis resource, but if you cannot use this statement, you can also close the resource manually in a finally block.
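
For reference, a sketch of the manual equivalent looks like this – note that close() on a pooled instance returns it to the pool rather than tearing down the underlying connection:

Jedis jedis = null;
try {
    jedis = jedisPool.getResource();
    // do operations with jedis resource
} finally {
    if (jedis != null) {
        jedis.close();
    }
}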

Make sure you use a pool as described in your application if you do not want to face nasty multi-threading issues. You can obviously tune the pool configuration parameters to the best setup for your system.

10. Redis Cluster

Redis Cluster provides easy scalability and high availability; we encourage you to read the official specification if you are not familiar with it. We will not cover Redis Cluster setup, since that is out of scope for this article, but you should have no problems setting one up by following its documentation.

Once we have that ready, we can start using it from our application:

try (JedisCluster jedisCluster = new JedisCluster(new HostAndPort("localhost", 6379))) {
    // use the jedisCluster resource as if it was a normal Jedis resource
} catch (IOException e) {
    // handle the exception thrown when closing the cluster connection
}

We only need to provide the host and port details of one of our master instances; it will auto-discover the rest of the instances in the cluster.

This is certainly a very powerful feature, but it is not a silver bullet. When using Redis Cluster, you cannot perform transactions or use pipelines – two important features on which many applications rely to ensure data integrity.

Transactions are disabled because, in a clustered environment, keys are distributed across multiple instances. Atomicity and thread safety cannot be guaranteed for operations that involve command execution on different instances.

Some advanced key creation strategies can ensure that data you need to keep together is persisted to the same instance. In theory, that should enable you to perform transactions successfully against one of the underlying Jedis instances of the Redis Cluster.

Unfortunately, you currently cannot find out using Jedis in which Redis instance a particular key is saved (even though this is supported natively by Redis), so you do not know against which instance you must perform the transaction. If you are interested in this, you can find more information here.

11. Conclusion

The vast majority of Redis' features are already available in Jedis, and its development moves forward at a good pace.

It gives you the ability to integrate a powerful in-memory storage engine in your application with very little hassle – just do not forget to set up connection pooling to avoid thread-safety issues.



Using Couchbase in a Spring Application


1. Introduction

In this follow-up to our introduction to Couchbase, we create a set of Spring services that can be combined to form a basic persistence layer for a Spring application, without the use of Spring Data.

2. Cluster Service

In order to satisfy the constraint that only a single CouchbaseEnvironment may be active in the JVM, we begin by writing a service that connects to a Couchbase cluster and provides access to data buckets without directly exposing either the Cluster or CouchbaseEnvironment instances.

2.1. Interface

Here is our ClusterService interface:

public interface ClusterService {
    Bucket openBucket(String name, String password);
}

2.2. Implementation

Our implementation class instantiates a DefaultCouchbaseEnvironment and connects to a cluster during the @PostConstruct phase of Spring context initialization.

This ensures that the cluster is not null and that it is connected when the class is injected into other service classes, thus enabling them to open one or more data buckets:

@Service
public class ClusterServiceImpl implements ClusterService {
    private Cluster cluster;
    
    @PostConstruct
    private void init() {
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.create();
        cluster = CouchbaseCluster.create(env, "localhost");
    }
...
}

Next, we provide a ConcurrentHashMap to contain the open buckets and implement the openBucket method:

private Map<String, Bucket> buckets = new ConcurrentHashMap<>();

@Override
public synchronized Bucket openBucket(String name, String password) {
    if(!buckets.containsKey(name)) {
        Bucket bucket = cluster.openBucket(name, password);
        buckets.put(name, bucket);
    }
    return buckets.get(name);
}

3. Bucket Service

Depending on how you architect your application, you may need to provide access to the same data bucket in multiple Spring services.

If we merely attempted to open the same bucket in two or more services during application startup, the second service to do so would likely encounter a ConcurrentTimeoutException.

To avoid this scenario, we define a BucketService interface and an implementation class per bucket. Each implementation class acts as a bridge between the ClusterService and the classes that need direct access to a particular Bucket.

3.1. Interface

Here is our BucketService interface:

public interface BucketService {
    Bucket getBucket();
}

3.2. Implementation

The following class provides access to the “baeldung-tutorial” bucket:

@Service
@Qualifier("TutorialBucketService")
public class TutorialBucketService implements BucketService {

    @Autowired
    private ClusterService couchbase;
    
    private Bucket bucket;
    
    @PostConstruct
    private void init() {
        bucket = couchbase.openBucket("baeldung-tutorial", "");
    }

    @Override
    public Bucket getBucket() {
        return bucket;
    }
}

By injecting the ClusterService into our TutorialBucketService implementation class and opening the bucket in a method annotated with @PostConstruct, we have ensured that the bucket will be ready for use when the TutorialBucketService is injected into other services.

4. Persistence Layer

Now that we have a service in place to obtain a Bucket instance, we will create a repository-like persistence layer that provides CRUD operations for entity classes to other services without exposing the Bucket instance to them.

4.1. The Person Entity

Here is the Person entity class that we wish to persist:

public class Person {

    private String id;
    private String type;
    private String name;
    private String homeTown;

    // standard getters and setters
}

4.2. Converting Entity Classes To and From JSON

To convert entity classes to and from the JsonDocument objects that Couchbase uses in its persistence operations, we define the JsonDocumentConverter interface:

public interface JsonDocumentConverter<T> {
    JsonDocument toDocument(T t);
    T fromDocument(JsonDocument doc);
}

4.3. Implementing the JSON Converter

Next, we need to implement a JsonConverter for Person entities.

@Service
public class PersonDocumentConverter
  implements JsonDocumentConverter<Person> {
    ...
}

We could use the Jackson library in conjunction with the JsonObject class's toJson and fromJson methods to serialize and deserialize the entities; however, there is additional overhead in doing so.

Instead, for the toDocument method, we'll use the fluent methods of the JsonObject class to create and populate a JsonObject before wrapping it in a JsonDocument:

@Override
public JsonDocument toDocument(Person p) {
    JsonObject content = JsonObject.empty()
            .put("type", "Person")
            .put("name", p.getName())
            .put("homeTown", p.getHomeTown());
    return JsonDocument.create(p.getId(), content);
}

And for the fromDocument method, we'll use the JsonObject class's getString method along with the setters in the Person class:

@Override
public Person fromDocument(JsonDocument doc) {
    JsonObject content = doc.content();
    Person p = new Person();
    p.setId(doc.id());
    p.setType("Person");
    p.setName(content.getString("name"));
    p.setHomeTown(content.getString("homeTown"));
    return p;
}

4.4. CRUD Interface

We now create a generic CrudService interface that defines persistence operations for entity classes:

public interface CrudService<T> {
    void create(T t);
    T read(String id);
    T readFromReplica(String id);
    void update(T t);
    void delete(String id);
    boolean exists(String id);
}

4.5. Implementing the CRUD Service

With the entity and converter classes in place, we now implement the CrudService for the Person entity, injecting the bucket service and document converter shown above and retrieving the bucket during initialization:

@Service
public class PersonCrudService implements CrudService<Person> {
    
    @Autowired
    private TutorialBucketService bucketService;
    
    @Autowired
    private PersonDocumentConverter converter;
    
    private Bucket bucket;
    
    @PostConstruct
    private void init() {
        bucket = bucketService.getBucket();
    }

    @Override
    public void create(Person person) {
        if(person.getId() == null) {
            person.setId(UUID.randomUUID().toString());
        }
        JsonDocument document = converter.toDocument(person);
        bucket.insert(document);
    }

    @Override
    public Person read(String id) {
        JsonDocument doc = bucket.get(id);
        return (doc != null ? converter.fromDocument(doc) : null);
    }

    @Override
    public Person readFromReplica(String id) {
        List<JsonDocument> docs = bucket.getFromReplica(id, ReplicaMode.FIRST);
        return (docs.isEmpty() ? null : converter.fromDocument(docs.get(0)));
    }

    @Override
    public void update(Person person) {
        JsonDocument document = converter.toDocument(person);
        bucket.upsert(document);
    }

    @Override
    public void delete(String id) {
        bucket.remove(id);
    }

    @Override
    public boolean exists(String id) {
        return bucket.exists(id);
    }
}

5. Putting It All Together

Now that we have all of the pieces of our persistence layer in place, here’s a simple example of a registration service that uses the PersonCrudService to persist and retrieve registrants:

@Service
public class RegistrationService {

    @Autowired
    private PersonCrudService crud;
    
    public void registerNewPerson(String name, String homeTown) {
        Person person = new Person();
        person.setName(name);
        person.setHomeTown(homeTown);
        crud.create(person);
    }
    
    public Person findRegistrant(String id) {
        try {
            return crud.read(id);
        } catch (CouchbaseException e) {
            return crud.readFromReplica(id);
        }
    }
}

6. Conclusion

We have shown that with a few basic Spring services, it is fairly trivial to incorporate Couchbase into a Spring application and implement a basic persistence layer without using Spring Data.

The source code shown in this tutorial is available in the GitHub project.

You can learn more about the Couchbase Java SDK at the official Couchbase developer documentation site.



Intro to Spring Boot Starters


1. Overview

Dependency management is a critical aspect of any complex project. Doing this manually is less than ideal; the more time you spend on it, the less time you have for the other important aspects of the project.

Spring Boot starters were built to address exactly this problem. Starter POMs are a set of convenient dependency descriptors that you can include in your application. You get a one-stop shop for all the Spring and related technology that you need, without having to hunt through sample code and copy-paste loads of dependency descriptors.

We have more than 30 Boot starters available – let’s see some of them in the following sections.

2. The Web Starter

First, let's look at developing a REST service; we can use libraries like Spring MVC, Tomcat and Jackson – a lot of dependencies for a single application.

Spring Boot starters can reduce the number of manually managed dependencies to just one. So, instead of specifying the dependencies manually, just add one starter, as in the following example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

Now we can create a REST controller. For the sake of simplicity, we won't use a database and will focus on the REST controller:

@RestController
public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    @RequestMapping("/entity/all")
    public List<GenericEntity> findAll() {
        return entityList;
    }

    @RequestMapping(value = "/entity", method = RequestMethod.POST)
    public GenericEntity addEntity(GenericEntity entity) {
        entityList.add(entity);
        return entity;
    }

    @RequestMapping("/entity/findby/{id}")
    public GenericEntity findById(@PathVariable Long id) {
        return entityList.stream()
          .filter(entity -> entity.getId().equals(id))
          .findFirst()
          .get();
    }
}

The GenericEntity is a simple bean with id of type Long and value of type String.
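
For completeness, a minimal sketch of that bean as described might look like this:

public class GenericEntity {
    private Long id;
    private String value;

    // standard constructors, getters and setters
}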

That's it – with the application running, you can access http://localhost:8080/springbootapp/entity/all and check that the controller is working.

We have created a REST application with quite minimal configuration.

3. The Test Starter

For testing, we usually use the following set of libraries: Spring Test, JUnit, Hamcrest and Mockito. We can include all of these libraries manually, but a Spring Boot starter can be used to include them automatically in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Notice that you don't need to specify the version number of an artifact. Spring Boot will figure out what version to use – all you need to specify is the version of the spring-boot-starter-parent artifact. If you later need to upgrade the Boot library and dependencies, just upgrade the Boot version in one place and it will take care of the rest.
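
That parent is declared once at the top of the POM; a minimal sketch, with the version shown being purely illustrative:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.3.5.RELEASE</version>
</parent>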

Let’s actually test the controller we created in the previous example.

There are two ways to test the controller:

  • Using the mock environment
  • Using the embedded Servlet container (like Tomcat or Jetty)

In this example we’ll use a mock environment:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class SpringBootApplicationTest {
    @Autowired
    private WebApplicationContext webApplicationContext;
    private MockMvc mockMvc;

    @Before
    public void setupMockMvc() {
        mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build();
    }

    @Test
    public void givenRequestHasBeenMade_whenMeetsAllOfGivenConditions_thenCorrect()
      throws Exception { 
        MediaType contentType = new MediaType(MediaType.APPLICATION_JSON.getType(),
          MediaType.APPLICATION_JSON.getSubtype(), Charset.forName("utf8"));
        mockMvc.perform(MockMvcRequestBuilders.get("/entity/all"))
          .andExpect(MockMvcResultMatchers.status().isOk())
          .andExpect(MockMvcResultMatchers.content().contentType(contentType))
          .andExpect(jsonPath("$", hasSize(4)));
    } 
}

What is important here is that the @WebAppConfiguration annotation and MockMvc are part of the spring-test module, hasSize is a Hamcrest matcher, and @Before is a JUnit annotation. These are all available by importing this one starter dependency.

4. The Data JPA Starter

Most web applications have some sort of persistence – and that’s quite often JPA.

Instead of defining all of the associated dependencies manually, let's use the starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

Notice that out of the box we have automatic support for at least the following databases: H2, Derby and HSQLDB. In our example, we'll use H2.

Now let’s create the repository for our entity:

public interface GenericEntityRepository extends JpaRepository<GenericEntity, Long> {}

Time to test the code. Here is the JUnit test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootJPATest {
    
    @Autowired
    private GenericEntityRepository genericEntityRepository;

    @Test
    public void givenGenericEntityRepository_whenSaveAndRetrieveEntity_thenOK() {
        GenericEntity genericEntity = 
          genericEntityRepository.save(new GenericEntity("test"));
        GenericEntity foundEntity = 
          genericEntityRepository.findOne(genericEntity.getId());

        assertNotNull(foundEntity);
        assertEquals(genericEntity.getValue(), foundEntity.getValue());
    }
}

We didn't spend time specifying the database vendor, connection URL, or credentials. No extra configuration is necessary, as we're benefiting from the solid Boot defaults; but of course all of these details can still be configured if necessary.
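
If you do need to override the defaults, the usual place is application.properties; an illustrative sketch for a hypothetical named H2 database:

spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.username=sa
spring.datasource.password=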

5. The Mail Starter

A very common task in enterprise development is sending email, and dealing directly with the Java Mail API can be difficult.

The Spring Boot starter hides this complexity – mail dependencies can be specified in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
</dependency>

Now we can directly use the JavaMailSender, so let’s write some tests.

For testing purposes, we need a simple SMTP server. In this example, we'll use Wiser. This is how we can include it in our POM:

<dependency>
    <groupId>org.subethamail</groupId>
    <artifactId>subethasmtp</artifactId>
    <version>3.1.7</version>
    <scope>test</scope>
</dependency>

The latest version of Wiser can be found in the Maven Central repository.

Here is the source code for the test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootMailTest {
    @Autowired
    private JavaMailSender javaMailSender;

    private Wiser wiser;

    private String userTo = "user2@localhost";
    private String userFrom = "user1@localhost";
    private String subject = "Test subject";
    private String textMail = "Text subject mail";

    @Before
    public void setUp() throws Exception {
        final int TEST_PORT = 25;
        wiser = new Wiser(TEST_PORT);
        wiser.start();
    }

    @After
    public void tearDown() throws Exception {
        wiser.stop();
    }

    @Test
    public void givenMail_whenSendAndReceived_thenCorrect() throws Exception {
        SimpleMailMessage message = composeEmailMessage();
        javaMailSender.send(message);
        List<WiserMessage> messages = wiser.getMessages();

        assertThat(messages, hasSize(1));
        WiserMessage wiserMessage = messages.get(0);
        assertEquals(userFrom, wiserMessage.getEnvelopeSender());
        assertEquals(userTo, wiserMessage.getEnvelopeReceiver());
        assertEquals(subject, getSubject(wiserMessage));
        assertEquals(textMail, getMessage(wiserMessage));
    }

    private String getMessage(WiserMessage wiserMessage)
      throws MessagingException, IOException {
        return wiserMessage.getMimeMessage().getContent().toString().trim();
    }

    private String getSubject(WiserMessage wiserMessage) throws MessagingException {
        return wiserMessage.getMimeMessage().getSubject();
    }

    private SimpleMailMessage composeEmailMessage() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo(userTo);
        mailMessage.setReplyTo(userFrom);
        mailMessage.setFrom(userFrom);
        mailMessage.setSubject(subject);
        mailMessage.setText(textMail);
        return mailMessage;
    }
}

In the test, the @Before and @After methods are in charge of starting and stopping the mail server.

Notice that we’re wiring in the JavaMailSender bean – the bean was automatically created by Spring Boot.

Just like any other defaults in Boot, the email settings for the JavaMailSender can be customized in application.properties:

spring.mail.host=localhost
spring.mail.port=25
spring.mail.properties.mail.smtp.auth=false

So we configured the mail server on localhost:25 and we didn’t require authentication.

6. Conclusion

In this article, we have given an overview of starters, explained why we need them, and provided examples of how to use them in your projects.

Let’s recap the benefits of using Spring Boot starters:

  • increase pom manageability
  • production ready, tested & supported dependency configurations
  • decrease the overall configuration time for the project

The full list of starters can be found here. The source code for the examples can be found here.

