
Vue.js Frontend with a Spring Boot Backend


1. Overview

In this tutorial, we’ll go over an example application that renders a single page with a Vue.js frontend, while using Spring Boot as a backend.

We’ll also utilize Thymeleaf to pass information to the template.

2. Spring Boot Setup

The application pom.xml uses the spring-boot-starter-thymeleaf dependency for template rendering along with the usual spring-boot-starter-web:

<dependency> 
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId> 
    <version>2.0.3.RELEASE</version>
</dependency> 
<dependency> 
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId> 
    <version>2.0.3.RELEASE</version>
</dependency>

Thymeleaf looks for view templates under templates/ by default, so we’ll add an empty index.html at src/main/resources/templates/index.html. We’ll update its contents in the next section.

Finally, our Spring Boot controller will be in src/main/java:

@Controller
public class MainController {
    @GetMapping("/")
    public String index(Model model) {
        model.addAttribute("eventName", "FIFA 2018");
        return "index";
    }
}

This controller renders a single template with data passed to the view via the Spring Web Model object using model.addAttribute.

Let’s run the application using:

mvn spring-boot:run

Browse to http://localhost:8080 to see the index page. It’ll be empty at this point, of course.

Our goal is to make the page print out something like this:

Name of Event: FIFA 2018

Lionel Messi
Argentina's superstar

Cristiano Ronaldo
Portugal's top-ranked player

3. Rendering Data Using a Vue.js Component

3.1. Basic Setup of Template

In the template, let’s load Vue.js and Bootstrap (optional) to render the User Interface:

<!-- in the head tag -->

<!-- Include Bootstrap -->

<!-- other markup -->

<!-- at the end of the body tag -->
<script 
  src="https://cdn.jsdelivr.net/npm/vue@2.5.16/dist/vue.js">
</script>
<script 
  src="https://cdnjs.cloudflare.com/ajax/libs/babel-standalone/6.21.1/babel.min.js">
</script>

Here we load Vue.js from a CDN, but you can host it too if that’s preferable.

We load Babel in-browser so that we can write some ES6-compliant code in the page without having to run transpilation steps.

In a real-world application, you’ll likely use a build process using a tool such as Webpack and Babel transpiler, instead of using in-browser Babel.

Now let’s save the page and restart using the mvn spring-boot:run command. We refresh the browser to see our updates; nothing interesting yet.

Next, let’s set up an empty div element to which we’ll attach our User Interface:

<div id="contents"></div>

Next, we set up a Vue application on the page:

<script type="text/babel">
    var app = new Vue({
        el: '#contents'
    });
</script>

What just happened? This code creates a Vue application on the page and attaches it to the element matching the CSS selector #contents – that is, the empty div element we just added. The application is now set up to use Vue.js!

3.2. Displaying Data in the Template

Next, let’s create a header that shows the ‘eventName‘ attribute we passed from the Spring controller, and render it using Thymeleaf’s features:

<div class="lead">
    <strong>Name of Event:</strong>
    <span th:text="${eventName}"></span>
</div>

Now let’s attach a ‘data’ attribute to the Vue application to hold our array of player data, which is a simple JSON array.

Our Vue app now looks like this:

<script type="text/babel">
    var app = new Vue({
        el: '#contents',
        data: {
            players: [
                { id: "1", 
                  name: "Lionel Messi", 
                  description: "Argentina's superstar" },
                { id: "2", 
                  name: "Christiano Ronaldo", 
                  description: "World #1-ranked player from Portugal" }
            ]
        }
    });
</script>

Now Vue.js knows about a data attribute called players.

3.3. Rendering Data with a Vue.js Component

Next, let’s create a Vue.js component named player-card that renders just one player. Remember to register this component before creating the Vue app; otherwise, Vue won’t find it:

Vue.component('player-card', {
    props: ['player'],
    template: `<div class="card">
        <div class="card-body">
            <h6 class="card-title">
                {{ player.name }}
            </h6>
            <p class="card-text">
                <div>
                    {{ player.description }}
                </div>
            </p>
        </div>
        </div>`
});

Finally, let’s loop over the set of players in the app object and render a player-card component for each player:

<ul>
    <li style="list-style-type:none" v-for="player in players">
        <player-card
          v-bind:player="player" 
          v-bind:key="player.id">
        </player-card>
    </li>
</ul>

The logic here is the Vue directive called v-for, which will loop over each player in the players data attribute and render a player-card for each player entry inside a <li> element.

v-bind:player means that the player-card component will be given a property called player whose value will be the player loop variable currently being worked with. v-bind:key is required to make each <li> element unique.

Generally, player.id is a good choice since it is already unique.

Now if we reload the page and inspect the generated HTML markup in devtools, it will look similar to this:

<ul>
    <li style="list-style-type: none;">
        <div class="card">
            // contents
        </div>
    </li>
    <li style="list-style-type: none;">
        <div class="card">
            // contents
        </div>
    </li>
</ul>

A workflow improvement note: it’ll quickly become cumbersome to have to restart the application and refresh the browser each time you make a change to the code.

Therefore, to make life easier, please refer to this article on how to use Spring Boot devtools and automatic restart.

4. Conclusion

In this quick article, we went over how to set up a web application using Spring Boot for the backend and Vue.js for the frontend. This recipe can form the basis for more powerful and scalable applications; it’s just a starting point.

As usual, code samples can be found over on GitHub.


Remove the First Element from a List


1. Overview

In this super-quick tutorial, we’ll show how to remove the first element from a List.

We’ll perform this operation for two common implementations of the List interface – ArrayList and LinkedList.

2. Creating a List

Firstly, let’s populate our Lists:

// the lists under test, declared as fields
private List<String> list = new ArrayList<>();
private LinkedList<String> linkedList = new LinkedList<>();

@Before
public void init() {
    list.add("cat");
    list.add("dog");
    list.add("pig");
    list.add("cow");
    list.add("goat");

    linkedList.add("cat");
    linkedList.add("dog");
    linkedList.add("pig");
    linkedList.add("cow");
    linkedList.add("goat");
}

3. ArrayList

Secondly, let’s remove the first element from the ArrayList, and make sure that our list doesn’t contain it any longer:

@Test
public void givenList_whenRemoveFirst_thenRemoved() {
    list.remove(0);

    assertThat(list, hasSize(4));
    assertThat(list, not(contains("cat")));
}

As shown above, we’re using the remove(index) method to remove the first element – this will also work for any implementation of the List interface.
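Since remove(index) also returns the element that was removed, we can capture it and assert on it directly:

String removed = list.remove(0);
assertEquals("cat", removed);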

4. LinkedList

LinkedList also implements the remove(index) method (in its own way), but it also has the removeFirst() method.

Let’s make sure that it works as expected:

@Test
public void givenLinkedList_whenRemoveFirst_thenRemoved() {
    linkedList.removeFirst();

    assertThat(linkedList, hasSize(4));
    assertThat(linkedList, not(contains("cat")));
}

5. Time Complexity

Although the methods look similar, their efficiency differs. ArrayList‘s remove() method requires O(n) time, whereas LinkedList‘s removeFirst() method requires O(1) time.

This is because ArrayList uses an array under the hood, and removing an element requires shifting the rest of the array towards the beginning. The larger the array is, the more elements need to be shifted.

In contrast, LinkedList uses pointers, meaning that each element points to both the next and the previous one.

Hence, removing the first element simply means changing the pointer to the first element. This operation always takes the same time, regardless of the size of the list.

6. Conclusion

In this article, we’ve covered how to remove the first element from a List, and have compared the efficiency of this operation for ArrayList and LinkedList implementations.

As usual, the complete source code is available over on GitHub.

Spring Session With JDBC


1. Overview

In this quick tutorial, we’ll learn how to use Spring Session JDBC to persist session information to a database.

For demonstration purposes, we’ll be using an in-memory H2 database.

2. Configuration Options

The easiest and fastest way to create our sample project is by using Spring Boot. However, we’ll also show a non-Boot way to set things up.

Hence, there’s no need to complete both sections 3 and 4 – just pick one depending on whether or not we’re using Spring Boot to configure Spring Session.

3. Spring Boot Configuration

First, let’s look at the required configuration for Spring Session JDBC.

3.1. Maven Dependencies

First, we need to add these dependencies to our project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency> 
<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
    <scope>runtime</scope>
</dependency>

Our application runs with Spring Boot, and the parent pom.xml provides versions for each entry. The latest version of each dependency can be found here: spring-boot-starter-web, spring-boot-starter-test, spring-session-jdbc, and h2.

Surprisingly, the only configuration property that we need to enable Spring Session backed by a relational database is in the application.properties:

spring.session.store-type=jdbc

4. Standard Spring Config (no Spring Boot)

Let’s also have a look at integrating and configuring spring-session without Spring Boot – just with plain Spring.

4.1. Maven Dependencies

First, if we’re adding spring-session-jdbc to a standard Spring project, we’ll need to add spring-session-jdbc and h2 to our pom.xml (last two dependencies from the snippet in the previous section).

4.2. Spring Session Configuration

Now let’s add a configuration class for Spring Session JDBC:

@Configuration
@EnableJdbcHttpSession
public class Config
  extends AbstractHttpSessionApplicationInitializer {

    @Bean
    public EmbeddedDatabase dataSource() {
        return new EmbeddedDatabaseBuilder()
          .setType(EmbeddedDatabaseType.H2)
          .addScript("org/springframework/session/jdbc/schema-h2.sql").build();
    }

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}

As we can see, the differences are minimal. Now we have to define our EmbeddedDatabase and PlatformTransactionManager beans explicitly – Spring Boot does it for us in the previous configuration.

The above ensures that a Spring bean named springSessionRepositoryFilter is registered with our servlet container for every request.

5. A Simple App

Moving on, let’s look at a simple app that demonstrates session persistence.

5.1. Controller

First, let’s add a Controller class to store and display information in the HttpSession:

@Controller
public class SpringSessionJdbcController {

    @GetMapping("/")
    public String index(Model model, HttpSession session) {
        List<String> favoriteColors = getFavColors(session);
        model.addAttribute("favoriteColors", favoriteColors);
        model.addAttribute("sessionId", session.getId());
        return "index";
    }

    @PostMapping("/saveColor")
    public String saveMessage
      (@RequestParam("color") String color, 
      HttpServletRequest request) {
 
        List<String> favoriteColors 
          = getFavColors(request.getSession());
        if (!StringUtils.isEmpty(color)) {
            favoriteColors.add(color);
            request.getSession().
              setAttribute("favoriteColors", favoriteColors);
        }
        return "redirect:/";
    }

    private List<String> getFavColors(HttpSession session) {
        List<String> favoriteColors = (List<String>) session
          .getAttribute("favoriteColors");
        
        if (favoriteColors == null) {
            favoriteColors = new ArrayList<>();
        }
        return favoriteColors;
    }
}

6. Testing Our Implementation

Now that we have an API with a GET and POST method, let’s write tests to invoke both methods.

In each case, we should be able to assert that the session information is persisted in the database. To verify this, we’ll query the session database directly.

Let’s first set things up:

@RunWith(SpringRunner.class)
@SpringBootTest(
  webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class SpringSessionJdbcApplicationTests {

    @LocalServerPort
    private int port;

    @Autowired
    private TestRestTemplate testRestTemplate;

    private List<String> getSessionIdsFromDatabase() 
      throws SQLException {
 
        List<String> result = new ArrayList<>();
        ResultSet rs = getResultSet(
          "SELECT * FROM SPRING_SESSION");
        
        while (rs.next()) {
            result.add(rs.getString("SESSION_ID"));
        }
        return result;
    }

    private List<byte[]> getSessionAttributeBytesFromDatabase() 
      throws SQLException {
 
        List<byte[]> result = new ArrayList<>();
        ResultSet rs = getResultSet(
          "SELECT * FROM SPRING_SESSION_ATTRIBUTES");
        
        while (rs.next()) {
            result.add(rs.getBytes("ATTRIBUTE_BYTES"));
        }
        return result;
    }

    private ResultSet getResultSet(String sql) 
      throws SQLException {
 
        Connection conn = DriverManager
          .getConnection("jdbc:h2:mem:testdb", "sa", "");
        Statement stat = conn.createStatement();
        return stat.executeQuery(sql);
    }
}

Note the use of @FixMethodOrder(MethodSorters.NAME_ASCENDING) to control the order of test case execution. Read more about it here.

Let’s begin by asserting that the session tables are empty in the database:

@Test
public void whenH2DbIsQueried_thenSessionInfoIsEmpty() 
  throws SQLException {
 
    assertEquals(
      0, getSessionIdsFromDatabase().size());
    assertEquals(
      0, getSessionAttributeBytesFromDatabase().size());
}

Next, we test the GET endpoint:

@Test
public void whenH2DbIsQueried_thenOneSessionIsCreated() 
  throws SQLException {
 
    assertThat(this.testRestTemplate.getForObject(
      "http://localhost:" + port + "/", String.class))
      .isNotEmpty();
    assertEquals(1, getSessionIdsFromDatabase().size());
}

When the API is invoked for the first time, a session is created and persisted in the database. As we can see, there is only one row in the SPRING_SESSION table at this point.

Finally, we test the POST endpoint by providing a favorite color:

@Test
public void whenH2DbIsQueried_thenSessionAttributeIsRetrieved()
  throws Exception {
 
    MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
    map.add("color", "red");
    this.testRestTemplate.postForObject(
      "http://localhost:" + port + "/saveColor", map, String.class);
    List<byte[]> queryResponse = getSessionAttributeBytesFromDatabase();
    
    assertEquals(1, queryResponse.size());
    ObjectInput in = new ObjectInputStream(
      new ByteArrayInputStream(queryResponse.get(0)));
    List<String> obj = (List<String>) in.readObject();
    assertEquals("red", obj.get(0));
}

As expected, the SPRING_SESSION_ATTRIBUTES table persists the favorite color. Notice that we have to deserialize the contents of ATTRIBUTE_BYTES to a list of String objects, since Spring serializes session attributes before persisting them to the database.

7. How Does It Work?

Looking at the controller, there’s no indication of the database persisting the session information. All the magic happens in the one line we added to application.properties.

That is, when we specify spring.session.store-type=jdbc, behind the scenes, Spring Boot will apply a configuration that is equivalent to manually adding @EnableJdbcHttpSession annotation.

This creates a Spring Bean named springSessionRepositoryFilter that implements a SessionRepositoryFilter.

Another key point is that the filter intercepts every HttpServletRequest and wraps it into a SessionRepositoryRequestWrapper.

It also calls the commitSession method to persist the session information.

8. Session Information Stored in H2 Database

By adding the properties below, we can take a look at the tables where the session information is stored via the H2 console at http://localhost:8080/h2-console/:

spring.h2.console.enabled=true
spring.h2.console.path=/h2-console

9. Conclusion

Spring Session is a powerful tool for managing HTTP sessions in a distributed system architecture. Spring takes care of the heavy lifting for simple use cases by providing a predefined schema with minimal configuration. At the same time, it offers the flexibility to come up with our own design for how we want to store session information.

Finally, for managing authentication information using Spring Session you can refer to this article – Guide to Spring Session.

As always, you can find the source code over on GitHub.

Get a Random Number in Kotlin


1. Introduction

This short tutorial will demonstrate how to generate a random number using Kotlin.

2. Random Number Using java.lang.Math

The easiest way to generate a random number in Kotlin is to use java.lang.Math. The example below generates a random double between 0 and 1:

@Test
fun whenRandomNumberWithJavaUtilMath_thenResultIsBetween0And1() {
    val randomNumber = Math.random()
    assertTrue { randomNumber >= 0 }
    assertTrue { randomNumber < 1 }
}

3. Random Number Using ThreadLocalRandom

We can also use java.util.concurrent.ThreadLocalRandom to generate a random double, integer, or long value. Integer and long values generated this way can be either positive or negative.

ThreadLocalRandom is thread-safe and provides better performance in a multithreaded environment because it provides a separate Random object for every thread and thus reduces contention between threads:

@Test
fun whenRandomNumberWithJavaThreadLocalRandom_thenResultsInDefaultRanges() {
    val randomDouble = ThreadLocalRandom.current().nextDouble()
    val randomInteger = ThreadLocalRandom.current().nextInt()
    val randomLong = ThreadLocalRandom.current().nextLong()
    assertTrue { randomDouble >= 0 }
    assertTrue { randomDouble < 1 }
    assertTrue { randomInteger >= Integer.MIN_VALUE }
    assertTrue { randomInteger < Integer.MAX_VALUE }
    assertTrue { randomLong >= Long.MIN_VALUE }
    assertTrue { randomLong < Long.MAX_VALUE }
}

4. Random Number Using kotlin.js

We can also generate a random double using the Math class from the kotlin.js library:

@Test
fun whenRandomNumberWithKotlinJSMath_thenResultIsBetween0And1() {
    val randomDouble = Math.random()
    assertTrue { randomDouble >= 0 }
    assertTrue { randomDouble < 1 }
}

5. Random Number in a Given Range Using Pure Kotlin

Using pure Kotlin, we can create a list of numbers, shuffle it, and then take the first element from that list:

@Test
fun whenRandomNumberWithKotlinNumberRange_thenResultInGivenRange() {
    val randomInteger = (1..12).shuffled().first()
    assertTrue { randomInteger >= 1 }
    assertTrue { randomInteger <= 12 }
}

6. Random Number in a Given Range Using ThreadLocalRandom

ThreadLocalRandom, presented in Section 3, can also be used to generate a random number in a given range:

@Test
fun whenRandomNumberWithJavaThreadLocalRandom_thenResultsInGivenRanges() {
    val randomDouble = ThreadLocalRandom.current().nextDouble(1.0, 10.0)
    val randomInteger = ThreadLocalRandom.current().nextInt(1, 10)
    val randomLong = ThreadLocalRandom.current().nextLong(1, 10)
    assertTrue { randomDouble >= 1 }
    assertTrue { randomDouble < 10 }
    assertTrue { randomInteger >= 1 }
    assertTrue { randomInteger < 10 }
    assertTrue { randomLong >= 1 }
    assertTrue { randomLong < 10 }
}

7. Conclusion

In this article, we’ve learned a few ways to generate a random number in Kotlin.

As always, all code presented in this tutorial can be found over on GitHub.

Auto-import Classes in IntelliJ


1. Overview

This brief tutorial will describe each option of IntelliJ IDEA’s ‘auto-import’ feature.

2. Auto-import

There are several options in IntelliJ IDEA that we may configure in Settings > Editor > Auto Import:

Let’s review each of these options.

2.1. Show Import Popup

When enabled, IDEA will underline a class reference in our code and suggest an import to add:

If there are several options to choose from, IDEA will let us pick an import from a list of alternatives:

2.2. Optimize Imports on the Fly

This one will make IDEA remove unused imports automatically and rearrange others according to the ‘Code Style’ preferences.

2.3. Add Unambiguous Imports on the Fly

Also, there is a way to automatically add an import as we add references to classes that need to be imported.

2.4. Show Import Suggestions for Static Methods and Fields

Our final option will enable the import popup feature for statics.

However, note that turning on only this option (without ‘Show import popup’) will not enable import suggestions for classes:

3. Conclusion

Some developers prefer to have total control over the imports in their classes, while others rely on the IDE to handle this technical task.

Either way, we can benefit from the various configuration options IntelliJ IDEA offers, including those for import behavior.

Add Multiple Items to a Java ArrayList


1. Overview of ArrayList

In this quick tutorial, we’ll show how to add multiple items to an already initialized ArrayList.

For an introduction to the use of the ArrayList, please refer to this article here.

2. AddAll

First of all, let’s introduce a simple way to add multiple items into an ArrayList.

We’ll be using addAll(), which takes a collection as its argument:

List<Integer> list = new ArrayList<>();
List<Integer> anotherList = Arrays.asList(5, 12, 9, 3, 15, 88);
list.addAll(anotherList);

It’s important to keep in mind that the elements added to the first list will reference the same objects as the elements in anotherList.

For that reason, any change made to one of these elements will affect both lists.
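To make the shared-reference behavior concrete, here’s a small sketch using a mutable element type (StringBuilder is just an illustrative choice):

List<StringBuilder> anotherList = Arrays.asList(new StringBuilder("bael"));
List<StringBuilder> list = new ArrayList<>();
list.addAll(anotherList);

anotherList.get(0).append("dung");

// both lists hold a reference to the same StringBuilder instance
assertEquals("baeldung", list.get(0).toString());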

3. Collections.addAll

The Collections class consists exclusively of static methods that operate on or return collections.

One of them is addAll, which takes a destination list plus the items to be added; the items may be specified individually or as an array.

Here’s an example of how to use it with individual elements:

List<Integer> list = new ArrayList<>();
Collections.addAll(list, 1, 2, 3, 4, 5);

And another one to illustrate the operation with an array:

List<Integer> list = new ArrayList<>();
Integer[] otherList = new Integer[] {1, 2, 3, 4, 5};
Collections.addAll(list, otherList);

As in the previous section, the contents of both lists here will refer to the same objects.

4. Using Java 8

Java 8 opens up new possibilities by adding new tools. The one we’ll explore in the next examples is the Stream API:

List<Integer> source = ...;
List<Integer> target = ...;

source.stream()
  .forEachOrdered(target::add);

The main advantage of this approach is the ability to use skip and filter operations. In the next example, we’re going to skip the first element:

source.stream()
  .skip(1)
  .forEachOrdered(target::add);

We can also filter the elements as needed. For instance, by their Integer value:

source.stream()
  .filter(i -> i > 10)
  .forEachOrdered(target::add);

Finally, there are scenarios where we want to work in a null-safe way. For those, we can use Optional:

Optional.ofNullable(source).ifPresent(target::addAll);

In the above example, we’re adding elements from source to target with the addAll method.

5. Conclusion

In this article, we’ve explored different ways to add multiple items to an already initialized ArrayList.

As always, code samples can be found over on GitHub.

How to Filter a Collection in Java


1. Overview

In this short tutorial, we’ll have a look at different ways of filtering a Collection in Java – that is, finding all the items that meet a certain condition.

This is a fundamental task that is present in practically any Java application.

For this reason, the number of libraries that provide functionality for this purpose is significant.

Particularly, in this tutorial we’ll cover:

  • Java 8 Streams’ filter() function
  • Java 9 filtering collector
  • Relevant Eclipse Collections APIs
  • Apache’s CollectionUtils filter() method
  • Guava’s Collections2 filter() approach

2. Using Streams

Since Java 8 was introduced, Streams have gained a key role in most cases where we have to process a collection of data.

Consequently, this is the preferred approach in most cases, as it’s built into Java and requires no additional dependencies.

2.1. Filtering a Collection with Streams

For the sake of simplicity, in all the examples our objective will be to create a method that retrieves only the even numbers from a Collection of Integer values.

Thus, we can express the condition that we’ll use to evaluate each item as ‘value % 2 == 0‘.

In all the cases, we’ll have to define this condition as a Predicate object:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> streamsPredicate = item -> item % 2 == 0;

    return baseCollection.stream()
      .filter(streamsPredicate)
      .collect(Collectors.toList());
}

It’s important to note that each library we analyze in this tutorial provides its own Predicate implementation, but all of them are defined as functional interfaces, therefore allowing us to use lambda functions to declare them.

In this case, we used a predefined Collector provided by Java that accumulates the elements into a List, but we could’ve used others, as discussed in this previous post.

2.2. Filtering After Grouping a Collection in Java 9

Streams allow us to aggregate items using the groupingBy collector.

Yet, if we filter as we did in the last section, some elements might get discarded at an early stage, before this collector comes into play.

For this reason, the filtering collector was introduced with Java 9, with the objective of processing the subcollections after they have been grouped.

Following our example, let’s imagine we want to group our collection based on the number of digits each Integer has, before filtering out the odd numbers:

public Map<Integer, List<Integer>> findEvenNumbersAfterGrouping(
  Collection<Integer> baseCollection) {
 
    Function<Integer, Integer> getQuantityOfDigits = item -> (int) Math.log10(item) + 1;
    
    return baseCollection.stream()
      .collect(groupingBy(
        getQuantityOfDigits,
        filtering(item -> item % 2 == 0, toList())));
}

In short, if we use this collector, we might end up with entries mapped to empty lists, whereas if we filter before grouping, the collector wouldn’t create such entries at all.

Of course, we would choose the approach based on our requirements.
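To make the difference concrete, here’s a small sketch using hypothetical sample values:

Collection<Integer> baseCollection = Arrays.asList(9, 12, 55, 56);

// filtering after grouping keeps a (possibly empty) entry for every group:
// {1=[], 2=[12, 56]}
Map<Integer, List<Integer>> filteredAfterGrouping = baseCollection.stream()
  .collect(groupingBy(
    item -> (int) Math.log10(item) + 1,
    filtering(item -> item % 2 == 0, toList())));

// filtering before grouping discards the odd numbers up front, so no empty entries:
// {2=[12, 56]}
Map<Integer, List<Integer>> filteredBeforeGrouping = baseCollection.stream()
  .filter(item -> item % 2 == 0)
  .collect(groupingBy(item -> (int) Math.log10(item) + 1));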

3. Using Eclipse Collections

We can also make use of some other third-party libraries to accomplish our objective, whether because our application doesn’t support Java 8 or because we want to take advantage of some powerful functionality not provided by Java.

Such is the case of Eclipse Collections, a library that strives to keep up with the new paradigms, evolving and embracing the changes introduced by all the latest Java releases.

We can begin by exploring our Eclipse Collections Introductory post to have a broader knowledge of the functionality provided by this library.

3.1. Dependencies

Let’s begin by adding the following dependency to our project’s pom.xml:

<dependency>
    <groupId>org.eclipse.collections</groupId>
    <artifactId>eclipse-collections</artifactId>
    <version>9.2.0</version>
</dependency>

The eclipse-collections artifact includes all the necessary data structure interfaces and the API itself.

3.2. Filtering a Collection with Eclipse Collections

Let’s now use Eclipse Collections’ filtering functionality on one of its data structures, such as its MutableList:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> eclipsePredicate
      = item -> item % 2 == 0;
 
    Collection<Integer> filteredList = Lists.mutable
      .ofAll(baseCollection)
      .select(eclipsePredicate);

    return filteredList;
}

As an alternative, we could’ve used the Iterate‘s select() static method to define the filteredList object:

Collection<Integer> filteredList
 = Iterate.select(baseCollection, eclipsePredicate);

4. Using Apache’s CollectionUtils

To get started with Apache’s CollectionUtils library, we can check out this short tutorial where we covered its uses.

In this tutorial, however, we’ll focus on its filter() implementation.

4.1. Dependencies

First, we’ll need the following dependencies in our pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.2</version>
</dependency>

4.2. Filtering a Collection with CollectionUtils

We are now ready to use the CollectionUtils‘ methods:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> apachePredicate = item -> item % 2 == 0;

    CollectionUtils.filter(baseCollection, apachePredicate);
    return baseCollection;
}

We have to take into account that this method modifies the baseCollection by removing every item that doesn’t match the condition.

This means that the base Collection has to be mutable, otherwise it will throw an exception.
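If we need to keep the original collection intact, a simple workaround is to filter a copy instead (a sketch, assuming we can afford the extra allocation):

Collection<Integer> copy = new ArrayList<>(baseCollection);
CollectionUtils.filter(copy, apachePredicate);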

5. Using Guava’s Collections2

As before, we can read our previous post ‘Filtering and Transforming Collections in Guava’ for further information on this subject.

5.1. Dependencies

Let’s start by adding this dependency in our pom.xml file:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>25.1-jre</version>
</dependency>

5.2. Filtering a Collection with Collections2

As we can see, this approach is fairly similar to the one followed in the last section:

public Collection<Integer> findEvenNumbers(Collection<Integer> baseCollection) {
    Predicate<Integer> guavaPredicate = item -> item % 2 == 0;
        
    return Collections2.filter(baseCollection, guavaPredicate);
}

Again, here we define a Guava-specific Predicate object.

In this case, Collections2.filter doesn’t modify the baseCollection; it returns a filtered view instead, so we can even use an immutable collection as input.

6. Conclusion

In summary, we’ve seen that there are many different ways of filtering collections in Java.

Even though Streams are usually the preferred approach, it’s good to know and keep in mind the functionality offered by other libraries.

This is especially true if we need to support older Java versions. If that’s the case, however, we need to keep in mind that the recent Java features used throughout this tutorial, such as lambdas, would have to be replaced with anonymous classes.

As usual, we can find all the examples shown in this tutorial in our GitHub repo.

Display RSS Feed with Spring MVC


1. Introduction

This quick tutorial will show how to build a simple RSS feed using Spring MVC and the AbstractRssFeedView class.

Afterward, we’ll also implement a simple REST API – to expose our feed over the wire.

2. RSS Feed

Before going into the implementation details, let’s quickly review what RSS is and how it works.

RSS is a type of web feed that allows a user to easily keep track of updates from a website. Furthermore, RSS feeds are based on an XML file that summarizes the content of a site. A news aggregator can then subscribe to one or more feeds and display the updates by regularly checking whether the XML has changed.

3. Dependencies

First of all, since Spring’s RSS support is based on the ROME framework, we’ll need to add it as a dependency to our pom before we can actually use it:

<dependency>
    <groupId>com.rometools</groupId>
    <artifactId>rome</artifactId>
    <version>1.10.0</version>
</dependency>

For a guide to ROME, have a look at our previous article.

4. Feed Implementation

Next up, we’re going to build the actual feed. In order to do that, we’ll extend the AbstractRssFeedView class and implement two of its methods.

The first one will receive a Channel object as input and will populate it with the feed’s metadata.

The other will return a list of items which represents the feed’s content:

@Component
public class RssFeedView extends AbstractRssFeedView {

    @Override
    protected void buildFeedMetadata(Map<String, Object> model, 
      Channel feed, HttpServletRequest request) {
        feed.setTitle("Baeldung RSS Feed");
        feed.setDescription("Learn how to program in Java");
        feed.setLink("http://www.baeldung.com");
    }

    @Override
    protected List<Item> buildFeedItems(Map<String, Object> model, 
      HttpServletRequest request, HttpServletResponse response) {
        Item entryOne = new Item();
        entryOne.setTitle("JUnit 5 @Test Annotation");
        entryOne.setAuthor("donatohan.rimenti@gmail.com");
        entryOne.setLink("http://www.baeldung.com/junit-5-test-annotation");
        entryOne.setPubDate(Date.from(Instant.parse("2017-12-19T00:00:00Z")));
        return Arrays.asList(entryOne);
    }
}

5. Exposing the Feed

Finally, we’re going to build a simple REST service to make our feed available on the web. The service will return the view object that we just created:

@RestController
public class RssFeedController {

    @Autowired
    private RssFeedView view;
    
    @GetMapping("/rss")
    public View getFeed() {
        return view;
    }
}

Also, since we’re using Spring Boot to start up our application, we’ll implement a simple launcher class:

@SpringBootApplication
public class RssFeedApplication {
    public static void main(final String[] args) {
        SpringApplication.run(RssFeedApplication.class, args);
    }
}

After running our application, when performing a request to our service we’ll see the following RSS Feed:

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
    <channel>
        <title>Baeldung RSS Feed</title>
        <link>http://www.baeldung.com</link>
        <description>Learn how to program in Java</description>
        <item>
            <title>JUnit 5 @Test Annotation</title>
            <link>http://www.baeldung.com/junit-5-test-annotation</link>
            <pubDate>Tue, 19 Dec 2017 00:00:00 GMT</pubDate>
            <author>donatohan.rimenti@gmail.com</author>
        </item>
    </channel>
</rss>

6. Conclusion

This article went through how to build a simple RSS feed with Spring MVC and ROME, and how to make it available to consumers using a web service.

In our example, we used Spring Boot to start up our application. For more details on this topic, continue reading this introductory article on Spring Boot.

As always, all the code used is available over on GitHub.


Parsing YAML with SnakeYAML


1. Overview

In this tutorial, we’ll learn how to use the SnakeYAML library to serialize Java objects to YAML documents and vice versa.

2. Project Setup

In order to use SnakeYAML in our project, we’ll add the following Maven dependency (the latest version can be found here):

<dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>1.21</version>            
</dependency>

3. Entry Point

The Yaml class is the entry point for the API:

Yaml yaml = new Yaml();

Since the implementation is not thread-safe, different threads must have their own Yaml instance.
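One simple way to honor this constraint in multithreaded code is to give each thread its own instance via a ThreadLocal – just one possible approach:

private static final ThreadLocal<Yaml> YAML = ThreadLocal.withInitial(Yaml::new);

// each thread transparently gets its own Yaml instance
Map<String, Object> parsed = YAML.get().load(inputStream);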

4. Loading a YAML Document

The library provides support for loading a document from a String or an InputStream. The majority of the code samples here will be based on parsing an InputStream.

Let’s start by defining a simple YAML document and naming the file customer.yaml:

firstName: "John"
lastName: "Doe"
age: 20

4.1. Basic Usage

Now we’ll parse the above YAML document with the Yaml class:

Yaml yaml = new Yaml();
InputStream inputStream = this.getClass()
  .getClassLoader()
  .getResourceAsStream("customer.yaml");
Map<String, Object> obj = yaml.load(inputStream);
System.out.println(obj);

The above code generates the following output:

{firstName=John, lastName=Doe, age=20}

By default, the load() method returns a Map instance. Querying the Map object each time would require us to know the property key names in advance, and it’s also not easy to traverse over nested properties.
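For example, reading values back out of the generic Map means looking them up by key and casting:

String firstName = (String) obj.get("firstName");
int age = (int) obj.get("age");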

4.2. Custom Type

The library also provides a way to load the document as a custom class. This option would allow easy traversal of data in memory.

Let’s define a Customer class and try to load the document again:

public class Customer {

    private String firstName;
    private String lastName;
    private int age;

    // getters and setters
}

Assuming the YAML document is to be deserialized as a known type, we can specify an explicit global tag in the document.

Let’s update the document and store it in a new file customer_with_type.yaml:

!!com.baeldung.snakeyaml.Customer
firstName: "John"
lastName: "Doe"
age: 20

Note the first line in the document, which holds the info about the class to be used when loading it.

Now we’ll update the code used above, and pass the new file name as input:

Yaml yaml = new Yaml();
InputStream inputStream = this.getClass()
 .getClassLoader()
 .getResourceAsStream("yaml/customer_with_type.yaml");
Customer customer = yaml.load(inputStream);

The load() method now returns an instance of the Customer type. The drawback to this approach is that the type has to be exported as a library in order to be used where needed.

Alternatively, we could use an explicit local tag, for which we aren’t required to export libraries.

Another way of loading a custom type is by using the Constructor class. This way, we can specify the root type for a YAML document to be parsed. Let’s create a Constructor instance with Customer as the root type and pass it to the Yaml instance.

Now on loading the customer.yaml, we’ll get the Customer object:

Yaml yaml = new Yaml(new Constructor(Customer.class));

4.3. Implicit Types

In case there’s no type defined for a given property, the library automatically converts the value to an implicit type.

For example:

1.0 -> Float
42 -> Integer
2009-03-30 -> Date

Let’s test this implicit type conversion using a test case:

@Test
public void whenLoadYAML_thenLoadCorrectImplicitTypes() {
   Yaml yaml = new Yaml();
   Map<Object, Object> document = yaml.load("3.0: 2018-07-22");
 
   assertNotNull(document);
   assertEquals(1, document.size());
   assertTrue(document.containsKey(3.0d));   
}

4.4. Nested Objects and Collections

Given a top-level type, the library automatically detects the types of nested objects, unless they’re an interface or an abstract class, and deserializes the document into the relevant nested type.

Let’s add Contact and Address details to the customer.yaml, and save the new file as customer_with_contact_details_and_address.yaml. 

Now we’ll parse the new YAML document:

firstName: "John"
lastName: "Doe"
age: 31
contactDetails:
   - type: "mobile"
     number: 123456789
   - type: "landline"
     number: 456786868
homeAddress:
   line: "Xyz, DEF Street"
   city: "City Y"
   state: "State Y"
   zip: 345657

The Customer class should also reflect these changes. Here’s the updated class:

public class Customer {
    private String firstName;
    private String lastName;
    private int age;
    private List<Contact> contactDetails;
    private Address homeAddress;    
    // getters and setters
}

Let’s see what the Contact and Address classes look like:

public class Contact {
    private String type;
    private int number;
    // getters and setters
}
public class Address {
    private String line;
    private String city;
    private String state;
    private Integer zip;
    // getters and setters
}

Now we’ll test the Yaml#load() with the given test case:

@Test
public void 
  whenLoadYAMLDocumentWithTopLevelClass_thenLoadCorrectJavaObjectWithNestedObjects() {
 
    Yaml yaml = new Yaml(new Constructor(Customer.class));
    InputStream inputStream = this.getClass()
      .getClassLoader()
      .getResourceAsStream("yaml/customer_with_contact_details_and_address.yaml");
    Customer customer = yaml.load(inputStream);
 
    assertNotNull(customer);
    assertEquals("John", customer.getFirstName());
    assertEquals("Doe", customer.getLastName());
    assertEquals(31, customer.getAge());
    assertNotNull(customer.getContactDetails());
    assertEquals(2, customer.getContactDetails().size());
    
    assertEquals("mobile", customer.getContactDetails()
      .get(0)
      .getType());
    assertEquals(123456789, customer.getContactDetails()
      .get(0)
      .getNumber());
    assertEquals("landline", customer.getContactDetails()
      .get(1)
      .getType());
    assertEquals(456786868, customer.getContactDetails()
      .get(1)
      .getNumber());
    assertNotNull(customer.getHomeAddress());
    assertEquals("Xyz, DEF Street", customer.getHomeAddress()
      .getLine());
}

4.5. Type-Safe Collections

When one or more properties of a given Java class are type-safe (generic) collections, then it’s important to specify the TypeDescription so that the correct parameterized type is identified.

Let’s take one Customer having more than one Contact, and try to load it:

firstName: "John"
lastName: "Doe"
age: 31
contactDetails:
   - { type: "mobile", number: 123456789}
   - { type: "landline", number: 123456789}

In order to load this document, we can specify the TypeDescription for the given property on the top level class:

Constructor constructor = new Constructor(Customer.class);
TypeDescription customTypeDescription = new TypeDescription(Customer.class);
customTypeDescription.addPropertyParameters("contactDetails", Contact.class);
constructor.addTypeDescription(customTypeDescription);
Yaml yaml = new Yaml(constructor);

4.6. Loading Multiple Documents

There could be cases where, in a single File there are several YAML documents, and we want to parse all of them. The Yaml class provides a loadAll() method to do such type of parsing.

By default, the method returns an instance of Iterable<Object> where each object is of type Map<String, Object>. If a custom type is desired, then we can use the Constructor instance as discussed above.

Consider the following documents in a single file:

---
firstName: "John"
lastName: "Doe"
age: 20
---
firstName: "Jack"
lastName: "Jones"
age: 25

We can parse the above using the loadAll() method as shown in the below code sample:

@Test
public void whenLoadMultipleYAMLDocuments_thenLoadCorrectJavaObjects() {
    Yaml yaml = new Yaml(new Constructor(Customer.class));
    InputStream inputStream = this.getClass()
      .getClassLoader()
      .getResourceAsStream("yaml/customers.yaml");

    int count = 0;
    for (Object object : yaml.loadAll(inputStream)) {
        count++;
        assertTrue(object instanceof Customer);
    }
    assertEquals(2,count);
}

5. Dumping YAML Documents

The library also provides a method to dump a given Java object into a YAML document. The output could be a String or a specified file/stream.

5.1. Basic Usage

We’ll start with a simple example of dumping an instance of Map<String, Object> to a YAML document (String):

@Test
public void whenDumpMap_thenGenerateCorrectYAML() {
    Map<String, Object> data = new LinkedHashMap<String, Object>();
    data.put("name", "Silenthand Olleander");
    data.put("race", "Human");
    data.put("traits", new String[] { "ONE_HAND", "ONE_EYE" });
    Yaml yaml = new Yaml();
    StringWriter writer = new StringWriter();
    yaml.dump(data, writer);
    String expectedYaml = "name: Silenthand Olleander\nrace: Human\ntraits: [ONE_HAND, ONE_EYE]\n";

    assertEquals(expectedYaml, writer.toString());
}

The above code produces the following output (note that using an instance of LinkedHashMap preserves the order of the output data):

name: Silenthand Olleander
race: Human
traits: [ONE_HAND, ONE_EYE]

5.2. Custom Java Objects

We can also choose to dump custom Java types into an output stream. This will, however, add the global explicit tag to the output document:

@Test
public void whenDumpACustomType_thenGenerateCorrectYAML() {
    Customer customer = new Customer();
    customer.setAge(45);
    customer.setFirstName("Greg");
    customer.setLastName("McDowell");
    Yaml yaml = new Yaml();
    StringWriter writer = new StringWriter();
    yaml.dump(customer, writer);        
    String expectedYaml = "!!com.baeldung.snakeyaml.Customer {age: 45, contactDetails: null, firstName: Greg,\n  homeAddress: null, lastName: McDowell}\n";

    assertEquals(expectedYaml, writer.toString());
}

With the above approach, we’re still dumping the tag information into the YAML document.

This means we have to export our class as a library for any consumer who deserializes it. In order to avoid the tag name in the output file, we can use the dumpAs() method provided by the library.

So in the above code, we could tweak the following to remove the tag:

yaml.dumpAs(customer, Tag.MAP, null);
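With that tweak, the class tag disappears and the output should look something like this (a sketch based on the dump above; the exact layout depends on the dumper options):

{age: 45, contactDetails: null, firstName: Greg, homeAddress: null, lastName: McDowell}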

6. Conclusion

This article illustrated usage of the SnakeYAML library to serialize Java objects to YAML and vice versa.

All of the examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.

A Guide to JavaFaker


1. Overview

JavaFaker is a library that can be used to generate a wide array of real-looking data from addresses to popular culture references.

In this tutorial, we’ll be looking at how to use JavaFaker’s classes to generate fake data. We’ll start by introducing the Faker class and the FakeValueService, before moving on to introducing locales to make the data more specific to a single place.

Finally, we’ll discuss how unique the data is. To test JavaFaker’s classes, we’ll make use of regular expressions; you can read more about them here.

2. Dependencies

To get started with JavaFaker, we need a single dependency. First, here’s the dependency for Maven-based projects:

<dependency>
    <groupId>com.github.javafaker</groupId>
    <artifactId>javafaker</artifactId>
    <version>0.15</version>
</dependency>

For Gradle users, you can add the following to your build.gradle file:

compile group: 'com.github.javafaker', name: 'javafaker', version: '0.15'

3. FakeValuesService

The FakeValuesService class provides methods for generating random sequences as well as resolving .yml files associated with the locale.

In this section, we’ll cover some of the useful methods that the FakeValuesService has to offer.

3.1. Letterify, Numerify, and Bothify

Three useful methods are Letterify, Numerify, and Bothify. Letterify helps to generate random sequences of alphabetic characters.

Next, Numerify simply generates numeric sequences.

Finally, Bothify is a combination of the two and can create random alphanumeric sequences – useful for mocking things like ID strings.

FakeValuesService requires a valid Locale, as well as a RandomService:

@Test
public void whenBothifyCalled_checkPatternMatches() throws Exception {

    FakeValuesService fakeValuesService = new FakeValuesService(
      new Locale("en-GB"), new RandomService());

    String email = fakeValuesService.bothify("????##@gmail.com");
    Matcher emailMatcher = Pattern.compile("\\w{4}\\d{2}@gmail.com").matcher(email);
 
    assertTrue(emailMatcher.find());
}

In this unit test, we create a new FakeValuesService with a locale of en-GB and use the bothify method to generate a unique fake Gmail address.

It works by replacing ‘?’ with random letters and ‘#’ with random numbers. We can then check that the output is correct with a simple Matcher check.

3.2. Regexify

Similarly, regexify generates a random sequence based on a chosen regex pattern.

In this snippet, we’ll use the FakeValuesService to create a random sequence following a specified regex:

@Test
public void givenValidService_whenRegexifyCalled_checkPattern() throws Exception {

    FakeValuesService fakeValuesService = new FakeValuesService(
      new Locale("en-GB"), new RandomService());

    String alphaNumericString = fakeValuesService.regexify("[a-z1-9]{10}");
    Matcher alphaNumericMatcher = Pattern.compile("[a-z1-9]{10}").matcher(alphaNumericString);
 
    assertTrue(alphaNumericMatcher.find());
}

Our code creates a lower-case alphanumeric string of length 10. Our pattern checks the generated string against the regex.

4. JavaFaker’s Faker Class

The Faker class allows us to use JavaFaker’s fake data classes.

In this section, we’ll see how to instantiate a Faker object and use it to call some fake data:

Faker faker = new Faker();

String streetName = faker.address().streetName();
String number = faker.address().buildingNumber();
String city = faker.address().city();
String country = faker.address().country();

System.out.println(String.format("%s\n%s\n%s\n%s",
  number,
  streetName,
  city,
  country));

Above, we use the Faker Address object to generate a random address.

When we run this code, we’ll get an example of the output:

3188
Dayna Mountains
New Granvilleborough
Tonga

We can see that the data has no single geographical location since we didn’t specify a locale. To change this, we’ll learn to make the data more relevant to our location in the next section.

We could also use this faker object in a similar way to create data relating to many more objects such as:

  • Business
  • Beer
  • Food
  • PhoneNumber

You can find the full list here.
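For instance, here’s a quick sketch calling a few of these providers (the generated values are random, so output differs from run to run):

Faker faker = new Faker();

System.out.println(faker.business().creditCardType());
System.out.println(faker.beer().name());
System.out.println(faker.food().ingredient());
System.out.println(faker.phoneNumber().cellPhone());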

5. Introducing Locales

Here, we’ll introduce how to use locales to make the generated data more specific to a single location. We’ll introduce a Faker with a US locale, and a UK locale:

@Test
public void givenJavaFakersWithDifferentLocales_thenCheckZipCodesMatchRegex() {

    Faker ukFaker = new Faker(new Locale("en-GB"));
    Faker usFaker = new Faker(new Locale("en-US"));

    System.out.println(String.format("American zipcode: %s", usFaker.address().zipCode()));
    System.out.println(String.format("British postcode: %s", ukFaker.address().zipCode()));

    Pattern ukPattern = Pattern.compile(
      "([Gg][Ii][Rr] 0[Aa]{2})|((([A-Za-z][0-9]{1,2})|"
      + "(([A-Za-z][A-Ha-hJ-Yj-y][0-9]{1,2})|(([A-Za-z][0-9][A-Za-z])|([A-Za-z][A-Ha-hJ-Yj-y]" 
      + "[0-9]?[A-Za-z]))))\\s?[0-9][A-Za-z]{2})");
    Matcher ukMatcher = ukPattern.matcher(ukFaker.address().zipCode());

    assertTrue(ukMatcher.find());

    Matcher usMatcher = Pattern.compile("^\\d{5}(?:[-\\s]\\d{4})?$")
      .matcher(usFaker.address().zipCode());

    assertTrue(usMatcher.find());
}

Above, we see that the two Fakers with locales produce zip codes matching their respective countries’ formats.

If the locale passed to the Faker does not exist, the Faker throws a LocaleDoesNotExistException.

We’ll test this with the following unit test:

@Test(expected = LocaleDoesNotExistException.class)
public void givenWrongLocale_whenFakerInitialised_testExceptionThrown() {
    Faker wrongLocaleFaker = new Faker(new Locale("en-seaWorld"));
}

6. Uniqueness

While JavaFaker seemingly generates data at random, uniqueness cannot be guaranteed.

JavaFaker supports seeding of its pseudo-random number generator (PRNG) in the form of a RandomService to provide deterministic output for repeated method calls.

Simply put, pseudorandomness is a process that appears random but is not.

We can see how this works by creating two Fakers with the same seed:

@Test
public void givenJavaFakersWithSameSeed_whenNameCalled_CheckSameName() {

    Faker faker1 = new Faker(new Random(24));
    Faker faker2 = new Faker(new Random(24));

    assertEquals(faker1.name().firstName(), faker2.name().firstName());
}

The above code returns the same name from two different fakers.

7. Conclusion

In this tutorial, we have explored the JavaFaker library to generate real-looking fake data. We’ve also covered two useful classes: the Faker class and the FakeValuesService class.

We explored how we can use locales to generate location-specific data.

Finally, we discussed how the data generated only seems random and the uniqueness of the data is not guaranteed.

As usual, code snippets can be found over on GitHub.

A Simple Guide to Connection Pooling in Java


1. Overview

Connection pooling is a well-known data access pattern, whose main purpose is to reduce the overhead involved in performing database connections and read/write database operations.

In a nutshell, a connection pool is, at the most basic level, a database connection cache implementation, which can be configured to suit specific requirements.

In this tutorial, we’ll make a quick roundup of a few popular connection pooling frameworks, and we’ll learn how to implement our own connection pool from scratch.

2. Why Connection Pooling?

The question is rhetorical, of course.

If we analyze the sequence of steps involved in a typical database connection life cycle, we’ll understand why:

  1. Opening a connection to the database using the database driver
  2. Opening a TCP socket for reading/writing data
  3. Reading / writing data over the socket
  4. Closing the connection
  5. Closing the socket

It becomes evident that database connections are fairly expensive operations, and as such, should be reduced to a minimum in every possible use case (in edge cases, just avoided).

Here’s where connection pooling implementations come into play.

By simply implementing a database connection container that allows us to reuse a number of existing connections, we can effectively save the cost of performing a huge number of expensive database trips, hence boosting the overall performance of our database-driven applications.

3. JDBC Connection Pooling Frameworks

From a pragmatic perspective, implementing a connection pool from the ground up is just pointless, considering the number of “enterprise-ready” connection pooling frameworks available out there.

From a didactic one, which is the goal of this article, it’s not.

Even so, before we learn how to implement a basic connection pool, let’s first showcase a few popular connection pooling frameworks.

3.1. Apache Commons DBCP

Let’s start this quick roundup with Apache Commons DBCP Component, a full-featured connection pooling JDBC framework:

public class DBCPDataSource {
    
    private static BasicDataSource ds = new BasicDataSource();
    
    static {
        ds.setUrl("jdbc:h2:mem:test");
        ds.setUsername("user");
        ds.setPassword("password");
        ds.setMinIdle(5);
        ds.setMaxIdle(10);
        ds.setMaxOpenPreparedStatements(100);
    }
    
    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
    
    private DBCPDataSource(){ }
}

In this case, we’ve used a wrapper class with a static block to easily configure DBCP’s properties.

Here’s how to get a pooled connection with the DBCPDataSource class:

Connection con = DBCPDataSource.getConnection();

3.2. HikariCP

Moving on, let’s look at HikariCP, a lightning fast JDBC connection pooling framework created by Brett Wooldridge (for the full details on how to configure and get the most out of HikariCP, please check this article):

public class HikariCPDataSource {
    
    private static HikariConfig config = new HikariConfig();
    private static HikariDataSource ds;
    
    static {
        config.setJdbcUrl("jdbc:h2:mem:test");
        config.setUsername("user");
        config.setPassword("password");
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        ds = new HikariDataSource(config);
    }
    
    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
    
    private HikariCPDataSource(){}
}

Similarly, here’s how to get a pooled connection with the HikariCPDataSource class:

Connection con = HikariCPDataSource.getConnection();

3.3. C3P0

Last in this review is C3P0, a powerful JDBC4 connection and statement pooling framework developed by Steve Waldman:

public class C3poDataSource {

    private static ComboPooledDataSource cpds = new ComboPooledDataSource();

    static {
        try {
            cpds.setDriverClass("org.h2.Driver");
            cpds.setJdbcUrl("jdbc:h2:mem:test");
            cpds.setUser("user");
            cpds.setPassword("password");
        } catch (PropertyVetoException e) {
            // handle the exception
        }
    }
    
    public static Connection getConnection() throws SQLException {
        return cpds.getConnection();
    }
    
    private C3poDataSource(){}
}

As expected, getting a pooled connection with the C3poDataSource class is similar to the previous examples:

Connection con = C3poDataSource.getConnection();

4. A Simple Implementation

To better understand the underlying logic of connection pooling, let’s create a simple implementation.

Let’s start out with a loosely-coupled design, based on just one single interface:

public interface ConnectionPool {
    Connection getConnection();
    boolean releaseConnection(Connection connection);
    String getUrl();
    String getUser();
    String getPassword();
}

The ConnectionPool interface defines the public API of a basic connection pool.

Now, let’s create an implementation, which provides some basic functionality, including getting and releasing a pooled connection:

public class BasicConnectionPool 
  implements ConnectionPool {

    private String url;
    private String user;
    private String password;
    private List<Connection> connectionPool;
    private List<Connection> usedConnections = new ArrayList<>();
    private static final int INITIAL_POOL_SIZE = 10;
    
    public static BasicConnectionPool create(
      String url, String user, 
      String password) throws SQLException {
 
        List<Connection> pool = new ArrayList<>(INITIAL_POOL_SIZE);
        for (int i = 0; i < INITIAL_POOL_SIZE; i++) {
            pool.add(createConnection(url, user, password));
        }
        return new BasicConnectionPool(url, user, password, pool);
    }
    
    // standard constructors
    
    @Override
    public Connection getConnection() {
        Connection connection = connectionPool
          .remove(connectionPool.size() - 1);
        usedConnections.add(connection);
        return connection;
    }
    
    @Override
    public boolean releaseConnection(Connection connection) {
        connectionPool.add(connection);
        return usedConnections.remove(connection);
    }
    
    private static Connection createConnection(
      String url, String user, String password) 
      throws SQLException {
        return DriverManager.getConnection(url, user, password);
    }
    
    public int getSize() {
        return connectionPool.size() + usedConnections.size();
    }

    // standard getters
}

While pretty naive, the BasicConnectionPool class provides the minimal functionality that we’d expect from a typical connection pooling implementation.

In a nutshell, the class initializes a connection pool based on an ArrayList that stores 10 connections, which can be easily reused.

It’s possible to create JDBC connections with the DriverManager class and with DataSource implementations.

As it’s much better to keep the creation of connections database agnostic, we’ve used the former, within the create() static factory method.

In this case, we’ve placed the method within the BasicConnectionPool, because this is the only implementation of the interface.

In a more complex design, with multiple ConnectionPool implementations, it’d be preferable to place it in the interface, thereby achieving a more flexible design and a greater level of cohesion.

The most relevant point to stress here is that once the pool is created, connections are fetched from the pool, so there’s no need to create new ones.

Furthermore, when a connection is released, it’s actually returned back to the pool, so other clients can reuse it.

There’s no further interaction with the underlying database, such as an explicit call to the Connection’s close() method.
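
For instance, a typical interaction with the pool, assuming it was created as shown above, boils down to a simple get/release cycle:

Connection connection = connectionPool.getConnection();
// ... run queries with the connection
connectionPool.releaseConnection(connection); // returned to the pool, not closed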

5. Using the BasicConnectionPool Class

As expected, using our BasicConnectionPool class is straightforward.

Let’s create a simple unit test and get a pooled in-memory H2 connection:

@Test
public void whenCalledgetConnection_thenCorrect() throws SQLException {
    ConnectionPool connectionPool = BasicConnectionPool
      .create("jdbc:h2:mem:test", "user", "password");

    assertTrue(connectionPool.getConnection().isValid(1));
}

6. Further Improvements and Refactoring

Of course, there’s plenty of room to tweak/extend the current functionality of our connection pooling implementation.

For instance, we could refactor the getConnection() method, and add support for maximum pool size. If all available connections are taken, and the current pool size is less than the configured maximum, the method will create a new connection:

@Override
public Connection getConnection() throws SQLException {
    if (connectionPool.isEmpty()) {
        if (usedConnections.size() < MAX_POOL_SIZE) {
            connectionPool.add(createConnection(url, user, password));
        } else {
            throw new RuntimeException(
              "Maximum pool size reached, no available connections!");
        }
    }

    Connection connection = connectionPool
      .remove(connectionPool.size() - 1);
    usedConnections.add(connection);
    return connection;
}

Note that the method now throws SQLException, meaning we’ll have to update the interface signature as well.
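
For reference, the updated ConnectionPool interface would then simply declare the exception:

public interface ConnectionPool {
    Connection getConnection() throws SQLException;
    boolean releaseConnection(Connection connection);
    String getUrl();
    String getUser();
    String getPassword();
}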

Or, we could add a method to gracefully shut down our connection pool instance:

public void shutdown() throws SQLException {
    usedConnections.forEach(this::releaseConnection);
    for (Connection c : connectionPool) {
        c.close();
    }
    connectionPool.clear();
}

In production-ready implementations, a connection pool should provide a bunch of extra features, such as the ability to track the connections currently in use, support for prepared statement pooling, and so forth.

To keep things simple, we’ll omit these additional features and keep the implementation non-thread-safe for the sake of clarity.

7. Conclusion

In this article, we took an in-depth look at what connection pooling is and learned how to roll our own connection pooling implementation.

Of course, we don’t have to start from scratch every time that we want to add a full-featured connection pooling layer to our applications.

That’s why we made first a simple roundup showing some of the most popular connection pool frameworks, so we can have a clear idea on how to work with them, and pick up the one that best suits our requirements.

As usual, all the code samples shown in this article are available over on GitHub.

Spring MVC Streaming and SSE Request Processing


1. Introduction

This simple tutorial demonstrates the use of several asynchronous and streaming objects in Spring MVC 5.x.x.

Specifically, we’ll review three key classes:

  • ResponseBodyEmitter
  • SseEmitter
  • StreamingResponseBody

Also, we’ll discuss how to interact with them using a JavaScript client.

2. ResponseBodyEmitter

ResponseBodyEmitter handles async responses.

Also, it represents a parent for a number of subclasses – one of which we’ll take a closer look at below.

2.1. Server Side

It’s best to use a ResponseBodyEmitter with its own dedicated asynchronous thread, wrapped in a ResponseEntity (into which we can inject the emitter directly):

@Controller
public class ResponseBodyEmitterController {
 
    private ExecutorService executor 
      = Executors.newCachedThreadPool();

    @GetMapping("/rbe")
    public ResponseEntity<ResponseBodyEmitter> handleRbe() {
        ResponseBodyEmitter emitter = new ResponseBodyEmitter();
        executor.execute(() -> {
            try {
                emitter.send(
                  "/rbe" + " @ " + new Date(), MediaType.TEXT_PLAIN);
                emitter.complete();
            } catch (Exception ex) {
                emitter.completeWithError(ex);
            }
        });
        return new ResponseEntity<>(emitter, HttpStatus.OK);
    }
}

So, in the example above, we sidestep the need for CompletableFutures, more complicated asynchronous promises, or the @Async annotation.

Instead, we simply declare our asynchronous entity and hand it off to a thread provided by the ExecutorService.

2.2. Client Side

For client-side use, we can use a simple XHR method and call our API endpoints just like in a usual AJAX operation:

var xhr = function(url) {
    return new Promise(function(resolve, reject) {
        var xmhr = new XMLHttpRequest();
        //...
        xmhr.open("GET", url, true);
        xmhr.send();
       //...
    });
};

xhr('http://localhost:8080/javamvcasync/rbe')
  .then(function(success){ //... });

3. SseEmitter

SseEmitter is actually a subclass of ResponseBodyEmitter and provides additional Server-Sent Event (SSE) support out-of-the-box.

3.1. Server Side

So, let’s take a quick look at an example controller leveraging this powerful entity:

@Controller
public class SseEmitterController {
    private ExecutorService nonBlockingService = Executors
      .newCachedThreadPool();
    
    @GetMapping("/sse")
    public SseEmitter handleSse() {
         SseEmitter emitter = new SseEmitter();
         nonBlockingService.execute(() -> {
             try {
                 emitter.send("/sse" + " @ " + new Date());
                 // we could send more events
                 emitter.complete();
             } catch (Exception ex) {
                 emitter.completeWithError(ex);
             }
         });
         return emitter;
    }   
}

Pretty standard fare, but we’ll notice a few differences between this and our usual REST controller:

  • First, we return an SseEmitter
  • Also, we wrap the core response information in its own Thread
  • Finally, we send response information using emitter.send()

3.2. Client Side

Our client works a little bit differently this time since we can leverage the continuously connected Server-Sent Event Library:

var sse = new EventSource('http://localhost:8080/javamvcasync/sse');
sse.onmessage = function (evt) {
    var el = document.getElementById('sse');
    el.appendChild(document.createTextNode(evt.data));
    el.appendChild(document.createElement('br'));
};

4. StreamingResponseBody

Lastly, we can use StreamingResponseBody to write directly to an OutputStream before passing that written information back to the client using a ResponseEntity.

4.1. Server Side

@Controller
public class StreamingResponseBodyController {
 
    @GetMapping("/srb")
    public ResponseEntity<StreamingResponseBody> handleSrb() {
        StreamingResponseBody stream = out -> {
            String msg = "/srb" + " @ " + new Date();
            out.write(msg.getBytes());
        };
        return new ResponseEntity<>(stream, HttpStatus.OK);
    }
}

4.2. Client Side

Just like before, we’ll use a regular XHR method to access the controller above:

var xhr = function(url) {
    return new Promise(function(resolve, reject) {
        var xmhr = new XMLHttpRequest();
        //...
        xmhr.open("GET", url, true);
        xmhr.send();
        //...
    });
};

xhr('http://localhost:8080/javamvcasync/srb')
  .then(function(success){ //... });

Next, let’s take a look at some successful uses of these examples.

5. Bringing it all Together

After we’ve successfully compiled our server and run our client above (accessing the supplied index.jsp), we should see the emitted messages appear in the browser, along with the corresponding output in our terminal.

We can also call the endpoints directly and see the streaming responses appear in our browser.

6. Conclusion

While Future and CompletableFuture have proven to be robust additions to Java and Spring, we now have several resources at our disposal to more adequately handle asynchronous and streaming data for highly concurrent web applications.

Finally, check out the complete code examples over on GitHub.

Java Weekly, Issue 240


Here we go…

1. Spring and Java

>> WireMock Tutorial: Request Matching, Part Four [petrikainulainen.net]

A nice write-up that shows how to specify expectations for XML documents received by a web service.

>> Spring Boot 1.x EOL Aug 1st 2019 [spring.io]

That’s one good incentive to finally migrate!

>> 5 Reasons and 101 Bugfixes – Why You Should Use Hibernate 5.3 [thoughts-on-java.org]

If you’ve been wondering whether you should upgrade to Hibernate 5.3, look no further — there is much to be gained, as you’ll see in this article.

>> Improving Testability of Java Microservices with Container Orchestration and a Service Mesh [infoq.com]

A quick overview of the benefits that container orchestration brings to the microservices testing table. Very cool.

>> Configuring Graal Native AOT for reflection [blog.frankel.ch]

A brief look at the challenges faced when using the Ahead-of-Time bytecode compiler to create native images from source code that uses a lot of reflection.

>> How Contract Tests Improve the Quality of Your Distributed Systems [infoq.com]

A detailed piece on how consumer-driven contracts can help you to catch bugs early between the integration points in your systems. This could save you hours of end-to-end testing.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Deep feature consistent variational auto-encoder [krasserm.github.io]

A quick look at a machine-learning algorithm for image analysis and comparison, fueled by neural networks. Fascinating.

>> Use Logging Levels Consistently [reflectoring.io]

A pragmatic approach to deciding what kinds of information to log at which levels. A good read.

>> A beginner’s guide to database multitenancy [vladmihalcea.com]

The title says it all.

>> How to Read an RFC [mnot.net]

As it turns out, the languages used to specify RFCs can leave them open to misinterpretation, even if you know the context(s) around which they were created.

Also worth reading:

3. Musings

>> Strong Opinions [blog.code-cop.org]

It’s not easy to abandon your strongly held opinions. But, as Confucius said, “Real knowledge is to know the extent of one’s ignorance.”

>> Being Good at Your Job is Overrated [daedtech.com]

Why merely being good at what you do is not necessarily going to advance your career.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> What’s in a Name? [dilbert.com]

>> A-B Testing [dilbert.com]

>> When in Doubt, Hire a Consultant [dilbert.com]

5. Pick of the Week

>> Web Architecture 101 [engineering.videoblocks.com]

Logging Exceptions Using SLF4J


1. Overview

In this quick tutorial, we’ll show how to log exceptions in Java using the SLF4J API. We’ll use the slf4j-simple API as the logging implementation.

You can explore different logging techniques in one of our previous articles.

2. Maven Dependencies

First, we need to add the following dependencies to our pom.xml:

<dependency>                             
    <groupId>org.slf4j</groupId>         
    <artifactId>slf4j-api</artifactId>   
    <version>1.7.25</version>  
</dependency> 
                       
<dependency>                             
    <groupId>org.slf4j</groupId>         
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.25</version>  
</dependency>

The latest versions of these libraries can be found on Maven Central.

3. Examples

Usually, all exceptions are logged using the error() method available in the Logger class. There are quite a few variations of this method. We’re going to explore:

void error(String msg);
void error(String format, Object... arguments);
void error(String msg, Throwable t);

Let’s first initialize the Logger that we’re going to use:

Logger logger = LoggerFactory.getLogger(NameOfTheClass.class);

If we just have to show the error message, then we can simply add:

logger.error("An exception occurred!");

The output of the above code will be:

ERROR packageName.NameOfTheClass - An exception occurred!

This is simple enough. But to add more relevant information about the exception (including the stack trace), we can write:

logger.error("An exception occurred!", new Exception("Custom exception"));

The output will be:

ERROR packageName.NameOfTheClass - An exception occurred!
java.lang.Exception: Custom exception
  at packageName.NameOfTheClass.methodName(NameOfTheClass.java:lineNo)

In the presence of multiple parameters, if the last argument in a logging statement is an exception, SLF4J presumes the user wants it treated as an exception rather than as a simple parameter:

logger.error("{}, {}! An exception occurred!", 
  "Hello", 
  "World", 
  new Exception("Custom exception"));

In the above snippet, the String message will be formatted based on the passed object details. We’ve used curly braces as placeholders for String parameters passed to the method.

In this case, the output will be:

ERROR packageName.NameOfTheClass - Hello, World! An exception occurred!
java.lang.Exception: Custom exception 
  at packageName.NameOfTheClass.methodName(NameOfTheClass.java:lineNo)

4. Conclusion

In this quick tutorial, we found out how to log exceptions using the SLF4J API.

The code snippets are available over on GitHub.

Creating a Custom Log4j2 Appender


1. Introduction

In this tutorial, we’ll learn about creating a custom Log4j2 appender. If you’re looking for the introduction to Log4j2, please take a look at this article.

Log4j2 ships with a lot of built-in appenders which can be used for various purposes such as logging to a file, to a database, to a socket or to a NoSQL database.

However, there could be a need for a custom appender depending on the application demands.

Log4j2 is an upgraded version of Log4j, with significant improvements over it. Hence, we’ll be using the Log4j2 framework to demonstrate the creation of a custom appender.

2. Maven Setup

We will need the log4j-core dependency in our pom.xml to start with:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.0</version>
</dependency>

The latest version of log4j-core can be found here.

3. Custom Appender

There are two ways to implement our custom appender: implementing the Appender interface, or extending the AbstractAppender class. Extending AbstractAppender is the simpler approach, and it’s the one we’ll use.

For this example, we’re going to create a MapAppender. We’ll capture the log events and store them in a ConcurrentHashMap with the timestamp for the key.

Here’s how we create the MapAppender:

@Plugin(
  name = "MapAppender", 
  category = Core.CATEGORY_NAME, 
  elementType = Appender.ELEMENT_TYPE)
public class MapAppender extends AbstractAppender {

    private ConcurrentMap<String, LogEvent> eventMap = new ConcurrentHashMap<>();

    protected MapAppender(String name, Filter filter) {
        super(name, filter, null);
    }

    @PluginFactory
    public static MapAppender createAppender(
      @PluginAttribute("name") String name, 
      @PluginElement("Filter") Filter filter) {
        return new MapAppender(name, filter);
    }

    @Override
    public void append(LogEvent event) {
        eventMap.put(Instant.now().toString(), event);
    }
}

We’ve annotated the class with the @Plugin annotation which indicates that our appender is a plugin.

The name of the plugin signifies the name we would provide in the configuration to use this appender. The category specifies the category under which we place the plugin. The elementType is appender.

We also need a factory method that will create the appender. Our createAppender method serves this purpose and is annotated with the @PluginFactory annotation.

Here, we initialize our appender by calling the protected constructor, passing the layout as null, since we’re not going to provide a layout in the config file and expect the framework to resolve the default one.

Next, we’ve overridden the append method which has the actual logic of handling the LogEvent. In our case, the append method puts the LogEvent into our eventMap. 

4. Configuration

Now that we have our MapAppender in place, we need a log4j2.xml configuration file to use this appender for our logging.

Here’s how we define the configuration section in our log4j2.xml file:

<Configuration xmlns:xi="http://www.w3.org/2001/XInclude" packages="com.baeldung" status="WARN">

Note that the packages attribute should reference the package that contains your custom appender.

Next, in the Appenders section, we define the appender. Here’s how we add our custom appender to the list of appenders in the configuration:

<MapAppender name="MapAppender" />

The last part is to actually use the appender in our Loggers section. For our implementation, we use MapAppender as a root logger and define it in the root section.

Here’s how it’s done:

<Root level="DEBUG">
    <AppenderRef ref="MapAppender" />
</Root>
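
With this configuration in place, every logging call is routed to our appender. As a quick sketch (the class name is arbitrary), events end up in the eventMap keyed by their timestamps:

Logger logger = LogManager.getLogger(MapAppenderIntegrationTest.class);
logger.info("Test from the MapAppender");
// the LogEvent is now stored in the appender's eventMap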

5. Error Handling

To handle errors while logging an event, we can use the error method inherited from AbstractAppender.

For example, suppose we don’t want to log events with a level lower than WARN.

In that case, we can use the error method of AbstractAppender to log an error message and return early. Here’s how it’s done in our class:

@Override
public void append(LogEvent event) {
    if (event.getLevel().isLessSpecificThan(Level.WARN)) {
        error("Unable to log less than WARN level.");
        return;
    }
    eventMap.put(Instant.now().toString(), event);
}

Observe how our append method has changed. We now check whether the event’s level is less specific than WARN, and return early if it is.

6. Conclusion

In this article, we’ve seen how to implement a custom appender for Log4j2.

While there are many in-built ways of logging our data by using Log4j2’s provided appenders, we also have tools in this framework that enable us to create our own appender as per our application needs.

As usual, the example can be found over on GitHub.


MQTT Client in Java


1. Overview

In this tutorial, we’ll see how we can add MQTT messaging in a Java project using the libraries provided by the Eclipse Paho project.

2. MQTT Primer

MQTT (MQ Telemetry Transport) is a messaging protocol that was created to address the need for a simple and lightweight method to transfer data to/from low-powered devices, such as those used in industrial applications.

With the increased popularity of IoT (Internet of Things) devices, MQTT has seen increased use, leading to its standardization by OASIS and ISO.

The protocol supports a single messaging pattern, namely the Publish-Subscribe pattern: each message sent by a client contains an associated “topic” which is used by the broker to route it to subscribed clients. Topic names can be simple strings like “oiltemp” or path-like strings such as “motor/1/rpm“.

In order to receive messages, a client subscribes to one or more topics using their exact names or a string containing one of the supported wildcards (“#” for multi-level topics and “+” for single-level).

3. Project Setup

In order to include the Paho library in a Maven project, we have to add the following dependency:

<dependency>
  <groupId>org.eclipse.paho</groupId>
  <artifactId>org.eclipse.paho.client.mqttv3</artifactId>
  <version>1.2.0</version>
</dependency>

The latest version of the Eclipse Paho Java library module can be downloaded from Maven Central.

4. Client Setup

When using the Paho library, the first thing we need to do in order to send and/or receive messages from an MQTT broker is to obtain an implementation of the IMqttClient interface. This interface contains all the methods an application requires in order to establish a connection to the server and send and receive messages.

Paho comes out of the box with two implementations of this interface, an asynchronous one (MqttAsyncClient) and a synchronous one (MqttClient). In our case, we’ll focus on the synchronous version, which has simpler semantics.

The setup itself is a two-step process: we first create an instance of the MqttClient class and then we connect it to our server. The following subsections detail those steps.

4.1. Creating a New IMqttClient Instance

The following code snippet shows how to create a new IMqttClient synchronous instance:

String publisherId = UUID.randomUUID().toString();
IMqttClient publisher = new MqttClient("tcp://iot.eclipse.org:1883", publisherId);

In this case, we’re using the simplest constructor available, which takes the endpoint address of our MQTT broker and a client identifier, which uniquely identifies our client.

In our case, we used a random UUID, so a new client identifier will be generated on every run.

Paho also provides additional constructors that we can use in order to customize the persistence mechanism used to store unacknowledged messages and/or the ScheduledExecutorService used to run background tasks required by the protocol engine implementation.

The server endpoint we’re using is a public MQTT broker hosted by the Paho project, which allows anyone with an internet connection to test clients without the need for any authentication.

4.2. Connecting to the Server

Our newly created MqttClient instance is not yet connected to the server. We connect it by calling its connect() method, optionally passing an MqttConnectOptions instance that allows us to customize some aspects of the protocol.

In particular, we can use those options to pass additional information such as security credentials, session recovery mode, reconnection mode and so on.

The MqttConnectOptions class exposes those options as simple properties that we can set using normal setter methods. We only need to set the properties required for our scenario – the remaining ones will assume default values.

The code used to establish a connection to the server typically looks like this:

MqttConnectOptions options = new MqttConnectOptions();
options.setAutomaticReconnect(true);
options.setCleanSession(true);
options.setConnectionTimeout(10);
publisher.connect(options);

Here, we define our connection options so that:

  • The library will automatically try to reconnect to the server in the event of a network failure
  • It will discard unsent messages from a previous run
  • Connection timeout is set to 10 seconds

5. Sending Messages

Sending messages using an already connected MqttClient is very straightforward. We use one of the publish() method variants to send the payload, which is always a byte array, to a given topic, using one of the following quality-of-service options:

  • 0 – “at most once” semantics, also known as “fire-and-forget”. Use this option when message loss is acceptable, as it does not require any kind of acknowledgment or persistence
  • 1 – “at least once” semantics. Use this option when message loss is not acceptable and your subscribers can handle duplicates
  • 2 – “exactly once” semantics. Use this option when message loss is not acceptable and your subscribers cannot handle duplicates
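
As a minimal sketch, publishing boils down to building an MqttMessage and handing it to the already connected client (the topic name here is arbitrary):

MqttMessage message = new MqttMessage("hello".getBytes());
message.setQos(1); // "at least once" delivery
publisher.publish("some/test/topic", message);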

In our sample project, the EngineTemperatureSensor class plays the role of a mock sensor that produces a new temperature reading every time we invoke its call() method.

This class implements the Callable interface so we can easily use it with one of the ExecutorService implementations available in the java.util.concurrent package:

public class EngineTemperatureSensor implements Callable<Void> {

    // ... private members omitted
    
    public EngineTemperatureSensor(IMqttClient client) {
        this.client = client;
    }

    @Override
    public Void call() throws Exception {        
        if ( !client.isConnected()) {
            return null;
        }           
        MqttMessage msg = readEngineTemp();
        msg.setQos(0);
        msg.setRetained(true);
        client.publish(TOPIC,msg);        
        return null;        
    }

    private MqttMessage readEngineTemp() {             
        double temp =  80 + rnd.nextDouble() * 20.0;        
        byte[] payload = String.format("T:%04.2f",temp)
          .getBytes();        
        return new MqttMessage(payload);
    }
}

The MqttMessage encapsulates the payload itself, the requested Quality-of-Service and also the retained flag for the message. This flag indicates to the broker that it should retain this message until consumed by a subscriber.

We can use this feature to implement a “last known good” behavior, so when a new subscriber connects to the server, it will receive the retained message right away.

6. Receiving Messages

In order to receive messages from the MQTT broker, we need to use one of the subscribe() method variants, which allow us to specify:

  • One or more topic filters for messages we want to receive
  • The associated QoS
  • The callback handler to process received messages

In the following example, we show how to add a message listener to an existing IMqttClient instance to receive messages from a given topic. We use a CountDownLatch as a synchronization mechanism between our callback and the main execution thread, decrementing it every time a new message arrives.

In the sample code, we’ve used a different IMqttClient instance to receive messages. We did it just to make it clearer which client does what, but this is not a Paho limitation – if you want, you can use the same client for publishing and receiving messages:

CountDownLatch receivedSignal = new CountDownLatch(10);
subscriber.subscribe(EngineTemperatureSensor.TOPIC, (topic, msg) -> {
    byte[] payload = msg.getPayload();
    // ... payload handling omitted
    receivedSignal.countDown();
});    
receivedSignal.await(1, TimeUnit.MINUTES);

The subscribe() variant used above takes an IMqttMessageListener instance as its second argument.

In our case, we use a simple lambda function that processes the payload and decrements a counter. If not enough messages arrive in the specified time window (1 minute), the await() method will throw an exception.

When using Paho, we don’t need to explicitly acknowledge message receipt. If the callback returns normally, Paho assumes the message was consumed successfully and sends an acknowledgment to the server.

If the callback throws an Exception, the client will be shut down. Please note that this will result in the loss of any messages sent with QoS level 0.

Messages sent with QoS level 1 or 2 will be resent by the server once the client is reconnected and subscribes to the topic again.

7. Conclusion

In this article, we demonstrated how we can add support for the MQTT protocol in our Java applications using the library provided by the Eclipse Paho project.

This library handles all low-level protocol details, allowing us to focus on other aspects of our solution, while leaving good space to customize important aspects of its internal features, such as message persistence.

The code shown in this article is available over on GitHub.

Sample Application with Spring Boot and Vaadin


1. Overview

Vaadin is a server-side Java framework for creating web user interfaces.

In this tutorial, we’ll explore how to use a Vaadin based UI on a Spring Boot based backend. For an introduction to Vaadin refer to this tutorial.

2. Setup

Let’s start by adding Maven dependencies to a standard Spring Boot application:

<dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>vaadin-spring-boot-starter</artifactId>
</dependency>

Vaadin is also a dependency recognized by the Spring Initializr.

This tutorial uses a newer version of Vaadin than the default one brought in by the starter module. To use the newer version, simply define the Vaadin Bill of Materials (BOM) like this:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.vaadin</groupId>
            <artifactId>vaadin-bom</artifactId>
            <version>10.0.1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

3. Backend Service

We’ll use an Employee entity with firstName and lastName properties to perform CRUD operations on it:

@Entity
public class Employee {

    @Id
    @GeneratedValue
    private Long id;

    private String firstName;
    private String lastName;
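
    // standard constructors, getters and setters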
}

Here’s the simple, corresponding Spring Data repository – to manage the CRUD operations:

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
    List<Employee> findByLastNameStartsWithIgnoreCase(String lastName);
}

We declare the query method findByLastNameStartsWithIgnoreCase on the EmployeeRepository interface. It returns the list of Employees whose last name starts with the given value, ignoring case.

Let’s also pre-populate the DB with a few sample Employees:

@Bean
public CommandLineRunner loadData(EmployeeRepository repository) {
    return (args) -> {
        repository.save(new Employee("Bill", "Gates"));
        repository.save(new Employee("Mark", "Zuckerberg"));
        repository.save(new Employee("Sundar", "Pichai"));
        repository.save(new Employee("Jeff", "Bezos"));
    };
}

4. Vaadin UI

4.1. MainView Class

The MainView class is the entry point for Vaadin’s UI logic. The @Route annotation tells Spring Boot to automatically pick it up and show it at the root of the web app:

@Route
public class MainView extends VerticalLayout {
    private EmployeeRepository employeeRepository;
    private EmployeeEditor editor;
    Grid<Employee> grid;
    TextField filter;
    private Button addNewBtn;
}

We can customize the URL where the view is shown by giving a parameter to the @Route annotation:

@Route(value="myhome")

The class uses the following UI components displayed on the page:

  • EmployeeEditor editor – shows the Employee form used to provide employee information for create and edit operations
  • Grid<Employee> grid – grid to display the list of Employees
  • TextField filter – text field to enter the last name by which the grid will be filtered
  • Button addNewBtn – button to add a new Employee; it displays the EmployeeEditor

It internally uses the employeeRepository to perform the CRUD operations.

4.2. Wiring the Components Together

MainView extends VerticalLayout. VerticalLayout is a component container, which shows the subcomponents in the order of their addition (vertically).

Next, we initialize and add the components.

We give the button a label along with a + icon.

this.grid = new Grid<>(Employee.class);
this.filter = new TextField();
this.addNewBtn = new Button("New employee", VaadinIcon.PLUS.create());

We use a HorizontalLayout to horizontally arrange the filter text field and the button. Then we add this layout, the grid, and the editor to the parent vertical layout:

HorizontalLayout actions = new HorizontalLayout(filter, addNewBtn);
add(actions, grid, editor);

We provide the grid height and column names, and also add placeholder text to the text field:

grid.setHeight("200px");
grid.setColumns("id", "firstName", "lastName");
grid.getColumnByKey("id").setWidth("50px").setFlexGrow(0);

filter.setPlaceholder("Filter by last name");

On application startup, the UI shows the filter text field, the New employee button, and the grid listing the sample employees.

4.3. Adding Logic to Components

We’ll set ValueChangeMode.EAGER to the filter text field. This syncs the value to the server each time it’s changed on the client.

We also set a listener for the value change event, which returns the filtered list of employees based on the text provided in the filter:

filter.setValueChangeMode(ValueChangeMode.EAGER);
filter.addValueChangeListener(e -> listEmployees(e.getValue()));

On selecting a row within the grid, we show the Employee form, allowing the user to edit the first and last name:

grid.asSingleSelect().addValueChangeListener(e -> {
    editor.editEmployee(e.getValue());
});

On clicking the add new employee button, we would show the blank Employee form:

addNewBtn.addClickListener(e -> editor.editEmployee(new Employee("", "")));

Finally, we listen to the changes made by the editor and refresh the grid with data from the backend:

editor.setChangeHandler(() -> {
    editor.setVisible(false);
    listEmployees(filter.getValue());
});

The listEmployees function gets the filtered list of Employees and updates the grid:

void listEmployees(String filterText) {
    if (StringUtils.isEmpty(filterText)) {
        grid.setItems(employeeRepository.findAll());
    } else {
        grid.setItems(employeeRepository.findByLastNameStartsWithIgnoreCase(filterText));
    }
}

4.4. Building the Form

We’ll use a simple form for the user to add/edit an employee:

@SpringComponent
@UIScope
public class EmployeeEditor extends VerticalLayout implements KeyNotifier {

    private EmployeeRepository repository;
    private Employee employee;

    TextField firstName = new TextField("First name");
    TextField lastName = new TextField("Last name");

    Button save = new Button("Save", VaadinIcon.CHECK.create());
    Button cancel = new Button("Cancel");
    Button delete = new Button("Delete", VaadinIcon.TRASH.create());

    HorizontalLayout actions = new HorizontalLayout(save, cancel, delete);
    Binder<Employee> binder = new Binder<>(Employee.class);
    private ChangeHandler changeHandler;
}

@SpringComponent is just an alias for Spring’s @Component annotation, used to avoid conflicts with Vaadin’s Component class.

The @UIScope binds the bean to the current Vaadin UI.

The currently edited Employee is stored in the employee member variable. We capture the Employee properties through the firstName and lastName text fields.

The form has three buttons – save, cancel, and delete.

Once all the components are wired together, selecting a row shows the form populated with that employee’s details, ready for editing.

We use a Binder which binds the form fields with the Employee properties using the naming convention:

binder.bindInstanceFields(this);

We call the appropriate EmployeeRepository method based on the user operations:

void delete() {
    repository.delete(employee);
    changeHandler.onChange();
}

void save() {
    repository.save(employee);
    changeHandler.onChange();
}
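
The ChangeHandler referenced above is simply a callback that the editor fires after a save or delete; a minimal sketch of how it could be declared and wired looks like this:

public interface ChangeHandler {
    void onChange();
}

public void setChangeHandler(ChangeHandler handler) {
    // MainView registers its grid-refresh logic here
    this.changeHandler = handler;
}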

5. Conclusion

In this article, we wrote a full-featured CRUD UI application using Spring Boot and Spring Data JPA for persistence.

As usual, the code is available over on GitHub.

Thread Safe LIFO Data Structure Implementations


1. Introduction

In this tutorial, we’ll discuss various options for thread-safe LIFO data structure implementations.

In the LIFO data structure, elements are inserted and retrieved according to the Last-In-First-Out principle. This means the last inserted element is retrieved first.

In computer science, stack is the term used to refer to such a data structure. 

A stack is handy to deal with some interesting problems like expression evaluation, implementing undo operations, etc. Since it can be used in concurrent execution environments, we might need to make it thread-safe. 

2. Understanding Stacks

Basically, a Stack must implement the following methods:

  1. push() – add an element at the top
  2. pop() – fetch and remove the top element
  3. peek() – fetch the element without removing from the underlying container

As discussed before, let’s assume we want a command processing engine.

In this system, undoing executed commands is an important feature.

In general, all the commands are pushed onto the stack, and then the undo operation can simply be implemented:

  • pop() method to get the last executed command
  • call the undo() method on the popped command object
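
As a rough sketch, assuming a Command interface that exposes an undo() method, the two steps look like this:

Deque<Command> executedCommands = new ArrayDeque<>();
// ... commands are pushed onto the stack as they execute

Command last = executedCommands.pop(); // fetch the last executed command
last.undo();                           // revert its effect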

3. Understanding Thread Safety in Stacks

If a data structure is not thread-safe, concurrent access can lead to race conditions.

Race conditions, in a nutshell, occur when the correct execution of code depends on the timing and sequence of threads. This happens mainly if more than one thread shares the data structure and this structure is not designed for this purpose.

Let’s examine a method below from a Java Collection class, ArrayDeque:

public E pollFirst() {
    int h = head;
    E result = (E) elements[h];
    // ... other book-keeping operations removed, for simplicity
    head = (h + 1) & (elements.length - 1);
    return result;
}

To explain the potential race condition in the above code, let’s assume two threads execute it in the following sequence:

  • First thread executes the third line: sets the result object with the element at the index ‘head’
  • The second thread executes the third line: sets the result object with the element at the index ‘head’
  • First thread executes the fifth line: resets the index ‘head’ to the next element in the backing array
  • The second thread executes the fifth line: resets the index ‘head’ to the next element in the backing array

Oops! Now both executions would return the same result object.

To avoid such race conditions, in this case, a thread shouldn’t execute the first line till the other thread finishes resetting the ‘head’ index at the fifth line. In other words, accessing the element at the index ‘head’ and resetting the index ‘head’ should happen atomically for a thread.

Clearly, in this case, correct execution of code depends on the timing of threads and hence it’s not thread-safe.

4. Thread-safe Stacks Using Locks

In this section, we’ll discuss two possible options for concrete implementations of a thread-safe stack. 

In particular, we’ll cover the Java Stack and a thread-safe decorated ArrayDeque. 

Both use Locks for mutually exclusive access.

4.1. Using the Java Stack

The Java Collections Framework has a legacy thread-safe implementation, Stack, based on Vector, which is basically a synchronized variant of ArrayList.

However, the official doc itself suggests considering using ArrayDeque. Hence we won’t get into too much detail.

Although the Java Stack is thread-safe and straightforward to use, there are major disadvantages with this class:

  • It doesn’t have support for setting the initial capacity
  • It uses locks for all the operations. This might hurt the performance for single-threaded executions.

4.2. Using ArrayDeque

Using the Deque interface is the most convenient approach for LIFO data structures as it provides all the needed stack operations. ArrayDeque is one such concrete implementation.  

Since it’s not using locks for the operations, single-threaded executions would work just fine. But for multi-threaded executions, this is problematic.

However, we can implement a synchronization decorator for ArrayDeque. Though this performs similarly to the Stack class, it solves Stack’s lack of an initial capacity setting.

Let’s have a look at this class:

public class DequeBasedSynchronizedStack<T> {

    // Internal Deque which gets decorated for synchronization.
    private ArrayDeque<T> dequeStore;

    public DequeBasedSynchronizedStack(int initialCapacity) {
        this.dequeStore = new ArrayDeque<>(initialCapacity);
    }

    public DequeBasedSynchronizedStack() {
        dequeStore = new ArrayDeque<>();
    }

    public synchronized T pop() {
        return this.dequeStore.pop();
    }

    public synchronized void push(T element) {
        this.dequeStore.push(element);
    }

    public synchronized T peek() {
        return this.dequeStore.peek();
    }

    public synchronized int size() {
        return this.dequeStore.size();
    }
}

Note that our solution does not implement Deque itself for simplicity, as it contains many more methods.

Also, Guava contains SynchronizedDeque, which is a production-ready implementation of a decorated ArrayDeque.

5. Lock-free Thread-safe Stacks

ConcurrentLinkedDeque is a lock-free implementation of the Deque interface. It’s completely thread-safe, as it uses an efficient lock-free algorithm.

Unlike lock-based ones, lock-free implementations are immune to the following issues:

  • Priority inversion – This occurs when a low-priority thread holds the lock needed by a high-priority thread. This might cause the high-priority thread to block
  • Deadlocks – This occurs when different threads lock the same set of resources in a different order.

On top of that, lock-free implementations have some features that make them well suited to both single- and multi-threaded environments:

  • For unshared data structures and for single-threaded access, performance would be on par with ArrayDeque
  • For shared data structures, performance varies according to the number of threads that access it simultaneously.

And in terms of usability, it’s no different from ArrayDeque, as both implement the Deque interface.
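
Since ConcurrentLinkedDeque implements Deque, the usual stack operations apply directly; a minimal sketch:

Deque<String> stack = new ConcurrentLinkedDeque<>();
stack.push("command-1");     // add an element at the top
String top = stack.peek();   // fetch without removing
String last = stack.pop();   // fetch and remove the top element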

6. Conclusion

In this article, we’ve discussed the stack data structure and its benefits in designing systems like command processing engines and expression evaluators.

Also, we have analyzed various stack implementations in the Java collections framework and discussed their performance and thread-safety nuances.

As usual, code examples can be found over on GitHub.

Custom Validation MessageSource in Spring Boot


1. Overview

MessageSource is a powerful feature available in Spring applications. This helps application developers handle various complex scenarios without writing much extra code, such as environment-specific configuration, internationalization, or configurable values.

One more scenario could be modifying the default validation messages to more user-friendly/custom messages.

In this tutorial, we’ll see how to configure and manage custom validation MessageSource in the application using Spring Boot.

2. Maven Dependencies

Let’s start with adding the necessary Maven dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-validation</artifactId>
</dependency>

You can find the latest versions of these libraries over on Maven Central.

3. Custom Validation Message Example

Let’s consider a scenario where we have to develop an application that supports multiple languages. If the user doesn’t provide the correct details as input, we’d like to show error messages according to the user’s locale.

Let’s take an example of a Login form bean:

public class LoginForm {

    @NotEmpty(message = "{email.notempty}")
    @Email
    private String email;

    @NotNull
    private String password;

    // standard getter and setters
}

Here we’ve added validation constraints that verify if an email is not provided at all, or provided, but not following the standard email address style.

To show custom and locale-specific message, we can provide a placeholder as mentioned for the @NotEmpty annotation.

The email.notempty property will be resolved from a properties files by the MessageSource configuration.

4. Defining the MessageSource Bean

An application context delegates the message resolution to a bean with the exact name messageSource.

ReloadableResourceBundleMessageSource is the most common MessageSource implementation that resolves messages from resource bundles for different locales:

@Bean
public MessageSource messageSource() {
    ReloadableResourceBundleMessageSource messageSource
      = new ReloadableResourceBundleMessageSource();
    
    messageSource.setBasename("classpath:messages");
    messageSource.setDefaultEncoding("UTF-8");
    return messageSource;
}

Here, it’s important to provide the basename, as locale-specific file names will be resolved based on it.

5. Defining LocalValidatorFactoryBean 

To use custom messages from a properties file like the one above, we need to define a LocalValidatorFactoryBean and register the messageSource:

@Bean
public LocalValidatorFactoryBean getValidator() {
    LocalValidatorFactoryBean bean = new LocalValidatorFactoryBean();
    bean.setValidationMessageSource(messageSource());
    return bean;
}

However, note that if we had already extended the WebMvcConfigurerAdapter, to avoid having the custom validator ignored, we’d have to set the validator by overriding the getValidator() method from the parent class.
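
As a sketch, assuming our configuration class extends WebMvcConfigurerAdapter (the class name here is arbitrary), the override simply delegates to the messageSource-aware bean:

@Configuration
public class AppConfig extends WebMvcConfigurerAdapter {

    @Bean
    @Override
    public Validator getValidator() {
        LocalValidatorFactoryBean bean = new LocalValidatorFactoryBean();
        bean.setValidationMessageSource(messageSource());
        return bean;
    }

    // messageSource() bean defined as in section 4
}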

Now we can define a property message like:

email.notempty=<Custom_Message>

instead of

javax.validation.constraints.NotEmpty.message=<Custom_Message>

6. Defining Property Files

The final step is to create a properties file in the src/main/resources directory with the name provided in the basename in section 4:

# messages.properties
email.notempty=Please provide valid email id.

Here we can take advantage of internationalization along with this. Let’s say we want to show messages for a French user in their language.

In this case, we have to add one more property file named messages_fr.properties in the same location (no code changes required at all):

# messages_fr.properties
email.notempty=Veuillez fournir un identifiant de messagerie valide.

7. Conclusion

In this article, we covered how the default validation messages can be changed without modifying the code if the configuration is done properly beforehand.

We can also leverage the support of internationalization along with this to make the application more user-friendly.

As always, the full source code is available over on GitHub.

Java Weekly, Issue 241


Here we go…

1. Spring and Java

>> Spring Boot – Best Practices [e4developer.com]

This primer can help jumpstart your journey down the road of Spring Boot.

>> It’s time! Migrating to Java 11 [medium.com]

With JDK 8 nearing end-of-life and JDK 11 on the horizon, this step-by-step formula for migrating applications to Java 11 couldn’t come soon enough.

>> WireMock Tutorial: Introduction to Stubbing [petrikainulainen.net]

A nice overview of request stubbing and crafting HTTP response bodies, headers, and status codes in WireMock. Good stuff.

>> How to query by entity type using JPA Criteria API [vladmihalcea.com]

A quick example using JPA inheritance that shows how to find entities of a superclass or a specific subclass. Very cool.

>> How to Configure a Human-Readable Logging Format with Logback and Descriptive Logger [reflectoring.io]

A clever SLF4J wrapper library for injecting a custom ID to the Mapped Diagnostic Context of each Logback message, and some handy formatting tips to boot.

>> Spring Boot integration in IntelliJ IDEA [blog.frankel.ch]

A brief rundown of the many ways this popular IDE can help you create, configure, run, debug, and monitor Spring Boot projects. This can really speed up your development time.

>> Multi-module project builds with Maven and Gradle [andresalmiray.com]

A reminder that, while Maven and Gradle aren’t perfect, there’s usually a workaround that lets you achieve your objective.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Top Docker Monitoring Tools [code-maze.com]

If Docker is part of your infrastructure, you’ll need a way to monitor your containers. Here are some of the best tools for the job. Choose wisely.

>> Tip: Provide Contextual Information in Log Messages [reflectoring.io]

Some practical advice on how adding context to your log messages can make them more useful.

>> Should that be a Microservice? Part 5: Failure Isolation [content.pivotal.io]

A compelling argument in favor of isolating failure-prone services into microservices and using a circuit breaker to mitigate failures.

>> Pseudo Localization @ Netflix [medium.com]

A novel approach that helps developers identify and avoid some of the pitfalls of writing multi-language UIs, without incurring the added cost of translation.

>> Code Review Guidelines [philipphauer.de]

A great set of rules for both authors and reviewers that can make a code review much more personal and well-received.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> 99 Problems but a Giraffe Ain’t One [dilbert.com]

>> AI Guilt Trip Lost on Wally [dilbert.com]

>> Delegate Like a Boss [dilbert.com]

4. Pick of the Week

>> Imaginary Problems Are the Root of Bad Software [medium.com]
