
REST API with Play Framework in Java


I just announced the Master Class of my "REST With Spring" Course:

>> THE "REST WITH SPRING" CLASSES

1. Overview

The purpose of this tutorial is to explore the Play Framework and learn how to build REST services with it using Java.

We will put together a REST API to create, retrieve, update and delete student records.

In such applications, we would normally have a database to store student records. Play has a built-in H2 database, along with support for JPA with Hibernate and other persistence frameworks.

However, to keep things simple and focus on the most important stuff, we will use a simple map to store student objects with unique IDs.

2. Play Framework Setup

Let’s head over to the Play Framework’s official page and download the latest version of its distribution. At the time of this tutorial, the latest is version 2.5.

This will download a small starter package with a script called activator inside the bin folder. This script will set up the rest of the framework for us incrementally, as and when we need the different components.

To make things simple, we will add activator to the PATH variable so we can run it from the console.

For a gentler introduction to Play, you can follow this link.

3. Create New Application

Let’s create a new application called student-api based on the play-java template:

activator new student-api play-java

We will only be discussing REST-specific concepts; the rest of the code should be self-explanatory.

4. Project Dependencies

Play allows us to add external JARs as project dependencies in a number of ways. In this case, we will use the simplest which involves adding a JAR in the lib folder in the project’s root.

The only external dependency we will use is the JSON library, whose latest version we can download from Maven Central by following this link.

If student-api/lib folder does not already exist, let’s create it and drop the jar in there. However, this will only be used when we run our tests from within Play.

As with any REST application, we could just as well start the server and run tests from any other environment by sending requests and making assertions on the results.

To make things simple, we will use an IDE to run these tests. You can also test the GET requests from the browser if you prefer.

5. Models

Let’s navigate to student-api/app/models, or create the package if it does not already exist. Let’s then create a Java bean for moving student information, here is Student.java:

public class Student {
    private String firstName;
    private String lastName;
    private int age;
    private int id;

    // standard constructors, getters and setters
}

We will now create a simple data store for student data with helper methods to perform CRUD operations on it, here is StudentStore.java:

public class StudentStore {
    // controllers access the store through this shared instance
    private static final StudentStore instance = new StudentStore();

    private Map<Integer, Student> students = new HashMap<>();
    private int nextId = 0;

    public static StudentStore getInstance() {
        return instance;
    }

    public Student addStudent(Student student) {
        // an incrementing counter avoids id collisions after deletions
        int id = nextId++;
        student.setId(id);
        students.put(id, student);
        return student;
    }

    public Student getStudent(int id) {
        return students.get(id);
    }

    public Set<Student> getAllStudents() {
        return new HashSet<>(students.values());
    }

    public Student updateStudent(Student student) {
        int id = student.getId();
        if (students.containsKey(id)) {
            students.put(id, student);
            return student;
        }
        return null;
    }

    public boolean deleteStudent(int id) {
        return students.remove(id) != null;
    }
}

We are storing the student data in a HashMap. We create an integer id for each new student record and use it as the key.
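To see the store's CRUD semantics in isolation (outside Play), here is a minimal, self-contained sketch. The StoreSketch class, its trimmed-down Student stand-in, and the method names are ours, not part of the Play API; the counter-based id simply illustrates why keys should never be reused:

```java
import java.util.HashMap;
import java.util.Map;

public class StoreSketch {
    // minimal stand-in for the Student bean, just enough for the demo
    static class Student {
        int id;
        String firstName;
        Student(String firstName) { this.firstName = firstName; }
    }

    private final Map<Integer, Student> students = new HashMap<>();
    private int nextId = 0; // monotonically increasing, so deletes never cause id reuse

    Student add(Student s) { s.id = nextId++; students.put(s.id, s); return s; }
    Student get(int id) { return students.get(id); }
    boolean delete(int id) { return students.remove(id) != null; }

    public static void main(String[] args) {
        StoreSketch store = new StoreSketch();
        Student a = store.add(new Student("jody"));
        Student b = store.add(new Student("jane"));
        store.delete(a.id);
        Student c = store.add(new Student("john")); // gets id 2, not a recycled 0
        System.out.println(a.id + " " + b.id + " " + c.id); // 0 1 2
        System.out.println(store.get(b.id).firstName);      // jane
    }
}
```

Note the design choice: deriving the id from the map's size would hand out a duplicate key as soon as a record in the middle is deleted, so a dedicated counter is the safer option even for a toy store.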

6. Controllers

Let’s head over to student-api/app/controllers and create a new controller called StudentController.java. Let’s step through the code incrementally.

Note that Play ships with Jackson for JSON processing – so we can import any Jackson classes we need without adding external dependencies.

Let’s define a utility class to perform a repetitive operation – in this case, building HTTP responses. Let’s create the student-api/app/util package and add Util.java to it:

public class Util {
    public static ObjectNode createResponse(
      Object response, boolean ok) {

        ObjectNode result = Json.newObject();
        result.put("isSuccessfull", ok);
        if (response instanceof String) {
            result.put("body", (String) response);
        } else {
            // set() attaches a JsonNode value (put() is deprecated for JsonNode)
            result.set("body", (JsonNode) response);
        }

        return result;
    }
}

With this method, we will create standard JSON responses containing a boolean isSuccessfull key and the response body.

We can now step through the actions of the controller class.

The create Action:

public Result create() {
    JsonNode json = request().body().asJson();
    if (json == null){
        return badRequest(Util.createResponse(
          "Expecting Json data", false));
    }
    Student student = StudentStore.getInstance().addStudent(
      (Student) Json.fromJson(json, Student.class));
    JsonNode jsonObject = Json.toJson(student);
    return created(Util.createResponse(jsonObject, true));
}

We use a call from the Controller superclass to retrieve the request body into Jackson’s JsonNode class. Notice how we use the utility method to create a response if the body is null.

We can pass it any String or JsonNode, along with a boolean flag to indicate the status.

Notice also how we use Json.fromJson() to convert the incoming JSON object into a Student object and back to JSON for the response.

Finally, instead of the ok() we are used to, we are using the created() helper method from the play.mvc.Results class. These concepts will come up throughout the rest of the actions.

The update Action:

public Result update() {
    JsonNode json = request().body().asJson();
    if (json == null){
        return badRequest(Util.createResponse(
          "Expecting Json data", false));
    }
    Student student = StudentStore.getInstance().updateStudent(
      (Student) Json.fromJson(json, Student.class));
    if (student == null) {
        return notFound(Util.createResponse(
          "Student not found", false));
    }

    JsonNode jsonObject = Json.toJson(student);
    return ok(Util.createResponse(jsonObject, true));
}

The retrieve Action:

public Result retrieve(int id) {
    Student student = StudentStore.getInstance().getStudent(id);
    if (student == null) {
        return notFound(Util.createResponse(
          "Student with id:" + id + " not found", false));
    }
    JsonNode jsonObjects = Json.toJson(student);
    return ok(Util.createResponse(jsonObjects, true));
}

The delete Action:

public Result delete(int id) {
    boolean status = StudentStore.getInstance().deleteStudent(id);
    if (!status) {
        return notFound(Util.createResponse(
          "Student with id:" + id + " not found", false));
    }
    return ok(Util.createResponse(
      "Student with id:" + id + " deleted", true));
}

The listStudents Action:

public Result listStudents() {
    Set<Student> result = StudentStore.getInstance().getAllStudents();
    ObjectMapper mapper = new ObjectMapper();

    JsonNode jsonData = mapper.convertValue(result, JsonNode.class);
    return ok(Util.createResponse(jsonData, true));
}

Notice the difference in how we convert a set of Student objects into a JsonNode using Jackson’s ObjectMapper.

7. Mappings

Having set up our controller actions, we can now map them. Let’s open the file student-api/conf/routes and add these routes:

GET     /              controllers.StudentController.listStudents()
POST    /:id           controllers.StudentController.retrieve(id:Int)
POST    /              controllers.StudentController.create()
PUT     /              controllers.StudentController.update()
DELETE  /:id           controllers.StudentController.delete(id:Int)
GET     /assets/*file  controllers.Assets.versioned(
                         path="/public", file: Asset)

The /assets endpoint must always be present for download of static resources.

After this, we are done with building the student API.

8. Testing

We can now run tests against the API by sending requests to http://localhost:9000/, adding the appropriate context path where needed. Running the base path from the browser should output:

{
     "isSuccessfull":true,
     "body":[]
}

As we can see, the body is empty since we have not added any records yet. Let’s now run some tests from a Java client. We can extract the base path into a constant to avoid repetition:

private static final String BASE_URL = "http://localhost:9000";

Adding records:

@Test
public void whenCreatesRecord_thenCorrect() throws Exception {
    Student student = new Student("jody", "west", 50);
    JSONObject obj = new JSONObject(makeRequest(
      BASE_URL, "POST", new JSONObject(student)));

    assertTrue(obj.getBoolean("isSuccessfull"));

    JSONObject body = obj.getJSONObject("body");

    assertEquals(student.getAge(), body.getInt("age"));
    assertEquals(student.getFirstName(), body.getString("firstName"));
    assertEquals(student.getLastName(), body.getString("lastName"));
}

Notice the use of the makeRequest method; it’s our own helper for sending HTTP requests to any URL:

public static String makeRequest(String myUrl,
  String httpMethod, JSONObject parameters) throws IOException {
    URL url = new URL(myUrl);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setDoInput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setRequestMethod(httpMethod);

    if (Arrays.asList("POST", "PUT").contains(httpMethod)) {
        String params = parameters.toString();
        conn.setDoOutput(true);
        try (DataOutputStream dos
          = new DataOutputStream(conn.getOutputStream())) {
            dos.writeBytes(params);
            dos.flush();
        }
    }

    int respCode = conn.getResponseCode();
    if (respCode != 200 && respCode != 201) {
        return inputStreamToString(conn.getErrorStream());
    }
    return inputStreamToString(conn.getInputStream());
}

And also method inputStreamToString:

public static String inputStreamToString(InputStream is) throws IOException {
    StringBuilder sb = new StringBuilder();
    try (BufferedReader br = new BufferedReader(new InputStreamReader(is))) {
        String line;
        while ((line = br.readLine()) != null) {
            sb.append(line);
        }
    }
    return sb.toString();
}
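One behavior of this helper worth noting: readLine() strips line separators, so a multi-line response body comes back as one concatenated line. A quick standalone check (the StreamSketch class name and the sample input are ours):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamSketch {
    // same logic as the article's helper: read line by line, append without separators
    static String inputStreamToString(InputStream is) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader br = new BufferedReader(new InputStreamReader(is))) {
            String line;
            while ((line = br.readLine()) != null) {
                sb.append(line);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // a two-line JSON body, as a server might send it
        InputStream is = new ByteArrayInputStream(
          "{\"isSuccessfull\":true,\n\"body\":[]}".getBytes(StandardCharsets.UTF_8));
        // newlines are dropped, the lines are simply concatenated
        System.out.println(inputStreamToString(is));
    }
}
```

For JSON this is harmless, since whitespace between tokens carries no meaning; for plain-text bodies where line breaks matter, we would append a separator back.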

After running the above test, loading http://localhost:9000 from the browser should now give us:

{
    "isSuccessfull":true,
    "body": [{
        "firstName":"jody",
        "lastName":"west",
        "age":50,
        "id":0
    }]
}

The id attribute increments for every new record we add.

Deleting a record:

@Test
public void whenDeletesCreatedRecord_thenCorrect() throws Exception {
    Student student = new Student("Usain", "Bolt", 25);
    JSONObject body = new JSONObject(
      makeRequest(BASE_URL, "POST",
        new JSONObject(student))).getJSONObject("body");
    int id = body.getInt("id");
    JSONObject obj1 = new JSONObject(
      makeRequest(BASE_URL + "/" + id, "POST", new JSONObject()));

    assertTrue(obj1.getBoolean("isSuccessfull"));

    makeRequest(BASE_URL + "/" + id, "DELETE", null);
    JSONObject obj2 = new JSONObject(
      makeRequest(BASE_URL + "/" + id, "POST", new JSONObject()));

    assertFalse(obj2.getBoolean("isSuccessfull"));
}

In the above test, we first create a new record, retrieve it by its new id, and then delete it. When we try to retrieve the same id again, the operation fails as expected.

Let’s now update the record:

@Test
public void whenUpdatesCreatedRecord_thenCorrect() throws Exception {
    Student student = new Student("Donald", "Trump", 50);
    JSONObject body1 = new JSONObject(
      makeRequest(BASE_URL, "POST", 
        new JSONObject(student))).getJSONObject("body");

    assertEquals(student.getAge(), body1.getInt("age"));

    int newAge = 60;
    body1.put("age", newAge);
    JSONObject body2 = new JSONObject(
      makeRequest(BASE_URL, "PUT", body1)).getJSONObject("body");

    assertFalse(student.getAge() == body2.getInt("age"));
    assertTrue(newAge == body2.getInt("age"));
}

The above test demonstrates the change in the value of the age field after updating the record.

Getting all records:

@Test
public void whenGetsAllRecords_thenCorrect() throws Exception {
    Student student1 = new Student("jane", "daisy", 50);
    Student student2 = new Student("john", "daniel", 60);
    Student student3 = new Student("don", "mason", 55);
    Student student4 = new Student("scarlet", "ohara", 90);

    makeRequest(BASE_URL, "POST", new JSONObject(student1));
    makeRequest(BASE_URL, "POST", new JSONObject(student2));
    makeRequest(BASE_URL, "POST", new JSONObject(student3));
    makeRequest(BASE_URL, "POST", new JSONObject(student4));

    JSONObject objects = new JSONObject(makeRequest(BASE_URL, "GET", null));
    assertTrue(objects.getBoolean("isSuccessfull"));
    JSONArray array = objects.getJSONArray("body");
    assertTrue(array.length() >= 4);
}
With the above test, we verify that the listStudents controller action works correctly.

9. Conclusion

In this article, we have shown how to build a full-fledged REST API using Play Framework.

The full source code and examples for this article are available in the GitHub project.



Introduction to SLF4J


1. Overview

Simple Logging Facade for Java (SLF4J) acts as a facade for different logging frameworks (e.g. java.util.logging, Logback, Log4j). It offers a generic API, making the logging independent of the actual implementation.

This allows for different logging frameworks to coexist. It also helps migrate from one framework to another. Finally, apart from the standardized API, it also offers some “syntactic sugar”.

This article will discuss the dependencies and configuration needed to integrate SLF4J with Log4j2, Logback, Log4j and Jakarta Commons Logging. You can read more about each of these implementations in the article Introduction to Java Logging.

2. The Log4j2 Setup

To use SLF4J with Log4j2 you should add the following libraries to pom.xml:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.7</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.7</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.7</version>
</dependency>

The latest version can be found here: log4j-api, log4j-core, log4j-slf4j-impl.

The actual logging configuration adheres to the native Log4j 2 configuration. Let’s see how the Logger instance is created:

public class SLF4JExample {

    private static Logger logger = LoggerFactory.getLogger(SLF4JExample.class);

    public static void main(String[] args) {
        logger.debug("Debug log message");
        logger.info("Info log message");
        logger.error("Error log message");
    }
}

Note that the Logger and LoggerFactory belong to the org.slf4j package. An example project running with the explained configuration is available here.

3. The Logback Setup

To use SLF4J with Logback, we don’t need to add SLF4J to our classpath; Logback already uses SLF4J and is considered the reference implementation. We only need to include the Logback library:

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>

The latest version can be found here: logback-classic.

The configuration is Logback-specific but works seamlessly with SLF4J. With the proper dependencies and configuration in place, the same code from previous sections can be used to handle the logging.

4. The Log4j Setup

In the previous sections, we covered a use-case where SLF4J “sits” on top of the particular logging implementation. Used like this, it completely abstracts away the underlying framework.

There are cases when an existing logging solution cannot be replaced, e.g. due to third-party requirements. This, however, does not mean that the project is “sentenced” to the framework already in use.

SLF4J can be configured as a bridge, where the calls to an existing framework are redirected to it. Let’s add the necessary dependencies to create a bridge for Log4j:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>log4j-over-slf4j</artifactId>
    <version>1.7.21</version>
</dependency>

With the dependency in place (check for latest at log4j-over-slf4j), all the calls to Log4j will be redirected to SLF4J. Consider the official documentation to learn more about bridging existing frameworks.

Just as with the other frameworks Log4j can serve as an underlying implementation. Let’s add the necessary dependencies:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.21</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>

The latest versions can be found here for slf4j-log4j12 and log4j. An example project configured in the manner explained is available here.

5. JCL Bridge Setup

In the previous sections, we showed how the same code base can be used to support logging using different implementations. While this is the main promise and strength of SLF4J, it is also the goal behind JCL (Jakarta Commons Logging or Apache Commons Logging).

JCL is, by intention, a framework similar to SLF4J. The major difference is that JCL resolves the underlying implementation at runtime, through a class-loading system. This approach is perceived as problematic in cases where custom classloaders are at play.

SLF4J resolves its bindings at compile-time. It’s perceived as simpler, yet powerful enough.

Luckily, the two frameworks can work together in bridge mode:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
    <version>1.7.21</version>
</dependency>

The latest dependency version can be found here: jcl-over-slf4j.

As with the other cases, the same code-base will run just fine. An example of a complete project running this setup is available here.

6. Further SLF4J Goodness

SLF4J provides additional features that can make logging more efficient and code more readable. For example, SLF4J provides a very useful interface for working with parameters:

String variable = "Hello John";
logger.debug("Printing variable value: {}", variable);

Here is the Log4j code that does the same thing:

String variable = "Hello John";
logger.debug("Printing variable value: " + variable);

As you can see, Log4j will concatenate the Strings whether or not the debug level is enabled. In high-load applications, this may cause performance issues. SLF4J will build the message only when the debug level is enabled. To do the same with Log4j, we need to add an extra if block checking whether the debug level is enabled:

String variable = "Hello John";
if (logger.isDebugEnabled()) {
    logger.debug("Printing variable value: " + variable);
}
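This lazy-evaluation idea is not unique to SLF4J; for comparison, java.util.logging offers Supplier-based overloads (since Java 8) that defer message construction the same way. A small self-contained sketch, where the logger name "demo" and the counter are ours, demonstrates that the message is never built when the level is disabled:

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingSketch {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setLevel(Level.INFO); // FINE (roughly DEBUG) is disabled

        int[] evaluations = {0};
        Supplier<String> expensive = () -> {
            evaluations[0]++; // count how many times the message is built
            return "Printing variable value: Hello John";
        };

        // the Supplier is only invoked if FINE is actually loggable
        logger.log(Level.FINE, expensive);

        System.out.println("message built " + evaluations[0] + " times");
    }
}
```

The effect is the same as SLF4J's {} placeholders: the cost of building the message is only paid when the message will actually be logged.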

SLF4J standardized the logging levels, which differ across the particular implementations. The FATAL logging level (introduced in Log4j) was dropped, based on the premise that in a logging framework we should not decide when an application should be terminated.

The logging levels used are ERROR, WARN, INFO, DEBUG and TRACE. You can read more about using them in the Introduction to Java Logging article.

7. Conclusion

SLF4J helps with silently switching between logging frameworks. It is simple yet flexible, and allows for readability and performance improvements.

As usual, the code can be found over on GitHub. In addition, we’re referencing two other projects dedicated to different articles, but containing discussed log configurations, here and here.


Java Web Weekly, Issue 146


1. Spring and Java

>> Securing JAX-RS Endpoints with JWT [antoniogoncalves.org]

JWT is quickly becoming the de facto standard in web security. And JJWT is certainly a good way to go for an implementation.

>> Introducing Hibernate Search Sort DSL [in.relation.to]

The Elasticsearch support coming to Hibernate looks intelligently designed. Plus it’s a fluent API, which gives it some extra points.

>> How to update only a subset of entity attributes using JPA and Hibernate [vladmihalcea.com]

Who said Hibernate is a blunt instrument? You can get surgical with it, Training-Day style.

>> How to persist creation and update timestamps with Hibernate [thoughts-on-java.org]

Keeping track of create/update times is usually the first step towards building out audit logic – here’s a good, simple way of doing that in Hibernate.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Evolving Distributed Systems [olivergierke.de]

This one’s first for a reason. It’s a step back and a real look at architecting a distributed system.

It talks about boundaries between systems, the essential question of inter-communication, all in the scope of achieving a good cadence of actually pushing out real work.

>> No More Boilerplate Code [thecodewhisperer.com]

Better code design? Why not.

>> API Simulation + Contract Testing = Happiness [alexecollins.com]

API contract testing is definitely an underused practice.

This is a very quick and to the point writeup introducing the concept and giving you some basic tools to get it going.

Also worth reading:

3. Musings

>> On the limits of TDD, and the limits of studies of TDD [virtuouscode.com]

The results of an interesting (albeit not super scientific) test about the results of doing TDD.

Of course measuring only a few of the concerns may not be very representative – TDD touches so many aspects of development that it’s tough to really quantify the impact it has.

>> Making sure inter-teams communication doesn’t work [frankel.ch]

Some common sense advice about good communication, which is unfortunately glossed over by so many organizations out there.

>> You don’t need tests [swizec.com]

I chuckled my way through this one. You should do the same.

>> Undercover Testability Killers [daedtech.com]

Unit testing is markedly difficult when you’re starting out.

Before even considering the correctness of the system, the first significant advantage of weaving tests into a system is design. Good design doesn’t necessarily come from unit tests, but it’s a whole lot easier with them as a positive constraint on the system.

>> The Right Way to Run a Technical Interview [3riverdev.com]

A solid technique well worth doing when you’re running a technical interview.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> My idea-shredding gloves [dilbert.com]

>> Smart people like it [dilbert.com]

>> What makes you think you can do my job better? [dilbert.com]

5. Pick of the Week

>> How To Ask Questions The Smart Way [catb.org]

Thymeleaf: Custom Layout Dialect


1. Introduction

Thymeleaf is a Java template engine for processing and creating HTML, XML, JavaScript, CSS and plain text. For an intro to Thymeleaf and Spring, have a look at this write-up.

In this writeup we’ll focus on templating – something that most reasonably complex sites need to do one way or another. Simply put, pages need to share common page components like the header, footer, menu and potentially much more.

Thymeleaf addresses that with custom dialects – you can build layouts using the Thymeleaf Standard Layout System or the Layout Dialect – which uses the decorator pattern for working with the layout files.

In this article, we’ll discuss a handful of features of the Thymeleaf Layout Dialect – which can be found here. To be more specific, we will discuss features like creating layouts, custom titles and head element merging.

2. Maven Dependencies

First, let’s see the configuration required to integrate Thymeleaf with Spring. The thymeleaf-spring library is required in our dependencies:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf-spring4</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>

Note that, for a Spring 3 project, the thymeleaf-spring3 library must be used instead of thymeleaf-spring4. The latest version of the dependencies may be found here.

We’ll also need a dependency for custom layouts dialect:

<dependency>
    <groupId>nz.net.ultraq.thymeleaf</groupId>
    <artifactId>thymeleaf-layout-dialect</artifactId>
    <version>2.0.4</version>
</dependency>

The latest version can be found at the Maven Central Repository.

3. Thymeleaf Layout Dialect Configuration

In this section, we will discuss how to configure Thymeleaf to use Layout Dialect. If you want to take a step back and see how to configure Thymeleaf 3.0 in your web app project, please check this tutorial.

Once we have added the Maven dependencies to the project, we’ll need to add the Layout Dialect to our existing Thymeleaf template engine. We can do this with Java configuration:

SpringTemplateEngine engine = new SpringTemplateEngine();
engine.addDialect(new LayoutDialect());

Or by using XML file config:

<bean id="templateEngine" class="org.thymeleaf.spring4.SpringTemplateEngine">
    <property name="additionalDialects">
        <set>
            <bean class="nz.net.ultraq.thymeleaf.LayoutDialect"/>
        </set>
    </property>
</bean>

When decorating the sections of the content and layout templates, the default behaviour is to place all content elements after the layout ones.

Sometimes we need smarter merging of elements, allowing us to group similar elements together (js scripts together, stylesheets together etc.).

To achieve that, we need to add a sorting strategy to our configuration – in our case, it will be the grouping strategy:

engine.addDialect(new LayoutDialect(new GroupingStrategy()));

or in XML:

<bean class="nz.net.ultraq.thymeleaf.LayoutDialect">
    <constructor-arg ref="groupingStrategy"/>
</bean>

The default strategy is to append elements. With the above-mentioned strategy, everything will be sorted and grouped.

If neither strategy suits our needs, we can implement our own SortingStrategy and pass it to the dialect as above.

4. Namespace and Attribute Processors’ Features

Once configured, we can start using the layout namespace and five new attribute processors: decorate, title-pattern, insert, replace, and fragment.

To create the layout template that we want to use for our HTML files, let’s create the following file, named template.html:

<!DOCTYPE html>
<html xmlns:layout="http://www.ultraq.net.nz/thymeleaf/layout">
...
</html>

As we can see, we changed the namespace from the standard xmlns:th=”http://www.thymeleaf.org” to xmlns:layout=”http://www.ultraq.net.nz/thymeleaf/layout”.

Now we can start working with attribute processors in our HTML files.

4.1. layout:fragment

To create custom sections in our layout, or reusable templates that can be replaced by sections sharing the same name, we use the fragment attribute inside our template.html body:

<body>
    <header>
        <h1>New dialect example</h1>
    </header>
    <section layout:fragment="custom-content">
        <p>Your page content goes here</p>
    </section>
    <footer>
        <p>My custom footer</p>
        <p layout:fragment="custom-footer">Your custom footer here</p>
    </footer>
</body>

Notice that there are two sections that will be replaced by our custom HTML – content and footer.

It’s also important to provide the default HTML that will be used if a matching fragment is not found.

4.2. layout:decorate

The next step is to create an HTML file that will be decorated by our layout:

<!DOCTYPE html>
<html xmlns:layout="http://www.ultraq.net.nz/thymeleaf/layout"
  layout:decorate="~{template.html}">
<head>
<title>Layout Dialect Example</title>
</head>
<body>
    <section layout:fragment="custom-content">
        <p>This is a custom content that you can provide</p>
    </section>
    <footer>
        <p layout:fragment="custom-footer">This is some footer content
          that you can change</p>
    </footer>
</body>
</html>

As shown on the third line, we use layout:decorate and specify the decorator source. All fragments from the layout file that match fragments in the content file will be replaced by their custom implementations.

4.3. layout:title-pattern

Given that the Layout Dialect automatically overrides the layout’s title with the one found in the content template, you might want to preserve parts of the title found in the layout.

For example, we can create breadcrumbs or retain the name of the website in the page title. The layout:title-pattern processor comes in handy in such a case. All we need to specify in the layout file is:

<title layout:title-pattern="$LAYOUT_TITLE - $CONTENT_TITLE">Baeldung</title>

So the final result for the layout and content file presented in the previous paragraph will look like this:

<title>Baeldung - Layout Dialect Example</title>

4.4. layout:insert/layout:replace

The first processor, layout:insert, is similar to Thymeleaf’s original th:insert, but allows us to pass entire HTML fragments to the inserted template. It is very useful if we have some HTML that we want to reuse, but whose contents are too complex to determine or construct with context variables alone.

The second one, layout:replace, is similar to the first, but with the behaviour of th:replace, which will actually substitute the host tag with the defined fragment’s code.

5. Conclusion

In this article, we described a few ways of implementing layouts in Thymeleaf.

Note that the examples use the Thymeleaf version 3.0; if you want to learn how to migrate your project, please follow this procedure.

The full implementation of this tutorial can be found in the GitHub project.

How to test? Our suggestion is to play with a browser first, then check the existing JUnit tests as well.

Remember, you can build layouts using above-mentioned Layout Dialect or you can easily create your own solution. Hopefully, this article gives you some more insights on the topic and you will find your preferred approach depending on your needs.


Introduction to the Java NIO Selector


1. Overview

In this article, we will explore the introductory parts of Java NIO’s Selector component. This is an abstract class defined in the java.nio.channels package.

A selector provides a mechanism for monitoring one or more NIO channels and recognizing when one or more become available for data transfer.

This way, a single thread can be used for managing multiple channels, and thus multiple network connections.

2. Why Use a Selector?

With a selector, we can use one thread instead of several to manage multiple channels. Switching between threads is expensive for the operating system, and additionally, each thread takes up memory.

Therefore, the fewer threads we use, the better. However, it’s important to remember that modern operating systems and CPUs keep getting better at multitasking, so the overhead of multithreading keeps diminishing over time.

What we will be dealing with here is how we can handle multiple channels with a single thread, using a selector.

Note also that selectors don’t just help you read data; they can also listen for incoming network connections and write data across slow channels.

3. Setup

To use the selector, we do not need any special setup. All the classes we need are in the core java.nio package and we just have to import what we need.

After that, we can register multiple channels with a selector object. When I/O activity happens on any of the channels, the selector notifies us. This is how we can read from a large number of data sources with a single thread.

Any channel we register with a selector must be a sub-class of SelectableChannel. These are a special type of channel that can be put in non-blocking mode.

4. Creating a Selector

A selector may be created by invoking the static open method of the Selector class, which will use the system’s default selector provider to create a new selector:

Selector selector = Selector.open();

5. Registering Selectable Channels

In order for a selector to monitor any channels, we must register these channels with the selector. We do this by invoking the register method of the selectable channel.

But before a channel is registered with a selector, it must be in non-blocking mode:

channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);

This means that we cannot use FileChannels with a selector since they cannot be switched into non-blocking mode the way we do with socket channels.

The first parameter is the Selector object we created earlier; the second parameter defines an interest set, meaning what events we are interested in listening for in the monitored channel, via the selector.

There are four different events we can listen for, each is represented by a constant in the SelectionKey class:

  • Connect – when a client attempts to connect to the server. Represented by SelectionKey.OP_CONNECT
  • Accept – when the server accepts a connection from a client. Represented by SelectionKey.OP_ACCEPT
  • Read – when the server is ready to read from the channel. Represented by SelectionKey.OP_READ
  • Write – when the server is ready to write to the channel. Represented by SelectionKey.OP_WRITE
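Since these constants are bit flags, we can listen for several events on one channel at once by combining them with a bitwise OR. A minimal sketch (the class name is ours):

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class InterestOpsDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);

        // Interest in more than one event: OR the constants together
        SelectionKey key = channel.register(
          selector, SelectionKey.OP_READ | SelectionKey.OP_WRITE);

        System.out.println(
          key.interestOps() == (SelectionKey.OP_READ | SelectionKey.OP_WRITE));

        channel.close();
        selector.close();
    }
}
```

Note that each channel type only supports its valid operations; for example, a ServerSocketChannel only supports OP_ACCEPT.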

The returned SelectionKey object represents the selectable channel’s registration with the selector. We will look at it further in the following section.

6. The SelectionKey Object

As we saw in the previous section, when we register a channel with a selector, we get a SelectionKey object. This object holds data representing the registration of the channel.

It contains some important properties which we must understand well to be able to use the selector on the channel. We will look at these properties in the following subsections.

6.1. The Interest Set

An interest set defines the set of events that we want the selector to watch out for on this channel. It is an integer value; we can get this information in the following way.

First, we have the interest set returned by the SelectionKey‘s interestOps method. Then we have the event constant in SelectionKey we looked at earlier.

When we AND these two values and compare the result with zero, we get a boolean value that tells us whether the event is being watched for or not:

int interestSet = selectionKey.interestOps();

boolean isInterestedInAccept  = (interestSet & SelectionKey.OP_ACCEPT) != 0;
boolean isInterestedInConnect = (interestSet & SelectionKey.OP_CONNECT) != 0;
boolean isInterestedInRead    = (interestSet & SelectionKey.OP_READ) != 0;
boolean isInterestedInWrite   = (interestSet & SelectionKey.OP_WRITE) != 0;

6.2. The Ready Set

The ready set defines the set of events that the channel is ready for. It is an integer value as well; we can get this information in the following way.

We have the ready set returned by SelectionKey‘s readyOps method. When we AND this value with the event constants as we did in the case of the interest set, we get a boolean representing whether the channel is ready for a particular event or not.
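As an illustrative sketch (the class name is ours), we can inspect a freshly registered channel’s ready set; before any selection has occurred, it is empty:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class ReadySetCheck {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        SelectionKey key = server.register(selector, SelectionKey.OP_ACCEPT);

        // No selection has run yet, so the ready set is still zero
        int readySet = key.readyOps();
        boolean isReadyToAccept = (readySet & SelectionKey.OP_ACCEPT) != 0;
        System.out.println(isReadyToAccept);

        server.close();
        selector.close();
    }
}
```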

Another alternative and shorter way to do this is to use SelectionKey’s convenience methods for this same purpose:

selectionKey.isAcceptable();
selectionKey.isConnectable();
selectionKey.isReadable();
selectionKey.isWritable();

6.3. The Channel

Accessing the channel being watched from the SelectionKey object is very simple. We just call the channel method:

Channel channel = key.channel();

6.4. The Selector

Just like getting a channel, it’s very easy to obtain the Selector object from the SelectionKey object:

Selector selector = key.selector();

6.5. Attaching Objects

We can attach an object to a SelectionKey. Sometimes we may want to give a channel a custom ID or attach any kind of Java object we may want to keep track of.

Attaching objects is a handy way of doing it. Here is how you attach and get objects from a SelectionKey:

key.attach(object);

Object object = key.attachment();

Alternatively, we can choose to attach an object during channel registration. We add it as a third parameter to channel’s register method, like so:

SelectionKey key = channel.register(
  selector, SelectionKey.OP_ACCEPT, object);

7. Channel Key Selection

So far, we have looked at how to create a selector, register channels to it and inspect the properties of the SelectionKey object which represents a channel’s registration to a selector.

This is only half of the process, now we have to perform a continuous process of selecting the ready set which we looked at earlier. We do selection using selector’s select method, like so:

int channels = selector.select();

This method blocks until at least one channel is ready for an operation. The integer returned represents the number of keys whose channels are ready for an operation.

Next, we usually retrieve the set of selected keys for processing:

Set<SelectionKey> selectedKeys = selector.selectedKeys();

The set we have obtained is of SelectionKey objects; each key represents a registered channel which is ready for an operation.

After this, we usually iterate over this set and for each key, we obtain the channel and perform any of the operations that appear in our interest set on it.

During the lifetime of a channel, it may be selected several times as its key appears in the ready set for different events. This is why we must have a continuous loop to capture and process channel events as and when they occur.

8. Complete Example

To cement the knowledge we have gained in the previous sections, we are going to build a complete client server example.

For ease of testing out our code, we will build an echo server and an echo client. In this kind of setup, the client connects to the server and starts sending messages to it. The server echoes back messages sent by each client.

When the server encounters a specific message, such as end, it interprets it as the end of the communication and closes the connection with the client.

8.1. The Server

Here is our code for EchoServer.java:

public class EchoServer {

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel serverSocket = ServerSocketChannel.open();
        serverSocket.bind(new InetSocketAddress("localhost", 5454));
        serverSocket.configureBlocking(false);
        serverSocket.register(selector, SelectionKey.OP_ACCEPT);
        ByteBuffer buffer = ByteBuffer.allocate(256);

        while (true) {
            selector.select();
            Set<SelectionKey> selectedKeys = selector.selectedKeys();
            Iterator<SelectionKey> iter = selectedKeys.iterator();
            while (iter.hasNext()) {

                SelectionKey key = iter.next();

                if (key.isAcceptable()) {
                    SocketChannel client = serverSocket.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                }

                if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    client.read(buffer);
                    buffer.flip();
                    client.write(buffer);
                    buffer.clear();
                }
                iter.remove();
            }
        }
    }
}

Here is what is happening: we create a Selector object by calling the static open method. We then create a channel, also by calling its static open method, specifically a ServerSocketChannel instance.

This is because ServerSocketChannel is selectable and good for a stream-oriented listening socket.

We then bind it to a port of our choice. Remember we said earlier that before registering a selectable channel to a selector, we must first set it to non-blocking mode. So next we do this and then register the channel to the selector.

We don’t need the SelectionKey instance of this channel at this stage, so we do not keep a reference to it.

Java NIO uses a buffer-oriented model rather than a stream-oriented model. So socket communication usually takes place by writing to and reading from a buffer.

We, therefore, create a new ByteBuffer which the server will be writing to and reading from. We initialize it to 256 bytes; this is just an arbitrary value, depending on how much data we plan to transfer back and forth.

Finally, we perform the selection process. We select the ready channels, retrieve their selection keys, iterate over the keys and perform the operations for which each channel is ready.

We do this in an infinite loop since servers usually need to keep running whether there is activity or not.

The only operation a ServerSocketChannel can handle is an ACCEPT operation. When we accept the connection from a client, we obtain a SocketChannel object on which we can perform reads and writes. We set it to non-blocking mode and register it with the selector for a READ operation.

During one of the subsequent selections, this new channel will become read-ready. We retrieve it and read its contents into the buffer. True to its nature as an echo server, we must write this content back to the client.

When we want to read from a buffer we have just been writing to, we must call the flip() method. This switches the buffer from write mode to read mode, after which we simply write its contents back to the channel and clear the buffer for the next read.
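The flip() call can be observed in isolation with a small, self-contained sketch (the class name is ours):

```java
import java.nio.ByteBuffer;

public class BufferFlipDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(256);
        buffer.put("echo".getBytes());   // write mode: position advances to 4

        buffer.flip();                   // limit = position, position = 0

        byte[] data = new byte[buffer.remaining()];
        buffer.get(data);                // read mode: drain the buffer
        System.out.println(new String(data)); // prints "echo"
    }
}
```

In the server loop, client.read(buffer) plays the role of put(), and client.write(buffer) plays the role of get().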

8.2. The Client

Here is our code for EchoClient.java:

public class EchoClient {
    private static SocketChannel client;
    private static ByteBuffer buffer;
    private static EchoClient instance;

    // singleton implementation 

    private EchoClient() {
        try {
            client = SocketChannel.open(
              new InetSocketAddress("localhost", 5454));
            buffer = ByteBuffer.allocate(256);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public String sendMessage(String msg) {
        buffer = ByteBuffer.wrap(msg.getBytes());
        String response = null;
        try {
            client.write(buffer);
            buffer.clear();
            client.read(buffer);
            response = new String(buffer.array()).trim();
            buffer.clear();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return response;
    }
}

The client is simpler than the server.

We use a singleton pattern to instantiate it inside the start static method. We call the private constructor from this method.

In the private constructor, we open a connection on the same port on which the server channel was bound and still on the same host.

We then create a buffer to which we can write and from which we can read.

Finally, we have a sendMessage method which wraps any string we pass to it into a byte buffer that is transmitted over the channel to the server.

We then read from the client channel to get the message sent by the server. We return this as the echo of our message.

8.3. Testing

Inside a class called EchoTest.java, we are going to create a test case which sends messages to the server and only passes when the same messages are received back from the server.

Let’s start the server by running EchoServer.java as a Java application since it has a main method. We can now run the test:

@Test
public void givenClient_whenServerEchosMessage_thenCorrect() {
    EchoClient client = EchoClient.start();
    String resp1 = client.sendMessage("hello");
    String resp2 = client.sendMessage("world");
    assertEquals("hello", resp1);
    assertEquals("world", resp2);
}

9. Conclusion

In this article, we have covered basic usage of the Java NIO Selector component.

The complete source code and all code snippets for this article are available in the GitHub project.

Where is the Maven Local Repository?


1. Overview

This quick writeup will focus on where Maven stores all the dependencies locally – which is in the Maven local repository.

Simply put, when we run a Maven build, all the dependencies of our project (jars, plugin jars, other artifacts) are all stored locally for later use.

Also keep in mind that, beyond just this type of local repository, Maven does support 3 types of repos:

  • Local – Folder location on the local Dev machine
  • Central – Repository provided by Maven community
  • Remote – Organization owned custom repository

Let’s now focus on the local repository.

2. The Local Repository

The local repository of Maven is a folder location on the developer’s machine, where all the project artifacts are stored locally.

When a Maven build is executed, Maven automatically downloads all the dependency jars into the local repository.

Usually this folder is named .m2.

Here’s where the default path to this folder is – based on OS:

Windows : C:\Users\<User_Name>\.m2
Linux : /home/<User_Name>/.m2
Mac : ~/.m2
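As a small illustrative sketch (the class name is ours), the default location can be derived from the user.home system property; the repository itself sits in the repository subfolder of .m2, assuming no settings.xml override:

```java
public class DefaultLocalRepo {
    public static void main(String[] args) {
        // Maven's default local repository lives under the user's home directory
        String defaultRepo = System.getProperty("user.home")
          + "/.m2/repository";
        System.out.println(defaultRepo);
    }
}
```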

3. No Repository in the Default Location

If the repo is not present in this default location, it’s likely because of some pre-existing configuration.

That config file is located in the Maven installation directory – in a folder called conf – and is named settings.xml.

Here’s the relevant configuration that determines the location of our missing local repo:

<settings>
    <localRepository>C:/maven_repository</localRepository>
    ...
</settings>

That’s essentially how we can change the location of the local repo – and of course, if that location is changed, we’ll no longer find the repo at the default location.

Note: The files stored in the earlier location will not be moved automatically.

4. Conclusion

In this quick tutorial, we had a look at the Maven local repository’s default setup and at the custom configuration to change its location.

Introduction to WebJars


1. Overview

This tutorial introduces WebJars and how to use them in a Java application.

Simply put, WebJars are client side dependencies packaged into JAR archive files. They work with most JVM containers and web frameworks.

Here are a few popular WebJars: Twitter Bootstrap, jQuery, AngularJS, Chart.js, etc.; a full list is available on the official website.

2. Why Use WebJars?

This question has a very simple answer – because it’s easy.

Manually adding and managing client side dependencies often results in difficult to maintain codebases.

Also, most Java developers prefer to use Maven and Gradle as build and dependency management tools.

The main problem WebJars solves is making client side dependencies available on Maven Central and usable in any standard Maven project.

Here are a few interesting advantages of WebJars:

  1. We can explicitly and easily manage the client-side dependencies in JVM-based web applications
  2. We can use them with any commonly used build tool, e.g., Maven, Gradle, etc.
  3. WebJars behave like any other Maven dependency – which means that we get transitive dependencies as well

3. The Maven Dependency

Let’s jump right into it and add Twitter Bootstrap and jQuery to pom.xml:

<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>bootstrap</artifactId>
    <version>3.3.7-1</version>
</dependency>
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>jquery</artifactId>
    <version>3.1.1</version>
</dependency>

Now Twitter Bootstrap and jQuery are available on the project classpath; we can simply reference them and use them in our application.

Note: You can check the latest version of the Twitter Bootstrap and the jQuery dependencies on Maven Central.

4. The Simple App

With these two WebJar dependencies defined, let’s now set up a simple Spring MVC project to be able to use the client-side dependencies.

Before we get to that however, it’s important to understand that WebJars have nothing to do with Spring, and we’re only using Spring here because it’s a very quick and simple way to set up an MVC project.

Here’s a good place to start to set up the Spring MVC and Spring Boot project.

And, with the simple project set up, we’ll define some mappings for our new client dependencies:

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
          .addResourceHandler("/webjars/**")
          .addResourceLocations("/webjars/");
    }
}

We can of course do that via XML as well:

<mvc:resources mapping="/webjars/**" location="/webjars/"/>

5. Version-Agnostic Dependencies

Spring Framework version 4.2 or higher will automatically detect the webjars-locator library on the classpath and use it to resolve the version of any WebJars assets.

In order to enable this feature, we’ll add the webjars-locator library as a dependency of the application:

<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>webjars-locator</artifactId>
    <version>0.30</version>
</dependency>

In this case, we can reference the WebJars assets without using the version; see the next section for a couple of actual examples.

6. WebJars on the Client

Let’s add a simple plain HTML welcome page to our application (this is index.html):

<html>
    <head>
        <title>WebJars Demo</title>
    </head>
    <body> 
    </body>
</html>

Now we can use Twitter Bootstrap and jQuery in the project – let’s use both in our welcome page, starting with Bootstrap:

<script src="/webjars/bootstrap/3.3.7-1/js/bootstrap.min.js"></script>

For a version-agnostic approach:

<script src="/webjars/bootstrap/js/bootstrap.min.js"></script>

Add jQuery:

<script src="/webjars/jquery/3.1.1/jquery.min.js"></script>

And the version-agnostic approach:

<script src="/webjars/jquery/jquery.min.js"></script>

7. Testing

Now that we’ve added Twitter Bootstrap and jQuery in our HTML page, let’s test them.

We’ll add a bootstrap alert into our page:

<div class="container"><br/>
    <div class="alert alert-success">		  
        <strong>Success!</strong> It is working as we expected.
    </div>
</div>

Note that some basic understanding of Twitter Bootstrap is assumed here; here’s the getting started guide on the official site.

This will show an alert as shown below, which means we have successfully added Twitter Bootstrap to our classpath.

Let’s use jQuery now. We’ll add a close button to this alert:

<a href="#" class="close" data-dismiss="alert" aria-label="close">×</a>

Now we need to add jQuery and bootstrap.min.js for the close button functionality, so add them inside body tag of index.html, as below:

<script src="/webjars/jquery/3.1.1/jquery.min.js"></script>
<script src="/webjars/bootstrap/3.3.7-1/js/bootstrap.min.js"></script>

Note: If you are using version-agnostic approach, be sure to remove only the version from the path, otherwise, relative imports may not work:

<script src="/webjars/jquery/jquery.min.js"></script>
<script src="/webjars/bootstrap/js/bootstrap.min.js"></script>

This is how our final welcome page should look:

<html>
    <head>
        <script src="/webjars/jquery/3.1.1/jquery.min.js"></script>
        <script src="/webjars/bootstrap/3.3.7-1/js/bootstrap.min.js"></script>
        <title>WebJars Demo</title>
        <link rel="stylesheet" 
          href="/webjars/bootstrap/3.3.7-1/css/bootstrap.min.css" />
    </head>
    <body>
        <div class="container"><br/>
            <div class="alert alert-success">
                <a href="#" class="close" data-dismiss="alert" 
                  aria-label="close">×</a>
                <strong>Success!</strong> It is working as we expected.
            </div>
        </div>
    </body>
</html>

This is how the application should look. (And the alert should disappear when clicking the close button.)
8. Conclusion

In this quick article, we focused on the basics of using WebJars in a JVM-based project, which makes development and maintenance a lot easier.

We implemented a Spring Boot backed project and used Twitter Bootstrap and jQuery in our project using WebJars.

The source code of the above-used example can be found in the Github project – this is a Maven project, so it should be easy to import and build.

An Introduction To Spring JMS


1. Overview

Spring provides a JMS Integration framework that simplifies the use of the JMS API. This article introduces basic concepts of such integration.

2. Maven Dependency

In order to use Spring JMS in our application, we need to add necessary artifacts in the pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jms</artifactId>
    <version>4.3.3.RELEASE</version>
</dependency>

The newest version of the artifact can be found here.

3. The JmsTemplate

The JmsTemplate class handles the creation and releasing of resources when sending or synchronously receiving messages.

Hence the class that uses this JmsTemplate only needs to implement callback interfaces as specified in the method definition.

Starting with Spring 4.1, the JmsMessagingTemplate is built on top of JmsTemplate, which provides an integration with the messaging abstraction, i.e., org.springframework.messaging.Message; this in turn allows us to create a message to send in a generic manner.

4. Connection Management

In order to connect and be able to send/receive messages, we need to configure a ConnectionFactory.

A ConnectionFactory is one of the JMS administered objects which are preconfigured by an administrator. A client with the help of the configuration will make the connection with a JMS provider.

Spring provides 2 types of ConnectionFactory:

  • SingleConnectionFactory – is an implementation of ConnectionFactory interface, that will return the same connection on all createConnection() calls and ignore calls to close()
  • CachingConnectionFactory – extends the functionality of the SingleConnectionFactory and enhances it with caching of Sessions, MessageProducers, and MessageConsumers

5. Destination Management

As discussed above, along with the ConnectionFactory, destinations are also JMS administered objects and can be stored in and retrieved from JNDI.

Spring provides generic resolvers like DynamicDestinationResolver and specific resolvers such as JndiDestinationResolver.

The JmsTemplate will delegate the resolution of the destination name to one of the implementations based on our selection.

It will also provide a property called defaultDestination – which will be used with send and receive operations that do not refer to a specific destination.

6. Message Conversion

Spring JMS would be incomplete without the support of Message Converters.

The default conversion strategy used by JmsTemplate for both convertAndSend() and receiveAndConvert() operations is the SimpleMessageConverter class.

The SimpleMessageConverter is able to handle TextMessages, BytesMessages, MapMessages, and ObjectMessages. This class implements the MessageConverter interface.

Apart from SimpleMessageConverter, Spring JMS provides some other MessageConverter classes out of the box like MappingJackson2MessageConverter, MarshallingMessageConverter, MessagingMessageConverter.

Moreover, we can create custom message conversion functionality simply by implementing the MessageConverter interface’s toMessage() and fromMessage() methods.

Let us see a sample code snippet implementing a custom MessageConverter:

public class SampleMessageConverter implements MessageConverter {
    public Object fromMessage(Message message) 
      throws  JMSException, MessageConversionException {
        //...
    }

    public Message toMessage(Object object, Session session)
      throws  JMSException, MessageConversionException { 
        //...
    }
}

7. Sample Spring JMS

In this section, we will see how JmsTemplate is used for sending and receiving messages.

The default method for sending the message is JmsTemplate.send(). It has two key parameters: the first is the JMS destination, and the second is an implementation of MessageCreator, which contains the callback method createMessage() that JmsTemplate will use to construct the message to be sent.

JmsTemplate.send() is good for sending plain text messages but in order to send custom messages, JmsTemplate has another method called convertAndSend().

We can see below the implementation of these methods:

public class SampleJmsMessageSender {

    private JmsTemplate jmsTemplate;
    private Queue queue;

    // setters for jmsTemplate & queue

    public void simpleSend() {
        jmsTemplate.send(queue, s -> s.createTextMessage("hello queue world"));
    }
    public void sendMessage(Employee employee) { 
        System.out.println("Jms Message Sender : " + employee); 
        Map<String, Object> map = new HashMap<>(); 
        map.put("name", employee.getName()); map.put("age", employee.getAge()); 
        jmsTemplate.convertAndSend(map); 
    }
}

Below is the message receiver class; we call it a Message-Driven POJO (MDP). We can see that the SampleListener class implements the MessageListener interface and provides the text-specific implementation for the interface method onMessage().

Apart from the onMessage() method, our SampleListener class also has a receiveMessage() method that uses receiveAndConvert() for receiving custom messages:

public class SampleListener implements MessageListener {

    private JmsTemplate jmsTemplate;

    public JmsTemplate getJmsTemplate() {
        return jmsTemplate;
    }

    public void onMessage(Message message) {
        if (message instanceof TextMessage) {
            try {
                String msg = ((TextMessage) message).getText();
                System.out.println("Message has been consumed : " + msg);
            } catch (JMSException ex) {
                throw new RuntimeException(ex);
            }
        } else {
            throw new IllegalArgumentException("Message Error");
        }
    }

    public Employee receiveMessage() throws JMSException {
        Map map = (Map) getJmsTemplate().receiveAndConvert();
        return new Employee((String) map.get("name"), (Integer) map.get("age"));
    }
}

We saw how to implement MessageListener and below we see the configuration in Spring application context:

<bean id="messageListener" class="com.baeldung.spring.jms.SampleListener" /> 

<bean id="jmsContainer" 
  class="org.springframework.jms.listener.DefaultMessageListenerContainer"> 
    <property name="connectionFactory" ref="connectionFactory"/> 
    <property name="destinationName" value="IN_QUEUE"/> 
    <property name="messageListener" ref="messageListener" /> 
</bean>

DefaultMessageListenerContainer is the default message listener container Spring provides along with many other specialized containers.

8. Configuration with Annotations

@JmsListener is the only annotation required to convert a method of a normal bean into a JMS listener endpoint. Spring JMS provides many more annotations to ease the JMS implementation. We can see a sample annotated method below:

@JmsListener(destination = "myDestination")
public void sampleJmsListenerMethod(Message<Order> order) { ... }

In order to add multiple listeners to a single method, we just need to add multiple @JmsListener annotations.

@EnableJms is the annotation we add to one of our configuration classes to support the @JmsListener annotated methods discussed above:

@Configuration
@EnableJms
public class AppConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory 
          = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        return factory;
    }
}

9. Conclusion

In this tutorial, we discussed the configuration and basic concepts of Spring JMS. We also had a brief look at the Spring-specific JmsTemplate classes used for sending and receiving messages.

You can find the code implementation in the GitHub project.


Intro to Mapping with MapStruct


1. Overview

In this article, we’ll explore the use of MapStruct which is, simply put, a Java Bean mapper.

This API contains functions that automatically map between two Java Beans. With MapStruct we only need to create the interface and the library will automatically create a concrete implementation during compile time.

2. MapStruct and Transfer Object Pattern

For most applications, you’ll notice a lot of boilerplate code converting POJOs to other POJOs.

For example, a common type of conversion happens between persistence-backed entities and DTOs that go out to the client side.

So that is the problem that MapStruct solves: manually creating bean mappers is time-consuming. The library is able to generate bean mapper classes automatically.

3. Maven

Let’s add the below dependency into our Maven pom.xml:

<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct-jdk8</artifactId>
    <version>1.0.0.Final</version> 
</dependency>

The latest stable release can always be found here.

Let’s also add the annotationProcessorPaths section to the configuration part of the maven-compiler-plugin plugin. The mapstruct-processor is used to generate the mapper implementation during the build:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.5.1</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <annotationProcessorPaths>
            <path>
                <groupId>org.mapstruct</groupId>
                <artifactId>mapstruct-processor</artifactId>
                <version>${org.mapstruct.version}</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>

4. Basic Mapping

4.1. Creating a POJO

Let’s first create two simple Java POJOs:

public class SimpleSource {

    private String name;
    private String description;

    // getters and setters
}
 
public class SimpleDestination {

    private String name;
    private String description;

    // getters and setters
}

4.2. The Mapper Interface

@Mapper
public interface SimpleSourceDestinationMapper {
    SimpleDestination sourceToDestination(SimpleSource source);
    SimpleSource destinationToSource(SimpleDestination destination);
}

Notice we did not create any implementation class for our SimpleSourceDestinationMapper – because MapStruct creates it for us.

4.3. The New Mapper

We can trigger the MapStruct processing by executing an mvn clean install.

This will generate the implementation class under /target/generated-sources/annotations/.

Here is the class that MapStruct auto-creates for us:

public class SimpleSourceDestinationMapperImpl
  implements SimpleSourceDestinationMapper {

    @Override
    public SimpleDestination sourceToDestination(SimpleSource source) {
        if ( source == null ) {
            return null;
        }

        SimpleDestination simpleDestination = new SimpleDestination();

        simpleDestination.setName( source.getName() );
        simpleDestination.setDescription( source.getDescription() );

        return simpleDestination;
    }

    @Override
    public SimpleSource destinationToSource(SimpleDestination destination){
        if ( destination == null ) {
            return null;
        }

        SimpleSource simpleSource = new SimpleSource();

        simpleSource.setName( destination.getName() );
        simpleSource.setDescription( destination.getDescription() );

        return simpleSource;
    }
}

4.4. A Test Case

Finally, with everything generated, let’s write a test case that shows that values in SimpleSource match values in SimpleDestination:

public class SimpleSourceDestinationMapperTest {

    private SimpleSourceDestinationMapper mapper
      = Mappers.getMapper(SimpleSourceDestinationMapper.class);

    @Test
    public void givenSourceToDestination_whenMaps_thenCorrect() {
        SimpleSource simpleSource = new SimpleSource();
        simpleSource.setName("SourceName");
        simpleSource.setDescription("SourceDescription");
        SimpleDestination destination = mapper.sourceToDestination(simpleSource);
 
        assertEquals(simpleSource.getName(), destination.getName());
        assertEquals(simpleSource.getDescription(), 
          destination.getDescription());
    }

    @Test
    public void givenDestinationToSource_whenMaps_thenCorrect() {
        SimpleDestination destination = new SimpleDestination();
        destination.setName("DestinationName");
        destination.setDescription("DestinationDescription");

        SimpleSource source = mapper.destinationToSource(destination);

        assertEquals(destination.getName(), source.getName());
        assertEquals(destination.getDescription(),
          source.getDescription());
    }
}

5. Mapping with Dependency Injection

So far, we’ve been obtaining an instance of a mapper by calling Mappers.getMapper(YourClass.class).

Of course that’s a very manual way of getting the instance – a much better alternative would be injecting the mapper directly where we need it (if our project uses any Dependency Injection solution).

Luckily MapStruct has solid support for both Spring and CDI (Contexts and Dependency Injection).

To use Spring IoC in our mapper, we simply need to add the componentModel attribute to @Mapper with the value spring; for CDI, the value would be cdi.

5.1. Modify the Mapper

Add the following code to SimpleSourceDestinationMapper:

@Mapper(componentModel = "spring")
public interface SimpleSourceDestinationMapper

6. Mapping Fields with Different Field Names

From our previous example, MapStruct was able to map our beans automatically because they have the same field names. So what if a bean we are about to map has a different field name?

For our example, we will create two new beans, Employee and EmployeeDTO.

6.1. New POJOs

public class EmployeeDTO {

    private int employeeId;
    private String employeeName;

    // getters and setters
}
public class Employee {

    private int id;
    private String name;

    // getters and setters
}

6.2. The Mapper Interface

When mapping different field names, we need to configure the source field for each target field, and to do that we add the @Mappings annotation. This annotation accepts an array of @Mapping annotations, which we use to set the target and source attributes.

In MapStruct we can also use dot notation to define a member of a bean:

@Mapper
public interface EmployeeMapper {

    @Mappings({
      @Mapping(target="employeeId", source="entity.id"),
      @Mapping(target="employeeName", source="entity.name")
    })
    EmployeeDTO employeeToEmployeeDTO(Employee entity);

    @Mappings({
      @Mapping(target="id", source="dto.employeeId"),
      @Mapping(target="name", source="dto.employeeName")
    })
    Employee employeeDTOtoEmployee(EmployeeDTO dto);
}

6.3. The Test Case

Again we need to test that both source and destination object values match:

@Test
public void givenEmpDTODiffNametoEmp_whenMaps_thenCorrect() {
    EmployeeDTO dto = new EmployeeDTO();
    dto.setEmployeeId(1);
    dto.setEmployeeName("John");

    Employee entity = mapper.employeeDTOtoEmployee(dto);

    assertEquals(dto.getEmployeeId(), entity.getId());
    assertEquals(dto.getEmployeeName(), entity.getName());
}

More test cases can be found in the Github project.

7. Mapping Beans with Child Beans

Next we’ll show how to map a bean with references to other beans.

7.1. Modify the POJO

Let’s add a new bean reference in the Employee object:

public class EmployeeDTO {

    private int employeeId;
    private String employeeName;
    private DivisionDTO division;

    // getters and setters omitted
}
public class Employee {

    private int id;
    private String name;
    private Division division;

    // getters and setters omitted
}
public class Division {

    private int id;
    private String name;

    // default constructor, getters and setters omitted
}

7.2. Modify the Mapper

Here we need to add a method to convert the Division to DivisionDTO and vice versa; if MapStruct detects that the object type needs to be converted and the method to convert exists in the same class then it will use it automatically.

Let’s add this to the mapper:

DivisionDTO divisionToDivisionDTO(Division entity);

Division divisionDTOtoDivision(DivisionDTO dto);

7.3. Modify the Test Case

Let’s modify and add a few test cases to the existing one:

@Test
public void givenEmpDTONestedMappingToEmp_whenMaps_thenCorrect() {
    EmployeeDTO dto = new EmployeeDTO();
    dto.setDivision(new DivisionDTO(1, "Division1"));

    Employee entity = mapper.employeeDTOtoEmployee(dto);

    assertEquals(dto.getDivision().getId(), 
      entity.getDivision().getId());
    assertEquals(dto.getDivision().getName(), 
      entity.getDivision().getName());
}

8. Mapping With Type Conversion

MapStruct also offers a couple of ready-made implicit type conversions; for our example, we will try to convert a String date to an actual Date object.

For more details on implicit type conversion, you may read the MapStruct reference guide.
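
To see what such an implicit conversion amounts to, here is a standalone sketch of the String-to-Date round trip that the generated mapper performs for the dateFormat used below – plain SimpleDateFormat, no MapStruct required:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateConversionSketch {

    public static void main(String[] args) throws ParseException {
        // The same pattern we will pass to @Mapping's dateFormat attribute
        SimpleDateFormat format = new SimpleDateFormat("dd-MM-yyyy HH:mm:ss");

        // String -> Date, as the generated DTO-to-entity mapping does
        Date date = format.parse("01-04-2016 01:00:00");

        // Date -> String, as the generated entity-to-DTO mapping does
        System.out.println(format.format(date));
    }
}
```

The round trip reproduces the original string, which is exactly what the tests at the end of this section rely on.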

8.1. Modify the Beans

Add a start date for our employee:

public class Employee {

    // other fields
    private Date startDt;

    // getters and setters
}
public class EmployeeDTO {

    // other fields
    private String employeeStartDt;

    // getters and setters
}

8.2. Modify the Mapper

Modify the mapper and provide the dateFormat for our start date:

@Mappings({
  @Mapping(target="employeeId", source = "entity.id"),
  @Mapping(target="employeeName", source = "entity.name"),
  @Mapping(target="employeeStartDt", source = "entity.startDt",
           dateFormat = "dd-MM-yyyy HH:mm:ss")})
EmployeeDTO employeeToEmployeeDTO(Employee entity);

@Mappings({
  @Mapping(target="id", source="dto.employeeId"),
  @Mapping(target="name", source="dto.employeeName"),
  @Mapping(target="startDt", source="dto.employeeStartDt",
           dateFormat="dd-MM-yyyy HH:mm:ss")})
Employee employeeDTOtoEmployee(EmployeeDTO dto);

8.3. Modify the Test Case

Let’s add a few more test cases to verify that the conversion is correct:

private static final String DATE_FORMAT = "dd-MM-yyyy HH:mm:ss";

@Test
public void givenEmpStartDtMappingToEmpDTO_whenMaps_thenCorrect() throws ParseException {
    Employee entity = new Employee();
    entity.setStartDt(new Date());

    EmployeeDTO dto = mapper.employeeToEmployeeDTO(entity);
    SimpleDateFormat format = new SimpleDateFormat(DATE_FORMAT);
 
    assertEquals(format.parse(dto.getEmployeeStartDt()).toString(),
      entity.getStartDt().toString());
}

@Test
public void givenEmpDTOStartDtMappingToEmp_whenMaps_thenCorrect() throws ParseException {
    EmployeeDTO dto = new EmployeeDTO();
    dto.setEmployeeStartDt("01-04-2016 01:00:00");

    Employee entity = mapper.employeeDTOtoEmployee(dto);
    SimpleDateFormat format = new SimpleDateFormat(DATE_FORMAT);
 
    assertEquals(format.parse(dto.getEmployeeStartDt()).toString(),
      entity.getStartDt().toString());
}

9. Conclusion

This article provided an introduction to MapStruct. We covered most of the basics of the mapping library and how to use it in our applications.

The implementation of these examples and tests can be found in the Github project. This is a Maven project, so it should be easy to import and run as it is.

I just released the Master Class of "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Spring Security: Authentication with a Database-backed UserDetailsService



1. Overview

In this article, we will show how to create a custom database-backed UserDetailsService for authentication with Spring Security.

2. UserDetailsService

The UserDetailsService interface is used to retrieve user-related data. It has one method named loadUserByUsername(), which finds a user entity based on the username and can be implemented to customize the process of finding the user.

It is used by the DaoAuthenticationProvider to load details about the user during authentication.
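
To see the contract in isolation, here is a self-contained sketch that uses simplified stand-ins for the Spring Security types (the real UserDetails also exposes the password, authorities and account-status flags, and the real failure type is UsernameNotFoundException):

```java
import java.util.Map;

public class UserDetailsServiceSketch {

    // Simplified stand-ins for the Spring Security interfaces
    interface UserDetails { String getUsername(); }
    interface UserDetailsService { UserDetails loadUserByUsername(String username); }

    public static void main(String[] args) {
        // An in-memory "user store" standing in for a database
        Map<String, String> users = Map.of("john", "secret");

        UserDetailsService service = username -> {
            if (!users.containsKey(username)) {
                // Spring Security would throw UsernameNotFoundException here
                throw new RuntimeException("User not found: " + username);
            }
            return () -> username;
        };

        System.out.println(service.loadUserByUsername("john").getUsername());
    }
}
```

The database-backed implementation in the rest of this article follows the same shape: look the user up, fail if absent, otherwise wrap the result in a UserDetails.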

3. The User Model

For storing users, we will create a User entity that is mapped to a database table, with the following attributes:

@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(nullable = false, unique = true)
    private String username;

    private String password;

    //standard getters and setters
}

4. Retrieving a User

In order to retrieve the user associated with a username, we will create a DAO interface using Spring Data by extending the JpaRepository interface:

public interface UserRepository extends JpaRepository<User, Long> {

    User findByUsername(String username);
}

5. The UserDetailsService

In order to provide our own user service, we will need to implement the UserDetailsService interface.

We’ll create a class called MyUserDetailsService that overrides the method loadUserByUsername() of the interface.

In this method, we retrieve the User object using the DAO and, if it exists, wrap it in a MyUserPrincipal object, which implements UserDetails, and return it:

@Service
public class MyUserDetailsService implements UserDetailsService {

    @Autowired
    private UserRepository userRepository;

    @Override
    public UserDetails loadUserByUsername(String username) {
        User user = userRepository.findByUsername(username);
        if (user == null) {
            throw new UsernameNotFoundException(username);
        }
        return new MyUserPrincipal(user);
    }
}

The MyUserPrincipal class is defined as follows:

public class MyUserPrincipal implements UserDetails {
    private User user;

    public MyUserPrincipal(User user) {
        this.user = user;
    }
    //...
}

6. Spring Configuration

We will demonstrate both types of Spring configuration, XML and annotation-based, either of which can be used to wire in our custom UserDetailsService implementation.

6.1. Annotation Configuration

Using Spring annotations, we will define the UserDetailsService bean, and set it as a property of the authenticationProvider bean, which we inject into the authenticationManager:

@Override
protected void configure(AuthenticationManagerBuilder auth)
  throws Exception {
    auth.userDetailsService(userDetailsService);
}

@Bean
public DaoAuthenticationProvider authenticationProvider() {
    DaoAuthenticationProvider authProvider
      = new DaoAuthenticationProvider();
    authProvider.setUserDetailsService(userDetailsService);
    authProvider.setPasswordEncoder(encoder());
    return authProvider;
}

@Bean
public PasswordEncoder encoder() {
    return new BCryptPasswordEncoder(11);
}

6.2. XML Configuration

For the XML configuration, we need to define a bean with type MyUserDetailsService, and inject it into Spring’s authentication-provider bean:

<bean id="myUserDetailsService" 
  class="org.baeldung.security.MyUserDetailsService"/>

<security:authentication-manager>
    <security:authentication-provider 
      user-service-ref="myUserDetailsService" >
        <security:password-encoder ref="passwordEncoder">
        </security:password-encoder>
    </security:authentication-provider>
</security:authentication-manager>
    
<bean id="passwordEncoder" 
  class="org.springframework.security
  .crypto.bcrypt.BCryptPasswordEncoder">
    <constructor-arg value="11"/>
</bean>

7. Conclusion

In this article, we’ve shown how to create a custom Spring-based UserDetailsService backed by persistent data.

The implementation can be found in the GitHub project – this is a Maven based project, so it should be easy to import and run as it is.


Java Web Weekly, Issue 147


A forward-looking week with a focus on reactive programming.

Here we go…

1. Spring and Java

>> Project Valhalla: Goals [mail.openjdk.java.net]

A very interesting read about the Java language itself and the direction it’s potentially heading into.

>> Testing RxJava [infoq.com]

Another very interesting look at RxJava, this time driven with tests. I’m definitely excited about version 2 coming out soon.

>> Small scale stream processing kata. Part 2: RxJava 1.x/2.x [nurkiewicz.com]

Reactive programming is clearly going mainstream in the Java ecosystem over the next 12 to 24 months.

Here’s another practical look at the main player in that space right now – RxJava.

This reaction writeup is also worth reading.

>> Spring Cloud Pipelines [spring.io]

A new (and heavily opinionated) Spring Cloud project – with the goal of quickly rolling out a non-trivial deployment pipeline.

Also check out Marcin’s announcement here.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Musings

>> The New Minimalism [prog21.dadgum.com]

Good advice on working with abstractions and developing software in a pragmatic way.

>> Why coding is more fun than engineering [swizec.com]

Getting an insight into how someone else works is – at least for me – very useful. I generally am able to glean something out of it that I can apply to my own work and make it better.

That’s why this quick writeup was definitely a good read.

>> Reviewing Strangers’ Code on Github [daedtech.com]

Interesting suggestion on stepping out of the comfort zone of your own codebase and into another.

Whether or not Github is your jam, definitely do these kinds of reviews on the regular, as staying in a single codebase for years is often a cap on your growth.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> I majored in art [dilbert.com]

>> Our new product is cannibalizing our old product [dilbert.com]

>> The best way to evaluate investment funds [dilbert.com]

4. Pick of the Week

This week I’m picking the new Hibernate course over on Thoughts on Java – given that the price increases in only 3 days:

>> The new Hibernate Course

Here’s a video series to learn more about the material.

A Guide to Cassandra with Java


Thorben (from Thoughts on Java) just opened up his new Advanced Hibernate training. The early-bird pricing is available for a few days (until next Monday I think).

1. Overview

This tutorial is an introductory guide to the Apache Cassandra database using Java.

You will find key concepts explained, along with a working example that covers the basic steps to connect to and start working with this NoSQL database from Java.

2. Cassandra

Cassandra is a scalable NoSQL database that provides continuous availability with no single point of failure and gives the ability to handle large amounts of data with exceptional performance.

This database uses a ring design instead of using a master-slave architecture. In the ring design, there is no master node – all participating nodes are identical and communicate with each other as peers.

This makes Cassandra a horizontally scalable system by allowing for the incremental addition of nodes without needing reconfiguration.

2.1. Key Concepts

Let’s start with a short survey of some of the key concepts of Cassandra:

  • Cluster – a collection of nodes or Data Centers arranged in a ring architecture. A name must be assigned to every cluster, which will subsequently be used by the participating nodes
  • Keyspace – If you are coming from a relational database, then the schema is the respective keyspace in Cassandra. The keyspace is the outermost container for data in Cassandra. The main attributes to set per keyspace are the Replication Factor, the Replica Placement Strategy and the Column Families
  • Column Family – Column Families in Cassandra are like tables in relational databases. Each Column Family contains a collection of rows, represented by a Map<RowKey, SortedMap<ColumnKey, ColumnValue>>. The key gives the ability to access related data together
  • Column – A column in Cassandra is a data structure which contains a column name, a value and a timestamp. The columns and the number of columns in each row may vary in contrast with a relational database where data are well structured

3. Using the Java Client

3.1. Maven Dependency

We need to define the following Cassandra dependency in the pom.xml, the latest version of which can be found here:

<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.1.0</version>
</dependency>

In order to test the code with an embedded database server we should also add the cassandra-unit dependency, the latest version of which can be found here:

<dependency>
    <groupId>org.cassandraunit</groupId>
    <artifactId>cassandra-unit</artifactId>
    <version>3.0.0.1</version>
</dependency>

3.2. Connecting to Cassandra

In order to connect to Cassandra from Java, we need to build a Cluster object.

An address of a node needs to be provided as a contact point. If we don’t provide a port number, the default port (9042) will be used.

These settings allow the driver to discover the current topology of a cluster.

public class CassandraConnector {

    private Cluster cluster;

    private Session session;

    public void connect(String node, Integer port) {
        Builder b = Cluster.builder().addContactPoint(node);
        if (port != null) {
            b.withPort(port);
        }
        cluster = b.build();

        session = cluster.connect();
    }

    public Session getSession() {
        return this.session;
    }

    public void close() {
        session.close();
        cluster.close();
    }
}

3.3. Creating the Keyspace

Let’s create our “library” keyspace:

public void createKeyspace(
  String keyspaceName, String replicationStrategy, int replicationFactor) {
  StringBuilder sb = 
    new StringBuilder("CREATE KEYSPACE IF NOT EXISTS ")
      .append(keyspaceName).append(" WITH replication = {")
      .append("'class':'").append(replicationStrategy)
      .append("','replication_factor':").append(replicationFactor)
      .append("};");
        
    String query = sb.toString();
    session.execute(query);
}

Apart from the keyspaceName, we need to define two more parameters: the replicationFactor and the replicationStrategy. These parameters determine the number of replicas and how the replicas will be distributed across the ring, respectively.

With replication Cassandra ensures reliability and fault tolerance by storing copies of data in multiple nodes.
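
To make the generated CQL concrete, here is a standalone sketch that builds the same statement as the createKeyspace helper above and prints it – no running cluster required:

```java
public class KeyspaceCqlSketch {

    // Mirrors the createKeyspace helper above, but just returns the CQL string
    static String buildCreateKeyspace(
      String keyspaceName, String replicationStrategy, int replicationFactor) {
        return new StringBuilder("CREATE KEYSPACE IF NOT EXISTS ")
          .append(keyspaceName).append(" WITH replication = {")
          .append("'class':'").append(replicationStrategy)
          .append("','replication_factor':").append(replicationFactor)
          .append("};")
          .toString();
    }

    public static void main(String[] args) {
        System.out.println(buildCreateKeyspace("library", "SimpleStrategy", 1));
    }
}
```

For the “library” keyspace with SimpleStrategy and a replication factor of 1, this prints the exact statement that session.execute() receives.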

At this point we may test that our keyspace has successfully been created:

private KeyspaceRepository schemaRepository;
private Session session;

@Before
public void connect() {
    CassandraConnector client = new CassandraConnector();
    client.connect("127.0.0.1", 9142);
    this.session = client.getSession();
    schemaRepository = new KeyspaceRepository(session);
}
@Test
public void whenCreatingAKeyspace_thenCreated() {
    String keyspaceName = "library";
    schemaRepository.createKeyspace(keyspaceName, "SimpleStrategy", 1);

    ResultSet result = 
      session.execute("SELECT * FROM system_schema.keyspaces;");

    List<String> matchedKeyspaces = result.all()
      .stream()
      .filter(r -> r.getString(0).equals(keyspaceName.toLowerCase()))
      .map(r -> r.getString(0))
      .collect(Collectors.toList());

    assertEquals(matchedKeyspaces.size(), 1);
    assertTrue(matchedKeyspaces.get(0).equals(keyspaceName.toLowerCase()));
}

3.4. Creating a Column Family

Now, we can add the first Column Family “books” to the existing keyspace:

private static final String TABLE_NAME = "books";
private Session session;

public void createTable() {
    StringBuilder sb = new StringBuilder("CREATE TABLE IF NOT EXISTS ")
      .append(TABLE_NAME).append("(")
      .append("id uuid PRIMARY KEY, ")
      .append("title text,")
      .append("subject text);");

    String query = sb.toString();
    session.execute(query);
}

The code to test that the Column Family has been created is provided below:

private BookRepository bookRepository;
private Session session;

@Before
public void connect() {
    CassandraConnector client = new CassandraConnector();
    client.connect("127.0.0.1", 9142);
    this.session = client.getSession();
    bookRepository = new BookRepository(session);
}
@Test
public void whenCreatingATable_thenCreatedCorrectly() {
    bookRepository.createTable();

    ResultSet result = session.execute(
      "SELECT * FROM " + KEYSPACE_NAME + ".books;");

    List<String> columnNames = 
      result.getColumnDefinitions().asList().stream()
      .map(cl -> cl.getName())
      .collect(Collectors.toList());
        
    assertEquals(columnNames.size(), 3);
    assertTrue(columnNames.contains("id"));
    assertTrue(columnNames.contains("title"));
    assertTrue(columnNames.contains("subject"));
}

3.5. Altering the Column Family

A book has also a publisher, but no such column can be found in the created table. We can use the following code to alter the table and add a new column:

public void alterTablebooks(String columnName, String columnType) {
    StringBuilder sb = new StringBuilder("ALTER TABLE ")
      .append(TABLE_NAME).append(" ADD ")
      .append(columnName).append(" ")
      .append(columnType).append(";");

    String query = sb.toString();
    session.execute(query);
}

Let’s make sure that the new column publisher has been added:

@Test
public void whenAlteringTable_thenAddedColumnExists() {
    bookRepository.createTable();

    bookRepository.alterTablebooks("publisher", "text");

    ResultSet result = session.execute(
      "SELECT * FROM " + KEYSPACE_NAME + "." + "books" + ";");

    boolean columnExists = result.getColumnDefinitions().asList().stream()
      .anyMatch(cl -> cl.getName().equals("publisher"));
        
    assertTrue(columnExists);
}

3.6. Inserting Data in the Column Family

Now that the books table has been created, we are ready to start adding data to the table:

public void insertbookByTitle(Book book) {
    StringBuilder sb = new StringBuilder("INSERT INTO ")
      .append(TABLE_NAME_BY_TITLE).append("(id, title) ")
      .append("VALUES (").append(book.getId())
      .append(", '").append(book.getTitle()).append("');");

    String query = sb.toString();
    session.execute(query);
}

A new row has been added in the ‘books’ table, so we can test if the row exists:

@Test
public void whenAddingANewBook_thenBookExists() {
    bookRepository.createTableBooksByTitle();

    String title = "Effective Java";
    Book book = new Book(UUIDs.timeBased(), title, "Programming");
    bookRepository.insertbookByTitle(book);
        
    Book savedBook = bookRepository.selectByTitle(title);
    assertEquals(book.getTitle(), savedBook.getTitle());
}

In the test code above we have used a different method to create a table named booksByTitle:

public void createTableBooksByTitle() {
    StringBuilder sb = new StringBuilder("CREATE TABLE IF NOT EXISTS ")
      .append("booksByTitle").append("(")
      .append("id uuid, ")
      .append("title text,")
      .append("PRIMARY KEY (title, id));");

    String query = sb.toString();
    session.execute(query);
}

In Cassandra one of the best practices is to use one-table-per-query pattern. This means, for a different query a different table is needed.

In our example, we have chosen to select a book by its title. In order to satisfy the selectByTitle query, we have created a table with a compound PRIMARY KEY using the columns title and id. The column title is the partition key, while the id column is the clustering key.

This way, many of the tables in your data model contain duplicate data. This is not a downside of this database. On the contrary, this practice optimizes the performance of the reads.

Let’s see the data that are currently saved in our table:

public List<Book> selectAll() {
    StringBuilder sb = 
      new StringBuilder("SELECT * FROM ").append(TABLE_NAME);

    String query = sb.toString();
    ResultSet rs = session.execute(query);

    List<Book> books = new ArrayList<Book>();

    rs.forEach(r -> {
        books.add(new Book(
          r.getUUID("id"), 
          r.getString("title"),  
          r.getString("subject")));
    });
    return books;
}

A test to verify that the query returns the expected results:

@Test
public void whenSelectingAll_thenReturnAllRecords() {
    bookRepository.createTable();
        
    Book book = new Book(
      UUIDs.timeBased(), "Effective Java", "Programming");
    bookRepository.insertbook(book);
      
    book = new Book(
      UUIDs.timeBased(), "Clean Code", "Programming");
    bookRepository.insertbook(book);
        
    List<Book> books = bookRepository.selectAll(); 
        
    assertEquals(2, books.size());
    assertTrue(books.stream().anyMatch(b -> b.getTitle()
      .equals("Effective Java")));
    assertTrue(books.stream().anyMatch(b -> b.getTitle()
      .equals("Clean Code")));
}

Everything is fine so far, but one thing should be pointed out: we started working with the table books, but in the meantime, in order to satisfy the select query by the title column, we had to create another table named booksByTitle.

The two tables are identical and contain duplicated columns, but we have only inserted data into the booksByTitle table. As a consequence, the data in the two tables is currently inconsistent.

We can solve this using a batch query, which comprises two insert statements, one for each table. A batch query executes multiple DML statements as a single operation.

An example of such a query is provided below:

public void insertBookBatch(Book book) {
    StringBuilder sb = new StringBuilder("BEGIN BATCH ")
      .append("INSERT INTO ").append(TABLE_NAME)
      .append("(id, title, subject) ")
      .append("VALUES (").append(book.getId()).append(", '")
      .append(book.getTitle()).append("', '")
      .append(book.getSubject()).append("');")
      .append("INSERT INTO ")
      .append(TABLE_NAME_BY_TITLE).append("(id, title) ")
      .append("VALUES (").append(book.getId()).append(", '")
      .append(book.getTitle()).append("');")
      .append("APPLY BATCH;");

    String query = sb.toString();
    session.execute(query);
}

Again we test the batch query results like so:

@Test
public void whenAddingANewBookBatch_ThenBookAddedInAllTables() {
    bookRepository.createTable();
        
    bookRepository.createTableBooksByTitle();
    
    String title = "Effective Java";
    Book book = new Book(UUIDs.timeBased(), title, "Programming");
    bookRepository.insertBookBatch(book);
    
    List<Book> books = bookRepository.selectAll();
    
    assertEquals(1, books.size());
    assertTrue(
      books.stream().anyMatch(
        b -> b.getTitle().equals("Effective Java")));
        
    List<Book> booksByTitle = bookRepository.selectAllBookByTitle();
    
    assertEquals(1, booksByTitle.size());
    assertTrue(
      booksByTitle.stream().anyMatch(
        b -> b.getTitle().equals("Effective Java")));
}

Note: As of version 3.0, a new feature called “Materialized Views” is available, which we may use instead of batch queries. A well-documented example of “Materialized Views” is available here.

3.7. Deleting the Column Family

The code below shows how to delete a table:

public void deleteTable(String tableName) {
    StringBuilder sb = 
      new StringBuilder("DROP TABLE IF EXISTS ").append(tableName);

    String query = sb.toString();
    session.execute(query);
}

Selecting a table that does not exist in the keyspace results in an InvalidQueryException: unconfigured table books:

@Test(expected = InvalidQueryException.class)
public void whenDeletingATable_thenUnconfiguredTable() {
    bookRepository.createTable();
    bookRepository.deleteTable("books");
       
    session.execute("SELECT * FROM " + KEYSPACE_NAME + ".books;");
}

3.8. Deleting the Keyspace

Finally, let’s delete the keyspace:

public void deleteKeyspace(String keyspaceName) {
    StringBuilder sb = 
      new StringBuilder("DROP KEYSPACE ").append(keyspaceName);

    String query = sb.toString();
    session.execute(query);
}

And test that the keyspace has been deleted:

@Test
public void whenDeletingAKeyspace_thenDoesNotExist() {
    String keyspaceName = "library";
    schemaRepository.deleteKeyspace(keyspaceName);

    ResultSet result = 
      session.execute("SELECT * FROM system_schema.keyspaces;");
    boolean isKeyspaceCreated = result.all().stream()
      .anyMatch(r -> r.getString(0).equals(keyspaceName.toLowerCase()));
        
    assertFalse(isKeyspaceCreated);
}

4. Conclusion

This tutorial covered the basic steps of connecting to and using the Cassandra database with Java. Some of the key concepts of this database have also been discussed to help you get started.

The full implementation of this tutorial can be found in the Github project.


How to Create an Executable JAR with Maven



1. Introduction

In this quick article we’ll focus on packaging a Maven project into an executable Jar file.

Usually, when creating a jar file, we want to execute it easily, without using the IDE; to that end, we’ll discuss the configuration and the pros and cons of each of the approaches for creating the executable.

2. Configuration

In order to create an executable jar, we don’t need any additional dependencies. We just need to create a Maven Java project and have at least one class with a main(…) method.

In our example, we created a Java class named ExecutableMavenJar.
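
The article only names the class, so as a sketch, ExecutableMavenJar can be as simple as this (the body below is our assumption; in the project the class lives in the org.baeldung.executable package):

```java
// A minimal entry-point class for the executable jar;
// in the real project it sits in the org.baeldung.executable package
public class ExecutableMavenJar {

    public static void main(String[] args) {
        System.out.println("Executable jar is running");
    }
}
```

Any class with a standard main method will do; the jar configuration below only needs its fully qualified name.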

We also need to make sure that our pom.xml contains the following elements:

<modelVersion>4.0.0</modelVersion>
<groupId>com.baeldung</groupId>
<artifactId>core-java</artifactId>
<version>0.1.0-SNAPSHOT</version>
<packaging>jar</packaging>

The most important aspect here is the type – to create an executable jar, double-check that the configuration uses the jar type.

Now we can start using the various solutions.

2.1. Manual Configuration

Let’s start with a manual approach – with the help of the maven-dependency-plugin.

First, we’ll copy all required dependencies into the folder that we’ll specify:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>copy-dependencies</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <outputDirectory>
                    ${project.build.directory}/libs
                </outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>

There are two important aspects to notice. First, we specify the goal copy-dependencies, which tells Maven to copy these dependencies into the specified outputDirectory.

In our case, we’ll create a folder named libs, inside the project build directory (which is usually the target folder).

In the second step, we are going to create an executable and classpath-aware jar, with a link to the dependencies copied in the first step:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <archive>
            <manifest>
                <addClasspath>true</addClasspath>
                <classpathPrefix>libs/</classpathPrefix>
                <mainClass>
                    org.baeldung.executable.ExecutableMavenJar
                </mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>

The most important part of the above configuration is the manifest. We add a classpath with all the dependencies (the libs/ folder) and provide the information about the main class.

Please note that we need to provide the fully qualified name of the class, which means it includes the package name.

The advantages and disadvantages of this approach are:

  • pros – a transparent process, where we can specify each step
  • cons – manual; the dependencies are outside the final jar, which means that your executable jar will only run if the libs folder is accessible and visible to the jar

2.2. Apache Maven Assembly Plugin

The Apache Maven Assembly Plugin allows users to aggregate the project output along with its dependencies, modules, site documentation, and other files into a single, runnable package.

The main goal in the assembly plugin is the single goal – used to create all assemblies (all other goals are deprecated and will be removed in a future release).

Let’s take a look at the configuration in pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <archive>
                <manifest>
                    <mainClass>
                        org.baeldung.executable.ExecutableMavenJar
                    </mainClass>
                </manifest>
                </archive>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
        </execution>
    </executions>
</plugin>

Similarly to the manual approach, we need to provide the information about the main class; the difference is that the Maven Assembly Plugin will automatically copy all required dependencies into a jar file.

In the descriptorRefs part of the configuration, we provided the descriptor name jar-with-dependencies, which will be appended to the project name.

The output in our example will be named core-java-jar-with-dependencies.jar.

  • pros – dependencies inside the jar file, one-file only
  • cons – basic control of packaging your artifact, for example, there is no class relocation support

2.3. Apache Maven Shade Plugin

Apache Maven Shade Plugin provides the capability to package the artifact in an uber-jar, which consists of all dependencies required to run the project. Moreover, it supports shading – i.e. rename – the packages of some of the dependencies.

Let’s take a look at the configuration:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <transformers>
                    <transformer implementation=
                      "org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>org.baeldung.executable.ExecutableMavenJar</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

There are three main parts of this configuration:

First, setting <shadedArtifactAttached> to true attaches the shaded jar (containing all dependencies) as an additional artifact with the shaded classifier, instead of replacing the main artifact.

Second, we need to specify the transformer implementation; we used the standard one in our example.

Finally, we need to specify the main class of our application.

The output file will be named core-java-0.1.0-SNAPSHOT-shaded.jar, where core-java is our project name, followed by the version and the shaded classifier.

  • pros – dependencies inside the jar file, advanced control of packaging your artifact, with shading and class relocation
  • cons – complex configuration (especially if we want to use advanced features)

2.4. One Jar Maven Plugin

Another option for creating an executable jar is the One Jar project.

One Jar provides a custom classloader that knows how to load classes and resources from jars inside an archive, instead of from jars in the filesystem.

Let’s take a look at the configuration:

<plugin>
    <groupId>com.jolira</groupId>
    <artifactId>onejar-maven-plugin</artifactId>
    <executions>
        <execution>
            <configuration>
                <mainClass>org.baeldung.executable.
                  ExecutableMavenJar</mainClass>
                <attachToBuild>true</attachToBuild>
                <filename>
                  ${project.build.finalName}.${project.packaging}
                </filename>
            </configuration>
            <goals>
                <goal>one-jar</goal>
            </goals>
        </execution>
    </executions>
</plugin>

As shown in the configuration, we need to specify the main class and attach all dependencies to the build by setting attachToBuild to true.

Also, we should provide the output filename, and the goal for Maven is one-jar. Please note that with One Jar, the dependency jars are not expanded into the filesystem at runtime.

  • pros – clean delegation model, allows classes to be at the top-level of the One Jar, supports external jars and can support Native libraries
  • cons – not actively supported since 2012

2.5. Spring Boot Maven Plugin

Finally, the last solution we’ll look at is the Spring Boot Maven Plugin.

It allows us to package executable jar or war archives and run an application “in-place”.

To use it, we need at least Maven version 3.2. The detailed description is available here.

Let’s have a look at the config:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>repackage</goal>
            </goals>
            <configuration>
                <classifier>spring-boot</classifier>
                <mainClass>
                  org.baeldung.executable.ExecutableMavenJar
                </mainClass>
            </configuration>
        </execution>
    </executions>
</plugin>

There are two differences between the Spring Boot plugin and the others: first, the goal of the execution is called repackage, and second, the classifier is named spring-boot.

Please note that we don’t need a Spring Boot application in order to use this plugin.

  • pros – dependencies inside a jar file, you can run it in every accessible location, advanced control of packaging your artifact, with excluding dependencies from the jar file etc., packaging of war files as well
  • cons – adds potentially unnecessary Spring and Spring Boot related classes

2.6. Web Application with Executable Tomcat

In the last part, we want to cover the topic of having a standalone web application packed inside a jar file. In order to do that, we need to use a different plugin, designed for creating executable jar files:

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.0</version>
    <executions>
        <execution>
            <id>tomcat-run</id>
            <goals>
                <goal>exec-war-only</goal>
            </goals>
            <phase>package</phase>
            <configuration>
                <path>/</path>
                <enableNaming>false</enableNaming>
                <finalName>webapp.jar</finalName>
                <charset>utf-8</charset>
            </configuration>
        </execution>
    </executions>
</plugin>

The goal is set to exec-war-only, the path to your server is specified inside the configuration tag, and there are additional properties like finalName, charset, etc. To build the jar, run mvn package, which will result in creating webapp.jar in your target directory.

To run the application, just write this in your console: java -jar target/webapp.jar and test it by opening localhost:8080/ in a browser.

  • pros – having one file, easy to deploy and run
  • cons – the size of the file is much larger, due to packing the embedded Tomcat distribution inside the war file

Please note that this is the latest version of this plugin that supports the Tomcat 7 server. To avoid errors, please check that your Servlet dependency has its scope set to provided; otherwise, there will be a conflict at runtime of the executable jar:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <scope>provided</scope>
</dependency>

3. Conclusion

In this article, we described many ways of creating an executable jar with various Maven plugins.

The full implementation of this tutorial can be found in this (executable jar) and this (executable war) Github projects.

How to test? In order to compile the project into an executable jar, please run Maven with the mvn clean package command.

Hopefully, this article gives you some more insights on the topic and you will find your preferred approach depending on your needs.


Introduction To Ehcache


1. Overview

In this article, we will introduce Ehcache, a widely used, open-source Java-based cache. It features memory and disk stores, listeners, cache loaders, RESTful and SOAP APIs and other very useful features.

To show how caching can optimize our application, we will create a simple method which calculates square values of provided numbers. On each call, the method will invoke the calculateSquareOfNumber(int number) method and print an information message to the console.

With this simple example, we want to show that the calculation of squared values is done only once, and every other call with the same input value returns the result from the cache.
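
This pattern – compute on a cache miss, store the result, and serve repeats from the cache – is independent of any caching library, and can be sketched with a plain HashMap (CacheAsideSketch is our own illustrative class, not part of the article's code):

```java
import java.util.HashMap;
import java.util.Map;

// Library-free sketch of the cache-aside pattern the article demonstrates:
// compute on a miss, store the result, serve repeats from the map.
public class CacheAsideSketch {
    private static final Map<Integer, Integer> cache = new HashMap<>();

    static int square(int n) {
        if (!cache.containsKey(n)) {
            System.out.println("Calculating square value of " + n
              + " and caching result.");
            cache.put(n, n * n);
        }
        return cache.get(n);
    }

    public static void main(String[] args) {
        square(4); // computes and caches
        square(4); // served from the cache, no message printed
    }
}
```

Ehcache adds what a bare map lacks: bounded heap usage, disk stores, and expiry, which we'll see below.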

It’s important to notice that we’re focused entirely on Ehcache itself (without Spring); if you want to see how Ehcache works with Spring, have a look at this article.

2. Maven Dependencies

In order to use Ehcache we need to add this Maven dependency:

<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>3.1.3</version>
</dependency>

The latest version of the Ehcache artifact can be found here.

3. Cache Configuration

Ehcache can be configured in two ways:

  • The first way is through Java POJO where all configuration parameters are configured through Ehcache API
  • The second way is configuration through XML file where we can configure Ehcache according to provided schema definition

In this article, we’ll show both approaches – Java as well as XML configuration.

3.1. Java Configuration

This subsection will show how easy it is to configure Ehcache with POJOs. Also, we will create a helper class for easier cache configuration and availability:

public class CacheHelper {

    private CacheManager cacheManager;
    private Cache<Integer, Integer> squareNumberCache;

    public CacheHelper() {
        cacheManager = CacheManagerBuilder
          .newCacheManagerBuilder().build();
        cacheManager.init();

        squareNumberCache = cacheManager
          .createCache("squaredNumber", CacheConfigurationBuilder
            .newCacheConfigurationBuilder(
              Integer.class, Integer.class,
              ResourcePoolsBuilder.heap(10)));
    }

    public Cache<Integer, Integer> getSquareNumberCache() {
        return cacheManager.getCache("squaredNumber", Integer.class, Integer.class);
    }
    
    // standard getters and setters
}

To initialize our cache, we first need to define the Ehcache CacheManager object. In this example, we are creating a cache named “squaredNumber” with the newCacheManagerBuilder() API.

The cache will simply map Integer keys to Integer values.

Notice how, before we start using the defined cache, we need to initialize the CacheManager object with the init() method.

Finally, to obtain our cache, we can just use the getCache() API with the provided name, key and value types of our cache.

With those few lines, we created our first cache which is now available to our application.

3.2. XML Configuration

The configuration from subsection 3.1. is equivalent to this XML configuration:

<cache-template name="squaredNumber">
    <key-type>java.lang.Integer</key-type>
    <value-type>java.lang.Integer</value-type>
    <heap unit="entries">10</heap>
</cache-template>

And to include this cache in our Java application, we need to read the XML configuration file:

URL myUrl = getClass().getResource(xmlFile); 
XmlConfiguration xmlConfig = new XmlConfiguration(myUrl); 
CacheManager myCacheManager = CacheManagerBuilder
  .newCacheManager(xmlConfig);

4. Ehcache Test

In the previous section, we showed how to define a simple cache. To show that caching actually works, we will create a SquaredCalculator class which will calculate the squared value of the provided input and store the calculated value in the cache.

Of course, if the cache already contains the calculated value, we will return the cached value and avoid unnecessary calculations:

public class SquaredCalculator {
    private CacheHelper cache;

    public int getSquareValueOfNumber(int input) {
        if (cache.getSquareNumberCache().containsKey(input)) {
            return cache.getSquareNumberCache().get(input);
        }

        System.out.println("Calculating square value of " + input + 
          " and caching result.");

        int squaredValue = (int) Math.pow(input, 2);
        cache.getSquareNumberCache().put(input, squaredValue);

        return squaredValue;
    }

    //standard getters and setters;
}

To complete our test scenario, we will also need a test which calculates square values:

@Test
public void whenCalculatingSquareValueAgain_thenCacheHasAllValues() {
    for (int i = 10; i < 15; i++) {
        assertFalse(cacheHelper.getSquareNumberCache().containsKey(i));
        System.out.println("Square value of " + i + " is: "
          + squaredCalculator.getSquareValueOfNumber(i) + "\n");
    }      
    
    for (int i = 10; i < 15; i++) {
        assertTrue(cacheHelper.getSquareNumberCache().containsKey(i));
        System.out.println("Square value of " + i + " is: "
          + squaredCalculator.getSquareValueOfNumber(i) + "\n");
    }
}

If we run our test, we will get this result in our console:

Calculating square value of 10 and caching result.
Square value of 10 is: 100

Calculating square value of 11 and caching result.
Square value of 11 is: 121

Calculating square value of 12 and caching result.
Square value of 12 is: 144

Calculating square value of 13 and caching result.
Square value of 13 is: 169

Calculating square value of 14 and caching result.
Square value of 14 is: 196

Square value of 10 is: 100
Square value of 11 is: 121
Square value of 12 is: 144
Square value of 13 is: 169
Square value of 14 is: 196

As you can notice, the getSquareValueOfNumber() method was doing calculations only on the first call. On the second call, all values were found in the cache and returned from it.

5. Other Ehcache Configuration Options

When we created our cache in the previous example, it was a simple cache without any special options. This section will show other options which are useful in cache creation.

5.1. Disk Persistence

If there are too many values to store in the cache, we can store some of those values on the hard drive:

PersistentCacheManager persistentCacheManager = 
  CacheManagerBuilder.newCacheManagerBuilder()
    .with(CacheManagerBuilder.persistence(getStoragePath()
      + File.separator 
      + "squaredValue")) 
    .withCache("persistent-cache", CacheConfigurationBuilder
      .newCacheConfigurationBuilder(Integer.class, Integer.class,
        ResourcePoolsBuilder.newResourcePoolsBuilder()
          .heap(10, EntryUnit.ENTRIES)
          .disk(10, MemoryUnit.MB, true)) 
      )
  .build(true);

persistentCacheManager.close();

Instead of the default CacheManager, we now use a PersistentCacheManager which will persist all the values that can’t be kept in memory.

From the configuration, we can see that the cache will keep 10 elements in memory and allocate 10 MB on the hard drive for persistence.

5.2. Data Expiry

If we cache a lot of data, it’s natural to keep cached data only for some period of time so we can avoid high memory usage.

Ehcache controls data freshness through the Expiry interface:

CacheConfiguration<Integer, Integer> cacheConfiguration 
  = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(Integer.class, Integer.class, 
      ResourcePoolsBuilder.heap(100)) 
    .withExpiry(Expirations.timeToLiveExpiration(Duration.of(60, 
      TimeUnit.SECONDS))).build();

In this cache, all data will live for 60 seconds; after that period of time, it will be deleted from memory.
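
The idea behind time-to-live expiry can be sketched without the library: record a deadline with each entry, and treat anything past its deadline as a miss. TtlCache below is our own illustrative class, not the Ehcache API:

```java
import java.util.HashMap;
import java.util.Map;

// Hand-rolled time-to-live sketch of what an Expiry policy does; class and
// method names are illustrative, not part of Ehcache.
public class TtlCache<K, V> {
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Long> deadlines = new HashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(K key, V value) {
        values.put(key, value);
        deadlines.put(key, System.currentTimeMillis() + ttlMillis);
    }

    public V get(K key) {
        Long deadline = deadlines.get(key);
        if (deadline == null || System.currentTimeMillis() > deadline) {
            // expired or never present: evict and report a miss
            values.remove(key);
            deadlines.remove(key);
            return null;
        }
        return values.get(key);
    }

    public static void main(String[] args) {
        TtlCache<Integer, Integer> cache = new TtlCache<>(60_000);
        cache.put(4, 16);
        System.out.println("cached: " + cache.get(4));
    }
}
```

Ehcache does this lazily as well – expired entries are dropped when they're next touched – while also handling sizing and eviction for us.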

6. Conclusion

In this article, we showed how to use simple Ehcache caching in a Java application.

In our example, we saw that even a simply configured cache can save a lot of unnecessary operations. Also, we showed that we can configure caches through POJOs and XML and that Ehcache has quite some nice features – such as persistence and data expiry.

As always, the code from this article can be found on GitHub.


Guide to Hazelcast with Java


1. Overview

This is an introductory article on Hazelcast where we will walk through how to create a cluster member, a distributed Map to share data among the cluster nodes, and a Java client to connect to and query data in the cluster.

2. What Is Hazelcast?

Hazelcast is a distributed In-Memory Data Grid platform for Java. The architecture supports high scalability and data distribution in a clustered environment. It supports the auto-discovery of nodes and intelligent synchronization.

Hazelcast is available in different editions; you can find the list of available editions and their features in the following link. For the purpose of this article, we will focus on the open-source edition of Hazelcast.

Likewise, Hazelcast offers various features such as Distributed Data Structures, Distributed Compute, Distributed Query, etc. Here, we will focus on the distributed Map.

3. Maven Dependencies

Hazelcast offers different libraries depending on the usage; we can find the Maven dependencies under the group com.hazelcast in Maven Central.

For the purpose of this article, we will focus on the dependencies needed to create a standalone Hazelcast cluster member and the Hazelcast Java Client.

3.1. Hazelcast Cluster Member

We need to add the hazelcast dependency to pom.xml as shown below:

<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>3.7.2</version>
</dependency>

The dependency is available in maven central repository.

3.2. Hazelcast Java Client

Besides Hazelcast core dependency, we will also need to include the client dependency:

<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-client</artifactId>
    <version>3.7.2</version>
</dependency>

The dependency is available in the maven central repository.

4. Your First Hazelcast Application

4.1. Create Hazelcast Member

Members (also called nodes) automatically join together to form a cluster. This automatic joining takes place with various discovery mechanisms that the members use to find each other.

Let’s create a member that stores data in a Hazelcast distributed map:

public class ServerNode {
    
    HazelcastInstance hzInstance = Hazelcast.newHazelcastInstance();
    ...
}

When we start the ServerNode application, we can see the following text in the console, which means that we created a new Hazelcast node in our JVM, which will have to join the cluster:

Members [1] {
    Member [192.168.1.105]:5701 - 899898be-b8aa-49aa-8d28-40917ccba56c this
}

To create multiple nodes, we can start multiple instances of the ServerNode application. Hazelcast will automatically create a new member and add it to the cluster.

For example, if we run the ServerNode application again, we will see the following log in the console, which says that there are two members in the cluster:

Members [2] {
  Member [192.168.1.105]:5701 - 899898be-b8aa-49aa-8d28-40917ccba56c
  Member [192.168.1.105]:5702 - d6b81800-2c78-4055-8a5f-7f5b65d49f30 this
}

4.2. Create Distributed Map

Next, we will create a distributed Map. We need the instance of HazelcastInstance created earlier to create a distributed Map which extends the java.util.concurrent.ConcurrentMap interface:

Map<Long, String> map = hazelcastInstance.getMap("data");
...

Let’s add some entries to the map:

IdGenerator idGenerator = hazelcastInstance.getIdGenerator("newid");
for (int i = 0; i < 10; i++) {
    map.put(idGenerator.newId(), "message" + i);
}

As we can see above, we added 10 entries to the map. We used IdGenerator to ensure that we get a unique key for each entry. For more details on IdGenerator, you can check out the following link.
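
Within a single JVM, the contract IdGenerator provides – every call returns a fresh id – is the same one an AtomicLong gives us; Hazelcast's version additionally coordinates blocks of the sequence across the cluster. A local sketch (LocalIdGenerator is our own illustrative class):

```java
import java.util.concurrent.atomic.AtomicLong;

// Single-JVM analogue of Hazelcast's IdGenerator: every call hands out a
// unique, monotonically increasing id. The cluster-wide version reserves
// blocks of the sequence per member to avoid coordinating on every call.
public class LocalIdGenerator {
    private final AtomicLong counter = new AtomicLong();

    public long newId() {
        return counter.getAndIncrement();
    }

    public static void main(String[] args) {
        LocalIdGenerator generator = new LocalIdGenerator();
        System.out.println(generator.newId()); // 0
        System.out.println(generator.newId()); // 1
    }
}
```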

While this may not be a real-world example, it's enough to demonstrate one of the many operations that we can apply to the distributed map. We will see in a later section how to retrieve the map entries added by the cluster member from the Hazelcast Java client.

Internally, Hazelcast will partition the map entries and distribute and replicate the entries among the cluster members. For more details on Hazelcast Map, you can check out the following link.
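To give a feel for what "partitioning" means, here's a simplified sketch: the principle is key → hash → partition → owning member. This is illustrative only – Hazelcast serializes the key and applies its own hash function – but 271 is indeed its default partition count:

```java
// Illustrative only: not the actual Hazelcast partitioning algorithm.
public class PartitionSketch {

    static final int PARTITION_COUNT = 271; // Hazelcast's default

    static int partitionFor(Object key) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(key.hashCode(), PARTITION_COUNT);
    }

    public static void main(String[] args) {
        System.out.println("key 42L maps to partition " + partitionFor(42L));
    }
}
```

Because the mapping depends only on the key, every member agrees on which partition (and therefore which owner) holds a given entry, without any lookup table.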

4.3. Create Hazelcast Java Client

The Hazelcast client allows us to perform all Hazelcast operations without being a member of the cluster. It connects to one of the cluster members and delegates all cluster-wide operations to it.

Let’s create a native client:

ClientConfig config = new ClientConfig();
GroupConfig groupConfig = config.getGroupConfig();
groupConfig.setName("dev");
groupConfig.setPassword("dev-pass");
HazelcastInstance hzClient
  = HazelcastClient.newHazelcastClient(config);

The default username and password to access the cluster are dev and dev-pass. For more details on Hazelcast client, you can check out the following link.

4.4. Access Distributed Map From Java Client

Next, we will access the distributed Map that we created earlier. We need the client HazelcastInstance created above to access the distributed Map:

IMap<Long, String> map = hzClient.getMap("data");
...

Now we can do operations on a map without being a member of the cluster. For example, let’s try to iterate over the map entries added by the cluster member:

for (Entry<Long, String> entry : map.entrySet()) {
    ...
}

5. Configuring Hazelcast

In this section, we will focus on how to configure the Hazelcast network declaratively (XML) and programmatically (API), and use the Hazelcast Management Center to monitor and manage the running nodes.

While Hazelcast is starting up, it looks for the hazelcast.config system property. If it is set, its value is used as the path to the configuration file. If that system property is not set, Hazelcast checks whether there is a hazelcast.xml file in the working directory, and then whether hazelcast.xml exists on the classpath. If none of the above works, Hazelcast loads the default configuration, i.e. hazelcast-default.xml that comes with hazelcast.jar.

5.1. Network Configuration

By default, Hazelcast uses multicast for discovering other members that can form a cluster. If multicast is not a preferred way of discovery for our environment, then we can configure Hazelcast for full TCP/IP cluster.

Let’s configure the TCP/IP cluster using declarative configuration:

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation=
  "http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd"
  xmlns="http://www.hazelcast.com/schema/config"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <network>
        <port auto-increment="true" port-count="20">5701</port>
        <join>
            <multicast enabled="false">
            </multicast>
            <tcp-ip enabled="true">
                <member>machine1</member>
                <member>localhost</member>
            </tcp-ip>
        </join>
    </network>
</hazelcast>

and programmatic configuration:

Config config = new Config();
NetworkConfig network = config.getNetworkConfig();
network.setPort(5701).setPortCount(20);
network.setPortAutoIncrement(true);
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig()
  .addMember("machine1")
  .addMember("localhost").setEnabled(true);

By default, Hazelcast will try 100 ports to bind to. In the example above, if we set the port value to 5701 and limit the port count to 20, then as members join the cluster, Hazelcast tries to find ports between 5701 and 5721.

If we want to use only one port, we can disable the auto-increment feature by setting auto-increment to false.

5.2. Management Center Configuration

The Management Center allows us to monitor the overall state of clusters; we can also analyze and browse data structures in detail, update map configurations, and take thread dumps from nodes.

In order to use the Hazelcast Management Center, we can either deploy the mancenter-version.war application into our Java application server/container or start the Management Center from the command line. We can download the latest Hazelcast ZIP from hazelcast.org; the ZIP contains the mancenter-version.war file.

We can configure our Hazelcast nodes by adding the URL of the web application to hazelcast.xml and then have the Hazelcast members communicate with the management center.

Let’s configure the management center using declarative configuration:

<management-center enabled="true">
    http://localhost:8080/mancenter
</management-center>

and programmatic configuration:

ManagementCenterConfig manCenterCfg = new ManagementCenterConfig();
manCenterCfg.setEnabled(true).setUrl("http://localhost:8080/mancenter");

6. Conclusion

In this article, we covered introductory concepts about Hazelcast. For details, you can take a look at the Reference Manual.

You can find the source code for this article over on GitHub.



A Guide To UDP In Java


1. Overview

In this article, we will be exploring networking communication with Java, over the User Datagram Protocol (UDP).

UDP is a communication protocol that transmits independent packets over the network with no guarantee of arrival and no guarantee of the order of delivery.

Most communication over the internet takes place over the Transmission Control Protocol (TCP), however, UDP has its place which we will be exploring in the next section.

2. Why Use UDP?

UDP is quite different from the more common TCP. But before considering the surface level disadvantages of UDP, it’s important to understand that the lack of overhead can make it significantly faster than TCP.

Apart from speed, we also need to remember that some kinds of communication do not require the reliability of TCP but value low latency instead. Video streaming is a good example of an application that might benefit from running over UDP instead of TCP.

3. Building UDP Applications

Building UDP applications is very similar to building a TCP system; the only difference is that we don’t establish a point-to-point connection between a client and a server.

The setup is very straightforward too. Java ships with built-in networking support for UDP – which is part of the java.net package. Therefore to perform networking operations over UDP, we only need to import the classes from the java.net package: java.net.DatagramSocket and java.net.DatagramPacket.
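
Before wiring up the full echo server and client, the two classes can be exercised in isolation with a minimal round trip over the loopback interface (UdpLoopback is our own demo class, not part of the article's project):

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal self-contained demo: one socket sends a datagram to another
// socket on the loopback interface, which then receives and decodes it.
public class UdpLoopback {

    public static String roundTrip(String msg) {
        try (DatagramSocket receiver = new DatagramSocket(0); // any free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] out = msg.getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(out, out.length,
              InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] in = new byte[512];
            DatagramPacket packet = new DatagramPacket(in, in.length);
            receiver.receive(packet); // blocks until the datagram arrives
            return new String(packet.getData(), 0, packet.getLength(),
              StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello"));
    }
}
```

Note that the received packet reports its actual length via getLength() – the buffer is larger than the message, so we must not decode the whole array.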

In the following sections, we will learn how to design applications that communicate over UDP; we’ll use the popular echo protocol for this application.

First, we will build an echo server that sends back any message sent to it, then an echo client that just sends any arbitrary message to the server and finally, we will test the application to ensure everything is working fine.

4. The Server

In UDP communication, a single message is encapsulated in a DatagramPacket which is sent through a DatagramSocket.

Let’s start by setting up a simple server:

public class EchoServer extends Thread {

    private DatagramSocket socket;
    private boolean running;
    private byte[] buf = new byte[256];

    public EchoServer() {
        try {
            socket = new DatagramSocket(4445);
        } catch (SocketException e) {
            throw new RuntimeException(e);
        }
    }

    public void run() {
        running = true;

        try {
            while (running) {
                DatagramPacket packet 
                  = new DatagramPacket(buf, buf.length);
                socket.receive(packet);

                InetAddress address = packet.getAddress();
                int port = packet.getPort();
                packet = new DatagramPacket(buf, buf.length, address, port);
                String received 
                  = new String(packet.getData(), 0, packet.getLength());

                if (received.equals("end")) {
                    running = false;
                    continue;
                }
                socket.send(packet);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        socket.close();
    }
}

We create a global DatagramSocket which we will use throughout to send packets, a byte array to wrap our messages, and a status variable called running.

For simplicity, the server is extending Thread, so we can implement everything inside the run method.

Inside run, we create a while loop that just runs until running is changed to false by some error or a termination message from the client.

At the top of the loop, we instantiate a DatagramPacket to receive incoming messages.

Next, we call the receive method on the socket. This method blocks until a message arrives and it stores the message inside the byte array of the DatagramPacket passed to it.

After receiving the message, we retrieve the address and port of the client, since we are going to send the response back.

Next, we create a new DatagramPacket for sending the message back to the client. Notice the difference in signature from the receiving packet: this one also requires the address and port of the client we are sending the message to.

5. The Client

Now let’s roll out a simple client for this new server:

public class EchoClient {
    private DatagramSocket socket;
    private InetAddress address;

    private byte[] buf;

    public EchoClient() {
        try {
            socket = new DatagramSocket();
            address = InetAddress.getByName("localhost");
        } catch (SocketException | UnknownHostException e) {
            throw new RuntimeException(e);
        }
    }

    public String sendEcho(String msg) {
        try {
            buf = msg.getBytes();
            DatagramPacket packet 
              = new DatagramPacket(buf, buf.length, address, 4445);
            socket.send(packet);
            packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            return new String(packet.getData(), 0, packet.getLength());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public void close() {
        socket.close();
    }
}

The code is not that different from the server’s. We have our class-level DatagramSocket and the server’s address, both instantiated inside the constructor.

We have a separate method which sends messages to the server and returns the response.

We first convert the string message into a byte array, then create a DatagramPacket for sending messages.

Next, we send the message. We then reuse the packet variable for a new receiving DatagramPacket and call receive, which blocks until the echo arrives.

When the echo arrives, we convert the bytes to a string and return the string.

6. The Test

In a class called UDPTest, we create a single test to check the echoing ability of our two applications:

public class UDPTest {
    EchoClient client;

    @Before
    public void setup(){
        new EchoServer().start();
        client = new EchoClient();
    }

    @Test
    public void whenCanSendAndReceivePacket_thenCorrect() {
        String echo = client.sendEcho("hello server");
        assertEquals("hello server", echo);
        echo = client.sendEcho("server is working");
        assertFalse(echo.equals("hello server"));
    }

    @After
    public void tearDown() {
        client.sendEcho("end");
        client.close();
    }
}

In setup, we start the server and create the client. In the tearDown method, we send a termination message to the server so that it can close, and then we close the client as well.

7. Conclusion

In this article, we have learned about the User Datagram Protocol and successfully built our own client-server applications that communicate over UDP.

To get full source code for the examples used in this article, you can check out the GitHub project.

Java Web Weekly, Issue 148

1. Spring and Java

>> HTTP headers forwarding in microservices [frankel.ch]

>> Tracing Spring Integration Flow with Spring Cloud Sleuth [java-allandsundry.com]

Doing a proper microservice implementation is tough – no two ways about it. There are certainly new challenges but also a new class of tools meant to help with these challenges.

Here are two interesting writeups about one of these tools – Spring Cloud Sleuth – and about tracing an HTTP request across multiple services.

>> JUnit 5 State Of The Union [sitepoint.com]

A good high-level look at JUnit 5 right now, a year and a couple of months into development.

>> 6 Hibernate features that I’m missing in JPA [thoughts-on-java.org]

Hibernate has been on a roll lately, and JPA is lagging behind even more so than usual. Here’s a list of solid features that should hopefully make it into the next version of JPA.

>> The best way to implement equals, hashCode, and toString with JPA and Hibernate [vladmihalcea.com]

An interesting discussion focused on the fundamentals.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Message Processing Styles [tbray.org]

A quick look at processing JSON data in real-world systems, where things aren’t as neat and tidy as we’d like them to be, and pretty much anything could come over the wire.

>> A service framework for operation-based CRDTs [krasserm.github.io]

If you’re way into Event Sourcing and CQRS, then this will make a good read, both for practical takeaways and for cross-pollination of architectural ideas.

Also worth reading:

3. Musings

>> Short DNS Record TTL And Centralization Are Serious Risks For The Internet [techblog.bozho.net]

No doubt you heard about, and probably experienced, the massive DDoS attack a few days ago.

There are several reports and analyses worth reading online, of course; here’s one that actually goes beyond just “what happened”.

>> 4 Ways Custom Code Metrics Make a Difference [daedtech.com]

If you’re using static analysis, semi-custom, tunable rules need to be used and evolved. Without these, the defaults likely won’t fit the specifics of your codebase and your needs – which generally leads to either lots of false positives or useful rules being turned off entirely.

As a quick take-away, definitely tweak and keep tweaking your static analysis rules, so that they actually make sense for your codebase.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Stop being engineers! [dilbert.com]

>> Try eating cake [dilbert.com]

>> Taking more responsibility [dilbert.com]

5. Pick of the Week

>> Just shut up and let your devs concentrate [geekwire.com]

Custom AccessDecisionVoters in Spring Security

1. Introduction

Most of the time when securing a Spring Web application or a REST API, the tools provided by Spring Security are more than enough, but sometimes we are looking for a more specific behavior.

In this tutorial, we’ll write a custom AccessDecisionVoter and show how it can be used to abstract away the authorization logic of a web application and separate it from the business logic of the application.

2. Scenario

To demonstrate how the AccessDecisionVoter works, we’ll implement a scenario with two user types, USER and ADMIN, in which a USER may access the system only on even-numbered minutes, while an ADMIN will always be granted access.

3. AccessDecisionVoter Implementations

First, we’ll describe a few of the implementations provided by Spring that will participate alongside our custom voter in making the final decision on the authorization. Then we’ll take a look at how to implement a custom voter.

3.1. The Default AccessDecisionVoter Implementations

Spring Security provides several AccessDecisionVoter implementations. We will use a few of them as part of our security solution here.

Let’s take a look at how and when these default voter implementations vote.

The AuthenticatedVoter will cast a vote based on the Authentication object’s level of authentication – specifically looking for either a fully authenticated principal, one authenticated with remember-me, or an anonymous one.

The RoleVoter votes if any of the configuration attributes starts with the String “ROLE_”. If so, it will search for the role in the GrantedAuthority list of the Authentication object.
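To make that behavior concrete, here is a rough, framework-free sketch of the RoleVoter decision for a single attribute. RoleVoterSketch is our own simplified class, not Spring Security’s actual implementation (which iterates over all attributes and authorities), but the core rule is the same: abstain for attributes without the prefix, otherwise vote on membership.

```java
import java.util.List;

public class RoleVoterSketch {
    static final int GRANTED = 1, ABSTAIN = 0, DENIED = -1;
    private static final String PREFIX = "ROLE_";

    // Abstain for attributes without the ROLE_ prefix; otherwise
    // grant or deny based on whether the authority list contains the role
    static int vote(List<String> grantedAuthorities, String attribute) {
        if (attribute == null || !attribute.startsWith(PREFIX)) {
            return ABSTAIN;
        }
        return grantedAuthorities.contains(attribute) ? GRANTED : DENIED;
    }

    public static void main(String[] args) {
        List<String> authorities = List.of("ROLE_USER");
        System.out.println(vote(authorities, "ROLE_USER"));              // 1
        System.out.println(vote(authorities, "ROLE_ADMIN"));             // -1
        System.out.println(vote(authorities, "IS_AUTHENTICATED_FULLY")); // 0
    }
}
```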

The WebExpressionVoter enables us to use SpEL (Spring Expression Language) to authorize requests via the access expressions in the web security configuration.

For example, if we’re using Java config:

@Override
protected void configure(final HttpSecurity http) throws Exception {
    ...
    .antMatchers("/").hasAnyAuthority("ROLE_USER")
    ...
}

Or using an XML configuration – we can use SpEL inside an intercept-url tag, in the http tag:

<http use-expressions="true">
    <intercept-url pattern="/"
      access="hasAuthority('ROLE_USER')"/>
    ...
</http>

3.2. Custom AccessDecisionVoter Implementation

Now let’s create a custom voter – by implementing the AccessDecisionVoter interface:

public class MinuteBasedVoter implements AccessDecisionVoter {
   ...
}

The first of three methods we must provide is the vote method. The vote method is the most important part of the custom voter and is where our authorization logic goes.

The vote method can return three possible values:

  • ACCESS_GRANTED – the voter gives an affirmative answer
  • ACCESS_DENIED – the voter gives a negative answer
  • ACCESS_ABSTAIN – the voter abstains from voting

Let’s now implement the vote method:

@Override
public int vote(
  Authentication authentication, Object object, Collection collection) {
    return authentication.getAuthorities().stream()
      .map(GrantedAuthority::getAuthority)
      .filter(r -> "ROLE_USER".equals(r) 
        && LocalDateTime.now().getMinute() % 2 != 0)
      .findAny()
      .map(s -> ACCESS_DENIED)
      .orElseGet(() -> ACCESS_ABSTAIN);
}

In our vote method, we check whether the request comes from a USER. If it does and the current minute is odd-numbered, we return ACCESS_DENIED. In every other case (a USER on an even-numbered minute, or a request not coming from a USER), we return ACCESS_ABSTAIN and leave the final decision to the other voters.
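Since the voter in the code only ever denies or abstains, a quick plain-Java sketch of the minute check makes the outcomes visible. MinuteCheckDemo is our illustrative name, mirroring the filter in the stream above, with the minute passed in explicitly so the behavior is deterministic:

```java
import java.util.List;

public class MinuteCheckDemo {
    static final int DENIED = -1, ABSTAIN = 0;

    // Deny ROLE_USER on odd-numbered minutes; abstain in every other case
    static int vote(List<String> authorities, int minute) {
        return authorities.contains("ROLE_USER") && minute % 2 != 0
          ? DENIED : ABSTAIN;
    }

    public static void main(String[] args) {
        System.out.println(vote(List.of("ROLE_USER"), 13));  // -1: denied
        System.out.println(vote(List.of("ROLE_USER"), 12));  // 0: abstains
        System.out.println(vote(List.of("ROLE_ADMIN"), 13)); // 0: abstains
    }
}
```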

The second method returns whether the voter supports a particular configuration attribute. In our example, the voter does not need any custom configuration attribute, so we return true:

@Override
public boolean supports(ConfigAttribute attribute) {
    return true;
}

The third method returns whether the voter can vote for the secured object type or not. Since our voter is not concerned with the secured object type, we return true:

@Override
public boolean supports(Class clazz) {
    return true;
}

4. The AccessDecisionManager

The final authorization decision is handled by the AccessDecisionManager.

The AbstractAccessDecisionManager contains a list of AccessDecisionVoters – which are responsible for casting their votes independent of each other.

There are three implementations for processing the votes to cover the most common use cases:

  • AffirmativeBased – grants access if any of the AccessDecisionVoters return an affirmative vote
  • ConsensusBased – grants access if there are more affirmative votes than negative (ignoring voters that abstain)
  • UnanimousBased – grants access if every voter either abstains or returns an affirmative vote

Of course, you can implement your own AccessDecisionManager with your custom decision-making logic.
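To see how the three strategies differ, here is a small self-contained sketch of the tallying logic. This uses plain Java with our own names, not the actual Spring Security classes; the real implementations also have an allowIfAllAbstainDecisions flag for the all-abstain case, which we ignore here:

```java
import java.util.List;

public class VoteTallyDemo {
    static final int GRANTED = 1, ABSTAIN = 0, DENIED = -1;

    // AffirmativeBased: a single affirmative vote is enough
    static boolean affirmative(List<Integer> votes) {
        return votes.contains(GRANTED);
    }

    // ConsensusBased: more affirmative than negative votes, abstentions ignored
    static boolean consensus(List<Integer> votes) {
        long granted = votes.stream().filter(v -> v == GRANTED).count();
        long denied = votes.stream().filter(v -> v == DENIED).count();
        return granted > denied;
    }

    // UnanimousBased: no negative vote at all
    static boolean unanimous(List<Integer> votes) {
        return votes.stream().noneMatch(v -> v == DENIED);
    }

    public static void main(String[] args) {
        List<Integer> votes = List.of(GRANTED, DENIED, ABSTAIN);
        System.out.println(affirmative(votes)); // true
        System.out.println(consensus(votes));   // false
        System.out.println(unanimous(votes));   // false
    }
}
```

With one grant, one deny, and one abstention, only AffirmativeBased grants access, which is why our scenario pairs the deny-or-abstain MinuteBasedVoter with a UnanimousBased manager.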

5. Configuration

In this part of the tutorial, we will take a look at Java-based and XML-based methods for configuring our custom AccessDecisionVoter with an AccessDecisionManager.

5.1. Java Configuration

Let’s create a configuration class for Spring Web Security:

@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
...
}

And let’s define an AccessDecisionManager bean that uses a UnanimousBased manager with our customized list of voters:

@Bean
public AccessDecisionManager accessDecisionManager() {
    List<AccessDecisionVoter<? extends Object>> decisionVoters 
      = Arrays.asList(
        new WebExpressionVoter(),
        new RoleVoter(),
        new AuthenticatedVoter(),
        new MinuteBasedVoter());
    return new UnanimousBased(decisionVoters);
}

Finally, let’s configure Spring Security to use the previously defined bean as the default AccessDecisionManager:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
    ...
    .anyRequest()
    .authenticated()
    .accessDecisionManager(accessDecisionManager());
}

5.2. XML Configuration

If using XML configuration, you’ll need to modify your spring-security.xml file (or whichever file contains your security settings).

First, you’ll need to modify the <http> tag:

<http access-decision-manager-ref="accessDecisionManager">
  <intercept-url
    pattern="/**"
    access="hasAnyRole('ROLE_ADMIN', 'ROLE_USER')"/>
  ...
</http>

Next, add a bean for the custom voter:

<beans:bean
  id="minuteBasedVoter"
  class="org.baeldung.voter.MinuteBasedVoter"/>

Then add a bean for the AccessDecisionManager:

<beans:bean 
  id="accessDecisionManager" 
  class="org.springframework.security.access.vote.UnanimousBased">
    <beans:constructor-arg>
        <beans:list>
            <beans:bean class=
              "org.springframework.security.web.access.expression.WebExpressionVoter"/>
            <beans:bean class=
              "org.springframework.security.access.vote.AuthenticatedVoter"/>
            <beans:bean class=
              "org.springframework.security.access.vote.RoleVoter"/>
            <beans:bean class=
              "org.baeldung.voter.MinuteBasedVoter"/>
        </beans:list>
    </beans:constructor-arg>
</beans:bean>

Here’s a sample <authentication-manager> tag supporting our scenario:

<authentication-manager>
    <authentication-provider>
        <user-service>
            <user name="user" password="pass" authorities="ROLE_USER"/>
            <user name="admin" password="pass" authorities="ROLE_ADMIN"/>
        </user-service>
    </authentication-provider>
</authentication-manager>

If you are using a combination of Java and XML configuration, you can import the XML into a configuration class:

@Configuration
@ImportResource({"classpath:spring-security.xml"})
public class XmlSecurityConfig {
    public XmlSecurityConfig() {
        super();
    }
}

6. Conclusion

In this tutorial, we looked at a way to customize security for a Spring Web application using AccessDecisionVoters. We saw some of the voters provided by Spring Security that contributed to our solution, and we discussed how to implement a custom AccessDecisionVoter.

We then discussed how the AccessDecisionManager makes the final authorization decision and showed how to use the implementations provided by Spring to make this decision after all the voters cast their votes.

Finally, we configured a list of AccessDecisionVoters with an AccessDecisionManager through both Java and XML.

The implementation can be found in the GitHub project.

When the project runs locally the login page can be accessed at:

http://localhost:8080/spring-security-custom-permissions/login

The credentials for the USER are “user” and “pass”, and the credentials for the ADMIN are “admin” and “pass”.

Apache CXF Support for RESTful Web Services

1. Overview

This tutorial introduces Apache CXF as a framework compliant with the JAX-RS standard, which defines support of the Java ecosystem for the REpresentational State Transfer (REST) architectural pattern.

Specifically, it describes step by step how to construct and publish a RESTful web service, and how to write unit tests to verify a service.

This is the third in a series on Apache CXF; the first one focuses on the usage of CXF as a JAX-WS fully compliant implementation. The second article provides a guide on how to use CXF with Spring.

2. Maven Dependencies

The first required dependency is org.apache.cxf:cxf-rt-frontend-jaxrs. This artifact provides JAX-RS APIs as well as a CXF implementation:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxrs</artifactId>
    <version>3.1.7</version>
</dependency>

In this tutorial, we use CXF to create a Server endpoint to publish a web service instead of using a servlet container. Therefore, the following dependency needs to be included in the Maven POM file:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-transports-http-jetty</artifactId>
    <version>3.1.7</version>
</dependency>

Finally, let’s add the HttpClient library to facilitate unit tests:

<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>

Here you can find the latest version of the cxf-rt-frontend-jaxrs dependency. You may also want to refer to this link for the latest versions of the org.apache.cxf:cxf-rt-transports-http-jetty artifacts. Finally, the latest version of httpclient can be found here.

3. Resource Classes and Request Mapping

Let’s start implementing a simple example; we’re going to set up our REST API with two resources Course and Student.

We’ll start simple and move towards a more complex example as we go.

3.1. The Resources

Here is the definition of the Student resource class:

@XmlRootElement(name = "Student")
public class Student {
    private int id;
    private String name;

    // standard getters and setters
    // standard equals and hashCode implementations

}

Notice we’re using the @XmlRootElement annotation to tell JAXB that instances of this class should be marshaled to XML.

Next, comes the definition of the Course resource class:

@XmlRootElement(name = "Course")
public class Course {
    private int id;
    private String name;
    private List<Student> students = new ArrayList<>();

    private Student findById(int id) {
        for (Student student : students) {
            if (student.getId() == id) {
                return student;
            }
        }
        return null;
    }
    // standard getters and setters
    // standard equals and hashCode implementations
    
}

Finally, let’s implement the CourseRepository – which is the root resource and serves as the entry point to web service resources:

@Path("course")
@Produces("text/xml")
public class CourseRepository {
    private Map<Integer, Course> courses = new HashMap<>();

    // request handling methods

    private Course findById(int id) {
        for (Map.Entry<Integer, Course> course : courses.entrySet()) {
            if (course.getKey() == id) {
                return course.getValue();
            }
        }
        return null;
    }
}

Notice the mapping with the @Path annotation. The CourseRepository is the root resource here, so it’s mapped to handle all URLs starting with course.

The value of the @Produces annotation tells the server to convert objects returned from methods within this class to XML documents before sending them to clients. We’re using JAXB here as the default since no other binding mechanism is specified.

3.2. Simple Data Setup

Because this is a simple example implementation, we’re using in-memory data instead of a full-fledged persistent solution.

With that in mind, let’s implement some simple setup logic in an instance initializer block of CourseRepository to populate some data into the system:

{
    Student student1 = new Student();
    Student student2 = new Student();
    student1.setId(1);
    student1.setName("Student A");
    student2.setId(2);
    student2.setName("Student B");

    List<Student> course1Students = new ArrayList<>();
    course1Students.add(student1);
    course1Students.add(student2);

    Course course1 = new Course();
    Course course2 = new Course();
    course1.setId(1);
    course1.setName("REST with Spring");
    course1.setStudents(course1Students);
    course2.setId(2);
    course2.setName("Learn Spring Security");

    courses.put(1, course1);
    courses.put(2, course2);
}

Methods within this class that take care of HTTP requests are covered in the next subsection.

3.3. The API – Request Mapping Methods

Now, let’s go to the implementation of the actual REST API.

We’re going to start adding API operations – using the @Path annotation – right in the resource POJOs.

It’s important to understand that this is a significant difference from the approach in a typical Spring project – where the API operations would be defined in a controller, not on the POJO itself.

Let’s start with mapping methods defined inside the Course class:

@GET
@Path("{studentId}")
public Student getStudent(@PathParam("studentId")int studentId) {
    return findById(studentId);
}

Simply put, the method is invoked when handling GET requests, denoted by the @GET annotation.

Notice the simple syntax of mapping the studentId path parameter from the HTTP request.

We’re then simply using the findById helper method to return the corresponding Student instance.

The following method handles POST requests, indicated by the @POST annotation, by adding the received Student object to the students list:

@POST
@Path("")
public Response createStudent(Student student) {
    for (Student element : students) {
        if (element.getId() == student.getId()) {
            return Response.status(Response.Status.CONFLICT).build();
        }
    }
    students.add(student);
    return Response.ok(student).build();
}

This returns a 200 OK response if the create operation was successful, or 409 Conflict if an object with the submitted id already exists.

Also note that we can skip the @Path annotation since its value is an empty String.

The last method takes care of DELETE requests. It removes an element from the students list whose id is the received path parameter and returns a response with OK (200) status. In case there are no elements associated with the specified id, which implies there is nothing to be removed, this method returns a response with Not Found (404) status:

@DELETE
@Path("{studentId}")
public Response deleteStudent(@PathParam("studentId") int studentId) {
    Student student = findById(studentId);
    if (student == null) {
        return Response.status(Response.Status.NOT_FOUND).build();
    }
    students.remove(student);
    return Response.ok().build();
}

Let’s move on to request mapping methods of the CourseRepository class.

The following getCourse method returns a Course object that is the value of an entry in the courses map whose key is the received courseId path parameter of a GET request. Internally, the method dispatches the path parameter to the findById helper method to do its job:

@GET
@Path("courses/{courseId}")
public Course getCourse(@PathParam("courseId") int courseId) {
    return findById(courseId);
}

The following method updates an existing entry of the courses map, where the body of the received PUT request is the entry value and the courseId parameter is the associated key:

@PUT
@Path("courses/{courseId}")
public Response updateCourse(@PathParam("courseId") int courseId, Course course) {
    Course existingCourse = findById(courseId);        
    if (existingCourse == null) {
        return Response.status(Response.Status.NOT_FOUND).build();
    }
    if (existingCourse.equals(course)) {
        return Response.notModified().build();    
    }
    courses.put(courseId, course);
    return Response.ok().build();
}

This updateCourse method returns a response with OK (200) status if the update is successful. If the existing and uploaded objects have the same field values, it changes nothing and returns a Not Modified (304) response. In case a Course instance with the given id is not found in the courses map, the method returns a response with Not Found (404) status.

The third method of this root resource class does not directly handle any HTTP request. Instead, it delegates requests to the Course class, where they are handled by matching methods:

@Path("courses/{courseId}/students")
public Course pathToStudent(@PathParam("courseId") int courseId) {
    return findById(courseId);
}

We have already shown the methods within the Course class that process these delegated requests.

4. Server Endpoint

This section focuses on the construction of a CXF server, which is used for publishing the RESTful web service whose resources are depicted in the preceding section. The first step is to instantiate a JAXRSServerFactoryBean object and set the root resource class:

JAXRSServerFactoryBean factoryBean = new JAXRSServerFactoryBean();
factoryBean.setResourceClasses(CourseRepository.class);

A resource provider then needs to be set on the factory bean to manage the life cycle of the root resource class. We use the default singleton resource provider that returns the same resource instance to every request:

factoryBean.setResourceProvider(
  new SingletonResourceProvider(new CourseRepository()));

We also set an address to indicate the URL where the web service is published:

factoryBean.setAddress("http://localhost:8080/");

Now the factoryBean can be used to create a new server that will start listening for incoming connections:

Server server = factoryBean.create();

All the code above in this section should be wrapped in the main method:

public class RestfulServer {
    public static void main(String args[]) throws Exception {
        // code snippets shown above
    }
}

The invocation of this main method is presented in section 6.

5. Test Cases

This section describes test cases used to validate the web service we created before. Those tests validate resource states of the service after responding to HTTP requests of the four most commonly used methods, namely GET, POST, PUT, and DELETE.

5.1. Preparation

First, two static fields are declared within the test class, RestfulTest:

private static String BASE_URL = "http://localhost:8080/baeldung/courses/";
private static CloseableHttpClient client;

Before running the tests, we create a client object used to communicate with the server, and we close it afterward:

@BeforeClass
public static void createClient() {
    client = HttpClients.createDefault();
}
    
@AfterClass
public static void closeClient() throws IOException {
    client.close();
}

The client instance is now ready to be used by test cases.

5.2. GET Requests

In the test class, we define two methods to send GET requests to the server running the web service.

The first method is to get a Course instance given its id in the resource:

private Course getCourse(int courseOrder) throws IOException {
    URL url = new URL(BASE_URL + courseOrder);
    InputStream input = url.openStream();
    Course course
      = JAXB.unmarshal(new InputStreamReader(input), Course.class);
    return course;
}

The second is to get a Student instance given the ids of the course and student in the resource:

private Student getStudent(int courseOrder, int studentOrder)
  throws IOException {
    URL url = new URL(BASE_URL + courseOrder + "/students/" + studentOrder);
    InputStream input = url.openStream();
    Student student
      = JAXB.unmarshal(new InputStreamReader(input), Student.class);
    return student;
}

These methods send HTTP GET requests to the service resource, then unmarshal XML responses to instances of the corresponding classes. Both are used to verify service resource states after executing POST, PUT, and DELETE requests.

5.3. POST Requests

This subsection features two test cases for POST requests, illustrating operations of the web service when the uploaded Student instance leads to a conflict and when it is successfully created.

In the first test, we use a Student object unmarshaled from the conflict_student.xml file, located on the classpath with the following content:

<Student>
    <id>2</id>
    <name>Student B</name>
</Student>

This is how that content is converted to a POST request body:

HttpPost httpPost = new HttpPost(BASE_URL + "1/students");
InputStream resourceStream = this.getClass().getClassLoader()
  .getResourceAsStream("conflict_student.xml");
httpPost.setEntity(new InputStreamEntity(resourceStream));

The Content-Type header is set to tell the server that the content type of the request is XML:

httpPost.setHeader("Content-Type", "text/xml");

Since the uploaded Student object already exists in the first Course instance, we expect the creation to fail and a response with Conflict (409) status to be returned. The following code snippet verifies that expectation:

HttpResponse response = client.execute(httpPost);
assertEquals(409, response.getStatusLine().getStatusCode());

In the next test, we extract the body of an HTTP request from a file named created_student.xml, also on the classpath. Here is the content of the file:

<Student>
    <id>3</id>
    <name>Student C</name>
</Student>

Similar to the previous test case, we build and execute a request, then verify that a new instance is successfully created:

HttpPost httpPost = new HttpPost(BASE_URL + "2/students");
InputStream resourceStream = this.getClass().getClassLoader()
  .getResourceAsStream("created_student.xml");
httpPost.setEntity(new InputStreamEntity(resourceStream));
httpPost.setHeader("Content-Type", "text/xml");
        
HttpResponse response = client.execute(httpPost);
assertEquals(200, response.getStatusLine().getStatusCode());

We can then confirm the new state of the web service resource:

Student student = getStudent(2, 3);
assertEquals(3, student.getId());
assertEquals("Student C", student.getName());

This is what the XML response to a request for the new Student object looks like:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Student>
    <id>3</id>
    <name>Student C</name>
</Student>

5.4. PUT Requests

Let’s start with an invalid update request, where the Course object being updated does not exist. Here is the content of the instance used to replace a non-existent Course object in the web service resource:

<Course>
    <id>3</id>
    <name>Apache CXF Support for RESTful</name>
</Course>

That content is stored in a file called non_existent_course.xml on the classpath. It is extracted and then used to populate the body of a PUT request by the code below:

HttpPut httpPut = new HttpPut(BASE_URL + "3");
InputStream resourceStream = this.getClass().getClassLoader()
  .getResourceAsStream("non_existent_course.xml");
httpPut.setEntity(new InputStreamEntity(resourceStream));

The Content-Type header is set to tell the server that the content type of the request is XML:

httpPut.setHeader("Content-Type", "text/xml");

Since we intentionally sent an invalid request to update a non-existent object, a Not Found (404) response is expected to be received. The response is validated:

HttpResponse response = client.execute(httpPut);
assertEquals(404, response.getStatusLine().getStatusCode());

In the second test case for PUT requests, we submit a Course object with the same field values. Since nothing is changed in this case, we expect that a response with Not Modified (304) status is returned. The whole process is illustrated:

HttpPut httpPut = new HttpPut(BASE_URL + "1");
InputStream resourceStream = this.getClass().getClassLoader()
  .getResourceAsStream("unchanged_course.xml");
httpPut.setEntity(new InputStreamEntity(resourceStream));
httpPut.setHeader("Content-Type", "text/xml");
        
HttpResponse response = client.execute(httpPut);
assertEquals(304, response.getStatusLine().getStatusCode());

Here unchanged_course.xml is the file on the classpath holding the data used for the update. This is its content:

<Course>
    <id>1</id>
    <name>REST with Spring</name>
</Course>

In the last demonstration of PUT requests, we execute a valid update. The following is the content of the changed_course.xml file, which is used to update a Course instance in the web service resource:

<Course>
    <id>2</id>
    <name>Apache CXF Support for RESTful</name>
</Course>

This is how the request is built and executed:

HttpPut httpPut = new HttpPut(BASE_URL + "2");
InputStream resourceStream = this.getClass().getClassLoader()
  .getResourceAsStream("changed_course.xml");
httpPut.setEntity(new InputStreamEntity(resourceStream));
httpPut.setHeader("Content-Type", "text/xml");

Let’s execute the PUT request against the server and validate a successful update:

HttpResponse response = client.execute(httpPut);
assertEquals(200, response.getStatusLine().getStatusCode());

Let’s verify the new states of the web service resource:

Course course = getCourse(2);
assertEquals(2, course.getId());
assertEquals("Apache CXF Support for RESTful", course.getName());

The following code snippet shows the content of the XML response when a GET request for the previously uploaded Course object is sent:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Course>
    <id>2</id>
    <name>Apache CXF Support for RESTful</name>
</Course>

5.5. DELETE Requests

First, let’s try to delete a non-existent Student instance. The operation should fail and a corresponding response with Not Found (404) status is expected:

HttpDelete httpDelete = new HttpDelete(BASE_URL + "1/students/3");
HttpResponse response = client.execute(httpDelete);
assertEquals(404, response.getStatusLine().getStatusCode());

In the second test case for DELETE requests, we create, execute and verify a request:

HttpDelete httpDelete = new HttpDelete(BASE_URL + "1/students/1");
HttpResponse response = client.execute(httpDelete);
assertEquals(200, response.getStatusLine().getStatusCode());

We verify the new state of the web service resource with the following code snippet:

Course course = getCourse(1);
assertEquals(1, course.getStudents().size());
assertEquals(2, course.getStudents().get(0).getId());
assertEquals("Student B", course.getStudents().get(0).getName());

Next, we list the XML response that is received after a request for the first Course object in the web service resource:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Course>
    <id>1</id>
    <name>REST with Spring</name>
    <students>
        <id>2</id>
        <name>Student B</name>
    </students>
</Course>

It is clear that the first Student has successfully been removed.

6. Test Execution

Section 4 described how to create and destroy a Server instance in the main method of the RestfulServer class.

The last step to get the server up and running is to invoke that main method. To achieve that, the Exec Maven plugin is included and configured in the Maven POM file:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.5.0</version>
    <configuration>
        <mainClass>
          com.baeldung.cxf.jaxrs.implementation.RestfulServer
        </mainClass>
    </configuration>
</plugin>

The latest version of this plugin can be found via this link.

When compiling and packaging the artifact illustrated in this tutorial, the Maven Surefire plugin automatically executes all tests enclosed in classes whose names start or end with Test. Since our tests need the server to be running first, the plugin should be configured to exclude them:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <excludes>
            <exclude>**/ServiceTest</exclude>
        </excludes>
    </configuration>
</plugin>

With the above configuration, ServiceTest is excluded since it is the name of the test class. You may choose any name for that class, provided the tests contained therein are not run by the Maven Surefire plugin before the server is ready for connections.

For the latest version of Maven Surefire plugin, please check here.

Now you can execute the exec:java goal to start the RESTful web service server and then run the above tests using an IDE. Equivalently, you can run the tests by executing the command mvn -Dtest=ServiceTest test in a terminal.
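In terminal form, the two steps above might look like the following sketch (run from the project root in two separate terminals, since exec:java blocks while the server is running):

```shell
# Terminal 1: compile and start the RESTful server via the Exec Maven plugin
mvn compile exec:java

# Terminal 2: run only the ServiceTest class against the running server
mvn -Dtest=ServiceTest test
```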

7. Conclusion

This tutorial illustrated the use of Apache CXF as a JAX-RS implementation. It demonstrated how the framework could be used to define resources for a RESTful web service and to create a server for publishing the service.

The implementation of all these examples and code snippets can be found in the GitHub project.

The Master Class of my "REST With Spring" Course is finally out:

>> CHECK OUT THE CLASSES

DynamoDB in a Spring Boot Application Using Spring Data


I usually post about Persistence on Twitter - you can follow me there.

1. Overview

In this article, we’ll explore the basics of integrating DynamoDB into a Spring Boot Application with a hands-on, practical example project.

We’ll demonstrate how to configure an application to use a local DynamoDB instance using Spring Data. We’ll also create an example data model and repository class as well as perform actual database operations using an integration test.

2. DynamoDB

DynamoDB is a fully-managed hosted NoSQL database on AWS, similar to other NoSQL databases such as Cassandra or MongoDB. DynamoDB offers fast, consistent and predictable performance and is massively scalable.

You can learn more about DynamoDB on the AWS Documentation.

Let’s install a local instance of DynamoDB to avoid incurring the cost of running a live instance.

For development, running DynamoDB locally makes more sense than running on AWS; the local instance will be run as an executable JAR file.

You can find instructions on how to run DynamoDB locally here.
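As a quick sketch (assuming the DynamoDB Local archive has already been downloaded and extracted into the current directory), the local instance can be started on its default port 8000 — the same port our configuration below points at:

```shell
# Start DynamoDB Local on the default port 8000;
# -sharedDb uses a single database file regardless of credentials and region
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
```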

3. Maven Dependencies

Add the following dependencies to start working with DynamoDB using Spring Data:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-releasetrain</artifactId>
            <version>Gosling-SR1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-dynamodb</artifactId>
        <version>1.11.34</version>
    </dependency>
    <dependency>
        <groupId>com.github.derjust</groupId>
        <artifactId>spring-data-dynamodb</artifactId>
        <version>4.3.1</version>
    </dependency>
</dependencies>

4. Configuration

Next, let’s define the following properties in the application.properties file:

amazon.dynamodb.endpoint=http://localhost:8000/
amazon.aws.accesskey=key
amazon.aws.secretkey=key2

The access and secret keys listed above are just arbitrary values for your local config. When accessing a local instance of DynamoDB, these fields must be populated with some value, but the values are not used for actual authentication.

The properties will be dynamically pulled out of the application.properties file in the Spring config:

@Configuration
@EnableDynamoDBRepositories
  (basePackages = "com.baeldung.spring.data.dynamodb.repositories")
public class DynamoDBConfig {

    @Value("${amazon.dynamodb.endpoint}")
    private String amazonDynamoDBEndpoint;

    @Value("${amazon.aws.accesskey}")
    private String amazonAWSAccessKey;

    @Value("${amazon.aws.secretkey}")
    private String amazonAWSSecretKey;

    @Bean
    public AmazonDynamoDB amazonDynamoDB() {
        AmazonDynamoDB amazonDynamoDB 
          = new AmazonDynamoDBClient(amazonAWSCredentials());
        
        if (!StringUtils.isEmpty(amazonDynamoDBEndpoint)) {
            amazonDynamoDB.setEndpoint(amazonDynamoDBEndpoint);
        }
        
        return amazonDynamoDB;
    }

    @Bean
    public AWSCredentials amazonAWSCredentials() {
        return new BasicAWSCredentials(
          amazonAWSAccessKey, amazonAWSSecretKey);
    }
}

5. The Data Model

Let’s now create a POJO model to represent the data stored in DynamoDB.

This POJO will use annotations similar to those used in Hibernate to define the table name, attributes, keys and other aspects of the table.

5.1. Data Model Attributes

The following class, ProductInfo, represents a table whose items contain three attributes:

  1. ID
  2. MSRP
  3. Cost

5.2. Java Data Model Class

Let’s create a file called ProductInfo.java in your data model folder:

@DynamoDBTable(tableName = "ProductInfo")
public class ProductInfo {
    private String id;
    private String msrp;
    private String cost;

    @DynamoDBHashKey
    @DynamoDBAutoGeneratedKey
    public String getId() {
        return id;
    }

    @DynamoDBAttribute
    public String getMsrp() {
        return msrp;
    }

    @DynamoDBAttribute
    public String getCost() {
        return cost;
    }

    // standard setters/constructors
}

6. CRUD Repository

Next, we need to create a ProductInfoRepository interface to define the CRUD functionality we want to build out. Repositories used to read and persist data to and from DynamoDB will implement this interface:

@EnableScan
public interface ProductInfoRepository extends 
  CrudRepository<ProductInfo, String> {
    
    List<ProductInfo> findById(String id);
}

7. Integration Test

Next, let’s create an integration test to ensure we can successfully connect to the local instance of DynamoDB:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@IntegrationTest
@ActiveProfiles("local")
@TestPropertySource(properties = { 
  "amazon.dynamodb.endpoint=http://localhost:8000/", 
  "amazon.aws.accesskey=test1", 
  "amazon.aws.secretkey=test231" })
public class ProductInfoRepositoryIntegrationTest {

    private DynamoDBMapper dynamoDBMapper;

    @Autowired
    private AmazonDynamoDB amazonDynamoDB;

    @Autowired
    ProductInfoRepository repository;

    private static final String EXPECTED_COST = "20";
    private static final String EXPECTED_PRICE = "50";

    @Before
    public void setup() throws Exception {
        dynamoDBMapper = new DynamoDBMapper(amazonDynamoDB);
        
        CreateTableRequest tableRequest = dynamoDBMapper
          .generateCreateTableRequest(ProductInfo.class);
        tableRequest.setProvisionedThroughput(
          new ProvisionedThroughput(1L, 1L));
        amazonDynamoDB.createTable(tableRequest);
        
        //...

        dynamoDBMapper.batchDelete(
          (List<ProductInfo>)repository.findAll());
    }

    @Test
    public void sampleTestCase() {
        ProductInfo productInfo = new ProductInfo(EXPECTED_COST, EXPECTED_PRICE);
        repository.save(productInfo);

        List<ProductInfo> result 
          = (List<ProductInfo>) repository.findAll();
        
        assertTrue("Not empty", result.size() > 0);
        assertTrue("Contains item with expected cost", 
          result.get(0).getCost().equals(EXPECTED_COST));
    }
}

8. Conclusion

And we’re done – we can now connect to DynamoDB from a Spring Boot Application.

Of course, after completing testing locally, we should be able to transparently use a live instance of DynamoDB on AWS and run the deployed code with only minor configuration changes.
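For instance, pointing the application at a live instance could come down to overriding the same three properties — a sketch with hypothetical values (the regional endpoint and the IAM credential placeholders are assumptions, not taken from this article):

```properties
# Hypothetical production overrides for application.properties;
# replace the placeholders with real IAM credentials for your account
amazon.dynamodb.endpoint=https://dynamodb.us-east-1.amazonaws.com
amazon.aws.accesskey=<your-iam-access-key>
amazon.aws.secretkey=<your-iam-secret-key>
```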

As always, the example used in this article is available as a sample project over on GitHub.


