Channel: Baeldung

CDI Interceptor vs Spring AspectJ


I just released the Master Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Introduction

The Interceptor pattern is generally used to add new, cross-cutting functionality or logic in an application, and has solid support in a large number of libraries.

In this article we’ll cover and contrast two of these approaches: CDI interceptors and Spring AspectJ.

2. CDI Interceptor Project Setup

CDI is an official part of Java EE, but some implementations also allow using CDI in a Java SE environment; Weld is one such CDI implementation that supports Java SE.

In order to use CDI we need to import the Weld library in our POM:

<dependency>
    <groupId>org.jboss.weld.se</groupId>
    <artifactId>weld-se-core</artifactId>
    <version>2.3.5.Final</version>
</dependency>

The most recent version of the Weld library can be found in the Maven repository.

Let’s now create a simple interceptor.

3. Introducing the CDI Interceptor

In order to designate the classes we need to intercept, let’s create the interceptor binding:

@InterceptorBinding
@Target( { METHOD, TYPE } )
@Retention( RUNTIME )
public @interface Audited {
}

After we’ve defined the interceptor binding we need to define the actual interceptor implementation:

@Audited
@Interceptor
public class AuditedInterceptor {
    public static boolean calledBefore = false;
    public static boolean calledAfter = false;

    @AroundInvoke
    public Object auditMethod(InvocationContext ctx) throws Exception {
        calledBefore = true;
        Object result = ctx.proceed();
        calledAfter = true;
        return result;
    }
}

Every @AroundInvoke method takes a javax.interceptor.InvocationContext argument, returns a java.lang.Object, and can throw an Exception.

And so, when we annotate a method with the @Audited annotation, auditMethod will be invoked first, and only then will the target method proceed.
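
The before/proceed/after flow of an @AroundInvoke method can be illustrated without a container; this plain-Java sketch mimics the sequence using a Callable in place of InvocationContext and is not part of the CDI API:

```java
import java.util.concurrent.Callable;

public class AroundInvokeSketch {
    static boolean calledBefore = false;
    static boolean calledAfter = false;

    // Mirrors the @AroundInvoke contract: run logic before,
    // delegate to the target via proceed(), then run logic after
    static <T> T audit(Callable<T> proceed) throws Exception {
        calledBefore = true;
        T result = proceed.call();
        calledAfter = true;
        return result;
    }
}
```

Calling audit(() -> "123456") returns "123456" and flips both flags, which is exactly what the test in Section 4 verifies against the real interceptor.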

4. Applying the CDI Interceptor

Let’s apply the created interceptor on some business logic:

public class SuperService {
    @Audited
    public String deliverService(String uid) {
        return uid;
    }
}

We’ve created this simple service and annotated the method we wanted to intercept with the @Audited annotation.

To enable the CDI interceptor, we need to specify the fully qualified class name in the beans.xml file, located in the META-INF directory:

<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
      http://java.sun.com/xml/ns/javaee/beans_1_2.xsd">
    <interceptors>
        <class>com.baeldung.interceptor.AuditedInterceptor</class>
    </interceptors>
</beans>

To validate that the interceptor indeed works, let’s now run the following test:

public class TestInterceptor {
    Weld weld;
    WeldContainer container;

    @Before
    public void init() {
        weld = new Weld();
        container = weld.initialize();
    }

    @After
    public void shutdown() {
        weld.shutdown();
    }

    @Test
    public void givenTheService_whenMethodAndInterceptorExecuted_thenOK() {
        SuperService superService = container.select(SuperService.class).get();
        String code = "123456";
        superService.deliverService(code);
        
        Assert.assertTrue(AuditedInterceptor.calledBefore);
        Assert.assertTrue(AuditedInterceptor.calledAfter);
    }
}

In this quick test, we first get the SuperService bean from the container, then invoke the business method deliverService on it and check that the AuditedInterceptor was actually called by validating its state variables.

We also have @Before and @After annotated methods, in which we initialize and shut down the Weld container, respectively.

5. CDI Considerations

We can point out the following advantages of CDI interceptors:

  • It is a standard feature of the Java EE specification
  • Some CDI implementations can be used in Java SE
  • Can be used when a project has severe limitations on third-party libraries

The disadvantages of the CDI interceptors are the following:

  • Tight coupling between the business logic class and the interceptor
  • It is hard to see which classes are intercepted in the project
  • Lack of a flexible mechanism to apply interceptors to a group of methods

6. Spring AspectJ

Spring supports a similar implementation of interceptor functionality using AspectJ syntax as well.

First, we need to add the following Spring and AspectJ dependencies to our POM:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>4.3.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>1.8.9</version>
</dependency>

The most recent versions of spring-context and aspectjweaver can be found in the Maven repository.

We can now create a simple aspect using AspectJ annotation syntax:

@Aspect
public class SpringTestAspect {
    @Autowired
    private List accumulator;

    @Around("execution(* com.baeldung.spring.service.SpringSuperService.*(..))")
    public Object auditMethod(ProceedingJoinPoint jp) throws Throwable {
        String methodName = jp.getSignature().getName();
        accumulator.add("Call to " + methodName);
        Object obj = jp.proceed();
        accumulator.add("Method called successfully: " + methodName);
        return obj;
    }
}

We created an aspect which applies to all the methods of SpringSuperService class – which, for simplicity, looks like this:

public class SpringSuperService {
    public String getInfoFromService(String code) {
        return code;
    }
}

7. Applying the Spring AspectJ Aspect

In order to validate that aspect really applies to the service, let’s write the following unit test:

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = { AppConfig.class })
public class TestSpringInterceptor {
    @Autowired
    SpringSuperService springSuperService;

    @Autowired
    private List accumulator;

    @Test
    public void givenService_whenServiceAndAspectExecuted_thenOk() {
        String code = "123456";
        String result = springSuperService.getInfoFromService(code);
        
        Assert.assertThat(accumulator.size(), is(2));
        Assert.assertThat(accumulator.get(0), is("Call to getInfoFromService"));
        Assert.assertThat(accumulator.get(1), is("Method called successfully: getInfoFromService"));
    }
}

In this test we inject our service, call the method and check the result.

Here’s what the configuration looks like:

@Configuration
@EnableAspectJAutoProxy
public class AppConfig {
    @Bean
    public SpringSuperService springSuperService() {
        return new SpringSuperService();
    }

    @Bean
    public SpringTestAspect springTestAspect() {
        return new SpringTestAspect();
    }

    @Bean
    public List getAccumulator() {
        return new ArrayList();
    }
}

One important thing here is the @EnableAspectJAutoProxy annotation – which enables support for handling components marked with AspectJ’s @Aspect annotation, similar to the functionality of Spring’s <aop:aspectj-autoproxy> XML element.

8. Spring AspectJ Considerations

We can point out the following advantages of Spring AspectJ:

  • Interceptors are decoupled from the business logic
  • Interceptors can benefit from dependency injection
  • An interceptor holds all of its configuration information in itself
  • Adding new interceptors doesn’t require augmenting existing code
  • Interceptors have a flexible mechanism to choose which methods to intercept
  • Can be used in both Java EE and Java SE

The disadvantages of Spring AspectJ are the following:

  • Need to know AspectJ syntax in order to develop interceptors
  • The learning curve for AspectJ interceptors is steeper than for CDI interceptors

9. Conclusion

In this article we have covered two implementations of the interceptor pattern: CDI interceptors and Spring AspectJ, along with the advantages and disadvantages of each.

Source code for examples of this article can be found in our repository on GitHub.



Pagination with Spring REST and AngularJS table



1. Overview

In this article we will mainly focus on implementing server side pagination in a Spring REST API and a simple AngularJS frontend.

We’ll also explore a commonly used table grid in Angular named UI Grid.

2. Dependencies

Here we detail various dependencies that are required for this article.

2.1. JavaScript

In order for Angular UI Grid to work, we need its scripts imported in our HTML; the full set of imports is shown in the index.html in Section 4.1.

2.2. Maven

For our backend we will be using Spring Boot, so we’ll need the below dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

Note: other dependencies were not specified here; for the full list, check the complete pom.xml in the GitHub project.

3. About the Application

The application is a simple student directory app which allows users to see student details in a paginated table grid.

The application uses Spring Boot and runs in an embedded Tomcat server with an embedded database.

Finally, on the API side of things, there are a few ways to do pagination, described in the REST Pagination in Spring article here – which is highly recommended reading in conjunction with this article.

Our solution here is simple – having the paging information in a URI query as follows:  /student/get?page=1&size=2.
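
Building such a query URI is straightforward string concatenation; a minimal plain-Java sketch (PagingUri is a hypothetical helper, used only for illustration):

```java
public class PagingUri {
    // Builds e.g. "/student/get?page=1&size=2" for the paging query above
    static String pageUri(String base, int page, int size) {
        return base + "?page=" + page + "&size=" + size;
    }
}
```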

4. The Client Side

First we need to create the client-side logic.

4.1. The UI-Grid

Our index.html will have the imports we need and a simple implementation of the table grid:

<!DOCTYPE html>
<html lang="en" ng-app="app">
    <head>
        <link rel="stylesheet" href="https://cdn.rawgit.com/angular-ui/
          bower-ui-grid/master/ui-grid.min.css">
        <script src="https://ajax.googleapis.com/ajax/libs/angularjs/
          1.5.6/angular.min.js"></script>
        <script src="https://cdn.rawgit.com/angular-ui/bower-ui-grid/
          master/ui-grid.min.js"></script>
        <script src="view/app.js"></script>
    </head>
    <body>
        <div ng-controller="StudentCtrl as vm">
            <div ui-grid="gridOptions" class="grid" ui-grid-pagination>
            </div>
        </div>
    </body>
</html>

Let’s have a closer look at the code:

  • ng-app – the Angular directive that loads the module app; all elements under it are part of the app module
  • ng-controller – the Angular directive that loads the controller StudentCtrl with an alias of vm; all elements under it are part of the StudentCtrl controller
  • ui-grid – the Angular directive that belongs to Angular ui-grid and uses gridOptions as its default settings; gridOptions is declared under $scope in app.js

4.2. The AngularJS Module

Let’s first define the module in app.js:

var app = angular.module('app', ['ui.grid','ui.grid.pagination']);

We declared the app module and we injected ui.grid to enable UI-Grid functionality; we also injected ui.grid.pagination to enable pagination support.

Next we’ll define the controller:

app.controller('StudentCtrl', ['$scope','StudentService', 
    function ($scope, StudentService) {
        var paginationOptions = {
            pageNumber: 1,
            pageSize: 5,
            sort: null
        };

    StudentService.getStudents(
      paginationOptions.pageNumber,
      paginationOptions.pageSize).success(function(data){
        $scope.gridOptions.data = data.content;
        $scope.gridOptions.totalItems = data.totalElements;
      });

    $scope.gridOptions = {
        paginationPageSizes: [5, 10, 20],
        paginationPageSize: paginationOptions.pageSize,
        enableColumnMenus: false,
        useExternalPagination: true,
        columnDefs: [
           { name: 'id' },
           { name: 'name' },
           { name: 'gender' },
           { name: 'age' }
        ],
        onRegisterApi: function(gridApi) {
           $scope.gridApi = gridApi;
           gridApi.pagination.on.paginationChanged(
             $scope, 
             function (newPage, pageSize) {
               paginationOptions.pageNumber = newPage;
               paginationOptions.pageSize = pageSize;
               StudentService.getStudents(newPage,pageSize)
                 .success(function(data){
                   $scope.gridOptions.data = data.content;
                   $scope.gridOptions.totalItems = data.totalElements;
                 });
            });
        }
    };
}]);

Let’s now have a look at the custom pagination settings in $scope.gridOptions:

  • paginationPageSizes – defines the available page size options
  • paginationPageSize – defines the default page size
  • enableColumnMenus – is used to enable/disable the menu on columns
  • useExternalPagination – is required if you are paginating on the server side
  • columnDefs – the column names that will be automatically mapped to the JSON object returned from the server; the field names in the JSON object and the defined column names should match
  • onRegisterApi – allows us to register public-method events inside the grid; here we registered gridApi.pagination.on.paginationChanged to tell UI-Grid to trigger this function whenever the page is changed

And to send the request to the API:

app.service('StudentService',['$http', function ($http) {

    function getStudents(pageNumber, size) {
        // UI-Grid pages are 1-based; the backend expects 0-based indices
        pageNumber = pageNumber > 0 ? pageNumber - 1 : 0;
        return $http({
            method: 'GET',
            url: 'student/get?page=' + pageNumber + '&size=' + size
        });
    }
    return {
        getStudents: getStudents
    };
}]);
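
The conversion above (UI-Grid reports 1-based page numbers, while the backend’s PageRequest is 0-based) can be sketched in plain Java; PageIndex and toZeroBased are hypothetical names used only for illustration:

```java
public class PageIndex {
    // UI-Grid page numbers start at 1; Spring Data's PageRequest is 0-based,
    // so page 1 maps to index 0, page 2 to index 1, and so on
    static int toZeroBased(int uiPage) {
        return uiPage > 0 ? uiPage - 1 : 0;
    }
}
```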

5. The Backend and the API

5.1. The RESTful Service

Here’s the simple RESTful API implementation with pagination support:

@RestController
public class StudentDirectoryRestController {

    @Autowired
    private StudentService service;

    @RequestMapping(
      value = "/student/get", 
      params = { "page", "size" }, 
      method = RequestMethod.GET
    )
    public Page<Student> findPaginated(
      @RequestParam("page") int page, @RequestParam("size") int size) {

        Page<Student> resultPage = service.findPaginated(page, size);
        if (page > resultPage.getTotalPages()) {
            throw new MyResourceNotFoundException();
        }

        return resultPage;
    }
}

The @RestController was introduced in Spring 4.0 as a convenience annotation which implicitly declares @Controller and @ResponseBody.

For our API, we declared it to accept two parameters, page and size, which also determine the number of records to return to the client.

We also added a simple validation that will throw a MyResourceNotFoundException if the page number is higher than the total pages.

Finally, we’ll return Page as the response – this is a super helpful component of Spring Data which holds pagination data.

5.2. The Service Implementation

Our service will simply return the records based on page and size provided by the controller:

@Service
public class StudentServiceImpl implements StudentService {

    @Autowired
    private StudentRepository dao;

    @Override
    public Page<Student> findPaginated(int page, int size) {
        return dao.findAll(new PageRequest(page, size));
    }
}

5.3. The Repository Implementation

For our persistence layer, we’re using an embedded database and Spring Data JPA.

First we need to set up our persistence config:

@EnableJpaRepositories("org.baeldung.web.dao")
@ComponentScan(basePackages = { "org.baeldung.web" })
@EntityScan("org.baeldung.web.entity") 
@Configuration
public class PersistenceConfig {

    @Bean
    public JdbcTemplate getJdbcTemplate() {
        return new JdbcTemplate(dataSource());
    }

    @Bean
    public DataSource dataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        EmbeddedDatabase db = builder
          .setType(EmbeddedDatabaseType.HSQL)
          .addScript("db/sql/data.sql")
          .build();
        return db;
    }
}

The persistence config is simple – we have @EnableJpaRepositories to scan the specified package and find our Spring Data JPA repository interfaces.

We have the @ComponentScan here to automatically scan for all beans and we have @EntityScan (from Spring Boot) to scan for entity classes.

We also declared our simple datasource – using an embedded database that will run the SQL script provided on startup.

Now it’s time we create our data repository:

public interface StudentRepository extends JpaRepository<Student, Long> {}

This is basically all that we need to do here; if you want to go deeper into how to set up and use the highly powerful Spring Data JPA, definitely read the guide to it here.

6. Pagination Request and Response

When calling the API – http://localhost:8080/student/get?page=1&size=5, the JSON response will look something like this:

{
    "content":[
        {"studentId":"1","name":"Bryan","gender":"Male","age":20},
        {"studentId":"2","name":"Ben","gender":"Male","age":22},
        {"studentId":"3","name":"Lisa","gender":"Female","age":24},
        {"studentId":"4","name":"Sarah","gender":"Female","age":26},
        {"studentId":"5","name":"Jay","gender":"Male","age":20}
    ],
    "last":false,
    "totalElements":20,
    "totalPages":4,
    "size":5,
    "number":0,
    "sort":null,
    "first":true,
    "numberOfElements":5
}

One thing to notice here is that the server returns an org.springframework.data.domain.Page DTO, wrapping our Student resources.

The Page object will have the following fields:

  • last – set to true if it’s the last page, otherwise false
  • first – set to true if it’s the first page, otherwise false
  • totalElements – the total number of rows/records; in our example we passed this to the ui-grid option $scope.gridOptions.totalItems to determine how many pages will be available
  • totalPages – the total number of pages, derived from (totalElements / size), rounded up
  • size – the number of records per page; this was passed from the client via the param size
  • number – the page number sent by the client; in our response the number is 0 because the backend uses a zero-based page index, so we decrement the client’s page number by 1
  • sort – the sorting parameter for the page
  • numberOfElements – the number of rows/records returned for the page
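The totalPages derivation above is a ceiling division; a small plain-Java sketch (PageMath is a hypothetical helper, not part of Spring Data):

```java
public class PageMath {
    // Ceiling division: 20 elements at size 5 -> 4 pages, 21 -> 5 pages
    static int totalPages(long totalElements, int size) {
        return (int) ((totalElements + size - 1) / size);
    }
}
```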

7. Testing Pagination

Let’s now set up a test for our pagination logic, using RestAssured; to learn more about RestAssured you can have a look at this tutorial.

7.1. Preparing the Test

For ease of development of our test class we will be adding the static imports:

import static io.restassured.RestAssured.*;
import static io.restassured.matcher.RestAssuredMatchers.*;
import static org.hamcrest.Matchers.*;

Next, we’ll set up the Spring enabled test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@IntegrationTest("server.port:8888")

The @SpringApplicationConfiguration helps Spring know how to load the ApplicationContext; in this case, we used Application.java to configure our ApplicationContext.

The @WebAppConfiguration was defined to tell Spring that the ApplicationContext to be loaded should be a WebApplicationContext.

And the @IntegrationTest was defined to trigger the application startup when running the test; this makes our REST services available for testing.

7.2. The Tests

Here is our first test case:

@Test
public void givenRequestForStudents_whenPageIsOne_expectContainsNames() {
    given().params("page", "0", "size", "2").get(ENDPOINT)
      .then()
      .assertThat().body("content.name", hasItems("Bryan", "Ben"));
}

The test case above checks that when the first page (page index 0) with size 2 is passed to the REST service, the JSON content returned from the server contains the names Bryan and Ben.

Let’s dissect the test case:

  • given – part of RestAssured, used to start building the request; you can also use with()
  • get – part of RestAssured; triggers a GET request; use post() for a POST request
  • hasItems – part of Hamcrest; checks that the returned values contain the given items

We add a few more test cases:

@Test
public void givenRequestForStudents_whenResourcesAreRetrievedPaged_thenExpect200() {
    given().params("page", "0", "size", "2").get(ENDPOINT)
      .then()
      .statusCode(200);
}

This test asserts that when the endpoint is actually called, an OK response is received:

@Test
public void givenRequestForStudents_whenSizeIsTwo_expectNumberOfElementsTwo() {
    given().params("page", "0", "size", "2").get(ENDPOINT)
      .then()
      .assertThat().body("numberOfElements", equalTo(2));
}

This test asserts that when a page size of two is requested, the number of elements returned is actually two:

@Test
public void givenResourcesExist_whenFirstPageIsRetrieved_thenPageContainsResources() {
    given().params("page", "0", "size", "2").get(ENDPOINT)
      .then()
      .assertThat().body("first", equalTo(true));
}

This test asserts that when the first page of resources is retrieved, the first field in the response is true.

There are many more tests in the repository, so definitely have a look at the github project.

8. Conclusion

This article illustrated how to implement a data table grid using UI-Grid in AngularJS and how to implement the required server side pagination.

The implementation of these examples and tests can be found in the github project. This is a Maven project, so it should be easy to import and run as it is.

To run the Spring Boot project, you can simply run mvn spring-boot:run and access it locally at http://localhost:8080/.


Integration Testing in Spring



1. Overview

Integration testing plays an important role in the application development cycle by verifying end-to-end behavior of the system.

In this article we will see how we can leverage the Spring MVC test framework in order to write and run integration tests that test controllers without explicitly starting a Servlet container.

2. Preparation

The following Maven dependencies are needed for running integration tests as described in this article. First and foremost the latest JUnit and Spring test dependencies:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>4.3.2.RELEASE</version>
    <scope>test</scope>
</dependency>

For effective asserting of results, we’re going to also use Hamcrest and JSON path:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-library</artifactId>
    <version>1.3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path</artifactId>
    <version>2.2.0</version>
    <scope>test</scope>
</dependency>

3. Spring MVC Test Configuration

Let’s now introduce how to configure and run the Spring enabled tests.

3.1. Enable Spring in Tests

First, any Spring enabled test will run with the help of @RunWith(SpringJUnit4ClassRunner.class); the runner is essentially the entry-point to start using the Spring Test framework.

We also need the @ContextConfiguration annotation to load the context configuration and bootstrap the context that the test will use.

Let’s have a look:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { ApplicationConfig.class })
@WebAppConfiguration
public class GreetControllerIntegrationTest {
    ....
}

Notice how, in @ContextConfiguration, we provided the ApplicationConfig.class config class which loads the configuration we need for this particular test.

We used a Java configuration class here to specify the context configuration; similarly we can use the XML based configuration:

@ContextConfiguration(locations={""})

Finally – the test is also annotated with @WebAppConfiguration – which will load the web application context.

By default, it looks for the root of the web application at the default path src/main/webapp; the location can be overridden by passing the value argument:

@WebAppConfiguration(value = "")

3.2. The WebApplicationContext Object

WebApplicationContext (wac) provides the web application configuration. It loads all the application beans and controllers into the context.

We’ll now be able to wire the web application context right into the test:

@Autowired
private WebApplicationContext wac;

3.3. Mocking Web Context Beans

MockMvc provides support for Spring MVC testing. It encapsulates all web application beans and makes them available for testing.

Let’s see how to use it:

private MockMvc mockMvc;
@Before
public void setup() throws Exception {
    this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();
}

We need to initialize the mockMvc object in the @Before annotated method, so that we don’t need to initialize it inside every test.

3.4. Verify Test Configuration

For our tutorial here, let’s actually verify that we’re loading the WebApplicationContext object (wac) properly. We’ll also verify that the right servletContext is being attached:

@Test
public void givenWac_whenServletContext_thenItProvidesGreetController() {
    ServletContext servletContext = wac.getServletContext();
    
    Assert.assertNotNull(servletContext);
    Assert.assertTrue(servletContext instanceof MockServletContext);
    Assert.assertNotNull(wac.getBean("greetController"));
}

Notice that we’re also checking that a greetController bean exists in the web context – which ensures that Spring beans are loaded properly.

At this point, the setup of the integration test is done. Let’s see how we can test resource methods using the MockMvc object.

4. Writing Integration Tests

In this section we’ll go over the basic operation available through the test framework.

We’ll show how to send requests with path variables and parameters. We’ll also follow with a few examples that show how to assert that the proper view name is resolved, or that the response body is as expected.

The following snippets use static imports from the MockMvcRequestBuilders and MockMvcResultMatchers classes.


4.1. Verify View Name

Let’s invoke the /homePage endpoint from our test as:

http://localhost:8080/spring-mvc-test/

or

http://localhost:8080/spring-mvc-test/homePage

Code Snippet:

@Test
public void givenHomePageURI_whenMockMVC_thenReturnsIndexJSPViewName() {
    this.mockMvc.perform(get("/homePage")).andDo(print())
      .andExpect(view().name("index"));
}

Let’s break that down:

  • perform() – calls a GET request method, which returns ResultActions; using this result we can assert expectations on the response, like content, HTTP status, headers, etc.
  • andDo(print()) – prints the request and response; this is helpful for getting a detailed view in case of an error
  • andExpect() – expects the provided argument; in our case we expect “index” to be returned via MockMvcResultMatchers.view()

4.2. Verify Response Body

We will invoke /greet endpoint from our test as:

http://localhost:8080/spring-mvc-test/greet

Expected Output:

{
    "id": 1,
    "message": "Hello World!!!"
}

Code Snippet:

@Test
public void givenGreetURI_whenMockMVC_thenVerifyResponse() {
    MvcResult mvcResult = this.mockMvc.perform(get("/greet"))
      .andDo(print()).andExpect(status().isOk())
      .andExpect(jsonPath("$.message").value("Hello World!!!"))
      .andReturn();
    
    Assert.assertEquals("application/json;charset=UTF-8", 
      mvcResult.getResponse().getContentType());
}

Let’s see exactly what’s going on:

  • andExpect(MockMvcResultMatchers.status().isOk()) verifies that the response HTTP status is OK, i.e. 200; this ensures that the request was successfully executed
  • andExpect(MockMvcResultMatchers.jsonPath(“$.message”).value(“Hello World!!!”)) verifies that the response content matches the argument “Hello World!!!“; here we used jsonPath, which extracts the response content and provides the requested value
  • andReturn() returns the MvcResult object, which is useful when we have to verify something not directly achievable through the library; here we added assertEquals to match the content type of the response, extracted from the MvcResult object

4.3. Send GET Request with Path Variable

We will invoke /greetWithPathVariable/{name} endpoint from our test as:

http://localhost:8080/spring-mvc-test/greetWithPathVariable/John

Expected Output:

{
    "id": 1,
    "message": "Hello World John!!!"
}

Code Snippet:

@Test
public void givenGreetURIWithPathVariable_whenMockMVC_thenResponseOK() {
    this.mockMvc
      .perform(get("/greetWithPathVariable/{name}", "John"))
      .andDo(print()).andExpect(status().isOk())
      
      .andExpect(content().contentType("application/json;charset=UTF-8"))
      .andExpect(jsonPath("$.message").value("Hello World John!!!"));
}

MockMvcRequestBuilders.get(“/greetWithPathVariable/{name}”, “John”) will send the request as “/greetWithPathVariable/John“.

This improves readability and makes clear which parameters are set dynamically in the URL; it also doesn’t restrict the number of path variables we can pass.

4.4. Send GET Request with Query Parameters

We will invoke /greetWithQueryVariable?name={name} endpoint from our test as:

http://localhost:8080/spring-mvc-test
  /greetWithQueryVariable?name=John%20Doe

Expected Output:

{
    "id": 1,
    "message": "Hello World John Doe!!!"
}

Code Snippet:

@Test
public void givenGreetURIWithQueryParameter_whenMockMVC_thenResponseOK() {
    this.mockMvc.perform(get("/greetWithQueryVariable")
      .param("name", "John Doe")).andDo(print()).andExpect(status().isOk())
      .andExpect(content().contentType("application/json;charset=UTF-8"))
      .andExpect(jsonPath("$.message").value("Hello World John Doe!!!"));
}

param(“name”, “John Doe”) will append the query parameter to the GET request; it is equivalent to “/greetWithQueryVariable?name=John%20Doe“.

The query parameter can also be implemented using URI template style:

this.mockMvc.perform(
  get("/greetWithQueryVariable?name={name}", "John Doe"));
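
Note that param() takes care of encoding for us; if we ever build such a URI by hand, the space in “John Doe” must be percent-encoded. A plain-Java sketch of that encoding (QueryEncoding is a hypothetical helper; requires Java 10+ for the Charset overload):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class QueryEncoding {
    // URLEncoder produces form encoding ("+" for spaces); replacing "+"
    // with "%20" yields the percent-encoded form used in the URI above
    static String encodeForUri(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8).replace("+", "%20");
    }
}
```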

4.5. Send POST Request

We will invoke /greetWithPost endpoint from our test as:

http://localhost:8080/spring-mvc-test/greetWithPost

Expected Output:

{
    "id": 1,
    "message": "Hello World!!!"
}

Code Snippet:

@Test
public void givenGreetURIWithPost_whenMockMVC_thenVerifyResponse() {
    this.mockMvc.perform(post("/greetWithPost")).andDo(print())
      .andExpect(status().isOk()).andExpect(content()
      .contentType("application/json;charset=UTF-8"))
      .andExpect(jsonPath("$.message").value("Hello World!!!"));
}

MockMvcRequestBuilders.post(“/greetWithPost”) will send the POST request. Path variables and query parameters can be set in the same way we saw earlier, whereas form data can be set via the param() method, similar to query parameters:

http://localhost:8080/spring-mvc-test/greetWithPostAndFormData

Form Data:

id=1;name=John%20Doe

Expected Output:

{
    "id": 1,
    "message": "Hello World John Doe!!!"
}

Code Snippet:

@Test
public void givenGreetURIWithPostAndFormData_whenMockMVC_thenResponseOK() {
    this.mockMvc.perform(post("/greetWithPostAndFormData").param("id", "1")
      .param("name", "John Doe")).andDo(print()).andExpect(status().isOk())
      .andExpect(content().contentType("application/json;charset=UTF-8"))
      .andExpect(jsonPath("$.message").value("Hello World John Doe!!!"))
      .andExpect(jsonPath("$.id").value(1));
}

In the above code snippet we added two parameters: id as "1" and name as "John Doe".

5. Conclusion

In this quick tutorial we implemented a few simple Spring-enabled integration tests.

We also looked at the creation of the WebApplicationContext and MockMvc objects, which played an important role in calling the endpoints of the application.

We then covered how to send GET and POST requests with variations of parameter passing, and how to verify the HTTP response status, headers and content.

Finally, the implementation of all these examples and code snippets is available on GitHub.


Guide To Running Logic on Startup in Spring


1. Introduction

In this article we’ll focus on how to run logic at the startup of a Spring application.

2. Running Logic On Startup

Running logic during/after a Spring application's startup is a common scenario, but one that can cause multiple problems.

In order to benefit from Inversion of Control, we naturally need to give up partial control over the application's flow to the container, which is why instantiation, setup logic on startup, etc. need special attention.

We can’t simply include our logic in the beans’ constructors or call methods after instantiation of any object; we are simply not in control during those processes.

Let's look at a real-life example:

@Component
public class InvalidInitExampleBean {

    @Autowired
    private Environment env;

    public InvalidInitExampleBean() {
        env.getActiveProfiles();
    }
}

Here, we're trying to access an autowired field in the constructor. When the constructor is called, the Spring bean is not yet fully initialized. This is problematic because accessing fields that are not yet injected will of course result in NullPointerExceptions.

Spring gives us a few ways of managing this situation.

2.1. The @PostConstruct Annotation

Javax’s @PostConstruct annotation can be used for annotating a method that should be run once immediately after the bean’s initialization. Keep in mind that the annotated method will be executed by Spring even if there is nothing to inject.

Here’s @PostConstruct in action:

@Component
public class PostConstructExampleBean {

    private static final Logger LOG = Logger.getLogger(PostConstructExampleBean.class);

    @Autowired
    private Environment environment;

    @PostConstruct
    public void init() {
        LOG.info(Arrays.asList(environment.getDefaultProfiles()));
    }
}

In the example above you can see that the Environment instance was safely injected and then called in the @PostConstruct annotated method without throwing a NullPointerException.

2.2. The InitializingBean Interface

The InitializingBean approach works pretty similarly to the previous one. Instead of annotating a method, you need to implement the InitializingBean interface and the afterPropertiesSet() method.

Here you can see the previous example implemented using the InitializingBean interface:

@Component
public class InitializingBeanExampleBean implements InitializingBean {

    private static final Logger LOG = Logger.getLogger(InitializingBeanExampleBean.class);

    @Autowired
    private Environment environment;

    @Override
    public void afterPropertiesSet() throws Exception {
        LOG.info(Arrays.asList(environment.getDefaultProfiles()));
    }
}

2.3. An ApplicationListener

This approach can be used for running logic after the Spring context has been initialized, so we are not focusing on any particular bean, but waiting for all of them to initialize.

In order to achieve this you need to create a bean that implements the ApplicationListener<ContextRefreshedEvent> interface:

@Component
public class StartupApplicationListenerExample implements 
  ApplicationListener<ContextRefreshedEvent> {

    private static final Logger LOG = Logger.getLogger(StartupApplicationListenerExample.class);

    public static int counter;

    @Override public void onApplicationEvent(ContextRefreshedEvent event) {
        LOG.info("Increment counter");
        counter++;
    }
}

The same results can be achieved by using the newly-introduced @EventListener annotation:

@Component
public class EventListenerExampleBean {

    private static final Logger LOG = Logger.getLogger(EventListenerExampleBean.class);

    public static int counter;

    @EventListener
    public void onApplicationEvent(ContextRefreshedEvent event) {
        LOG.info("Increment counter");
        counter++;
    }
}

In this example we chose the ContextRefreshedEvent. Make sure to pick an appropriate event that suits your needs.

2.4. The XML init-method

init-method is the XML way of executing a method after a bean's initialization.

Here is what a bean looks like:

public class InitMethodExampleBean {

    private static final Logger LOG = Logger.getLogger(InitMethodExampleBean.class);

    @Autowired
    private Environment environment;

    public void init() {
        LOG.info(Arrays.asList(environment.getDefaultProfiles()));
    }
}

Notice that no special interfaces are implemented and no special annotations are used.

And this is how a bean definition looks in an XML config:

<bean id="initMethodExampleBean"
  class="org.baeldung.startup.InitMethodExampleBean"
  init-method="init">
</bean>

2.5. Constructor Injection

If you are injecting fields using Constructor Injection, you can simply include your logic in a constructor:

@Component 
public class LogicInConstructorExampleBean {

    private static final Logger LOG = Logger.getLogger(LogicInConstructorExampleBean.class);

    private final Environment environment;

    @Autowired
    public LogicInConstructorExampleBean(Environment environment) {
        this.environment = environment;
        LOG.info(Arrays.asList(environment.getDefaultProfiles()));
    }
}

3. Combining Mechanisms

In order to achieve full control over your beans, you might want to combine the above mechanisms together.

The order of execution is as follows:

  1. The constructor
  2. The @PostConstruct annotated methods
  3. The InitializingBean's afterPropertiesSet() method
  4. The initialization method specified as init-method in XML

Let’s create a Spring bean that combines all mechanisms:

@Component
@Scope(value = "prototype")
public class AllStrategiesExampleBean implements InitializingBean {

    private static final Logger LOG = Logger.getLogger(AllStrategiesExampleBean.class);

    public AllStrategiesExampleBean() {
        LOG.info("Constructor");
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        LOG.info("InitializingBean");
    }

    @PostConstruct
    public void postConstruct() {
        LOG.info("PostConstruct");
    }

    public void init() {
        LOG.info("init-method");
    }
}

If you try to instantiate this bean, you will be able to see logs that match the order specified above:

[main] INFO o.b.startup.AllStrategiesExampleBean - Constructor
[main] INFO o.b.startup.AllStrategiesExampleBean - PostConstruct
[main] INFO o.b.startup.AllStrategiesExampleBean - InitializingBean
[main] INFO o.b.startup.AllStrategiesExampleBean - init-method

4. Conclusion

In this article we illustrated multiple ways of executing logic on Spring’s application startup.

Code samples can be found on GitHub.


Java Web Weekly, Issue 138


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Oracle Unveils Plan to Revamp Java EE 8 for the Cloud [infoq.com]

The plans Oracle has for the Java EE ecosystem – an important read but definitely not a positive one.

>> Managing your Database Secrets with Vault [spring.io]

This trend of finally getting solid, well-supported management of credentials and sensitive information – is quite helpful to the security of most systems out there.

>> Reactive log stream processing with RxJava – Part I [balamaci.ro]

>> Reactive log stream processing with RxJava – Part II [balamaci.ro]

A good way to get your feet wet with the upcoming paradigm of reactive programming before Spring 5 comes out.

>> JPA providers market share in 2016 [vladmihalcea.com]

Market data is so often overlooked but so very important for understanding an industry.

That’s why I run my yearly survey, that’s why I enjoy reading the RebelLabs reports and that’s why this dataset is significant as well.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Starting with Cucumber for end-to-end testing [frankel.ch]

An interesting and reluctant adoption of Cucumber with some Spring goodness in there for good measure.

>> Website enumeration insanity: how our personal data is leaked [troyhunt.com]

A good read about enumeration attacks, told through funny stories about companies acting silly with their security.

>> The Basics of Web Application Security [martinfowler.com]

A solid look at the basics.

>> StatsD vs collectd vs fluentd and Other Daemons You Should Know [takipi.com]

A good, high level overview of the options we have for daemons, when collecting data off a production box.

I've personally been using collectd for years – and it gets the job done, but I'll certainly be looking at fluentd after reading this piece.

>> DDD Decoded – Bounded Contexts Explained [sapiensworks.com]

>> DDD Decoded – Domain Services Explained [sapiensworks.com]

I’m thoroughly enjoying this series on the basics of DDD.

Also worth reading:

3. Musings

>> Static Analysis and The Other Kind of False Positives [daedtech.com]

Wrangling a large codebase with static analysis tools often comes down to the art of tuning the tool to keep enthusiasm up – for both you and your team. Too many warnings and the problem looks too large. Too few and you're not seeing enough.

Of course, with a bit of discipline and experience, you can get to a more manageable codebase if you keep at it.

>> All Libraries Should Follow a Zero-Dependency Policy [jooq.org]

A very good point, even though it's sometimes easier said than done.

>> Can’t you make the team work harder? [dandreamsofcoding.com]

A balanced and thoughtful approach to management – lots to learn from here, whichever side of the table you happen to sit on.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Your watch costs more than my entire net worth [dilbert.com]

>> I heard words I didn’t know were words [dilbert.com]

>> Is it working? [dilbert.com]

5. Pick of the Week

>> “Eat, sleep, code, repeat” is such bullshit [m.signalvnoise.com]


Eager/Lazy Loading In Hibernate


1. Introduction

When working with an ORM, data fetching/loading can be classified into two types: eager and lazy.

In this quick article we're going to point out the differences and show how these can be used in Hibernate.

2. Maven Dependencies

In order to use Hibernate, let's first define the main dependency in our pom.xml:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>   
    <version>5.2.2.Final</version>
</dependency>

The latest version of Hibernate can be found here.

3. Eager and Lazy Loading

The first thing that we should discuss here is what lazy loading and eager loading are:

  • Eager Loading is a design pattern in which data initialization occurs on the spot
  • Lazy Loading is a design pattern which is used to defer initialization of an object as long as it’s possible

Let’s see how this actually works with some examples:

The UserLazy class:

@Entity
@Table(name = "USER")
public class UserLazy implements Serializable {

    @Id
    @GeneratedValue
    @Column(name = "USER_ID")
    private Long userId;

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "user")
    private Set<OrderDetail> orderDetailSet = new HashSet<>();

    // standard setters and getters
    // also override equals and hashcode

}

The OrderDetail class:

@Entity
@Table (name = "USER_ORDER")
public class OrderDetail implements Serializable {
	
    @Id
    @GeneratedValue
    @Column(name="ORDER_ID")
    private Long orderId;
	
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name="USER_ID")
    private UserLazy user;

    // standard setters and getters
    // also override equals and hashcode

}

One User can have multiple OrderDetails. With the eager loading strategy, if we load the User data, all orders associated with it will also be loaded and stored in memory.

But when lazy loading is enabled, if we pull up a UserLazy, the OrderDetail data won't be initialized and loaded into memory until an explicit call is made to it.

In the next section we will see how the above example is implemented in Hibernate.

4. Loading Configuration

In this section we will look at how we can configure fetching strategies in Hibernate. We will reuse the examples from the previous section.

Lazy Loading can be simply enabled using the following annotation parameter:

fetch = FetchType.LAZY

To use Eager Fetching the following parameter is used:

fetch = FetchType.EAGER

To set up eager loading we have used UserLazy's twin class called UserEager.

In the next section we will look at the differences between the two types of fetching.

5. Differences

As we mentioned, the main difference between the two types of fetching is the moment when data gets loaded into memory.

Let’s have a look at this example:

List<UserLazy> users = sessionLazy.createQuery("From UserLazy").list();
UserLazy userLazyLoaded = users.get(3);
return (userLazyLoaded.getOrderDetail());

With the lazy initialization approach, orderDetailSet will get initialized only when it is explicitly accessed, using a getter or some other method, as shown in the above example:

UserLazy userLazyLoaded = users.get(3);

But with the eager approach in UserEager, it will be initialized immediately in the first line of the above example:

List<UserEager> user = sessionEager.createQuery("From UserEager").list();

For lazy loading, a proxy object is used and a separate SQL query is fired to load the orderDetailSet.

Disabling proxies or lazy loading is considered a bad practice in Hibernate: it can result in a lot of data being fetched from the database and stored in memory, irrespective of the need for it.

The following method can be used to test the above functionality:

Hibernate.isInitialized(orderDetailSet);

Now it is important to have a look at the queries that are generated in either cases:

<property name="show_sql">true</property>

The above setting in fetching.hbm.xml enables logging of the generated SQL queries. If you look at the console output, you will be able to see the generated queries.

For lazy loading, this is the query generated to load the User data:

select user0_.USER_ID as USER_ID1_0_,  ... from USER user0_

However, in eager loading, we saw a join being made with USER_ORDER:

select orderdetai0_.USER_ID as USER_ID4_0_0_, orderdetai0_.ORDER_ID as ORDER_ID1_1_0_, orderdetai0_ ...
  from USER_ORDER orderdetai0_ where orderdetai0_.USER_ID=?

The above query is generated for all Users, which results in much more memory being used than in the other approach.

6. Advantages and Disadvantages

6.1. Lazy Loading

Advantages:

  • Initial load time much smaller than in the other approach
  • Less memory consumption than in the other approach

Disadvantages:

  • Delayed initialization might impact performance during unwanted moments
  • In some cases you need to handle lazily-initialized objects with a special care or you might end up with an exception

6.2. Eager Loading

Advantages:

  • No delayed initialization related performance impacts

Disadvantages:

  • Long initial loading time
  • Loading too much unnecessary data might impact performance

7. Lazy Loading in Hibernate

Hibernate applies lazy loading approach on entities and associations by providing a proxy implementation of classes.

Hibernate intercepts calls to an entity by substituting it with a proxy derived from the entity's class. In our example, when requested information is missing, it will be loaded from the database before control is ceded to the User class implementation.

It should also be noted that when the association is represented as a collection class (in the above examples it is the Set<OrderDetail> orderDetailSet), a wrapper is created and substituted for the original collection.
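To make the proxy idea concrete, here is a minimal, framework-free sketch using a JDK dynamic proxy. This is a hypothetical illustration of the mechanism only; Hibernate's actual proxies are generated bytecode subclasses of the entity class, not java.lang.reflect proxies:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;

public class LazyProxyDemo {

    public interface OrderDetails {
        List<String> getOrders();
    }

    // Pretends to be an expensive, database-backed implementation
    static class DatabaseOrderDetails implements OrderDetails {
        @Override
        public List<String> getOrders() {
            return Arrays.asList("order-1", "order-2");
        }
    }

    // Creates the real target only on the first method call
    public static class LazyHandler implements InvocationHandler {
        private OrderDetails target;

        public boolean isInitialized() {
            return target != null;
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            if (target == null) {
                target = new DatabaseOrderDetails(); // the "database hit" happens here
            }
            return method.invoke(target, args);
        }
    }

    public static void main(String[] args) {
        LazyHandler handler = new LazyHandler();
        OrderDetails orders = (OrderDetails) Proxy.newProxyInstance(
            OrderDetails.class.getClassLoader(),
            new Class<?>[] { OrderDetails.class },
            handler);

        System.out.println(handler.isInitialized()); // false: nothing loaded yet
        System.out.println(orders.getOrders());      // first access triggers the load
        System.out.println(handler.isInitialized()); // true
    }
}
```

The proxy is indistinguishable from the real object to its callers, which is exactly what lets Hibernate defer the SQL query until the collection is actually touched.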

To know more about proxy design pattern you can refer here.

8. Conclusion

In this article we showed the examples of the two main types of fetching that are used in Hibernate.

For advanced level expertise you can look at the official website of Hibernate. To get the code discussed in this article please have a look at this repository.

Dockerizing a Spring Boot Application


1. Overview

In this article we’ll focus on how to dockerize a Spring Boot Application to run it in an isolated environment, a.k.a. container.

Furthermore, we'll show how to create a composition of containers that depend on each other and are linked together in a virtual private network. We'll also see how they can be managed with single commands.

Let’s start by creating a Java-enabled, lightweight base image, running Alpine Linux.

2. Common Base Image

We’re going to be using Docker’s own build-file format: a Dockerfile.

A Dockerfile is, in principle, a linewise batch file containing commands to build an image. It's not absolutely necessary to put these commands into a file, because we can pass them on the command-line as well – a file is simply more convenient.

So, let’s write our first Dockerfile:

FROM alpine:edge
MAINTAINER baeldung.com
RUN apk add --no-cache openjdk8
COPY files/UnlimitedJCEPolicyJDK8/* \
  /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
  • FROM: The keyword FROM, tells Docker to use a given image with its tag as build-base. If this image is not in the local library, an online-search on DockerHub, or on any other configured remote-registry, is performed
  • MAINTAINER: A MAINTAINER is usually an email address, identifying the author of an image
  • RUN: With the RUN command, we execute a shell command-line within the target system. Here we're utilizing Alpine Linux's package manager apk to install the Java 8 OpenJDK
  • COPY: The last command tells Docker to COPY a few files from the local file-system, specifically a subfolder to the build directory, into the image in a given path

REQUIREMENTS: In order to run the tutorial successfully, you have to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files from Oracle. Simply extract the downloaded archive into a local folder named ‘files’.

To finally build the image and store it in the local library, we have to run:

docker build --tag=alpine-java:base --rm=true .

NOTICE: The --tag option will give the image its name and --rm=true will remove intermediate images after it has been built successfully. The last character in this shell command is a dot, acting as the build-directory argument.

3. Dockerize a Standalone Spring Boot Application

As an example of an application we can dockerize, we'll take the spring-cloud-config/server from the spring cloud configuration tutorial. As a preparation step, we have to assemble a runnable jar file and copy it to our Docker build directory:

tutorials $> cd spring-cloud-config/server
server    $> mvn package spring-boot:repackage
server    $> cp target/server-0.0.1-SNAPSHOT.jar \
               ../../spring-boot-docker/files/spring-cloud-config-server.jar
server    $> cd ../../spring-boot-docker

Now we will create a Dockerfile named Dockerfile.server with the following content:

FROM alpine-java:base
MAINTAINER baeldung.com
COPY files/spring-cloud-config-server.jar /opt/spring-cloud/lib/
COPY files/spring-cloud-config-server-entrypoint.sh /opt/spring-cloud/bin/
ENV SPRING_APPLICATION_JSON= \ 
  '{"spring": {"cloud": {"config": {"server": \
  {"git": {"uri": "/var/lib/spring-cloud/config-repo", \
  "clone-on-start": true}}}}}}'
ENTRYPOINT ["/usr/bin/java"]
CMD ["-jar", "/opt/spring-cloud/lib/spring-cloud-config-server.jar"]
VOLUME /var/lib/spring-cloud/config-repo
EXPOSE 8888
  • FROM: As base for our image we will take the Java-enabled Alpine Linux, created in the previous section
  • COPY: We let Docker copy our jar file into the image
  • ENV: This command lets us define environment variables, which will be respected by the application running in the container. Here we define a customized Spring Boot application configuration, to hand over to the jar executable later
  • ENTRYPOINT/CMD: This will be the executable to start when the container boots. We must define them as a JSON array, because we will use an ENTRYPOINT in combination with a CMD for some application arguments
  • VOLUME: Because our container will be running in an isolated environment, with no direct network access, we have to define a mountpoint-placeholder for our configuration repository
  • EXPOSE: Here we are telling Docker on which port our application is listening. This port will be published to the host when the container is booting

To create an image from our Dockerfile, we have to run ‘docker build’, like before:

$> docker build --file=Dockerfile.server \
     --tag=config-server:latest --rm=true .

But before we’re going to run a container from our image, we have to create a volume for mounting:

$> docker volume create --name=spring-cloud-config-repo

NOTICE: While a container's state is ephemeral when not committed to an image after the application exits, data stored in a volume will persist across several containers.

Finally we are able to run the container from our image:

$> docker run --name=config-server --publish=8888:8888 \
     --volume=spring-cloud-config-repo:/var/lib/spring-cloud/config-repo \
     config-server:latest
  • First, we have to --name our container. If not, one will be automatically chosen
  • Then, we must --publish our exposed port (see Dockerfile) to a port on our host. The value is given in the form ‘host-port:container-port’. If only a container-port is given, a randomly chosen host-port will be used. If we leave this option out, the container will be completely isolated
  • The --volume option gives access to either a directory on the host (when used with an absolute path) or a previously created Docker volume (when used with a volume-name). The path after the colon specifies the mountpoint within the container
  • As argument we have to tell Docker which image to use. Here we have to give the image name from the previous ‘docker build’ step
  • Some more useful options:
    • -it – enable interactive mode and allocate a pseudo-tty
    • -d – detach from the container after booting

If we ran the container in detached mode, we can inspect its details, stop it and remove it with the following commands:

$> docker inspect config-server
$> docker stop config-server
$> docker rm config-server

4. Dockerize Dependent Applications in a Composite

Docker commands and Dockerfiles are particularly suitable for creating individual containers. But if you want to operate on a network of isolated applications, the container management quickly becomes cluttered.

To solve that, Docker provides a tool named Docker Compose. It comes with its own build file in YAML format and is better suited to managing multiple containers. For example, it is able to start or stop a composite of services with one command, or merge the logging output of multiple services together into one pseudo-tty.

Let's build an example of two applications running in different Docker containers. They will communicate with each other and be presented as a "single unit" to the host system. We will build and copy the spring-cloud-config/client example described in the spring cloud configuration tutorial to our files folder, like we did before with the config-server.

This will be our docker-compose.yml:

version: '2'
services:
    config-server:
        container_name: config-server
        build:
            context: .
            dockerfile: Dockerfile.server
        image: config-server:latest
        expose:
            - 8888
        networks:
            - spring-cloud-network
        volumes:
            - spring-cloud-config-repo:/var/lib/spring-cloud/config-repo
        logging:
            driver: json-file
    config-client:
        container_name: config-client
        build:
            context: .
            dockerfile: Dockerfile.client
        image: config-client:latest
        entrypoint: /opt/spring-cloud/bin/config-client-entrypoint.sh
        environment:
            SPRING_APPLICATION_JSON: \
              '{"spring": {"cloud":  \
              {"config": {"uri": "http://config-server:8888"}}}}'
        expose:
            - 8080
        ports:
            - 8080:8080
        networks:
            - spring-cloud-network
        links:
            - config-server:config-server
        depends_on:
            - config-server
        logging:
            driver: json-file
networks:
    spring-cloud-network:
        driver: bridge
volumes:
    spring-cloud-config-repo:
        external: true
  • version: Specifies which format version should be used. This is a mandatory field. Here we use the newer version, whereas the legacy format is ‘1’
  • services: Each object in this key defines a service, a.k.a container. This section is mandatory
    • build: If given, docker-compose is able to build an image from a Dockerfile
      • context: If given, it specifies the build-directory, where the Dockerfile is looked-up
      • dockerfile: If given, it sets an alternate name for a Dockerfile
    • image: Tells Docker which name it should give to the image when the build features are used. Otherwise it searches for this image in the library or a remote registry
    • networks: This is the identifier of the named networks to use. A given name-value must be listed in the networks section
    • volumes: This identifies the named volumes to use and the mountpoints to mount the volumes to, separated by a colon. Likewise in networks section, a volume-name must be defined in a separate volumes section
    • links: This will create an internal network link between this service and the listed service. This service will be able to connect to the listed service, whereby the part before the colon specifies a service-name from the services section and the part after the colon specifies the hostname at which the service is listening on an exposed port
    • depends_on: This tells Docker to start a service only if the listed services have started successfully. NOTICE: This works only at the container level! For a workaround to start the dependent application first, see config-client-entrypoint.sh
    • logging: Here we are using the ‘json-file’ driver, which is the default one. Alternatively ‘syslog’ with a given address option or ‘none’ can be used
  • networks: In this section we’re specifying the networks available to our services. In this example we let docker-compose create a named network of type ‘bridge’ for us. If the option external is set to true, it will use an existing one with the given name
  • volumes: This is very similar to the networks section

Before we continue, we will check our build-file for syntax-errors:

$> docker-compose config

This will be our Dockerfile.client to build the config-client image from. It differs from the Dockerfile.server in that we additionally install OpenBSD netcat (which is needed in the next step) and make the entrypoint executable:

FROM alpine-java:base
MAINTAINER baeldung.com
RUN apk --no-cache add netcat-openbsd
COPY files/config-client.jar /opt/spring-cloud/lib/
COPY files/config-client-entrypoint.sh /opt/spring-cloud/bin/
RUN chmod 755 /opt/spring-cloud/bin/config-client-entrypoint.sh

And this will be the customized entrypoint for our config-client service. Here we use netcat in a loop to check whether our config-server is ready. You have to notice, that we can reach our config-server by its link-name, instead of an IP address:

#!/bin/sh
while ! nc -z config-server 8888 ; do
    echo "Waiting for upcoming Config Server"
    sleep 2
done
java -jar /opt/spring-cloud/lib/config-client.jar

Finally we can build our images, create the defined containers and start it in one command:

$> docker-compose up --build

To stop the containers, remove it from Docker and remove the connected networks and volumes from it, we can use the opposite command:

$> docker-compose down

A nice feature of docker-compose is the ability to scale services. For example, we can tell Docker to run one container for the config-server and three containers for the config-client.

But for this to work properly, we have to remove container_name from our docker-compose.yml, letting Docker choose one, and we have to change the exposed port configuration to avoid clashes.
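For illustration, the relevant part of the config-client service definition might then look like this (a hypothetical excerpt: container_name is dropped and only the container-port is published, so Docker assigns a free host port to each replica):

```yaml
config-client:
    build:
        context: .
        dockerfile: Dockerfile.client
    image: config-client:latest
    expose:
        - 8080
    ports:
        - "8080"   # container-port only: a random host port is chosen per instance
    networks:
        - spring-cloud-network
```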

After that, we are able to scale our services like so:

$> docker-compose build
$> docker-compose up -d
$> docker-compose scale config-server=1 config-client=3

5. Conclusion

As we've seen, we are now able to build custom Docker images, run a Spring Boot application as a Docker container, and create dependent containers with docker-compose.

For further reading about the build-files, we refer to the official Dockerfile reference and the docker-compose.yml reference.

As usual, the source code for this tutorial can be found on GitHub.


Java Web Weekly, Issue 139


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Solving Fat JAR Woes at HubSpot [infoq.com]

A very interesting look at (and solution for) the scaling problems of using Fat Jars beyond a certain point.

>> AWS Lambda for Serverless Java Developers: What’s in It for You? [takipi.com]

A nice, quick intro to AWS Lambda.

>> Benefits of @Repeatable annotations in Hibernate 5.2 [thoughts-on-java.org]

I had no idea most Hibernate annotations are now repeatable. Very nice.

>> Spring Boot Microservice Development on Kubernetes: The Easy Way [christianposta.com]

Very useful series of screencasts, going deep into how to run Spring Boot on Docker and Kubernetes.

>> Spring Boot: Show all logging events for one Web request only [moelholm.com]

Very nicely done – using the log like a scalpel not like a machete.

Also worth reading:

Presentations and Webinars:

Time to upgrade:

2. Technical and Musings

>> DDD Decoded – Application Services Explained [sapiensworks.com]

>> DDD Decoded – Modelling with CQS [sapiensworks.com]

>> DDD Decoded – Domain Relationships Explained [sapiensworks.com]

DDD is definitely one of the best ways to work on your architectural chops, and this series is clear and to the point explaining some of the basic concepts.

>> Fixing JSON [tbray.org]

Inside notes on making JSON better with the help of some minor tweaks.

>> Do you want a framework or a solution? [ontestautomation.com]

It’s well worth approaching test automation in a structured, intentional way, and most importantly with an open mind to picking a tool you might not yet be familiar with.

Otherwise – and I speak from experience – you may end up with a rat’s nest of unreliable tests.

>> Scaling Elasticsearch for Multi-Tenant, Multi-Cluster [loggly.com]

I really enjoyed reading through this writeup, mainly because I did a multi-tenant implementation on top of Elasticsearch when the platform was much less mature.

It’s a solid platform, but not without its very real and thorny issues.

Also worth reading:

Presentations and Webinars:

4. Comics

And my favorite Dilberts of the week:

>> Was this going well until I said “waddle”? [dilbert.com]

>> If our call gets disconnected, I count that as a closed ticket [dilbert.com]

>> My existence will make your empire seem larger [dilbert.com]

5. Pick of the Week

>> Less stress, more productivity: why working fewer hours is better for you and your employer [codewithoutrules.com]



Introduction to Hystrix



1. Overview

A typical distributed system consists of many services collaborating together.

These services are prone to failure or delayed responses. If a service fails, it may impact other services, affecting performance and possibly making other parts of the application inaccessible or, in the worst case, bringing down the whole application.

Of course, there are solutions available that help make applications resilient and fault tolerant – one such framework is Hystrix.

The Hystrix framework library helps to control the interaction between services by providing fault tolerance and latency tolerance. It improves overall resilience of the system by isolating the failing services and stopping the cascading effect of failures.

In this series of posts we will begin by looking at how Hystrix comes to the rescue when a service or system fails and what Hystrix can accomplish in these circumstances.

2. Simple Example

The way Hystrix provides fault and latency tolerance is to isolate and wrap calls to remote services.

In this simple example we wrap a call in the run() method of the HystrixCommand:

class CommandHelloWorld extends HystrixCommand<String> {

    private String name;

    CommandHelloWorld(String name) {
        super(HystrixCommandGroupKey.Factory.asKey("ExampleGroup"));
        this.name = name;
    }

    @Override
    protected String run() {
        return "Hello " + name + "!";
    }
}

and we execute the call as follows:

@Test
public void givenInputBobAndDefaultSettings_whenCommandExecuted_thenReturnHelloBob(){
    assertThat(new CommandHelloWorld("Bob").execute(), equalTo("Hello Bob!"));
}

3. Maven Setup

To use Hystrix in a Maven project, we need to have the hystrix-core and rxjava-core dependencies from Netflix in the project pom.xml:

<dependency>
    <groupId>com.netflix.hystrix</groupId>
    <artifactId>hystrix-core</artifactId>
    <version>1.5.4</version>
</dependency>

The latest version can always be found here.

<dependency>
    <groupId>com.netflix.rxjava</groupId>
    <artifactId>rxjava-core</artifactId>
    <version>0.20.7</version>
</dependency>

The latest version of this library can always be found here.

4. Setting up Remote Service

Let’s start by simulating a real world example.

In the example below, the class RemoteServiceTestSimulator represents a service on a remote server. It has a method which responds with a message after a given period of time. We can imagine that this wait simulates a time-consuming process at the remote system, resulting in a delayed response to the calling service:

class RemoteServiceTestSimulator {

    private long wait;

    RemoteServiceTestSimulator(long wait) throws InterruptedException {
        this.wait = wait;
    }

    String execute() throws InterruptedException {
        Thread.sleep(wait);
        return "Success";
    }
}

And here is our sample client that calls the RemoteServiceTestSimulator.

The call to the service is isolated and wrapped in the run() method of a HystrixCommand. It’s this wrapping that provides the resilience we touched upon above:

class RemoteServiceTestCommand extends HystrixCommand<String> {

    private RemoteServiceTestSimulator remoteService;

    RemoteServiceTestCommand(Setter config, RemoteServiceTestSimulator remoteService) {
        super(config);
        this.remoteService = remoteService;
    }

    @Override
    protected String run() throws Exception {
        return remoteService.execute();
    }
}

The call is executed by calling the execute() method on an instance of the RemoteServiceTestCommand object.

The following test demonstrates how this is done:

@Test
public void givenSvcTimeoutOf100AndDefaultSettings_whenRemoteSvcExecuted_thenReturnSuccess()
  throws InterruptedException {

    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroup2"));
    
    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(100)).execute(),
      equalTo("Success"));
}

So far we have seen how to wrap remote service calls in the HystrixCommand object. In the section below let’s look at how to deal with a situation when the remote service starts to deteriorate.

5. Working with Remote Service and Defensive Programming

5.1. Defensive Programming with Timeout

It is general programming practice to set timeouts for calls to remote services.
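Before looking at Hystrix’s configuration, it helps to see the underlying idea in plain java.util.concurrent terms. The following is a minimal sketch of the timeout concept — not Hystrix’s actual implementation — that bounds a call with Future.get(timeout) and falls back when the budget is exceeded; the class and method names here are our own:

```java
import java.util.concurrent.*;

public class TimeoutDemo {

    // Runs the task, but gives up after the given timeout, mirroring the
    // idea behind Hystrix's execution timeout in plain java.util.concurrent.
    static String callWithTimeout(Callable<String> task, long timeoutMs)
      throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(task);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);          // interrupt the slow call
            return "Fallback";            // fail fast instead of waiting
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast call (100 ms) completes within the 1,000 ms budget...
        System.out.println(callWithTimeout(() -> {
            Thread.sleep(100);
            return "Success";
        }, 1000));

        // ...while a slow call (2,000 ms) is cut off after 1,000 ms.
        System.out.println(callWithTimeout(() -> {
            Thread.sleep(2000);
            return "Success";
        }, 1000));
    }
}
```

Hystrix applies the same principle, but adds isolation, metrics and circuit breaking on top of it.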

Let’s begin by looking at how to set timeout on HystrixCommand and how it helps by short circuiting:

@Test
public void givenSvcTimeoutOf5000AndExecTimeoutOf10000_whenRemoteSvcExecuted_thenReturnSuccess()
  throws InterruptedException {

    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupTest4"));

    HystrixCommandProperties.Setter commandProperties = HystrixCommandProperties.Setter();
    commandProperties.withExecutionTimeoutInMilliseconds(10_000);
    config.andCommandPropertiesDefaults(commandProperties);

    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
}

In the above test, we are delaying the service’s response by 500 ms. We are also setting the execution timeout on the HystrixCommand to 10,000 ms, thus allowing sufficient time for the remote service to respond.

Now let’s see what happens when the execution timeout is less than the service’s delay:

@Test(expected = HystrixRuntimeException.class)
public void givenSvcTimeoutOf15000AndExecTimeoutOf5000_whenRemoteSvcExecuted_thenExpectHre()
  throws InterruptedException {

    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupTest5"));

    HystrixCommandProperties.Setter commandProperties = HystrixCommandProperties.Setter();
    commandProperties.withExecutionTimeoutInMilliseconds(5_000);
    config.andCommandPropertiesDefaults(commandProperties);

    new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(15_000)).execute();
}

Notice how we’ve lowered the bar and set the execution timeout to 5,000 ms.

We are expecting the service to respond within 5,000 ms, whereas we have set the service to respond after 15,000 ms. If you notice when you execute the test, the test will exit after 5,000 ms instead of waiting for 15,000 ms and will throw a HystrixRuntimeException.

This demonstrates how Hystrix does not wait longer than the configured timeout for a response. This helps make the system protected by Hystrix more responsive.

In the sections below, we will look at setting the thread pool size, which prevents threads from being exhausted, and we will discuss its benefit.

5.2. Defensive Programming with Limited Thread Pool

Setting timeouts for service calls does not solve all the issues associated with remote services.

When a remote service starts to respond slowly, a typical application will continue to call that remote service.

The application doesn’t know whether the remote service is healthy, and new threads are spawned every time a request comes in. This causes threads on an already struggling server to be tied up.

We don’t want this to happen as we need these threads for other remote calls or processes running on our server and we also want to avoid CPU utilization spiking up.

Let’s see how to set the thread pool size in HystrixCommand:

@Test
public void givenSvcTimeoutOf500AndExecTimeoutOf10000AndThreadPool_whenRemoteSvcExecuted
  _thenReturnSuccess() throws InterruptedException {

    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupThreadPool"));

    HystrixCommandProperties.Setter commandProperties = HystrixCommandProperties.Setter();
    commandProperties.withExecutionTimeoutInMilliseconds(10_000);
    config.andCommandPropertiesDefaults(commandProperties);
    config.andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
      .withMaxQueueSize(10)
      .withCoreSize(3)
      .withQueueSizeRejectionThreshold(10));

    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
}

In the above test, we are setting the maximum queue size, the core pool size and the queue rejection threshold. Hystrix will start rejecting requests once all of the pool’s threads are busy and the task queue has reached its rejection threshold of 10.

The core size is the number of threads that always stay alive in the thread pool.
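The rejection semantics of such a bounded pool can be demonstrated with a plain ThreadPoolExecutor. This is only an analogy for Hystrix’s withCoreSize(3)/withMaxQueueSize(10) settings, and the class name is hypothetical:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolDemo {

    // A pool with 3 core threads and a bounded queue of 10, analogous to
    // the thread pool properties set on the HystrixCommand above. Once all
    // threads are busy and the queue is full, new tasks are rejected
    // immediately instead of piling up and exhausting the caller's resources.
    static int submitAll(int tasks) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
          3, 3, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(10));
        AtomicInteger rejected = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(500); }     // simulate a slow remote call
                    catch (InterruptedException ignored) { }
                });
            } catch (RejectedExecutionException e) {
                rejected.incrementAndGet();        // fail fast on saturation
            }
        }
        pool.shutdownNow();
        return rejected.get();
    }

    public static void main(String[] args) {
        // 20 tasks against 3 threads + 10 queue slots: 7 are rejected.
        System.out.println("rejected = " + submitAll(20));
    }
}
```

With 3 threads and 10 queue slots, 13 of the 20 fast submissions are accepted and the remaining 7 are rejected right away, which is exactly the back-pressure behavior we want from an isolated pool.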

5.3. Defensive Programming with Short Circuit Breaker Pattern

However, there is still an improvement that we can make to remote service calls.

Let’s consider the case that the remote service has started failing.

We don’t want to keep firing off requests at it and waste resources. We would ideally want to stop making requests for a certain amount of time in order to give the service time to recover before then resuming requests. This is what is called the Short Circuit Breaker pattern.
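Stripped down, the pattern is a small state machine. The following is a hedged, minimal sketch using only a consecutive-failure threshold and a sleep window — Hystrix’s real implementation tracks error percentages over a rolling window, and these class and method names are our own:

```java
public class SimpleCircuitBreaker {

    private final int failureThreshold;   // trips open after this many failures
    private final long sleepWindowMs;     // how long to stay open before a retry
    private int consecutiveFailures = 0;
    private long openedAt = -1;

    public SimpleCircuitBreaker(int failureThreshold, long sleepWindowMs) {
        this.failureThreshold = failureThreshold;
        this.sleepWindowMs = sleepWindowMs;
    }

    // Closed: requests pass through. Open: requests are short-circuited
    // until the sleep window elapses, after which a trial request is allowed.
    public boolean allowRequest() {
        if (consecutiveFailures < failureThreshold) {
            return true;                                   // circuit closed
        }
        if (System.currentTimeMillis() - openedAt >= sleepWindowMs) {
            return true;                                   // half-open: allow a trial
        }
        return false;                                      // open: fail fast
    }

    public void recordSuccess() {
        consecutiveFailures = 0;                           // close the circuit
    }

    public void recordFailure() {
        if (++consecutiveFailures >= failureThreshold) {
            openedAt = System.currentTimeMillis();         // trip open
        }
    }
}
```

A caller would check allowRequest() before each remote call, record the outcome, and return a fallback whenever the breaker is open — giving the failing service time to recover.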

Let’s see how Hystrix implements this pattern:

@Test
public void givenCircuitBreakerSetup_whenRemoteSvcCmdExecuted_thenReturnSuccess()
  throws InterruptedException {

    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupCircuitBreaker"));

    HystrixCommandProperties.Setter properties = HystrixCommandProperties.Setter();
    properties.withExecutionTimeoutInMilliseconds(1000);
    properties.withCircuitBreakerSleepWindowInMilliseconds(4000);
    properties.withExecutionIsolationStrategy
     (HystrixCommandProperties.ExecutionIsolationStrategy.THREAD);
    properties.withCircuitBreakerEnabled(true);
    properties.withCircuitBreakerRequestVolumeThreshold(1);

    config.andCommandPropertiesDefaults(properties);
    config.andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
      .withMaxQueueSize(1)
      .withCoreSize(1)
      .withQueueSizeRejectionThreshold(1));

    assertThat(this.invokeRemoteService(config, 10_000), equalTo(null));
    assertThat(this.invokeRemoteService(config, 10_000), equalTo(null));
    assertThat(this.invokeRemoteService(config, 10_000), equalTo(null));

    Thread.sleep(5000);

    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));

    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));

    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
}
public String invokeRemoteService(HystrixCommand.Setter config, int timeout)
  throws InterruptedException {

    String response = null;

    try {
        response = new RemoteServiceTestCommand(config,
          new RemoteServiceTestSimulator(timeout)).execute();
    } catch (HystrixRuntimeException ex) {
        System.out.println("ex = " + ex);
    }

    return response;
}

In the above test we have set different circuit breaker properties. The most important ones are:

  • The CircuitBreakerSleepWindow, which is set to 4,000 ms. This configures the circuit breaker window and defines the time interval after which requests to the remote service will be resumed
  • The CircuitBreakerRequestVolumeThreshold, which is set to 1 and defines the minimum number of requests that must be observed before the circuit breaker considers tripping open

With the above settings in place, our HystrixCommand will now trip open after two failed requests. The third request will not even hit the remote service; Hystrix will short-circuit, and our method will return null as the response.

We subsequently add a Thread.sleep(5000) in order to exceed the sleep window that we have set. This causes Hystrix to close the circuit, and the subsequent requests flow through successfully.

6. Conclusion

In summary Hystrix is designed to:

  1. Provide protection and control over failures and latency from services typically accessed over the network
  2. Stop cascading of failures resulting from some of the services being down
  3. Fail fast and rapidly recover
  4. Degrade gracefully where possible
  5. Provide real-time monitoring and alerting on failures

In the next post we will see how to combine the benefits of Hystrix with the Spring framework.

The full project code and all examples can be found over on the GitHub project.

WebAppConfiguration in Spring Tests



1. Overview

In this article, we’ll explore the @WebAppConfiguration annotation in Spring, why we need it in our integration tests, and how we can configure it so that these tests actually bootstrap a WebApplicationContext.

2. @WebAppConfiguration

Simply put, this is a class-level annotation used to create a web version of the application context in the Spring Framework.

It’s used to denote that the ApplicationContext which is bootstrapped for the test should be an instance of WebApplicationContext.

A quick note about usage – we’ll usually find this annotation in integration tests because the WebApplicationContext is used to build a MockMvc object. You can find more information about integration testing with Spring here.

3. Loading a WebApplicationContext

Starting with Spring 3.2, there is now support for loading a WebApplicationContext in integration tests:

@WebAppConfiguration
@ContextConfiguration(classes = WebConfig.class)
public class EmployeeControllerTest {
    ...
}

This instructs the TestContext framework that a WebApplicationContext should be loaded for the test.

And, in the background a MockServletContext is created and supplied to our test’s WebApplicationContext by the TestContext framework.

3.1. Configuration Options

By default, the base resource path for the WebApplicationContext will be set to “file:src/main/webapp”, which is the default location for the root of the WAR in a Maven Project.

However, we can override this by simply providing an alternate path to the @WebAppConfiguration annotation:

@WebAppConfiguration("src/test/webapp")

We can also reference a base resource path from the classpath instead of the file system:

@WebAppConfiguration("classpath:test-web-resources")

3.2. Caching

Once the WebApplicationContext is loaded it will be cached and reused for all subsequent tests that declare the same unique context configuration within the same test suite.

For further details on caching, you can consult the Context caching section of the reference manual.
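Conceptually, this caching behaves like a map keyed on the merged context configuration. The following plain-Java sketch uses hypothetical names — it is not Spring’s actual implementation — to illustrate why two tests declaring identical configuration share a single bootstrap:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContextCacheSketch {

    static int buildCount = 0;   // counts how often a context is bootstrapped

    static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    // The TestContext framework keys its cache on the (merged) context
    // configuration; tests declaring the same key reuse one context.
    static Object loadContext(String configurationKey) {
        return CACHE.computeIfAbsent(configurationKey, key -> {
            buildCount++;                       // bootstrap only on a cache miss
            return new Object();                // stands in for WebApplicationContext
        });
    }

    public static void main(String[] args) {
        Object a = loadContext("WebConfig+@WebAppConfiguration");
        Object b = loadContext("WebConfig+@WebAppConfiguration"); // cache hit
        Object c = loadContext("OtherConfig");                    // different key
        System.out.println(a == b);        // true: same context instance reused
        System.out.println(buildCount);    // 2: bootstrapped twice, not three times
    }
}
```

This is why changing any part of the context configuration (classes, profiles, resource base path) causes a fresh, separately cached context to be created.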

4. Using @WebAppConfiguration in Tests

Now that we understand why we need to add the @WebAppConfiguration annotation to our test classes, let’s see what happens if we miss adding it when we are using a WebApplicationContext:

@RunWith(SpringJUnit4ClassRunner.class)
// @WebAppConfiguration omitted on purpose
@ContextConfiguration(classes = WebConfig.class)
public class EmployeeTest {

    @Autowired
    private WebApplicationContext webAppContext;
    private MockMvc mockMvc;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        mockMvc = MockMvcBuilders.webAppContextSetup(webAppContext).build();
    }
    
    ...
}

Notice that we commented out the annotation to simulate the scenario in which we forget to add it. Here it’s easy to see why the test will fail when we run the JUnit test: we are trying to autowire the WebApplicationContext in a class where we haven’t set one.

A more typical example however is a test that uses a web-enabled Spring configuration; that’s actually enough to make the test break.

Let’s have a look:

@RunWith(SpringJUnit4ClassRunner.class)
// @WebAppConfiguration omitted on purpose
@ContextConfiguration(classes = WebConfig.class)
public class EmployeeTestWithoutMockMvc {

    @Autowired
    private EmployeeController employeeController;

    ...
}

Even though the above example isn’t autowiring a WebApplicationContext, it will still fail because it’s trying to use a web-enabled configuration – WebConfig:

@Configuration
@EnableWebMvc
@ComponentScan("com.baeldung.web")
public class WebConfig extends WebMvcConfigurerAdapter {
    ...
}

The annotation @EnableWebMvc is the culprit here – it basically requires a web-enabled Spring context, and without one we’ll see the test fail:

Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: 
  No qualifying bean of type [javax.servlet.ServletContext] found for dependency: 
    expected at least 1 bean which qualifies as autowire candidate for this dependency. 

Dependency annotations: 
  {@org.springframework.beans.factory.annotation.Autowired(required=true)}
    at o.s.b.f.s.DefaultListableBeanFactory
      .raiseNoSuchBeanDefinitionException(DefaultListableBeanFactory.java:1373)
    at o.s.b.f.s.DefaultListableBeanFactory
      .doResolveDependency(DefaultListableBeanFactory.java:1119)
    at o.s.b.f.s.DefaultListableBeanFactory
      .resolveDependency(DefaultListableBeanFactory.java:1014)
    at o.s.b.f.a.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement
      .inject(AutowiredAnnotationBeanPostProcessor.java:545)
    ... 43 more

So that’s the problem, which we can easily fix by adding the @WebAppConfiguration annotation to our tests.

5. Conclusion

In this article, we showed how we can let the TestContext framework load a WebApplicationContext into our integration tests just by adding the annotation.

Finally, we saw in the examples that even if we add @ContextConfiguration to the test, it won’t work unless we also add the @WebAppConfiguration annotation.

The implementation of the examples in this article is available in our repository on GitHub.


Changing Spring Model Parameters with Handler Interceptor



1. Introduction

In this tutorial we are going to focus on the Spring MVC HandlerInterceptor. More specifically, we will change Spring MVC’s model parameters before and after handling a request.

If you want to read about HandlerInterceptor’s basics, check out this article.

2. Maven Dependencies

In order to use interceptors, you need to include the following in the dependencies section of your pom.xml file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>4.3.2.RELEASE</version>
</dependency>

The latest version can be found here.

This dependency only covers Spring Web, so don’t forget to add spring-core and spring-context for a full web application, plus a logging library of your choice.

3. Custom Implementation

One of the use cases of HandlerInterceptor is adding common/user specific parameters to a model, which will be available on each generated view.

In our example, we will use custom interceptor implementation to add logged user’s username to model parameters. In more complex systems we may add more specific information like: user avatar path, user location, etc.

Let’s start with defining our new Interceptor class:

public class UserInterceptor extends HandlerInterceptorAdapter {

    private static Logger log = LoggerFactory.getLogger(UserInterceptor.class);

    ...
}

We extend HandlerInterceptorAdapter, as we only want to implement preHandle() and postHandle() methods.

As we mentioned before, we want to add logged user’s name to a model. First of all, we need to check if a user is logged in. We may obtain this information by checking SecurityContextHolder:

public static boolean isUserLogged() {
    try {
        return !SecurityContextHolder.getContext().getAuthentication()
          .getName().equals("anonymousUser");
    } catch (Exception e) {
        return false;
    }
}

When an HttpSession is established but nobody is logged in, the username in the Spring Security context equals anonymousUser. Next, we proceed with the implementation of preHandle():

3.1. Method preHandle()

Before handling a request, we cannot access model parameters. In order to add username, we need to use HttpSession to set parameters:

@Override
public boolean preHandle(HttpServletRequest request,
  HttpServletResponse response, Object object) throws Exception {
    if (isUserLogged()) {
        addToModelUserDetails(request.getSession());
    }
    return true;
}

This is crucial if we want to use some of this information before handling a request. As we can see, we check whether a user is logged in and then add parameters to the request by obtaining its session:

private void addToModelUserDetails(HttpSession session) {
    log.info("=============== addToModelUserDetails =========================");
    
    String loggedUsername 
      = SecurityContextHolder.getContext().getAuthentication().getName();
    session.setAttribute("username", loggedUsername);
    
    log.info("user(" + loggedUsername + ") session : " + session);
    log.info("=============== addToModelUserDetails =========================");
}

We used SecurityContextHolder to obtain loggedUsername. You may override Spring Security UserDetails implementation to obtain email instead of a standard username.

3.2. Method postHandle() 

After handling a request, our model parameters are available, so we may access them to change values or add new ones. In order to do that, we use the overridden postHandle() method:

@Override
public void postHandle(
  HttpServletRequest req, 
  HttpServletResponse res,
  Object o, 
  ModelAndView model) throws Exception {
    
    if (model != null && !isRedirectView(model)) {
        if (isUserLogged()) {
            addToModelUserDetails(model);
        }
    }
}

Let’s take a look at the implementation details.

First of all, it’s better to check if the model is not null. It will prevent us from encountering a NullPointerException.

Moreover, we may check if a View is not an instance of RedirectView.

There is no need to add or change parameters after the request is handled and then redirected, as the new controller will immediately perform handling again. To check whether the view is redirected, we introduce the following method:

public static boolean isRedirectView(ModelAndView mv) {
    String viewName = mv.getViewName();
    if (viewName.startsWith("redirect:/")) {
        return true;
    }
    View view = mv.getView();
    return (view != null && view instanceof SmartView
      && ((SmartView) view).isRedirectView());
}

Finally, we are checking again if a user is logged, and if yes, we are adding parameters to Spring model:

private void addToModelUserDetails(ModelAndView model) {
    log.info("=============== addToModelUserDetails =========================");
    
    String loggedUsername = SecurityContextHolder.getContext()
      .getAuthentication().getName();
    model.addObject("loggedUsername", loggedUsername);
    
    log.trace("session : " + model.getModel());
    log.info("=============== addToModelUserDetails =========================");
}

Please note that logging is very important here, as this logic works “behind the scenes” of our application. It’s easy to forget that we are changing model parameters on each view if we don’t log it properly.

4. Configuration

To add our newly created Interceptor into Spring configuration, we need to override addInterceptors() method inside WebConfig class that extends WebMvcConfigurerAdapter:

@Override
public void addInterceptors(InterceptorRegistry registry) {
    registry.addInterceptor(new UserInterceptor());
}

We may achieve the same configuration by editing our XML Spring configuration file:

<mvc:interceptors>
    <bean id="userInterceptor" class="org.baeldung.web.interceptor.UserInterceptor"/>
</mvc:interceptors>

From this moment, we may access all user-related parameters on all generated views.

Please note that if multiple Spring interceptors are configured, the preHandle() method is executed in the order of configuration, whereas the postHandle() and afterCompletion() methods are invoked in reverse order.
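This ordering can be illustrated with a short standalone sketch — hypothetical names, not Spring code — where preHandle() runs in registration order, then the handler, then postHandle() in reverse:

```java
import java.util.Arrays;
import java.util.List;

public class InterceptorOrderDemo {

    // The chain calls preHandle() in registration order, then the handler,
    // then postHandle() in reverse order, mirroring Spring MVC's behavior.
    static String runChain(List<String> interceptors) {
        StringBuilder trace = new StringBuilder();
        for (String name : interceptors) {
            trace.append("pre:").append(name).append(" ");
        }
        trace.append("handler ");
        for (int i = interceptors.size() - 1; i >= 0; i--) {
            trace.append("post:").append(interceptors.get(i)).append(" ");
        }
        return trace.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(runChain(Arrays.asList("A", "B")));
        // pre:A pre:B handler post:B post:A
    }
}
```

So if UserInterceptor is registered first, its postHandle() runs last, after all later interceptors have already had a chance to modify the model.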

5. Conclusion

This tutorial presents intercepting web requests using Spring MVC’s HandlerInterceptor in order to provide user information.

In this particular example, we focused on adding the logged user’s details to the model parameters in our web application. You may extend this HandlerInterceptor implementation by adding more detailed information.

All examples and configurations are available here on GitHub.



X.509 Authentication in Spring Security



1. Overview

In this article, we’ll focus on the main use cases for X.509 certificate authentication  – verifying the identity of a communication peer when using the HTTPS (HTTP over SSL) protocol.

Simply put – while a secure connection is established, the client verifies the server according to its certificate (issued by a trusted certificate authority).

But beyond that, X.509 in Spring Security can be used to let the server verify the identity of a client while connecting. This is called “mutual authentication”, and we’ll look at how that’s done here as well.

Finally, we’ll touch on when it makes sense to use this kind of authentication.

To demonstrate server verification, we’ll create a simple web-application and install a custom certificate authority in a browser.

And, for mutual authentication, we’ll create a client certificate and modify our server to allow only verified clients.

2. Keystores

Optional Requirement: To use cryptographically strong keys together with encryption and decryption features you need the ‘Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files’ installed in your JVM.

These can be downloaded for example from Oracle (follow the installation instructions included in the download). Some Linux distributions also provide an installable package through their package managers.

To implement X.509 authentication in a Spring application, we’ll first create a keystore in the Java Key-Store (JKS) format.

This keystore must contain a valid certificate of an authority (or a chain of certificate authorities) and its own certificate for our server. The latter must be signed by one of the included authorities and named after the hostname on which the server runs; we’ll use the Java keytool application here.

To simplify the process of creating keys and certificates with keytool, the code on GitHub provides a commented Makefile for GNU make, containing all the steps necessary to complete this section. You can also easily customize it via a few environment variables.

Tip: As an all-in-one step, you can run make without arguments. This will create a keystore, a truststore and two certificates for importing into your browser (one for localhost and one for a user called “cid”).

For creating a new keystore with a certificate authority, we can run make as follows:

$> make create-keystore PASSWORD=changeit

Now, we will add a certificate for our development host to the created keystore and sign it with our certificate authority:

$> make add-host HOSTNAME=localhost

To allow client authentication, we also need a keystore called a “truststore”. This truststore has to contain valid certificates of our certificate authority and all of the allowed clients. For reference on using keytool, please look at the following sections of the Makefile:

$> make create-truststore PASSWORD=changeit
$> make add-client CLIENTNAME=cid

3. Example Application

Our SSL-secured server project will consist of a @SpringBootApplication-annotated application class (which is a kind of @Configuration), an application.properties configuration file and a very simple MVC-style front end.

All the application has to do is present an HTML page with a “Hello {User}!” message. This way we can inspect the server certificate in a browser to make sure that the connection is verified and secure.

First we create a new Maven project with three Spring Boot Starter bundles included:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

For reference: you will find the bundles on Maven Central (security, web, thymeleaf).

As a next step, we create the main application class and the user controller:

@SpringBootApplication
public class X509AuthenticationServer {
    public static void main(String[] args) {
        SpringApplication.run(X509AuthenticationServer.class, args);
    }
}

@Controller
public class UserController {
    @RequestMapping(value = "/user")
    public String user(Model model, Principal principal) {
        
        UserDetails currentUser 
          = (UserDetails) ((Authentication) principal).getPrincipal();
        model.addAttribute("username", currentUser.getUsername());
        return "user";
    }
}

Now, we tell the application where it can find our keystore and how it can be accessed. We set SSL to an “enabled” status and change the standard listening port to indicate a secured connection.

Additionally, we configure some user details for accessing our server via Basic Authentication:

server.ssl.key-store=../keystore/keystore.jks
server.ssl.key-store-password=${PASSWORD}
server.ssl.key-alias=localhost
server.ssl.key-password=${PASSWORD}
server.ssl.enabled=true
server.port=8443
security.user.name=Admin
security.user.password=admin

This will be the HTML template, located at the resources/templates folder:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>X.509 Authentication Demo</title>
</head>
<body>
    <h2>Hello <span th:text="${username}"/>!</h2>
</body>
</html>

Before we finish this section and look at the site, we still need to install our generated certificate authority as a trusted certificate in a browser of our choice.

An example installation of our certificate authority in Mozilla Firefox looks as follows:

  1. Type about:preferences in the address bar
  2. Open Advanced -> Certificates -> View Certificates -> Authorities
  3. Click on Import
  4. Locate the Baeldung tutorials folder and its subfolder spring-security-x509/keystore
  5. Select the ca.crt file and click OK
  6. Choose “Trust this CA to identify websites” and click OK

Note: If you don’t want to add our certificate authority to the list of trusted authorities, you’ll later have the option to make an exception and view the website anyway, even though it is flagged as insecure. But then you’ll see a ‘yellow exclamation mark’ symbol in the address bar, indicating the insecure connection!

Afterwards we will navigate to the basic-secured-server module and run:

mvn spring-boot:run

Finally, we hit https://localhost:8443/user, enter our user credentials from the application.properties, and should see a “Hello Admin!” message. Now we can inspect the connection status by clicking the ‘green lock’ symbol in the address bar, and it should show a secured connection.

Picture of a secured HTTP connection.

4. Mutual Authentication

In this section, we use Spring Security to grant users access to our demo-website. The procedure makes a login form obsolete.

But before we continue to modify our server, we will discuss in short, when it makes sense to provide this kind of authentication.

Pros:

  • The private key of a X.509 client certificate is stronger than any user-defined password. But it has to be kept secret!
  • With a certificate, the identity of a client is well-known and easy to verify.
  • No more forgotten passwords!

Cons:

  • You must remember that for each user that should be verified by the server, their own certificate needs to be installed in the configured truststore. For small applications with only a few clients this may be practicable; with an increasing number of clients it may lead to complex key management for users.
  • The private key of a certificate has to be installed in a client application. In fact, X.509 client authentication is device-dependent, which makes it impossible to use this kind of authentication in public areas, for example in an internet café.
  • There must be a mechanism to revoke compromised client certificates.

In order to continue, we modify our X509AuthenticationServer to extend WebSecurityConfigurerAdapter and override one of the provided configure methods. Here we configure the X.509 mechanism to parse the Common Name (CN) field of a certificate to extract usernames.

With these extracted usernames, Spring Security looks up a matching user in the provided UserDetailsService. So we also implement this service interface, containing one demo user.

Tip: In production environments, this UserDetailsService can load its users for example from a JDBC Datasource.

Notice that we annotate our class with @EnableWebSecurity and @EnableGlobalMethodSecurity with pre-/post-authorization enabled.

With the latter we are able to annotate our resources with @PreAuthorize and @PostAuthorize for a fine-grained access control:

@SpringBootApplication
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class X509AuthenticationServer extends WebSecurityConfigurerAdapter {
    ...

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests().anyRequest().authenticated()
          .and()
          .x509()
            .subjectPrincipalRegex("CN=(.*?)(?:,|$)")
            .userDetailsService(userDetailsService());
    }

    @Bean
    public UserDetailsService userDetailsService() {
        return new UserDetailsService() {
            @Override
            public UserDetails loadUserByUsername(String username) {
                if (username.equals("cid")) {
                    return new User(username, "", 
                      AuthorityUtils
                        .commaSeparatedStringToAuthorityList("ROLE_USER"));
                }
                throw new UsernameNotFoundException(username + " not found");
            }
        };
    }
}
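The subjectPrincipalRegex can be tried out in isolation. The following self-contained sketch (with a made-up subject DN, just for illustration) shows how the first capture group yields the username that Spring Security hands to the UserDetailsService:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubjectPrincipalRegexDemo {
    public static void main(String[] args) {
        // Same pattern as in the security configuration above
        Pattern pattern = Pattern.compile("CN=(.*?)(?:,|$)");

        // A made-up subject DN, only for this sketch
        String subjectDn = "CN=cid, O=Baeldung, C=DE";

        Matcher matcher = pattern.matcher(subjectDn);
        if (matcher.find()) {
            // Group 1 is the Common Name, which becomes the username
            System.out.println(matcher.group(1));
        }
    }
}
```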

As said previously, we are now able to use Expression-Based Access Control in our controller. More specifically, our authorization annotations are respected because of the @EnableGlobalMethodSecurity annotation in our @Configuration:

@Controller
public class UserController {
    @PreAuthorize("hasAuthority('ROLE_USER')")
    @RequestMapping(value = "/user")
    public String user(Model model, Principal principal) {
        ...
    }
}

An overview over all possible authorization options can be found in the official documentation.

As final modification step, we have to tell the application where our truststore is located and that SSL client authentication is necessary (server.ssl.client-auth=need).

So we put the following into our application.properties:

server.ssl.trust-store=../keystore/truststore.jks
server.ssl.trust-store-password=${PASSWORD}
server.ssl.client-auth=need

Now, if we run the application and point our browser to https://localhost:8443/user, the browser informs us that the peer cannot be verified and refuses to open our website. So we also have to install our client certificate, as outlined here for Mozilla Firefox:

  1. Type about:preferences in the address bar
  2. Open Advanced -> View Certificates -> Your Certificates
  3. Click on Import
  4. Locate the Baeldung tutorials folder and its subfolder spring-security-x509/keystore
  5. Select the cid.p12 file and click OK
  6. Input the password for your certificate and click OK

As a final step, we refresh the browser tab containing the website and select our client certificate in the newly opened chooser dialog.

Screenshot of a Client Certificate Chooser

If we see a welcome message like “Hello cid!”, we were successful!

5. Mutual Authentication with XML

Adding X.509 client authentication to an HTTP security configuration in XML is also possible:

<http>
    ...
    <x509 subject-principal-regex="CN=(.*?)(?:,|$)" 
      user-service-ref="userService"/>

    <authentication-manager>
        <authentication-provider>
            <user-service id="userService">
                <user name="cid" password="" authorities="ROLE_USER"/>
            </user-service>
        </authentication-provider>
    </authentication-manager>
    ...
</http>

To configure an underlying Tomcat, we have to put our keystore and our truststore into its conf folder and edit the server.xml:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" scheme="https" secure="true"
    clientAuth="true" sslProtocol="TLS"
    keystoreFile="${catalina.home}/conf/keystore.jks"
    keystoreType="JKS" keystorePass="changeit"
    truststoreFile="${catalina.home}/conf/truststore.jks"
    truststoreType="JKS" truststorePass="changeit"
/>

Tip: With clientAuth set to “want”, SSL is still enabled even if the client doesn’t provide a valid certificate. But in this case we have to use a second authentication mechanism, for example a login form, to access the secured resources.

6. Conclusion

In summary, we’ve learned how to create a keystore containing a certificate authority and a self-signed certificate for our development environment.

We’ve created the truststore containing a certificate authority and a client certificate, and we have used both to verify our server on the client-side and our client on the server-side.

If you have studied the Makefile, you should be able to create certificates, make certificate-requests and import signed certificates using Java keytool.

Furthermore, you should now be able to export a client certificate into the PKCS12 format and use it in a client application like a browser, for example Mozilla Firefox.

And we’ve discussed when it makes sense to use Spring Security X.509 client authentication, so it is up to you to decide whether to implement it in your web application or not.

And to wrap up, you’ll find the source code to this article on GitHub.


Guide to Java 8’s Functional Interfaces


1. Introduction

This article is a guide to different functional interfaces present in Java 8, their general use cases and usage in the standard JDK library.

2. Lambdas in Java 8

Java 8 brought a powerful new syntactic improvement in the form of lambda expressions. A lambda is an anonymous function that can be handled as a first-class language citizen, for instance, passed to or returned from a method.

Before Java 8, you would usually create a class for every case where you needed to encapsulate a single piece of functionality. This implied a lot of unnecessary boilerplate code to define something that served as a primitive function representation.

Lambdas, functional interfaces and best practices of working with them in general are described in the article “Lambda Expressions and Functional Interfaces: Tips and Best Practices”. This guide focuses on some particular functional interfaces that are present in the java.util.function package.

3. Functional Interfaces

It is recommended that all functional interfaces have an informative @FunctionalInterface annotation. This not only clearly communicates the purpose of the interface, but also allows the compiler to generate an error if the annotated interface does not satisfy the conditions.

Any interface with a single abstract method (SAM) is a functional interface, and its implementations may be provided as lambda expressions.

Note that Java 8’s default methods are not abstract and do not count: a functional interface may still have multiple default methods. You can observe this by looking at the Function’s documentation.
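To illustrate, here is a small functional interface made up for this sketch, combining one abstract method with a default method; the lambda only has to implement the abstract one:

```java
public class DefaultMethodDemo {

    @FunctionalInterface
    interface Greeter {
        // the single abstract method
        String greet(String name);

        // default methods don't count towards the SAM rule
        default String greetLoudly(String name) {
            return greet(name).toUpperCase();
        }
    }

    public static void main(String[] args) {
        // the lambda implements greet(); greetLoudly() comes for free
        Greeter greeter = name -> "hello, " + name;
        System.out.println(greeter.greetLoudly("john"));
    }
}
```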

4. Functions

The simplest and most general case of a lambda is a functional interface with a method that receives one value and returns another. This function of a single argument is represented by the Function interface, which is parameterized by the types of its argument and return value:

public interface Function<T, R> { … }

One of the usages of the Function type in the standard library is the Map.computeIfAbsent method that returns a value from a map by key, but calculates a value if a key is not already present in a map. To calculate a value, it uses the passed Function implementation:

Map<String, Integer> nameMap = new HashMap<>();
Integer value = nameMap.computeIfAbsent("John", s -> s.length());

A value in this case will be calculated by applying a function to a key, put inside a map and also returned from a method call. By the way, we may replace the lambda with a method reference that matches passed and returned value types.

Remember that an object on which the method is invoked is, in fact, the implicit first argument of a method. This allows us to cast an instance method length reference to a Function interface:

Integer value = nameMap.computeIfAbsent("John", String::length);

The Function interface also has a default compose method that allows us to combine several functions into one and execute them sequentially:

Function<Integer, String> intToString = Object::toString;
Function<String, String> quote = s -> "'" + s + "'";

Function<Integer, String> quoteIntToString = quote.compose(intToString);

assertEquals("'5'", quoteIntToString.apply(5));

The quoteIntToString function is a combination of the quote function applied to a result of the intToString function.
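Function also provides a default andThen method, which applies the functions in the opposite order to compose; here is the same example rewritten with it:

```java
import java.util.function.Function;

public class AndThenDemo {
    public static void main(String[] args) {
        Function<Integer, String> intToString = Object::toString;
        Function<String, String> quote = s -> "'" + s + "'";

        // f.andThen(g) applies f first, then g -- the reverse of g.compose(f)
        Function<Integer, String> quoteIntToString = intToString.andThen(quote);

        System.out.println(quoteIntToString.apply(5));
    }
}
```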

5. Primitive Function Specializations

Since a primitive type can’t be a generic type argument, there are versions of the Function interface for most used primitive types double, int, long, and their combinations in argument and return types:

  • IntFunction, LongFunction, DoubleFunction: arguments are of specified type, return type is parameterized
  • ToIntFunction, ToLongFunction, ToDoubleFunction: return type is of specified type, arguments are parameterized
  • DoubleToIntFunction, DoubleToLongFunction, IntToDoubleFunction, IntToLongFunction, LongToIntFunction, LongToDoubleFunction — having both argument and return type defined as primitive types, as specified by their names

There is no out-of-the-box functional interface for, say, a function that takes a short and returns a byte, but nothing stops you from writing your own:

@FunctionalInterface
public interface ShortToByteFunction {

    byte applyAsByte(short s);

}

Now we can write a method that transforms an array of short to an array of byte using a rule defined by a ShortToByteFunction:

public byte[] transformArray(short[] array, ShortToByteFunction function) {
    byte[] transformedArray = new byte[array.length];
    for (int i = 0; i < array.length; i++) {
        transformedArray[i] = function.applyAsByte(array[i]);
    }
    return transformedArray;
}

Here’s how we could use it to transform an array of shorts to an array of bytes multiplied by 2:

short[] array = {(short) 1, (short) 2, (short) 3};
byte[] transformedArray = transformArray(array, s -> (byte) (s * 2));

byte[] expectedArray = {(byte) 2, (byte) 4, (byte) 6};
assertArrayEquals(expectedArray, transformedArray);

6. Two-Arity Function Specializations

To define lambdas with two arguments, we have to use additional interfaces that contain the “Bi” keyword in their names: BiFunction, ToDoubleBiFunction, ToIntBiFunction and ToLongBiFunction.

BiFunction has both arguments and a return type generified, while ToDoubleBiFunction and others allow you to return a primitive value.

One of the typical examples of using this interface in the standard API is the Map.replaceAll method, which allows us to replace all values in a map with some computed value.

Let’s use a BiFunction implementation that receives a key and an old value to calculate a new value for the salary and return it.

Map<String, Integer> salaries = new HashMap<>();
salaries.put("John", 40000);
salaries.put("Freddy", 30000);
salaries.put("Samuel", 50000);

salaries.replaceAll((name, oldValue) -> 
  name.equals("Freddy") ? oldValue : oldValue + 10000);
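BiFunction also has a default andThen method that post-processes its result with a one-argument Function. A small sketch (the salary numbers are just illustrative):

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class BiFunctionDemo {
    public static void main(String[] args) {
        BiFunction<Integer, Integer, Integer> sum = (a, b) -> a + b;

        // andThen feeds the BiFunction's result into a one-argument Function
        Function<Integer, String> describe = total -> "total: " + total;

        BiFunction<Integer, Integer, String> sumAndDescribe = sum.andThen(describe);
        System.out.println(sumAndDescribe.apply(40000, 10000));
    }
}
```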

7. Suppliers

The Supplier functional interface is yet another Function specialization that does not take any arguments. It is typically used for lazy generation of values. For instance, let’s define a function that squares a double value. It will receive not a value itself, but a Supplier of this value:

public double squareLazy(Supplier<Double> lazyValue) {
    return Math.pow(lazyValue.get(), 2);
}

This allows us to lazily generate the argument for invocation of this function using a Supplier implementation. This can be useful if the generation of this argument takes a considerable amount of time. We’ll simulate that using Guava’s sleepUninterruptibly method:

Supplier<Double> lazyValue = () -> {
    Uninterruptibles.sleepUninterruptibly(1000, TimeUnit.MILLISECONDS);
    return 9d;
};

Double valueSquared = squareLazy(lazyValue);

Another use case for the Supplier is defining a logic for sequence generation. To demonstrate it, let’s use a static Stream.generate method to create a Stream of Fibonacci numbers:

int[] fibs = {0, 1};
Stream<Integer> fibonacci = Stream.generate(() -> {
    int result = fibs[1];
    int fib3 = fibs[0] + fibs[1];
    fibs[0] = fibs[1];
    fibs[1] = fib3;
    return result;
});

The function that is passed to the Stream.generate method implements the Supplier functional interface. Notice that to be useful as a generator, the Supplier usually needs some sort of external state. In this case, its state comprises the last two Fibonacci sequence numbers.

To implement this state, we use an array instead of a couple of variables, because all external variables used inside the lambda have to be effectively final.

Other specializations of Supplier functional interface include BooleanSupplier, DoubleSupplier, LongSupplier and IntSupplier, whose return types are corresponding primitives.

8. Consumers

As opposed to the Supplier, the Consumer accepts a generified argument and returns nothing. It is a function that is representing side effects.

For instance, let’s greet everybody in a list of names by printing the greeting in the console. The lambda passed to the List.forEach method implements the Consumer functional interface:

List<String> names = Arrays.asList("John", "Freddy", "Samuel");
names.forEach(name -> System.out.println("Hello, " + name));

There are also specialized versions of the Consumer: DoubleConsumer, IntConsumer and LongConsumer, which receive primitive values as arguments. More interesting is the BiConsumer interface. One of its use cases is iterating through the entries of a map:

Map<String, Integer> ages = new HashMap<>();
ages.put("John", 25);
ages.put("Freddy", 24);
ages.put("Samuel", 30);

ages.forEach((name, age) -> System.out.println(name + " is " + age + " years old"));

Another set of specialized BiConsumer versions is comprised of ObjDoubleConsumer, ObjIntConsumer and ObjLongConsumer, which receive two arguments, one of which is generified and the other a primitive type.
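Like Function, the Consumer interface offers a default andThen method for chaining side effects; a short sketch reusing the greeting example:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class ConsumerDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("John", "Freddy", "Samuel");

        Consumer<String> greet = name -> System.out.print("Hello, " + name + "; ");
        Consumer<String> wave = name -> System.out.print(name + " waves back. ");

        // andThen runs both side effects in sequence for each element
        names.forEach(greet.andThen(wave));
    }
}
```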

9. Predicates

In mathematical logic, a predicate is a function that receives a value and returns a boolean value.

The Predicate functional interface is a specialization of a Function that receives a generified value and returns a boolean. A typical use case of the Predicate lambda is to filter a collection of values:

List<String> names = Arrays.asList("Angela", "Aaron", "Bob", "Claire", "David");

List<String> namesWithA = names.stream()
  .filter(name -> name.startsWith("A"))
  .collect(Collectors.toList());

In the code above, we filter a list using the Stream API and keep only names that start with the letter “A”. The filtering logic is encapsulated in the Predicate implementation.

As in all previous examples, there are IntPredicate, DoublePredicate and LongPredicate versions of this function that receive primitive values.
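The Predicate interface also provides the default methods and, or and negate for combining predicates; for example, we can narrow the filter above with a second condition:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Angela", "Aaron", "Bob", "Claire", "David");

        Predicate<String> startsWithA = name -> name.startsWith("A");
        Predicate<String> shorterThanSix = name -> name.length() < 6;

        // and() builds a new Predicate that requires both conditions to hold
        List<String> result = names.stream()
          .filter(startsWithA.and(shorterThanSix))
          .collect(Collectors.toList());

        System.out.println(result);
    }
}
```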

10. Operators

Operator interfaces are special cases of a function that receive and return the same value type. The UnaryOperator interface receives a single argument. One of its use cases in the Collections API is to replace all values in a list with some computed values of the same type:

List<String> names = Arrays.asList("bob", "josh", "megan");

names.replaceAll(name -> name.toUpperCase());

The List.replaceAll function returns void, as it replaces the values in place. To fit the purpose, the lambda used to transform the values of a list has to return the same result type as it receives. This is why the UnaryOperator is useful here.

Of course, instead of name -> name.toUpperCase(), you can simply use a method reference:

names.replaceAll(String::toUpperCase);

One of the most interesting use cases of a BinaryOperator is a reduction operation. Suppose we want to aggregate a collection of integers into a sum of all values. With the Stream API, we could do this using Collectors, but a more generic way would be to use the reduce method:

List<Integer> values = Arrays.asList(3, 5, 8, 9, 12);

int sum = values.stream()
  .reduce(0, (i1, i2) -> i1 + i2);

The reduce method receives an initial accumulator value and a BinaryOperator function. The arguments of this function are a pair of values of the same type; the function itself contains the logic for joining them into a single value of the same type. The passed function must be associative, which means that the order of value aggregation does not matter, i.e. the following condition should hold:

op.apply(a, op.apply(b, c)) == op.apply(op.apply(a, b), c)

The associative property of a BinaryOperator function allows us to easily parallelize the reduction process.

Of course, there are also specializations of UnaryOperator and BinaryOperator that can be used with primitive values, namely DoubleUnaryOperator, IntUnaryOperator, LongUnaryOperator, DoubleBinaryOperator, IntBinaryOperator and LongBinaryOperator.
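BinaryOperator additionally offers the static factory methods minBy and maxBy, which build an operator that selects one of its two arguments according to a Comparator; this pairs naturally with reduce:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.function.BinaryOperator;

public class BinaryOperatorDemo {
    public static void main(String[] args) {
        // maxBy builds an operator that keeps the larger of its two arguments
        BinaryOperator<Integer> max = BinaryOperator.maxBy(Comparator.naturalOrder());

        List<Integer> values = Arrays.asList(3, 5, 8, 9, 12);
        int maximum = values.stream().reduce(Integer.MIN_VALUE, max);

        System.out.println(maximum);
    }
}
```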

11. Legacy Functional Interfaces

Not all functional interfaces appeared in Java 8. Many interfaces from previous versions of Java conform to the constraints of a FunctionalInterface and can be used as lambdas. Prominent examples are the Runnable and Callable interfaces, which are used in concurrency APIs. In Java 8 these interfaces are also marked with the @FunctionalInterface annotation. This allows us to greatly simplify concurrency code:

Thread thread = new Thread(() -> System.out.println("Hello From Another Thread"));
thread.start();
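Callable works the same way, but returns a value and may throw a checked exception; a minimal sketch submitting one to an ExecutorService:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        // unlike Runnable, a Callable produces a result
        Callable<Integer> task = () -> 21 + 21;

        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Integer> future = executor.submit(task);

        System.out.println(future.get());
        executor.shutdown();
    }
}
```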

12. Conclusion

In this article we’ve described different functional interfaces present in the Java 8 API that can be used as lambda expressions. The source code for the article is available on GitHub.


Injecting Git Information Into Spring


1. Overview

In this tutorial, we’re going to show how to inject Git repository information into a Maven-built Spring Boot-based application.

In order to do this we will use maven-git-commit-id-plugin – a handy tool created solely for this purpose.

2. Maven Dependencies

Let’s add the plugin to the <plugins> section of our project’s pom.xml file:

<plugin>
    <groupId>pl.project13.maven</groupId>
    <artifactId>git-commit-id-plugin</artifactId>
    <version>2.2.1</version>
</plugin>

You can find the latest version here. Keep in mind that this plugin requires Maven version 3.1.1 or later.

3. Configuration

The plugin has many convenient flags and attributes that expand its functionality. In this section we are going to briefly describe some of them. If you want to get to know all of them, visit maven-git-commit-id-plugin’s page, and if you want to go straight to the example, go to section 4.

The following snippets contain examples of plugin attributes; specify them in a <configuration></configuration> section according to your needs.

3.1. Missing Repository

You can configure the plugin to omit errors if a Git repository has not been found:

<failOnNoGitDirectory>false</failOnNoGitDirectory>

3.2. Git Repository Location

If you want to specify a custom .git repository location, use the dotGitDirectory attribute:

<dotGitDirectory>${project.basedir}/submodule_directory/.git</dotGitDirectory>

3.3. Output File

In order to generate a properties file with a custom name and/or directory, use the following section:

<generateGitPropertiesFilename>
    ${project.build.outputDirectory}/filename.properties
</generateGitPropertiesFilename>

3.4. Verbosity

For more verbose logging use:

<verbose>true</verbose>

3.5. Properties File Generation

You can turn off the creation of a git.properties file:

<generateGitPropertiesFile>false</generateGitPropertiesFile>

3.6. Properties’ Prefix

If you want to specify a custom property prefix, use:

<prefix>git</prefix>

3.7. Only For Parent Repository

When working with a project with submodules, setting this flag makes sure that the plugin works only for the parent repository:

<runOnlyOnce>true</runOnlyOnce>

3.8. Properties Exclusion

You might want to exclude some sensitive data like repository user info:

<excludeProperties>
    <excludeProperty>git.user.*</excludeProperty>
</excludeProperties>

3.9. Properties Inclusion

Including only specified data is also possible:

<includeOnlyProperties>    
    <includeOnlyProperty>git.commit.id</includeOnlyProperty>
</includeOnlyProperties>

4. Sample Application

Let’s create a sample REST controller, which will return basic information about our project.

We will create sample app using Spring Boot. If you don’t know how to set up a Spring Boot application, please see the introductory article: Configure a Spring Boot Web Application.

Our app will consist of two classes: CommitIdApplication and CommitInfoController

4.1.  Application

CommitIdApplication will serve as the root of our application:

@SpringBootApplication(scanBasePackages = { "com.baeldung.git" })
public class CommitIdApplication {
 
    public static void main(String[] args) {
        SpringApplication.run(CommitIdApplication.class, args);
    }
 
    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
        PropertySourcesPlaceholderConfigurer propsConfig 
          = new PropertySourcesPlaceholderConfigurer();
        propsConfig.setLocation(new ClassPathResource("git.properties"));
        propsConfig.setIgnoreResourceNotFound(true);
        propsConfig.setIgnoreUnresolvablePlaceholders(true);
        return propsConfig;
    }
}

Besides configuring the root of our application, we created a PropertySourcesPlaceholderConfigurer bean so that we are able to access the properties file generated by the plugin.

We also set some flags so that the application will run smoothly even if Spring cannot resolve the git.properties file.

4.2. Controller

@RestController
public class CommitInfoController {

    @Value("${git.commit.message.short}")
    private String commitMessage;

    @Value("${git.branch}")
    private String branch;

    @Value("${git.commit.id}")
    private String commitId;

    @RequestMapping("/commitId")
    public Map<String, String> getCommitId() {
        Map<String, String> result = new HashMap<>();
        result.put("Commit message",commitMessage);
        result.put("Commit branch", branch);
        result.put("Commit id", commitId);
        return result;
    }
}

As you can see we are injecting Git properties into class fields.

To see all available properties, refer to the generated git.properties file or the author’s GitHub page. We also created a simple endpoint that, on an HTTP GET request, responds with JSON containing the injected values.
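Outside of a Spring context, the generated file is a plain Java properties file and can be read with java.util.Properties as well. A minimal sketch, using an inline sample in place of a real build output:

```java
import java.io.StringReader;
import java.util.Properties;

public class GitPropertiesDemo {
    public static void main(String[] args) throws Exception {
        // Sample content standing in for a plugin-generated git.properties file
        String sample = "git.branch=commit_id_plugin\n"
          + "git.commit.id=7adb64f1800f8a84c35fef9e5d15c10ab8ecffa6\n";

        Properties props = new Properties();
        props.load(new StringReader(sample));

        System.out.println(props.getProperty("git.branch"));
    }
}
```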

4.3. Maven Entry

<plugin>
    <groupId>pl.project13.maven</groupId>
    <artifactId>git-commit-id-plugin</artifactId>
    <version>2.2.1</version>
</plugin>

As you can see, the plugin provides basic functionality “out of the box” and any additional configuration is optional. The generated properties file can be found under the default target/classes/ directory.

Remember to perform a Maven build before running the application, so that the plugin can generate the properties file.

After booting the application and requesting localhost:8080/commitId, you will see JSON with a structure similar to the following:

{
    "Commit id":"7adb64f1800f8a84c35fef9e5d15c10ab8ecffa6",
    "Commit branch":"commit_id_plugin",
    "Commit message":"Merge branch 'master' into commit_id_plugin"
}

5. Conclusion

In this tutorial we showed the basics of using maven-git-commit-id-plugin and created a simple Spring Boot application which makes use of properties generated by the plugin.

The presented configuration does not cover all available flags and attributes, but it covers all the basics necessary to start working with this plugin.

You can find code examples on GitHub.


Two Factor Authentication with Spring Security and Google Authenticator


1. Overview

In this tutorial, we’re going to implement Two Factor Authentication functionality with a Soft Token and Spring Security.

We’re going to be adding the new functionality into an existing, simple login flow and use the Google Authenticator app to generate the tokens.

Simply put, two factor authentication is a verification process which follows the well known principle of “something the user knows and something the user has”.

And so, users provide an extra “verification token” during authentication – a one-time password verification code based on the Time-based One-time Password (TOTP) algorithm.

2. Maven Configuration

First, in order to use Google Authenticator in our app we need to:

  • Generate secret key
  • Provide secret key to the user via QR-code
  • Verify the token entered by the user using this secret key

We will use a simple server-side library to generate/verify one-time passwords by adding the following dependency to our pom.xml:

<dependency>
    <groupId>org.jboss.aerogear</groupId>
    <artifactId>aerogear-otp-java</artifactId>
    <version>1.0.0</version>
</dependency>

3. User Entity

Next, we will modify our user entity to hold extra information – as follows:

@Entity
public class User {
    ...
    private boolean isUsing2FA;
    private String secret;

    public User() {
        super();
        this.secret = Base32.random();
        ...
    }
}

Note that:

  • We save a random secret code for each user to be used later in generating verification code
  • Our 2-step verification is optional

4. Extra Login Parameter

First, we will need to adjust our security configuration to accept an extra parameter – the verification token. We can accomplish that by using a custom AuthenticationDetailsSource:

Here is our CustomWebAuthenticationDetailsSource:

@Component
public class CustomWebAuthenticationDetailsSource implements 
  AuthenticationDetailsSource<HttpServletRequest, WebAuthenticationDetails> {
    @Override
    public WebAuthenticationDetails buildDetails(HttpServletRequest context) {
        return new CustomWebAuthenticationDetails(context);
    }
}

and here is CustomWebAuthenticationDetails:

public class CustomWebAuthenticationDetails extends WebAuthenticationDetails {

    private String verificationCode;

    public CustomWebAuthenticationDetails(HttpServletRequest request) {
        super(request);
        verificationCode = request.getParameter("code");
    }

    public String getVerificationCode() {
        return verificationCode;
    }
}

And our security configuration:

@Configuration
@EnableWebSecurity
public class LssSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private CustomWebAuthenticationDetailsSource authenticationDetailsSource;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.formLogin()
            .authenticationDetailsSource(authenticationDetailsSource)
            ...
    } 
}

And finally add the extra parameter to our login form:

<label th:text="#{label.form.login2fa}">
    Google Authenticator Verification Code
</label>
<input type='text' name='code'/>

Note: We need to set our custom AuthenticationDetailsSource in our security configuration.

5. Custom Authentication Provider

Next, we’ll need a custom AuthenticationProvider to handle extra parameter validation:

public class CustomAuthenticationProvider extends DaoAuthenticationProvider {

    @Autowired
    private UserRepository userRepository;

    @Override
    public Authentication authenticate(Authentication auth)
      throws AuthenticationException {
        String verificationCode 
          = ((CustomWebAuthenticationDetails) auth.getDetails()).getVerificationCode();
        User user = userRepository.findByEmail(auth.getName());
        if (user == null) {
            throw new BadCredentialsException("Invalid username or password");
        }
        if (user.isUsing2FA()) {
            Totp totp = new Totp(user.getSecret());
            if (!isValidLong(verificationCode) || !totp.verify(verificationCode)) {
                throw new BadCredentialsException("Invalid verification code");
            }
        }
        
        Authentication result = super.authenticate(auth);
        return new UsernamePasswordAuthenticationToken(
          user, result.getCredentials(), result.getAuthorities());
    }

    private boolean isValidLong(String code) {
        try {
            Long.parseLong(code);
        } catch (NumberFormatException e) {
            return false;
        }
        return true;
    }

    @Override
    public boolean supports(Class<?> authentication) {
        return authentication.equals(UsernamePasswordAuthenticationToken.class);
    }
}

Note that after we verify the one-time password verification code, we simply delegate authentication downstream.
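Under the hood, the Totp class implements the Time-based One-time Password algorithm from RFC 6238. The following standalone sketch is our own illustrative code, not the aerogear library’s implementation; it shows how a 6-digit code is derived from the shared secret and the current time, using the key and timestamp from the RFC’s test vectors:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class TotpSketch {

    // RFC 6238 TOTP with HMAC-SHA1, a 30-second time step and 6 digits
    static String generate(byte[] key, long unixTimeSeconds) throws Exception {
        long counter = unixTimeSeconds / 30;
        byte[] message = ByteBuffer.allocate(8).putLong(counter).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] hash = mac.doFinal(message);

        // Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from the last nibble
        int offset = hash[hash.length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
          | ((hash[offset + 1] & 0xFF) << 16)
          | ((hash[offset + 2] & 0xFF) << 8)
          | (hash[offset + 3] & 0xFF);

        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        // RFC 6238 test vector: ASCII key "12345678901234567890" at time 59 s
        byte[] key = "12345678901234567890".getBytes(StandardCharsets.US_ASCII);
        System.out.println(generate(key, 59L));
    }
}
```

Verification then boils down to generating the code for the current time window (and usually the adjacent windows, to tolerate clock drift) and comparing it with the value the user entered.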

Here is our authentication provider bean:

@Bean
public DaoAuthenticationProvider authProvider() {
    CustomAuthenticationProvider authProvider = new CustomAuthenticationProvider();
    authProvider.setUserDetailsService(userDetailsService);
    authProvider.setPasswordEncoder(encoder());
    return authProvider;
}

6. Registration Process

Now, in order for users to be able to use the application to generate the tokens, they’ll need to set things up properly when they register.

And so, we’ll need to make a few simple modifications to the registration process – allowing users who have chosen two-step verification to scan the QR code they’ll need to log in later.

First, we add this simple input to our registration form:

Use Two step verification <input type="checkbox" name="using2FA" value="true"/>

Then, in our RegistrationController – we redirect users based on their choices after confirming registration:

@RequestMapping(value = "/registrationConfirm", method = RequestMethod.GET)
public String confirmRegistration(@RequestParam("token") String token, ...) {
    String result = userService.validateVerificationToken(token);
    if(result.equals("valid")) {
        User user = userService.getUser(token);
        if (user.isUsing2FA()) {
            model.addAttribute("qr", userService.generateQRUrl(user));
            return "redirect:/qrcode.html?lang=" + locale.getLanguage();
        }
        model.addAttribute("message", messages.getMessage("message.accountVerified", null, locale));
        return "redirect:/login?lang=" + locale.getLanguage();
    }
    ...
}

And here is our method generateQRUrl():

public static String QR_PREFIX = 
  "https://chart.googleapis.com/chart?chs=200x200&chld=M%%7C0&cht=qr&chl=";

@Override
public String generateQRUrl(User user) throws UnsupportedEncodingException {
    return QR_PREFIX + URLEncoder.encode(String.format(
      "otpauth://totp/%s:%s?secret=%s&issuer=%s", 
      APP_NAME, user.getEmail(), user.getSecret(), APP_NAME),
      "UTF-8");
}

And here is our qrcode.html:

<html>
<body>
<div id="qr">
    <p>
        Scan this barcode with the Google Authenticator app on your phone 
        to use it later when logging in
    </p>
    <img th:src="${param.qr[0]}"/>
</div>
<a href="/login" class="btn btn-primary">Go to login page</a>
</body>
</html>

Note that:

  • The generateQRUrl() method is used to generate the QR code URL
  • This QR code will be scanned by users' mobile phones using the Google Authenticator app
  • The app will generate a 6-digit code that is valid for only 30 seconds, which is the desired verification code
  • This verification code will be verified during login using our custom AuthenticationProvider
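To make the 30-second window concrete, here is a minimal, self-contained sketch of how a TOTP value is derived per RFC 6238; this is for illustration only and is not the Totp library class used above:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class TotpSketch {

    // Derives a 6-digit TOTP for a raw key and Unix timestamp,
    // using the default 30-second time step (RFC 6238 / RFC 4226)
    public static String totp(byte[] key, long unixTimeSeconds) throws Exception {
        long counter = unixTimeSeconds / 30;                 // current 30s window
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);

        int offset = hash[hash.length - 1] & 0xf;            // dynamic truncation
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        // RFC 6238 test vector: ASCII key "12345678901234567890" at time 59
        System.out.println(totp("12345678901234567890".getBytes("US-ASCII"), 59L)); // prints 287082
    }
}
```

Any code submitted within the same 30-second window (the same counter value) yields the same 6 digits, which is why the server can verify it against its stored secret.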

7. Enable Two Step Verification

Next, we will make sure that users can change their login preferences at any time – as follows:

@RequestMapping(value = "/user/update/2fa", method = RequestMethod.POST)
@ResponseBody
public GenericResponse modifyUser2FA(@RequestParam("use2FA") boolean use2FA) 
  throws UnsupportedEncodingException {
    User user = userService.updateUser2FA(use2FA);
    if (use2FA) {
        return new GenericResponse(userService.generateQRUrl(user));
    }
    return null;
}

And here is updateUser2FA():

@Override
public User updateUser2FA(boolean use2FA) {
    Authentication curAuth = SecurityContextHolder.getContext().getAuthentication();
    User currentUser = (User) curAuth.getPrincipal();
    currentUser.setUsing2FA(use2FA);
    currentUser = repository.save(currentUser);
    Authentication auth = new UsernamePasswordAuthenticationToken(
      currentUser, currentUser.getPassword(), curAuth.getAuthorities());
    SecurityContextHolder.getContext().setAuthentication(auth);
    return currentUser;
}

And here is the front-end:

<div th:if="${#authentication.principal.using2FA}">
    You are using Two-step authentication 
    <a href="#" onclick="disable2FA()">Disable 2FA</a> 
</div>
<div th:if="${! #authentication.principal.using2FA}">
    You are not using Two-step authentication 
    <a href="#" onclick="enable2FA()">Enable 2FA</a> 
</div>
<br/>
<div id="qr" style="display:none;">
    <p>Scan this Barcode using Google Authenticator app on your phone </p>
</div>

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
<script type="text/javascript">
function enable2FA(){
    set2FA(true);
}
function disable2FA(){
    set2FA(false);
}
function set2FA(use2FA){
    $.post("/user/update/2fa", { use2FA: use2FA }, function(data) {
        if (use2FA) {
            $("#qr").append('<img src="' + data.message + '" />').show();
        } else {
            window.location.reload();
        }
    });
}
</script>

8. Conclusion

In this quick tutorial, we illustrated how to implement two-factor authentication using a soft token with Spring Security.

The full source code can be found – as always – on GitHub.



A Guide to Java Sockets


1. Overview

The term socket programming refers to writing programs that execute across multiple computers in which the devices are all connected to each other using a network.

There are two communication protocols that one can use for socket programming: User Datagram Protocol (UDP) and Transfer Control Protocol (TCP).

The main difference between the two is that UDP is connectionless, meaning there is no session between the client and the server while TCP is connection-oriented, meaning an exclusive connection must first be established between client and server for communication to take place.

This tutorial presents an introduction to socket programming over TCP/IP networks and demonstrates how to write client/server applications in Java. UDP is less commonly used for this kind of client/server application and is beyond the scope of this article.

2. Project Setup

Java provides a collection of classes and interfaces that take care of low-level communication details between the client and the server.

These are mostly contained in the java.net package, so we need to make the following import:

import java.net.*;

We also need the java.io package which gives us input and output streams to write to and read from while communicating:

import java.io.*;

For the sake of simplicity, we’ll run our client and server programs on the same computer. If we were to execute them on different networked computers, the only thing that would change is the IP address; in this case, we’ll use localhost at 127.0.0.1.

3. Simple Example

Let’s get our hands dirty with the most basic of examples involving a client and a server. It’s going to be a two-way communication application where the client greets the server and the server responds.

Let’s create the server application in a class called GreetServer.java with the following code.

We include the main method and the global variables to draw attention to how we’ll be running all servers in this article. In the rest of the examples in the article, we’ll omit this kind of repetitive code:

public class GreetServer {
    private ServerSocket serverSocket;
    private Socket clientSocket;
    private PrintWriter out;
    private BufferedReader in;

    public void start(int port) throws IOException {
        serverSocket = new ServerSocket(port);
        clientSocket = serverSocket.accept();
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        String greeting = in.readLine();
        if ("hello server".equals(greeting)) {
            out.println("hello client");
        } else {
            out.println("unrecognised greeting");
        }
    }

    public void stop() throws IOException {
        in.close();
        out.close();
        clientSocket.close();
        serverSocket.close();
    }

    public static void main(String[] args) throws IOException {
        GreetServer server = new GreetServer();
        server.start(6666);
    }
}

Let’s also create a client called GreetClient.java with this code:

public class GreetClient {
    private Socket clientSocket;
    private PrintWriter out;
    private BufferedReader in;

    public void startConnection(String ip, int port) throws IOException {
        clientSocket = new Socket(ip, port);
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
    }

    public String sendMessage(String msg) throws IOException {
        out.println(msg);
        return in.readLine();
    }

    public void stopConnection() throws IOException {
        in.close();
        out.close();
        clientSocket.close();
    }
}

Let’s start the server; in your IDE you do this by simply running it as a Java application.

And now let’s send a greeting to the server using a unit test, which confirms that the server actually sends a greeting in response:

@Test
public void givenGreetingClient_whenServerRespondsWhenStarted_thenCorrect() throws IOException {
    GreetClient client = new GreetClient();
    client.startConnection("127.0.0.1", 6666);
    String response = client.sendMessage("hello server");
    assertEquals("hello client", response);
}

Don’t worry if you don’t entirely understand what is happening here, as this example is meant to give us a feel of what to expect later on in the article.

In the following sections, we will dissect socket communication using this simple example and dive deeper into the details with more examples.

4. How Sockets Work

We will use the above example to step through different parts of this section.

By definition, a socket is one endpoint of a two-way communication link between two programs running on different computers on a network. A socket is bound to a port number so that the transport layer can identify the application that data is destined to be sent to.

4.1. The Server

Usually, a server runs on a specific computer on the network and has a socket bound to a specific port number. In our case, we run the server on the same computer as the client and start it on port 6666:

ServerSocket serverSocket = new ServerSocket(6666);

The server just waits, listening to the socket for a client to make a connection request. This happens in the next step:

Socket clientSocket = serverSocket.accept();

When the server code encounters the accept method, it blocks until a client makes a connection request to it.

If everything goes well, the server accepts the connection. Upon acceptance, the server gets a new socket, clientSocket, bound to the same local port, 6666, and also has its remote endpoint set to the address and port of the client.

At this point, the new Socket object puts the server in direct connection with the client; we can then access the output and input streams to write messages to and read messages from the client, respectively:

PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));

From here onwards, the server is capable of exchanging messages with the client endlessly until the socket is closed with its streams.

However, in our example the server can only send one greeting response before it closes the connection; this means that if we ran our test again, the connection would be refused.
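We can observe this "connection refused" behavior in isolation with a small sketch that reserves a free port, releases it, and then tries to connect to it (the class and method names here are just for illustration):

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.ServerSocket;
import java.net.Socket;

public class RefusedDemo {

    // Returns true if connecting to the given local port is refused
    public static boolean isRefused(int port) throws IOException {
        try (Socket s = new Socket("127.0.0.1", port)) {
            return false; // something was listening after all
        } catch (ConnectException e) {
            return true;  // no listener on that port -> connection refused
        }
    }

    public static void main(String[] args) throws IOException {
        // Reserve a free port, then release it so nothing is listening there
        ServerSocket probe = new ServerSocket(0);
        int closedPort = probe.getLocalPort();
        probe.close();

        System.out.println(isRefused(closedPort));
    }
}
```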

To allow continuous communication, we’ll have to read from the input stream inside a while loop and exit only when the client sends a termination request; we’ll see this in action in a following section.

For every new client, the server needs a new socket returned by the accept call. The serverSocket keeps listening for connection requests while the server tends to the needs of the connected clients. We haven’t allowed for this yet in our first example.

4.2. The Client

The client must know the hostname or IP of the machine on which the server is running and the port number on which the server is listening.

To make a connection request, the client tries to rendezvous with the server on the server’s machine and port:

Socket clientSocket = new Socket("127.0.0.1", 6666);

The client also needs to identify itself to the server, so it binds to a local port number assigned by the system, which it will use during this connection. We don’t deal with this ourselves.

The above constructor creates a new socket only when the server has accepted the connection; otherwise, we’ll get a connection refused exception. Once the socket is successfully created, we can obtain input and output streams from it to communicate with the server:

PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));

The input stream of the client is connected to the output stream of the server, just like the input stream of the server is connected to the output stream of the client.
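To see both halves together, here is a self-contained sketch that runs a tiny one-shot echo server on an ephemeral port and talks to it from the same process (the class and method names are illustrative only):

```java
import java.io.*;
import java.net.*;

public class LoopbackDemo {

    // Starts a one-shot echo server on an ephemeral port, connects to it
    // from the same process, sends one line and returns the echoed reply
    public static String roundTrip(String msg) throws Exception {
        ServerSocket serverSocket = new ServerSocket(0); // port 0: OS picks a free port
        Thread server = new Thread(() -> {
            try (Socket s = serverSocket.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println(in.readLine()); // echo the single line back
            } catch (IOException ignored) {
            }
        });
        server.start();

        String reply;
        try (Socket client = new Socket("127.0.0.1", serverSocket.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            out.println(msg);      // the client's output stream feeds the server's input stream
            reply = in.readLine(); // and vice versa
        }
        server.join();
        serverSocket.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ping")); // prints "ping"
    }
}
```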

5. Continuous Communication

Our current server blocks until a client connects and then blocks again to listen for a message from the client; after the single message, it closes the connection because we haven’t dealt with continuity.

So it’s only helpful for ping requests; but imagine we’d like to implement a chat server, where continuous back-and-forth communication between server and client would definitely be required.

We will have to create a while loop to continuously observe the input stream of the server for incoming messages.

Let’s create a new server called EchoServer.java whose sole purpose is to echo back whatever messages it receives from clients:

public class EchoServer {
    private ServerSocket serverSocket;
    private Socket clientSocket;
    private PrintWriter out;
    private BufferedReader in;

    public void start(int port) throws IOException {
        serverSocket = new ServerSocket(port);
        clientSocket = serverSocket.accept();
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));

        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            if (".".equals(inputLine)) {
                out.println("good bye");
                break;
            }
            out.println(inputLine);
        }
    }
}

Notice that we’ve added a termination condition: the while loop exits when we receive a period character.

We’ll start EchoServer using a main method, just as we did for GreetServer. This time, we start it on another port, 4444, to avoid confusion.

The EchoClient is similar to GreetClient, so we can duplicate the code; we’re keeping them separate for clarity.

In a different test class, we’ll create a test to show that multiple requests to the EchoServer can be served without the server closing the socket. This holds as long as we’re sending requests from the same client.

Dealing with multiple clients is a different case, which we shall see in a subsequent section.

Let’s create a setup method to initiate a connection with the server:

@Before
public void setup() throws IOException {
    client = new EchoClient();
    client.startConnection("127.0.0.1", 4444);
}

We’ll equally create a tearDown method to release all our resources; this is best practice for every case where we use network resources:

@After
public void tearDown() throws IOException {
    client.stopConnection();
}

Let’s then test our echo server with a few requests:

@Test
public void givenClient_whenServerEchosMessage_thenCorrect() throws IOException {
    String resp1 = client.sendMessage("hello");
    String resp2 = client.sendMessage("world");
    String resp3 = client.sendMessage("!");
    String resp4 = client.sendMessage(".");
    
    assertEquals("hello", resp1);
    assertEquals("world", resp2);
    assertEquals("!", resp3);
    assertEquals("good bye", resp4);
}

This is an improvement over the initial example, where we would only communicate once before the server closed our connection; now we send a termination signal to tell the server when we’re done with the session.

6. Server with Multiple Clients

Much as the previous example was an improvement over the first one, it is still not that great a solution. A server must have the capacity to service many clients and many requests simultaneously.

Handling multiple clients is what we are going to cover in this section.

Another feature we’ll see here is that the same client can disconnect and reconnect without getting a connection refused exception or a connection reset on the server; previously we were not able to do this.

This means that our server is going to be more robust and resilient across multiple requests from multiple clients.

The way we’ll do this is to create a new socket for every new client and service that client’s requests on a different thread. The number of clients being served simultaneously will equal the number of handler threads running.

The main thread will be running a while loop as it listens for new connections.

Enough talk – let’s create another server called EchoMultiServer.java. Inside it, we’ll create a handler thread class to manage each client’s communication on its own socket:

public class EchoMultiServer {
    private ServerSocket serverSocket;

    public void start(int port) throws IOException {
        serverSocket = new ServerSocket(port);
        while (true) {
            new EchoClientHandler(serverSocket.accept()).start();
        }
    }

    public void stop() throws IOException {
        serverSocket.close();
    }

    private static class EchoClientHandler extends Thread {
        private Socket clientSocket;
        private PrintWriter out;
        private BufferedReader in;

        public EchoClientHandler(Socket socket) {
            this.clientSocket = socket;
        }

        public void run() {
            try {
                out = new PrintWriter(clientSocket.getOutputStream(), true);
                in = new BufferedReader(
                  new InputStreamReader(clientSocket.getInputStream()));

                String inputLine;
                while ((inputLine = in.readLine()) != null) {
                    if (".".equals(inputLine)) {
                        out.println("bye");
                        break;
                    }
                    out.println(inputLine);
                }

                in.close();
                out.close();
                clientSocket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Notice that we now call accept inside a while loop. Each iteration blocks on the accept call until a new client connects; then the handler thread, EchoClientHandler, is created and started for that client.

What happens inside the thread is exactly what we did in the EchoServer, where we handled only a single client. The EchoMultiServer delegates this work to EchoClientHandler so that it can keep listening for more clients in the while loop.

We’ll still use EchoClient to test the server; this time, we’ll create multiple clients, each sending and receiving multiple messages from the server.

Let’s start our server using its main method on port 5555.

For clarity, we will still put tests in a new suite:

@Test
public void givenClient1_whenServerResponds_thenCorrect() throws IOException {
    EchoClient client1 = new EchoClient();
    client1.startConnection("127.0.0.1", 5555);
    String msg1 = client1.sendMessage("hello");
    String msg2 = client1.sendMessage("world");
    String terminate = client1.sendMessage(".");
    
    assertEquals("hello", msg1);
    assertEquals("world", msg2);
    assertEquals("bye", terminate);
}

@Test
public void givenClient2_whenServerResponds_thenCorrect() throws IOException {
    EchoClient client2 = new EchoClient();
    client2.startConnection("127.0.0.1", 5555);
    String msg1 = client2.sendMessage("hello");
    String msg2 = client2.sendMessage("world");
    String terminate = client2.sendMessage(".");
    
    assertEquals("hello", msg1);
    assertEquals("world", msg2);
    assertEquals("bye", terminate);
}

We could create as many of these test cases as we please, each spawning a new client and the server will serve all of them.

7. Conclusion

In this tutorial, we’ve focused on an introduction to sockets programming over TCP/IP and wrote a simple Client/Server application in Java.

The full source code for the article can be found – as usual – in the GitHub project.


Cachable Static Assets with Spring MVC


1. Overview

This article focuses on caching static assets (such as JavaScript and CSS files) when serving them with Spring MVC.

We’ll also touch on the concept of “perfect caching”, essentially making sure that – when a file is updated – the old version isn’t incorrectly served from the cache.

2. Caching Static Assets

In order to make static assets cacheable, we need to configure their corresponding resource handlers.

Here’s a simple example of how to do that – setting the Cache-Control header on the response to max-age=31536000 which causes the browser to use the cached version of the file for one year:

@EnableWebMvc
public class MvcConfig extends WebMvcConfigurerAdapter {
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/js/**") 
                .addResourceLocations("/js/") 
                .setCacheControl(CacheControl.maxAge(365, TimeUnit.DAYS));
    }
}

The reason we use such a long cache validity period is that we want the client to use the cached version of the file until the file is updated, and 365 days is the maximum we can use according to the RFC for the Cache-Control header.
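As a quick sanity check, the max-age value in seconds corresponds exactly to 365 days:

```java
public class MaxAgeCheck {
    public static void main(String[] args) {
        // 365 days expressed in seconds -- the value written into the header
        long maxAgeSeconds = 365L * 24 * 60 * 60;
        System.out.println("Cache-Control: max-age=" + maxAgeSeconds); // max-age=31536000
    }
}
```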

And so, when a client requests foo.js for the first time, it will receive the whole file over the network (37 bytes in this case) with a status code of 200 OK. The response will have the following header to control caching behavior:

Cache-Control: max-age=31536000

This will cause the browser to cache the file with an expiration duration of a year, as a result of the following response:

Picture of the response with the Cache-Control header set.

When the client requests the same file for a second time, the browser will not make another request to the server. Instead, it will directly serve the file from its cache and avoid the network round-trip so the page will load much faster:

Picture of a subsequent request served from the browser cache.

Chrome users need to be careful while testing, because Chrome will not use the cache if you refresh the page with the refresh button or the F5 key; you need to press Enter in the address bar to observe the caching behavior.

3. Versioning Static Assets

Using a cache for serving static assets makes the page load really fast, but it has an important caveat: when you update the file, the client will not get the most recent version, since it doesn’t check with the server whether the file is up-to-date and simply serves the file from the browser cache.

Here’s what we need to do to make the browser get the file from the server only when the file is updated:

  • Serve the file under a URL that has a version in it. For example, foo.js should be served under /js/foo-46944c7e3a9bd20cc30fdc085cae46f2.js
  • Update links to the file with the new URL
  • Update the version part of the URL whenever the file is updated. For example, when foo.js is updated, it should now be served under /js/foo-a3d8d7780349a12d739799e9aa7d2623.js

The client will request the file from the server when it’s updated, because the page will have a link to a different URL, so the browser will not use its cache. If a file is not updated, its version (and hence its URL) will not change, and the client will keep using the cache for that file.

Normally, we would need to do all of this manually, but Spring supports it out of the box, including calculating the hash for each file and appending it to the URLs. Let’s see how we can configure our Spring application to do all of this for us.

3.1. Serve Under a URL with a Version

We need to add a VersionResourceResolver to a path in order to serve the files under it with an updated version string in its URL:

@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/js/**")
            .addResourceLocations("/js/")
            .setCacheControl(CacheControl.maxAge(365, TimeUnit.DAYS))
            .resourceChain(false)
            .addResolver(new VersionResourceResolver().addContentVersionStrategy("/**"));
}

Here we use a content version strategy. Each file in the /js folder will be served under a URL that has a version computed from its content; this is called fingerprinting. For example, foo.js will now be served under the URL /js/foo-46944c7e3a9bd20cc30fdc085cae46f2.js.
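Under the content version strategy, the version string is a hex-encoded MD5 hash of the file's bytes. Conceptually, something like the following simplified sketch (this is for illustration, not Spring's actual implementation):

```java
import java.security.MessageDigest;

public class ContentHash {

    // Hex-encoded MD5 of an asset's bytes -- conceptually what a
    // content-based version strategy computes for each file
    public static String md5Hex(byte[] content) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(content);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Any change to the content yields a completely different version string
        System.out.println(md5Hex("hello world".getBytes("UTF-8")));
    }
}
```

Because the version depends only on the content, an unchanged file keeps the same URL across deployments, and the browser keeps using its cached copy.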

With this configuration, when a client makes a request for http://localhost:8080/js/foo-46944c7e3a9bd20cc30fdc085cae46f2.js:

curl -i http://localhost:8080/js/foo-46944c7e3a9bd20cc30fdc085cae46f2.js

The server will respond with a Cache-Control header to tell the client browser to cache the file for a year:

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Last-Modified: Tue, 09 Aug 2016 06:43:26 GMT
Cache-Control: max-age=31536000

3.2. Update Links with the New URL

Before we inserted the version into the URL, we could use a simple script tag to import foo.js:

<script type="text/javascript" src="/js/foo.js">

Now that we serve the same file under a URL with a version, we need to reflect it on the page:

<script type="text/javascript" 
  src="/js/foo-46944c7e3a9bd20cc30fdc085cae46f2.js">

It becomes tedious to deal with all those long paths. Spring provides a better solution for this problem: we can use the ResourceUrlEncodingFilter and JSTL’s url tag to rewrite the URLs of the links with versioned ones.

The ResourceUrlEncodingFilter can be registered in web.xml as usual:

<filter>
    <filter-name>resourceUrlEncodingFilter</filter-name>
    <filter-class>
        org.springframework.web.servlet.resource.ResourceUrlEncodingFilter
    </filter-class>
</filter>
<filter-mapping>
    <filter-name>resourceUrlEncodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

The JSTL core tag library needs to be imported into our JSP page before we can use the url tag:

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>

Then, we can use the url tag to import foo.js as follows:

<script type="text/javascript" src="<c:url value="/js/foo.js" />">

When this JSP page is rendered, the URL for the file is rewritten correctly to contain the version in it:

<script type="text/javascript" src="/js/foo-46944c7e3a9bd20cc30fdc085cae46f2.js">

3.3. Update Version Part of the URL

Whenever a file is updated, its version is computed again and the file is served under a URL containing the new version. We don’t have to do any additional work for this; the VersionResourceResolver handles it for us.

4. Fixing CSS Links

CSS files can import other CSS files by using @import directives. For example, myCss.css file imports another.css file:

@import "another.css";

This would normally cause problems with versioned static assets, because the browser will make a request for the another.css file, but the file is served under a versioned path such as another-9556ab93ae179f87b178cfad96a6ab72.css.

To fix this problem and make requests to the correct path, we need to introduce the CssLinkResourceTransformer into the resource handler configuration:

@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/resources/**")
            .addResourceLocations("/resources/", "classpath:/other-resources/")
            .setCacheControl(CacheControl.maxAge(365, TimeUnit.DAYS))
            .resourceChain(false)
            .addResolver(new VersionResourceResolver().addContentVersionStrategy("/**"))
            .addTransformer(new CssLinkResourceTransformer());
}

This modifies the content of myCss.css and swaps the import statement with the following:

@import "another-9556ab93ae179f87b178cfad96a6ab72.css";

5. Conclusion

Taking advantage of HTTP caching is a huge boost to website performance, but it might be cumbersome to avoid serving stale resources while caching is in use.

In this article, we have implemented a good strategy to use HTTP caching while serving static assets with Spring MVC and busting the cache when the files are updated.

You can find the source code for this article on GitHub.


Introduction to Spring Cloud Netflix – Eureka


1. Overview

In this tutorial, we’ll introduce client-side service discovery via “Spring Cloud Netflix Eureka“.

Client-side service discovery allows services to find and communicate with each other without hardcoding hostname and port. The only ‘fixed point’ in such an architecture consists of a service registry with which each service has to register.

A drawback is that all clients must implement certain logic to interact with this fixed point. This implies an additional network round trip before the actual request.

With Netflix Eureka, each client can simultaneously act as a server, replicating its status to connected peers. In other words, a client retrieves a list of all connected peers from the service registry and makes all further requests to other services through a load-balancing algorithm.

To keep the registry informed of their presence, clients have to send a heartbeat signal to the registry.

To achieve the goal of this write-up, we’ll implement three microservices:

  • a service registry (Eureka Server),
  • a REST service which registers itself with the registry (Eureka Client) and
  • a web application consuming the REST service as a registry-aware client (Spring Cloud Netflix Feign Client).

2. Eureka Server

Implementing a Eureka Server to use as a service registry is as easy as adding spring-cloud-starter-eureka-server to the dependencies, enabling the Eureka Server in a @SpringBootApplication by annotating it with @EnableEurekaServer, and configuring some properties. But we’ll do it step by step.

First, we’ll create a new Maven project and put the dependencies into it. Note that we’re importing the spring-cloud-starter-parent into all projects described in this tutorial:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka-server</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-parent</artifactId>
            <version>Brixton.SR4</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Next, we’re creating the main application class:

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

Finally, we’re configuring the properties in YAML format; so application.yml will be our configuration file:

server:
  port: 8761
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false

Here we’re configuring the application port; 8761 is the default for Eureka servers. We’re telling the built-in Eureka Client not to register with ‘itself’, because our application should be acting as a server.

Now we can point our browser to http://localhost:8761 to view the Eureka dashboard, where we’ll later inspect the registered instances.

At the moment, we only see basic indicators such as status and health.

Picture of an example Eureka Dashboard.

3. Eureka Client

For a @SpringBootApplication to be discovery-aware, we have to include a Spring Discovery Client (for example, spring-cloud-starter-eureka) in our classpath.

Then we need to annotate a @Configuration with either @EnableDiscoveryClient or @EnableEurekaClient.

The latter tells Spring Boot to explicitly use Spring Netflix Eureka for service discovery. To give our client application some sample functionality, we’ll also include the spring-boot-starter-web package in the pom.xml and implement a REST controller.

But first, we will add the dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

Here we will implement the main application class:

@SpringBootApplication
@EnableEurekaClient
@RestController
public class EurekaClientApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaClientApplication.class, args);
    }

    @RequestMapping("/greeting")
    public String greeting() {
        return "Hello from EurekaClient!";
    }
}

Next, we have to set up an application.yml with a configured Spring application name to uniquely identify our client in the list of registered applications.

We can let Spring Boot choose a random port for us because we will later access this service by its name. Finally, we have to tell our client where it can locate the registry:

spring:
  application:
    name: spring-cloud-eureka-client
server:
  port: 0
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8761/eureka}
  instance:
    preferIpAddress: true
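Setting server.port to 0 asks the operating system for any free ephemeral port. A plain-Java sketch of that underlying mechanism (an illustration of the concept, not Spring Boot's actual code):

```java
import java.net.ServerSocket;

public class RandomPortSketch {
    public static void main(String[] args) throws Exception {
        // Binding to port 0 lets the OS pick a free ephemeral port --
        // the same mechanism behind "server.port: 0" in Spring Boot.
        try (ServerSocket socket = new ServerSocket(0)) {
            System.out.println("bound to port " + socket.getLocalPort());
        }
    }
}
```

Because the port is unpredictable, clients cannot hard-code it; they must resolve it from the registry, which is exactly the point of this setup.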

We set up our Eureka Client this way because this kind of service should later be easily scalable.

Now we can run the client and point our browser to http://localhost:8761 again to see its registration status on the Eureka Dashboard. Using the Dashboard, we can do further configuration, e.g. link the homepage of a registered client to the Dashboard for administrative purposes. The configuration options, however, are beyond the scope of this article.

Picture of an example Eureka Dashboard with a registered client.

4. Feign Client

To finalize our project with three dependent microservices, we will now implement a REST-consuming web application using Spring Netflix Feign Client.

Think of Feign as a discovery-aware Spring RestTemplate that uses interfaces to communicate with endpoints. These interfaces are automatically implemented at runtime, and instead of service URLs, they use service names.

Without Feign, we would have to autowire an instance of EurekaClient into our controller, with which we could receive service information by service name as an Application object.

We would use this Application to get a list of all instances of this service, pick a suitable one, and use its InstanceInfo to get the hostname and port. With these, we could make an ordinary request with any HTTP client.

For example:

@Autowired
private EurekaClient eurekaClient;

public void doRequest() {
    Application application = eurekaClient.getApplication("spring-cloud-eureka-client");
    InstanceInfo instanceInfo = application.getInstances().get(0);
    String hostname = instanceInfo.getHostName();
    int port = instanceInfo.getPort();
    ...
}

A RestTemplate can also be used to access Eureka client services by name, but that topic is beyond the scope of this write-up.
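Feign's runtime-implemented interfaces rely on the same principle as JDK dynamic proxies: an invocation handler supplies the method bodies at runtime. A minimal plain-Java sketch of that mechanism (an illustration only, not Feign's actual code; a real Feign proxy would translate the call into an HTTP request against the resolved service instance):

```java
import java.lang.reflect.Proxy;

public class ProxySketch {
    // A client interface like the Feign one we'll define below.
    interface GreetingClient {
        String greeting();
    }

    public static void main(String[] args) {
        // The InvocationHandler lambda supplies the implementation at runtime --
        // Feign does the same, issuing GET /greeting against the service
        // name resolved from the registry instead of returning a constant.
        GreetingClient client = (GreetingClient) Proxy.newProxyInstance(
            GreetingClient.class.getClassLoader(),
            new Class<?>[] { GreetingClient.class },
            (proxy, method, methodArgs) -> "Hello from EurekaClient!");
        System.out.println(client.greeting());
    }
}
```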

To set up our Feign Client project, we’re going to add the following four dependencies to its pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-feign</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

The Feign Client is located in the spring-cloud-starter-feign package. To enable it, we have to annotate a @Configuration with @EnableFeignClients. To use it, we simply annotate an interface with @FeignClient(“service-name”) and auto-wire it into a controller.

A good practice for creating such Feign Clients is to define interfaces with @RequestMapping-annotated methods and put them into a separate module. This way, they can be shared between server and client: on the server side, we can implement them as a @Controller, and on the client side, they can be extended and annotated as a @FeignClient.

Furthermore, the spring-cloud-starter-eureka package needs to be included in the project and enabled by annotating the main application class with @EnableEurekaClient.

The spring-boot-starter-web and spring-boot-starter-thymeleaf dependencies are used to present a view, containing fetched data from our REST service.

This will be our Feign Client interface:

@FeignClient("spring-cloud-eureka-client")
public interface GreetingClient {
    @RequestMapping("/greeting")
    String greeting();
}

Here we implement the main application class, which simultaneously acts as a controller:

@SpringBootApplication
@EnableEurekaClient
@EnableFeignClients
@Controller
public class FeignClientApplication {
    @Autowired
    private GreetingClient greetingClient;

    public static void main(String[] args) {
        SpringApplication.run(FeignClientApplication.class, args);
    }

    @RequestMapping("/get-greeting")
    public String greeting(Model model) {
        model.addAttribute("greeting", greetingClient.greeting());
        return "greeting-view";
    }
}

This will be the HTML template for our view:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
    <head>
        <title>Greeting Page</title>
    </head>
    <body>
        <h2 th:text="${greeting}"/>
    </body>
</html>

Lastly, the application.yml configuration file is almost the same as in the previous step:

spring:
  application:
    name: spring-cloud-eureka-feign-client
server:
  port: 8080
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8761/eureka}

Now we are able to build and run this service. Finally, we’ll point our browser to http://localhost:8080/get-greeting, and it should display something like the following:

Hello from EurekaClient!

5. Conclusion

As we’ve seen, we’re now able to implement a service registry using Spring Netflix Eureka Server and register some Eureka Clients with it.

Because our Eureka Client from step 3 listens on a randomly chosen port, a consumer can’t know its location without the information from the registry. With a Feign Client and our registry, we are able to locate and consume the REST service even when its location changes.

Finally, we’ve gotten a big-picture view of using service discovery in a microservice architecture.

As usual, you’ll find the sources on GitHub, which also include a set of Docker-related files for use with docker-compose to create containers from our project.


Hystrix Integration with Existing Spring Application



1. Overview

In the last article we looked at the basics of Hystrix and how it can help with building a fault tolerant and resilient application.

There are lots of existing Spring applications that make calls to external systems and would benefit from Hystrix. Unfortunately, it may not be possible to rewrite these applications in order to integrate Hystrix. However, a non-invasive way of integrating Hystrix is possible with the help of Spring AOP.

In this article we will look at how to integrate Hystrix with an existing Spring application.

2. Hystrix into a Spring Application

2.1. Existing Application

Let’s take a look at the application’s existing client caller, which makes calls to the RemoteServiceTestSimulator that we created in the previous article:

@Component("springClient")
public class SpringExistingClient {

    @Value("${remoteservice.timeout}")
    private int remoteServiceDelay;

    public String invokeRemoteServiceWithOutHystrix() throws InterruptedException {
        return new RemoteServiceTestSimulator(remoteServiceDelay).execute();
    }
}

As we can see in the above code snippet, the invokeRemoteServiceWithOutHystrix method is responsible for making calls to the RemoteServiceTestSimulator remote service. Of course, a real-world application will not be this simple.

2.2. Create an Around Advice

To demonstrate how to integrate Hystrix we are going to use this client as an example.

To do this, we will define an Around advice that will kick in when invokeRemoteServiceWithHystrix gets executed:

@Around("@annotation(com.baeldung.hystrix.HystrixCircuitBreaker)")
public Object circuitBreakerAround(ProceedingJoinPoint aJoinPoint) {
    return new RemoteServiceCommand(config, aJoinPoint).execute();
}

The above advice is designed as an Around advice to be executed at a pointcut annotated with @HystrixCircuitBreaker.

Now let’s see the definition of the HystrixCircuitBreaker annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface HystrixCircuitBreaker {}
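The RUNTIME retention is what allows the advice to find the annotation reflectively when a method is called. A plain-Java sketch of that lookup, using a hypothetical annotated class (this is what the @annotation pointcut boils down to, not the actual AspectJ implementation):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationLookup {
    // Same shape as the annotation defined above.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface HystrixCircuitBreaker {}

    // Hypothetical client with one guarded and one unguarded method.
    static class Client {
        @HystrixCircuitBreaker
        String call() { return "guarded"; }

        String plainCall() { return "plain"; }
    }

    public static void main(String[] args) {
        // The pointcut matcher effectively performs this check per method.
        for (Method m : Client.class.getDeclaredMethods()) {
            boolean guarded = m.isAnnotationPresent(HystrixCircuitBreaker.class);
            System.out.println(m.getName() + " guarded=" + guarded);
        }
    }
}
```

With SOURCE or CLASS retention, isAnnotationPresent would return false for both methods and the advice would never fire.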

2.3. The Hystrix Logic

Now let’s take a look at the RemoteServiceCommand. It is implemented as a static inner class in the sample code, so as to encapsulate the Hystrix invocation logic:

private static class RemoteServiceCommand extends HystrixCommand<String> {

    private ProceedingJoinPoint joinPoint;

    RemoteServiceCommand(Setter config, ProceedingJoinPoint joinPoint) {
        super(config);
        this.joinPoint = joinPoint;
    }

    @Override
    protected String run() throws Exception {
        try {
            return (String) joinPoint.proceed();
        } catch (Throwable th) {
            throw new Exception(th);
        }
    }
}

The whole implementation of the aspect component can be seen here.

2.4. Annotate with @HystrixCircuitBreaker

Once the aspect has been defined, we can annotate our client method with @HystrixCircuitBreaker as shown below, and Hystrix will be invoked for every call to the annotated method:

@HystrixCircuitBreaker
public String invokeRemoteServiceWithHystrix() throws InterruptedException{
    return new RemoteServiceTestSimulator(remoteServiceDelay).execute();
}

The integration test below will demonstrate the difference between the Hystrix route and the non-Hystrix route.

2.5. Test the Integration

For the purpose of demonstration, we have defined two method execution routes: one with Hystrix and the other without.

// Bootstrap the Spring context so the controller can be autowired
@RunWith(SpringRunner.class)
@SpringBootTest
public class SpringAndHystrixIntegrationTest {

    @Autowired
    private HystrixController hystrixController;

    @Test(expected = HystrixRuntimeException.class)
    public void givenTimeOutOf15000_whenClientCalledWithHystrix_thenExpectHystrixRuntimeException()
      throws InterruptedException {
        hystrixController.withHystrix();
    }

    @Test
    public void givenTimeOutOf15000_whenClientCalledWithOutHystrix_thenExpectSuccess()
      throws InterruptedException {
        assertThat(hystrixController.withOutHystrix(), equalTo("Success"));
    }
}

When the test executes, we can see that the method call without Hystrix waits for the whole execution time of the remote service, whereas the Hystrix route short-circuits and throws the HystrixRuntimeException after the defined timeout, which in our case is 10 seconds.
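Under the hood, Hystrix's timeout behaves much like running the command on a separate thread and abandoning it after a bounded wait. A plain-Java sketch of that idea using a Future (an illustration of the mechanism, not Hystrix's actual implementation):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutSketch {
    // Run the command on a worker thread and give up once the timeout elapses,
    // roughly what Hystrix does for a command's execution timeout.
    static String callWithTimeout(Callable<String> command, long timeoutMillis)
            throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(command);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the slow call
            throw new RuntimeException("Command timed out", e);
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast call completes normally...
        System.out.println(callWithTimeout(() -> "Success", 1000));
        // ...while a slow call is abandoned at the timeout.
        try {
            callWithTimeout(() -> { Thread.sleep(5000); return "late"; }, 200);
        } catch (RuntimeException e) {
            System.out.println("timed out");
        }
    }
}
```

The caller gets control back after the timeout regardless of how long the remote service takes, which is exactly the behavior the failing test above asserts.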

3. Conclusion

We can create one aspect for each remote service call that we want to make with different configurations. In the next article we’ll look at integrating Hystrix from the beginning of a project.

All code in this article can be found in the GitHub repository.


New Java/Spring Technical Editor for Baeldung (part-time, remote)


We’re looking for a part-time Java/Spring Technical Editor to work with new authors.

Simply put – you’ll be guiding authors by reviewing their articles and giving them technical feedback (in video form – usually a short, 5-minute screencast is enough).

Budget and Time Commitment

Overall, you’ll spend around 12-16 hours / month and the budget is $600 / month.

This corresponds to a target of 12 – 16 articles / month (12 x 1250 words or 16 x 1000 words articles).

The typical article will take about 2 rounds of review until it’s ready to go. All of this usually takes about 30 to 45 minutes of work for a small to medium article and can take 60 to 90 minutes for larger pieces.

Who is the right candidate?

First – you need to be a developer yourself, working or actively involved in the Java and Spring ecosystem. All of these articles are code-centric, so being in the trenches and able to code is instrumental.

Second – you’ll provide feedback via screencasts. These videos are informal – generally just a quick review explaining what the author needs to improve in their article.

Finally – and it almost goes without saying – you should have a good command of the English language.

What Will You Be Doing?

The goal is to make sure that the article hits a high level of quality before it gets published.

That means that the code examples should be runnable and correct, and that the formatting and style matches the guidelines of the site.

Apply

If you think you’re well suited for this work, the current editorial team would love to work with you to help grow Baeldung.

Email me at eugen@baeldung.com with your details.

Cheers,

Eugen. 
