
Guide to @ConfigurationProperties in Spring Boot


1. Introduction

One of the handy features of Spring Boot is externalized configuration and easy access to properties defined in properties files.

An earlier article described various ways in which this can be done, and in this article we’re going to explore the @ConfigurationProperties annotation in greater detail.

2. Setup

The setup for this article is fairly standard. We start by adding spring-boot-starter-parent as the parent in our pom.xml:

<!-- Inherit defaults from Spring Boot -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.2.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

The latest version of Spring Boot can be found on Maven Central here.

In order to be able to validate properties defined in the file, we need an implementation of JSR-303, and hibernate-validator is one of them. Let’s add it to our pom.xml as well:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.4.1.Final</version>
</dependency>

Depending on the type of application and the environment it runs in, we might need to add one or more additional dependencies. The reference page has the details, and the latest version of hibernate-validator can be found here.

To understand the full power of this annotation, we’ll take the example of a hypothetical class holding configuration properties related to a mail server:

public class ConfigProperties {

    public static class Credentials {
        private String authMethod;
        private String username;
        private String password;

        // standard getters and setters
    }

    private String host;
    private int port;
    private String from;
    private Credentials credentials;
    private List<String> defaultRecipients;
    private Map<String, String> additionalHeaders;
 
    // standard getters and setters
}

3. Binding the Configuration Properties

According to the official documentation, it is recommended to isolate the configuration properties into a separate POJO annotated with @ConfigurationProperties, so let’s start by doing that:

@Configuration
@ConfigurationProperties
public class ConfigProperties {
    // previous code
}

We’ve also added the @Configuration annotation so that Spring can find this bean and make it a candidate for injection.

The annotation works best when we have hierarchical properties that all share the same prefix, so we specify the prefix as part of the annotation.

We can also optionally define a custom source where these properties are stored; otherwise, the default location (classpath:application.properties) is looked up. So we now add the above annotations to the existing properties class:

@Configuration
@PropertySource("classpath:configprops.properties")
@ConfigurationProperties(prefix = "mail")
public class ConfigProperties {
    // previous code
}

That’s it! Now any property defined in the properties file that has the mail prefix and the same name as one of the fields is automatically assigned to this object.

Also, by default, a relaxed binding scheme is adopted for the binding, so all of the following variations are bound to the property authMethod of the Credentials class:

mail.credentials.auth_method
mail.credentials.auth-method
mail.credentials_AUTH_METHOD
mail.CREDENTIALS_AUTH_METHOD

Similarly, List and Map properties can also be bound. Here’s a sample properties file that binds correctly to our ConfigProperties object defined earlier:

#Simple properties
mail.host=mailer@mail.com
mail.port=9000
mail.from=mailer@mail.com

#List properties
mail.defaultRecipients[0]=admin@mail.com
mail.defaultRecipients[1]=owner@mail.com

#Map Properties
mail.additionalHeaders.redelivery=true
mail.additionalHeaders.secure=true

#Object properties
mail.credentials.username=john
mail.credentials.password=password
mail.credentials.authMethod=SHA1
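
Once bound, the configuration bean can be injected like any other Spring bean. As a quick illustration, here’s a minimal sketch; the EmailService class is ours for demonstration and not part of the original article:

@Service
public class EmailService {

    @Autowired
    private ConfigProperties configProperties;

    public void printConfig() {
        // values are populated from configprops.properties at startup
        System.out.println("Host: " + configProperties.getHost());
        System.out.println("Recipients: " + configProperties.getDefaultRecipients());
    }
}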

4. Property Validation 

One of the handy things this annotation provides is validation of properties using the JSR-303 format. This allows for all sorts of neat things, like checking that a property isn’t blank:

@NotBlank
private String host;

We can also check the minimum and maximum length of a String property:

@Length(max = 4, min = 1)
private String authMethod;

Or enforce the minimum and maximum value of an Integer property:

@Min(1025)
@Max(65535)
private int port;

And finally, we can make sure that a property matches a certain pattern by defining a regex. As an example, we’ve done this for the email address:

@Pattern(regexp = "^[a-z0-9._%+-]+@[a-z0-9.-]+\\.[a-z]{2,6}$")
private String from;

This helps us reduce a lot of if-else conditions in our code and makes it much cleaner and more concise.

If any of these validations fail, the main application fails to start with an IllegalStateException until the incorrect property is corrected.

Also, it’s important that we declare getters and setters for each of the properties, as the validator framework uses them to access the concerned fields.
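
Note that in more recent Spring Boot versions, the class must also be annotated with @Validated for these constraints to be enforced; with the version used here, the presence of a JSR-303 implementation on the classpath is enough. A minimal sketch:

@Configuration
@PropertySource("classpath:configprops.properties")
@ConfigurationProperties(prefix = "mail")
@Validated
public class ConfigProperties {
    // fields and constraint annotations as shown above
}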

5. Conclusion

In this quick tutorial, we explored the @ConfigurationProperties annotation and also saw some of the handy features it provides like relaxed binding and Bean Validation.

As usual, the code is available over on GitHub.


An Introduction to Spring Cloud Zookeeper


1. Introduction

In this article, we’ll get acquainted with Zookeeper and how it’s used for Service Discovery, which serves as centralized knowledge about services in the cloud.

Spring Cloud Zookeeper provides Apache Zookeeper integration for Spring Boot apps through autoconfiguration and binding to the Spring Environment.

2. Service Discovery Setup

We will create two apps:

  • An app that will provide a service (referred to in this article as the Service Provider)
  • An app that will consume this service (called the Service Consumer)

Apache Zookeeper will act as a coordinator in our service discovery setup. Apache Zookeeper installation instructions are available at the following link.

3. Service Provider Registration

We will enable service registration by adding the spring-cloud-starter-zookeeper-discovery dependency and using the annotation @EnableDiscoveryClient in the main application.

Below, we’ll show this process step by step for a service that returns “Hello World!” in response to GET requests.

3.1. Maven Dependencies

First, let’s add the required spring-cloud-starter-zookeeper-discovery, spring-web, spring-cloud-dependencies, and spring-boot-starter dependencies to our pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
        <version>1.5.2.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
        <version>4.3.7.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zookeeper-discovery</artifactId>
        <version>1.0.3.RELEASE</version>
    </dependency>
</dependencies>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Brixton.SR7</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

3.2. Service Provider Annotations

Next, we will annotate our main class with @EnableDiscoveryClient. This will make the HelloWorld application discovery-aware:

@SpringBootApplication
@EnableDiscoveryClient
public class HelloWorldApplication {
    public static void main(String[] args) {
        SpringApplication.run(HelloWorldApplication.class, args);
    }
}

And a simple controller:

@RestController
public class HelloWorldController {

    @GetMapping("/helloworld")
    public String helloWorld() {
        return "Hello World!";
    }
}

3.3. YAML Configurations

Now let’s create an application.yml file that configures the application log level and informs Zookeeper that the application is discovery-enabled.

The name under which the application gets registered with Zookeeper is the most important part. Later, in the service consumer, a Feign client will use this name during service discovery:

spring:
  application:
    name: HelloWorld
  cloud:
    zookeeper:
      discovery:
        enabled: true
logging:
  level:
    org.apache.zookeeper.ClientCnxn: WARN

The Spring Boot application looks for Zookeeper on the default port 2181. If Zookeeper is located elsewhere, the corresponding configuration needs to be added:

spring:
  cloud:
    zookeeper:
      connect-string: localhost:2181

4. Service Consumer

Now we’ll create a REST service consumer and register it using the Spring Netflix Feign Client.

4.1. Maven Dependency

First, let’s add the required spring-cloud-starter-zookeeper-discovery, spring-cloud-dependencies, spring-boot-starter-actuator, and spring-cloud-starter-feign dependencies to our pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zookeeper-discovery</artifactId>
        <version>1.0.3.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
        <version>1.5.2.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-feign</artifactId>
        <version>1.2.5.RELEASE</version>
    </dependency>
</dependencies>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Brixton.SR7</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

4.2. Service Consumer Annotations

As with the service provider, we will annotate the main class with @EnableDiscoveryClient to make it discovery-aware:

@SpringBootApplication
@EnableDiscoveryClient
public class GreetingApplication {
 
    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }
}

4.3. Discover Service with Feign Client

We’ll use the Spring Cloud Feign Integration, a project by Netflix that lets us define a declarative REST client. We declare what the URL looks like, and Feign takes care of connecting to the REST service.

The Feign Client is imported via the spring-cloud-starter-feign package. We’ll annotate a @Configuration class with @EnableFeignClients to make use of it within the application.

Finally, we annotate an interface with @FeignClient("service-name") and auto-wire it into our application so that we can access the service programmatically.

Here, in the annotation @FeignClient(name = "HelloWorld"), we refer to the service name of the service provider we previously created:

@Configuration
@EnableFeignClients
@EnableDiscoveryClient
public class HelloWorldClient {

    @Autowired
    private TheClient theClient;

    @FeignClient(name = "HelloWorld")
    interface TheClient {

        @RequestMapping(path = "/helloworld", method = RequestMethod.GET)
        @ResponseBody
        String helloWorld();
    }

    public String helloWorld() {
        return theClient.helloWorld();
    }
}

4.4. Controller Class

The following is a simple controller class that calls the service provider function on our Feign client class to consume the service (whose details are abstracted through service discovery) via the injected helloWorldClient object, and displays the result in the response:

@RestController
public class GreetingController {
 
    @Autowired
    private HelloWorldClient helloWorldClient;

    @GetMapping("/get-greeting")
    public String greeting() {
        return helloWorldClient.helloWorld();
    }
}

4.5. YAML Configurations

Next, we create an application.yml file, very similar to the one used before, that configures the application’s log level:

logging:
  level:
    org.apache.zookeeper.ClientCnxn: WARN

Again, the application looks for Zookeeper on the default port 2181. If Zookeeper is located elsewhere, the corresponding configuration needs to be added:

spring:
  cloud:
    zookeeper:
      connect-string: localhost:2181

5. Testing the Setup

The HelloWorld REST service registers itself with Zookeeper on deployment. Then the Greeting service, acting as the service consumer, calls the HelloWorld service using the Feign client.

Now we can build and run these two services.
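
The URL below assumes the Greeting application listens on port 8083. The original configuration doesn’t show this, but it could be set with a server.port entry in its application.yml, for example:

server:
  port: 8083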

Finally, we’ll point our browser to http://localhost:8083/get-greeting, and it should display:

Hello World!

6. Conclusion

In this article, we’ve seen how to implement service discovery using Spring Cloud Zookeeper. We registered a service called HelloWorld with the Zookeeper server so that the Greeting service could discover and consume it via a Feign client, without knowing its location details.

As always, the code for this article is available on GitHub.

Introduction to the Stripe API for Java


1. Overview

Stripe is a cloud-based service that enables businesses and individuals to receive payments over the internet and offers both client-side libraries (JavaScript and native mobile) and server-side libraries (Java, Ruby, Node.js, etc.).

Stripe provides a layer of abstraction that reduces the complexity of receiving payments. As a result, we don’t need to deal with credit card details directly – instead, we deal with a token symbolizing an authorization to charge.

In this tutorial, we will create a sample Spring Boot project that allows users to input a credit card and later will charge the card for a certain amount using the Stripe API for Java.

2. Dependencies

To make use of the Stripe API for Java in the project, we add the corresponding dependency to our pom.xml:

<dependency>
    <groupId>com.stripe</groupId>
    <artifactId>stripe-java</artifactId>
    <version>4.2.0</version>
</dependency>

We can find its latest version in the Maven Central repository.

For our sample project, we will leverage the spring-boot-starter-parent:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.2.RELEASE</version>
</parent>

We will also use Lombok to reduce boilerplate code, and Thymeleaf will be the template engine for delivering dynamic web pages.

Since we are using the spring-boot-starter-parent to manage the versions of these libraries, we don’t have to include their versions in pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId> 
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>

Note that if you’re using NetBeans, you may want to use Lombok explicitly with version 1.16.16, since a bug in the version of Lombok provided with Spring Boot 1.5.2 causes NetBeans to generate a lot of errors.

3. API Keys

Before we can communicate with Stripe and execute credit card charges, we need to register a Stripe account and obtain secret/public Stripe API keys.

After confirming the account, we will log in to access the Stripe dashboard. We then choose “API keys” on the left side menu:

[Image: the Stripe dashboard API keys menu]

There will be two pairs of secret/public keys — one for test and one for live. Let’s leave this tab open so that we can use these keys later.

4. General Flow

Charging the credit card is done in five simple steps, involving the front-end (running in a browser), the back-end (our Spring Boot application), and Stripe:

  1. The user goes to the checkout page and clicks “Pay with Card”.
  2. The user is presented with the Stripe Checkout overlay dialog, where they fill in the credit card details.
  3. The user confirms with “Pay <amount>”, which will:
    • Send the credit card details to Stripe
    • Get a token in the response, which is appended to the existing form
    • Submit that form with the amount, public API key, email, and the token to our back-end
  4. Our back-end contacts Stripe with the token, the amount, and the secret API key.
  5. The back-end checks the Stripe response and provides the user with feedback on the operation.

[Image: Stripe payment flow diagram]

We will cover each step in greater detail in the following sections.

5. Checkout Form

Stripe Checkout is a customizable, mobile-ready, and localizable widget that renders a form for entering credit card details. Through the inclusion and configuration of “checkout.js“, it is responsible for:

  • “Pay with Card” button rendering
  • Payment overlay dialog rendering (triggered after clicking “Pay with Card”)
  • Credit card validation
  • “Remember me” feature (associates the card with a mobile number)
  • Sending the credit card to Stripe and replacing it with a token in the enclosing form (triggered after clicking “Pay <amount>”)

If we need to exercise more control over the checkout form than is provided by Stripe Checkout, then we can use Stripe Elements.

Next, we will analyze the controller that prepares the form and then the form itself.

5.1. Controller

Let’s start by creating a controller to prepare the model with the necessary information that the checkout form needs.

First, we’ll need to copy the test version of our public key from the Stripe dashboard and use it to define STRIPE_PUBLIC_KEY as an environment variable. We then use this value in the stripePublicKey field.

We’re also setting currency and amount (expressed in cents) manually here merely for demonstration purposes, but in a real application, we might set a product/sale id that could be used to fetch the actual values.

Then, we’ll dispatch to the checkout view which holds the checkout form:

@Controller
public class CheckoutController {

    @Value("${STRIPE_PUBLIC_KEY}")
    private String stripePublicKey;

    @RequestMapping("/checkout")
    public String checkout(Model model) {
        model.addAttribute("amount", 50 * 100); // in cents
        model.addAttribute("stripePublicKey", stripePublicKey);
        model.addAttribute("currency", ChargeRequest.Currency.EUR);
        return "checkout";
    }
}

Regarding the Stripe API keys, you can define them as environment variables per application (test vs. live).

As is the case with any password or sensitive information, it is best to keep the secret key out of your version control system.

5.2. Form

The “Pay with Card” button and the checkout dialog are included by adding a form with a script inside, correctly configured with data attributes:

<form action='/charge' method='POST' id='checkout-form'>
    <input type='hidden' th:value='${amount}' name='amount' />
    <label>Price:<span th:text='${amount/100}' /></label>
    <!-- NOTE: data-key/data-amount/data-currency will be rendered by Thymeleaf -->
    <script
       src='https://checkout.stripe.com/checkout.js' 
       class='stripe-button'
       th:attr='data-key=${stripePublicKey}, 
         data-amount=${amount}, 
         data-currency=${currency}'
       data-name='Baeldung'
       data-description='Spring course checkout'
       data-image
         ='http://www.baeldung.com/wp-content/themes/baeldung/favicon/android-chrome-192x192.png'
       data-locale='auto'
       data-zip-code='false'>
   </script>
</form>

The “checkout.js” script automatically triggers a request to Stripe right before the submit, which then appends the Stripe token and the Stripe user email as the hidden fields “stripeToken” and “stripeEmail“.

These will be submitted to our back-end along with the other form fields. The script data attributes are not submitted.

We use Thymeleaf to render the attributes “data-key“, “data-amount“, and “data-currency“.

The amount (“data-amount“) is used only for display purposes (along with “data-currency“). Its unit is cents of the used currency, so we divide it by 100 to display it.

The Stripe public key is passed to Stripe after the user asks to pay. Do not use the secret key here, as this is sent to the browser.

6. Charge Operation

For server-side processing, we need to define the POST request handler used by the checkout form. Let’s take a look at the classes we will need for the charge operation.

6.1. ChargeRequest Entity

Let’s define the ChargeRequest POJO that we will use as a business entity during the charge operation:

@Data
public class ChargeRequest {

    public enum Currency {
        EUR, USD;
    }
    private String description;
    private int amount;
    private Currency currency;
    private String stripeEmail;
    private String stripeToken;
}

6.2. Service

Let’s write a StripeService class to communicate the actual charge operation to Stripe:

@Service
public class StripeService {

    @Value("${STRIPE_SECRET_KEY}")
    private String secretKey;
    
    @PostConstruct
    public void init() {
        Stripe.apiKey = secretKey;
    }
    public Charge charge(ChargeRequest chargeRequest) 
      throws AuthenticationException, InvalidRequestException,
        APIConnectionException, CardException, APIException {
        Map<String, Object> chargeParams = new HashMap<>();
        chargeParams.put("amount", chargeRequest.getAmount());
        chargeParams.put("currency", chargeRequest.getCurrency());
        chargeParams.put("description", chargeRequest.getDescription());
        chargeParams.put("source", chargeRequest.getStripeToken());
        return Charge.create(chargeParams);
    }
}

As with the stripePublicKey field in the CheckoutController, the secretKey field is populated from an environment variable, STRIPE_SECRET_KEY, whose value we copied from the Stripe dashboard.

Once the service has been initialized, this key is used in all subsequent Stripe operations.

The object returned by the Stripe library represents the charge operation and contains useful data like the operation id.

6.3. Controller

Finally, let’s write the controller that will receive the POST request made by the checkout form and submit the charge to Stripe via our StripeService.

Note that the “ChargeRequest” parameter is automatically initialized with the request parameters “amount“, “stripeEmail“, and “stripeToken” included in the form:

@Controller
public class ChargeController {

    @Autowired
    private StripeService paymentsService;

    @PostMapping("/charge")
    public String charge(ChargeRequest chargeRequest, Model model)
      throws StripeException {
        chargeRequest.setDescription("Example charge");
        chargeRequest.setCurrency(Currency.EUR);
        Charge charge = paymentsService.charge(chargeRequest);
        model.addAttribute("id", charge.getId());
        model.addAttribute("status", charge.getStatus());
        model.addAttribute("chargeId", charge.getId());
        model.addAttribute("balance_transaction", charge.getBalanceTransaction());
        return "result";
    }

    @ExceptionHandler(StripeException.class)
    public String handleError(Model model, StripeException ex) {
        model.addAttribute("error", ex.getMessage());
        return "result";
    }
}

On success, we add the status, the operation id, the charge id, and the balance transaction id to the model so that we can show them later to the user (Section 7). This is done to illustrate some of the contents of the charge object.

Our ExceptionHandler will deal with exceptions of type StripeException that are thrown during the charge operation.

If we need more fine-grained error handling, we can add separate handlers for the subclasses of StripeException, such as CardException, RateLimitException, or AuthenticationException.

The “result” view renders the result of the charge operation.

7. Showing the Result

The HTML used to display the result is a basic Thymeleaf template that displays the outcome of a charge operation. The user is sent here by the ChargeController whether the charge operation was successful or not:

<!DOCTYPE html>
<html xmlns='http://www.w3.org/1999/xhtml' xmlns:th='http://www.thymeleaf.org'>
    <head>
        <title>Result</title>
    </head>
    <body>
        <h3 th:if='${error}' th:text='${error}' style='color: red;'></h3>
        <div th:unless='${error}'>
            <h3 style='color: green;'>Success!</h3>
            <div>Id.: <span th:text='${id}' /></div>
            <div>Status: <span th:text='${status}' /></div>
            <div>Charge id.: <span th:text='${chargeId}' /></div>
            <div>Balance transaction id.: <span th:text='${balance_transaction}' /></div>
        </div>
        <a href='/checkout.html'>Checkout again</a>
    </body>
</html>

On success, the user will see some details of the charge operation:

[Image: charge details shown on success]

On error, the user will be presented with the error message as returned by Stripe:

[Image: Stripe error message shown on failure]

8. Conclusion

In this tutorial, we’ve shown how to make use of the Stripe Java API to charge a credit card. In the future, we could reuse our server-side code to serve a native mobile app.

To test the entire charge flow, we don’t need to use a real credit card (even in test mode). We can rely on Stripe testing cards instead.

The charge operation is one among many possibilities offered by the Stripe Java API. The official API reference will guide us through the whole set of operations.

The sample code used in this tutorial can be found in the GitHub project.

Testing in Spring Boot


1. Overview

In this article, we’ll have a look at writing tests using the framework support in Spring Boot. We’ll cover unit tests that can run in isolation as well as integration tests that will bootstrap Spring context before executing tests.

If you are new to Spring Boot, check out our intro to Spring Boot.

2. Project Setup

The application we’re going to use in this article is an API that provides some basic operations on an Employee Resource. This is a typical tiered architecture – the API call is processed from the Controller to Service to the Persistence layer.

3. Maven Dependencies

Let’s first add our testing dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
    <version>1.5.3.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>test</scope>
    <version>1.4.194</version>
</dependency>

The spring-boot-starter-test is the primary dependency that contains the majority of elements required for our tests.

The H2 DB is our in-memory database. It eliminates the need for configuring and starting an actual database for test purposes.

4. Integration Testing with @DataJpaTest

We’re going to work with an entity named Employee which has an id and a name as its properties:

@Entity
@Table(name = "person")
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Size(min = 3, max = 20)
    private String name;

    // standard getters and setters, constructors
}

And here’s our repository – using Spring Data JPA:

@Repository
public interface EmployeeRepository extends JpaRepository<Employee, Long> {

    public Employee findByName(String name);

}

That’s it for the persistence layer code. Now let’s move on to writing our test class.

First, let’s create the skeleton of our test class:

@RunWith(SpringRunner.class)
@DataJpaTest
public class EmployeeRepositoryTest {

    @Autowired
    private TestEntityManager entityManager;

    @Autowired
    private EmployeeRepository employeeRepository;

    // write test cases here

}

@RunWith(SpringRunner.class) provides a bridge between Spring Boot test features and JUnit. Whenever we use any Spring Boot testing features in our JUnit tests, this annotation is required.

@DataJpaTest provides some standard setup needed for testing the persistence layer:

  • configuring H2, an in-memory database
  • setting up Hibernate, Spring Data, and the DataSource
  • performing an @EntityScan
  • turning on SQL logging

To carry out DB operations, we need some records already set up in our database. To set up such data, we can use TestEntityManager. The TestEntityManager provided by Spring Boot is an alternative to the standard JPA EntityManager that provides methods commonly used when writing tests.

EmployeeRepository is the component that we are going to test. Now let’s write our first test case:

@Test
public void whenFindByName_thenReturnEmployee() {
    // given
    Employee alex = new Employee("alex");
    entityManager.persist(alex);
    entityManager.flush();

    // when
    Employee found = employeeRepository.findByName(alex.getName());

    // then
    assertThat(found.getName())
      .isEqualTo(alex.getName());
}

In the above test, we’re using the TestEntityManager to insert an Employee in the DB and reading it via the find by name API.

The assertThat(…) part comes from the AssertJ library, which comes bundled with Spring Boot.

5. Mocking with @MockBean

Our Service layer code is dependent on our Repository. However, to test the Service layer, we do not need to know or care about how the persistence layer is implemented:

@Service
public class EmployeeServiceImpl implements EmployeeService {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Override
    public Employee getEmployeeByName(String name) {
        return employeeRepository.findByName(name);
    }
}

Ideally, we should be able to write and test our Service layer code without wiring in our full persistence layer.

To achieve this, we can use the mocking support provided by Spring Boot Test.

Let’s have a look at the test class skeleton first:

@RunWith(SpringRunner.class)
public class EmployeeServiceImplTest {

    @TestConfiguration
    static class EmployeeServiceImplTestContextConfiguration {
 
        @Bean
        public EmployeeService employeeService() {
            return new EmployeeServiceImpl();
        }
    }

    @Autowired
    private EmployeeService employeeService;

    @MockBean
    private EmployeeRepository employeeRepository;

    // write test cases here
}

To test the Service class, we need an instance of it created and available as a @Bean so that we can @Autowire it in our test class. This configuration is achieved using the @TestConfiguration annotation.

During component scanning, components or configurations created only for specific tests might accidentally get picked up everywhere. To help prevent that, Spring Boot provides the @TestConfiguration annotation, which can be used on classes in src/test/java to indicate that they should not be picked up by scanning.

Another interesting thing here is the use of @MockBean. It creates a Mock for the EmployeeRepository which can be used to bypass the call to the actual EmployeeRepository:

@Before
public void setUp() {
    Employee alex = new Employee("alex");

    Mockito.when(employeeRepository.findByName(alex.getName()))
      .thenReturn(alex);
}

With the setup done, the test case is simple:

@Test
public void whenValidName_thenEmployeeShouldBeFound() {
    String name = "alex";
    Employee found = employeeService.getEmployeeByName(name);

    assertThat(found.getName())
      .isEqualTo(name);
}

6. Unit Testing with @WebMvcTest

Our Controller depends on the Service layer; let’s only include a single method for simplicity:

@RestController
@RequestMapping("/api")
public class EmployeeRestController {

    @Autowired
    private EmployeeService employeeService;

    @GetMapping("/employees")
    public List<Employee> getAllEmployees() {
        return employeeService.getAllEmployees();
    }
}

Since we are only focused on the Controller code, it is natural to mock the Service layer code for our unit tests:

@RunWith(SpringRunner.class)
@WebMvcTest(EmployeeRestController.class)
public class EmployeeRestControllerTest {

    @Autowired
    private MockMvc mvc;

    @MockBean
    private EmployeeService service;

    // write test cases here
}

To test the Controllers, we can use @WebMvcTest. It will auto-configure the Spring MVC infrastructure for our unit tests.

In most cases, @WebMvcTest is limited to bootstrapping a single controller. It is used along with @MockBean to provide mock implementations for any required dependencies.

@WebMvcTest also auto-configures MockMvc, which offers a powerful way to easily test MVC controllers without starting a full HTTP server.

Having said that, let’s write our test case:

@Test
public void givenEmployees_whenGetEmployees_thenReturnJsonArray()
  throws Exception {
    
    Employee alex = new Employee("alex");

    List<Employee> allEmployees = Arrays.asList(alex);

    given(service.getAllEmployees()).willReturn(allEmployees);

    mvc.perform(get("/api/employees")
      .contentType(MediaType.APPLICATION_JSON))
      .andExpect(status().isOk())
      .andExpect(jsonPath("$", hasSize(1)))
      .andExpect(jsonPath("$[0].name", is(alex.getName())));
}

The get(…) method call can be replaced by other methods corresponding to HTTP verbs like put(), post(), etc. Please note that we are also setting the content type in the request.

MockMvc is flexible, and we can create any request using it.
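
For instance, a POST request with a JSON body could look like the following sketch. Note that the POST /api/employees endpoint and its response status are assumptions for illustration, not part of the controller shown above:

@Test
public void whenPostEmployee_thenReturnCreated() throws Exception {
    // hypothetical endpoint; the controller above only exposes GET /api/employees
    mvc.perform(post("/api/employees")
      .contentType(MediaType.APPLICATION_JSON)
      .content("{\"name\": \"alex\"}"))
      .andExpect(status().isCreated());
}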

7. Integration Testing with @SpringBootTest

As the name suggests, integration tests focus on integrating different layers of the application. That also means no mocking is involved.

Ideally, we should keep the integration tests separate from the unit tests, and they should not run along with the unit tests. We can do that by using a different profile to run only the integration tests. A couple of reasons for doing this could be that the integration tests are time-consuming and might need an actual database to execute.

However, in this article, we won’t focus on that and we’ll instead make use of the in-memory H2 persistence storage.

The integration tests need to start up a container to execute the test cases. Hence, some additional setup is required for this – all of this is easy in Spring Boot:

@RunWith(SpringRunner.class)
@SpringBootTest(
  webEnvironment = WebEnvironment.RANDOM_PORT,
  classes = Application.class)
@AutoConfigureMockMvc
@TestPropertySource(
  locations = "classpath:application-integrationtest.properties")
public class EmployeeRestControllerIntTest {

    @Autowired
    private MockMvc mvc;

    @Autowired
    private EmployeeRepository repository;

    // write test cases here
}

The @SpringBootTest annotation can be used when we need to bootstrap the entire container. The annotation works by creating the ApplicationContext that will be utilized in our tests.

We can use the webEnvironment attribute of @SpringBootTest to configure our runtime environment. We’re using WebEnvironment.RANDOM_PORT so that the container starts on a random port. This is helpful when several integration tests run in parallel on the same machine.
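
If a test needs to call the server over real HTTP rather than through MockMvc, the randomly chosen port can be injected. A minimal sketch, not part of the test class above (TestRestTemplate is auto-configured by @SpringBootTest when a web environment is used):

@LocalServerPort
private int port;

@Autowired
private TestRestTemplate restTemplate;

@Test
public void whenGetEmployees_thenStatus200OverHttp() {
    ResponseEntity<String> response = restTemplate
      .getForEntity("http://localhost:" + port + "/api/employees", String.class);

    assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
}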

We can use the @TestPropertySource annotation to configure locations of properties files specific to our tests. Please note that the property file loaded with @TestPropertySource will override the existing application.properties file.

The application-integrationtest.properties contains the details to configure the persistence storage:

spring.datasource.url = jdbc:h2:mem:test
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.H2Dialect

If we want to run our integration tests against MySQL, we can change the above values in the properties file.
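
For example, a hypothetical MySQL configuration might look like this (the database name and credentials are placeholders, not from the original article):

spring.datasource.url = jdbc:mysql://localhost:3306/test
spring.datasource.username = root
spring.datasource.password = password
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect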

The test cases for the integration tests might look similar to the Controller layer unit tests:

@Test
public void givenEmployees_whenGetEmployees_thenStatus200()
  throws Exception {

    createTestEmployee("bob");

    mvc.perform(get("/api/employees")
      .contentType(MediaType.APPLICATION_JSON))
      .andExpect(status().isOk())
      .andExpect(content()
      .contentTypeCompatibleWith(MediaType.APPLICATION_JSON))
      .andExpect(jsonPath("$[0].name", is("bob")));
}

The difference from the Controller layer unit tests is that here nothing is mocked and end-to-end scenarios will be executed.

8. Conclusion

In this tutorial, we took a deep dive into the testing support in Spring Boot and showed how to write unit tests efficiently.

The complete source code of this article can be found over on GitHub. The source code contains many more examples and various test cases.

And, if you want to keep learning about testing – we have separate articles related to integration tests and unit tests in JUnit 5.

Guide to the Java TransferQueue


1. Overview

In this article, we’ll be looking at the TransferQueue construct from the standard java.util.concurrent package.

Simply put, this queue allows us to create programs according to the producer-consumer pattern, and coordinate messages passing from producers to consumers.

The implementation is actually similar to the BlockingQueue, but gives us the added ability to implement a form of backpressure. This means that, when the producer sends a message to the consumer using the transfer() method, the producer stays blocked until the message is consumed.
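
To make this contract concrete, here’s a minimal standalone sketch (our own, not from the examples below) in which transfer() blocks until another thread takes the element:

public class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        TransferQueue<String> queue = new LinkedTransferQueue<>();

        new Thread(() -> {
            try {
                // blocks until a consumer receives the element
                queue.transfer("message");
                System.out.println("producer: transfer completed");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        Thread.sleep(1000); // the producer is still blocked at this point
        System.out.println("consumer took: " + queue.take()); // unblocks the producer
    }
}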

2. One Producer – Zero Consumers

Let’s test the transfer() method from the TransferQueue – the expected behavior is that the producer is blocked until the consumer receives the message from the queue using the take() method.

To achieve that, we’ll create a program that has one producer but zero consumers. Because no consumer ever fetches elements from the queue, each transfer attempt from the producer thread will block until it times out.

Let’s see what the Producer class looks like:

class Producer implements Runnable {
    private TransferQueue<String> transferQueue;
 
    private String name;
 
    private Integer numberOfMessagesToProduce;
 
    public AtomicInteger numberOfProducedMessages
      = new AtomicInteger();

    @Override
    public void run() {
        for (int i = 0; i < numberOfMessagesToProduce; i++) {
            try {
                boolean added 
                  = transferQueue.tryTransfer("A" + i, 4000, TimeUnit.MILLISECONDS);
                if(added){
                    numberOfProducedMessages.incrementAndGet();
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
    // standard constructors
}

We are passing an instance of the TransferQueue to the constructor together with a name that we want to give our producer and the number of elements that should be transferred to the queue.

Note that we are using the tryTransfer() method, with a given timeout. We are waiting four seconds, and if a producer is not able to transfer the message within the given timeout, it returns false and moves on to the next message. The producer has a numberOfProducedMessages variable to keep track of how many messages were produced.

Next, let’s look at the Consumer class:

class Consumer implements Runnable {
 
    private TransferQueue<String> transferQueue;
 
    private String name;
 
    private int numberOfMessagesToConsume;
 
    public AtomicInteger numberOfConsumedMessages
     = new AtomicInteger();

    @Override
    public void run() {
        for (int i = 0; i < numberOfMessagesToConsume; i++) {
            try {
                String element = transferQueue.take();
                longProcessing(element);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private void longProcessing(String element)
      throws InterruptedException {
        numberOfConsumedMessages.incrementAndGet();
        Thread.sleep(500);
    }
    
    // standard constructors
}

It’s similar to the producer, but we receive elements from the queue using the take() method. We also simulate a long-running action with the longProcessing() method, in which we increment the numberOfConsumedMessages variable, a counter of the received messages.

Now, let’s start our program with only one producer:

@Test
public void whenUseOneProducerAndNoConsumers_thenShouldFailWithTimeout() 
  throws InterruptedException {
    // given
    TransferQueue<String> transferQueue = new LinkedTransferQueue<>();
    ExecutorService exService = Executors.newFixedThreadPool(2);
    Producer producer = new Producer(transferQueue, "1", 3);

    // when
    exService.execute(producer);

    // then
    exService.awaitTermination(5000, TimeUnit.MILLISECONDS);
    exService.shutdown();

    assertEquals(0, producer.numberOfProducedMessages.intValue());
}

We want to send three elements to the queue, but the producer is blocked on the first element, and there is no consumer to fetch that element from the queue. We are using the tryTransfer() method which will block until the message is consumed or the timeout is reached. After the timeout, it will return false to indicate the transfer has failed, and it will try to transfer the next one. This is the output from the previous example:

Producer: 1 is waiting to transfer...
can not add an element due to the timeout
Producer: 1 is waiting to transfer...

3. One Producer – One Consumer

Let’s test the situation where there’s one producer and one consumer:

@Test
public void whenUseOneConsumerAndOneProducer_thenShouldProcessAllMessages() 
  throws InterruptedException {
    // given
    TransferQueue<String> transferQueue = new LinkedTransferQueue<>();
    ExecutorService exService = Executors.newFixedThreadPool(2);
    Producer producer = new Producer(transferQueue, "1", 3);
    Consumer consumer = new Consumer(transferQueue, "1", 3);

    // when
    exService.execute(producer);
    exService.execute(consumer);

    // then
    exService.awaitTermination(5000, TimeUnit.MILLISECONDS);
    exService.shutdown();

    assertEquals(3, producer.numberOfProducedMessages.intValue());
    assertEquals(3, consumer.numberOfConsumedMessages.intValue());
}

The TransferQueue is used as an exchange point, and until the consumer consumes an element from the queue, the producer cannot proceed with adding another element to it. Let’s look at the program output:

Producer: 1 is waiting to transfer...
Consumer: 1 is waiting to take element...
Producer: 1 transferred element: A0
Producer: 1 is waiting to transfer...
Consumer: 1 received element: A0
Consumer: 1 is waiting to take element...
Producer: 1 transferred element: A1
Producer: 1 is waiting to transfer...
Consumer: 1 received element: A1
Consumer: 1 is waiting to take element...
Producer: 1 transferred element: A2
Consumer: 1 received element: A2

We see that producing and consuming elements from the queue is sequential because of the specification of TransferQueue.

4. Many Producers – Many Consumers

In the last example, we’ll consider having multiple consumers and multiple producers:

@Test
public void whenMultipleConsumersAndProducers_thenProcessAllMessages() 
  throws InterruptedException {
    // given
    TransferQueue<String> transferQueue = new LinkedTransferQueue<>();
    ExecutorService exService = Executors.newFixedThreadPool(3);
    Producer producer1 = new Producer(transferQueue, "1", 3);
    Producer producer2 = new Producer(transferQueue, "2", 3);
    Consumer consumer1 = new Consumer(transferQueue, "1", 3);
    Consumer consumer2 = new Consumer(transferQueue, "2", 3);

    // when
    exService.execute(producer1);
    exService.execute(producer2);
    exService.execute(consumer1);
    exService.execute(consumer2);

    // then
    exService.awaitTermination(10_000, TimeUnit.MILLISECONDS);
    exService.shutdown();

    assertEquals(3, producer1.numberOfProducedMessages.intValue());
    assertEquals(3, producer2.numberOfProducedMessages.intValue());
}

In this example, we have two consumers and two producers. When the program starts, we see that both producers can produce one element and after that, they will block until one of the consumers takes that element from the queue:

Producer: 1 is waiting to transfer...
Consumer: 1 is waiting to take element...
Producer: 2 is waiting to transfer...
Producer: 1 transferred element: A0
Producer: 1 is waiting to transfer...
Consumer: 1 received element: A0
Consumer: 1 is waiting to take element...
Producer: 2 transferred element: A0
Producer: 2 is waiting to transfer...
Consumer: 1 received element: A0
Consumer: 1 is waiting to take element...
Producer: 1 transferred element: A1
Producer: 1 is waiting to transfer...
Consumer: 1 received element: A1
Consumer: 2 is waiting to take element...
Producer: 2 transferred element: A1
Producer: 2 is waiting to transfer...
Consumer: 2 received element: A1
Consumer: 2 is waiting to take element...
Producer: 1 transferred element: A2
Consumer: 2 received element: A2
Consumer: 2 is waiting to take element...
Producer: 2 transferred element: A2
Consumer: 2 received element: A2

5. Conclusion

In this article, we looked at the TransferQueue construct from the java.util.concurrent package.

We saw how to implement a producer-consumer program using this construct. We used the transfer() method to create a form of backpressure, where a producer cannot publish another element until the consumer retrieves an element from the queue.

The TransferQueue can be very useful when we don’t want an over-producing producer to flood the queue with messages, resulting in OutOfMemoryError. In such a design, the consumer dictates the speed at which the producer produces messages.

All these examples and code snippets can be found over on GitHub – this is a Maven project, so it should be easy to import and run as it is.

An Intro to the Spring DispatcherServlet


1. Introduction

Simply put, in the Front Controller design pattern, a single controller is responsible for directing incoming HttpRequests to all of an application’s other controllers and handlers.

Spring’s DispatcherServlet implements this pattern and is, therefore, responsible for correctly coordinating the HttpRequests to their right handlers.

In this article, we will examine the Spring DispatcherServlet’s request processing workflow and how to implement several of the interfaces that participate in this workflow.

2. DispatcherServlet Request Processing

Essentially, a DispatcherServlet handles an incoming HttpRequest, delegates the request, and processes it according to the configured HandlerAdapter interfaces implemented within the Spring application, along with the accompanying annotations specifying handlers, controller endpoints, and response objects.

Let’s go more in depth into how a DispatcherServlet processes a request:

  • the WebApplicationContext associated with a DispatcherServlet under the key DispatcherServlet.WEB_APPLICATION_CONTEXT_ATTRIBUTE is searched for and made available to all of the elements of the process
  • the DispatcherServlet finds all implementations of the HandlerAdapter interface configured for the dispatcher using getHandler() – each found and configured implementation handles the request via handle() through the remainder of the process
  • the LocaleResolver is optionally bound to the request to enable elements in the process to resolve the locale
  • the ThemeResolver is optionally bound to the request to let elements, such as views, determine which theme to use
  • if a MultipartResolver is specified, the request is inspected for MultipartFiles – any found are wrapped in a MultipartHttpServletRequest for further processing
  • HandlerExceptionResolver implementations declared in the WebApplicationContext pick up exceptions that are thrown during processing of the request

You can learn more about all the ways to register and set up a DispatcherServlet here.

3. HandlerAdapter Interfaces

The HandlerAdapter interface facilitates the use of controllers, servlets, HttpRequests, and HTTP paths through several specific interfaces. The HandlerAdapter interface thus plays an essential role through the many stages of the DispatcherServlet request processing workflow.

First, each HandlerAdapter implementation is placed into the HandlerExecutionChain from your dispatcher’s getHandler() method. Then, each of those implementations handle() the HttpServletRequest object as the execution chain proceeds.

In the following sections, we will explore a few of the most important and commonly used HandlerAdapters in greater detail.

3.1. Mappings

To understand mappings, we need to first look at how to annotate controllers since controllers are so essential to the HandlerMapping interface.

The SimpleControllerHandlerAdapter allows for the implementation of a controller explicitly without a @Controller annotation.

The RequestMappingHandlerAdapter supports methods annotated with the @RequestMapping annotation.

We’ll focus on the @Controller annotation here, but a helpful resource with several examples using the SimpleControllerHandlerAdapter is also available.

The @RequestMapping annotation sets the specific endpoint at which a handler will be available within the WebApplicationContext associated with it.

Let’s see an example of a Controller that exposes and handles the ‘/user/example’ endpoint:

@Controller
@RequestMapping("/user")
@ResponseBody
public class UserController {
 
    @GetMapping("/example")
    public User fetchUserExample() {
        // ...
    }
}

The paths specified by the @RequestMapping annotation are managed internally via the HandlerMapping interface.

The URL structure is naturally relative to the DispatcherServlet itself and determined by the servlet mapping.

Thus, if the DispatcherServlet is mapped to ‘/’, then all mappings are going to be covered by that mapping.

If, however, the servlet mapping is ‘/dispatcher‘ instead, then any @RequestMapping annotations are going to be relative to that root URL.

Remember that ‘/’ is not the same as ‘/*’ for servlet mappings! ‘/’ is the default mapping and exposes all URLs to the dispatcher’s area of responsibility.

‘/*’ is confusing to a lot of newer Spring developers. It does not specify that all paths with the same URL context are under the dispatcher’s area of responsibility. Instead, it overrides and ignores the other dispatcher mappings. So, ‘/example’ will come up as a 404!

For that reason, ‘/*’ shouldn’t be used except in very limited circumstances (like configuring a filter).
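
For reference, here’s what a minimal Java-based registration mapping the dispatcher to ‘/’ could look like; this is a sketch assuming the AppConfig class shown later in this article:

public class WebAppInitializer
  extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return null;
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return new Class<?>[] { AppConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        // '/' makes this dispatcher the default servlet for all URLs
        return new String[] { "/" };
    }
}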

3.2. HTTP Request Handling

The core responsibility of a DispatcherServlet is to dispatch incoming HttpRequests to the correct handlers specified with the @Controller or @RestController annotations.

As a sidenote, the main difference between @Controller and @RestController is how the response is generated – the @RestController also defines @ResponseBody by default.

A writeup where we go into much greater depth regarding Spring’s controllers can be found here.

3.3. The ViewResolver Interface

A ViewResolver is attached to a DispatcherServlet as a configuration setting on an ApplicationContext object.

A ViewResolver determines both what kind of views are served by the dispatcher and from where they are served.

Here’s an example configuration which we’ll place into our WebMvcConfigurerAdapter for rendering JSP pages:

@Configuration
@EnableWebMvc
@ComponentScan("com.baeldung.springdispatcherservlet")
public class AppConfig extends WebMvcConfigurerAdapter {

    @Bean
    public UrlBasedViewResolver viewResolver() {
        UrlBasedViewResolver resolver
          = new UrlBasedViewResolver();
        resolver.setPrefix("/WEB-INF/jsp/");
        resolver.setSuffix(".jsp");
        resolver.setViewClass(JstlView.class);
        return resolver;
    }
}

Very straightforward! There are three main parts to this:

  1. setting the prefix, which sets the default URL path to find the set views within
  2. the default view type which is set via the suffix
  3. setting a view class on the resolver which allows technologies like JSTL or Tiles to be associated with the rendered views

One common question involves how precisely a dispatcher’s ViewResolver and the overall project directory structure are related. Let’s take a look at the basics.

Here’s an example path configuration for an InternalResourceViewResolver using Spring’s XML configuration:

<property name="prefix" value="/jsp/"/>

For the sake of our example, we’ll assume that our application is being hosted on:

http://localhost:8080/

This is the default address and port for a locally hosted Apache Tomcat server.

Assuming that our application is called dispatcherexample-1.0.0, our JSP views will be accessible from:

http://localhost:8080/dispatcherexample-1.0.0/jsp/

The path for these views within an ordinary Spring project with Maven is this:

src -|
     main -|
            java
            resources
            webapp -|
                    jsp
                    WEB-INF

The default location for views is within WEB-INF. The path specified for our InternalResourceViewResolver in the snippet above determines the subdirectory of ‘src/main/webapp’ in which our views will be available.

3.4. The LocaleResolver Interface

The primary way to customize session, request, or cookie information for our dispatcher is through the LocaleResolver interface.

CookieLocaleResolver is an implementation allowing the configuration of stateless application properties using cookies. Let’s add it to AppConfig:

@Bean
public CookieLocaleResolver cookieLocaleResolverExample() {
    CookieLocaleResolver localeResolver
      = new CookieLocaleResolver();
    localeResolver.setDefaultLocale(Locale.ENGLISH);
    localeResolver.setCookieName("locale-cookie-resolver-example");
    localeResolver.setCookieMaxAge(3600);
    return localeResolver;
}

@Bean
public LocaleResolver sessionLocaleResolver() {
    SessionLocaleResolver localeResolver = new SessionLocaleResolver();
    localeResolver.setDefaultLocale(Locale.US);
    localeResolver.setDefaultTimeZone(TimeZone.getTimeZone("UTC"));
    return localeResolver;
}

SessionLocaleResolver allows for session-specific configuration in a stateful application.

The setDefaultLocale() method represents a geographical, political, or cultural region, whereas setDefaultTimeZone() determines the relevant TimeZone object for the application Bean in question.

Both methods are available on each of the above implementations of LocaleResolver.
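
To let users actually switch locales per request, these resolvers are typically paired with a LocaleChangeInterceptor. Here is a sketch (the ‘lang’ parameter name is our choice, not from the original configuration); it would be registered via addInterceptors() just like the ThemeChangeInterceptor in the next section:

@Bean
public LocaleChangeInterceptor localeChangeInterceptor() {
    LocaleChangeInterceptor interceptor = new LocaleChangeInterceptor();
    // e.g. /some-page?lang=fr switches the locale to French
    interceptor.setParamName("lang");
    return interceptor;
}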

3.5. The ThemeResolver Interface

Spring provides stylistic theming for our views.

Let’s take a look at how to configure our dispatcher to handle themes.

First, let’s set up all the configuration necessary to find and use our static theme files. We need to set a static resource location for our ThemeSource to configure the actual Themes themselves (Theme objects contain all of the configuration information stipulated in those files). Add this to AppConfig:

@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/resources/**")
      .addResourceLocations("/", "/resources/")
      .setCachePeriod(3600)
      .resourceChain(true)
      .addResolver(new PathResourceResolver());
}

@Bean
public ResourceBundleThemeSource themeSource() {
    ResourceBundleThemeSource themeSource
      = new ResourceBundleThemeSource();
    themeSource.setDefaultEncoding("UTF-8");
    themeSource.setBasenamePrefix("themes.");
    return themeSource;
}

Requests managed by the DispatcherServlet can modify the theme through a specified parameter passed into setParamName() available on the ThemeChangeInterceptor object. Add to AppConfig:

@Bean
public CookieThemeResolver themeResolver() {
    CookieThemeResolver resolver = new CookieThemeResolver();
    resolver.setDefaultThemeName("example");
    resolver.setCookieName("example-theme-cookie");
    return resolver;
}

@Bean
public ThemeChangeInterceptor themeChangeInterceptor() {
   ThemeChangeInterceptor interceptor
     = new ThemeChangeInterceptor();
   interceptor.setParamName("theme");
   return interceptor;
}

@Override
public void addInterceptors(InterceptorRegistry registry) {
    registry.addInterceptor(themeChangeInterceptor());
}

The following JSP tag is added to our view to make the correct styling appear:

<link rel="stylesheet" href="${ctx}/<spring:theme code='styleSheet'/>" type="text/css"/>

The following URL request renders the example theme using the ‘theme’ parameter passed into our configured ThemeChangeInterceptor:

http://localhost:8080/dispatcherexample-1.0.0/?theme=example

3.6. The MultipartResolver Interface

A MultipartResolver implementation inspects a request for multiparts and, if at least one is found, wraps them in a MultipartHttpServletRequest for further processing by other elements in the chain. Add to AppConfig:

@Bean
public CommonsMultipartResolver multipartResolver() 
  throws IOException {
    CommonsMultipartResolver resolver
      = new CommonsMultipartResolver();
    resolver.setMaxUploadSize(10000000);
    return resolver;
}

Now that we’ve configured our MultipartResolver bean, let’s set up a controller to process MultipartFile requests:

@Controller
public class MultipartController {

    @Autowired
    ServletContext context;

    @PostMapping("/upload")
    public ModelAndView fileUploadController(
      @RequestParam("file") MultipartFile file) 
      throws IOException {
        ModelAndView modelAndView = new ModelAndView("index");
        InputStream in = file.getInputStream();
        String path = new File(".").getAbsolutePath();
        FileOutputStream f = new FileOutputStream(
          path.substring(0, path.length()-1)
          + "/uploads/" + file.getOriginalFilename());
        int ch;
        while ((ch = in.read()) != -1) {
            f.write(ch);
        }
        f.flush();
        f.close();
        modelAndView.getModel()
          .put("message", "File uploaded successfully!");
        return modelAndView;
    }
}

We can use a normal form to submit a file to the specified endpoint. Uploaded files will be available in ‘CATALINA_HOME/bin/uploads’.

3.7. The HandlerExceptionResolver Interface

Spring’s HandlerExceptionResolver provides uniform error handling for an entire web application, a single controller, or a set of controllers.

To provide application-wide custom exception handling, create a class annotated with @ControllerAdvice:

@ControllerAdvice
public class ExampleGlobalExceptionHandler {

    @ExceptionHandler
    @ResponseBody 
    public String handleExampleException(Exception e) {
        // ...
    }
}

Any methods within that class annotated with @ExceptionHandler will be available on every controller within the dispatcher’s area of responsibility.

Alternatively, to handle exceptions for a single controller under that dispatcher’s area of responsibility, we can annotate a method within the controller itself with @ExceptionHandler and pass in the exception classes to handle as a parameter:

@Controller
public class FooController{

    @ExceptionHandler({ CustomException1.class, CustomException2.class })
    public void handleException() {
        // ...
    }
    // ...
}

The handleException() method will now serve as an exception handler for FooController in our example above if either exception CustomException1 or CustomException2 occurs.

Here’s an article that goes into more depth about exception handling in a Spring web application.

4. Conclusion

In this tutorial, we’ve reviewed Spring’s DispatcherServlet and several ways to configure it.

As always, the source code used in this tutorial is available over on GitHub.

Introduction to JAX-WS

1. Overview

Java API for XML Web Services (JAX-WS) is a standardized API for creating and consuming SOAP (Simple Object Access Protocol) web services.

In this article, we’ll create a SOAP web service and connect to it using JAX-WS.

2. SOAP

SOAP is an XML specification for sending messages over a network. SOAP messages are independent of any operating system and can use a variety of communication protocols including HTTP and SMTP.

SOAP is XML-heavy, hence best used with tools/frameworks. JAX-WS is a framework that simplifies using SOAP, and it is part of standard Java.

3. Top-Down vs. Bottom-Up

There are two ways of building SOAP web services. We can go with a top-down approach or a bottom-up approach.

In a top-down (contract-first) approach, a WSDL document is created, and the necessary Java classes are generated from the WSDL. In a bottom-up (contract-last) approach, the Java classes are written, and the WSDL is generated from the Java classes.

Writing a WSDL file can be quite difficult depending on how complex your web service is. This makes the bottom-up approach an easier option. On the other hand, since your WSDL is generated from the Java classes, any change in code might cause a change in the WSDL. This is not the case for the top-down approach.

In this article, we’ll take a look at both approaches.

4. Web Services Definition Language (WSDL)

WSDL is a contract definition of the available services. It is a specification of input/output messages, and how to invoke the web service. It is language neutral and is defined in XML.

Let’s look at the major elements of a WSDL document.

4.1. Definitions

The definitions element is the root element of all WSDL documents. It defines the name, the namespace, etc. of the service and, as you can see, can be quite verbose:

<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" 
  xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" 
  xmlns:tns="http://jaxws.baeldung.com/" 
  xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata"
  xmlns:wsp="http://www.w3.org/ns/ws-policy" 
  xmlns:wsp1_2="http://schemas.xmlsoap.org/ws/2004/09/policy"
  xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" 
  xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
  targetNamespace="http://jaxws.baeldung.com/" 
  name="EmployeeService">
  ...
</definitions>

4.2. Types

The types element defines the data types used by the web service. WSDL uses XSD (XML Schema Definition) as the type system which helps with interoperability:

<definitions ...>
    ...
    <types>
        <xsd:schema>
            <xsd:import namespace="http://jaxws.baeldung.com/" 
              schemaLocation = "http://localhost:8080/employeeservice?xsd=1" />
        </xsd:schema>
    </types>
    ...
</definitions>

4.3. Messages

The message element provides an abstract definition of the data being transmitted. Each message element describes the input or output of a service method and the possible exceptions:

<definitions ...>
    ...
    <message name="getEmployee">
        <part name="parameters" element="tns:getEmployee" />
    </message>
    <message name="getEmployeeResponse">
        <part name="parameters" element="tns:getEmployeeResponse" />
    </message>
    <message name="EmployeeNotFound">
        <part name="fault" element="tns:EmployeeNotFound" />
    </message>
    ...
</definitions>

4.4. Operations and Port Types

The portType element describes each operation that can be performed and all the message elements involved. For example, the getEmployee operation specifies the request input, output and possible fault exception thrown by the web service operation:

<definitions ...>
    ...
    <portType name="EmployeeService">
        <operation name="getEmployee">
            <input 
              wsam:Action="http://jaxws.baeldung.com/EmployeeService/getEmployeeRequest" 
              message="tns:getEmployee" />
            <output 
              wsam:Action="http://jaxws.baeldung.com/EmployeeService/getEmployeeResponse" 
              message="tns:getEmployeeResponse" />
            <fault message="tns:EmployeeNotFound" name="EmployeeNotFound" 
              wsam:Action="http://jaxws.baeldung.com/EmployeeService/getEmployee/Fault/EmployeeNotFound" />
        </operation>
    ...
    </portType>
    ...
</definitions>

4.5. Bindings

The binding element provides protocol and data format details for each portType:

<definitions ...>
    ...
    <binding name="EmployeeServiceImplPortBinding" 
      type="tns:EmployeeService">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http" 
          style="document" />
        <operation name="getEmployee">
            <soap:operation soapAction="" />
            <input>
                <soap:body use="literal" />
            </input>
            <output>
                <soap:body use="literal" />
            </output>
            <fault name="EmployeeNotFound">
                <soap:fault name="EmployeeNotFound" use="literal" />
            </fault>
        </operation>
        ...
    </binding>
    ...
</definitions>

4.6. Services and Ports

The service element defines the ports supported by the web service. The port element in service defines the name, binding and the address of the service:

<definitions ...>
    ...
    <service name="EmployeeService">
        <port name="EmployeeServiceImplPort" 
          binding="tns:EmployeeServiceImplPortBinding">
            <soap:address 
              location="http://localhost:8080/employeeservice" />
        </port>
    </service>
    ...
</definitions>

5. Top-Down (Contract-First) Approach

Let’s start with a top-down approach by creating a WSDL file employeeservicetopdown.wsdl. For the sake of simplicity, it has only one method:

<?xml version="1.0" encoding="UTF-8"?>
<definitions 
  xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
  xmlns:tns="http://topdown.server.jaxws.baeldung.com/"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
  xmlns="http://schemas.xmlsoap.org/wsdl/"
  targetNamespace="http://topdown.server.jaxws.baeldung.com/"
  name="EmployeeServiceTopDown">
    <types>
        <xsd:schema 
          targetNamespace="http://topdown.server.jaxws.baeldung.com/">
            <xsd:element name="countEmployeesResponse" type="xsd:int"/>
        </xsd:schema>
    </types>

    <message name="countEmployees">
    </message>
    <message name="countEmployeesResponse">
        <part name="parameters" element="tns:countEmployeesResponse"/>
    </message>
    <portType name="EmployeeServiceTopDown">
        <operation name="countEmployees">
            <input message="tns:countEmployees"/>
            <output message="tns:countEmployeesResponse"/>
        </operation>
    </portType>
    <binding name="EmployeeServiceTopDownSOAP" 
      type="tns:EmployeeServiceTopDown">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http" 
          style="document"/>
        <operation name="countEmployees">
            <soap:operation 
              soapAction="http://topdown.server.jaxws.baeldung.com/
              EmployeeServiceTopDown/countEmployees"/>
            <input>
                <soap:body use="literal"/>
            </input>
            <output>
                <soap:body use="literal"/>
            </output>
        </operation>
    </binding>
    <service name="EmployeeServiceTopDown">
        <port name="EmployeeServiceTopDownSOAP" 
          binding="tns:EmployeeServiceTopDownSOAP">
            <soap:address 
              location="http://localhost:8080/employeeservicetopdown"/>
        </port>
    </service>
</definitions>

5.1. Generating Web Service Source Files from WSDL

To generate web service source files from a WSDL document, we can use the wsimport tool which is part of JDK (at $JAVA_HOME/bin).

From the command prompt:

wsimport -s . -p com.baeldung.jaxws.server.topdown employeeservicetopdown.wsdl

Command line options used:

  • -s specifies where to put the generated source files
  • -p specifies the target package

The generated files:

  • EmployeeServiceTopDown.java – is the service endpoint interface (SEI) that contains method definitions
  • ObjectFactory.java – contains factory methods to create instances of schema derived classes programmatically
  • EmployeeServiceTopDown_Service.java – is the service provider class that can be used by a JAX-WS client

5.2. Web Service Endpoint Interface

The wsimport tool has generated the web service endpoint interface EmployeeServiceTopDown. It declares the web service methods:

@WebService(
  name = "EmployeeServiceTopDown", 
  targetNamespace = "http://topdown.server.jaxws.baeldung.com/")
@SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE)
@XmlSeeAlso({
    ObjectFactory.class
})
public interface EmployeeServiceTopDown {
    @WebMethod(
      action = "http://topdown.server.jaxws.baeldung.com/"
      + "EmployeeServiceTopDown/countEmployees")
    @WebResult(
      name = "countEmployeesResponse", 
      targetNamespace = "http://topdown.server.jaxws.baeldung.com/", 
      partName = "parameters")
    public int countEmployees();
}

5.3. Web Service Implementation

The wsimport tool has created the structure of the web service. We have to create the implementation of the web service:

@WebService(
  name = "EmployeeServiceTopDown", 
  endpointInterface = "com.baeldung.jaxws.server.topdown.EmployeeServiceTopDown",
  targetNamespace = "http://topdown.server.jaxws.baeldung.com/")
public class EmployeeServiceTopDownImpl 
  implements EmployeeServiceTopDown {
 
    @Inject 
    private EmployeeRepository employeeRepositoryImpl;
 
    @WebMethod
    public int countEmployees() {
        return employeeRepositoryImpl.count();
    }
}

6. Bottom-Up (Contract-Last) Approach

In a bottom-up approach, we have to create both the endpoint interface and the implementation classes. The WSDL is generated from the classes when the web service is published.

Let’s create a web service that will perform simple CRUD operations on Employee data.

6.1. The Model Class

The Employee model class:

public class Employee {
    private int id;
    private String firstName;

    // standard getters and setters
}

6.2. Web Service Endpoint Interface

The web service endpoint interface which declares the web service methods:

@WebService
public interface EmployeeService {
    @WebMethod
    Employee getEmployee(int id);

    @WebMethod
    Employee updateEmployee(int id, String name);

    @WebMethod
    boolean deleteEmployee(int id);

    @WebMethod
    Employee addEmployee(int id, String name);

    // ...
}

This interface defines an abstract contract for the web service. The annotations used:

  • @WebService denotes that it is a web service interface
  • @WebMethod is used to customize a web service operation
  • @WebResult is used to customize the name of the XML element that represents the return value

6.3. Web Service Implementation

The implementation class of the web service endpoint interface:

@WebService(endpointInterface = "com.baeldung.jaxws.EmployeeService")
public class EmployeeServiceImpl implements EmployeeService {
 
    @Inject 
    private EmployeeRepository employeeRepositoryImpl;

    @WebMethod
    public Employee getEmployee(int id) {
        return employeeRepositoryImpl.getEmployee(id);
    }

    @WebMethod
    public Employee updateEmployee(int id, String name) {
        return employeeRepositoryImpl.updateEmployee(id, name);
    }

    @WebMethod
    public boolean deleteEmployee(int id) {
        return employeeRepositoryImpl.deleteEmployee(id);
    }

    @WebMethod
    public Employee addEmployee(int id, String name) {
        return employeeRepositoryImpl.addEmployee(id, name);
    }

    // ...
}

7. Publishing the Web Service Endpoints

To publish the web services (top-down and bottom-up), we need to pass an address and an instance of the web service implementation to the publish() method of the javax.xml.ws.Endpoint class:

public class EmployeeServicePublisher {
    public static void main(String[] args) {
        Endpoint.publish(
          "http://localhost:8080/employeeservicetopdown", 
           new EmployeeServiceTopDownImpl());

        Endpoint.publish("http://localhost:8080/employeeservice", 
          new EmployeeServiceImpl());
    }
}

We can now run EmployeeServicePublisher to start the web service. To make use of CDI features, the web services can be deployed as a WAR file to application servers like WildFly or GlassFish.

8. Remote Web Service Client

Let’s now create a JAX-WS client to connect to the EmployeeService web service remotely.

8.1. Generating Client Artifacts

To generate JAX-WS client artifacts, we can once again use the wsimport tool:

wsimport -keep -p com.baeldung.jaxws.client http://localhost:8080/employeeservice?wsdl

The generated EmployeeService_Service class encapsulates the logic to get the server port using URL and QName.

8.2. Connecting to the Web Service

The web service client uses the generated EmployeeService_Service to connect to the server and make web service calls remotely:

public class EmployeeServiceClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/employeeservice?wsdl");

        EmployeeService_Service employeeService_Service 
          = new EmployeeService_Service(url);
        EmployeeService employeeServiceProxy 
          = employeeService_Service.getEmployeeServiceImplPort();

        List<Employee> allEmployees 
          = employeeServiceProxy.getAllEmployees();
    }
}

9. Conclusion

This article is a quick introduction to SOAP Web services using JAX-WS.

We have used both the bottom-up and top-down approaches to creating SOAP Web services using the JAX-WS API. We have also written a JAX-WS client that can remotely connect to the server and make web service calls.

The complete source code is available over on GitHub.

Introduction to Serenity BDD

1. Introduction

In this tutorial, we’ll give an introduction to Serenity BDD – a great tool for Behaviour Driven Development (BDD). This is a solution for automated acceptance testing that generates well-illustrated testing reports.

2. Core Concepts

The concepts behind Serenity follow the concepts behind BDD. If you want to read more about it, check our article about Cucumber and JBehave.

2.1. Requirements

In Serenity, requirements are organized into three levels:

  1. capabilities
  2. features
  3. stories

Typically, a project implements high-level capabilities, e.g. order management and membership management capabilities in an e-commerce project. Each capability is comprised of many features, and features are explained in detail by user stories.

2.2. Steps and Tests

A step contains a group of resource manipulation operations; it can be an action, a verification, or a context-related operation. The classic Given_When_Then format can be reflected in the steps.

And tests go hand in hand with steps. Each test tells a simple user story, which is carried out using certain steps.

2.3. Reports

Serenity not only reports the test results but also uses them for producing living documentation describing the requirements and application behaviors.

3. Testing with SerenityBDD

To run our Serenity tests with JUnit, we need to annotate the test class with @RunWith(SerenityRunner.class). SerenityRunner instruments the step libraries and ensures that the test results will be recorded and reported on by the Serenity reporters.

3.1. Maven Dependencies

To make use of Serenity with JUnit, we should include serenity-core and serenity-junit in the pom.xml:

<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-core</artifactId>
    <version>1.2.5-rc.11</version>
</dependency>
<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-junit</artifactId>
    <version>1.2.5-rc.11</version>
</dependency>

We also need serenity-maven-plugin to have reports aggregated from test results:

<plugin>
    <groupId>net.serenity-bdd.maven.plugins</groupId>
    <artifactId>serenity-maven-plugin</artifactId>
    <version>1.2.5-rc.6</version>
    <executions>
        <execution>
            <id>serenity-reports</id>
            <phase>post-integration-test</phase>
            <goals>
                <goal>aggregate</goal>
            </goals>
        </execution>
    </executions>
</plugin>

If we want Serenity to generate reports even if there’s a test failure, add the following to the pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.20</version>
    <configuration>
        <testFailureIgnore>true</testFailureIgnore>
    </configuration>
</plugin>

3.2. A Membership Points Example

Initially, our tests are based on the typical membership points feature in an e-commerce application. A customer can join the member program. As the customer purchases goods on the platform, the membership points will increase, and the customer’s membership grade would grow accordingly.

Now let’s write several tests against the scenarios described above and see how Serenity works.

First, let’s write the test for membership initialisation and see which steps we need:

@RunWith(SerenityRunner.class)
public class MemberStatusLiveTest {

    @Steps 
    private MemberStatusSteps memberSteps;

    @Test
    public void membersShouldStartWithBronzeStatus() {
        memberSteps.aClientJoinsTheMemberProgram();
        memberSteps.theMemberShouldHaveAStatusOf(Bronze);
    }
}

Then we implement the two steps as follows (we assume here that a new member starts with zero points):

public class MemberStatusSteps {

    private Member member;

    @Step("Given a client joins the member program")
    public void aClientJoinsTheMemberProgram() {
        // assumption: a newly joined member starts with zero points
        member = Member.withInitialPoints(0);
    }

    @Step("Then the member grade should be {0}")
    public void theMemberShouldHaveAStatusOf(MemberGrade grade) {
        assertThat(member.getGrade(), equalTo(grade));
    }
}

Now we are ready to run an integration test with mvn clean verify. The reports will be located at target/site/serenity/index.html:

Serenity Report - Tests Overview

From the report, we can see that we have a single acceptance test, ‘Members should start with bronze status’, and that it is passing. By clicking on the test, the steps are illustrated:

Serenity Report - Member Status Steps

As we can see, Serenity’s report gives us a thorough understanding of what our application is doing and whether it aligns with our requirements. If we still have steps to implement, we can mark them as @Pending:

@Pending
@Step("When the member exchanges {0}")
public void aMemberExchangeA(Commodity commodity) {
    //TODO
}

The report would remind us what needs to be done next. And in case any test fails, it can be seen in the report as well:

Serenity Report - Failure and Ignored

Each failed, ignored or skipped step will be listed respectively:

Serenity Report - Steps List

4. Integration with JBehave

Serenity can also integrate with existing BDD frameworks such as JBehave.

4.1. Maven Dependencies

To integrate with JBehave, one more dependency serenity-jbehave is needed in the POM:

<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-jbehave</artifactId>
    <version>1.24.0</version>
</dependency>

4.2. JBehave Github REST API Test Continued

Since we have already introduced REST API testing with JBehave, we can continue with our JBehave REST API test and see how it fits into Serenity.

Our story was:

Scenario: Github user's profile should have a login payload same as username
 
Given github user profile api
When I look for eugenp via the api
Then github's response contains a 'login' payload same as eugenp

The Given_When_Then steps can be migrated to Serenity @Steps without any changes:

public class GithubRestUserAPISteps {

    private String api;
    private GitHubUser resource;

    @Step("Given the github REST API for user profile")
    public void withUserProfileAPIEndpoint() {
        api = "https://api.github.com/users/%s";
    }

    @Step("When looking for {0} via the api")
    public void getProfileOfUser(String username) throws IOException {
        HttpResponse httpResponse = getGithubUserProfile(api, username);
        resource = retrieveResourceFromResponse(httpResponse, GitHubUser.class);
    }

    @Step("Then there should be a login field with value {0} in payload of user {0}")
    public void profilePayloadShouldContainLoginValue(String username) {
        assertThat(username, Matchers.is(resource.getLogin()));
    }

}

To make JBehave’s story-to-code mapping work as expected, we need to implement JBehave’s step definition using @Steps:

public class GithubUserProfilePayloadStepDefinitions {

    @Steps
    GithubRestUserAPISteps userAPISteps;

    @Given("github user profile api")
    public void givenGithubUserProfileApi() {
        userAPISteps.withUserProfileAPIEndpoint();
    }

    @When("looking for $user via the api")
    public void whenLookingForProfileOf(String user) throws IOException {
        userAPISteps.getProfileOfUser(user);
    }

    @Then("github's response contains a 'login' payload same as $user")
    public void thenGithubsResponseContainsAloginPayloadSameAs(String user) {
        userAPISteps.profilePayloadShouldContainLoginValue(user);
    }
}

By extending SerenityStory, we can run JBehave tests both from within our IDE and in the build process:

import net.serenitybdd.jbehave.SerenityStory;

public class GithubUserProfilePayload extends SerenityStory {}

After the verify build has finished, we can see our test report:

Serenity Report - GitHub User Profile Story

Compared to the plain-text report of JBehave, the rich report generated by Serenity gives us a much more pleasing and lively overview of our story and the test results.

5. Integration with REST-assured

It is noteworthy that Serenity supports integration with REST-assured. For an overview of REST-assured, take a look at the guide to REST-assured.

5.1. Maven Dependencies

To make use of REST-assured with Serenity, the serenity-rest-assured dependency should be included:

<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-rest-assured</artifactId>
    <version>1.2.5-rc.11</version>
</dependency>

5.2. Use REST-assured in Github REST API Test

Now we can replace our web client with REST-assured utilities:

import static net.serenitybdd.rest.SerenityRest.rest;
import static net.serenitybdd.rest.SerenityRest.then;

public class GithubRestAssuredUserAPISteps {

    private String api;

    @Step("Given the github REST API for user profile")
    public void withUserProfileAPIEndpoint() {
        api = "https://api.github.com/users/{username}";
    }

    @Step("When looking for {0} via the api")
    public void getProfileOfUser(String username) throws IOException {
        rest().get(api, username);
    }

    @Step("Then there should be a login field with value {0} in payload of user {0}")
    public void profilePayloadShouldContainLoginValue(String username) {
        then().body("login", Matchers.equalTo(username));
    }

}

After replacing the implementation of userAPISteps in the step definitions class, we can re-run the verify build:

public class GithubUserProfilePayloadStepDefinitions {

    @Steps
    GithubRestAssuredUserAPISteps userAPISteps;

    //...

}

In the report, we can see the actual API invoked during the test, and by clicking on the REST Query button, the details of the request and response will be presented:

Serenity Report - Story with REST-assured

6. Integration with JIRA

By now, we have a great test report describing the details and status of our requirements with Serenity. But agile teams often use issue tracking systems such as JIRA to keep track of requirements, so it would be better if we could use both seamlessly.

Luckily, Serenity already supports integration with JIRA.

6.1. Maven Dependencies

To integrate with JIRA, we need one more dependency, serenity-jira-requirements-provider:

<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-jira-requirements-provider</artifactId>
    <version>1.1.3-rc.5</version>
</dependency>

6.2. One-way Integration

To add JIRA links to a story, we can reference the JIRA issue using the story’s meta tag:

Meta:
@issue #BDDTEST-1

Besides, the JIRA account and links should be specified in the serenity.properties file at the root of the project:

jira.url=<jira-url>
jira.project=<jira-project>
jira.username=<jira-username>
jira.password=<jira-password>

Then a JIRA link will be appended to the report:

Serenity Report - Story with JIRA

Serenity also supports two-way integration with JIRA; we can refer to the official documentation for more details.

7. Summary

In this article, we introduced Serenity BDD and multiple integrations with other test frameworks and requirement management systems.

Although we have covered most of what Serenity can do, it can certainly do more. In our next article, we’ll cover how Serenity with WebDriver support can enable us to automate web application pages using the screenplay pattern.

As always, the full implementation code can be found over on the GitHub project.


Java Web Weekly, Issue 174

Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Project Amber: The Future of Java Exposed [takipi.com]

The future Java with Local Variable Type Inference, Enhanced Enums, and Lambda Leftovers definitely looks interesting. Let’s hope we won’t have to wait for too long 🙂

>> Java SE 9 – JPMS modules are not artifacts [joda.org]

Java 9 should be released in a few months, so this is a good time to brush up on our knowledge about the upcoming module system.

>> Spring Boot, @EnableWebMvc And Common Use-Cases [techblog.bozho.net]

It turns out that the standard @EnableWebMvc annotation does not integrate well with Spring Boot and can turn off some of its autoconfiguration magic.

>> The top five reasons you should be using JUnit 5 right now! [developer.ibm.com]

The release of the newest version of JUnit is getting closer and it has some very interesting features.

>> Mapping Definitions in JPA and Hibernate – Annotations, XML or both? [thoughts-on-java.org]

Each of those approaches has its own set of benefits and challenges. The rule of thumb would be to stick to only one of them but if you still want to use both, remember that XML mappings override those configured using annotations.

>> Spring Security – Programmatic Registration of Java Configuration Beans [baselogic.com]

Many developers tend to stick to XML-based config when configuring their Spring applications. It’s good to recall that almost everything can be achieved now with a Java-based config.

>> Thymeleaf 3 Standard Layout System Improvements [codeleak.pl]

There were some improvements introduced to Thymeleaf recently and there are few small things to remember about.

>> The best way to do batch processing with JPA and Hibernate [vladmihalcea.com]

A quick and practical example of implementing batch processing using JPA and Hibernate only.

>> JSR 269 MR for Java SE 9, last call for comments [oracle.com]

If you want to add something to the Annotation Processing API, it’s the last call 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> The Hardest Part of Microservices: Calling Your Services [christianposta.com]

Microservices have their own set of challenges, and calling them successfully is certainly one of them.

>> Using JsonPath and XmlPath in REST Assured [ontestautomation.com]

Quick and practical examples of using JsonPath and XmlPath with REST-assured.

Also worth reading:

3. Musings

>> Your Job Title of Tomorrow: Efficiencer [daedtech.com]

At the end of the day, software developers get hired to optimize and automate so it’s important to market yourself as someone that solves problems and not someone that simply develops things.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Power Nap! [dilbert.com]

>> The hero’s journey [dilbert.com]

>> The key to leadership [dilbert.com]

5. Pick of the Week

>> Forget About Setting Goals. Focus on This Instead. [jamesclear.com]

Kotlin Java Interoperability

1. Overview

In this tutorial, we’re going to discuss the interoperability between Java and Kotlin. We’re going to cover some basic examples as well as some other more complex scenarios.

2. Setting up Kotlin

Creating a Kotlin project is very simple using IntelliJ, Eclipse, or even the command line. However, for this tutorial we’re going to follow the installation steps from our previous tutorial, Introduction to Kotlin, since it already has what we need for our demo purposes.

3. The Basics

Calling Java from Kotlin is straightforward and smooth since Kotlin was built with interoperability in mind.

Let’s create this Customer class using core Java:

public class Customer {

    private String firstName;
    private String lastName;
    private int age;

    // standard setters and getters

}

4. Getters and Setters

Let’s now work with this simple Java POJO from Kotlin.

Getters and setters that follow the Java convention for these types of methods are represented as attributes in Kotlin:

val customer = Customer()

customer.firstName = "Frodo"
customer.lastName = "Baggins"

assertEquals(customer.firstName, "Frodo")
assertEquals(customer.lastName, "Baggins")

It’s worth noting that the new keyword is not required for instantiating an object.

The language tries to avoid boilerplate code as much as possible, so we do not call getters and setters explicitly; we can simply access them using field notation.

We need to remember that if a Java class has only setter methods, the property will not be accessible since the language does not support set-only properties.

If a method returns void then when it is called from Kotlin it will return Unit.

5. Null Safety

Kotlin is well known for its null safety feature, but as we know, this is not the case for Java, which makes nullability impractical to track for objects coming from it. A very simple example can be seen if we have a list of String values:

val characterList = ArrayList<String>()
characterList.add("Bilbo")
val character = characterList[0]

Kotlin doesn’t display any nullability errors at compile time when a method is being called on a variable of a platform type – and this type can’t be written explicitly in the language. So when a value is assigned, we can rely on this inference, or we can just choose the type we expect:

val a: String? = character
val a: String = character

They’re both allowed, but in the case of the non-null type, an assertion fires immediately upon assignment, which prevents the variable from holding a null value.

In the end, the compiler does its best to avoid nulls, but it is still impossible to eliminate them entirely because of generics.

6. Arrays

In Kotlin, arrays are invariant, which means that Kotlin does not allow us to assign an Array<Int> to an Array<Any>, preventing runtime failures.

So we have an example class:

public class ArrayExample {

    public int sumValues(int[] nums) {
        int res = 0;

        for (int x:nums) {
            res += x;
        }

        return res;
    }
}

If we want to pass an array of primitives to this method, we have to use one of the specialized classes from Kotlin:

val ex = ArrayExample()
val numArray = intArrayOf(1, 2, 3)

assertEquals(ex.sumValues(numArray), 6)

7. Varargs

Java gives us the capability of passing any number of arguments to a method:

public int sumArgValues(int... sums) {
    // same as above
}

The process is the same, with the slight difference that we need to use the spread operator * to pass the array:

assertEquals(ex.sumArgValues(*numArray), 6)

Currently, there’s no possibility of passing null to a varargs method.

8. Exceptions

In Kotlin all exceptions are unchecked, which means that the compiler will not force us to catch any exceptions:

// In our Java code

public void writeList() throws IOException {
    File file = new File("E://file.txt");
    FileReader fr = new FileReader(file);
    fr.close();
}

// In Kotlin

fun makeReadFile() {
    val ax = ArrayExample()
    ax.writeList()
}

9. Reflection

Simply put, reflection works on both Kotlin and Java classes:

val instance = Customer::class.java
val constructors = instance.constructors

assertEquals(constructors.size, 1)
assertEquals(constructors[0].name, "com.baeldung.java.Customer")

We can also obtain getter and setter methods, a KProperty for a Java field and a KFunction for a constructor.

10. Object Methods

When Java types are used from Kotlin, all references of the type java.lang.Object get changed to kotlin.Any:

val instance = Customer::class
val supertypes = instance.supertypes

assertEquals(supertypes[0].toString(), "kotlin.Any")

11. Conclusion

This quick tutorial gives us a better understanding of Kotlin and Java interoperability. We had a look at some simple examples to show how Kotlin generally leads to less verbose code overall.

As always, the implementation of all of these examples and snippets can be found over on GitHub. This is a Maven-based project so it should be easy to import and run.

HashSet and TreeSet Comparison

1. Introduction

In this article, we are going to compare two of the most popular Java implementations of the java.util.Set interface – HashSet and TreeSet.

2. Differences

HashSet and TreeSet are leaves of the same branch, but they differ in a few important ways.

2.1. Ordering

HashSet stores the objects in random order, whereas TreeSet applies the natural order of the elements. Let’s see the following example:

@Test
public void givenTreeSet_whenRetrievesObjects_thenNaturalOrder() {
    Set<String> set = new TreeSet<>();
    set.add("Baeldung");
    set.add("is");
    set.add("Awesome");
 
    assertEquals(3, set.size());
    assertTrue(set.iterator().next().equals("Awesome"));
}

After adding the String objects into TreeSet, we see that the first one is “Awesome”, even though it was added at the very end. A similar operation done with HashSet does not guarantee that the order of elements will remain constant over time.

2.2. Null Objects

Another difference is that HashSet can store null objects, while TreeSet does not allow them:

@Test(expected = NullPointerException.class)
public void givenTreeSet_whenAddNullObject_thenNullPointer() {
    Set<String> set = new TreeSet<>();
    set.add("Baeldung");
    set.add("is");
    set.add(null);
}

@Test
public void givenHashSet_whenAddNullObject_thenOK() {
    Set<String> set = new HashSet<>();
    set.add("Baeldung");
    set.add("is");
    set.add(null);
 
    assertEquals(3, set.size());
}

If we try to store a null object in a TreeSet, the operation will result in a NullPointerException. The only exception was before Java 7, when it was possible to add exactly one null element to an empty TreeSet.

2.3. Performance

Simply put, HashSet is faster than the TreeSet.

HashSet provides constant-time performance for most operations like add(), remove() and contains(), versus the log(n) time offered by the TreeSet.

Usually, we can see that the execution time for adding elements into a TreeSet is noticeably worse than for a HashSet.

Please remember that the JVM might not be warmed up, so the execution times can differ. A good discussion of how to design and perform micro tests using various Set implementations is available here.

2.4. Implemented Methods

TreeSet is rich in functionalities, implementing additional methods like:

  • pollFirst() – to retrieve and remove the first element, or return null if Set is empty
  • pollLast() – to retrieve and remove the last element, or return null if Set is empty
  • first() – to return the first item
  • last() – to return the last item
  • ceiling() – to return the least element greater than or equal to the given element, or null if there is no such element
  • lower() – to return the largest element strictly less than the given element, or null if there is no such element

The methods mentioned above make TreeSet much easier to use and more powerful than HashSet.
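
As a quick illustration, here is a minimal sketch of our own (not taken from the article’s tests) showing a few of these navigation methods in action:

TreeSet<Integer> numbers = new TreeSet<>();
numbers.add(10);
numbers.add(20);
numbers.add(30);

numbers.ceiling(15);  // returns 20, the least element >= 15
numbers.lower(20);    // returns 10, the greatest element < 20
numbers.pollFirst();  // returns 10 and removes it from the set
numbers.first();      // returns 20, as the set now starts at 20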

3. Similarities

3.1. Unique Elements

Both TreeSet and HashSet guarantee a duplicate-free collection of elements, as it is a part of the generic Set interface:

@Test
public void givenHashSetAndTreeSet_whenAddDuplicates_thenOnlyUnique() {
    Set<String> set = new HashSet<>();
    set.add("Baeldung");
    set.add("Baeldung");
 
    assertTrue(set.size() == 1);
        
    Set<String> set2 = new TreeSet<>();
    set2.add("Baeldung");
    set2.add("Baeldung");
 
    assertTrue(set2.size() == 1);
}

3.2. Not Synchronized

None of the described Set implementations are synchronized. This means that if multiple threads access a Set concurrently, and at least one of the threads modifies it, then it must be synchronized externally.
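
A common way to do that, assuming a fully synchronized wrapper fits the use case, is to wrap the set using the Collections utility class:

Set<String> syncHashSet = Collections.synchronizedSet(new HashSet<>());
SortedSet<String> syncTreeSet = Collections.synchronizedSortedSet(new TreeSet<>());

Note that iterating over the returned wrappers must still be synchronized manually on the wrapper object.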

3.3. Fail-fast Iterators

The Iterators returned by TreeSet and HashSet are fail-fast.

That means that any modification of the Set at any time after the Iterator is created will throw a ConcurrentModificationException:

@Test(expected = ConcurrentModificationException.class)
public void givenHashSet_whenModifyWhenIterator_thenFailFast() {
    Set<String> set = new HashSet<>();
    set.add("Baeldung");
    Iterator<String> it = set.iterator();

    while (it.hasNext()) {
        set.add("Awesome");
        it.next();
    }
}

4. Which Implementation to Use?

Both implementations fulfill the contract of the Set interface, so it is the context that determines which implementation we should use.

Here are a few quick points to remember:

  • If we want to keep our entries sorted, we need to go for the TreeSet
  • If we value performance more than memory consumption, we should go for the HashSet
  • If we are short on memory, we should go for the TreeSet
  • If we want to access elements that are relatively close to each other according to their natural ordering, we might want to consider TreeSet because it has greater locality
  • HashSet‘s performance can be tuned using the initialCapacity and loadFactor, which is not possible for the TreeSet
  • If we want to preserve insertion order and benefit from constant-time access, we can use the LinkedHashSet; all three iteration orders are contrasted in the sketch below
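
To make the ordering differences concrete, here is a small sketch of our own comparing the iteration order of the three implementations:

List<String> values = Arrays.asList("Baeldung", "is", "Awesome");

Set<String> hashSet = new HashSet<>(values);             // unspecified order
Set<String> treeSet = new TreeSet<>(values);             // [Awesome, Baeldung, is]
Set<String> linkedHashSet = new LinkedHashSet<>(values); // [Baeldung, is, Awesome]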

5. Conclusion

In this article, we covered the differences and similarities between TreeSet and HashSet.

As always, the code examples for this article are available over on GitHub.

Guide to the Most Important JVM Parameters

1. Overview

In this quick tutorial, we’ll explore the most well-known options which can be used to configure the Java Virtual Machine.

2. Explicit Heap Memory

One of the most common performance-related practices is to initialize the heap memory as per the application requirements.

That’s why we should specify the minimal and maximal heap size. The following parameters can be used to achieve this:

-Xms<heap size>[unit] 
-Xmx<heap size>[unit]

Here, unit denotes the unit in which the memory (indicated by heap size) is to be initialized. Units can be marked as ‘g’ for GB, ‘m’ for MB and ‘k’ for KB.

For example, if we want to assign minimum 2 GB and maximum 5 GB to JVM, we need to write:

-Xms2G -Xmx5G

Starting with Java 8, the size of the Metaspace is not defined. Once it reaches the global limit, the JVM automatically increases it. However, to overcome any unnecessary instability, we can set the Metaspace size with:

-XX:MaxMetaspaceSize=<metaspace size>[unit]

Here, metaspace size denotes the amount of memory we want to assign to Metaspace.
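
For example, to cap the Metaspace at 256 MB (a value chosen purely for illustration):

-XX:MaxMetaspaceSize=256m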

As per Oracle guidelines, after total available memory, the second most influential factor is the proportion of the heap reserved for the Young Generation. By default, the minimum size of the YG is 1310 MB, and maximum size is unlimited.

We can assign them explicitly:

-XX:NewSize=<young size>[unit] 
-XX:MaxNewSize=<young size>[unit]
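
For example, to let the Young Generation grow from 512 MB up to 1 GB (again, purely illustrative values):

-XX:NewSize=512m
-XX:MaxNewSize=1024m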

3. Garbage Collection

For better stability of the application, choosing the right garbage collection algorithm is critical.

JVM has four types of GC implementations:

  • Serial Garbage Collector
  • Parallel Garbage Collector
  • CMS Garbage Collector
  • G1 Garbage Collector

These implementations can be declared with the below parameters:

-XX:+UseSerialGC
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseG1GC

More details on Garbage Collection implementations can be found here.

4. GC Logging

To strictly monitor the application health, we should always check the JVM’s Garbage Collection performance. The easiest way to do this is to log the GC activity in human readable format.

Using the following parameters, we can log the GC activity:

-XX:+UseGCLogFileRotation 
-XX:NumberOfGCLogFiles=< number of log files > 
-XX:GCLogFileSize=< file size >[ unit ]
-Xloggc:/path/to/gc.log

UseGCLogFileRotation specifies the log file rolling policy, much like log4j, slf4j, etc. NumberOfGCLogFiles denotes the maximum number of log files that can be written for a single application life cycle. GCLogFileSize specifies the maximum size of each file. Finally, -Xloggc denotes the log file’s location.

Note that two more JVM parameters are available (-XX:+PrintGCTimeStamps and -XX:+PrintGCDateStamps) which can be used to print timestamps and dates in the GC log.

For example, if we want to allow a maximum of 10 GC log files, each having a maximum size of 50 MB, and want to store them in the ‘/home/user/log/’ location, we can use the syntax below:

-XX:+UseGCLogFileRotation  
-XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=50M 
-Xloggc:/home/user/log/gc.log

However, the problem is that one additional daemon thread is always used for monitoring system time in the background. This behavior may create some performance bottleneck; that’s why it’s always better not to play with this parameter in production.

5. Handling Out of Memory

It’s very common for a large application to face out of memory error which, in turn, results in the application crash. It’s a very critical scenario and very hard to replicate to troubleshoot the issue.

That’s why JVM comes with some parameters which dump heap memory into a physical file which can be used later for finding out leaks:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=./java_pid<pid>.hprof
-XX:OnOutOfMemoryError="< cmd args >;< cmd args >" 
-XX:+UseGCOverheadLimit

A couple of points to note here:

  • HeapDumpOnOutOfMemoryError instructs the JVM to dump the heap into a physical file in case of an OutOfMemoryError (the first two flags are combined in the example after this list)
  • HeapDumpPath denotes the path where the file is to be written; any filename can be given; however, if JVM finds a <pid> tag in the name, the process id of the current process causing the out of memory error will be appended to the file name with .hprof format
  • OnOutOfMemoryError is used to issue emergency commands to be executed in case of an out of memory error; proper commands should be used in the space of cmd args. For example, if we want to restart the server as soon as out of memory occurs, we can set the parameter:
-XX:OnOutOfMemoryError="shutdown -r"
  • UseGCOverheadLimit is a policy that limits the proportion of the VM’s time that is spent in GC before an OutOfMemory error is thrown
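
Putting the first two flags together, a hypothetical launch command for an application packaged as app.jar (the jar name and dump path are placeholders of our own) could look like this:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof -jar app.jar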

6. 32/64 bit

In the OS environment where both 32 and 64-bit packages are installed, the JVM automatically chooses 32-bit environmental packages.

If we want to set the environment to 64 bit manually, we can do so using below parameter:

-d< OS bit >

OS bit can be either 32 or 64. More information about this can be found here.
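
For instance, to request the 64-bit JVM explicitly on a JVM that supports this flag (the class name is just a placeholder):

java -d64 MyApplication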

7. Misc

  • -server: enables “Server Hotspot VM”; this parameter is used by default in 64 bit JVM
  • -XX:+UseStringDeduplication: Java 8u20 has introduced this JVM parameter for reducing the unnecessary use of memory by creating too many instances of the same String; this optimizes the heap memory by reducing duplicate String values to a single global char[] array
  • -XX:+UseLWPSynchronization: sets LWP (Light Weight Process) – based synchronization policy instead of thread-based synchronization
  • -XX:LargePageSizeInBytes: sets the large page size used for the Java heap; it takes the argument in GB/MB/KB; with larger page sizes we can make better use of virtual memory hardware resources; however, this may cause larger space sizes for the PermGen, which in turn can force the JVM to reduce the size of the Java heap space
  • -XX:MaxHeapFreeRatio: sets the maximum percentage of heap free after GC to avoid shrinking.
  • -XX:MinHeapFreeRatio: sets the minimum percentage of heap free after GC to avoid expansion; to monitor the heap usage you can use VisualVM shipped with JDK.
  • -XX:SurvivorRatio: ratio of eden/survivor space size; for example, -XX:SurvivorRatio=6 sets the ratio between each survivor space and eden space to be 1:6
  • -XX:+UseLargePages: use large page memory if it is supported by the system; please note that OpenJDK 7 tends to crash if using this JVM parameter
  • -XX:+UseStringCache: enables caching of commonly allocated strings available in the String pool
  • -XX:+UseCompressedStrings: use a byte[] type for String objects which can be represented in pure ASCII format
  • -XX:+OptimizeStringConcat: it optimizes String concatenation operations where possible

8. Conclusion

In this quick article, we learned about some important JVM parameters – which can be used to tune and improve general application performance.

Some of these can also be used for debugging purposes.

If you want to explore the reference parameters in more detail, you can get started here.

Guide to the ConcurrentSkipListMap

1. Overview

In this quick article, we’ll be looking at the ConcurrentSkipListMap class from the java.util.concurrent package.

This construct allows us to create thread-safe logic in a lock-free way. It’s ideal for problems where we want to make an immutable snapshot of the data while other threads are still inserting data into the map.

We will be solving a problem of sorting a stream of events and getting a snapshot of the events that arrived in the last 60 seconds using that construct.

2. Stream Sorting Logic

Let’s say that we have a stream of events that are continually coming from multiple threads. We need to be able to take events from the last 60 seconds, and also events that are older than 60 seconds.

First, let’s define the structure of our event data:

public class Event {
    private ZonedDateTime eventTime;
    private String content;

    // standard constructors/getters
}

We want to keep our events sorted using the eventTime field. To achieve this using the ConcurrentSkipListMap, we need to pass a Comparator to its constructor while creating an instance of it:

ConcurrentSkipListMap<ZonedDateTime, String> events
 = new ConcurrentSkipListMap<>(
 Comparator.comparingLong(v -> v.toInstant().toEpochMilli()));

We’ll be comparing all arriving events using their timestamps. We are using the comparingLong() method and passing an extractor function that obtains a long timestamp from the ZonedDateTime.

When our events are arriving, we need only to add them to the map using the put() method. Note that this method does not require any explicit synchronization:

public void acceptEvent(Event event) {
    events.put(event.getEventTime(), event.getContent());
}

The ConcurrentSkipListMap will handle the sorting of those events underneath using the Comparator that was passed to it in the constructor.

The most notable strength of the ConcurrentSkipListMap is the set of methods that can make an immutable snapshot of its data in a lock-free way. To get all events that arrived within the past minute, we can use the tailMap() method and pass the time from which we want to get elements:

public ConcurrentNavigableMap<ZonedDateTime, String> getEventsFromLastMinute() {
    return events.tailMap(ZonedDateTime.now().minusMinutes(1));
}

It will return all events from the past minute. It will be an immutable snapshot and, most importantly, other writing threads can add new events to the ConcurrentSkipListMap without any need for explicit locking.

We can now get all the events that are older than one minute by using the headMap() method:

public ConcurrentNavigableMap<ZonedDateTime, String> getEventsOlderThanOneMinute() {
    return events.headMap(ZonedDateTime.now().minusMinutes(1));
}

This will return an immutable snapshot of all events that are older than one minute. All of the above methods belong to the EventWindowSort class, which we’ll use in the next section.

3. Testing the Sorting Stream Logic

Once we implemented our sorting logic using the ConcurrentSkipListMap, we can now test it by creating two writer threads that will send one hundred events each:

ExecutorService executorService = Executors.newFixedThreadPool(3);
EventWindowSort eventWindowSort = new EventWindowSort();
int numberOfThreads = 2;

Runnable producer = () -> IntStream
  .rangeClosed(0, 100)
  .forEach(index -> eventWindowSort.acceptEvent(
      new Event(ZonedDateTime.now().minusSeconds(index), UUID.randomUUID().toString()))
  );

for (int i = 0; i < numberOfThreads; i++) {
    executorService.execute(producer);
}

Each thread is invoking the acceptEvent() method, sending the events that have eventTime from now to “now minus one hundred seconds”.

In the meantime, we can invoke the getEventsFromLastMinute() method that will return the snapshot of events that are within the one minute window:

ConcurrentNavigableMap<ZonedDateTime, String> eventsFromLastMinute 
  = eventWindowSort.getEventsFromLastMinute();

The number of events in eventsFromLastMinute will vary in each test run depending on the speed at which the producer threads send events to the EventWindowSort. We can assert that there is not a single event in the returned snapshot that is older than one minute:

long eventsOlderThanOneMinute = eventsFromLastMinute
  .entrySet()
  .stream()
  .filter(e -> e.getKey().isBefore(ZonedDateTime.now().minusMinutes(1)))
  .count();
 
assertEquals(eventsOlderThanOneMinute, 0);

And that there are more than zero events in the snapshot that are within the one minute window:

long eventYoungerThanOneMinute = eventsFromLastMinute
  .entrySet()
  .stream()
  .filter(e -> e.getKey().isAfter(ZonedDateTime.now().minusMinutes(1)))
  .count();
 
assertTrue(eventYoungerThanOneMinute > 0);

Our getEventsFromLastMinute() uses the tailMap() underneath.

Let’s now test the getEventsOlderThanOneMinute() method, which uses the headMap() method from the ConcurrentSkipListMap:

ConcurrentNavigableMap<ZonedDateTime, String> eventsFromLastMinute 
  = eventWindowSort.getEventsOlderThanOneMinute();

This time we get a snapshot of events that are older than one minute. We can assert that there are more than zero of such events:

long eventsOlderThanOneMinute = eventsFromLastMinute
  .entrySet()
  .stream()
  .filter(e -> e.getKey().isBefore(ZonedDateTime.now().minusMinutes(1)))
  .count();
 
assertTrue(eventsOlderThanOneMinute > 0);

And next, that there is not a single event that is from within the last minute:

long eventYoungerThanOneMinute = eventsFromLastMinute
  .entrySet()
  .stream()
  .filter(e -> e.getKey().isAfter(ZonedDateTime.now().minusMinutes(1)))
  .count();
 
assertEquals(eventYoungerThanOneMinute, 0);

The most important thing to note is that we can take the snapshot of data while other threads are still adding new values to the ConcurrentSkipListMap.

4. Conclusion

In this quick tutorial, we had a look at the basics of the ConcurrentSkipListMap, along with some practical examples.

We leveraged the high performance of the ConcurrentSkipListMap to implement a non-blocking algorithm that can serve us an immutable snapshot of data even if at the same time multiple threads are updating the map.

The implementation of all these examples and code snippets can be found in the GitHub project; this is a Maven project, so it should be easy to import and run as it is.

How to Perform a Simple HTTP Request in Java

1. Overview

In this quick tutorial, we’re going to present a way of performing HTTP requests in Java, by using the built-in Java class HttpURLConnection.

2. HttpURLConnection

The HttpURLConnection class allows us to perform basic HTTP requests without the use of any additional libraries. All the classes that we need are contained in the java.net package.

The disadvantages of using this method are that the code can be more cumbersome than that of other HTTP libraries and that it does not provide more advanced functionalities such as dedicated methods for adding headers or authentication.

3. Creating a Request

An HttpURLConnection instance is created by using the openConnection() method of the URL class. Note that this method only creates a connection object but does not establish the connection yet.

The HttpURLConnection class is used for all types of requests by setting the requestMethod attribute to one of the values: GET, POST, HEAD, OPTIONS, PUT, DELETE, TRACE.

Let’s create a connection to a given URL using the GET method:

URL url = new URL("http://example.com");
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setRequestMethod("GET");

4. Adding Request Parameters

If we want to add parameters to a request, we have to set the doOutput property to true, then write a String of the form param1=value&param2=value to the OutputStream of the HttpURLConnection instance:

Map<String, String> parameters = new HashMap<>();
parameters.put("param1", "val");

con.setDoOutput(true);
DataOutputStream out = new DataOutputStream(con.getOutputStream());
out.writeBytes(ParameterStringBuilder.getParamsString(parameters));
out.flush();
out.close();

To facilitate the transformation of the parameter Map, we have written a utility class called ParameterStringBuilder containing a static method getParamsString() that transforms a Map to a String of the required format:

public class ParameterStringBuilder {
    public static String getParamsString(Map<String, String> params) 
      throws UnsupportedEncodingException {
        StringBuilder result = new StringBuilder();

        for (Map.Entry<String, String> entry : params.entrySet()) {
          result.append(URLEncoder.encode(entry.getKey(), "UTF-8"));
          result.append("=");
          result.append(URLEncoder.encode(entry.getValue(), "UTF-8"));
          result.append("&");
        }

        String resultString = result.toString();
        return resultString.length() > 0
          ? resultString.substring(0, resultString.length() - 1)
          : resultString;
    }
}

5. Setting Request Headers

Adding headers to a request can be achieved by using the setRequestProperty() method:

con.setRequestProperty("Content-Type", "application/json");

To read the value of a header from a connection, we can use the getHeaderField() method:

String contentType = con.getHeaderField("Content-Type");

6. Configuring Timeouts

The HttpUrlConnection class allows us to set the connect and read timeouts. These values define the interval of time to wait for the connection to the server to be established, or for data to become available for reading.

To set the timeout values we can use the setConnectTimeout() and setReadTimeout() methods:

con.setConnectTimeout(5000);
con.setReadTimeout(5000);

In the example above, we have set both timeout values to 5 seconds.

7. Handling Cookies

The java.net package contains classes that ease working with cookies such as CookieManager and HttpCookie.

First, to read the cookies from a response, we can retrieve the value of the Set-Cookie header and parse it to a list of HttpCookie objects:

String cookiesHeader = con.getHeaderField("Set-Cookie");
List<HttpCookie> cookies = HttpCookie.parse(cookiesHeader);

Next, we will add the cookies to the cookie store of a CookieManager instance:

CookieManager cookieManager = new CookieManager();
cookies.forEach(cookie -> cookieManager.getCookieStore().add(null, cookie));

Let’s check if a cookie called username is present, and if not, we will add it to the cookie store with a value of “john”:

Optional<HttpCookie> usernameCookie = cookies.stream()
  .filter(cookie -> cookie.getName().equals("username"))
  .findAny();
if (!usernameCookie.isPresent()) {
    cookieManager.getCookieStore().add(null, new HttpCookie("username", "john"));
}

Finally, to add the cookies to the request, we need to set the Cookie header, after closing and reopening the connection:

con.disconnect();
con = (HttpURLConnection) url.openConnection();

con.setRequestProperty("Cookie", 
  StringUtils.join(cookieManager.getCookieStore().getCookies(), ";"));

8. Handling Redirects 

We can enable or disable automatically following redirects for a specific connection by using the setInstanceFollowRedirects() method with a true or false parameter:

con.setInstanceFollowRedirects(false);

It is also possible to enable or disable automatic redirect for all connections:

HttpUrlConnection.setFollowRedirects(false);

By default, automatic redirect following is enabled.

When a request returns a status code 301 or 302, indicating a redirect, we can retrieve the Location header and create a new request to the new URL:

if (status == HttpURLConnection.HTTP_MOVED_TEMP
  || status == HttpURLConnection.HTTP_MOVED_PERM) {
    String location = con.getHeaderField("Location");
    URL newUrl = new URL(location);
    con = (HttpURLConnection) newUrl.openConnection();
}

9. Reading the Response

Reading the response of the request can be done by parsing the InputStream of the HttpUrlConnection instance.

To execute the request we can use the getResponseCode(), connect(), getInputStream() or getOutputStream() methods:

int status = con.getResponseCode();

Finally, let’s read the response of the request and place it in a content String:

BufferedReader in = new BufferedReader(
  new InputStreamReader(con.getInputStream()));
String inputLine;
StringBuilder content = new StringBuilder();
while ((inputLine = in.readLine()) != null) {
    content.append(inputLine);
}
in.close();
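
Note that if the request fails – for example, the server returns a 4xx or 5xx status code – getInputStream() throws an IOException. In that case, we can read the error body, if the server sent one, from the error stream. Here’s a brief sketch:

// if the request failed, the response body (if any) is on the error stream
InputStream errorStream = con.getErrorStream();
if (errorStream != null) {
    BufferedReader errorReader = new BufferedReader(
      new InputStreamReader(errorStream));
    // read it line by line, exactly like the regular input stream above
}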

To close the connection, we can use the disconnect() method:

con.disconnect();

10. Conclusion

In this article, we have shown how we can perform HTTP requests using the HttpUrlConnection class. The full source code of the examples can be found over on GitHub.

Difference Between Wait and Sleep in Java


1. Overview

In this short article, we’ll have a look at the standard sleep() and wait() methods in core Java, and understand the differences and similarities between them.

2. General Differences Between Wait and Sleep

Simply put, wait() is an instance method that’s used for thread synchronization.

It can be called on any object, as it’s defined right on java.lang.Object, but it can only be called from a synchronized block. It releases the lock on the object so that another thread can jump in and acquire a lock.

On the other hand, Thread.sleep() is a static method that can be called from any context. Thread.sleep() pauses the current thread and does not release any locks.

Here’s a very simplistic initial look at these two core APIs in action:

private static Object LOCK = new Object();

private static void sleepWaitExamples() 
  throws InterruptedException {
 
    Thread.sleep(1000);
    System.out.println(
      "Thread '" + Thread.currentThread().getName() +
      "' is woken after sleeping for 1 second");
 
    synchronized (LOCK) {
        LOCK.wait(1000);
        System.out.println("Object '" + LOCK + "' is woken after" +
          " waiting for 1 second");
    }
}

Running this example will produce the following output:

Thread ‘main’ is woken after sleeping for 1 second
Object ‘java.lang.Object@31befd9f’ is woken after waiting for 1 second

3. Waking up Wait and Sleep

When we use the sleep() method, the sleeping thread resumes execution after the specified time interval has elapsed, unless it is interrupted.

For wait(), the waking up process is a bit more complicated. We can wake the thread by calling either the notify() or notifyAll() methods on the monitor that is being waited on.

Use notifyAll() instead of notify() when you want to wake all threads that are in the waiting state. Similarly to the wait() method itself, notify() and notifyAll() have to be called from a synchronized context.

For example, here’s how you can wait:

synchronized (b) {
    while (b.sum == 0) {
        System.out.println("Waiting for ThreadB to complete...");
        b.wait();
    }

    System.out.println("ThreadB has completed. " + 
      "Sum from that thread is: " + b.sum);
}

And then, here’s how another thread can then wake up the waiting thread – by calling notify() on the monitor:

int sum;
 
@Override 
public void run() {
    synchronized (this) {
        int i = 0;
        while (i < 100000) {
            sum += i;
            i++; 
        }
        notify(); 
    } 
}

Running this example will produce the following output:

Waiting for ThreadB to complete…
ThreadB has completed. Sum from that thread is: 704982704

4. Conclusion

This is a quick primer to the semantics of wait and sleep in Java.

In general, we should use sleep() for controlling the execution time of one thread and wait() for multi-threaded synchronization. Naturally, there’s a lot more to explore – after understanding the basics well.

As always, you can check out the examples provided in this article over on GitHub.


LongAdder and LongAccumulator in Java


1. Overview

In this article, we’ll be looking at two constructs from the java.util.concurrent package: LongAdder and LongAccumulator.

Both are created to be very efficient in the multi-threaded environment and both leverage very clever tactics to be lock-free and still remain thread-safe.

2. LongAdder

Let’s consider some logic that’s incrementing some values very often, where using an AtomicLong can be a bottleneck. AtomicLong uses a compare-and-swap operation which – under heavy contention – can lead to a lot of wasted CPU cycles.

LongAdder, on the other hand, uses a very clever trick to reduce contention between threads, when these are incrementing it.

When we want to increment an instance of the LongAdder, we need to call the increment() method. That implementation keeps an array of counters that can grow on demand.

And so, when more threads are calling increment(), the array will be longer. Each record in the array can be updated separately – reducing the contention. Due to that fact, the LongAdder is a very efficient way to increment a counter from multiple threads.
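
To get an intuition for this, here’s a toy sketch of the striping idea. This is an illustration only – not the actual JDK implementation, which grows its cell array on demand and handles contention and collisions far more carefully:

// Toy sketch of the striping idea behind LongAdder (not the real implementation):
// each thread hashes to its own cell, so concurrent increments rarely contend.
class StripedCounter {
    private final AtomicLong[] cells = new AtomicLong[8];

    StripedCounter() {
        for (int i = 0; i < cells.length; i++) {
            cells[i] = new AtomicLong();
        }
    }

    void increment() {
        // pick a cell based on the current thread, reducing contention
        int index = (int) (Thread.currentThread().getId() % cells.length);
        cells[index].incrementAndGet();
    }

    long sum() {
        // like LongAdder.sum(), this has to visit every cell
        long total = 0;
        for (AtomicLong cell : cells) {
            total += cell.get();
        }
        return total;
    }
}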

Let’s create an instance of the LongAdder class and update it from multiple threads:

LongAdder counter = new LongAdder();
ExecutorService executorService = Executors.newFixedThreadPool(8);

int numberOfThreads = 4;
int numberOfIncrements = 100;

Runnable incrementAction = () -> IntStream
  .range(0, numberOfIncrements)
  .forEach(i -> counter.increment());

for (int i = 0; i < numberOfThreads; i++) {
    executorService.execute(incrementAction);
}

The result of the counter in the LongAdder is not available until we call the sum() method. Once the submitted tasks have completed, that method will iterate over all values of the underlying array and sum them up, returning the proper value. We need to be careful though, because the call to the sum() method can be very costly:

assertEquals(counter.sum(), numberOfIncrements * numberOfThreads);

Sometimes, after we call sum(), we want to clear all state that is associated with the instance of the LongAdder and start counting from the beginning. We can use the sumThenReset() method to achieve that:

assertEquals(counter.sumThenReset(), numberOfIncrements * numberOfThreads);
assertEquals(counter.sum(), 0);

Note that the subsequent call to the sum() method returns zero meaning that the state was successfully reset.

3. LongAccumulator

LongAccumulator is also a very interesting class – which allows us to implement a lock-free algorithm in a number of scenarios. For example, it can be used to accumulate results according to the supplied LongBinaryOperator – this works similarly to the reduce() operation from Stream API.

The instance of the LongAccumulator can be created by supplying the LongBinaryOperator and the initial value to its constructor. The important thing to remember is that LongAccumulator will work correctly only if we supply it with a commutative function, where the order of accumulation does not matter.

LongAccumulator accumulator = new LongAccumulator(Long::sum, 0L);

We’re creating a LongAccumulator which will add a new value to the value that was already in the accumulator. We are setting the initial value of the LongAccumulator to zero, so in the first call of the accumulate() method, the previousValue will have a zero value.

Let’s invoke the accumulate() method from multiple threads:

int numberOfThreads = 4;
int numberOfIncrements = 100;

Runnable accumulateAction = () -> IntStream
  .rangeClosed(0, numberOfIncrements)
  .forEach(accumulator::accumulate);

for (int i = 0; i < numberOfThreads; i++) {
    executorService.execute(accumulateAction);
}

Notice how we’re passing a number as an argument to the accumulate() method. That method will apply our Long::sum function to the current value and the supplied argument.

The LongAccumulator uses a compare-and-swap implementation – which leads to these interesting semantics.

Firstly, it executes the action defined as a LongBinaryOperator, and then it checks if the previousValue has changed. If it has, the action is executed again with the new value. If not, it succeeds in changing the value that is stored in the accumulator.
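
Conceptually, that retry loop looks something like the following simplified sketch (the real implementation additionally stripes values across cells, much like LongAdder does):

// Simplified sketch of the compare-and-swap retry loop (illustrative only)
long accumulate(AtomicLong cell, LongBinaryOperator op, long x) {
    long previousValue;
    long newValue;
    do {
        previousValue = cell.get();                   // read the current value
        newValue = op.applyAsLong(previousValue, x);  // apply the supplied function
    } while (!cell.compareAndSet(previousValue, newValue)); // retry if another thread won
    return newValue;
}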

We can now assert that the sum of all values from all iterations was 20200:

assertEquals(accumulator.get(), 20200);

4. Conclusion

In this quick tutorial, we had a look at LongAdder and LongAccumulator and we’ve shown how to use both constructs to implement very efficient and lock-free solutions.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Java Annotations Interview Questions (+ Answers)


1. Introduction

Annotations have been around since Java 5, and nowadays, they are ubiquitous programming constructs that allow enriching the code.

In this article, we’ll review some of the questions regarding annotations that are often asked in technical interviews, and where appropriate, we’ll implement examples to understand their answers better.

2. Questions

Q1. What are annotations? What are their typical use cases?

Annotations are metadata bound to elements of the source code of a program, and they have no effect on the operation of the code they annotate.

Their typical use cases are:

  • Information for the compiler – with annotations, the compiler can detect errors or suppress warnings
  • Compile-time and deployment-time processing – software tools can process annotations and generate code, configuration files, etc.
  • Runtime processing – annotations can be examined at runtime to customize the behavior of a program

Q2. Describe some useful annotations from the standard library.

There are several annotations in the java.lang and java.lang.annotation packages. The more common ones include, but are not limited to:

  • @Override – marks that a method is meant to override an element declared in a superclass. If it fails to override the method correctly, the compiler will issue an error
  • @Deprecated – indicates that an element is deprecated and should not be used. The compiler will issue a warning if the program uses a method, class, or field marked with this annotation
  • @SuppressWarnings – tells the compiler to suppress specific warnings. Most commonly used when interfacing with legacy code written before generics appeared
  • @FunctionalInterface – introduced in Java 8, indicates that the type declaration is a functional interface, whose implementation can be provided using a lambda expression
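
For instance, here’s a brief, hypothetical class that exercises three of them:

public class LegacyService {

    @Override
    public String toString() {
        return "LegacyService";
    }

    @Deprecated
    public void oldOperation() {
        // superseded; kept only for backward compatibility
    }

    @SuppressWarnings("unchecked")
    public List<String> readLegacyData() {
        List rawList = new ArrayList();   // legacy, pre-generics code
        return (List<String>) rawList;    // would emit a warning without the annotation
    }
}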

Q3. How can you create an annotation?

Annotations are a form of an interface where the keyword interface is preceded by @, and whose body contains annotation type element declarations that look very similar to methods:

public @interface SimpleAnnotation {
    String value();

    int[] types();
}

After the annotation is defined, you can start using it throughout your code:

@SimpleAnnotation(value = "an element", types = 1)
public class Element {
    @SimpleAnnotation(value = "an attribute", types = { 1, 2 })
    public Element nextElement;
}

Note that, when providing multiple values for array elements, you must enclose them in curly braces.

Optionally, default values can be provided as long as they are constant expressions to the compiler:

public @interface SimpleAnnotation {
    String value() default "This is an element";

    int[] types() default { 1, 2, 3 };
}

Now, you can use the annotation without those elements:

@SimpleAnnotation
public class Element {
    // ...
}

Or only some of them:

@SimpleAnnotation(value = "an attribute")
public Element nextElement;

Q4. What object types can be returned from an annotation method declaration?

The return type must be a primitive, String, Class, an enum, another annotation, or an array of one of those types. Otherwise, the compiler will throw an error.

Here’s an example code that successfully follows this principle:

enum Complexity {
    LOW, HIGH
}

public @interface ComplexAnnotation {
    Class<? extends Object> value();

    int[] types();

    Complexity complexity();
}

The next example will fail to compile since Object is not a valid return type:

public @interface FailingAnnotation {
    Object complexity();
}

Q5. Which program elements can be annotated?

Annotations can be applied in several places throughout the source code. They can be applied to declarations of classes, constructors, and fields:

@SimpleAnnotation
public class Apply {
    @SimpleAnnotation
    private String aField;

    @SimpleAnnotation
    public Apply() {
        // ...
    }
}

Methods and their parameters:

@SimpleAnnotation
public void aMethod(@SimpleAnnotation String param) {
    // ...
}

Local variables, including loop and resource variables:

@SimpleAnnotation
int i = 10;

for (@SimpleAnnotation int j = 0; j < i; j++) {
    // ...
}

try (@SimpleAnnotation FileWriter writer = getWriter()) {
    // ...
} catch (Exception ex) {
    // ...
}

Other annotation types:

@SimpleAnnotation
public @interface ComplexAnnotation {
    // ...
}

And even packages, through the package-info.java file:

@PackageAnnotation
package com.baeldung.interview.annotations;

As of Java 8, they can also be applied to the use of types. For this to work, the annotation must specify an @Target annotation with a value of ElementType.TYPE_USE:

@Target(ElementType.TYPE_USE)
public @interface SimpleAnnotation {
    // ...
}

Now, the annotation can be applied to class instance creation:

new @SimpleAnnotation Apply();

Type casts:

aString = (@SimpleAnnotation String) something;

Implements clause:

public class SimpleList<T>
  implements @SimpleAnnotation List<@SimpleAnnotation T> {
    // ...
}

And throws clause:

void aMethod() throws @SimpleAnnotation Exception {
    // ...
}

Q6. Is there a way to limit the elements in which an annotation can be applied?

Yes, the @Target annotation can be used for this purpose. If we try to use an annotation in a context where it is not applicable, the compiler will issue an error.

Here’s an example to limit the usage of the @SimpleAnnotation annotation to field declarations only:

@Target(ElementType.FIELD)
public @interface SimpleAnnotation {
    // ...
}

We can pass multiple constants if we want to make it applicable in more contexts:

@Target({ ElementType.FIELD, ElementType.METHOD, ElementType.PACKAGE })

We can even make an annotation so it cannot be used to annotate anything. This may come in handy when the declared types are intended solely for use as a member type in complex annotations:

@Target({})
public @interface NoTargetAnnotation {
    // ...
}

Q7. What are meta-annotations?

Meta-annotations are annotations that apply to other annotations.

All annotations that aren’t marked with @Target, or that are marked with it and include the ANNOTATION_TYPE constant, can also be used as meta-annotations:

@Target(ElementType.ANNOTATION_TYPE)
public @interface SimpleAnnotation {
    // ...
}

Q8. What are repeating annotations?

These are annotations that can be applied more than once to the same element declaration.

For compatibility reasons, since this feature was introduced in Java 8, repeating annotations are stored in a container annotation that is automatically generated by the Java compiler. For the compiler to do this, there are two steps to declare them.

First, we need to declare a repeatable annotation:

@Repeatable(Schedules.class)
public @interface Schedule {
    String time() default "morning";
}

Then, we define the containing annotation with a mandatory value element, and whose type must be an array of the repeatable annotation type:

public @interface Schedules {
    Schedule[] value();
}

Now, we can use @Schedule multiple times:

@Schedule
@Schedule(time = "afternoon")
@Schedule(time = "night")
void scheduledMethod() {
    // ...
}

Q9. How can you retrieve annotations? How does this relate to its retention policy?

You can use the Reflection API or an annotation processor to retrieve annotations.

The @Retention annotation and its RetentionPolicy parameter affect how you can retrieve them. There are three constants in RetentionPolicy enum:

  • RetentionPolicy.SOURCE – makes the annotation discarded by the compiler, although annotation processors can still read it
  • RetentionPolicy.CLASS – indicates that the annotation is added to the class file but is not accessible through reflection at runtime
  • RetentionPolicy.RUNTIME – annotations are recorded in the class file by the compiler and retained by the JVM at runtime, so that they can be read reflectively

Here’s an example code to create an annotation that can be read at runtime:

@Retention(RetentionPolicy.RUNTIME)
public @interface Description {
    String value();
}

Now, annotations can be retrieved through reflection:

Description description
  = AnnotatedClass.class.getAnnotation(Description.class);
System.out.println(description.value());

An annotation processor can work with RetentionPolicy.SOURCE; this is described in the article Java Annotation Processing and Creating a Builder.

RetentionPolicy.CLASS is usable when you’re writing a Java bytecode parser.

Q10. Will the following code compile? 

@Target({ ElementType.FIELD, ElementType.TYPE, ElementType.FIELD })
public @interface TestAnnotation {
    int[] value() default {};
}

No. It’s a compile-time error if the same enum constant appears more than once in an @Target annotation.

Removing the duplicate constant will make the code compile successfully:

@Target({ ElementType.FIELD, ElementType.TYPE})

Q11. Is it possible to extend annotations?

No. Annotations always extend java.lang.annotation.Annotation, as stated in the Java Language Specification.

If we try to use the extends clause in an annotation declaration, we’ll get a compilation error:

public @interface AnAnnotation extends OtherAnnotation {
    // Compilation error
}

3. Conclusion

In this article, we covered some of the frequently asked questions appearing in technical interviews for Java developers, regarding annotations. This is by no means an exhaustive list, and should only be considered as the start of further research.

We, at Baeldung, wish you success in any upcoming interviews.

Using Java MappedByteBuffer


1. Overview

In this quick article, we’ll be looking at the MappedByteBuffer in the java.nio package. This utility can be quite useful for efficient file reads.

2. How MappedByteBuffer Works

When we load a region of a file, we can map it to a particular memory region that can be accessed later.

When we know that we’ll need to read the content of a file multiple times, it’s a good idea to optimize the costly process, e.g. by saving that content in memory. Thanks to that, subsequent lookups of that part of the file will go only to main memory, without the need to load the data from the disk, reducing latency substantially.

One thing that we need to be careful with when using the MappedByteBuffer is working with very large files from disk – we need to make sure the whole file will fit in memory.

Otherwise, we can fill up the entire memory and, as a consequence, run into the common OutOfMemoryError. We can overcome that by loading only part of the file – based, for example, on usage patterns.
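
As a quick illustration of that idea, here’s a minimal sketch that maps only a fixed-size window at the beginning of a hypothetical large file, instead of its full length (the file name and window size here are just assumptions):

// map only the first 4 KB of the file instead of its full length
try (FileChannel channel = FileChannel.open(
  Paths.get("largeFile.bin"), StandardOpenOption.READ)) {

    long windowSize = Math.min(4096, channel.size());
    MappedByteBuffer window = channel
      .map(FileChannel.MapMode.READ_ONLY, 0, windowSize);
}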

3. Reading the File Using MappedByteBuffer

Let’s say that we have a file called fileToRead.txt with the following content:

This is a content of the file

The file is located in the resources directory, so we can load it using the following function:

Path getFileURIFromResources(String fileName) throws Exception {
    ClassLoader classLoader = getClass().getClassLoader();
    return Paths.get(classLoader.getResource(fileName).getPath());
}

To create the MappedByteBuffer from a file, firstly we need to create a FileChannel from it. Once we have our channel created, we can invoke the map() method on it passing in the MapMode, a position from which we want to read, and the size parameter that specifies how many bytes we want:

CharBuffer charBuffer = null;
Path pathToRead = getFileURIFromResources("fileToRead.txt");

try (FileChannel fileChannel = (FileChannel) Files.newByteChannel(
  pathToRead, EnumSet.of(StandardOpenOption.READ))) {
 
    MappedByteBuffer mappedByteBuffer = fileChannel
      .map(FileChannel.MapMode.READ_ONLY, 0, fileChannel.size());

    if (mappedByteBuffer != null) {
        charBuffer = Charset.forName("UTF-8").decode(mappedByteBuffer);
    }
}

Once we’ve mapped our file into the memory-mapped buffer, we can read the data from it into the CharBuffer. It’s important to note that although we’re reading the content of the file when we call the decode() method passing the MappedByteBuffer, we read from memory, not from the disk. Therefore that read will be very fast.

We can assert that the content that we read from our file is the actual content of the fileToRead.txt file:

assertNotNull(charBuffer);
assertEquals(
  charBuffer.toString(), "This is a content of the file");

Every subsequent read from the mappedByteBuffer will be very fast because the content of the file is mapped in memory, and reading is done without the need to look the data up on the disk.

4. Writing to the File using MappedByteBuffer

Let’s say that we want to write some content into the file fileToWriteTo.txt using the MappedByteBuffer API. To achieve that we need to open the FileChannel and call the map() method on it, passing in the FileChannel.MapMode.READ_WRITE. 

Next, we can save the content of the CharBuffer into the file using the put() method from the MappedByteBuffer:

CharBuffer charBuffer = CharBuffer
  .wrap("This will be written to the file");
Path pathToWrite = getFileURIFromResources("fileToWriteTo.txt");

try (FileChannel fileChannel = (FileChannel) Files
  .newByteChannel(pathToWrite, EnumSet.of(
    StandardOpenOption.READ, 
    StandardOpenOption.WRITE, 
    StandardOpenOption.TRUNCATE_EXISTING))) {
    
    MappedByteBuffer mappedByteBuffer = fileChannel
      .map(FileChannel.MapMode.READ_WRITE, 0, charBuffer.length());
    
    if (mappedByteBuffer != null) {
        mappedByteBuffer.put(
          Charset.forName("utf-8").encode(charBuffer));
    }
}

We can assert that the actual content of the charBuffer was written to the file by reading the content of it:

List<String> fileContent = Files.readAllLines(pathToWrite);
assertEquals(fileContent.get(0), "This will be written to the file");

5. Conclusion

In this quick tutorial, we were looking at the MappedByteBuffer construct from the java.nio package.

This is a very efficient way to read the content of a file multiple times, as the file is mapped into memory and subsequent reads do not need to go to the disk every time.

All these examples and code snippets can be found over on GitHub – this is a Maven project, so it should be easy to import and run as it is.

Dynamic Proxies in Java


1. Introduction

This article is about Java’s dynamic proxies – which is one of the primary proxy mechanisms available to us in the language.

Simply put, proxies are fronts or wrappers that pass function invocation through their own facilities (usually onto real methods) – potentially adding some functionality.

Dynamic proxies allow one single class with one single method to service multiple method calls to arbitrary classes with an arbitrary number of methods. A dynamic proxy can be thought of as a kind of Facade, but one that can pretend to be an implementation of any interface. Under the covers, it routes all method invocations to a single handler – the invoke() method.

While it’s not a tool meant for everyday programming tasks, dynamic proxies can be quite useful for framework writers. It may also be used in those cases where concrete class implementations won’t be known until run-time.

This feature is built into the standard JDK, hence no additional dependencies are required.

2. Invocation Handler

Let’s build a simple proxy that doesn’t actually do anything except print which method was requested to be invoked, and return a hard-coded number.

First, we need to create a subtype of java.lang.reflect.InvocationHandler:

public class DynamicInvocationHandler implements InvocationHandler {

    private static Logger LOGGER = LoggerFactory.getLogger(
      DynamicInvocationHandler.class);

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) 
      throws Throwable {
        LOGGER.info("Invoked method: {}", method.getName());

        return 42;
    }
}

Here we’ve defined a simple proxy that logs which method was invoked and returns 42.

3. Creating Proxy Instance

A proxy instance serviced by the invocation handler we have just defined is created via a factory method call on the java.lang.reflect.Proxy class:

Map proxyInstance = (Map) Proxy.newProxyInstance(
  DynamicProxyTest.class.getClassLoader(), 
  new Class[] { Map.class }, 
  new DynamicInvocationHandler());

Once we have a proxy instance we can invoke its interface methods as normal:

proxyInstance.put("hello", "world");

As expected a message about put() method being invoked is printed out in the log file.

4. Invocation Handler via Lambda Expressions

Since InvocationHandler is a functional interface, it is possible to define the handler inline using a lambda expression:

Map proxyInstance = (Map) Proxy.newProxyInstance(
  DynamicProxyTest.class.getClassLoader(), 
  new Class[] { Map.class }, 
  (proxy, method, methodArgs) -> {
    if (method.getName().equals("get")) {
        return 42;
    } else {
        throw new UnsupportedOperationException(
          "Unsupported method: " + method.getName());
    }
});

Here, we defined a handler that returns 42 for all get operations and throws UnsupportedOperationException for everything else.

It’s invoked in exactly the same way:

(int) proxyInstance.get("hello"); // 42
proxyInstance.put("hello", "world"); // exception

5. Timing Dynamic Proxy Example

Let’s examine one potential real-world scenario for dynamic proxies.

Suppose we want to record how long our functions take to execute. To this extent, we first define a handler capable of wrapping the “real” object, tracking timing information and reflective invocation:

public class TimingDynamicInvocationHandler implements InvocationHandler {

    private static Logger LOGGER = LoggerFactory.getLogger(
      TimingDynamicInvocationHandler.class);
    
    private final Map<String, Method> methods = new HashMap<>();

    private Object target;

    public TimingDynamicInvocationHandler(Object target) {
        this.target = target;

        for(Method method: target.getClass().getDeclaredMethods()) {
            this.methods.put(method.getName(), method);
        }
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) 
      throws Throwable {
        long start = System.nanoTime();
        Object result = methods.get(method.getName()).invoke(target, args);
        long elapsed = System.nanoTime() - start;

        LOGGER.info("Executing {} finished in {} ns", method.getName(), 
          elapsed);

        return result;
    }
}

Subsequently, this proxy can be used on various object types:

Map mapProxyInstance = (Map) Proxy.newProxyInstance(
  DynamicProxyTest.class.getClassLoader(), new Class[] { Map.class }, 
  new TimingDynamicInvocationHandler(new HashMap<>()));

mapProxyInstance.put("hello", "world");

CharSequence csProxyInstance = (CharSequence) Proxy.newProxyInstance(
  DynamicProxyTest.class.getClassLoader(), 
  new Class[] { CharSequence.class }, 
  new TimingDynamicInvocationHandler("Hello World"));

csProxyInstance.length();

Here, we have proxied a map and a char sequence (String).

Invocations of the proxy methods will delegate to the wrapped object as well as produce logging statements:

Executing put finished in 19153 ns 
Executing get finished in 8891 ns 
Executing charAt finished in 11152 ns 
Executing length finished in 10087 ns

6. Conclusion

In this quick tutorial, we have examined Java’s dynamic proxies as well as some of its possible usages.

As always, the code in the examples can be found over on GitHub.

How to Copy an Array in Java


1. Overview

In this quick article, we’ll discuss different array copying methods in Java. Array copy may seem like a trivial task, but it may cause unexpected results and program behaviors if not done carefully.

2. The System Class

Let’s start with the core Java library – System.arraycopy(); this copies an array from a source array to a destination array, starting the copy action from the source position to the target position, up to the specified length.

The number of elements copied to the target array equals the specified length. It provides an easy way to copy a sub-sequence of an array to another.

If any of the array arguments is null, it throws a NullPointerException, and if any of the integer arguments is negative or out of range, it throws an IndexOutOfBoundsException.

Let’s have a look at an example to copy a full array to another using the java.lang.System class:

int[] array = {23, 43, 55};
int[] copiedArray = new int[3];

System.arraycopy(array, 0, copiedArray, 0, 3);

The arguments this method takes are: a source array, the starting position to copy from in the source array, a destination array, the starting position in the destination array, and the number of elements to be copied.

Let’s have a look at another example that shows copying a sub-sequence from a source array to a destination:

int[] array = {23, 43, 55, 12, 65, 88, 92};
int[] copiedArray = new int[3];

System.arraycopy(array, 2, copiedArray, 0, 3);
assertTrue(3 == copiedArray.length);
assertTrue(copiedArray[0] == array[2]);
assertTrue(copiedArray[1] == array[3]);
assertTrue(copiedArray[2] == array[4]);

3. The Arrays Class

The Arrays class also offers multiple overloaded methods to copy an array to another. Internally, it uses the same approach provided by the System class that we have seen earlier. It mainly provides two methods, copyOf(…) and copyOfRange(…).

Let’s have a look at copyOf first:

int[] array = {23, 43, 55, 12};
int newLength = array.length;

int[] copiedArray = Arrays.copyOf(array, newLength);

It’s important to note that the Arrays class uses Math.min(…) of the source array length and the value of the new length parameter to determine how many elements to copy; the resulting array always has the length specified by the newLength parameter, padded with default values if it’s longer than the source.
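
For example, if we request a copy longer than the source, the extra slots are filled with the element type’s default value:

int[] array = {23, 43, 55, 12};

int[] largerCopy = Arrays.copyOf(array, 6);
// largerCopy is {23, 43, 55, 12, 0, 0} – padded with zeros
assertTrue(6 == largerCopy.length);
assertTrue(0 == largerCopy[4]);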

Arrays.copyOfRange() takes 2 parameters, ‘from’ and ‘to’ in addition to the source array parameter. The resulting array includes the ‘from’ index but the ‘to’ index is excluded. Let’s see an example:

int[] array = {23, 43, 55, 12, 65, 88, 92};

int[] copiedArray = Arrays.copyOfRange(array, 1, 4);
assertTrue(3 == copiedArray.length);
assertTrue(copiedArray[0] == array[1]);
assertTrue(copiedArray[1] == array[2]);
assertTrue(copiedArray[2] == array[3]);

Both of these methods do a shallow copy of objects if applied on an array of non-primitive object types. Let’s see an example test case:

Employee[] copiedArray = Arrays.copyOf(employees, employees.length);

employees[0].setName(employees[0].getName() + "_Changed");
 
assertArrayEquals(copiedArray, employees);

Because the result is a shallow copy – a change in the employee name of an element of the original array caused the change in the copy array.

And so – if we want to do a deep copy of non-primitive types – we can go for the other options described in the upcoming sections.

4. Array Copy with Object.clone()

Arrays inherit the clone() method from the Object class.

Let’s first copy an array of primitive types using clone method:

int[] array = {23, 43, 55, 12};
 
int[] copiedArray = array.clone();

And a proof that it works:

assertArrayEquals(copiedArray, array);
array[0] = 9;

assertTrue(copiedArray[0] != array[0]);

The above example shows that both arrays have the same content after cloning, but they hold different references, so any change in one of them won’t affect the other one.

On the other hand, if we clone an array of non-primitive types using the same method, then the results will be different.

It creates a shallow copy of the non-primitive type array elements, even if the enclosed object’s class implements the Cloneable interface and overrides the clone() method from the Object class.

Let’s have a look at an example:

public class Address implements Cloneable {
    // ...

    @Override
    protected Object clone() throws CloneNotSupportedException {
         super.clone();
         Address address = new Address();
         address.setCity(this.city);
        
         return address;
    }
}

We can test our implementation by creating a new array of addresses and invoking our clone() method:

Address[] addresses = createAddressArray();
Address[] copiedArray = addresses.clone();
addresses[0].setCity(addresses[0].getCity() + "_Changed");
assertArrayEquals(copiedArray, addresses);

This example shows that a change in either the original or the copied array causes a change in the other one, even when the enclosed objects are Cloneable.

5. Using the Stream API

It turns out, we can use the Stream API for copying arrays too. Let’s have a look at an example:

String[] strArray = {"orange", "red", "green"};
String[] copiedArray = Arrays.stream(strArray).toArray(String[]::new);

For the non-primitive types, it will also do a shallow copy of objects. To learn more about Java 8 Streams, you can start here.
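
The same approach works for primitive arrays through the primitive stream specializations; for example, reusing the int[] array from the earlier examples:

int[] copiedArray = Arrays.stream(array).toArray();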

6. External Libraries

Apache Commons Lang 3 offers a utility class called SerializationUtils that provides a clone(…) method. It is very useful if we need to do a deep copy of an array of non-primitive types. It can be downloaded from here, and its Maven dependency is:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>

Let’s have a look at a test case:

public class Employee implements Serializable {
    // fields
    // standard getters and setters
}

Employee[] employees = createEmployeesArray();
Employee[] copiedArray = SerializationUtils.clone(employees);
employees[0].setName(employees[0].getName() + "_Changed");
assertFalse(
  copiedArray[0].getName().equals(employees[0].getName()));

This class requires that each object implement the Serializable interface. In terms of performance, it is slower than clone methods written manually for each of the objects in our object graph.

7. Conclusion

In this tutorial, we had a look at the various options to copy an array in Java.

The method to use is mainly dependent upon the exact scenario. As long as we’re using a primitive type array, we can use any of the methods offered by the System and Arrays classes. There shouldn’t be any difference in performance.

For non-primitive types, if we need to do a deep copy of an array we can either use the SerializationUtils or add clone methods to our classes explicitly.

And as always, the examples shown in this article are available over on GitHub.
