Channel: Baeldung

Getting the Size of an Iterable in Java


1. Overview

In this quick tutorial, we’ll learn about the various ways in which we can get the size of an Iterable in Java.

2. Iterable and Iterator

Iterable is one of the main interfaces of the collection classes in Java.

The Collection interface extends Iterable and hence all child classes of Collection also implement Iterable.

Iterable declares a single abstract method that produces an Iterator:

public interface Iterable<T> {
    public Iterator<T> iterator();    
}

This Iterator can then be used to iterate over the elements in the Iterable.
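For instance, here's a minimal, self-contained sketch of obtaining an Iterator from a List and consuming it:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorDemo {
    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "c");

        // obtain the Iterator from the Iterable
        Iterator<String> iterator = data.iterator();

        // consume the elements one by one
        StringBuilder result = new StringBuilder();
        while (iterator.hasNext()) {
            result.append(iterator.next());
        }
        System.out.println(result); // prints "abc"
    }
}
```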

3. Iterable Size using Core Java

3.1. for-each loop

All classes that implement Iterable are eligible for the for-each loop in Java.

This allows us to loop over the elements in the Iterable while incrementing a counter to get its size:

int counter = 0;
for (Object i : data) {
    counter++;
}
return counter;

3.2. Collection.size()

In most cases, the Iterable will be an instance of Collection, such as a List or a Set.

In such cases, we can check the type of the Iterable and call the size() method on it to get the number of elements:

if (data instanceof Collection) {
    return ((Collection<?>) data).size();
}

The call to size() is usually much faster than iterating through the entire collection.

Here’s an example showing the combination of the above two solutions: 

public static int size(Iterable<?> data) {

    if (data instanceof Collection) {
        return ((Collection<?>) data).size();
    }
    int counter = 0;
    for (Object i : data) {
        counter++;
    }
    return counter;
}
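A quick usage sketch of this helper (the lambda-based Iterable below is hypothetical, just to exercise the counting branch):

```java
import java.util.Arrays;
import java.util.Collection;

public class SizeDemo {

    // the combined helper from above
    public static int size(Iterable<?> data) {
        if (data instanceof Collection) {
            return ((Collection<?>) data).size();
        }
        int counter = 0;
        for (Object i : data) {
            counter++;
        }
        return counter;
    }

    public static void main(String[] args) {
        // a Collection: takes the fast size() path
        System.out.println(size(Arrays.asList("a", "b", "c")));

        // a non-Collection Iterable: falls back to counting
        Iterable<Integer> custom = () -> Arrays.asList(1, 2).iterator();
        System.out.println(size(custom));
    }
}
```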

3.3. Stream.count()

If we’re using Java 8, we can create a Stream from the Iterable.

The stream object can then be used to get the count of elements in the Iterable.

return StreamSupport.stream(data.spliterator(), false).count();
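Wrapped into a self-contained sketch (note that count() is a terminal operation and, like the loop-based approach, may traverse the whole source):

```java
import java.util.Arrays;
import java.util.stream.StreamSupport;

public class StreamCountDemo {

    // create a sequential Stream from the Iterable's Spliterator and count it
    public static long size(Iterable<?> data) {
        return StreamSupport.stream(data.spliterator(), false).count();
    }

    public static void main(String[] args) {
        System.out.println(size(Arrays.asList("a", "b", "c"))); // prints 3
    }
}
```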

4. Iterable Size using Third-Party Libraries

4.1. IterableUtils#size()

The Apache Commons Collections library has a nice IterableUtils class that provides static utility methods for Iterable instances.

Before we start, we need to import the latest dependencies from Maven Central:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.1</version>
</dependency>

We can invoke the size() method of IterableUtils on an Iterable object to get its size:

return IterableUtils.size(data);

4.2. Iterables#size()

Similarly, the Google Guava library also provides a collection of static utility methods in its Iterables class to operate on Iterable instances.

Before we start, we need to import the latest dependencies from Maven Central:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>25.0</version>
</dependency>

Invoking the static size() method on the Iterables class gives us the number of elements:

return Iterables.size(data);

Under the hood, both IterableUtils and Iterables use the combination of approaches described in 3.1 and 3.2 to determine the size.

5. Conclusion

In this article, we looked at different ways of getting the size of an Iterable in Java.

The source code for this article and the relevant test cases are available over on GitHub.


Optional orElse Optional


1. Introduction

In some cases, we might want to fall back to another Optional instance if a given one is empty.

In this tutorial, we’ll briefly mention how we can do that – which is harder than it looks.

For an introduction to the Java Optional class, have a look at our previous article.

2. Using Plain Java

In Java 8, there’s no direct way to return a different Optional if the first one is empty.

Therefore, we can implement our own custom method:

public static <T> Optional<T> or(Optional<T> optional, Optional<T> fallback) {
    return optional.isPresent() ? optional : fallback;
}

And, in practice:

@Test
public void givenOptional_whenValue_thenOptionalGeneralMethod() {
    String name = "Filan Fisteku";
    String missingOptional = "Name not provided";
    Optional<String> optionalString = Optional.ofNullable(name);
    Optional<String> fallbackOptionalString = Optional.ofNullable(missingOptional);
 
    assertEquals(
      optionalString, 
      Optionals.or(optionalString, fallbackOptionalString));
}
    
@Test
public void givenEmptyOptional_whenValue_thenOptionalGeneralMethod() {
    Optional<String> optionalString = Optional.empty();
    Optional<String> fallbackOptionalString = Optional.ofNullable("Name not provided");
 
    assertEquals(
      fallbackOptionalString, 
      Optionals.or(optionalString, fallbackOptionalString));
}

On the other hand, Java 9 does add an or() method that we can use to get an Optional, or another value, if that Optional isn’t present.

Let’s see this in practice with a quick example:

public static Optional<String> getName(Optional<String> name) {
    return name.or(() -> getCustomMessage());
}

We’ve used an auxiliary method to help us with our example:

private static Optional<String> getCustomMessage() {
    return Optional.of("Name not provided");
}

We can test it and further understand how it’s working. The following test case is a demonstration of the case when Optional has a value:

@Test
public void givenOptional_whenValue_thenOptional() {
    String name = "Filan Fisteku";
    Optional<String> optionalString = Optional.ofNullable(name);
    assertEquals(optionalString, Optionals.getName(optionalString));
}
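For completeness, the empty case can be sketched in a self-contained way as well (requires Java 9+; the inlined fallback mirrors the getCustomMessage() helper above):

```java
import java.util.Optional;

public class OptionalOrDemo {

    static Optional<String> getName(Optional<String> name) {
        // Java 9+: the Supplier is only invoked when 'name' is empty
        return name.or(() -> Optional.of("Name not provided"));
    }

    public static void main(String[] args) {
        System.out.println(getName(Optional.empty()).get());
        System.out.println(getName(Optional.of("Filan Fisteku")).get());
    }
}
```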

3. Using Guava

Another way to do this is by using the or() method of Guava’s Optional class. First, we need to add Guava to our project (the latest version can be found here):

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>24.1.1-jre</version>
</dependency>

Now, we can continue with the same example that we had earlier:

public static com.google.common.base.Optional<String> getOptionalGuavaName(com.google.common.base.Optional<String> name) {
    return name.or(getCustomMessageGuava());
}
private static com.google.common.base.Optional<String> getCustomMessageGuava() {
    return com.google.common.base.Optional.of("Name not provided");
}

As we can see, it’s very similar to the one displayed above. However, there’s a slight difference: Guava’s or() takes the fallback Optional directly, while the or() method of the JDK 9 Optional takes a Supplier.

We can now test it, similarly as the example above:

@Test
public void givenGuavaOptional_whenInvoke_thenOptional() {
    String name = "Filan Fisteku";
    Optional<String> stringOptional = Optional.of(name);
 
    assertEquals(name, Optionals.getOptionalGuavaName(stringOptional).get());
}
@Test
public void givenGuavaOptional_whenNull_thenDefaultText() {
    assertEquals(
      com.google.common.base.Optional.of("Name not provided"), 
      Optionals.getOptionalGuavaName(com.google.common.base.Optional.fromNullable(null)));
}

4. Conclusion

This was a quick article illustrating how to achieve Optional orElse Optional functionality.

The code for all the examples explained here, and much more can be found over on GitHub.

Using the Spring RestTemplate Interceptor


1. Overview

In this tutorial, we’re going to learn how to implement a Spring RestTemplate Interceptor.

We’ll go through an example in which we’ll create an interceptor that adds a custom header to the response.

2. Interceptor Usage Scenarios

Besides header modification, some of the other use-cases where a RestTemplate interceptor is useful are:

  • Request and response logging
  • Retrying the requests with a configurable backoff strategy
  • Request denial based on certain request parameters
  • Altering the request URL address

3. Creating the Interceptor

In most programming frameworks, interceptors are an essential tool that enables programmers to control an execution by intercepting it. The Spring Framework also supports a variety of interceptors for different purposes.

Spring RestTemplate allows us to add interceptors that implement the ClientHttpRequestInterceptor interface. The intercept(HttpRequest, byte[], ClientHttpRequestExecution) method of this interface intercepts the given request and returns the response, giving us access to the request, body and execution objects.

We’ll be using the ClientHttpRequestExecution argument to do the actual execution, and pass on the request to the subsequent process chain.

As a first step, let’s create an interceptor class that implements the ClientHttpRequestInterceptor interface:

public class RestTemplateHeaderModifierInterceptor
  implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(
      HttpRequest request, 
      byte[] body, 
      ClientHttpRequestExecution execution) throws IOException {
 
        ClientHttpResponse response = execution.execute(request, body);
        response.getHeaders().add("Foo", "bar");
        return response;
    }
}

Our interceptor will be invoked for every request we send, and it will add a custom header Foo to every response, once the execution completes and returns.

Since the intercept() method includes the request and body as arguments, it’s also possible to make any modification to the request, or even deny its execution based on certain conditions.

4. Setting up the RestTemplate

Now that we have created our interceptor, let’s create the RestTemplate bean and add our interceptor to it:

@Configuration
public class RestClientConfig {

    @Bean
    public RestTemplate restTemplate() {
        RestTemplate restTemplate = new RestTemplate();

        List<ClientHttpRequestInterceptor> interceptors
          = restTemplate.getInterceptors();
        if (CollectionUtils.isEmpty(interceptors)) {
            interceptors = new ArrayList<>();
        }
        interceptors.add(new RestTemplateHeaderModifierInterceptor());
        restTemplate.setInterceptors(interceptors);
        return restTemplate;
    }
}

In some cases, there might be interceptors already added to the RestTemplate object. So to make sure everything works as expected, our code will initialize the interceptor list only if it’s empty.

As our code shows, we are using the default constructor to create the RestTemplate object, but there are some scenarios where we need to read the request/response stream twice.

For instance, if we want our interceptor to function as a request/response logger, then we need to read it twice – the first time by the interceptor and the second time by the client.

The default implementation allows us to read the response stream only once. To cater for such specific scenarios, Spring provides a special class called BufferingClientHttpRequestFactory. As the name suggests, this class buffers the request/response in JVM memory for multiple use.

Here’s how the RestTemplate object is initialized using BufferingClientHttpRequestFactory to enable request/response stream caching:

RestTemplate restTemplate 
  = new RestTemplate(
    new BufferingClientHttpRequestFactory(
      new SimpleClientHttpRequestFactory()
    )
  );

5. Testing Our Example

Here’s the JUnit test case for testing our RestTemplate interceptor:

public class RestTemplateIntegrationTest {
    
    @Autowired
    RestTemplate restTemplate;

    @Test
    public void givenRestTemplate_whenRequested_thenLogAndModifyResponse() {
        LoginForm loginForm = new LoginForm("username", "password");
        HttpEntity<LoginForm> requestEntity
          = new HttpEntity<LoginForm>(loginForm);
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        
        ResponseEntity<String> responseEntity
          = restTemplate.postForEntity(
            "http://httpbin.org/post", requestEntity, String.class
          );
        
        assertThat(
          responseEntity.getStatusCode(),
          is(equalTo(HttpStatus.OK))
        );
        assertThat(
          responseEntity.getHeaders().get("Foo").get(0),
          is(equalTo("bar"))
        );
    }
}

Here, we’ve used the freely hosted HTTP request and response service http://httpbin.org to post our data. This testing service will return our request body along with some metadata.

6. Conclusion

This tutorial showed how to set up an interceptor and add it to the RestTemplate object. Such interceptors can also be used for filtering, monitoring and controlling the outgoing requests.

A common use-case for a RestTemplate interceptor is header modification – which we’ve illustrated in detail in this article.

And, as always, you can find the example code over on GitHub. This is a Maven-based project, so it should be easy to import and run as is.

Spring RestTemplate Error Handling


1. Overview

In this short tutorial, we’ll discuss how to implement and inject the ResponseErrorHandler interface in a RestTemplate instance – to gracefully handle HTTP errors returned by remote APIs. 

2. Default Error Handling

By default, the RestTemplate will throw one of these exceptions in case of an HTTP error:

  1. HttpClientErrorException – in case of HTTP status 4xx
  2. HttpServerErrorException – in case of HTTP status 5xx
  3. UnknownHttpStatusCodeException – in case of an unknown HTTP status

All these exceptions are extensions of RestClientResponseException.

Obviously, the simplest strategy for adding custom error handling is to wrap the call in a try/catch block. Then, we process the caught exception as we see fit.

However, this simple strategy doesn’t scale well as the number of remote APIs or calls increases. It’d be more efficient if we could implement a reusable error handler for all of our remote calls.

3. Implementing a ResponseErrorHandler

A class that implements ResponseErrorHandler will read the HTTP status from the response and either:

  1. Throw an exception that is meaningful to our application
  2. Simply ignore the HTTP status and let the response flow continue without interruption

We need to inject the ResponseErrorHandler implementation into the RestTemplate instance.

Hence, we use the RestTemplateBuilder to build the template and replace the DefaultResponseErrorHandler in the response flow.

So let’s first implement our RestTemplateResponseErrorHandler:

@Component
public class RestTemplateResponseErrorHandler 
  implements ResponseErrorHandler {

    @Override
    public boolean hasError(ClientHttpResponse httpResponse) 
      throws IOException {

        return (
          httpResponse.getStatusCode().series() == CLIENT_ERROR 
          || httpResponse.getStatusCode().series() == SERVER_ERROR);
    }

    @Override
    public void handleError(ClientHttpResponse httpResponse) 
      throws IOException {

        if (httpResponse.getStatusCode()
          .series() == HttpStatus.Series.SERVER_ERROR) {
            // handle SERVER_ERROR
        } else if (httpResponse.getStatusCode()
          .series() == HttpStatus.Series.CLIENT_ERROR) {
            // handle CLIENT_ERROR
            if (httpResponse.getStatusCode() == HttpStatus.NOT_FOUND) {
                throw new NotFoundException();
            }
        }
    }
}

Next, we build the RestTemplate instance using the RestTemplateBuilder to introduce our RestTemplateResponseErrorHandler:

@Service
public class BarConsumerService {

    private RestTemplate restTemplate;

    @Autowired
    public BarConsumerService(RestTemplateBuilder restTemplateBuilder) {
        this.restTemplate = restTemplateBuilder
          .errorHandler(new RestTemplateResponseErrorHandler())
          .build();
    }

    public Bar fetchBarById(String barId) {
        return restTemplate.getForObject("/bars/4242", Bar.class);
    }

}

4. Testing our Implementation

Finally, let’s test this handler by mocking a server and returning a NOT_FOUND status:

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = { NotFoundException.class, Bar.class })
@RestClientTest
public class RestTemplateResponseErrorHandlerIntegrationTest {

    @Autowired 
    private MockRestServiceServer server;
 
    @Autowired 
    private RestTemplateBuilder builder;

    @Test(expected = NotFoundException.class)
    public void givenRemoteApiCall_when404Error_thenThrowNotFound() {
        Assert.assertNotNull(this.builder);
        Assert.assertNotNull(this.server);

        RestTemplate restTemplate = this.builder
          .errorHandler(new RestTemplateResponseErrorHandler())
          .build();

        this.server
          .expect(ExpectedCount.once(), requestTo("/bars/4242"))
          .andExpect(method(HttpMethod.GET))
          .andRespond(withStatus(HttpStatus.NOT_FOUND));

        Bar response = restTemplate 
          .getForObject("/bars/4242", Bar.class);
        this.server.verify();
    }
}

5. Conclusion

This article presented a solution to implement and test a custom error handler for a RestTemplate that converts HTTP errors into meaningful exceptions.

As always, the code presented in this article is available over on Github.

Pessimistic Locking in JPA


1. Overview

There are plenty of situations when we want to retrieve data from a database. Sometimes we want to lock it for ourselves for further processing so nobody else can interrupt our actions.

We can think of two concurrency control mechanisms that allow us to do that: setting the proper transaction isolation level or setting a lock on the data we need at the moment.

Transaction isolation is defined for database connections. We can configure it to retain varying degrees of locking on the data.

However, the isolation level is set once the connection is created and it affects every statement within that connection. Luckily, we can use pessimistic locking which uses database mechanisms for reserving more granular exclusive access to the data.

We can use a pessimistic lock to ensure that no other transactions can modify or delete reserved data.

There are two types of locks we can retain: an exclusive lock and a shared lock. We can read but not write data when someone else holds a shared lock. To modify or delete the reserved data, we need an exclusive lock.

We can acquire exclusive locks using ‘SELECT … FOR UPDATE‘ statements.

2. Lock Modes

JPA specification defines three pessimistic lock modes which we’re going to discuss:

  • PESSIMISTIC_READ – allows us to obtain a shared lock and prevent the data from being updated or deleted
  • PESSIMISTIC_WRITE – allows us to obtain an exclusive lock and prevent the data from being read, updated or deleted
  • PESSIMISTIC_FORCE_INCREMENT – works like PESSIMISTIC_WRITE and additionally increments a version attribute of a versioned entity

All of them are constants of the LockModeType enum and allow transactions to obtain a database lock. They are all retained until the transaction commits or rolls back.

It’s worth noting that we can obtain only one lock at a time. If that’s impossible, a PersistenceException is thrown.

2.1. PESSIMISTIC_READ

Whenever we want to just read data without encountering dirty reads, we can use PESSIMISTIC_READ (a shared lock). We won’t be able to make any updates or deletes though.

It sometimes happens that the database we use doesn’t support the PESSIMISTIC_READ lock, so it’s possible that we obtain the PESSIMISTIC_WRITE lock instead.

2.2. PESSIMISTIC_WRITE

Any transaction that needs to acquire a lock on data and make changes to it should obtain the PESSIMISTIC_WRITE lock. According to the JPA specification, holding PESSIMISTIC_WRITE lock will prevent other transactions from reading, updating or deleting the data.

Please note that some database systems implement multi-version concurrency control, which allows readers to fetch data that is already locked.

2.3. PESSIMISTIC_FORCE_INCREMENT

This lock works similarly to PESSIMISTIC_WRITE, but it was introduced to cooperate with versioned entities – entities which have an attribute annotated with @Version.

Any update of a versioned entity can be preceded by obtaining the PESSIMISTIC_FORCE_INCREMENT lock. Acquiring that lock results in updating the version column.

It’s up to the persistence provider to determine whether it supports PESSIMISTIC_FORCE_INCREMENT for unversioned entities. If it doesn’t, it throws a PersistenceException.

2.4. Exceptions

It’s good to know which exceptions may occur while working with pessimistic locking. The JPA specification provides different types of exceptions:

  • PessimisticLockException – indicates that obtaining a lock or converting a shared lock to an exclusive one failed, resulting in a transaction-level rollback
  • LockTimeoutException – indicates that obtaining a lock or converting a shared lock to an exclusive one timed out, resulting in a statement-level rollback
  • PersistenceException – indicates that a general persistence problem occurred. PersistenceException and its subtypes, except NoResultException, NonUniqueResultException, LockTimeoutException, and QueryTimeoutException, mark the active transaction for rollback.

3. Using Pessimistic Locks

There are a few possible ways to configure a pessimistic lock on a single record or group of records. Let’s see how to do it in JPA.

3.1. Find

It’s probably the most straightforward way. It’s enough to pass a LockModeType object as a parameter to the find method:

entityManager.find(Student.class, studentId, LockModeType.PESSIMISTIC_READ);

3.2. Query

Additionally, we can use a Query object as well and call the setLockMode setter with a lock mode as a parameter:

Query query = entityManager.createQuery("from Student where studentId = :studentId");
query.setParameter("studentId", studentId);
query.setLockMode(LockModeType.PESSIMISTIC_WRITE);
query.getResultList();

3.3. Explicit Locking

It’s also possible to manually lock the results retrieved by the find method:

Student resultStudent = entityManager.find(Student.class, studentId);
entityManager.lock(resultStudent, LockModeType.PESSIMISTIC_WRITE);

3.4. Refresh

If we want to overwrite the state of the entity by the refresh method, we can also set a lock:

Student resultStudent = entityManager.find(Student.class, studentId);
entityManager.refresh(resultStudent, LockModeType.PESSIMISTIC_FORCE_INCREMENT);

3.5. NamedQuery

The @NamedQuery annotation allows us to set a lock mode as well:

@NamedQuery(name="lockStudent",
  query="SELECT s FROM Student s WHERE s.id LIKE :studentId",
  lockMode = PESSIMISTIC_READ)

4. Lock Scope

The lock scope parameter defines how to deal with locking relationships of the locked entity. It’s possible to obtain a lock on just a single entity defined in a query or to additionally lock its relationships.

To configure the scope we can use PessimisticLockScope enum. It contains two values: NORMAL and EXTENDED.

We can set the scope by passing the ‘javax.persistence.lock.scope‘ property with a PessimisticLockScope value as an argument to the proper method of EntityManager, Query, TypedQuery or NamedQuery:

Map<String, Object> properties = new HashMap<>();
properties.put("javax.persistence.lock.scope", PessimisticLockScope.EXTENDED);
    
entityManager.find(
  Student.class, 1L, LockModeType.PESSIMISTIC_WRITE, properties);

4.1. PessimisticLockScope.NORMAL

We should know that the PessimisticLockScope.NORMAL is the default scope. With this locking scope, we lock the entity itself. When used with joined inheritance it also locks the ancestors.

Let’s look at the sample code with two entities:

@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Person {

    @Id
    private Long id;
    private String name;
    private String lastName;

    // getters and setters
}

@Entity
public class Employee extends Person {

    private BigDecimal salary;

    // getters and setters
}

When we want to obtain a lock on the Employee, we can observe the SQL query which spans over those two entities:

SELECT t0.ID, t0.DTYPE, t0.LASTNAME, t0.NAME, t1.ID, t1.SALARY 
FROM PERSON t0, EMPLOYEE t1 
WHERE ((t0.ID = ?) AND ((t1.ID = t0.ID) AND (t0.DTYPE = ?))) FOR UPDATE

4.2. PessimisticLockScope.EXTENDED

The EXTENDED scope covers the same functionality as NORMAL. In addition, it’s able to lock related entities in a join table.

Simply put, it works with entities annotated with @ElementCollection or @OneToOne, @OneToMany etc. with @JoinTable.

Let’s look at the sample code with the @ElementCollection annotation:

@Entity
public class Customer {

    @Id
    private Long customerId;
    private String name;
    private String lastName;
    @ElementCollection
    @CollectionTable(name = "customer_address")
    private List<Address> addressList;

    // getters and setters
}

@Embeddable
public class Address {

    private String country;
    private String city;

    // getters and setters
}

Let’s analyze some queries when searching for the Customer entity:

SELECT CUSTOMERID, LASTNAME, NAME 
FROM CUSTOMER WHERE (CUSTOMERID = ?) FOR UPDATE

SELECT CITY, COUNTRY, Customer_CUSTOMERID 
FROM customer_address 
WHERE (Customer_CUSTOMERID = ?) FOR UPDATE

We can see that there are two ‘FOR UPDATE‘ queries which lock a row in the customer table as well as a row in the join table.

Another interesting fact we should be aware of is that not all persistence providers support lock scopes.

5. Setting Lock Timeout

Besides setting lock scopes, we can adjust another lock parameter – the timeout. The timeout value is the number of milliseconds we want to wait to obtain a lock before a LockTimeoutException occurs.

We can change the timeout value similarly to lock scopes, by using the property ‘javax.persistence.lock.timeout’ with the proper number of milliseconds.

It’s also possible to specify ‘no wait’ locking by changing the timeout value to zero. However, we should keep in mind that there are database drivers that don’t support setting a timeout value this way:

Map<String, Object> properties = new HashMap<>(); 
properties.put("javax.persistence.lock.timeout", 1000L);

entityManager.find(
  Student.class, 1L, LockModeType.PESSIMISTIC_READ, properties);

6. Conclusion

When setting the proper isolation level is not enough to cope with concurrent transactions, JPA gives us pessimistic locking. It enables us to isolate and orchestrate different transactions so they don’t access the same resource at the same time.

To achieve that we can choose between discussed types of locks and consequently modify such parameters as their scopes or timeouts.

On the other hand, we should remember that understanding database locks is as important as understanding the mechanisms of the underlying database systems. It’s also important to keep in mind that the behavior of pessimistic locks depends on the persistence provider we work with.

Lastly, the source code of this tutorial is available over on GitHub for hibernate and for EclipseLink.

Working With Arrays in Thymeleaf


1. Overview

In this quick tutorial, we’re going to see how we can use arrays in Thymeleaf. For easy setup, we’re going to use the Spring Initializr to bootstrap our application.

The basics of Spring MVC and Thymeleaf can be found here.

2. Thymeleaf Dependency

In our pom.xml file, the only dependencies we need to add are Spring MVC and Thymeleaf:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

3. The Controller

For simplicity, let’s use a controller with only one method which handles GET requests.

This responds by passing an array to the model object which will make it accessible to the view:

@Controller
public class ThymeleafArrayController {
 
    @GetMapping("/arrays")
    public String arrayController(Model model) {
        String[] continents = {
          "Africa", "Antarctica", "Asia", "Australia", 
          "Europe", "North America", "South America"
        };
        
        model.addAttribute("continents", continents);

        return "continents";
    }
}

4. The View

In the view page, we’re going to access the continents array by the name we passed from our controller above.

4.1. Properties and Indexes

One of the first properties we’re going to inspect is the length of the array. This is how we can check it:

<p>...<span th:text="${continents.length}"></span>...</p>

Looking at the snippet of code above, which is from the view page, notice the use of the th:text attribute. We use it to print the value of the expression inside the curly braces, in this case, the length of the array.

Consequently, we can access the value of each element of the array continents by its index, just as we would in normal Java code:

<ol>
    <li th:text="${continents[2]}"></li>
    <li th:text="${continents[0]}"></li>
    <li th:text="${continents[4]}"></li>
    <li th:text="${continents[5]}"></li>
    <li th:text="${continents[6]}"></li>
    <li th:text="${continents[3]}"></li>
    <li th:text="${continents[1]}"></li>
</ol>

As we’ve seen in the above code fragment, each element is accessible through its index. We can go here to learn more about expressions in Thymeleaf.

4.2. Iteration

Similarly, we can iterate over the elements of the array sequentially.

In Thymeleaf, here’s how we can achieve that:

<ul>
    <li th:each="continent : ${continents}" th:text="${continent}"></li>
</ul>

When using the th:each attribute to iterate over the elements of an array, we’re not restricted to using list tags only. We can use any HTML tag capable of displaying text on the page. For example:

<h4 th:each="continent : ${continents}" th:text="${continent}"></h4>

In the above code snippet, each element is going to be displayed on its own separate <h4></h4> tag.

4.3. Utility Functions

Finally, we’re going to employ the use of utility class functions to examine some other properties of the array.

Let’s take a look at this:

<p>There are <span th:text="${#arrays.length(continents)}"></span> continents.</p>

<p>Europe is a continent: <span th:text="${#arrays.contains(continents, 'Europe')}"></span>.</p>

<p>Array of continents is empty <span th:text="${#arrays.isEmpty(continents)}"></span>.</p>

We query the length of the array first, and then check whether Europe is an element of the array continents.

Lastly, we check whether the array continents is empty or not.

5. Conclusion

In this article, we’ve learned how to work with an array in Thymeleaf by checking its length and accessing its elements using an index. We have also learned how to iterate over its elements within Thymeleaf.

Lastly, we have seen the use of utility functions to inspect other properties of an array.

And, as always, the complete source code of this article can be found over on GitHub.

Context Hierarchy with the Spring Boot Fluent Builder API


1. Overview

It’s possible to create separate contexts and organize them in a hierarchy in Spring Boot.

A context hierarchy can be defined in different ways in a Spring Boot application. In this article, we’ll look at how we can create multiple contexts using the fluent builder API.

As we won’t go into details on how to set up a Spring Boot application, you might want to check out this article.

2. Application Context Hierarchy

We can have multiple application contexts which share a parent-child relationship.

A context hierarchy allows multiple child contexts to share beans which reside in the parent context. Each child context can override configuration inherited from the parent context.

Furthermore, we can use contexts to prevent beans registered in one context from being accessible in another. This facilitates the creation of loosely coupled modules.

A few points worth noting: a context can have only one parent, while a parent context can have multiple child contexts. Also, a child context can access beans in the parent context, but not vice-versa.

3. Using SpringApplicationBuilder API

The SpringApplicationBuilder class provides a fluent API to create a parent-child relationship between contexts using the parent(), child() and sibling() methods.

To exemplify the context hierarchy, we’ll set up a non-web parent application context with 2 child web contexts.

To demonstrate this, we’ll start two instances of embedded Tomcat each with its own web application context and both running in a single JVM.

3.1. Parent Context

To begin, let’s create a service bean along with a bean definition class which reside in the parent package. We want this bean to return a greeting which is displayed to the client of our web application:

@Service
public class HomeService implements IHomeService {

    public String getGreeting() {
        return "Welcome User";
    }
}

And the bean definition class:

@Configuration
@ComponentScan("com.baeldung.parent")
public class ServiceConfig {}

Next, we’ll create the configuration for the two child contexts.

3.2. Child Context

Since all contexts are configured using the default configuration file, we need to provide separate configurations for properties which cannot be shared among contexts, such as server ports.

To prevent conflicting configurations being picked up by the auto-configuration, we’ll also keep the classes in separate packages.

Let’s start by defining a properties file for the first child context:

server.port=8081
server.servlet.context-path=/ctx1

spring.application.admin.enabled=false
spring.application.admin.jmx-name=org.springframework.boot:type=Ctx1Rest,name=Ctx1Application

Note that we’ve configured the port and context path, as well as a JMX name so the application names don’t conflict.

Let’s now add the main configuration class for this context:

@Configuration
@ComponentScan("com.baeldung.ctx1")
@EnableAutoConfiguration
public class Ctx1Config {
    
    @Bean
    public IHomeService homeService() {
        return new GreetingService();
    }
}

This class provides a new definition for the homeService bean that will override the one from the parent.

Let’s see the definition of the GreetingService class:

@Service
public class GreetingService implements IHomeService {

    public String getGreeting() {
        return "Greetings for the day";
    }
}

Finally, we’ll add a controller for this web context that uses the homeService bean to display a message to the user:

@Controller
public class Ctx1Controller {

    @Autowired
    IHomeService homeService;

    @GetMapping("/home")
    @ResponseBody
    public String greeting() {

        return homeService.getGreeting();
    }
}

3.3. Sibling Context

For our second context, we’ll create a controller and configuration class which are very similar to the ones in the previous section.

This time, we won’t create a homeService bean – as we’ll access it from the parent context.

First, let’s add a properties file for this context:

server.port=8082
server.servlet.context-path=/ctx2

spring.application.admin.enabled=false
spring.application.admin.jmx-name=org.springframework.boot:type=WebAdmin,name=SpringWebApplication

And the configuration class for the sibling application:

@Configuration
@ComponentScan("com.baeldung.ctx2")
@EnableAutoConfiguration
@PropertySource("classpath:ctx2.properties")
public class Ctx2Config {}

Let’s also add a controller, which has HomeService as a dependency:

@RestController
public class Ctx2Controller {

    @Autowired
    IHomeService homeService;

    @GetMapping(value = "/greeting", produces = "application/json")
    public String getGreeting() {
        return homeService.getGreeting();
    }
}

In this case, our controller should get the homeService bean from the parent context.

3.4. Context Hierarchy

Now we can put everything together and define the context hierarchy using SpringApplicationBuilder:

public class App {
    public static void main(String[] args) {
        new SpringApplicationBuilder()
          .parent(ParentConfig.class).web(WebApplicationType.NONE)
          .child(WebConfig.class).web(WebApplicationType.SERVLET)
          .sibling(RestConfig.class).web(WebApplicationType.SERVLET)
          .run(args);
    }
}

Finally, on running the Spring Boot App we can access both applications at their respective ports using localhost:8081/ctx1/home and localhost:8082/ctx2/greeting.

4. Conclusion

Using the SpringApplicationBuilder API, we first created a parent-child relationship between two contexts of an application. Next, we covered how to override the parent configuration in the child context. Lastly, we added a sibling context to demonstrate how the configuration in the parent context can be shared with other child contexts.

The source code of the example is available over on GitHub.

Download a File From an URL in Java


1. Introduction

In this tutorial, we’ll see several methods that we can use to download a file.

We’ll cover examples ranging from the basic usage of Java IO to the NIO package, and some common libraries like Async Http Client and Apache Commons IO.

Finally, we’ll talk about how we can resume a download if our connection fails before the whole file is read.

2. Using Java IO

The most basic API we can use to download a file is Java IO. We can use the URL class to open a connection to the file we want to download. To effectively read the file, we’ll use the openStream() method to obtain an InputStream:

BufferedInputStream in = new BufferedInputStream(new URL(FILE_URL).openStream())

When reading from an InputStream, it’s recommended to wrap it in a BufferedInputStream to increase the performance.

The performance increase comes from buffering. When reading one byte at a time using the read() method, each method call implies a system call to the underlying file system. When the JVM invokes the read() system call, the program execution context switches from user mode to kernel mode and back.

This context switch is expensive from a performance perspective. When we read a large number of bytes, the application performance will be poor, due to a large number of context switches involved.
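We can make the effect of buffering visible by counting how many read calls actually reach the underlying stream. The sketch below wraps a file stream in a counting decorator; the file name and sizes are arbitrary:

```java
import java.io.BufferedInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferingDemo {

    // Counts how many read calls reach the underlying stream
    static class CountingInputStream extends FilterInputStream {
        int reads = 0;

        CountingInputStream(InputStream in) {
            super(in);
        }

        @Override
        public int read() throws IOException {
            reads++;
            return super.read();
        }

        @Override
        public int read(byte[] b, int off, int len) throws IOException {
            reads++;
            return super.read(b, off, len);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[8192]); // 8 KB of zeros

        // Unbuffered: every single-byte read() hits the underlying stream
        CountingInputStream raw = new CountingInputStream(Files.newInputStream(tmp));
        while (raw.read() != -1) { }
        raw.close();

        // Buffered: the wrapper fetches large chunks, so far fewer calls get through
        CountingInputStream counted = new CountingInputStream(Files.newInputStream(tmp));
        BufferedInputStream buffered = new BufferedInputStream(counted); // 8 KB default buffer
        while (buffered.read() != -1) { }
        buffered.close();

        System.out.println("unbuffered reads: " + raw.reads);     // 8193
        System.out.println("buffered reads:   " + counted.reads); // typically 2

        Files.delete(tmp);
    }
}
```

Each of those underlying reads is roughly where a system call would occur, which is why the buffered version is so much cheaper.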

For writing the bytes read from the URL to our local file, we’ll use the write() method from the FileOutputStream class:

try (BufferedInputStream in = new BufferedInputStream(new URL(FILE_URL).openStream());
  FileOutputStream fileOutputStream = new FileOutputStream(FILE_NAME)) {
    byte dataBuffer[] = new byte[1024];
    int bytesRead;
    while ((bytesRead = in.read(dataBuffer, 0, 1024)) != -1) {
        fileOutputStream.write(dataBuffer, 0, bytesRead);
    }
} catch (IOException e) {
    // handle exception
}

When using a BufferedInputStream, the read() method will read as many bytes as we set for the buffer size. In our example, we’re already doing this by reading blocks of 1024 bytes at a time, so BufferedInputStream isn’t necessary.

The example above is very verbose, but luckily, as of Java 7, we have the Files class which contains helper methods for handling IO operations. We can use the Files.copy() method to read all the bytes from an InputStream and copy them to a local file:

InputStream in = new URL(FILE_URL).openStream();
Files.copy(in, Paths.get(FILE_NAME), StandardCopyOption.REPLACE_EXISTING);

Our code works well but can be improved. Its main drawback is the fact that the bytes are buffered into memory.

Fortunately, Java offers us the NIO package that has methods to transfer bytes directly between 2 Channels without buffering.

We’ll go into details in the next section.

3. Using NIO

The Java NIO package offers the possibility to transfer bytes between 2 Channels without buffering them into the application memory.

To read the file from our URL, we’ll create a new ReadableByteChannel from the URL stream:

ReadableByteChannel readableByteChannel = Channels.newChannel(url.openStream());

The bytes read from the ReadableByteChannel will be transferred to a FileChannel corresponding to the file that will be downloaded:

FileOutputStream fileOutputStream = new FileOutputStream(FILE_NAME);
FileChannel fileChannel = fileOutputStream.getChannel();

We’ll use the transferFrom() method from the ReadableByteChannel class to download the bytes from the given URL to our FileChannel:

fileOutputStream.getChannel()
  .transferFrom(readableByteChannel, 0, Long.MAX_VALUE);

The transferTo() and transferFrom() methods are more efficient than simply reading from a stream using a buffer. Depending on the underlying operating system, the data can be transferred directly from the filesystem cache to our file without copying any bytes into the application memory.

On Linux and UNIX systems, these methods use the zero-copy technique that reduces the number of context switches between the kernel mode and user mode.
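Putting the pieces of this section together, a complete sketch looks like the following. It uses a local file: URL so it runs without a network; in practice we’d pass any HTTP URL instead:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.URL;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class NioDownload {

    // Transfers the content behind the URL straight into the target file
    static void download(URL url, String target) throws IOException {
        try (ReadableByteChannel readableByteChannel = Channels.newChannel(url.openStream());
             FileOutputStream fileOutputStream = new FileOutputStream(target);
             FileChannel fileChannel = fileOutputStream.getChannel()) {
            fileChannel.transferFrom(readableByteChannel, 0, Long.MAX_VALUE);
        }
    }

    public static void main(String[] args) throws IOException {
        // Use a local file as the "remote" source so the sketch is self-contained
        Path source = Files.createTempFile("source", ".txt");
        Files.write(source, "hello nio".getBytes());

        Path target = Files.createTempFile("target", ".txt");
        download(source.toUri().toURL(), target.toString());

        System.out.println(new String(Files.readAllBytes(target))); // hello nio
    }
}
```

The try-with-resources block also makes sure both channels and the output stream are closed, which the inline snippets above leave out for brevity.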

4. Using Libraries

We’ve seen in the examples above how we can download content from a URL just by using the Java core functionality. We also can leverage the functionality of existing libraries to ease our work, when performance tweaks aren’t needed.

For example, in a real-world scenario, we’d need our download code to be asynchronous.

We could wrap all the logic into a Callable, or we could use an existing library for this.

4.1. Async HTTP Client

AsyncHttpClient is a popular library for executing asynchronous HTTP requests using the Netty framework. We can use it to execute a GET request to the file URL and get the file content.

First, we need to create an HTTP client:

AsyncHttpClient client = Dsl.asyncHttpClient();

The downloaded content will be placed into a FileOutputStream:

FileOutputStream stream = new FileOutputStream(FILE_NAME);

Next, we create an HTTP GET request and register an AsyncCompletionHandler handler to process the downloaded content:

client.prepareGet(FILE_URL).execute(new AsyncCompletionHandler<FileOutputStream>() {

    @Override
    public State onBodyPartReceived(HttpResponseBodyPart bodyPart) 
      throws Exception {
        stream.getChannel().write(bodyPart.getBodyByteBuffer());
        return State.CONTINUE;
    }

    @Override
    public FileOutputStream onCompleted(Response response) 
      throws Exception {
        return stream;
    }
});

Notice that we’ve overridden the onBodyPartReceived() method. The default implementation accumulates the HTTP chunks received into an ArrayList. This could lead to high memory consumption, or an OutOfMemoryError when trying to download a large file.

Instead of accumulating each HttpResponseBodyPart into memory, we use a FileChannel to write the bytes to our local file directly. We’ll use the getBodyByteBuffer() method to access the body part content through a ByteBuffer.

ByteBuffers have the advantage that the memory can be allocated outside of the JVM heap (in the case of direct buffers), so it doesn’t affect our application’s heap memory.

4.2. Apache Commons IO

Another highly used library for IO operations is Apache Commons IO. We can see from the Javadoc that there’s a utility class named FileUtils that is used for general file manipulation tasks.

To download a file from a URL, we can use this one-liner:

FileUtils.copyURLToFile(
  new URL(FILE_URL), 
  new File(FILE_NAME), 
  CONNECT_TIMEOUT, 
  READ_TIMEOUT);

From a performance standpoint, this code is the same as the one we’ve exemplified in section 2.

The underlying code uses the same concepts of reading in a loop some bytes from an InputStream and writing them to an OutputStream.

One difference is the fact that here the URLConnection class is used to control the connection timeouts so that the download doesn’t block for a large amount of time:

URLConnection connection = source.openConnection();
connection.setConnectTimeout(connectionTimeout);
connection.setReadTimeout(readTimeout);

5. Resumable Download

Considering internet connections fail from time to time, it’s useful for us to be able to resume a download, instead of downloading the file again from byte zero.

Let’s rewrite the first example from earlier, to add this functionality.

The first thing we should know is that we can read the size of a file from a given URL without actually downloading it by using the HTTP HEAD method:

URL url = new URL(FILE_URL);
HttpURLConnection httpConnection = (HttpURLConnection) url.openConnection();
httpConnection.setRequestMethod("HEAD");
long remoteFileSize = httpConnection.getContentLengthLong();

Now that we have the total content size of the file, we can check whether our file is partially downloaded. If so, we’ll resume the download from the last byte recorded on disk:

long existingFileSize = outputFile.length();
if (existingFileSize < fileLength) {
    httpFileConnection.setRequestProperty(
      "Range", 
      "bytes=" + existingFileSize + "-" + fileLength
    );
}

What happens here is that we’ve configured the URLConnection to request the file bytes in a specific range. The range will start from the last downloaded byte and will end at the byte corresponding to the size of the remote file.

Another common way to use the Range header is to download a file in chunks by setting different byte ranges. For example, to download a 2 KB file, we can request the ranges 0–1023 and 1024–2047 (the byte positions in a Range header are inclusive).
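Splitting a file into such ranges is a simple calculation; here’s a minimal sketch of the arithmetic (the helper name and chunk size are ours, not from any library):

```java
import java.util.ArrayList;
import java.util.List;

public class ByteRanges {

    // Produces Range header values covering [0, totalSize) in chunks
    static List<String> ranges(long totalSize, long chunkSize) {
        List<String> result = new ArrayList<>();
        for (long start = 0; start < totalSize; start += chunkSize) {
            long end = Math.min(start + chunkSize, totalSize) - 1; // end is inclusive
            result.add("bytes=" + start + "-" + end);
        }
        return result;
    }

    public static void main(String[] args) {
        // A 2 KB file in 1 KB chunks
        System.out.println(ranges(2048, 1024)); // [bytes=0-1023, bytes=1024-2047]
    }
}
```

Each resulting value can be set as the Range request property on its own connection, and the responses appended to the output file in order.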

Another subtle difference from the code in section 2 is that the FileOutputStream is opened with the append parameter set to true:

OutputStream os = new FileOutputStream(FILE_NAME, true);

After we’ve made this change the rest of the code is identical to the one we’ve seen in section 2.

6. Conclusion

We’ve seen in this article several ways in which we can download a file from a URL in Java.

The most common implementation is the one in which we buffer the bytes when performing the read/write operations. This implementation is safe to use even for large files because we don’t load the whole file into memory.

We’ve also seen how we can implement a zero-copy download using Java NIO Channels. This is useful because it minimizes the number of context switches done when reading and writing bytes, and because, by using direct buffers, the bytes are not loaded into the application memory.

Also, because usually downloading a file is done over HTTP, we’ve shown how we can achieve this using the AsyncHttpClient library.

The source code for the article is available over on GitHub.


Introduction to Dagger 2


1. Introduction

In this tutorial, we’ll take a look at Dagger 2 – a fast and lightweight dependency injection framework.

The framework is available for both Java and Android, but the high-performance derived from compile-time injection makes it a leading solution for the latter.

2. Dependency Injection

As a bit of a reminder, Dependency Injection is a concrete application of the more generic Inversion of Control principle, in which the flow of the program is controlled by a framework rather than by the program itself.

It’s implemented through an external component which provides instances of objects (or dependencies) needed by other objects.

And different frameworks implement dependency injection in different ways. In particular, one of the most notable of these differences is whether the injection happens at run-time or at compile-time.

Run-time DI is usually based on reflection which is simpler to use but slower at run-time. An example of a run-time DI framework is Spring.

Compile-time DI, on the other hand, is based on code generation. This means that all the heavy-weight operations are performed during compilation. Compile-time DI adds complexity but generally performs faster.

Dagger 2 falls into this category.
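Conceptually, the injector that Dagger generates at compile-time is just plain factory code. A rough, hand-written equivalent might look like the following; this is our own illustration, not Dagger’s actual generated output:

```java
// Hand-written approximation of what a generated injector does;
// Dagger's real generated classes are more elaborate.
class Engine { }

class Car {
    private final Engine engine;

    Car(Engine engine) {
        this.engine = engine;
    }

    Engine getEngine() {
        return engine;
    }
}

public class GeneratedStyleInjector {

    // No reflection anywhere: plain constructor calls, wired up at compile-time
    static Car buildCar() {
        return new Car(new Engine());
    }

    public static void main(String[] args) {
        Car car = buildCar();
        System.out.println(car.getEngine() != null); // true
    }
}
```

Because the wiring is ordinary method calls resolved by the compiler, there is no run-time reflection cost, which is the source of the performance advantage described above.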

3. Maven/Gradle Configuration

In order to use Dagger in a project, we’ll need to add the dagger dependency to our pom.xml:

<dependency>
    <groupId>com.google.dagger</groupId>
    <artifactId>dagger</artifactId>
    <version>2.16</version>
</dependency>

Furthermore, we’ll also need to include the Dagger compiler used to convert our annotated classes into the code used for the injections:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.6.1</version>
    <configuration>
         <annotationProcessorPaths>
              <path>
                  <groupId>com.google.dagger</groupId>
                  <artifactId>dagger-compiler</artifactId>
                  <version>2.16</version>
              </path>
         </annotationProcessorPaths>
    </configuration>
</plugin>

With this configuration, Maven will output the generated code into target/generated-sources/annotations.

For this reason, we likely need to further configure our IDE if we want to use any of its code completion features. Some IDEs have direct support for annotation processors while others may need us to add this directory to the build path.

Alternatively, if we’re using Android with Gradle, we can include both dependencies:

compile 'com.google.dagger:dagger:2.16'
annotationProcessor 'com.google.dagger:dagger-compiler:2.16'

Now that we have Dagger in our project, let’s create a sample application to see how it works.

4. Implementation

For our example, we’ll try to build a car by injecting its components.

Now, Dagger uses the standard JSR-330 annotations in many places, one being @Inject.

We can add the annotations to fields or the constructor. But, since Dagger doesn’t support injection on private fields, we’ll go for constructor injection to preserve encapsulation:

public class Car {

    private Engine engine;
    private Brand brand;

    @Inject
    public Car(Engine engine, Brand brand) {
        this.engine = engine;
        this.brand = brand;
    }

    // getters and setters

}

Next, we’ll implement the code to perform the injection. More specifically, we’ll create:

  • a module, which is a class that provides or builds the objects’ dependencies, and
  • a component, which is an interface used to generate the injector

Complex projects may contain multiple modules and components but since we’re dealing with a very basic program, one of each is enough.

Let’s see how to implement them.

4.1. Module

To create a module, we need to annotate the class with the @Module annotation. This annotation indicates that the class can make dependencies available to the container:

@Module
public class VehiclesModule {
}

Then, we need to add the @Provides annotation on methods that construct our dependencies:

@Module
public class VehiclesModule {
    @Provides
    public Engine provideEngine() {
        return new Engine();
    }

    @Provides
    @Singleton
    public Brand provideBrand() { 
        return new Brand("Baeldung"); 
    }
}

Also, note that we can configure the scope of a given dependency. In this case, we give the singleton scope to our Brand instance so all the car instances share the same brand object.

4.2. Component

Moving on, we’re going to create our component interface. This is the interface whose generated implementation will create Car instances, injecting dependencies provided by VehiclesModule.

Simply put, we need a method signature that returns a Car and we need to mark the class with the @Component annotation:

@Singleton
@Component(modules = VehiclesModule.class)
public interface VehiclesComponent {
    Car buildCar();
}

Notice how we passed our module class as an argument to the @Component annotation. If we didn’t do that, Dagger wouldn’t know how to build the car’s dependencies.

Also, since our module provides a singleton object, we must give the same scope to our component because Dagger doesn’t allow for unscoped components to refer to scoped bindings.

4.3. Client Code

Finally, we can run mvn compile in order to trigger the annotation processors and generate the injector code.

After that, we’ll find our component implementation with the same name as the interface, just prefixed with “Dagger“:

@Test
public void givenGeneratedComponent_whenBuildingCar_thenDependenciesInjected() {
    VehiclesComponent component = DaggerVehiclesComponent.create();

    Car carOne = component.buildCar();
    Car carTwo = component.buildCar();

    Assert.assertNotNull(carOne);
    Assert.assertNotNull(carTwo);
    Assert.assertNotNull(carOne.getEngine());
    Assert.assertNotNull(carTwo.getEngine());
    Assert.assertNotNull(carOne.getBrand());
    Assert.assertNotNull(carTwo.getBrand());
    Assert.assertNotEquals(carOne.getEngine(), carTwo.getEngine());
    Assert.assertEquals(carOne.getBrand(), carTwo.getBrand());
}

5. Spring Analogies

Those familiar with Spring may have noticed some parallels between the two frameworks.

Dagger’s @Module annotation makes the container aware of a class in a very similar fashion as any of Spring’s stereotype annotations (for example, @Service, @Controller…). Likewise, @Provides and @Component are almost equivalent to Spring’s @Bean and @Lookup respectively.

Spring also has its @Scope annotation, correlating to @Singleton, though note here already another difference in that Spring assumes a singleton scope by default while Dagger defaults to what Spring developers might refer to as the prototype scope, invoking the provider method each time a dependency is required.

6. Conclusion

In this article, we went through how to set up and use Dagger 2 with a basic example. We also considered the differences between run-time and compile-time injection.

As always, all the code in the article is available over on GitHub.

Deploy a Spring Boot App to Azure


1. Introduction

Microsoft Azure now features quite solid Java support.

In this tutorial, we’ll demonstrate how to make our Spring Boot application work on the Azure platform, step by step.

2. Maven Dependency and Configuration

First, we need an Azure subscription to make use of the cloud services there; currently, we can sign up for a free account here.

Next, log in to the platform and create a service principal using the Azure CLI:

> az login
To sign in, use a web browser to open the page \
https://microsoft.com/devicelogin and enter the code XXXXXXXX to authenticate.
> az ad sp create-for-rbac --name "app-name" --password "password"
{
    "appId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
    "displayName": "app-name",
    "name": "http://app-name",
    "password": "password",
    "tenant": "tttttttt-tttt-tttt-tttt-tttttttttttt"
}

Now we configure Azure service principal authentication settings in our Maven settings.xml, with the help of the following section, under <servers>:

<server>
    <id>azure-auth</id>
    <configuration>
        <client>aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa</client>
        <tenant>tttttttt-tttt-tttt-tttt-tttttttttttt</tenant>
        <key>password</key>
        <environment>AZURE</environment>
    </configuration>
</server>

We’ll rely on the authentication configuration above when uploading our Spring Boot application to the Microsoft platform, using azure-webapp-maven-plugin.

Let’s add the following Maven plugin to the pom.xml:

<plugin>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-webapp-maven-plugin</artifactId>
    <version>1.1.0</version>
    <configuration>
        <!-- ... -->
    </configuration>
</plugin>

We can check the latest release version here.

There are a number of configurable properties for this plugin that will be covered in the following sections.

3. Deploy a Spring Boot App to Azure

Now that we’ve set up the environment, let’s try to deploy our Spring Boot application to Azure.

Our application replies with “hello azure!” when we access “/hello“:

@GetMapping("/hello")
public String hello() {
    return "hello azure!";
}

The platform now allows Java Web App deployment for both Tomcat and Jetty. With azure-webapp-maven-plugin, we can deploy our application directly to supported web containers as the default (ROOT) application, or deploy via FTP.

Note that as we’re going to deploy the application to web containers, we should package it as a WAR archive. As a quick reminder, we’ve got an article introducing how to deploy a Spring Boot WAR into Tomcat.

3.1. Web Container Deployment

We’ll use the following configuration for azure-webapp-maven-plugin if we intend to deploy to Tomcat on a Windows instance:

<configuration>
    <javaVersion>1.8</javaVersion>
    <javaWebContainer>tomcat 8.5</javaWebContainer>
    <!-- ... -->
</configuration>

For a Linux instance, try the following configuration:

<configuration>
    <linuxRuntime>tomcat 8.5-jre8</linuxRuntime>
    <!-- ... -->
</configuration>

Let’s not forget the Azure authentication:

<configuration>
    <authentication>
        <serverId>azure-auth</serverId>
    </authentication>
    <appName>spring-azure</appName>
    <resourceGroup>baeldung</resourceGroup>
    <!-- ... -->
</configuration>

When we deploy our application to Azure, we’ll see it appear as an App Service. So here we specified the property <appName> to name the App Service. Also, the App Service, as a resource, needs to be held by a resource group container, so <resourceGroup> is also required.

Now we’re ready to pull the trigger using the azure-webapp:deploy Maven target, and we’ll see the output:

> mvn clean package azure-webapp:deploy
...
[INFO] Start deploying to Web App spring-baeldung...
[INFO] Authenticate with ServerId: azure-auth
[INFO] [Correlation ID: cccccccc-cccc-cccc-cccc-cccccccccccc] \
Instance discovery was successful
[INFO] Target Web App doesn't exist. Creating a new one...
[INFO] Creating App Service Plan 'ServicePlanssssssss-bbbb-0000'...
[INFO] Successfully created App Service Plan.
[INFO] Successfully created Web App.
[INFO] Starting to deploy the war file...
[INFO] Successfully deployed Web App at \
https://spring-baeldung.azurewebsites.net
...

Now we can access https://spring-baeldung.azurewebsites.net/hello and see the response: ‘hello azure!’.

During the deployment process, Azure automatically created an App Service Plan for us. Check out the official document for details about Azure App Service plans. If we already have an App Service plan, we can set property <appServicePlanName> to avoid creating a new one:

<configuration>
    <!-- ... -->
    <appServicePlanName>ServicePlanssssssss-bbbb-0000</appServicePlanName>
</configuration>

3.2. FTP Deployment

To deploy via FTP, we can use the configuration:

<configuration>
    <authentication>
        <serverId>azure-auth</serverId>
    </authentication>
    <appName>spring-baeldung</appName>
    <resourceGroup>baeldung</resourceGroup>
    <javaVersion>1.8</javaVersion>

    <deploymentType>ftp</deploymentType>
    <resources>
        <resource>
            <directory>${project.basedir}/target</directory>
            <targetPath>webapps</targetPath>
            <includes>
                <include>*.war</include>
            </includes>
        </resource>
    </resources>
</configuration>

In the configuration above, we make the plugin locate the WAR file in the ${project.basedir}/target directory, and deploy it to the Tomcat container’s webapps directory.

Say our final artifact is named azure-0.1.war, we’ll see output like the following once we commence the deployment:

> mvn clean package azure-webapp:deploy
...
[INFO] Start deploying to Web App spring-baeldung...
[INFO] Authenticate with ServerId: azure-auth
[INFO] [Correlation ID: cccccccc-cccc-cccc-cccc-cccccccccccc] \
Instance discovery was successful
[INFO] Target Web App doesn't exist. Creating a new one...
[INFO] Creating App Service Plan 'ServicePlanxxxxxxxx-xxxx-xxxx'...
[INFO] Successfully created App Service Plan.
[INFO] Successfully created Web App.
...
[INFO] Finished uploading directory: \
/xxx/.../target/azure-webapps/spring-baeldung --> /site/wwwroot
[INFO] Successfully uploaded files to FTP server: \
xxxx-xxxx-xxx-xxx.ftp.azurewebsites.windows.net
[INFO] Successfully deployed Web App at \
https://spring-baeldung.azurewebsites.net

Note that here we didn’t deploy our application as the default Web App for Tomcat, so we can only access it through ‘https://spring-baeldung.azurewebsites.net/azure-0.1/hello’. The server will respond ‘hello azure!’ as expected.

4. Deploy with Custom Application Settings

Most of the time, our Spring Boot application requires data access to provide services. Azure now supports databases such as SQL Server, MySQL, and PostgreSQL.

For the sake of simplicity, we’ll use its In-App MySQL as our data source, as its configuration is quite similar to other Azure database services.

4.1. Enable In-App MySQL on Azure

Since there isn’t a one-liner to create a web app with In-App MySQL enabled, we have to first create the web app using the CLI:

az group create --location japanwest --name baeldung-group
az appservice plan create --name baeldung-plan --resource-group baeldung-group --sku B1
az webapp create --name baeldung-webapp --resource-group baeldung-group \
  --plan baeldung-plan --runtime "java|1.8|Tomcat|8.5"

Then, we enable MySQL In App for the web app in the portal.

After the In-App MySQL is enabled, we can find the default database, data source URL, and default account information in a file named MYSQLCONNSTR_xxx.txt under the /home/data/mysql directory of the filesystem.
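If we want to read those values programmatically, a small parser is enough. Note that the semicolon-separated key=value format used in the sample below is an assumption based on typical Azure connection strings; verify it against the actual MYSQLCONNSTR_xxx.txt contents:

```java
import java.util.HashMap;
import java.util.Map;

public class ConnStrParser {

    // Parses a "Key=Value;Key=Value" style connection string into a map
    static Map<String, String> parse(String connectionString) {
        Map<String, String> values = new HashMap<>();
        for (String pair : connectionString.split(";")) {
            int idx = pair.indexOf('=');
            if (idx > 0) {
                values.put(pair.substring(0, idx).trim(), pair.substring(idx + 1).trim());
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // Hypothetical sample value; the real file's format should be verified
        String sample = "Database=localdb;Data Source=127.0.0.1:55738;User Id=uuuuuu;Password=pppppp";
        Map<String, String> values = parse(sample);
        // Assemble a JDBC URL from the parsed parts
        System.out.println("jdbc:mysql://" + values.get("Data Source") + "/" + values.get("Database"));
    }
}
```

The parsed host, database, and credentials are exactly the values we’ll feed into the plugin’s <appSettings> in the next section.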

4.2. Spring Boot Application Using Azure In-App MySQL

Here, for demonstration needs, we create a User entity and two endpoints used to register and list Users:

@PostMapping("/user")
public String register(@RequestParam String name) {
    userRepository.save(userNamed(name));
    return "registered";
}

@GetMapping("/user")
public Iterable<User> userlist() {
    return userRepository.findAll();
}

We’re going to use an H2 database in our local environment, and switch it to MySQL on Azure. Generally, we configure data source properties in the application.properties file:

spring.datasource.url=jdbc:h2:file:~/test
spring.datasource.username=sa
spring.datasource.password=

While for Azure deployment, we need to configure azure-webapp-maven-plugin in <appSettings>:

<configuration>
    <authentication>
        <serverId>azure-auth</serverId>
    </authentication>
    <javaVersion>1.8</javaVersion>
    <resourceGroup>baeldung-group</resourceGroup>
    <appName>baeldung-webapp</appName>
    <appServicePlanName>baeldung-plan</appServicePlanName>
    <appSettings>
        <property>
            <name>spring.datasource.url</name>
            <value>jdbc:mysql://127.0.0.1:55738/localdb</value>
        </property>
        <property>
            <name>spring.datasource.username</name>
            <value>uuuuuu</value>
        </property>
        <property>
            <name>spring.datasource.password</name>
            <value>pppppp</value>
        </property>
    </appSettings>
</configuration>

Now we can start to deploy:

> mvn clean package azure-webapp:deploy
...
[INFO] Start deploying to Web App custom-webapp...
[INFO] Authenticate with ServerId: azure-auth
[INFO] [Correlation ID: cccccccc-cccc-cccc-cccc-cccccccccccc] \
Instance discovery was successful
[INFO] Updating target Web App...
[INFO] Successfully updated Web App.
[INFO] Starting to deploy the war file...
[INFO] Successfully deployed Web App at \
https://baeldung-webapp.azurewebsites.net

We can see from the log that the deployment is finished.

Let’s test our new endpoints:

> curl -d "" -X POST https://baeldung-webapp.azurewebsites.net/user\?name\=baeldung
registered

> curl https://baeldung-webapp.azurewebsites.net/user
[{"id":1,"name":"baeldung"}]

The server’s response says it all. It works!

5. Deploy a Containerized Spring Boot App to Azure

In the previous sections, we’ve shown how to deploy applications to servlet containers (Tomcat in this case). How about deploying as a standalone runnable jar?

To do so, we can containerize our Spring Boot application. Specifically, we can dockerize it and upload the image to Azure.

We already have an article about how to dockerize a Spring Boot App, but here we’re going to make use of another Maven plugin, docker-maven-plugin, to automate the dockerization for us:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>1.1.0</version>
    <configuration>
        <!-- ... -->
    </configuration>
</plugin>

The latest version can be found here.

5.1. Azure Container Registry

First, we need a Container Registry on Azure to upload our docker image.

So let’s create one:

az acr create --admin-enabled --resource-group baeldung-group \
  --location japanwest --name baeldungadr --sku Basic

We’ll also need the Container Registry’s authentication information, and this can be queried using:

> az acr credential show --name baeldungadr --query passwords[0]
{
  "additionalProperties": {},
  "name": "password",
  "value": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

Then add the following server authentication configuration in Maven’s settings.xml:

<server>
    <id>baeldungadr</id>
    <username>baeldungadr</username>
    <password>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx</password>
</server>

5.2. Maven Plugin Configuration

Let’s add the following Maven plugin configuration to the pom.xml:

<properties>
    <!-- ... -->
    <azure.containerRegistry>baeldungadr</azure.containerRegistry>
    <docker.image.prefix>${azure.containerRegistry}.azurecr.io</docker.image.prefix>
</properties>

<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>docker-maven-plugin</artifactId>
            <version>1.0.0</version>
            <configuration>
                <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
                <registryUrl>https://${docker.image.prefix}</registryUrl>
                <serverId>${azure.containerRegistry}</serverId>
                <dockerDirectory>docker</dockerDirectory>
                <resources>
                    <resource>
                        <targetPath>/</targetPath>
                        <directory>${project.build.directory}</directory>
                        <include>${project.build.finalName}.jar</include>
                    </resource>
                </resources>
            </configuration>
        </plugin>
        <!-- ... -->
    </plugins>
</build>

In the configuration above, we specified the docker image name, registry URL and some properties similar to that of FTP deployment.

Note that the plugin will use values in <dockerDirectory> to locate the Dockerfile. We put the Dockerfile in the docker directory, and its content is:

FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD azure-0.1.jar app.jar
RUN sh -c 'touch /app.jar'
EXPOSE 8080
ENTRYPOINT [ "sh", "-c", "java -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

5.3. Run Spring Boot App in a Docker Instance

Now we can build a Docker image and push it to the Azure registry:

> mvn docker:build -DpushImage
...
[INFO] Building image baeldungadr.azurecr.io/azure-0.1
...
Successfully built aaaaaaaaaaaa
Successfully tagged baeldungadr.azurecr.io/azure-0.1:latest
[INFO] Built baeldungadr.azurecr.io/azure-0.1
[INFO] Pushing baeldungadr.azurecr.io/azure-0.1
The push refers to repository [baeldungadr.azurecr.io/azure-0.1]
...
latest: digest: sha256:0f0f... size: 1375

After the upload is finished, let’s check the baeldungadr registry; we should see the image in its repository list.

Now we are ready to run an instance of the image.

Once the instance is booted, we can access the services provided by our application via its public IP address:

> curl http://a.x.y.z:8080/hello
hello azure!

5.4. Docker Container Deployment

Suppose we have a container registry, whether it’s from Azure, Docker Hub, or our own private registry.

With the help of the following configuration of azure-webapp-maven-plugin, we can also deploy our Spring Boot web app to the containers:

<configuration>
    <containerSettings>
        <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
        <registryUrl>https://${docker.image.prefix}</registryUrl>
        <serverId>${azure.containerRegistry}</serverId>
    </containerSettings>
    <!-- ... -->
</configuration>

Once we run mvn azure-webapp:deploy, the plugin will help deploy our web app archive to an instance of the specified image.

Then we can access web services via the instance’s IP address or Azure App Service’s URL.

6. Conclusion

In this article, we introduced how to deploy a Spring Boot application to Azure, as a deployable WAR or a runnable JAR in a container. Though we’ve covered most of the features of azure-webapp-maven-plugin, there are some rich features yet to be explored. Please check out here for more details.

As always, the full implementation of the code samples can be found over on GitHub.

An Introduction to Java.util.Hashtable Class


1. Overview

Hashtable is the oldest implementation of a hash table data structure in Java. HashMap is a second implementation, which was introduced in JDK 1.2.

Both classes provide similar functionality, but there are also small differences, which we’ll explore in this tutorial.

2. When to use Hashtable

Let’s say we have a dictionary, where each word has its definition. Also, we need to get, insert and remove words from the dictionary quickly.

Hence, Hashtable (or HashMap) makes sense. Words will be the keys in the Hashtable, as they are supposed to be unique. Definitions, on the other hand, will be the values.

3. Example of Use

Let’s continue with the dictionary example. We’ll model Word as a key:

public class Word {
    private String name;

    public Word(String name) {
        this.name = name;
    }
    
    // ...
}

Let’s say the values are Strings. Now we can create a Hashtable:

Hashtable<Word, String> table = new Hashtable<>();

First, let’s add an entry:

Word word = new Word("cat");
table.put(word, "an animal");

Also, to get an entry:

String definition = table.get(word);

Finally, let’s remove an entry:

definition = table.remove(word);

There are many more methods in the class, and we’ll describe some of them later.

But first, let’s talk about some requirements for the key object.

4. The Importance of hashCode()

To be used as a key in a Hashtable, the object mustn’t violate the hashCode() contract. In short, equal objects must return the same code. To understand why, let’s look at how the hash table is organized.

Hashtable uses an array. Each position in the array is a “bucket” which can be either null or contain one or more key-value pairs. The index of each pair is calculated.

But why not store elements sequentially, adding new elements to the end of the array?

The point is that finding an element by index is much quicker than scanning the elements sequentially and comparing each one. Hence, we need a function that maps keys to indexes.

4.1. Direct Address Table

The simplest example of such mapping is the direct-address table. Here keys are used as indexes:

index(k)=k,
where k is a key

Keys are unique; that is, each bucket contains one key-value pair. This technique works well for integer keys when their possible range is reasonably small.
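To make the idea concrete, here is a toy direct-address table in plain Java (an illustrative sketch, not part of any library API):

```java
// Direct-address table: the key itself is the array index.
// This only works for small, non-negative integer keys.
String[] directTable = new String[10];

int key = 3;                      // index(k) = k
directTable[key] = "three";

String value = directTable[key];  // O(1) lookup, no hashing involved
```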

But we have two problems here:

  • First, our keys are not integers, but Word objects
  • Second, if they were integers, nobody would guarantee they were small. Imagine that the keys are 1, 2 and 1000000. We’d have a big array of size 1000000 holding only three elements, and the rest would be wasted space

The hashCode() method solves the first problem.

The logic for data manipulation in the Hashtable solves the second problem.

Let’s discuss this in depth.

4.2. hashCode() Method

Any Java object inherits the hashCode() method which returns an int value. This value is calculated from the internal memory address of the object. By default hashCode() returns distinct integers for distinct objects.

Thus any key object can be converted to an integer using hashCode(). But this integer may be big.

4.3. Reducing the Range

get(), put() and remove() methods contain the code which solves the second problem – reducing the range of possible integers.

The formula calculates an index for the key:

int index = (hash & 0x7FFFFFFF) % tab.length;

Where tab.length is the array size, and hash is a number returned by the key’s hashCode() method.

As we can see, the index is the remainder of dividing the hash by the array size. Note that equal hash codes produce the same index.
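We can reproduce this calculation ourselves and see that equal keys necessarily land in the same bucket (the table size of 11 is an arbitrary example):

```java
int tabLength = 11; // hypothetical array size

int hash1 = "cat".hashCode();
int hash2 = new String("cat").hashCode(); // equal strings -> equal hash codes

// the same formula Hashtable uses internally
int index1 = (hash1 & 0x7FFFFFFF) % tabLength;
int index2 = (hash2 & 0x7FFFFFFF) % tabLength;
```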

4.4. Collisions

Furthermore, even different hash codes can produce the same index. We refer to this as a collision. To resolve collisions, Hashtable stores a LinkedList of key-value pairs per bucket.

Such a data structure is called a hash table with chaining.

4.5. Load Factor

It’s easy to guess that collisions slow down operations on elements. To get an entry, it’s not enough to know its index; we also need to traverse the list and compare the key with each item.

Therefore, it’s important to reduce the number of collisions. The bigger the array, the smaller the chance of a collision. The load factor determines the balance between the array size and performance. By default, it’s 0.75, which means that the array size doubles when 75% of the buckets become non-empty. This operation is executed by the rehash() method.
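Both knobs can be set through Hashtable’s constructor; here is a quick sketch with illustrative values:

```java
import java.util.Hashtable;

// 8 initial buckets; rehash (roughly double the array) once 75% are occupied
Hashtable<String, String> dictionary = new Hashtable<>(8, 0.75f);

dictionary.put("cat", "an animal");
dictionary.put("dog", "another animal");

// resizing is transparent to the caller
int size = dictionary.size();
```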

But let’s return to the keys.

4.6. Overriding equals() and hashCode()

When we put an entry into a Hashtable and get it out of it, we expect that the value can be obtained not only with the same instance of the key but also with an equal key:

Word word = new Word("cat");
table.put(word, "an animal");
String extracted = table.get(new Word("cat"));

To set the rules of equality, we override the key’s equals() method:

public boolean equals(Object o) {
    if (o == this)
        return true;
    if (!(o instanceof Word))
        return false;

    Word word = (Word) o;
    return word.getName().equals(this.name);
}

But if we don’t override hashCode() when overriding equals(), then two equal keys may end up in different buckets, because Hashtable calculates the key’s index using its hash code.

Let’s take a close look at the above example. What happens if we don’t override hashCode()?

  • Two instances of Word are involved here – the first is for putting the entry and the second is for getting the entry. Although these instances are equal, their hashCode() methods return different numbers
  • The index for each key is calculated by the formula from section 4.3. According to this formula, different hash codes may produce different indexes
  • This means that we put the entry into one bucket and then try to get it out from the other bucket. Such logic breaks Hashtable

Equal keys must return equal hash codes, that’s why we override the hashCode() method:

public int hashCode() {
    return name.hashCode();
}

Note that it’s also recommended to make unequal keys return different hash codes; otherwise, they end up in the same bucket. This hurts performance, losing some of the advantages of a Hashtable.

Also, note that we don’t need to worry about keys of type String, Integer, Long or another wrapper type: both equals() and hashCode() are already overridden in the wrapper classes.

5. Iterating Hashtables

There are a few ways to iterate Hashtables. In this section, we’ll talk about them and explain some of the implications.

5.1. Fail-Fast Iteration

Fail-fast iteration means that if a Hashtable is modified after its Iterator is created, a ConcurrentModificationException will be thrown. Let’s demonstrate this.

First, we’ll create a Hashtable and add entries to it:

Hashtable<Word, String> table = new Hashtable<Word, String>();
table.put(new Word("cat"), "an animal");
table.put(new Word("dog"), "another animal");

Second, we’ll create an Iterator:

Iterator<Word> it = table.keySet().iterator();

And third, we’ll modify the table:

table.remove(new Word("dog"));

Now if we try to iterate through the table, we’ll get a ConcurrentModificationException:

while (it.hasNext()) {
    Word key = it.next();
}
java.util.ConcurrentModificationException
	at java.util.Hashtable$Enumerator.next(Hashtable.java:1378)

ConcurrentModificationException helps to find bugs and thus avoid unpredictable behavior when, for example, one thread is iterating through the table and another is trying to modify it at the same time.
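Note that removing entries during iteration is still possible: the Iterator’s own remove() method modifies the table without tripping the fail-fast check (String keys are used here for brevity):

```java
import java.util.Hashtable;
import java.util.Iterator;

Hashtable<String, String> animals = new Hashtable<>();
animals.put("cat", "an animal");
animals.put("dog", "another animal");

Iterator<String> keys = animals.keySet().iterator();
while (keys.hasNext()) {
    if (keys.next().equals("dog")) {
        keys.remove(); // safe: removal goes through the iterator, no exception
    }
}
```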

5.2. Not Fail-Fast: Enumeration

Enumeration in a Hashtable is not fail-fast. Let’s look at an example.

First, let’s create a Hashtable and add entries to it:

Hashtable<Word, String> table = new Hashtable<Word, String>();
table.put(new Word("1"), "one");
table.put(new Word("2"), "two");

Second, let’s create an Enumeration:

Enumeration<Word> enumKey = table.keys();

Third, let’s modify the table:

table.remove(new Word("1"));

Now if we iterate through the table it won’t throw an exception:

while (enumKey.hasMoreElements()) {
    Word key = enumKey.nextElement();
}

5.3. Unpredictable Iteration Order

Also, note that iteration order in a Hashtable is unpredictable and does not match the order in which the entries were added.

This is understandable as it calculates each index using the key’s hash code. Moreover, rehashing takes place from time to time, rearranging the order of the data structure.

Hence, let’s add some entries and check the output:

Hashtable<Word, String> table = new Hashtable<Word, String>();
table.put(new Word("1"), "one");
table.put(new Word("2"), "two");
// ...
table.put(new Word("8"), "eight");

Iterator<Map.Entry<Word, String>> it = table.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<Word, String> entry = it.next();
    // ...
}
five
four
three
two
one
eight
seven

6. Hashtable vs. HashMap

Hashtable and HashMap provide very similar functionality.

Both of them provide:

  • Fail-fast iteration
  • Unpredictable iteration order

But there are some differences too:

  • HashMap doesn’t provide any Enumeration, while Hashtable provides a not fail-fast Enumeration
  • Hashtable doesn’t allow null keys or null values, while HashMap allows one null key and any number of null values
  • Hashtable‘s methods are synchronized, while HashMap‘s methods are not
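The null-handling difference is easy to verify with a quick illustrative check (String keys used for brevity):

```java
import java.util.HashMap;
import java.util.Hashtable;

HashMap<String, String> map = new HashMap<>();
map.put(null, "null key is fine");  // HashMap accepts one null key
map.put("cat", null);               // ...and null values

boolean hashtableRejectsNull = false;
try {
    new Hashtable<String, String>().put(null, "boom");
} catch (NullPointerException e) {
    hashtableRejectsNull = true;    // Hashtable throws on null keys (and values)
}
```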

7. Hashtable API in Java 8

Java 8 has introduced new methods which help make our code cleaner. In particular, we can get rid of some if blocks. Let’s demonstrate this.

7.1. getOrDefault()

Let’s say we need to get the definition of the word “dog” and assign it to a variable if it is in the table. Otherwise, assign “not found” to the variable.

Before Java 8:

Word key = new Word("dog");
String definition;

if (table.containsKey(key)) {
     definition = table.get(key);
} else {
     definition = "not found";
}

After Java 8:

definition = table.getOrDefault(key, "not found");

7.2. putIfAbsent()

Let’s say we need to put the word “cat” only if it’s not in the dictionary yet.

Before Java 8:

if (!table.containsKey(new Word("cat"))) {
    table.put(new Word("cat"), definition);
}

After Java 8:

table.putIfAbsent(new Word("cat"), definition);

7.3. boolean remove()

Let’s say we need to remove the word “cat”, but only if its definition is “an animal”.

Before Java 8:

if (table.get(new Word("cat")).equals("an animal")) {
    table.remove(new Word("cat"));
}

After Java 8:

boolean result = table.remove(new Word("cat"), "an animal");

Finally, while the old remove() method returns the value, the new method returns a boolean.

7.4. replace()

Let’s say we need to replace a definition of “cat”, but only if its old definition is “a small domesticated carnivorous mammal”.

Before Java 8:

if (table.containsKey(new Word("cat")) 
    && table.get(new Word("cat")).equals("a small domesticated carnivorous mammal")) {
    table.put(new Word("cat"), definition);
}

After Java 8:

table.replace(new Word("cat"), "a small domesticated carnivorous mammal", definition);

7.5. computeIfAbsent()

This method is similar to putIfAbsent(). But putIfAbsent() takes the value directly, while computeIfAbsent() takes a mapping function. It calculates the value only after it checks the key, and this is more efficient, especially if the value is difficult to obtain.

table.computeIfAbsent(new Word("cat"), key -> "an animal");

Hence, the above line is equivalent to:

if (!table.containsKey(cat)) {
    String definition = "an animal"; // note that calculations take place inside if block
    table.put(new Word("cat"), definition);
}

7.6. computeIfPresent()

This method is similar to the replace() method. But, again, replace() takes the value directly, while computeIfPresent() takes a mapping function. It calculates the value inside the if block, which is why it’s more efficient.

Let’s say we need to change the definition:

table.computeIfPresent(cat, (key, value) -> key.getName() + " - " + value);

Hence, the above line is equivalent to:

if (table.containsKey(cat)) {
    String concatenation = cat.getName() + " - " + table.get(cat);
    table.put(cat, concatenation);
}

7.7. compute()

Now we’ll solve another task. Let’s say we have an array of Strings, where the elements are not unique. Let’s calculate how many occurrences of each String the array contains. Here is the array:

String[] animals = { "cat", "dog", "dog", "cat", "bird", "mouse", "mouse" };

Also, we want to create a Hashtable which contains an animal as a key and the number of its occurrences as a value.

Here is a solution:

Hashtable<String, Integer> table = new Hashtable<String, Integer>();

for (String animal : animals) {
    table.compute(animal, 
        (key, value) -> (value == null ? 1 : value + 1));
}

Finally, let’s make sure that the table contains two cats, two dogs, one bird and two mice:

assertThat(table.values(), hasItems(2, 2, 2, 1));

7.8. merge()

There is another way to solve the above task:

for (String animal : animals) {
    table.merge(animal, 1, (oldValue, value) -> (oldValue + value));
}

The second argument, 1, is the value which is mapped to the key if the key is not yet in the table. If the key is already in the table, then we calculate the new value as oldValue + 1.

7.9. forEach()

This is a new way to iterate through the entries. Let’s print all the entries:

table.forEach((k, v) -> System.out.println(k.getName() + " - " + v));

7.10. replaceAll()

Additionally, we can replace all the values without iteration:

table.replaceAll((k, v) -> k.getName() + " - " + v);

8. Conclusion

In this article, we’ve described the purpose of the hash table structure and shown how to evolve a direct-address table structure into it.

Additionally, we’ve covered what collisions are and what a load factor is in a Hashtable. Also, we’ve learned why to override equals() and hashCode() for key objects.

Finally, we’ve talked about Hashtable‘s properties and Java 8 specific API.

As usual, the complete source code is available on GitHub.

Kotlin and Javascript


1. Overview

Kotlin is a next-generation programming language developed by JetBrains. It has gained popularity in the Android development community as a replacement for Java.

Another exciting feature of Kotlin is the support of server- and client-side JavaScript. In this article, we’re going to discuss how to write server-side JavaScript applications using Kotlin.

Kotlin can be transpiled (source code written in one language and transformed into another language) to JavaScript. It gives users the option of targeting the JVM and JavaScript using the same language.

In the coming sections, we’re going to develop a node.js application using Kotlin.

2. node.js

Node.js is a lean, fast, cross-platform runtime environment for JavaScript. It’s useful for both server and desktop applications.

Let’s start by setting up a node.js environment and project.

2.1. Installing node.js

Node.js can be downloaded from the Node website. It comes with the npm package manager. After installing we need to set up the project.

In the empty directory, let’s run:

npm init

It will ask a few questions about package name, version description, and an entry point. Provide “kotlinNode” as name, “Kotlin Node Example”  as description and “crypto.js” as entrypoint. For the rest of the values, we’ll keep the default.

This process will generate a package.json file.

After this, we need to install a few dependency packages:

npm install
npm install kotlin --save
npm install express --save

This will install the modules required by our example application in the current project directory.

3.  Creating a node.js Application using Kotlin

In this section, we’re going to create a crypto API server using node.js in Kotlin. The API will fetch some self-generated cryptocurrency rates.

3.1. Setting up the Kotlin Project

Now let’s set up the Kotlin project. We’ll be using Gradle here, which is the recommended and easiest approach. Gradle can be installed by following the instructions provided on the Gradle site.

Let’s start by creating the build.gradle file in the same directory:

buildscript {
    ext.kotlin_version = '1.2.41'
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

group 'com.baeldung'
version '1.0-SNAPSHOT'
apply plugin: 'kotlin2js'

repositories {
    mavenCentral()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib-js:$kotlin_version"
    testCompile "org.jetbrains.kotlin:kotlin-test-js:$kotlin_version"
}

compileKotlin2Js.kotlinOptions {
    moduleKind = "commonjs"
    outputFile = "node/crypto.js"
}

There are two important points to highlight in the build.gradle file. First, we apply the kotlin2js plugin to do the transpilation.

Then, in kotlinOptions, we set moduleKind to “commonjs” to work with node.js. With the outputFile option, we control where the transpiled code will be generated.

3.2. Creating an API

Let’s start implementing our application by creating the source folder src/main/kotlin.

In this folder, we create the file CryptoRate.kt.

In this file, we first need to import the require function and then create the main method:

external fun require(module: String): dynamic

fun main(args: Array<String>) {
    
}

Next, we import the required modules and create a server that listens on port 3000:

val express = require("express")

val app = express()
app.listen(3000, {
    println("Listening on port 3000")
})

Finally, we add the API endpoint “/crypto”. It will generate and return the data:

app.get("/crypto", { _, res ->
    res.send(generateCryptoRates())
})

data class CryptoCurrency(var name: String, var price: Float)

fun generateCryptoRates(): Array<CryptoCurrency> {
    return arrayOf<CryptoCurrency>(
      CryptoCurrency("Bitcoin", 90000F),
      CryptoCurrency("ETH",1000F),
      CryptoCurrency("TRX",10F)
    );
}

We’ve used the node.js express module to create the API endpoint. 

4. Run the Application

Running the application will be a two-part process. We need to transpile the Kotlin code into JavaScript before we can start our application with Node.

To create the JavaScript code, we use the Gradle build task:

gradlew build

This will generate the source code in the node directory.

Next, we execute the generated code file crypto.js using Node.js:

node node/crypto.js

This will launch the server running on port 3000. In the browser, let’s access the API by invoking http://localhost:3000/crypto to get this JSON result:

[
  {
    "name": "Bitcoin",
    "price": 90000
  },
  {
    "name": "ETH",
    "price": 1000
  },
  {
    "name": "TRX",
    "price": 10
  }
]

Alternatively, we can use tools like Postman or SoapUI to consume the API.

5. Conclusion

In this article, we’ve learned how to write node.js applications using Kotlin.

We’ve built a small service in a few minutes without using any boilerplate code.

As always, the code samples can be found over on GitHub.

Spring Data Annotations


1. Introduction

Spring Data provides an abstraction over data storage technologies. Therefore, our business logic code can be much more independent of the underlying persistence implementation. Also, Spring simplifies the handling of implementation-dependent details of data storage.

In this tutorial, we’ll see the most common annotations of the Spring Data, Spring Data JPA, and Spring Data MongoDB projects.

2. Common Spring Data Annotations

2.1. @Transactional

When we want to configure the transactional behavior of a method, we can do it with:

@Transactional
void pay() {}

If we apply this annotation on class level, then it works on all methods inside the class. However, we can override its effects by applying it to a specific method.

It has many configuration options, which can be found in this article.
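For instance, a few commonly used attributes can be combined; the values below, and the PaymentException type, are illustrative rather than from a concrete example:

@Transactional(
  propagation = Propagation.REQUIRES_NEW, // always start a fresh transaction
  timeout = 30,                           // seconds before the transaction times out
  readOnly = false,
  rollbackFor = PaymentException.class)   // hypothetical checked exception
void pay() {}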

2.2. @NoRepositoryBean

Sometimes we want to create repository interfaces with the only goal of providing common methods for the child repositories.

Of course, we don’t want Spring to create a bean of these repositories since we won’t inject them anywhere. @NoRepositoryBean does exactly this: when we mark a child interface of org.springframework.data.repository.Repository, Spring won’t create a bean out of it.

For example, if we want an Optional<T> findById(ID id) method in all of our repositories, we can create a base repository:

@NoRepositoryBean
interface MyUtilityRepository<T, ID extends Serializable> extends CrudRepository<T, ID> {
    Optional<T> findById(ID id);
}

This annotation doesn’t affect the child interfaces; hence Spring will create a bean for the following repository interface:

@Repository
interface PersonRepository extends MyUtilityRepository<Person, Long> {}

Note that the example above hasn’t been necessary since Spring Data version 2, which includes this method, replacing the older T findOne(ID id).

2.3. @Param

We can pass named parameters to our queries using @Param:

@Query("FROM Person p WHERE p.name = :name")
Person findByName(@Param("name") String name);

Note, that we refer to the parameter with the :name syntax.

For further examples, please visit this article.

2.4. @Id

@Id marks a field in a model class as the primary key:

class Person {

    @Id
    Long id;

    // ...
    
}

Since it’s implementation-independent, it makes a model class easy to use with multiple data store engines.

2.5. @Transient

We can use this annotation to mark a field in a model class as transient. Hence the data store engine won’t read or write this field’s value:

class Person {

    // ...

    @Transient
    int age;

    // ...

}

Like @Id, @Transient is also implementation-independent, which makes it convenient to use with multiple data store implementations.

2.6. @CreatedBy, @LastModifiedBy, @CreatedDate, @LastModifiedDate

With these annotations, we can audit our model classes: Spring automatically populates the annotated fields with the principal who created the object, last modified it, and the date of creation, and last modification:

public class Person {

    // ...

    @CreatedBy
    User creator;
    
    @LastModifiedBy
    User modifier;
    
    @CreatedDate
    Date createdAt;
    
    @LastModifiedDate
    Date modifiedAt;

    // ...

}

Note, that if we want Spring to populate the principals, we need to use Spring Security as well.
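As a sketch of the wiring this needs (assuming Spring Data JPA and Spring Security on the classpath; here the auditor is captured as a String username rather than a User entity), we enable auditing with @EnableJpaAuditing and expose an AuditorAware bean that resolves the current principal:

@Configuration
@EnableJpaAuditing
class AuditConfig {

    @Bean
    AuditorAware<String> auditorProvider() {
        // look up the authenticated principal; empty if nobody is logged in
        return () -> Optional.ofNullable(
            SecurityContextHolder.getContext().getAuthentication())
          .map(Authentication::getName);
    }
}

With JPA, the audited entity additionally needs @EntityListeners(AuditingEntityListener.class) for these fields to be populated.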

For a more thorough description, please visit this article.

3. Spring Data JPA Annotations

3.1. @Query

With @Query, we can provide a JPQL implementation for a repository method:

@Query("SELECT COUNT(*) FROM Person p")
long getPersonCount();

Also, we can use named parameters:

@Query("FROM Person p WHERE p.name = :name")
Person findByName(@Param("name") String name);

Besides, we can use native SQL queries, if we set the nativeQuery argument to true:

@Query(value = "SELECT AVG(p.age) FROM person p", nativeQuery = true)
int getAverageAge();

For more information, please visit this article.

3.2. @Procedure

With Spring Data JPA we can easily call stored procedures from repositories.

First, we need to declare the stored procedure on the entity class using standard JPA annotations:

@NamedStoredProcedureQueries({ 
    @NamedStoredProcedureQuery(
        name = "count_by_name", 
        procedureName = "person.count_by_name", 
        parameters = { 
            @StoredProcedureParameter(
                mode = ParameterMode.IN, 
                name = "name", 
                type = String.class),
            @StoredProcedureParameter(
                mode = ParameterMode.OUT, 
                name = "count", 
                type = Long.class) 
            }
    ) 
})

class Person {}

After this, we can refer to it in the repository with the name we declared in the name argument:

@Procedure(name = "count_by_name")
long getCountByName(@Param("name") String name);

3.3. @Lock

We can configure the lock mode when we execute a repository query method:

@Lock(LockModeType.NONE)
@Query("SELECT COUNT(*) FROM Person p")
long getPersonCount();

The available lock modes:

  • READ
  • WRITE
  • OPTIMISTIC
  • OPTIMISTIC_FORCE_INCREMENT
  • PESSIMISTIC_READ
  • PESSIMISTIC_WRITE
  • PESSIMISTIC_FORCE_INCREMENT
  • NONE
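For example, to take a database-level write lock while loading a row (the method name and query here are illustrative):

@Lock(LockModeType.PESSIMISTIC_WRITE)
@Query("SELECT p FROM Person p WHERE p.id = :id")
Person findByIdForUpdate(@Param("id") long id);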

3.4. @Modifying

We can modify data with a repository method if we annotate it with @Modifying:

@Modifying
@Query("UPDATE Person p SET p.name = :name WHERE p.id = :id")
void changeName(@Param("id") long id, @Param("name") String name);

For more information, please visit this article.

3.5. @EnableJpaRepositories

To use JPA repositories, we have to indicate it to Spring. We can do this with @EnableJpaRepositories.

Note, that we have to use this annotation with @Configuration:

@Configuration
@EnableJpaRepositories
class PersistenceJPAConfig {}

Spring will look for repositories in the sub packages of this @Configuration class.

We can alter this behavior with the basePackages argument:

@Configuration
@EnableJpaRepositories(basePackages = "org.baeldung.persistence.dao")
class PersistenceJPAConfig {}

Also note, that Spring Boot does this automatically if it finds Spring Data JPA on the classpath.

4. Spring Data Mongo Annotations

Spring Data makes working with MongoDB much easier. In the next sections, we’ll explore the most basic features of Spring Data MongoDB.

For more information, please visit our article about Spring Data MongoDB.

4.1. @Document

This annotation marks a class as being a domain object that we want to persist to the database:

@Document
class User {}

It also allows us to choose the name of the collection we want to use:

@Document(collection = "user")
class User {}

Note, that this annotation is the Mongo equivalent of @Entity in JPA.

4.2. @Field

With @Field, we can configure the name of a field we want to use when MongoDB persists the document:

@Document
class User {

    // ...

    @Field("email")
    String emailAddress;

    // ...

}

Note, that this annotation is the Mongo equivalent of @Column in JPA.

4.3. @Query

With @Query, we can provide a finder query on a MongoDB repository method:

@Query("{ 'name' : ?0 }")
List<User> findUsersByName(String name);

4.4. @EnableMongoRepositories

To use MongoDB repositories, we have to indicate it to Spring. We can do this with @EnableMongoRepositories.

Note, that we have to use this annotation with @Configuration:

@Configuration
@EnableMongoRepositories
class MongoConfig {}

Spring will look for repositories in the sub packages of this @Configuration class. We can alter this behavior with the basePackages argument:

@Configuration
@EnableMongoRepositories(basePackages = "org.baeldung.repository")
class MongoConfig {}

Also note, that Spring Boot does this automatically if it finds Spring Data MongoDB on the classpath.

5. Conclusion

In this article, we saw the most important annotations we need to handle data in general using Spring. In addition, we looked into the most common JPA and MongoDB annotations.

As usual, examples are available over on GitHub here for common and JPA annotations, and here for MongoDB annotations.

Spring Data REST Events with @RepositoryEventHandler


1. Introduction

While working with an entity, the REST exporter handles operations for creating, saving, and deleting events. We can use an ApplicationListener to listen to these events and execute a function when the particular action is performed.

Alternatively, we can use an annotated handler, which filters events based on domain type.

2. Writing an Annotated Handler

The ApplicationListener doesn't distinguish between entity types, but with an annotated handler we can filter events based on domain type.

We can declare an annotation-based event handler by adding the @RepositoryEventHandler annotation to a POJO. This informs the BeanPostProcessor that the POJO needs to be inspected for handler methods.

In the example below, we annotate the class with @RepositoryEventHandler for the Author entity and, in the AuthorEventHandler class, declare methods for the different before and after events of the Author entity:

@RepositoryEventHandler(Author.class) 
public class AuthorEventHandler {
    Logger logger = Logger.getLogger("Class AuthorEventHandler");
    
    @HandleBeforeCreate
    public void handleAuthorBeforeCreate(Author author){
        logger.info("Inside Author Before Create....");
        String name = author.getName();
    }

    @HandleAfterCreate
    public void handleAuthorAfterCreate(Author author){
        logger.info("Inside Author After Create ....");
        String name = author.getName();
    }

    @HandleBeforeDelete
    public void handleAuthorBeforeDelete(Author author){
        logger.info("Inside Author Before Delete ....");
        String name = author.getName();
    }

    @HandleAfterDelete
    public void handleAuthorAfterDelete(Author author){
        logger.info("Inside Author After Delete ....");
        String name = author.getName();
    }
}

Here, the different methods of the AuthorEventHandler class will be invoked based on the operation performed on the Author entity.

On finding a class with the @RepositoryEventHandler annotation, Spring iterates over the methods in the class to find annotations corresponding to the before and after events listed below:

Before* Event Annotations – methods associated with the before annotations are called before the event is processed.

  • BeforeCreateEvent
  • BeforeDeleteEvent
  • BeforeSaveEvent
  • BeforeLinkSaveEvent

After* Event Annotations – methods associated with the after annotations are called after the event is processed.

  • AfterLinkSaveEvent
  • AfterSaveEvent
  • AfterCreateEvent
  • AfterDeleteEvent

We can also declare methods for different entity types corresponding to the same event type in one class:

@RepositoryEventHandler
public class BookEventHandler {

    @HandleBeforeCreate
    public void handleBookBeforeCreate(Book book){
        // code for before create book event
    }

    @HandleBeforeCreate
    public void handleAuthorBeforeCreate(Author author){
        // code for before create author event
    }
}

Here, the BookEventHandler class deals with more than one entity. On finding the class with the @RepositoryEventHandler annotation, Spring iterates over its methods and, before each create event, calls the handler whose parameter type matches the entity.
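The dispatch mechanism can be pictured with plain reflection: scan the handler for annotated methods and invoke those whose single parameter type matches the entity. The following standalone sketch uses a stand-in annotation and demo classes (none of Spring's actual classes) to illustrate the idea:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Stand-in for Spring's @HandleBeforeCreate, for illustration only
@Retention(RetentionPolicy.RUNTIME)
@interface HandleBeforeCreate {}

class Author { String name; Author(String name) { this.name = name; } }
class Book { String title; Book(String title) { this.title = title; } }

class DemoHandler {
    List<String> log = new ArrayList<>();

    @HandleBeforeCreate
    public void onAuthor(Author author) { log.add("author:" + author.name); }

    @HandleBeforeCreate
    public void onBook(Book book) { log.add("book:" + book.title); }
}

public class EventDispatchSketch {

    // Invoke every @HandleBeforeCreate method whose parameter matches the entity type
    static void dispatch(Object handler, Object entity) throws Exception {
        for (Method m : handler.getClass().getMethods()) {
            if (m.isAnnotationPresent(HandleBeforeCreate.class)
                  && m.getParameterCount() == 1
                  && m.getParameterTypes()[0].isInstance(entity)) {
                m.invoke(handler, entity);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        DemoHandler handler = new DemoHandler();
        dispatch(handler, new Author("Jane"));  // only onAuthor matches
        dispatch(handler, new Book("Spring"));  // only onBook matches
        System.out.println(handler.log);        // [author:Jane, book:Spring]
    }
}
```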

Also, we need to declare the event handlers in a @Configuration class, which will inspect the beans for handlers and match them with the right events:

@Configuration
public class RepositoryConfiguration{
    
    public RepositoryConfiguration(){
        super();
    }

    @Bean
    AuthorEventHandler authorEventHandler() {
        return new AuthorEventHandler();
    }

    @Bean
    BookEventHandler bookEventHandler(){
        return new BookEventHandler();
    }
}

3. Conclusion

In this quick tutorial, we learned how to use the @RepositoryEventHandler annotation to handle various events corresponding to an entity type.

And, as always, the complete code samples are available over on GitHub.

Example of Downloading File in a Servlet


1. Overview

A common feature of web applications is the ability to expose a file for download.

In this tutorial, we’ll cover a simple example of creating a downloadable file and serving it from a Java Servlet application.

The file we are using will be from the webapp resources.

2. Maven Dependencies

If we're using Java EE, we don't need to add any dependencies. However, if we're using Java SE, we'll need the javax.servlet-api dependency:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.1</version>
    <scope>provided</scope>
</dependency>

The latest version of the dependency can be found here.

3. Servlet

Let's have a look at the code first and then find out what's going on:

@WebServlet("/download")
public class DownloadServlet extends HttpServlet {
    private static final int ARBITRARY_SIZE = 1048;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.setHeader("Content-disposition", "attachment; filename=sample.txt");
        
        InputStream in = req.getServletContext().getResourceAsStream("/WEB-INF/sample.txt");
        OutputStream out = resp.getOutputStream();

        byte[] buffer = new byte[ARBITRARY_SIZE];
        
        int numBytesRead;
        while ((numBytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, numBytesRead);
        }
        
        in.close();
        out.flush();
    }
}

3.1. Request End Point

The @WebServlet("/download") annotation marks the DownloadServlet class to serve requests directed at the "/download" end-point.

Alternatively, we can do this by describing the mapping in the web.xml file.

3.2. Response Content-Type

The HttpServletResponse object has a method called setContentType, which we can use to set the Content-Type header of the HTTP response.

Content-Type is the historical name of the header property. Another name was the MIME type (Multipurpose Internet Mail Extensions). We now simply refer to the value as the Media Type.

This value could be "application/pdf", "text/plain", "text/html", "image/jpeg", etc. The official list is maintained by the Internet Assigned Numbers Authority (IANA) and can be found here.

For our example, we are using a simple text file. The Content-Type for a text file is “text/plain”.

3.3. Response Content-Disposition

Setting the Content-Disposition header in the response object tells the browser how to handle the file it is accessing.

Browsers understand the use of Content-Disposition as a convention, but it's not actually part of the HTTP standard. The W3C has a memo on the use of Content-Disposition available to read here.

The Content-Disposition values for the main body of a response will be either “inline” (for webpage content to be rendered) or “attachment” (for a downloadable file).

If not specified, the default Content-Disposition is “inline”.

Using an optional header parameter, we can specify the filename “sample.txt”.

Some browsers will immediately download the file using the given filename and others will show a download dialog containing our predefined value.

The exact action taken will depend on the browser.

3.4. Reading From File and Writing to Output Stream

In the remaining lines of code, we take the ServletContext from the request, and use it to obtain the file at “/WEB-INF/sample.txt”.

Using HttpServletResponse#getOutputStream(), we then read from the input stream of the resource and write to the response’s OutputStream.

The size of the byte array we use is arbitrary. We can choose the size based on the amount of memory that is reasonable to allocate for passing the data from the InputStream to the OutputStream; the smaller the number, the more loop iterations; the bigger the number, the higher the memory usage.

This cycle continues until read() returns a non-positive value; at the end of the file, read() returns -1, which ends the loop.
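The copy loop itself is independent of servlets, so we can exercise it standalone with in-memory streams (the class and method names here are just for this sketch):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyLoopDemo {

    // The same copy loop as in the servlet, returning the number of bytes moved
    static long copy(InputStream in, OutputStream out, int bufferSize) throws IOException {
        byte[] buffer = new byte[bufferSize];
        long total = 0;
        int numBytesRead;
        while ((numBytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, numBytesRead);
            total += numBytesRead;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "some file content".getBytes("UTF-8");
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // a tiny buffer forces several loop iterations
        long copied = copy(new ByteArrayInputStream(data), out, 4);

        System.out.println(copied);                // 17
        System.out.println(out.toString("UTF-8")); // some file content
    }
}
```

Note that the more robust loop condition is `!= -1`, since read() may legally return 0 for some stream types without reaching the end; from Java 9 on, InputStream.transferTo() handles this copy for us.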

3.5. Close and Flush

close will close the stream and release any resources that it is currently holding.

flush will write any remaining buffered bytes to the OutputStream's destination.

We use these two methods to release resources and to ensure that the data we have prepared is sent out from our application.

3.6. Downloading the File

With everything in place, we are now ready to run our Servlet.

Now when we visit the relative end-point "/download", our browser will attempt to download the file as "sample.txt".

4. Conclusion

Downloading a file from a Servlet is a simple process. Using streams allows us to pass the data out as bytes, and the media type informs the client browser what type of data to expect.

It is up to the browser to determine how to handle the response; however, we can give some guidelines with the Content-Disposition header.

All code in this article can be found over on GitHub.


Java Weekly, Issue 231


Here we go…

1. Spring and Java

>> From Java to Kotlin and Back [allegro.tech]

A controversial but interesting read about one team that migrated from Java 8 to Kotlin… and then back to Java 10.

>> Getting Started with Kafka in Spring Boot [e4developer.com]

Although Kafka can be an intimidating technology, Spring makes it much easier to get started using it.

>> Structuring and Testing Modules and Layers with Spring Boot [reflectoring.io]

A very interesting showcase of testing multiple application layers in a Spring Boot application.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Let’s Encrypt tips [advancedweb.hu]

A really good set of tips to keep top of mind when setting up certificates with Let's Encrypt.

>> UTC is enough for everyone…right? [zachholman.com]

Reinventing the calendar, apparently 🙂 – with all the complexity that comes with that.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Wifi In Slide Deck [dilbert.com]

>> Boring And Needy Children [dilbert.com]

>> Two Jobs Forever [dilbert.com]

4. Pick of the Week

Not necessarily light reading:

>> OAuth 2.0 Security Best Current Practice [tools.ietf.org]

Spring Boot Annotations


1. Overview

Spring Boot made configuring Spring easier with its auto-configuration feature.

In this quick tutorial, we’ll explore the annotations from the org.springframework.boot.autoconfigure and org.springframework.boot.autoconfigure.condition packages.

2. @SpringBootApplication

We use this annotation to mark the main class of a Spring Boot application:

@SpringBootApplication
class VehicleFactoryApplication {

    public static void main(String[] args) {
        SpringApplication.run(VehicleFactoryApplication.class, args);
    }
}

@SpringBootApplication encapsulates @Configuration, @EnableAutoConfiguration, and @ComponentScan annotations with their default attributes.

3. @EnableAutoConfiguration

@EnableAutoConfiguration, as its name says, enables auto-configuration. It means that Spring Boot looks for auto-configuration beans on its classpath and automatically applies them.

Note that we have to use this annotation with @Configuration:

@Configuration
@EnableAutoConfiguration
class VehicleFactoryConfig {}

4. Auto-Configuration Conditions

Usually, when we write our custom auto-configurations, we want Spring to use them conditionally. We can achieve this with the annotations in this section.

We can place the annotations in this section on @Configuration classes or @Bean methods.

In the next sections, we’ll only introduce the basic concept behind each condition. For further information, please visit this article.

4.1. @ConditionalOnClass and @ConditionalOnMissingClass

Using these conditions, Spring will only use the marked auto-configuration bean if the class in the annotation’s argument is present/absent:

@Configuration
@ConditionalOnClass(DataSource.class)
class MySQLAutoconfiguration {
    //...
}
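Conceptually, a class-presence condition boils down to checking whether a class can be loaded. Here's a minimal standalone sketch of that idea (the class names are just examples, and this is not Spring's actual condition code):

```java
public class ConditionalOnClassSketch {

    // A configuration guarded by @ConditionalOnClass is applied only if the
    // named class is loadable; loading is attempted without initialization
    static boolean classPresent(String className) {
        try {
            Class.forName(className, false, ConditionalOnClassSketch.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(classPresent("java.util.List"));      // true
        System.out.println(classPresent("com.example.Missing")); // false
    }
}
```

This also explains why @ConditionalOnClass accepts class names as strings: the check must work even when the class isn't on the classpath at all.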

4.2. @ConditionalOnBean and @ConditionalOnMissingBean

We can use these annotations when we want to define conditions based on the presence or absence of a specific bean:

@Bean
@ConditionalOnBean(name = "dataSource")
LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    // ...
}

4.3. @ConditionalOnProperty

With this annotation, we can make conditions on the values of properties:

@Bean
@ConditionalOnProperty(
    name = "usemysql", 
    havingValue = "local"
)
DataSource dataSource() {
    // ...
}

4.4. @ConditionalOnResource

We can make Spring use a definition only when a specific resource is present:

@ConditionalOnResource(resources = "classpath:mysql.properties")
Properties additionalProperties() {
    // ...
}

4.5. @ConditionalOnWebApplication and @ConditionalOnNotWebApplication

With these annotations, we can create conditions based on whether the current application is or isn't a web application:

@ConditionalOnWebApplication
HealthCheckController healthCheckController() {
    // ...
}

4.6. @ConditionalOnExpression

We can use this annotation in more complex situations. Spring will use the marked definition when the SpEL expression evaluates to true:

@Bean
@ConditionalOnExpression("${usemysql} && ${mysqlserver == 'local'}")
DataSource dataSource() {
    // ...
}

4.7. @Conditional

For even more complex conditions, we can create a class evaluating the custom condition. We tell Spring to use this custom condition with @Conditional:

@Conditional(HibernateCondition.class)
Properties additionalProperties() {
    //...
}

5. Conclusion

In this article, we saw an overview of how we can fine-tune the auto-configuration process and provide conditions for custom auto-configuration beans.

As usual, the examples are available over on GitHub.

Spring Web Annotations


1. Overview

In this tutorial, we’ll explore Spring Web annotations from the org.springframework.web.bind.annotation package.

2. @RequestMapping

Simply put, @RequestMapping marks request handler methods inside @Controller classes; it can be configured using:

  • path, or its aliases, name, and value: which URL the method is mapped to
  • method: compatible HTTP methods
  • params: filters requests based on presence, absence, or value of HTTP parameters
  • headers: filters requests based on presence, absence, or value of HTTP headers
  • consumes: which media types the method can consume in the HTTP request body
  • produces: which media types the method can produce in the HTTP response body

Here’s a quick example of what that looks like:

@Controller
class VehicleController {

    @RequestMapping(value = "/vehicles/home", method = RequestMethod.GET)
    String home() {
        return "home";
    }
}

We can provide default settings for all handler methods in a @Controller class if we apply this annotation at the class level. The only exception is the URL, which Spring won't override with method-level settings; instead, it appends the two path parts.

For example, the following configuration has the same effect as the one above:

@Controller
@RequestMapping(value = "/vehicles", method = RequestMethod.GET)
class VehicleController {

    @RequestMapping("/home")
    String home() {
        return "home";
    }
}

Moreover, @GetMapping, @PostMapping, @PutMapping, @DeleteMapping, and @PatchMapping are different variants of @RequestMapping with the HTTP method already set to GET, POST, PUT, DELETE, and PATCH respectively.

These have been available since the Spring 4.3 release.

3. @RequestBody

Let’s move on to @RequestBody – which maps the body of the HTTP request to an object:

@PostMapping("/save")
void saveVehicle(@RequestBody Vehicle vehicle) {
    // ...
}

The deserialization is automatic and depends on the content type of the request.

4. @PathVariable

Next, let’s talk about @PathVariable.

This annotation indicates that a method argument is bound to a URI template variable. We can specify the URI template with the @RequestMapping annotation and bind a method argument to one of the template parts with @PathVariable.

We can achieve this with the name or its alias, the value argument:

@RequestMapping("/{id}")
Vehicle getVehicle(@PathVariable("id") long id) {
    // ...
}

If the name of the part in the template matches the name of the method argument, we don’t have to specify it in the annotation:

@RequestMapping("/{id}")
Vehicle getVehicle(@PathVariable long id) {
    // ...
}

Moreover, we can mark a path variable as optional by setting the argument required to false:

@RequestMapping("/{id}")
Vehicle getVehicle(@PathVariable(required = false) long id) {
    // ...
}

5. @RequestParam

We use @RequestParam for accessing HTTP request parameters:

@RequestMapping
Vehicle getVehicleByParam(@RequestParam("id") long id) {
    // ...
}

It has the same configuration options as the @PathVariable annotation.

In addition to those settings, with @RequestParam we can specify an injected value when Spring finds no value, or an empty value, in the request. To achieve this, we have to set the defaultValue argument.

Providing a default value implicitly sets required to false:

@RequestMapping("/buy")
Car buyCar(@RequestParam(defaultValue = "5") int seatCount) {
    // ...
}

Besides parameters, there are other HTTP request parts we can access: cookies and headers. We can access them with the annotations @CookieValue and @RequestHeader respectively.

We can configure them the same way as @RequestParam.

6. Response Handling Annotations

In the next sections, we will see the most common annotations to manipulate HTTP responses in Spring MVC.

6.1. @ResponseBody

If we mark a request handler method with @ResponseBody, Spring treats the result of the method as the response itself:

@ResponseBody
@RequestMapping("/hello")
String hello() {
    return "Hello World!";
}

If we annotate a @Controller class with this annotation, all request handler methods will use it.

6.2. @ExceptionHandler

With this annotation, we can declare a custom error handler method. Spring calls this method when a request handler method throws any of the specified exceptions.

The caught exception can be passed to the method as an argument:

@ExceptionHandler(IllegalArgumentException.class)
void onIllegalArgumentException(IllegalArgumentException exception) {
    // ...
}

6.3. @ResponseStatus

We can specify the desired HTTP status of the response if we annotate a request handler method with this annotation. We can declare the status code with the code argument, or its alias, the value argument.

Also, we can provide a reason using the reason argument.

We also can use it along with @ExceptionHandler:

@ExceptionHandler(IllegalArgumentException.class)
@ResponseStatus(HttpStatus.BAD_REQUEST)
void onIllegalArgumentException(IllegalArgumentException exception) {
    // ...
}

For more information about HTTP response status, please visit this article.

7. Other Web Annotations

Some annotations don’t manage HTTP requests or responses directly. In the next sections, we’ll introduce the most common ones.

7.1. @RestController

The @RestController combines @Controller and @ResponseBody.

Therefore, the following declarations are equivalent:

@Controller
@ResponseBody
class VehicleRestController {
    // ...
}

@RestController
class VehicleRestController {
    // ...
}

7.2. @ModelAttribute

With this annotation, we can access elements that are already in the model of an MVC @Controller by providing the model key:

@PostMapping("/assemble")
void assembleVehicle(@ModelAttribute("vehicle") Vehicle vehicleInModel) {
    // ...
}

Like with @PathVariable and @RequestParam, we don’t have to specify the model key if the argument has the same name:

@PostMapping("/assemble")
void assembleVehicle(@ModelAttribute Vehicle vehicle) {
    // ...
}

Besides, @ModelAttribute has another use: if we annotate a method with it, Spring will automatically add the method’s return value to the model:

@ModelAttribute("vehicle")
Vehicle getVehicle() {
    // ...
}

Like before, we don't have to specify the model key; Spring uses the method's name by default:

@ModelAttribute
Vehicle vehicle() {
    // ...
}

Before Spring calls a request handler method, it invokes all @ModelAttribute annotated methods in the class.

More information about @ModelAttribute can be found in this article.

7.3. @CrossOrigin

@CrossOrigin enables cross-domain communication for the annotated request handler methods:

@CrossOrigin
@RequestMapping("/hello")
String hello() {
    return "Hello World!";
}

If we mark a class with it, it applies to all request handler methods in it.

We can fine-tune CORS behavior with this annotation’s arguments.

For more details, please visit this article.

8. Conclusion

In this article, we saw how we can handle HTTP requests and responses with Spring MVC.

As usual, the examples are available over on GitHub.

Spring Scheduling Annotations


1. Overview

When single-threaded execution isn’t enough, we can use annotations from the org.springframework.scheduling.annotation package.

In this quick tutorial, we’re going to explore the Spring Scheduling Annotations.

2. @EnableAsync

With this annotation, we can enable asynchronous functionality in Spring.

We must use it with @Configuration:

@Configuration
@EnableAsync
class VehicleFactoryConfig {}

Now that we've enabled asynchronous calls, we can use @Async to define the methods supporting them.

3. @EnableScheduling

With this annotation, we can enable scheduling in the application.

We also have to use it in conjunction with @Configuration:

@Configuration
@EnableScheduling
class VehicleFactoryConfig {}

As a result, we can now run methods periodically with @Scheduled.

4. @Async

We can define methods we want to execute on a different thread, hence run them asynchronously.

To achieve this, we can annotate the method with @Async:

@Async
void repairCar() {
    // ...
}

If we apply this annotation to a class, then all methods will be called asynchronously.

Note that we need to enable asynchronous calls for this annotation to work, either with @EnableAsync or XML configuration.

More information about @Async can be found in this article.

5. @Scheduled

If we need a method to execute periodically, we can use this annotation:

@Scheduled(fixedRate = 10000)
void checkVehicle() {
    // ...
}

We can use it to execute a method at fixed intervals, or we can fine-tune it with cron-like expressions.
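Under the hood, fixed-rate execution is a general concurrency idea; plain Java exposes it through ScheduledExecutorService. As a standalone illustration of what "run every fixedRate milliseconds" means (not Spring's implementation, and shortened to milliseconds so it finishes quickly):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedRateDemo {

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch threeRuns = new CountDownLatch(3);

        // fire the task every 50 ms, measured from the start of each run (fixed rate)
        scheduler.scheduleAtFixedRate(threeRuns::countDown, 0, 50, TimeUnit.MILLISECONDS);

        // wait until the task has run three times (with a generous timeout)
        boolean ranThreeTimes = threeRuns.await(2, TimeUnit.SECONDS);
        scheduler.shutdownNow();
        System.out.println(ranThreeTimes); // true
    }
}
```

Spring adds the declarative layer on top of this idea: the schedule lives in the annotation, and fixedDelay (pause between runs) and cron expressions are alternatives to fixedRate.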

@Scheduled leverages the Java 8 repeating annotations feature, which means we can mark a method with it multiple times:

@Scheduled(fixedRate = 10000)
@Scheduled(cron = "0 * * * * MON-FRI")
void checkVehicle() {
    // ...
}

Note that a method annotated with @Scheduled should have a void return type.

Moreover, we have to enable scheduling for this annotation to work, for example with @EnableScheduling or XML configuration.

For more information about scheduling, read this article.

6. @Schedules

We can use this annotation to specify multiple @Scheduled rules:

@Schedules({ 
  @Scheduled(fixedRate = 10000), 
  @Scheduled(cron = "0 * * * * MON-FRI")
})
void checkVehicle() {
    // ...
}

Note that since Java 8, we can achieve the same with the repeating annotations feature, as described above.

7. Conclusion

In this article, we saw an overview of the most common Spring scheduling annotations.

As usual, the examples are available over on GitHub.

A Guide to Spring Data Key Value


1. Introduction

The Spring Data Key Value framework makes it easy to write Spring applications that use key-value stores.

It minimizes the redundant tasks and boilerplate code required for interacting with the store. The framework works well with key-value stores like Redis and Riak.

In this tutorial, we’ll cover how we can use Spring Data Key Value with the default java.util.Map based implementation.

2. Requirements

The Spring Data Key Value 1.x binaries require JDK level 6.0 or above, and Spring Framework 3.0.x or above.

3. Maven Dependency

To work with Spring Data Key Value, we need to add the following dependency:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-keyvalue</artifactId>
    <version>2.0.6.RELEASE</version>
</dependency>

The latest version can be found here.

4. Creating an Entity

Let’s create an Employee entity:

@KeySpace("employees")
public class Employee {

    @Id
    private Integer id;

    private String name;

    private String department;

    private String salary;

    // constructors/ standard getters and setters

}

Keyspaces define in which part of the data structure the entity should be kept. This concept is very similar to collections in MongoDB and Elasticsearch, cores in Solr, and tables in JPA.

By default, the keyspace of an entity is extracted from its type.

5. Repository

Similar to other Spring Data frameworks, we will need to activate Spring Data repositories using the @EnableMapRepositories annotation.

By default, the repositories will use the ConcurrentHashMap-based implementation:

@SpringBootApplication
@EnableMapRepositories
public class SpringDataKeyValueApplication {
}

It’s possible to change the default ConcurrentHashMap implementation and use some other java.util.Map implementations:

@EnableMapRepositories(mapType = WeakHashMap.class)

Creating repositories with Spring Data Key Value works the same way as with other Spring Data frameworks:

@Repository
public interface EmployeeRepository
  extends CrudRepository<Employee, Integer> {
}

To learn more about Spring Data repositories, have a look at this article.

6. Using the Repository

By extending CrudRepository in EmployeeRepository, we get a complete set of persistence methods that perform CRUD functionality.

Now, we’ll see how we can use some of the available persistence methods.

6.1. Saving an Object

Let’s save a new Employee object to the data store using the repository:

Employee employee = new Employee(1, "Mike", "IT", "5000");
employeeRepository.save(employee);

6.2. Retrieving an Existing Object

We can verify that the employee from the previous section was saved correctly by fetching it:

Optional<Employee> savedEmployee = employeeRepository.findById(1);

6.3. Updating an Existing Object

CrudRepository doesn’t provide a dedicated method for updating an object.

Instead, we can use the save() method:

employee.setName("Jack");
employeeRepository.save(employee);

6.4. Deleting an Existing Object

We can delete the inserted object using the repository:

employeeRepository.deleteById(1);

6.5. Fetch All Objects

We can fetch all the saved objects:

Iterable<Employee> employees = employeeRepository.findAll();
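To see what the default ConcurrentHashMap-backed implementation does conceptually, here's a heavily simplified, illustrative Map-based store mirroring the repository calls above (SimpleEmployee is a hypothetical stand-in for our entity, and this is not Spring Data's actual code):

```java
import java.util.Collection;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// A simplified stand-in for the Employee entity, for this sketch only
class SimpleEmployee {
    final int id;
    String name;
    SimpleEmployee(int id, String name) { this.id = id; this.name = name; }
}

public class MapStoreSketch {

    // Conceptual model of the Map-backed store: the @Id value is the
    // map key, and the entity itself is the map value
    private final Map<Integer, SimpleEmployee> store = new ConcurrentHashMap<>();

    SimpleEmployee save(SimpleEmployee e) { store.put(e.id, e); return e; } // insert or update
    Optional<SimpleEmployee> findById(int id) { return Optional.ofNullable(store.get(id)); }
    Collection<SimpleEmployee> findAll() { return store.values(); }
    void deleteById(int id) { store.remove(id); }

    public static void main(String[] args) {
        MapStoreSketch repo = new MapStoreSketch();
        repo.save(new SimpleEmployee(1, "Mike"));
        System.out.println(repo.findById(1).get().name);  // Mike
        repo.save(new SimpleEmployee(1, "Jack"));         // save() doubles as update
        System.out.println(repo.findById(1).get().name);  // Jack
        repo.deleteById(1);
        System.out.println(repo.findById(1).isPresent()); // false
    }
}
```

This also makes clear why CrudRepository has no dedicated update method: with a key-value store, saving under an existing key is the update.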

7. KeyValueTemplate

Another way of performing operations on the data structure is by using KeyValueTemplate.

In very basic terms, the KeyValueTemplate uses a MapKeyValueAdapter wrapping a java.util.Map implementation to perform queries and sorting:

@Bean
public KeyValueOperations keyValueTemplate() {
    return new KeyValueTemplate(keyValueAdapter());
}

@Bean
public KeyValueAdapter keyValueAdapter() {
    return new MapKeyValueAdapter(WeakHashMap.class);
}

Note that if we have used @EnableMapRepositories, we don't need to specify a KeyValueTemplate; the framework will create it for us.

8. Using KeyValueTemplate

Using KeyValueTemplate, we can perform the same operations as we did with the repository.

8.1. Saving an Object

Let’s see how to save a new Employee object to the data store using a template:

Employee employee = new Employee(1, "Mike", "IT", "5000");
keyValueTemplate.insert(employee);

8.2. Retrieving an Existing Object

We can verify the insertion of the object by fetching it from the structure using the template:

Optional<Employee> savedEmployee = keyValueTemplate
  .findById(id, Employee.class);

8.3. Updating an Existing Object

Unlike CrudRepository, the template provides a dedicated method to update an object:

employee.setName("Jacek");
keyValueTemplate.update(employee);

8.4. Deleting an Existing Object

We can delete an object with a template:

keyValueTemplate.delete(id, Employee.class);

8.5. Fetch All Objects

We can fetch all the saved objects using a template:

Iterable<Employee> employees = keyValueTemplate
  .findAll(Employee.class);

8.6. Sorting the Objects

In addition to the basic functionality, the template also supports KeyValueQuery for writing custom queries.

For example, we can use a query to get a sorted list of Employees based on their salary:

KeyValueQuery<Employee> query = new KeyValueQuery<Employee>();
query.setSort(new Sort(Sort.Direction.DESC, "salary"));
Iterable<Employee> employees 
  = keyValueTemplate.find(query, Employee.class);
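As a plain-Java cross-check of the intended ordering, we can sort the same data with a Comparator (employee records are modeled here as simple arrays for brevity, and this is purely illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SalarySortSketch {

    public static void main(String[] args) {
        // Each record is {name, salary}. Note that the entity declares salary
        // as a String; here we parse it so the order is numeric rather than
        // lexicographic ("12000" would come before "5000" as a String).
        List<String[]> employees = new ArrayList<>(Arrays.asList(
            new String[] {"Mike", "5000"},
            new String[] {"Jack", "12000"},
            new String[] {"Anna", "7000"}));

        // descending by salary, the equivalent of Sort.Direction.DESC on "salary"
        employees.sort(
            Comparator.comparingInt((String[] e) -> Integer.parseInt(e[1])).reversed());

        System.out.println(employees.get(0)[0]); // Jack
    }
}
```

The String-vs-numeric distinction is worth keeping in mind when sorting string-typed fields in any store.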

9. Conclusion

This article showcased how we can use the Spring Data Key Value framework with the default Map implementation, using a repository or the KeyValueTemplate.

There are more Spring Data frameworks, like Spring Data Redis, which are built on top of Spring Data Key Value. Refer to this article for an introduction to Spring Data Redis.

And, as always, all code samples shown here are available over on GitHub.
