
Extra Login Fields with Spring Security


1. Introduction

In this article, we’ll implement a custom authentication scenario with Spring Security by adding an extra field to the standard login form.

We’re going to focus on two different approaches, to show the versatility of the framework and the flexible ways in which we can use it.

Our first approach will be a simple solution which focuses on reuse of existing core Spring Security implementations.

Our second approach will be a more custom solution that may be more suitable for advanced use cases.

We’ll build on top of concepts that are discussed in our previous articles on Spring Security login.

2. Maven Setup

We’ll use Spring Boot starters to bootstrap our project and bring in all necessary dependencies.

The setup we’ll use requires a parent declaration, the web starter, and the security starter; we’ll also include Thymeleaf:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.M7</version>
    <relativePath/>
</parent>
 
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
     </dependency>
     <dependency>
        <groupId>org.thymeleaf.extras</groupId>
        <artifactId>thymeleaf-extras-springsecurity4</artifactId>
    </dependency>
</dependencies>

The most current version of Spring Boot security starter can be found over at Maven Central.

3. Simple Project Setup

In our first approach, we’ll focus on reusing implementations that are provided by Spring Security. In particular, we’ll reuse DaoAuthenticationProvider and UsernamePasswordToken as they exist “out-of-the-box”.

The key components will include:

  • SimpleAuthenticationFilter – an extension of UsernamePasswordAuthenticationFilter
  • SimpleUserDetailsService – an implementation of UserDetailsService
  • User – an extension of the User class provided by Spring Security that declares our extra domain field
  • SecurityConfig – our Spring Security configuration that inserts our SimpleAuthenticationFilter into the filter chain, declares security rules and wires up dependencies
  • login.html – a login page that collects the username, password, and domain

3.1. Simple Authentication Filter

In our SimpleAuthenticationFilter, the domain and username fields are extracted from the request. We concatenate these values and use them to create an instance of UsernamePasswordAuthenticationToken.

The token is then passed along to the AuthenticationProvider for authentication:

public class SimpleAuthenticationFilter
  extends UsernamePasswordAuthenticationFilter {

    @Override
    public Authentication attemptAuthentication(
      HttpServletRequest request, 
      HttpServletResponse response) 
        throws AuthenticationException {

        // ...

        UsernamePasswordAuthenticationToken authRequest
          = getAuthRequest(request);
        setDetails(request, authRequest);
        
        return this.getAuthenticationManager()
          .authenticate(authRequest);
    }

    private UsernamePasswordAuthenticationToken getAuthRequest(
      HttpServletRequest request) {
 
        String username = obtainUsername(request);
        String password = obtainPassword(request);
        String domain = obtainDomain(request);

        // ...

        String usernameDomain = String.format("%s%s%s", username.trim(), 
          String.valueOf(Character.LINE_SEPARATOR), domain);
        return new UsernamePasswordAuthenticationToken(
          usernameDomain, password);
    }

    // other methods
}
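
One of those other methods is obtainDomain, which isn’t provided by the parent class. A minimal implementation, assuming the form field is named domain as in our login page below, could look like this:

private String obtainDomain(HttpServletRequest request) {
    // read the extra field submitted by the login form, mirroring obtainUsername/obtainPassword
    String domain = request.getParameter("domain");
    return domain != null ? domain.trim() : "";
}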

3.2. Simple UserDetails Service

The UserDetailsService contract defines a single method called loadUserByUsername. Our implementation extracts the username and domain. The values are then passed to our UserRepository to get the User:

public class SimpleUserDetailsService implements UserDetailsService {

    // ...

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        String[] usernameAndDomain = StringUtils.split(
          username, String.valueOf(Character.LINE_SEPARATOR));
        if (usernameAndDomain == null || usernameAndDomain.length != 2) {
            throw new UsernameNotFoundException("Username and domain must be provided");
        }
        User user = userRepository.findUser(usernameAndDomain[0], usernameAndDomain[1]);
        if (user == null) {
            throw new UsernameNotFoundException(
              String.format("Username not found for domain, username=%s, domain=%s", 
                usernameAndDomain[0], usernameAndDomain[1]));
        }
        return user;
    }
}

3.3. Spring Security Configuration

Our setup differs from a standard Spring Security configuration because we insert our SimpleAuthenticationFilter into the filter chain before the default UsernamePasswordAuthenticationFilter with a call to addFilterBefore:

@Override
protected void configure(HttpSecurity http) throws Exception {

    http
      .addFilterBefore(authenticationFilter(), 
        UsernamePasswordAuthenticationFilter.class)
      .authorizeRequests()
        .antMatchers("/css/**", "/index").permitAll()
        .antMatchers("/user/**").authenticated()
      .and()
      .formLogin().loginPage("/login")
      .and()
      .logout()
      .logoutUrl("/logout");
}

We’re able to use the provided DaoAuthenticationProvider because we configure it with our SimpleUserDetailsService. Recall that our SimpleUserDetailsService knows how to parse out our username and domain fields and return the appropriate User to use when authenticating:

public AuthenticationProvider authProvider() {
    DaoAuthenticationProvider provider = new DaoAuthenticationProvider();
    provider.setUserDetailsService(userDetailsService);
    provider.setPasswordEncoder(passwordEncoder());
    return provider;
}
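
For this provider to be picked up, it would typically be registered with the AuthenticationManagerBuilder; a minimal sketch of that wiring:

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    // register our DaoAuthenticationProvider with the authentication manager
    auth.authenticationProvider(authProvider());
}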

Since we’re using a SimpleAuthenticationFilter, we configure our own AuthenticationFailureHandler to ensure failed login attempts are appropriately handled:

public SimpleAuthenticationFilter authenticationFilter() throws Exception {
    SimpleAuthenticationFilter filter = new SimpleAuthenticationFilter();
    filter.setAuthenticationManager(authenticationManagerBean());
    filter.setAuthenticationFailureHandler(failureHandler());
    return filter;
}
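
The failureHandler() method isn’t shown here; a simple implementation could redirect failed logins back to the login page with an error parameter (a sketch, matching the param.error check in our login page below):

public SimpleUrlAuthenticationFailureHandler failureHandler() {
    // on authentication failure, redirect to the login page with an error flag
    return new SimpleUrlAuthenticationFailureHandler("/login?error=true");
}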

3.4. Login Page

The login page we use collects our additional domain field that gets extracted by our SimpleAuthenticationFilter:

<form class="form-signin" th:action="@{/login}" method="post">
 <h2 class="form-signin-heading">Please sign in</h2>
 <p>Example: user / domain / password</p>
 <p th:if="${param.error}" class="error">Invalid user, password, or domain</p>
 <p>
   <label for="username" class="sr-only">Username</label>
   <input type="text" id="username" name="username" class="form-control" 
     placeholder="Username" required autofocus/>
 </p>
 <p>
   <label for="domain" class="sr-only">Domain</label>
   <input type="text" id="domain" name="domain" class="form-control" 
     placeholder="Domain" required autofocus/>
 </p>
 <p>
   <label for="password" class="sr-only">Password</label>
   <input type="password" id="password" name="password" class="form-control" 
     placeholder="Password" required autofocus/>
 </p>
 <button class="btn btn-lg btn-primary btn-block" type="submit">Sign in</button><br/>
 <p><a href="/index" th:href="@{/index}">Back to home page</a></p>
</form>

When we run the application and access the context at http://localhost:8081, we see a link to access a secured page. Clicking the link will cause the login page to display. As expected, we see the additional domain field:

Spring Security Form Login with Extra Fields

3.5. Summary

In our first example, we were able to reuse DaoAuthenticationProvider and UsernamePasswordAuthenticationToken by “faking out” the username field.

As a result, we were able to add support for an extra login field with a minimal amount of configuration and additional code.

4. Custom Project Setup

Our second approach will be very similar to the first but may be more appropriate for non-trivial use cases.

The key components of our second approach will include:

  • CustomAuthenticationFilter – an extension of UsernamePasswordAuthenticationFilter
  • CustomUserDetailsService – a custom interface declaring a loadUserByUsernameAndDomain method
  • CustomUserDetailsServiceImpl – an implementation of our CustomUserDetailsService
  • CustomUserDetailsAuthenticationProvider – an extension of AbstractUserDetailsAuthenticationProvider
  • CustomAuthenticationToken – an extension of UsernamePasswordAuthenticationToken
  • User – an extension of the User class provided by Spring Security that declares our extra domain field
  • SecurityConfig – our Spring Security configuration that inserts our CustomAuthenticationFilter into the filter chain, declares security rules and wires up dependencies
  • login.html – the login page that collects the username, password, and domain

4.1. Custom Authentication Filter

In our CustomAuthenticationFilter, we extract the username, password, and domain fields from the request. These values are used to create an instance of our CustomAuthenticationToken which is passed to the AuthenticationProvider for authentication:

public class CustomAuthenticationFilter 
  extends UsernamePasswordAuthenticationFilter {

    public static final String SPRING_SECURITY_FORM_DOMAIN_KEY = "domain";

    @Override
    public Authentication attemptAuthentication(
        HttpServletRequest request,
        HttpServletResponse response) 
          throws AuthenticationException {

        // ...

        CustomAuthenticationToken authRequest = getAuthRequest(request);
        setDetails(request, authRequest);
        return this.getAuthenticationManager().authenticate(authRequest);
    }

    private CustomAuthenticationToken getAuthRequest(HttpServletRequest request) {
        String username = obtainUsername(request);
        String password = obtainPassword(request);
        String domain = obtainDomain(request);

        // ...

        return new CustomAuthenticationToken(username, password, domain);
    }
}
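
The CustomAuthenticationToken itself simply carries the extra field alongside the standard principal and credentials; a minimal sketch of such a token:

public class CustomAuthenticationToken extends UsernamePasswordAuthenticationToken {

    private final String domain;

    public CustomAuthenticationToken(Object principal, Object credentials, String domain) {
        super(principal, credentials);
        this.domain = domain;
    }

    public String getDomain() {
        return domain;
    }
}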

4.2. Custom UserDetails Service

Our CustomUserDetailsService contract defines a single method called loadUserByUsernameAndDomain. 

The CustomUserDetailsServiceImpl class we create simply implements the contract and delegates to our CustomUserRepository to get the User:

 public UserDetails loadUserByUsernameAndDomain(String username, String domain) 
     throws UsernameNotFoundException {
     if (StringUtils.isAnyBlank(username, domain)) {
         throw new UsernameNotFoundException("Username and domain must be provided");
     }
     User user = userRepository.findUser(username, domain);
     if (user == null) {
         throw new UsernameNotFoundException(
           String.format("Username not found for domain, username=%s, domain=%s", 
             username, domain));
     }
     return user;
 }

4.3. Custom UserDetailsAuthenticationProvider

Our CustomUserDetailsAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider and delegates to our CustomUserDetailsService to retrieve the User. The most important feature of this class is the implementation of the retrieveUser method.

Note that we must cast the authentication token to our CustomAuthenticationToken for access to our custom field:

@Override
protected UserDetails retrieveUser(String username, 
  UsernamePasswordAuthenticationToken authentication) 
    throws AuthenticationException {
 
    CustomAuthenticationToken auth = (CustomAuthenticationToken) authentication;
    UserDetails loadedUser;

    try {
        loadedUser = this.userDetailsService
          .loadUserByUsernameAndDomain(auth.getPrincipal()
            .toString(), auth.getDomain());
    } catch (UsernameNotFoundException notFound) {
 
        if (authentication.getCredentials() != null) {
            String presentedPassword = authentication.getCredentials()
              .toString();
            passwordEncoder.matches(presentedPassword, userNotFoundEncodedPassword);
        }
        throw notFound;
    } catch (Exception repositoryProblem) {
 
        throw new InternalAuthenticationServiceException(
          repositoryProblem.getMessage(), repositoryProblem);
    }

    // ...

    return loadedUser;
}

4.4. Summary

Our second approach is nearly identical to the simple approach we presented first. By implementing our own AuthenticationProvider and CustomAuthenticationToken, we avoided needing to adapt our username field with custom parsing logic.

5. Conclusion

In this article, we’ve implemented a form login in Spring Security that made use of an extra login field. We did this in 2 different ways:

  • In our simple approach, we minimized the amount of code we needed to write. We were able to reuse DaoAuthenticationProvider and UsernamePasswordAuthenticationToken by adapting the username with custom parsing logic
  • In our more customized approach, we provided custom field support by extending AbstractUserDetailsAuthenticationProvider and providing our own CustomUserDetailsService with a CustomAuthenticationToken

As always, all source code can be found over on GitHub.


A Guide to the finalize Method in Java


1. Overview

In this tutorial, we’ll focus on a core aspect of the Java language – the finalize method provided by the root Object class.

Simply put, this is called before the garbage collection for a particular object.

2. Using Finalizers

The finalize() method is called the finalizer.

Finalizers get invoked when the JVM figures out that this particular instance should be garbage collected. Such a finalizer may perform any operations, including bringing the object back to life.

The main purpose of a finalizer is, however, to release resources used by objects before they’re removed from the memory. A finalizer can work as the primary mechanism for clean-up operations, or as a safety net when other methods fail.

To understand how a finalizer works, let’s take a look at a class declaration:

public class Finalizable {
    private BufferedReader reader;

    public Finalizable() {
        InputStream input = this.getClass()
          .getClassLoader()
          .getResourceAsStream("file.txt");
        this.reader = new BufferedReader(new InputStreamReader(input));
    }

    public String readFirstLine() throws IOException {
        String firstLine = reader.readLine();
        return firstLine;
    }

    // other class members
}

The class Finalizable has a field reader, which references a closeable resource. When an object is created from this class, it constructs a new BufferedReader instance reading from a file in the classpath.

Such an instance is used in the readFirstLine method to extract the first line in the given file. Notice that the reader isn’t closed in the given code.

We can do that using a finalizer:

@Override
public void finalize() {
    try {
        reader.close();
        System.out.println("Closed BufferedReader in the finalizer");
    } catch (IOException e) {
        // ...
    }
}

It’s easy to see that a finalizer is declared just like any normal instance method.

In reality, the time at which the garbage collector calls finalizers is dependent on the JVM’s implementation and the system’s conditions, which are out of our control.

To make garbage collection happen on the spot, we’ll take advantage of the System.gc method. In real-world systems, we should never invoke that explicitly, for a number of reasons:

  1. It’s costly
  2. It doesn’t trigger the garbage collection immediately – it’s just a hint for the JVM to start GC
  3. JVM knows better when GC needs to be called

If we need to force GC, we can use jconsole for that.

The following is a test case demonstrating the operation of a finalizer:

@Test
public void whenGC_thenFinalizerExecuted() throws IOException {
    String firstLine = new Finalizable().readFirstLine();
    assertEquals("baeldung.com", firstLine);
    System.gc();
}

In the first statement, a Finalizable object is created, then its readFirstLine method is called. This object isn’t assigned to any variable, hence it’s eligible for garbage collection when the System.gc method is invoked.

The assertion in the test verifies the content of the input file and is used just to prove that our custom class works as expected.

When we run the provided test, a message will be printed on the console about the buffered reader being closed in the finalizer. This implies the finalize method was called and it has cleaned up the resource.

Up to this point, finalizers look like a great way for pre-destroy operations. However, that’s not quite true.

In the next section, we’ll see why using them should be avoided.

3. Avoiding Finalizers

Let’s have a look at several problems we’ll be facing when using finalizers to perform critical actions.

The first noticeable issue associated with finalizers is the lack of promptness. We cannot know when a finalizer is executed since garbage collection may occur anytime.

By itself, this isn’t a problem because the most important thing is that the finalizer is still invoked, sooner or later. However, system resources are limited. Thus, we may run out of those resources before they get a chance to be cleaned up, potentially resulting in system crashes.

Finalizers also have an impact on the program’s portability. Since the garbage collection algorithm is JVM implementation dependent, a program may run very well on one system while behaving differently at runtime on another.

Another significant issue coming with finalizers is the performance cost. Specifically, the JVM must perform many more operations when constructing and destroying objects containing a non-empty finalizer.

The details are implementation-specific, but the general ideas are the same across all JVMs: additional steps must be taken to ensure finalizers are executed before the objects are discarded. Those steps can make the duration of object creation and destruction increase by hundreds or even thousands of times.

The last problem we’ll be talking about is the lack of exception handling during finalization. If a finalizer throws an exception, the finalization process is canceled, and the exception is ignored, leaving the object in a corrupted state without any notification.
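
To illustrate, an exception thrown from a finalizer simply disappears (a contrived sketch):

@Override
public void finalize() {
    // thrown on the GC's finalizer thread: finalization stops and the exception is ignored
    throw new IllegalStateException("Failed during finalization");
}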

4. No-Finalizer Example

Let’s explore a solution providing the same functionality but without the use of finalize() method. Notice that the example below isn’t the only way to replace finalizers.

Instead, it’s used to demonstrate an important point: there are always options that help us to avoid finalizers.

Here’s the declaration of our new class:

public class CloseableResource implements AutoCloseable {
    private BufferedReader reader;

    public CloseableResource() {
        InputStream input = this.getClass()
          .getClassLoader()
          .getResourceAsStream("file.txt");
        reader = new BufferedReader(new InputStreamReader(input));
    }

    public String readFirstLine() throws IOException {
        String firstLine = reader.readLine();
        return firstLine;
    }

    @Override
    public void close() {
        try {
            reader.close();
            System.out.println("Closed BufferedReader in the close method");
        } catch (IOException e) {
            // handle exception
        }
    }
}

It’s not hard to see that the only difference between the new CloseableResource class and our previous Finalizable class is the implementation of the AutoCloseable interface instead of a finalizer definition.

Notice that the body of the close method of CloseableResource is almost the same as the body of the finalizer in class Finalizable.

The following is a test method, which reads an input file and releases the resource after finishing its job:

@Test
public void whenTryWResourcesExits_thenResourceClosed() throws IOException {
    try (CloseableResource resource = new CloseableResource()) {
        String firstLine = resource.readFirstLine();
        assertEquals("baeldung.com", firstLine);
    }
}

In the above test, a CloseableResource instance is created in the try block of a try-with-resources statement, hence that resource is automatically closed when the try-with-resources block completes execution.

Running the given test method, we’ll see a message printed out from the close method of the CloseableResource class.

5. Conclusion

In this tutorial, we focused on a core concept in Java – the finalize method. This looks useful on paper but can have ugly side effects at runtime. And, more importantly, there’s always an alternative solution to using a finalizer.

One critical point to notice is that finalize has been deprecated starting with Java 9 – and will eventually be removed.

As always, the source code for this tutorial can be found over on GitHub.

Java Weekly, Issue 213


Here we go…

1. Spring and Java

>> Fumigating the IDEA Ultimate code using dataflow analysis [blog.jetbrains.com]

A deep dive into how code inspection works in IntelliJ.

>> Immutable Versus Unmodifiable in JDK 10 [marxsoftware.blogspot.com]

An important concept to understand – an unmodifiable collection isn’t necessarily immutable.

Simply put, if the contained elements are mutable, the entire collection is clearly mutable, even though the collection itself might be unmodifiable.

>> Refining redirect semantics in the Servlet API [blog.frankel.ch]

Unfortunately, HttpServletResponse.sendRedirect() returns HTTP 302 instead of 303, so we may need to handle that manually if we need to implement the Post/Redirect/Get pattern.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> A Hypothetical Consulting Gig [daedtech.com]

This is what an average consulting gig might look like 🙂

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Absurd Absolute [dilbert.com]

>> Data Encapsulation [dilbert.com]

>> Unforeseen Problems [dilbert.com]

4. Pick of the Week

>> The Brutal Lifecycle of JavaScript Frameworks [stackoverflow.blog]

How to Manually Authenticate User with Spring Security


1. Overview

In this quick article, we’ll focus on how to programmatically set an authenticated user in Spring Security and Spring MVC.

2. Spring Security

Simply put, Spring Security holds the principal information of each authenticated user in a ThreadLocal – represented as an Authentication object.

In order to construct and set this Authentication object – we need to use the same approach Spring Security typically uses to build the object on a standard authentication.

To do that, let’s manually trigger an authentication and then set the resulting Authentication object into the current SecurityContext used by the framework to hold the currently logged-in user:

UsernamePasswordAuthenticationToken authReq
 = new UsernamePasswordAuthenticationToken(user, pass);
Authentication auth = authManager.authenticate(authReq);
SecurityContext sc = SecurityContextHolder.getContext();
sc.setAuthentication(auth);

After setting the Authentication in the context, we’ll now be able to check if the current user is authenticated – using sc.getAuthentication().isAuthenticated().

3. Spring MVC

By default, Spring Security adds an additional filter in the Spring Security filter chain – which is capable of persisting the Security Context (SecurityContextPersistenceFilter class).

In turn, it delegates the persistence of the Security Context to an instance of SecurityContextRepository, defaulting to the HttpSessionSecurityContextRepository class.

So, in order to set the authentication on the request and hence, make it available for all subsequent requests from the client, we need to manually set the SecurityContext containing the Authentication in the HTTP session:

public void login(HttpServletRequest req, String user, String pass) { 
    UsernamePasswordAuthenticationToken authReq
      = new UsernamePasswordAuthenticationToken(user, pass);
    Authentication auth = authManager.authenticate(authReq);
    
    SecurityContext sc = SecurityContextHolder.getContext();
    sc.setAuthentication(auth);
    HttpSession session = req.getSession(true);
    session.setAttribute(SPRING_SECURITY_CONTEXT_KEY, sc);
}

SPRING_SECURITY_CONTEXT_KEY is a statically imported HttpSessionSecurityContextRepository.SPRING_SECURITY_CONTEXT_KEY.

It should be noted that we can’t directly use the HttpSessionSecurityContextRepository – because it works in conjunction with the SecurityContextPersistenceFilter.

That is because the filter uses the repository to load and store the security context before and after the execution of the rest of the defined filters in the chain, but it uses a custom wrapper over the response which is passed to the chain.

So, in this case, we should know the class type of the wrapper used and pass it to the appropriate save method in the repository.

4. Conclusion

In this quick tutorial, we went over how to manually set the user Authentication in the Spring Security context and how it can be made available for Spring MVC purposes, focusing on the code samples that illustrate the simplest way to achieve it.

As always, code samples can be found over on GitHub.

Introduction to Spliterator in Java


1. Overview

The Spliterator interface, introduced in Java 8, can be used for traversing and partitioning sequences. It’s a base utility for Streams, especially parallel ones.

In this article, we’ll cover its usage, characteristics, methods and how to create our own custom implementations.

2. Spliterator API

2.1. tryAdvance

This is the main method used for stepping through a sequence. The method takes a Consumer that’s used to consume elements of the Spliterator one by one, sequentially, and returns false if there are no more elements to be traversed.

Here, we’ll take a look at how to use it to traverse and partition elements.

First, let’s assume that we’ve got an ArrayList with 35,000 articles and that the Article class is defined as:

public class Article {
    private List<Author> listOfAuthors;
    private int id;
    private String name;
    
    // standard constructors/getters/setters
}

Now, let’s implement a task that processes the list of articles and adds a suffix of “– published by Baeldung” to each article name:

public String call() {
    int current = 0;
    while (spliterator.tryAdvance(a -> a.setName(a.getName()
      .concat("- published by Baeldung")))) {
        current++;
    }
    
    return Thread.currentThread().getName() + ":" + current;
}

Notice that this task outputs the number of articles processed when it finishes the execution.

Another key point is that we used the tryAdvance() method to process the next element.
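
For context, call() above could belong to a Callable task that wraps the Spliterator; a sketch of the surrounding class, which the excerpt omits:

public class Task implements Callable<String> {

    private final Spliterator<Article> spliterator;

    public Task(Spliterator<Article> spliterator) {
        this.spliterator = spliterator;
    }

    // call() as shown above
}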

2.2. trySplit

Next, let’s split Spliterators (hence the name) and process partitions independently.

The trySplit method tries to split the Spliterator into two parts. The caller then processes one part, and the returned instance processes the other, allowing the two to be processed in parallel.

Let’s generate our list first:

public static List<Article> generateElements() {
    return Stream.generate(() -> new Article("Java"))
      .limit(35000)
      .collect(Collectors.toList());
}

Next, we obtain our Spliterator instance using the spliterator() method. Then we apply our trySplit() method:

@Test
public void givenSpliterator_whenAppliedToAListOfArticle_thenSplittedInHalf() {
    assertThat(new Task(split1).call())
      .containsSequence(Executor.generateElements().size() / 2 + "");
    assertThat(new Task(split2).call())
      .containsSequence(Executor.generateElements().size() / 2 + "");
}
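
Here, split1 and split2 could be obtained along these lines (a sketch; the test setup isn’t shown in the excerpt):

// split1 keeps one half of the elements; trySplit() hands the other half to split2
Spliterator<Article> split1 = Executor.generateElements().spliterator();
Spliterator<Article> split2 = split1.trySplit();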

The splitting process worked as intended and divided the records equally.

2.3. estimateSize

The estimateSize method gives us an estimated number of remaining elements:

LOG.info("Size: " + split1.estimateSize());

This will output:

Size: 17500

2.4. hasCharacteristics

This API checks if the given characteristics match the properties of the Spliterator. The related characteristics() method returns an int bit representation of all the characteristics the Spliterator holds:

LOG.info("Characteristics: " + split1.characteristics());

This will output:

Characteristics: 16464
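
We can check for a single characteristic by passing its constant to hasCharacteristics; for instance (a quick illustration):

if (split1.hasCharacteristics(Spliterator.SIZED)) {
    // SIZED guarantees that estimateSize() returns the exact remaining count
    LOG.info("Exact remaining size: " + split1.estimateSize());
}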

3. Spliterator Characteristics

A Spliterator has eight different characteristics that describe its behavior. Those can be used as hints by external tools:

  • SIZED – if it’s capable of returning an exact number of elements with the estimateSize() method
  • SORTED – if it’s iterating through a sorted source
  • SUBSIZED – if we split the instance using a trySplit() method and obtain Spliterators that are SIZED as well
  • CONCURRENT – if source can be safely modified concurrently
  • DISTINCT – if for each pair of encountered elements x, y, !x.equals(y)
  • IMMUTABLE – if elements held by source can’t be structurally modified
  • NONNULL – if the source guarantees that encountered elements won’t be null
  • ORDERED – if iterating over an ordered sequence

4. A Custom Spliterator

4.1. When to Customize

First, let’s assume the following scenario:

We’ve got an Article class with a list of authors; an article can have more than one author. Furthermore, we consider an author related to an article if the author’s related article id matches the article id.

Our Author class will look like this:

public class Author {
    private String name;
    private int relatedArticleId;

    // standard getters, setters & constructors
}

Next, we’ll implement a class to count authors while traversing a stream of authors. Then the class will perform a reduction on the stream.

Let’s have a look at the class implementation:

public class RelatedAuthorCounter {
    private int counter;
    private boolean isRelated;
 
    // standard constructors/getters
 
    public RelatedAuthorCounter accumulate(Author author) {
        if (author.getRelatedArticleId() == 0) {
            return isRelated ? this : new RelatedAuthorCounter( counter, true);
        } else {
            return isRelated ? new RelatedAuthorCounter(counter + 1, false) : this;
        }
    }

    public RelatedAuthorCounter combine(RelatedAuthorCounter other) {
        return new RelatedAuthorCounter(
          counter + other.counter, 
          other.isRelated);
    }
}

Each method in the above class performs a specific operation to count while traversing.

First, the accumulate() method traverses the authors one by one in an iterative way, then combine() sums two counters using their values. Finally, getCounter() returns the counter.

Now, to test what we’ve done so far, let’s convert our article’s list of authors to a stream of authors:

Stream<Author> stream = article.getListOfAuthors().stream();

And implement a countAuthors() method to perform the reduction on the stream using RelatedAuthorCounter:

private int countAuthors(Stream<Author> stream) {
    RelatedAuthorCounter authorCounter = stream.reduce(
      new RelatedAuthorCounter(0, true), 
      RelatedAuthorCounter::accumulate, 
      RelatedAuthorCounter::combine);
    return authorCounter.getCounter();
}

If we use a sequential stream, the output will be “count = 9” as expected. However, the problem arises when we try to parallelize the operation.

Let’s take a look at the following test case:

@Test
void givenAStreamOfAuthors_whenProcessedInParallel_countProducesWrongOutput() {
    assertThat(Executor.countAuthors(stream.parallel())).isGreaterThan(9);
}

Apparently, something has gone wrong – splitting the stream at a random position caused an author to be counted twice.

4.2. How to Customize

To solve this, we need to implement a Spliterator that splits the list of authors only where the related id and the article id match. Here’s the implementation of our custom Spliterator:

public class RelatedAuthorSpliterator implements Spliterator<Author> {
    private final List<Author> list;
    AtomicInteger current = new AtomicInteger();
    // standard constructor/getters

    @Override
    public boolean tryAdvance(Consumer<? super Author> action) {
        // honor the contract: return false only when nothing was consumed
        if (current.get() >= list.size()) {
            return false;
        }
        action.accept(list.get(current.getAndIncrement()));
        return true;
    }

    @Override
    public Spliterator<Author> trySplit() {
        int currentSize = list.size() - current.get();
        if (currentSize < 10) {
            return null;
        }
        for (int splitPos = currentSize / 2 + current.intValue();
          splitPos < list.size(); splitPos++) {
            if (list.get(splitPos).getRelatedArticleId() == 0) {
                Spliterator<Author> spliterator
                  = new RelatedAuthorSpliterator(
                  list.subList(current.get(), splitPos));
                current.set(splitPos);
                return spliterator;
            }
        }
        return null;
   }

   @Override
   public long estimateSize() {
       return list.size() - current.get();
   }
 
   @Override
   public int characteristics() {
       return CONCURRENT;
   }
}

Now, applying the countAuthors() method will give the correct output. The following code demonstrates that:

@Test
public void
  givenAStreamOfAuthors_whenProcessedInParallel_countProducesRightOutput() {
    Stream<Author> stream2 = StreamSupport.stream(spliterator, true);
 
    assertThat(Executor.countAuthors(stream2.parallel())).isEqualTo(9);
}

Also, the custom Spliterator is created from a list of authors and traverses through it by holding the current position.

Let’s discuss the implementation of each method in more detail:

  • tryAdvance passes authors to the Consumer at the current index position and increments its position
  • trySplit defines the splitting mechanism, in our case, the RelatedAuthorSpliterator is created when ids matched, and the splitting divides the list into two parts
  • estimateSize – the difference between the list size and the position of the currently iterated author
  • characteristics – returns the Spliterator characteristics, in our case CONCURRENT, which indicates that the source of this Spliterator may be safely modified by other threads (we could also report SIZED, since the value returned by estimateSize() is exact)

5. Support for Primitive Values

The Spliterator API supports primitive values including double, int and long.

The only difference between using a generic and a primitive dedicated Spliterator is the given Consumer and the type of the Spliterator.

For example, when we need it for an int value, we need to pass an IntConsumer. Furthermore, here’s a list of primitive dedicated Spliterators:

  • OfPrimitive<T, T_CONS, T_SPLITR extends Spliterator.OfPrimitive<T, T_CONS, T_SPLITR>>: parent interface for other primitives
  • OfInt: A Spliterator specialized for int
  • OfDouble: A Spliterator dedicated for double
  • OfLong: A Spliterator dedicated for long
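
As a quick illustration (a sketch), here’s how we could step through an int sequence with a Spliterator.OfInt and an IntConsumer:

Spliterator.OfInt intSpliterator = IntStream.rangeClosed(1, 5).spliterator();

// the IntConsumer cast selects the primitive overload and avoids boxing
intSpliterator.forEachRemaining((IntConsumer) value -> System.out.println(value));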

6. Conclusion

In this article, we covered Java 8 Spliterator usage, methods, characteristics, splitting process, primitive support and how to customize it.

As always, the full implementation of this article can be found over on GitHub.

Geospatial Support in ElasticSearch


1. Introduction

Elasticsearch is best known for its full-text search capabilities, but it also features full geospatial support.

We can find more about setting up Elasticsearch and getting started in this previous article.

Let’s take a look at how we can save geo-data in Elasticsearch and how we can search that data using geo queries.

2. Geo Data Type

To enable geo-queries, we need to create the mapping of the index manually and explicitly set the field mapping.

Dynamic mapping won’t work while setting mapping for geo types.

Elasticsearch offers two ways to represent geodata:

  1. Latitude-longitude pairs using geo-point field type
  2. Complex shape defined in GeoJSON using geo-shape field type

Let’s take a more in-depth look at each of the above categories:

2.1. Geo Point Data Type

Geo-point field type accepts latitude-longitude pairs that can be used to:

  • Find points within a certain distance of a central point
  • Find points within a box or a polygon
  • Aggregate documents geographically or by distance from the central point
  • Sort documents by distance

Below is sample mapping for the field to save geo point data:

PUT /index_name
{
    "mappings": {
        "TYPE_NAME": {
            "properties": {
                "location": { 
                    "type": "geo_point" 
                } 
            } 
        } 
    } 
}

As we can see from the above example, the type for the location field is geo_point. Thus, we can now provide a latitude-longitude pair in the location field.

2.2. Geo Shape Data Type

Unlike geo-point, geo shape provides the functionality to save and search complex shapes like polygons and rectangles. The geo shape data type must be used when we want to search documents which contain shapes other than geo points.

Let’s take a look at mapping for geo shape data type:

PUT /index_name
{
    "mappings": {
        "TYPE_NAME": {
            "properties": {
                "location": {
                    "type": "geo_shape",
                    "tree": "quadtree",
                    "precision": "1m"
                }
            }
        }
    }
}

The above mapping will index the location field with a quadtree implementation at a precision of one meter.

Elasticsearch breaks down the provided geo shape into a series of geo hashes consisting of small grid-like squares called rasters.

Depending on our requirements, we can control the indexing of geo shape fields. For example, when we’re searching documents for navigation, precision up to one meter becomes critical, as lower precision may lead to an incorrect path.

Whereas if we’re looking for some sightseeing places, a precision of up to 10-50 meters can be acceptable. 

One thing that we need to keep in mind while indexing geo shape data is that we’re always trading performance for accuracy. With higher precision, Elasticsearch generates more terms – which leads to increased memory usage. Hence, we need to be very cautious when selecting the mapping for the geo shape.

We can find more mapping options for the geo-shape data type on the official ES site.

3. Different Ways to Save Geo Point Data

3.1. Latitude Longitude Object

PUT index_name/index_type/1
{
    "location": { 
        "lat": 23.02,
        "lon": 72.57
    }
}

Here, geo-point location is saved as an object with latitude and longitude as keys.

3.2. Latitude Longitude Pair

{
    "location": "23.02,72.57"
}

Here, location is expressed as a latitude-longitude pair in a plain string format. Please note the sequence in the string format: latitude first, then longitude.

3.3. Geo Hash

{
    "location": "tsj4bys"
}

We can also provide geo point data in the form of a geo hash, as shown in the example above. An online tool can be used to convert latitude-longitude to a geo hash.

3.4. Longitude Latitude Array

{
    "location": [72.57, 23.02]
}

The sequence is reversed when latitude and longitude are supplied as an array: longitude comes first, then latitude. Initially, latitude-longitude was the order used in both the string and the array formats, but the array order was later reversed to match the format used by GeoJSON.

4. Different Ways to Save Geo Shape Data

4.1. Point

POST /index/type
{
    "location" : {
        "type" : "point",
        "coordinates" : [72.57, 23.02]
    }
}

Here, the geo shape type that we’re trying to insert is a point. Please take a look at the location field: we have a nested object consisting of the fields type and coordinates. These meta-fields help Elasticsearch in identifying the geo shape and its actual data.

4.2. LineString

POST /index/type
{
    "location" : {
        "type" : "linestring",
        "coordinates" : [[77.57, 23.02], [77.59, 23.05]]
    }
}

Here, we’re inserting a linestring geo shape. The coordinates for a linestring consist of two points, i.e., the start and the end point. The linestring geo shape is very helpful for navigation use cases.

4.3. Polygon

POST /index/type
{
    "location" : {
        "type" : "polygon",
        "coordinates" : [
            [ [10.0, 0.0], [11.0, 0.0], [11.0, 1.0], [10.0, 1.0], [10.0, 0.0] ]
        ]
    }
}

Here, we’re inserting a polygon geo shape. Please take a look at the coordinates in the above example: the first and last coordinates in a polygon should always match, i.e., it must be a closed polygon.

Elasticsearch also supports other GeoJSON structures. A complete list of the other supported formats is as follows:

  • MultiPoint
  • MultiLineString
  • MultiPolygon
  • GeometryCollection
  • Envelope
  • Circle

We can find examples of above-supported formats on the official ES site.

For all structures, the inner type and coordinates are mandatory fields. Also, sorting and retrieving geo shape fields are currently not possible in Elasticsearch due to their complex structure. Thus, the only way to retrieve geo fields is from the source field.

5. ElasticSearch Geo Query

Now that we know how to insert documents containing geo shapes, let’s dive into fetching those records using geo shape queries. But before we start using geo queries, we’ll need the following Maven dependencies to support the Java API for geo queries:

<dependency>
    <groupId>org.locationtech.spatial4j</groupId>
    <artifactId>spatial4j</artifactId>
    <version>0.7</version> 
</dependency>
<dependency>
    <groupId>com.vividsolutions</groupId>
    <artifactId>jts</artifactId>
    <version>1.13</version>
    <exclusions>
        <exclusion>
            <groupId>xerces</groupId>
            <artifactId>xercesImpl</artifactId>
        </exclusion>
    </exclusions>
</dependency>

We can search for the above dependencies in the Maven Central repository as well.

Elasticsearch supports different types of geo queries, and they are as follows:

5.1. Geo Shape Query

The geo_shape query requires the geo_shape mapping.

Similar to the geo_shape data type, the geo_shape query uses a GeoJSON structure to query documents.

Below is a sample query to fetch all documents that fall within the given top-left and bottom-right coordinates:

{
    "query":{
        "bool": {
            "must": {
                "match_all": {}
            },
            "filter": {
                "geo_shape": {
                    "region": {
                        "shape": {
                            "type": "envelope",
                            "coordinates" : [[75.00, 25.0], [80.1, 30.2]]
                        },
                        "relation": "within"
                    }
                }
            }
        }
    }
}

Here, relation determines spatial relation operators used at search time.

Below is the list of supported operators:

  • INTERSECTS – (default) returns all documents whose geo_shape field intersects the query geometry
  • DISJOINT – retrieves all documents whose geo_shape field has nothing in common with the query geometry
  • WITHIN – gets all documents whose geo_shape field is within the query geometry
  • CONTAINS – returns all documents whose geo_shape field contains the query geometry

Similarly, we can query using different GeoJSON shapes.

The Java code for the above query is:

QueryBuilders
  .geoShapeQuery(
    "region",
    ShapeBuilder.newEnvelope().topLeft(75.00, 25.0).bottomRight(80.1, 30.2))
  .relation(ShapeRelation.WITHIN);

5.2. Geo Bounding Box Query

The geo bounding box query is used to fetch all documents whose point location falls within the given bounding box. Below is a sample bounding box query:

{
    "query": {
        "bool" : {
            "must" : {
                "match_all" : {}
            },
            "filter" : {
                "geo_bounding_box" : {
                    "location" : {
                        "bottom_left" : [28.3, 30.5],
                        "top_right" : [31.8, 32.12]
                    }
                }
            }
        }
    }
}

The Java code for the above bounding box query is:

QueryBuilders
  .geoBoundingBoxQuery("location").bottomLeft(28.3, 30.5).topRight(31.8, 32.12);

The geo bounding box query supports formats similar to those of the geo_point data type. Sample queries for the supported formats can be found on the official site.

5.3. Geo Distance Query

The geo distance query is used to filter all documents that fall within the specified distance of the given point.

Here’s a sample geo_distance query:

{
    "query": {
        "bool" : {
            "must" : {
                "match_all" : {}
            },
            "filter" : {
                "geo_distance" : {
                    "distance" : "10miles",
                    "location" : [31.131,29.976]
                }
            }
        }
    }
}

And here’s the Java code for the above query:

QueryBuilders
  .geoDistanceQuery("location")
  .point(29.976, 31.131)
  .distance(10, DistanceUnit.MILES);

Similar to geo_point, geo distance query also supports multiple formats for passing location coordinates. More details on supported formats can be found at the official site.

5.4. Geo Polygon Query

A geo polygon query filters all records that have points falling within the given polygon.

Let’s have a quick look at a sample query:

{
    "query": {
        "bool" : {
            "must" : {
                "match_all" : {}
            },
            "filter" : {
                "geo_polygon" : {
                    "location" : {
                        "points" : [
                        {"lat" : 22.733, "lon" : 68.859},
                        {"lat" : 24.733, "lon" : 68.859},
                        {"lat" : 23, "lon" : 70.859}
                        ]
                    }
                }
            }
        }
    }
}

And here’s the Java code for this query:

QueryBuilders
  .geoPolygonQuery("location")
  .addPoint(22.733, 68.859)
  .addPoint(24.733, 68.859)
  .addPoint(23, 70.859);

Geo Polygon Query also supports formats mentioned below:

  • lat-long as an array: [lon, lat]
  • lat-long as a string: “lat, lon”
  • geo hash

geo_point data type is mandatory in order to use this query.

6. Conclusion

In this article, we discussed different mapping options for indexing geo data, i.e., geo_point and geo_shape.

We also went through different ways to store geo-data and finally, we observed geo-queries and Java API to filter results using geo queries.

As always, the code is available in this GitHub project.

Introduction to Lettuce – the Java Redis Client


1. Overview

This article is an introduction to Lettuce, a Redis Java client.

Redis is an in-memory key-value store that can be used as a database, cache or message broker. Data is added, queried, modified, and deleted with commands that operate on keys in Redis’ in-memory data structure.

Lettuce supports both synchronous and asynchronous communication and the complete Redis API, including its data structures, pub/sub messaging, and high-availability server connections.

2. Why Lettuce?

We’ve covered Jedis in one of the previous posts. What makes Lettuce different?

The most significant difference is its asynchronous support via the Java 8’s CompletionStage interface and support for Reactive Streams. As we’ll see below, Lettuce offers a natural interface for making asynchronous requests from the Redis database server and for creating streams.

It also uses Netty for communicating with the server. This makes for a “heavier” API, but also makes it better suited for sharing a connection with more than one thread.

3. Setup

3.1. Dependency

Let’s start by declaring the only dependency we’ll need in the pom.xml:

<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>5.0.1.RELEASE</version>
</dependency>

The latest version of the library can be checked on the GitHub repository or on Maven Central.

3.2. Redis Installation

We’ll need to install and run at least one instance of Redis, two if we wish to test clustering or sentinel mode (although sentinel mode requires three servers to function correctly). For this article, we’re using 4.0.x – the latest stable version at this moment.

More information about getting started with Redis can be found here, including downloads for Linux and MacOS.

Redis doesn’t officially support Windows, but there’s a port of the server here. We can also run Redis in Docker which is a better alternative for Windows 10 and a fast way to get up and running.

4. Connections

4.1. Connecting to a Server

Connecting to Redis consists of four steps:

  1. Creating a Redis URI
  2. Using the URI to create a RedisClient
  3. Opening a Redis connection
  4. Generating a set of RedisCommands

Let’s see the implementation:

RedisClient redisClient = RedisClient
  .create("redis://password@localhost:6379/");
StatefulRedisConnection<String, String> connection
 = redisClient.connect();

A StatefulRedisConnection is what it sounds like; a thread-safe connection to a Redis server that will maintain its connection to the server and reconnect if needed. Once we have a connection, we can use it to execute Redis commands either synchronously or asynchronously.

RedisClient uses substantial system resources, as it holds Netty resources for communicating with the Redis server. Applications that require multiple connections should use a single RedisClient.

4.2. Redis URIs

We create a RedisClient by passing a URI to the static factory method.

Lettuce leverages a custom syntax for Redis URIs. This is the schema:

redis :// [password@] host [: port] [/ database]
  [? [timeout=timeout[d|h|m|s|ms|us|ns]]
  [&database=database]]

There are four URI schemes:

  • redis – a standalone Redis server
  • rediss – a standalone Redis server via an SSL connection
  • redis-socket – a standalone Redis server via a Unix domain socket
  • redis-sentinel – a Redis Sentinel server

The Redis database instance can be specified as part of the URL path or as an additional parameter. If both are supplied, the parameter has higher precedence.

In the example above, we’re using a String representation. Lettuce also has a RedisURI class for building connections. It offers the Builder pattern:

RedisURI.Builder
  .redis("localhost", 6379).auth("password")
  .database(1).build();

And a constructor:

new RedisURI("localhost", 6379, 60, TimeUnit.SECONDS);

4.3. Synchronous Commands

Similar to Jedis, Lettuce provides a complete Redis command set in the form of methods.

However, Lettuce implements both synchronous and asynchronous versions. We’ll look at the synchronous version briefly, and then use the asynchronous implementation for the rest of the tutorial.

After we create a connection, we use it to create a command set:

RedisCommands<String, String> syncCommands = connection.sync();

Now we have an intuitive interface for communicating with Redis.

We can set and get String values:

syncCommands.set("key", "Hello, Redis!");

String value = syncommands.get(“key”);

We can work with hashes:

syncCommands.hset("recordName", "FirstName", "John");
syncCommands.hset("recordName", "LastName", "Smith");
Map<String, String> record = syncCommands.hgetall("recordName");

We’ll cover more Redis later in the article.

The Lettuce synchronous API uses the asynchronous API. Blocking is done for us at the command level. This means that more than one client can share a synchronous connection.

4.4. Asynchronous Commands

Let’s take a look at the asynchronous commands:

RedisAsyncCommands<String, String> asyncCommands = connection.async();

We retrieve a set of RedisAsyncCommands from the connection, similar to how we retrieved the synchronous set. These commands return a RedisFuture (which is a CompletableFuture internally):

RedisFuture<String> result = asyncCommands.get("key");

A guide to working with a CompletableFuture can be found here.
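
Since RedisFuture implements CompletionStage, we can also attach a callback instead of blocking on get() (a quick sketch):

// runs when the value arrives, without blocking the calling thread
result.thenAccept(value -> System.out.println("Got value: " + value));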

4.5. Reactive API

Finally, let’s see how to work with non-blocking reactive API:

RedisStringReactiveCommands<String, String> reactiveCommands = connection.reactive();

These commands return results wrapped in a Mono or a Flux from Project Reactor.

A guide to working with Project Reactor can be found here.
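
For instance, fetching a value reactively and printing it when it arrives could look like this (a sketch):

// nothing happens until we subscribe to the Mono
reactiveCommands.get("key")
  .subscribe(value -> System.out.println("Got value: " + value));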

5. Redis Data Structures

We briefly looked at strings and hashes above; now let’s look at how Lettuce implements the rest of Redis’ data structures. As we’d expect, each Redis command has a similarly-named method.

5.1. Lists

Lists are lists of Strings with the order of insertion preserved. Values are inserted or retrieved from either end:

asyncCommands.lpush("tasks", "firstTask");
asyncCommands.lpush("tasks", "secondTask");
RedisFuture<String> redisFuture = asyncCommands.rpop("tasks");

String nextTask = redisFuture.get();

In this example, nextTask equals “firstTask“. Lpush pushes values to the head of the list, and then rpop pops values from the end of the list.

We can also pop elements from the other end:

asyncCommands.del("tasks");
asyncCommands.lpush("tasks", "firstTask");
asyncCommands.lpush("tasks", "secondTask");
redisFuture = asyncCommands.lpop("tasks");

String nextTask = redisFuture.get();

We start the second example by removing the list with del. Then we insert the same values again, but we use lpop to pop the values from the head of the list, so the nextTask holds “secondTask” text.

5.2. Sets

Redis Sets are unordered collections of Strings similar to Java Sets; there are no duplicate elements:

asyncCommands.sadd("pets", "dog");
asyncCommands.sadd("pets", "cat");
asyncCommands.sadd("pets", "cat");
 
RedisFuture<Set<String>> pets = asyncCommands.smembers("pets");
RedisFuture<Boolean> exists = asyncCommands.sismember("pets", "dog");

When we retrieve the Redis set as a Set, the size is two, since the duplicate “cat” was ignored. When we query Redis for the existence of “dog” with sismember, the response is true.

5.3. Hashes

We briefly looked at an example of hashes earlier. They are worth a quick explanation.

Redis Hashes are records with String fields and values. Each record also has a key in the primary index:

asyncCommands.hset("recordName", "FirstName", "John");
asyncCommands.hset("recordName", "LastName", "Smith");

RedisFuture<String> lastName 
  = asyncCommands.hget("recordName", "LastName");
RedisFuture<Map<String, String>> record 
  = asyncCommands.hgetall("recordName");

We use hset to add fields to the hash, passing in the name of the hash, the name of the field, and a value.

Then, we retrieve an individual value with hget, the name of the record and the field. Finally, we fetch the entire record as a hash with hgetall.

5.4. Sorted Sets

Sorted Sets contain values and a rank, by which they are sorted. The rank is a 64-bit floating point value.

Items are added with a rank, and retrieved in a range:

asyncCommands.zadd("sortedset", 1, "one");
asyncCommands.zadd("sortedset", 4, "zero");
asyncCommands.zadd("sortedset", 2, "two");

RedisFuture<List<String>> valuesForward = asyncCommands.zrange("sortedset", 0, 3);
RedisFuture<List<String>> valuesReverse = asyncCommands.zrevrange("sortedset", 0, 3);

The second argument to zadd is a rank. We retrieve a range by rank with zrange for ascending order and zrevrange for descending.

We added “zero” with a rank of 4, so it will appear at the end of valuesForward and at the beginning of valuesReverse.

6. Transactions

Transactions allow the execution of a set of commands in a single atomic step. These commands are guaranteed to be executed in order and exclusively. Commands from another user won’t be executed until the transaction finishes.

Either all commands are executed, or none of them are. Redis will not perform a rollback if one of them fails. Once exec() is called, all commands are executed in the order specified.

Let’s look at an example:

asyncCommands.multi();
    
RedisFuture<String> result1 = asyncCommands.set("key1", "value1");
RedisFuture<String> result2 = asyncCommands.set("key2", "value2");
RedisFuture<String> result3 = asyncCommands.set("key3", "value3");

RedisFuture<TransactionResult> execResult = asyncCommands.exec();

TransactionResult transactionResult = execResult.get();

String firstResult = transactionResult.get(0);
String secondResult = transactionResult.get(1);
String thirdResult = transactionResult.get(2);

The call to multi starts the transaction. When a transaction is started, the subsequent commands are not executed until exec() is called.

In synchronous mode, the commands return null. In asynchronous mode, the commands return a RedisFuture. exec() returns a TransactionResult which contains a list of responses.

Since the RedisFutures also receive their results, asynchronous API clients receive the transaction result in two places.

7. Batching

Under normal conditions, Lettuce executes commands as soon as they are called by an API client.

This is what most normal applications want, especially if they rely on receiving command results serially.

However, this behavior isn’t efficient if applications don’t need results immediately or if large amounts of data are being uploaded in bulk.

Asynchronous applications can override this behavior:

commands.setAutoFlushCommands(false);

List<RedisFuture<?>> futures = new ArrayList<>();
for (int i = 0; i < iterations; i++) {
    futures.add(commands.set("key-" + i, "value-" + i));
}
commands.flushCommands();

boolean result = LettuceFutures.awaitAll(5, TimeUnit.SECONDS,
  futures.toArray(new RedisFuture[0]));

With setAutoFlushCommands set to false, the application must call flushCommands manually. In this example, we queued multiple set commands and then flushed the channel. awaitAll waits for all of the RedisFutures to complete.

This state is set on a per-connection basis and affects all threads that use the connection. This feature isn't applicable to synchronous commands.

8. Publish/Subscribe

Redis offers a simple publish/subscribe messaging system. Subscribers consume messages from channels with the subscribe command. Messages aren’t persisted; they are only delivered to users when they are subscribed to a channel.

Redis uses the pub/sub system for notifications about the Redis dataset, giving clients the ability to receive events about keys being set, deleted, expired, etc.

See the documentation here for more details.
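As a brief sketch of what this might look like with Lettuce: assuming the server has keyspace notifications enabled (for example, notify-keyspace-events "Ex" for expired-key events), we can subscribe to the corresponding pattern:

StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();
RedisPubSubAsyncCommands<String, String> async = connection.async();

// __keyevent@0__:expired fires whenever a key in database 0 expires
async.psubscribe("__keyevent@0__:expired");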

8.1. Subscriber

RedisPubSubListener receives pub/sub messages. This interface defines several methods, but we’ll just show the method for receiving messages here:

public class Listener implements RedisPubSubListener<String, String> {

    @Override
    public void message(String channel, String message) {
        log.debug("Got {} on channel {}",  message, channel);
        message = new String(s2);
    }
}

We use the RedisClient to connect a pub/sub channel and install the listener:

StatefulRedisPubSubConnection<String, String> connection
 = client.connectPubSub();
connection.addListener(new Listener());

RedisPubSubAsyncCommands<String, String> async
 = connection.async();
async.subscribe("channel");

With a listener installed, we retrieve a set of RedisPubSubAsyncCommands and subscribe to a channel.

8.2. Publisher

Publishing is just a matter of connecting a Pub/Sub channel and retrieving the commands:

StatefulRedisPubSubConnection<String, String> connection 
  = client.connectPubSub();

RedisPubSubAsyncCommands<String, String> async 
  = connection.async();
async.publish("channel", "Hello, Redis!");

Publishing requires a channel and a message.

8.3. Reactive Subscriptions

Lettuce also offers a reactive interface for subscribing to pub/sub messages:

StatefulRedisPubSubConnection<String, String> connection = client
  .connectPubSub();

RedisPubSubReactiveCommands<String, String> reactive = connection
  .reactive();

reactive.observeChannels().subscribe(message -> 
  log.debug("Got {} on channel {}", message.getMessage(), message.getChannel()));
reactive.subscribe("channel").subscribe();

The Flux returned by observeChannels receives messages for all channels, but since this is a stream, filtering is easy to do.
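For instance, a sketch that keeps only the messages published to a single channel might look like this:

reactive.observeChannels()
  .filter(message -> "channel".equals(message.getChannel()))
  .subscribe(message -> log.debug("Got {} on channel {}",
    message.getMessage(), message.getChannel()));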

9. High Availability

Redis offers several options for high availability and scalability. A complete understanding requires knowledge of Redis server configurations, but we'll give a brief overview of how Lettuce supports them.

9.1. Master/Slave

Redis servers replicate themselves in a master/slave configuration. The master server sends the slave a stream of commands that replicate the master cache to the slave. Redis doesn’t support bi-directional replication, so slaves are read-only.

Lettuce can connect to Master/Slave systems, query them for the topology, and then select slaves for reading operations, which can improve throughput:

RedisClient redisClient = RedisClient.create();

StatefulRedisMasterSlaveConnection<String, String> connection
 = MasterSlave.connect(redisClient, 
   new Utf8StringCodec(), RedisURI.create("redis://localhost"));
 
connection.setReadFrom(ReadFrom.SLAVE);

9.2. Sentinel

Redis Sentinel monitors master and slave instances and orchestrates failovers to slaves in the event of a master failure.

Lettuce can connect to the Sentinel, use it to discover the address of the current master, and then return a connection to it.

To do this, we build a different RedisURI and connect our RedisClient with it:

RedisURI redisUri = RedisURI.Builder
  .sentinel("sentinelhost1", "clustername")
  .withSentinel("sentinelhost2").build();
RedisClient client = RedisClient.create(redisUri);

StatefulRedisConnection<String, String> connection = client.connect();

We built the URI with the hostname (or address) of the first Sentinel and a cluster name, followed by a second sentinel address. When we connect to the Sentinel, Lettuce queries it about the topology and returns a connection to the current master server for us.

The complete documentation is available here.

9.3. Clusters

Redis Cluster uses a distributed configuration to provide high-availability and high-throughput.

Clusters shard keys across up to 1000 nodes; therefore, transactions aren't available in a cluster:

RedisURI redisUri = RedisURI.Builder.redis("localhost")
  .withPassword("authentication").build();
RedisClusterClient clusterClient = RedisClusterClient
  .create(redisUri);
StatefulRedisClusterConnection<String, String> connection
 = clusterClient.connect();
RedisAdvancedClusterCommands<String, String> syncCommands = connection
  .sync();

RedisAdvancedClusterCommands holds the set of Redis commands supported by the cluster, routing them to the instance that holds the key.

A complete specification is available here.

10. Conclusion

In this tutorial, we looked at how to use Lettuce to connect and query a Redis server from within our application.

Lettuce supports the complete set of Redis features, with the bonus of a completely thread-safe asynchronous interface. It also makes extensive use of Java 8’s CompletionStage interface to give applications fine-grained control over how they receive data.

Code samples, as always, can be found over on GitHub.

Introduction to Javadoc


1. Overview

Good API documentation is one of the many factors contributing to the overall success of a software project.

Fortunately, all modern versions of the JDK provide the Javadoc tool – for generating API documentation from comments present in the source code.

Prerequisites:

  1. JDK 1.4 (JDK 7+ is recommended for the latest version of the Maven Javadoc plugin)
  2. The JDK /bin folder added to the PATH environment variable
  3. (Optional) an IDE with built-in tools

2. Javadoc Comments

Let’s start with comments.

The structure of a Javadoc comment looks very similar to a regular multi-line comment, but the key difference is the extra asterisk at the beginning:

// This is a single line comment

/*
 * This is a regular multi-line comment
 */

/**
 * This is a Javadoc
 */

Javadoc style comments may contain HTML tags as well.

2.1. Javadoc Format

Javadoc comments may be placed above any class, method, or field which we want to document.

These comments are commonly made up of two sections:

  1. The description of what we’re commenting on
  2. The standalone block tags (marked with the “@” symbol) which describe specific meta-data

We’ll be using some of the more common block tags in our example. For a complete list of block tags, visit the reference guide.

2.2. Javadoc at Class Level 

Let’s take a look at what a class-level Javadoc comment would look like:

/**
* Hero is the main entity we'll be using to . . .
* 
* Please see the {@link com.baeldung.javadoc.Person} class for true identity
* @author Captain America
* 
*/
public class SuperHero extends Person {
    // fields and methods
}

We have a short description and two different kinds of tags – standalone and inline:

  • Standalone tags appear after the description with the tag as the first word in a line, e.g., the @author tag
  • Inline tags may appear anywhere and are surrounded with curly brackets, e.g., the @link tag in the description

In our example, we can also see two kinds of block tags being used:

  • {@link} provides an inline link to a referenced part of our source code
  • @author the name of the author who added the class, method, or field that is commented

2.3. Javadoc at Field Level

We can also use a description without any block tags like this inside our SuperHero class:

/**
 * The public name of a hero that is common knowledge
 */
private String heroName;

Private fields won’t have Javadoc generated for them unless we explicitly pass the -private option to the Javadoc command.
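For example, to include private (and package-private) members in the output, we could run something like:

user@baeldung:~$ javadoc -private -d doc src/*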

2.4. Javadoc at Method Level

Methods can contain a variety of Javadoc block tags.

Let’s take a look at a method we’re using:

/**
 * <p>This is a simple description of the method. . .
 * <a href="http://www.supermanisthegreatest.com">Superman!</a>
 * </p>
 * @param incomingDamage the amount of incoming damage
 * @return the amount of health hero has after attack
 * @see <a href="http://www.link_to_jira/HERO-402">HERO-402</a>
 * @since 1.0
 */
public int successfullyAttacked(int incomingDamage) {
    // do things
    return 0;
}

The successfullyAttacked method contains both a description and numerous standalone block tags.

There are many block tags to help generate proper documentation, and we can include all sorts of different kinds of information. We can even utilize basic HTML tags in the comments.

Let’s go over the tags we encounter in the example above:

  • @param provides any useful description about a method’s parameter or input it should expect
  • @return provides a description of what a method will or can return
  • @see will generate a link similar to the {@link} tag, but more in the context of a reference and not inline
  • @since specifies which version the class, field, or method was added to the project
  • @version specifies the version of the software, commonly used with %I% and %G% macros
  • @throws is used to further explain the cases the software would expect an exception
  • @deprecated gives an explanation of why code was deprecated, when it may have been deprecated, and what the alternatives are

Although both sections are technically optional, we’ll need at least one for the Javadoc tool to generate anything meaningful.
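To illustrate a few of the tags we haven't shown yet, here's a hypothetical method sketch using @throws and @deprecated:

/**
 * Attacks the given villain.
 *
 * @param villain the villain to attack
 * @throws IllegalArgumentException if villain is null
 * @deprecated As of release 2.0, replaced by
 *             {@link #successfullyAttacked(int)}
 */
@Deprecated
public void attack(Person villain) {
    // do things
}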

3. Javadoc Generation

In order to generate our Javadoc pages, we’ll want to take a look at the command line tool that ships with the JDK, and the Maven plugin.

3.1. Javadoc Command Line Tool

The Javadoc command line tool is very powerful but has some complexity attached to it.

Running the command javadoc without any options or parameters will result in an error and output parameters it expects.

We’ll need to at least specify what package or class we want documentation to be generated for.

Let’s open a command line and navigate to the project directory.

Assuming the classes are all in the src folder in the project directory:

user@baeldung:~$ javadoc -d doc src/*

This will generate documentation in a directory called doc, as specified with the -d flag. If multiple packages or files exist, we'd need to provide all of them.

Utilizing an IDE with the built-in functionality is, of course, easier and generally recommended.


3.2. Javadoc With Maven Plugin

We can also make use of the Maven Javadoc plugin:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-javadoc-plugin</artifactId>
            <version>3.0.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <tags>
                ...
                </tags>
            </configuration>
        </plugin>
    </plugins>
</build>

In the base directory of the project, we run the command to generate our Javadocs to a directory in target/site:

user@baeldung:~$ mvn javadoc:javadoc

The Maven plugin is very powerful and facilitates complex document generation seamlessly.

Let’s now see what a generated Javadoc page looks like:

We can see a tree view of the classes our SuperHero class extends. We can see our description, fields, and method, and we can click on links for more information.

A detailed view of our method looks like this:

3.3. Custom Javadoc Tags

In addition to using predefined block tags to format our documentation, we can also create custom block tags.

In order to do so, we just need to include a -tag option to our Javadoc command line in the format of <tag-name>:<locations-allowed>:<header>.

In order to create a custom tag called @location allowed anywhere, which is displayed in the “Notable Locations” header in our generated document, we need to run:

user@baeldung:~$ javadoc -tag location:a:"Notable Locations:" -d doc src/*

In order to use this tag, we can add it to the block section of a Javadoc comment:

/**
 * This is an example...
 * @location New York
 * @return blah blah
 */

The Maven Javadoc plugin is flexible enough to also allow definitions of our custom tags in our pom.xml.

In order to set up the same tag above for our project, we can add the following to the <tags> section of our plugin:

...
<tags>
    <tag>
        <name>location</name>
        <placement>a</placement>
        <head>Notable Locations:</head>
    </tag> 
</tags>
...

This way allows us to specify the custom tag once, instead of specifying it every time.

4. Conclusion

This quick introduction tutorial covered how to write basic Javadocs and generate them with the Javadoc command line.

An easier way to generate the documentation would be to use any built-in IDE options or include the Maven plugin into our pom.xml file and run the appropriate commands.

The code samples, as always, can be found over on GitHub.


HTTP Requests with Kotlin and khttp


1. Introduction

The HTTP protocol and APIs built on it are of central importance in programming these days.

On the JVM we have several available options, from lower-level to very high-level libraries, from established projects to new kids on the block. However, most of them are targeted primarily at Java programs.

In this article, we’re going to look at khttp, an idiomatic Kotlin library for consuming HTTP-based resources and APIs.

2. Dependencies

In order to use the library in our project, first we have to add it to our dependencies:

<dependency>
    <groupId>khttp</groupId>
    <artifactId>khttp</artifactId>
    <version>0.1.0</version>
</dependency>

Since this is not yet on Maven Central, we also have to enable the JCenter repository:

<repository>
    <id>central</id>
    <url>http://jcenter.bintray.com</url>
</repository>

Version 0.1.0 is the current one at the time of writing. We can, of course, check JCenter for a newer one.

3. Basic Usage

The basics of the HTTP protocol are simple, even though the fine details can be quite complicated.  Therefore, khttp has a simple interface as well.

For every HTTP method, we can find a package-level function in the khttp package, such as get, post and so on.

The functions all take the same set of arguments and return a Response object; we’ll see the details of these in the following sections.

In the course of this article, we’ll use the fully qualified form, for example, khttp.put. In our projects we can, of course, import and possibly rename those methods:

import khttp.delete as httpDelete

Note: we’ve added type declarations for clarity throughout code examples because without an IDE they could be hard to follow.

4. A Simple Request

Every HTTP request has at least two required components: a method and a URL. In khttp, the method is determined by the function we invoke, as we’ve seen in the previous section.

The URL is the only required argument for the method; so, we can easily perform a simple request:

khttp.get("http://httpbin.org/get")

In the following sections, we’ll consider all requests to complete successfully.

4.1. Adding Parameters

We often have to provide query parameters in addition to the base URL, especially for GET requests.

khttp’s methods accept a params argument which is a Map of key-value pairs to include in the query String:

khttp.get(
  url = "http://httpbin.org/get",
  params = mapOf("key1" to "value1", "keyn" to "valuen"))

Notice that we’ve used the mapOf function to construct a Map on the fly; the resulting request URL will be:

http://httpbin.org/get?key1=value1&keyn=valuen

5. A Request Body

Another common operation we often need to perform is sending data, typically as the payload of a POST or PUT request.

For this, the library offers several options that we’re going to examine in the following sections.

5.1. Sending a JSON Payload

We can use the json argument to send a JSON object or array. It can be of several different types:

  • A JSONObject or JSONArray as provided by the org.json library
  • A Map, which is transformed into a JSON object
  • A Collection, Iterable or array, which is transformed to a JSON array

We can easily turn our earlier GET example into a POST one which will send a simple JSON object:

khttp.post(
  url = "http://httpbin.org/post",
  json = mapOf("key1" to "value1", "keyn" to "valuen"))

Note that the transformation from collections to JSON objects is shallow. For example, a List of Maps won't be converted to a JSON array of JSON objects, but rather to an array of strings.

For deep conversion, we’d need a more complex JSON mapping library such as Jackson. The conversion facility of the library is only meant for simple cases.

5.2. Sending Form Data (URL Encoded)

To send form data (URL encoded, as in HTML forms) we use the data argument with a Map:

khttp.post(
  url = "http://httpbin.org/post",
  data = mapOf("key1" to "value1", "keyn" to "valuen"))

5.3. Uploading Files (Multipart Form)

We can send one or more files encoded as a multipart form data request.

In that case, we use the files argument:

khttp.post(
  url = "http://httpbin.org/post",
  files = listOf(
    FileLike("file1", "content1"),
    FileLike("file2", File("kitty.jpg"))))

We can see that khttp uses a FileLike abstraction, which is an object with a name and a content. The content can be a string, a byte array, a File, or a Path.

5.4. Sending Raw Content

If none of the options above are suitable, we can use an InputStream to send raw data as the body of an HTTP request:

khttp.post(url = "http://httpbin.org/post", data = someInputStream)

In this case, we’ll most likely need to manually set some headers too, which we’ll cover in a later section.

6. Handling the Response

So far we’ve seen various ways of sending data to a server. But many HTTP operations are useful because of the data they return as well.

khttp is based on blocking I/O, therefore all functions corresponding to HTTP methods return a Response object containing the response received from the server.

This object has various properties that we can access, depending on the type of content.

6.1. JSON Responses

If we know the response to be a JSON object or array, we can use the jsonObject and jsonArray properties:

val response : Response = khttp.get("http://httpbin.org/get")
val obj : JSONObject = response.jsonObject
print(obj["someProperty"])

6.2. Text or Binary Responses

If we want to read the response as a String instead, we can use the text property:

val message : String = response.text

Or, if we want to read it as binary data (e.g. a file download) we use the content property:

val imageData : ByteArray = response.content

Finally, we can also access the underlying InputStream:

val inputStream : InputStream = response.raw

7. Advanced Usage

Let’s also take a look at a couple of more advanced usage patterns which are generally useful, and that we haven’t yet treated in the previous sections.

7.1. Handling Headers and Cookies

All khttp functions take a headers argument which is a Map of header names and values.

val response = khttp.get(
  url = "http://httpbin.org/get",
  headers = mapOf("header1" to "1", "header2" to "2"))

Similarly for cookies:

val response = khttp.get(
  url = "http://httpbin.org/get",
  cookies = mapOf("cookie1" to "1", "cookie2" to "2"))

We can also access headers and cookies sent by the server in the response:

val contentType : String = response.headers["Content-Type"]
val sessionID : String = response.cookies["JSESSIONID"]

7.2. Handling Errors

There are two types of errors that can arise in HTTP: error responses, such as 404 – Not Found, which are part of the protocol; and low-level errors, such as “connection refused”.

The first kind doesn’t result in khttp throwing exceptions; instead, we should check the Response statusCode property:

val response = khttp.get(url = "http://httpbin.org/nothing/to/see/here")
if(response.statusCode == 200) {
    process(response)
} else {
    handleError(response)
}

Lower-level errors, instead, result in exceptions being thrown from the underlying Java I/O subsystem, such as ConnectException.

7.3. Streaming Responses

Sometimes the server can respond with a big piece of content, and/or take a long time to respond. In those cases, we may want to process the response in chunks, rather than waiting for it to complete and take up memory.

If we want to instruct the library to give us a streaming response, then we have to pass true as the stream argument:

val response = khttp.get(url = "http://httpbin.org", stream = true)

Then, we can process it in chunks:

response.contentIterator(chunkSize = 1024).forEach { arr : ByteArray -> handleChunk(arr) }

7.4. Non-Standard Methods

In the unlikely case that we need to use an HTTP method (or verb) that khttp doesn’t provide natively – say, for some extension of the HTTP protocol, like WebDAV – we’re still covered.

In fact, all functions in the khttp package, which correspond to HTTP methods, are implemented using a generic request function that we can use too:

khttp.request(
  method = "COPY",
  url = "http://httpbin.org/get",
  headers = mapOf("Destination" to "/copy-of-get"))

7.5. Other Features

We haven’t touched all the features of khttp. For example, we haven’t discussed timeouts, redirects and history, or asynchronous operations.

The official documentation is the ultimate source of information about the library and all of its features.

8. Conclusion

In this tutorial, we’ve seen how to make HTTP requests in Kotlin with the idiomatic library khttp.

The implementation of all these examples can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Get Log Output in JSON


1. Introduction

Most Java logging libraries today offer different layout options for formatting logs – to accurately fit the needs of each project.

In this quick article, we want to format and output our log entries as JSON. We’ll see how to do this for the two most widely used logging libraries: Log4j2 and Logback.

Both use Jackson internally for representing logs in the JSON format.

For an introduction to these libraries take a look at our introduction to Java Logging article.

2. Log4j2

Log4j2 is the direct successor of the most popular logging library for Java, Log4J.

As it’s the new standard for Java projects, we’ll show how to configure it to output JSON.

2.1. Dependencies

First, we have to include the following dependencies in our pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.10.0</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.3</version>
    </dependency>
    
</dependencies>

The latest versions of the previous dependencies can be found on Maven Central: log4j-api, log4j-core, jackson-databind.

2.2. Configuration

Then, in our log4j2.xml file, we can create a new Appender that uses JsonLayout and a new Logger that uses this Appender:

<Appenders>
    <Console name="ConsoleJSONAppender" target="SYSTEM_OUT">
        <JsonLayout complete="false" compact="false">
            <KeyValuePair key="myCustomField" value="myCustomValue" />
        </JsonLayout>
    </Console>
</Appenders>

<Logger name="CONSOLE_JSON_APPENDER" level="TRACE" additivity="false">
    <AppenderRef ref="ConsoleJSONAppender" />
</Logger>

As we can see in the example config, it's possible to add our own values to the log using KeyValuePair, which even supports lookups into the log context.

Setting the compact parameter to false will increase the size of the output but will also make it more human-readable.

2.3. Using Log4j2

In our code, we can now instantiate our new JSON logger and make a new debug level trace:

Logger logger = LogManager.getLogger("CONSOLE_JSON_APPENDER");
logger.debug("Debug message");

The debug output message for the previous code would be:

{
  "timeMillis" : 1513290111664,
  "thread" : "main",
  "level" : "DEBUG",
  "loggerName" : "CONSOLE_JSON_APPENDER",
  "message" : "My debug message",
  "endOfBatch" : false,
  "loggerFqcn" : "org.apache.logging.log4j.spi.AbstractLogger",
  "threadId" : 1,
  "threadPriority" : 5,
  "myCustomField" : "myCustomValue"
}

3. Logback

Logback can be considered another successor of Log4J. It's written by the same developers and claims to be more efficient and faster than its predecessor.

So, let’s see how to configure it to get the output of the logs in JSON format.

3.1. Dependencies

Let's include the following dependencies in our pom.xml:

<dependencies>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.1.7</version>
    </dependency>

    <dependency>
        <groupId>ch.qos.logback.contrib</groupId>
        <artifactId>logback-json-classic</artifactId>
        <version>0.1.5</version>
    </dependency>

    <dependency>
        <groupId>ch.qos.logback.contrib</groupId>
        <artifactId>logback-jackson</artifactId>
        <version>0.1.5</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.3</version>
    </dependency>
</dependencies>

We can check here for the latest versions of these dependencies: logback-classic, logback-json-classic, logback-jackson, and jackson-databind.

3.2. Configuration

First, we create a new appender in our logback.xml that uses JsonLayout and JacksonJsonFormatter.

After that, we can create a new logger that uses this appender:

<appender name="json" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
        <jsonFormatter
            class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
            <prettyPrint>true</prettyPrint>
        </jsonFormatter>
        <timestampFormat>yyyy-MM-dd' 'HH:mm:ss.SSS</timestampFormat>
    </layout>
</appender>

<logger name="jsonLogger" level="TRACE">
    <appender-ref ref="json" />
</logger>

As we see, the parameter prettyPrint is enabled to obtain a human-readable JSON.

3.3. Using Logback

Let’s instantiate the logger in our code and log a debug message:

Logger logger = LoggerFactory.getLogger("jsonLogger");
logger.debug("Debug message");

With this – we’ll obtain the following output:

{
  "timestamp" : "2017-12-14 23:36:22.305",
  "level" : "DEBUG",
  "thread" : "main",
  "logger" : "jsonLogger",
  "message" : "Debug log message",
  "context" : "default"
}

4. Conclusion

We've seen here how we can easily configure Log4j2 and Logback to have a JSON output format. We've delegated all the complexity of the parsing to the logging library, so we don't need to change any existing logger calls.

As always the code for this article is available on GitHub here and here.

A Simple Tagging Implementation with Elasticsearch


1. Overview

Tagging is a common design pattern that allows us to categorize and filter items in our data model.

In this article, we’ll implement tagging using Spring and Elasticsearch. We’ll be using both Spring Data and the Elasticsearch API.

First of all, we aren’t going to cover the basics of getting Elasticsearch and Spring Data – you can explore these here.

2. Adding Tags

The simplest implementation of tagging is an array of strings.  We can implement this by adding a new field to our data model like this:

@Document(indexName = "blog", type = "article")
public class Article {

    // ...

    @Field(type = FieldType.String, index = FieldIndex.not_analyzed)
    private String[] tags;

    // ...
}

Notice the use of the not_analyzed flag on the index. We only want exact matches of our tags to filter a result. This allows us to use similar but separate tags like elasticsearchIsAwesome and elasticsearchIsTerrible.

Analyzed fields would return partial hits, which is the wrong behavior in this case.

3. Building Queries

Tags allow us to manipulate our queries in interesting ways. We can search across them like any other field, or we can use them to filter our results on match_all queries. We can also use them with other queries to tighten our results.

3.1. Searching Tags

The new tag field we created on our model is just like every other field in our index. We can search for any entity that has a specific tag like this:

@Query("{\"bool\": {\"must\": [{\"match\": {\"tags\": \"?0\"}}]}}")
Page<Article> findByTagUsingDeclaredQuery(String tag, Pageable pageable);

This example uses a Spring Data Repository to construct our query, but we can just as quickly use a Rest Template to query the Elasticsearch cluster manually.

Similarly, we can use the Elasticsearch API:

boolQuery().must(termQuery("tags", "elasticsearch"));

Assume we use the following documents in our index:

[
    {
        "id": 1,
        "title": "Spring Data Elasticsearch",
        "authors": [ { "name": "John Doe" }, { "name": "John Smith" } ],
        "tags": [ "elasticsearch", "spring data" ]
    },
    {
        "id": 2,
        "title": "Search engines",
        "authors": [ { "name": "John Doe" } ],
        "tags": [ "search engines", "tutorial" ]
    },
    {
        "id": 3,
        "title": "Second Article About Elasticsearch",
        "authors": [ { "name": "John Smith" } ],
        "tags": [ "elasticsearch", "spring data" ]
    },
    {
        "id": 4,
        "title": "Elasticsearch Tutorial",
        "authors": [ { "name": "John Doe" } ],
        "tags": [ "elasticsearch" ]
    }
]

Now we can use this query:

Page<Article> articleByTags = articleService.findByTagUsingDeclaredQuery("elasticsearch", new PageRequest(0, 10));

// articleByTags will contain 3 articles [ 1, 3, 4]
assertThat(articleByTags, containsInAnyOrder(
 hasProperty("id", is(1)),
 hasProperty("id", is(3)),
 hasProperty("id", is(4)))
);

3.2. Filtering All Documents

A common design pattern is to create a Filtered List View in the UI that shows all entities, but also allows the user to filter based on different criteria.

Let’s say we want to return all articles filtered by whatever tag the user selects:

@Query("{\"bool\": {\"must\": " +
  "{\"match_all\": {}}, \"filter\": {\"term\": {\"tags\": \"?0\" }}}}")
Page<Article> findByFilteredTagQuery(String tag, Pageable pageable);

Once again, we’re using Spring Data to construct our declared query.

Consequently, the query we’re using is split into two pieces. The scoring query is the first term, in this case, match_all. The filter query is next and tells Elasticsearch which results to discard.

Here is how we use this query:

Page<Article> articleByTags =
  articleService.findByFilteredTagQuery("elasticsearch", new PageRequest(0, 10));

// articleByTags will contain 3 articles [ 1, 3, 4]
assertThat(articleByTags, containsInAnyOrder(
  hasProperty("id", is(1)),
  hasProperty("id", is(3)),
  hasProperty("id", is(4)))
);

It is important to realize that although this returns the same results as our example above, this query will perform better.

3.3. Filtering Queries

Sometimes a search returns too many results to be usable. In that case, it’s nice to expose a filtering mechanism that can rerun the same search, just with the results narrowed down.

Here’s an example where we narrow down the articles an author has written, to just the ones with a specific tag:

@Query("{\"bool\": {\"must\": " + 
  "{\"match\": {\"authors.name\": \"?0\"}}, " +
  "\"filter\": {\"term\": {\"tags\": \"?1\" }}}}")
Page<Article> findByAuthorsNameAndFilteredTagQuery(
  String name, String tag, Pageable pageable);

Again, Spring Data is doing all the work for us.

Let’s also look at how to construct this query ourselves:

QueryBuilder builder = boolQuery().must(
  nestedQuery("authors", boolQuery().must(termQuery("authors.name", "doe"))))
  .filter(termQuery("tags", "elasticsearch"));

We can, of course, use this same technique to filter on any other field in the document. But tags lend themselves particularly well to this use case.

Here is how to use the above query:

SearchQuery searchQuery = new NativeSearchQueryBuilder().withQuery(builder)
  .build();
List<Article> articles = 
  elasticsearchTemplate.queryForList(searchQuery, Article.class);

// articles contains [ 1, 4 ]
assertThat(articles, containsInAnyOrder(
 hasProperty("id", is(1)),
 hasProperty("id", is(4)))
);

4. Filter Context

When we build a query, we need to differentiate between the Query Context and the Filter Context. Every query in Elasticsearch has a Query Context so we should be used to seeing them.

Not every query type supports the Filter Context. Therefore if we want to filter on tags, we need to know which query types we can use.

The bool query has two ways to access the Filter Context. The first parameter, filter, is the one we use above. We can also use a must_not parameter to activate the context.
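For instance, here's a sketch that excludes a tag via must_not, using the same Elasticsearch API as before; since must_not runs in the Filter Context, no scoring is applied to it:

// matches everything except documents tagged "tutorial"
QueryBuilder builder = boolQuery()
  .must(matchAllQuery())
  .mustNot(termQuery("tags", "tutorial"));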

The next query type we can filter is constant_score. This is useful when we want to replace the Query Context with the results of the Filter and assign each result the same score.
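A minimal sketch of the constant_score variant, assuming the same static imports from QueryBuilders as in the earlier examples:

// every document tagged "elasticsearch" gets the same fixed score
QueryBuilder builder
  = constantScoreQuery(termQuery("tags", "elasticsearch"));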

The final query type that we can filter based on tags is the filter aggregation. This allows us to create aggregation groups based on the results of our filter. In other words, we can group all articles by tag in our aggregation result.

5. Advanced Tagging

So far, we have only talked about tagging using the most basic implementation. The next logical step is to create tags that are themselves key-value pairs. This would allow us to get even fancier with our queries and filters.

For example, we could change our tag field into this:

@Field(type = FieldType.Nested, index = FieldIndex.not_analyzed)
private List<Tag> tags;

Then we’d just change our filters to use nestedQuery types.
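As a sketch, assuming a hypothetical Tag class with name and value properties, such a filter could look like this:

// tags.name and tags.value are assumptions based on
// our hypothetical Tag class
QueryBuilder builder = boolQuery()
  .must(matchAllQuery())
  .filter(nestedQuery("tags",
    boolQuery()
      .must(termQuery("tags.name", "language"))
      .must(termQuery("tags.value", "java"))));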

Once we understand how to use key-value pairs it is a small step to using complex objects as our tag. Not many implementations will need a full object as a tag, but it’s good to know we have this option should we require it.

6. Conclusion

In this article, we’ve covered the basics of implementing tagging using Elasticsearch.

As always, examples can be found over on GitHub.

Compiling Java *.class Files with javac


1. Overview

This tutorial will introduce the javac tool and describe how to use it to compile Java source files into class files.

We’ll get started with a short description of the javac command, then examine the tool in more depth by looking at its various options.

2. The javac Command

We can specify options and source files when executing the javac tool:

javac [options] [source-files]

Where [options] denotes the options controlling operations of the tool, and [source-files] indicates one or more source files to be compiled.

All options are indeed entirely optional. Source files can be directly specified as arguments to the javac command or kept in a referenced argument file as described later. Notice that source files should be arranged in a directory hierarchy corresponding to the fully qualified names of the types they contain.

Options of javac are categorized into three groups: standard, cross-compilation, and extra. In this article, we’ll focus on the standard and extra options.

The cross-compilation options are used for the less common use case of compiling type definitions against a JVM implementation different from the compiler’s environment and won’t be addressed.

3. Type Definition

Let’s start by introducing the class we’re going to use to demonstrate the javac options:

public class Data {
    List<String> textList = new ArrayList();

    public void addText(String text) {
        textList.add(text);
    }

    public List getTextList() {
        return this.textList;
    }
}

The source code is placed in the file com/baeldung/javac/Data.java.

Note that we use *nix file separators in this article; on Windows machines, we must use the backslash (‘\’) instead of the forward slash (‘/’).

4. Standard Options

One of the most commonly used standard options of the javac command is -d, specifying the destination directory for generated class files. If a type isn’t part of the default package, a directory structure reflecting the package’s name is created to keep the class file of that type.

Let’s execute the following command in the directory containing the structure provided in the previous section:

javac -d javac-target com/baeldung/javac/Data.java

The javac compiler will generate the class file javac-target/com/baeldung/javac/Data.class. Note that on some systems, javac doesn’t automatically create the target directory, which is javac-target in this case. Therefore, we may need to do so manually.

Here are a couple of other frequently used options:

  • -cp (or -classpath, --class-path) – specifies where types required to compile our source files can be found. If this option is missing and the CLASSPATH environment variable isn’t set, the current working directory is used instead (as was the case in the example above); see the sketch after this list.
  • -p (or --module-path) – indicates the location of necessary application modules. This option is only applicable to Java 9 and above – please refer to this tutorial for a guide to the Java 9 module system.
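
As a sketch of the -cp option, compiling against an external jar might look like this (the jar name is just a placeholder):

user@baeldung:~$ javac -cp lib/some-dependency.jar -d javac-target com/baeldung/javac/Data.java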

If we want to know what’s going on during a compilation process, e.g. which classes are loaded and which are compiled, we can apply the -verbose option.

The last standard option we’ll cover is the argument file. Instead of passing arguments directly to the javac tool, we can store them in argument files. The names of those files, prefixed with the ‘@’ character, are then used as command arguments.

When the javac command encounters an argument starting with ‘@’, it interprets the following characters as the path to a file and expands the file’s content into an argument list. Spaces and newline characters can be used to separate arguments included in such an argument file.

Let’s assume we have two files, named options and types, in the javac-args directory with the following content:

The options file:

-d javac-target
-verbose

The types file:

com/baeldung/javac/Data.java

We can compile the Data type like before with detail messages printed on the console by executing this command:

javac @javac-args/options @javac-args/types

Rather than keeping arguments in separate files, we can also store them all in a single file.

Suppose there is a file named arguments in the javac-args directory:

-d javac-target -verbose
com/baeldung/javac/Data.java

Let’s feed this file to javac to achieve the same result as with the two separate files before:

javac @javac-args/arguments

Notice the options we’ve gone through in this section are the most common ones only. For a complete list of standard javac options, check out this reference.

5. Extra Options

Extra options of javac are non-standard options, which are specific to the current compiler implementation and may be changed in the future. As such, we won’t go over these options in detail.

However, there is an option that’s very useful and worth mentioning, -Xlint. For a full description of the other javac extra options, follow this link.

The -Xlint option allows us to enable warnings during compilation. There are two ways to specify this option on the command line:

  • -Xlint – triggers all recommended warnings
  • -Xlint:key[,key]* – enables specific warnings

Here are some of the handiest -Xlint keys:

  • rawtypes – warns about the use of raw types
  • unchecked –  warns about unchecked operations
  • static – warns about the access to a static member from an instance member
  • cast – warns about unnecessary casts
  • serial – warns about serializable classes not having a serialVersionUID
  • fallthrough – warns about falling through in a switch statement

Now, create a file named xlint-ops in the javac-args directory with the following content:

-d javac-target
-Xlint:rawtypes,unchecked
com/baeldung/javac/Data.java

When running this command:

javac @javac-args/xlint-ops

we should see the rawtypes and unchecked warnings:

com/baeldung/javac/Data.java:7: warning: [rawtypes] found raw type: ArrayList
    List<String> textList = new ArrayList();
                                ^
  missing type arguments for generic class ArrayList<E>
  where E is a type-variable:
    E extends Object declared in class ArrayList
com/baeldung/javac/Data.java:7: warning: [unchecked] unchecked conversion
    List<String> textList = new ArrayList();
                            ^
  required: List<String>
  found:    ArrayList
...

6. Conclusion

This tutorial walked through the javac tool, showing how to use options to manage the typical compilation process.

In reality, we usually compile a program using an IDE or a build tool rather than directly relying on javac. However, a solid understanding of this tool will allow us to customize the compilation in advanced use cases.

As always, the source code for this tutorial can be found over on GitHub.

Custom Assertions with AssertJ


1. Overview

In this tutorial, we’ll walk through creating custom AssertJ assertions; the AssertJ’s basics can be found here.

Simply put, custom assertions allow creating assertions specific to our own classes, allowing our tests to better reflect the domain model.

2. Class Under Test

Test cases in this tutorial will be built around the Person class:

public class Person {
    private String fullName;
    private int age;
    private List<String> nicknames;

    public Person(String fullName, int age) {
        this.fullName = fullName;
        this.age = age;
        this.nicknames = new ArrayList<>();
    }

    public void addNickname(String nickname) {
        nicknames.add(nickname);
    }

    // getters
}

3. Custom Assertion Class

Writing a custom AssertJ assertion class is pretty simple. All we need to do is to declare a class that extends AbstractAssert, add a required constructor, and provide custom assertion methods.

The assertion class must extend the AbstractAssert class to give us access to essential assertion methods of the API, such as isNotNull and isEqualTo.

Here’s the skeleton of a custom assertion class for Person:

public class PersonAssert extends AbstractAssert<PersonAssert, Person> {

    public PersonAssert(Person actual) {
        super(actual, PersonAssert.class);
    }

    // assertion methods described later
}

We must specify two type arguments when extending the AbstractAssert class: the first is the custom assertion class itself, which is required for method chaining, and the second is the class under test.

To provide an entry point to our assertion class, we can define a static method that can be used to start an assertion chain:

public static PersonAssert assertThat(Person actual) {
    return new PersonAssert(actual);
}

Next, we’ll go over several custom assertions included in the PersonAssert class.

The first method verifies that the full name of a Person matches a String argument:

public PersonAssert hasFullName(String fullName) {
    isNotNull();
    if (!actual.getFullName().equals(fullName)) {
        failWithMessage("Expected person to have full name %s but was %s", 
          fullName, actual.getFullName());
    }
    return this;
}

The following method tests if a Person is an adult based on its age:

public PersonAssert isAdult() {
    isNotNull();
    if (actual.getAge() < 18) {
        failWithMessage("Expected person to be adult");
    }
    return this;
}

The last checks for the existence of a nickname:

public PersonAssert hasNickname(String nickname) {
    isNotNull();
    if (!actual.getNicknames().contains(nickname)) {
        failWithMessage("Expected person to have nickname %s", 
          nickname);
    }
    return this;
}

When we have more than one custom assertion class, we can wrap all the assertThat methods in a single class, providing a static factory method for each of the assertion classes:

public class Assertions {
    public static PersonAssert assertThat(Person actual) {
        return new PersonAssert(actual);
    }

    // static factory methods of other assertion classes
}

The Assertions class shown above is a convenient entry point to all custom assertion classes.

Static methods of this class have the same name and are differentiated from each other by their parameter type.

4. In Action

The following test cases will illustrate the custom assertion methods we created in the previous section. Notice that the assertThat method is imported from our custom Assertions class, not the core AssertJ API.

Here’s how the hasFullName method can be used:

@Test
public void whenPersonNameMatches_thenCorrect() {
    Person person = new Person("John Doe", 20);
    assertThat(person)
      .hasFullName("John Doe");
}

This is a negative test case illustrating the isAdult method:

@Test
public void whenPersonAgeLessThanEighteen_thenNotAdult() {
    Person person = new Person("Jane Roe", 16);

    // assertion fails
    assertThat(person).isAdult();
}

and another test demonstrating the hasNickname method:

@Test
public void whenPersonDoesNotHaveAMatchingNickname_thenIncorrect() {
    Person person = new Person("John Doe", 20);
    person.addNickname("Nick");

    // assertion will fail
    assertThat(person)
      .hasNickname("John");
}

5. Assertions Generator

Writing custom assertion classes corresponding to the object model paves the way for very readable test cases.

However, if we have a lot of classes, it would be painful to manually create custom assertion classes for all of them. This is where the AssertJ assertions generator comes into play.

To use the assertions generator with Maven, we need to add a plugin to the pom.xml file:

<plugin>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-assertions-generator-maven-plugin</artifactId>
    <version>2.1.0</version>
    <configuration>
        <classes>
            <param>com.baeldung.testing.assertj.custom.Person</param>
        </classes>
    </configuration>
</plugin>

The latest version of the assertj-assertions-generator-maven-plugin can be found here.

The classes element in the above plugin marks classes for which we want to generate assertions. Please see this post for other configurations of the plugin.

The AssertJ assertions generator creates assertions for each public property of the target class. The specific name of each assertion method depends on the field’s or property’s type. For a complete description of the assertions generator, check out this reference.
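For our Person class, the generated assertions typically allow chaining property checks like this; the exact method names depend on the generator's naming scheme, so treat this as an illustrative sketch:

// hasFullName and hasAge follow the generator's
// property-based naming convention
assertThat(person)
  .hasFullName("John Doe")
  .hasAge(20);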

Execute the following Maven command in the project base directory:

mvn assertj:generate-assertions

We should see assertion classes generated in the folder target/generated-test-sources/assertj-assertions. For example, the generated entry point class for the generated assertions looks like this:

// generated comments are stripped off for brevity

package com.baeldung.testing.assertj.custom;

@javax.annotation.Generated(value="assertj-assertions-generator")
public class Assertions {

    @org.assertj.core.util.CheckReturnValue
    public static com.baeldung.testing.assertj.custom.PersonAssert
      assertThat(com.baeldung.testing.assertj.custom.Person actual) {
        return new com.baeldung.testing.assertj.custom.PersonAssert(actual);
    }

    protected Assertions() {
        // empty
    }
}

Now, we can copy the generated source files to the test directory, then add custom assertion methods to satisfy our testing requirements.

One important thing to notice is that the generated code isn’t guaranteed to be entirely correct. At this point, the generator isn’t a finished product, and the community is working on it.

Hence, we should use the generator as a supporting tool to make our life easier instead of taking it for granted.

6. Conclusion

In this tutorial, we’ve shown how to create custom assertions for creating readable test code with the AssertJ library, both manually and automatically.

If we have just a small number of classes under test, the manual solution is enough; otherwise, the generator should be used.

And, as always, the implementation of all the examples and code snippets can be found over on GitHub.

wait and notify() Methods in Java


1. Introduction

In this article, we’ll look at one of the most fundamental mechanisms in Java – thread synchronization.

We’ll first discuss some essential concurrency-related terms and methodologies.

And we’ll develop a simple application – where we’ll deal with concurrency issues, with the goal of better understanding wait() and notify().

2. Thread Synchronization in Java

In a multithreaded environment, multiple threads might try to modify the same resource. If threads aren’t managed properly, this will, of course, lead to consistency issues.

2.1. Guarded Blocks in Java

One tool we can use to coordinate the actions of multiple threads in Java is guarded blocks. Such blocks keep checking a particular condition before resuming the execution.

With that in mind, we’ll make use of the wait(), notify() and notifyAll() methods.

This can be better understood from the following diagram that depicts the lifecycle of a Thread:

Please note that there are many ways of controlling this lifecycle; however, in this article, we’re going to focus only on wait() and notify().

3. The wait() Method

Simply put, when we call wait() – this forces the current thread to wait until some other thread invokes notify() or notifyAll() on the same object.

For this, the current thread must own the object’s monitor. According to Javadocs, this can happen when:

  • we’ve executed a synchronized instance method of the given object
  • we’ve executed the body of a synchronized block on the given object
  • we’ve executed a synchronized static method for objects of type Class

Note that only one active thread can own an object’s monitor at a time.
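As a minimal sketch, acquiring a monitor via a synchronized block before calling wait() might look like this; the lock and condition names are illustrative:

private final Object lock = new Object();
private volatile boolean condition;

void awaitCondition() throws InterruptedException {
    synchronized (lock) {
        // we own lock's monitor here, so wait() is legal; calling it
        // without the monitor throws IllegalMonitorStateException
        while (!condition) {
            lock.wait();
        }
    }
}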

This wait() method comes with three overloaded signatures. Let’s have a look at these.

3.1. wait()

The wait() method causes the current thread to wait indefinitely until another thread either invokes notify() for this object or notifyAll().

3.2. wait(long timeout)

Using this method, we can specify a timeout after which the thread will be woken up automatically. A thread can be woken up before reaching the timeout using notify() or notifyAll().

Note that calling wait(0) is the same as calling wait().

3.3. wait(long timeout, int nanos)

This is yet another signature providing the same functionality, with the only difference being that we can provide higher precision.

The total timeout period (in nanoseconds) is calculated as 1_000_000 * timeout + nanos.

4. notify() and notifyAll()

The notify() method is used for waking up threads that are waiting for access to this object’s monitor.

There are two ways of notifying waiting threads.

4.1. notify()

For all threads waiting on this object’s monitor (by using any one of the wait() methods), notify() wakes up any one of them arbitrarily. The choice of exactly which thread to wake is non-deterministic and depends upon the implementation.

Since notify() wakes up a single random thread, it can be used to implement mutually exclusive locking where threads are doing similar tasks; but in most cases, it would be more viable to implement notifyAll().

4.2. notifyAll()

This method simply wakes all threads that are waiting on this object’s monitor.

The awakened threads will complete in the usual manner – like any other thread.

But before we allow their execution to continue, always define a quick check for the condition required to proceed with the thread – because there may be some situations where the thread got woken up without receiving a notification (this scenario is discussed later in an example).

5. Sender-Receiver Synchronization Problem

Now that we understand the basics, let’s go through a simple SenderReceiver application – that will make use of the wait() and notify() methods to set up synchronization between them:

  • The Sender is supposed to send a data packet to the Receiver
  • The Receiver cannot process the data packet until the Sender is finished sending it
  • Similarly, the Sender mustn’t attempt to send another packet unless the Receiver has already processed the previous packet

Let’s first create Data class that consists of the data packet that will be sent from Sender to Receiver. We’ll use wait() and notifyAll() to set up synchronization between them:

public class Data {
    private String packet;
    
    // True if receiver should wait
    // False if sender should wait
    private boolean transfer = true;
 
    public synchronized void send(String packet) {
        while (!transfer) {
            try { 
                wait();
            } catch (InterruptedException e)  {
                Thread.currentThread().interrupt(); 
                Log.error("Thread interrupted", e); 
            }
        }
        transfer = false;
        
        this.packet = packet;
        notifyAll();
    }
 
    public synchronized String receive() {
        while (transfer) {
            try {
                wait();
            } catch (InterruptedException e)  {
                Thread.currentThread().interrupt(); 
                Log.error("Thread interrupted", e); 
            }
        }
        transfer = true;

        notifyAll();
        return packet;
    }
}

Let’s break down what’s going on here:

  • The packet variable denotes the data that is being transferred over the network
  • We have a boolean variable transfer – which the Sender and Receiver will use for synchronization:
    • If this variable is true, then the Receiver should wait for Sender to send the message
    • If it’s false, then Sender should wait for Receiver to receive the message
  • The Sender uses send() method to send data to the Receiver:
    • If transfer is false, we’ll wait by calling wait() on this thread
    • But when it is true, we toggle the status, set our message and call notifyAll() to wake up other threads to specify that a significant event has occurred and they can check if they can continue execution
  • Similarly, the Receiver will use receive() method:
    • Only if transfer was set to false by the Sender will it proceed; otherwise, we’ll call wait() on this thread
    • When the condition is met, we toggle the status, notify all waiting threads to wake up, and return the data packet that was received

5.1. Why Enclose wait() in a while Loop?

Since notify() and notifyAll() randomly wake up threads that are waiting on this object’s monitor, there’s no guarantee that the condition is actually met when a thread resumes. Sometimes the thread is woken up even though the condition isn’t satisfied yet.

We can also define a check to save us from spurious wakeups – where a thread can wake up from waiting without ever having received a notification.

5.2. Why Do We Need to Synchronize send() and receive() Methods?

We placed these methods inside synchronized methods to provide intrinsic locks. If a thread calling a wait() method does not own the intrinsic lock, an IllegalMonitorStateException will be thrown.

We’ll now create Sender and Receiver and implement the Runnable interface on both so that their instances can be executed by a thread.

Let’s first see how Sender will work:

public class Sender implements Runnable {
    private Data data;
 
    // standard constructors
 
    public void run() {
        String[] packets = {
          "First packet",
          "Second packet",
          "Third packet",
          "Fourth packet",
          "End"
        };
 
        for (String packet : packets) {
            data.send(packet);

            // Thread.sleep() to mimic heavy server-side processing
            try {
                Thread.sleep(ThreadLocalRandom.current().nextInt(1000, 5000));
            } catch (InterruptedException e)  {
                Thread.currentThread().interrupt(); 
                Log.error("Thread interrupted", e); 
            }
        }
    }
}

For this Sender:

  • We’re creating a few sample data packets that will be sent across the network in the packets[] array
  • For each packet, we’re merely calling send()
  • Then we’re calling Thread.sleep() with a random interval to mimic heavy server-side processing

Finally, let’s implement our Receiver:

public class Receiver implements Runnable {
    private Data load;
 
    // standard constructors
 
    public void run() {
        for(String receivedMessage = load.receive();
          !"End".equals(receivedMessage);
          receivedMessage = load.receive()) {
            
            System.out.println(receivedMessage);

            // ...
            try {
                Thread.sleep(ThreadLocalRandom.current().nextInt(1000, 5000));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); 
                Log.error("Thread interrupted", e); 
            }
        }
    }
}

Here, we’re simply calling load.receive() in the loop until we get the last “End” data packet.

Let’s now see this application in action:

public static void main(String[] args) {
    Data data = new Data();
    Thread sender = new Thread(new Sender(data));
    Thread receiver = new Thread(new Receiver(data));
    
    sender.start();
    receiver.start();
}

We’ll receive the following output:

First packet
Second packet
Third packet
Fourth packet

And here we are – we’ve received all the data packets in the correct, sequential order and successfully established communication between our sender and receiver.

6. Conclusion

In this article, we discussed some core synchronization concepts in Java; more specifically, we focused on how we can use wait() and notify() to solve interesting synchronization problems. And finally, we went through a code sample where we applied these concepts in practice.

Before we wind down here, it’s worth mentioning that all these low-level APIs, such as wait(), notify() and notifyAll(), are traditional methods that work well, but higher-level mechanisms are often simpler and better – such as Java’s native Lock and Condition interfaces (available in the java.util.concurrent.locks package).

For more information on the java.util.concurrent package, visit our java.util.concurrent overview article; Lock and Condition are covered in the guide to java.util.concurrent.Locks, here.

As always, the complete code snippets used in this article are available over on GitHub.

Java Weekly, Issue 215


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> Code First Java 9 Tutorial [blog.codefx.org]

Java 9 updates condensed into a single practical guide – super useful.

>> Reactive emoji tracker with WebClient and Reactor: consuming SSE [nurkiewicz.com]

>> Reactive emoji tracker with WebClient and Reactor: aggregating data [nurkiewicz.com]

A very interesting series showcasing how powerful reactive implementations can be.

>> EE4J: An Update [blogs.oracle.com]

A quick overview of the transfer and rebranding process of Java EE inside the Eclipse Foundation – if you want to keep track of what’s going on there.

>> Effective debugging with breakpoints [advancedweb.hu]

Back to debugging basics – certainly one of the more powerful skills you can build as a Java developer.

>> Java Magazine: Reactive Programming [blogs.oracle.com]

The reactive paradigm is finding its stride, no doubt about it.

Also worth reading:

Lots of fantastic presentations this week:

And a few solid releases:

2. Technical and Musings

>> How Long is Long Enough? Minimum Password Lengths by the World’s Top Sites [troyhunt.com]

A quick look at the minimum password rules out there, in the wild. Quite interesting.

>> Positioning Strategy for the Aspiring Consultant [daedtech.com]

Doing consulting well is a long and complex journey. Speaking from my own experience – it’s well worth it.

Also worth reading:

3. Pick of the Week

This week, I’m picking Datadog, the first sponsor I accepted for the Java Weekly newsletter (ever).

I soft-launched the sponsorships six months ago and refused a handful of companies up until this point – for various reasons (mainly because I wasn’t convinced by their products).

I hadn’t tried Datadog before, but I’ve used a lot of other APM solutions out there, so I knew what to expect. I’ve been playing with their system for a week now and I’m more than happy to have them as the first official sponsor.

It’s a solid, super mature solution, it’s actually useful from the very start, without me having to spend a full day setting it up, and it doesn’t cost an arm and a leg.

So, they’re my pick for this week.


Exploring the New HTTP Client in Java 9


1. Introduction

In this tutorial, we’ll explore Java 9’s new incubating HttpClient.

Until very recently, Java provided only the HttpURLConnection API – which is low-level and isn’t known for being feature-rich and user-friendly.

Therefore, third-party libraries – such as Apache HttpClient, Jetty, and Spring’s RestTemplate – were commonly used instead.

2. Initial Setup

The HTTP Client module is bundled as an incubator module in JDK 9; it supports HTTP/2 while remaining backward compatible with HTTP/1.1.

To use it, we need to define our module with a module-info.java file, which also declares the module required to run our application:

module com.baeldung.java9.httpclient {   
  requires jdk.incubator.httpclient;
}

3. HTTP Client API Overview

Unlike HttpURLConnection, HTTP Client provides synchronous and asynchronous request mechanisms.

The API consists of 3 core classes:

  • HttpRequest – represents the request to be sent via the HttpClient
  • HttpClient – behaves as a container for configuration information common to multiple requests
  • HttpResponse – represents the result of an HttpRequest call

We’ll examine each of them in more detail in the following sections. First, let’s focus on the request.

4. HttpRequest

HttpRequest, as the name suggests, is an object which represents the request we want to send. New instances can be created using HttpRequest.Builder.

We can get it by calling HttpRequest.newBuilder(). The Builder class provides a bunch of methods which we can use to configure our request.

We’ll cover the most important ones.

4.1. Setting URI

The first thing we have to do when creating a request is to provide the URL.

We can do that in two ways – by using the constructor for Builder with a URI parameter or by calling the uri(URI) method on the Builder instance:

HttpRequest.newBuilder(new URI("https://postman-echo.com/get"))
 
HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/get"))

The last thing we have to configure to create a basic request is an HTTP method.

4.2. Specifying the HTTP Method

We can define the HTTP method our request will use by calling one of the methods from Builder:

  • GET()
  • POST(BodyProcessor body)
  • PUT(BodyProcessor body)
  • DELETE(BodyProcessor body)

We’ll cover BodyProcessor in detail, later. Now, let’s just create a very simple GET request example:

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/get"))
  .GET()
  .build();

This request has all the parameters required by HttpClient. However, sometimes we need to add additional parameters to our request; here are some important ones:

  • the version of the HTTP protocol
  • headers
  • a timeout

4.3. Setting HTTP Protocol Version

The API fully leverages the HTTP/2 protocol and uses it by default, but we can define which version of the protocol we want to use:

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/get"))
  .version(HttpClient.Version.HTTP_2)
  .GET()
  .build();

It’s important to mention that the client will fall back to, e.g., HTTP/1.1 if HTTP/2 isn’t supported.

4.4. Setting Headers

In case we want to add additional headers to our request, we can use the provided builder methods.

We can do that in one of two ways:

  • passing all headers as key-value pairs to the headers() method, or by
  • using the header() method for a single key-value header:

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/get"))
  .headers("key1", "value1", "key2", "value2")
  .GET()
  .build();

HttpRequest request2 = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/get"))
  .header("key1", "value1")
  .header("key2", "value2")
  .GET()
  .build();

The last useful method we can use to customize our request is timeout().

4.5. Setting a Timeout

Let’s now define the amount of time we want to wait for a response.

If the set time expires, an HttpTimeoutException will be thrown; the default timeout is set to infinity.

The timeout can be set with a Duration object – by calling the timeout() method on the builder instance:

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/get"))
  .timeout(Duration.of(10, SECONDS))
  .GET()
  .build();

5. Setting a Request Body

We can add a body to a request by using the request builder methods: POST(BodyProcessor body), PUT(BodyProcessor body) and DELETE(BodyProcessor body).

The new API provides a number of BodyProcessor implementations out-of-the-box which simplify passing the request body:

  • StringProcessor (reads body from a String, created with HttpRequest.BodyProcessor.fromString)
  • InputStreamProcessor (reads body from an InputStream, created with HttpRequest.BodyProcessor.fromInputStream)
  • ByteArrayProcessor (reads body from a byte array, created with HttpRequest.BodyProcessor.fromByteArray)
  • FileProcessor (reads body from a file at given path, created with HttpRequest.BodyProcessor.fromFile)

In case we don’t need a body, we can simply pass in HttpRequest.noBody():

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/post"))
  .POST(HttpRequest.noBody())
  .build();

5.1. StringBodyProcessor

Setting a request body with any BodyProcessor implementation is very simple and intuitive.

For example, if we want to pass a simple String as a body, we can use the StringProcessor.

As we already mentioned, this object can be created with the factory method fromString(); it takes just a String object as an argument and creates a body from it:

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/post"))
  .headers("Content-Type", "text/plain;charset=UTF-8")
  .POST(HttpRequest.BodyProcessor.fromString("Sample request body"))
  .build();

5.2. InputStreamBodyProcessor

To create a body from an InputStream, the stream has to be passed as a Supplier (to make its creation lazy), so it’s a little different from the StringProcessor described above.

However, this is also quite straightforward:

byte[] sampleData = "Sample request body".getBytes();
HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/post"))
  .headers("Content-Type", "text/plain;charset=UTF-8")
  .POST(HttpRequest.BodyProcessor
   .fromInputStream(() -> new ByteArrayInputStream(sampleData)))
  .build();

Notice how we used a simple ByteArrayInputStream here; that can, of course, be any InputStream implementation.

5.3. ByteArrayProcessor

We can also use ByteArrayProcessor and pass an array of bytes as the parameter:

byte[] sampleData = "Sample request body".getBytes();
HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/post"))
  .headers("Content-Type", "text/plain;charset=UTF-8")
  .POST(HttpRequest.BodyProcessor.fromByteArray(sampleData))
  .build();

5.4. FileProcessor

To work with a File, we can make use of the provided FileProcessor; its factory method takes a path to the file as a parameter and creates a body from the content:

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/post"))
  .headers("Content-Type", "text/plain;charset=UTF-8")
  .POST(HttpRequest.BodyProcessor.fromFile(
    Paths.get("src/test/resources/sample.txt")))
  .build();

We’ve covered how to create an HttpRequest and how to set additional parameters on it.

Now it’s time to take a deeper look at the HttpClient class, which is responsible for sending requests and receiving responses.

6. HttpClient

All requests are sent using HttpClient which can be instantiated using the HttpClient.newBuilder() method or by calling HttpClient.newHttpClient().

It provides a lot of useful and self-describing methods we can use to handle our request/response.

Let’s cover some of these here.

6.1. Setting a Proxy

We can define a proxy for the connection by merely calling the proxy() method on a Builder instance:

HttpResponse<String> response = HttpClient
  .newBuilder()
  .proxy(ProxySelector.getDefault())
  .build()
  .send(request, HttpResponse.BodyHandler.asString());

In our example, we used the default system proxy.

6.2. Setting the Redirect Policy

Sometimes the page we want to access has moved to a different address.

In that case, we’ll receive an HTTP status code 3xx, usually with information about the new URI. HttpClient can redirect the request to the new URI automatically if we set the appropriate redirect policy.

We can do it with the followRedirects() method on Builder:

HttpResponse<String> response = HttpClient.newBuilder()
  .followRedirects(HttpClient.Redirect.ALWAYS)
  .build()
  .send(request, HttpResponse.BodyHandler.asString());

All policies are defined and described in the HttpClient.Redirect enum.

6.3. Setting Authenticator for a Connection

An Authenticator is an object which negotiates credentials (HTTP authentication) for a connection.

It provides different authentication schemes (e.g., basic or digest authentication). In most cases, authentication requires a username and password to connect to a server.

We can use the PasswordAuthentication class, which is just a holder of these values:

HttpResponse<String> response = HttpClient.newBuilder()
  .authenticator(new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
      return new PasswordAuthentication(
        "username", 
        "password".toCharArray());
    }
}).build()
  .send(request, HttpResponse.BodyHandler.asString());

In the example above, we passed the username and password values as plaintext; of course, in a production scenario, this will have to be different.

Note that not every request should use the same username and password. The Authenticator class provides a number of getXXX methods (e.g., getRequestingSite()) that can be used to find out which values should be provided.
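For instance, here’s an illustrative sketch that picks credentials based on the requesting host; the host and credential values are made up for the example:

Authenticator authenticator = new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        // getRequestingHost() tells us which server is asking
        if ("postman-echo.com".equals(getRequestingHost())) {
            return new PasswordAuthentication(
              "echoUser", "echoPassword".toCharArray());
        }
        return new PasswordAuthentication(
          "defaultUser", "defaultPassword".toCharArray());
    }
};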

Now we’re going to explore one of the most useful features of the new HttpClient – asynchronous calls to the server.

6.4. Send Requests – Sync vs. Async

New HttpClient provides two possibilities for sending a request to a server:

  • send(…) – synchronously (blocks until the response comes)
  • sendAsync(…) – asynchronously (doesn’t wait for response, non-blocking)

As we’ve seen so far, the send(…) method naturally waits for a response:

HttpResponse<String> response = HttpClient.newBuilder()
  .build()
  .send(request, HttpResponse.BodyHandler.asString());

This call returns an HttpResponse object, and we’re sure that the next instruction from our application flow will be executed only when the response is already here.

However, this approach has drawbacks, especially when we’re processing large amounts of data.

So now we can use the sendAsync(…) method – which returns a CompletableFuture<HttpResponse> – to process a request asynchronously:

CompletableFuture<HttpResponse<String>> response = HttpClient.newBuilder()
  .build()
  .sendAsync(request, HttpResponse.BodyHandler.asString());

The new API can also deal with multiple responses, and stream the request and response bodies:

List<URI> targets = Arrays.asList(
  new URI("https://postman-echo.com/get?foo1=bar1"),
  new URI("https://postman-echo.com/get?foo2=bar2"));
HttpClient client = HttpClient.newHttpClient();
List<CompletableFuture<String>> futures = targets.stream()
  .map(target -> client
    .sendAsync(
      HttpRequest.newBuilder(target).GET().build(),
      HttpResponse.BodyHandler.asString())
    .thenApply(response -> response.body()))
  .collect(Collectors.toList());
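If we then need all the results before moving on, one simple approach – sketched here – is to combine the futures with CompletableFuture.allOf() and collect the bodies once everything has completed:

// block until every call has completed, then gather the bodies
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
  .join();
List<String> bodies = futures.stream()
  .map(CompletableFuture::join)
  .collect(Collectors.toList());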

6.5. Setting Executor for Asynchronous Calls

We can also define an Executor which provides threads to be used by asynchronous calls.

This way we can, for example, limit the number of threads used for processing requests:

ExecutorService executorService = Executors.newFixedThreadPool(2);

CompletableFuture<HttpResponse<String>> response1 = HttpClient.newBuilder()
  .executor(executorService)
  .build()
  .sendAsync(request, HttpResponse.BodyHandler.asString());

CompletableFuture<HttpResponse<String>> response2 = HttpClient.newBuilder()
  .executor(executorService)
  .build()
  .sendAsync(request, HttpResponse.BodyHandler.asString());

By default, the HttpClient uses the executor returned by java.util.concurrent.Executors.newCachedThreadPool().

6.6. Defining a CookieManager

With the new API and builder, it’s straightforward to set a CookieManager for our connection. We can use the builder method cookieManager(CookieManager cookieManager) to define a client-specific CookieManager.

Let’s, for example, define a CookieManager which doesn’t accept cookies at all:

HttpClient.newBuilder()
  .cookieManager(new CookieManager(null, CookiePolicy.ACCEPT_NONE))
  .build();

In case our CookieManager allows cookies to be stored, we can access them by checking the CookieManager from our HttpClient:

httpClient.cookieManager().get().getCookieStore();

Now let’s focus on the last class from the HTTP API – HttpResponse.

7. HttpResponse Object

The HttpResponse class represents the response from the server. It provides a number of useful methods – but the two most important are:

  • statusCode() – returns status code (type int) for a response (HttpURLConnection class contains possible values)
  • body() – returns a body for a response (return type depends on the response BodyHandler parameter passed to the send() method)

The response object has other useful methods, which we’ll cover below, like uri(), headers(), trailers() and version().

7.1. URI of Response Object

The method uri() on the response object returns the URI from which we received the response.

Sometimes it can be different from the URI in the request object, because a redirect may have occurred:

assertThat(request.uri()
  .toString(), equalTo("http://stackoverflow.com"));
assertThat(response.uri()
  .toString(), equalTo("https://stackoverflow.com/"));

7.2. Headers from Response

We can obtain the headers from the response by calling the headers() method on a response object:

HttpResponse<String> response = HttpClient.newHttpClient()
  .send(request, HttpResponse.BodyHandler.asString());
HttpHeaders responseHeaders = response.headers();

It returns an HttpHeaders object. This is a new type defined in the jdk.incubator.http package which represents a read-only view of HTTP headers.

It has some useful methods which simplify looking up header values.
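For example, assuming the response carries the headers in question, we can look up values like this:

// firstValue() returns an Optional<String>
Optional<String> contentType = responseHeaders.firstValue("Content-Type");

// allValues() returns every value set for a given header name
List<String> cacheControl = responseHeaders.allValues("Cache-Control");

// map() exposes all headers as a read-only Map
Map<String, List<String>> headersMap = responseHeaders.map();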

7.3. Get Trailers from Response

The HTTP response may contain additional headers which are included after the response content. These headers are called trailer headers.

We can obtain them by calling the trailers() method on HttpResponse:

HttpResponse<String> response = HttpClient.newHttpClient()
  .send(request, HttpResponse.BodyHandler.asString());
CompletableFuture<HttpHeaders> trailers = response.trailers();

Note that the trailers() method returns a CompletableFuture object.

7.4. Version of the Response

The version() method tells us which version of the HTTP protocol was used to talk to the server.

Remember that even if we ask for HTTP/2, the server can answer via HTTP/1.1.

The version the server answered with is specified in the response:

HttpRequest request = HttpRequest.newBuilder()
  .uri(new URI("https://postman-echo.com/get"))
  .version(HttpClient.Version.HTTP_2)
  .GET()
  .build();
HttpResponse<String> response = HttpClient.newHttpClient()
  .send(request, HttpResponse.BodyHandler.asString());
assertThat(response.version(), equalTo(HttpClient.Version.HTTP_1_1));

8. Conclusion

In this article, we explored Java 9’s HttpClient API which provides a lot of flexibility and powerful features.

As always, the complete code can be found over on GitHub.

Note: In the examples, we’ve used sample REST endpoints provided by https://postman-echo.com.

Asynchronous HTTP with async-http-client in Java


1. Overview

AsyncHttpClient (AHC) is a library built on top of Netty, with the purpose of easily executing HTTP requests and processing responses asynchronously.

In this article, we’ll present how to configure and use AHC – how to execute a request and process the response.

2. Setup

The latest version of the library can be found in the Maven repository. We should be careful to use the dependency with the group id org.asynchttpclient and not the one with com.ning:

<dependency>
    <groupId>org.asynchttpclient</groupId>
    <artifactId>async-http-client</artifactId>
    <version>2.2.0</version>
</dependency>

3. HTTP Client Configuration

The most straightforward method of obtaining the HTTP client is by using the Dsl class. The static asyncHttpClient() method returns an AsyncHttpClient object:

AsyncHttpClient client = Dsl.asyncHttpClient();

If we need a custom configuration of the HTTP client, we can build the AsyncHttpClient object using the builder DefaultAsyncHttpClientConfig.Builder:

DefaultAsyncHttpClientConfig.Builder clientBuilder = Dsl.config();

This offers the possibility to configure timeouts, a proxy server, HTTP certificates and much more:

DefaultAsyncHttpClientConfig.Builder clientBuilder = Dsl.config()
  .setConnectTimeout(500)
  .setProxyServer(new ProxyServer(...));
AsyncHttpClient client = Dsl.asyncHttpClient(clientBuilder);

Once we’ve configured and obtained an instance of the HTTP client, we can reuse it across our application. We shouldn’t create an instance for each request, because each client internally creates new threads and connection pools, which can lead to performance issues.

Also, it’s important to note that once we’ve finished using the client, we should call the close() method to prevent memory leaks or hanging resources.
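Since AsyncHttpClient implements Closeable, a minimal sketch for a short-lived client is try-with-resources; a long-lived, shared client would instead be closed once during application shutdown:

try (AsyncHttpClient client = Dsl.asyncHttpClient()) {
    Response response = client.prepareGet("http://www.baeldung.com")
      .execute()
      .get();
    System.out.println(response.getStatusCode());
} catch (InterruptedException | ExecutionException | IOException e) {
    // handle or log the failure
}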

4. Creating an HTTP Request

There are two methods in which we can define an HTTP request using AHC:

  • bound
  • unbound

There is no major difference between the two request types in terms of performance. They only represent two separate APIs we can use to define a request. A bound request is tied to the HTTP client it was created from and will, by default, use the configuration of that specific client if not specified otherwise.

For example, when creating a bound request, the disableUrlEncoding flag is read from the HTTP client configuration, while for an unbound request it’s set to false by default. This is useful because the client configuration can be changed without recompiling the whole application, by using system properties passed as VM arguments:

java -Dorg.asynchttpclient.disableUrlEncodingForBoundRequests=true -jar app.jar

A complete list of properties can be found in the ahc-default.properties file.

4.1. Bound Request

To create a bound request we use the helper methods from the class AsyncHttpClient that start with the prefix “prepare”. Also, we can use the prepareRequest() method which receives an already created Request object.

For example, the prepareGet() method will create an HTTP GET request:

BoundRequestBuilder getRequest = client.prepareGet("http://www.baeldung.com");

4.2. Unbound Request

An unbound request can be created using the RequestBuilder class:

Request getRequest = new RequestBuilder(HttpConstants.Methods.GET)
  .setUrl("http://www.baeldung.com")
  .build();

or by using the Dsl helper class, which actually uses the RequestBuilder for configuring the HTTP method and URL of the request:

Request getRequest = Dsl.get("http://www.baeldung.com").build();

5. Executing HTTP Requests

The name of the library gives us a hint about how the requests can be executed. AHC has support for both synchronous and asynchronous requests.

Executing the request depends on its type. When using a bound request we use the execute() method from the BoundRequestBuilder class and when we have an unbound request we’ll execute it using one of the implementations of the executeRequest() method from the AsyncHttpClient interface.

5.1. Synchronously

The library was designed to be asynchronous, but when needed we can simulate synchronous calls by blocking on the Future object. Both execute() and executeRequest() methods return a ListenableFuture<Response> object. This class extends the Java Future interface, thus inheriting the get() method, which can be used to block the current thread until the HTTP request is completed and returns a response:

Future<Response> responseFuture = boundGetRequest.execute();
responseFuture.get();
Future<Response> responseFuture = client.executeRequest(unboundRequest);
responseFuture.get();

Using synchronous calls is useful when trying to debug parts of our code, but it’s not recommended to be used in a production environment where asynchronous executions lead to better performance and throughput.

5.2. Asynchronously

When we talk about asynchronous executions, we also talk about listeners for processing the results. The AHC library provides 3 types of listeners that can be used for asynchronous HTTP calls:

  • AsyncHandler
  • AsyncCompletionHandler
  • ListenableFuture listeners

The AsyncHandler listener offers the possibility to control and process the HTTP call before it has completed. Using it, we can handle a series of events related to the HTTP call:

request.execute(new AsyncHandler<Object>() {
    @Override
    public State onStatusReceived(HttpResponseStatus responseStatus)
      throws Exception {
        // the status line has arrived; keep processing the response
        return State.CONTINUE;
    }

    @Override
    public State onHeadersReceived(HttpHeaders headers)
      throws Exception {
        return State.CONTINUE;
    }

    @Override
    public State onBodyPartReceived(HttpResponseBodyPart bodyPart)
      throws Exception {
        return State.CONTINUE;
    }

    @Override
    public void onThrowable(Throwable t) {
        // handle or log the failure
    }

    @Override
    public Object onCompleted() throws Exception {
        // invoked once the whole response has been processed
        return null;
    }
});

The State enum lets us control the processing of the HTTP request. By returning State.ABORT we can stop the processing at a specific moment and by using State.CONTINUE we let the processing finish.
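As an illustrative sketch, we could use this to abort the call as soon as we see a non-200 status code, without downloading the body:

request.execute(new AsyncHandler<Integer>() {
    private int statusCode;

    @Override
    public State onStatusReceived(HttpResponseStatus responseStatus) {
        statusCode = responseStatus.getStatusCode();
        // stop right here if the server didn't answer with 200
        return statusCode == 200 ? State.CONTINUE : State.ABORT;
    }

    @Override
    public State onHeadersReceived(HttpHeaders headers) {
        return State.CONTINUE;
    }

    @Override
    public State onBodyPartReceived(HttpResponseBodyPart bodyPart) {
        return State.CONTINUE;
    }

    @Override
    public void onThrowable(Throwable t) {
        // handle or log the failure
    }

    @Override
    public Integer onCompleted() {
        return statusCode;
    }
});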

It’s important to mention that the AsyncHandler isn’t thread-safe and shouldn’t be reused when executing concurrent requests.

AsyncCompletionHandler inherits all the methods from the AsyncHandler interface and adds the onCompleted(Response) helper method for handling the call completion. All the other listener methods are overridden to return State.CONTINUE, thus making the code more readable:

request.execute(new AsyncCompletionHandler<Object>() {
    @Override
    public Object onCompleted(Response response) throws Exception {
        return response;
    }
});

The ListenableFuture interface lets us add listeners that will run when the HTTP call is completed.

Also, it lets us execute the listener code on another thread pool:

ListenableFuture<Response> listenableFuture = client
  .executeRequest(unboundRequest);
listenableFuture.addListener(() -> {
    Response response = listenableFuture.get();
    LOG.debug(response.getStatusCode());
}, Executors.newCachedThreadPool());

Besides the option to add listeners, the ListenableFuture interface also lets us transform the Future response into a CompletableFuture.
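Here’s a minimal sketch of that conversion, after which the whole CompletableFuture API is available:

CompletableFuture<Response> completableFuture = client
  .executeRequest(unboundRequest)
  .toCompletableFuture();

completableFuture.thenAccept(
  response -> LOG.debug("Status code: {}", response.getStatusCode()));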

6. Conclusion

AHC is a very powerful library, with a lot of interesting features. It offers a very simple way to configure an HTTP client and the capability of executing both synchronous and asynchronous requests.

As always, the source code for the article is available over on GitHub.

The Observer Pattern in Java


1. Overview

In this article, we’re going to describe the Observer pattern and take a look at a few Java implementation alternatives.

2. What is the Observer Pattern?

Observer is a behavioral design pattern. It specifies communication between objects: observable and observers. An observable is an object which notifies observers about the changes in its state.

For example, a news agency can notify channels when it receives news. Receiving news is what changes the state of the news agency, and it causes the channels to be notified.

Let’s see how we can implement it ourselves.

First, let’s define the NewsAgency class:

public class NewsAgency {
    private String news;
    private List<Channel> channels = new ArrayList<>();

    public void addObserver(Channel channel) {
        this.channels.add(channel);
    }

    public void removeObserver(Channel channel) {
        this.channels.remove(channel);
    }

    public void setNews(String news) {
        this.news = news;
        for (Channel channel : this.channels) {
            channel.update(this.news);
        }
    }
}

NewsAgency is an observable, and when news gets updated, the state of NewsAgency changes. When the change happens, NewsAgency notifies the observers about this fact by calling their update() method.

To be able to do that, the observable object needs to keep references to the observers, and in our case, it’s the channels variable.

Let’s now see what an observer can look like. Our NewsChannel class implements the Channel interface and has an update() method, which is invoked when the state of NewsAgency changes:

public class NewsChannel implements Channel {
    private String news;

    @Override
    public void update(Object news) {
        this.setNews((String) news);
    }

    // standard getter and setter
}

The Channel interface has only one method:

public interface Channel {
    public void update(Object o);
}

Now, if we add an instance of NewsChannel to the list of observers, and change the state of NewsAgency, the instance of NewsChannel will be updated:

NewsAgency observable = new NewsAgency();
NewsChannel observer = new NewsChannel();

observable.addObserver(observer);
observable.setNews("news");
assertEquals(observer.getNews(), "news");

There’s a predefined Observer interface in Java core libraries, which makes implementing the observer pattern even simpler. Let’s look at it.

3. Implementation with Observer

The java.util.Observer interface defines the update() method, so there’s no need to define it ourselves as we did in the previous section.

Let’s see how we can use it in our implementation:

public class ONewsChannel implements Observer {

    private String news;

    @Override
    public void update(Observable o, Object news) {
        this.setNews((String) news);
    }
}

Here, the second argument comes from Observable as we’ll see below.

To define the observable, we need to extend Java’s Observable class:

public class ONewsAgency extends Observable {
    private String news;

    public void setNews(String news) {
        this.news = news;
        setChanged();
        notifyObservers(news);
    }
}

Note that we don’t need to call the observer’s update() method directly. We just call setChanged() and notifyObservers(), and the Observable class does the rest for us.

Also, it contains a list of observers and exposes methods to maintain that list – addObserver() and deleteObserver().

To test the result, we just need to add the observer to this list and to set the news:

ONewsAgency observable = new ONewsAgency();
ONewsChannel observer = new ONewsChannel();

observable.addObserver(observer);
observable.setNews("news");
assertEquals(observer.getNews(), "news");

The Observer interface isn’t perfect and has been deprecated since Java 9. One of its cons is that Observable isn’t an interface but a class – so a class that already extends another class can’t be used as an observable.

Also, a developer could override some of the Observable‘s synchronized methods and disrupt their thread-safety.

Let’s look at the PropertyChangeListener interface, which is recommended instead of Observer.

4. Implementation with PropertyChangeListener

In this implementation, an observable must keep a reference to a PropertyChangeSupport instance. It helps us send notifications to observers when a property of the class changes.

Let’s define the observable:

public class PCLNewsAgency {
    private String news;

    private PropertyChangeSupport support;

    public PCLNewsAgency() {
        support = new PropertyChangeSupport(this);
    }

    public void addPropertyChangeListener(PropertyChangeListener pcl) {
        support.addPropertyChangeListener(pcl);
    }

    public void removePropertyChangeListener(PropertyChangeListener pcl) {
        support.removePropertyChangeListener(pcl);
    }

    public void setNews(String value) {
        support.firePropertyChange("news", this.news, value);
        this.news = value;
    }
}

Using this support, we can add and remove observers, and notify them when the state of the observable changes:

support.firePropertyChange("news", this.news, value);

Here, the first argument is the name of the observed property. The second and third arguments are its old and new values, respectively.

Observers should implement PropertyChangeListener:

public class PCLNewsChannel implements PropertyChangeListener {

    private String news;

    public void propertyChange(PropertyChangeEvent evt) {
        this.setNews((String) evt.getNewValue());
    }

    // standard getter and setter
}

Thanks to the PropertyChangeSupport class, which does the wiring for us, we can retrieve the new property value from the event.

Let’s test the implementation to make sure that it also works:

PCLNewsAgency observable = new PCLNewsAgency();
PCLNewsChannel observer = new PCLNewsChannel();

observable.addPropertyChangeListener(observer);
observable.setNews("news");

assertEquals(observer.getNews(), "news");

5. Conclusion

In this article, we’ve examined two ways to implement the Observer design pattern in Java, with the PropertyChangeListener approach being preferred.

The source code for the article is available over on GitHub.

Flyweight Pattern in Java


1. Overview

In this article, we’ll take a look at the flyweight design pattern. This pattern is used to reduce the memory footprint. It can also improve performance in applications where object instantiation is expensive.

Simply put, the flyweight pattern is based on a factory which recycles created objects by storing them after creation. Each time an object is requested, the factory looks up the object in order to check if it’s already been created. If it has, the existing object is returned – otherwise, a new one is created, stored and then returned.

The flyweight object’s state is made up of an invariant component shared with other similar objects (intrinsic) and a variant component which can be manipulated by the client code (extrinsic).

It’s very important that the flyweight objects are immutable: any operation on the state must be performed by the factory.

2. Implementation

The main elements of the pattern are:

  • an interface which defines the operations that the client code can perform on the flyweight object
  • one or more concrete implementations of our interface
  • a factory to handle object instantiation and caching

Let’s see how to implement each component.

2.1. Vehicle Interface

To begin with, we’ll create a Vehicle interface. Since this interface will be the return type of the factory method we need to make sure to expose all the relevant methods:

public void start();
public void stop();
public Color getColor();

2.2. Concrete Vehicle

Next up, let’s make a Car class as a concrete Vehicle. Our car will implement all the methods of the vehicle interface. As for its state, it’ll have an engine and a color field:

private Engine engine;
private Color color;

2.3. Vehicle Factory

Last but not least, we’ll create the VehicleFactory. Building a new vehicle is a very expensive operation, so the factory will only create one vehicle per color.

In order to do that, we keep track of the created vehicles using a map as a simple cache:

private static Map<Color, Vehicle> vehiclesCache
  = new HashMap<>();

public static Vehicle createVehicle(Color color) {
    Vehicle newVehicle = vehiclesCache.computeIfAbsent(color, newColor -> { 
        Engine newEngine = new Engine();
        return new Car(newEngine, newColor);
    });
    return newVehicle;
}

Notice how the client code can only affect the extrinsic state of the object (the color of our vehicle) by passing it as an argument to the createVehicle method.

3. Use Cases

3.1. Data Compression

The goal of the flyweight pattern is to reduce memory usage by sharing as much data as possible, hence, it’s a good basis for lossless compression algorithms. In this case, each flyweight object acts as a pointer with its extrinsic state being the context-dependent information.

A classic example of this usage is in a word processor. Here, each character is a flyweight object which shares the data needed for the rendering. As a result, only the position of the character inside the document takes up additional memory.
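To make this concrete, here’s a minimal, hypothetical sketch of such a character flyweight (the Glyph and GlyphFactory classes are invented for this example): the symbol is intrinsic, shared state, while the position is extrinsic and supplied by the caller:

public class Glyph {
    // intrinsic state, shared between all occurrences of the character
    private final char symbol;

    public Glyph(char symbol) {
        this.symbol = symbol;
    }

    // the position is extrinsic state, passed in by the client code
    public void render(int line, int column) {
        System.out.println("Rendering '" + symbol + "' at " + line + ":" + column);
    }
}

public class GlyphFactory {
    private static final Map<Character, Glyph> CACHE = new HashMap<>();

    public static Glyph getGlyph(char symbol) {
        // one shared Glyph instance per distinct character
        return CACHE.computeIfAbsent(symbol, Glyph::new);
    }
}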

3.2. Data Caching

Many modern applications use caches to improve response time. The flyweight pattern is similar to the core concept of a cache and can fit this purpose well.

Of course, there are a few key differences in complexity and implementation between this pattern and a typical, general-purpose cache.

4. Conclusion

To sum up, this quick tutorial focused on the flyweight design pattern in Java. We also checked out some of the most common scenarios that involve the pattern.

All the code from the examples is available over on the GitHub project.

Priority-based Job Scheduling in Java


1. Introduction

In a multi-threaded environment, sometimes we need to schedule tasks based on custom criteria instead of just the creation time.

Let’s see how we can achieve this in Java – using a PriorityBlockingQueue.

2. Overview

Let’s say we have jobs that we want to execute based on their priority:

public class Job implements Runnable {
    private String jobName;
    private JobPriority jobPriority;

    @Override
    public void run() {
        System.out.println("Job:" + jobName +
          " Priority:" + jobPriority);
        try {
            Thread.sleep(1000); // to simulate actual execution time
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // standard setters and getters
}

For demonstration purposes, we’re printing the job name and priority in the run() method.

We also added sleep() so that we simulate a longer-running job; while the job is executing, more jobs will get accumulated in the priority queue.

Finally, JobPriority is a simple enum:

public enum JobPriority {
    HIGH,
    MEDIUM,
    LOW
}

3. Custom Comparator

We need to write a comparator defining our custom criteria; and, in Java 8, it’s trivial. Since enum constants compare by their declaration order, HIGH is considered the smallest value, which makes high-priority jobs the head of the queue:

Comparator.comparing(Job::getJobPriority);

4. Priority Job Scheduler

With all the setup done, let’s now implement a simple job scheduler – which employs a single thread executor to look for jobs in the PriorityBlockingQueue and executes them:

public class PriorityJobScheduler {

    private ExecutorService priorityJobPoolExecutor;
    private ExecutorService priorityJobScheduler 
      = Executors.newSingleThreadExecutor();
    private PriorityBlockingQueue<Job> priorityQueue;

    public PriorityJobScheduler(Integer poolSize, Integer queueSize) {
        priorityJobPoolExecutor = Executors.newFixedThreadPool(poolSize);
        priorityQueue = new PriorityBlockingQueue<Job>(
          queueSize, 
          Comparator.comparing(Job::getJobPriority));
        priorityJobScheduler.execute(() -> {
            while (true) {
                try {
                    priorityJobPoolExecutor.execute(priorityQueue.take());
                } catch (InterruptedException e) {
                    // exception needs special handling
                    break;
                }
            }
        });
    }

    public void scheduleJob(Job job) {
        priorityQueue.add(job);
    }
}

The key here is to create an instance of PriorityBlockingQueue of the Job type with a custom comparator. The next job to execute is picked from the queue using the take() method, which retrieves and removes the head of the queue.

The client code now simply needs to call scheduleJob() – which adds the job to the queue. The priorityQueue.add() call queues the job at the appropriate position relative to the existing jobs in the queue, using the priority comparator.

Note that the actual jobs are executed using a separate ExecutorService with a dedicated thread pool.

5. Demo

Finally, here’s a quick demonstration of the scheduler:

private static int POOL_SIZE = 1;
private static int QUEUE_SIZE = 10;

@Test
public void whenMultiplePriorityJobsQueued_thenHighestPriorityJobIsPicked() {
    Job job1 = new Job("Job1", JobPriority.LOW);
    Job job2 = new Job("Job2", JobPriority.MEDIUM);
    Job job3 = new Job("Job3", JobPriority.HIGH);
    Job job4 = new Job("Job4", JobPriority.MEDIUM);
    Job job5 = new Job("Job5", JobPriority.LOW);
    Job job6 = new Job("Job6", JobPriority.HIGH);
    
    PriorityJobScheduler pjs = new PriorityJobScheduler(
      POOL_SIZE, QUEUE_SIZE);
    
    pjs.scheduleJob(job1);
    pjs.scheduleJob(job2);
    pjs.scheduleJob(job3);
    pjs.scheduleJob(job4);
    pjs.scheduleJob(job5);
    pjs.scheduleJob(job6);

    // clean up
}

To demonstrate that the jobs are executed in order of priority, we’ve kept the POOL_SIZE at 1 even though the QUEUE_SIZE is 10. We provide jobs with varying priorities to the scheduler.

Here is a sample output we got for one of the runs:

Job:Job3 Priority:HIGH
Job:Job6 Priority:HIGH
Job:Job4 Priority:MEDIUM
Job:Job2 Priority:MEDIUM
Job:Job1 Priority:LOW
Job:Job5 Priority:LOW

The output may vary across runs. However, we should never see a lower-priority job executed while the queue contains a higher-priority job.

6. Conclusion

In this quick tutorial, we saw how PriorityBlockingQueue can be used to execute jobs in a custom priority order.

As usual, source files can be found over on GitHub.
