
XML-Based Injection in Spring


1. Introduction

In this basic tutorial, we’ll learn how to do simple XML-based bean configuration with the Spring Framework.

2. Overview

Let’s start by adding Spring’s library dependency in the pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>5.0.1.RELEASE</version>         
</dependency>

The latest version of the Spring dependency can be found here.

3. Dependency Injection – An Overview

Dependency injection is a technique whereby dependencies of an object are supplied by external containers.

Let’s say we’ve got an application class that depends on a service that actually handles the business logic:

public class IndexApp {
    private IService service;
    // standard constructors/getters/setters
}

Now let’s say IService is an Interface:

public interface IService {
    public String serve();
}

This interface can have multiple implementations.

Let’s have a quick look at one potential implementation:

public class IndexService implements IService {
    @Override
    public String serve() {
        return "Hello World";
    }
}

Here, IndexApp is a high-level component that depends on the low-level component called IService.

In essence, we're decoupling IndexApp from a particular implementation of IService, which can vary based on various factors.

4. Dependency Injection – In Action

Let’s see how we can inject a dependency.

4.1. Using Properties

Let’s see how we can wire the dependencies together using an XML-based configuration:

<bean 
  id="indexService" 
  class="com.baeldung.di.spring.IndexService" />
     
<bean 
  id="indexApp" 
  class="com.baeldung.di.spring.IndexApp" >
    <property name="service" ref="indexService" />
</bean>    

As can be seen, we’re creating an instance of IndexService and assigning it an id. By default, the bean is a singleton. Also, we’re creating an instance of IndexApp.

Within this bean, we're injecting the other bean via its setter method.

4.2. Using Constructor

Instead of injecting a bean via the setter method, we can inject the dependency using the constructor:

<bean 
  id="indexApp" 
  class="com.baeldung.di.spring.IndexApp">
    <constructor-arg ref="indexService" />
</bean>    
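
For the constructor-arg to be resolvable, IndexApp needs a matching constructor (part of the "standard constructors" noted earlier); a minimal sketch:

public class IndexApp {
    private IService service;

    public IndexApp(IService service) {
        this.service = service;
    }

    // ...
}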

4.3. Using Static Factory

We can also inject a bean returned by a factory. Let’s create a simple factory that returns an instance of IService based on the number supplied:

public class StaticServiceFactory {
    public static IService getService(int number) {
        // ...
    }
}
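
The body is elided above; purely for illustration, such a factory might map the number to an implementation like this (a sketch, not the article's actual code):

public static IService getService(int number) {
    // hypothetical mapping: 1 resolves to the IndexService implementation
    if (number == 1) {
        return new IndexService();
    }
    throw new IllegalArgumentException("Unknown service number: " + number);
}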

Now let’s see how we could use above implementation to inject a bean into IndexApp using an XML-based configuration:

<bean id="messageService"
  class="com.baeldung.di.spring.StaticServiceFactory"
  factory-method="getService">
    <constructor-arg value="1" />
</bean>   
  
<bean id="indexApp" class="com.baeldung.di.spring.IndexApp">
    <property name="service" ref="messageService" />
</bean>

In the above example, we’re calling the static getService method using factory-method to create a bean with id messageService which we inject into IndexApp.

4.4. Using Factory Method

Let’s consider an instance factory that returns an instance of IService based on the number supplied. This time, the method is not static:

public class InstanceServiceFactory {
    public IService getService(int number) {
        // ...
    }
}

Now let’s see how we could use above implementation to inject a bean into IndexApp using XML configuration:

<bean id="indexServiceFactory" 
  class="com.baeldung.di.spring.InstanceServiceFactory" />
<bean id="messageService"
  class="com.baeldung.di.spring.InstanceServiceFactory"
  factory-method="getService" factory-bean="indexServiceFactory">
    <constructor-arg value="1" />
</bean>  
<bean id="indexApp" class="com.baeldung.di.spring.IndexApp">
    <property name="service" ref="messageService" />
</bean>

In the above example, we're calling the getService method on an instance of InstanceServiceFactory using factory-method to create a bean with id messageService, which we inject into IndexApp.

5. Testing

This is how we can access configured beans:

@Test
public void whenGetBeans_returnsBean() {
    ApplicationContext applicationContext = new ClassPathXmlApplicationContext("...");
    IndexApp indexApp = applicationContext.getBean("indexApp", IndexApp.class);
    assertNotNull(indexApp);
}
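
Here, the "..." stands for the configuration file name, which the article leaves out. Tying the earlier snippets together, a complete configuration file might look like this (file name and XSD boilerplate assumed):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="indexService" class="com.baeldung.di.spring.IndexService" />

    <bean id="indexApp" class="com.baeldung.di.spring.IndexApp">
        <property name="service" ref="indexService" />
    </bean>

</beans>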

6. Conclusion

In this quick tutorial, we illustrated how to inject dependencies using XML-based configuration with the Spring Framework.

The implementation of these examples can be found in the GitHub project – this is a Maven-based project, so it should be easy to import and run as is.


Java Weekly, Issue 202


Here we go…

1. Spring and Java

>> New Version Scheme for Java SE Platform and the JDK [infoq.com]

The details of the next version scheme for Java.

>> How to Implement Conditional Auditing with Hibernate Envers [thoughts-on-java.org]

A dive into Hibernate Envers and conditional auditing.

>> SOLID Principles in Action: From Slack to Twilio [twilio.com]

An interesting, long read from Twilio engineering.

>> JEPs Proposed to Target JDK 10 [openjdk.java.net]

These are the early JEPs targeted for JDK 10.

Also worth reading:

Time to upgrade:

2. Technical

>> Microservices with Nomad and Consul [blog.codecentric.de]

The Nomad/Consul stack is another interesting option for Microservices.

>> The multiple usages of git rebase –onto [blog.frankel.ch]

git rebase certainly has many useful applications.

>> The Pain of Implicit Dependencies [blog.thecodewhisperer.com]

Introducing implicit dependencies can effectively make code legacy.

Also worth reading:

3. Musings

>> Becoming an accidental architect [oreilly.com]

The role of the architect might be more demanding than you think it is.

>> How to Politely Say No and When To Do It [daedtech.com]

If there’s ever a silver bullet, it’s saying “no”.

It’s an uncomfortable skill most never master, and it can unlock a lot of great things, so it’s worth exploring and learning how to do right.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Barry Dingle [dilbert.com]

>> Barry Dingle Asks About Blockchain [dilbert.com]

>> App For Jumping Off the Roof [dilbert.com]

5. Pick of the Week

>> You can have two Big Things, but not three [blog.asmartbear.com]

An Overview of Identifiers in Hibernate


1. Introduction

Identifiers in Hibernate represent the primary key of an entity. This implies the values are unique so that they can identify a specific entity, that they aren’t null and that they won’t be modified.

Hibernate provides a few different ways to define identifiers. In this article, we’ll review each method of mapping entity ids using the library.

2. Simple Identifiers

The most straightforward way to define an identifier is by using the @Id annotation.

Simple ids are mapped using @Id to a single property of one of these types: Java primitive and primitive wrapper types, String, Date, BigDecimal, BigInteger.

Let’s see a quick example of defining an entity with a primary key of type long:

@Entity
public class Student {

    @Id
    private long studentId;
    
    // standard constructor, getters, setters
}

3. Generated Identifiers

If we want the primary key value to be generated automatically for us, we can add the @GeneratedValue annotation.

This can use 4 generation types: AUTO, IDENTITY, SEQUENCE, TABLE.

If we don’t specify a value explicitly, the generation type defaults to AUTO.

3.1. AUTO Generation

If we’re using the default generation type, the persistence provider will determine values based on the type of the primary key attribute.  This type can be numerical or UUID.

For numeric values, the generation is based on a sequence or table generator, while UUID values will use the UUIDGenerator.

Let’s see an example of mapping an entity primary key using AUTO generation strategy:

@Entity
public class Student {

    @Id
    @GeneratedValue
    private long studentId;

    // ...
}

In this case, the primary key values will be unique at the database level.

An interesting feature introduced in Hibernate 5 is the UUIDGenerator. To use this, all we need to do is declare an id of type UUID with the @GeneratedValue annotation:

@Entity
public class Course {

    @Id
    @GeneratedValue
    private UUID courseId;

    // ...
}

Hibernate will generate an id of the form “8dd5f315-9788-4d00-87bb-10eed9eff566”.

3.2. IDENTITY Generation

This type of generation relies on the IdentityGenerator which expects values generated by an identity column in the database, meaning they are auto-incremented.

To use this generation type, we only need to set the strategy parameter:

@Entity
public class Student {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long studentId;

    // ...
}

One thing to note is that IDENTITY generation disables batch inserts: Hibernate has to execute each INSERT immediately to retrieve the generated id, so the statements can't be batched.

3.3. SEQUENCE Generation

To use a sequence-based id, Hibernate provides the SequenceStyleGenerator class.

This generator uses sequences if they’re supported by our database, and switches to table generation if they aren’t.

To customize the sequence name, we can use the JPA @SequenceGenerator annotation:

@Entity
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, 
      generator = "sequence-generator")
    @SequenceGenerator(name = "sequence-generator", 
      sequenceName = "user_sequence", initialValue = 4)
    private long userId;
    
    // ...
}

In this example, we’ve also set an initial value for the sequence, which means the primary key generation will start at 4.

SEQUENCE is the generation type recommended by the Hibernate documentation.

The generated values are unique per sequence. If you don’t specify a sequence name, Hibernate will re-use the same hibernate_sequence for different types.

3.4. TABLE Generation

The TableGenerator uses an underlying database table that holds segments of identifier generation values.

Let’s customize the table name using the @TableGenerator annotation:

@Entity
public class Department {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, 
      generator = "table-generator")
    @TableGenerator(name = "table-generator", 
      table = "dep_ids", 
      pkColumnName = "seq_id", 
      valueColumnName = "seq_value")
    private long depId;

    // ...
}

In this example, we can see that other attributes such as the pkColumnName and valueColumnName can also be customized.

The disadvantage of this method is that it doesn’t scale well and can negatively affect performance.

To sum up, these four generation types will result in similar values being generated but use different database mechanisms.

3.5. Custom Generator

If we don’t want to use any of the out-of-the-box strategies, we can define our custom generator by implementing the IdentifierGenerator interface.

Let’s create a generator that builds identifiers containing a String prefix and a number:

public class MyGenerator 
  implements IdentifierGenerator, Configurable {

    private String prefix;

    @Override
    public Serializable generate(
      SharedSessionContractImplementor session, Object obj) 
      throws HibernateException {

        String query = String.format("select %s from %s", 
            session.getEntityPersister(obj.getClass().getName(), obj)
              .getIdentifierPropertyName(),
            obj.getClass().getSimpleName());

        Stream<String> ids = session.createQuery(query, String.class).stream();

        Long max = ids.map(o -> o.replace(prefix + "-", ""))
          .mapToLong(Long::parseLong)
          .max()
          .orElse(0L);

        return prefix + "-" + (max + 1);
    }

    @Override
    public void configure(Type type, Properties properties, 
      ServiceRegistry serviceRegistry) throws MappingException {
        prefix = properties.getProperty("prefix");
    }
}

In this example, we override the generate() method from the IdentifierGenerator interface to first find the highest number among the existing primary keys of the form prefix-XX.

Then we add 1 to the maximum number found and prepend the prefix property to obtain the newly generated id value.

Our class also implements the Configurable interface, so that we can set the prefix property value in the configure() method.

Next, let’s add this custom generator to an entity. For this, we can use the @GenericGenerator annotation with a strategy parameter that contains the full class name of our generator class:

@Entity
public class Product {

    @Id
    @GeneratedValue(generator = "prod-generator")
    @GenericGenerator(name = "prod-generator", 
      parameters = @Parameter(name = "prefix", value = "prod"), 
      strategy = "com.baeldung.hibernate.pojo.generator.MyGenerator")
    private String prodId;

    // ...
}

Also, notice we’ve set the prefix parameter to “prod”.

Let’s see a quick JUnit test for a clearer understanding of the id values generated:

@Test
public void whenSaveCustomGeneratedId_thenOk() {
    Product product = new Product();
    session.save(product);
    Product product2 = new Product();
    session.save(product2);

    assertThat(product2.getProdId()).isEqualTo("prod-2");
}

Here, the first value generated using the “prod” prefix was “prod-1”, followed by “prod-2”.

4. Composite Identifiers

Besides the simple identifiers we’ve seen so far, Hibernate also allows us to define composite identifiers.

A composite id is represented by a primary key class with one or more persistent attributes.

The primary key class must fulfill several conditions:

  • it should be defined using @EmbeddedId or @IdClass annotations
  • it should be public, serializable and have a public no-arg constructor
  • it should implement equals() and hashCode() methods

The class’s attributes can be basic, composite or ManyToOne while avoiding collections and OneToOne attributes.

4.1. @EmbeddedId

To define an id using @EmbeddedId, first we need a primary key class annotated with @Embeddable:

@Embeddable
public class OrderEntryPK implements Serializable {

    private long orderId;
    private long productId;

    // standard constructor, getters, setters
    // equals() and hashCode() 
}

Next, we can add an id of type OrderEntryPK to an entity using @EmbeddedId:

@Entity
public class OrderEntry {

    @EmbeddedId
    private OrderEntryPK entryId;

    // ...
}

Let’s see how we can use this type of composite id to set the primary key for an entity:

@Test
public void whenSaveCompositeIdEntity_thenOk() {
    OrderEntryPK entryPK = new OrderEntryPK();
    entryPK.setOrderId(1L);
    entryPK.setProductId(30L);
        
    OrderEntry entry = new OrderEntry();
    entry.setEntryId(entryPK);
    session.save(entry);

    assertThat(entry.getEntryId().getOrderId()).isEqualTo(1L);
}

Here the OrderEntry object has an OrderEntryPK primary id formed of two attributes: orderId and productId.

4.2. @IdClass

The @IdClass annotation is similar to the @EmbeddedId, except the attributes are defined in the main entity class using @Id for each one.

The primary-key class will look the same as before.

Let’s rewrite the OrderEntry example with an @IdClass:

@Entity
@IdClass(OrderEntryPK.class)
public class OrderEntry {
    @Id
    private long orderId;
    @Id
    private long productId;
    
    // ...
}

Then we can set the id values directly on the OrderEntry object:

@Test
public void whenSaveIdClassEntity_thenOk() {        
    OrderEntry entry = new OrderEntry();
    entry.setOrderId(1L);
    entry.setProductId(30L);
    session.save(entry);

    assertThat(entry.getOrderId()).isEqualTo(1L);
}

Note that for both types of composite ids, the primary key class can also contain @ManyToOne attributes.

Hibernate also allows defining primary keys made up of @ManyToOne associations combined with the @Id annotation, as sketched below. In this case, the entity class should also fulfill the conditions of a primary-key class.
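
As a sketch of that approach (with hypothetical Order and Product entities), the entity itself carries the @Id-annotated associations:

@Entity
public class OrderEntry implements Serializable {

    @Id
    @ManyToOne
    private Order order;

    @Id
    @ManyToOne
    private Product product;

    // equals() and hashCode()
}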

The disadvantage of this method is that there’s no separation between the entity object and the identifier.

5. Derived Identifiers

Derived identifiers are obtained from an entity’s association using the @MapsId annotation.

First, let’s create a UserProfile entity which derives its id from a one-to-one association with the User entity:

@Entity
public class UserProfile {

    @Id
    private long profileId;
    
    @OneToOne
    @MapsId
    private User user;

    // ...
}

Next, let’s verify that a UserProfile instance has the same id as its associated User instance:

@Test
public void whenSaveDerivedIdEntity_thenOk() {        
    User user = new User();
    session.save(user);
       
    UserProfile profile = new UserProfile();
    profile.setUser(user);
    session.save(profile);

    assertThat(profile.getProfileId()).isEqualTo(user.getUserId());
}

6. Conclusion

In this article, we’ve seen the multiple ways we can define identifiers in Hibernate.

The full source code of the examples can be found over on GitHub.

Quick Guide to Java Stack


1. Overview

In this article, we’ll introduce the java.util.Stack class and start looking at how we can make use of it. 

The Stack is a generic data structure which represents a LIFO (last in, first out) collection of objects allowing for pushing/popping elements in constant time.

2. Creation

Let’s start by creating an empty instance of Stack, by using the default, no-argument constructor:

@Test
public void whenStackIsCreated_thenItHasSize0() {
    Stack<Integer> intStack = new Stack<>();
 
    assertEquals(0, intStack.size());
}

This will create a Stack with a default capacity of 10. If the number of added elements exceeds the current capacity, the capacity is doubled automatically. However, the capacity never shrinks after we remove elements.
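
We can observe the doubling through the capacity() method inherited from Vector; a quick sketch:

@Test
public void whenCapacityIsExceeded_thenCapacityDoubles() {
    Stack<Integer> intStack = new Stack<>();
    // one more element than the default capacity of 10
    for (int i = 0; i < 11; i++) {
        intStack.push(i);
    }

    assertEquals(20, intStack.capacity());
}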

3. Synchronization

Stack is a direct subclass of Vector; this means that similarly to its superclass, it’s a synchronized implementation.

However, synchronization isn't always needed; in such cases, it's advised to use an ArrayDeque instead.
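
For instance, the same LIFO operations with the unsynchronized ArrayDeque look like this:

Deque<Integer> deque = new ArrayDeque<>();
deque.push(5);                // adds to the head
Integer top = deque.peek();   // reads the head without removing it
Integer popped = deque.pop(); // removes and returns the head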

4. Adding

Let’s start by adding an element to the top of the Stack, with the push() method – which also returns the element that was added:

@Test
public void whenElementIsPushed_thenStackSizeIsIncreased() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(1);
 
    assertEquals(1, intStack.size());
}

Using the push() method has the same effect as using addElement(). The only difference is that addElement(), inherited from Vector, has a void return type, while push() returns the element that was added.
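
A quick illustration of the different return types:

Stack<Integer> intStack = new Stack<>();
Integer pushed = intStack.push(1); // returns the pushed element
boolean added = intStack.add(2);   // returns true, from Collection#add
intStack.addElement(3);            // void, from Vector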

We can also add multiple elements at once:

@Test
public void whenMultipleElementsArePushed_thenStackSizeIsIncreased() {
    Stack<Integer> intStack = new Stack<>();
    List<Integer> intList = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
    boolean result = intStack.addAll(intList);

    assertTrue(result);
    assertEquals(7, intStack.size());
}

5. Retrieving

Next, let’s have a look at how to get and remove the last element in a Stack:

@Test
public void whenElementIsPoppedFromStack_thenSizeChanges() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(5);
    intStack.pop();

    assertTrue(intStack.isEmpty());
}

We can also get the last element of the Stack without removing it:

@Test
public void whenElementIsPeeked_thenElementIsNotRemoved() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(5);
    intStack.peek();

    assertEquals(1, intStack.search(5));
    assertEquals(1, intStack.size());
}

6. Searching for an Element

6.1. Search

Stack allows us to search for an element and get its distance from the top:

@Test
public void whenElementIsOnStack_thenSearchReturnsItsDistanceFromTheTop() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(5);

    assertEquals(1, intStack.search(5));
}

The result is the 1-based position of the given object, counted from the top of the stack. If more than one matching object is present, the position of the one closest to the top is returned. The item on top of the stack is considered to be at position 1.

If the Object is not found, search() will return -1.

6.2. Getting Index of Element

To get an index of an element on the Stack, we can also use the indexOf() and lastIndexOf() methods:

@Test
public void whenElementIsOnStack_thenIndexOfReturnsItsIndex() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(5);
    int indexOf = intStack.indexOf(5);

    assertEquals(0, indexOf);
}

The lastIndexOf() will always find the index of the element that’s closest to the top of the stack. This works very similarly to search() – with the important difference that it returns the index, instead of the distance from the top:

@Test
public void whenMultipleElementsAreOnStack_thenIndexOfReturnsLastElementIndex() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(5);
    intStack.push(5);
    intStack.push(5);
    int lastIndexOf = intStack.lastIndexOf(5);

    assertEquals(2, lastIndexOf);
}

7. Removing Elements

Apart from the pop() operation, used both for removing and retrieving elements, we can also use multiple operations inherited from the Vector class to remove elements.

7.1. Removing Specified Elements

We can use the removeElement() method to remove the first occurrence of a given element:

@Test
public void whenRemoveElementIsInvoked_thenElementIsRemoved() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(5);
    intStack.push(5);
    intStack.removeElement(5);
 
    assertEquals(1, intStack.size());
}

We can also use removeElementAt() to delete the element at a specified index in the Stack:

@Test
public void whenRemoveElementAtIsInvoked_thenElementIsRemoved() {
    Stack<Integer> intStack = new Stack<>();
    intStack.push(5);
    intStack.push(7);
    intStack.removeElementAt(1);

    assertEquals(-1, intStack.search(7));
}

7.2. Removing Multiple Elements

Let’s have a quick look at how to remove multiple elements from a Stack using the removeAll() API – which will take a Collection as an argument and remove all matching elements from the Stack:

@Test
public void whenRemoveAllIsInvoked_thenAllElementsFromCollectionAreRemoved() {
    Stack<Integer> intStack = new Stack<>();
    List<Integer> intList = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
    intStack.addAll(intList);
    intStack.add(500);
    intStack.removeAll(intList);
 
    assertEquals(1, intStack.size());
}

It’s also possible to remove all elements from the Stack using the clear() or removeAllElements() methods; both of those methods work the same:

@Test
public void whenClearIsInvoked_thenAllElementsAreRemoved() {
    Stack<Integer> intStack = new Stack<>();
    List<Integer> intList = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
    intStack.addAll(intList);
    intStack.clear();

    assertTrue(intStack.isEmpty());
}

7.3. Removing Elements Using Filter

We can also use a condition for removing elements from the Stack. Let’s see how to do this using the removeIf(), with a filter expression as an argument:

@Test
public void whenRemoveIfIsInvoked_thenAllElementsSatisfyingConditionAreRemoved() {
    Stack<Integer> intStack = new Stack<>();
    List<Integer> intList = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
    intStack.addAll(intList);
    intStack.removeIf(element -> element < 6);
    
    assertEquals(2, intStack.size());
}

8. Iterating

Stack allows us to use both an Iterator and a ListIterator. The main difference is that the first allows us to traverse the Stack in one direction, while the second allows us to do so in both directions:

@Test
public void whenAnotherStackCreatedWhileTraversingStack_thenStacksAreEqual() {
    Stack<Integer> intStack = new Stack<>();
    List<Integer> intList = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
    intStack.addAll(intList);
    ListIterator<Integer> it = intStack.listIterator();
    Stack<Integer> result = new Stack<>();
    while(it.hasNext()) {
        result.push(it.next());
    }

    assertThat(result, equalTo(intStack));
}

All Iterators returned by Stack are fail-fast.

9. Stream API

A Stack is a collection, which means we can use it with Java 8 Streams API. Using Streams with the Stack is similar to using it with any other Collection:

@Test
public void whenStackIsFiltered_allElementsNotSatisfyingFilterConditionAreDiscarded() {
    Stack<Integer> intStack = new Stack<>();
    List<Integer> inputIntList = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 9, 10);
    intStack.addAll(inputIntList);
    int[] intArray = intStack.stream()
      .mapToInt(element -> (int)element)
      .filter(element -> element <= 3)
      .toArray();
 
    assertEquals(3, intArray.length);
}

10. Summary

This tutorial was a quick guide to understanding the Java Stack. To learn more about this topic, refer to the Javadoc.

And, as always, all code samples can be found over on GitHub.

Generating Prime Numbers in Java


1. Introduction

In this tutorial, we’ll show various ways in which we can generate prime numbers using Java.

If you’re looking to check if a number is prime – here’s a quick guide on how to do that.

2. Prime Numbers

Let’s start with the core definition. A prime number is a natural number greater than one that has no positive divisors other than one and itself.

For example, 7 is prime because 1 and 7 are its only positive integer factors, whereas 12 is not because, in addition to 1 and 12, it also has the divisors 2, 3, 4 and 6.

3. Generating Prime Numbers

In this section, we'll see how we can efficiently generate prime numbers lower than a given value.

3.1. Java 7 And Before – Brute Force

public static List<Integer> primeNumbersBruteForce(int n) {
    List<Integer> primeNumbers = new LinkedList<>();
    for (int i = 2; i <= n; i++) {
        if (isPrimeBruteForce(i)) {
            primeNumbers.add(i);
        }
    }
    return primeNumbers;
}
public static boolean isPrimeBruteForce(int number) {
    for (int i = 2; i < number; i++) {
        if (number % i == 0) {
            return false;
        }
    }
    return true;
}

As you can see, primeNumbersBruteForce iterates over the numbers from 2 to n and simply calls the isPrimeBruteForce() method to check whether a number is prime.

The method checks each number's divisibility by the numbers in the range from 2 to number - 1.

If at any point we encounter a number that divides it evenly, we return false. When we find that the number is not divisible by any smaller number, we return true, indicating it's a prime number.

3.2. Efficiency And Optimization

The previous algorithm is not linear; it has a time complexity of O(n^2). The algorithm is also not efficient, and there's clearly room for improvement.

Let’s look at the condition in the isPrimeBruteForce() method.

When a number is not prime, it can be factored into two factors a and b, i.e., number = a * b. If both a and b were greater than the square root of n, a * b would be greater than n.

So at least one of those factors must be less than or equal to the square root of the number; to check if a number is prime, we only need to test factors lower than or equal to the square root of the number being checked.

Additionally, no prime number other than 2 is even, since every even number is divisible by 2, so we can skip the even candidates entirely.

Keeping in mind above ideas, let’s improve the algorithm:

public static List<Integer> primeNumbersBruteForce(int n) {
    List<Integer> primeNumbers = new LinkedList<>();
    if (n >= 2) {
        primeNumbers.add(2);
    }
    for (int i = 3; i <= n; i += 2) {
        if (isPrimeBruteForce(i)) {
            primeNumbers.add(i);
        }
    }
    return primeNumbers;
}
private static boolean isPrimeBruteForce(int number) {
    for (int i = 2; i * i <= number; i++) {
        if (number % i == 0) {
            return false;
        }
    }
    return true;
}

3.3. Using Java 8

Let’s see how we can rewrite the previous solution using Java 8 idioms:

public static List<Integer> primeNumbersTill(int n) {
    return IntStream.rangeClosed(2, n)
      .filter(x -> isPrime(x)).boxed()
      .collect(Collectors.toList());
}
private static boolean isPrime(int number) {
    return IntStream.rangeClosed(2, (int) Math.sqrt(number))
      .allMatch(n -> number % n != 0);
}

3.4. Using Sieve Of Eratosthenes

There’s yet another efficient method which could help us to generate prime numbers efficiently, and it’s called Sieve Of Eratosthenes. Its time efficiency is O(n logn).

Let’s take a look at the steps of this algorithm:

  1. Create a list of consecutive integers from 2 to n: (2, 3, 4, …, n)
  2. Initially, let p equal 2, the first prime number
  3. Starting from p, count up in increments of p and mark each of these numbers greater than p itself in the list. These numbers will be 2p, 3p, 4p, etc.; note that some of them may have already been marked
  4. Find the first number greater than p in the list that is not marked. If there was no such number, stop. Otherwise, let p now equal this number (which is the next prime), and repeat from step 3

At the end when the algorithm terminates, all the numbers in the list that are not marked are the prime numbers.

Here’s what the code looks like:

public static List<Integer> sieveOfEratosthenes(int n) {
    boolean prime[] = new boolean[n + 1];
    Arrays.fill(prime, true);
    for (int p = 2; p * p <= n; p++) {
        if (prime[p]) {
            for (int i = p * 2; i <= n; i += p) {
                prime[i] = false;
            }
        }
    }
    List<Integer> primeNumbers = new LinkedList<>();
    for (int i = 2; i <= n; i++) {
        if (prime[i]) {
            primeNumbers.add(i);
        }
    }
    return primeNumbers;
}
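
For a quick check, calling the method for n = 30 yields the primes below 30:

List<Integer> primes = sieveOfEratosthenes(30);
System.out.println(primes); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]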

3.5. Working Example of Sieve Of Eratosthenes

Let’s see how it works for n=30.

Consider the image above, here are the passes made by the algorithm:

  1. The loop starts with 2, so we leave 2 unmarked and mark all the divisors of 2. It’s marked in image with the red color
  2. The loop moves to 3, so we leave 3 unmarked and mark all the divisors of 3 not already marked. It’s marked in image with the green color
  3. Loop moves to 4, it’s already marked, so we continue
  4. Loop moves to 5, so we leave 5 unmarked and mark all the divisors of 5 not already marked. It’s marked in image with the purple color
  5. We continue above steps until loop is reached equal to square root of n

4. Conclusion

In this quick tutorial, we illustrated ways in which we can generate prime numbers up to a given value n.

The implementation of these examples can be found over on GitHub.

Creating a Java Compiler Plugin


1. Overview

Java 8 provides an API for creating Javac plugins. Unfortunately, it’s hard to find good documentation for it.

In this article, we’re going to show the whole process of creating a compiler extension which adds custom code to *.class files.

2. Setup

First, we need to add JDK’s tools.jar as a dependency for our project:

<dependency>
    <groupId>com.sun</groupId>
    <artifactId>tools</artifactId>
    <version>1.8.0</version>
    <scope>system</scope>
    <systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>

Every compiler extension is a class which implements the com.sun.source.util.Plugin interface. Let's create it in our example:

public class SampleJavacPlugin implements Plugin {

    @Override
    public String getName() {
        return "MyPlugin";
    }

    @Override
    public void init(JavacTask task, String... args) {
        Context context = ((BasicJavacTask) task).getContext();
        Log.instance(context)
          .printRawLines(Log.WriterKind.NOTICE, "Hello from " + getName());
    }
}

For now, we’re just printing “Hello” to ensure that our code is successfully picked up and included in the compilation.

Our end goal is to create a plugin that adds runtime checks for every numeric argument marked with a given annotation and throws an exception if the argument doesn't match a condition.

There’s one more necessary step to make the extension discoverable by Javac: it should be exposed through the ServiceLoader framework.

To achieve this, we need to create a file named com.sun.source.util.Plugin with content which is our plugin’s fully qualified class name (com.baeldung.javac.SampleJavacPlugin) and place it in the META-INF/services directory.
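
So the registration file's path and content look like this:

# META-INF/services/com.sun.source.util.Plugin
com.baeldung.javac.SampleJavacPlugin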

After that, we can call Javac with the -Xplugin:MyPlugin switch:

baeldung/tutorials$ javac -cp ./core-java/target/classes -Xplugin:MyPlugin ./core-java/src/main/java/com/baeldung/javac/TestClass.java
Hello from MyPlugin

Note that we must always use a String returned from the plugin’s getName() method as a -Xplugin option value.

3. Plugin Lifecycle

A plugin is called by the compiler only once, through the init() method.

To be notified of subsequent events, we have to register a callback. These arrive before and after every processing stage per source file:

  • PARSE – builds an Abstract Syntax Tree (AST)
  • ENTER – source code imports are resolved
  • ANALYZE – parser output (an AST) is analyzed for errors
  • GENERATE – generating binaries for the target source file

There are two more event kinds – ANNOTATION_PROCESSING and ANNOTATION_PROCESSING_ROUND but we’re not interested in them here.

For example, when we want to enhance compilation by adding some checks based on source code info, it’s reasonable to do that at the PARSE finished event handler:

public void init(JavacTask task, String... args) {
    task.addTaskListener(new TaskListener() {
        public void started(TaskEvent e) {
        }

        public void finished(TaskEvent e) {
            if (e.getKind() != TaskEvent.Kind.PARSE) {
                return;
            }
            // Perform instrumentation
        }
    });
}

4. Extract AST Data

We can get an AST generated by the Java compiler through the TaskEvent.getCompilationUnit(). Its details can be examined through the TreeVisitor interface.

Note that only a Tree element, for which the accept() method is called, dispatches events to the given visitor.

For example, when we execute ClassTree.accept(visitor), only visitClass() is triggered; we can’t expect that, say, visitMethod() is also activated for every method in the given class.

We can use TreeScanner to overcome the problem:

public void finished(TaskEvent e) {
    if (e.getKind() != TaskEvent.Kind.PARSE) {
        return;
    }
    e.getCompilationUnit().accept(new TreeScanner<Void, Void>() {
        @Override
        public Void visitClass(ClassTree node, Void aVoid) {
            return super.visitClass(node, aVoid);
        }

        @Override
        public Void visitMethod(MethodTree node, Void aVoid) {
            return super.visitMethod(node, aVoid);
        }
    }, null);
}

In this example, it’s necessary to call super.visitXxx(node, value) to recursively process the current node’s children.

5. Modify AST

To showcase how we can modify the AST, we’ll insert runtime checks for all numeric arguments marked with a @Positive annotation.

This is a simple annotation that can be applied to method parameters:

@Documented
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.PARAMETER})
public @interface Positive { }

Here’s an example of using the annotation:

public void service(@Positive int i) { }

In the end, we want the bytecode to look as if it’s compiled from a source like this:

public void service(@Positive int i) {
    if (i <= 0) {
        throw new IllegalArgumentException("A non-positive argument ("
          + i + ") is given as a @Positive parameter 'i'");
    }
}

What this means is that we want an IllegalArgumentException to be thrown for every argument marked with @Positive that is less than or equal to 0.

5.1. Where to Instrument

Let’s find out how we can locate target places where the instrumentation should be applied:

private static Set<String> TARGET_TYPES = Stream.of(
  byte.class, short.class, char.class, 
  int.class, long.class, float.class, double.class)
 .map(Class::getName)
 .collect(Collectors.toSet());

For simplicity, we’ve only added primitive numeric types here.

Next, let’s define a shouldInstrument() method that checks if the parameter has a type in the TARGET_TYPES set as well as the @Positive annotation:

private boolean shouldInstrument(VariableTree parameter) {
    return TARGET_TYPES.contains(parameter.getType().toString())
      && parameter.getModifiers().getAnnotations().stream()
      .anyMatch(a -> Positive.class.getSimpleName()
        .equals(a.getAnnotationType().toString()));
}

Then we’ll continue the finished() method in our SampleJavacPlugin class with applying a check to all parameters that fulfill our conditions:

public void finished(TaskEvent e) {
    if (e.getKind() != TaskEvent.Kind.PARSE) {
        return;
    }
    e.getCompilationUnit().accept(new TreeScanner<Void, Void>() {
        @Override
        public Void visitMethod(MethodTree method, Void v) {
            List<VariableTree> parametersToInstrument
              = method.getParameters().stream()
              .filter(SampleJavacPlugin.this::shouldInstrument)
              .collect(Collectors.toList());

            // context is the Context instance obtained in init()
            if (!parametersToInstrument.isEmpty()) {
                Collections.reverse(parametersToInstrument);
                parametersToInstrument.forEach(p -> addCheck(method, p, context));
            }
            return super.visitMethod(method, v);
        }
    }, null);
}

In this example, we’ve reversed the parameters list because there’s a possible case that more than one argument is marked by @Positive. As every check is added as the very first method instruction, we process them RTL to ensure correct order.

As every check is added as the very first method instruction, we process them RTL to ensure correct order.

5.2. How to Instrument

The problem is that reading the AST is part of the public API, while operations that modify the AST, like adding checks, belong to the private API.

To address this, we’ll create new AST elements through a TreeMaker instance.

First, we need to obtain a Context instance:

@Override
public void init(JavacTask task, String... args) {
    Context context = ((BasicJavacTask) task).getContext();
    // ...
}

Then, we can obtain the TreeMaker object through the TreeMaker.instance(Context) method.

Now we can build new AST elements, e.g., an if expression can be constructed by a call to TreeMaker.If():

private static JCTree.JCIf createCheck(VariableTree parameter, Context context) {
    TreeMaker factory = TreeMaker.instance(context);
    Names symbolsTable = Names.instance(context);
        
    return factory.at(((JCTree) parameter).pos)
      .If(factory.Parens(createIfCondition(factory, symbolsTable, parameter)),
        createIfBlock(factory, symbolsTable, parameter), 
        null);
}

Please note that we want to show the correct stack trace line when an exception is thrown from our check. That’s why we adjust the AST factory position before creating new elements through it with factory.at(((JCTree) parameter).pos).

The createIfCondition() method builds the "parameterId <= 0" if condition:

private static JCTree.JCBinary createIfCondition(TreeMaker factory, 
  Names symbolsTable, VariableTree parameter) {
    Name parameterId = symbolsTable.fromString(parameter.getName().toString());
    return factory.Binary(JCTree.Tag.LE, 
      factory.Ident(parameterId), 
      factory.Literal(TypeTag.INT, 0));
}

Next, the createIfBlock() method builds a block that throws an IllegalArgumentException:

private static JCTree.JCBlock createIfBlock(TreeMaker factory, 
  Names symbolsTable, VariableTree parameter) {
    String parameterName = parameter.getName().toString();
    Name parameterId = symbolsTable.fromString(parameterName);
        
    String errorMessagePrefix = String.format(
      "Argument '%s' of type %s is marked by @%s but got '", 
      parameterName, parameter.getType(), Positive.class.getSimpleName());
    String errorMessageSuffix = "' for it";
        
    return factory.Block(0, com.sun.tools.javac.util.List.of(
      factory.Throw(
        factory.NewClass(null, nil(), 
          factory.Ident(symbolsTable.fromString(
            IllegalArgumentException.class.getSimpleName())),
            com.sun.tools.javac.util.List.of(factory.Binary(JCTree.Tag.PLUS, 
            factory.Binary(JCTree.Tag.PLUS, 
              factory.Literal(TypeTag.CLASS, errorMessagePrefix), 
              factory.Ident(parameterId)), 
              factory.Literal(TypeTag.CLASS, errorMessageSuffix))), null))));
}

Now that we’re able to build new AST elements, we need to insert them into the AST prepared by the parser. We can achieve this by casting public API elements to private API types:

private void addCheck(MethodTree method, VariableTree parameter, Context context) {
    JCTree.JCIf check = createCheck(parameter, context);
    JCTree.JCBlock body = (JCTree.JCBlock) method.getBody();
    body.stats = body.stats.prepend(check);
}

6. Testing the Plugin

We need to be able to test our plugin. It involves the following:

  • compile the test source
  • run the compiled binaries and ensure that they behave as expected

For this, we need to introduce a few auxiliary classes.

SimpleSourceFile exposes the given source file's text to Javac:

public class SimpleSourceFile extends SimpleJavaFileObject {
    private String content;

    public SimpleSourceFile(String qualifiedClassName, String testSource) {
        super(URI.create(String.format(
          "file://%s%s", qualifiedClassName.replaceAll("\\.", "/"),
          Kind.SOURCE.extension)), Kind.SOURCE);
        content = testSource;
    }

    @Override
    public CharSequence getCharContent(boolean ignoreEncodingErrors) {
        return content;
    }
}

SimpleClassFile holds the compilation result as a byte array:

public class SimpleClassFile extends SimpleJavaFileObject {

    private ByteArrayOutputStream out;

    public SimpleClassFile(URI uri) {
        super(uri, Kind.CLASS);
    }

    @Override
    public OutputStream openOutputStream() throws IOException {
        return out = new ByteArrayOutputStream();
    }

    public byte[] getCompiledBinaries() {
        return out.toByteArray();
    }

    // getters
}

SimpleFileManager ensures the compiler uses our bytecode holder:

public class SimpleFileManager
  extends ForwardingJavaFileManager<StandardJavaFileManager> {

    private List<SimpleClassFile> compiled = new ArrayList<>();

    // standard constructors/getters

    @Override
    public JavaFileObject getJavaFileForOutput(Location location,
      String className, JavaFileObject.Kind kind, FileObject sibling) {
        SimpleClassFile result = new SimpleClassFile(
          URI.create("string://" + className));
        compiled.add(result);
        return result;
    }

    public List<SimpleClassFile> getCompiled() {
        return compiled;
    }
}

Finally, all of that is bound to the in-memory compilation:

public class TestCompiler {
    public byte[] compile(String qualifiedClassName, String testSource) {
        StringWriter output = new StringWriter();

        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        SimpleFileManager fileManager = new SimpleFileManager(
          compiler.getStandardFileManager(null, null, null));
        List<SimpleSourceFile> compilationUnits 
          = singletonList(new SimpleSourceFile(qualifiedClassName, testSource));
        List<String> arguments = new ArrayList<>();
        arguments.addAll(asList("-classpath", System.getProperty("java.class.path"),
          "-Xplugin:" + SampleJavacPlugin.NAME));
        JavaCompiler.CompilationTask task 
          = compiler.getTask(output, fileManager, null, arguments, null,
          compilationUnits);
        
        task.call();
        return fileManager.getCompiled().iterator().next().getCompiledBinaries();
    }
}

After that, we need only to run the binaries:

public class TestRunner {

    public Object run(byte[] byteCode, String qualifiedClassName, String methodName,
      Class<?>[] argumentTypes, Object... args) throws Throwable {
        ClassLoader classLoader = new ClassLoader() {
            @Override
            protected Class<?> findClass(String name) throws ClassNotFoundException {
                return defineClass(name, byteCode, 0, byteCode.length);
            }
        };
        Class<?> clazz;
        try {
            clazz = classLoader.loadClass(qualifiedClassName);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("Can't load compiled test class", e);
        }

        Method method;
        try {
            method = clazz.getMethod(methodName, argumentTypes);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(
              "Can't find the 'main()' method in the compiled test class", e);
        }

        try {
            return method.invoke(null, args);
        } catch (InvocationTargetException e) {
            throw e.getCause();
        }
    }
}

A test might look like this:

public class SampleJavacPluginTest {

    private static final String CLASS_TEMPLATE
      = "package com.baeldung.javac;\n\n" +
        "public class Test {\n" +
        "    public static %1$s service(@Positive %1$s i) {\n" +
        "        return i;\n" +
        "    }\n" +
        "}\n" +
        "";

    private TestCompiler compiler = new TestCompiler();
    private TestRunner runner = new TestRunner();

    @Test(expected = IllegalArgumentException.class)
    public void givenInt_whenNegative_thenThrowsException() throws Throwable {
        compileAndRun(double.class, -1);
    }
    
    private Object compileAndRun(Class<?> argumentType, Object argument) 
      throws Throwable {
        String qualifiedClassName = "com.baeldung.javac.Test";
        byte[] byteCode = compiler.compile(qualifiedClassName, 
          String.format(CLASS_TEMPLATE, argumentType.getName()));
        return runner.run(byteCode, qualifiedClassName, 
        "service", new Class[] {argumentType}, argument);
    }
}

Here we’re compiling a Test class with a service() method that has a parameter annotated with @Positive. Then, we’re running the Test class by setting a double value of -1 for the method parameter.

As a result of running the compiler with our plugin, the test will throw an IllegalArgumentException for the negative parameter.

7. Conclusion

In this article, we’ve shown the full process of creating, testing and running a Java Compiler plugin.

The full source code of the examples can be found over on GitHub.

Introduction to Spring REST Shell


1. Overview

In this article, we’ll have a look at Spring REST Shell and some of its features.

It’s a Spring Shell extension so we recommend reading about it first.

2. Introduction

The Spring REST Shell is a command-line shell designed to facilitate working with Spring HATEOAS-compliant REST resources.

We no longer need to manipulate URLs in bash with tools like curl; Spring REST Shell provides a more convenient way of interacting with REST resources.
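
For comparison, fetching a HATEOAS resource with curl would look something like this (an illustrative command, assuming the API described below):

curl -H "Accept: application/hal+json" http://localhost:8080/articles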

3. Installation

If we’re using a macOS machine with Homebrew, we can simply execute the next command:

brew install rest-shell

For users of other operating systems, we need to download a binary package from the official GitHub project page, unpack the package and find an executable to run:

tar -zxvf rest-shell-1.2.0.RELEASE.tar.gz
cd rest-shell-1.2.0.RELEASE
bin/rest-shell

Another option is to download the source code and perform a Gradle task:

git clone git://github.com/spring-projects/rest-shell.git
cd rest-shell
./gradlew installApp
cd build/install/rest-shell-1.2.0.RELEASE
bin/rest-shell

If everything is set correctly, we’ll see the following greeting:

 ___ ___  __ _____  __  _  _     _ _  __    
| _ \ __/' _/_   _/' _/| || |   / / | \ \   
| v / _|`._`. | | `._`.| >< |  / / /   > >  
|_|_\___|___/ |_| |___/|_||_| |_/_/   /_/   
1.2.1.RELEASE

Welcome to the REST shell. For assistance hit TAB or type "help".
http://localhost:8080:>

4. Getting Started

We’ll be working with the API already developed for another article. The localhost:8080 is used as a base URL.

Here’s a list of exposed endpoints:

  • GET /articles – get all Articles
  • GET /articles/{id} – get an Article by id
  • GET /articles/search/findByTitle?title={title} – get an Article by title
  • GET /profile/articles – get the profile data for an Article resource
  • POST /articles – create a new Article with a body provided

The Article class has three fields: id, title, and content.
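
The backing entity might be sketched as follows (an assumption based on the fields above; the actual class lives in the other article's code):

@Entity
public class Article {

    @Id
    @GeneratedValue
    private Long id;

    private String title;
    private String content;

    // standard constructors, getters, setters
}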

4.1. Creating New Resources

Let’s add a new article. We’re going to use the post command passing a JSON String with the –data parameter.

First, we need to follow the URL associated with the resource we want to add. The command follow takes a relative URI, concatenates it with the baseUri and sets the result as the current location:

http://localhost:8080:> follow articles
http://localhost:8080/articles:> post --data "{title: "First Article"}"

The result of the execution of the command will be:

< 201 CREATED
< Location: http://localhost:8080/articles/1
< Content-Type: application/hal+json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Sun, 29 Oct 2017 23:04:43 GMT
< 
{
  "title" : "First Article",
  "content" : null,
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/articles/1"
    },
    "article" : {
      "href" : "http://localhost:8080/articles/1"
    }
  }
}

4.2. Discovering Resources

Now, when we’ve got some resources, let’s find them out. We’re going use the discover command which reveals all available resources at the current URI:

http://localhost:8080/articles:> discover

rel        href                                  
=================================================
self       http://localhost:8080/articles/       
profile    http://localhost:8080/profile/articles
article    http://localhost:8080/articles/1

Being aware of the resource URI, we can fetch it by using the get command:

http://localhost:8080/articles:> get 1

> GET http://localhost:8080/articles/1

< 200 OK
< Content-Type: application/hal+json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Sun, 29 Oct 2017 23:25:36 GMT
< 
{
  "title" : "First Article",
  "content" : null,
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/articles/1"
    },
    "article" : {
      "href" : "http://localhost:8080/articles/1"
    }
  }
}

4.3. Adding Query Parameters

We can specify query parameters as JSON fragments using the --params parameter.

Let’s get an article by the given title:

http://localhost:8080/articles:> get search/findByTitle \
> --params "{title: "First Article"}"

> GET http://localhost:8080/articles/search/findByTitle?title=First+Article

< 200 OK
< Content-Type: application/hal+json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Sun, 29 Oct 2017 23:39:39 GMT
< 
{
  "title" : "First Article",
  "content" : null,
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/articles/1"
    },
    "article" : {
      "href" : "http://localhost:8080/articles/1"
    }
  }
}

4.4. Setting Headers

The command called headers allows managing headers within the session scope; every request will be sent using these headers. The headers set command takes --name and --value arguments to define a header.

We are going to add a few headers and make a request including those headers:

http://localhost:8080/articles:>
  headers set --name Accept --value application/json

{
  "Accept" : "application/json"
}

http://localhost:8080/articles:>
  headers set --name Content-Type --value application/json

{
  "Accept" : "application/json",
  "Content-Type" : "application/json"
}

http://localhost:8080/articles:> get 1

> GET http://localhost:8080/articles/1
> Accept: application/json
> Content-Type: application/json

4.5. Writing Results to a File

It’s not always desirable to print out the results of an HTTP request to the screen. Sometimes, we need to save the results in a file for further analysis. 

The --output parameter allows performing such operations:

http://localhost:8080/articles:> get search/findByTitle \
> --params "{title: "First Article"}" \
> --output first_article.txt

>> first_article.txt

4.6. Reading JSON From a File

Often, JSON data is too large or too complex to be entered through the console using the --data parameter.

Also, there are some limitations on the format of the JSON data we can enter directly into the command line.

The --from parameter gives the possibility of reading data from a file or a directory.

If the value is a directory, the shell will read each file that ends with “.json” and perform a POST or PUT with the content of that file.

If the parameter is a file, then the shell will load the file and POST/PUT data from that file.
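
For example, second_article.txt could contain a JSON fragment such as this (illustrative content):

{
  "title" : "Second Article"
}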

Let’s create the next article from the file second_article.txt:

http://localhost:8080/articles:> post --from second_article.txt

1 files uploaded to the server using POST

4.7. Setting Context Variables

We can also define variables within the current session context. The var command provides get and set operations for reading and setting a variable, respectively.

By analogy with the headers, the --name and --value arguments give the name and the value of a new variable:

http://localhost:8080:> var set --name articlesURI --value articles
http://localhost:8080/articles:> var get --name articlesURI

articles

Now, we’re going to print out a list of currently available variables within the context:

http://localhost:8080:> var list

{
  "articlesURI" : "articles"
}

Having made sure that our variable was saved, we’ll use it with the follow command to switch to the given URI:

http://localhost:8080:> follow #{articlesURI}
http://localhost:8080/articles:> 

4.8. Viewing History

All the paths we visit are recorded. The history command shows these paths in chronological order:

http://localhost:8080:> history list

1: http://localhost:8080/articles
2: http://localhost:8080

Each URI is associated with a number that can be used to go to that URI:

http://localhost:8080:> history go 1
http://localhost:8080/articles:>

5. Conclusion

In this tutorial, we’ve focused on an interesting and rare tool in the Spring ecosystem – a command line tool.

You can find more information about the project over on GitHub.

And, as always, all the code snippets mentioned in the article can be found in our repository.

Deploy Application at Tomcat Root


1. Overview

In this quick article, we’ll discuss deploying a web application at the root of a Tomcat.

2. Tomcat Deployment Basics and Terminology

First, the basics of deploying an application to Tomcat can be found in this guide: How to Deploy a WAR File to Tomcat.

Simply put, web applications are placed under $CATALINA_HOME\webapps, where $CATALINA_HOME is the Tomcat’s installation directory.

The context path refers to the location relative to the server’s address which represents the name of the web application.

By default, Tomcat derives it from the name of the deployed WAR file. So if we deploy a file ExampleApp.war, it will be available at http://localhost:8080/ExampleApp, i.e., the context path is /ExampleApp.

If we now need to have that app available at http://localhost:8080/ instead, we have a few options, which we’ll discuss in the following sections.

For a more detailed explanation of the context concept of Tomcat, have a look at the official Tomcat documentation.

3. Deploying the App as ROOT.war

The first option is very straightforward: we just have to delete the default /ROOT/ folder in $CATALINA_HOME\webapps, rename our ExampleApp.war to ROOT.war, and deploy it.

Our app will now be available at http://localhost:8080/.

4. Specifying the Context Path in the server.xml

The second option is to set the context path of the application in the server.xml (which is located at $CATALINA_HOME\conf).

We must insert the following inside the <Host> tag for that:

<Context path="" docBase="ExampleApp"></Context>

Note: defining the context path manually has the side effect that the application is deployed twice by default: at http://localhost:8080/ExampleApp/ as well as at http://localhost:8080/.

To prevent this, we have to set autoDeploy="false" and deployOnStartup="false" in the <Host> tag:

<Host name="localhost" appBase="webapps" unpackWARs="true"
  autoDeploy="false" deployOnStartup="false">
    <Context path="" docBase="ExampleApp"></Context>

    <!-- Further settings for localhost -->
</Host>

Important: this option has not been recommended since Tomcat 5: it makes context configuration more invasive, because the server.xml file cannot be reloaded without restarting Tomcat.


5. Specifying the Context Path in an App-Specific XML File

To avoid this problem with the server.xml, we’ve got the third option: we’ll set the context path in an application-specific XML file.

Therefore, we have to create a ROOT.xml at $CATALINA_HOME\conf\Catalina\localhost with the following content:

<Context docBase="../deploy/ExampleApp.war"/>

Two points are worth noting here.

First, we don’t have to specify the path explicitly like in the previous option – Tomcat derives that from the name of our ROOT.xml.

And second – since we’re defining our context in a different file than the server.xml, our docBase has to be outside of $CATALINA_HOME/webapps.

6. Conclusion

In this tutorial, we discussed different options of how to deploy a web application at the root of a Tomcat.


JMX Data to the Elastic Stack (ELK)

1. Overview

In this quick tutorial, we’re going to have a look at how to send JMX data from our Tomcat server to the Elastic Stack (formerly known as ELK).

We’ll discuss how to configure Logstash to read data from JMX and send it to Elasticsearch.

2. Install the Elastic Stack

First, we need to install the Elastic Stack (Elasticsearch, Logstash, Kibana).

Then, to make sure everything is connected and working properly, we’ll send the JMX data to Logstash and visualize it over on Kibana.

2.1. Test Logstash

First, we will go to the Logstash installation directory which varies by operating system (in our case Ubuntu):

cd /opt/logstash

We can pass a simple configuration to Logstash from the command line:

bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost:9200"] } }'

Then, we can simply type some sample data in the console – and press CTRL-D to close the pipeline when we’re done.

2.2. Test Elasticsearch

After adding the sample data, a Logstash index should be available on Elasticsearch – which we can check as follows:

curl -X GET 'http://localhost:9200/_cat/indices'

Sample Output:

yellow open logstash-2017.11.10 5 1 3531 0 506.3kb 506.3kb 
yellow open .kibana             1 1    3 0   9.5kb   9.5kb 
yellow open logstash-2017.11.11 5 1 8671 0   1.4mb   1.4mb

2.3. Test Kibana

Kibana runs by default on port 5601 – we can access the homepage at:

http://localhost:5601/app/kibana

We should be able to create a new index with the pattern “logstash-*” – and see our sample data there.

3. Configure Tomcat

Next, we need to enable JMX by adding the following to CATALINA_OPTS:

-Dcom.sun.management.jmxremote
  -Dcom.sun.management.jmxremote.port=9000
  -Dcom.sun.management.jmxremote.ssl=false
  -Dcom.sun.management.jmxremote.authenticate=false

Note that:

  • You can configure CATALINA_OPTS by modifying setenv.sh
  • For Ubuntu users setenv.sh can be found in ‘/usr/share/tomcat8/bin’

4. Connect JMX and Logstash

Now, let’s connect our JMX metrics to Logstash – for which we’ll need to have the JMX input plugin installed there (more on that later).

4.1. Configure JMX Metrics

First we need to configure the JMX metrics we want to stash; we’ll provide the configuration in JSON format.

Here’s our jmx_config.json:

{
  "host" : "localhost",
  "port" : 9000,
  "alias" : "reddit.jmx.elasticsearch",
  "queries" : [
  {
    "object_name" : "java.lang:type=Memory",
    "object_alias" : "Memory"
  }, {
    "object_name" : "java.lang:type=Threading",
    "object_alias" : "Threading"
  }, {
    "object_name" : "java.lang:type=Runtime",
    "attributes" : [ "Uptime", "StartTime" ],
    "object_alias" : "Runtime"
  }]
}

Note that:

  • We used the same JMX port that we configured in CATALINA_OPTS
  • We can provide as many configuration files as we want, but we need them to be in the same directory (in our case, we saved jmx_config.json in ‘/monitor/jmx/’)

4.2. JMX Input Plugin

Next, let’s install the JMX input plugin by running the following command in the Logstash installation directory:

bin/logstash-plugin install logstash-input-jmx

Then, we need to create a Logstash configuration file (jmx.conf), where the input is the JMX metrics and the output is directed to Elasticsearch:

input {
  jmx {
    path => "/monitor/jmx"
    polling_frequency => 60
    type => "jmx"
    nb_thread => 3
  }
}

output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}

Finally, we need to run Logstash and specify our configuration file:

bin/logstash -f jmx.conf

Note that our Logstash configuration file jmx.conf is saved in the Logstash home directory (in our case, /opt/logstash).

5. Visualize JMX Metrics

Finally, let’s create a simple visualization of our JMX metrics data over on Kibana – a basic chart to monitor the heap memory usage.

5.1. Create New Search

First, we’ll create a new search to get metrics related to heap memory usage:

  • Click on the “New Search” icon in the search bar
  • Type the following query
    metric_path:reddit.jmx.elasticsearch.Memory.HeapMemoryUsage.used
  • Press Enter
  • Make sure to add the ‘metric_path‘ and ‘metric_value_number‘ fields from the sidebar
  • Click on the ‘Save Search’ icon in the search bar
  • Name the search ‘used memory’

In case any fields from the sidebar are marked as unindexed, go to the ‘Settings’ tab and refresh the field list in the ‘logstash-*‘ index.

5.2. Create Line Chart

Next, we’ll create a simple line chart to monitor our heap memory usage over time:

  • Go to ‘Visualize’ tab
  • Choose ‘Line Chart’
  • Choose ‘From saved search’
  • Choose ‘used memory’ search that we created earlier

For Y-Axis, make sure to choose:

  • Aggregation: Average
  • Field: metric_value_number

For the X-Axis, choose ‘Date Histogram’ – then save the visualization.

5.3. Use Scripted Field

As the memory usage is in bytes, it’s not very readable. We can convert the metric type and value by adding a scripted field in Kibana:

  • From ‘Settings’, go to indices and choose ‘logstash-*‘ index
  • Go to ‘Scripted fields’ tab and click ‘Add Scripted Field’
  • Name: metric_value_formatted
  • Format: Bytes
  • For Script, we’ll simply use the value of ‘metric_value_number‘:
    doc['metric_value_number'].value

Now, you can change your search and visualization to use the field ‘metric_value_formatted‘ instead of ‘metric_value_number‘ – and the data will be properly displayed.

6. Conclusion

And we’re done. As you can see, the configuration isn’t particularly difficult, and getting the JMX data to be visible in Kibana allows us to do a lot of interesting visualization work to create a fantastic production monitoring dashboard.

REST API Testing with Karate

1. Overview

In this article, we’ll introduce Karate, a Behavior Driven Development (BDD) testing framework for Java.

2. Karate and BDD

Karate is built on top of Cucumber, another BDD testing framework, and shares some of the same concepts. One of these is the use of a Gherkin file, which describes the tested feature. However, unlike Cucumber, tests aren’t written in Java and are fully described in the Gherkin file.

A Gherkin file is saved with the “.feature” extension. It begins with the Feature keyword, followed by the feature name on the same line. It also contains different test scenarios, each beginning with the keyword Scenario and consisting of multiple steps with the keywords Given, When, Then, And, and But.
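
As a quick illustration of that structure, a minimal skeleton (with placeholder names) could look like this:

Feature: Name of the tested feature

Scenario: Name of the scenario
Given some initial state
When an action is performed
Then a result is expected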

More about Cucumber and the Gherkin structure can be found here.

3. Maven Dependencies

To make use of Karate in a Maven project, we need to add the karate-apache dependency to the pom.xml:

<dependency>
    <groupId>com.intuit.karate</groupId>
    <artifactId>karate-apache</artifactId>
    <version>0.6.0</version>
</dependency>

We’ll also need the karate-junit4 dependency to facilitate JUnit testing:

<dependency>
    <groupId>com.intuit.karate</groupId>
    <artifactId>karate-junit4</artifactId>
    <version>0.6.0</version>
</dependency>

4. Creating Tests

We’ll start by writing tests for some common scenarios in a Gherkin Feature file.

4.1. Testing the Status Code

Let’s write a scenario that tests a GET endpoint and checks if it returns a 200 (OK) HTTP status code:

Scenario: Testing valid GET endpoint
Given url 'http://localhost:8080/user/get'
When method GET
Then status 200

Obviously, this works with all possible HTTP status codes.

4.2. Testing the Response

Let’s write another scenario that tests that the REST endpoint returns a specific response:

Scenario: Testing the exact response of a GET endpoint
Given url 'http://localhost:8080/user/get'
When method GET
Then status 200
And match $ == {id:"1234",name:"John Smith"}

The match operation is used for the validation, where ‘$’ represents the response. So the above scenario checks that the response exactly matches ‘{id:"1234",name:"John Smith"}’.

We can also check specifically for the value of the id field:

And match $.id == "1234"

The match operation can also be used to check if the response contains certain fields. This is helpful when only certain fields need to be checked or when not all response fields are known:

Scenario: Testing that GET response contains specific field
Given url 'http://localhost:8080/user/get'
When method GET
Then status 200
And match $ contains {id:"1234"}

4.3. Validating Response Values with Markers

In the case where we don’t know the exact value that is returned, we can still validate the value using markers — placeholders for matching fields in the response.

For example, we can use a marker to indicate whether we expect a null value or not:

  • #null
  • #notnull

Or we can use a marker to match a certain type of value in a field:

  • #boolean
  • #number
  • #string

Other markers are available for when we expect a field to contain a JSON object or array:

  • #array
  • #object

And there are markers for matching on a certain format or regular expression, as well as one that evaluates a boolean expression:

  • #uuid — value conforms to the UUID format
  • #regex STR — value matches the regular expression STR
  • #? EXPR — asserts that the JavaScript expression EXPR evaluates to true

Finally, if we don’t want any kind of check on a field, we can use the #ignore marker.

Let’s rewrite the above scenario to check that the id field is not null:

Scenario: Test GET request exact response
Given url 'http://localhost:8080/user/get'
When method GET
Then status 200
And match $ == {id:"#notnull",name:"John Smith"}

4.4. Testing a POST Endpoint with a Request Body

Let’s look at a final scenario that tests a POST endpoint and takes a request body:

Scenario: Testing a POST endpoint with request body
Given url 'http://localhost:8080/user/create'
And request { id: '1234' , name: 'John Smith'}
When method POST
Then status 200
And match $ contains {id:"#notnull"}

5. Running Tests

Now that the test scenarios are complete, we can run our tests by integrating Karate with JUnit.

We’ll use the @CucumberOptions annotation to specify the exact location of the Feature files:

@RunWith(Karate.class)
@CucumberOptions(features = "classpath:karate")
public class KarateUnitTest {
//...     
}

To demonstrate the REST API, we’ll use a WireMock server. 

For this example, we mock all the endpoints that are being tested in the method annotated with @BeforeClass. We’ll shut down the WireMock server in the method annotated with @AfterClass:

private static WireMockServer wireMockServer
  = new WireMockServer();

@BeforeClass
public static void setUp() throws Exception {
    wireMockServer.start();
    configureFor("localhost", 8080);
    stubFor(
      get(urlEqualTo("/user/get"))
        .willReturn(aResponse()
          .withStatus(200)
          .withHeader("Content-Type", "application/json")
          .withBody("{ \"id\": \"1234\", \"name\": \"John Smith\" }")));

    stubFor(
      post(urlEqualTo("/user/create"))
        .withHeader("content-type", equalTo("application/json"))
        .withRequestBody(containing("id"))
        .willReturn(aResponse()
          .withStatus(200)
          .withHeader("Content-Type", "application/json")
          .withBody("{ \"id\": \"1234\", \"name\": \"John Smith\" }")));

}

@AfterClass
public static void tearDown() throws Exception {
    wireMockServer.stop();
}

When we run the KarateUnitTest class, the REST Endpoints are created by the WireMock Server, and all the scenarios in the specified feature file are run.

6. Conclusion

In this tutorial, we looked at how to test REST APIs using the Karate Testing Framework.

Complete source code and all code snippets for this article can be found over on GitHub.

Java Weekly, Issue 203

Here we go…

1. Spring and Java

>> Elegant delegates in Kotlin [blog.codecentric.de]

Kotlin has many powerful features that should be used with extra care – and delegation is one of them.

>> 10 Common Hibernate Mistakes That Cripple Your Performance [thoughts-on-java.org]

If you’re working with Hibernate, these are definitely good things to keep in mind.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Building a Microservices Ecosystem with Kafka Streams and KSQL [confluent.io]

A comprehensive guide to piecing together a microservice-based system making good use of Kafka Streams and KSQL.

>> Startup Mistakes: Choice of Datastore [stavros.io]

Adopting trendy technologies without evaluating their pros and cons doesn’t end well.

>> Grafana vs. Kibana: How to Get the Most Out of Your Data Visualization [blog.takipi.com]

A quick comparison for two fantastic tools, both doing data visualization well.

Also worth reading:

3. Musings

>> Is Object-Oriented Programming compatible with an enterprise context? [blog.frankel.ch]

It’s surely doable but migrating to the OOP-compatible design has its price.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Wally Is Working If You Don’t See Him [dilbert.com]

>> Traffic App [dilbert.com]

>> Wally’s Watch is a Snitch [dilbert.com]

5. Pick of the Week

>> How Do You Focus? [m.signalvnoise.com]

The Java continue and break Keywords

1. Overview

In this quick article, we’ll introduce the continue and break Java keywords and focus on how to use them in practice.

Simply put, executing one of these statements causes the current control flow to branch and terminates the execution of the remaining code in the current iteration.

2. The break Statement

The break statement comes in two forms: unlabeled and labeled.

2.1. Unlabeled break

We can use the unlabeled statement to terminate a for, while or do-while loop as well as the switch-case block:

for (int i = 0; i < 5; i++) {
    if (i == 3) {
        break;
    }
}

This snippet defines a for loop that is supposed to iterate five times. But when i equals 3, the if condition becomes true and the break statement terminates the loop. This causes the control flow to be transferred to the statement that follows the for loop.

In case of nested loops, an unlabeled break statement only terminates the inner loop that it’s in. Outer loops continue execution:

for (int rowNum = 0; rowNum < 3; rowNum++) {
    for (int colNum = 0; colNum < 4; colNum++) {
        if (colNum == 3) {
            break;
        }
    }
}

This snippet has nested for loops. When colNum equals 3, the if condition evaluates to true and the break statement causes the inner for loop to terminate. However, the outer for loop continues iterating.

2.2. Labeled break

We can also use a labeled break statement to terminate a for, while or do-while loop. A labeled break terminates the loop marked with the label – not just the innermost one.

Upon termination, the control flow is transferred to the statement immediately after the end of the outer loop:

compare: 
for (int rowNum = 0; rowNum < 3; rowNum++) {
    for (int colNum = 0; colNum < 4; colNum++) {
        if (rowNum == 1 && colNum == 3) {
            break compare;
        }
    }
}

In this example, we introduced a label just before the outer loop. When rowNum equals 1 and colNum equals 3, the if condition evaluates to true and the break statement terminates the outer loop.

The control flow is then transferred to the statement following the end of outer for loop.

3. The continue Statement

The continue statement also comes in two forms: unlabeled and labeled.

3.1. Unlabeled continue

We can use an unlabeled statement to bypass the execution of the rest of the statements in the current iteration of a for, while or do-while loop. It skips the remainder of the current iteration and continues with the next one:

int counter = 0;
for (int rowNum = 0; rowNum < 3; rowNum++) {
    for (int colNum = 0; colNum < 4; colNum++) {
        if (colNum != 3) {
            continue;
        }
        counter++;
    }
}

In this snippet, whenever colNum is not equal to 3, the unlabeled continue statement skips the current iteration, thus bypassing the increment of the variable counter in that iteration. However, the outer for loop continues to iterate. So, the increment of counter happens only when colNum equals 3 in each iteration of the outer for loop.
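
To make the effect concrete, we can check the final value: the outer loop runs three times, and counter is incremented exactly once per outer iteration (a quick sketch using JUnit's assertEquals):

// colNum reaches 3 exactly once in each of the 3 outer iterations
assertEquals(3, counter);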

3.2. Labeled continue

We can also use a labeled continue statement that skips the current iteration of the labeled (outer) loop. Control is transferred to the end of the outer loop’s body, which then continues with its next iteration:

int counter = 0;
compare: 
for (int rowNum = 0; rowNum < 3; rowNum++) {
    for (int colNum = 0; colNum < 4; colNum++) {
        if (colNum == 3) {
            counter++;
            continue compare;
        }
    }
}

We introduced a label just before the outer loop. Whenever colNum equals 3, the variable counter is incremented. The labeled continue statement then skips the rest of the current iteration of the outer for loop.

The control flow is transferred to the end of the outer for loop, which continues with the next iteration.

4. Conclusion

In this tutorial, we’ve seen different ways of using the keywords break and continue as branching statements in Java.

The complete code presented in this article is available over on GitHub.

Lazy Verification with Mockito 2

1. Introduction

In this short tutorial, we’ll look at lazy verifications in Mockito 2.

Instead of failing fast, Mockito allows us to collect all verification failures and have them reported together at the end of a test.

2. Maven Dependencies

Let’s start by adding the Mockito 2 dependency:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>2.12.0</version>
</dependency>

3. Lazy Verification

The default behavior of Mockito is to stop at the first failure, i.e., eagerly – this approach is also known as fail-fast.

Sometimes we might need to execute and report all verifications – regardless of previous failures.

VerificationCollector is a JUnit rule which collects all verifications in test methods.

They’re executed and reported at the end of the test if there are failures:

public class LazyVerificationTest {
 
    @Rule
    public VerificationCollector verificationCollector = MockitoJUnit.collector();

    // ...
}

Let’s add a simple test:

@Test
public void testLazyVerification() throws Exception {
    List mockList = mock(ArrayList.class);
    
    verify(mockList).add("one");
    verify(mockList).clear();
}

When this test is executed, failures of both verifications will be reported:

org.mockito.exceptions.base.MockitoAssertionError: There were multiple verification failures:
1. Wanted but not invoked:
arrayList.add("one");
-> at com.baeldung.mockito.java8.LazyVerificationTest.testLazyVerification(LazyVerificationTest.java:21)
Actually, there were zero interactions with this mock.

2. Wanted but not invoked:
arrayList.clear();
-> at com.baeldung.mockito.java8.LazyVerificationTest.testLazyVerification(LazyVerificationTest.java:22)
Actually, there were zero interactions with this mock.

Without VerificationCollector rule, only the first verification gets reported:

Wanted but not invoked:
arrayList.add("one");
-> at com.baeldung.mockito.java8.LazyVerificationTest.testLazyVerification(LazyVerificationTest.java:19)
Actually, there were zero interactions with this mock.

4. Conclusion

We had a quick look at how we can use lazy verification in Mockito 2.

Also, as always, code samples can be found over on GitHub.

An Example of Backward Chaining in Drools

1. Overview

In this article, we’ll see what Backward Chaining is and how we can use it with Drools.

This article is a part of a series showcasing the Drools Business Rules Engine.

2. Maven Dependencies

Let’s start by importing the drools-core dependency:

<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-core</artifactId>
    <version>7.4.1.Final</version>
</dependency>

3. Forward Chaining

First of all, with forward chaining, we start by analyzing data and make our way towards a particular conclusion.

An example of applying forward chaining would be a system that discovers new routes by inspecting already known connections between nodes.

4. Backward Chaining

As opposed to forward chaining, backward chaining starts directly with the conclusion (hypothesis) and validates it by backtracking through a sequence of facts.

When comparing forward chaining and backward chaining, the former can be described as “data-driven” (data as input), while the latter can be described as “goal-driven” (goals as input).

An example of applying backward chaining would be to validate if there’s a route connecting two nodes.

5. Drools Backward Chaining

The Drools project was created primarily as a forward chaining system. But, starting with version 5.2.0, it supports backward chaining as well.

Let’s create a simple application and try to validate a simple hypothesis – if the Great Wall of China is on Planet Earth.

5.1. The Data

Let’s create a simple fact base describing things and their locations:

  1. Planet Earth
  2. Asia, Planet Earth
  3. China, Asia
  4. Great Wall of China, China

5.2. Defining Rules

Now, let’s create a “.drl” file called BackwardChaining.drl which we’ll place in /resources/com/baeldung/drools/rules/. This will contain all necessary queries and rules to be used in the example.

The main belongsTo query, which will utilize backward chaining, can be written as:

query belongsTo(String x, String y)
    Fact(x, y;)
    or
    (Fact(z, y;) and belongsTo(x, z;))
end

Additionally, let’s add two rules that will make it possible to review our results easily:

rule "Great Wall of China BELONGS TO Planet Earth"
when
    belongsTo("Great Wall of China", "Planet Earth";)
then
    result.setValue("Decision one taken: Great Wall of China BELONGS TO Planet Earth");
end

rule "print all facts"
when
    belongsTo(element, place;)
then
    result.addFact(element + " IS ELEMENT OF " + place);
end

5.3. Creating the Application

Now, we’ll need a Java class for representing facts:

public class Fact {
 
    @Position(0)
    private String element;

    @Position(1)
    private String place;

    // getters, setters, constructors, and other methods ...
}

Here we use the @Position annotation to tell the application in which order Drools will supply values for those attributes.
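
For example, given these positions, creating a fact follows the same order as the positional patterns in the query – a quick sketch:

// element = "China", place = "Asia", matching the @Position order
Fact fact = new Fact("China", "Asia");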

Also, we’ll create the POJO representing results:

public class Result {
    private String value;
    private List<String> facts = new ArrayList<>();
 
    //... getters, setters, constructors, and other methods
}

And now, we can run the example:

public class BackwardChainingTest {

    @Before
    public void before() {
        result = new Result();
        ksession = new DroolsBeanFactory().getKieSession();
    }

    @Test
    public void whenWallOfChinaIsGiven_ThenItBelongsToPlanetEarth() {

        ksession.setGlobal("result", result);
        ksession.insert(new Fact("Asia", "Planet Earth"));
        ksession.insert(new Fact("China", "Asia"));
        ksession.insert(new Fact("Great Wall of China", "China"));

        ksession.fireAllRules();
        
        assertEquals(
          result.getValue(),
          "Decision one taken: Great Wall of China BELONGS TO Planet Earth");
    }
}

When the test cases are executed, they add the given facts (“Asia belongs to Planet Earth”, “China belongs to Asia”, “Great Wall of China belongs to China”).

After that, the facts are processed with the rules described in BackwardChaining.drl, which provides a recursive query belongsTo(String x, String y). 

This query is invoked by the rules which use backward chaining to find out whether the hypothesis (“Great Wall of China BELONGS TO Planet Earth”) is true or false.

6. Conclusion

We’ve shown an overview of Backward Chaining, a feature of Drools used to retrieve a list of facts to validate if a decision is true.

As always, the full example can be found in our GitHub repository.

Introduction to Spring Cloud Stream

1. Overview

Spring Cloud Stream is a framework built on top of Spring Boot and Spring Integration that helps in creating event-driven or message-driven microservices.

In this article, we’ll introduce concepts and constructs of Spring Cloud Stream with some simple examples.

2. Maven Dependencies

To get started, we’ll need to add the Spring Cloud Starter Stream RabbitMQ Maven dependency to our pom.xml, which brings in RabbitMQ as the messaging middleware:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
    <version>1.3.0.RELEASE</version>
</dependency>

And we’ll add the spring-cloud-stream-test-support module from Maven Central to enable JUnit testing support as well:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-test-support</artifactId>
    <version>1.3.0.RELEASE</version>
    <scope>test</scope>
</dependency>

3. Main Concepts

Microservices architecture follows the “smart endpoints and dumb pipes” principle. Communication between endpoints is driven by messaging middleware like RabbitMQ or Apache Kafka. Services communicate by publishing domain events via these endpoints or channels.

Let’s walk through the concepts that make up the Spring Cloud Stream framework, along with the essential paradigms that we must be aware of to build message-driven services.

3.1. Constructs

Let’s look at a simple service in Spring Cloud Stream that listens to input binding and sends a response to the output binding:

@SpringBootApplication
@EnableBinding(Processor.class)
public class MyLoggerServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyLoggerServiceApplication.class, args);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public LogMessage enrichLogMessage(LogMessage log) {
        return new LogMessage(String.format("[1]: %s", log.getMessage()));
    }
}

The annotation @EnableBinding configures the application to bind the channels INPUT and OUTPUT defined within the interface Processor. Both channels are bindings that can be configured to use a concrete messaging-middleware or binder.

Let’s take a look at the definition of all these concepts:

  • Bindings — a collection of interfaces that identify the input and output channels declaratively
  • Binder — messaging-middleware implementation such as Kafka or RabbitMQ
  • Channel — represents the communication pipe between messaging-middleware and the application
  • StreamListeners — message-handling methods in beans that will be automatically invoked on a message from the channel after the MessageConverter does the serialization/deserialization between middleware-specific events and domain object types / POJOs
  • Message Schemas — used for serialization and deserialization of messages, these schemas can be statically read from a location or loaded dynamically, supporting the evolution of domain object types

3.2. Communication Patterns

Messages designated to destinations are delivered by the Publish-Subscribe messaging pattern. Publishers categorize messages into topics, each identified by a name. Subscribers express interest in one or more topics. The middleware filters the messages, delivering those of the topics of interest to the subscribers.

Now, the subscribers could be grouped. A consumer group is a set of subscribers or consumers, identified by a group id, within which messages from a topic or topic’s partition are delivered in a load-balanced manner.

4. Programming Model

This section describes the basics of building Spring Cloud Stream applications.

4.1. Functional Testing

The test support is a binder implementation that allows interacting with the channels and inspecting messages.

Let’s send a message to the above enrichLogMessage service and check whether the response contains the text “[1]: “ at the beginning of the message:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = MyLoggerServiceApplication.class)
@DirtiesContext
public class MyLoggerApplicationTests {

    @Autowired
    private Processor pipe;

    @Autowired
    private MessageCollector messageCollector;

    @Test
    public void whenSendMessage_thenResponseShouldUpdateText() {
        pipe.input()
          .send(MessageBuilder.withPayload(new LogMessage("This is my message"))
          .build());

        Object payload = messageCollector.forChannel(pipe.output())
          .poll()
          .getPayload();

        assertEquals("[1]: This is my message", payload.toString());
    }
}

4.2. Custom Channels

In the above example, we used the Processor interface provided by Spring Cloud, which has only one input and one output channel.

If we need something different, like one input and two output channels, we can create a custom processor:

public interface MyProcessor {
    String INPUT = "myInput";

    @Input
    SubscribableChannel myInput();

    @Output("myOutput")
    MessageChannel anOutput();

    @Output
    MessageChannel anotherOutput();
}

Spring will provide the proper implementation of this interface for us. The channel names can be set using annotations like in @Output(“myOutput”).

Otherwise, Spring will use the method names as the channel names. Therefore, we’ve got three channels called myInput, myOutput, and anotherOutput.

Now, let’s imagine we want to route the messages to one output if the value is less than 10 and to another output if the value is greater than or equal to 10:

@Autowired
private MyProcessor processor;

@StreamListener(MyProcessor.INPUT)
public void routeValues(Integer val) {
    if (val < 10) {
        processor.anOutput().send(message(val));
    } else {
        processor.anotherOutput().send(message(val));
    }
}

private static final <T> Message<T> message(T val) {
    return MessageBuilder.withPayload(val).build();
}

4.3. Conditional Dispatching

Using the @StreamListener annotation, we can also filter the messages we expect in the consumer using any condition that we define with SpEL expressions.

As an example, we could use conditional dispatching as another approach to route messages into different outputs:

@Autowired
private MyProcessor processor;

@StreamListener(
  target = MyProcessor.INPUT, 
  condition = "payload < 10")
public void routeValuesToAnOutput(Integer val) {
    processor.anOutput().send(message(val));
}

@StreamListener(
  target = MyProcessor.INPUT, 
  condition = "payload >= 10")
public void routeValuesToAnotherOutput(Integer val) {
    processor.anotherOutput().send(message(val));
}

The only limitation of this approach is that these methods must not return a value.

5. Setup

Let’s set up the application that will process the message from the RabbitMQ broker.

5.1. Binder Configuration

We can configure our application to use the default binder implementation via META-INF/spring.binders:

rabbit:\
org.springframework.cloud.stream.binder.rabbit.config.RabbitMessageChannelBinderConfiguration

Or we can add the binder library for RabbitMQ to the classpath by including this dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
    <version>1.3.0.RELEASE</version>
</dependency>

If no binder implementation is provided, Spring will use direct message communication between the channels.

5.2. RabbitMQ Configuration

To configure the example in section 3.1 to use the RabbitMQ binder, we need to update the application.yml located at src/main/resources:

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: queue.log.messages
          binder: local_rabbit
        output:
          destination: queue.pretty.log.messages
          binder: local_rabbit
      binders:
        local_rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host>
                port: 5672
                username: <username>
                password: <password>
                virtual-host: /

The input binding will use the exchange called queue.log.messages, and the output binding will use the exchange queue.pretty.log.messages. Both bindings will use the binder called local_rabbit.

Note that we don’t need to create the RabbitMQ exchanges or queues in advance. When running the application, both exchanges are automatically created.

To test the application, we can use the RabbitMQ management site to publish a message. In the Publish Message panel of the exchange queue.log.messages, we need to enter the request in JSON format.

5.3. Customizing Message Conversion

Spring Cloud Stream allows us to apply message conversion for specific content types. In the above example, instead of using JSON format, we want to provide plain text.

To do this, we’ll apply a custom transformation to LogMessage using a MessageConverter:

@SpringBootApplication
@EnableBinding(Processor.class)
public class MyLoggerServiceApplication {
    //...

    @Bean
    public MessageConverter providesTextPlainMessageConverter() {
        return new TextPlainMessageConverter();
    }

    //...
}

public class TextPlainMessageConverter extends AbstractMessageConverter {

    public TextPlainMessageConverter() {
        super(new MimeType("text", "plain"));
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return (LogMessage.class == clazz);
    }

    @Override
    protected Object convertFromInternal(Message<?> message, 
        Class<?> targetClass, Object conversionHint) {
        Object payload = message.getPayload();
        String text = payload instanceof String 
          ? (String) payload 
          : new String((byte[]) payload);
        return new LogMessage(text);
    }
}

After applying these changes, going back to the Publish Message panel, if we set the header “contentTypes” to “text/plain” and the payload to “Hello World“, it should work as before.

5.4. Consumer Groups

When running multiple instances of our application, every time there is a new message in an input channel, all subscribers will be notified.

Most of the time, we need the message to be processed only once. Spring Cloud Stream implements this behavior via consumer groups.

To enable this behavior, each consumer binding can use the spring.cloud.stream.bindings.<CHANNEL>.group property to specify a group name:

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: queue.log.messages
          binder: local_rabbit
          group: logMessageConsumers
          ...

6. Message-Driven Microservices

In this section, we introduce all the required features for running our Spring Cloud Stream applications in a microservices context.

6.1. Scaling Up

When multiple applications are running, it’s important to ensure the data is split properly across consumers. To do so, Spring Cloud Stream provides two properties:

  • spring.cloud.stream.instanceCount — number of running applications
  • spring.cloud.stream.instanceIndex — index of the current application

For example, if we’ve deployed two instances of the above MyLoggerServiceApplication application, the property spring.cloud.stream.instanceCount should be 2 for both applications, and the property spring.cloud.stream.instanceIndex should be 0 and 1 respectively.
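
As a minimal sketch, the first of the two instances could declare these properties in its application.yml like this (the second instance would use instanceIndex: 1):

spring:
  cloud:
    stream:
      instanceCount: 2
      instanceIndex: 0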

These properties are automatically set if we deploy the Spring Cloud Stream applications using Spring Data Flow as described in this article.

6.2. Partitioning

Domain events can be sent as partitioned messages. Partitioning helps when we’re scaling up the storage and improving application performance.

The domain event usually has a partition key so that it ends up in the same partition as related messages.

Let’s say that we want the log messages to be partitioned by the first letter in the message, which would be the partition key, and grouped into two partitions.

There would be one partition for the log messages that start with A-M and another partition for N-Z. This can be configured using two properties:

  • spring.cloud.stream.bindings.output.producer.partitionKeyExpression — the expression to partition the payloads
  • spring.cloud.stream.bindings.output.producer.partitionCount — the number of groups
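
For instance, here’s a minimal sketch wiring these two properties together for our A-M / N-Z example (the exact SpEL expression depends on the payload type – here we assume a plain String payload and the default mapping of keys onto the two partitions):

spring:
  cloud:
    stream:
      bindings:
        output:
          producer:
            partitionKeyExpression: payload.substring(0,1)
            partitionCount: 2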

Sometimes the partitioning expression is too complex to write in a single line. For these cases, we can write a custom partition strategy using the property spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass.

6.3. Health Indicator

In a microservices context, we also need to detect when a service is down or starts failing. Spring Cloud Stream provides the property management.health.binders.enabled to enable the health indicators for binders.

When running the application, we can query the health status at http://<host>:<port>/health.

7. Conclusion

In this tutorial, we presented the main concepts of Spring Cloud Stream and showed how to use it through some simple examples over RabbitMQ. More info about Spring Cloud Stream can be found here.

The source code for this article can be found over on GitHub.


A Guide to Google-Http-Client

1. Overview

In this article, we’ll have a look at the Google HTTP Client Library for Java, which is a fast, well-abstracted library for accessing any resources via the HTTP connection protocol.

The main features of the client are:

  • an HTTP abstraction layer that lets us decouple from any particular low-level library
  • fast, efficient, and flexible JSON and XML parsing models of the HTTP response and request content
  • easy-to-use annotations and abstractions for HTTP resource mappings

The library can also be used in Java 5 and above, making it a viable choice for legacy (SE and EE) projects.

In this article, we’re going to develop a simple application that will connect to the GitHub API and retrieve users, while covering some of the most interesting features of the library.

2. Maven Dependencies

To use the library we’ll need the google-http-client dependency:

<dependency>
    <groupId>com.google.http-client</groupId>
    <artifactId>google-http-client</artifactId>
    <version>1.23.0</version>
</dependency>

The latest version can be found at Maven Central.

3. Making a Simple Request

Let’s start by making a simple GET request to the GitHub page to showcase how the Google Http Client works out of the box:

HttpRequestFactory requestFactory
  = new NetHttpTransport().createRequestFactory();
HttpRequest request = requestFactory.buildGetRequest(
  new GenericUrl("https://github.com"));
String rawResponse = request.execute().parseAsString();

To make the simplest of requests, we’ll need at least:

  • HttpRequestFactory – used to build our requests
  • HttpTransport – an abstraction of the low-level HTTP transport layer
  • GenericUrl – a class that wraps the URL
  • HttpRequest – handles the actual execution of the request

We’ll go through all these and a more complex example with an actual API that returns a JSON format in the following sections.

4. Pluggable HTTP Transport

The library has a well-abstracted HttpTransport class that allows us to build on top of it and swap in the underlying low-level HTTP transport library of our choice:

public class GitHubExample {
    static HttpTransport HTTP_TRANSPORT = new NetHttpTransport();
}

In this example, we’re using the NetHttpTransport, which is based on the HttpURLConnection that is found in all Java SDKs. This is a good starting choice since it’s well-known and reliable.

Of course, there might be cases where we need some advanced customization, and thus the requirement of a more complex low-level library.

For such cases, there is the ApacheHttpTransport:

public class GitHubExample {
    static HttpTransport HTTP_TRANSPORT = new ApacheHttpTransport();
}

The ApacheHttpTransport is based on the popular Apache HttpClient which includes a wide variety of choices to configure connections.

Additionally, the library provides the option to build our own low-level implementation, making it very flexible.

5. JSON Parsing

The Google Http Client includes another abstraction for JSON parsing. A major advantage of this is that the choice of low-level parsing library is interchangeable.

There are three built-in choices, all of which extend JsonFactory, and we also have the possibility of implementing our own.

5.1. Interchangeable Parsing library

In our example, we’re going to use the Jackson2 implementation, which requires the google-http-client-jackson2 dependency:

<dependency>
    <groupId>com.google.http-client</groupId>
    <artifactId>google-http-client-jackson2</artifactId>
    <version>1.23.0</version>
</dependency>

Following this, we can now include the JsonFactory:

public class GitHubExample {

    static HttpTransport HTTP_TRANSPORT = new NetHttpTransport();
    static JsonFactory JSON_FACTORY = new JacksonFactory();
}

The JacksonFactory is the fastest and most popular library for parsing/serialization operations. 

This comes at the cost of the library size (which could be a concern in certain situations). For this reason, Google also provides the GsonFactory, which is an implementation of the Google GSON library, a light-weight JSON parsing library.
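
Swapping implementations is then a one-line change. Here’s a quick sketch, assuming the google-http-client-gson artifact is on the classpath:

static JsonFactory JSON_FACTORY = new GsonFactory();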

There is also the possibility of writing our own low-level parser implementation.

5.2. The @Key annotation

We can use the @Key annotation to indicate fields that need to be parsed from or serialized to JSON:

public class User {
 
    @Key
    private String login;
    @Key
    private long id;
    @Key("email")
    private String email;

    // standard getters and setters
}

Here we’re making a User abstraction, which we receive in batch from the GitHub API (we will get to the actual parsing later in this article).

Please note that fields that don’t have the @Key annotation are considered internal and are not parsed from or serialized to JSON. Also, the visibility of the fields does not matter, nor does the existence of the getter or setter methods.
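
As a quick illustration, a hypothetical extra field without the annotation is simply skipped during parsing and serialization:

public class User {
 
    @Key
    private String login;

    // no @Key – considered internal, never parsed from or serialized to JSON
    private String cachedDisplayName;

    // standard getters and setters
}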

We can specify the value of the @Key annotation to map the field to the correct JSON key.

5.3. GenericJson

Only the fields we declare and mark with @Key are parsed.

To retain the other content, we can declare our class to extend GenericJson:

public class User extends GenericJson {
    //...
}

GenericJson implements the Map interface, which means we can use the get and put methods to set/get JSON content in the request/response.
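
For instance, if the API returned an extra key that we didn’t declare – say a hypothetical company field – we could still read it from the parsed object, or add one before serializing:

// 'user' is a User instance parsed from a response, as shown in the next section
Object company = user.get("company");  // read an undeclared key
user.put("company", "Baeldung");       // set arbitrary JSON content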

6. Making the Call

To connect to an endpoint with the Google Http Client, we’ll need an HttpRequestFactory, which will be configured with our previous abstractions HttpTransport and JsonFactory:

public class GitHubExample {

    static HttpTransport HTTP_TRANSPORT = new NetHttpTransport();
    static JsonFactory JSON_FACTORY = new JacksonFactory();

    private static void run() throws Exception {
        HttpRequestFactory requestFactory 
          = HTTP_TRANSPORT.createRequestFactory(
            (HttpRequest request) -> {
              request.setParser(new JsonObjectParser(JSON_FACTORY));
          });
    }
}

The next thing we’re going to need is a URL to connect to. The library handles this with a class extending GenericUrl, on which any declared field is treated as a query parameter:

public class GitHubUrl extends GenericUrl {

    public GitHubUrl(String encodedUrl) {
        super(encodedUrl);
    }

    @Key
    public int per_page;
 
}

Here in our GitHubUrl, we declare the per_page property to indicate how many users we want in a single call to the GitHub API.

Let’s continue building our call using the GitHubUrl:

private static void run() throws Exception {
    HttpRequestFactory requestFactory
      = HTTP_TRANSPORT.createRequestFactory(
        (HttpRequest request) -> {
          request.setParser(new JsonObjectParser(JSON_FACTORY));
        });
    GitHubUrl url = new GitHubUrl("https://api.github.com/users");
    url.per_page = 10;
    HttpRequest request = requestFactory.buildGetRequest(url);
    Type type = new TypeToken<List<User>>() {}.getType();
    List<User> users = (List<User>)request
      .execute()
      .parseAs(type);
}

Notice how we specify how many users we’ll need for the API call, and then we build the request with the HttpRequestFactory.

Following this, since the GitHub API’s response contains a list of users, we need to provide a complex Type, which is a List<User>.

Then, on the last line, we make the call and parse the response to a list of our User class.

7. Custom Headers

One thing we usually do when making an API request is to include some kind of custom header or even a modified one:

HttpHeaders headers = request.getHeaders();
headers.setUserAgent("Baeldung Client");
headers.set("Time-Zone", "Europe/Amsterdam");

We do this by getting the HttpHeaders after we’ve created our request but before executing it and adding the necessary values.

Please be aware that the Google Http Client handles some headers via dedicated methods. For example, if we try to include the User-Agent header with just the set method, it would throw an error.

8. Exponential Backoff

Another important feature of the Google Http Client is the possibility to retry requests based on certain status codes and thresholds.

We can include our exponential backoff settings right after we’ve created our request object:

ExponentialBackOff backoff = new ExponentialBackOff.Builder()
  .setInitialIntervalMillis(500)
  .setMaxElapsedTimeMillis(900000)
  .setMaxIntervalMillis(6000)
  .setMultiplier(1.5)
  .setRandomizationFactor(0.5)
  .build();
request.setUnsuccessfulResponseHandler(
  new HttpBackOffUnsuccessfulResponseHandler(backoff));

Exponential Backoff is turned off by default in HttpRequest, so we must set an instance of HttpUnsuccessfulResponseHandler on the HttpRequest to activate it.

9. Logging

The Google Http Client uses java.util.logging.Logger for logging HTTP request and response details, including URL, headers, and content.

Commonly, logging is managed using a logging.properties file:

handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = ALL
com.google.api.client.http.level = ALL

In our example we use ConsoleHandler, but it’s also possible to choose the FileHandler.

The properties file configures the operation of the JDK logging facility. This config file can be specified as a system property:

-Djava.util.logging.config.file=logging.properties

So after setting the file and system property, the library will produce a log like the following:

-------------- REQUEST  --------------
GET https://api.github.com/users?page=1&per_page=10
Accept-Encoding: gzip
User-Agent: Google-HTTP-Java-Client/1.23.0 (gzip)

Nov 12, 2017 6:43:15 PM com.google.api.client.http.HttpRequest execute
curl -v --compressed -H 'Accept-Encoding: gzip' -H 'User-Agent: Google-HTTP-Java-Client/1.23.0 (gzip)' -- 'https://api.github.com/users?page=1&per_page=10'
Nov 12, 2017 6:43:16 PM com.google.api.client.http.HttpResponse 
-------------- RESPONSE --------------
HTTP/1.1 200 OK
Status: 200 OK
Transfer-Encoding: chunked
Server: GitHub.com
Access-Control-Allow-Origin: *
...
Link: <https://api.github.com/users?page=1&per_page=10&since=19>; rel="next", <https://api.github.com/users{?since}>; rel="first"
X-GitHub-Request-Id: 8D6A:1B54F:3377D97:3E37B36:5A08DC93
Content-Type: application/json; charset=utf-8
...

10. Conclusion

In this tutorial, we’ve shown the Google HTTP Client Library for Java and its most useful features. Their GitHub repository contains more information about it, as well as the source code of the library.

As always, the full source code of this tutorial is available over on GitHub.

Spring Security 5 for Reactive Applications

1. Introduction

In this article, we’ll explore new features of the Spring Security 5 framework for securing reactive applications. This release is aligned with Spring 5 and Spring Boot 2.

In this article, we won’t go into details about the reactive applications themselves, which is a new feature of the Spring 5 framework. Be sure to check out the article Intro to Reactor Core for more details.

2. Maven Setup

We’ll use Spring Boot starters to bootstrap our project together with all required dependencies.

The basic setup requires a parent declaration, web starter, and security starter dependencies. We’ll also need the Spring Security test framework:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.M6</version>
    <relativePath/>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

At the time of this writing, the latest version of Spring Security 5 is in the release candidate state. The Spring Boot library, which supports it, is in the milestone state.

So we’ll need to provide the milestone repository for Maven setup:

<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>

We can check out the current version of Spring Boot security starter over at Maven Central.

3. Project Setup

3.1. Bootstrapping the Reactive Application

We won’t use the standard @SpringBootApplication configuration but instead, configure a Netty-based web server. Netty is an asynchronous NIO-based framework which is a good foundation for reactive applications.

The @EnableWebFlux annotation enables the standard Spring Web Reactive configuration for the application:

@ComponentScan(basePackages = {"com.baeldung.security"})
@EnableWebFlux
public class SpringSecurity5Application {

    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext context 
         = new AnnotationConfigApplicationContext(
            SpringSecurity5Application.class)) {
 
            context.getBean(NettyContext.class).onClose().block();
        }
    }
}

Here, we create a new application context and wait for Netty to shut down by calling the .onClose().block() chain on the Netty context.

After Netty is shut down, the context will be automatically closed using the try-with-resources block.

We’ll also need to create a Netty-based HTTP server, a handler for the HTTP requests, and the adapter between the server and the handler:

@Bean
public NettyContext nettyContext(ApplicationContext context) {
    HttpHandler handler = WebHttpHandlerBuilder
      .applicationContext(context).build();
    ReactorHttpHandlerAdapter adapter 
      = new ReactorHttpHandlerAdapter(handler);
    HttpServer httpServer = HttpServer.create("localhost", 8080);
    return httpServer.newHandler(adapter).block();
}

3.2. Spring Security Configuration Class

For our basic Spring Security configuration, we’ll create a configuration class – SecurityConfig.

To enable WebFlux support in Spring Security 5, we only need to specify the @EnableWebFluxSecurity annotation:

@EnableWebFluxSecurity
public class SecurityConfig {
    // ...
}

Now we can take advantage of the class ServerHttpSecurity to build our security configuration.

This class is a new feature of Spring 5. It’s similar to the HttpSecurity builder, but it’s only enabled for WebFlux applications.

The ServerHttpSecurity is already preconfigured with some sane defaults, so we could skip this configuration completely. But for starters, we’ll provide the following minimal config:

@Bean
public SecurityWebFilterChain securityWebFilterChain(
  ServerHttpSecurity http) {
    return http.authorizeExchange()
      .anyExchange().authenticated()
      .and().build();
}

Also, we’ll need a user details service. Spring Security provides us with a convenient mock user builder and an in-memory implementation of the user details service:

@Bean
public MapReactiveUserDetailsService userDetailsService() {
    UserDetails user = User.withDefaultPasswordEncoder()
      .username("user")
      .password("password")
      .roles("USER")
      .build();
    return new MapReactiveUserDetailsService(user);
}

Since we’re in reactive land, the user details service should also be reactive. If we check out the ReactiveUserDetailsService interface, we’ll see that its findByUsername method actually returns a Mono publisher:

public interface ReactiveUserDetailsService {

    Mono<UserDetails> findByUsername(String username);
}

Now we can run our application and observe a regular HTTP basic authentication form.

4. Styled Login Form

A small but striking improvement in Spring Security 5 is a new styled login form that uses the Bootstrap 4 CSS framework. The stylesheets in the login form link to a CDN, so we’ll only see the improvement when connected to the Internet.

To use the new login form, let’s add the corresponding formLogin() builder method to the ServerHttpSecurity builder:

public SecurityWebFilterChain securityWebFilterChain(
  ServerHttpSecurity http) {
    return http.authorizeExchange()
      .anyExchange().authenticated()
      .and().formLogin()
      .and().build();
}

If we now open the main page of the application, we’ll see that it looks much better than the default form we’ve seen in previous versions of Spring Security.

Note that this is not a production-ready form, but it’s a good bootstrap of our application.

If we now log in and then go to the http://localhost:8080/logout URL, we’ll see the logout confirmation form, which is also styled.

5. Reactive Controller Security

To see something behind the authentication form, let’s implement a simple reactive controller that greets the user:

@RestController
public class GreetController {

    @GetMapping("/")
    public Mono<String> greet(Mono<Principal> principal) {
        return principal
          .map(Principal::getName)
          .map(name -> String.format("Hello, %s", name));
    }

}

After logging in, we’ll see the greeting. Let’s add another reactive handler that would be accessible by admin only:

@GetMapping("/admin")
public Mono<String> greetAdmin(Mono<Principal> principal) {
    return principal
      .map(Principal::getName)
      .map(name -> String.format("Admin access: %s", name));
}

Now let’s create a second user with the role ADMIN in our user details service:

UserDetails admin = User.withDefaultPasswordEncoder()
  .username("admin")
  .password("password")
  .roles("ADMIN")
  .build();

We can now add a matcher rule for the admin URL that requires the user to have the ROLE_ADMIN authority.

Note that we have to put matchers before the .anyExchange() chain call. This call applies to all other URLs which were not yet covered by other matchers:

return http.authorizeExchange()
  .pathMatchers("/admin").hasAuthority("ROLE_ADMIN")
  .anyExchange().authenticated()
  .and().formLogin()
  .and().build();

If we now log in with user or admin, we’ll see that they both observe the initial greeting, as we’ve made it accessible for all authenticated users.

But only the admin user can go to the http://localhost:8080/admin URL and see her greeting.

6. Reactive Method Security

We’ve seen how we can secure the URLs, but what about methods?

To enable method-based security for reactive methods, we only need to add the @EnableReactiveMethodSecurity annotation to our SecurityConfig class:

@EnableWebFluxSecurity
@EnableReactiveMethodSecurity
public class SecurityConfig {
    // ...
}

Now let’s create a reactive greeting service with the following content:

@Service
public class GreetService {

    public Mono<String> greet() {
        return Mono.just("Hello from service!");
    }
}

We can inject it into the controller, go to http://localhost:8080/greetService and see that it actually works:

@RestController
public class GreetController {

    private GreetService greetService;

    @GetMapping("/greetService")
    public Mono<String> greetService() {
        return greetService.greet();
    }

    // standard constructors...
}

But if we now add the @PreAuthorize annotation requiring the ADMIN role to the service method, then the greet service URL won't be accessible to a regular user:

@Service
public class GreetService {

    @PreAuthorize("hasRole('ADMIN')")
    public Mono<String> greet() {
        // ...
    }
}

7. Mocking Users in Tests

Let’s check out how easy it is to test our reactive Spring application.

First, we’ll create a test with an injected application context:

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = SpringSecurity5Application.class)
public class SecurityTest {

    @Autowired
    ApplicationContext context;

    WebTestClient rest;

    // ...
}

Now we’ll set up a simple reactive web test client, which is a feature of the Spring 5 test framework:

@Before
public void setup() {
    this.rest = WebTestClient
      .bindToApplicationContext(this.context)
      .configureClient()
      .build();
}

This allows us to quickly check that the unauthorized user is redirected from the main page of our application to the login page:

@Test
public void whenNoCredentials_thenRedirectToLogin() {
    this.rest.get()
      .uri("/")
      .exchange()
      .expectStatus().is3xxRedirection();
}

If we now add the @WithMockUser annotation to a test method, we can provide an authenticated user for this method.

The login and password of this user would be user and password respectively, and the role is USER. This, of course, can all be configured with the @WithMockUser annotation parameters.

Now we can check that the authorized user sees the greeting:

@Test
@WithMockUser
public void whenHasCredentials_thenSeesGreeting() {
    this.rest.get()
      .uri("/")
      .exchange()
      .expectStatus().isOk()
      .expectBody(String.class).isEqualTo("Hello, user");
}
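
In the same way, we can exercise the admin endpoint. Here's a sketch that assumes the /admin mapping from section 5 and uses the annotation's parameters to assign the ADMIN role:

@Test
@WithMockUser(username = "admin", roles = { "ADMIN" })
public void whenHasAdminRole_thenSeesAdminGreeting() {
    this.rest.get()
      .uri("/admin")
      .exchange()
      .expectStatus().isOk()
      .expectBody(String.class).isEqualTo("Admin access: admin");
}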

The @WithMockUser annotation is available since Spring Security 4. However, in Spring Security 5 it was also updated to cover reactive endpoints and methods.

8. Conclusion

In this tutorial, we’ve discovered new features of the upcoming Spring Security 5 release, especially in the reactive programming arena.

As always, the source code for the article is available over on GitHub.

Guide to Java String Pool


1. Overview

The String object is the most used class in the Java language.

In this quick article, we’ll explore the Java String Pool — the special memory region where Strings are stored by the JVM.

2. String Interning

Thanks to the immutability of Strings in Java, the JVM can optimize the amount of memory allocated for them by storing only one copy of each literal String in the pool. This process is called interning.

When we create a String variable and assign a value to it, the JVM searches the pool for a String of equal value.

If found, the JVM will simply return a reference to its memory address, without allocating additional memory.

If not found, it’ll be added to the pool (interned) and its reference will be returned.

Let’s write a small test to verify this:

String constantString1 = "Baeldung";
String constantString2 = "Baeldung";
        
assertThat(constantString1)
  .isSameAs(constantString2);
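
The pool also covers compile-time constant expressions: a String concatenated from two literals is folded by the compiler and resolves to the same pooled instance:

String constantString3 = "Bael" + "dung"; // folded to "Baeldung" at compile time

assertThat(constantString1)
  .isSameAs(constantString3);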

3. Strings Allocated using the Constructor

When we create a String via the new operator, the JVM will create a new object and store it in the heap space.

Every String created like this will point to a different memory region with its own address.

Let’s see how this is different from the previous case:

String constantString = "Baeldung";
String newString = new String("Baeldung");

assertThat(constantString).isNotSameAs(newString);

4. Manual Interning

We can manually intern a String in the Java String Pool by calling the intern() method on the object we want to intern.

Manually interning the String will store its reference in the pool, and the JVM will return this reference when needed.

Let’s create a test case for this:

String constantString = "interned Baeldung";
String newString = new String("interned Baeldung");

assertThat(constantString).isNotSameAs(newString);

String internedString = newString.intern();

assertThat(constantString)
  .isSameAs(internedString);

5. Garbage Collection

Before Java 7, the JVM placed the Java String Pool in the PermGen space, which has a fixed size — it can’t be expanded at runtime and is not eligible for garbage collection.

The risk of interning Strings in the PermGen (instead of the Heap) is that we can get an OutOfMemoryError from the JVM if we intern too many Strings.

From Java 7 onwards, the Java String Pool is stored in the Heap space, which is garbage collected by the JVM. The advantage of this approach is the reduced risk of an OutOfMemoryError, because unreferenced Strings will be removed from the pool, thereby releasing memory.

6. Performance and Optimizations

In Java 6, the only optimization we can perform is increasing the PermGen space during the program invocation with the MaxPermSize JVM option:

-XX:MaxPermSize=1G

In Java 7, we have more detailed options to examine and expand/reduce the pool size. Let’s see the two options for viewing the pool size:

-XX:+PrintFlagsFinal
-XX:+PrintStringTableStatistics

The default pool size is 1009. If we want to increase the pool size, we can use the StringTableSize JVM option:

-XX:StringTableSize=4901

Note that increasing the pool size will consume more memory but has the advantage of reducing the time required to insert the Strings into the table.
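
For example, assuming a HotSpot JVM, we can ask for the string table statistics to be printed when a program exits (MyApplication is a placeholder for any main class):

$ java -XX:+PrintStringTableStatistics MyApplication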

7. A Note About Java 9

Until Java 8, Strings were internally represented as an array of characters – char[], encoded in UTF-16, so that every character uses two bytes of memory.

With Java 9, a new representation is provided, called Compact Strings. The new format stores the characters in a byte[] and chooses between the Latin-1 and UTF-16 encodings depending on the stored content.

Since the new String representation will use the UTF-16 encoding only when necessary, the amount of heap memory will be significantly lower, which in turn causes less Garbage Collector overhead on the JVM.

8. Conclusion

In this guide, we showed how the JVM and the Java compiler optimize memory allocations for String objects via the Java String Pool.

All code samples used in the article are available over on GitHub.

A Guide to Spring AbstractRoutingDatasource


1. Overview

In this quick article, we’ll look at Spring’s AbstractRoutingDatasource as a way of dynamically determining the actual DataSource based on the current context.

As a result, we’ll see that we can keep DataSource lookup logic out of the data access code.

2. Maven Dependencies

Let’s start by declaring  spring-context, spring-jdbc, spring-test, and h2 as dependencies in the pom.xml:

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>4.3.8.RELEASE</version>
    </dependency>

    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jdbc</artifactId>
        <version>4.3.8.RELEASE</version>
    </dependency>

    <dependency> 
        <groupId>org.springframework</groupId> 
        <artifactId>spring-test</artifactId>
        <version>4.3.8.RELEASE</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>1.4.195</version>
        <scope>test</scope>
    </dependency>
</dependencies>

The latest version of the dependencies can be found here.

3. Datasource Context

AbstractRoutingDatasource requires information to know which actual DataSource to route to. This information is typically referred to as a Context.

While the Context used with AbstractRoutingDatasource can be any Object, an enum is typically used to define it. In our example, we'll use the notion of a ClientDatabase as our context with the following implementation:

public enum ClientDatabase {
    CLIENT_A, CLIENT_B
}

It's worth noting that, in practice, the context can be whatever makes sense for the domain in question.

For example, another common use case involves using the notion of an Environment to define the context. In such a scenario, the context could be an enum containing PRODUCTION, DEVELOPMENT, and TESTING.

4. Context Holder

The context holder implementation is a container that stores the current context as a ThreadLocal reference.

In addition to holding the reference, it should contain static methods for setting, getting, and clearing it. AbstractRoutingDatasource will query the ContextHolder for the Context and will then use the context to look up the actual DataSource.

It’s critically important to use ThreadLocal here so that the context is bound to the currently executing thread.

It’s essential to take this approach so that behavior is reliable when data access logic spans multiple data sources and uses transactions:

public class ClientDatabaseContextHolder {

    private static ThreadLocal<ClientDatabase> CONTEXT
      = new ThreadLocal<>();

    public static void set(ClientDatabase clientDatabase) {
        Assert.notNull(clientDatabase, "clientDatabase cannot be null");
        CONTEXT.set(clientDatabase);
    }

    public static ClientDatabase getClientDatabase() {
        return CONTEXT.get();
    }

    public static void clear() {
        CONTEXT.remove();
    }
}

5. Datasource Router

We define our ClientDataSourceRouter to extend the Spring AbstractRoutingDataSource. We implement the necessary determineCurrentLookupKey method to query our ClientDatabaseContextHolder and return the appropriate key.

The AbstractRoutingDataSource implementation handles the rest of the work for us and transparently returns the appropriate DataSource:

public class ClientDataSourceRouter
  extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return ClientDatabaseContextHolder.getClientDatabase();
    }
}

6. Configuration

We need a Map of contexts to DataSource objects to configure our AbstractRoutingDataSource. We can also specify a default DataSource to use if there is no context set.

The DataSources we use can come from anywhere but will typically be either created at runtime or looked up using JNDI:

@Configuration
public class RoutingTestConfiguration {

    @Bean
    public ClientService clientService() {
        return new ClientService(new ClientDao(clientDatasource()));
    }
 
    @Bean
    public DataSource clientDatasource() {
        Map<Object, Object> targetDataSources = new HashMap<>();
        DataSource clientADatasource = clientADatasource();
        DataSource clientBDatasource = clientBDatasource();
        targetDataSources.put(ClientDatabase.CLIENT_A, 
          clientADatasource);
        targetDataSources.put(ClientDatabase.CLIENT_B, 
          clientBDatasource);

        ClientDataSourceRouter clientRoutingDatasource 
          = new ClientDataSourceRouter();
        clientRoutingDatasource.setTargetDataSources(targetDataSources);
        clientRoutingDatasource.setDefaultTargetDataSource(clientADatasource);
        return clientRoutingDatasource;
    }

    // ...
}
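
The elided factory methods can build the per-client DataSources in whatever way fits the project. Here's a sketch using the embedded H2 support from spring-jdbc (the database name and init script are illustrative):

private DataSource clientADatasource() {
    return new EmbeddedDatabaseBuilder()
      .setType(EmbeddedDatabaseType.H2)
      .setName("CLIENT_A")
      .addScript("classpath:dsrouting-db.sql") // hypothetical schema/data script
      .build();
}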

7. Usage

When using our AbstractRoutingDataSource, we first set the context and then perform our operation. We'll use a service layer that takes the context as a parameter, sets it before delegating to the data-access code, and clears it after the call.

As an alternative to manually clearing the context within a service method, the clearing logic can be handled by an AOP pointcut.
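
Such an aspect might look like the following sketch (the pointcut expression is an assumption about the package layout):

@Aspect
@Component
public class ContextClearingAspect {

    // clear the ThreadLocal once the service method has returned or thrown
    @After("execution(* com.baeldung.*.ClientService.*(..))")
    public void clearContext() {
        ClientDatabaseContextHolder.clear();
    }
}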

It’s important to remember that the context is thread bound especially if data access logic will be spanning multiple data sources and transactions:

public class ClientService {

    private ClientDao clientDao;

    // standard constructors

    public String getClientName(ClientDatabase clientDb) {
        ClientDatabaseContextHolder.set(clientDb);
        String clientName = this.clientDao.getClientName();
        ClientDatabaseContextHolder.clear();
        return clientName;
    }
}

8. Conclusion

In this tutorial, we looked at an example of how to use Spring's AbstractRoutingDataSource. We implemented a solution using the notion of a Client, where each client has its own DataSource.

And, as always, the examples can be found over on GitHub.

Introduction to Spring Cloud CLI


1. Introduction

In this article, we take a look at Spring Boot Cloud CLI (or Cloud CLI for short). The tool provides a set of command line enhancements to the Spring Boot CLI that helps in further abstracting and simplifying Spring Cloud deployments.

The CLI was introduced in late 2016 and allows quick auto-configuration and deployment of standard Spring Cloud services using a command line, .yml configuration files, and Groovy scripts.

2. Set Up

Spring Boot Cloud CLI 1.3.x requires Spring Boot CLI 1.5.x, so make sure to grab the latest version of Spring Boot CLI from Maven Central (installation instructions) and the most recent version of the Cloud CLI from Maven Repository (the official Spring repository)!

To make sure the CLI is installed and ready to use, simply run:

$ spring --version

After verifying your Spring Boot CLI installation, install the latest stable version of Cloud CLI:

$ spring install org.springframework.cloud:spring-cloud-cli:1.3.2.RELEASE

Then verify the Cloud CLI:

$ spring cloud --version

Advanced installation features can be found on the official Cloud CLI page!

3. Default Services and Configuration

The CLI provides seven core services that can be run and deployed with single line commands.

To launch a Cloud Config server on http://localhost:8888:

$ spring cloud configserver

To start a Eureka server on http://localhost:8761:

$ spring cloud eureka

To initiate an H2 server on http://localhost:9095:

$ spring cloud h2

To launch a Kafka server on http://localhost:9091:

$ spring cloud kafka

To start a Zipkin server on http://localhost:9411:

$ spring cloud zipkin

To launch a Dataflow server on http://localhost:9393:

$ spring cloud dataflow

To start a Hystrix dashboard on http://localhost:7979:

$ spring cloud hystrixdashboard

List currently running cloud services:

$ spring cloud --list

The handy help command:

$ spring help cloud

For more details about these commands, please check out the official blog.

4. Customizing Cloud Services with YML

Each of the services that are deployable through the Cloud CLI can also be configured using correspondingly-named .yml files:

spring:
  profiles:
    active: git
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo

This constitutes a simple configuration file that we can use for launching the Cloud Config Server.

We can, for example, specify a Git repository as the URI source that will be automatically cloned and deployed when we issue the ‘spring cloud configserver’ command.

Cloud CLI uses the Spring Cloud Launcher under the hood. That means that Cloud CLI supports most of the Spring Boot configuration mechanisms. Here’s the official list of Spring Boot properties.

Spring Cloud configuration conforms to the ‘spring.cloud…‘ convention. Settings for Spring Cloud and Spring Config Server can be found at this link.

We can also specify several different modules and services directly into the cloud.yml:

spring:
  cloud:
    launcher:
      deployables:
        - name: configserver
          coordinates: maven://...:spring-cloud-launcher-configserver:1.3.2.RELEASE
          port: 8888
          waitUntilStarted: true
          order: -10
        - name: eureka
          coordinates: maven://...:spring-cloud-launcher-eureka:1.3.2.RELEASE
          port: 8761

The cloud.yml file allows custom services or modules to be added, and Maven and Git repositories to be used.

5. Running Custom Groovy Scripts

Custom components can be written in Groovy and deployed efficiently since Cloud CLI can compile and deploy Groovy code.

Here’s an example minimal REST API implementation:

@RestController
@RequestMapping('/api')
class api {
 
    @GetMapping('/get')
    def get() { [message: 'Hello'] }
}

Assuming that the script is saved as rest.groovy, we can launch our minimal server like this:

$ spring run rest.groovy

Pinging http://localhost:8080/api/get should reveal:

{"message":"Hello"}

6. Encrypt/Decrypt

Cloud CLI also provides a tool for encryption and decryption (found in the package org.springframework.cloud.cli.command.*) that can be used directly through the command line or indirectly by passing a value to a Cloud Config Server endpoint.

Let’s set it up and see how to use it.

6.1. Setup

Both the Cloud CLI and Spring Cloud Config Server use org.springframework.security.crypto.encrypt.* for handling the encrypt and decrypt commands.

As such, both require the JCE Unlimited Strength Extension provided by Oracle here.

6.2. Encrypt and Decrypt By Command

To encrypt ‘my_value‘ via the terminal, invoke:

$ spring encrypt my_value --key my_key

File paths can be substituted for the key name (e.g. ‘my_key‘ above) by using ‘@’ followed by the path (commonly used for RSA public keys):

$ spring encrypt my_value --key @${WORKSPACE}/foos/foo.pub

‘my_value‘ will now be encrypted to something like:

c93cb36ce1d09d7d62dffd156ef742faaa56f97f135ebd05e90355f80290ce6b

Furthermore, it will be stored in memory under the key ‘my_key‘. This allows us to decrypt ‘my_key‘ back into ‘my_value‘ via the command line:

$ spring decrypt --key my_key

We can also now use the encrypted value in a configuration YAML or properties file, where it will be automatically decrypted by the Cloud Config Server when loaded:

encrypted_credential: "{cipher}c93cb36ce1d09d7d62dffd156ef742faaa56f97f135ebd05e90355f80290ce6b"

6.3. Encrypt and Decrypt with Config Server

Spring Cloud Config Server exposes RESTful endpoints where keys and encrypted value pairs can be stored in the Java Security Store or memory.

For more information on how to correctly set up and configure your Cloud Config Server to accept symmetric or asymmetric encryption, please check out our article or the official docs.

Once Spring Cloud Config Server is configured and up and running via the ‘spring cloud configserver‘ command, you'll be able to call its API:

$ curl localhost:8888/encrypt -d mysecret
//682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
$ curl localhost:8888/decrypt -d 682bc583f4641835fa2db009355293665d2647dade3375c0ee201de2a49f7bda
//mysecret

7. Conclusion

We’ve focused here on an introduction to Spring Boot Cloud CLI. For more information, please check out the official docs.

The configuration and bash examples used in this article are available over on GitHub.
