Channel: Baeldung

Guide to java.util.GregorianCalendar


1. Introduction

In this tutorial, we’re going to take a quick peek at the GregorianCalendar class.

2. GregorianCalendar

GregorianCalendar is a concrete implementation of the abstract class java.util.Calendar. Not surprisingly, the Gregorian Calendar is the most widely used civil calendar in the world. 

2.1. Getting an Instance

There are two options available to get an instance of GregorianCalendar: Calendar.getInstance() and using one of the constructors.

Using the static factory method Calendar.getInstance() isn’t a recommended approach, as it will return an instance dependent on the default locale.

It might return a BuddhistCalendar for Thai or JapaneseImperialCalendar for Japan. Not knowing the type of the instance being returned may lead to a ClassCastException:

@Test(expected = ClassCastException.class)
public void test_Class_Cast_Exception() {
    TimeZone tz = TimeZone.getTimeZone("GMT+9:00");
    Locale loc = new Locale("ja", "JP", "JP");
    Calendar calendar = Calendar.getInstance(loc);
    GregorianCalendar gc = (GregorianCalendar) calendar;
}

Using one of the seven overloaded constructors, we can initialize the Calendar object either with the default date and time (depending on the locale of our operating system) or with a specific combination of date, time, locale, and time zone.

Let’s understand the different constructors by which a GregorianCalendar object can be instantiated.

The default constructor will initialize the calendar with the current date and time in the time zone and locale of the operating system:

new GregorianCalendar();

We can specify the year, month, dayOfMonth, hourOfDay, minute, and second for the default time zone with the default locale:

new GregorianCalendar(2018, 6, 27, 16, 16, 47);

Note that we don’t have to specify hourOfDay, minute and second as there are other constructors without these parameters.

We can pass the time zone as a parameter to create a calendar in this time zone with the default locale:

new GregorianCalendar(TimeZone.getTimeZone("GMT+5:30"));

We can pass the locale as a parameter to create a calendar in this locale with the default time zone:

new GregorianCalendar(new Locale("en", "IN"));

Finally, we can pass both the time zone and locale as parameters:

new GregorianCalendar(TimeZone.getTimeZone("GMT+5:30"), new Locale("en", "IN"));

2.2. New Methods with Java 8

With Java 8, new methods have been introduced to GregorianCalendar.

The from() method gets an instance of GregorianCalendar with the default locale from a ZonedDateTime object.
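As a quick, self-contained sketch (hypothetical example code, not from the original article), we can convert a ZonedDateTime into a GregorianCalendar like this:

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class FromZonedDateTime {
    // Builds a GregorianCalendar from a ZonedDateTime using the Java 8 factory method
    public static GregorianCalendar build() {
        ZonedDateTime zdt = ZonedDateTime.of(2018, 7, 27, 16, 16, 47, 0, ZoneId.of("UTC"));
        return GregorianCalendar.from(zdt);
    }

    public static void main(String[] args) {
        GregorianCalendar gc = build();
        // Calendar months are zero-based, so July is 6
        System.out.println(gc.get(Calendar.YEAR) + " / " + gc.get(Calendar.MONTH));
    }
}
```

Note that the resulting calendar keeps the time zone of the ZonedDateTime, which is why the fields above are read in UTC.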

Using getCalendarType() we can get the type of the calendar instance. The available calendar types are ‘gregory’, ‘buddhist’ and ‘japanese’.

We can use this, for example, to make sure we have a calendar of a certain type before moving on with our application logic:

@Test
public void test_Calendar_Return_Type_Valid() {
    Calendar calendar = Calendar.getInstance();
    assert ("gregory".equals(calendar.getCalendarType()));
}

Calling toZonedDateTime() we can convert the calendar object into a ZonedDateTime object that represents the same point on the timeline as this GregorianCalendar.
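For example, a minimal sketch (assuming a UTC time zone to keep the output deterministic; not from the original article):

```java
import java.time.ZonedDateTime;
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class ToZonedDateTimeExample {
    // Converts a GregorianCalendar into a ZonedDateTime representing the same instant
    public static ZonedDateTime convert() {
        GregorianCalendar gc = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
        gc.clear(); // drop the "current time" fields before setting our own
        gc.set(2018, Calendar.JULY, 28);
        return gc.toZonedDateTime();
    }

    public static void main(String[] args) {
        // ZonedDateTime months are one-based, unlike Calendar's zero-based months
        System.out.println(convert());
    }
}
```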

2.3. Modifying Dates

The calendar fields can be modified using the methods add(), roll() and set().

The add() method allows us to add time to the calendar in a specified unit based on the calendar’s internal ruleset:

@Test
public void test_whenAddOneDay_thenMonthIsChanged() {
    int finalDay1 = 1;
    int finalMonthJul = 6; 
    GregorianCalendar calendarExpected = new GregorianCalendar(2018, 5, 30);
    calendarExpected.add(Calendar.DATE, 1);
    System.out.println(calendarExpected.getTime());
 
    assertEquals(calendarExpected.get(Calendar.DATE), finalDay1);
    assertEquals(calendarExpected.get(Calendar.MONTH), finalMonthJul);
}

We can also use the add() method to subtract time from the calendar object:

@Test
public void test_whenSubtractOneDay_thenMonthIsChanged() {
    int finalDay31 = 31;
    int finalMonthMay = 4; 
    GregorianCalendar calendarExpected = new GregorianCalendar(2018, 5, 1);
    calendarExpected.add(Calendar.DATE, -1);

    assertEquals(calendarExpected.get(Calendar.DATE), finalDay31);
    assertEquals(calendarExpected.get(Calendar.MONTH), finalMonthMay);
}

Execution of the add() method forces an immediate re-computation of the calendar’s milliseconds and all fields.

Note that using add() may also change the higher calendar fields (MONTH in this case).

The roll() method adds a signed amount to the specified calendar field without changing the larger fields. A larger field represents a larger unit of time. For example, DAY_OF_MONTH is larger than HOUR.

Let’s see an example of how to roll up months.

In this case, YEAR being a larger field will not be incremented:

@Test
public void test_whenRollUpOneMonth_thenYearIsUnchanged() {
    int rolledUpMonthJuly = 7, originalYear2018 = 2018;
    GregorianCalendar calendarExpected = new GregorianCalendar(2018, 6, 28);
    calendarExpected.roll(Calendar.MONTH, 1);
 
    assertEquals(calendarExpected.get(Calendar.MONTH), rolledUpMonthJuly);
    assertEquals(calendarExpected.get(Calendar.YEAR), originalYear2018);
}

Similarly, we can roll down months:

@Test
public void test_whenRollDownOneMonth_thenYearIsUnchanged() {
    int rolledDownMonthJune = 5, originalYear2018 = 2018;
    GregorianCalendar calendarExpected = new GregorianCalendar(2018, 6, 28);
    calendarExpected.roll(Calendar.MONTH, -1);
 
    assertEquals(calendarExpected.get(Calendar.MONTH), rolledDownMonthJune);
    assertEquals(calendarExpected.get(Calendar.YEAR), originalYear2018);
}

We can directly set a calendar field to a specified value using the set() method. The calendar’s time value in milliseconds won’t be recomputed until the next call to get(), getTime(), add() or roll() is made.

Thus, multiple calls to set() don’t trigger unnecessary computations.

Let’s see an example which will set the month field to 3 (i.e. April):

@Test
public void test_setMonth() {
    GregorianCalendarExample calendarDemo = new GregorianCalendarExample();
    GregorianCalendar calendarActual = new GregorianCalendar(2018, 6, 28);
    GregorianCalendar calendarExpected = new GregorianCalendar(2018, 6, 28);
    calendarExpected.set(Calendar.MONTH, 3);
    Date expectedDate = calendarExpected.getTime();

    assertEquals(expectedDate, calendarDemo.setMonth(calendarActual, 3));
}

2.4. Working with XMLGregorianCalendar

JAXB allows mapping Java classes to XML representations. The javax.xml.datatype.XMLGregorianCalendar type can help in mapping basic XSD schema types such as xsd:date, xsd:time and xsd:dateTime.

Let’s have a look at an example to convert from GregorianCalendar type into the XMLGregorianCalendar type:

@Test
public void test_toXMLGregorianCalendar() throws Exception {
    GregorianCalendarExample calendarDemo = new GregorianCalendarExample();
    DatatypeFactory datatypeFactory = DatatypeFactory.newInstance();
    GregorianCalendar calendarActual = new GregorianCalendar(2018, 6, 28);
    GregorianCalendar calendarExpected = new GregorianCalendar(2018, 6, 28);
    XMLGregorianCalendar expectedXMLGregorianCalendar = datatypeFactory
      .newXMLGregorianCalendar(calendarExpected);
 
    assertEquals(
      expectedXMLGregorianCalendar, 
      calendarDemo.toXMLGregorianCalendar(calendarActual));
}

Once the calendar object has been translated into XML format, it can be used in any use cases that require a date to be serialized, like messaging or web service calls.

Let’s see an example on how to convert from XMLGregorianCalendar type back into GregorianCalendar:

@Test
public void test_toDate() throws DatatypeConfigurationException {
    GregorianCalendar calendarActual = new GregorianCalendar(2018, 6, 28);
    DatatypeFactory datatypeFactory = DatatypeFactory.newInstance();
    XMLGregorianCalendar expectedXMLGregorianCalendar = datatypeFactory
      .newXMLGregorianCalendar(calendarActual);
    assertEquals(
      calendarActual.getTime(), 
      expectedXMLGregorianCalendar.toGregorianCalendar().getTime() );
}

2.5. Comparing Dates

We can use the Calendar class’s compareTo() method to compare dates. The result is positive if the base date is after the date we compare it to, and negative if it’s before it:

@Test
public void test_Compare_Date_FirstDate_Greater_SecondDate() {
    GregorianCalendar firstDate = new GregorianCalendar(2018, 6, 28);
    GregorianCalendar secondDate = new GregorianCalendar(2018, 5, 28);
    assertTrue(1 == firstDate.compareTo(secondDate));
}

@Test
public void test_Compare_Date_FirstDate_Smaller_SecondDate() {
    GregorianCalendar firstDate = new GregorianCalendar(2018, 5, 28);
    GregorianCalendar secondDate = new GregorianCalendar(2018, 6, 28);
    assertTrue(-1 == firstDate.compareTo(secondDate));
}

@Test
public void test_Compare_Date_Both_Dates_Equal() {
    GregorianCalendar firstDate = new GregorianCalendar(2018, 6, 28);
    GregorianCalendar secondDate = new GregorianCalendar(2018, 6, 28);
    assertTrue(0 == firstDate.compareTo(secondDate));
}

2.6. Formatting Dates

We can convert GregorianCalendar into a specific format by using a combination of ZonedDateTime and DateTimeFormatter to get the desired output:

@Test
public void test_dateFormatdMMMuuuu() {
    String expectedDate = new GregorianCalendar(2018, 6, 28).toZonedDateTime()
      .format(DateTimeFormatter.ofPattern("d MMM uuuu"));
    assertEquals("28 Jul 2018", expectedDate);
}

2.7. Getting Information about the Calendar

GregorianCalendar provides several get methods which can be used to fetch different calendar attributes. Let’s look at the different options we have:

  • getActualMaximum(int field) returns the maximum value for the specified calendar field, taking the current time values into account. The following example returns 30 for the DAY_OF_MONTH field because June has 30 days:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(30 == calendar.getActualMaximum(Calendar.DAY_OF_MONTH));
  • getActualMinimum(int field) returns the minimum value for the specified calendar field, taking the current time values into account:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(1 == calendar.getActualMinimum(Calendar.DAY_OF_MONTH));
  • getGreatestMinimum(int field) returns the highest minimum value for the given calendar field:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(1 == calendar.getGreatestMinimum(Calendar.DAY_OF_MONTH));
  • getLeastMaximum(int field) returns the lowest maximum value for the given calendar field. For the DAY_OF_MONTH field this is 28, because February may have only 28 days:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(28 == calendar.getLeastMaximum(Calendar.DAY_OF_MONTH));
  • getMaximum(int field) returns the maximum value for the given calendar field:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(31 == calendar.getMaximum(Calendar.DAY_OF_MONTH));
  • getMinimum(int field) returns the minimum value for the given calendar field:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(1 == calendar.getMinimum(Calendar.DAY_OF_MONTH));
  • getWeekYear() returns the week year represented by this GregorianCalendar:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(2018 == calendar.getWeekYear());
  • getWeeksInWeekYear() returns the number of weeks in the week year of this calendar:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertTrue(52 == calendar.getWeeksInWeekYear());
  • isLeapYear(int year) returns true if the given year is a leap year; 2018 isn’t one:
    GregorianCalendar calendar = new GregorianCalendar(2018, 5, 28);
    assertFalse(calendar.isLeapYear(calendar.get(Calendar.YEAR)));
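To see how getActualMaximum() reacts to the field values currently set, here's a small self-contained sketch (hypothetical helper, not from the article) that checks February's length in a leap and a non-leap year:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class FebruaryLength {
    // getActualMaximum(DAY_OF_MONTH) depends on the month and year currently set
    public static int daysInFebruary(int year) {
        GregorianCalendar calendar = new GregorianCalendar(year, Calendar.FEBRUARY, 1);
        return calendar.getActualMaximum(Calendar.DAY_OF_MONTH);
    }

    public static void main(String[] args) {
        System.out.println(daysInFebruary(2018)); // 28 - not a leap year
        System.out.println(daysInFebruary(2020)); // 29 - leap year
    }
}
```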

3. Conclusion

In this article, we explored certain aspects of GregorianCalendar.

As always, the sample code is available over on GitHub.


Stack Memory and Heap Space in Java


1. Introduction

To run an application in an optimal way, the JVM divides memory into stack and heap memory. Whenever we declare new variables and objects, call a new method, declare a String, or perform similar operations, the JVM allocates memory for these operations from either stack memory or heap space.

In this tutorial, we’ll discuss these memory models. We’ll list some key differences between them, how they’re stored in RAM, the features they offer, and where to use each.

2. Stack Memory in Java

Stack memory in Java is used for static memory allocation and the execution of a thread. It contains primitive values that are specific to a method, and references to heap objects that are referred to from the method.

Access to this memory is in Last-In-First-Out (LIFO) order. Whenever a new method is called, a new block on top of the stack is created which contains values specific to that method, like primitive variables and references to objects.

When the method finishes execution, its corresponding stack frame is removed, the flow returns to the calling method, and the space becomes available for the next method.

2.1. Key Features of Stack Memory

Apart from what we’ve discussed so far, the following are some other features of stack memory:

  • It grows and shrinks as new methods are called and returned, respectively
  • Variables inside the stack exist only as long as the method that created them is running
  • It’s automatically allocated and deallocated when the method finishes execution
  • If this memory is full, Java throws java.lang.StackOverflowError
  • Access to this memory is fast compared to heap memory
  • This memory is thread-safe, as each thread operates in its own stack
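As a quick illustrative sketch (hypothetical code, not from the article), unbounded recursion keeps pushing stack frames until the JVM throws the error:

```java
public class StackOverflowDemo {
    // Each nested call adds a new frame to the thread's stack;
    // with no base case, the stack eventually runs out of space
    static void recurse() {
        recurse();
    }

    public static boolean overflows() {
        try {
            recurse();
            return false;
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("Stack overflowed: " + overflows());
    }
}
```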

3. Heap Space in Java

Heap space in Java is used for the dynamic memory allocation of Java objects and JRE classes at runtime. New objects are always created in heap space, and the references to these objects are stored in stack memory.

These objects have global access and can be accessed from anywhere in the application.

This memory model is further broken into smaller parts called generations:

  1. Young Generation – this is where all new objects are allocated and aged. A minor Garbage collection occurs when this fills up
  2. Old or Tenured Generation – this is where long surviving objects are stored. When objects are stored in the Young Generation, a threshold for the object’s age is set and when that threshold is reached, the object is moved to the old generation
  3. Permanent Generation – this consists of JVM metadata for the runtime classes and application methods (note that since Java 8, the Permanent Generation has been removed and replaced by Metaspace)

These different portions are also discussed in this article – Difference Between JVM, JRE, and JDK.

We can always manipulate the size of heap memory as per our requirement. For more information, visit this linked Baeldung article.

3.1. Key Features of Java Heap Memory

Apart from what we’ve discussed so far, the following are some other features of heap space:

  • It’s accessed via complex memory management techniques that include Young Generation, Old or Tenured Generation, and Permanent Generation
  • If heap space is full, Java throws java.lang.OutOfMemoryError
  • Access to this memory is relatively slower than stack memory
  • This memory, in contrast to stack, isn’t automatically deallocated. It needs the Garbage Collector to free up unused objects in order to keep memory usage efficient
  • Unlike stack, a heap isn’t thread-safe and needs to be guarded by properly synchronized code
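A tiny sketch (hypothetical names, not from the article) shows why heap objects need this care: two stack references can point to the same heap object, so a mutation through one alias is visible through the other:

```java
public class HeapSharingDemo {
    static class Person {
        String name;
        Person(String name) { this.name = name; }
    }

    // Two references on the stack, one object on the heap:
    // mutating through one alias is visible through the other
    public static String mutateViaAlias() {
        Person a = new Person("Jon"); // object allocated on the heap
        Person b = a;                 // second reference to the same object
        b.name = "Arya";
        return a.name;
    }

    public static void main(String[] args) {
        System.out.println(mutateViaAlias());
    }
}
```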

4. Example

Based on what we’ve learned so far, let’s analyze a simple Java code and let’s assess how memory is managed here:

class Person {
    int pid;
    String name;
    
    // constructor, setters/getters
}

public class Driver {
    public static void main(String[] args) {
        int id = 23;
        String pName = "Jon";
        Person p = null;
        p = new Person(id, pName);
    }
}

Let’s analyze this step by step:

  1. Upon entering the main() method, a space in stack memory would be created to store primitives and references of this method
    • The primitive value of integer id will be stored directly in stack memory
    • The reference variable of type Person will also be created in stack memory which will point to the actual object in the heap
  2. The call to the parameterized constructor Person(int, String) from main() will allocate further memory on top of the previous stack. This will store:
    • The this object reference of the calling object in stack memory
    • The primitive value id in the stack memory
    • The reference variable of String argument personName which will point to the actual string from string pool in heap memory
  3. The constructor further calls the setPersonName() method, for which another allocation takes place in stack memory on top of the previous one. This again stores variables in the manner described above.
  4. However, for the newly created object of type Person, all instance variables will be stored in heap memory.


5. Summary

Before we conclude this article, let’s quickly summarize the differences between the Stack Memory and the Heap Space:

  • Application – Stack: used in parts, one at a time, during the execution of a thread. Heap: the entire application uses heap space during runtime
  • Size – Stack: has size limits depending on the OS and is usually smaller than the heap. Heap: has no such size limit
  • Storage – Stack: stores only primitive variables and references to objects created in heap space. Heap: all newly created objects are stored here
  • Order – Stack: accessed using a Last-In-First-Out (LIFO) memory allocation system. Heap: accessed via complex memory management techniques that include Young Generation, Old or Tenured Generation, and Permanent Generation
  • Life – Stack: exists only as long as the current method is running. Heap: exists as long as the application runs
  • Efficiency – Stack: much faster to allocate. Heap: slower to allocate
  • Allocation/Deallocation – Stack: automatically allocated and deallocated when a method is called and returns. Heap: allocated when new objects are created, and deallocated by the Garbage Collector when they’re no longer referenced

6. Conclusion

Stack and heap are two ways in which Java allocates memory. In this article, we understood how they work and when to use them for developing better Java programs.

To learn more about Memory Management in Java, have a look at this article. The JVM Garbage Collector is also briefly discussed in this article.

RxJava One Observable, Multiple Subscribers


1. Overview

By default, each Subscriber triggers its own execution of an Observable’s logic, which isn’t always desirable when one Observable has multiple subscribers. In this article, we’ll cover how to change this behavior and handle multiple subscribers in a proper way.

But first, let’s have a look at the default behavior of multiple subscribers.

2. Default Behaviour

Let’s say we have the following Observable:

private static Observable getObservable() {
    return Observable.create(subscriber -> {
        subscriber.onNext(gettingValue(1));
        subscriber.onNext(gettingValue(2));

        subscriber.add(Subscriptions.create(() -> {
            LOGGER.info("Clear resources");
        }));
    });
}

This emits two elements as soon as a Subscriber subscribes.

In our example we have two Subscribers:

LOGGER.info("Subscribing");

Subscription s1 = obs.subscribe(i -> LOGGER.info("subscriber#1 is printing " + i));
Subscription s2 = obs.subscribe(i -> LOGGER.info("subscriber#2 is printing " + i));

s1.unsubscribe();
s2.unsubscribe();

Imagine that getting each element is a costly operation – it may include, for example, an intensive computation or opening a URL connection.

To keep things simple we’ll just return a number:

private static Integer gettingValue(int i) {
    LOGGER.info("Getting " + i);
    return i;
}

Here is the output:

Subscribing
Getting 1
subscriber#1 is printing 1
Getting 2
subscriber#1 is printing 2
Getting 1
subscriber#2 is printing 1
Getting 2
subscriber#2 is printing 2
Clear resources
Clear resources

As we can see, getting each element, as well as clearing the resources, is performed twice by default – once for each Subscriber. This isn’t what we want. The ConnectableObservable class helps fix the problem.

3. ConnectableObservable

The ConnectableObservable class allows us to share a single subscription among multiple subscribers, so the underlying operations aren’t performed several times.

But first, let’s create a ConnectableObservable.

3.1. publish()

The publish() method creates a ConnectableObservable from an Observable:

ConnectableObservable obs = Observable.create(subscriber -> {
    subscriber.onNext(gettingValue(1));
    subscriber.onNext(gettingValue(2));
    subscriber.add(Subscriptions.create(() -> {
        LOGGER.info("Clear resources");
    }));
}).publish();

But for now, it does nothing. What makes it work is the connect() method.

3.2. connect()

Until ConnectableObservable‘s connect() method is called, the Observable‘s OnSubscribe callback isn’t triggered, even if there are some subscribers.

Let’s demonstrate this:

LOGGER.info("Subscribing");
obs.subscribe(i -> LOGGER.info("subscriber #1 is printing " + i));
obs.subscribe(i -> LOGGER.info("subscriber #2 is printing " + i));
Thread.sleep(1000);
LOGGER.info("Connecting");
Subscription s = obs.connect();
s.unsubscribe();

We subscribe and then wait for a second before connecting. The output is:

Subscribing
Connecting
Getting 1
subscriber #1 is printing 1
subscriber #2 is printing 1
Getting 2
subscriber #1 is printing 2
subscriber #2 is printing 2
Clear resources

As we can see:

    • Getting elements occurs only once, as we wanted
    • Clearing resources occurs only once as well
    • Getting elements starts a second after subscribing
    • Subscribing no longer triggers the emitting of elements; only connect() does

This delay can be beneficial – sometimes we need to give all the subscribers the same sequence of elements even if one of them subscribes earlier than another.

3.3. The Consistent View of the Observables – connect() After subscribe()

This use case can’t be demonstrated on our previous Observable as it runs cold and both subscribers get the whole sequence of elements anyway.

Imagine instead that the emitting of elements doesn’t depend on the moment of subscription – events emitted on mouse clicks, for example. Now also imagine that a second Subscriber subscribes a second after the first.

The first Subscriber will get all the elements emitted during this example, whereas the second Subscriber will only receive some elements.

On the other hand, using the connect() method in the right place can give both subscribers the same view on the Observable sequence.

Example of Hot Observable

Let’s create a hot Observable. It will be emitting elements on mouse clicks on JFrame.

Each element will be the x-coordinate of the click:

private static Observable getObservable() {
    return Observable.create(subscriber -> {
        frame.addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent e) {
                subscriber.onNext(e.getX());
            }
        });
        subscriber.add(Subscriptions.create(() -> {
            LOGGER.info("Clear resources");
            for (MouseListener listener : frame.getListeners(MouseListener.class)) {
                frame.removeMouseListener(listener);
            }
        }));
    });
}

The Default Behavior of Hot Observable

Now if we subscribe two Subscribers one after another with a second interval, run the program and start clicking, we’ll see that the first Subscriber will get more elements:

public static void defaultBehaviour() throws InterruptedException {
    Observable obs = getObservable();

    LOGGER.info("subscribing #1");
    Subscription subscription1 = obs.subscribe((i) -> 
        LOGGER.info("subscriber#1 is printing x-coordinate " + i));
    Thread.sleep(1000);
    LOGGER.info("subscribing #2");
    Subscription subscription2 = obs.subscribe((i) -> 
        LOGGER.info("subscriber#2 is printing x-coordinate " + i));
    Thread.sleep(1000);
    LOGGER.info("unsubscribe#1");
    subscription1.unsubscribe();
    Thread.sleep(1000);
    LOGGER.info("unsubscribe#2");
    subscription2.unsubscribe();
}
subscribing #1
subscriber#1 is printing x-coordinate 280
subscriber#1 is printing x-coordinate 242
subscribing #2
subscriber#1 is printing x-coordinate 343
subscriber#2 is printing x-coordinate 343
unsubscribe#1
clearing resources
unsubscribe#2
clearing resources

connect() After subscribe()

To make both subscribers get the same sequence, we’ll convert this Observable to a ConnectableObservable and call connect() after both Subscribers have subscribed:

public static void subscribeBeforeConnect() throws InterruptedException {

    ConnectableObservable obs = getObservable().publish();

    LOGGER.info("subscribing #1");
    Subscription subscription1 = obs.subscribe(
      i -> LOGGER.info("subscriber#1 is printing x-coordinate " + i));
    Thread.sleep(1000);
    LOGGER.info("subscribing #2");
    Subscription subscription2 = obs.subscribe(
      i ->  LOGGER.info("subscriber#2 is printing x-coordinate " + i));
    Thread.sleep(1000);
    LOGGER.info("connecting:");
    Subscription s = obs.connect();
    Thread.sleep(1000);
    LOGGER.info("unsubscribe connected");
    s.unsubscribe();
}

Now they’ll get the same sequence:

subscribing #1
subscribing #2
connecting:
subscriber#1 is printing x-coordinate 317
subscriber#2 is printing x-coordinate 317
subscriber#1 is printing x-coordinate 364
subscriber#2 is printing x-coordinate 364
unsubscribe connected
clearing resources

So the point is to wait for the moment when all subscribers are ready and then call connect().

In a Spring application, we may subscribe all of the components during the application startup for example and call connect() in onApplicationEvent().

But let’s return to our example; note that all the clicks before the connect() method are missed. If we don’t want to miss elements but on the contrary process them we can put connect() earlier in the code and force the Observable to produce events in the absence of any Subscriber.

3.4. Forcing Subscription in the Absence of Any Subscriber – connect() Before subscribe()

To demonstrate this let’s correct our example:

public static void connectBeforeSubscribe() throws InterruptedException {
    ConnectableObservable obs = getObservable()
      .doOnNext(x -> LOGGER.info("saving " + x)).publish();
    LOGGER.info("connecting:");
    Subscription s = obs.connect();
    Thread.sleep(1000);
    LOGGER.info("subscribing #1");
    obs.subscribe((i) -> LOGGER.info("subscriber#1 is printing x-coordinate " + i));
    Thread.sleep(1000);
    LOGGER.info("subscribing #2");
    obs.subscribe((i) -> LOGGER.info("subscriber#2 is printing x-coordinate " + i));
    Thread.sleep(1000);
    s.unsubscribe();
}

The steps are relatively simple:

  • First, we connect
  • Then we wait for one second and subscribe the first Subscriber
  • Finally, we wait for another second and subscribe the second Subscriber

Note that we’ve added doOnNext() operator. Here we could store elements in the database for example but in our code, we just print “saving… “.

If we launch the code and begin clicking we’ll see that the elements are emitted and processed immediately after the connect() call:

connecting:
saving 306
saving 248
subscribing #1
saving 377
subscriber#1 is printing x-coordinate 377
saving 295
subscriber#1 is printing x-coordinate 295
saving 206
subscriber#1 is printing x-coordinate 206
subscribing #2
saving 347
subscriber#1 is printing x-coordinate 347
subscriber#2 is printing x-coordinate 347
clearing resources

If there were no subscribers, the elements would still be processed.

So the connect() method starts emitting and processing elements regardless of whether anyone is subscribed, as if there were an artificial Subscriber with an empty action consuming the elements.

And if some real Subscribers subscribe, this artificial mediator just propagates elements to them.
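Outside of RxJava, this "one source, many subscribers" idea can be sketched with a tiny, hypothetical multicaster (the Multicaster class and emit() method are illustrative names, not part of any library): the source produces each element once, and the mediator propagates it to every registered consumer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative stand-in for the idea behind publish()/connect()
public class Multicaster<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    // The source calls emit() once per element; every subscriber receives it
    public void emit(T value) {
        for (Consumer<T> subscriber : subscribers) {
            subscriber.accept(value);
        }
    }

    public static void main(String[] args) {
        Multicaster<Integer> clicks = new Multicaster<>();
        clicks.subscribe(x -> System.out.println("subscriber#1 got " + x));
        clicks.subscribe(x -> System.out.println("subscriber#2 got " + x));
        clicks.emit(42); // produced once, delivered to both subscribers
    }
}
```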

To unsubscribe the artificial Subscriber we perform:

s.unsubscribe();

Where:

Subscription s = obs.connect();

3.5. autoConnect()

This method implies that connect() isn’t called before or after subscriptions but automatically when the first Subscriber subscribes.

Using this method, we can’t call connect() ourselves as the returned object is a usual Observable which doesn’t have this method but uses an underlying ConnectableObservable:

public static void autoConnectAndSubscribe() throws InterruptedException {
    Observable obs = getObservable()
    .doOnNext(x -> LOGGER.info("saving " + x)).publish().autoConnect();

    LOGGER.info("autoconnect()");
    Thread.sleep(1000);
    LOGGER.info("subscribing #1");
    Subscription s1 = obs.subscribe((i) -> 
        LOGGER.info("subscriber#1 is printing x-coordinate " + i));
    Thread.sleep(1000);
    LOGGER.info("subscribing #2");
    Subscription s2 = obs.subscribe((i) -> 
        LOGGER.info("subscriber#2 is printing x-coordinate " + i));

    Thread.sleep(1000);
    LOGGER.info("unsubscribe 1");
    s1.unsubscribe();
    Thread.sleep(1000);
    LOGGER.info("unsubscribe 2");
    s2.unsubscribe();
}

Note that we can’t also unsubscribe the artificial Subscriber. We can unsubscribe all the real Subscribers but the artificial Subscriber will still process the events.

To understand this let’s look at what is happening at the end after the last subscriber has unsubscribed:

subscribing #1
saving 296
subscriber#1 is printing x-coordinate 296
saving 329
subscriber#1 is printing x-coordinate 329
subscribing #2
saving 226
subscriber#1 is printing x-coordinate 226
subscriber#2 is printing x-coordinate 226
unsubscribe 1
saving 268
subscriber#2 is printing x-coordinate 268
saving 234
subscriber#2 is printing x-coordinate 234
unsubscribe 2
saving 278
saving 268

As we can see clearing resources doesn’t happen and saving elements with doOnNext() continues after the second unsubscribing. This means that the artificial Subscriber doesn’t unsubscribe but continues to consume elements.

3.6. refCount()

refCount() is similar to autoConnect() in that connecting also happens automatically as soon as the first Subscriber subscribes.

Unlike autoConnect(), disconnecting also happens automatically when the last Subscriber unsubscribes:

public static void refCountAndSubscribe() throws InterruptedException {
    Observable obs = getObservable()
      .doOnNext(x -> LOGGER.info("saving " + x)).publish().refCount();

    LOGGER.info("refcount()");
    Thread.sleep(1000);
    LOGGER.info("subscribing #1");
    Subscription subscription1 = obs.subscribe(
      i -> LOGGER.info("subscriber#1 is printing x-coordinate " + i));
    Thread.sleep(1000);
    LOGGER.info("subscribing #2");
    Subscription subscription2 = obs.subscribe(
      i -> LOGGER.info("subscriber#2 is printing x-coordinate " + i));

    Thread.sleep(1000);
    LOGGER.info("unsubscribe#1");
    subscription1.unsubscribe();
    Thread.sleep(1000);
    LOGGER.info("unsubscribe#2");
    subscription2.unsubscribe();
}
refcount()
subscribing #1
saving 265
subscriber#1 is printing x-coordinate 265
saving 338
subscriber#1 is printing x-coordinate 338
subscribing #2
saving 203
subscriber#1 is printing x-coordinate 203
subscriber#2 is printing x-coordinate 203
unsubscribe#1
saving 294
subscriber#2 is printing x-coordinate 294
unsubscribe#2
clearing resources

4. Conclusion

The ConnectableObservable class helps to handle multiple subscribers with little effort.

Its methods look similar, but they change the subscribers’ behavior significantly due to implementation subtleties; even the order of the method calls matters.

The full source code for all the examples used in this article can be found in the GitHub project.

Java Weekly, Issue 237


Here we go…

1. Spring and Java

>> A beginner’s guide to the Hibernate JPQL and Native Query Plan Cache [vladmihalcea.com]

A solid introduction to the performance gains to be had through proper caching of pre-compiled JPA and native queries. Good stuff!

>> The best way to use SQL functions in JPQL or Criteria API queries with JPA and Hibernate [vladmihalcea.com]

A practical tutorial that shows how to register and use any SQL function with JPA and Hibernate.

>> Enhance your Java Spring application with R data science [medium.com]

A fascinating piece on achieving interoperability between Java and the R library in Spring Boot running on the polyglot GraalVM. Very cool.

>> Reactive Spring Security Authentication [medium.com]

An overview of the authentication mechanisms available in Spring Security Webflux.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Chaos Engineering – withstanding turbulent conditions in production [blog.codecentric.de]

An interesting methodology for finding and fixing potential defects in a distributed system. A must-read if you’re thinking of deploying microservices.

>> Comparing Apache Spark, Storm, Flink and Samza stream processing engines – Part 1 [blog.scottlogic.com]

A good general overview of stream processing engines, along with a simple use case implemented in three of the most popular engines from Apache.

>> Functional Reactive Programming – Streams on steroids [medium.com]

Speaking of streams, here’s a good writeup about a not-so-new programming paradigm that is quickly gaining momentum.

>> Auto Scaling Production Services on Titus [medium.com]

And, finally, a quick look at how Netflix’s need for automatic scaling policy support and a collaboration with AWS led to the new Custom Resource Scaling offering.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Dilbert, the Ultimate Editor [dilbert.com]

>> Honesty is the Best Policy [dilbert.com]

>> Hope is Not a Strategy [dilbert.com]

4. Pick of the Week

>> The open-plan office is a terrible, horrible, no good, very bad idea [m.signalvnoise.com]

Add Hours To a Date In Java


1. Overview

Before Java 8, java.util.Date was one of the most commonly used classes for representing date-time values in Java.

Then Java 8 introduced java.time.LocalDateTime and java.time.ZonedDateTime. Java 8 also allows us to represent a specific time on the timeline using java.time.Instant.

In this tutorial, we’ll learn to add or subtract n hours from a given date-time in Java. We’ll first look at some standard Java date-time related classes, and then we’ll showcase a few third-party options.

To learn more about the Java 8 DateTime API, we would suggest reading this article.

2. java.util.Date

If we’re using Java 7 or lower, we can use the java.util.Date and java.util.Calendar classes for most date-time related handling.

Let’s see how to add n hours to a given Date object:

public Date addHoursToJavaUtilDate(Date date, int hours) {
    Calendar calendar = Calendar.getInstance();
    calendar.setTime(date);
    calendar.add(Calendar.HOUR_OF_DAY, hours);
    return calendar.getTime();
}

Note that Calendar.HOUR_OF_DAY refers to a 24-hour clock.

The above method returns a new Date object, the value of which would be either (date + hours) or (date – hours), depending on whether we pass a positive or negative value of hours respectively.
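For instance, the following minimal sketch subtracts hours by passing a negative value (the class and variable names here are our own, not from the original example):

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class SubtractHoursDemo {

    // Same logic as addHoursToJavaUtilDate(): a negative amount moves the date backwards
    public static Date addHours(Date date, int hours) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTime(date);
        calendar.add(Calendar.HOUR_OF_DAY, hours);
        return calendar.getTime();
    }

    public static void main(String[] args) {
        Date fiveAm = new GregorianCalendar(2018, Calendar.JUNE, 25, 5, 0).getTime();
        Date twoAm = addHours(fiveAm, -3);
        System.out.println(twoAm); // 02:00 on the same day, in the default time zone
    }
}
```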

Suppose we have a Java 8 application but still want to work with java.util.Date instances.

In that case, we can take the following alternative approach:

  1. Use the java.util.Date.toInstant() method to convert a Date object to a java.time.Instant instance
  2. Add a specific Duration to the java.time.Instant object using the plus() method
  3. Recover our java.util.Date instance by passing the java.time.Instant object to the java.util.Date.from() method

Let’s have a quick look at this approach:

@Test
public void givenJavaUtilDate_whenUsingToInstant_thenAddHours() {
    Date actualDate = new GregorianCalendar(2018, Calendar.JUNE, 25, 5, 0)
      .getTime();
    Date expectedDate = new GregorianCalendar(2018, Calendar.JUNE, 25, 7, 0)
      .getTime();

    assertThat(Date.from(actualDate.toInstant().plus(Duration.ofHours(2))))
      .isEqualTo(expectedDate);
}

However, note that it’s always recommended to use the new DateTime API for all applications on Java 8 or higher versions.

3. java.time.LocalDateTime/ZonedDateTime

In Java 8 or later, adding hours to either a java.time.LocalDateTime or java.time.ZonedDateTime instance is pretty straightforward and makes use of the plusHours() method:

@Test
public void givenLocalDateTime_whenUsingPlusHours_thenAddHours() {
    LocalDateTime actualDateTime = LocalDateTime
      .of(2018, Month.JUNE, 25, 5, 0);
    LocalDateTime expectedDateTime = LocalDateTime.
      of(2018, Month.JUNE, 25, 10, 0);

    assertThat(actualDateTime.plusHours(5)).isEqualTo(expectedDateTime);
}

What if we wish to subtract a few hours?

Passing a negative value of hours to the plusHours() method would work just fine. However, it’s recommended to use the minusHours() method:

@Test
public void givenLocalDateTime_whenUsingMinusHours_thenSubtractHours() {
    LocalDateTime actualDateTime = LocalDateTime
      .of(2018, Month.JUNE, 25, 5, 0);
    LocalDateTime expectedDateTime = LocalDateTime
      .of(2018, Month.JUNE, 25, 3, 0);
   
    assertThat(actualDateTime.minusHours(2)).isEqualTo(expectedDateTime);

}

The plusHours() and minusHours() methods in java.time.ZonedDateTime work exactly the same way.
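To illustrate, here’s a minimal ZonedDateTime sketch (the zone choice and class name are our own):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class ZonedDateTimeHoursDemo {

    public static void main(String[] args) {
        ZonedDateTime start = ZonedDateTime.of(2018, 6, 25, 5, 0, 0, 0,
          ZoneId.of("Europe/Paris"));

        // plusHours() and minusHours() behave just like their LocalDateTime counterparts
        System.out.println(start.plusHours(5));  // 2018-06-25T10:00+02:00[Europe/Paris]
        System.out.println(start.minusHours(2)); // 2018-06-25T03:00+02:00[Europe/Paris]
    }
}
```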

4. java.time.Instant

As we know, java.time.Instant, introduced in the Java 8 date-time API, represents a specific moment on the timeline.

To add some hours to an Instant object, we can use its plus() method with a java.time.temporal.TemporalAmount:

@Test
public void givenInstant_whenUsingAddHoursToInstant_thenAddHours() {
    Instant actualValue = Instant.parse("2018-06-25T05:12:35Z");
    Instant expectedValue = Instant.parse("2018-06-25T07:12:35Z");

    assertThat(actualValue.plus(2, ChronoUnit.HOURS))
      .isEqualTo(expectedValue);
}

Similarly, the minus() method can be used for subtracting a specific TemporalAmount.
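For completeness, a quick sketch of subtraction with minus() (the class name is our own):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class InstantMinusDemo {

    public static void main(String[] args) {
        Instant start = Instant.parse("2018-06-25T05:12:35Z");

        // minus() accepts the same temporal units as plus()
        Instant twoHoursEarlier = start.minus(2, ChronoUnit.HOURS);
        System.out.println(twoHoursEarlier); // 2018-06-25T03:12:35Z
    }
}
```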

5. Apache Commons DateUtils

The DateUtils class from the Apache Commons Lang library exposes a static addHours() method:

public static Date addHours(Date date, int amount)

The method takes a java.util.Date object along with the amount of hours we wish to add, which can be either positive or negative.

A new java.util.Date object is returned as an outcome:

@Test
public void givenJavaUtilDate_whenUsingApacheCommons_thenAddHours() {
    Date actualDate = new GregorianCalendar(2018, Calendar.JUNE, 25, 5, 0)
      .getTime();
    Date expectedDate = new GregorianCalendar(2018, Calendar.JUNE, 25, 7, 0)
      .getTime();

    assertThat(DateUtils.addHours(actualDate, 2)).isEqualTo(expectedDate);
}

The latest version of Apache Commons Lang is available at Maven Central.

6. Joda Time

Joda Time is an alternative to the Java 8 DateTime API and provides its own DateTime implementations.

Most of its DateTime related classes expose plusHours() and minusHours() methods to help us add or subtract a given number of hours from a DateTime object.

Let’s look at an example:

@Test
public void givenJodaDateTime_whenUsingPlusHoursToDateTime_thenAddHours() {
    DateTime actualDateTime = new DateTime(2018, 5, 25, 5, 0);
    DateTime expectedDateTime = new DateTime(2018, 5, 25, 7, 0);

    assertThat(actualDateTime.plusHours(2)).isEqualTo(expectedDateTime);
}

We can easily check the latest available version of Joda Time at Maven Central.

7. Conclusion

In this tutorial, we covered several ways to add or subtract a given number of hours from standard Java date-time values.

We also looked at some third-party libraries as an alternative. As usual, the complete source code is available over on GitHub.

Increment Date in Java


1. Overview

In this tutorial, we’ll look at ways to increment a date by one day using Java. Before Java 8, the standard Java date and time libraries weren’t very user-friendly. Hence, Joda-Time became the de facto standard date and time library for Java prior to Java 8.

There are also other classes and libraries that could be used to accomplish the task, like java.util.Calendar and Apache Commons.

Java 8 included a better date and time API to address the shortcomings of its older libraries.

Therefore, we’re looking at how to increment a date by one day using the Java 8 date-time API, the Joda-Time API, Java’s Calendar class, and the Apache Commons library.

2. Maven Dependencies

The following dependencies should be included in the pom.xml file:

<dependencies>
    <dependency>
        <groupId>joda-time</groupId>
        <artifactId>joda-time</artifactId>
        <version>2.10</version>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.5</version>
    </dependency>
</dependencies>

You can find the latest version of the Joda-Time on Maven Central, and also the latest version of Apache Commons Lang.

3. Using java.time

The java.time.LocalDate class is an immutable date-time representation, often viewed as year-month-day.

LocalDate has many methods for date manipulation; let’s see how we can use it to accomplish the same task:

public static String addOneDay(String date) {
    return LocalDate
      .parse(date)
      .plusDays(1)
      .toString();
}

In this example, we’re using java.time.LocalDate class and its plusDays() method to increment the date by one day.

Now, let’s verify that this method is working as expected:

@Test
public void givenDate_whenUsingJava8_thenAddOneDay() 
  throws Exception {
 
    String incrementedDate = addOneDay("2018-07-03");
    assertEquals("2018-07-04", incrementedDate);
}

4. Using java.util.Calendar

Another approach is using java.util.Calendar and its add() method to increment the date.

We’ll use it along with java.text.SimpleDateFormat for date formatting purposes:

public static String addOneDayCalendar(String date) 
  throws ParseException {
 
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
    Calendar c = Calendar.getInstance();
    c.setTime(sdf.parse(date));
    c.add(Calendar.DATE, 1);
    return sdf.format(c.getTime());
}

java.text.SimpleDateFormat is there to ensure the expected date format is used. The date is increased via the add() method.

Once again, let’s make sure this approach works as intended:

@Test
public void givenDate_whenUsingCalendar_thenAddOneDay() 
  throws Exception {
 
    String incrementedDate = addOneDayCalendar("2018-07-03");
    assertEquals("2018-07-04", incrementedDate);
}

5. Using Joda-Time

The org.joda.time.DateTime class has many methods that help to properly deal with date and time.

Let’s see how we can use it to increment the date by one day:

public static String addOneDayJodaTime(String date) {
    DateTime dateTime = new DateTime(date);
    return dateTime
      .plusDays(1)
      .toString("yyyy-MM-dd");
}

Here, we use org.joda.time.DateTime class and its plusDays() method to increment the date by one day.

We can verify that the code above works with the following unit test:

@Test
public void givenDate_whenUsingJodaTime_thenAddOneDay() throws Exception {
    String incrementedDate = addOneDayJodaTime("2018-07-03");
    assertEquals("2018-07-04", incrementedDate);
}

6. Using Apache Commons

Another library commonly used for date manipulation (among other things) is Apache Commons. It’s a suite of utilities surrounding the use of the java.util.Calendar and java.util.Date objects.

For our task, we can use the org.apache.commons.lang3.time.DateUtils class and its addDays() method (note that SimpleDateFormat is again used for date formatting):

public static String addOneDayApacheCommons(String date) 
  throws ParseException {
 
    SimpleDateFormat sdf
      = new SimpleDateFormat("yyyy-MM-dd");
    Date incrementedDate = DateUtils
      .addDays(sdf.parse(date), 1);
    return sdf.format(incrementedDate);
}

As usual, we’ll verify the results with a unit test:

@Test
public void givenDate_whenUsingApacheCommons_thenAddOneDay()
  throws Exception {
 
    String incrementedDate = addOneDayApacheCommons(
      "2018-07-03");
    assertEquals("2018-07-04", incrementedDate);
}

7. Conclusion

In this quick article, we looked at various approaches to dealing with a simple task of incrementing date by one day. We’ve shown how it can be accomplished using Java’s core APIs as well as some popular 3rd party libraries.

The code samples used in this article can be found over on GitHub.

@Component vs @Repository and @Service in Spring


1. Introduction

In this quick tutorial, we’re going to learn about the differences between the @Component, @Repository, and @Service annotations in the Spring Framework.

2. Spring Annotations

In most typical applications, we have distinct layers like data access, presentation, service, business, etc.

And, in each layer, we have various beans. Simply put, to detect them automatically, Spring uses classpath scanning annotations.

Then, it registers each bean in the ApplicationContext.

Here’s a quick overview of a few of these annotations:

  • @Component is a generic stereotype for any Spring-managed component
  • @Service annotates classes at the service layer
  • @Repository annotates classes at the persistence layer, which will act as a database repository

We already have an extended article about these annotations. So we’ll keep the focus only on the differences between them.

3. What’s Different?

The major difference between these stereotypes is that they are used for different classifications. When we annotate a class for auto-detection, we should use the respective stereotype.

Now, let’s go through them in more detail.

3.1. @Component

We can use @Component across the application to mark the beans as Spring’s managed components. Spring only picks up and registers beans annotated with @Component, and doesn’t look for @Service and @Repository in general.

They are registered in ApplicationContext because they themselves are annotated with @Component:

@Component
public @interface Service {
}
@Component
public @interface Repository {
}

@Service and @Repository are special cases of @Component. They are technically the same, but we use them for different purposes.

3.2. @Repository

@Repository’s job is to catch persistence-specific exceptions and rethrow them as one of Spring’s unified unchecked exceptions.

For this, Spring provides PersistenceExceptionTranslationPostProcessor, which we need to add to our application context:

<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>

This bean post processor adds an advisor to any bean that’s annotated with @Repository.
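As an illustration, a hypothetical DAO that benefits from this translation might look like the following sketch (the class and method names are our own, and an Employee JPA entity is assumed):

```java
@Repository
public class EmployeeDao {

    @PersistenceContext
    private EntityManager entityManager;

    public Employee findById(Long id) {
        // If the JPA provider throws a persistence-specific exception here,
        // the advisor rethrows it as a Spring DataAccessException subtype
        return entityManager.find(Employee.class, id);
    }
}
```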

3.3. @Service

We mark beans with @Service to indicate that they hold the business logic. Other than being used in the service layer, there’s no other special behavior for this annotation.
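As a sketch, a hypothetical service class could look like this (the class and method names are our own, and the EmployeeDao collaborator is assumed to exist):

```java
@Service
public class EmployeeService {

    private final EmployeeDao employeeDao;

    public EmployeeService(EmployeeDao employeeDao) {
        this.employeeDao = employeeDao;
    }

    // Business logic belongs here; the annotation itself adds no behavior,
    // it only documents the role of the class and enables auto-detection
    public Employee findEmployee(Long id) {
        return employeeDao.findById(id);
    }
}
```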

4. Conclusion

In this write-up, we learned about the differences between the @Component, @Repository, and @Service annotations. We examined each annotation separately along with its areas of use.

In conclusion, it’s always a good idea to choose the annotation based on its layer convention.

Guide to Spring 5 WebFlux


1. Overview

Spring WebFlux framework is part of Spring 5 and provides reactive programming support for web applications.

In this tutorial, we’ll be creating a small reactive REST application using the reactive web components RestController and WebClient.

We will also be looking at how to secure our reactive endpoints using Spring Security.

2. Spring WebFlux Framework

Spring WebFlux internally uses Project Reactor and its publisher implementations – Flux and Mono.

The new framework supports two programming models:

  • Annotation-based reactive components
  • Functional routing and handling

Here, we’re going to be focusing on the Annotation-based reactive components, as we already explored the functional style – routing and handling.

3. Dependencies

Let’s start with the spring-boot-starter-webflux dependency, which actually pulls in all other required dependencies:

  • spring-boot and spring-boot-starter for basic Spring Boot application setup
  • spring-webflux framework
  • reactor-core that we need for reactive streams and also reactor-netty
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
    <version>2.0.3.RELEASE</version>
</dependency>

The latest spring-boot-starter-webflux can be downloaded from Maven Central.

4. Reactive REST Application

We’ll now build a very simple Reactive REST EmployeeManagement application – using Spring WebFlux:

  • We’ll use a simple domain model – Employee with an id and a name field
  • We’ll build REST APIs for publishing and retrieving single as well as collection Employee resources, using RestController and WebClient
  • And we will also be creating a secured reactive endpoint using WebFlux and Spring Security

5. Reactive RestController

Spring WebFlux supports the annotation-based configurations in the same way as Spring Web MVC framework.

To begin with, on the server side, we create an annotated controller that publishes our reactive streams of Employee.

Let’s create our annotated EmployeeController:

@RestController
@RequestMapping("/employees")
public class EmployeeReactiveController {

    private final EmployeeRepository employeeRepository;
    
    // constructor...
}

EmployeeRepository can be any data repository that supports non-blocking reactive streams.

5.1. Single Resource

Let’s create an endpoint in our controller that publishes a single Employee resource:

@GetMapping("/{id}")
private Mono<Employee> getEmployeeById(@PathVariable String id) {
    return employeeRepository.findEmployeeById(id);
}

For a single Employee resource, we have used a Mono of type Employee because it will emit at most 1 element.

5.2. Collection Resource

Let’s also add an endpoint in our controller that publishes the collection resource of all Employees:

@GetMapping
private Flux<Employee> getAllEmployees() {
    return employeeRepository.findAllEmployees();
}

For the collection resource, we have used Flux of type Employee – since that’s the publisher focused on emitting 0..n elements.
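Although this article focuses on retrieval, a publishing endpoint follows the same pattern; here’s a quick sketch, assuming our repository exposes a saveEmployee() method (the method names are our own):

```java
@PostMapping
private Mono<Employee> addEmployee(@RequestBody Employee employee) {
    return employeeRepository.saveEmployee(employee);
}
```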

6. Reactive Web Client

WebClient introduced in Spring 5 is a non-blocking client with support for Reactive Streams.

On the client side, we use WebClient to retrieve data from our endpoints created in EmployeeController.

Let’s create a simple EmployeeWebClient:

public class EmployeeWebClient {

    WebClient client = WebClient.create("http://localhost:8080");

    // ...
}

Here we have created a WebClient using its factory method create. It’ll point to localhost:8080 for relative URLs.

6.1. Retrieving a Single Resource

To retrieve a single resource of type Mono from the endpoint /employees/{id}:

Mono<Employee> employeeMono = client.get()
  .uri("/employees/{id}", "1")
  .retrieve()
  .bodyToMono(Employee.class);

employeeMono.subscribe(System.out::println);

6.2. Retrieving Collection Resource

Similarly, to retrieve a collection resource of type Flux from the endpoint /employees:

Flux<Employee> employeeFlux = client.get()
  .uri("/employees")
  .retrieve()
  .bodyToFlux(Employee.class);
        
employeeFlux.subscribe(System.out::println);

We also have a detailed article on setting up and working with WebClient.

7. Spring WebFlux Security

We can use Spring Security to secure our reactive endpoints.

Let’s suppose we have a new endpoint in our EmployeeController. This endpoint updates Employee details and sends back the updated Employee.

Since this allows users to change existing employees, we want to restrict this endpoint to ADMIN role users only.

Let’s add a new method to our EmployeeController:

@PostMapping("/update")
private Mono<Employee> updateEmployee(@RequestBody Employee employee) {
    return employeeRepository.updateEmployee(employee);
}

Now, to restrict access to this method, let’s create a security configuration class and define path-based rules that only allow ADMIN users:

@EnableWebFluxSecurity
public class EmployeeWebSecurityConfig {

    // ...

    @Bean
    public SecurityWebFilterChain springSecurityFilterChain(
      ServerHttpSecurity http) {
        http.csrf().disable()
          .authorizeExchange()
          .pathMatchers(HttpMethod.POST, "/employees/update").hasRole("ADMIN")
          .pathMatchers("/**").permitAll()
          .and()
          .httpBasic();
        return http.build();
    }
}

This configuration will restrict access to the endpoint /employees/update. Therefore only users having a role ADMIN will be able to access this endpoint and update an existing Employee.

Finally, the annotation @EnableWebFluxSecurity adds Spring Security WebFlux support with some default configurations.
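For basic authentication to work, the configuration also needs a user store; a minimal in-memory sketch could look like this (the credentials are placeholders, and withDefaultPasswordEncoder() is for demos only):

```java
@Bean
public MapReactiveUserDetailsService userDetailsService() {
    UserDetails admin = User.withDefaultPasswordEncoder()
      .username("admin")
      .password("password")
      .roles("ADMIN")
      .build();
    return new MapReactiveUserDetailsService(admin);
}
```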

We also have a detailed article on configuring and working with Spring WebFlux security.

8. Conclusion

In this article, we’ve explored how to create and work with reactive web components supported by Spring WebFlux framework by creating a small Reactive REST application.

We learned how to use RestController and WebClient to publish and consume reactive streams respectively.

We also looked into how to create a secured reactive endpoint with the help of Spring Security.

Other than Reactive RestController and WebClient, WebFlux framework also supports reactive WebSocket and corresponding WebSocketClient for socket style streaming of Reactive Streams.

We have a detailed article focused on working with Reactive WebSocket with Spring 5.

Finally, the complete source code used in this tutorial is available over on GitHub.


Spring Security Login Page with React


1. Overview

React is a component-based JavaScript library built by Facebook. With React, we can build complex web applications with ease. In this article, we’re going to make Spring Security work together with a React Login page.

We’ll take advantage of the existing Spring Security configurations of previous examples. So we’ll build on top of a previous article about creating a Form Login with Spring Security.

2. Set up React

First, let’s use the command-line tool create-react-app to create an application by executing the command “create-react-app react”.

We’ll have a configuration like the following in react/package.json:

{
    "name": "react",
    "version": "0.1.0",
    "private": true,
    "dependencies": {
        "react": "^16.4.1",
        "react-dom": "^16.4.1",
        "react-scripts": "1.1.4"
    },
    "scripts": {
        "start": "react-scripts start",
        "build": "react-scripts build",
        "test": "react-scripts test --env=jsdom",
        "eject": "react-scripts eject"
    }
}

Then, we’ll use the frontend-maven-plugin to help build our React project with Maven:

<plugin>
    <groupId>com.github.eirslett</groupId>
    <artifactId>frontend-maven-plugin</artifactId>
    <version>1.6</version>
    <configuration>
        <nodeVersion>v8.11.3</nodeVersion>
        <npmVersion>6.1.0</npmVersion>
        <workingDirectory>src/main/webapp/WEB-INF/view/react</workingDirectory>
    </configuration>
    <executions>
        <execution>
            <id>install node and npm</id>
            <goals>
                <goal>install-node-and-npm</goal>
            </goals>
        </execution>
        <execution>
            <id>npm install</id>
            <goals>
                <goal>npm</goal>
            </goals>
        </execution>
        <execution>
            <id>npm run build</id>
            <goals>
                <goal>npm</goal>
            </goals>
            <configuration>
                <arguments>run build</arguments>
            </configuration>
        </execution>
    </executions>
</plugin>

The latest version of the plugin can be found here.

When we run mvn compile, this plugin will download node and npm, install all node module dependencies and build the react project for us.

There are several configuration properties we need to explain here. We specified the versions of node and npm so that the plugin knows which versions to download.

Our React login page will serve as a static page in Spring, so we use “src/main/webapp/WEB-INF/view/react” as npm‘s working directory.

3. Spring Security Configuration

Before we dive into the React components, we update the Spring configuration to serve the static resources of our React app:

@EnableWebMvc
@Configuration
public class MvcConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(
      ResourceHandlerRegistry registry) {
 
        registry.addResourceHandler("/static/**")
          .addResourceLocations("/WEB-INF/view/react/build/static/");
        registry.addResourceHandler("/*.js")
          .addResourceLocations("/WEB-INF/view/react/build/");
        registry.addResourceHandler("/*.json")
          .addResourceLocations("/WEB-INF/view/react/build/");
        registry.addResourceHandler("/*.ico")
          .addResourceLocations("/WEB-INF/view/react/build/");
        registry.addResourceHandler("/index.html")
          .addResourceLocations("/WEB-INF/view/react/build/index.html");
    }
}

Note that we add the login page “index.html” as a static resource instead of a dynamically served JSP.

Next, we update the Spring Security configuration to allow access to these static resources.

Instead of using “login.jsp” as we did in the previous form login article, here we use “index.html” as our Login page:

@Configuration
@EnableWebSecurity
@Profile("!https")
public class SecSecurityConfig 
  extends WebSecurityConfigurerAdapter {

    //...

    @Override
    protected void configure(final HttpSecurity http) 
      throws Exception {
        http.csrf().disable().authorizeRequests()
          //...
          .antMatchers(
            HttpMethod.GET,
            "/index*", "/static/**", "/*.js", "/*.json", "/*.ico")
            .permitAll()
          .anyRequest().authenticated()
          .and()
          .formLogin().loginPage("/index.html")
          .loginProcessingUrl("/perform_login")
          .defaultSuccessUrl("/homepage.html",true)
          .failureUrl("/index.html?error=true")
          //...
    }
}

As we can see from the snippet above when we post form data to “/perform_login“, Spring will redirect us to “/homepage.html” if the credentials match successfully and to “/index.html?error=true” otherwise.

4. React Components

Now let’s get our hands dirty on React. We’ll build and manage a form login using components.

Note that we’ll use ES6 (ECMAScript 2015) syntax to build our application.

4.1. Input

Let’s start with an Input component that backs the <input /> elements of the login form in react/src/Input.js:

import React, { Component } from 'react'
import PropTypes from 'prop-types'

class Input extends Component {
    constructor(props){
        super(props)
        this.state = {
            value: props.value? props.value : '',
            className: props.className? props.className : '',
            error: false
        }
    }

    //...

    render () {
        const {handleError, ...opts} = this.props
        this.handleError = handleError
        return (
          <input {...opts} value={this.state.value}
            onChange={this.inputChange} className={this.state.className} /> 
        )
    }
}

Input.propTypes = {
  name: PropTypes.string,
  placeholder: PropTypes.string,
  type: PropTypes.string,
  className: PropTypes.string,
  value: PropTypes.string,
  handleError: PropTypes.func
}

export default Input

As seen above, we wrap the <input /> element into a React controlled component to be able to manage its state and perform field validation.

React provides a way to validate the types using PropTypes. Specifically, we use Input.propTypes = {…} to validate the type of properties passed in by the user.

Note that PropTypes validation works in development only; it checks that all the assumptions we’re making about our components are actually met.

It’s better to have it than to get surprised by random hiccups in production.

4.2. Form

Next, we’ll build a generic Form component in the file Form.js that combines multiple instances of our Input component on which we can base our login form.

In the Form component, we take attributes of HTML <input/> elements and create Input components from them.

Then the Input components and validation error messages are inserted into the Form:

import React, { Component } from 'react'
import PropTypes from 'prop-types'
import Input from './Input'

class Form extends Component {

    //...

    render() {
        const inputs = this.props.inputs.map(
          ({name, placeholder, type, value, className}, index) => (
            <Input key={index} name={name} placeholder={placeholder} type={type} value={value}
              className={type==='submit'? className : ''} handleError={this.handleError} />
          )
        )
        const errors = this.renderError()
        return (
            <form {...this.props} onSubmit={this.handleSubmit} ref={fm => {this.form=fm}} >
              {inputs}
              {errors}
            </form>
        )
    }
}

Form.propTypes = {
  name: PropTypes.string,
  action: PropTypes.string,
  method: PropTypes.string,
  inputs: PropTypes.array,
  error: PropTypes.string
}

export default Form

Now let’s take a look at how we manage field validation errors and login error:

class Form extends Component {

    constructor(props) {
        super(props)
        if(props.error) {
            this.state = {
              failure: 'wrong username or password!',
              errcount: 0
            }
        } else {
            this.state = { errcount: 0 }
        }
    }

    handleError = (field, errmsg) => {
        if(!field) return

        if(errmsg) {
            this.setState((prevState) => ({
                failure: '',
                errcount: prevState.errcount + 1, 
                errmsgs: {...prevState.errmsgs, [field]: errmsg}
            }))
        } else {
            this.setState((prevState) => ({
                failure: '',
                errcount: prevState.errcount===1? 0 : prevState.errcount-1,
                errmsgs: {...prevState.errmsgs, [field]: ''}
            }))
        }
    }

    renderError = () => {
        if(this.state.errcount || this.state.failure) {
            const errmsg = this.state.failure 
              || Object.values(this.state.errmsgs).find(v=>v)
            return <div className="error">{errmsg}</div>
        }
    }

    //...

}

In this snippet, we define the handleError function to manage the error state of the form. Recall that we also used it for Input field validation. Actually, handleError() is passed to the Input Components as a callback in the render() function.

We use renderError() to construct the error message element. Note that Form’s constructor consumes an error property. This property indicates if the login action fails.

Then comes the form submission handler:

class Form extends Component {

    //...

    handleSubmit = (event) => {
        event.preventDefault()
        if(!this.state.errcount) {
            const data = new FormData(this.form)
            fetch(this.form.action, {
              method: this.form.method,
              body: new URLSearchParams(data)
            })
            .then(v => {
                if(v.redirected) window.location = v.url
            })
            .catch(e => console.warn(e))
        }
    }
}

We wrap all form fields into FormData and send it to the server using the fetch API.

Let’s not forget that our login form comes with a successUrl and a failureUrl, meaning that whether or not the request is successful, the response will require a redirection.

That’s why we need to handle redirection in the response callback.

4.3. Form Rendering

Now that we’ve set up all the components we need, we can continue to put them in the DOM. The basic HTML structure is as follows (find it under react/public/index.html):

<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- ... -->
  </head>
  <body>

    <div id="root">
      <div id="container"></div>
    </div>

  </body>
</html>

Finally, we’ll render the Form into the <div/> with id “container” in react/src/index.js:

import React from 'react'
import ReactDOM from 'react-dom'
import './index.css'
import Form from './Form'

const inputs = [{
  name: "username",
  placeholder: "username",
  type: "text"
},{
  name: "password",
  placeholder: "password",
  type: "password"
},{
  type: "submit",
  value: "Submit",
  className: "btn" 
}]

const props = {
  name: 'loginForm',
  method: 'POST',
  action: '/perform_login',
  inputs: inputs
}

const params = new URLSearchParams(window.location.search)

ReactDOM.render(
  <Form {...props} error={params.get('error')} />,
  document.getElementById('container'))

So our form now contains two input fields: username and password, and a submit button.

Here we pass an additional error attribute to the Form component because we want to handle login error after redirection to the failure URL: /index.html?error=true.

form login error

Now we’ve finished building a Spring Security login application using React. The last thing we need to do is to run mvn compile.

During the process, the Maven plugin will help build our React application and gather the build result in src/main/webapp/WEB-INF/view/react/build.

5. Conclusion

In this article, we’ve covered how to build a React login app and let it interact with a Spring Security backend. A more complex application would involve state transition and routing using React Router or Redux, but that’d be beyond the scope of this article.

As always, the full implementation can be found over on GitHub. To run it locally, execute mvn jetty:run in the project root folder, then we can access the React login page at http://localhost:8080.

Handling Errors in Spring WebFlux


1. Overview

In this tutorial, we’ll look at various strategies available for handling errors in a Spring WebFlux project while walking through a practical example.

We’ll also point out where it might be advantageous to use one strategy over another and provide a link to the full source code at the end.

2. Setting Up the Example

The Maven setup is the same as in our previous article, which provides an introduction to Spring WebFlux.

For our example, we’ll use a RESTful endpoint that takes a username as a query parameter and returns “Hello username” as a result.

First, let’s create a router function that routes the /hello request to a method named handleRequest in the passed-in handler:

@Bean
public RouterFunction<ServerResponse> routeRequest(Handler handler) {
    return RouterFunctions.route(RequestPredicates.GET("/hello")
      .and(RequestPredicates.accept(MediaType.TEXT_PLAIN)), 
        handler::handleRequest);
}

Next, we’ll define the handleRequest() method, which calls the sayHello() method and includes its result in the ServerResponse body:

public Mono<ServerResponse> handleRequest(ServerRequest request) {
    return 
      //...
        sayHello(request)
      //...
}

Finally, the sayHello() method is a simple utility method that concatenates the “Hello” String and the username:

private Mono<String> sayHello(ServerRequest request) {
    //...
    return Mono.just("Hello, " + request.queryParam("username").get());
    //...
}

So long as a username is present as part of our request, e.g. if the endpoint is called as “/hello?username=Tonni”, this endpoint will always function correctly.

However, if we call the same endpoint without specifying a username e.g. “/hello”, it will throw an exception.

Below, we’ll look at where and how we can reorganize our code to handle this exception in WebFlux.

3. Handling Errors at a Functional Level

There are two key operators built into the Mono and Flux APIs to handle errors at a functional level.

Let’s briefly explore them and their usage.

3.1. Handling Errors with onErrorReturn

We can use onErrorReturn() to return a static default value whenever an error occurs:

public Mono<ServerResponse> handleRequest(ServerRequest request) {
    return sayHello(request)
      .onErrorReturn("Hello Stranger")
      .flatMap(s -> ServerResponse.ok()
      .contentType(MediaType.TEXT_PLAIN)
      .syncBody(s));
}

Here we’re returning a static “Hello Stranger” whenever the buggy concatenation function sayHello() throws an exception.

3.2. Handling Errors with onErrorResume

There are three ways that we can use onErrorResume to handle errors:

  • Compute a dynamic fallback value
  • Execute an alternative path with a fallback method
  • Catch, wrap, and re-throw an error e.g. as a custom business exception

Let’s see how we can compute a value:

public Mono<ServerResponse> handleRequest(ServerRequest request) {
    return sayHello(request)
      .flatMap(s -> ServerResponse.ok()
      .contentType(MediaType.TEXT_PLAIN)
          .syncBody(s))
        .onErrorResume(e -> Mono.just("Error " + e.getMessage())
          .flatMap(s -> ServerResponse.ok()
            .contentType(MediaType.TEXT_PLAIN)
            .syncBody(s)));
}

Here, we’re returning a String consisting of the dynamically obtained error message appended to the string “Error” whenever sayHello() throws an exception.

Next, let’s call a fallback method when an error occurs:

public Mono<ServerResponse> handleRequest(ServerRequest request) {
    return sayHello(request)
      .flatMap(s -> ServerResponse.ok()
      .contentType(MediaType.TEXT_PLAIN)
      .syncBody(s))
      .onErrorResume(e -> sayHelloFallback()
      .flatMap(s -> ServerResponse.ok()
      .contentType(MediaType.TEXT_PLAIN)
      .syncBody(s)));
}

Here, we’re calling the alternative method sayHelloFallback() whenever sayHello() throws an exception.

The final option using onErrorResume() is to catch, wrap, and re-throw an error e.g. as a NameRequiredException:

public Mono<ServerResponse> handleRequest(ServerRequest request) {
    return ServerResponse.ok()
      .body(sayHello(request)
      .onErrorResume(e -> Mono.error(new NameRequiredException(
        HttpStatus.BAD_REQUEST, 
        "username is required", e))), String.class);
}

Here, we’re throwing a custom exception with the message: “username is required” whenever sayHello() throws an exception.

4. Handling Errors at a Global Level

So far, all the examples we’ve presented have tackled error handling at a functional level.

We can, however, opt to handle our WebFlux errors at a global level. To do this, we only need to take two steps:

  • Customize the Global Error Response Attributes
  • Implement the Global Error Handler

The exception that our handler throws will be automatically translated to an HTTP status and a JSON error body. To customize these, we can simply extend the DefaultErrorAttributes class and override its getErrorAttributes() method:

public class GlobalErrorAttributes extends DefaultErrorAttributes {
    
    @Override
    public Map<String, Object> getErrorAttributes(ServerRequest request, 
      boolean includeStackTrace) {
        Map<String, Object> map = super.getErrorAttributes(
          request, includeStackTrace);
        map.put("status", HttpStatus.BAD_REQUEST);
        map.put("message", "username is required");
        return map;
    }

}

Here, we want the status: BAD_REQUEST and message: “username is required” returned as part of the error attributes when an exception occurs.

Next, let’s implement the Global Error Handler. For this, Spring provides a convenient AbstractErrorWebExceptionHandler class for us to extend and implement in handling global errors:

@Component
@Order(-2)
public class GlobalErrorWebExceptionHandler extends 
    AbstractErrorWebExceptionHandler {

    // constructors

    @Override
    protected RouterFunction<ServerResponse> getRoutingFunction(
      ErrorAttributes errorAttributes) {

        return RouterFunctions.route(
          RequestPredicates.all(), this::renderErrorResponse);
    }

    private Mono<ServerResponse> renderErrorResponse(
       ServerRequest request) {

       Map<String, Object> errorPropertiesMap = getErrorAttributes(request, false);

       return ServerResponse.status(HttpStatus.BAD_REQUEST)
         .contentType(MediaType.APPLICATION_JSON_UTF8)
         .body(BodyInserters.fromObject(errorPropertiesMap));
    }
}

In this example, we set the order of our global error handler to -2. This is to give it a higher priority than the DefaultErrorWebExceptionHandler which is registered at @Order(-1).

The errorAttributes object is the one that we pass in through the Web Exception Handler’s constructor; ideally, this is our customized Error Attributes class.

Then, we’re clearly stating that we want to route all error handling requests to the renderErrorResponse() method.

Finally, we get the error attributes and insert them inside a server response body.

This then produces a JSON response with details of the error, the HTTP status and the exception message for machine clients. For browser clients, it has a ‘whitelabel’ error handler that renders the same data in HTML format. This can, of course, be customized.
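With the customized attributes above, the machine-client response would look roughly like this (an illustrative sketch; the exact field set and formats depend on the Spring Boot version, with timestamp and path coming from DefaultErrorAttributes):

```json
{
  "timestamp": "2018-07-27T16:16:47.000+0000",
  "path": "/hello",
  "status": "BAD_REQUEST",
  "message": "username is required"
}
```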

5. Conclusion

In this article, we looked at various strategies available for handling errors in a Spring WebFlux project and pointed out where it might be advantageous to use one strategy over another.

As promised, the full source code that accompanies the article is available over on GitHub.

Container Configuration in Spring Boot 2


1. Overview 

In this quick tutorial, we’ll have a look at how to replace the EmbeddedServletContainerCustomizer and ConfigurableEmbeddedServletContainer in Spring Boot 2.

These classes were part of previous versions of Spring Boot, but have been removed starting with Spring Boot 2. Of course, the functionality is still available through the interface WebServerFactoryCustomizer and the class ConfigurableServletWebServerFactory.

Let’s have a look at how to use these.

2. Prior to Spring Boot 2

First, let’s have a look at a configuration that uses the old class and interface and that we’ll need to replace:

@Component
public class CustomContainer implements EmbeddedServletContainerCustomizer {
 
    @Override
    public void customize(ConfigurableEmbeddedServletContainer container) {
        container.setPort(8080);
        container.setContextPath("");
     }
}

Here, we’re customizing the servlet container’s port and context path.

Another possibility to achieve this is to use more specific sub-classes of ConfigurableEmbeddedServletContainer, for a container type such as Tomcat:

@Component
public class CustomContainer implements EmbeddedServletContainerCustomizer {
 
    @Override
    public void customize(ConfigurableEmbeddedServletContainer container) {
        if (container instanceof TomcatEmbeddedServletContainerFactory) {
            TomcatEmbeddedServletContainerFactory tomcatContainer = 
              (TomcatEmbeddedServletContainerFactory) container;
            tomcatContainer.setPort(8080);
            tomcatContainer.setContextPath("");
        }
    }
}

3. Upgrade to Spring Boot 2

In Spring Boot 2, the EmbeddedServletContainerCustomizer interface is replaced by WebServerFactoryCustomizer, while the ConfigurableEmbeddedServletContainer class is replaced with ConfigurableServletWebServerFactory.

Let’s rewrite the previous example for a Spring Boot 2 project:

@Component
public class CustomContainer implements 
  WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> {
 
    public void customize(ConfigurableServletWebServerFactory factory) {
        factory.setPort(8080);
        factory.setContextPath("");
     }
}

And the second example will now use a TomcatServletWebServerFactory:

@Component
public class CustomContainer implements 
  WebServerFactoryCustomizer<TomcatServletWebServerFactory> {

    @Override
    public void customize(TomcatServletWebServerFactory factory) {
        factory.setContextPath("");
        factory.setPort(8080);
    }
}

Similarly, we have the JettyServletWebServerFactory and UndertowServletWebServerFactory as equivalents for the removed JettyEmbeddedServletContainerFactory and UndertowEmbeddedServletContainerFactory.

4. Conclusion

This short write-up showed how to fix an issue we might encounter when upgrading a Spring Boot application to version 2.x.

An example of a Spring Boot 2 project is available in our GitHub repository.

Optimizing Spring Integration Tests


1. Introduction

In this article, we’ll have a holistic discussion about integration tests using Spring and how to optimize them.

First, we’ll briefly discuss the importance of integration tests and their place in modern software, focusing on the Spring ecosystem.

Later, we’ll cover multiple scenarios, focusing on web-apps.

Next, we’ll discuss some strategies to improve testing speed, by learning about different approaches that could influence both the way we shape our tests and the way we shape the app itself.

Before getting started, it is important to keep in mind that this is an opinion article based on experience. Some of these things might suit you, some might not.

Finally, this article uses Kotlin for the code samples to keep them as concise as possible, but the concepts aren’t specific to this language and code snippets should feel meaningful to Java and Kotlin developers alike.

2. Integration tests

Integration tests are a fundamental part of automated test suites. Although they shouldn’t be as numerous as unit tests if we follow a healthy test pyramid, relying on frameworks such as Spring leaves us needing a fair amount of integration testing in order to de-risk certain behaviors of our system.

The more we simplify our code by using Spring modules (data, security, social…), the bigger a need for integration tests. This becomes particularly true when we move bits and bobs of our infrastructure into @Configuration classes.

We shouldn’t “test the framework”, but we should certainly verify the framework is configured to fulfill our needs.

Integration tests help us build confidence but they come at a price:

  • Slower execution speed, which means slower builds
  • A broader testing scope, which is not ideal in most cases

With this in mind, we’ll try to find some solutions to mitigate the above-mentioned problems.

3. Testing Web Apps

Spring brings a few options in order to test web applications, and most Spring developers are familiar with them, these are:

  • MockMvc: Mocks the servlet API, useful for non-reactive web apps
  • TestRestTemplate: Can be used pointing to our app, useful for non-reactive web apps where mocked servlets are not desirable
  • WebTestClient: Is a testing tool for reactive web apps, both with mocked requests/responses or hitting a real server

As we already have articles covering these topics we won’t spend time talking about them.

Feel free to have a look if you’d like to dig deeper.

4. Optimizing Execution Time

Integration tests are great. They give us a good degree of confidence. Also if implemented appropriately, they can describe the intent of our app in a very clear way, with less mocking and setup noise.

However, as our app matures and development piles up, build time inevitably goes up. As build time increases, it might become impractical to keep running all tests every time.

This impacts our feedback loop and gets in the way of best development practices.

Furthermore, integration tests are inherently expensive. Starting up persistence of some sort, sending requests through (even if they never leave localhost), or doing some IO simply takes time.

It’s paramount to keep an eye on our build time, including test execution. And there are some tricks we can apply in Spring to keep it low.

In the next sections, we’ll cover a few points to help us optimize our build time, as well as some pitfalls that might impact its speed:

  • Using profiles wisely – how profiles impact performance
  • Reconsidering @MockBean – how mocking hits performance
  • Refactoring @MockBean – alternatives to improve performance
  • Thinking carefully about @DirtiesContext – a useful but dangerous annotation and how not to use it
  • Using test slices – a cool tool that can help or get on our way
  • Using class inheritance – a way to organize tests in a safe manner
  • State management – good practices to avoid flaky tests
  • Refactoring into unit tests – the best way to get a solid and snappy build

Let’s get started!

4.1. Using Profiles Wisely

Profiles are a pretty neat tool; namely, simple tags that can enable or disable certain areas of our app. We could even implement feature flags with them!

As our profiles get richer, it’s tempting to swap them every now and then in our integration tests. There are convenient tools to do so, like @ActiveProfiles. However, every time we run a test with a new profile, a new ApplicationContext gets created.

Creating an application context might be snappy with a vanilla Spring Boot app with nothing in it. Add an ORM and a few modules, and it will quickly skyrocket to 7+ seconds.

Add a bunch of profiles, and scatter them through a few tests and we’ll quickly get a 60+ seconds build (assuming we run tests as part of our build – and we should).

Once we face a complex enough application, fixing this is daunting. However, if we plan carefully in advance, it becomes trivial to keep a sensible build time.

There are a few tricks we could keep in mind when it comes to profiles in integration tests:

  • Create an aggregate profile, i.e. test, include all needed profiles within – stick to our test profile everywhere
  • Design our profiles with testability in mind. If we end up having to switch profiles perhaps there is a better way
  • State our test profile in a centralized place – we’ll talk about this later
  • Avoid testing all profiles combinations. Alternatively, we could have an e2e test-suite per environment testing the app with that specific profile-set
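As a sketch of the first tip, the aggregate test profile can pull in the profiles it needs through a profile-specific properties file. The included profile names below are made up for illustration:

```yaml
# src/test/resources/application-test.yml
spring:
  profiles:
    # activated automatically whenever the 'test' profile is active
    include:
      - embedded-db
      - stubbed-clients
```

Tests then only ever declare @ActiveProfiles("test"), so the cached ApplicationContext can be reused across the whole suite.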

4.2. The Problems with @MockBean

@MockBean is a pretty powerful tool.

When we need some Spring magic but want to mock a particular component, @MockBean comes in really handy. But it does so at a price.

Every time @MockBean appears in a class, the ApplicationContext cache gets marked as dirty; hence, the runner will clean the cache after the test class is done, which again adds extra seconds to our build.

This is a controversial one, but trying to exercise the actual app instead of mocking for this particular scenario could help. Of course, there’s no silver bullet here. Boundaries get blurry when we don’t allow ourselves to mock dependencies.

We might think: Why would we persist when all we want to test is our REST layer? This is a fair point, and there’s always a compromise.

However, with a few principles in mind, this can actually be turned into an advantage that leads to a better design of both our tests and our app, and reduces testing time.

4.3. Refactoring @MockBean

In this section, we’ll try to refactor a ‘slow’ test using @MockBean to make it reuse the cached ApplicationContext.

Let’s assume we want to test a POST that creates a user. If we were mocking – using @MockBean, we could simply verify that our service has been called with a nicely serialized user.

If we tested our service properly this approach should suffice:

class UsersControllerIntegrationTest : AbstractSpringIntegrationTest() {

    @Autowired
    lateinit var mvc: MockMvc
    
    @MockBean
    lateinit var userService: UserService

    @Test
    fun links() {
        mvc.perform(post("/users")
          .contentType(MediaType.APPLICATION_JSON)
          .content("""{ "name":"jose" }"""))
          .andExpect(status().isCreated)
        
        verify(userService).save("jose")
    }
}

interface UserService {
    fun save(name: String)
}

We want to avoid @MockBean though. So we’ll end up persisting the entity (assuming that’s what the service does).

The most naive approach here would be to test the side effect: after POSTing, the user is in the DB; in our example, this would use JDBC.

This, however, violates testing boundaries:

@Test
fun links() {
    mvc.perform(post("/users")
      .contentType(MediaType.APPLICATION_JSON)
      .content("""{ "name":"jose" }"""))
      .andExpect(status().isCreated)

    assertThat(
      JdbcTestUtils.countRowsInTable(jdbcTemplate, "users"))
      .isOne()
}

In this particular example we violate testing boundaries because we treat our app as an HTTP black box to send the user, but later we assert using implementation details, that is, our user has been persisted in some DB.

If we exercise our app through HTTP, can we assert the result through HTTP too?

@Test
fun links() {
    mvc.perform(post("/users")
      .contentType(MediaType.APPLICATION_JSON)
      .content("""{ "name":"jose" }"""))
      .andExpect(status().isCreated)

    mvc.perform(get("/users/jose"))
      .andExpect(status().isOk)
}

There are a few advantages if we follow the last approach:

  • Our test will start quicker (arguably, it might take a tiny bit longer to execute though, but it should pay back)
  • Also, our test isn’t aware of side effects not related to HTTP boundaries i.e. DBs
  • Finally, our test expresses with clarity the intent of the system: If you POST, you’ll be able to GET Users

Of course, this might not always be possible for various reasons:

  • We might not have the ‘side-effect’ endpoint: An option here is to consider creating ‘testing endpoints’
  • Complexity is too high to hit the entire app: An option here is to consider slices (we’ll talk about them later)

4.4. Thinking Carefully about @DirtiesContext

Sometimes, we might need to modify the ApplicationContext in our tests. For this scenario, @DirtiesContext delivers exactly that functionality.

For the same reasons exposed above, @DirtiesContext is an extremely expensive resource when it comes to execution time, and as such, we should be careful.

Some misuses of @DirtiesContext include application cache resets or in-memory DB resets. There are better ways to handle these scenarios in integration tests, and we’ll cover some in further sections.

4.5. Using Test Slices

Test Slices are a Spring Boot feature introduced in version 1.4. The idea is fairly simple: Spring will create a reduced application context for a specific slice of our app.

Also, the framework will take care of configuring the very minimum.

There are a sensible number of slices available out of the box in Spring Boot and we can create our own too:

  • @JsonTest: Registers JSON relevant components
  • @DataJpaTest: Registers JPA beans, including the ORM available
  • @JdbcTest: Useful for raw JDBC tests, takes care of the data source and in-memory DBs without ORM frills
  • @DataMongoTest: Tries to provide an in-memory Mongo testing setup
  • @WebMvcTest: A mock MVC testing slice without the rest of the app
  • … (we can check the source to find them all)

This particular feature, if used wisely, can help us build narrow tests without a big performance penalty, particularly for small/medium-sized apps.

However, if our application keeps growing, the cost also piles up, as Spring creates one (small) application context per slice.

4.6. Using Class Inheritance

Using a single AbstractSpringIntegrationTest class as the parent of all our integration tests is a simple, powerful and pragmatic way of keeping the build fast.

If we provide a solid setup, our team will simply extend it, knowing that everything ‘just works’. This way we can worry less about managing state or configuring the framework and focus on the problem at hand.

We could set all the test requirements there:

  • The Spring runner – or preferably rules, in case we need other runners later
  • profiles – ideally our aggregate test profile
  • initial config – setting the state of our application

Let’s have a look at a simple base class that takes care of the previous points:

@SpringBootTest
@ActiveProfiles("test")
abstract class AbstractSpringIntegrationTest {

    @Rule
    @JvmField
    val springMethodRule = SpringMethodRule()

    companion object {
        @ClassRule
        @JvmField
        val SPRING_CLASS_RULE = SpringClassRule()
    }
}

4.7. State Management

It’s important to remember where ‘unit’ in Unit Test comes from. Simply put, it means we can run a single test (or a subset) at any point getting consistent results.

Hence, the state should be clean and known before every test starts.

In other words, the result of a test should be consistent regardless of whether it is executed in isolation or together with other tests.

This idea applies just the same to integration tests. We need to ensure our app has a known (and repeatable) state before starting a new test. The more components we reuse to speed things up (app context, DBs, queues, files…), the more chances to get state pollution.

Assuming we went all in with class inheritance, now, we have a central place to manage state.

Let’s enhance our abstract class to make sure our app is in a known state before running tests.

In our example, we’ll assume there are several repositories (from various data sources), and a Wiremock server:

@SpringBootTest
@ActiveProfiles("test")
@AutoConfigureWireMock(port = 8666)
@AutoConfigureMockMvc
abstract class AbstractSpringIntegrationTest {

    //... spring rules are configured here, skipped for clarity

    @Autowired
    protected lateinit var wireMockServer: WireMockServer

    @Autowired
    lateinit var jdbcTemplate: JdbcTemplate

    @Autowired
    lateinit var repos: Set<MongoRepository<*, *>>

    @Autowired
    lateinit var cacheManager: CacheManager

    @Before
    fun resetState() {
        cleanAllDatabases()
        cleanAllCaches()
        resetWiremockStatus()
    }

    fun cleanAllDatabases() {
        JdbcTestUtils.deleteFromTables(jdbcTemplate, "table1", "table2")
        jdbcTemplate.update("ALTER TABLE table1 ALTER COLUMN id RESTART WITH 1")
        repos.forEach { it.deleteAll() }
    }

    fun cleanAllCaches() {
        cacheManager.cacheNames
          .map { cacheManager.getCache(it) }
          .filterNotNull()
          .forEach { it.clear() }
    }

    fun resetWiremockStatus() {
        wireMockServer.resetAll()
        // set default requests if any
    }
}

4.8. Refactoring into Unit Tests

This is probably one of the most important points. We’ll find ourselves over and over with some integration tests that are actually exercising some high-level policy of our app.

Whenever we find some integration tests testing a bunch of cases of core business logic, it’s time to rethink our approach and break them down into unit tests.

A possible pattern here to accomplish this successfully could be:

  • Identify integration tests that are testing multiple scenarios of core business logic
  • Duplicate the suite, and refactor the copy into unit tests – at this stage, we might need to break down the production code too to make it testable
  • Get all tests green
  • Leave a happy path sample that is remarkable enough in the integration suite – we might need to refactor or join and reshape a few
  • Remove the remaining integration tests

Michael Feathers covers many techniques to achieve this and more in Working Effectively with Legacy Code.

5. Summary

In this article, we had an introduction to Integration tests with a focus on Spring.

First, we talked about the importance of integration tests and why they are particularly relevant in Spring applications.

After that, we summarized some tools that might come in handy for certain types of Integration tests in Web Apps.

Finally, we went through a list of potential issues that slow down our test execution time, as well as tricks to improve it.

Copy a List to Another List in Java


1. Overview

In this quick tutorial, we’ll show different ways to copy a List to another List and a common error produced in the process.

For an introduction to the use of Collections, please refer to this article here.

2. Constructor

A simple way to copy a List is by using the constructor that takes a collection as its argument:

List<Plant> copy = new ArrayList<>(list);

Since we’re copying references here and not cloning the objects, any amendment made to a list element will affect both lists.
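We can demonstrate this shared-reference behavior with a small self-contained sketch (the class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

public class ShallowCopyDemo {

    // Mutates an element through the copy and reads it back
    // through the original list
    static String mutateThroughCopy() {
        List<StringBuilder> original = new ArrayList<>();
        original.add(new StringBuilder("rose"));

        // The constructor copies references, not the objects themselves
        List<StringBuilder> copy = new ArrayList<>(original);
        copy.get(0).append("mary");

        return original.get(0).toString();
    }

    public static void main(String[] args) {
        // The change made via the copy is visible through the original
        System.out.println(mutateThroughCopy()); // prints "rosemary"
    }
}
```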

For that reason, using the constructor is a good choice for copying lists of immutable objects:

List<Integer> copy = new ArrayList<>(list);

Integer is an immutable class; its value is set when the instance is created and can never change.

An Integer reference can thus be shared by multiple lists and threads, and there’s no way anybody can change its value.

3. List ConcurrentModificationException

A common problem when working with lists is the ConcurrentModificationException. This usually means that we’re modifying the list while trying to copy it, most likely from another thread.

To fix this issue we have to either:

  • Use a collection designed for concurrent access
  • Lock the collection appropriately to iterate over it
  • Find a way to avoid needing to copy the original collection

Our last approach isn’t always practical, though. So, if we want to resolve the problem with the first option, we may want to use CopyOnWriteArrayList, in which all mutative operations are implemented by making a fresh copy of the underlying array.

For further information, please refer to this article.

In case we want to lock the Collection, it’s possible to use a lock primitive to serialize read/write access, such as ReentrantReadWriteLock.
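Here’s a minimal sketch contrasting the two behaviors: a plain ArrayList fails when it’s structurally modified during iteration, while CopyOnWriteArrayList iterates over a snapshot (class and method names are ours):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ConcurrentCopyDemo {

    // Removing from a plain ArrayList while iterating it
    // triggers a ConcurrentModificationException
    static boolean plainListFails() {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
        try {
            for (Integer i : list) {
                if (i == 1) {
                    list.remove(i); // structural change during iteration
                }
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    // CopyOnWriteArrayList iterators work on a snapshot of the
    // underlying array, so the same modification is safe
    static boolean copyOnWriteSucceeds() {
        List<Integer> list = new CopyOnWriteArrayList<>(Arrays.asList(1, 2, 3));
        for (Integer i : list) {
            if (i == 1) {
                list.remove(i);
            }
        }
        return list.size() == 2 && list.get(0) == 2;
    }

    public static void main(String[] args) {
        System.out.println(plainListFails());      // true
        System.out.println(copyOnWriteSucceeds()); // true
    }
}
```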

4. AddAll

Another approach to copy elements is using the addAll method:

List<Integer> copy = new ArrayList<>();
copy.addAll(list);

It’s important to keep in mind whenever using this method that, as with the constructor, the contents of both lists will reference the same objects.

5. Collections.copy

The Collections class consists exclusively of static methods that operate on or return collections.

One of them is copy, which takes a source list and a destination list that is at least as long as the source.

It copies each element into the destination list at the same index it had in the source:

List<Integer> source = Arrays.asList(1,2,3);
List<Integer> dest = Arrays.asList(4,5,6);
Collections.copy(dest, source);

In the above example, all the previous elements in the dest list were overwritten because both lists have the same size.

If the destination list is larger than the source:

List<Integer> source = Arrays.asList(1, 2, 3);
List<Integer> dest = Arrays.asList(5, 6, 7, 8, 9, 10);
Collections.copy(dest, source);

Only the first three items were overwritten, while the rest of the elements in the list were preserved.
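Conversely, if the destination list is shorter than the source, Collections.copy throws an IndexOutOfBoundsException, which we can verify with a quick sketch (class name is ours):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CopyBoundsDemo {

    // Collections.copy requires dest.size() >= source.size();
    // otherwise it throws an IndexOutOfBoundsException
    static boolean copyIntoSmallerListFails() {
        List<Integer> source = Arrays.asList(1, 2, 3);
        List<Integer> dest = Arrays.asList(4, 5);
        try {
            Collections.copy(dest, source);
            return false;
        } catch (IndexOutOfBoundsException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(copyIntoSmallerListFails()); // true
    }
}
```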

6. Using Java 8

This version of Java opens up new possibilities by adding new tools. The one we’ll explore in the next examples is Stream:

List<String> copy = list.stream()
  .collect(Collectors.toList());

The main advantage of this approach is the ability to use skip and filter. In the next example, we’re going to skip the first element:

List<String> copy = list.stream()
  .skip(1)
  .collect(Collectors.toList());

It’s possible to filter by the length of the String too or by comparing an attribute of our objects:

List<String> copy = list.stream()
  .filter(s -> s.length() > 10)
  .collect(Collectors.toList());
List<Flower> flowers = list.stream()
  .filter(f -> f.getPetals() > 6)
  .collect(Collectors.toList());

We may also want to work in a null-safe way:

List<Flower> flowers = Optional.ofNullable(list)
  .map(List::stream)
  .orElseGet(Stream::empty)
  .collect(Collectors.toList());

And we can skip an element this way too:

List<Flower> flowers = Optional.ofNullable(list)
  .map(List::stream).orElseGet(Stream::empty)
  .skip(1)
  .collect(Collectors.toList());

7. Using Java 10

Finally, one of the latest Java versions allows us to create an immutable List containing the elements of the given Collection:

List<T> copy = List.copyOf(list);

The only conditions are that the given Collection mustn’t be null, and it mustn’t contain any null elements.
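A short sketch (requires Java 10+) showing both properties of the result: the copy is independent of the original list, and any attempt to mutate it fails:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyOfDemo {
    public static void main(String[] args) {
        List<Integer> original = new ArrayList<>(List.of(1, 2, 3));
        List<Integer> copy = List.copyOf(original);

        // The copy is a true copy, not a view of the original...
        original.add(4);
        System.out.println(copy); // [1, 2, 3]

        // ...and it's immutable: any mutation attempt fails
        try {
            copy.add(4);
        } catch (UnsupportedOperationException e) {
            System.out.println("copy is immutable");
        }
    }
}
```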

8. Conclusion

In this article, we’ve explored different ways to copy a List to another List with different Java versions and a common error produced in the process.
As always, code samples can be found over on GitHub here and here.



                       

Spring Security Custom AuthenticationFailureHandler


1. Overview

In this quick tutorial, we’re going to illustrate how to customize Spring Security’s authentication failures handling in a Spring Boot application. The goal is to authenticate users using a form login approach.

For an introduction to Spring Security and Form Login in Spring Boot, please refer to this and this article, respectively.

2. Authentication and Authorization

Authentication and Authorization are often used in conjunction because they play an essential, and equally important, role when it comes to granting access to the system.

However, they have different meanings and apply different constraints when validating a request:

  • Authentication – precedes Authorization; it’s about validating the received credentials; it’s where we verify that both username and password match the ones that our application recognizes
  • Authorization – it’s about verifying whether the successfully authenticated user has permissions to access a certain functionality of the application

We can customize both authentication and authorization failure handling; however, in this application, we’re going to focus on authentication failures.

3. Spring Security’s AuthenticationFailureHandler

Spring Security provides a component that handles authentication failures for us by default.

However, it’s not uncommon to find ourselves in a scenario where the default behavior isn’t enough to meet requirements.

If that is the case, we can create our own component and provide the custom behavior we want by implementing the AuthenticationFailureHandler interface:

public class CustomAuthenticationFailureHandler 
  implements AuthenticationFailureHandler {
 
    private ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public void onAuthenticationFailure(
      HttpServletRequest request,
      HttpServletResponse response,
      AuthenticationException exception) 
      throws IOException, ServletException {
 
        response.setStatus(HttpStatus.UNAUTHORIZED.value());
        Map<String, Object> data = new HashMap<>();
        data.put(
          "timestamp", 
          Calendar.getInstance().getTime());
        data.put(
          "exception", 
          exception.getMessage());

        response.getOutputStream()
          .println(objectMapper.writeValueAsString(data));
    }
}

By default, Spring redirects the user back to the login page with a request parameter containing information about the error.

In this application, we’ll return a 401 response that contains information about the error, as well as the timestamp of its occurrence.

Besides the default component, Spring has other ready-to-use components that we can leverage depending on what we want to do:

  • DelegatingAuthenticationFailureHandler delegates AuthenticationException subclasses to different AuthenticationFailureHandlers, meaning we can create different behaviors for different instances of AuthenticationException
  • ExceptionMappingAuthenticationFailureHandler redirects the user to a specific URL depending on the AuthenticationException’s full class name
  • ForwardAuthenticationFailureHandler will forward the user to the specified URL regardless of the type of the AuthenticationException
  • SimpleUrlAuthenticationFailureHandler is the component used by default; it will redirect the user to a failureUrl, if specified; otherwise, it will simply return a 401 response

Now that we have created our custom AuthenticationFailureHandler, let’s configure our application and override Spring’s default handler:

@Configuration
@EnableWebSecurity
public class SecurityConfiguration 
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth
          .inMemoryAuthentication()
          .withUser("baeldung")
          .password("baeldung")
          .roles("USER");
    }

    @Override
    protected void configure(HttpSecurity http) 
      throws Exception {
        http
          .authorizeRequests()
          .anyRequest()
          .authenticated()
          .and()
          .formLogin()
          .failureHandler(customAuthenticationFailureHandler());
    }

    @Bean
    public AuthenticationFailureHandler customAuthenticationFailureHandler() {
        return new CustomAuthenticationFailureHandler();
    }
}

Note the failureHandler() call; it’s where we tell Spring to use our custom component instead of the default one.

4. Conclusion

In this example, we customized our application’s authentication failure handler leveraging Spring’s AuthenticationFailureHandler interface.

The implementation of this example can be found in the GitHub project.

When running locally, you can access and test the application at localhost:8080

Common Java Exceptions


1. Introduction

This tutorial focuses on some common Java exceptions.

We’ll start by discussing what an exception basically is. Later, we’ll discuss different types of checked and unchecked exceptions in detail.

2. Exceptions

An exception is an abnormal condition that occurs in a code sequence during the execution of a program. This abnormal condition arises when a program violates certain constraints at runtime.

All exception types are subclasses of the class Exception. This class is then subclassed into checked exceptions and unchecked exceptions. We’ll consider them in detail in the subsequent sections.

3. Checked Exceptions

Checked exceptions must be handled or declared. They are direct subclasses of the class Exception.

There’s a debate on their importance that’s worth taking a look at.

Let’s define some checked exceptions in detail.

3.1. IOException

A method throws an IOException or a direct subclass of it when any Input/Output operation fails. 

Typical uses of these I/O operations include:

  • Working with the file system or data streams using java.io package
  • Creating network applications using java.net package

FileNotFoundException

FileNotFoundException is a common type of IOException while working with the file system:

try {
    new FileReader(new File("/invalid/file/location"));
} catch (FileNotFoundException e) {
    LOGGER.info("FileNotFoundException caught!");
}

MalformedURLException

When working with URLs, we might encounter a MalformedURLException if our URLs are invalid:

try {
    new URL("malformedurl");
} catch (MalformedURLException e) {
    LOGGER.error("MalformedURLException caught!");
}

3.2. ParseException

Java uses text parsing to create an object based on a given String. If parsing causes an error, it throws a ParseException.

For instance, we could represent a Date in different ways, e.g. dd/mm/yyyy or dd,mm,yyyy, but if we try to parse a string with a different format:

try {
    new SimpleDateFormat("MM, dd, yyyy").parse("invalid-date");
} catch (ParseException e) {
    LOGGER.error("ParseException caught!");
}

Here, the String is malformed and causes a ParseException.
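For comparison, the same pattern parses without error once the input actually matches it (the concrete date value below is just an illustrative assumption):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ParseDemo {
    public static void main(String[] args) throws ParseException {
        SimpleDateFormat format = new SimpleDateFormat("MM, dd, yyyy");

        // A well-formed input for the "MM, dd, yyyy" pattern parses fine
        Date date = format.parse("07, 27, 2018");

        System.out.println(format.format(date)); // 07, 27, 2018
    }
}
```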

3.3. InterruptedException

Whenever a Java thread calls join(), sleep() or wait() it goes into either the WAITING state or the TIMED_WAITING state.

In addition, a thread can interrupt another thread by calling another thread’s interrupt() method.

Consequently, the thread throws an InterruptedException if another thread interrupts it while it is in the WAITING or in the TIMED_WAITING state.

Consider the following example with two threads:

  • The main thread starts the child thread and interrupts it
  • The child thread starts and calls sleep()

This scenario results in an InterruptedException:

class ChildThread extends Thread {

    public void run() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            LOGGER.error("InterruptedException caught!");
        }
    }
}

public class MainThread {

    public static void main(String[] args) 
      throws InterruptedException {
        ChildThread childThread = new ChildThread();
        childThread.start();
        childThread.interrupt();
    }
}

4. Unchecked Exceptions

The compiler doesn’t check unchecked exceptions during the compilation process. Hence, it isn’t mandatory for a method to handle these exceptions.

All unchecked exceptions extend the class RuntimeException.

Let’s discuss some unchecked exceptions in detail.

4.1. NullPointerException

If an application attempts to use null where it actually requires an object instance, the method will throw a NullPointerException.

There are different scenarios where illegal use of null causes a NullPointerException. Let’s consider some of them.

Calling a method of the class that has no object instance:

String strObj = null;
strObj.equals("Hello World"); // throws NullPointerException.

Also, if an application tries to access or modify an instance variable with a null reference, we get a NullPointerException:

Person personObj = null;
String name = personObj.personName; // Accessing the field of a null object
personObj.personName = "Jon Doe"; // Modifying the field of a null object

4.2. ArrayIndexOutOfBoundsException

An array stores its elements in contiguous fashion. Thus, we can access its elements via indices.

However, if a piece of code tries to access an illegal index of an array, the respective method throws an ArrayIndexOutOfBoundsException.

Let’s see a few examples that throw ArrayIndexOutOfBoundsException:

int[] nums = new int[] {1, 2, 3};
int numFromNegativeIndex = nums[-1]; // Trying to access at negative index
int numFromGreaterIndex = nums[4];   // Trying to access at greater index
int numFromLengthIndex = nums[3];    // Trying to access at index equal to size of the array

4.3. StringIndexOutOfBoundsException

The String class in Java provides methods to access a particular character of the string or to slice a character array out of the String. When we use these methods, it internally converts the String into a character array.

Again, there could be an illegal use of indexes on this array. In such cases, these methods of the String class throw a StringIndexOutOfBoundsException.

This exception indicates that the index is either greater than or equal to the size of the String. StringIndexOutOfBoundsException extends IndexOutOfBoundsException.

The method charAt(index) of the class String throws this exception when we try to access a character at the index equal to the String’s length or some other illegal index:

String str = "Hello World";
char charAtNegativeIndex = str.charAt(-1); // Trying to access at negative index
char charAtLengthIndex = str.charAt(11);   // Trying to access at index equal to size of the string		

4.4. NumberFormatException

Quite often an application ends up with numeric data in a String. In order to interpret this data as numeric, Java allows the conversion of a String to numeric types. Wrapper classes such as Integer, Float, etc. contain utility methods for this purpose.

However, if the String doesn’t have an appropriate format during the conversion, the method throws a NumberFormatException.

Let’s consider the following snippet.

Here, we declare a String with alphanumeric data. Further, we try to use the methods of the Integer wrapper class to interpret this data as numeric.

Consequently, this results in NumberFormatException:

String str = "100ABCD";
int x = Integer.parseInt(str); // Throws NumberFormatException
int y = Integer.valueOf(str); // Throws NumberFormatException

4.5. ArithmeticException

When a program evaluates an arithmetic operation and it results in some exceptional condition, it throws ArithmeticException. In addition, ArithmeticException applies to only int and long data types.

For instance, if we try to divide an integer by zero, we get an ArithmeticException:

int illegalOperation = 30/0; // Throws ArithmeticException

4.6. ClassCastException

Java allows typecasting between the objects in order to support inheritance and polymorphism. We can either upcast an object or downcast it.

In upcasting, we cast an object to its supertype. And in downcasting, we cast an object to one of its subtypes.

However, at runtime, if the code attempts to downcast an object to a subtype of which it isn’t an instance, the method throws a ClassCastException.

The runtime instance is what actually matters in typecasting. Consider the following inheritance between Animal, Dog, and Lion:

class Animal {}

class Dog extends Animal {}

class Lion extends Animal {}

Further, in the driver class, we cast the Animal reference containing an instance of Lion into a Dog.

However, at runtime, the JVM notices that the Lion instance isn’t an instance of the class Dog.

This results in ClassCastException:

Animal animal = new Lion(); // At runtime the instance is Lion
Dog tommy = (Dog) animal; // Throws ClassCastException
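One common way to guard against this, sketched below with the same class hierarchy, is to check the runtime type with instanceof before casting:

```java
public class CastGuardDemo {
    static class Animal {}
    static class Dog extends Animal {}
    static class Lion extends Animal {}

    public static void main(String[] args) {
        Animal animal = new Lion();

        // Checking the runtime type first avoids the ClassCastException
        if (animal instanceof Dog) {
            Dog tommy = (Dog) animal;
            System.out.println("it's a dog: " + tommy);
        } else {
            System.out.println("not a Dog, skipping the cast");
        }
    }
}
```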

4.7. IllegalArgumentException

A method throws an IllegalArgumentException if we call it with some illegal or inappropriate arguments.

For instance, the sleep() method of the Thread class expects a positive time, but we pass a negative time interval as an argument. This results in an IllegalArgumentException:

Thread.sleep(-10000); // Throws IllegalArgumentException

4.8. IllegalStateException

IllegalStateException signals that a method’s been invoked at an illegal or inappropriate time.

Every Java object has a state (instance variables) and some behavior (methods). Thus, IllegalStateException means it’s illegal to invoke the behavior of this object with the current state variables.

However, with some different state variables, it might be legal.

For example, we use an iterator to iterate a list. Whenever we initialize one, it internally sets its state variable lastRet to -1.

With this context, the program tries to call the remove method on the list:

//Initialized with index at -1
Iterator<Integer> intListIterator = new ArrayList<>().iterator(); 

intListIterator.remove(); // IllegalStateException

Internally, the remove method checks the state variable lastRet and if it is less than 0, it throws IllegalStateException. Here, the variable is still pointing to the value -1.

As a result, we get an IllegalStateException.
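To make this concrete, here’s a minimal sketch showing that calling next() first puts the iterator into a state where remove() becomes a legal call:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IteratorRemoveDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3));
        Iterator<Integer> it = list.iterator();

        // next() advances the cursor and records a valid last-returned index,
        // so the subsequent remove() no longer throws IllegalStateException
        it.next();
        it.remove();

        System.out.println(list); // [2, 3]
    }
}
```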

5. Conclusion

In this article, we first discussed what exceptions are. An exception is an event that occurs during the execution of a program and disrupts the normal flow of the program’s instructions.

Then, we categorized the exceptions into the Checked Exceptions and the Unchecked Exceptions.

Next, we discussed the different types of exceptions that can come up at compile time or at runtime.

We can find the code for this article over on GitHub.


Introduction to Micronaut Framework


1. What is Micronaut

Micronaut is a JVM-based framework for building lightweight, modular applications. Developed by OCI, the same company that created Grails, Micronaut is the latest framework designed to make creating microservices quick and easy.

While Micronaut contains some features that are similar to existing frameworks like Spring, it also has some new features that set it apart. And with support for Java, Groovy, and Kotlin, it offers a variety of ways to create applications.

2. Main Features

One of the most exciting features of Micronaut is its compile time dependency injection mechanism. Most frameworks use reflection and proxies to perform dependency injection at runtime. Micronaut, however, builds its dependency injection data at compile time. The result is faster application startup and smaller memory footprints.

Another feature is its first class support for reactive programming, for both clients and servers. The choice of a specific reactive implementation is left to the developer as both RxJava and Project Reactor are supported.

Micronaut also has several features that make it an excellent framework for developing cloud-native applications. It supports multiple service discovery tools such as Eureka and Consul, and also works with different distributed tracing systems such as Zipkin and Jaeger.

It also provides support for creating AWS lambda functions, making it easy to create serverless applications.

3. Getting Started

The easiest way to get started is using SDKMAN:

> sdk install micronaut 1.0.0.M2

This installs all the binary files we’ll need to build, test, and deploy Micronaut applications. It also provides the Micronaut CLI tool, which lets us easily start new projects.

The binary artifacts are also available on Sonatype and GitHub.

In the following sections we’ll look at some features of the framework.

4. Dependency Injection

As mentioned earlier, Micronaut handles dependency injection at compile time, which is different than most IoC containers.

However, it still fully supports JSR-330 annotations so working with beans is similar to other IoC frameworks.

To autowire a bean into our code, we use @Inject:

@Inject
private EmployeeService service;

The @Inject annotation works just like @Autowired and can be used on fields, methods, constructors, and parameters.

By default, all beans are scoped as a prototype. We can quickly create singleton beans using @Singleton. If multiple classes implement the same bean interface, @Primary can be used to deconflict them:

@Primary
@Singleton
public class BlueCar implements Car {}

The @Requires annotation can be used when beans are optional, or to only perform autowiring when certain conditions are met.

In this regard, it behaves much like the Spring Boot @Conditional annotations:

@Singleton
@Requires(beans = DataSource.class)
@Requires(property = "enabled")
@Requires(missingBeans = EmployeeService.class)
@Requires(sdk = Sdk.JAVA, value = "1.8")
public class JdbcEmployeeService implements EmployeeService {}

5. Building an HTTP Server

Now let’s look at creating a simple HTTP server application. To start, we’ll use SDKMAN to create a project:

> mn create-app hello-world-server -build maven

This will create a new Java project using Maven in a directory named hello-world-server. Inside this directory, we’ll find our main application source code, Maven POM file, and other support files for the project.

The default application is very simple:

public class ServerApplication {
    public static void main(String[] args) {
        Micronaut.run(ServerApplication.class);
    }
}

5.1. Blocking HTTP

On its own, this application won’t do much. Let’s add a controller that has two endpoints. Both will return a greeting, but one will use the GET HTTP verb, and the other will use POST:

@Controller("/greet")
public class GreetController {

    @Inject
    private GreetingService greetingService;

    @Get("/{name}")
    public String greet(String name) {
        return greetingService.getGreeting() + name;
    }

    @Post(value = "/{name}", consumes = MediaType.TEXT_PLAIN)
    public String setGreeting(@Body String name) {
        return greetingService.getGreeting() + name;
    }
}

5.2. Reactive IO

By default, Micronaut will implement these endpoints using traditional blocking I/O. However, we can quickly implement non-blocking endpoints by merely changing the return type to any reactive non-blocking type.

For example, with RxJava we can use Observable. Likewise, when using Reactor, we can return Mono or Flux data types:

@Get("/{name}")
public Mono<String> greet(String name) {
    return Mono.just(greetingService.getGreeting() + name);
}

For both blocking and non-blocking endpoints, Netty is the underlying server used to handle HTTP requests.

Normally, the requests are handled on the main I/O thread pool that is created at startup, making them block.

However, when a non-blocking data type is returned from a controller endpoint, Micronaut uses the Netty event loop thread, making the whole request non-blocking.

6. Building an HTTP Client

Now let’s build a client to consume the endpoints we just created. Micronaut provides two ways of creating HTTP clients:

  • A declarative HTTP Client
  • A programmatic HTTP Client

6.1. Declarative HTTP Client

The first and quickest way to create is using a declarative approach:

@Client("/greet")
public interface GreetingClient {
    @Get("/{name}")
    String greet(String name);
}

Notice how we don’t implement any code to call our service. Instead, Micronaut understands how to call the service from the method signature and annotations we have provided.

To test this client, we can create a JUnit test that uses the embedded server API to run an embedded instance of our server:

public class GreetingClientTest {
    private EmbeddedServer server;
    private GreetingClient client;

    @Before
    public void setup() {
        server = ApplicationContext.run(EmbeddedServer.class);
        client = server.getApplicationContext().getBean(GreetingClient.class);
    }

    @After
    public void cleanup() {
        server.stop();
    }

    @Test
    public void testGreeting() {
        assertEquals("Hello Mike", client.greet("Mike"));
    }
}

6.2. Programmatic HTTP Client

We also have the option of writing a more traditional client if we need more control over its behavior and implementation:

@Singleton
public class ConcreteGreetingClient {
   private RxHttpClient httpClient;

   public ConcreteGreetingClient(@Client("/") RxHttpClient httpClient) {
      this.httpClient = httpClient;
   }

   public String greet(String name) {
      HttpRequest<String> req = HttpRequest.GET("/greet/" + name);
      return httpClient.retrieve(req).blockingFirst();
   }

   public Single<String> greetAsync(String name) {
      HttpRequest<String> req = HttpRequest.GET("/async/greet/" + name);
      return httpClient.retrieve(req).first("An error has occurred");
   }
}

The default HTTP client uses RxJava, so it can easily work with blocking or non-blocking calls.

7. Micronaut CLI

We’ve already seen the Micronaut CLI tool in action above when we used it to create our sample project.

In our case, we created a standalone application, but it has several other capabilities as well.

7.1. Federation Projects

In Micronaut, a federation is just a group of standalone applications that live under the same directory. By using federations, we can easily manage them together and ensure they get the same defaults and settings.

When we use the CLI tool to generate a federation, it takes all the same arguments as the create-app command. It will create a top-level project structure, and each standalone app will be created in its sub-directory from there.

7.2. Features

When creating a standalone application or federation, we can decide which features our app needs. This helps ensure the minimal set of dependencies is included in the project.

We specify features using the -features argument, and supplying a comma-separated list of feature names.

We can find a list of available features by running the following command:

> mn profile-info service

Provided Features:
--------------------
* annotation-api - Adds Java annotation API
* config-consul - Adds support for Distributed Configuration with Consul
* discovery-consul - Adds support for Service Discovery with Consul
* discovery-eureka - Adds support for Service Discovery with Eureka
* groovy - Creates a Groovy application
[...] More features available

7.3. Existing Projects

We can also use the CLI tool to modify existing projects, enabling us to create beans, clients, controllers, and more. When we run the mn command from inside an existing project, we’ll have a new set of commands available:

> mn help
| Command Name         Command Description
-----------------------------------------------
create-bean            Creates a singleton bean
create-client          Creates a client interface
create-controller      Creates a controller and associated test
create-job             Creates a job with scheduled method

8. Conclusion

In this brief introduction to Micronaut, we’ve seen how easy it is to build both blocking and non-blocking HTTP servers and clients. Also, we explored some features of its CLI.

But this is just a small taste of the features it offers. There is also full support for serverless functions, service discovery, distributed tracing, monitoring and metrics, a distributed configuration, and much more.

And while many of its features are derived from existing frameworks such as Grails and Spring, it also has plenty of unique features that help it stand out on its own.

As always, we can find the sample code above in our GitHub repo.

Spring REST and HAL Browser


1. Overview

In this tutorial, we’ll be discussing what HAL is and why it’s useful, before introducing the HAL browser.

We’ll then use Spring to build a simple REST API with a few interesting endpoints and populate our database with some test data.

Finally, using the HAL browser, we’ll explore our REST API and discover how to traverse the data contained within.

2. HAL and the HAL Browser

JSON Hypertext Application Language, or HAL, is a simple format that gives a consistent and easy way to hyperlink between resources in our API. Including HAL within our REST API makes it much more explorable to users as well as being essentially self-documenting.

It works by returning data in JSON format which outlines relevant information about the API.

The HAL model revolves around two simple concepts.

Resources, which contain:

  • Links to relevant URIs
  • Embedded Resources
  • State

Links:

  • A target URI
  • A relation, or rel, to the link
  • A few other optional properties to help with deprecation, content negotiation, etc.

The HAL browser was created by the same person who developed HAL and provides an in-browser GUI to traverse your REST API.

We’ll now build a simple REST API, plug in the HAL browser and explore the features.

3. Dependencies

Below is the single dependency needed to integrate the HAL browser into our REST API. You can find the rest of the dependencies for the API in the GitHub code.

Firstly, the dependency for Maven-based projects:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-rest-hal-browser</artifactId>
    <version>3.0.8.RELEASE</version>
</dependency>

If you’re building with Gradle, you can add this line to your build.gradle file:

compile group: 'org.springframework.data', name: 'spring-data-rest-hal-browser', version: '3.0.8.RELEASE'

4. Building a Simple REST API

4.1. Simple Data Model

In our example, we’ll be setting up a simple REST API to browse different books in our library.

Here, we define a simple book entity which contains appropriate annotations so that we can persist the data with Hibernate:

@Entity
public class Book {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private long id;

  @NotNull
  @Column(columnDefinition = "VARCHAR", length = 100)
  private String title;

  @NotNull
  @Column(columnDefinition = "VARCHAR", length = 100)
  private String author;

  @Column(columnDefinition = "VARCHAR", length = 1000)
  private String blurb;

  private int pages;

  // usual getters, setters and constructors

}

4.2. Introducing a CRUD Repository

Next, we’ll need some endpoints. To do this, we can leverage the PagingAndSortingRepository and specify that we want to get data from our Book entity.

This interface provides simple CRUD operations, as well as paging and sorting capabilities, right out of the box:

@Repository
public interface BookRepository extends PagingAndSortingRepository<Book, Long> {

    @RestResource(rel = "title-contains", path="title-contains")
    Page<Book> findByTitleContaining(@Param("query") String query, Pageable page);

    @RestResource(rel = "author-contains", path="author-contains", exported = false)
    Page<Book> findByAuthorContaining(@Param("query") String query, Pageable page);
}

If this looks a bit strange, or if you’d like to know more about Spring Repositories, you can read more here.

We’ve extended the repository by adding two new endpoints:

  • findByTitleContaining – returns books that contain the query included in the title
  • findByAuthorContaining – returns books from the database where the author of a book contains the query

Note that our second endpoint contains the exported = false attribute. This attribute stops the HAL links from being generated for this endpoint, so it won’t be available via the HAL browser.

Finally, we’ll load our data when Spring is started by defining a class which implements the ApplicationRunner interface. You can find the code on GitHub.

5. Installing the HAL Browser

The setup for the HAL browser is remarkably easy when building a REST API with Spring. As long as we have the dependency, Spring will auto-configure the browser, and make it available via the default endpoint.

All we need to do now is press run and switch to the browser. The HAL browser will then be available on http://localhost:8080/

6. Exploring our REST API with the HAL Browser

The HAL browser is broken down into two parts – the explorer and the inspector. We’ll break down and explore each section separately.

6.1. The HAL Explorer

As it sounds, the explorer is devoted to exploring new parts of our API relative to the current endpoint. It contains a search bar, as well as text boxes to display Custom Request Headers and Properties of the current endpoint.

Below these, we have the links section and a clickable list of Embedded Resources.

6.2. Using Links

If we navigate to our /books endpoint we can view the existing links:

These links are generated from the HAL in the adjacent section:

"_links": {
    "first": {
      "href": "http://localhost:8080/books?page=0&size=20"
    },
    "self": {
      "href": "http://localhost:8080/books{?page,size,sort}",
      "templated": true
    },
    "next": {
      "href": "http://localhost:8080/books?page=1&size=20"
    },
    "last": {
      "href": "http://localhost:8080/books?page=4&size=20"
    },
    "profile": {
      "href": "http://localhost:8080/profile/books"
    },
    "search": {
      "href": "http://localhost:8080/books/search"
    }
  },

If we move to the search endpoint, we can also view the custom endpoints we created using the PagingAndSortingRepository:

{
  "_links": {
    "title-contains": {
      "href": "http://localhost:8080/books/search/title-contains{?query,page,size,sort}",
      "templated": true
    },
    "self": {
      "href": "http://localhost:8080/books/search"
    }
  }
}

The HAL above shows our title-contains endpoint displaying suitable search criteria. Note how the author-contains endpoint is missing since we defined that it should not be exported.

6.3. Viewing Embedded Resources

Embedded Resources show the details of the individual book records on our /books endpoint. Each resource also contains its own Properties and Links section:

6.4. Using Forms

The question mark button in the GET column within the links section denotes that a form modal can be used to enter custom search criteria.

Here is the form for our title-contains endpoint:

The HAL browser selection form

Our custom URI returns the first page of 20 books where the title contains the word ‘Java’.

6.5. The HAL Inspector

The inspector makes up the right side of the browser and contains the Response Headers and Response Body. This HAL data is used to render the Links and Embedded Resources that we saw earlier in the tutorial.

7. Conclusion

In this article, we’ve summarised what HAL is, why it’s useful and why it can help us to create superior self-documenting REST APIs.

We have built a simple REST API with Spring which implements the PagingAndSortingRepository, as well as defining our own endpoints. We’ve also seen how to exclude certain endpoints from the HAL browser.

After defining our API, we populated it with test data and explored it in detail with the help of the HAL browser. We saw how the HAL browser is structured, and the UI controls which allowed us to step through the API and explore its data.

As always, the code is available over on GitHub.

Java Weekly, Issue 238


Here we go…

1. Spring and Java

>> The best way to use SQL functions in JPQL or Criteria API queries with JPA and Hibernate [vladmihalcea.com]

If we’re building the JPQL dynamically using Criteria API, we can still call any SQL function as long as Hibernate knows about it. Good to know.

>> A Beginner’s Guide to JPA’s persistence.xml [thoughts-on-java.org]

It’s hard to remember all the JPA configuration details – having this as a reference and a good place to refresh them is a good idea.

>> Getting to Know Graal, the New Java JIT Compiler [infoq.com]

A major evolution seems to be coming to the JVM – can’t wait.

>> Spring Boot, migrating to functional [blog.frankel.ch]

A quick look at the new breed of Java web applications.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Evolution of Application Data Caching: From RAM to SSD [medium.com]

A super interesting dive into the decisions made by Netflix that turned out to be great.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Elbonian Sales Video Assignment [dilbert.com]

>> Dilbert Is Misinterpreted [dilbert.com]

>> No Plans To Reorganize [dilbert.com]

5. Pick of the Week

>> Software Development [xkcd.com]

Uploading MultipartFile with Spring RestTemplate


1. Overview

This quick tutorial focuses on how to upload a multipart file using Spring’s RestTemplate.

We’ll see both single file and multiple file uploads using the RestTemplate.

2. What is an HTTP Multipart Request?

Simply put, a basic HTTP POST request body holds form data in name/value pairs.

On the other hand, HTTP clients can construct HTTP multipart requests to send text or binary files to the server; it’s mainly used for uploading files.

Another common use case is sending an email with an attachment. Multipart requests split the body into parts and use boundary markers to indicate the start and end of each block.

Explore more about multipart requests here.
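As a rough illustration of what goes over the wire (the boundary value and filename here are made up), a multipart POST carrying one file looks like this:

```
POST /fileserver/singlefileupload/ HTTP/1.1
Content-Type: multipart/form-data; boundary=ABC123

--ABC123
Content-Disposition: form-data; name="file"; filename="test.txt"
Content-Type: text/plain

Hello World
--ABC123--
```

RestTemplate will generate a request of this shape for us, so we never have to assemble the boundaries by hand.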

3. Maven Dependency

This single dependency is enough for the client application:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.0.7.RELEASE</version>
</dependency>

4. The File Upload Server

The file server API exposes two REST endpoints for uploading single and multiple files respectively:

  • POST /fileserver/singlefileupload/
  • POST /fileserver/multiplefileupload/

5. Uploading a Single File

First, let’s see single file upload using the RestTemplate.

We need to create an HttpEntity with a header and a body. Set the content-type header value to MediaType.MULTIPART_FORM_DATA. When this header is set, RestTemplate automatically marshals the file data along with some metadata.

Metadata includes file name, file size, and file content type (for example text/plain):

HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.MULTIPART_FORM_DATA);

Next, build the request body as an instance of the LinkedMultiValueMap class. LinkedMultiValueMap wraps a LinkedHashMap, storing the multiple values for each key in a LinkedList.

In our example, the getTestFile( ) method generates a dummy file on the fly and returns a FileSystemResource:

MultiValueMap<String, Object> body
  = new LinkedMultiValueMap<>();
body.add("file", getTestFile());

Finally, construct an HttpEntity instance that wraps the header and the body object and post it using a RestTemplate.

Note that the single file upload points to the /fileserver/singlefileupload/ endpoint.

The call to restTemplate.postForEntity() then completes the job of connecting to the given URL and sending the file to the server:

HttpEntity<MultiValueMap<String, Object>> requestEntity
 = new HttpEntity<>(body, headers);

String serverUrl = "http://localhost:8082/spring-rest/fileserver/singlefileupload/";

RestTemplate restTemplate = new RestTemplate();
ResponseEntity<String> response = restTemplate
  .postForEntity(serverUrl, requestEntity, String.class);

6. Uploading Multiple Files

In multiple file upload, the only change from single file upload is in constructing the body of the request.

Let’s create multiple files and add them with the same key in MultiValueMap.

Obviously, the request URL should refer to the endpoint for multiple file upload:

MultiValueMap<String, Object> body
  = new LinkedMultiValueMap<>();
body.add("files", getTestFile());
body.add("files", getTestFile());
body.add("files", getTestFile());
    
HttpEntity<MultiValueMap<String, Object>> requestEntity
  = new HttpEntity<>(body, headers);

String serverUrl = "http://localhost:8082/spring-rest/fileserver/multiplefileupload/";

RestTemplate restTemplate = new RestTemplate();
ResponseEntity<String> response = restTemplate
  .postForEntity(serverUrl, requestEntity, String.class);

Note that a single file upload can always be modeled as a multiple file upload containing just one file.

7. Conclusion

In conclusion, we saw a case of MultipartFile transfer using Spring RestTemplate.

As always, the example client and server source code is available over on GitHub.

Idiomatic Logging in Kotlin


1. Introduction

In this tutorial, we’ll take a look at a few logging idioms that fit typical Kotlin programming styles.

2. Logging Idioms

Logging is a ubiquitous need in programming. While apparently a simple idea (just print stuff!), there are many ways to do it.

In fact, every language, operating system and environment has its own idiomatic and sometimes idiosyncratic logging solution; often, actually, more than one.

Here, we’ll focus on Kotlin’s logging story.

We’ll also use logging as a pretext for diving into some advanced Kotlin features and exploring their nuances.

3. Setup

For the code examples, we’ll use the SLF4J library, but the same patterns and solutions apply to Log4J, JUL, and other logging libraries.

So, let’s begin by including the SLF4J API and Logback dependencies in our pom:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.25</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.2.3</version>
</dependency>

Now, let’s take a look at what logging looks like for four different approaches:

  • A property
  • A companion object
  • An extension method, and
  • A delegated property

4. Logger as a Property

The first thing we might try is to declare a logger property wherever we need it:

class Property {
    private val logger = LoggerFactory.getLogger(javaClass)

    //...
}

Here, we’ve used javaClass to dynamically compute the logger’s name from the defining class name. We can thus readily copy and paste this snippet wherever we want.

Then, we can use the logger in any method of the declaring class:

fun log(s: String) {
    logger.info(s)
}

We’ve chosen to declare the logger as private because we don’t want other classes, including subclasses, to have access to it and log on behalf of our class.

Of course, this is merely a hint for programmers rather than a strongly enforced rule, since it’s easy to obtain a logger with the same name.

4.1. Saving Some Typing

We could shorten our code a bit by factoring the getLogger call to a function:

fun getLogger(forClass: Class<*>): Logger =
  LoggerFactory.getLogger(forClass)

And by placing this into a utility class, we can now simply call getLogger(javaClass) instead of LoggerFactory.getLogger(javaClass) throughout the samples below.

5. Logger in a Companion Object

While the last example is powerful in its simplicity, it is not the most efficient.

First, holding a reference to a logger in each class instance costs memory. Second, even though loggers are cached, we’ll still incur a cache lookup for every object instance that has a logger.

Let’s see if companion objects fare any better.

5.1. A First Attempt

In Java, declaring the logger as static is a pattern that addresses the above concerns.

In Kotlin, though, we don’t have static properties.

But we can emulate them with companion objects:

class LoggerInCompanionObject {
    companion object {
        private val loggerWithExplicitClass
          = getLogger(LoggerInCompanionObject::class.java)
    }

    //...
}

Notice how we’ve reused the getLogger convenience function from section 4.1. We’ll keep referring to it throughout the article.

So, with the above code, we can again use the logger exactly as before, in any method of the class:

fun log(s: String) {
    loggerWithExplicitClass.info(s)
}

5.2. What Happened to javaClass?

Sadly, the above approach comes with a drawback. Because we are directly referring to the enclosing class:

LoggerInCompanionObject::class.java

we’ve lost the ease of copy-pasting.

But why not just use javaClass like before? Actually, we can’t. If we had, we would have incorrectly obtained a logger named after the companion object’s class:

//Incorrect!
class LoggerInCompanionObject {
    companion object {
        private val loggerWithWrongClass = getLogger(javaClass)
    }
}
//...
loggerWithWrongClass.info("test")

The above would output a slightly wrong logger name. Take a look at the $Companion bit:

21:46:36.377 [main] INFO
com.baeldung.kotlin.logging.LoggerInCompanionObject$Companion - test

In fact, IntelliJ IDEA marks the declaration of the logger with a warning, because it recognizes that the reference to javaClass in a companion object probably isn’t what we want.

5.3. Deriving the Class Name With Reflection

Still, not all is lost.

We do have a way to derive the class name automatically and restore our ability to copy and paste the code, but we need an extra piece of reflection to do so.

First, let’s ensure we have the kotlin-reflect dependency in our pom:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-reflect</artifactId>
    <version>1.2.51</version>
</dependency>

Then, we can dynamically obtain the correct class name for logging:

companion object {
    @Suppress("JAVA_CLASS_ON_COMPANION")
    private val logger = getLogger(javaClass.enclosingClass)
}
//...
logger.info("I feel good!")

We’ll now get the correct output:

10:00:32.840 [main] INFO
com.baeldung.kotlin.logging.LoggerInCompanionObject - I feel good!

The reason we use enclosingClass comes from the fact that companion objects, in the end, are instances of inner classes, so enclosingClass refers to the outer class, or in this case, LoggerInCompanionObject.

Also, it’s okay now for us to suppress the warning that IntelliJ IDEA gives on javaClass since now we’re doing the right thing with it.

5.4. @JvmStatic

While the properties of companion objects look like static fields, companion objects are more like singletons.

When targeting the JVM, though, Kotlin has a special feature for exposing a companion object property as a static member — the @JvmStatic annotation:

@JvmStatic
private val logger = getLogger(javaClass.enclosingClass)

5.5. Putting It All Together

Let’s put all three improvements together. When joined together, these improvements make our logging construct copy-pastable and static:

class LoggerInCompanionObject {
    companion object {
        @Suppress("JAVA_CLASS_ON_COMPANION")
        @JvmStatic
        private val logger = getLogger(javaClass.enclosingClass)
    }

    fun log(s: String) {
        logger.info(s)
    }
}

6. Logger From an Extension Method

While interesting and efficient, using a companion object is verbose. What started as a one-liner is now multiple lines to copy-paste all over the code base.

Also, using companion objects produces extra inner classes. Compared with the simple static logger declaration in Java, using companion objects is heavier.

So, let’s try an approach using extension methods.

6.1. A First Attempt

The basic idea is to define an extension method that returns a Logger, so every class that needs it can just call the method and obtain the correct instance.

We can define this anywhere on the classpath:

fun <T : Any> T.logger(): Logger = getLogger(javaClass)

An extension method becomes available on every class to which it applies, and javaClass resolves to the runtime class of the receiver; so, we can simply refer directly to javaClass again.

And now, all classes will have the method logger as if it had been defined in the type:

class LoggerAsExtensionOnAny { // implied ": Any"
    fun log(s: String) {
        logger().info(s)
    }
}

While this approach is more concise than companion objects, we might want to smooth out some problems with it first.

6.2. Pollution of the Any Type

A significant drawback of our first extension method is that it pollutes the Any type.

Because we defined it as applying to any type at all, it ends up a bit invasive:

"foo".logger().info("uh-oh!")
// Sample output:
// 13:19:07.826 [main] INFO java.lang.String - uh-oh!

By defining logger() on Any, we’ve polluted all types in the language with the method.

This isn’t necessarily a problem. It doesn’t prevent other classes from having their own logger methods.

However, aside from the extra noise, it also breaks encapsulation. Types could now log for each other, which we don’t want.

And logger will now pop up on almost every IDE code suggestion.

6.3. Extension Method on a Marker Interface

We can narrow our extension method’s scope with a marker interface:

interface Logging

Having defined this interface, we can indicate that our extension method only applies to types that implement this interface:

fun <T : Logging> T.logger(): Logger = getLogger(javaClass)

And now, if we change our type to implement Logging, we can use logger as before:

class LoggerAsExtensionOnMarkerInterface : Logging {
    fun log(s: String) {
        logger().info(s)
    }
}

6.4. Reified Type Parameter

In the last two examples, we’ve used reflection to obtain the javaClass and give a distinguished name to our logger.

However, we can also extract such information from the T type parameter, avoiding a reflection call at runtime. To achieve this, we’ll declare the function as inline and reify the type parameter:

inline fun <reified T : Logging> T.logger(): Logger =
  getLogger(T::class.java)

Note that this changes the semantics of the code with respect to inheritance. We’ll discuss this in detail in section 8.

6.5. Combining with Logger Properties

A nice thing about extension methods is that we can combine them with our first approach:

val logger = logger()

6.6. Combining with Companion Objects

But the story is more complex if we want to use our extension method in a companion object:

companion object : Logging {
    val logger = logger()
}

Because we’d have the same problem with javaClass as before:

com.baeldung.kotlin.logging.LoggerAsExtensionOnMarkerInterface$Companion

To account for this, let’s first define a method that obtains the class more robustly:

inline fun <T : Any> getClassForLogging(javaClass: Class<T>): Class<*> {
    return javaClass.enclosingClass?.takeIf {
        it.kotlin.companionObject?.java == javaClass
    } ?: javaClass
}

Here, getClassForLogging returns the enclosingClass if javaClass refers to a companion object.

And now we can again update our extension method:

inline fun <reified T : Logging> T.logger(): Logger
  = getLogger(getClassForLogging(T::class.java))

This way, we can actually use the same extension method whether the logger is included as a property or a companion object.

7. Logger as a Delegated Property

Lastly, let’s look at delegated properties.

What’s nice about this approach is that we avoid namespace pollution without requiring a marker interface:

class LoggerDelegate<in R : Any> : ReadOnlyProperty<R, Logger> {
    override fun getValue(thisRef: R, property: KProperty<*>)
     = getLogger(getClassForLogging(thisRef.javaClass))
}

We can then use it with a property:

private val logger by LoggerDelegate()

Because of getClassForLogging, this works for companion objects, too:

companion object {
    val logger by LoggerDelegate()
}

And while delegated properties are powerful, note that getValue is re-computed each time the property is read.

Also, we should remember that delegated properties rely on reflection to work.

8. A Few Notes About Inheritance

It’s very typical to have one logger per class. And that’s why we also typically declare loggers as private.

However, there are times when we’ll want our subclasses to refer to their superclass’s logger.

And depending on our use case, the above four approaches will behave differently.

In general, when we use reflection or other dynamic features, we pick up the actual class of the object at runtime.

But, when we statically refer to a class or a reified type parameter by name, the value will be fixed at compile time.

For example, with delegated properties, since the logger instance is obtained dynamically every time the property is read, it will take the name of the class where it’s used:

open class LoggerAsPropertyDelegate {
    protected val logger by LoggerDelegate()
    //...
}

class DelegateSubclass : LoggerAsPropertyDelegate() {
    fun show() {
        logger.info("look!")
    }
}

Let’s look at the output:

09:23:33.093 [main] INFO
com.baeldung.kotlin.logging.DelegateSubclass - look!

Even though logger is declared in the superclass, it prints the name of the subclass.

The same happens when a logger is declared as a property and instantiated using javaClass.

And extension methods exhibit this behavior, too, unless we reify the type parameter.

Conversely, with reified generics, explicit class names and companion objects, a logger’s name stays the same across the type hierarchy.

9. Conclusions

In this article, we’ve looked at several Kotlin techniques that we can apply to the task of declaring and instantiating loggers.

Starting simply, we progressively increased complexity in a series of attempts to improve efficiency and reduce boilerplate, taking a look at Kotlin companion objects, extension methods, and delegated properties.

As always, these examples are available in full over on GitHub.
