
Introduction to Creational Design Patterns


1. Introduction

In software engineering, a Design Pattern describes an established solution to the most commonly encountered problems in software design. It represents the best practices evolved over a long period through trial and error by experienced software developers.

Design Patterns gained popularity after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (also known as the Gang of Four, or GoF).

In this article, we’ll explore creational design patterns and their types. We’ll also look at some code samples and discuss the situations when these patterns fit our design.

2. Creational Design Patterns

Creational Design Patterns are concerned with the way in which objects are created. They reduce complexities and instability by creating objects in a controlled manner.

The new operator is often considered harmful as it scatters objects all over the application. Over time it can become challenging to change an implementation because classes become tightly coupled.

Creational Design Patterns address this issue by decoupling the client entirely from the actual initialization process.

In this article, we’ll discuss four types of Creational Design Pattern:

  1. Singleton – Ensures that at most one instance of an object exists throughout the application
  2. Factory Method – Creates objects of several related classes without specifying the exact object to be created
  3. Abstract Factory – Creates families of related dependent objects
  4. Builder – Constructs complex objects using a step-by-step approach

Let’s now discuss each of these patterns in detail.

3. Singleton Design Pattern

The Singleton Design Pattern aims to keep a check on the initialization of objects of a particular class by ensuring that only one instance of the object exists throughout the Java Virtual Machine.

A Singleton class also provides one unique global access point to the object so that each subsequent call to the access point returns only that particular object.

3.1. Singleton Pattern Example

Although the Singleton pattern was introduced by GoF, the original implementation is known to be problematic in multithreaded scenarios.

So here, we’re going to follow a more optimal approach that makes use of a static inner class:

public class Singleton  {    
    private Singleton() {}
    
    private static class SingletonHolder {    
        public static final Singleton instance = new Singleton();
    }

    public static Singleton getInstance() {    
        return SingletonHolder.instance;    
    }
}

Here, we’ve created a static inner class that holds the instance of the Singleton class. It creates the instance only when someone calls the getInstance() method and not when the outer class is loaded.

This is a widely used approach for a Singleton class as it doesn't require synchronization, is thread safe, enforces lazy initialization, and has comparatively little boilerplate.

Also, note that the constructor has the private access modifier. This is a requirement for creating a Singleton since a public constructor would mean anyone could access it and start creating new instances.
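As a quick sanity check, we can confirm that repeated calls to getInstance() hand back the same object (this snippet is just for illustration):

Singleton first = Singleton.getInstance();
Singleton second = Singleton.getInstance();

// prints "true" – both references point to the one and only instance
System.out.println(first == second);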

Remember, this isn’t the original GoF implementation. For the original version, please visit this linked Baeldung article on Singletons in Java.

3.2. When to Use Singleton Design Pattern

  • Resources that are expensive to create (like database connection objects)
  • Loggers – it's good practice to keep them as Singletons, which improves performance
  • Classes which provide access to configuration settings for the application
  • Classes that contain resources that are accessed in shared mode

4. Factory Method Design Pattern

The Factory Design Pattern or Factory Method Design Pattern is one of the most used design patterns in Java.

According to GoF, this pattern “defines an interface for creating an object, but let subclasses decide which class to instantiate. The Factory method lets a class defer instantiation to subclasses”.

This pattern delegates the responsibility of initializing a class from the client to a particular factory class by creating a type of virtual constructor.

To achieve this, we rely on a factory which provides us with the objects, hiding the actual implementation details. The created objects are accessed using a common interface.

4.1. Factory Method Design Pattern Example

In this example, we’ll create a Polygon interface which will be implemented by several concrete classes. A PolygonFactory will be used to fetch objects from this family:

Factory Method Design Pattern - Class Diagram

Let’s first create the Polygon interface:

public interface Polygon {
    String getType();
}

Next, we’ll create a few implementations like Square, Triangle, etc. that implement this interface and return an object of Polygon type.
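For instance, a minimal Triangle implementation might look like this (the exact classes live in the linked sample code):

public class Triangle implements Polygon {

    @Override
    public String getType() {
        return "Triangle";
    }
}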

Now we can create a factory that takes the number of sides as an argument and returns the appropriate implementation of this interface:

public class PolygonFactory {
    public Polygon getPolygon(int numberOfSides) {
        if(numberOfSides == 3) {
            return new Triangle();
        }
        if(numberOfSides == 4) {
            return new Square();
        }
        if(numberOfSides == 5) {
            return new Pentagon();
        }
        if(numberOfSides == 7) {
            return new Heptagon();
        }
        if(numberOfSides == 8) {
            return new Octagon();
        }
        return null;
    }
}

Notice how the client can rely on this factory to give it an appropriate Polygon, without having to initialize the object directly.
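For example, client code could look something like this:

PolygonFactory factory = new PolygonFactory();

// returns a Square; the client only ever sees the Polygon interface
Polygon polygon = factory.getPolygon(4);
System.out.println(polygon.getType());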

4.2. When to Use Factory Method Design Pattern

  • When the implementation of an interface or an abstract class is expected to change frequently
  • When the current implementation cannot comfortably accommodate new change
  • When the initialization process is relatively simple, and the constructor only requires a handful of parameters

5. Abstract Factory Design Pattern

In the previous section, we saw how the Factory Method design pattern could be used to create objects related to a single family.

By contrast, the Abstract Factory Design Pattern is used to create families of related or dependent objects. It’s also sometimes called a factory of factories.

The GoF definition states that an Abstract Factory “provides an interface for creating families of related or dependent objects without specifying their concrete classes”.

5.1. Abstract Factory Design Pattern Example

In this example, we’ll create two implementations of the Factory Method Design pattern: AnimalFactory and ColorFactory.

We’ll then manage access to them using an Abstract Factory AbstractFactory:

Abstract Factory Design Pattern - Class Diagram

First, we'll create a family of Animal classes and will, later on, use it in our Abstract Factory.

Here’s the Animal interface:

public interface Animal {
    String getAnimal();
    String makeSound();
}

and a concrete implementation Duck:

public class Duck implements Animal {

    @Override
    public String getAnimal() {
        return "Duck";
    }

    @Override
    public String makeSound() {
        return "Squeks";
    }
}

We can create more concrete implementations of the Animal interface (like Dog, Bear, etc.) in exactly this manner.

The Abstract Factory deals with families of dependent objects. With that in mind, we're going to introduce one more family, Color, as an interface with a few implementations (White, Brown,…).

We’ll skip the actual code for now, but it can be found here.
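Still, to keep this walkthrough self-contained, here's a plausible minimal sketch of the Color interface; its single accessor is an assumption based on how the family is used below:

public interface Color {
    String getColor();
}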

Now that we’ve got multiple families ready, we can create an AbstractFactory interface for them:

public interface AbstractFactory {
    Animal getAnimal(String animalType) ;
    Color getColor(String colorType);
}

Next, we’ll implement an AnimalFactory using the Factory Method design pattern that we discussed in the previous section:

public class AnimalFactory implements AbstractFactory {

    @Override
    public Animal getAnimal(String animalType) {
        if ("Dog".equalsIgnoreCase(animalType)) {
            return new Dog();
        } else if ("Duck".equalsIgnoreCase(animalType)) {
            return new Duck();
        }

        return null;
    }

    @Override
    public Color getColor(String color) {
        throw new UnsupportedOperationException();
    }

}

Similarly, we can implement a factory for the Color interface using the same design pattern.
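A minimal sketch of such a ColorFactory, mirroring AnimalFactory above (the concrete White and Brown classes are assumed to follow the Duck example):

public class ColorFactory implements AbstractFactory {

    @Override
    public Animal getAnimal(String animalType) {
        throw new UnsupportedOperationException();
    }

    @Override
    public Color getColor(String colorType) {
        if ("White".equalsIgnoreCase(colorType)) {
            return new White();
        } else if ("Brown".equalsIgnoreCase(colorType)) {
            return new Brown();
        }

        return null;
    }
}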

When all this is set, we’ll create a FactoryProvider class that will provide us with an implementation of AnimalFactory or ColorFactory depending on the argument supplied to the getFactory() method:

public class FactoryProvider {
    public static AbstractFactory getFactory(String choice){
        
        if("Animal".equalsIgnoreCase(choice)){
            return new AnimalFactory();
        }
        else if("Color".equalsIgnoreCase(choice)){
            return new ColorFactory();
        }
        
        return null;
    }
}
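Putting it all together, a client first asks the provider for the right factory and then asks that factory for a product:

AbstractFactory animalFactory = FactoryProvider.getFactory("Animal");
Animal duck = animalFactory.getAnimal("Duck");

System.out.println(duck.getAnimal() + " says " + duck.makeSound());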

5.2. When to Use Abstract Factory Pattern

  • The client should be independent of how the products are created and composed in the system
  • The system consists of multiple families of products, and these families are designed to be used together
  • We need a run-time value to construct a particular dependency

6. Builder Design Pattern

The Builder Design Pattern is another creational pattern designed to deal with the construction of comparatively complex objects.

When the complexity of creating an object increases, the Builder pattern can separate out the instantiation process by using another object (a builder) to construct the object.

This builder can then be used to create many other similar representations using a simple step-by-step approach.

6.1. Builder Pattern Example

The original Builder Design Pattern introduced by GoF focuses on abstraction and is very good when dealing with complex objects; however, the design is a little complicated.

Joshua Bloch, in his book Effective Java, introduced an improved version of the builder pattern which is clean, highly readable (because it makes use of fluent design) and easy to use from the client's perspective. In this example, we'll discuss that version.

This example has only one class, BankAccount which contains a builder as a static inner class:

public class BankAccount {
    
    private String name;
    private String accountNumber;
    private String email;
    private boolean newsletter;

    // constructors/getters
    
    public static class BankAccountBuilder {
        // builder code
    }
}

Note that all the fields are declared private since we don't want outer objects to access them directly.

The constructor is also private so that only the Builder assigned to this class can access it. All of the properties set in the constructor are extracted from the builder object which we supply as an argument.
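A minimal sketch of that private constructor, assuming the field names shown above:

private BankAccount(BankAccountBuilder builder) {
    this.name = builder.name;
    this.accountNumber = builder.accountNumber;
    this.email = builder.email;
    this.newsletter = builder.newsletter;
}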

We’ve defined BankAccountBuilder in a static inner class:

public static class BankAccountBuilder {
    
    private String name;
    private String accountNumber;
    private String email;
    private boolean newsletter;
    
    public BankAccountBuilder(String name, String accountNumber) {
        this.name = name;
        this.accountNumber = accountNumber;
    }

    public BankAccountBuilder withEmail(String email) {
        this.email = email;
        return this;
    }

    public BankAccountBuilder wantNewsletter(boolean newsletter) {
        this.newsletter = newsletter;
        return this;
    }
    
    public BankAccount build() {
        return new BankAccount(this);
    }
}

Notice we’ve declared the same set of fields that the outer class contains. Any mandatory fields are required as arguments to the inner class’s constructor while the remaining optional fields can be specified using the setter methods.

This implementation also supports the fluent design approach by having the setter methods return the builder object.

Finally, the build method calls the private constructor of the outer class and passes itself as the argument. The returned BankAccount will be instantiated with the parameters set by the BankAccountBuilder.

Let’s see a quick example of the builder pattern in action:

BankAccount newAccount = new BankAccount
  .BankAccountBuilder("Jon", "22738022275")
  .withEmail("jon@example.com")
  .wantNewsletter(true)
  .build();

6.2. When to Use Builder Pattern

  1. When the process involved in creating an object is extremely complex, with lots of mandatory and optional parameters
  2. When an increase in the number of constructor parameters leads to a large list of constructors
  3. When the client expects different representations for the object that's constructed

7. Conclusion

In this article, we learned about creational design patterns in Java. We also discussed their four different types, i.e., Singleton, Factory Method, Abstract Factory and Builder Pattern, their advantages, examples, and when we should use them.

As always, the complete code snippets are available over on GitHub.


Spring 5 Testing with @EnabledIf Annotation


1. Introduction

In this quick article, we’ll discover the @EnabledIf and @DisabledIf annotations in Spring 5 using JUnit 5.

Simply put, those annotations make it possible to enable or disable a particular test based on whether a specified condition is met.

We’ll use a simple test class to show how these annotations work:

@SpringJUnitConfig(Spring5EnabledAnnotationTest.Config.class)
public class Spring5EnabledAnnotationTest {
 
    @Configuration
    static class Config {}
}

2. @EnabledIf

Let's add this simple test, with the text literal "true", to our class:

@EnabledIf("true")
@Test
void givenEnabledIfLiteral_WhenTrue_ThenTestExecuted() {
    assertTrue(true);
}

If we run this test, it executes normally.

However, if we replace the provided String with "false", the test is not executed.
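For reference, here's the same test with the literal "false", which is simply skipped:

@EnabledIf("false")
@Test
void givenEnabledIfLiteral_WhenFalse_ThenTestNotExecuted() {
    assertTrue(true);
}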

Keep in mind that if you want to statically disable a test, there’s a dedicated @Disabled annotation for this.

3. @EnabledIf with a Property Placeholder

A more practical way of using @EnabledIf is by using a property placeholder:

@Test
@EnabledIf(
  expression = "${tests.enabled}", 
  loadContext = true)
void givenEnabledIfExpression_WhenTrue_ThenTestExecuted() {
    // ...
}

First of all, we need to make sure that the loadContext parameter is set to true so that the Spring context gets loaded.

By default, this parameter is set to false to avoid unnecessary context loading.
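For the expression above to resolve, a property such as the following needs to be available to the test context, for example in an application.properties file (the property name tests.enabled comes from the placeholder above):

tests.enabled=true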

4. @EnabledIf with a SpEL Expression

Finally, we can use the annotation with Spring Expression Language (SpEL) expressions.

For example, we can enable tests only when running on JDK 1.8:

@Test
@EnabledIf("#{systemProperties['java.version'].startsWith('1.8')}")
void givenEnabledIfSpel_WhenTrue_ThenTestExecuted() {
    assertTrue(true);
}

5. @DisabledIf

This annotation is the opposite of @EnabledIf.

For example, we can disable a test when running on Java 1.7:

@Test
@DisabledIf("#{systemProperties['java.version'].startsWith('1.7')}")
void givenDisabledIf_WhenTrue_ThenTestNotExecuted() {
    assertTrue(true);
}

6. Conclusion

In this brief article, we went through several examples of the usage of @EnabledIf and @DisabledIf annotations in JUnit 5 tests using the SpringExtension.

The full source code for the examples is available over on GitHub.

Display All Time Zones With GMT And UTC in Java


1. Overview

Whenever we deal with times and dates, we need a frame of reference. The standard for that is UTC, but we also see GMT in some applications.

In short, UTC is the standard, while GMT is a time zone.

This is what Wikipedia tells us regarding what to use:

For most purposes, UTC is considered interchangeable with Greenwich Mean Time (GMT), but GMT is no longer precisely defined by the scientific community.

In other words, once we compile a list with time zone offsets in UTC, we’ll have it for GMT as well.

First, we’ll have a look at the Java 8 way of achieving this and then we’ll see how we can get the same result in Java 7.

2. Getting a List of Zones

To start with, we need to retrieve a list of all defined time zones.

For this purpose, the ZoneId class has a handy static method:

Set<String> availableZoneIds = ZoneId.getAvailableZoneIds();

Then, we can use the Set to generate a sorted list of time zones with their corresponding offsets:

public List<String> getTimeZoneList(OffsetBase base) {
 
    LocalDateTime now = LocalDateTime.now();
    return ZoneId.getAvailableZoneIds().stream()
      .map(ZoneId::of)
      .sorted(new ZoneComparator())
      .map(id -> String.format(
        "(%s%s) %s", 
        base, getOffset(now, id), id.getId()))
      .collect(Collectors.toList());
}

The method above uses an enum parameter which represents the offset we want to see:

public enum OffsetBase {
    GMT, UTC
}

Now let’s go over the code in more detail.

Once we’ve retrieved all available zone IDs, we need an actual time reference, represented by LocalDateTime.now().

After that, we use Java's Stream API to iterate over each entry in our set of time zone String IDs and transform it into a list of formatted time zones with the corresponding offset.

For each of these entries, we generate a ZoneId instance with map(ZoneId::of). 

3. Getting Offsets

We also need to find actual UTC offsets. For example, in the case of Central European Time, the offset would be +01:00.

To get the UTC offset for any given zone, we can convert our LocalDateTime to a ZonedDateTime with atZone() and call getOffset() on the result.

Also note that Java represents +00:00 offsets as Z.

So, to have a consistent looking String for time zones with the zero offset, we’ll replace Z with +00:00:

private String getOffset(LocalDateTime dateTime, ZoneId id) {
    return dateTime
      .atZone(id)
      .getOffset()
      .getId()
      .replace("Z", "+00:00");
}

4. Making Zones Comparable

Optionally, we can also sort the time zones according to offset.

For this, we’ll use a ZoneComparator class:

private class ZoneComparator implements Comparator<ZoneId> {

    @Override
    public int compare(ZoneId zoneId1, ZoneId zoneId2) {
        LocalDateTime now = LocalDateTime.now();
        ZoneOffset offset1 = now.atZone(zoneId1).getOffset();
        ZoneOffset offset2 = now.atZone(zoneId2).getOffset();

        return offset1.compareTo(offset2);
    }
}

5. Displaying Time Zones

All that's left to do is to put the above pieces together by calling the getTimeZoneList() method for each OffsetBase enum value and displaying the lists:

public class TimezoneDisplayApp {

    public static void main(String... args) {
        TimezoneDisplay display = new TimezoneDisplay();

        System.out.println("Time zones in UTC:");
        List<String> utc = display.getTimeZoneList(
          TimezoneDisplay.OffsetBase.UTC);
        utc.forEach(System.out::println);

        System.out.println("Time zones in GMT:");
        List<String> gmt = display.getTimeZoneList(
          TimezoneDisplay.OffsetBase.GMT);
        gmt.forEach(System.out::println);
    }
}

When we run the above code, it’ll print the time zones for UTC and GMT.

Here's a snippet of what the output looks like:

Time zones in UTC:
(UTC+14:00) Pacific/Apia
(UTC+14:00) Pacific/Kiritimati
(UTC+14:00) Pacific/Tongatapu
(UTC+14:00) Etc/GMT-14

6. Java 7 and Before

Java 8 makes this task easier by using the Stream and Date and Time APIs.

However, if we have a Java 7 or earlier project, we can still achieve the same result by relying on the java.util.TimeZone class with its getAvailableIDs() method:

public List<String> getTimeZoneList(OffsetBase base) {
    String[] availableZoneIds = TimeZone.getAvailableIDs();
    List<String> result = new ArrayList<>(availableZoneIds.length);

    for (String zoneId : availableZoneIds) {
        TimeZone curTimeZone = TimeZone.getTimeZone(zoneId);
        String offset = calculateOffset(curTimeZone.getRawOffset());
        result.add(String.format("(%s%s) %s", base, offset, zoneId));
    }
    Collections.sort(result);
    return result;
}

The main difference with the Java 8 code is the offset calculation.

The rawOffset we get from TimeZone's getRawOffset() method expresses the time zone's offset in milliseconds.

Therefore, we need to convert this to hours and minutes using the TimeUnit class:

private String calculateOffset(int rawOffset) {
    if (rawOffset == 0) {
        return "+00:00";
    }
    long hours = TimeUnit.MILLISECONDS.toHours(rawOffset);
    long minutes = TimeUnit.MILLISECONDS.toMinutes(rawOffset);
    minutes = Math.abs(minutes - TimeUnit.HOURS.toMinutes(hours));

    return String.format("%+03d:%02d", hours, Math.abs(minutes));
}

7. Conclusion

In this quick tutorial, we’ve seen how we can compile a list of all available time zones with their UTC and GMT offsets.

And, as always, the full source code for the examples is available over on GitHub, both the Java 8 version and Java 7 version.

Java Weekly, Issue 204


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> First Contact With ‘var’ In Java 10 [blog.codefx.org]

Java 9 was released two months ago and there’s already quite a lot of excitement around features of the next version.

>> Fresh Async With Kotlin: Roman Elizarov Presents at QCon SF [infoq.com]

Kotlin has some cool features for asynchronous programming.

>> Dynamic Validation with Spring Boot Validation [blog.codecentric.de]

An interesting case of making the Bean Validation dynamic in Spring.

>> Java 10 – The Story So Far [infoq.com]

Here’s what we already know about Java 10.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> The Myth of Advanced TDD [blog.thecodewhisperer.com]

Before you start looking at advanced TDD techniques, it's important to make sure you have the basics mastered first.

>> Install IntelliJ IDEA on Ubuntu with Snaps [blog.jetbrains.com]

Ubuntu users can finally install IntelliJ IDEA easily 🙂

Also worth reading: 

3. Musings

>> On developer shortage [blog.frankel.ch]

Simply put, if you don’t want to face the problem of not being able to find and attract good developers, make sure that you’re an attractive place for them to work.

>> Customize Your Agile Approach: What Do You Need for Estimation? [infoq.com]

Agile is less restrictive than you'd think – when you adopt only the practices that actually work for you.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Wally is a Maverick [dilbert.com]

>> Tina the Whistleblower [dilbert.com]

>> Logical Reasons for Learning to Negotiate [dilbert.com]

5. Pick of the Week

>> Finally, An Official Shell in Java 9 – Introducing JShell [stackify.com]

How to Copy a File with Java


1. Overview

In this article, we’ll cover common ways of copying files in Java.

First, we’ll use the standard IO and NIO.2 APIs, and two external libraries: commons-io and guava.

2. IO API (Before JDK7)

First of all, to copy a file with java.io API, we’re required to open a stream, loop through the content and write it out to another stream:

@Test
public void givenIoAPI_whenCopied_thenCopyExistsWithSameContents() 
  throws IOException {
 
    File copied = new File("src/test/resources/copiedWithIo.txt");
    try (
      InputStream in = new BufferedInputStream(
        new FileInputStream(original));
      OutputStream out = new BufferedOutputStream(
        new FileOutputStream(copied))) {
 
        byte[] buffer = new byte[1024];
        int lengthRead;
        while ((lengthRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, lengthRead);
            out.flush();
        }
    }
 
    assertThat(copied).exists();
    assertThat(Files.readAllLines(original.toPath())
      .equals(Files.readAllLines(copied.toPath())));
}

Quite a lot of work to implement such basic functionality.

Luckily for us, Java has improved its core APIs and we have a simpler way of copying files using NIO.2 API.

3. NIO.2 API (JDK7)

Using NIO.2 can significantly increase file copying performance since NIO.2 utilizes lower-level system entry points.

Let’s take a closer look at how the Files.copy() method works.

The copy() method gives us the ability to specify an optional argument representing a copy option. By default, copying files and directories won’t overwrite existing ones, nor will it copy file attributes.

This behavior can be changed using the following copy options:

  • REPLACE_EXISTING – replace a file if it exists
  • COPY_ATTRIBUTES – copy metadata to the new file
  • NOFOLLOW_LINKS – shouldn’t follow symbolic links

The NIO.2 Files class provides a set of overloaded copy() methods for copying files and directories within the file system.

Let’s take a look at an example using copy() with two Path arguments:

@Test
public void givenNIO2_whenCopied_thenCopyExistsWithSameContents() 
  throws IOException {
 
    Path copied = Paths.get("src/test/resources/copiedWithNio.txt");
    Path originalPath = original.toPath();
    Files.copy(originalPath, copied, StandardCopyOption.REPLACE_EXISTING);
 
    assertThat(copied).exists();
    assertThat(Files.readAllLines(originalPath)
      .equals(Files.readAllLines(copied)));
}

Note that directory copies are shallow, meaning that files and sub-directories within the directory are not copied.
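If we need a deep copy, one option is to walk the source tree and copy each entry ourselves. Here's a minimal sketch, assuming source and target are existing Path instances:

try (Stream<Path> paths = Files.walk(source)) {
    paths.forEach(path -> {
        try {
            // recreate each file/directory under the corresponding target path
            Files.copy(path, target.resolve(source.relativize(path)),
              StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    });
}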

4. Apache Commons IO

Another common way to copy a file with Java is by using the commons-io library. 

First, we need to add the dependency:

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
</dependency>

The latest version can be downloaded from Maven Central.

Then, to copy a file we just need to use the copyFile() method defined in the FileUtils class. The method takes a source and a target file.

Let’s take a look at a JUnit test using the copyFile() method:

@Test
public void givenCommonsIoAPI_whenCopied_thenCopyExistsWithSameContents() 
  throws IOException {
    
    File copied = new File(
      "src/test/resources/copiedWithApacheCommons.txt");
    FileUtils.copyFile(original, copied);
    
    assertThat(copied).exists();
    assertThat(Files.readAllLines(original.toPath())
      .equals(Files.readAllLines(copied.toPath())));
}

5. Guava

Finally, we’ll take a look at Google’s Guava library.

Again, if we want to use Guava, we need to include the dependency:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>23.0</version>
</dependency>

The latest version can be found on Maven Central.

And here's Guava's way of copying a file:

@Test
public void givenGuava_whenCopied_thenCopyExistsWithSameContents() 
  throws IOException {
 
    File copied = new File("src/test/resources/copiedWithGuava.txt");
    com.google.common.io.Files.copy(original, copied);
 
    assertThat(copied).exists();
    assertThat(Files.readAllLines(original.toPath())
      .equals(Files.readAllLines(copied.toPath())));
}

6. Conclusion

In this article, we explored the most common ways to copy a file in Java.

The full implementation of this article can be found over on GitHub.

Introduction to Gradle


1. Overview

Gradle is a Groovy-based build management system designed specifically for building Java-based projects.

Installation instructions can be found here.

2. Building Blocks – Projects and Tasks

In Gradle, builds consist of one or more projects, and each project consists of one or more tasks.

A project in Gradle can represent assembling a jar, war, or even a zip file.

A task is a single piece of work. This can include compiling classes, or creating and publishing Java/web archives.

A simple task can be defined as:

task hello {
    doLast {
        println 'Baeldung'
    }
}

If we execute the above task using the gradle -q hello command from the same location where build.gradle resides, we should see the output in the console.

2.1. Tasks

Gradle’s build scripts are nothing but Groovy:

task toLower {
    doLast {
        String someString = 'HELLO FROM BAELDUNG'
        println "Original: "+ someString
        println "Lower case: " + someString.toLowerCase()
    }
}

We can define tasks that depend on other tasks. Task dependency can be defined by passing the dependsOn: taskName argument in a task definition:

task helloGradle {
    doLast {
        println 'Hello Gradle!'
    }
}

task fromBaeldung(dependsOn: helloGradle) {
    doLast {
        println "I'm from Baeldung"
    }
}

2.2. Adding Behavior to a Task

We can define a task and enhance it with some additional behavior:

task helloBaeldung {
    doLast {
        println 'I will be executed second'
    }
}

helloBaeldung.doFirst {
    println 'I will be executed first'
}

helloBaeldung.doLast {
    println 'I will be executed third'
}

helloBaeldung {
    doLast {
        println 'I will be executed fourth'
    }
}

doFirst and doLast add actions at the top and bottom of the action list, respectively, and can be defined multiple times in a single task.

2.3. Adding Task Properties

We can also define properties:

task ourTask {
    ext.theProperty = "theValue"
}

Here, we’re setting “theValue” as theProperty of the ourTask task.

3. Managing Plugins

There’re two types of plugins in Gradle – script and binary.

To benefit from additional functionality, every plugin needs to go through two phases: resolving and applying.

Resolving means finding the correct version of the plugin jar and adding that to the classpath of the project. 

Applying plugins is executing Plugin.apply(T) on the project.

3.1. Applying Script Plugins

In aplugin.gradle, we can define a task:

task fromPlugin {
    doLast {
        println "I'm from plugin"
    }
}

To apply this plugin to our project, all we need to do is add this line to our build.gradle file:

apply from: 'aplugin.gradle'

Now, executing the gradle tasks command should display the fromPlugin task in the task list.

3.2. Applying Binary Plugins Using Plugins DSL

In the case of adding a core binary plugin, we can add short names or a plugin id:

plugins {
    id 'application'
}

Now the run task from the application plugin should be available in the project to execute any runnable jar. To apply a community plugin, we have to specify a fully qualified plugin id:

plugins {
    id "org.shipkit.bintray" version "0.9.116"
}

Now, the Shipkit tasks should be available in the gradle tasks list.

The limitations of the plugins DSL are:

  • It doesn’t support Groovy code inside the plugins block
  • The plugins block needs to be the top-level statement in the project's build scripts (only the buildscript{} block is allowed before it)
  • Plugins DSL cannot be written in scripts plugin, settings.gradle file or in init scripts

The plugins DSL is still incubating. The DSL and other configuration may change in later Gradle versions.

3.3. Legacy Procedure for Applying Plugins

We can also apply plugins using the apply plugin notation:

apply plugin: 'war'

If we need to add a community plugin, we have to add the external jar to the build classpath using buildscript{} block.

Then, we can apply the plugin in the build scripts but only after any existing plugins{} block:

buildscript {
    repositories {
        maven {
            url "https://plugins.gradle.org/m2/"
        }
    }
    dependencies {
        classpath "org.shipkit:shipkit:0.9.117"
    }
}
apply plugin: "org.shipkit.bintray-release"

4. Dependency Management

Gradle supports a very flexible dependency management system that is compatible with a wide variety of available approaches.

Best practices for dependency management in Gradle are versioning, dynamic versioning, resolving version conflicts and managing transitive dependencies.

4.1. Dependency Configuration

Dependencies are grouped into different configurations. A configuration has a name, and configurations can extend each other.

If we apply the Java plugin, we'll have the compile, testCompile, and runtime configurations available for grouping our dependencies. The default configuration extends "runtime".

4.2. Declaring Dependencies

Let’s look at an example of adding some dependencies (Spring and Hibernate) using several different ways:

dependencies {
    compile group: 
      'org.springframework', name: 'spring-core', version: '4.3.5.RELEASE'
    compile 'org.springframework:spring-core:4.3.5.RELEASE',
            'org.springframework:spring-aop:4.3.5.RELEASE'
    compile(
        [group: 'org.springframework', name: 'spring-core', version: '4.3.5.RELEASE'],
        [group: 'org.springframework', name: 'spring-aop', version: '4.3.5.RELEASE']
    )
    testCompile('org.hibernate:hibernate-core:5.2.12.Final') {
        transitive = true
    }
    runtime(group: 'org.hibernate', name: 'hibernate-core', version: '5.2.12.Final') {
        transitive = false
    }
}

We're declaring dependencies in various configurations – compile, testCompile, and runtime – in various formats.

Sometimes we need dependencies that have multiple artifacts. In such cases, we can add the artifact-only notation @extensionName (or ext in the expanded form) to download the desired artifact:

runtime "org.codehaus.groovy:groovy-all:2.4.11@jar"
runtime group: 'org.codehaus.groovy', name: 'groovy-all', version: '2.4.11', ext: 'jar'

Here, we added the @jar notation to download only the jar artifact without the dependencies.

To add dependencies to any local files, we can use something like this:

compile files('libs/joda-time-2.2.jar', 'libs/junit-4.12.jar')
compile fileTree(dir: 'libs', include: '*.jar')

When we want to avoid transitive dependencies, we can do it on the configuration level or on the dependency level:

configurations {
    testCompile.exclude module: 'junit'
}
 
testCompile("org.springframework.batch:spring-batch-test:3.0.7.RELEASE"){
    exclude module: 'junit'
}

5. Multi-Project Builds

5.1. Build Lifecycle

In the initialization phase, Gradle determines which projects are going to take part in a multi-project build.

This is usually mentioned in settings.gradle file, which is located in the project root. Gradle also creates instances of the participating projects.

In the configuration phase, all created project instances are configured, based on Gradle's configuration-on-demand feature.

With this feature, only the projects required for a specific task execution are configured. This way, configuration time is greatly reduced for a large multi-project build. This feature is still incubating.

Finally, in the execution phase, a subset of the created and configured tasks is executed. We can include code in the settings.gradle and build.gradle files to observe these three phases.

In settings.gradle:

println 'At initialization phase.'

In build.gradle:

println 'At configuration phase.'

task configured { println 'Also at the configuration phase.' }

task execFirstTest { doLast { println 'During the execution phase.' } }

task execSecondTest {
    doFirst { println 'At first during the execution phase.' }
    doLast { println 'At last during the execution phase.' }
    println 'At configuration phase.'
}

5.2. Creating Multi-Project Build

We can execute the gradle init command in the root folder to create a skeleton for both the settings.gradle and build.gradle files.

All common configuration will be kept in the root build script:

allprojects {
    repositories {
        mavenCentral() 
    }
}

subprojects {
    version = '1.0'
}

The settings file needs to include the root project name and the subproject names:

rootProject.name = 'multi-project-builds'
include 'greeting-library','greeter'

Now we need a couple of subproject folders named greeting-library and greeter to demo a multi-project build. Each subproject needs an individual build script to configure its dependencies and other necessary configuration.

If we’d like to have our greeter project dependent on the greeting-library, we need to include the dependency in the build script of greeter:

dependencies {
    compile project(':greeting-library') 
}

6. Using Gradle Wrapper

If a Gradle project has a gradlew file for Linux and a gradlew.bat file for Windows, we don't need to install Gradle to build the project.

If we execute gradlew build in Windows and ./gradlew build in Linux, a Gradle distribution specified in the gradlew file will be downloaded automatically.

If we’d like to add the Gradle wrapper to our project:

gradle wrapper --gradle-version 4.2.1

The command needs to be executed from the root of the project. This will create all necessary files and folders to tie Gradle wrapper to the project. The other way to do the same is to add the wrapper task to the build script:

task wrapper(type: Wrapper) {
    gradleVersion = '4.2.1'
}

Now we need to execute the wrapper task and the task will tie our project to the wrapper. Besides the gradlew files, a wrapper folder is generated inside the gradle folder containing a jar and a properties file.

If we want to switch to a new version of Gradle, we only need to change an entry in gradle-wrapper.properties.

7. Conclusion

In this article, we had a look at Gradle and saw that it offers greater flexibility than other existing build tools in terms of resolving version conflicts and managing transitive dependencies.

The source code for this article is available over on GitHub.

Send the Logs of a Java App to the Elastic Stack (ELK)


1. Overview

In this quick tutorial, we'll discuss, step by step, how to send our application logs to the Elastic Stack (ELK).

In an earlier article, we focused on setting up the Elastic Stack and sending JMX data into it.

2. Configure Logback

Let's start by configuring Logback to write app logs into a file using FileAppender:

<appender name="STASH" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logback/redditApp.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logback/redditApp.%d{yyyy-MM-dd}.log</fileNamePattern>
        <maxHistory>7</maxHistory>
    </rollingPolicy>  
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="DEBUG">
    <appender-ref ref="STASH" />        
</root>

Note that:

  • We keep logs of each day in a separate file by using RollingFileAppender with TimeBasedRollingPolicy (more about this appender here)
  • We’ll keep old logs for only a week (7 days) by setting maxHistory to 7

Also notice how we’re using the LogstashEncoder to do the encoding into a JSON format – which is easier to use with Logstash.
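On the application side, nothing changes in how we log; plain SLF4J statements get serialized to JSON by the encoder (MyService here is just a placeholder class name):

private static final Logger logger = LoggerFactory.getLogger(MyService.class);

public void process() {
    logger.info("Processing started");
}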

To make use of this encoder, we need to add the following dependency into our pom.xml:

<dependency> 
    <groupId>net.logstash.logback</groupId> 
    <artifactId>logstash-logback-encoder</artifactId> 
    <version>4.11</version> 
</dependency>

Finally, let's make sure the app has permissions to access the logging directory:

sudo chmod a+rwx /var/lib/tomcat8/logback

3. Configure Logstash

Now, we need to configure Logstash to read data from log files created by our app and send it to ElasticSearch.

Here is our configuration file logback.conf:

input {
    file {
        path => "/var/lib/tomcat8/logback/*.log"
        codec => "json"
        type => "logback"
    }
}

output {
    if [type]=="logback" {
         elasticsearch {
             hosts => [ "localhost:9200" ]
             index => "logback-%{+YYYY.MM.dd}"
        }
    }
}

Note that:

  • The file input is used, as Logstash will this time read logs from log files
  • path is set to our logging directory, and all files with the .log extension will be processed
  • index is set to a new index, "logback-%{+YYYY.MM.dd}", instead of the default "logstash-%{+YYYY.MM.dd}"

To run Logstash with the new configuration, we'll use:

bin/logstash -f logback.conf

4. Visualize Logs using Kibana

We can now see our Logback data in the ‘logback-*‘ index.

We'll create a new saved search, 'Logback logs', to separate the Logback data, using the following query:

type:logback

Finally, we can create a simple visualization of our Logback data:

  • Navigate to ‘Visualize’ tab
  • Choose ‘Vertical Bar Chart’
  • Choose ‘From Saved Search’
  • Choose ‘Logback logs’ search we just created

For the Y-axis, make sure to choose Aggregation: Count

For the X-axis, choose:

  • Aggregation: Terms
  • Field: level

After running the visualization, you should see multiple bars representing the count of logs per level (DEBUG, INFO, ERROR, …)

5. Conclusion

In this article, we learned the basics of configuring Logback and Logstash to push our application's log data into Elasticsearch, and of visualizing that data with the help of Kibana.

CAS SSO With Spring Security


1. Overview

In this article, we’re going to look at integrating the Central Authentication Service (CAS) with Spring Security. CAS is a Single Sign-On (SSO) service.

Let’s say we have applications requiring user authentication. The most common method is to implement a security mechanism for each application. However, it’d be better to implement user authentication for all the apps in one place.

This is precisely what the CAS SSO system does. This article gives more details on the architecture. The protocol diagram can be found here.

2. Project Setup and Installation

There are at least two components involved in setting up a Central Authentication Service: a Spring-based server, called cas-server, and one or more clients.

A client can be any web application that’s using the server for authentication.

2.1. CAS Server Setup

The server uses the Maven (Gradle) War Overlay style to facilitate easy setup and deployment. There’s a quick start template that can be cloned and used.

Let’s clone it:

git clone https://github.com/apereo/cas-overlay-template.git cas-server

This command clones the cas-overlay-template into the cas-server directory on the local machine.

Next, let’s add additional dependencies to the root pom.xml. These dependencies enable service registration via a JSON configuration.

Also, they facilitate connections to the database:

<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-json-service-registry</artifactId>
    <version>${cas.version}</version>
</dependency>
<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-jdbc</artifactId>
    <version>${cas.version}</version>
</dependency>
<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-jdbc-drivers</artifactId>
    <version>${cas.version}</version>
</dependency>

The latest version of cas-server-support-json-service-registry, cas-server-support-jdbc and cas-server-support-jdbc-drivers dependencies can be found on Maven Central. Please note that the parent pom.xml automatically manages the artifact versions.

Next, let's create the folder cas-server/src/main/resources and copy the cas-server/etc folder into it. We're also going to change the port of the application, as well as the path of the SSL key store.

We configure these by editing the associated entries in cas-server/src/main/resources/application.properties:

server.port=6443
server.ssl.key-store=classpath:/etc/cas/thekeystore
cas.standalone.config=classpath:/etc/cas/config

The config folder path was also set to classpath:/etc/cas/config. It points to cas-server/src/main/resources/etc/cas/config.

The next step is to generate a local SSL key store. The key store is used for establishing HTTPS connections. This step is important and must not be skipped.

From the terminal, change directory to cas-server/src/main/resources/etc/cas. After that, run the following command:

keytool -genkey -keyalg RSA -alias thekeystore -keystore thekeystore 
-storepass changeit -validity 360 -keysize 2048

It's important to use localhost when prompted for a first and last name, organization name and even organization unit. Failure to do this may lead to an error during the SSL handshake. Other fields such as city, state and country can be set as appropriate.

The above command generates a key store with the name thekeystore and password changeit. It’s stored in the current directory.

Next, the generated key store needs to be exported to a .crt format for use by the client applications. So, still in the same directory, run the following command to export the generated thekeystore file to thekeystore.crt. The password remains unchanged:

keytool -export -alias thekeystore -file thekeystore.crt 
-keystore thekeystore

Now, let's import the exported thekeystore.crt into the Java cacerts key store. The terminal prompt should still be in the cas-server/src/main/resources/etc/cas directory.

From there, execute the command:

keytool -import -alias thekeystore -storepass changeit -file thekeystore.crt
 -keystore "C:\Program Files\Java\jdk1.8.0_152\jre\lib\security\cacerts"

To be doubly sure, we can also import the certificate into a JRE that is outside of the JDK installation:

keytool -import -alias thekeystore -storepass changeit -file thekeystore.crt 
-keystore "C:\Program Files\Java\jre1.8.0_152\lib\security\cacerts"

Note that the -keystore flag points to the location of the Java key store on the local machine. This location may be different depending on the Java installation at hand.

Moreover, ensure that the JRE that is referenced as the location of the key store is the same as the one that is used for the client application.

After successfully adding thekeystore.crt to the Java key store, we need to restart the system. Equivalently, we can kill every instance of the JVM running on the local machine.

Next, from the root project directory, cas-server, invoke the commands build package and build run from the terminal. Starting the server may take some time. When it’s ready, it prints READY in the console.

At this point, visiting https://localhost:6443/cas with a browser renders a login form. The default username is casuser and password is Mellon.

2.2. CAS Client Setup

Let’s use the Spring Initializr to generate the project with the following dependencies: Web, Security, Freemarker and optionally DevTools.

In addition to the dependencies generated by Spring Initializr, let’s add the dependency for the Spring Security CAS module:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-cas</artifactId>
</dependency>

The latest version of the dependency can be found on Maven Central. Let’s also configure the server’s port to listen on port 9000 by adding the following entry in application.properties:

server.port=9000

3. Registering Services/Clients with CAS Server

The server doesn’t allow just any client to access it for authentication. The clients/services must be registered in the CAS server services registry.

There’re a couple of ways of registering a service with the server. These include YAML, JSON, Mongo, LDAP, and others.

Depending on the method, there are dependencies to be included in the pom.xml file. In this article, we use the JSON Service Registry method. The dependency was already included in the pom.xml file in the previous section.

Let’s create a JSON file that contains the definition of the client application. Inside the cas-server/src/main/resources folder, let’s create yet another folder – services. It’s this services folder that contains the JSON files.

Next, we create a JSON file named casSecuredApp-19991.json in the cas-server/src/main/resources/services directory with the following content:

{
    "@class" : "org.apereo.cas.services.RegexRegisteredService",
    "serviceId" : "^http://localhost:9000/login/cas",
    "name" : "CAS Spring Secured App",
    "description": "This is a Spring App that usses the CAS Server for it's authentication",
    "id" : 19991,
    "evaluationOrder" : 1
}

The serviceId attribute defines a regex URL pattern for the client application that intends to use the server for authentication. In this case, the pattern matches an application running on localhost and listening on port 9000.

The id attribute should be unique to avoid conflicts and accidentally overriding configurations. The service configuration file name follows the convention serviceName-id.json. Other configurable attributes, such as theme, proxyPolicy, logo, and privacyUrl, can be found here.

For now, let's just add two more configuration items to turn the JSON Service Registry on. One informs the server of the directory where the service configuration files are located. The other enables initialization of the service registry from the JSON configuration files.

Both these configuration items are placed in another file, named cas.properties. We create this file in the cas-server/src/main/resources directory:

cas.serviceRegistry.initFromJson=true
cas.serviceRegistry.config.location=classpath:/services

Let’s execute the build run command again and take note of lines such as “Loaded [3] service(s) from [JsonServiceRegistryDao]” on the console.

4. Spring Security Configuration

4.1. Configuring Single Sign-On

Now that the Spring Boot application has been registered with the CAS server as a service, let's configure Spring Security to work in concert with the server for user authentication. The full sequence of interactions between Spring Security and the server can be found here.

Let’s first configure the beans that are related to the CAS module of Spring Security. This enables Spring Security to collaborate with the central authentication service.

To this extent, we need to add config beans to the CasSecuredAppApplication class – the entry point to the Spring Boot application:

@Bean
public ServiceProperties serviceProperties() {
    ServiceProperties serviceProperties = new ServiceProperties();
    serviceProperties.setService("http://localhost:9000/login/cas");
    serviceProperties.setSendRenew(false);
    return serviceProperties;
}

@Bean
@Primary
public AuthenticationEntryPoint authenticationEntryPoint(
  ServiceProperties sP) {
 
    CasAuthenticationEntryPoint entryPoint
      = new CasAuthenticationEntryPoint();
    entryPoint.setLoginUrl("https://localhost:6443/cas/login");
    entryPoint.setServiceProperties(sP);
    return entryPoint;
}

@Bean
public TicketValidator ticketValidator() {
    return new Cas30ServiceTicketValidator(
      "https://localhost:6443/cas");
}

@Bean
public CasAuthenticationProvider casAuthenticationProvider() {
 
    CasAuthenticationProvider provider = new CasAuthenticationProvider();
    provider.setServiceProperties(serviceProperties());
    provider.setTicketValidator(ticketValidator());
    provider.setUserDetailsService(
      s -> new User("casuser", "Mellon", true, true, true, true,
        AuthorityUtils.createAuthorityList("ROLE_ADMIN")));
    provider.setKey("CAS_PROVIDER_LOCALHOST_9000");
    return provider;
}

We configure the ServiceProperties bean with the default service login URL that the CasAuthenticationFilter will be internally mapped to. The sendRenew property of ServiceProperties is set to false. As a consequence, a user only needs to present login credentials to the server once.

Subsequent authentication will be done automatically, i.e., without asking the user for a username and password again. This means that a single login gives the user access to multiple services that use the same server for authentication.

As we’ll see later, if a user logs out from the server completely, his ticket is invalidated. As a consequence, the user is logged out of all applications connected to the server at the same time. This is called the Single Logout.

We configure the AuthenticationEntryPoint bean with the default login URL of the server. Note that this URL is different from the Service login URL. This server login URL is the location where the user will be redirected to for authentication.

The TicketValidator is the bean that the service app uses to validate a service ticket granted to a user upon successful authentication with the server.

The flow is:

  1. A user attempts to access a secured page
  2. The AuthenticationEntryPoint is triggered and takes the user to the server. The login address of the server has been specified in the AuthenticationEntryPoint
  3. On a successful authentication with the server, it redirects the request back to the service URL that has been specified, with the service ticket appended as a query parameter
  4. CasAuthenticationFilter is mapped to a URL that matches the pattern and, in turn, triggers the ticket validation internally
  5. If the ticket is valid, a user will be redirected to the originally requested URL

Now, we need to configure Spring Security to protect some routes and use the CasAuthenticationEntryPoint bean.

Let's create SecurityConfig.java that extends WebSecurityConfigurerAdapter and override the configure() method:

@Override
protected void configure(HttpSecurity http) throws Exception {
  http
    .authorizeRequests()
    .regexMatchers("/secured.*", "/login")
    .authenticated()
    .and()
    .authorizeRequests()
    .regexMatchers("/")
    .permitAll()
    .and()
    .httpBasic()
    .authenticationEntryPoint(authenticationEntryPoint);
}

Also, in the SecurityConfig class, we override the following methods and create the CasAuthenticationFilter bean at the same time:

@Override
protected void configure(AuthenticationManagerBuilder auth) 
  throws Exception {
    auth.authenticationProvider(authenticationProvider);
}

@Override
protected AuthenticationManager authenticationManager() throws Exception {
    return new ProviderManager(
      Arrays.asList(authenticationProvider));
}

@Bean
public CasAuthenticationFilter casAuthenticationFilter(ServiceProperties sP) 
  throws Exception {
    CasAuthenticationFilter filter = new CasAuthenticationFilter();
    filter.setServiceProperties(sP);
    filter.setAuthenticationManager(authenticationManager());
    return filter;
}

Let’s create controllers that handle requests directed to /secured, /login and the home page as well.

The homepage is mapped to an IndexController that has a method index(). This method merely returns the index view:

@GetMapping("/")
public String index() {
    return "index";
}

The /login path is mapped to the login() method from the AuthController class. It just redirects to the default login successful page.

Notice that while configuring the HttpSecurity above, we configured the /login path so that it requires authentication. This way, we redirect the user to the CAS server for authentication.

This mechanism is a bit different from the normal configuration where the /login path is not a protected route and returns a login form:

@GetMapping("/login")
public String login() {
    return "redirect:/secured";
}

The /secured path is mapped to the index() method from the SecuredPageController class. It gets the username of the authenticated user and displays it as part of the welcome message:

@GetMapping
public String index(ModelMap modelMap) {
  Authentication auth = SecurityContextHolder.getContext()
    .getAuthentication();
  if(auth != null 
    && auth.getPrincipal() != null
    && auth.getPrincipal() instanceof UserDetails) {
      modelMap.put("username", ((UserDetails) auth.getPrincipal()).getUsername());
  }
  return "secure/index";
}

Note that all the views are available in the resources folder of the cas-secured-app. At this point, the cas-secured-app should be able to use the server for authentication.

Finally, we execute build run from the terminal and start the Spring Boot app as well. Note that SSL is key in this whole process, so the key store generation step above should not be skipped!

4.2. Configuring Single Logout

Let’s proceed with the authentication process by logging out a user from the system. There are two places a user can be logged out from: the client app and the server.

Logging a user out of the client app/service is the first thing to do. This does not affect the authentication state of the user in other applications connected to the same server. Of course, logging a user out from the server also logs the user out from all other registered services/clients.

Let's start by defining some bean configurations in the CasSecuredAppApplication class:

@Bean
public SecurityContextLogoutHandler securityContextLogoutHandler() {
    return new SecurityContextLogoutHandler();
}

@Bean
public LogoutFilter logoutFilter() {
    LogoutFilter logoutFilter = new LogoutFilter(
      "https://localhost:6443/cas/logout", 
      securityContextLogoutHandler());
    logoutFilter.setFilterProcessesUrl("/logout/cas");
    return logoutFilter;
}

@Bean
public SingleSignOutFilter singleSignOutFilter() {
    SingleSignOutFilter singleSignOutFilter = new SingleSignOutFilter();
    singleSignOutFilter.setCasServerUrlPrefix("https://localhost:6443/cas");
    singleSignOutFilter.setIgnoreInitConfiguration(true);
    return singleSignOutFilter;
}

@EventListener
public SingleSignOutHttpSessionListener singleSignOutHttpSessionListener(
  HttpSessionEvent event) {
    return new SingleSignOutHttpSessionListener();
}

We configure the logoutFilter to intercept the URL pattern /logout/cas and to redirect the application to the server for a system-wide log-out. The server sends a single logout request to all services concerned. Such a request is handled by the SingleSignOutFilter, which invalidates the HTTP session.

Let’s modify the HttpSecurity configuration in the configure(HttpSecurity) method of the SecurityConfig class. The CasAuthenticationFilter and LogoutFilter that were configured earlier are now added to the chain as well:

http
  .authorizeRequests()
  .regexMatchers("/secured.*", "/login")
  .authenticated()
  .and()
  .authorizeRequests()
  .regexMatchers("/")
  .permitAll()
  .and()
  .httpBasic()
  .authenticationEntryPoint(authenticationEntryPoint)
  .and()
  .logout().logoutSuccessUrl("/logout")
  .and()
  .addFilterBefore(singleSignOutFilter, CasAuthenticationFilter.class)
  .addFilterBefore(logoutFilter, LogoutFilter.class);

For the logout to work correctly, we should implement a logout() method that first logs a user out of the system locally and shows a page with a link to optionally log the user out from all other services connected to the server.

The link is the same as the one set as the filter process URL of the LogoutFilter we configured above:

@GetMapping("/logout")
public String logout(
  HttpServletRequest request, 
  HttpServletResponse response, 
  SecurityContextLogoutHandler logoutHandler) {
    Authentication auth = SecurityContextHolder
      .getContext().getAuthentication();
    logoutHandler.logout(request, response, auth );
    new CookieClearingLogoutHandler(
      AbstractRememberMeServices.SPRING_SECURITY_REMEMBER_ME_COOKIE_KEY)
      .logout(request, response, auth);
    return "auth/logout";
}

The logout view:

<html>
<head>
    <title>Cas Secured App - Logout</title>
</head>
<body>
<h1>You have logged out of Cas Secured Spring Boot App Successfully</h1>
<br>
<a href="/logout/cas">Log out of all other Services</a>
</body>
</html>

5. Connecting the CAS Server to a Database

We’ve been using static user credentials for authentication. However, in production environments, user credentials are stored in a database most of the time. So, next, we show how to connect our server to a MySQL database (database name: test) running locally.

We do this by appending the following data to the application.properties file in the cas-server/src/main/resources directory:

cas.authn.accept.users=
cas.authn.accept.name=

cas.authn.jdbc.query[0].sql=SELECT * FROM users WHERE email = ?
cas.authn.jdbc.query[0].url=jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC
cas.authn.jdbc.query[0].dialect=org.hibernate.dialect.MySQLDialect
cas.authn.jdbc.query[0].user=root
cas.authn.jdbc.query[0].password=root
cas.authn.jdbc.query[0].ddlAuto=none
cas.authn.jdbc.query[0].driverClass=com.mysql.cj.jdbc.Driver
cas.authn.jdbc.query[0].fieldPassword=password
cas.authn.jdbc.query[0].passwordEncoder.type=NONE

Remember that the complete content of application.properties can be found in the source code. Leaving the value of cas.authn.accept.users blank deactivates the use of static user repositories by the server.

Furthermore, we define the SQL statement that gets the users from the database. The ability to configure the SQL itself makes the storage of users in the database very flexible.

According to the SQL above, a user’s record is stored in the users table. The email column is what represents the user’s principal (username). Further down the configuration, we set the name of the password field, cas.authn.jdbc.query[0].fieldPassword. We set it to the value password to increase the flexibility further.

Other attributes that we configured are the database user (root) and password (root), the dialect and the JDBC connection string. The list of supported databases, available drivers and dialects can be found here.

Another essential attribute is the encryption type used for storing the password. In this case, it is set to NONE.

However, the server supports more encryption mechanisms, such as Bcrypt. These encryption mechanisms can be found here, together with other configurable properties.

Running the server (build run) now enables the authentication of users with credentials that are present in the configured database. Note again that the principal in the database that the server uses must be the same as that of the client applications.

In this case, the Spring Boot app should have the same value (test@test.com) for the principal (username) as that of the database connected to the server.

Let’s then modify the UserDetails connected to the CasAuthenticationProvider bean configured in the CasSecuredAppApplication class of the Spring Boot application:

@Bean
public CasAuthenticationProvider casAuthenticationProvider() {
    CasAuthenticationProvider provider = new CasAuthenticationProvider();
    provider.setServiceProperties(serviceProperties());
    provider.setTicketValidator(ticketValidator());
    provider.setUserDetailsService((s) -> new User(
      "test@test.com", "testU",
      true, true, true, true,
    AuthorityUtils.createAuthorityList("ROLE_ADMIN")));
    provider.setKey("CAS_PROVIDER_LOCALHOST_9000");
    return provider;
}

Another thing to take note of is that although the UserDetails is given a password, it’s not used. However, if the username differs from that of the server, authentication will fail.

For the application to authenticate successfully with the credentials stored in the database, start a MySQL server running on 127.0.0.1 and port 3306 with username root and password root.

Then use the SQL file, cas-server/src/main/resources/create_test_db_and_users_tbl.sql, which is part of the source code, to create the table users in database test.

By default, it contains the email test@test.com and password Mellon. Remember, we can always modify the database connection settings in application.properties.

Start the CAS Server once again with build run, go to https://localhost:6443/cas and use those credentials for authentication. The same credentials will also work for the cas-secured Spring Boot App.

6. Conclusion

We’ve looked extensively at how to use CAS Server SSO with Spring Security and many of the configuration files involved.

There are many other aspects of a server that can be configured ranging from themes and protocol types to authentication policies. These can all be found here in the docs.

The source code for the server in this article and its configuration files can be found here, and that of the Spring Boot application can be found here.


Quick Guide on data.sql and schema.sql Files in Spring Boot


1. Overview

Spring Boot makes it really easy to manage our database changes. If we leave the default configuration, it’ll search for entities in our packages and create the respective tables automatically.

But sometimes we’ll need finer-grained control over the database alterations. That’s when we can use the data.sql and schema.sql files in Spring.

2. The data.sql File

Let’s also make the assumption here that we’re working with JPA – and define a simple Country entity in our project:

@Entity
public class Country {

    @Id
    @GeneratedValue(strategy = IDENTITY)
    private Integer id;
    
    @Column(nullable = false)
    private String name;

    //...
}

If we run our application, Spring Boot will create an empty table for us, but won’t populate it with anything.

An easy way to do this is to create a file named data.sql:

INSERT INTO country (name) VALUES ('India');
INSERT INTO country (name) VALUES ('Brazil');
INSERT INTO country (name) VALUES ('USA');
INSERT INTO country (name) VALUES ('Italy');

When we run the project with this file on the classpath, Spring will pick it up and use it for populating the database.
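
To quickly verify that the inserts were applied, we can write a small integration test. The following is a minimal sketch – the CountryRepository used here is a hypothetical Spring Data JPA repository for the Country entity above:

@RunWith(SpringRunner.class)
@SpringBootTest
public class CountryDataLoadingIntegrationTest {

    @Autowired
    private CountryRepository countryRepository;

    @Test
    public void whenApplicationStarts_thenDataSqlIsExecuted() {
        // the four INSERT statements from data.sql should have run
        assertEquals(4, countryRepository.count());
    }
}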

3. The schema.sql File

Sometimes, we don’t want to rely on the default schema creation mechanism. In such cases, we can create a custom schema.sql file:

CREATE TABLE country (
    id   INTEGER      NOT NULL AUTO_INCREMENT,
    name VARCHAR(128) NOT NULL,
    PRIMARY KEY (id)
);

Spring will pick this file up and use it for creating a schema.

It’s also important to remember to turn off automatic schema creation to avoid conflicts:

spring.jpa.hibernate.ddl-auto=none

4. Conclusion

In this quick article, we saw how we can leverage schema.sql and data.sql files for setting up an initial schema and populating it with data.

Keep in mind that this approach is more suited for basic and simple scenarios; any advanced database handling would require more advanced and refined tooling like Liquibase or Flyway.

Code snippets, as always, can be found over on GitHub.

Customizing the Order of Tests in JUnit 5


1. Overview

By default, JUnit runs tests using a deterministic, but unpredictable order (MethodSorters.DEFAULT).

In most cases, that behavior is perfectly fine and acceptable; but there are cases when we need to enforce a specific ordering.

2. Using MethodSorters.DEFAULT

This default strategy compares test methods using their hashcodes. In case of a hash collision, the lexicographical order is used:

@FixMethodOrder(MethodSorters.DEFAULT)
public class DefaultOrderOfExecutionTest {
    private static StringBuilder output = new StringBuilder("");

    @Test
    public void secondTest() {
        output.append("b");
    }

    @Test
    public void thirdTest() {
        output.append("c");
    }

    @Test
    public void firstTest() {
        output.append("a");
    }

    @AfterClass
    public static void assertOutput() {
        assertEquals("cab", output.toString());
    }
}

When we execute the tests in the class above, we will see that they all pass, including assertOutput().
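
Conceptually, the comparison behaves like the following sketch. Note that this is a simplification of the behavior described above, not JUnit’s actual source:

Comparator<Method> defaultOrder = (m1, m2) -> {
    int h1 = m1.getName().hashCode();
    int h2 = m2.getName().hashCode();
    if (h1 != h2) {
        return h1 < h2 ? -1 : 1;
    }
    // hash collision: fall back to lexicographical order
    return m1.getName().compareTo(m2.getName());
};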

3. Using MethodSorters.JVM

Another ordering strategy is MethodSorters.JVM – this strategy utilizes the natural JVM ordering, which can be different for each run:

@FixMethodOrder(MethodSorters.JVM)
public class JVMOrderOfExecutionTest {
    // same as above
}

Each time we execute the tests in this class, we get a different result.

4. Using MethodSorters.NAME_ASCENDING

Finally, this strategy can be used for running tests in their lexicographic order:

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class NameAscendingOrderOfExecutionTest {
    // same as above
    
    @AfterClass
    public static void assertOutput() {
        assertEquals("abc", output.toString());
    }
}

Similarly, when we execute the tests in this class, we see that they all pass, including assertOutput(), which confirms the execution order that we set with the annotation.

5. Conclusion

In this quick tutorial, we went through the ways of setting the execution order available in JUnit.

And, as always, the examples used in this article can be found over on GitHub.

Mock Final Classes and Methods with Mockito


1. Overview

In this short article, we’ll focus on how to mock final classes and methods – using Mockito.

As with other articles focused on the Mockito framework (like Mockito Verify, Mockito When/Then, and Mockito’s Mock Methods), we’ll use the MyList class shown below as the collaborator in test cases.

We’ll add a new method for this tutorial:

public class MyList extends AbstractList {
    public final int finalMethod() {
        return 0;
    }
}

And we’ll also extend it with a final subclass:

public final class FinalList extends MyList {

    @Override
    public int size() {
        return 1;
    }
}

2. Configure Mockito for Final Methods and Classes

Before Mockito can be used for mocking final classes and methods, it needs to be configured.

We need to add a text file named org.mockito.plugins.MockMaker to the project’s src/test/resources/mockito-extensions directory, containing a single line of text:

mock-maker-inline

Mockito checks the extensions directory for configuration files when it is loaded. This file enables the mocking of final methods and classes.
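
Alternatively, more recent Mockito versions ship a pre-packaged artifact, mockito-inline, that comes with this MockMaker already configured, so no extra file is needed. The version below is just an example – check Maven Central for the latest one:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-inline</artifactId>
    <version>2.21.0</version>
    <scope>test</scope>
</dependency>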

3. Mock a Final Method

Once Mockito is properly configured, a final method can be mocked like any other:

@Test
public void whenMockFinalMethodMockWorks() {

    MyList myList = new MyList();

    MyList mock = mock(MyList.class);
    when(mock.finalMethod()).thenReturn(1);

    assertNotEquals(mock.finalMethod(), myList.finalMethod());
}

By creating a concrete instance and a mock instance of MyList, we can compare the values returned by both versions of finalMethod() and verify that the stubbed value is returned by the mock.

4. Mock a Final Class

Mocking a final class is just as easy as mocking any other class:

@Test
public void whenMockFinalClassMockWorks() {

    FinalList finalList = new FinalList();

    FinalList mock = mock(FinalList.class);
    when(mock.size()).thenReturn(2);

    assertNotEquals(mock.size(), finalList.size());
}

Similar to the test above, we create a concrete instance and a mock instance of our final class, mock a method and verify that the mocked instance behaves differently.

5. Conclusion

In this quick tutorial, we covered how to mock final classes and methods with Mockito by using a Mockito extension.

The full examples, as always, can be found over on GitHub.

Hibernate Inheritance Mapping


1. Overview

Relational databases don’t have a straightforward way to map class hierarchies onto database tables.

To address this, the JPA specification provides several strategies:

  • MappedSuperclass – the parent classes can’t be entities
  • Single Table – the entities from different classes with a common ancestor are placed in a single table
  • Joined Table – each class has its own table, and querying a subclass entity requires joining the tables
  • Table-Per-Class – all the properties of a class are in its own table, so no join is required

Each strategy results in a different database structure.

Entity inheritance means that we can use polymorphic queries for retrieving all the sub-class entities when querying for a super-class.

Since Hibernate is a JPA implementation, it contains all of the above as well as a few Hibernate-specific features related to inheritance.

In the next sections, we’ll go over available strategies in more detail.

2. MappedSuperclass

Using the MappedSuperclass strategy, inheritance is only evident in the class, but not the entity model.

Let’s start by creating a Person class which will represent a parent class:

@MappedSuperclass
public class Person {

    @Id
    private long personId;
    private String name;

    // constructor, getters, setters
}

Notice that this class no longer has an @Entity annotation, as it won’t be persisted in the database by itself.

Next, let’s add a MyEmployee sub-class:

@Entity
public class MyEmployee extends Person {
    private String company;
    // constructor, getters, setters 
}

In the database, this will correspond to one “MyEmployee” table with three columns for the declared and inherited fields of the sub-class.

If we’re using this strategy, ancestors cannot contain associations with other entities.

3. Single Table

The Single Table strategy creates one table for each class hierarchy. This is also the default strategy chosen by JPA if we don’t specify one explicitly.

We can define the strategy we want to use by adding the @Inheritance annotation to the super-class:

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public class MyProduct {
    @Id
    private long productId;
    private String name;

    // constructor, getters, setters
}

The identifier of the entities is also defined in the super-class.

Then, we can add the sub-class entities:

@Entity
public class Book extends MyProduct {
    private String author;
}
@Entity
public class Pen extends MyProduct {
    private String color;
}

3.1. Discriminator Values

Since the records for all entities will be in the same table, Hibernate needs a way to differentiate between them.

By default, this is done through a discriminator column called DTYPE which has the name of the entity as a value.

To customize the discriminator column, we can use the @DiscriminatorColumn annotation:

@Entity(name="products")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name="product_type", 
  discriminatorType = DiscriminatorType.INTEGER)
public class MyProduct {
    // ...
}

Here we’ve chosen to differentiate MyProduct sub-class entities by an integer column called product_type.

Next, we need to tell Hibernate what value each sub-class record will have for the product_type column:

@Entity
@DiscriminatorValue("1")
public class Book extends MyProduct {
    // ...
}
@Entity
@DiscriminatorValue("2")
public class Pen extends MyProduct {
    // ...
}

Hibernate adds two other pre-defined values that the annotation can take: “null” and “not null“:

  • @DiscriminatorValue(“null”) – means that any row without a discriminator value will be mapped to the entity class with this annotation; this can be applied to the root class of the hierarchy
  • @DiscriminatorValue(“not null”) – any row with a discriminator value not matching any of the ones associated with entity definitions will be mapped to the class with this annotation

Instead of a column, we can also use the Hibernate-specific @DiscriminatorFormula annotation to determine the differentiating values:

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorFormula("case when author is not null then 1 else 2 end")
public class MyProduct { ... }

This strategy has the advantage of polymorphic query performance since only one table needs to be accessed when querying parent entities. On the other hand, this also means that we can no longer use NOT NULL constraints on sub-class entity properties.

4. Joined Table

Using this strategy, each class in the hierarchy is mapped to its own table. The only column which repeatedly appears in all the tables is the identifier, which will be used for joining them when needed.

Let’s create a super-class that uses this strategy:

@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Animal {
    @Id
    private long animalId;
    private String species;

    // constructor, getters, setters 
}

Then, we can simply define a sub-class:

@Entity
public class Pet extends Animal {
    private String name;

    // constructor, getters, setters
}

Both tables will have an animalId identifier column. The primary key of the Pet entity also has a foreign key constraint to the primary key of its parent entity. To customize this column, we can add the @PrimaryKeyJoinColumn annotation:

@Entity
@PrimaryKeyJoinColumn(name = "petId")
public class Pet extends Animal {
    // ...
}

The disadvantage of this inheritance mapping method is that retrieving entities requires joins between tables, which can result in lower performance for large numbers of records.

The number of joins is higher when querying the parent class as it will join with every single related child – so performance is more likely to be affected the higher up the hierarchy we want to retrieve records.

5. Table Per Class

The Table Per Class strategy maps each entity to its own table, which contains all the properties of the entity, including the ones inherited.

The resulting schema is similar to the one using @MappedSuperclass, but unlike it, table per class will indeed define entities for parent classes, allowing associations and polymorphic queries as a result.

To use this strategy, we only need to add the @Inheritance annotation to the base class:

@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public class Vehicle {
    @Id
    private long vehicleId;

    private String manufacturer;

    // standard constructor, getters, setters
}

Then, we can create the sub-classes in the standard way.
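
For illustration, a hypothetical Car sub-class needs nothing more than the @Entity annotation; it gets its own table containing the inherited vehicleId and manufacturer properties plus its own fields:

@Entity
public class Car extends Vehicle {
    private String engineType;

    // constructor, getters, setters
}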

This is not very different from merely mapping each entity without inheritance. The distinction is apparent when querying the base class, which will return all the sub-class records as well by using a UNION statement in the background.

The use of UNION can also lead to inferior performance when choosing this strategy. Another issue is that we can no longer use identity key generation.

6. Polymorphic Queries

As mentioned, querying a base class will retrieve all the sub-class entities as well.

Let’s see this behavior in action with a JUnit test:

@Test
public void givenSubclasses_whenQuerySuperclass_thenOk() {
    Book book = new Book(1, "1984", "George Orwell");
    session.save(book);
    Pen pen = new Pen(2, "my pen", "blue");
    session.save(pen);

    assertThat(session.createQuery("from MyProduct")
      .getResultList()).hasSize(2);
}

In this example, we’ve created a Book and a Pen object, then queried their super-class MyProduct to verify that we retrieve two objects.

Hibernate can also query interfaces or base classes which are not entities but are extended or implemented by entity classes. Let’s see a JUnit test using our @MappedSuperclass example:

@Test
public void givenSubclasses_whenQueryMappedSuperclass_thenOk() {
    MyEmployee emp = new MyEmployee(1, "john", "baeldung");
    session.save(emp);

    assertThat(session.createQuery(
      "from com.baeldung.hibernate.pojo.inheritance.Person")
      .getResultList())
      .hasSize(1);
}

Note that this also works for any super-class or interface, whether it’s a @MappedSuperclass or not. The difference from a usual HQL query is that we have to use the fully qualified name since they are not Hibernate-managed entities.

If we don’t want a sub-class to be returned by this type of query, then we only need to add the Hibernate @Polymorphism annotation to its definition, with type EXPLICIT:

@Entity
@Polymorphism(type = PolymorphismType.EXPLICIT)
public class Bag implements Item { ...}

In this case, when querying for Items, the Bag records won’t be returned.

7. Conclusion

In this article, we’ve shown the different strategies for mapping inheritance in Hibernate.

The full source code of the examples can be found over on GitHub.

Java – Append Data to a File


1. Introduction

In this quick tutorial, we’ll see how we use Java to append data to the content of a file – in a few simple ways.

Let’s start with how we can do this using core Java’s FileWriter.

2. Using FileWriter

Here’s a simple test – reading an existing file, appending some text, and then making sure that got appended correctly:

@Test
public void whenAppendToFileUsingFileWriter_thenCorrect()
  throws IOException {
 
    FileWriter fw = new FileWriter(fileName, true);
    BufferedWriter bw = new BufferedWriter(fw);
    bw.write("Spain");
    bw.newLine();
    bw.close();
    
    assertThat(getStringFromInputStream(
      new FileInputStream(fileName)))
      .isEqualTo("UK\r\n" + "US\r\n" + "Germany\r\n" + "Spain\r\n");
}

Note that FileWriter’s constructor accepts a boolean marking if we want to append data to an existing file.

If we set it to false, then the existing content will be replaced.
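
For comparison, here’s a minimal sketch of the overwrite behavior – with false (or the single-argument constructor), the previous content of the file is replaced:

FileWriter fw = new FileWriter(fileName, false);
BufferedWriter bw = new BufferedWriter(fw);
bw.write("Spain");
bw.newLine();
bw.close();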

3. Using FileOutputStream

Next – let’s see how we can do the same operation – using FileOutputStream:

@Test
public void whenAppendToFileUsingFileOutputStream_thenCorrect()
 throws Exception {
 
    FileOutputStream fos = new FileOutputStream(fileName, true);
    fos.write("Spain\r\n".getBytes());
    fos.close();
    
    assertThat(StreamUtils.getStringFromInputStream(
      new FileInputStream(fileName)))
      .isEqualTo("UK\r\n" + "US\r\n" + "Germany\r\n" + "Spain\r\n");
}

Similarly, the FileOutputStream constructor accepts a boolean that should be set to true to mark that we want to append data to an existing file.

4. Using java.nio.file

Next – we can also append content to files using functionality in java.nio.file – which was introduced in JDK 7:

@Test
public void whenAppendToFileUsingFiles_thenCorrect() 
 throws IOException {
 
    String contentToAppend = "Spain\r\n";
    Files.write(
      Paths.get(fileName), 
      contentToAppend.getBytes(), 
      StandardOpenOption.APPEND);
    
    assertThat(StreamUtils.getStringFromInputStream(
      new FileInputStream(fileName)))
      .isEqualTo("UK\r\n" + "US\r\n" + "Germany\r\n" + "Spain\r\n");
}

5. Using Guava

To start using Guava, we need to add its dependency to our pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>23.0</version>
</dependency>

Now, let’s see how we can start using Guava to append content to an existing file:

@Test
public void whenAppendToFileUsingGuava_thenCorrect()
 throws IOException {
 
    File file = new File(fileName);
    CharSink chs = Files.asCharSink(
      file, Charsets.UTF_8, FileWriteMode.APPEND);
    chs.write("Spain\r\n");
	
    assertThat(StreamUtils.getStringFromInputStream(
      new FileInputStream(fileName)))
      .isEqualTo("UK\r\n" + "US\r\n" + "Germany\r\n" + "Spain\r\n");
}

6. Using Apache Commons IO FileUtils

Finally – let’s see how we can append content to an existing file using Apache Commons IO FileUtils.

First, let’s add the Apache Commons IO dependency to our pom.xml:

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
</dependency>

Now, let’s see a quick example that demonstrates appending content to an existing file using FileUtils:

@Test
public void whenAppendToFileUsingFileUtils_thenCorrect()
 throws IOException {
    File file = new File(fileName);
    FileUtils.writeStringToFile(
      file, "Spain\r\n", StandardCharsets.UTF_8, true);
    
    assertThat(StreamUtils.getStringFromInputStream(
      new FileInputStream(fileName)))
      .isEqualTo("UK\r\n" + "US\r\n" + "Germany\r\n" + "Spain\r\n");
}

7. Conclusion

In this article, we’ve seen how we can append content to an existing file in multiple ways.

The full implementation of this tutorial can be found over on GitHub.

Introduction to OSGi


1. Introduction

Several Java mission-critical and middleware applications have some hard technological requirements.

Some have to support hot deploy, so as not to disrupt the running services – and others have to be able to work with different versions of the same package for the sake of supporting external legacy systems.

OSGi platforms represent a viable solution to support these kinds of requirements.

The Open Service Gateway Initiative is a specification defining a Java-based component system. It’s currently managed by the OSGi Alliance, and its first version dates back to 1999.

Since then, it has proved to be a great standard for component systems, and it’s widely used nowadays. The Eclipse IDE, for instance, is an OSGi-based application.

In this article, we’ll explore some basic features of OSGi leveraging the implementation provided by Apache.

2. OSGi Basics

In OSGi, a single component is called a bundle.

Logically, a bundle is a piece of functionality that has an independent lifecycle – which means it can be started, stopped and removed independently.

Technically, a bundle is just a jar file with a MANIFEST.MF file containing some OSGi-specific headers.

The OSGi platform provides a way to receive notifications about bundles becoming available or being removed from the platform. This allows a properly designed client to keep working, perhaps with degraded functionality, even when a service it depends on is momentarily unavailable.

Because of that, a bundle has to explicitly declare what packages it needs to have access to and the OSGi platform will start it only if the dependencies are available in the bundle itself or in other bundles already installed in the platform.
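
For illustration, the MANIFEST.MF of a simple bundle might contain headers along the following lines; the exact values depend on the build configuration, so treat this as a sketch:

Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.baeldung.osgi.sample.activator
Bundle-Name: osgi-intro-sample-activator
Bundle-Version: 1.0.0
Bundle-Activator: com.baeldung.osgi.sample.activator.HelloWorld
Import-Package: org.osgi.framework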

3. Getting the Tools

We’ll start our journey in OSGi by downloading the latest version of Apache Karaf from this link. Apache Karaf is a platform that runs OSGi-based applications; it’s based on Apache’s implementation of the OSGi specification, called Apache Felix.

Karaf offers some handy features on top of Felix that will help us in getting acquainted with OSGi, for example, a command line interface that will allow us to interact with the platform.

To install Karaf, you can follow the installation instruction from the official documentation.

4. Bundle Entry Point

To execute an application in an OSGi environment, we have to pack it as an OSGi bundle and define the application entry point – which is not the usual public static void main(String[] args) method.

So, let’s start by building an OSGi-based “Hello World” application.

We start by setting up a simple dependency on the core OSGi API:

<dependency>
    <groupId>org.osgi</groupId> 
    <artifactId>org.osgi.core</artifactId>
    <version>6.0.0</version>
    <scope>provided</scope>
</dependency>

The dependency is declared as provided because it will be available in the OSGi runtime, and the bundle doesn’t need to embed it.

Let’s now write the simple HelloWorld class:

public class HelloWorld implements BundleActivator {
    public void start(BundleContext ctx) {
        System.out.println("Hello world.");
    }
    public void stop(BundleContext bundleContext) {
        System.out.println("Goodbye world.");
    }
}

BundleActivator is an interface provided by OSGi that has to be implemented by classes that are entry points for a bundle.

The start() method is invoked by the OSGi platform when the bundle containing this class is started. On the other hand, stop() is invoked just before the bundle is stopped.

Let’s keep in mind that each bundle can contain at most one BundleActivator. The BundleContext object provided to both methods allows interacting with the OSGi runtime. We’ll get back to it soon.

5. Building a Bundle

Let’s modify the pom.xml and make it an actual OSGi bundle.

First of all, we have to explicitly state that we’re going to build a bundle, not a jar:

<packaging>bundle</packaging>

Then we leverage the maven-bundle-plugin, courtesy of the Apache Felix community, to package the HelloWorld class as an OSGi bundle:

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>3.3.0</version>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Bundle-SymbolicName>
                ${pom.groupId}.${pom.artifactId}
            </Bundle-SymbolicName>
            <Bundle-Name>${pom.name}</Bundle-Name>
            <Bundle-Version>${pom.version}</Bundle-Version>
            <Bundle-Activator>
                com.baeldung.osgi.sample.activator.HelloWorld
            </Bundle-Activator>
            <Private-Package>
                com.baeldung.osgi.sample.activator
            </Private-Package>            
        </instructions>
    </configuration>
</plugin>

In the instructions section, we specify the values of the OSGi headers we want to include in the bundle’s MANIFEST file.

Bundle-Activator is the fully qualified name of the BundleActivator implementation that will be used to start and stop the bundle, and it refers to the class we’ve just written.

Private-Package is not an OSGi header, but it’s used to tell the plugin to include the package in the bundle but not make it available to other ones. We can now build the bundle with the usual command mvn clean install.

6. Installing and Running the Bundle

Let’s start Karaf by executing the command:

<KARAF_HOME>/bin/karaf start

where <KARAF_HOME> is the folder where Karaf is installed. When the prompt of the Karaf console appears we can execute the following command to install the bundle:

> bundle:install mvn:com.baeldung/osgi-intro-sample-activator/1.0-SNAPSHOT
Bundle ID: 63

This instructs Karaf to load the bundle from the local Maven repository.

In return, Karaf prints out the numeric ID assigned to the bundle; this ID depends on the number of bundles already installed and may vary. Now that the bundle is installed, we can start it with the following command:

> bundle:start 63
Hello World

“Hello World” immediately appears as soon as the bundle is started. We can now stop and uninstall the bundle with:

> bundle:stop 63
> bundle:uninstall 63

“Goodbye World” appears on the console, according to the code in the stop() method.

7. An OSGi Service

Let’s go on writing a simple OSGi service, an interface that exposes a method for greeting people:

package com.baeldung.osgi.sample.service.definition;
public interface Greeter {
    public String sayHiTo(String name);
}

Let’s write an implementation of it that is a BundleActivator too, so we’ll be able to instantiate the service and register it on the platform when the bundle is started:

package com.baeldung.osgi.sample.service.implementation;
public class GreeterImpl implements Greeter, BundleActivator {

    private ServiceReference<Greeter> reference;
    private ServiceRegistration<Greeter> registration;

    @Override
    public String sayHiTo(String name) {
        return "Hello " + name;
    }

    @Override 
    public void start(BundleContext context) throws Exception {
        System.out.println("Registering service.");
        registration = context.registerService(
          Greeter.class, 
          new GreeterImpl(), 
          new Hashtable<String, String>());
        reference = registration
          .getReference();
    }

    @Override 
    public void stop(BundleContext context) throws Exception {
        System.out.println("Unregistering service.");
        registration.unregister();
    }
}

We use the BundleContext as a means of requesting the OSGi platform to register a new instance of the service.

We should also provide the type of the service and a map of the possible configuration parameters, which aren’t needed in our simple scenario. Let’s now proceed with the configuration of the maven-bundle-plugin:

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Bundle-SymbolicName>
                ${project.groupId}.${project.artifactId}
            </Bundle-SymbolicName>
            <Bundle-Name>
                ${project.artifactId}
            </Bundle-Name>
            <Bundle-Version>
                ${project.version}
            </Bundle-Version>
            <Bundle-Activator>
                com.baeldung.osgi.sample.service.implementation.GreeterImpl
            </Bundle-Activator>
            <Private-Package>
                com.baeldung.osgi.sample.service.implementation
            </Private-Package>
            <Export-Package>
                com.baeldung.osgi.sample.service.definition
            </Export-Package>
        </instructions>
    </configuration>
</plugin>

It’s worth noting that only the com.baeldung.osgi.sample.service.definition package has been exported this time, through the Export-Package header.

Thanks to this, OSGi will allow other bundles to invoke only the methods specified in the service interface. Package com.baeldung.osgi.sample.service.implementation is marked as private, so no other bundle will be able to access the members of the implementation directly.

8. An OSGi Client

Let’s now write the client. It simply looks up the service at startup and invokes it:

public class Client implements BundleActivator, ServiceListener {
}

Let’s implement the BundleActivator start() method:

private BundleContext ctx;
private ServiceReference serviceReference;

public void start(BundleContext ctx) {
    this.ctx = ctx;
    try {
        ctx.addServiceListener(
          this, "(objectclass=" + Greeter.class.getName() + ")");
    } catch (InvalidSyntaxException ise) {
        ise.printStackTrace();
    }
}

The addServiceListener() method allows the client to ask the platform to send notifications about services that comply with the provided expression.

The expression uses a syntax similar to the LDAP’s one, and in our case, we’re requesting notifications about a Greeter service.
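
A couple of illustrative filter strings – the vendor property in the second one is purely hypothetical:

String byType = "(objectclass=" + Greeter.class.getName() + ")";
String byTypeAndVendor =
  "(&(objectclass=" + Greeter.class.getName() + ")(vendor=baeldung))";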

Let’s go on to the callback method:

public void serviceChanged(ServiceEvent serviceEvent) {
    int type = serviceEvent.getType();
    switch (type){
        case(ServiceEvent.REGISTERED):
            System.out.println("Notification of service registered.");
            serviceReference = serviceEvent
              .getServiceReference();
            Greeter service = (Greeter)(ctx.getService(serviceReference));
            System.out.println( service.sayHiTo("John") );
            break;
        case(ServiceEvent.UNREGISTERING):
            System.out.println("Notification of service unregistered.");
            ctx.ungetService(serviceEvent.getServiceReference());
            break;
        default:
            break;
    }
}

When some modification involving the Greeter service happens, the method is notified.

When the service is registered to the platform, we get a reference to it, we store it locally, and we then use it to acquire the service object and invoke it.

When the service is later unregistered, we use the previously stored reference to unget it, meaning that we tell the platform we’re not going to use it anymore.

We now just need to write the stop() method:

public void stop(BundleContext bundleContext) {
    if(serviceReference != null) {
        ctx.ungetService(serviceReference);
    }
}

Here again, we unget the service to cover the case in which the client is stopped before the service is stopped. Let’s give a final look at the dependencies in the pom.xml:

<dependency>
    <groupId>com.baeldung</groupId>
    <artifactId>osgi-intro-sample-service</artifactId>
    <version>1.0-SNAPSHOT</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.osgi</groupId>
    <artifactId>org.osgi.core</artifactId>
    <version>6.0.0</version>
</dependency>

9. Client and Service

Let’s now install the client and service bundles in Karaf by doing:

> install mvn:com.baeldung/osgi-intro-sample-service/1.0-SNAPSHOT
Bundle ID: 64
> install mvn:com.baeldung/osgi-intro-sample-client/1.0-SNAPSHOT
Bundle ID: 65

Always keep in mind that the identifier numbers assigned to each bundle may vary.

Let’s now start the client bundle:

> start 65

At first, nothing happens because the client is active and waiting for the service, which we can start with:

> start 64
Registering service.
Service registered.
Hello John

What happens is that as soon as the service’s BundleActivator starts, the service is registered to the platform. That, in turn, notifies the client that the service it was waiting for is available.

The client then gets a reference to the service and uses it to invoke the implementation delivered through the service bundle.

10. Conclusion

In this article, we explored the essential features of OSGi with a straightforward example that is enough to understand the potential of OSGi.

In conclusion, whenever we have to guarantee that an application can be updated without disrupting its service, OSGi can be a viable solution.

The code for this post can be found over on GitHub.

Introduction to the Java ArrayDeque


1. Overview

In this tutorial, we’ll show how to use Java’s ArrayDeque class – which is an implementation of the Deque interface.

An ArrayDeque (also known as an “Array Double Ended Queue”, pronounced “ArrayDeck”) is a special kind of growable array that allows us to add or remove elements from both sides.

An ArrayDeque implementation can be used as a Stack (Last-In-First-Out) or a Queue (First-In-First-Out).

2. The API at a Glance

For each operation, we basically have two options.

The first group consists of methods that throw an exception if the operation fails. The other group returns a status or a value:

Operation           | Method        | Method throwing Exception
Insertion from Head | offerFirst(e) | addFirst(e)
Removal from Head   | pollFirst()   | removeFirst()
Retrieval from Head | peekFirst()   | getFirst()
Insertion from Tail | offerLast(e)  | addLast(e)
Removal from Tail   | pollLast()    | removeLast()
Retrieval from Tail | peekLast()    | getLast()
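
As a quick illustration of the difference between the two groups:

Deque<String> deque = new ArrayDeque<>();

// the status-returning variant signals failure through its return value
String head = deque.pollFirst();   // returns null on an empty deque

// the exception-throwing variant fails fast
deque.removeFirst();               // throws NoSuchElementException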

3. Using Methods

Let’s look at a few simple examples of how we can make use of ArrayDeque.

3.1. Using ArrayDeque as a Stack

We’ll start with an example of how we can treat the class as a Stack – and push an element:

@Test
public void whenPush_addsAtFirst() {
    Deque<String> stack = new ArrayDeque<>();
    stack.push("first");
    stack.push("second");
 
    assertEquals("second", stack.getFirst());
}

Let’s also see how we can pop an element from the ArrayDeque – when used as a Stack:

@Test
public void whenPop_removesLast() {
    Deque<String> stack = new ArrayDeque<>();
    stack.push("first");
    stack.push("second");
 
    assertEquals("second", stack.pop());
}

The pop method throws NoSuchElementException when a stack is empty.

3.2. Using ArrayDeque as a Queue

Let’s now start with a simple example showing how we can offer an element in an ArrayDeque – when used as a simple Queue:

@Test
public void whenOffer_addsAtLast() {
    Deque<String> queue = new ArrayDeque<>();
    queue.offer("first");
    queue.offer("second");
 
    assertEquals("second", queue.getLast());
}

And let’s see how we can poll an element from an ArrayDeque, also when used as a Queue:

@Test
public void whenPoll_removesFirst() {
    Deque<String> queue = new ArrayDeque<>();
    queue.offer("first");
    queue.offer("second");
 
    assertEquals("first", queue.poll());
}

The poll method returns a null value if a queue is empty.

4. How ArrayDeque Is Implemented

Under the hood, the ArrayDeque is backed by an array which doubles its size when it gets filled.

Initially, the array has a size of 16. It’s implemented as a double-ended queue that maintains two pointers, namely a head and a tail.

Let’s see this logic in action – at a high level.

4.1. ArrayDeque as a Stack

As can be seen, when a user adds an element using the push method, it moves the head pointer by one.

When we pop an element, it sets the element at the head position to null so that the element can be garbage collected, and then moves the head pointer back by one.

4.2. ArrayDeque as a Queue

When we add an element using the offer method, it moves the tail pointer by one.

When a user polls an element, it sets the element at the head position to null so that the element can be garbage collected, and then moves the head pointer by one.
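
At a high level, the pointer movement described above boils down to simple wrap-around index arithmetic. The following is a simplified sketch, not the actual JDK code:

int capacity = 16;                  // initial array size, always a power of two
int head = 0;
int tail = 0;

// push (stack): the head moves one slot backwards, wrapping around
head = (head - 1) & (capacity - 1);

// offer (queue): the tail moves one slot forwards, wrapping around
tail = (tail + 1) & (capacity - 1);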

4.3. Notes on ArrayDeque

Finally, a few more notes worth understanding and remembering about this particular implementation:

  • It’s not thread-safe
  • Null elements are not accepted
  • Works significantly faster than the synchronized Stack
  • Is a faster queue than LinkedList due to the better locality of reference
  • Most operations have amortized constant time complexity
  • An Iterator returned by an ArrayDeque is fail-fast
  • ArrayDeque automatically doubles the size of the array when the head and tail pointers meet while adding an element

5. Conclusion

In this short article, we illustrated the usage of methods in ArrayDeque.

The implementation of all these examples can be found in the GitHub project; this is a Maven-based project, so it should be easy to import and run as is.


Implementing the Template Method Pattern in Java


1. Overview

In this quick tutorial, we’ll see how to leverage the template method pattern – one of the most popular GoF patterns.

It makes it easier to implement complex algorithms by encapsulating logic in a single method.

2. Implementation

To demonstrate how the template method pattern works, let’s create a simple example which represents building a computer station.

Given the pattern’s definition, the algorithm’s structure will be defined in a base class that defines the template buildComputer() method:

public abstract class ComputerBuilder {
    
    // ...
    
    public final Computer buildComputer() {
        addMotherboard();
        setupMotherboard();
        addProcessor();
        return new Computer(computerParts);
    }
   
    public abstract void addMotherboard();
    public abstract void setupMotherboard();
    public abstract void addProcessor();
    
    // ...
}

The ComputerBuilder class is responsible for outlining the steps required to build a computer by declaring methods for adding and setting up different components, such as a motherboard and a processor.

Here, the buildComputer() method is the template method, which defines the steps of the algorithm for assembling the computer parts and returns fully-initialized Computer instances.

Notice that it’s declared as final to prevent it from being overridden.

3. In Action

With the base class already set, let’s try to use it by creating two subclasses. One which builds a “standard” computer, and the other that builds a “high-end” computer:

public class StandardComputerBuilder extends ComputerBuilder {

    @Override
    public void addMotherboard() {
        computerParts.put("Motherboard", "Standard Motherboard");
    }
    
    @Override
    public void setupMotherboard() {
        motherboardSetupStatus.add(
          "Screwing the standard motherboard to the case.");
        motherboardSetupStatus.add(
          "Plugging in the power supply connectors.");
        motherboardSetupStatus.forEach(
          step -> System.out.println(step));
    }
    
    @Override
    public void addProcessor() {
        computerParts.put("Processor", "Standard Processor");
    }
}

And here’s the HighEndComputerBuilder variant:

public class HighEndComputerBuilder extends ComputerBuilder {

    @Override
    public void addMotherboard() {
        computerParts.put("Motherboard", "High-end Motherboard");
    }
    
    @Override
    public void setupMotherboard() {
        motherboardSetupStatus.add(
          "Screwing the high-end motherboard to the case.");
        motherboardSetupStatus.add(
          "Plugging in the power supply connectors.");
        motherboardSetupStatus.forEach(
          step -> System.out.println(step));
    }
    
    @Override
    public void addProcessor() {
         computerParts.put("Processor", "High-end Processor");
    }
}

As we can see, we didn’t need to worry about the whole assembly process, but only about providing implementations for the separate methods.

Now, let’s see it in action:

new StandardComputerBuilder()
  .buildComputer()
  .getComputerParts()
  .forEach((k, v) -> System.out.println("Part : " + k + " Value : " + v));

new HighEndComputerBuilder()
  .buildComputer()
  .getComputerParts()
  .forEach((k, v) -> System.out.println("Part : " + k + " Value : " + v));

4. Template Methods in Java Core Libraries

This pattern is widely used in the Java core libraries, for example by java.util.AbstractList, or java.util.AbstractSet.

For instance, AbstractList provides a skeletal implementation of the List interface.

An example of a template method can be the addAll() method, although it’s not explicitly defined as final:

public boolean addAll(int index, Collection<? extends E> c) {
    rangeCheckForAdd(index);
    boolean modified = false;
    for (E e : c) {
        add(index++, e);
        modified = true;
    }
    return modified;
}

Users only need to implement the add() method:

public void add(int index, E element) {
    throw new UnsupportedOperationException();
}

Here, it’s the responsibility of the programmer to provide an implementation for adding an element to the list at the given index (the variant part of the listing algorithm).
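
To see how little a sub-class has to supply, here’s a minimal sketch of a List built on AbstractList. Only the variant parts are implemented; an ArrayList serves purely as a backing store for brevity:

public class SimpleList<E> extends AbstractList<E> {

    private final List<E> backing = new ArrayList<>();

    @Override
    public void add(int index, E element) {
        backing.add(index, element);
    }

    @Override
    public E get(int index) {
        return backing.get(index);
    }

    @Override
    public int size() {
        return backing.size();
    }
}

With these three methods in place, the inherited template method addAll() works out of the box.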

5. Conclusion

In this article, we showed the template method pattern and how to implement it in Java.

The template method pattern promotes code reuse and decoupling, but at the expense of using inheritance.

As always, all the code samples shown in this article are available over on GitHub.

Convert Date to LocalDate or LocalDateTime and Back


1. Overview

Starting with Java 8, we have a new Date API – java.time.

However, sometimes we still need to perform conversions between the new and the old APIs, and work with date representations from both.

2. Converting java.util.Date to java.time.LocalDate

Let’s start with converting the old date representation to the new one.

Here, we can take advantage of a new toInstant() method which was added to java.util.Date in Java 8.

When we’re converting an Instant object, it’s required to use a ZoneId, because Instant objects are timezone-agnostic – just points on the timeline.

The atZone(ZoneId zone) API from Instant object returns a ZonedDateTime – so we just need to extract LocalDate from it using the toLocalDate() method.

In our first example here, we’re using the default system ZoneId:

public LocalDate convertToLocalDateViaInstant(Date dateToConvert) {
    return dateToConvert.toInstant()
      .atZone(ZoneId.systemDefault())
      .toLocalDate();
}

A similar solution to the above one, but with a different way of creating an Instant object – using the ofEpochMilli() method:

public LocalDate convertToLocalDateViaMilisecond(Date dateToConvert) {
    return Instant.ofEpochMilli(dateToConvert.getTime())
      .atZone(ZoneId.systemDefault())
      .toLocalDate();
}

Before we move on, let’s also have a quick look at the old java.sql.Date class and how that can be converted to a LocalDate as well.

Starting with Java 8, we can find an additional toLocalDate() method on java.sql.Date – which also gives us an easy way of converting it to java.time.LocalDate.

In this case, we don’t need to worry about the timezone:

public LocalDate convertToLocalDateViaSqlDate(Date dateToConvert) {
    return new java.sql.Date(dateToConvert.getTime()).toLocalDate();
}

Very similarly, we can convert an old Date object into a LocalDateTime object as well – let’s have a look at that next.

3. Converting java.util.Date to java.time.LocalDateTime

To get a LocalDateTime instance, we can similarly use an intermediary ZonedDateTime and then the toLocalDateTime() API.

Just like before, we can use two possible solutions of getting an Instant object from java.util.Date:

public LocalDateTime convertToLocalDateTimeViaInstant(Date dateToConvert) {
    return dateToConvert.toInstant()
      .atZone(ZoneId.systemDefault())
      .toLocalDateTime();
}

public LocalDateTime convertToLocalDateTimeViaMilisecond(Date dateToConvert) {
    return Instant.ofEpochMilli(dateToConvert.getTime())
      .atZone(ZoneId.systemDefault())
      .toLocalDateTime();
}

And, starting with Java 8, we can also use java.sql.Timestamp to obtain a LocalDateTime:

public LocalDateTime convertToLocalDateTimeViaSqlTimestamp(Date dateToConvert) {
    return new java.sql.Timestamp(
      dateToConvert.getTime()).toLocalDateTime();
}

4. Convert java.time.LocalDate to java.util.Date

Now that we have a good understanding of how to convert from the old date representation to the new one, let’s have a look at converting in the other direction.

We’ll discuss two possible ways of converting LocalDate to Date.

In the first one, we use the valueOf(LocalDate date) method provided by java.sql.Date, which takes a LocalDate as a parameter:

public Date convertToDateViaSqlDate(LocalDate dateToConvert) {
    return java.sql.Date.valueOf(dateToConvert);
}

As we can see, it is effortless and intuitive. It uses the local time zone for the conversion (all is done under the hood, so there’s no need to worry).

In another Java 8 example, we use an Instant object which we pass to the from(Instant instant) method of java.util.Date:

public Date convertToDateViaInstant(LocalDate dateToConvert) {
    return java.util.Date.from(dateToConvert.atStartOfDay()
      .atZone(ZoneId.systemDefault())
      .toInstant());
}

You’ll notice we make use of an Instant object here, and that we also need to care about time zones when doing this conversion.

Next, let’s use very similar solutions to convert a LocalDateTime to a Date object.

5. Convert java.time.LocalDateTime to java.util.Date

The easiest way of getting a java.util.Date from LocalDateTime is to use an extension to the java.sql.Timestamp – available with Java 8:

public Date convertToDateViaSqlTimestamp(LocalDateTime dateToConvert) {
    return java.sql.Timestamp.valueOf(dateToConvert);
}

But of course, an alternative solution is using an Instant object – which we obtain from ZonedDateTime:

Date convertToDateViaInstant(LocalDateTime dateToConvert) {
    return java.util.Date
      .from(dateToConvert.atZone(ZoneId.systemDefault())
      .toInstant());
}

6. Java 9 Additions

In Java 9, there are new methods available that simplify conversion between java.util.Date and java.time.LocalDate or java.time.LocalDateTime.

LocalDate.ofInstant(Instant instant, ZoneId zone) and LocalDateTime.ofInstant(Instant instant, ZoneId zone) provide handy shortcuts:

public LocalDate convertToLocalDate(Date dateToConvert) {
    return LocalDate.ofInstant(
      dateToConvert.toInstant(), ZoneId.systemDefault());
}

public LocalDateTime convertToLocalDateTime(Date dateToConvert) {
    return LocalDateTime.ofInstant(
      dateToConvert.toInstant(), ZoneId.systemDefault());
}

7. Conclusion

In this tutorial, we covered possible ways of converting the old java.util.Date into the new java.time.LocalDate and java.time.LocalDateTime, and the other way around.

The full implementation of this article is available over on GitHub.

Introduction to the JSON Binding API (JSR 367) in Java


1. Overview

For a long time, there was no standard for JSON processing in Java. The most common libraries used for JSON processing are Jackson and Gson.

Recently, Java EE 7 came with an API for parsing and generating JSON (JSR 353: Java API for JSON Processing).

And finally, with the release of JEE 8, there is a standardized API (JSR 367: Java API for JSON Binding (JSON-B)).

For now, its main implementations are Eclipse Yasson (RI) and Apache Johnzon.

2. JSON-B API

2.1. Maven Dependency

Let’s start by adding the necessary dependency.

Keep in mind that in many cases it’ll be enough to include the dependency for the chosen implementation and the javax.json.bind-api will be included transitively:

<dependency>
    <groupId>javax.json.bind</groupId>
    <artifactId>javax.json.bind-api</artifactId>
    <version>1.0</version>
</dependency>

The most recent version can be found at Maven Central.

3. Using Eclipse Yasson

Eclipse Yasson is the official reference implementation of JSON Binding API (JSR-367).

3.1. Maven Dependency

To use it, we need to include the following dependencies in our Maven project:

<dependency>
    <groupId>org.eclipse</groupId>
    <artifactId>yasson</artifactId>
    <version>1.0.1</version>
</dependency>
<dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.json</artifactId>
    <version>1.1.2</version>
</dependency>

The most recent versions can be found at Maven Central.

4. Using Apache Johnzon

Another implementation we can use is Apache Johnzon which complies with the JSON-P (JSR-353) and JSON-B (JSR-367) APIs.

4.1. Maven Dependency

To use it, we need to include the following dependencies in our Maven project:

<dependency>
    <groupId>org.apache.geronimo.specs</groupId>
    <artifactId>geronimo-json_1.1_spec</artifactId>
    <version>1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.johnzon</groupId>
    <artifactId>johnzon-jsonb</artifactId>
    <version>1.1.4</version>
</dependency>

The most recent versions can be found at Maven Central.

5. API Features

The API provides annotations for customizing serialization/deserialization.

Let’s create a simple class and see what an example configuration looks like:

public class Person {

    private int id;

    @JsonbProperty("person-name")
    private String name;
    
    @JsonbProperty(nillable = true)
    private String email;
    
    @JsonbTransient
    private int age;
     
    @JsonbDateFormat("dd-MM-yyyy")
    private LocalDate registeredDate;
    
    private BigDecimal salary;
    
    @JsonbNumberFormat(locale = "en_US", value = "#0.0")
    public BigDecimal getSalary() {
        return salary;
    }
 
    // standard getters and setters
}

After serialization, an object of this class will look like:

{
   "email":"jhon@test.com",
   "id":1,
   "person-name":"Jhon",
   "registeredDate":"07-09-2019",
   "salary":"1000.0"
}

The annotations used here are:

  • @JsonbProperty – which is used for specifying a custom field name
  • @JsonbTransient – when we want to ignore the field during deserialization/serialization
  • @JsonbDateFormat – when we want to define the display format of the date
  • @JsonbNumberFormat – for specifying the display format for numeric values
  • @JsonbNillable – for enabling serialization of null values

5.1. Serialization and Deserialization

First of all, to obtain the JSON representation of our object, we need to build a Jsonb instance with the JsonbBuilder class and then use its toJson() method.

To start, let’s create a simple Person object like this:

Person person = new Person(
  1, 
  "Jhon", 
  "jhon@test.com", 
  20, 
  LocalDate.of(2019, 9, 7), 
  BigDecimal.valueOf(1000));

And obtain a Jsonb instance:

Jsonb jsonb = JsonbBuilder.create();

Then, we use the toJson method:

String jsonPerson = jsonb.toJson(person);

To obtain the following JSON representation:

{
    "email":"jhon@test.com",
    "id":1,
    "person-name":"Jhon",
    "registeredDate":"07-09-2019",
    "salary":"1000.0"
}

If we want to do the conversion the other way, we can use the fromJson method:

Person person = jsonb.fromJson(jsonPerson, Person.class);

Naturally, we can also process collections:

List<Person> personList = Arrays.asList(...);
String jsonArrayPerson = jsonb.toJson(personList);

To obtain the following JSON representation:

[ 
    {
      "email":"jhon@test.com",
      "id":1,
      "person-name":"Jhon", 
      "registeredDate":"09-09-2019",
      "salary":"1000.0"
    },
    {
      "email":"jhon1@test.com",
      "id":2,
      "person-name":"Jhon",
      "registeredDate":"09-09-2019",
      "salary":"1500.0"
    },
    ...
]

To convert from a JSON array to a List, we’ll pass the fromJson API a Type that captures the List’s generic parameter; here, the anonymous-subclass trick provides it:

List<Person> personList = jsonb.fromJson(
  personJsonArray, 
  new ArrayList<Person>(){}.getClass().getGenericSuperclass()
);
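
Alternatively, deserializing into an array type sidesteps the generics workaround entirely. A small sketch, not from the original article:

Person[] personArray = jsonb.fromJson(personJsonArray, Person[].class);
List<Person> personList = Arrays.asList(personArray);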

5.2. Custom Mapping with JsonbConfig

The JsonbConfig class allows us to customize the mapping process for all classes.

For example, we can change the default naming strategies or the properties order.

Now, we’ll use the LOWER_CASE_WITH_UNDERSCORES strategy:

JsonbConfig config = new JsonbConfig().withPropertyNamingStrategy(
  PropertyNamingStrategy.LOWER_CASE_WITH_UNDERSCORES);
Jsonb jsonb = JsonbBuilder.create(config);
String jsonPerson = jsonb.toJson(person);

To obtain the following JSON representation:

{
   "email":"jhon@test.com",
   "id":1,
   "person-name":"Jhon",
   "registered_date":"07-09-2019",
   "salary":"1000.0"
}

Now, we’ll change the property order with the REVERSE strategy, which emits the properties in reverse lexicographical order.
The order can also be fixed per class at compile time with the @JsonbPropertyOrder annotation, as sketched after the output below. Let’s see the REVERSE strategy in action:

JsonbConfig config 
  = new JsonbConfig().withPropertyOrderStrategy(PropertyOrderStrategy.REVERSE);
Jsonb jsonb = JsonbBuilder.create(config);
String jsonPerson = jsonb.toJson(person);

To obtain the following JSON representation:

{
    "salary":"1000.0",
    "registeredDate":"07-09-2019",
    "person-name":"Jhon",
    "id":1,
    "email":"jhon@test.com"
}
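
As mentioned, @JsonbPropertyOrder fixes the order for a single class. A minimal sketch, assuming we want exactly the order shown above (the annotation references the Java property names, not the customized JSON names):

@JsonbPropertyOrder({"salary", "registeredDate", "name", "id", "email"})
public class Person {
    // properties are serialized in the order listed in the annotation
}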

5.3. Custom Mapping with Adapters

When the annotations and the JsonbConfig class aren’t enough for us, we can use adapters.

To use them, we’ll need to implement the JsonbAdapter interface, which defines the following methods:

  • adaptToJson – With this method, we can use custom conversion logic for the serialization process.
  • adaptFromJson – This method allows us to use custom conversion logic for the deserialization process.

Let’s create a PersonAdapter to process the id and name attributes of the Person class:

public class PersonAdapter implements JsonbAdapter<Person, JsonObject> {

    @Override
    public JsonObject adaptToJson(Person p) throws Exception {
        return Json.createObjectBuilder()
          .add("id", p.getId())
          .add("name", p.getName())
          .build();
    }

    @Override
    public Person adaptFromJson(JsonObject adapted) throws Exception {
        Person person = new Person();
        person.setId(adapted.getInt("id"));
        person.setName(adapted.getString("name"));
        return person;
    }
}

Next, we’ll register the adapter with our JsonbConfig instance:

JsonbConfig config = new JsonbConfig().withAdapters(new PersonAdapter());
Jsonb jsonb = JsonbBuilder.create(config);
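
Then, serializing the same Person instance runs through the adapter:

String jsonPerson = jsonb.toJson(person);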

And we’ll get the following JSON representation:

{
    "id":1, 
    "name":"Jhon"
}

6. Conclusion

In this tutorial, we saw an example of how to integrate the JSON-B API with Java applications using the available implementations, along with examples of customizing serialization and deserialization at both compile time and runtime.

The complete code is available, as always, over on GitHub.

Guide to java.util.Formatter


1. Overview

In this article, we’ll discuss String formatting in Java using the java.util.Formatter class, which provides support for layout justification and alignment.

2. How to Use the Formatter

Remember C’s printf? Formatting a String in Java feels very similar.

The format() method of the Formatter is exposed via a static method from the String class. This method accepts a template String and a list of arguments to populate the template with:

String greetings = String.format(
  "Hello Folks, welcome to %s !", 
  "Baeldung");

The resulting String is:

"Hello Folks, welcome to Baeldung !"

A template is a String that contains some static text and one or more format specifiers, which indicate which argument is to be placed at the particular position.

In this case, there’s a single format specifier %s, which gets replaced by the corresponding argument.

3. Format Specifiers

3.1. General Syntax

The syntax of format specifiers for General, Character, and Numeric type is:

%[argument_index$][flags][width][.precision]conversion

Specifiers argument_index, flag, width, and precision are optional. 

  • argument_index part is an integer i – indicating that the ith argument from the argument list should be used here
  • flags is a set of characters used for modifying the output format
  • width is a positive integer which indicates the minimum number of characters to be written to the output
  • precision is an integer usually used to restrict the number of characters, whose specific behavior depends on the conversion
  • conversion is the mandatory part. It’s a character indicating how the argument should be formatted. The set of valid conversions for a given argument depends on the argument’s data type

In our example above, if we want to specify the position of an argument explicitly, we can reference it with the 1$ and 2$ argument indices, which refer to the first and second arguments respectively:

String greetings = String.format(
  "Hello %2$s, welcome to %1$s !", 
  "Baeldung", 
  "Folks");

3.2. For Date/Time Representation

%[argument_index$][flags][width]conversion

Again the argument_index, flags, and width are optional.

Let’s take an example to understand this:

@Test
public void whenFormatSpecifierForCalendar_thenGotExpected() {
    Calendar c = new GregorianCalendar(2017, 11, 10);
    String s = String.format(
      "The date is: %tm %1$te,%1$tY", c);

    assertEquals("The date is: 12 10,2017", s);
}

Here, the first argument is used for every format specifier, hence the 1$. If we skipped the argument_index for the 2nd and 3rd format specifiers, the Formatter would look for three separate arguments, whereas we want to reuse the same argument for all three.

So, it’s fine to omit the argument_index for the first specifier, but we need to specify it for the other two.

The date/time conversion here is made up of two characters: the first is always ‘t’ or ‘T’, and the second depends on which part of the Calendar is to be displayed.

In our example, the first format specifier tm indicates the month formatted as two digits, te indicates the day of the month, and tY indicates the year formatted as four digits.
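
The Formatter defines many more suffix characters than the three used here. For instance, the composite tR suffix prints the hour and minute in 24-hour HH:MM form; a quick sketch:

Calendar c = new GregorianCalendar(2017, 11, 10, 14, 30);
String time = String.format("The time is: %tR", c);
// -> "The time is: 14:30"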

3.3. Format Specifiers Without Arguments

%[flags][width]conversion

The optional flags and width are the same as defined in the sections above.

The required conversion is a character or String indicating the content to be inserted in the output. Currently, only the ‘%’ and newline ‘n’ can be printed this way:

@Test
public void whenNoArguments_thenExpected() {
    String s = String.format("John scored 90%% in Fall semester");
 
    assertEquals("John scored 90% in Fall semester", s);
}

Inside format(), if we want to print ‘%’ – we need to escape it by using ‘%%’.

4. Conversions

Let’s now dig into every detail of the Format Specifier syntax, starting with a conversion. Note that you can find all the details in the Formatter javadocs.

As we noticed in the above examples, the conversion part is required in all format specifiers, and it can be divided into several categories.

Let’s take a look at each one by taking examples.

4.1. General 

Used for any argument type. The general conversions are:

  1. ‘b’ or ‘B’ – for Boolean values
  2. ‘h’ or ‘H’ – for HashCode
  3. ‘s’ or ‘S’ – for String, if null, it prints “null”, else arg.toString()

We’ll now try to display boolean and String values, using the corresponding conversions:

@Test
public void givenString_whenGeneralConversion_thenConvertedString() {
    String s = String.format("The correct answer is %s", false);
    assertEquals("The correct answer is false", s);

    s = String.format("The correct answer is %b", null);
    assertEquals("The correct answer is false", s);

    s = String.format("The correct answer is %B", true);
    assertEquals("The correct answer is TRUE", s);
}
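
The ‘h’ conversion isn’t shown above; it prints the argument’s hash code as a hexadecimal string. A quick sketch:

String s = String.format("The hash code is %h", "Baeldung");
// prints the result of "Baeldung".hashCode() formatted in hexadecimal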

4.2. Character 

Used for the basic types which represent Unicode characters: char, Character, byte, Byte, short, and Short. This conversion can also be used for the types int and Integer when Character.isValidCodePoint(int) returns true for them.

It can be written as ‘c’ or ‘C’ based on the case we want.

Let’s try to print some characters:

@Test
public void givenString_whenCharConversion_thenConvertedString() {
    String s = String.format("The correct answer is %c", 'a');
    assertEquals("The correct answer is a", s);

    s = String.format("The correct answer is %c", null);
    assertEquals("The correct answer is null", s);

    s = String.format("The correct answer is %C", 'b');
    assertEquals("The correct answer is B", s);

    s = String.format("The valid unicode character: %c", 0x0400);
    assertTrue(Character.isValidCodePoint(0x0400));
    assertEquals("The valid unicode character: Ѐ", s);
}

Let’s take one more example of an invalid code point:

@Test(expected = IllegalFormatCodePointException.class)
public void whenIllegalCodePointForConversion_thenError() {
    String s = String.format("The valid unicode character: %c", 0x11FFFF);
 
    assertFalse(Character.isValidCodePoint(0x11FFFF));
    assertEquals("The valid unicode character: Ā", s);
}

4.3. Numeric – Integral

These are used for the Java integral types: byte, Byte, short, Short, int, Integer, long, Long, and BigInteger. There are three conversions in this category:

  1. ‘d’ – for decimal number
  2. ‘o’ – for octal number
  3. ‘X’ or ‘x’ – for hexadecimal number

Let’s try to print each of these:

@Test
public void whenNumericIntegralConversion_thenConvertedString() {
    String s = String.format("The number 25 in decimal = %d", 25);
    assertEquals("The number 25 in decimal = 25", s);

    s = String.format("The number 25 in octal = %o", 25);
    assertEquals("The number 25 in octal = 31", s);

    s = String.format("The number 25 in hexadecimal = %x", 25);
    assertEquals("The number 25 in hexadecimal = 19", s);
}

4.4. Numeric – Floating Point

Used for the Java floating-point types: float, Float, double, Double, and BigDecimal. There are three conversions in this category:

  1. ‘e’ or ‘E’ – formatted as a decimal number in computerized scientific notation
  2. ‘f’ – formatted as a decimal number
  3. ‘g’ or ‘G’ – based on the precision value after rounding, this conversion formats into computerized scientific notation or decimal format

Let’s try to print the floating point numbers:

@Test
public void whenNumericFloatingConversion_thenConvertedString() {
    String s = String.format(
      "The computerized scientific format of 10000.00 "
      + "= %e", 10000.00);
 
    assertEquals(
      "The computerized scientific format of 10000.00 = 1.000000e+04", s);
    
    String s2 = String.format("The decimal format of 10.019 = %f", 10.019);
    assertEquals("The decimal format of 10.019 = 10.019000", s2);
}
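
The test above doesn’t cover the ‘g’ conversion. A short sketch of how it chooses between the two notations, assuming the default precision of 6 significant digits:

@Test
public void whenGeneralFloatingConversion_thenConvertedString() {
    // values in [10^-4, 10^precision) are printed in decimal notation
    String s = String.format("%g", 10000.00);
    assertEquals("10000.0", s);

    // values outside that range fall back to scientific notation
    s = String.format("%g", 0.00001);
    assertEquals("1.00000e-05", s);
}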

4.5. Other Conversions 

  • Date/Time – for Java types which are capable of encoding a date or time: long, Long, Calendar, Date and TemporalAccessor. For these, we need to use the prefix ‘t’ or ‘T’, as we saw earlier
  • Percent – prints a literal ‘%’ (‘\u0025’)
  • Line Separator – prints a platform-specific line separator

Let’s have a look at a simple example. Note that %n emits the platform-specific line separator, so we compare against System.lineSeparator() to keep the test portable:

@Test
public void whenLineSeparatorConversion_thenConvertedString() {
    String s = String.format("First Line %nSecond Line");

    assertEquals(
      "First Line " + System.lineSeparator() + "Second Line", s);
}

5. Flags

Flags, in general, are used to format the output. In the case of date and time, though, they’re used to specify which part of the date is to be displayed, as we saw in the earlier example.

A number of flags are available, a list of which can be found in the documentation.

Let’s see a flag example to understand its usage. ‘-‘ is used to format the output as left-justified:

@Test
public void whenSpecifyFlag_thenGotFormattedString() {
    String s = String.format("Without left justified flag: %5d", 25);
    assertEquals("Without left justified flag:    25", s);

    s = String.format("With left justified flag: %-5d", 25);
    assertEquals("With left justified flag: 25   ", s);
}
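
A few more commonly used flags, as a hedged sketch (the grouping separator in the last line assumes an en_US default locale):

String s = String.format("%05d", 25);    // zero-padding: "00025"
s = String.format("%+d", 25);            // always print the sign: "+25"
s = String.format("%,d", 1000000);       // grouping separators: "1,000,000"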

6. Precision

For general conversions, precision is just the maximum number of characters to be written to the output. For the floating-point conversions, on the other hand, the precision is the number of digits after the radix point.

The first statement is an example of precision with floating-point numbers, and the second one with general conversions:

@Test
public void whenSpecifyPrecision_thenGotExpected() {
    String s = String.format(
      "Output of 25.09878 with Precision 2: %.2f", 25.09878);
 
    assertEquals("Output of 25.09878 with Precision 2: 25.10", s);

    String s2 = String.format(
      "Output of general conversion type with Precision 2: %.2b", true);
 
    assertEquals("Output of general conversion type with Precision 2: tr", s2);
}

7. Argument Index

As mentioned previously, the argument_index is an integer that indicates the position of the argument in the argument list. 1$ indicates the first argument, 2$ the second argument, and so on.

Also, there is another way to reference arguments by position: the ‘<‘ (‘\u003c’) flag, which means the argument from the previous format specifier will be reused. For example, these two statements produce identical output:

@Test
public void whenSpecifyArgumentIndex_thenGotExpected() {
    Calendar c = new GregorianCalendar(2017, 11, 10);
    String s = String.format("The date is: %tm %1$te,%1$tY", c);
    assertEquals("The date is: 12 10,2017", s);

    s = String.format("The date is: %tm %<te,%<tY", c);
    assertEquals("The date is: 12 10,2017", s);
}

8. Other Ways of Using Formatter

So far, we’ve used the static format() method of the String class. We can also create a Formatter instance and invoke its format() method directly.

We can create an instance by passing in an Appendable, an OutputStream, a File, or a file name. The formatted String is then written to that target.

Let’s see an example of using it with an Appendable; the other targets work the same way.

8.1. Using Formatter with Appendable

Let’s create a StringBuilder instance sb and wrap it in a Formatter. Then we’ll invoke format() to format a String:

@Test
public void whenCreateFormatter_thenFormatterWithAppendable() {
    StringBuilder sb = new StringBuilder();
    Formatter formatter = new Formatter(sb);
    formatter.format("I am writing to a %s Instance.", sb.getClass());
    
    assertEquals(
      "I am writing to a class java.lang.StringBuilder Instance.", 
      sb.toString());
}
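
Using a Formatter with a file works the same way; a minimal sketch, where "report.txt" is just an illustrative file name:

// Formatter(String fileName) throws FileNotFoundException,
// so the caller needs to handle or declare it
try (Formatter formatter = new Formatter("report.txt")) {
    // the formatted output is written to the file instead of a StringBuilder
    formatter.format("Pi is approximately %.5f%n", Math.PI);
}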

9. Conclusion

In this article, we saw the formatting facilities provided by the java.util.Formatter class. We saw the various syntax options that can be used to format a String and the conversion types that apply to the different data types.

As usual, the code for the examples we saw can be found over on GitHub.

Java Weekly, Issue 205


Here we go…

1. Spring and Java

 

>> JUnit 5 Meets AssertJ [blog.codeleak.pl]

Although JUnit 5 is much more flexible than the previous version, AssertJ is still a must.

>> Binding applications to HashiCorp’s Vault with Spring in Cloud Foundry [spring.io]

A quick guide to binding a Spring application to HashiCorp’s Vault service broker on Cloud Foundry.

>> How to Ensure Your Code Works With Older JDKs [blog.jooq.org]

The Animal Sniffer Maven Plugin can come in handy for complex scenarios.

>> Early Java EE 8 implementation in the November Liberty beta [developer.ibm.com]

An early Java EE 8 implementation is up for grabs 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Understanding Monads. A Guide for the Perplexed [infoq.com]

Monads are easier than you might think 🙂

Also worth reading:

3. Musings

>> How to become a Java Champion [vladmihalcea.com]

Some very cool insights into Vlad’s journey to becoming a Java Champion.

>> Making the most out of conferences [blog.frankel.ch]

Plan, go offline, take notes… but don’t overdo it.

>> Learning in a World Where Programming Skills Aren’t That Important [daedtech.com]

After a few years of commercial programming, it’s easy to reach a plateau and career stagnation; at that point, highly advanced programming skills alone won’t help you progress further.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Worthless Financial Projections [dilbert.com]

>> Brain Scan [dilbert.com]

>> Initial Coin Offering [dilbert.com]

5. Pick of the Week

This week I’ve finally announced the new material that’s coming in my security course – all related to Spring Security 5 (along with the upcoming price change):

>> The upcoming new modules in Learn Spring Security
