Channel: Baeldung

Java 10 LocalVariable Type-Inference


1. Overview

One of the most visible enhancements in JDK 10 is type inference of local variables with initializers.

This tutorial provides the details of this feature with examples.

2. Introduction

Until Java 9, we had to mention the type of the local variable explicitly and ensure it was compatible with the initializer used to initialize it:

String message = "Good bye, Java 9";

In Java 10, this is how we could declare a local variable:

@Test
public void whenVarInitWithString_thenGetStringTypeVar() {
    var message = "Hello, Java 10";
    assertTrue(message instanceof String);
}

We don’t provide the data type of message. Instead, we mark message as a var, and the compiler infers the type of message from the initializer on the right-hand side.

In the above example, the type of message is String.

Note that this feature is available only for local variables with an initializer. It cannot be used for member variables, method parameters, return types, etc. The initializer is required because, without it, the compiler cannot infer the type.

This enhancement helps reduce boilerplate code; for example:

Map<Integer, String> map = new HashMap<>();

This can now be rewritten as:

var idToNameMap = new HashMap<Integer, String>();

This also helps to focus on the variable name rather than on the variable type.

Another thing to note is that var is not a keyword – this ensures backward compatibility for programs that use var as, say, a function or variable name. Instead, var is a reserved type name, just like int.

Finally, note that there is no runtime overhead in using var, nor does it make Java a dynamically typed language. The type of the variable is still inferred at compile time and cannot be changed later.
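A minimal sketch of this point (assuming a Java 10+ compiler): the variable below behaves exactly like an explicitly typed String, so any String method is available, while assigning a value of another type would be rejected at compile time.

```java
public class VarTypingDemo {

    static String inferredType() {
        var message = "Hello";        // the compiler infers String here
        message = "Good bye";         // fine: still a String
        // message = 42;              // would not compile: incompatible types
        return message.toUpperCase(); // every String method is available
    }

    public static void main(String[] args) {
        System.out.println(inferredType());
    }
}
```

The commented-out assignment illustrates that the inferred type is fixed once and for all at the declaration.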

3. Illegal Use of var

As mentioned earlier, var won’t work without the initializer:

var n; // error: cannot use 'var' on variable without initializer

Nor would it work if initialized with null:

var emptyList = null; // error: variable initializer is 'null'

It won’t work for non-local variables:

public var = "hello"; // error: 'var' is not allowed here

A lambda expression needs an explicit target type, so var cannot be used:

var p = (String s) -> s.length() > 10; // error: lambda expression needs an explicit target-type

The same is the case with an array initializer:

var arr = { 1, 2, 3 }; // error: array initializer needs an explicit target-type

4. Guidelines for Using var

There are situations where var can be used legally but may not be a good idea.

For example, in situations where the code could become less readable:

var result = obj.process();

Here, although this is a legal use of var, it becomes difficult to understand the type returned by process(), making the code less readable.

java.net has a dedicated article on Style Guidelines for Local Variable Type Inference in Java, which discusses how we should use judgment when using this feature.

Another situation where it’s best to avoid var is in streams with a long pipeline:

var x = emp.getProjects().stream()
  .findFirst()
  .map(String::length)
  .orElse(0);
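For comparison, here is a sketch of the same pipeline with an explicit type (a hypothetical projects list stands in for emp.getProjects()); the int return type tells the reader immediately what the pipeline produces:

```java
import java.util.List;

public class ExplicitTypeDemo {

    // The explicit int type makes the pipeline's result obvious at a glance,
    // which var would hide in a long chain of calls.
    static int firstProjectNameLength(List<String> projects) {
        return projects.stream()
          .findFirst()
          .map(String::length)
          .orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(firstProjectNameLength(List.of("Apollo", "Gemini")));
    }
}
```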

Usage of var may also give unexpected results.

For example, if we use it with the diamond operator introduced in Java 7:

var empList = new ArrayList<>();

The type of empList will be ArrayList<Object> and not List<Object>. If we want it to be ArrayList<Employee>, we will have to be explicit:

var empList = new ArrayList<Employee>();

Using var with non-denotable types can cause unexpected errors.

For example, if we use var with the anonymous class instance:

@Test
public void whenVarInitWithAnonymous_thenGetAnonymousType() {
    var obj = new Object() {};
    assertFalse(obj.getClass().equals(Object.class));
}

Now, if we try to assign another Object to obj, we would get a compilation error:

obj = new Object(); // error: Object cannot be converted to <anonymous Object>

This is because the inferred type of obj isn’t Object.

5. Conclusion

In this article, we saw the new Java 10 local variable type inference feature with examples.

As usual, code snippets can be found over on GitHub.


Java 10 Performance Improvements


1. Overview

In this quick tutorial, we will discuss the performance improvements that come along with the latest Java 10 release.

These improvements apply to all applications running under JDK 10, with no need for any code changes to leverage them.

2. Parallel Full GC for G1

The G1 garbage collector has been the default one since JDK 9. However, the full GC for G1 used a single-threaded mark-sweep-compact algorithm.

This has been changed to a parallel mark-sweep-compact algorithm in Java 10, effectively reducing the stop-the-world time during a full GC.
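As a rough sketch, the number of worker threads the parallel full GC uses is controlled by the same flag that drives G1's other parallel phases; the thread count below is purely illustrative:

```shell
# G1 is already the default in JDK 9+; -XX:ParallelGCThreads caps the
# worker threads used by parallel GC phases, including the full GC in JDK 10
java -XX:+UseG1GC -XX:ParallelGCThreads=4 -jar app.jar
```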

3. Application Class-Data Sharing

Class-Data Sharing, introduced in JDK 5, allows a set of classes to be pre-processed into a shared archive file that can then be memory-mapped at runtime. This reduces startup time and can also reduce dynamic memory footprint when multiple JVMs share the same archive file.

Originally, CDS allowed only the bootstrap class loader, limiting the feature to system classes. Application CDS (AppCDS) extends CDS to allow the built-in system class loader (a.k.a., the “app class loader”), the built-in platform class loader, and custom class loaders to load archived classes. This makes it possible to use the feature for application classes.

We can use the following steps to make use of this feature:

1. Get the list of classes to archive

The following command will dump the classes loaded by the HelloWorld application into hello.lst:

$ java -Xshare:off -XX:+UseAppCDS -XX:DumpLoadedClassList=hello.lst \ 
    -cp hello.jar HelloWorld

2. Create the AppCDS archive

The following command creates hello.jsa using hello.lst as input:

$ java -Xshare:dump -XX:+UseAppCDS -XX:SharedClassListFile=hello.lst \
    -XX:SharedArchiveFile=hello.jsa -cp hello.jar

3. Use the AppCDS archive

The following command starts the HelloWorld application with hello.jsa as input:

$ java -Xshare:on -XX:+UseAppCDS -XX:SharedArchiveFile=hello.jsa \
    -cp hello.jar HelloWorld

AppCDS was a commercial feature in Oracle JDK for JDK 8 and JDK 9. Now it is open-sourced and publicly available.

4. Experimental Java-Based JIT Compiler

Graal is a dynamic compiler written in Java that integrates with the HotSpot JVM; it’s focused on high performance and extensibility. It’s also the basis of the experimental Ahead-of-Time (AOT) compiler introduced in JDK 9.

JDK 10 enables the Graal compiler to be used as an experimental JIT compiler on the Linux/x64 platform.

To enable Graal as the JIT compiler, use the following options on the java command line:

-XX:+UnlockExperimentalVMOptions -XX:+UseJVMCICompiler

Note that this is an experimental feature and we may not necessarily get better performance than the existing JIT compilers.

5. Conclusion

In this quick article, we focused on and explored the performance improvement features in Java 10.

Interoperability Between Java and Vavr


1. Overview

As Vavr primarily works within the Java ecosystem, there’s always a need to convert Vavr’s data structures into Java-understandable data structures.

For example, consider a function which returns an io.vavr.collection.List, and we need to pass on the result to another function which accepts java.util.List. This is where the Java-Vavr interoperability comes in handy.

In this tutorial, we’re going to look at how to convert several Vavr data structures into our standard Java collections and vice versa.

2. Vavr to Java Conversion

The Value interface in Vavr is a base interface for most Vavr tools. Thus, all Vavr’s collections inherit the properties of Value.

This is useful as the Value interface comes with a lot of toJavaXXX() methods that allow us to convert the Vavr data structures to Java equivalents.

Let’s see how a Java List can be obtained from the List or Stream of Vavr:

List<String> vavrStringList = List.of("JAVA", "Javascript", "Scala");
java.util.List<String> javaStringList = vavrStringList.toJavaList();

Stream<String> vavrStream = Stream.of("JAVA", "Javascript", "Scala");
java.util.List<String> javaListFromStream = vavrStream.toJavaList();

The first example converts a Vavr List to a Java list, and the second converts a Vavr Stream to a Java list. Both examples rely on the toJavaList() method.

Similarly, we can obtain other Java collections from Vavr objects.

Let’s see another example of converting a Vavr Map to a Java Map:

Map<String, String> vavrMap = HashMap.of("1", "a", "2", "b", "3", "c");
java.util.Map<String, String> javaMap = vavrMap.toJavaMap();

Besides standard Java collections, Vavr also provides APIs for converting values to Java streams and Optionals.

Let’s see an example of obtaining an Optional using the toJavaOptional() method:

List<String> vavrList = List.of("Java");
Optional<String> optional = vavrList.toJavaOptional();
assertEquals("Java", optional.get());

As an overview of the Vavr methods of this type, we have:

  • toJavaArray()
  • toJavaCollection()
  • toJavaList()
  • toJavaMap()
  • toJavaSet()
  • toJavaOptional()
  • toJavaParallelStream()
  • toJavaStream()

A full list of useful APIs can be found here.

3. Java to Vavr Conversion

All the collection implementations in Vavr have a base type of Traversable. Thus, each collection type features a static method ofAll() that takes an Iterable and converts it to the corresponding Vavr collection.

Let’s see how we can convert a java.util.List to a Vavr List:

java.util.List<String> javaList = Arrays.asList("Java", "Haskell", "Scala");
List<String> vavrList = List.ofAll(javaList);

Similarly, we can use the ofAll() method to convert Java streams to Vavr collections:

java.util.stream.Stream<String> javaStream 
  = Arrays.asList("Java", "Haskell", "Scala").stream();
Stream<String> vavrStream = Stream.ofAll(javaStream);

4. Java Collection Views

The Vavr library also provides Java collection views which delegate calls to the underlying Vavr collections.

The Vavr-to-Java conversion methods create a new instance by iterating through all elements to build a Java collection. This means the performance of the conversion is linear, whereas the performance of creating collection views is constant.

As of writing this article, only the List view is supported in Vavr.

For List, two methods are available to get a view: asJava(), which returns an immutable List, and asJavaMutable(), which returns a mutable one.

Here’s an example that demonstrates the immutable Java List:

@Test(expected = UnsupportedOperationException.class)
public void givenParams_whenVavrListConverted_thenException() {
    java.util.List<String> javaList 
      = List.of("Java", "Haskell", "Scala").asJava();
    
    javaList.add("Python");
    assertEquals(4, javaList.size());
}

As the List is immutable, performing any modification on it throws an UnsupportedOperationException.

We can also get a mutable List by invoking the asJavaMutable() method on List.

Here’s how we do it:

@Test
public void givenParams_whenVavrListConvertedToMutable_thenReturnMutableList() {
    java.util.List<String> javaList = List.of("Java", "Haskell", "Scala")
      .asJavaMutable();
    javaList.add("Python");
 
    assertEquals(4, javaList.size());
}

5. Conversion Between Vavr Objects

Similar to the conversions between Java and Vavr, we can convert a Value type in Vavr to other Value types. This helps us convert between Vavr objects when needed.

For example, suppose we have a List of items and want to filter out duplicates while preserving their order. In this case, we need a LinkedHashSet. Here’s an example that demonstrates the use case:

List<String> vavrList = List.of("Java", "Haskell", "Scala", "Java");
Set<String> linkedSet = vavrList.toLinkedSet();
assertEquals(3, linkedSet.size());
assertTrue(linkedSet instanceof LinkedHashSet);

There are many other methods available in the Value interface which help us to convert collections to different types depending on the use cases.

A full list of the APIs can be found here.

6. Conclusion

In this article, we’ve learned about conversion between Vavr and Java collection types. To check out more APIs provided for conversion by the framework as per the use case, refer to the JavaDoc and the user guide.

The complete source code for all the examples in this article can be found over on Github.

Java Optional – orElse() vs orElseGet()


1. Introduction

The API of Optional has two methods that can easily cause confusion: orElse() and orElseGet().

In this quick tutorial, we’ll look at the difference between those two and explore when to use each one.

2. Signatures

Let’s first start with the basics by looking at their signatures:

public T orElse(T other)

public T orElseGet(Supplier<? extends T> other)

Clearly, orElse() takes a parameter of type T, whereas orElseGet() accepts a Supplier functional interface that returns an object of type T.

Now, based on their Javadocs:

  • orElse(): returns the value if present, otherwise return other
  • orElseGet(): returns the value if present, otherwise invoke other and return the result of its invocation

3. Differences

It’s easy to be a bit confused by these simplified definitions, so let’s dig a little deeper and look at some actual usage scenarios.

3.1. orElse()

Assuming we have our logger configured properly, let’s start by writing a simple piece of code:

String name = Optional.of("baeldung")
  .orElse(getRandomName());

Notice that getRandomName() is a method which returns a random name from a List<String> of names:

public String getRandomName() {
    LOG.info("getRandomName() method - start");
    
    Random random = new Random();
    int index = random.nextInt(5);
    
    LOG.info("getRandomName() method - end");
    return names.get(index);
}

On executing our code, we’ll see the following messages printed in the console:

getRandomName() method - start
getRandomName() method - end

The variable name will hold “baeldung” at the end of the code execution.

From this, we can easily infer that the parameter of orElse() is evaluated even when we have a non-empty Optional.

3.2. orElseGet()

Now, let’s try writing similar code using orElseGet():

String name = Optional.of("baeldung")
  .orElseGet(() -> getRandomName());

The above code will not invoke the getRandomName() method.

Remember (from the Javadoc) that the Supplier method passed as an argument is only executed when an Optional value is not present.

Using orElseGet() for our case will, therefore, save us some time involved in computing a random name.
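The difference can be sketched with a counter (a simplified stand-in for getRandomName()) that records how often the fallback is actually computed:

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;

public class OrElseVsOrElseGet {

    static final AtomicInteger calls = new AtomicInteger();

    // Stand-in for an expensive fallback such as getRandomName();
    // it records every evaluation
    static String expensiveFallback() {
        calls.incrementAndGet();
        return "fallback";
    }

    static int evaluationsForPresentValue() {
        calls.set(0);
        Optional<String> present = Optional.of("baeldung");

        present.orElse(expensiveFallback());                     // argument evaluated eagerly
        present.orElseGet(OrElseVsOrElseGet::expensiveFallback); // supplier never invoked

        return calls.get();
    }

    public static void main(String[] args) {
        System.out.println(evaluationsForPresentValue());
    }
}
```

The counter ends up at 1: only the orElse() call evaluated the fallback, even though both Optionals held a value.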

4. Measuring Performance Impact

Now, to also understand the differences in performance, let’s use JMH and see some actual numbers:

@Benchmark
@BenchmarkMode(Mode.AverageTime)
public String orElseBenchmark() {
    return Optional.of("baeldung").orElse(getRandomName());
}

And orElseGet():

@Benchmark
@BenchmarkMode(Mode.AverageTime)
public String orElseGetBenchmark() {
    return Optional.of("baeldung").orElseGet(() -> getRandomName());
}

While executing our benchmark methods, we get:

Benchmark           Mode  Cnt      Score       Error  Units
orElseBenchmark     avgt   20  60934.425 ± 15115.599  ns/op
orElseGetBenchmark  avgt   20      3.798 ±     0.030  ns/op

As we can see, the performance impact might be substantial even for such a simple use-case scenario.

The numbers above might vary slightly; however, orElseGet() clearly outperformed orElse() in our particular example.

After all, orElse() invokes getRandomName() on every run.

5. What’s Important?

Apart from the performance aspects, other factors worth considering include:

  • What if the method would execute some additional logic? E.g. making some DB inserts or updates
  • Even when we assign an object to orElse() parameter:
    String name = Optional.of("baeldung").orElse("Other")

    we’re still creating the “Other” object for no reason

And that’s why it is important for us to make a careful decision between orElse() and orElseGet() depending on our needs – by default, it makes more sense to use orElseGet() every time unless the default object is already constructed and accessible directly.

6. Conclusion

In this article, we’ve learned the nuances between Optional’s orElse() and orElseGet() methods. We also noticed how sometimes such simple concepts can have a deeper meaning.

As always, the complete source code can be found over on Github.

Java Weekly, Issue 229


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free

>> Metrics with Spring Boot 2.0 – Migration [blog.frankel.ch]

If you’ve been confused about the new way of doing metrics in Spring Boot 2, this is a good place to start.

>> Java and Docker, the limitations [royvanrijn.com]

Running Java in Docker safely might be harder than you think. Luckily, starting with Java 10 – this shouldn’t really be a problem anymore.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Performance testing at Gradle [stefan-oehme.com]

Very interesting insights into how Gradle’s performance gets assessed and tracked.

>> Sometimes The Ax Is Sharp Enough [blog.jbrains.ca]

Best practices are always contextual 🙂

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Purchasing did not Order Part [dilbert.com]

>> Do Not Implicate Boss [dilbert.com]

>> Conditions for Wally to be on the Team [dilbert.com]

5. Pick of the Week

>> The Three Levels of Self-Awareness [markmanson.net]

Variable and Method Hiding in Java


1. Introduction

In this tutorial, we’re going to learn about variable and method hiding in the Java language.

First, we’ll understand the concept and purpose of each of these scenarios. After that, we’ll dive into the use cases and examine different examples.

2. Variable Hiding

Variable hiding happens when we declare a variable in a local scope with the same name as one we already have in an outer scope.

Before jumping to the examples, let’s briefly recap the possible variable scopes in Java. We can define them with the following categories:

  • local variables – declared inside a method, a constructor, or any block of code delimited by curly braces
  • instance variables – declared inside a class and belonging to an instance of an object
  • class or static variables – declared in the class with the static keyword; they have class-level scope

Now, let’s look at examples of hiding for each category of variables.

2.1. The Power of Local

Let’s have a look at the HideVariable class:

public class HideVariable {

    private String message = "this is instance variable";

    HideVariable() {
        String message = "constructor local variable";
        System.out.println(message);
    }

    public void printLocalVariable() {
        String message = "method local variable";
        System.out.println(message);
    }

    public void printInstanceVariable() {
        String message = "method local variable";
        System.out.println(this.message);
    }
}

Here we have the message variable declared in 4 different places. The local variables declared inside of the constructor and the two methods are shadowing the instance variable.

Let’s test the initialization of an object and calling the methods:

HideVariable variable = new HideVariable();
variable.printLocalVariable();

variable.printInstanceVariable();

The output of the code above is:

constructor local variable
method local variable
this is instance variable

Here, the first two calls retrieve the local variables.

To access the instance variable from the local scope, we can use the this keyword, as shown in the printInstanceVariable() method.

2.2. Hiding and The Hierarchy

Similarly, when both the child and the parent classes have a variable with the same name, the child’s variable hides the one from the parent.

Let’s suppose we have the parent class:

public class ParentVariable {

    String instanceVariable = "parent variable";

    public void printInstanceVariable() {
        System.out.println(instanceVariable);
    }
}

After that we define a child class:

public class ChildVariable extends ParentVariable {

    String instanceVariable = "child variable";

    public void printInstanceVariable() {
        System.out.println(instanceVariable);
    }
}

To test it, let’s initialize two instances, one with the parent class and one with the child, and then invoke the printInstanceVariable() method on each of them:

ParentVariable parentVariable = new ParentVariable();
ParentVariable childVariable = new ChildVariable();

parentVariable.printInstanceVariable();
childVariable.printInstanceVariable();

The output shows the property hiding:

parent variable
child variable
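Note that, unlike method calls, field access is resolved at compile time from the reference's declared type, not from the object's runtime type. A minimal sketch (with hypothetical Parent/Child classes mirroring the ones above):

```java
public class FieldHidingDemo {

    static class Parent { String label = "parent"; }

    static class Child extends Parent { String label = "child"; }

    static String labelThroughParentReference() {
        Parent p = new Child();
        // Field access is bound to the declared type Parent,
        // so this reads the hidden parent field, not the child's
        return p.label;
    }

    public static void main(String[] args) {
        System.out.println(labelThroughParentReference());
    }
}
```

This is why the example above needed the overridden printInstanceVariable() method to reach the child's field: fields are hidden, never overridden.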

In most cases, we should avoid creating variables with the same name both in parent and child classes. Instead, we should use a proper access modifier like private and provide getter/setter methods for that purpose.

3. Method Hiding

Method hiding may happen in any hierarchy structure in Java. When a child class defines a static method with the same signature as a static method in the parent class, the child’s method hides the one in the parent class. To learn more about the static keyword, this write-up is a good place to start.

The same behavior involving instance methods is called method overriding. To learn more about method overriding, check out our guide here.

Now, let’s have a look at this practical example:

public class BaseMethodClass {

    public static void printMessage() {
        System.out.println("base static method");
    }
}

BaseMethodClass has a single printMessage() static method.

Next, let’s create a child class with a method of the same signature as in the base class:

public class ChildMethodClass extends BaseMethodClass {

    public static void printMessage() {
        System.out.println("child static method");
    }
}

Here’s how it works:

ChildMethodClass.printMessage();

The output after calling the printMessage() method:

child static method

The ChildMethodClass.printMessage() hides the method in BaseMethodClass.

3.1. Method Hiding vs Overriding

Hiding doesn’t work like overriding because static methods are not polymorphic. Overriding occurs only with instance methods. It supports late binding, so which method gets called is determined at runtime.

On the other hand, method hiding works with static methods. Therefore, it’s determined at compile time.
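The compile-time nature of hiding shows up when a static method is called through an instance reference (legal, though discouraged): the declared type of the reference decides which method runs. A sketch with hypothetical Base/Child classes:

```java
public class StaticDispatchDemo {

    static class Base { static String who() { return "base"; } }

    static class Child extends Base { static String who() { return "child"; } }

    static String resolvedAtCompileTime() {
        Base b = new Child();
        // The call is bound at compile time to the declared type of 'b',
        // so Base.who() runs even though the object is a Child at runtime
        return b.who();
    }

    public static void main(String[] args) {
        System.out.println(resolvedAtCompileTime());
    }
}
```

Had who() been an instance method, the same call would have dispatched to the Child implementation at runtime.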

4. Conclusion

In this article, we went over the concept of method and variable hiding in Java. We showed different scenarios of variable hiding and shadowing, and we also compared method hiding with method overriding.

As usual, the complete code is available over on GitHub.

Deploy a Spring Boot WAR into a Tomcat Server


1. Introduction

Spring Boot is a convention-over-configuration framework that allows us to create a production-ready setup of a Spring project, and Tomcat is one of the most popular Java Servlet containers.

By default, Spring Boot builds a standalone Java application that can run as a desktop application or be configured as a system service, but there are environments where we can’t install a new service or run the application manually.

Unlike standalone applications, Tomcat is installed as a service that can manage multiple applications within the same application process, avoiding the need for a specific setup for each application.

In this guide, we’re going to create a simple Spring Boot application and adapt it to work within Tomcat.

2. Setting up a Spring Boot Application

We’re going to set up a simple Spring Boot web application using one of the available starter templates:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId> 
    <version>2.0.2.RELEASE</version> 
    <relativePath/> 
</parent> 
<dependencies>
    <dependency> 
        <groupId>org.springframework.boot</groupId> 
        <artifactId>spring-boot-starter-web</artifactId> 
    </dependency> 
</dependencies>

There’s no need for additional configurations beyond the standard @SpringBootApplication since Spring Boot takes care of the default setup.

We add a simple REST endpoint to return some valid content for us:

@RestController
public class TomcatController {

    @GetMapping("/hello")
    public Collection<String> sayHello() {
        return IntStream.range(0, 10)
          .mapToObj(i -> "Hello number " + i)
          .collect(Collectors.toList());
    }
}

Now let’s execute the application with mvn spring-boot:run and start a browser at http://localhost:8080/hello to check the results.

3. Creating a Spring Boot WAR

Servlet containers expect the applications to meet some contracts to be deployed. For Tomcat the contract is the Servlet API 3.0.

To make our application meet this contract, we have to make some small modifications to the source code.

First, we need to package a WAR application instead of a JAR. For this, we change the packaging in pom.xml:

<packaging>war</packaging>

Now, let’s modify the final WAR file name to avoid including version numbers:

<build>
    <finalName>${artifactId}</finalName>
    ... 
</build>

Then, we’re going to add the Tomcat dependency:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-tomcat</artifactId>
   <scope>provided</scope>
</dependency>

Finally, we initialize the Servlet context required by Tomcat by implementing the SpringBootServletInitializer interface:

@SpringBootApplication
public class SpringBootTomcatApplication extends SpringBootServletInitializer {
}

To build our Tomcat-deployable WAR application, we execute mvn clean package. After that, our WAR file is generated at target/spring-boot-tomcat.war (assuming the Maven artifactId is “spring-boot-tomcat”).

We should consider that this new setup makes our Spring Boot application a non-standalone application (if you would like to have it working in standalone mode again, remove the provided scope from the tomcat dependency).

4. Deploying the WAR to Tomcat

To have our WAR file deployed and running in Tomcat, we need to complete the following steps:

  1. Download Apache Tomcat and unpackage it into a tomcat folder
  2. Copy our WAR file from target/spring-boot-tomcat.war to the tomcat/webapps/ folder
  3. From a terminal navigate to tomcat/bin folder and execute
    1. catalina.bat run (on Windows)
    2. catalina.sh run (on Unix-based systems)
  4. Go to http://localhost:8080/spring-boot-tomcat/hello

This has been a quick Tomcat setup, please check the guide on Tomcat Installation for a complete setup guide. There are also additional ways of deploying a WAR file to Tomcat.

5. Conclusion

In this short guide, we created a simple Spring Boot application and turned it into a valid WAR application deployable on a Tomcat server.

As always, the full source code of the examples is available over on GitHub.

Sending SMS in Java with Twilio


1. Introduction

Sending SMS messages is a big part of many modern applications. There are a variety of use cases that SMS messages can serve: two-factor authentication, real-time alerts, chatbots, and many more.

In this tutorial, we’ll build a simple Java application that sends SMS messages using Twilio.

There are a number of services that provide SMS capabilities, such as Nexmo, Plivo, Amazon Simple Notification Service (SNS), Zapier, and more.

Using the Twilio Java client, we can send an SMS message in just a few lines of code.

2. Setting up Twilio

To get started we’ll need a Twilio account. They offer a trial account that is sufficient for testing every feature of their platform.

As part of the account setup, we must also create a phone number. This is important because the trial account requires a verified phone number for sending messages.

Twilio offers a quick setup tutorial for new accounts. Once we complete the account setup and verify our phone number, we can start sending messages.

3. Introduction to TwiML

Before we write our sample application, let’s take a quick look at the data exchange format used for Twilio services.

TwiML is a proprietary markup language based on XML. The elements in a TwiML message mirror the different actions we can take related to telephony: make a phone call, record a message, send a message, and so on.

Here is an example TwiML message for sending an SMS:

<Response>
    <Message>
        <Body>Sample Twilio SMS</Body>
    </Message>
</Response>

And here is another example TwiML message that makes a phone call:

<Response>
    <Dial>
        <Number>415-123-4567</Number>
    </Dial>
</Response>

These are both trivial examples, but they give us a good understanding of what TwiML looks like. It’s composed of verbs and nouns that are easy to remember and directly relate to the action we’d perform with a phone.

4. Sending SMS in Java with Twilio

Twilio provides a rich Java client that makes interacting with their services easy. Instead of having to write code that builds TwiML messages from scratch, we can use an out-of-the-box Java client.

4.1. Dependencies

We can download the dependency directly from Maven Central or by adding the following entry to our pom.xml file:

<dependency>
    <groupId>com.twilio.sdk</groupId>
    <artifactId>twilio</artifactId>
    <version>7.20.0</version>
</dependency>

4.2. Sending an SMS

To get started, let’s look at some sample code:

Twilio.init(ACCOUNT_SID, AUTH_TOKEN);
Message message = Message.creator(
    new PhoneNumber("+12225559999"),
    new PhoneNumber(TWILIO_NUMBER),
    "Sample Twilio SMS using Java")
.create();

Let’s break down the key pieces of the code above:

  • The Twilio.init() call is required once to set up the Twilio environment with our unique Account Sid and Token
  • The Message object is the Java equivalent to the TwiML <Message> element we saw earlier
  • Message.creator() requires three parameters: the To phone number, the From phone number, and the message body
  • The create() method handles sending the message

4.3. Sending an MMS

The Twilio API also supports sending multimedia messages. We can mix and match text and images; for this to work, the receiving phone must support media messaging:

Twilio.init(ACCOUNT_SID, AUTH_TOKEN);
Message message = Message.creator(
    new PhoneNumber("+12225559999"),
    new PhoneNumber(TWILIO_NUMBER),
    "Sample Twilio MMS using Java")
.setMediaUrl(
    Promoter.listOfOne(URI.create("http://www.domain.com/image.png")))
.create();

5. Tracking Message Status

In the previous examples, we didn’t confirm whether the message was actually delivered. However, Twilio provides a mechanism for us to determine whether a message was successfully delivered or not.

5.1. Message Status Codes

When sending a message, it will have one of the following statuses at any given time:

  • Queued – Twilio has received the message and queued it for delivery
  • Sending – the server is in the process of dispatching your message to the nearest upstream carrier in the network
  • Sent – the message was successfully accepted by the nearest upstream carrier
  • Delivered – Twilio has received confirmation of message delivery from the upstream carrier, and possibly the destination handset when available
  • Failed – the message couldn’t be sent
  • Undelivered – the server has received a delivery receipt indicating the message wasn’t delivered

Note that for the last two statuses we can find an error code with more specific details to help us troubleshoot delivery problems.

The Twilio Java Client offers both synchronous and asynchronous methods to fetch the status. Let’s have a look.

5.2. Checking Delivery Status (Synchronous)

Once we’ve created a Message object, we can call Message.getStatus() to see which status it’s currently in. Here, we fetch all of our messages and print each one’s status:

Twilio.init(ACCOUNT_SID, AUTH_TOKEN);
ResourceSet<Message> messages = Message.reader().read();
for (Message message : messages) {
    System.out.println(message.getSid() + " : " + message.getStatus());
}

Note that Message.reader().read() makes a remote API call, so use it sparingly. By default, it returns all the messages we’ve sent, but we can filter the returned messages by phone number or date range.

5.3. Checking Delivery Status (Async)

Because retrieving message status requires a remote API call, it can take a long time. To avoid blocking the current thread, the Twilio Java client also provides an asynchronous version, Message.reader().readAsync():

Twilio.init(ACCOUNT_SID, AUTH_TOKEN);
ListenableFuture<ResourceSet<Message>> future = Message.reader().readAsync();
Futures.addCallback(
    future,
    new FutureCallback<ResourceSet<Message>>() {
        public void onSuccess(ResourceSet<Message> messages) {
            for (Message message : messages) {
                System.out.println(message.getSid() + " : " + message.getStatus());
            }
        }
        public void onFailure(Throwable t) {
            System.out.println("Failed to get message status: " + t.getMessage());
        }
    });

This uses the Guava ListenableFuture interface to process the response from Twilio on a different thread.
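For comparison, the same callback pattern can be sketched with the JDK’s own CompletableFuture; here the fetch method is a stub standing in for the real Twilio call:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncStatusSketch {

    // Stand-in for Message.reader().readAsync(); a real call would hit the Twilio API
    static CompletableFuture<List<String>> fetchStatusesAsync() {
        return CompletableFuture.supplyAsync(
          () -> Arrays.asList("SM1 : DELIVERED", "SM2 : QUEUED"));
    }

    public static void main(String[] args) {
        fetchStatusesAsync()
          .thenAccept(statuses -> statuses.forEach(System.out::println))
          .exceptionally(t -> {
              System.out.println("Failed to get message status: " + t.getMessage());
              return null;
          })
          .join(); // block only so the demo prints before the JVM exits
    }
}
```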

6. Conclusion

In this article, we learned how to send SMS and MMS using Twilio and Java.

While TwiML is the basis of all messages to and from Twilio servers, the Twilio Java client makes sending messages incredibly easy.

And, as always, the full codebase for this example can be found in our GitHub repository.


Uploading Files with Servlets and JSP


1. Introduction

In this quick tutorial, we’ll see how to upload a file from a servlet.

To achieve this, we’ll first see the vanilla Java EE solution with file upload capabilities provided by the native @MultipartConfig annotation.

Then, we’ll go over the Apache Commons FileUpload library, for earlier versions of the Servlet API.

2. Using Java EE @MultipartConfig

Starting with version 6, Java EE has the ability to support multi-part uploads out of the box.

As such, it’s probably the default go-to when enriching a Java EE app with file upload support.

First, let’s add a form to our HTML file:

<form method="post" action="multiPartServlet" enctype="multipart/form-data">
    Choose a file: <input type="file" name="multiPartServlet" />
    <input type="submit" value="Upload" />
</form>

The form should be defined using the enctype="multipart/form-data" attribute to signal a multipart upload.

Next, we’ll want to annotate our HttpServlet with the correct information using the @MultipartConfig annotation:

@MultipartConfig(fileSizeThreshold = 1024 * 1024,
  maxFileSize = 1024 * 1024 * 5, 
  maxRequestSize = 1024 * 1024 * 5 * 5)
public class MultipartServlet extends HttpServlet {
    //...
}

Then, let’s make sure that our default server upload folder is set:

String uploadPath = getServletContext().getRealPath("") + File.separator + UPLOAD_DIRECTORY;
File uploadDir = new File(uploadPath);
if (!uploadDir.exists()) uploadDir.mkdir();
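One caveat worth noting: File.mkdir() creates only the last directory in the path and fails if intermediate directories are missing, while mkdirs() also creates any missing parents. A small standalone sketch using the system temp directory (the directory names are illustrative):

```java
import java.io.File;

public class UploadDirDemo {

    // Ensures the upload directory exists, creating missing parents along the way
    public static File ensureUploadDir(String basePath, String uploadDirectory) {
        File uploadDir = new File(basePath + File.separator + uploadDirectory);
        // mkdirs() also creates missing parent directories, unlike mkdir()
        if (!uploadDir.exists()) {
            uploadDir.mkdirs();
        }
        return uploadDir;
    }

    public static void main(String[] args) {
        File dir = ensureUploadDir(System.getProperty("java.io.tmpdir"),
          "upload-demo" + File.separator + "nested");
        System.out.println(dir.exists()); // true: the nested parent was created too
    }
}
```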

Finally, we can easily retrieve our inbound File from the request using the getParts() method, and save it to the disk:

for (Part part : request.getParts()) {
    fileName = getFileName(part);
    part.write(uploadPath + File.separator + fileName);
}

Note that, in this example, we’re using a helper method getFileName():

private String getFileName(Part part) {
    for (String content : part.getHeader("content-disposition").split(";")) {
        if (content.trim().startsWith("filename")) {
            return content.substring(content.indexOf("=") + 2, content.length() - 1);
        }
    }
    return Constants.DEFAULT_FILENAME;
}
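To see what this helper actually parses, here’s the same logic as a standalone class run against a typical Content-Disposition header value (with the default-filename constant inlined as an assumption):

```java
public class FileNameParser {

    private static final String DEFAULT_FILENAME = "unknown"; // assumed default

    // Same parsing logic as getFileName(), but taking the raw header value
    public static String parseFileName(String contentDisposition) {
        for (String content : contentDisposition.split(";")) {
            if (content.trim().startsWith("filename")) {
                // skip past 'filename="' and drop the trailing quote
                return content.substring(content.indexOf("=") + 2, content.length() - 1);
            }
        }
        return DEFAULT_FILENAME;
    }

    public static void main(String[] args) {
        String header = "form-data; name=\"multiPartServlet\"; filename=\"photo.png\"";
        System.out.println(parseFileName(header)); // photo.png
    }
}
```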

For Servlet 3.1 projects, we could alternatively use the Part.getSubmittedFileName() method:

fileName = part.getSubmittedFileName();

3. Using Apache Commons FileUpload

If we’re not on at least a Servlet 3.0 project, we can use the Apache Commons FileUpload library directly.

3.1. Setup

We’ll want to use the following pom.xml dependencies to get our example running:

<dependency> 
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.3.3</version>
</dependency>
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
</dependency>

The most recent versions can be found with a quick search on Maven’s Central Repository: commons-fileupload and commons-io.

3.2. Upload Servlet

Incorporating Apache’s FileUpload library involves three main parts:

  • An upload form in a .jsp page.
  • Configuring your DiskFileItemFactory and ServletFileUpload object.
  • Processing the actual contents of a multipart file upload.

The upload form is the same as the one in the previous section.

Let’s move on to creating our Java EE servlet.

In our request processing method, we’ll first check whether the incoming HttpRequest is a multi-part upload.

We’ll also specify what resources to allocate to the file upload temporarily (while being processed) on our DiskFileItemFactory.

Lastly, we’ll create a ServletFileUpload object, which handles the actual upload. It exposes the contents of the multi-part upload for final persistence on the server side:

if (ServletFileUpload.isMultipartContent(request)) {

    DiskFileItemFactory factory = new DiskFileItemFactory();
    factory.setSizeThreshold(MEMORY_THRESHOLD);
    factory.setRepository(new File(System.getProperty("java.io.tmpdir")));

    ServletFileUpload upload = new ServletFileUpload(factory);
    upload.setFileSizeMax(MAX_FILE_SIZE);
    upload.setSizeMax(MAX_REQUEST_SIZE);
    String uploadPath = getServletContext().getRealPath("") 
      + File.separator + UPLOAD_DIRECTORY;
    File uploadDir = new File(uploadPath);
    if (!uploadDir.exists()) {
        uploadDir.mkdir();
    }
    //...
}

Then, we can extract those contents and write them to disk:

if (ServletFileUpload.isMultipartContent(request)) {
    //...
    List<FileItem> formItems = upload.parseRequest(request);
    if (formItems != null && formItems.size() > 0) {
        for (FileItem item : formItems) {
            if (!item.isFormField()) {
                String fileName = new File(item.getName()).getName();
                String filePath = uploadPath + File.separator + fileName;
                File storeFile = new File(filePath);
                item.write(storeFile);
                request.setAttribute("message", "File "
                  + fileName + " has been uploaded successfully!");
            }
        }
    }
}

4. Running the Example

After we’ve compiled our project into a .war, we can drop it into our local Tomcat instance and start it up.

From there, we can bring up the main upload view, where we’re presented with the form.

After successfully uploading our file, we should see the success message.

Lastly, we can check the upload location specified in our servlet to confirm the file arrived.

5. Conclusion

That’s it! We’ve learned how to provide multi-part file uploads using Java EE, as well as Apache’s Commons FileUpload library!

Code snippets, as always, can be found over on GitHub.

Access Modifiers in Java


1. Overview

In this tutorial, we’re going over access modifiers in Java, which are used for setting the access level to classes, variables, methods, and constructors.

Simply put, there are four access modifiers: public, private, protected and default (no keyword).

Before we begin let’s note that a top-level class can use public or default access modifiers only. At the member level, we can use all four.

2. Default

When we don’t use any keyword explicitly, Java will set a default access to a given class, method or property. The default access modifier is also called package-private, which means that all members are visible within the same package but aren’t accessible from other packages:

package com.baeldung.accessmodifiers;

public class SuperPublic {
    static void defaultMethod() {
        ...
    }
}

defaultMethod() is accessible in another class of the same package:

package com.baeldung.accessmodifiers;

public class Public {
    public Public() {
        SuperPublic.defaultMethod(); // Available in the same package.
    }
}

However, it’s not available in other packages.

3. Public

If we add the public keyword to a class, method or property then we’re making it available to the whole world, i.e. all other classes in all packages will be able to use it. This is the least restrictive access modifier:

package com.baeldung.accessmodifiers;

public class SuperPublic {
    public static void publicMethod() {
        ...
    }
}

publicMethod() is available in another package:

package com.baeldung.accessmodifiers.another;

import com.baeldung.accessmodifiers.SuperPublic;

public class AnotherPublic {
    public AnotherPublic() {
        SuperPublic.publicMethod(); // Available everywhere. Let's note different package.
    }
}

4. Private

Any method, property or constructor with the private keyword is accessible from the same class only. This is the most restrictive access modifier and is core to the concept of encapsulation. All data will be hidden from the outside world:

package com.baeldung.accessmodifiers;

public class SuperPublic {
    static private void privateMethod() {
        ...
    }
    
     private void anotherPrivateMethod() {
         privateMethod(); // available in the same class only.
    }
}

5. Protected

Between public and private access levels, there’s the protected access modifier.

If we declare a method, property or constructor with the protected keyword, we can access the member from the same package (as with package-private access level) and in addition from all subclasses of its class, even if they lie in other packages:

package com.baeldung.accessmodifiers;

public class SuperPublic {
    static protected void protectedMethod() {
        ...
    }
}

protectedMethod() is available in subclasses (regardless of the package):

package com.baeldung.accessmodifiers.another;

import com.baeldung.accessmodifiers.SuperPublic;

public class AnotherSubClass extends SuperPublic {
    public AnotherSubClass() {
        SuperPublic.protectedMethod(); // Available in subclass. Let's note different package.
    }
}

6. Comparison

The table below summarises the available access modifiers. We can see that a class, regardless of the access modifiers used, always has access to its members:

Modifier    Class   Package   Subclass   World
public      Y       Y         Y          Y
protected   Y       Y         Y          N
default     Y       Y         N          N
private     Y       N         N          N

7. Conclusion

In this short article, we went over access modifiers in Java.

It’s good practice to use the most restrictive access level possible for any given member to prevent misuse. We should always use the private access modifier unless there is a good reason not to.

Public access level should only be used if a member is part of an API.

As always, the code examples are available over on GitHub.

Singleton Session Bean in Java EE


1. Overview

Whenever a single instance of a Session Bean is required for a given use-case, we can use a Singleton Session Bean.

In this tutorial, we’re going to explore this through an example with a Java EE application.

2. Maven

First of all, we need to define required Maven dependencies in the pom.xml.

Let’s define the dependencies for the EJB APIs and Embedded EJB container for deployment of the EJB:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.openejb</groupId>
    <artifactId>tomee-embedded</artifactId>
    <version>1.7.5</version>
</dependency>

Latest versions can be found on Maven Central at JavaEE API and tomEE.

3. Types of Session Beans

There are three types of Session Beans. Before we explore Singleton Session Beans, let’s look at the differences between the lifecycles of the three types.

3.1. Stateful Session Beans

A Stateful Session Bean maintains conversational state with the client it’s communicating with.

Each client gets a new instance of the Stateful Bean, which isn’t shared with other clients.

When the communication between the client and bean ends, the Session Bean also terminates.

3.2. Stateless Session Beans

A Stateless Session Bean doesn’t maintain any conversational state with the client. The bean holds client-specific state only for the duration of a method invocation.

Consecutive method invocations are independent unlike with the Stateful Session Bean.

The container maintains a pool of Stateless Beans and these instances can be shared between multiple clients.

3.3. Singleton Session Beans

A Singleton Session Bean maintains the state of the bean for the complete lifecycle of the application.

Singleton Session Beans are similar to Stateless Session Beans, but only one instance of the Singleton Session Bean is created in the whole application, and it isn’t terminated until the application shuts down.

The single instance of the bean is shared between multiple clients and can be concurrently accessed.
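For comparison, here’s how the classic plain-Java singleton guarantees a single shared instance without a container; the class name is purely illustrative:

```java
// For comparison: a plain-Java singleton, where we (not a container)
// guarantee the single instance; the name here is illustrative
public class CountryStateCache {

    // Initialization-on-demand holder idiom: lazy and thread-safe,
    // because the JVM initializes Holder only on first access
    private static class Holder {
        static final CountryStateCache INSTANCE = new CountryStateCache();
    }

    private CountryStateCache() {
        // private constructor prevents outside instantiation
    }

    public static CountryStateCache getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // every caller sees the same instance
        System.out.println(getInstance() == getInstance()); // true
    }
}
```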

4. Creating a Singleton Session Bean

Let’s start by creating an interface for it.

For this example, let’s use the javax.ejb.Local annotation to define the interface:

@Local
public interface CountryState {
   List<String> getStates(String country);
   void setStates(String country, List<String> states);
}

Using @Local means the bean is accessed within the same application. We also have the option to use javax.ejb.Remote annotation which allows us to call the EJB remotely.

Now, we’ll define the implementation EJB bean class. We mark the class as a Singleton Session Bean by using the annotation javax.ejb.Singleton.

In addition, let’s also mark the bean with the javax.ejb.Startup annotation to tell the EJB container to initialize the bean at startup:

@Singleton
@Startup
public class CountryStateContainerManagedBean implements CountryState {
    ...
}

This is called eager initialization. If we don’t use @Startup, the EJB container determines when to initialize the bean.

We can also define multiple Session Beans to initialize data and have the beans load in a specific order. Therefore, we’ll use the javax.ejb.DependsOn annotation to define our bean’s dependency on other Session Beans.

The value of the @DependsOn annotation is an array of the names of the beans that our bean depends on:

@Singleton 
@Startup 
@DependsOn({"DependentBean1", "DependentBean2"}) 
public class CountryStateCacheBean implements CountryState { 
    ...
}

We’ll define an initialize() method that initializes the bean, and make it a lifecycle callback method using the javax.annotation.PostConstruct annotation.

With this annotation, it’ll be called by the container upon instantiation of the bean:

@PostConstruct
public void initialize() {

    List<String> states = new ArrayList<String>();
    states.add("Texas");
    states.add("Alabama");
    states.add("Alaska");
    states.add("Arizona");
    states.add("Arkansas");

    countryStatesMap.put("UnitedStates", states);
}

5. Concurrency

Next, we’ll design the concurrency management of Singleton Session Bean. EJB provides two methods for implementing concurrent access to the Singleton Session Bean: Container-managed concurrency, and Bean-managed concurrency.

The annotation javax.ejb.ConcurrencyManagement defines the concurrency policy for a method. By default, the EJB container uses container-managed concurrency.

The @ConcurrencyManagement annotation takes a javax.ejb.ConcurrencyManagementType value. The options are:

  • ConcurrencyManagementType.CONTAINER for container-managed concurrency.
  • ConcurrencyManagementType.BEAN for bean-managed concurrency.

5.1. Container-Managed Concurrency

Simply put, with container-managed concurrency, the container controls clients’ access to the bean’s methods.

Let’s use the @ConcurrencyManagement annotation with value javax.ejb.ConcurrencyManagementType.CONTAINER:

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class CountryStateContainerManagedBean implements CountryState {
    ...
}

To specify the access level to each of the singleton’s business methods, we’ll use javax.ejb.Lock annotation. javax.ejb.LockType contains the values for the @Lock annotation. javax.ejb.LockType defines two values:

  • LockType.WRITE – this value provides an exclusive lock to the calling client and prevents all other clients from accessing any of the bean’s methods. Use it for methods that change the state of the singleton bean.
  • LockType.READ – this value provides concurrent locks to multiple clients to access a method. Use it for methods that only read data from the bean.

With this in mind, we’ll define the setStates() method with @Lock(LockType.WRITE) annotation, to prevent simultaneous update of the state by clients.

To allow clients to read the data concurrently, we’ll annotate getStates() with @Lock(LockType.READ):

@Singleton 
@Startup 
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER) 
public class CountryStateContainerManagedBean implements CountryState { 

    private final Map<String, List<String>> countryStatesMap = new HashMap<>();

    @Lock(LockType.READ) 
    public List<String> getStates(String country) { 
        return countryStatesMap.get(country);
    }

    @Lock(LockType.WRITE)
    public void setStates(String country, List<String> states) {
        countryStatesMap.put(country, states);
    }
}
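The container’s READ/WRITE lock semantics are analogous to Java’s ReentrantReadWriteLock; here’s a hypothetical plain-Java sketch of the same bean without a container:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical plain-Java analogue of the container-managed READ/WRITE locks above
public class CountryStatePlainJava {

    private final Map<String, List<String>> countryStatesMap = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Like @Lock(LockType.READ): many readers may hold the lock concurrently
    public List<String> getStates(String country) {
        lock.readLock().lock();
        try {
            return countryStatesMap.get(country);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Like @Lock(LockType.WRITE): a writer excludes all readers and writers
    public void setStates(String country, List<String> states) {
        lock.writeLock().lock();
        try {
            countryStatesMap.put(country, states);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        CountryStatePlainJava bean = new CountryStatePlainJava();
        bean.setStates("UnitedStates", Arrays.asList("Texas", "Alabama"));
        System.out.println(bean.getStates("UnitedStates")); // [Texas, Alabama]
    }
}
```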

To stop methods from executing for too long and blocking other clients indefinitely, we’ll use the javax.ejb.AccessTimeout annotation to time out long-waiting calls.

We use the @AccessTimeout annotation to define the number of milliseconds after which a method times out. Once the timeout elapses, the container throws a javax.ejb.ConcurrentAccessTimeoutException and the method execution terminates.

5.2. Bean-Managed Concurrency

With bean-managed concurrency, the container doesn’t control simultaneous access to the Singleton Session Bean by clients. The developer is required to implement concurrency themselves.

Unless the developer implements concurrency, all methods are accessible to all clients simultaneously. Java provides the synchronized and volatile primitives for implementing concurrency.

To find out more about concurrency, have a look at our guides to java.util.concurrent and Atomic Variables.

For bean-managed concurrency, let’s define the @ConcurrencyManagement annotation with the javax.ejb.ConcurrencyManagementType.BEAN value for the Singleton Session Bean class:

@Singleton 
@Startup 
@ConcurrencyManagement(ConcurrencyManagementType.BEAN) 
public class CountryStateBeanManagedBean implements CountryState { 
   ... 
}

Next, we’ll write the setStates() method which changes the state of the bean using synchronized keyword:

public synchronized void setStates(String country, List<String> states) {
    countryStatesMap.put(country, states);
}

The synchronized keyword makes the method accessible by only one thread at a time.

The getStates() method doesn’t change the state of the Bean and so it doesn’t need to use the synchronized keyword.
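To see what synchronized buys us, here’s a minimal standalone demo (unrelated to EJB) where multiple threads update shared state through a synchronized method:

```java
public class SynchronizedDemo {

    private int counter = 0;

    // Only one thread at a time may execute this method on a given instance,
    // so the read-modify-write of counter++ can't interleave
    public synchronized void increment() {
        counter++;
    }

    public int getCounter() {
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedDemo demo = new SynchronizedDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                demo.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(demo.getCounter()); // always 20000 thanks to synchronized
    }
}
```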

6. Client

Now we can write the client to access our Singleton Session Bean.

We can deploy the Session Bean on application servers like JBoss, GlassFish, etc. To keep things simple, we’ll use the javax.ejb.embedded.EJBContainer class. EJBContainer runs in the same JVM as the client and provides most of the services of an enterprise bean container.

First, we’ll create an instance of EJBContainer. This container instance will search and initialize all the EJB modules present in the classpath:

public class CountryStateCacheBeanTest {

    private EJBContainer ejbContainer = null;

    private Context context = null;

    @Before
    public void init() {
        ejbContainer = EJBContainer.createEJBContainer();
        context = ejbContainer.getContext();
    }
}

Next, we’ll get the javax.naming.Context object from the initialized container object. Using the Context instance, we can get the reference to CountryStateContainerManagedBean and call the methods:

@Test
public void whenCallGetStatesFromContainerManagedBean_ReturnsStatesForCountry() throws Exception {

    String[] expectedStates = {"Texas", "Alabama", "Alaska", "Arizona", "Arkansas"};

    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateContainerManagedBean");
    List<String> actualStates = countryStateBean.getStates("UnitedStates");

    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

@Test
public void whenCallSetStatesFromContainerManagedBean_SetsStatesForCountry() throws Exception {

    String[] expectedStates = { "California", "Florida", "Hawaii", "Pennsylvania", "Michigan" };
 
    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateContainerManagedBean");
    countryStateBean.setStates(
      "UnitedStates", Arrays.asList(expectedStates));
 
    List<String> actualStates = countryStateBean.getStates("UnitedStates");
    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

Similarly, we can use the Context instance to get the reference for Bean-Managed Singleton Bean and call the respective methods:

@Test
public void whenCallGetStatesFromBeanManagedBean_ReturnsStatesForCountry() throws Exception {

    String[] expectedStates = { "Texas", "Alabama", "Alaska", "Arizona", "Arkansas" };

    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateBeanManagedBean");
    List<String> actualStates = countryStateBean.getStates("UnitedStates");

    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

@Test
public void whenCallSetStatesFromBeanManagedBean_SetsStatesForCountry() throws Exception {

    String[] expectedStates = { "California", "Florida", "Hawaii", "Pennsylvania", "Michigan" };

    CountryState countryStateBean = (CountryState) context
      .lookup("java:global/singleton-ejb-bean/CountryStateBeanManagedBean");
    countryStateBean.setStates("UnitedStates", Arrays.asList(expectedStates));

    List<String> actualStates = countryStateBean.getStates("UnitedStates");
    assertNotNull(actualStates);
    assertArrayEquals(expectedStates, actualStates.toArray());
}

Finally, we’ll end our tests by closing the EJBContainer in the close() method:

@After
public void close() {
    if (ejbContainer != null) {
        ejbContainer.close();
    }
}

7. Conclusion

Singleton Session Beans are just as flexible and powerful as any standard Session Bean but allow us to apply a Singleton pattern to share state across our application’s clients.

Concurrency management of the Singleton Bean could be easily implemented using Container-Managed Concurrency where the container takes care of concurrent access by multiple clients, or you could also implement your own custom concurrency management using Bean-Managed Concurrency.

The source code of this tutorial can be found over on GitHub.

Programmatic Configuration with Log4j 2


1. Introduction

In this tutorial, we’ll take a look at different ways to programmatically configure Apache Log4j 2.

2. Initial Setup

To start using Log4j 2, we merely need to include the log4j-core and log4j-slf4j-impl dependencies in our pom.xml:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.0</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.11.0</version>
</dependency>

3. ConfigurationBuilder

Once we have Maven configured, then we need to create a ConfigurationBuilder, which is the class that lets us configure appenders, filters, layouts, and loggers.

Log4j 2 provides several ways to get a ConfigurationBuilder.

Let’s start with the most direct way:

ConfigurationBuilder<BuiltConfiguration> builder
 = ConfigurationBuilderFactory.newConfigurationBuilder();

And to begin configuring components, ConfigurationBuilder is equipped with a corresponding new method, like newAppender or newLayout, for each component.

Some components have different subtypes, like FileAppender or ConsoleAppender, and these are referred to in the API as plugins.

3.1. Configuring Appenders

Let’s tell the builder where to send each log line by configuring an appender:

AppenderComponentBuilder console 
  = builder.newAppender("stdout", "Console"); 

builder.add(console);

AppenderComponentBuilder file 
  = builder.newAppender("log", "File"); 
file.addAttribute("fileName", "target/logging.log");

builder.add(file);

While most new methods don’t support this, newAppender(name, plugin) allows us to give the appender a name, which will turn out to be important later on. We’ve called these appenders stdout and log, though we could’ve named them anything.

We’ve also told the builder which appender plugin (or, more simply, which kind of appender) to use. Console and File refer to Log4j 2’s appenders for writing to standard out and the file system, respectively.

Though Log4j 2 supports several appenders, configuring them using Java can be a bit tricky since AppenderComponentBuilder is a generic class for all appender types.

This makes it have methods like addAttribute and addComponent instead of setFileName and addTriggeringPolicy:

AppenderComponentBuilder rollingFile 
  = builder.newAppender("rolling", "RollingFile");
rollingFile.addAttribute("fileName", "rolling.log");
rollingFile.addAttribute("filePattern", "rolling-%d{MM-dd-yy}.log.gz");
rollingFile.addComponent(triggeringPolicies);

builder.add(rollingFile);

And, finally, don’t forget to call builder.add to append it to the main configuration!

3.2. Configuring Filters

We can add filters to each of our appenders, which decide for each log line whether it should be appended or not.

Let’s use the MarkerFilter plugin on our console appender:

FilterComponentBuilder flow = builder.newFilter(
  "MarkerFilter", 
  Filter.Result.ACCEPT,
  Filter.Result.DENY);  
flow.addAttribute("marker", "FLOW");

console.add(flow);

Note that this new method doesn’t allow us to name the filter, but it does ask us to indicate what to do if the filter passes or fails.

In this case, we’ve kept it simple, stating that if the MarkerFilter passes, then ACCEPT the logline. Otherwise, DENY it.

Note in this case that we don’t append this to the builder but instead to the appenders that we want to use this filter.

3.3. Configuring Layouts

Next, let’s define the layout for each log line. In this case, we’ll use the PatternLayout plugin:

LayoutComponentBuilder standard 
  = builder.newLayout("PatternLayout");
standard.addAttribute("pattern", "%d [%t] %-5level: %msg%n%throwable");

console.add(standard);
file.add(standard);
rollingFile.add(standard);

Again, we’ve added these directly to the appropriate appenders instead of to the builder directly.

3.4. Configuring the Root Logger

Now that we know where logs will be shipped to, we want to configure which logs will go to each destination.

The root logger is the highest logger, kind of like Object in Java. This logger is what will be used by default unless overridden.

So, let’s use a root logger to set the default logging level to ERROR and the default appender to our stdout appender from above:

RootLoggerComponentBuilder rootLogger 
  = builder.newRootLogger(Level.ERROR);
rootLogger.add(builder.newAppenderRef("stdout"));

builder.add(rootLogger);

To point our logger at a specific appender, we don’t give it an instance of the builder. Instead, we refer to it by the name that we gave it earlier.

3.5. Configuring Additional Loggers

Child loggers can be used to target specific packages or logger names.

Let’s add a logger for the com package in our application, setting the logging level to DEBUG and having those go to our log appender:

LoggerComponentBuilder logger = builder.newLogger("com", Level.DEBUG);
logger.add(builder.newAppenderRef("log"));
logger.addAttribute("additivity", false);

builder.add(logger);

Note that we can set additivity with our loggers, which indicates whether this logger should inherit properties like logging level and appender types from its ancestors.

3.6. Configuring Other Components

Not all components have a dedicated new method on ConfigurationBuilder.

So, in that case, we call newComponent.

For example, because there isn’t a TriggeringPolicyComponentBuilder, we need to use newComponent for something like specifying our triggering policy for rolling file appenders:

ComponentBuilder triggeringPolicies = builder.newComponent("Policies")
  .addComponent(builder.newComponent("CronTriggeringPolicy")
    .addAttribute("schedule", "0 0 0 * * ?"))
  .addComponent(builder.newComponent("SizeBasedTriggeringPolicy")
    .addAttribute("size", "100M"));
 
rollingFile.addComponent(triggeringPolicies);

3.7. The XML Equivalent

ConfigurationBuilder comes equipped with a handy method to print out the equivalent XML:

builder.writeXMLConfiguration(System.out);

Running the above line prints out:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
   <Appenders>
      <Console name="stdout">
         <PatternLayout pattern="%d [%t] %-5level: %msg%n%throwable" />
         <MarkerFilter onMatch="ACCEPT" onMisMatch="DENY" marker="FLOW" />
      </Console>
      <RollingFile name="rolling" 
        fileName="target/rolling.log" 
        filePattern="target/archive/rolling-%d{MM-dd-yy}.log.gz">
         <PatternLayout pattern="%d [%t] %-5level: %msg%n%throwable" />
         <Policies>
            <CronTriggeringPolicy schedule="0 0 0 * * ?" />
            <SizeBasedTriggeringPolicy size="100M" />
         </Policies>
      </RollingFile>
      <File name="FileSystem" fileName="target/logging.log">
         <PatternLayout pattern="%d [%t] %-5level: %msg%n%throwable" />
      </File>
   </Appenders>
   <Loggers>
      <Logger name="com" level="DEBUG" additivity="false">
         <AppenderRef ref="log" />
      </Logger>
      <Root level="ERROR" additivity="true">
         <AppenderRef ref="stdout" />
      </Root>
   </Loggers>
</Configuration>

This comes in handy when we want to double-check our configuration or if we want to persist our configuration, say, to the file system.

3.8. Putting it all Together

Now that we are fully configured, let’s tell Log4j 2 to use our configuration:

Configurator.initialize(builder.build());

After this is invoked, future calls to Log4j 2 will use our configuration.

Note that this means that we need to invoke Configurator.initialize before we make any calls to LogManager.getLogger.

4. ConfigurationFactory

Now that we’ve seen one way to get and apply a ConfigurationBuilder, let’s take a look at one more:

public class CustomConfigFactory
  extends ConfigurationFactory {
 
    public Configuration createConfiguration(
      LoggerContext context, 
      ConfigurationSource src) {
 
        ConfigurationBuilder<BuiltConfiguration> builder = super
          .newConfigurationBuilder();

        // ... configure appenders, filters, etc.

        return builder.build();
    }

    public String[] getSupportedTypes() { 
        return new String[] { "*" };
    }
}

In this case, instead of using ConfigurationBuilderFactory, we subclassed ConfigurationFactory, an abstract class targeted at creating instances of Configuration.

Then, instead of calling Configurator.initialize like we did the first time, we simply need to let Log4j 2 know about our new configuration factory.

There are three ways to do this:

  • Static initialization
  • A runtime property, or
  • The @Plugin annotation

4.1. Use Static Initialization

Log4j 2 supports calling setConfigurationFactory during static initialization:

static {
    ConfigurationFactory custom = new CustomConfigFactory();
    ConfigurationFactory.setConfigurationFactory(custom);
}

This approach has the same limitation as the previous one: we’ll need to invoke it before any calls to LogManager.getLogger.

4.2. Use a Runtime Property

If we have access to the Java startup command, then Log4j 2 also supports specifying the ConfigurationFactory to use via a -D parameter:

-Dlog4j2.configurationFactory=com.baeldung.log4j2.CustomConfigFactory

The main benefit of this approach is that we don’t have to worry about initialization order as we do with the first two approaches.
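For example, a complete startup command might look like this (the jar and main class names here are hypothetical):

```shell
# Hypothetical application jar and entry point
java -Dlog4j2.configurationFactory=com.baeldung.log4j2.CustomConfigFactory \
  -cp app.jar com.baeldung.log4j2.MainApp
```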

4.3. Use the @Plugin Annotation

And finally, in circumstances where we don’t want to fiddle with the Java startup command by adding a -D, we can simply annotate our CustomConfigFactory with the Log4j 2 @Plugin annotation:

@Plugin(
  name = "CustomConfigurationFactory", 
  category = ConfigurationFactory.CATEGORY)
@Order(50)
public class CustomConfigFactory
  extends ConfigurationFactory {

  // ... rest of implementation
}

Log4j 2 will scan the classpath for classes having the @Plugin annotation, and, finding this class in the ConfigurationFactory category, will use it.

4.4. Combining with Static Configuration

Another benefit to using a ConfigurationFactory extension is that we can easily combine our custom configuration with other configuration sources like XML:

public Configuration createConfiguration(
  LoggerContext context, 
  ConfigurationSource src) {
    return new WithXmlConfiguration(context, src);
}

The src parameter represents the static XML or JSON configuration file that Log4j 2 finds, if any.

We can take that configuration file and send it to our custom implementation of XmlConfiguration where we can place whatever overriding configuration we need:

public class WithXmlConfiguration extends XmlConfiguration {
 
    @Override
    protected void doConfigure() {
        super.doConfigure(); // parse xml document

        // ... add our custom configuration
    }
}

5. Conclusion

In this article, we looked at how to use the new ConfigurationBuilder API available in Log4j 2.

We also took a look at customizing ConfigurationFactory in combination with ConfigurationBuilder for more advanced use cases.

Don’t forget to check out my complete examples over on GitHub.

Using Lombok’s @Builder Annotation


1. Overview

Project Lombok’s @Builder is a useful mechanism for using the Builder pattern without writing boilerplate code. We can apply this annotation to a Class or a method.

In this brief tutorial, we’ll look at the different use cases for @Builder.

2. Maven Dependencies

First, we need to add Project Lombok to our pom.xml:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.16.20</version>
</dependency>

Maven Central has the latest version of Project Lombok here.

3. Using @Builder on a Class

In the first use case, we’re simply implementing a Class, and we want to use a builder to create instances of our class.

The first and only step is to add the annotation to the class declaration:

@Getter
@Builder
public class Widget {
    private final String name;
    private final int id;
}

Lombok does all of the work for us. We can now build a Widget and test it:

Widget testWidget = Widget.builder()
  .name("foo")
  .id(1)
  .build();

assertThat(testWidget.getName())
  .isEqualTo("foo");
assertThat(testWidget.getId())
  .isEqualTo(1);

If we want to create copies or near-copies of objects, we can add the property toBuilder = true to the @Builder annotation:

@Builder(toBuilder = true)
public class Widget {
//...
}

This tells Lombok to add a toBuilder() method to our Class. When we invoke the toBuilder() method, it returns a builder initialized with the properties of the instance it is called on:

Widget testWidget = Widget.builder()
  .name("foo")
  .id(1)
  .build();

Widget.WidgetBuilder widgetBuilder = testWidget.toBuilder();

Widget newWidget = widgetBuilder.id(2).build();
assertThat(newWidget.getName())
  .isEqualTo("foo");
assertThat(newWidget.getId())
  .isEqualTo(2);

We can see in the test code that the builder class generated by Lombok is named like our class, with “Builder” appended to it — WidgetBuilder in this case. We can then modify the properties we wish and build() a new instance.
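To make the generated code concrete, here’s a hand-written sketch of roughly what Lombok produces for @Builder(toBuilder = true) — an illustration of the convention, not Lombok’s exact output:

```java
class Widget {
    private final String name;
    private final int id;

    private Widget(String name, int id) {
        this.name = name;
        this.id = id;
    }

    public String getName() { return name; }
    public int getId() { return id; }

    // Lombok generates a static factory method for the builder
    public static WidgetBuilder builder() { return new WidgetBuilder(); }

    // toBuilder() seeds a fresh builder with this instance's state
    public WidgetBuilder toBuilder() {
        return new WidgetBuilder().name(name).id(id);
    }

    static class WidgetBuilder {
        private String name;
        private int id;

        public WidgetBuilder name(String name) { this.name = name; return this; }
        public WidgetBuilder id(int id) { this.id = id; return this; }
        public Widget build() { return new Widget(name, id); }
    }
}
```

Calling toBuilder() on an existing Widget and then overriding a single property gives us a near-copy, just as in the test above.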

4. Using @Builder on a Method

Suppose we’re using an object that we want to construct with a builder, but we can’t modify the source or extend the Class.

First, let’s create a quick example using Lombok’s @Value annotation:

@Value
final class ImmutableClient {
    private int id;
    private String name;
}

Now we have a final Class with two immutable members, getters for them, and an all-arguments constructor.

We covered how to use @Builder on a Class, but we can use it on methods, too. We’ll use this ability to work around not being able to modify or extend ImmutableClient.

Next, we’ll create a new class with a method for creating ImmutableClients:

class ClientBuilder {

    @Builder(builderMethodName = "builder")
    public static ImmutableClient newClient(int id, String name) {
        return new ImmutableClient(id, name);
    }
}

This annotation creates a method named builder() that returns a Builder for creating ImmutableClients.

Now we can build an ImmutableClient:

ImmutableClient testImmutableClient = ClientBuilder.builder()
  .name("foo")
  .id(1)
  .build();
assertThat(testImmutableClient.getName())
  .isEqualTo("foo");
assertThat(testImmutableClient.getId())
  .isEqualTo(1);

5. Conclusion

In this article, we used Lombok’s @Builder annotation on a class, and on a method to create a builder for a final Class.

Code samples, as always, can be found over on GitHub.

Introduction to Java Microservices with MSF4J


1. Overview

In this tutorial, we’ll showcase microservices development using the MSF4J framework.

This is a lightweight tool which provides an easy way to build a wide variety of services focused on high performance.

2. Maven Dependencies

We’ll need a bit more Maven configuration than usual to build an MSF4J-based microservice. The simplicity and the power of this framework do come at a price: basically, we need to define a parent artifact, as well as the main class:

<parent>
    <groupId>org.wso2.msf4j</groupId>
    <artifactId>msf4j-service</artifactId>
    <version>2.6.0</version>
</parent>

<properties>
    <microservice.mainClass>
        com.baeldung.msf4j.Application
    </microservice.mainClass>
</properties>

The latest version of msf4j-service can be found on Maven Central.

Next, we’ll show three different microservices scenarios. First a minimalistic example, then a RESTful API, and finally a Spring integration sample.

3. Basic Project

3.1. Simple API

We’re going to publish a simple web resource.

We provide this service with a class that uses a few annotations, where each annotated method handles a request. Through these annotations, we set the HTTP method, the path, and the parameters required for each request.

The returned content type is just plain text:

@Path("/")
public class SimpleService {

    @GET
    public String index() {
        return "Default content";
    }

    @GET
    @Path("/say/{name}")
    public String say(@PathParam("name") String name) {
        return "Hello " + name;
    }
}

And remember that all classes and annotations used are just standard JAX-RS elements, which we already covered in this article.

3.2. Application

We can launch the microservice with this main class where we set, deploy and run the service defined earlier:

public class Application {
    public static void main(String[] args) {
        new MicroservicesRunner()
          .deploy(new SimpleService())
          .start();
    }
}

If we want, we can chain deploy calls here to run several services at once:

new MicroservicesRunner()
  .deploy(new SimpleService())
  .deploy(new ComplexService())
  .start();

3.3. Running the Microservice

To run the MSF4J microservice, we have a couple of options:

  1. On an IDE, running as a Java application
  2. Running the generated jar package

Once started, you can see the result at http://localhost:9090.

3.4. Startup Configurations

We can tweak the configuration in a lot of ways just by adding some clauses to the startup code.

For example, we can add any kind of interceptor for the requests:

new MicroservicesRunner()
  .addInterceptor(new MetricsInterceptor())
  .deploy(new SimpleService())
  .start();

Or, we can add a global interceptor, like one for authentication:

new MicroservicesRunner()
  .addGlobalRequestInterceptor(new UsernamePasswordSecurityInterceptor())
  .deploy(new SimpleService())
  .start();

Or, if we need session management, we can set a session manager:

new MicroservicesRunner()
  .deploy(new SimpleService())
  .setSessionManager(new PersistentSessionManager()) 
  .start();

For more details about each of these scenarios and to see some working samples, check out MSF4J’s official GitHub repo.

4. Building an API Microservice

We’ve shown the simplest example possible. Now we’ll move to a more realistic project.

This time, we show how to build an API with all the typical CRUD operations to manage a repository of meals.

4.1. The Model

The model is just a simple POJO representing a meal:

public class Meal {
    private String name;
    private Float price;

    // getters and setters
}

4.2. The API

We build the API as a web controller. Using standard annotations, we set each function with the following:

  • URL path
  • HTTP method: GET, POST, etc.
  • input (@Consumes) content type
  • output (@Produces) content type

So, let’s create a method for each standard CRUD operation:

@Path("/menu")
public class MenuService {

    private List<Meal> meals = new ArrayList<Meal>();

    @GET
    @Path("/")
    @Produces({ "application/json" })
    public Response index() {
        return Response.ok()
          .entity(meals)
          .build();
    }

    @GET
    @Path("/{id}")
    @Produces({ "application/json" })
    public Response meal(@PathParam("id") int id) {
        return Response.ok()
          .entity(meals.get(id))
          .build();
    }

    @POST
    @Path("/")
    @Consumes("application/json")
    @Produces({ "application/json" })
    public Response create(Meal meal) {
        meals.add(meal);
        return Response.ok()
          .entity(meal)
          .build();
    }

    // ... other CRUD operations
}

4.3. Data Conversion Features

MSF4J offers support for different data conversion libraries such as GSON (which comes by default) and Jackson (through the msf4j-feature dependency). For example, we can use GSON explicitly:

@GET
@Path("/{id}")
@Produces({ "application/json" })
public String meal(@PathParam("id") int id) {
    Gson gson = new Gson();
    return gson.toJson(meals.get(id));
}

In passing, note that we’ve used curly braces in both the @Consumes and @Produces annotations so we can set more than one MIME type.

4.4. Running the API Microservice

We run the microservice just as we did in the previous example, through an Application class that publishes the MenuService.

Once started, you can see the result at http://localhost:9090/menu.
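Once it’s up, we can exercise the endpoints, for example with curl (the payload fields match our Meal model; the id values are illustrative):

```shell
# List all meals
curl http://localhost:9090/menu

# Fetch the meal with id 0
curl http://localhost:9090/menu/0

# Create a new meal
curl -X POST -H "Content-Type: application/json" \
  -d '{"name":"Pizza","price":9.99}' \
  http://localhost:9090/menu
```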

5. MSF4J and Spring

We can also use Spring in our MSF4J-based microservices, which gives us its dependency injection features.

5.1. Maven Dependencies

We’ll have to add the appropriate dependencies to the previous Maven configuration to add Spring and Mustache support:

<dependencies>
    <dependency>
        <groupId>org.wso2.msf4j</groupId>
        <artifactId>msf4j-spring</artifactId>
        <version>2.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.msf4j</groupId>
        <artifactId>msf4j-mustache-template</artifactId>
        <version>2.6.1</version>
    </dependency>
</dependencies>

The latest version of msf4j-spring and msf4j-mustache-template can be found on Maven Central.

5.2. Meal API

This API is just a simple service, using a mock meal repository. Notice how we use Spring annotations for auto-wiring and to set this class as a Spring service component:

@Service
public class MealService {
 
    @Autowired
    private MealRepository mealRepository;

    public Meal find(int id) {
        return mealRepository.find(id);
    }

    public List<Meal> findAll() {
        return mealRepository.findAll();
    }

    public void create(Meal meal) {
        mealRepository.create(meal);
    }
}

5.3. Controller

We declare the controller as a component and Spring provides the service through auto-wiring. The first method shows how to serve a Mustache template and the second a JSON resource:

@Component
@Path("/meal")
public class MealResource {

    @Autowired
    private MealService mealService;

    @GET
    @Path("/")
    public Response all() {
        Map map = Collections.singletonMap("meals", mealService.findAll());
        String html = MustacheTemplateEngine.instance()
          .render("meals.mustache", map);
        return Response.ok()
          .type(MediaType.TEXT_HTML)
          .entity(html)
          .build();
    }

    @GET
    @Path("/{id}")
    @Produces({ "application/json" })
    public Response meal(@PathParam("id") int id) {
        return Response.ok()
          .entity(mealService.find(id))
          .build();
    }

}

5.4. Main Program

In the Spring scenario, this is how we get the microservice started:

public class Application {

    public static void main(String[] args) {
        MSF4JSpringApplication.run(Application.class, args);
    }
}

Once started, we can see the result at http://localhost:8080/meals. The default port differs in Spring projects, but we can set it to whatever port we want.

5.5. Configuration Beans

To enable specific settings, including interceptors and session management, we can add configuration beans.

For example, this one changes the default port for the microservice:

@Configuration
public class PortConfiguration {

    @Bean
    public HTTPTransportConfig http() {
        return new HTTPTransportConfig(9090);
    }

}

6. Conclusion

In this article, we’ve introduced the MSF4J framework, applying different scenarios to build Java-based microservices.

There is a lot of buzz around this concept, but some theoretical background has been already set, and MSF4J provides a convenient and standardized way to apply this pattern.

Also, for some further reading, take a look at building Microservices with Eclipse Microprofile, and of course our guide on Spring Microservices with Spring Boot and Spring Cloud.

And finally, all the examples here can be found in the GitHub repo.

NaN in Java


1. Overview

Simply put, NaN is a numeric data type value which stands for “not a number”.

In this quick tutorial, we’ll explain the NaN value in Java and the various operations that can produce or involve this value.

2. What is NaN?

NaN usually indicates the result of invalid operations. For example, attempting to divide zero by zero is one such operation.

We also use NaN for unrepresentable values. The square root of -1 is one such case, as we can describe the value (i) only in complex numbers.

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) defines the NaN value. In Java, the floating-point types float and double implement this standard.

Java defines NaN constants of both float and double types as Float.NaN and Double.NaN:

“A constant holding a Not-a-Number (NaN) value of type double. It is equivalent to the value returned by Double.longBitsToDouble(0x7ff8000000000000L).”

and:

“A constant holding a Not-a-Number (NaN) value of type float. It is equivalent to the value returned by Float.intBitsToFloat(0x7fc00000).”

We don’t have such constants for other numeric data types in Java.

3. Comparisons with NaN

While writing methods in Java, we should check that the input is valid and within the expected range. A NaN value is not a valid input in most cases. Therefore, we should verify that the input value is not NaN and handle such values appropriately.

NaN cannot be meaningfully compared with any floating-point value, not even with itself. This means that we’ll get false for all comparison operations involving NaN (except “!=”, for which we get true).

We get true for “x != x” if and only if x is NaN:

double NAN = Double.NaN;

System.out.println("NaN == 1 = " + (NAN == 1));
System.out.println("NaN > 1 = " + (NAN > 1));
System.out.println("NaN < 1 = " + (NAN < 1));
System.out.println("NaN != 1 = " + (NAN != 1));
System.out.println("NaN == NaN = " + (NAN == NAN));
System.out.println("NaN > NaN = " + (NAN > NAN));
System.out.println("NaN < NaN = " + (NAN < NAN));
System.out.println("NaN != NaN = " + (NAN != NAN));

Let’s have a look at the result of running the code above:

NaN == 1 = false
NaN > 1 = false
NaN < 1 = false
NaN != 1 = true
NaN == NaN = false
NaN > NaN = false
NaN < NaN = false
NaN != NaN = true

Hence, we cannot check for NaN by comparing with NaN using “==” or “!=”. In fact, we should rarely use the “==” or “!=” operators with float or double types at all.

Instead, we can use the expression “x != x”. This expression returns true only for NaN.

We can also use the methods Float.isNaN and Double.isNaN to check for these values. This is the preferred approach as it’s more readable and understandable:

double x = 1;
System.out.println(x + " is NaN = " + (x != x));
System.out.println(x + " is NaN = " + (Double.isNaN(x)));
        
x = Double.NaN;
System.out.println(x + " is NaN = " + (x != x));
System.out.println(x + " is NaN = " + (Double.isNaN(x)));

We’ll get the following result when running this code:

1.0 is NaN = false
1.0 is NaN = false
NaN is NaN = true
NaN is NaN = true

4. Operations Producing NaN

While doing operations involving float and double types, we need to be aware of the NaN values.

Some floating-point methods and operations produce NaN values instead of throwing an Exception. We may need to handle such results explicitly.

A common source of not-a-number values is mathematically undefined numerical operations:

double ZERO = 0;
System.out.println("ZERO / ZERO = " + (ZERO / ZERO));
System.out.println("INFINITY - INFINITY = " + 
  (Double.POSITIVE_INFINITY - Double.POSITIVE_INFINITY));
System.out.println("INFINITY * ZERO = " + (Double.POSITIVE_INFINITY * ZERO));

These examples result in the following output:

ZERO / ZERO = NaN
INFINITY - INFINITY = NaN
INFINITY * ZERO = NaN

Numerical operations which don’t have results in real numbers also produce NaN:

System.out.println("SQUARE ROOT OF -1 = " + Math.sqrt(-1));
System.out.println("LOG OF -1 = " +  Math.log(-1));

These statements will result in:

SQUARE ROOT OF -1 = NaN
LOG OF -1 = NaN

All numeric operations with NaN as an operand produce NaN as a result:

System.out.println("2 + NaN = " +  (2 + Double.NaN));
System.out.println("2 - NaN = " +  (2 - Double.NaN));
System.out.println("2 * NaN = " +  (2 * Double.NaN));
System.out.println("2 / NaN = " +  (2 / Double.NaN));

And the result of the above is:

2 + NaN = NaN
2 - NaN = NaN
2 * NaN = NaN
2 / NaN = NaN

Finally, we cannot assign null to double or float type variables. Instead, we may explicitly assign NaN to such variables to indicate missing or unknown values:

double maxValue = Double.NaN;
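For example, if NaN marks missing readings, we can skip those values when aggregating — a small sketch (the class and method names here are our own):

```java
class NaNAware {

    // Averages only the present values, treating NaN entries as missing
    static double averageIgnoringNaN(double[] values) {
        double sum = 0;
        int count = 0;
        for (double v : values) {
            if (!Double.isNaN(v)) { // the preferred NaN check
                sum += v;
                count++;
            }
        }
        // An empty input has no meaningful average, so we return NaN itself
        return count == 0 ? Double.NaN : sum / count;
    }
}
```

This keeps the sentinel from silently poisoning the result, since any arithmetic involving NaN would otherwise produce NaN.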

5. Conclusion

In this article, we discussed NaN and the various operations involving it. We also discussed the need to handle NaN while doing floating-point computations in Java explicitly.

The full source code can be found over on GitHub.


Java Weekly, Issue 230


Here we go…

1. Spring and Java

>> Java & Docker: Java 10 improvements strengthen the friendship! [aboullaite.me]

Java 10 is finally fully suitable for working with Docker 🙂

>> Metrics with Spring Boot 2.0 – Counters and gauges [blog.frankel.ch]

These two concepts are key to understanding the new metrics functionality in Spring Boot 2 (and in general).

>> TestContainers and Spring Boot [java-allandsundry.com]

TestContainers greatly simplify the process of starting Docker containers for the sake of testing – highly recommended.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Docker Hub vs Creating a Local Docker Registry [code-maze.com]

The title says all.

>> The Benefits of Side Projects [techblog.bozho.net]

It’s always a good idea to not rely on only a single source of income… or experience.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Second Opinion [dilbert.com]

>> Question and Answer [dilbert.com]

>> Idea Stealing [dilbert.com]

5. Pick of the Week

Just a quick reminder:

>> Concurrency != Parallelism [monades.roperzh.com]

Guide to Java 10


1. Introduction

JDK 10, which is an implementation of Java SE 10, was released on March 20, 2018.

In this article, we’ll cover and explore the new features and changes introduced in JDK 10.

2. Local Variable Type Inference

Follow the link for an in-depth article on this feature:

Java 10 Local Variable Type Inference

3. Unmodifiable Collections

There are a couple of changes related to unmodifiable collections in Java 10.

3.1. copyOf()

java.util.List, java.util.Map and java.util.Set each got a new static method copyOf(Collection).

It returns the unmodifiable copy of the given Collection:

@Test(expected = UnsupportedOperationException.class)
public void whenModifyCopyOfList_thenThrowsException() {
    List<Integer> copyList = List.copyOf(someIntList);
    copyList.add(4);
}

Any attempt to modify such a collection results in a java.lang.UnsupportedOperationException at runtime.
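Note also that, unlike the view returned by Collections.unmodifiableList, copyOf produces a true copy that is unaffected by later changes to the source collection:

```java
import java.util.ArrayList;
import java.util.List;

class CopyOfDemo {

    // Returns true if the copy stays unchanged after the source is modified
    static boolean copyIsDetached() {
        List<Integer> source = new ArrayList<>(List.of(1, 2, 3));
        List<Integer> copy = List.copyOf(source); // unmodifiable, detached copy
        source.add(4);
        return copy.size() == 3;
    }
}
```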

3.2. toUnmodifiable*()

java.util.stream.Collectors gets additional methods to collect a Stream into an unmodifiable List, Map or Set:

@Test(expected = UnsupportedOperationException.class)
public void whenModifyToUnmodifiableList_thenThrowsException() {
    List<Integer> evenList = someIntList.stream()
      .filter(i -> i % 2 == 0)
      .collect(Collectors.toUnmodifiableList());
    evenList.add(4);
}

Any attempt to modify such a collection results in a java.lang.UnsupportedOperationException at runtime.

4. Optional*.orElseThrow()

java.util.Optional, java.util.OptionalDouble, java.util.OptionalInt and java.util.OptionalLong each got a new method orElseThrow() which doesn’t take any argument and throws a NoSuchElementException if no value is present:

@Test
public void whenListContainsInteger_OrElseThrowReturnsInteger() {
    Integer firstEven = someIntList.stream()
      .filter(i -> i % 2 == 0)
      .findFirst()
      .orElseThrow();
    is(firstEven).equals(Integer.valueOf(2));
}

It’s synonymous with, and is now the preferred alternative to, the existing get() method.
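As a quick sketch, the no-argument orElseThrow() also works on the primitive OptionalInt returned by IntStream.findFirst() (the helper names below are our own):

```java
import java.util.NoSuchElementException;
import java.util.stream.IntStream;

class OrElseThrowDemo {

    // Returns the first even number, or throws NoSuchElementException
    static int firstEven(int... numbers) {
        return IntStream.of(numbers)
          .filter(i -> i % 2 == 0)
          .findFirst()
          .orElseThrow();
    }

    // Demonstrates that an absent value triggers the exception
    static boolean throwsWhenAbsent() {
        try {
            firstEven(1, 3, 5);
            return false;
        } catch (NoSuchElementException e) {
            return true;
        }
    }
}
```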

5. Performance Improvements

Follow the link for an in-depth article on this feature:

Java 10 Performance Improvements

6. Container Awareness

JVMs are now aware of being run in a Docker container and will extract container-specific configuration instead of querying the operating system itself – it applies to data like the number of CPUs and total memory that have been allocated to the container.

However, this support is only available for Linux-based platforms. This new support is enabled by default and can be disabled in the command line with the JVM option:

-XX:-UseContainerSupport

Also, this change adds a JVM option that provides the ability to specify the number of CPUs that the JVM will use:

-XX:ActiveProcessorCount=count

Also, three new JVM options have been added to allow Docker container users to gain more fine-grained control over the amount of system memory that will be used for the Java Heap:

-XX:InitialRAMPercentage
-XX:MaxRAMPercentage
-XX:MinRAMPercentage
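For example, when running inside a container, we might combine a memory limit with these flags (the image name and values here are hypothetical):

```shell
# Give the container 512 MB and let the JVM use at most 75% of it for the heap
docker run -m 512m my-java-app \
  java -XX:MaxRAMPercentage=75.0 -jar app.jar
```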

7. Root Certificates

The cacerts keystore, which has been empty so far, is intended to contain a set of root certificates that can be used to establish trust in the certificate chains used by various security protocols.

As a result, critical security components such as TLS didn’t work by default under OpenJDK builds.

With Java 10, Oracle has open-sourced the root certificates in Oracle’s Java SE Root CA program in order to make OpenJDK builds more attractive to developers and to reduce the differences between those builds and Oracle JDK builds.

8. Deprecations and Removals

8.1. Command Line Options and Tools

The javah tool, which generated the C headers and source files required to implement native methods, has been removed in Java 10 – javac -h can be used instead.

policytool was the UI-based tool for policy file creation and management. It has now been removed; a simple text editor can be used for this task instead.

The java -Xprof option has also been removed. It was used to profile the running program and send profiling data to standard output. The jmap tool should now be used instead.

8.2. APIs

Deprecated java.security.acl package has been marked forRemoval=true and is subject to removal in a future version of Java SE. It’s been replaced by java.security.Policy and related classes.

Similarly, java.security.{Certificate,Identity,IdentityScope,Signer} APIs are marked forRemoval=true.

9. Time-Based Release Versioning

Starting with Java 10, Oracle has moved to a time-based release model for Java. This has the following implications:

  1. A new Java release every six months. The March 2018 release is JDK 10, the September 2018 release is JDK 11, and so forth. These are called feature releases and are expected to contain at least one or two significant features
  2. Support for the feature release will last only for six months, i.e., until next feature release
  3. Long-term support release will be marked as LTS. Support for such release will be for three years
  4. Java 11 will be an LTS release

java -version will now contain the GA date, making it easier to identify how old the release is:

$ java -version
openjdk version "10" 2018-03-20
OpenJDK Runtime Environment 18.3 (build 10+46)
OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode)

10. Conclusion

In this article, we saw the new features and changes brought in by Java 10.

As usual, code snippets can be found over on GitHub.

On Kotlin


Guide to JNI (Java Native Interface)


1. Introduction

As we know, one of the main strengths of Java is its portability – meaning that once we write and compile code, the result of this process is platform-independent bytecode.

Simply put, this can run on any machine or device capable of running a Java Virtual Machine, and it will work as seamlessly as we could expect.

However, sometimes we do actually need to use code that’s natively-compiled for a specific architecture.

There could be some reasons for needing to use native code:

  • The need to handle some hardware
  • Performance improvement for a very demanding process
  • An existing library that we want to reuse instead of rewriting it in Java.

To achieve this, the JDK introduces a bridge between the bytecode running in our JVM and the native code (usually written in C or C++).

This tool is called the Java Native Interface. In this article, we’ll see how to write some code using it.

2. How It Works

2.1. Native Methods: The JVM Meets Compiled Code

Java provides the native keyword that’s used to indicate that the method implementation will be provided by a native code.

Normally, when making a native executable program, we can choose to use static or shared libs:

  • Static libs – all library binaries will be included as part of our executable during the linking process. Thus, we won’t need the libs anymore, but it’ll increase the size of our executable file.
  • Shared libs – the final executable only has references to the libs, not the code itself. It requires that the environment in which we run our executable has access to all the files of the libs used by our program.

The latter is what makes sense for JNI as we can’t mix bytecode and natively compiled code into the same binary file.

Therefore, our shared lib will keep the native code separately within its .so/.dll/.dylib file (depending on which Operating System we’re using) instead of being part of our classes.

The native keyword transforms our method into a sort of abstract method:

private native void aNativeMethod();

The main difference is that, instead of being implemented by another Java class, it will be implemented in a separate native shared library.

A table with pointers in memory to the implementation of all of our native methods will be constructed so they can be called from our Java code.

2.2. Components Needed

Here’s a brief description of the key components that we need to take into account. We’ll explain them further later in this article:

  • Java Code – our classes. They will include at least one native method.
  • Native Code – the actual logic of our native methods, usually coded in C or C++.
  • JNI header file – this header file for C/C++ (include/jni.h in the JDK directory) includes all the definitions of the JNI elements that we may use in our native programs.
  • C/C++ Compiler – we can choose between GCC, Clang, Visual Studio, or any other we like, as long as it’s able to generate a native shared library for our platform.

2.3. JNI Elements In Code (Java And C/C++)

Java elements:

  • “native” keyword – as we’ve already covered, any method marked as native must be implemented in a native, shared lib.
  • System.loadLibrary(String libname) – a static method that loads a shared library from the file system into memory and makes its exported functions available for our Java code.

C/C++ elements (many of them defined within jni.h)

  • JNIEXPORT – marks the function in the shared lib as exportable, so it will be included in the function table and JNI can find it
  • JNICALL – combined with JNIEXPORT, it ensures that our methods are available for the JNI framework
  • JNIEnv – a structure containing methods that our native code can use to access Java elements
  • JavaVM – a structure that lets us manipulate a running JVM (or even start a new one) adding threads to it, destroying it, etc…

3. Hello World JNI

Next, let’s look at how JNI works in practice.

In this tutorial, we’ll use C++ as the native language, and G++ as the compiler and linker.

We can use any other compiler of our preference, but here’s how to install G++ on Ubuntu, Windows, and MacOS:

  • Ubuntu Linux – run command “sudo apt-get install build-essential” in a terminal
  • Windows – Install MinGW
  • MacOS – run command “g++” in a terminal and if it’s not yet present, it will install it.

3.1. Creating the Java Class

Let’s start creating our first JNI program by implementing a classic “Hello World”.

To begin, we create the following Java class that includes the native method that will perform the work:

package com.baeldung.jni;

public class HelloWorldJNI {

    static {
        System.loadLibrary("native");
    }
    
    public static void main(String[] args) {
        new HelloWorldJNI().sayHello();
    }

    // Declare a native method sayHello() that receives no arguments and returns void
    private native void sayHello();
}

As we can see, we load the shared library in a static block. This ensures that it will be ready when we need it and from wherever we need it.

Alternatively, in this trivial program, we could instead load the library just before calling our native method because we’re not using the native library anywhere else.

3.2. Implementing a Method in C++

Now, we need to create the implementation of our native method in C++.

Within C++ the definition and the implementation are usually stored in .h and .cpp files respectively.

First, to create the definition of the method, we have to use the -h flag of the Java compiler:

javac -h . HelloWorldJNI.java

This will generate a com_baeldung_jni_HelloWorldJNI.h file with all the native methods included in the class passed as a parameter, in this case, only one:

JNIEXPORT void JNICALL Java_com_baeldung_jni_HelloWorldJNI_sayHello
  (JNIEnv *, jobject);

As we can see, the function name is automatically generated using the fully qualified package, class and method name.
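For the simple case – no overloaded native methods and no special characters in the names – the mangling rule can be sketched with a small, hypothetical helper (the extra escaping rules for overloads and Unicode characters are skipped here):

```java
public class JniNameDemo {

    // Hypothetical helper illustrating the simple JNI name-mangling rule:
    // "Java_" + fully qualified class name (dots replaced by underscores)
    // + "_" + method name
    static String jniFunctionName(String fullyQualifiedClass, String methodName) {
        return "Java_" + fullyQualifiedClass.replace('.', '_') + "_" + methodName;
    }

    public static void main(String[] args) {
        System.out.println(jniFunctionName("com.baeldung.jni.HelloWorldJNI", "sayHello"));
        // Java_com_baeldung_jni_HelloWorldJNI_sayHello
    }
}
```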

Also, something interesting that we can notice is that we’re getting two parameters passed to our function: a pointer to the current JNIEnv, and the Java object that the method is attached to – the instance of our HelloWorldJNI class.

Now, we have to create a new .cpp file for the implementation of the sayHello function. This is where we’ll perform the actions that print our greeting to the console.

We’ll name our .cpp file with the same name as the .h one containing the header and add this code to implement the native function:

#include <iostream>
#include "com_baeldung_jni_HelloWorldJNI.h"

JNIEXPORT void JNICALL Java_com_baeldung_jni_HelloWorldJNI_sayHello
  (JNIEnv* env, jobject thisObject) {
    std::cout << "Hello from C++ !!" << std::endl;
}

3.3. Compiling And Linking

At this point, we have all the parts we need in place and a connection between them.

We need to build our shared library from the C++ code and run it!

To do so, we have to use the G++ compiler, not forgetting to include the JNI headers from our Java JDK installation.

Ubuntu version:

g++ -c -fPIC -I${JAVA_HOME}/include -I${JAVA_HOME}/include/linux com_baeldung_jni_HelloWorldJNI.cpp -o com_baeldung_jni_HelloWorldJNI.o

Windows version:

g++ -c -I%JAVA_HOME%\include -I%JAVA_HOME%\include\win32 com_baeldung_jni_HelloWorldJNI.cpp -o com_baeldung_jni_HelloWorldJNI.o

MacOS version:

g++ -c -fPIC -I${JAVA_HOME}/include -I${JAVA_HOME}/include/darwin com_baeldung_jni_HelloWorldJNI.cpp -o com_baeldung_jni_HelloWorldJNI.o

Once we have the code compiled for our platform into the file com_baeldung_jni_HelloWorldJNI.o, we have to include it in a new shared library. Whatever name we decide to give it is the argument passed into the System.loadLibrary method.

We named ours “native”, and we’ll load it when running our Java code.

The G++ linker then links the C++ object files into our bridged library.

Ubuntu version:

g++ -shared -fPIC -o libnative.so com_baeldung_jni_HelloWorldJNI.o -lc

Windows version:

g++ -shared -o native.dll com_baeldung_jni_HelloWorldJNI.o -Wl,--add-stdcall-alias

MacOS version:

g++ -dynamiclib -o libnative.dylib com_baeldung_jni_HelloWorldJNI.o -lc

And that’s it!

We can now run our program from the command line.

However, we need to add the full path to the directory containing the library we’ve just generated. This way Java will know where to look for our native libs:

java -cp . -Djava.library.path=/NATIVE_SHARED_LIB_FOLDER com.baeldung.jni.HelloWorldJNI

Console output:

Hello from C++ !!

4. Using Advanced JNI Features

Saying hello is nice but not very useful. Usually, we would like to exchange data between Java and C++ code and manage this data in our program.

4.1. Adding Parameters To Our Native Methods

We’ll add some parameters to our native methods. Let’s create a new class called ExampleParametersJNI with two native methods using parameters and returns of different types:

private native long sumIntegers(int first, int second);
    
private native String sayHelloToMe(String name, boolean isFemale);

Then, we repeat the procedure to create a new .h file with “javac -h”, as we did before.

Now create the corresponding .cpp file with the implementation of the new C++ method:

...
JNIEXPORT jlong JNICALL Java_com_baeldung_jni_ExampleParametersJNI_sumIntegers 
  (JNIEnv* env, jobject thisObject, jint first, jint second) {
    std::cout << "C++: The numbers received are : " << first << " and " << second << std::endl;
    return (long)first + (long)second;
}
JNIEXPORT jstring JNICALL Java_com_baeldung_jni_ExampleParametersJNI_sayHelloToMe 
  (JNIEnv* env, jobject thisObject, jstring name, jboolean isFemale) {
    const char* nameCharPointer = env->GetStringUTFChars(name, NULL);
    std::string title;
    if(isFemale) {
        title = "Ms. ";
    }
    else {
        title = "Mr. ";
    }

    std::string fullName = title + nameCharPointer;
    env->ReleaseStringUTFChars(name, nameCharPointer);
    return env->NewStringUTF(fullName.c_str());
}
...
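Note the casts to long in sumIntegers: summing the two values as ints first could overflow before the result is widened to jlong. The same pitfall in pure Java terms:

```java
public class OverflowDemo {

    public static void main(String[] args) {
        int first = Integer.MAX_VALUE;
        int second = 1;

        long wrong = first + second;        // int addition overflows, then widens
        long right = (long) first + second; // widens first, then adds

        System.out.println(wrong + " vs " + right);
    }
}
```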

We’ve used the env pointer of type JNIEnv* to access the methods provided by the JNI environment instance.

JNIEnv allows us, in this case, to pass Java Strings into our C++ code and back out without worrying about the implementation.

We can check the equivalence of Java types and C JNI types in the official Oracle documentation.
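A few of those equivalences, as defined by the JNI specification, captured here as a small lookup table:

```java
import java.util.Map;

public class JniTypeDemo {

    public static void main(String[] args) {
        // A small subset of the Java-to-native type equivalences
        // from the JNI specification
        Map<String, String> jniTypes = Map.of(
          "boolean", "jboolean",
          "int", "jint",
          "long", "jlong",
          "double", "jdouble",
          "String", "jstring",
          "any object", "jobject");

        jniTypes.forEach((java, jni) -> System.out.println(java + " -> " + jni));
    }
}
```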

To test our code, we have to repeat all the compilation steps of the previous HelloWorld example.

4.2. Using Objects And Calling Java Methods From Native Code

In this last example, we’re going to see how we can manipulate Java objects from our native C++ code.

We’ll start by creating a new class UserData that we’ll use to store some user info:

package com.baeldung.jni;

public class UserData {
    
    public String name;
    public double balance;
    
    public String getUserInfo() {
        return "[name]=" + name + ", [balance]=" + balance;
    }
}

Then, we’ll create another Java class called ExampleObjectsJNI with some native methods with which we’ll manage objects of type UserData:

...
public native UserData createUser(String name, double balance);
    
public native String printUserData(UserData user);

One more time, let’s create the .h header and then the C++ implementation of our native methods on a new .cpp file:

JNIEXPORT jobject JNICALL Java_com_baeldung_jni_ExampleObjectsJNI_createUser
  (JNIEnv *env, jobject thisObject, jstring name, jdouble balance) {
  
    // Create the object of the class UserData
    jclass userDataClass = env->FindClass("com/baeldung/jni/UserData");
    jobject newUserData = env->AllocObject(userDataClass);
	
    // Get the UserData fields to be set
    jfieldID nameField = env->GetFieldID(userDataClass , "name", "Ljava/lang/String;");
    jfieldID balanceField = env->GetFieldID(userDataClass , "balance", "D");
	
    env->SetObjectField(newUserData, nameField, name);
    env->SetDoubleField(newUserData, balanceField, balance);
    
    return newUserData;
}

JNIEXPORT jstring JNICALL Java_com_baeldung_jni_ExampleObjectsJNI_printUserData
  (JNIEnv *env, jobject thisObject, jobject userData) {
  	
    // Find the id of the Java method to be called
    jclass userDataClass=env->GetObjectClass(userData);
    jmethodID methodId=env->GetMethodID(userDataClass, "getUserInfo", "()Ljava/lang/String;");

    jstring result = (jstring)env->CallObjectMethod(userData, methodId);
    return result;
}

Again, we’re using the JNIEnv *env pointer to access the needed classes, objects, fields and methods from the running JVM.

Normally, we just need to provide the full class name to access a Java class, or the correct method name and signature to access an object method.
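Those signature strings, such as “()Ljava/lang/String;” for getUserInfo, are standard JVM type descriptors; from Java, we can double-check one with java.lang.invoke.MethodType:

```java
import java.lang.invoke.MethodType;

public class DescriptorDemo {

    public static void main(String[] args) {
        // The descriptor for a method taking no arguments and returning String,
        // as passed to GetMethodID for getUserInfo above
        String signature = MethodType.methodType(String.class)
          .toMethodDescriptorString();
        System.out.println(signature); // ()Ljava/lang/String;
    }
}
```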

We’re even creating an instance of the class com.baeldung.jni.UserData in our native code. Once we have the instance, we can manipulate all its properties and methods in a way similar to Java reflection.

We can check all the other methods of JNIEnv in the official Oracle documentation.

5. Disadvantages Of Using JNI

JNI bridging does have its pitfalls.

The main downside is the dependency on the underlying platform; we essentially lose the “write once, run anywhere” feature of Java. This means that we’ll have to build a new lib for each new combination of platform and architecture we want to support. Imagine the impact that this could have on the build process if we supported Windows, Linux, Android, MacOS…

JNI not only adds a layer of complexity to our program; it also adds a costly layer of communication between the code running in the JVM and our native code: we need to convert the data exchanged both ways between Java and C++ in a marshaling/unmarshaling process.

Sometimes there isn’t even a direct conversion between types, so we’ll have to write our own equivalent.

6. Conclusion

Compiling the code for a specific platform (usually) makes it faster than running bytecode.

This makes it useful when we need to speed up a demanding process, or when we don’t have other alternatives, such as when we need to use a library that manages a device.

However, this comes at a price as we’ll have to maintain additional code for each different platform we support.

That’s why it’s usually a good idea to only use JNI in the cases where there’s no Java alternative.

As always, the code for this article is available over on GitHub.

Spring Boot Configuration with Jasypt


1. Introduction

Jasypt (Java Simplified Encryption) integration for Spring Boot provides utilities for encrypting property sources in Boot applications.

In this article, we’ll discuss how we can add jasypt-spring-boot‘s support and use it.

For more information on using Jasypt as a framework for encryption, take a look at our Introduction to Jasypt here.

2. Why Jasypt?

Whenever we need to store sensitive information in a configuration file, we’re essentially making that information vulnerable; this includes any kind of sensitive information, such as credentials, but certainly a lot more than that.

By using Jasypt, we can provide encryption for the property file attributes and our application will do the job of decrypting it and retrieving the original value.
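Conceptually, the library scans property values for an ENC(…) wrapper and decrypts what’s inside; here’s a minimal, hypothetical sketch of that detection step (not the library’s actual code):

```java
public class EncDetectionDemo {

    // Hypothetical sketch of how an ENC(...)-wrapped property value is
    // recognized and unwrapped before decryption; the real library is
    // more involved
    static boolean isEncryptedValue(String value) {
        return value != null && value.startsWith("ENC(") && value.endsWith(")");
    }

    static String unwrap(String value) {
        return value.substring("ENC(".length(), value.length() - 1);
    }

    public static void main(String[] args) {
        String property = "ENC(uTSqb9grs1+vUv3iN8lItC0kl65lMG+8)";
        if (isEncryptedValue(property)) {
            // The unwrapped ciphertext is what gets handed to the decryptor
            System.out.println(unwrap(property));
        }
    }
}
```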

3. Ways to Use Jasypt with Spring Boot

Let’s discuss the different ways to use Jasypt with Spring Boot.

3.1. Using jasypt-spring-boot-starter

We need to add a single dependency to our project:

<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>2.0.0</version>
</dependency>

Maven Central has the latest version of the jasypt-spring-boot-starter.

Let’s now encrypt the text “Password@1” with secret key “password” and add it to the encrypted.properties:

encrypted.property=ENC(uTSqb9grs1+vUv3iN8lItC0kl65lMG+8)

And let’s define a configuration class AppConfigForJasyptStarter – to specify the encrypted.properties file as a PropertySource:

@Configuration
@PropertySource("encrypted.properties")
public class AppConfigForJasyptStarter {
}

Now, we’ll write a service bean PropertyServiceForJasyptStarter to retrieve the values from encrypted.properties. The decrypted value can be retrieved using the @Value annotation or the getProperty() method of the Environment class:

@Service
public class PropertyServiceForJasyptStarter {

    @Value("${encrypted.property}")
    private String property;

    public String getProperty() {
        return property;
    }

    public String getPasswordUsingEnvironment(Environment environment) {
        return environment.getProperty("encrypted.property");
    }
}

Finally, using the above service class and setting the secret key which we used for encryption, we can easily retrieve the decrypted password and use it in our application:

@Test
public void whenDecryptedPasswordNeeded_GetFromService() {
    System.setProperty("jasypt.encryptor.password", "password");
    PropertyServiceForJasyptStarter service = appCtx
      .getBean(PropertyServiceForJasyptStarter.class);
 
    assertEquals("Password@1", service.getProperty());
 
    Environment environment = appCtx.getBean(Environment.class);
 
    assertEquals(
      "Password@1", 
      service.getPasswordUsingEnvironment(environment));
}

3.2. Using jasypt-spring-boot

For projects not using @SpringBootApplication or @EnableAutoConfiguration, we can use the jasypt-spring-boot dependency directly:

<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot</artifactId>
    <version>2.0.0</version>
</dependency>

Similarly, let’s encrypt the text “Password@2” with the secret key “password” and add it to the encryptedv2.properties:

encryptedv2.property=ENC(dQWokHUXXFe+OqXRZYWu22BpXoRZ0Drt)

And let’s have a new configuration class for jasypt-spring-boot dependency.

Here, we need to add the annotation @EncryptablePropertySource:

@Configuration
@EncryptablePropertySource("encryptedv2.properties")
public class AppConfigForJasyptSimple {
}

Also, we define a new PropertyServiceForJasyptSimple bean to return the value from encryptedv2.properties:

@Service
public class PropertyServiceForJasyptSimple {
 
    @Value("${encryptedv2.property}")
    private String property;

    public String getProperty() {
        return property;
    }
}

Finally, using the above service class and setting the secret key which we used for encryption, we can easily retrieve the encryptedv2.property:

@Test
public void whenDecryptedPasswordNeeded_GetFromService() {
    System.setProperty("jasypt.encryptor.password", "password");
    PropertyServiceForJasyptSimple service = appCtx
      .getBean(PropertyServiceForJasyptSimple.class);
 
    assertEquals("Password@2", service.getProperty());
}

3.3. Using a Custom Jasypt Encryptor

The encryptors defined in sections 3.1 and 3.2 are constructed with the default configuration values.

However, let’s go ahead and define our own Jasypt encryptor and try to use it in our application.

So, the custom encryptor bean will look like this:

@Bean(name = "encryptorBean")
public StringEncryptor stringEncryptor() {
    PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
    SimpleStringPBEConfig config = new SimpleStringPBEConfig();
    config.setPassword("password");
    config.setAlgorithm("PBEWithMD5AndDES");
    config.setKeyObtentionIterations("1000");
    config.setPoolSize("1");
    config.setProviderName("SunJCE");
    config.setSaltGeneratorClassName("org.jasypt.salt.RandomSaltGenerator");
    config.setStringOutputType("base64");
    encryptor.setConfig(config);
    return encryptor;
}

Furthermore, we can modify all the properties for the SimpleStringPBEConfig.

Also, we need to add the property “jasypt.encryptor.bean” to our application.properties, so that Spring Boot knows which custom encryptor it should use.

For example, we add the custom text “Password@3” encrypted with secret key “password” in the application.properties:

jasypt.encryptor.bean=encryptorBean
encryptedv3.property=ENC(askygdq8PHapYFnlX6WsTwZZOxWInq+i)

Once we set it, we can easily get the encryptedv3.property from Spring’s Environment:

@Test
public void whenConfiguredEncryptorUsed_ReturnCustomEncryptor() {
    Environment environment = appCtx.getBean(Environment.class);
 
    assertEquals(
      "Password@3", 
      environment.getProperty("encryptedv3.property"));
}

4. Conclusion

By using Jasypt, we can provide additional security for the data that our application handles.

It enables us to focus more on the core of our application and can also be used to provide custom encryption if required.

As always, the complete code for this example is available over on GitHub.
