
Writing a Custom Filter in Spring Security


1. Overview

In this quick article, we’ll focus on writing a custom filter for the Spring Security filter chain.

2. Creating the Filter

Spring Security provides a number of filters by default, and most of the time, these are enough.

But of course, sometimes it’s necessary to implement new functionality by creating a new filter to use in the chain.

We’ll start by extending org.springframework.web.filter.GenericFilterBean.

The GenericFilterBean is a simple javax.servlet.Filter implementation that is Spring-aware.

On to the implementation – we only need to implement a single method:

public class CustomFilter extends GenericFilterBean {

    @Override
    public void doFilter(
      ServletRequest request, 
      ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
        chain.doFilter(request, response);
    }
}
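
A pass-through filter like this isn’t doing anything useful yet; in practice, we’d add behavior before or after delegating to the rest of the chain. Here’s a minimal sketch, where the filter name and log message are just illustrations (the logger field is inherited from GenericFilterBean):

public class RequestLoggingFilter extends GenericFilterBean {

    @Override
    public void doFilter(
      ServletRequest request, 
      ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
        // runs before the rest of the chain
        logger.info("Request received from " + request.getRemoteAddr());

        chain.doFilter(request, response);

        // runs after the rest of the chain has completed
    }
}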

3. Using the Filter in the Security Config

We’re free to choose either XML configuration or Java configuration to wire the filter into the Spring Security configuration.

3.1. Java Configuration

You can register the filter programmatically by overriding the configure method of WebSecurityConfigurerAdapter. For example, this works with the addFilterAfter method on an HttpSecurity instance:

@Configuration
public class CustomWebSecurityConfigurerAdapter
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.addFilterAfter(
          new CustomFilter(), BasicAuthenticationFilter.class);
    }
}

There are a couple of possible registration methods on HttpSecurity: addFilterBefore, addFilterAfter, addFilterAt, and addFilter.
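
For instance, here is a quick sketch of pinning the filter at the exact position of the basic authentication filter with addFilterAt:

http.addFilterAt(new CustomFilter(), BasicAuthenticationFilter.class);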

3.2. XML Configuration

We can add the filter to the chain using the custom-filter tag and one of the standard filter alias names to specify the position of our filter. For instance, the position can be specified with the after attribute:

<http>
    <custom-filter after="BASIC_AUTH_FILTER" ref="myFilter" />
</http>

<beans:bean id="myFilter" class="org.baeldung.security.filter.CustomFilter"/>

Here are all the attributes available for specifying the exact place of the filter in the stack:

  • after – describes the filter immediately after which the custom filter will be placed in the chain
  • before – defines the filter before which our filter should be placed in the chain
  • position – allows replacing a standard filter at an explicit position with a custom filter (see the sketch below)
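
For example, a sketch of replacing the standard basic authentication filter with our own via the position attribute:

<http>
    <custom-filter position="BASIC_AUTH_FILTER" ref="myFilter" />
</http>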

4. Conclusion

In this quick article, we created a custom filter and wired that into the Spring Security filter chain.

As always, all code examples are available in the sample GitHub project.


Java Web Weekly, Issue 149


1. Spring and Java

>> Introducing the Spring Cloud CLI Launcher [spring.io]

Spring Cloud is definitely moving fast.

Here’s a quick writeup covering a new CLI command that takes some of the work away by running the necessary supporting applications.

>> Java Microservices: The Cake Is a Lie but You Can’t Ignore It [takipi.com]

A zoomed-out look at the Java microservice framework landscape.

The writeup discusses most of the solutions available and the approach they each take to help developers get to a microservice implementation without shooting themselves in the foot.

>> Dijkstra’s Algorithm [cleancoder.com]

A fun way to apply TDD to get to a clean solution for a well-known problem domain.

>> 5 Things Only Experienced Developers Can Teach You About Java [takipi.com]

There’s nothing like running your work in production. I look at my first few years as a developer as the time before and the time after I had work being used (and abused) by actual users.

So this writeup hits the nail on the head: experience, and more specifically production experience, is going to be a fantastic accelerator of learning.

>> Inside Java 9 – Version Schema, Multi-Release JARs, and More [sitepoint.com]

As Java 9 gets closer and closer (fingers crossed), it’s getting more and more important to start digging into the nuts and bolts that are going to make up the new release.

This writeup is definitely a good place to start doing that.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> REST – Resource Association vs Resource Aggregation [alexecollins.com]

Really solid, practical post discussing how to design the URL space and semantics of an API. A quick read as well, and definitely worth the 5 minutes.

>> Applying Queueing Theory to Dynamic Connection Pool Sizing with FlexyPool [jooq.org]

A quick technical look at how to improve upon and get more out of traditional connection pool implementations.

>> On becoming a test automation craftsman [ontestautomation.com]

Like most other things in this field of developing software, testing is all about the value and the results, and really not about the shiny new tool.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> We’re going with the web-only business model [dilbert.com]

>> 10.000 hours of practice [dilbert.com]

>> The Feng Shui consultant [dilbert.com]

4. Pick of the Week

>> Status meetings are the scourge [m.signalvnoise.com]

Java NIO2 Path API



1. Overview

In this article, we will learn how to use the new I/O (NIO2) Path API in Java.

The Path APIs in NIO2 constitute one of the major new functional areas that shipped with Java 7, specifically a subset of the new file system API, alongside the File APIs.

2. Setup

The NIO2 support is bundled in the java.nio.file package. So setting up your project to use the Path APIs is just a matter of importing everything in this package:

import java.nio.file.*;

Since the code samples in this article will probably be running in different environments, let’s get a handle on the home directory of the user:

private static String HOME = System.getProperty("user.home");

This variable will point to a valid location in any environment.

The Paths class is the main entry point to all operations involving file system paths. It allows us to create and manipulate paths to files and directories.

Worthy of note is that path operations are mainly syntactic in nature; they have no effect on the underlying file system, and the file system has no effect on whether they succeed or fail. This means that passing a nonexistent path as a parameter of a path operation has no bearing on whether it succeeds or fails.

3. Path Operations

In this section, we will introduce the main syntax used in path operations. As its name implies, the Path class is a programmatic representation of a path in the file system.

A Path object contains the file name and directory list used to construct the path and is used to examine, locate, and manipulate files.

The helper class java.nio.file.Paths (in plural form) is the formal way of creating Path objects. It has two static get methods for creating a Path. The first creates one from a path string:

Path path = Paths.get("path string");

Whether we use a forward slash or a backslash in the path String does not matter; the API resolves this parameter according to the underlying file system’s requirements.

And the second creates one from a java.net.URI object:

Path path = Paths.get(uri);

We can now go ahead and see these in action.

4. Creating a Path

To create a Path object from a path string:

@Test
public void givenPathString_whenCreatesPathObject_thenCorrect() {
    Path p = Paths.get("/articles/baeldung");
 
    assertEquals("\\articles\\baeldung", p.toString());
}

In addition to the first part (in this case, /articles), the get API can take a variable arguments parameter of further path string parts (in this case, baeldung).

If we provide these parts instead of a complete path string, they will be used to construct the Path object; we do not need to include the name separators (slashes) in the variable arguments part:

@Test
public void givenPathParts_whenCreatesPathObject_thenCorrect() {
    Path p = Paths.get("/articles", "baeldung");
    
    assertEquals("\\articles\\baeldung", p.toString());
}

5. Retrieving Path Information

You can think of a Path object as a sequence of name elements. A path String such as E:\baeldung\articles\java consists of three name elements, i.e. baeldung, articles, and java. The highest element in the directory structure is located at index 0, in this case baeldung.

The lowest element in the directory structure is located at index [n-1], where n is the number of name elements in the path. This lowest element is called the file name regardless of whether it is an actual file or not:

@Test
public void givenPath_whenRetrievesFileName_thenCorrect() {
    Path p = Paths.get("/articles/baeldung/logs");

    Path fileName = p.getFileName();
 
    assertEquals("logs", fileName.toString());
}
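
The number of name elements, n, can be retrieved with the getNameCount method; a quick sketch:

@Test
public void givenPath_whenCountsNameElements_thenCorrect() {
    Path p = Paths.get("/articles/baeldung/logs");

    assertEquals(3, p.getNameCount());
}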

Methods are available for retrieving individual elements by index:

@Test
public void givenPath_whenRetrievesNameByIndex_thenCorrect() {
    Path p = Paths.get("/articles/baeldung/logs");
    Path name0 = p.getName(0);
    Path name1 = p.getName(1);
    Path name2 = p.getName(2);
    assertEquals("articles", name0.toString());
    assertEquals("baeldung", name1.toString());
    assertEquals("logs", name2.toString());
}

We can also retrieve a sub-sequence of the path using index ranges:

@Test
public void givenPath_whenCanRetrieveSubsequenceByIndex_thenCorrect() {
    Path p = Paths.get("/articles/baeldung/logs");

    Path subPath1 = p.subpath(0,1);
    Path subPath2 = p.subpath(0,2);
 
    assertEquals("articles", subPath1.toString());
    assertEquals("articles\\baeldung", subPath2.toString());
    assertEquals("articles\\baeldung\\logs", p.subpath(0, 3).toString());
    assertEquals("baeldung", p.subpath(1, 2).toString());
    assertEquals("baeldung\\logs", p.subpath(1, 3).toString());
    assertEquals("logs", p.subpath(2, 3).toString());
}

Each path is associated with a parent path, or null if the path has no parent. The parent of a path object consists of the path’s root component, if any, and each element in the path except for the file name. As an example, the parent path of /a/b/c is /a/b, the parent of /a is /, and the parent of / is null:

@Test
public void givenPath_whenRetrievesParent_thenCorrect() {
    Path p1 = Paths.get("/articles/baeldung/logs");
    Path p2 = Paths.get("/articles/baeldung");
    Path p3 = Paths.get("/articles");
    Path p4 = Paths.get("/");

    Path parent1 = p1.getParent();
    Path parent2 = p2.getParent();
    Path parent3 = p3.getParent();
    Path parent4 = p4.getParent();

    assertEquals("\\articles\\baeldung", parent1.toString());
    assertEquals("\\articles", parent2.toString());
    assertEquals("\\", parent3.toString());
    assertEquals(null, parent4);
}

We can also get the root element of a path:

@Test
public void givenPath_whenRetrievesRoot_thenCorrect() {
    Path p1 = Paths.get("/articles/baeldung/logs");
    Path p2 = Paths.get("c:/articles/baeldung/logs");

    Path root1 = p1.getRoot();
    Path root2 = p2.getRoot();

    assertEquals("\\", root1.toString());
    assertEquals("c:\\", root2.toString());
}

6. Normalizing a Path

Many file systems use “.” notation to denote the current directory and “..” to denote the parent directory. You might have a situation where a path contains redundant directory information.

For example, consider the following path strings:

/baeldung/./articles
/baeldung/authors/../articles
/baeldung/articles

They all resolve to the same location /baeldung/articles. The first two have redundancies while the last one does not.

Normalizing a path involves removing redundancies in it. The Path.normalize() operation is provided for this purpose.

This example should now be self-explanatory:

@Test
public void givenPath_whenRemovesRedundancies_thenCorrect1() {
    Path p = Paths.get("/home/./baeldung/articles");

    Path cleanPath = p.normalize();
 
    assertEquals("\\home\\baeldung\\articles", cleanPath.toString());
}

This one too:

@Test
public void givenPath_whenRemovesRedundancies_thenCorrect2() {
    Path p = Paths.get("/home/baeldung/../articles");

    Path cleanPath = p.normalize();
 
    assertEquals("\\home\\articles", cleanPath.toString());
}

7. Path Conversion

There are operations to convert a path to a chosen presentation format. To convert any path into a string that can be opened from the browser, we use the toUri method:

@Test
public void givenPath_whenConvertsToBrowseablePath_thenCorrect() {
    Path p = Paths.get("/home/baeldung/articles.html");

    URI uri = p.toUri();
    assertEquals(
      "file:///E:/home/baeldung/articles.html", 
        uri.toString());
}

We can also convert a path to its absolute representation. The toAbsolutePath method resolves a path against a file system default directory:

@Test
public void givenPath_whenConvertsToAbsolutePath_thenCorrect() {
    Path p = Paths.get("/home/baeldung/articles.html");

    Path absPath = p.toAbsolutePath();
 
    assertEquals(
      "E:\\home\\baeldung\\articles.html", 
        absPath.toString());
}

However, when the path to be resolved is detected to be already absolute, the method returns it as is:

@Test
public void givenAbsolutePath_whenRetainsAsAbsolute_thenCorrect() {
    Path p = Paths.get("E:\\home\\baeldung\\articles.html");

    Path absPath = p.toAbsolutePath();
 
    assertEquals(
      "E:\\home\\baeldung\\articles.html", 
        absPath.toString());
}

We can also convert any path to its real equivalent by calling the toRealPath method. This method tries to resolve the path by mapping its elements to actual directories and files in the file system.

Time to use the variable we created in the Setup section, which points to the logged-in user’s home location in the file system:

@Test
public void givenExistingPath_whenGetsRealPathToFile_thenCorrect() {
    Path p = Paths.get(HOME);

    Path realPath = p.toRealPath();
 
    assertEquals(HOME, realPath.toString());
}

The above test does not really tell us much about the behavior of this operation. The most obvious result is that if the path does not exist in the file system, the operation will throw an IOException; read on.

For lack of a better way to drive this point home, just take a look at the next test, which attempts to convert a nonexistent path to a real path:

@Test(expected = NoSuchFileException.class)
public void givenInExistentPath_whenFailsToConvert_thenCorrect() {
    Path p = Paths.get("E:\\home\\baeldung\\articles.html");
    
    p.toRealPath();
}

The test succeeds when we catch an IOException. The actual subclass of IOException that this operation throws is NoSuchFileException.

8. Joining Paths

Joining any two paths can be achieved using the resolve method.

Simply put, we can call the resolve method on any Path and pass in a partial path as the argument. That partial path is appended to the original path:

@Test
public void givenTwoPaths_whenJoinsAndResolves_thenCorrect() {
    Path p = Paths.get("/baeldung/articles");

    Path p2 = p.resolve("java");
 
    assertEquals("\\baeldung\\articles\\java", p2.toString());
}

However, when the path string passed to the resolve method is not a partial path, most notably an absolute path, then the passed-in path is returned:

@Test
public void givenAbsolutePath_whenResolutionRetainsIt_thenCorrect() {
    Path p = Paths.get("/baeldung/articles");

    Path p2 = p.resolve("C:\\baeldung\\articles\\java");
 
    assertEquals("C:\\baeldung\\articles\\java", p2.toString());
}

The same thing happens with any path that has a root element. The path string “java” has no root element while the path string “/java” has a root element. Therefore, when you pass in a path with a root element, it is returned as is:

@Test
public void givenPathWithRoot_whenResolutionRetainsIt_thenCorrect2() {
    Path p = Paths.get("/baeldung/articles");

    Path p2 = p.resolve("/java");
 
    assertEquals("\\java", p2.toString());
}

9. Relativizing Paths

The term relativizing simply means creating a direct path between two known paths. For instance, if we have a directory /baeldung and inside it, we have two other directories such that /baeldung/authors and /baeldung/articles are valid paths.

The path to articles relative to authors would be described as “move one level up in the directory hierarchy then into articles directory” or ..\articles:

@Test
public void givenSiblingPaths_whenCreatesPathToOther_thenCorrect() {
    Path p1 = Paths.get("articles");
    Path p2 = Paths.get("authors");

    Path p1_rel_p2 = p1.relativize(p2);
    Path p2_rel_p1 = p2.relativize(p1);
 
    assertEquals("..\\authors", p1_rel_p2.toString());
    assertEquals("..\\articles", p2_rel_p1.toString());
}

Assume we move the articles directory into the authors folder so that they are no longer siblings. The following relativizing operations create a path between baeldung and articles and vice versa:

@Test
public void givenNonSiblingPaths_whenCreatesPathToOther_thenCorrect() {
    Path p1 = Paths.get("/baeldung");
    Path p2 = Paths.get("/baeldung/authors/articles");

    Path p1_rel_p2 = p1.relativize(p2);
    Path p2_rel_p1 = p2.relativize(p1);
 
    assertEquals("authors\\articles", p1_rel_p2.toString());
    assertEquals("..\\..", p2_rel_p1.toString());
}

10. Comparing Paths

The Path class has an intuitive implementation of the equals method which enables us to compare two paths for equality:

@Test
public void givenTwoPaths_whenTestsEquality_thenCorrect() {
    Path p1 = Paths.get("/baeldung/articles");
    Path p2 = Paths.get("/baeldung/articles");
    Path p3 = Paths.get("/baeldung/authors");

    assertTrue(p1.equals(p2));
    assertFalse(p1.equals(p3));
}

You can also check if a path begins with a given string:

@Test
public void givenPath_whenInspectsStart_thenCorrect() {
    Path p1 = Paths.get("/baeldung/articles");
 
    assertTrue(p1.startsWith("/baeldung"));
}

Or ends with some other string:

@Test
public void givenPath_whenInspectsEnd_thenCorrect() {
    Path p1 = Paths.get("/baeldung/articles");
  
    assertTrue(p1.endsWith("articles"));
}

11. Conclusion

In this article, we explored Path operations in the new file system API (NIO2) that shipped as part of Java 7 and saw most of them in action.

The code samples used in this article can be found in the article’s GitHub project.


Working with Network Interfaces in Java



1. Overview

In this article, we’ll focus on network interfaces and how to access them programmatically in Java.

Simply put, a network interface is the point of interconnection between a device and any of its network connections.

In everyday language, we refer to them by the term Network Interface Cards (NICs) – but they don’t all have to be of hardware form.

For example, the popular localhost IP 127.0.0.1, which we use a lot when testing web and network applications, is the loopback interface, which is not a direct hardware interface.

Of course, systems often have multiple active network connections, such as wired Ethernet, Wi-Fi, Bluetooth, etc.

In Java, the main API we can use to interact directly with them is the java.net.NetworkInterface class. And so, to get started quickly, let’s import the full package:

import java.net.*;

2. Why Access Network Interfaces?

Most Java programs probably won’t interact with them directly; there are, however, special scenarios when we do need this kind of low-level access.

The most outstanding of these is when a system has multiple cards and you would like the freedom to choose a specific interface to use a socket with. In such a scenario, we usually know the name but not necessarily the IP address.

Normally, when we want to make a socket connection to a specific server address, we do:

Socket socket = new Socket();
socket.connect(new InetSocketAddress(address, port));

This way, the system will pick a suitable local address, bind to it, and communicate with the server through its network interface. However, this approach does not allow us to choose our own.

We will make an assumption here: we don’t know the address, but we know the name. Just for demonstration purposes, let’s assume we want the connection over the loopback interface. By convention, its name is lo, at least on Linux and Windows systems; on OSX it is lo0:

NetworkInterface nif = NetworkInterface.getByName("lo");
Enumeration<InetAddress> nifAddresses = nif.getInetAddresses();

Socket socket = new Socket();
socket.bind(new InetSocketAddress(nifAddresses.nextElement(), 0));
socket.connect(new InetSocketAddress(address, port));

So we retrieve the network interface attached to lo first, retrieve the addresses attached to it, create a socket, bind it to any of the enumerated addresses (which we don’t even know at compile time), and then connect.

A NetworkInterface object contains a name and a set of IP addresses assigned to it. So binding to any of these addresses will guarantee communication through this interface.

This does not really say anything special about the API. We know that if we want our local address to be localhost, the first snippet would suffice if we just added the binding code.

Additionally, we would never really have to go through all these steps, since localhost has one well-known address, 127.0.0.1, to which we can easily bind the socket.

However, in your case, lo could perhaps have represented other interfaces like Bluetooth (net1), a wireless network (net0), or Ethernet (eth0). In such cases, you would not know the IP address at compile time.

3. Retrieving Network Interfaces

In this section, we will explore the other available APIs for retrieving the available interfaces. In the previous section, we saw just one of these approaches: the getByName() static method.

It’s worth noting that the NetworkInterface class does not have any public constructors, so we are of course not able to create a new instance. Instead, we’re going to use the available APIs to retrieve one.

The API we looked at so far is used to search a network interface by the specified name:

@Test
public void givenName_whenReturnsNetworkInterface_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");

    assertNotNull(nif);
}

It returns null if no interface exists with the given name:

@Test
public void givenInExistentName_whenReturnsNull_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("inexistent_name");

    assertNull(nif);
}

The second API is getByInetAddress(); it also requires that we provide a known parameter, this time the IP address:

@Test
public void givenIP_whenReturnsNetworkInterface_thenCorrect() {
    byte[] ip = new byte[] { 127, 0, 0, 1 };

    NetworkInterface nif = NetworkInterface.getByInetAddress(
      InetAddress.getByAddress(ip));

    assertNotNull(nif);
}

Or the host name:

@Test
public void givenHostName_whenReturnsNetworkInterface_thenCorrect()  {
    NetworkInterface nif = NetworkInterface.getByInetAddress(
      InetAddress.getByName("localhost"));

    assertNotNull(nif);
}

Or if you are specific about localhost:

@Test
public void givenLocalHost_whenReturnsNetworkInterface_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByInetAddress(
      InetAddress.getLocalHost());

    assertNotNull(nif);
}

Another alternative is to explicitly use the loopback interface:

@Test
public void givenLoopBack_whenReturnsNetworkInterface_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByInetAddress(
      InetAddress.getLoopbackAddress());

    assertNotNull(nif);
}

The third approach, which has only been available since Java 7, is to get a network interface by its index:

NetworkInterface nif = NetworkInterface.getByIndex(index);

The final approach involves using the getNetworkInterfaces API. It returns an Enumeration of all available network interfaces in the system. It’s up to us to retrieve the returned objects in a loop; the standard idiom uses a List:

Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces();

for (NetworkInterface nif: Collections.list(nets)) {
    //do something with the network interface
}
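
As a quick sketch, we could use this idiom to print the name and display name of every interface on the system:

Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces();

for (NetworkInterface nif : Collections.list(nets)) {
    // e.g. "lo : Software Loopback Interface"; the exact output depends on the system
    System.out.println(nif.getName() + " : " + nif.getDisplayName());
}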

4. Network Interface Parameters

There is a lot of valuable information we can get from a network interface after retrieving its object. One of the most useful is the list of IP addresses assigned to it.

We can get IP addresses using two APIs. The first API is getInetAddresses(). It returns an Enumeration of InetAddress instances which we can process as we deem fit:

@Test
public void givenInterface_whenReturnsInetAddresses_thenCorrect()  {
    NetworkInterface nif = NetworkInterface.getByName("lo");
    Enumeration<InetAddress> addressEnum = nif.getInetAddresses();
    InetAddress address = addressEnum.nextElement();

    assertEquals("127.0.0.1", address.getHostAddress());
}

The second API is getInterfaceAddresses(). It returns a List of InterfaceAddress instances which are more powerful than InetAddress instances. For example, apart from the IP address, you may be interested in the broadcast address:

@Test
public void givenInterface_whenReturnsInterfaceAddresses_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");
    List<InterfaceAddress> addressEnum = nif.getInterfaceAddresses();
    InterfaceAddress address = addressEnum.get(0);

    InetAddress localAddress=address.getAddress();
    InetAddress broadCastAddress = address.getBroadcast();

    assertEquals("127.0.0.1", localAddress.getHostAddress());
    assertEquals("127.255.255.255",broadCastAddress.getHostAddress());
}
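
An InterfaceAddress also exposes the subnet mask as a prefix length; a quick sketch, assuming the typical 127.0.0.1/8 loopback configuration:

@Test
public void givenInterface_whenReturnsPrefixLength_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");
    InterfaceAddress address = nif.getInterfaceAddresses().get(0);

    // 8 corresponds to the 255.0.0.0 mask of 127.0.0.1/8
    assertEquals(8, address.getNetworkPrefixLength());
}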

We can access network parameters about an interface beyond the name and IP addresses assigned to it. To check if it is up and running:

@Test
public void givenInterface_whenChecksIfUp_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");

    assertTrue(nif.isUp());
}

To check if it is a loopback interface:

@Test
public void givenInterface_whenChecksIfLoopback_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");

    assertTrue(nif.isLoopback());
}

To check if it represents a point-to-point network connection:

@Test
public void givenInterface_whenChecksIfPointToPoint_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");

    assertFalse(nif.isPointToPoint());
}

Or if it’s a virtual interface:

@Test
public void givenInterface_whenChecksIfVirtual_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");
    assertFalse(nif.isVirtual());
}

To check if multicasting is supported:

@Test
public void givenInterface_whenChecksMulticastSupport_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");

    assertTrue(nif.supportsMulticast());
}

Or to retrieve its physical address, usually called the MAC address:

@Test
public void givenInterface_whenGetsMacAddress_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("lo");
    byte[] bytes = nif.getHardwareAddress();

    assertNotNull(bytes);
}

Another parameter is the Maximum Transmission Unit (MTU), which defines the largest packet size that can be transmitted through this interface:

@Test
public void givenInterface_whenGetsMTU_thenCorrect() {
    NetworkInterface nif = NetworkInterface.getByName("net0");
    int mtu = nif.getMTU();

    assertEquals(1500, mtu);
}

5. Conclusion

In this article, we introduced network interfaces, showed how to access them programmatically, and discussed why we would need to access them.

The full source code and samples used in this article are available in the GitHub project.


Introduction to the Java NIO2 File API



1. Overview

In this article, we’re going to focus on the new I/O APIs in the Java Platform – NIO2 – to do basic file manipulation.

File APIs in NIO2 constitute one of the major new functional areas of the Java Platform that shipped with Java 7, specifically a subset of the new file system API, alongside the Path APIs.

2. Setup

Setting up your project to use File APIs is just a matter of making this import:

import java.nio.file.*;

Since the code samples in this article will probably be running in different environments, let’s get a handle on the home directory of the user, which will be valid across all operating systems:

private static String HOME = System.getProperty("user.home");

The Files class is one of the primary entry points of the java.nio.file package. This class offers a rich set of APIs for reading, writing, and manipulating files and directories. The Files class methods work on instances of Path objects.

3. Checking a File or Directory

We can have a Path instance representing a file or a directory on the file system. Whether the file or directory it points to exists or not, and whether it is accessible or not, can be confirmed by a file operation.

For the sake of simplicity, whenever we use the term file, we will be referring to both files and directories unless stated explicitly otherwise.

To check if a file exists, we use the exists API:

@Test
public void givenExistentPath_whenConfirmsFileExists_thenCorrect() {
    Path p = Paths.get(HOME);

    assertTrue(Files.exists(p));
}

To check that a file does not exist, we use the notExists API:

@Test
public void givenNonexistentPath_whenConfirmsFileNotExists_thenCorrect() {
    Path p = Paths.get(HOME + "/inexistent_file.txt");

    assertTrue(Files.notExists(p));
}

To check if a file is a regular file like myfile.txt or just a directory, we use the isRegularFile API:

@Test
public void givenDirPath_whenConfirmsNotRegularFile_thenCorrect() {
    Path p = Paths.get(HOME);

    assertFalse(Files.isRegularFile(p));
}

There are also static methods to check for file permissions. To check if a file is readable, we use the isReadable API:

@Test
public void givenExistentDirPath_whenConfirmsReadable_thenCorrect() {
    Path p = Paths.get(HOME);

    assertTrue(Files.isReadable(p));
}

To check if it is writable, we use the isWritable API:

@Test
public void givenExistentDirPath_whenConfirmsWritable_thenCorrect() {
    Path p = Paths.get(HOME);

    assertTrue(Files.isWritable(p));
}

Similarly, to check if it is executable:

@Test
public void givenExistentDirPath_whenConfirmsExecutable_thenCorrect() {
    Path p = Paths.get(HOME);
    assertTrue(Files.isExecutable(p));
}

When we have two paths, we can check if they both point to the same file on the underlying file system:

@Test
public void givenSameFilePaths_whenConfirmsIsSame_thenCorrect() {
    Path p1 = Paths.get(HOME);
    Path p2 = Paths.get(HOME);

    assertTrue(Files.isSameFile(p1, p2));
}

4. Creating Files

The file system API provides single line operations for creating files. To create a regular file, we use the createFile API and pass to it a Path object representing the file we want to create.

All the name elements in the path, apart from the file name, must exist; otherwise, we will get an IOException:

@Test
public void givenFilePath_whenCreatesNewFile_thenCorrect() {
    String fileName = "myfile_" + UUID.randomUUID().toString() + ".txt";
    Path p = Paths.get(HOME + "/" + fileName);
    assertFalse(Files.exists(p));

    Files.createFile(p);

    assertTrue(Files.exists(p));
}

In the above test, when we first check the path, it does not exist; after the createFile operation, it is found to exist.

To create a directory, we use the createDirectory API:

@Test
public void givenDirPath_whenCreatesNewDir_thenCorrect() {
    String dirName = "myDir_" + UUID.randomUUID().toString();
    Path p = Paths.get(HOME + "/" + dirName);
    assertFalse(Files.exists(p));

    Files.createDirectory(p);

    assertTrue(Files.exists(p));
    assertFalse(Files.isRegularFile(p));
    assertTrue(Files.isDirectory(p));
}

This operation requires that all parent name elements in the path exist; if not, we also get an IOException:

@Test(expected = NoSuchFileException.class)
public void givenDirPath_whenFailsToCreateRecursively_thenCorrect() {
    String dirName = "myDir_" + UUID.randomUUID().toString() + "/subdir";
    Path p = Paths.get(HOME + "/" + dirName);
    assertFalse(Files.exists(p));

    Files.createDirectory(p);
}

However, if we desire to create a hierarchy of directories with a single call, we use the createDirectories method. Unlike the previous operation, when it encounters any missing name elements in the path, it does not throw an IOException; it creates them recursively, leading up to the last element:

@Test
public void givenDirPath_whenCreatesRecursively_thenCorrect() {
    Path dir = Paths.get(
      HOME + "/myDir_" + UUID.randomUUID().toString());
    Path subdir = dir.resolve("subdir");
    assertFalse(Files.exists(dir));
    assertFalse(Files.exists(subdir));

    Files.createDirectories(subdir);

    assertTrue(Files.exists(dir));
    assertTrue(Files.exists(subdir));
}

5. Creating Temporary Files

Many applications create a trail of temporary files in the file system as they run. As a result, most file systems have a dedicated directory to store temporary files generated by such applications.

The new file system API provides specific operations for this purpose. The createTempFile API performs this operation. It takes a path object, a file prefix, and a file suffix:

@Test
public void givenFilePath_whenCreatesTempFile_thenCorrect() {
    String prefix = "log_";
    String suffix = ".txt";
    Path p = Paths.get(HOME + "/");

    Path tmp = Files.createTempFile(p, prefix, suffix);

    assertTrue(Files.exists(tmp));
}

These parameters are sufficient for most requirements. However, if you need to specify particular attributes of the file, there is a fourth variable arguments parameter.

The above test creates a temporary file in the HOME directory, pre-pending and appending the provided prefix and suffix strings respectively. We will end up with a file name like log_8821081429012075286.txt. The long numeric string is system generated.

However, if we don’t provide a prefix and a suffix, then the file name will only include the long numeric string and a default .tmp extension:

@Test
public void givenPath_whenCreatesTempFileWithDefaults_thenCorrect() {
    Path p = Paths.get(HOME + "/");

    Path tmp = Files.createTempFile(p, null, null);

    assertTrue(Files.exists(tmp));
}

The above operation creates a file with a name like 8600179353689423985.tmp.

Finally, if we provide neither a path, a prefix, nor a suffix, the operation will use defaults throughout. The default location of the created file will be the file-system-provided temporary-file directory:

@Test
public void givenNoFilePath_whenCreatesTempFileInTempDir_thenCorrect() {
    Path p = Files.createTempFile(null, null);

    assertTrue(Files.exists(p));
}

On Windows, this will default to something like C:\Users\user\AppData\Local\Temp\6100927974988978748.tmp.

All the above operations can be adapted to create directories rather than regular files by using createTempDirectory instead of createTempFile.
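
For example, a minimal sketch of creating a temporary directory with a prefix in the HOME directory:

@Test
public void givenDirPath_whenCreatesTempDirectory_thenCorrect() {
    Path p = Paths.get(HOME + "/");

    Path tmpDir = Files.createTempDirectory(p, "logs_");

    assertTrue(Files.exists(tmpDir));
    assertTrue(Files.isDirectory(tmpDir));
}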

6. Deleting a File

To delete a file, we use the delete API. For the sake of clarity, the following test first ensures that the file does not already exist, then creates it and confirms that it now exists, and finally deletes it and confirms that it no longer exists:

@Test
public void givenPath_whenDeletes_thenCorrect() {
    Path p = Paths.get(HOME + "/fileToDelete.txt");
    assertFalse(Files.exists(p));
    Files.createFile(p);
    assertTrue(Files.exists(p));

    Files.delete(p);

    assertFalse(Files.exists(p));
}

However, if a file does not exist in the file system, the delete operation will fail with an IOException:

@Test(expected = NoSuchFileException.class)
public void givenInexistentFile_whenDeleteFails_thenCorrect() {
    Path p = Paths.get(HOME + "/inexistentFile.txt");
    assertFalse(Files.exists(p));

    Files.delete(p);
}

We can avoid this scenario by using deleteIfExists, which fails silently in case the file does not exist. This is important when multiple threads are performing this operation and we don’t want a failure message simply because another thread performed the operation before the current one:

@Test
public void givenInexistentFile_whenDeleteIfExistsWorks_thenCorrect() {
    Path p = Paths.get(HOME + "/inexistentFile.txt");
    assertFalse(Files.exists(p));

    Files.deleteIfExists(p);
}

When dealing with directories and not regular files, we should remember that the delete operation does not work recursively by default. So if a directory is not empty, it will fail with an IOException:

@Test(expected = DirectoryNotEmptyException.class)
public void givenPath_whenFailsToDeleteNonEmptyDir_thenCorrect() {
    Path dir = Paths.get(
      HOME + "/emptyDir" + UUID.randomUUID().toString());
    Files.createDirectory(dir);
    assertTrue(Files.exists(dir));

    Path file = dir.resolve("file.txt");
    Files.createFile(file);

    Files.delete(dir);

    assertTrue(Files.exists(dir));
}
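
To delete a non-empty directory, we first have to delete its contents. A minimal sketch using Files.walkFileTree, which deletes every file first and each directory after its contents (SimpleFileVisitor lives in java.nio.file and BasicFileAttributes in java.nio.file.attribute):

Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
      throws IOException {
        Files.delete(file);
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult postVisitDirectory(Path d, IOException e)
      throws IOException {
        Files.delete(d);
        return FileVisitResult.CONTINUE;
    }
});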

7. Copying Files

You can copy a file or directory by using the copy API:

@Test
public void givenFilePath_whenCopiesToNewLocation_thenCorrect() {
    Path dir1 = Paths.get(
      HOME + "/firstdir_" + UUID.randomUUID().toString());
    Path dir2 = Paths.get(
      HOME + "/otherdir_" + UUID.randomUUID().toString());

    Files.createDirectory(dir1);
    Files.createDirectory(dir2);

    Path file1 = dir1.resolve("filetocopy.txt");
    Path file2 = dir2.resolve("filetocopy.txt");

    Files.createFile(file1);

    assertTrue(Files.exists(file1));
    assertFalse(Files.exists(file2));

    Files.copy(file1, file2);

    assertTrue(Files.exists(file2));
}

The copy fails if the target file exists unless the REPLACE_EXISTING option is specified:

@Test(expected = FileAlreadyExistsException.class)
public void givenPath_whenCopyFailsDueToExistingFile_thenCorrect() {
    Path dir1 = Paths.get(
      HOME + "/firstdir_" + UUID.randomUUID().toString());
    Path dir2 = Paths.get(
      HOME + "/otherdir_" + UUID.randomUUID().toString());

    Files.createDirectory(dir1);
    Files.createDirectory(dir2);

    Path file1 = dir1.resolve("filetocopy.txt");
    Path file2 = dir2.resolve("filetocopy.txt");

    Files.createFile(file1);
    Files.createFile(file2);

    assertTrue(Files.exists(file1));
    assertTrue(Files.exists(file2));

    Files.copy(file1, file2);

    Files.copy(file1, file2, StandardCopyOption.REPLACE_EXISTING);
}

However, when copying directories, the contents are not copied recursively. This means that if /baeldung contains the files articles.db and authors.db, copying /baeldung to a new location will create an empty directory.
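
If a recursive copy is needed, one option is to walk the source tree and copy each entry individually; a rough sketch, where source and target are assumed to be existing Path instances:

Files.walkFileTree(source, new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs)
      throws IOException {
        // recreate each directory under the target first
        Files.createDirectories(target.resolve(source.relativize(dir)));
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
      throws IOException {
        Files.copy(file, target.resolve(source.relativize(file)));
        return FileVisitResult.CONTINUE;
    }
});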

8. Moving Files

You can move a file or directory by using the move API. It is in most ways similar to the copy operation. If the copy operation is analogous to copy-and-paste in GUI-based systems, then move is analogous to cut-and-paste:

@Test
public void givenFilePath_whenMovesToNewLocation_thenCorrect() {
    Path dir1 = Paths.get(
      HOME + "/firstdir_" + UUID.randomUUID().toString());
    Path dir2 = Paths.get(
      HOME + "/otherdir_" + UUID.randomUUID().toString());

    Files.createDirectory(dir1);
    Files.createDirectory(dir2);

    Path file1 = dir1.resolve("filetocopy.txt");
    Path file2 = dir2.resolve("filetocopy.txt");
    Files.createFile(file1);

    assertTrue(Files.exists(file1));
    assertFalse(Files.exists(file2));

    Files.move(file1, file2);

    assertTrue(Files.exists(file2));
    assertFalse(Files.exists(file1));
}

The move operation fails if the target file exists, unless the REPLACE_EXISTING option is specified, just as with the copy operation:

@Test(expected = FileAlreadyExistsException.class)
public void givenFilePath_whenMoveFailsDueToExistingFile_thenCorrect() {
    Path dir1 = Paths.get(
      HOME + "/firstdir_" + UUID.randomUUID().toString());
    Path dir2 = Paths.get(
      HOME + "/otherdir_" + UUID.randomUUID().toString());

    Files.createDirectory(dir1);
    Files.createDirectory(dir2);

    Path file1 = dir1.resolve("filetocopy.txt");
    Path file2 = dir2.resolve("filetocopy.txt");

    Files.createFile(file1);
    Files.createFile(file2);

    assertTrue(Files.exists(file1));
    assertTrue(Files.exists(file2));

    Files.move(file1, file2);

    Files.move(file1, file2, StandardCopyOption.REPLACE_EXISTING);

    assertTrue(Files.exists(file2));
    assertFalse(Files.exists(file1));
}

9. Conclusion

In this article, we learned about file APIs in the new file system API (NIO2) that was shipped as a part of Java 7 and saw most of the important file operations in action.

The code samples used in this article can be found in the article’s GitHub project.


Introduction to FindBugs



1. Overview

FindBugs is an open source tool used to perform static analysis on Java code.

In this article, we’re going to have a look at setting up FindBugs on a Java project and integrating it into the IDE and the Maven build.

2. FindBugs Maven Plugin

2.1. Maven Configuration

In order to start generating static analysis reports, we first need to add the FindBugs plugin in our pom.xml:

<reporting>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>findbugs-maven-plugin</artifactId>
            <version>3.0.4</version>
        </plugin>
    </plugins>
</reporting>

You can check out the latest version of the plugin on Maven Central.

2.2. Report Generation

Now that we have the Maven plugin properly configured, let’s generate the project documentation using the mvn site command.

The report will be generated in the folder target/site in the project directory under the name findbugs.html.

You can also run the mvn findbugs:gui command to launch the GUI and browse the generated reports for the current project.

The FindBugs plugin can also be configured to fail under some circumstances – by adding the execution goal check to our configuration:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>findbugs-maven-plugin</artifactId>
    <version>3.0.4</version>
    <configuration>
        <effort>Max</effort>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The effort option, when maxed out, performs a more complete and precise analysis, revealing more bugs in the code; it does, however, consume more resources and take more time to complete.

You can now run the command mvn verify to check if the build will succeed or not, depending on the defects detected while running the analysis.

You can also enhance the report generation process and take more control over the analysis, by adding some basic configuration to the plugin declaration:

<configuration>
    <onlyAnalyze>org.baeldung.web.controller.*</onlyAnalyze>
    <omitVisitors>FindNullDeref</omitVisitors>
    <visitors>FindReturnRef</visitors>
</configuration>

The onlyAnalyze option declares a comma-separated list of classes/packages eligible for analysis.

The visitors/omitVisitors options are also comma-separated values; they are used to specify which detectors should/shouldn’t be run during the analysis. Note that visitors and omitVisitors cannot be used at the same time.

A detector is specified by its class name, without any package qualification. The details of all available detector class names can be found by following this link.

3. FindBugs Eclipse Plugin

3.1. Installation

The IDE installation of the FindBugs Plugin is pretty straightforward – you just need to use the software update feature in Eclipse, with the following update site: http://findbugs.cs.umd.edu/eclipse.

To make sure that FindBugs is properly installed in your Eclipse environment, look for the option labeled FindBugs under Window -> Preferences -> Java.

3.2. Reports Browsing

In order to launch a static analysis on a project using the FindBugs Eclipse plugin, you need to right-click the project in the package explorer, then click on the option labeled Find Bugs.

After launch, Eclipse shows the results in the Bug Explorer window.

As of version 2, FindBugs started ranking bugs on a scale from 1 to 20 to measure the severity of defects:

  • Scariest: ranked between 1 & 4.
  • Scary: ranked between 5 & 9.
  • Troubling: ranked between 10 & 14.
  • Of concern: ranked between 15 & 20.

While the bug rank describes severity, the confidence factor reflects the likelihood that flagged bugs are real ones. The confidence was originally called priority, but it was renamed in the new version.

Of course, some defects are open to interpretation, and some can even exist without causing any harm to the desired behavior of a piece of software. That’s why, in real-world situations, we need to configure static analysis tools properly by choosing a limited set of defects to activate in a specific project.

3.3. Eclipse Configuration

The FindBugs plugin makes it easy to customize the analysis strategy by offering various ways to filter warnings and limit the strictness of the results. You can check the configuration interface by going to Window -> Preferences -> Java -> FindBugs.


You can freely uncheck unwanted categories, raise the minimum rank to report, specify the minimum confidence to report, and customize markers for bug ranks: Warning, Info, or Error.

FindBugs divides defects into many categories:

  • Correctness – gathers general bugs, e.g. infinite loops, inappropriate use of equals(), etc
  • Bad practice, e.g. exceptions handling, opened streams, Strings comparison, etc
  • Performance, e.g. idle objects
  • Multithreaded correctness – gathers synchronization inconsistencies and various problems in a multi-threaded environment
  • Internationalization – gathers problems related to encoding and application’s internationalization
  • Malicious code vulnerability – gathers vulnerabilities in code, e.g. code snippets that can be exploited by potential attackers
  • Security – gathers security holes related to specific protocols or SQL injections
  • Dodgy – gathers code smells, e.g. useless comparisons, null checks, unused variables, etc

Under the Detector configuration tab, you can check the rules your project is supposed to respect.


The speed attribute reflects how costly the analysis will be. The faster the detector, the smaller the resources consumed to run it.

You can find the exhaustive list of bugs recognized by FindBugs at the official documentation page.

Under the Filter files panel, you can create custom file filters in order to include/exclude parts of the code base. This feature is useful, for example, when you want to prevent defects in “unmanaged” or “trash” code from popping up in the reports, or to exclude all classes from the test package.

4. FindBugs IntelliJ IDEA Plugin

4.1. Installation

If you are an IntelliJ IDEA fan and you want to start inspecting Java code using FindBugs, you can simply grab the plugin installation package from the official JetBrains site and extract it to the folder %INSTALLATION_DIRECTORY%/plugins. Restart your IDE and you’re good to go.

Alternatively, you can navigate to Settings -> Plugins and search all repositories for the FindBugs plugin.

At the time of writing this article, version 1.0.1 of the IntelliJ IDEA plugin has just come out.

To make sure that the FindBugs plugin is properly installed, check for the option labeled “Analyze project code” under Analyze -> FindBugs.

4.2. Reports Browsing

In order to launch static analysis in IDEA, click on “Analyze project code” under Analyze -> FindBugs, then look for the FindBugs-IDEA panel to inspect the results.


You can use the second column of commands on the left side of the panel to group defects using different factors:

  1. Group by a bug category.
  2. Group by a class.
  3. Group by a package.
  4. Group by a bug rank.

It is also possible to export the reports in XML/HTML format by clicking the “export” button in the fourth column of commands.

4.3. Configuration

The FindBugs plugin preferences page inside IDEA is pretty self-explanatory.


This settings window is quite similar to the one we’ve seen in Eclipse, so you can perform all kinds of configuration in an analogous fashion: analysis effort level, bug ranking, confidence, class filtering, etc.

The preferences panel can be accessed inside IDEA by clicking the “Plugin preferences” icon under the FindBugs-IDEA panel.

5. Report Analysis for the Spring-Rest Project

In this section we’re going to shed some light on a static analysis done on the spring-rest project available on Github as an example:


Most of the defects are minor (Of Concern), but let’s see what we can do to fix some of them.

Method ignores exceptional return value:

File fileServer = new File(fileName);
fileServer.createNewFile();

As you can probably guess, FindBugs is complaining about the fact that we’re throwing away the return value of the createNewFile() method. A possible fix would be to store the returned value in a newly declared variable, then log something meaningful at the DEBUG level, e.g. “The named file does not exist and was successfully created” if the returned value is true.
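
A sketch of that fix, where the logger is assumed to be whatever logging facade the class already uses:

File fileServer = new File(fileName);
boolean created = fileServer.createNewFile();
if (created) {
    logger.debug("The named file did not exist and was successfully created");
}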

Method may fail to close stream on exception: this particular defect illustrates a typical use case for exception handling, which suggests always closing streams in a finally block:

try {
    DateFormat dateFormat 
      = new SimpleDateFormat("yyyy_MM_dd_HH.mm.ss");
    String fileName = dateFormat.format(new Date());
    File fileServer = new File(fileName);
    fileServer.createNewFile();
    byte[] bytes = file.getBytes();
    BufferedOutputStream stream 
      = new BufferedOutputStream(new FileOutputStream(fileServer));
    stream.write(bytes);
    stream.close();
    return "You successfully uploaded " + username;
} catch (Exception e) {
    return "You failed to upload " + e.getMessage();
}

When an exception is thrown before the stream.close() instruction, the stream is never closed; that’s why it’s always preferable to make use of a finally block to close streams opened during a try/catch routine.
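
Alternatively, if the project targets Java 7 or later, a try-with-resources block is an even simpler way to guarantee the stream gets closed; a sketch of the fix:

try (BufferedOutputStream stream
  = new BufferedOutputStream(new FileOutputStream(fileServer))) {
    // the stream is closed automatically, even if write throws
    stream.write(bytes);
}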

An Exception is caught when Exception is not thrown: as you may already know, catching Exception is a bad coding practice; FindBugs thinks you should catch a more specific exception so you can handle it properly. So, when manipulating streams in a Java class, catching IOException would be more appropriate than catching the more generic Exception.

Field not initialized in the constructor but dereferenced without null check: it is always a good idea to initialize fields inside constructors; otherwise, we have to live with the possibility that the code will raise an NPE. Thus, it is recommended to perform null checks whenever we’re not sure if a variable has been properly initialized.

6. Conclusion

In this article, we’ve covered the basic key points to use and customize FindBugs in a Java project.

As you can see, FindBugs is a powerful yet simple static analysis tool; it helps detect potential quality holes in your system, if tuned and used correctly.

Finally, it is worth mentioning that FindBugs can also be run as part of a separate continuous automatic code review tool like Sputnik, which can be very helpful to give the reports a lot more visibility.

The sample code we used for static analysis is available over on GitHub.


Introduction into Intercepting Filter Pattern in Java



1. Overview

In this tutorial, we’re going to introduce the Intercepting Filter Pattern, a presentation-tier Core J2EE pattern.

This is the second tutorial in our Pattern Series and a follow-up to the Front Controller Pattern guide which can be found here.

Intercepting Filters are filters that trigger actions before or after an incoming request is processed by a handler.

Intercepting filters represent centralized components in a web application, common to all requests and extensible without affecting existing handlers.

2. Use Cases

Let’s extend the example from the previous guide and implement an authentication mechanism, request logging, and a visitor counter. In addition, we want the ability to deliver our pages in various different encodings.

All these are use cases for intercepting filters because they are common to all requests and should be independent of the handlers.

3. Filter Strategies

Let us introduce the different filter strategies with exemplary use cases. To run the code with the Jetty Servlet container, simply execute:

$> mvn install jetty:run

3.1. Custom Filter Strategy

The custom filter strategy is used in every use case that requires ordered processing of requests, meaning that one filter is based on the results of a previous filter in an execution chain.

These chains will be created by implementing the FilterChain interface and registering various Filter classes with it.

When using multiple filter chains with different concerns, you can join them together in a filter manager:

Intercepting Filter - Custom Filter Strategy

In our example, the visitor counter works by counting unique usernames of logged-in users, which means it’s based on the result of the authentication filter; therefore, both filters have to be chained.

Let’s implement this filter chain.

First, we’ll create an authentication filter, which checks whether the session has a ‘username’ attribute set and issues a login procedure if not:

public class AuthenticationFilter implements Filter {
    ...
    @Override
    public void doFilter(
      ServletRequest request,
      ServletResponse response, 
      FilterChain chain) {
        HttpServletRequest httpServletRequest = (HttpServletRequest) request;
        HttpServletResponse httpServletResponse = (HttpServletResponse) response;
        
        HttpSession session = httpServletRequest.getSession(false);
        if (session == null || session.getAttribute("username") == null) {
            FrontCommand command = new LoginCommand();
            command.init(httpServletRequest, httpServletResponse);
            command.process();
        } else {
            chain.doFilter(request, response);
        }
    }
    
    ...
}

Now let’s create the visitor counter. This filter maintains a HashSet of unique usernames and adds a ‘counter’ attribute to the request:

public class VisitorCounterFilter implements Filter {
    private static Set<String> users = new HashSet<>();

    ...
    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) {
        HttpSession session = ((HttpServletRequest) request).getSession(false);
        Optional.ofNullable(session.getAttribute("username"))
          .map(Object::toString)
          .ifPresent(users::add);
        request.setAttribute("counter", users.size());
        chain.doFilter(request, response);
    }

    ...
}

Next, we’ll implement a FilterChain that iterates over the registered filters and executes their doFilter methods:

public class FilterChainImpl implements FilterChain {
    private Iterator<Filter> filters;

    public FilterChainImpl(Filter... filters) {
        this.filters = Arrays.asList(filters).iterator();
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response) {
        if (filters.hasNext()) {
            Filter filter = filters.next();
            filter.doFilter(request, response, this);
        }
    }
}

To wire our components together, let’s create a simple static manager, which is responsible for instantiating the filter chain, registering its filters, and initiating it:

public class FilterManager {
    public static void process(HttpServletRequest request,
      HttpServletResponse response, OnIntercept callback) {
        FilterChain filterChain = new FilterChainImpl(
          new AuthenticationFilter(callback), new VisitorCounterFilter());
        filterChain.doFilter(request, response);
    }
}

As the last step, we’ll have to call our FilterManager as a common part of the request processing sequence from within our FrontCommand:

public abstract class FrontCommand {
    ...

    public void process() throws ServletException, IOException {
        FilterManager.process(request, response);
    }

    ...
}

3.2. Base Filter Strategy

In this section, we’ll present the Base Filter Strategy, with which a common superclass is used for all implemented filters.

This strategy plays nicely together with the custom strategy from the previous section or with the Standard Filter Strategy that we’ll introduce in the next section.

The abstract base class can be used to apply custom behavior that belongs to a filter chain. We’ll use it in our example to reduce boilerplate code related to filter configuration and debug logging:

public abstract class BaseFilter implements Filter {
    private Logger log = LoggerFactory.getLogger(BaseFilter.class);

    protected FilterConfig filterConfig;

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        log.info("Initialize filter: {}", getClass().getSimpleName());
        this.filterConfig = filterConfig;
    }

    @Override
    public void destroy() {
        log.info("Destroy filter: {}", getClass().getSimpleName());
    }
}

Let’s extend this base class to create a request logging filter, which we’ll integrate in the next section:

public class LoggingFilter extends BaseFilter {
    private static final Logger log = LoggerFactory.getLogger(LoggingFilter.class);

    @Override
    public void doFilter(
      ServletRequest request, 
      ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
        chain.doFilter(request, response);
        HttpServletRequest httpServletRequest = (HttpServletRequest) request;
        
        String username = Optional
          .ofNullable(httpServletRequest.getAttribute("username"))
          .map(Object::toString)
          .orElse("guest");
        
        log.info(
          "Request from '{}@{}': {}?{}", 
          username, 
          request.getRemoteAddr(),
          httpServletRequest.getRequestURI(), 
          request.getParameterMap());
    }
}

3.3. Standard Filter Strategy

A more flexible way of applying filters is to implement the Standard Filter Strategy. This can be done by declaring filters in a deployment descriptor or, since Servlet specification 3.0, by annotation.

The standard filter strategy allows us to plug new filters into a default chain without an explicitly defined filter manager:

Intercepting Filter - Standard Filter Strategy

Note that the order in which the filters are applied cannot be specified via annotation. If you need ordered execution, you have to stick with a deployment descriptor, as sketched below, or implement a custom filter strategy.
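In a deployment descriptor, filters mapped to the same target are invoked in the order in which their filter-mapping elements are declared. For instance, to run a logging filter before an encoding filter, we would simply declare the mappings in that order (the filter names here are illustrative):

<filter-mapping>
    <filter-name>logging-filter</filter-name>
    <servlet-name>intercepting-filter</servlet-name>
</filter-mapping>
<filter-mapping>
    <filter-name>encoding-filter</filter-name>
    <servlet-name>intercepting-filter</servlet-name>
</filter-mapping>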

Let’s implement an annotation driven encoding filter that also uses the base filter strategy:

@WebFilter(servletNames = {"intercepting-filter"}, 
  initParams = {@WebInitParam(name = "encoding", value = "UTF-8")})
public class EncodingFilter extends BaseFilter {
    private String encoding;

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        super.init(filterConfig);
        this.encoding = filterConfig.getInitParameter("encoding");
    }

    @Override
    public void doFilter(ServletRequest request,
      ServletResponse response, FilterChain chain) 
      throws IOException, ServletException {
        String encoding = Optional
          .ofNullable(request.getParameter("encoding"))
          .orElse(this.encoding);
        response.setCharacterEncoding(encoding); 
        
        chain.doFilter(request, response);
    }
}

In a scenario using a deployment descriptor, our web.xml would contain these extra declarations instead:

<filter>
    <filter-name>encoding-filter</filter-name>
    <filter-class>
      com.baeldung.patterns.intercepting.filter.filters.EncodingFilter
    </filter-class>
</filter>
<filter-mapping>
    <filter-name>encoding-filter</filter-name>
    <servlet-name>intercepting-filter</servlet-name>
</filter-mapping>

Let’s pick up our logging filter and annotate it as well, so that it gets used by the Servlet:

@WebFilter(servletNames = "intercepting-filter")
public class LoggingFilter extends BaseFilter {
    ...
}

3.4. Template Filter Strategy

The Template Filter Strategy is pretty much the same as the base filter strategy, except that it uses template methods declared in the base class that must be overridden in implementations:

Intercepting Filter - Template Filter Strategy

Let’s create a base filter class with two abstract filter methods that are called before and after further processing.

Since this strategy is less common and we don’t use it in our example, a concrete implementation and use case are largely up to your imagination – though we’ll sketch one possibility right after the base class:

public abstract class TemplateFilter extends BaseFilter {
    protected abstract void preFilter(HttpServletRequest request,
      HttpServletResponse response);

    protected abstract void postFilter(HttpServletRequest request,
      HttpServletResponse response);

    @Override
    public void doFilter(ServletRequest request,
      ServletResponse response, FilterChain chain) 
      throws IOException, ServletException {
        HttpServletRequest httpServletRequest = (HttpServletRequest) request;
        HttpServletResponse httpServletResponse = (HttpServletResponse) response;
        
        preFilter(httpServletRequest, httpServletResponse);
        chain.doFilter(request, response);
        postFilter(httpServletRequest, httpServletResponse);
    }
}
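As an illustration only – this is our own hypothetical example, not part of the original use case – an AuditFilter could measure how long the rest of the chain takes to process a request:

public class AuditFilter extends TemplateFilter {

    @Override
    protected void preFilter(HttpServletRequest request,
      HttpServletResponse response) {
        // remember when processing started for this request
        request.setAttribute("startTime", System.currentTimeMillis());
    }

    @Override
    protected void postFilter(HttpServletRequest request,
      HttpServletResponse response) {
        long duration = System.currentTimeMillis()
          - (Long) request.getAttribute("startTime");
        // log how long the downstream filters and the command took
        filterConfig.getServletContext().log(
          "Processed " + request.getRequestURI() + " in " + duration + " ms");
    }
}

Since the template method in TemplateFilter guarantees that postFilter() runs after chain.doFilter() returns, the measured duration covers all downstream processing.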

4. Conclusion

The Intercepting Filter Pattern captures cross-cutting concerns that can evolve independently of the business logic. From the perspective of business operations, filters are executed as a chain of pre or post actions.

As we’ve seen so far, the Intercepting Filter Pattern can be implemented using different strategies. In ‘real world’ applications, these approaches can be combined.

As usual, you’ll find the sources on GitHub.


A Secondary Facebook Login with Spring Social


1. Overview

In this tutorial, we’ll focus on adding a new Facebook login to an existing form-login app.

We’re going to be using the Spring Social support to interact with Facebook and keep things clean and simple.

2. Maven Configuration

First, we’ll need to add the spring-social-facebook dependency to our pom.xml:

<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-facebook</artifactId>
</dependency>

3. Security Config – Just Form Login

Let’s first start from the simple security configuration where we just have form-based authentication:

@Configuration
@EnableWebSecurity
@ComponentScan(basePackages = { "org.baeldung.security" })
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private UserDetailsService userDetailsService;

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth.userDetailsService(userDetailsService);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
        .csrf().disable()
        .authorizeRequests()
        .antMatchers("/login*").permitAll()
        .anyRequest().authenticated()
        .and()
        .formLogin().loginPage("/login").permitAll();
    } 
}

We’re not going to spend a lot of time on this config – if you want to understand it better, have a look at the form login article.

4. Security Config – Adding Facebook

Now, let’s add a new way to authenticate into the system – driven by Facebook:

public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private ConnectionFactoryLocator connectionFactoryLocator;

    @Autowired
    private UsersConnectionRepository usersConnectionRepository;

    @Autowired
    private FacebookConnectionSignup facebookConnectionSignup;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
        .authorizeRequests()
        .antMatchers("/login*","/signin/**","/signup/**").permitAll()
        ...
    } 

    @Bean
    public ProviderSignInController providerSignInController() {
        ((InMemoryUsersConnectionRepository) usersConnectionRepository)
          .setConnectionSignUp(facebookConnectionSignup);
        return new ProviderSignInController(
          connectionFactoryLocator, 
          usersConnectionRepository, 
          new FacebookSignInAdapter());
    }
}

Let’s carefully look at the new config:

  • we’re using a ProviderSignInController to enable the Facebook authentication
  • by sending a POST to “/signin/facebook” – this controller will initiate a user sign-in using the Facebook service provider
  • we’re setting up a SignInAdapter to handle the login logic in our application
  • and we’re also setting up a ConnectionSignUp to handle signing up users implicitly when they first authenticate with Facebook

5. The Sign-In Adapter

Simply put, this adapter is a bridge between the controller above – driving the Facebook user sign-in flow – and our specific local application:

public class FacebookSignInAdapter implements SignInAdapter {
    @Override
    public String signIn(String localUserId, Connection<?> connection, NativeWebRequest request) {
        SecurityContextHolder.getContext().setAuthentication(
          new UsernamePasswordAuthenticationToken(
          connection.getDisplayName(), null, 
          Arrays.asList(new SimpleGrantedAuthority("FACEBOOK_USER"))));
        
        return null;
    }
}

Note that users logged in via Facebook will have the role FACEBOOK_USER, while users logged in via the form will have the role USER.
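If, for instance, we wanted to reserve a part of the application for Facebook users only, a hypothetical rule could key off that authority (the /fb/** path is purely illustrative):

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
    .authorizeRequests()
    .antMatchers("/login*","/signin/**","/signup/**").permitAll()
    // hypothetical area accessible only to users who signed in via Facebook
    .antMatchers("/fb/**").hasAuthority("FACEBOOK_USER")
    .anyRequest().authenticated()
    .and()
    .formLogin().loginPage("/login").permitAll();
}

Note the use of hasAuthority rather than hasRole here – hasRole would look for a ROLE_ prefix that the plain FACEBOOK_USER authority granted by our adapter doesn’t carry.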

6. Connection Sign Up

When a user authenticates with Facebook for the first time, they have no existing account in our application.

This is the point where we need to create that account automatically for them; we’re going to be using a ConnectionSignUp to drive that user creation logic:

import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;

@Service
public class FacebookConnectionSignup implements ConnectionSignUp {

    @Autowired
    private UserRepository userRepository;

    @Override
    public String execute(Connection<?> connection) {
        User user = new User();
        user.setUsername(connection.getDisplayName());
        user.setPassword(randomAlphabetic(8));
        userRepository.save(user);
        return user.getUsername();
    }
}

As you can see, we created an account for the new user – using their DisplayName as username.

7. The Facebook Properties

Next, let’s configure Facebook properties in our application.properties:

spring.social.facebook.appId=YOUR_APP_ID
spring.social.facebook.appSecret=YOUR_APP_SECRET

Note that:

  • We need to create a Facebook application to obtain appId and appSecret
  • From the Facebook application Settings, make sure to add the Platform “Website” and that http://localhost:8080/ is set as the “Site URL”

8. The Front End

Finally, let’s take a look at our front end.

We’re now going to have support for these two authentication flows – form login and Facebook – on our login page:

<html>
<body>
<div th:if="${param.logout}">You have been logged out</div>
<div th:if="${param.error}">There was an error, please try again</div>

<form th:action="@{/login}" method="POST" >
    <input type="text" name="username" />
    <input type="password" name="password" />
    <input type="submit" value="Login" />
</form>
	
<form action="/signin/facebook" method="POST">
    <input type="hidden" name="scope" value="public_profile" />
    <input type="submit" value="Login using Facebook"/>
</form>
</body>
</html>

Finally – here’s the index.html:

<html>
<body>
<nav>
    <p sec:authentication="name">Username</p>      
    <a th:href="@{/logout}">Logout</a>                     
</nav>

<h1>Welcome, <span sec:authentication="name">Username</span></h1>
<p sec:authentication="authorities">User authorities</p>
</body>
</html>

Note how this index page is displaying usernames and authorities.

And that’s it – we now have two ways to authenticate into the application.

9. Conclusion

In this quick article we learned how to use spring-social-facebook to implement a secondary authentication flow for our application.

And of course, as always, the source code is fully available over on GitHub.



A Guide to the Java URL


1. Overview

In this article, we are going to explore low-level operations with Java network programming. We’ll be taking a deeper look at URLs.

A URL is a reference or an address to a resource on the network. And simply put, Java code communicating over the network can use the java.net.URL class to represent the addresses of resources.

The Java platform ships with built-in networking support, bundled up in the java.net package:

import java.net.*;

2. Creating a URL

Let’s first create a java.net.URL object by using its constructor and passing in a String representing the human-readable address of the resource:

URL url = new URL("http://www.baeldung.com/a-guide-to-java-sockets");

We’ve just created an absolute URL object. The address has all the parts required to reach the desired resource.

We can also create a relative URL; assuming we have the URL object representing the home page of Baeldung:

URL home = new URL("http://baeldung.com");

Next, let’s create a new URL pointing to a resource we already know; we’re going to use another constructor, that takes both an existing URL and a resource name relative to that URL:

URL url = new URL(home, "a-guide-to-java-sockets");

We have now created a new URL object url relative to home; so the relative URL is only valid within the context of the base URL.

We can see this in a test:

@Test
public void givenBaseUrl_whenCreatesRelativeUrl_thenCorrect() 
  throws MalformedURLException {
    URL baseUrl = new URL("http://baeldung.com");
    URL relativeUrl = new URL(baseUrl, "a-guide-to-java-sockets");
    
    assertEquals(
      "http://baeldung.com/a-guide-to-java-sockets", 
      relativeUrl.toString());
}

However, if the relative URL is detected to be absolute in its component parts, then the base URL is ignored:

@Test
public void givenAbsoluteUrl_whenIgnoresBaseUrl_thenCorrect() 
  throws MalformedURLException {
    URL baseUrl = new URL("http://baeldung.com");
    URL relativeUrl = new URL(
      baseUrl, "http://baeldung.com/a-guide-to-java-sockets");
    
    assertEquals(
      "http://baeldung.com/a-guide-to-java-sockets", 
      relativeUrl.toString());
}

Finally, we can create a URL by calling another constructor which takes in the component parts of the URL string. We will cover this in the next section after covering URL components.

3. URL Components

A URL is made up of a few components – which we’ll explore in this section.

Let’s first look at the separation between the protocol identifier and the resource – these two components are separated by a colon followed by two forward slashes i.e. ://. 

If we have a URL such as http://baeldung.com then the part before the separator, http, is the protocol identifier while the one that follows is the resource name, baeldung.com.

Let’s have a look at the API that the URL object exposes.

3.1. The Protocol

To retrieve the protocol – we use the getProtocol() method:

@Test
public void givenUrl_whenCanIdentifyProtocol_thenCorrect() 
  throws MalformedURLException {
    URL url = new URL("http://baeldung.com");
    
    assertEquals("http", url.getProtocol());
}

3.2. The Port

To get the port – we use the getPort() method:

@Test
public void givenUrl_whenGetsDefaultPort_thenCorrect() 
  throws MalformedURLException {
    URL url = new URL("http://baeldung.com");
    
    assertEquals(-1, url.getPort());
    assertEquals(80, url.getDefaultPort());
}

Note that this method retrieves the explicitly defined port. If no port is defined explicitly, it will return -1.

And because HTTP communication uses port 80 by default, no port is defined in this case.

Here’s an example where we do have an explicitly defined port:

@Test
public void givenUrl_whenGetsPort_thenCorrect() 
  throws MalformedURLException {
    URL url = new URL("http://baeldung.com:8090");
    
    assertEquals(8090, url.getPort());
}

3.3. The Host

The host is the part of the resource name that starts right after the :// separator and ends with the domain name extension, in our case .com.

We call the getHost() method to retrieve the hostname:

@Test
public void givenUrl_whenCanGetHost_thenCorrect() 
  throws MalformedURLException {
    URL url = new URL("http://baeldung.com");
    
    assertEquals("baeldung.com", url.getHost());
}

3.4. The File Name

Whatever follows after the hostname in a URL is referred to as the file name of the resource. It can include both path and query parameters or just a file name:

@Test
public void givenUrl_whenCanGetFileName_thenCorrect1() 
  throws MalformedURLException {
    URL url = new URL("http://baeldung.com/guidelines.txt");
    
    assertEquals("/guidelines.txt", url.getFile());
}

Assuming Baeldung has Java 8 articles under the URL http://baeldung.com/articles?topic=java&version=8, everything after the hostname is the file name:

@Test
public void givenUrl_whenCanGetFileName_thenCorrect2() 
  throws MalformedURLException {
    URL url = new URL(
      "http://baeldung.com/articles?topic=java&version=8");
    
    assertEquals("/articles?topic=java&version=8", url.getFile());
}

3.5. Path Parameters

We can also inspect just the path portion, which in our case is /articles:

@Test
public void givenUrl_whenCanGetPathParams_thenCorrect() 
  throws MalformedURLException {
    URL url = new URL(
      "http://baeldung.com/articles?topic=java&version=8");
    
    assertEquals("/articles", url.getPath());
}

3.6. Query Parameters

Likewise, we can inspect the query string, which is topic=java&version=8:

@Test
public void givenUrl_whenCanGetQueryParams_thenCorrect() 
  throws MalformedURLException {
    URL url = new URL("http://baeldung.com/articles?topic=java&version=8");
    
    assertEquals("topic=java&version=8", url.getQuery());
}

4. Creating URL with Component Parts

Since we have now looked at the different URL components and their place in forming the complete address to the resource, we can look at another method of creating a URL object by passing in the component parts.

The first constructor takes the protocol, the hostname and the file name respectively:

@Test
public void givenUrlComponents_whenConstructsCompleteUrl_thenCorrect() 
  throws MalformedURLException {
    String protocol = "http";
    String host = "baeldung.com";
    String file = "/guidelines.txt";
    URL url = new URL(protocol, host, file);
    
    assertEquals("http://baeldung.com/guidelines.txt", url.toString());
}

Keep in mind the meaning of file name in this context; the following test should make it clearer:

@Test
public void givenUrlComponents_whenConstructsCompleteUrl_thenCorrect2() 
  throws MalformedURLException {
    String protocol = "http";
    String host = "baeldung.com";
    String file = "/articles?topic=java&version=8";
    URL url = new URL(protocol, host, file);
    
    assertEquals(
      "http://baeldung.com/articles?topic=java&version=8", url.toString());
}

The second constructor takes the protocol, the hostname, the port number and the filename respectively:

@Test
public void givenUrlComponentsWithPort_whenConstructsCompleteUrl_thenCorrect()
  throws MalformedURLException {
    String protocol = "http";
    String host = "baeldung.com";
    int port = 9000;
    String file = "/guidelines.txt";
    URL url = new URL(protocol, host, port, file);
    
    assertEquals(
      "http://baeldung.com:9000/guidelines.txt", url.toString());
}

5. Conclusion

In this tutorial, we covered the URL class and showed how to use it in Java to access network resources programmatically.

As always, the full source code for the article and all code snippets can be found in the GitHub project.


A Guide To HTTP Cookies In Java


1. Overview

In this article, we are going to explore low-level operations with Java network programming. We’ll be taking a deeper look at Cookies.

The Java platform ships with built-in networking support, bundled up in the java.net package:

import java.net.*;

2. HTTP Cookies

Whenever a client sends an HTTP request to a server and receives a response for it, the server forgets about this client. The next time the client requests again, it will be seen as a totally new client.

However, cookies, as we know, make it possible to establish a session between the client and server such that the server can remember the client across multiple request response pairs.

From this section henceforth, we will learn how to use cookies to enhance our client-server communications in Java network programming.

The main class in the java.net package for handling cookies is CookieHandler. There are other helper classes and interfaces such as CookieManager, CookiePolicy, CookieStore, and HttpCookie.

3. The CookieHandler Class

Consider this scenario: when we communicate with the server at http://baeldung.com, or any other URL that uses the HTTP protocol, the URL object will be using an engine called the HTTP protocol handler.

This HTTP protocol handler checks if there is a default CookieHandler instance in the system. If there is, it invokes it to take charge of state management.

So the purpose of the CookieHandler class is to provide a callback mechanism for the benefit of the HTTP protocol handler.

CookieHandler is an abstract class. It has a static getDefault() method that can be called to retrieve the current CookieHandler installation, or we can call setDefault(CookieHandler) to set our own. Note that calling setDefault installs a CookieHandler object on a system-wide basis.

It also has put(uri, responseHeaders) for saving any cookies to the cookie store. These cookies are retrieved from the response headers of the HTTP response from the given URI. It’s called every time a response is received.

A related API method – get(uri, requestHeaders) – retrieves the cookies saved under the given URI and adds them to the requestHeaders. It’s called just before a request is made.

These methods must all be implemented in a concrete class of CookieHandler. At this point, the CookieManager class is worth our attention. This class offers a complete implementation of the CookieHandler class for the most common use cases.

In the next two sections, we are going to explore the CookieManager class; first in its default mode and later in custom mode.

4. The Default CookieManager

To have a complete cookie management framework, we need to have implementations of CookiePolicy and CookieStore.

CookiePolicy establishes the rules for accepting and rejecting cookies. We can of course change these rules to suit our needs.

Next – CookieStore does exactly what its name suggests; it has methods for saving and retrieving cookies. Naturally, we can tweak the storage mechanism here as well if we need to.

Let’s first look at the defaults. To create the default CookieHandler and set it as the system-wide default:

CookieManager cm = new CookieManager();
CookieHandler.setDefault(cm);

We should note that the default CookieStore will have volatile memory i.e. it only lives for the lifetime of the JVM. To have a more persistent storage for cookies, we must customize it.

When it comes to CookiePolicy, the default implementation is CookiePolicy.ACCEPT_ORIGINAL_SERVER. This means that if the response is received through a proxy server, then the cookie will be rejected.

5. A Custom CookieManager

Let’s now customize the default CookieManager by providing our own instance of CookiePolicy or CookieStore (or both).

5.1. CookiePolicy

CookiePolicy provides some pre-defined implementations for convenience:

  • CookiePolicy.ACCEPT_ORIGINAL_SERVER – only cookies from the original server can be saved (the default implementation)
  • CookiePolicy.ACCEPT_ALL – all cookies can be saved regardless of their origin
  • CookiePolicy.ACCEPT_NONE – no cookies can be saved

To simply change the current CookiePolicy without implementing our own, we call setCookiePolicy on the CookieManager instance:

CookieManager cm = new CookieManager();
cm.setCookiePolicy(CookiePolicy.ACCEPT_ALL);

But we can do a lot more customization than this. Knowing the behavior of CookiePolicy.ACCEPT_ORIGINAL_SERVER, let’s assume we trust a particular proxy server and would like to accept cookies from it on top of the original server.

We’ll have to implement the CookiePolicy interface and implement the shouldAccept method; it’s here where we’ll change the acceptance rule by adding the chosen proxy server’s domain name.

Let’s call the new policy ProxyAcceptCookiePolicy. Its shouldAccept implementation accepts cookies coming from the given proxy address, and otherwise delegates to the shouldAccept method of CookiePolicy.ACCEPT_ORIGINAL_SERVER to complete the implementation:

public class ProxyAcceptCookiePolicy implements CookiePolicy {
    private String acceptedProxy;

    public boolean shouldAccept(URI uri, HttpCookie cookie) {
        try {
            String host = InetAddress.getByName(uri.getHost())
              .getCanonicalHostName();
            // accept cookies coming from the trusted proxy
            if (HttpCookie.domainMatches(acceptedProxy, host)) {
                return true;
            }
        } catch (UnknownHostException e) {
            // if the host can't be resolved, fall back to the default policy
        }

        return CookiePolicy.ACCEPT_ORIGINAL_SERVER
          .shouldAccept(uri, cookie);
    }

    // standard constructors
}

When we create an instance of ProxyAcceptCookiePolicy, we pass in a String of the domain address we would like to accept cookies from in addition to the original server.

We then set this instance as the cookie policy of the CookieManager instance before setting it as the default CookieHandler:

CookieManager cm = new CookieManager();
cm.setCookiePolicy(new ProxyAcceptCookiePolicy("baeldung.com"));
CookieHandler.setDefault(cm);

This way, the cookie handler will accept all cookies from the original server and also those from the baeldung.com domain.

5.2. CookieStore

CookieManager adds the cookies to the CookieStore for every HTTP response and retrieves cookies from the CookieStore for every HTTP request.

The default CookieStore implementation does not have persistence; rather, it loses all its data when the JVM is restarted – much like RAM in a computer.

So if we would like our CookieStore implementation to behave like the hard disk and retain the cookies across JVM restarts, we must customize its storage and retrieval mechanism.

One thing to note is that we cannot pass a CookieStore instance to CookieManager after creation. Our only options are to pass it during the creation of the CookieManager – a sketch follows – or to obtain a reference to the default instance by calling new CookieManager().getCookieStore() and complementing its behavior.
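A minimal sketch of the first option, assuming the custom PersistentCookieStore implementation that we’re about to build:

CookieManager cm = new CookieManager(
  new PersistentCookieStore(), CookiePolicy.ACCEPT_ORIGINAL_SERVER);
CookieHandler.setDefault(cm);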

Here is the implementation of PersistentCookieStore:

public class PersistentCookieStore implements CookieStore, Runnable {
    private CookieStore store;

    public PersistentCookieStore() {
        store = new CookieManager().getCookieStore();
        // deserialize cookies into store
        Runtime.getRuntime().addShutdownHook(new Thread(this));
    }

    @Override
    public void run() {
        // serialize cookies to persistent storage
    }

    @Override
    public void add(URI uri, HttpCookie cookie) {
        store.add(uri, cookie);
    }
    
    // delegate all implementations to store object like above
}

Notice that we retrieved a reference to the default implementation in the constructor.

We implement Runnable so that we can add a shutdown hook that runs when the JVM is shutting down. Inside the run method, we persist all our cookies to storage.

We can serialize the data into a file or any suitable storage. Notice also that inside the constructor, we first read all cookies from persistent storage into the CookieStore. These two simple features make the default CookieStore essentially persistent (in a simplistic way).
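As a minimal sketch of what run() could look like – the cookies.txt file name and the line format here are our own choices, and since HttpCookie is not Serializable, some textual or custom format is needed anyway:

@Override
public void run() {
    // write one "URI|name=value" line per cookie; best-effort on shutdown
    try (PrintWriter out = new PrintWriter(new FileWriter("cookies.txt"))) {
        for (URI uri : store.getURIs()) {
            for (HttpCookie cookie : store.get(uri)) {
                out.println(uri + "|" + cookie.getName() + "=" + cookie.getValue());
            }
        }
    } catch (IOException e) {
        // if persisting fails, we simply lose the cookies for the next run
    }
}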

6. Conclusion

In this tutorial, we covered HTTP cookies and showed how to access and manipulate them programmatically.

The full source code for the article and all code snippets can be found in the GitHub project.


Guide to the Java 8 forEach


1. Overview

Introduced in Java 8, the forEach loop provides programmers with a new, concise and interesting way of iterating over a collection.

In this article, we’ll see how to use forEach with collections, what kind of argument it takes and how this loop differs from the enhanced for-loop.

If you need to brush up on some Java 8 concepts, we have a collection of articles that can help you.

2. Basics of forEach

In Java, the Collection interface has Iterable as its super interface – and starting with Java 8, this interface has a new API:

void forEach(Consumer<? super T> action)

Simply put, the Javadoc of forEach states that it “performs the given action for each element of the Iterable until all elements have been processed or the action throws an exception.” 

And so, with forEach, we can iterate over a collection and perform a given action on each element – by just passing a class that implements the Consumer interface.

2.1. The Consumer Interface

The Consumer interface is a functional interface (an interface with a single abstract method). It accepts an input and returns no result.

Here’s the definition:

@FunctionalInterface
public interface Consumer<T> {
    void accept(T t);
}

With that in mind, let’s consider the example below:

List<String> names = new ArrayList<>();
names.add("Larry");
names.add("Steve");
names.add("James");
names.add("Conan");
names.add("Ellen");

names.forEach(new Consumer<String>() {
    public void accept(String name) {
        System.out.println(name);
    }
});

We can create an anonymous inner class that implements the Consumer interface and then define the action to perform on each element in the collection (in this case just printing out the name).

This works well, but if we analyze the example above, we’ll see that the part that is actually of use is the code inside the accept() method.

The major benefit of Java 8 functional interfaces is that we can use lambda expressions to instantiate them and avoid using bulky anonymous class implementations.

So let’s consider the following:

Consumer<String> consumerNames = name -> {
    System.out.println(name);
};

names.forEach(consumerNames);

Now we have a Consumer that we can pass as an argument to the forEach method.

Also, we can use the Consumer in other forEach loops in the application.

Lambdas do have a very real learning curve, so if you’re getting started, this writeup goes over some good practices of working with the new language feature.

3. forEach vs for-loop

From a simple point of view, both loops provide the same functionality – loop through elements in a collection.

The main difference between the two of them is that they are different iterators – the enhanced for-loop is an external iterator whereas the new forEach method is an internal one.

3.1. Internal Iterator

This type of iterator manages the iteration in the background and leaves the programmer to just code what is meant to be done with the elements of the collection, rather than managing the iteration and making sure that all the elements are processed one by one.

Let’s see an example of an internal iterator:

names.forEach(name -> System.out.println(name));

In the forEach method above, we can see that the argument provided is a lambda expression. This means that the method only needs to know what is to be done and all the work of iterating will be taken care of internally.

3.2. External Iterator

External iterators mix what is to be done with how the iteration is performed.

Enumerations, Iterators and the enhanced for-loop are all external iterators (remember the methods iterator(), next() or hasNext()?). With all of these, it’s our job to specify how the iteration will be performed.

Consider this familiar loop:

for (String name : names) {
    System.out.println(name);
}

Though we are not explicitly invoking the hasNext() or next() methods while iterating over the list, the underlying code that makes this iteration work uses them. This implies that the complexity of these operations is hidden from the programmer, but it still exists.
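To make that concrete, the enhanced for-loop above is roughly equivalent to this explicit external iteration:

Iterator<String> iterator = names.iterator();
while (iterator.hasNext()) {
    String name = iterator.next();
    System.out.println(name);
}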

Contrary to an internal iterator, in which the collection does the iteration itself, here we require external code that takes every element out of the collection.

4. Using the forEach Method

As we saw earlier in the article, in order to use this method we need to pass it an implementation of the Consumer interface as an argument.

We already know that since the Consumer is a functional interface we can use a lambda expression to simplify the code. But there is also another kind of argument to use with this method.

Let’s see the three most common ways of using the forEach method.

4.1. Anonymous Consumer Implementation

We already saw this implementation. We instantiated an implementation of the Consumer interface using an anonymous class implementation and then passed it as an argument to the forEach method.

Here is the complete example:


Consumer<String> consumerNames = new Consumer<String>() {
    public void accept(String name) {
        System.out.println(name);
    }
};
names.forEach(consumerNames);

In the example above, we want to print out each element of the list. In order to do this, we instantiate a Consumer (consumerNames) using an anonymous class implementation that just prints strings. The Consumer is then passed as a parameter to the forEach method.

Although lambda expressions are now the more natural and concise way to do this, it’s still worth knowing how to implement the Consumer interface.

4.2. A Lambda Expression

Here, the lambda expression is an anonymous implementation of the single abstract method of the functional interface.

names.forEach(name -> System.out.println(name));

Since the introduction of lambda expressions in Java 8, this is probably the most common way to use the forEach method.

4.3. A Method Reference

In a case where a method already exists to perform an operation on the class, this syntax can be used instead of the normal lambda expression syntax:

names.forEach(System.out::println);

5. Conclusion

In this article, we showed how the forEach method can be more convenient than the normal for-loop.

We also saw how the forEach method works and what kind of argument it can receive in order to perform an action on each element in the collection.

You can find the examples used in this article in our GitHub repository.


Introduction to Spring Cloud Rest Client with Netflix Ribbon


1. Introduction

Netflix Ribbon is an Inter Process Communication (IPC) cloud library. Ribbon primarily provides client-side load balancing algorithms.

Apart from client-side load balancing algorithms, Ribbon also provides other features:

  • Service Discovery Integration – Ribbon load balancers provide service discovery in dynamic environments like the cloud. Integration with the Eureka service discovery component from Netflix is included in the Ribbon library
  • Fault Tolerance – the Ribbon API can dynamically determine whether the servers are up and running in a live environment and can detect those servers that are down
  • Configurable load-balancing rules – Ribbon supports RoundRobinRule, AvailabilityFilteringRule, WeightedResponseTimeRule out of the box and also supports defining custom rules

The Ribbon API works based on the concept of a “named client”: while configuring Ribbon in our application configuration file, we provide a name for the list of servers included in the load balancing.

Let’s take it for a spin.

2. Dependency Management

The Netflix Ribbon API can be added to our project by adding the following dependency to our pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-ribbon</artifactId>
</dependency>

The latest libraries can be found here.

3. Example Application

To see the Ribbon API at work, we’ll build a sample microservice application with Spring RestTemplate and enhance it with the Netflix Ribbon API along with the Spring Cloud Netflix API.

We’ll use one of Ribbon’s load-balancing strategies, WeightedResponseTimeRule, to enable the client side load balancing between 2 servers, which are defined under a named client in the configuration file, in our application.

4. Ribbon Configuration

Ribbon API enables us to configure the following components of the load balancer:

  • Rule – Logic component which specifies the load balancing rule we are using in our application
  • Ping – A Component which specifies the mechanism we use to determine the server’s availability in real-time
  • ServerList – can be dynamic or static. In our case, we are using a static list of servers and hence we are defining them in the application configuration file directly

Let’s write a simple configuration for the library:

public class RibbonConfiguration {

    @Autowired
    IClientConfig ribbonClientConfig;

    @Bean
    public IPing ribbonPing(IClientConfig config) {
        return new PingUrl();
    }

    @Bean
    public IRule ribbonRule(IClientConfig config) {
        return new WeightedResponseTimeRule();
    }
}

Notice how we used the WeightedResponseTimeRule rule to determine the server and the PingUrl mechanism to determine the server’s availability in real time.

According to this rule, each server is given a weight based on its average response time – the lower the response time, the higher the weight. The rule then randomly selects a server, where the probability of being picked is determined by the server’s weight.

And the PingUrl will ping every URL to determine the server’s availability.

5. application.yml

Below is the application.yml configuration file we created for this sample application:

spring:
  application:
    name: spring-cloud-ribbon

server:
  port: 8888

ping-server:
  ribbon:
    eureka:
      enabled: false
    listOfServers: localhost:9092,localhost:9999
    ServerListRefreshInterval: 15000

In the above file, we specified:

  • Application name
  • Port number of the application
  • Named client for the list of servers: “ping-server”
  • Disabled Eureka service discovery component, by setting eureka: enabled to false
  • Defined the list of servers available for load balancing, in this case, 2 servers
  • Configured the server refresh rate with ServerListRefreshInterval

6. RibbonClient

Let’s now set up the main application component snippet – where we use the RibbonClient to enable the load balancing instead of the plain RestTemplate:

@SpringBootApplication
@RestController
@RibbonClient(
  name = "ping-a-server",
  configuration = RibbonConfiguration.class)
public class ServerLocationApp {

    @LoadBalanced
    @Bean
    RestTemplate getRestTemplate() {
        return new RestTemplate();
    }

    @Autowired
    RestTemplate restTemplate;

    @RequestMapping("/server-location")
    public String serverLocation() {
        return this.restTemplate.getForObject(
          "http://ping-server/locaus", String.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(ServerLocationApp.class, args);
    }
}

We defined a controller class with the annotation @RestController; we also annotated the class with @RibbonClient with a name and a configuration class.

The configuration class we defined here is the same class that we defined before in which we provided the desired Ribbon API configuration for this application.

Notice we also annotated the RestTemplate with @LoadBalanced which suggests that we want this to be load balanced and in this case with Ribbon.

7. Failure Resiliency in Ribbon

As we discussed earlier in this article, the Ribbon API not only provides client-side load balancing algorithms, but also has built-in failure resiliency.

As stated before, the Ribbon API can determine a server’s availability by constantly pinging the servers at regular intervals and is capable of skipping the servers that are not live.

In addition to that, it also implements the Circuit Breaker pattern to filter out servers based on specified criteria.

The Circuit Breaker pattern minimizes the impact of a server failure on performance by swiftly rejecting requests to a failing server without waiting for a time-out. We can disable this Circuit Breaker feature by setting the property niws.loadbalancer.availabilityFilteringRule.filterCircuitTripped to false.

When all servers are down, and thus no server is available to serve the request, the PingUrl check will fail and we’ll receive a java.lang.IllegalStateException with the message “No instances are available to serve the request”.

8. Conclusion

In this article, we discussed Netflix Ribbon API and its implementation in a simple sample application.

The complete source code for the example described above can be found on the GitHub repository.


Java Web Weekly, Issue 150


1. Spring and Java

>> Oracle Presents First Proposal for Value Types Implementation [infoq.com]

The value types proposal is being approached in a gradual, intelligent manner – first the JVM support and then the actual language support.

>> A beginner’s guide to SQL injection and how you should prevent it [vladmihalcea.com]

A monster writeup about Hibernate, SQL injection, and staying clear of it.

So many of the huge data breaches this year have been SQL injection attacks – it’s definitely worth learning more about the technique.

>> Add Stormpath to Your JHipster Application [stormpath.com]

A quick and interesting integration.

>> Inside Java 9 – Performance, Compiler, and More [sitepoint.com]

A deep-dive into the JVM internals coming in Java 9, and a discussion of the various optimizations that are being developed in the new release.

>> Meet Thorben Janssen [in.relation.to]

A quick discussion about Hibernate and building a training business and the future of the framework.

>> How Java developers can use the Wiremock framework to simulate HTTP-based APIs [infoq.com]

A good, simple intro to Wiremock.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Ubiquiti all the things: how I finally fixed my dodgy wifi [troyhunt.com]

Hmm, makes me reconsider my home setup. If you’re a gear nut like me, you’re going to enjoy this one.

>> Be Careful About What You Dislike [lucumr.pocoo.org]

Interesting points about forming and cementing opinions, especially around tech.

>> From Developer to Consultant [daedtech.com]

An interesting read about the lay of the land for developers that want to do higher leverage work.

>> Negotiating for time [dandreamsofcoding.com]

Great advice for the final stretch of the job hunting process. Keeping your cool, being open, enthusiastic but also in control – all good stuff.

>> The Developer Feedback Loop [daedtech.com]

A shorter feedback loop is the driving force behind oh-so many technologies and techniques we can now take for granted, such as TDD, CI, CD, etc.

Definitely worth keeping a close eye on what kind of feedback loop we’re getting from our systems.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> People all over the world continued to act like idiots [dilbert.com]

>> You picked a bad time to become insightful [dilbert.com]

>> I’m willing to touch a rat that touches you [dilbert.com]

4. Pick of the Week

>> Let Your Workers Rebel [hbr.org]

Java 9 Stream API Improvements


1. Overview

In this quick write-up, we’re going to focus on the interesting new Stream API improvements coming in Java 9.

2. Stream takeWhile/dropWhile

Discussions about these methods have appeared repeatedly on StackOverflow (the most popular is this one).

Imagine that we want to generate a Stream of Strings by adding one character to the previous Stream’s value until the length of the current value in this Stream is lower than 10.

How would we solve this in Java 8? We could use one of the short-circuiting intermediate operations like limit or allMatch, which actually serve other purposes, or write our own takeWhile implementation based on a Spliterator – which, in turn, complicates such a simple problem.

With Java 9, the solution is easy:

Stream<String> stream = Stream.iterate("", s -> s + "s")
  .takeWhile(s -> s.length() < 10);

The takeWhile operation takes a Predicate which is applied to elements to determine the longest prefix of these elements (if a stream is ordered) or a subset of the stream’s elements (when a stream is unordered).

To move forward, we had better understand what terms “the longest prefix” and “a Stream’s subset” mean:

  • the longest prefix is a contiguous sequence of elements of the stream that match the given predicate. The first element of the sequence is the first element of this stream, and the element immediately following the last element of the sequence does not match the given predicate
  • a Stream’s subset is a set of some (but not all) of the Stream’s elements that match the given predicate.

After introducing these key terms, we can easily comprehend another new dropWhile operation.

It does exactly the opposite of takeWhile. If a stream is ordered, dropWhile returns a stream consisting of the remaining elements of this Stream after dropping the longest prefix of elements that match the given predicate.

Otherwise, if a Stream is unordered, dropWhile returns a stream consisting of the remaining elements of this Stream after dropping a subset of elements that match the given predicate.

Let’s throw away the first five elements using the Stream obtained above:

stream.dropWhile(s -> !s.contains("sssss"));

Simply put, the dropWhile operation will remove elements while the given predicate returns true for them, and stops removing at the first element for which the predicate returns false.
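To make the semantics concrete, here is a small sketch of our own (the values are not from the example above), run on an ordered stream:

Stream.of(1, 2, 3, 4, 1, 2)
  .dropWhile(i -> i < 3)
  .forEach(System.out::println);
// prints 3 4 1 2 – dropping stops for good at the first non-matching element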

3. Stream iterate

The next new feature is the overloaded iterate method for finite Stream generation. Not to be confused with the existing one-argument variant, which returns an infinite sequential ordered Stream produced by some function.

A new iterate slightly modifies this method by adding a predicate which applies to elements to determine when the Stream must terminate. Its usage is very convenient and concise:

Stream.iterate(0, i -> i < 10, i -> i + 1)
  .forEach(System.out::println);

It corresponds to the following for statement:

for (int i = 0; i < 10; ++i) {
    System.out.println(i);
}

4. Stream ofNullable

There are situations when we need to put an element into a Stream. Sometimes, this element may be null, but we don’t want our Stream to contain such values. This forces us to write either an if statement or a ternary operator to check whether an element is null.

Assuming that the collection and map variables have been created and filled successfully, have a look at the following example:

collection.stream()
  .flatMap(s -> {
      Integer temp = map.get(s);
      return temp != null ? Stream.of(temp) : Stream.empty();
  })
  .collect(Collectors.toList());

To avoid such boilerplate code, the ofNullable method has been added to the Stream class. With this method the preceding sample can be simply transformed into:

collection.stream()
  .flatMap(s -> Stream.ofNullable(map.get(s)))
  .collect(Collectors.toList());

5. Conclusion

We considered the major changes to the Stream API in Java 9 and how these improvements will help us write more expressive programs with less effort.

As always, the code snippets can be found over on GitHub.


Custom Error Pages with Spring MVC


1. Overview

A common requirement in any web application is customized error pages.

For instance, suppose you’re running a vanilla Spring MVC app on top of Tomcat. A user enters an invalid URL in his browser and is shown a not-so-user-friendly blue and white stack trace – not ideal.

In this tutorial we’ll set up customized error pages for a few HTTP error codes.

The working assumption is that the reader is fairly comfortable working with Spring MVC; if not, this is a great way to start.

2. The Simple Steps

Let’s start with the simple steps we’re going to follow here:

  1.  Specify a single URL /errors in web.xml that maps to a method that would handle the error whenever an error is generated
  2.  Create a Controller called ErrorController with a mapping /errors
  3.  Figure out the HTTP error code at runtime and display a message according to the HTTP error code. For instance, if a 404 error is generated, then the user should see a message like ‘Resource not found’ , whereas for a 500 error, the user should see something on the lines of ‘Sorry! An Internal Server Error was generated at our end’

3. The web.xml

We start by adding the following lines to our web.xml:

<error-page>
    <location>/errors</location>
</error-page>

Note that this feature is only available in Servlet 3.0 and above.

Any error generated within an app is associated with an HTTP error code. For example, suppose that a user enters a URL /invalidUrl into the browser, but no such RequestMapping has been defined inside of Spring. Then, an HTTP code of 404 is generated by the underlying web server. The lines that we have just added to our web.xml tell Spring to execute the logic written in the method that is mapped to the URL /errors.

A quick side note here – the corresponding Java Servlet configuration unfortunately doesn’t have an API for setting the error page – so in this case, we actually need the web.xml.

4. The Controller

Moving on, we now create our ErrorController. We create a single unifying method that intercepts the error and displays an error page:

@Controller
public class ErrorController {

    @RequestMapping(value = "errors", method = RequestMethod.GET)
    public ModelAndView renderErrorPage(HttpServletRequest httpRequest) {
        
        ModelAndView errorPage = new ModelAndView("errorPage");
        String errorMsg = "";
        int httpErrorCode = getErrorCode(httpRequest);

        switch (httpErrorCode) {
            case 400: {
                errorMsg = "Http Error Code: 400. Bad Request";
                break;
            }
            case 401: {
                errorMsg = "Http Error Code: 401. Unauthorized";
                break;
            }
            case 404: {
                errorMsg = "Http Error Code: 404. Resource not found";
                break;
            }
            case 500: {
                errorMsg = "Http Error Code: 500. Internal Server Error";
                break;
            }
        }
        errorPage.addObject("errorMsg", errorMsg);
        return errorPage;
    }
    
    private int getErrorCode(HttpServletRequest httpRequest) {
        return (Integer) httpRequest
          .getAttribute("javax.servlet.error.status_code");
    }
}

5. The Front-End

For demonstration purposes, we will be keeping our error page very simple and compact. This page will only contain a message displayed on a white screen. Create a JSP file called errorPage.jsp:

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<%@ page session="false"%>
<html>
<head>
    <title>Home</title>
</head>
<body>
    <h1>${errorMsg}</h1>
</body>
</html>

6. Testing

We will simulate two of the most common errors that occur within any application: the HTTP 404 error, and HTTP 500 error.

Run the server and head on over to localhost:8080/spring-mvc-xml/invalidUrl. Since this URL doesn’t exist, we expect to see our error page with the message ‘Http Error Code : 404. Resource not found’.

Let’s see what happens when one of the handler methods throws a NullPointerException. We add the following method to ErrorController:

@RequestMapping(value = "500Error", method = RequestMethod.GET)
public void throwRuntimeException() {
    throw new NullPointerException("Throwing a null pointer exception");
}

Go over to localhost:8080/spring-mvc-xml/500Error. You should see a white screen with the message ‘Http Error Code : 500. Internal Server Error’.

7. Conclusion

We saw how to set up error pages for different HTTP codes with Spring MVC. We created a single error page where an error message is displayed dynamically according to the HTTP error code.


Improved Java Logging with Mapped Diagnostic Context (MDC)


1. Overview

In this article, we will explore the use of Mapped Diagnostic Context (MDC) to improve the application logging.

The basic idea of Mapped Diagnostic Context is to provide a way to enrich log messages with pieces of information that might not be available in the scope where the logging actually occurs, but that can indeed be useful to better track the execution of the program.

2. Why Use MDC

Let’s start with an example. Suppose we have to write software that transfers money. We set up a Transfer class to represent some basic information: a unique transfer id, the name of the sender, and the amount:

public class Transfer {
    private String transactionId;
    private String sender;
    private Long amount;
    
    public Transfer(String transactionId, String sender, long amount) {
        this.transactionId = transactionId;
        this.sender = sender;
        this.amount = amount;
    }
    
    public String getSender() {
        return sender;
    }

    public String getTransactionId() {
        return transactionId;
    }

    public Long getAmount() {
        return amount;
    }
}

To perform the transfer – we need to use a service backed by a simple API:

public abstract class TransferService {

    public boolean transfer(long amount) {
        beforeTransfer(amount);
        // connects to the remote service to actually transfer money;
        // the value below is just a placeholder for the remote call's result
        boolean outcome = true;
        afterTransfer(amount, outcome);
        return outcome;
    }

    abstract protected void beforeTransfer(long amount);

    abstract protected void afterTransfer(long amount, boolean outcome);
}

The beforeTransfer() and afterTransfer() methods can be implemented to run custom code right before and right after the transfer completes.

We’re going to leverage beforeTransfer() and afterTransfer() to log some information about the transfer.

Let’s create the service implementation:

import org.apache.log4j.Logger;
import com.baeldung.mdc.TransferService;

public class Log4JTransferService extends TransferService {
    private Logger logger = Logger.getLogger(Log4JTransferService.class);

    @Override
    protected void beforeTransfer(long amount) {
        logger.info("Preparing to transfer " + amount + "$.");
    }

    @Override
    protected void afterTransfer(long amount, boolean outcome) {
        logger.info(
          "Has transfer of " + amount + "$ completed successfully ? " + outcome + ".");
    }
}

The main issue to note here is that when the log message is created, it is not possible to access the Transfer object – just the amount is accessible, making it impossible to log either the transaction id or the sender.

Let’s set up the usual log4j.properties file to log to the console:

log4j.appender.consoleAppender=org.apache.log4j.ConsoleAppender
log4j.appender.consoleAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.consoleAppender.layout.ConversionPattern=%-4r [%t] %5p %c{1} %x - %m%n
log4j.rootLogger = TRACE, consoleAppender

Let’s finally set up a small application that is able to run multiple transfers at the same time through an ExecutorService:

public class TransferDemo {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        TransactionFactory transactionFactory = new TransactionFactory();
        for (int i = 0; i < 10; i++) {
            Transfer tx = transactionFactory.newInstance();
            Runnable task = new Log4JRunnable(tx);            
            executor.submit(task);
        }
        executor.shutdown();
    }
}

We note that in order to use the ExecutorService we need to wrap the execution of the Log4JTransferService in an adapter because executor.submit() expects a Runnable:

public class Log4JRunnable implements Runnable {
    private Transfer tx;
    private static Log4JTransferService log4jBusinessService
      = new Log4JTransferService();
    
    public Log4JRunnable(Transfer tx) {
        this.tx = tx;
    }
    
    public void run() {
        log4jBusinessService.transfer(tx.getAmount());
    }
}

When we run our demo application that manages multiple transfers at the same time, we very quickly discover that the log is not as useful as we would like it to be. It’s complex to track the execution of each transfer because the only useful information being logged is the amount of money transferred and the name of the thread that is executing that particular transfer.

What’s more, it’s impossible to distinguish between two different transactions of the same amount executed by the same thread because the related log lines look substantially the same:

...
519  [pool-1-thread-3]  INFO Log4JTransferService 
  - Preparing to transfer 1393$.
911  [pool-1-thread-2]  INFO Log4JTransferService 
  - Has transfer of 1065$ completed successfully ? true.
911  [pool-1-thread-2]  INFO Log4JTransferService 
  - Preparing to transfer 1189$.
989  [pool-1-thread-1]  INFO Log4JTransferService 
  - Has transfer of 1350$ completed successfully ? true.
989  [pool-1-thread-1]  INFO Log4JTransferService 
  - Preparing to transfer 1178$.
1245 [pool-1-thread-3]  INFO Log4JTransferService 
  - Has transfer of 1393$ completed successfully ? true.
1246 [pool-1-thread-3]  INFO Log4JTransferService 
  - Preparing to transfer 1133$.
1507 [pool-1-thread-2]  INFO Log4JTransferService 
  - Has transfer of 1189$ completed successfully ? true.
1508 [pool-1-thread-2]  INFO Log4JTransferService 
  - Preparing to transfer 1907$.
1639 [pool-1-thread-1]  INFO Log4JTransferService 
  - Has transfer of 1178$ completed successfully ? true.
1640 [pool-1-thread-1]  INFO Log4JTransferService 
  - Preparing to transfer 674$.
...

Luckily MDC can help.

3. MDC in Log4j

Let’s introduce MDC.

MDC in Log4j allows us to fill a map-like structure with pieces of information that are accessible to the appender when the log message is actually written.

The MDC structure is internally attached to the executing thread in the same way a ThreadLocal variable would be.

And so, the high level idea is:

  1. to fill the MDC with pieces of information that we want to make available to the appender
  2. then log a message
  3. and finally, clear the MDC

The pattern of the appender obviously needs to change in order to retrieve the variables stored in the MDC.

So let’s then change the code according to these guidelines:

import org.apache.log4j.MDC;

public class Log4JRunnable implements Runnable {
    private Transfer tx;
    private static Log4JTransferService log4jBusinessService = new Log4JTransferService();

    public Log4JRunnable(Transfer tx) {
        this.tx = tx;
    }

    public void run() {
        MDC.put("transaction.id", tx.getTransactionId());
        MDC.put("transaction.owner", tx.getSender());
        log4jBusinessService.transfer(tx.getAmount());
        MDC.clear();
    }
}

Unsurprisingly, MDC.put() is used to add a key and a corresponding value to the MDC, while MDC.clear() empties it.
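
Since the MDC is attached to the executing thread and pool threads get reused, it's good practice to guarantee the cleanup even when transfer() throws. Here's a minimal try/finally sketch of the run() method above, using the same classes and names:

public void run() {
    try {
        MDC.put("transaction.id", tx.getTransactionId());
        MDC.put("transaction.owner", tx.getSender());
        log4jBusinessService.transfer(tx.getAmount());
    } finally {
        // pooled threads are reused, so always clear the thread-local map
        MDC.clear();
    }
}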

Let’s now change the log4j.properties file to print the information we've just stored in the MDC. It's enough to change the conversion pattern, using the %X{} placeholder for each MDC entry we'd like to be logged:

log4j.appender.consoleAppender.layout.ConversionPattern=
  %-4r [%t] %5p %c{1} %x - %m - tx.id=%X{transaction.id} tx.owner=%X{transaction.owner}%n

Now, if we run the application, we'll note that each line also carries the information about the transaction being processed, making it far easier for us to track the execution of the application:

638  [pool-1-thread-2]  INFO Log4JTransferService 
  - Has transfer of 1104$ completed successfully ? true. - tx.id=2 tx.owner=Marc
638  [pool-1-thread-2]  INFO Log4JTransferService 
  - Preparing to transfer 1685$. - tx.id=4 tx.owner=John
666  [pool-1-thread-1]  INFO Log4JTransferService 
  - Has transfer of 1985$ completed successfully ? true. - tx.id=1 tx.owner=Marc
666  [pool-1-thread-1]  INFO Log4JTransferService 
  - Preparing to transfer 958$. - tx.id=5 tx.owner=Susan
739  [pool-1-thread-3]  INFO Log4JTransferService 
  - Has transfer of 783$ completed successfully ? true. - tx.id=3 tx.owner=Samantha
739  [pool-1-thread-3]  INFO Log4JTransferService 
  - Preparing to transfer 1024$. - tx.id=6 tx.owner=John
1259 [pool-1-thread-2]  INFO Log4JTransferService 
  - Has transfer of 1685$ completed successfully ? false. - tx.id=4 tx.owner=John
1260 [pool-1-thread-2]  INFO Log4JTransferService 
  - Preparing to transfer 1667$. - tx.id=7 tx.owner=Marc

4. MDC in Log4j2

The very same feature is available in Log4j2 too, so let’s see how to use it.

Let’s firstly set up a TransferService subclass that logs using Log4j2:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4J2TransferService extends TransferService {
    private static final Logger logger = LogManager.getLogger();

    @Override
    protected void beforeTransfer(long amount) {
        logger.info("Preparing to transfer {}$.", amount);
    }

    @Override
    protected void afterTransfer(long amount, boolean outcome) {
        logger.info("Has transfer of {}$ completed successfully ? {}.", amount, outcome);
    }
}

Let’s then change the code that uses the MDC, which is actually called ThreadContext in Log4j2:

import org.apache.logging.log4j.ThreadContext;

public class Log4J2Runnable implements Runnable {
    private final Transfer tx;
    private Log4J2TransferService log4j2TransferService = new Log4J2TransferService();

    public Log4J2Runnable(Transfer tx) {
        this.tx = tx;
    }

    public void run() {
        ThreadContext.put("transaction.id", tx.getTransactionId());
        ThreadContext.put("transaction.owner", tx.getSender());
        log4j2TransferService.transfer(tx.getAmount());
        ThreadContext.clearAll();
    }
}

Again, ThreadContext.put() adds an entry in the MDC and ThreadContext.clearAll() removes all the existing entries.
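
As a side note, Log4j2 also ships CloseableThreadContext (available since version 2.6), which removes the entries automatically at the end of a try-with-resources block; here's a minimal sketch of run() under the same assumptions as above:

import org.apache.logging.log4j.CloseableThreadContext;

public void run() {
    try (CloseableThreadContext.Instance ctc = CloseableThreadContext
      .put("transaction.id", tx.getTransactionId())
      .put("transaction.owner", tx.getSender())) {
        log4j2TransferService.transfer(tx.getAmount());
    } // both entries are removed automatically here
}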

We still miss the log4j2.xml file to configure the logging. As we can note, the syntax to specify which MDC entries should be logged is the same as the one used in Log4j:

<Configuration status="INFO">
    <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout
              pattern="%-4r [%t] %5p %c{1} - %m - tx.id=%X{transaction.id} tx.owner=%X{transaction.owner}%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="com.baeldung.log4j2" level="TRACE" />
        <AsyncRoot level="DEBUG">
            <AppenderRef ref="stdout" />
        </AsyncRoot>
    </Loggers>
</Configuration>

Again, let’s execute the application and we’ll see the MDC information being printed in the log:

1119 [pool-1-thread-3]  INFO Log4J2TransferService 
  - Has transfer of 1198$ completed successfully ? true. - tx.id=3 tx.owner=Samantha
1120 [pool-1-thread-3]  INFO Log4J2TransferService 
  - Preparing to transfer 1723$. - tx.id=5 tx.owner=Samantha
1170 [pool-1-thread-2]  INFO Log4J2TransferService 
  - Has transfer of 701$ completed successfully ? true. - tx.id=2 tx.owner=Susan
1171 [pool-1-thread-2]  INFO Log4J2TransferService 
  - Preparing to transfer 1108$. - tx.id=6 tx.owner=Susan
1794 [pool-1-thread-1]  INFO Log4J2TransferService 
  - Has transfer of 645$ completed successfully ? true. - tx.id=4 tx.owner=Susan

5. MDC in SLF4J/logback

MDC is available in SLF4J too, provided that the underlying logging library supports it.

Both Logback and Log4j support MDC, as we've just seen, so we need nothing special to use it with a standard setup.

Let’s prepare the usual TransferService subclass, this time using the Simple Logging Facade for Java:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class Slf4jTransferService extends TransferService {
    private static final Logger logger = LoggerFactory.getLogger(Slf4jTransferService.class);

    @Override
    protected void beforeTransfer(long amount) {
        logger.info("Preparing to transfer {}$.", amount);
    }

    @Override
    protected void afterTransfer(long amount, boolean outcome) {
        logger.info("Has transfer of {}$ completed successfully ? {}.", amount, outcome);
    }
}

Let’s now use SLF4J's flavor of MDC. In this case, the syntax and semantics are the same as Log4j's:

import org.slf4j.MDC;

public class Slf4jRunnable implements Runnable {
    private final Transfer tx;

    public Slf4jRunnable(Transfer tx) {
        this.tx = tx;
    }

    public void run() {
        MDC.put("transaction.id", tx.getTransactionId());
        MDC.put("transaction.owner", tx.getSender());
        new Slf4jTransferService().transfer(tx.getAmount());
        MDC.clear();
    }
}
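
As with the other frameworks, cleanup should be guaranteed even when the transfer throws; SLF4J provides MDC.putCloseable() (since version 1.7.0) for exactly that. A minimal try-with-resources sketch of run(), under the same assumptions:

public void run() {
    try (MDC.MDCCloseable id = MDC.putCloseable("transaction.id", tx.getTransactionId());
         MDC.MDCCloseable owner = MDC.putCloseable("transaction.owner", tx.getSender())) {
        new Slf4jTransferService().transfer(tx.getAmount());
    } // both entries are removed automatically here
}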

We have to provide the Logback configuration file logback.xml:

<configuration>
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%-4r [%t] %5p %c{1} - %m - tx.id=%X{transaction.id} tx.owner=%X{transaction.owner}%n</pattern>
        </encoder>
    </appender>
    <root level="TRACE">
        <appender-ref ref="stdout" />
    </root>
</configuration>

Again, we'll see that the information in the MDC is properly added to the logged messages, even though it is not explicitly provided in the logger.info() calls:

1020 [pool-1-thread-3]  INFO c.b.m.s.Slf4jTransferService 
  - Has transfer of 1869$ completed successfully ? true. - tx.id=3 tx.owner=John
1021 [pool-1-thread-3]  INFO c.b.m.s.Slf4jTransferService 
  - Preparing to transfer 1303$. - tx.id=6 tx.owner=Samantha
1221 [pool-1-thread-1]  INFO c.b.m.s.Slf4jTransferService 
  - Has transfer of 1498$ completed successfully ? true. - tx.id=4 tx.owner=Marc
1221 [pool-1-thread-1]  INFO c.b.m.s.Slf4jTransferService 
  - Preparing to transfer 1528$. - tx.id=7 tx.owner=Samantha
1492 [pool-1-thread-2]  INFO c.b.m.s.Slf4jTransferService 
  - Has transfer of 1110$ completed successfully ? true. - tx.id=5 tx.owner=Samantha
1493 [pool-1-thread-2]  INFO c.b.m.s.Slf4jTransferService 
  - Preparing to transfer 644$. - tx.id=8 tx.owner=John

It is worth noting that if we set up SLF4J with a logging back-end that does not support MDC, all the related invocations will simply be skipped without side effects.

6. Conclusion

MDC has lots of applications, mainly in scenarios where several threads produce interleaved log messages that would otherwise be hard to read.

And as we’ve seen, it’s supported by three of the most widely used logging frameworks in Java.

As usual, you’ll find the sources over on GitHub.


Getting Started with Java Properties


1. Overview

Most Java applications need to use properties at some point, generally to store simple parameters as key-value pairs outside of compiled code.

And so the language has first-class support for properties – the java.util.Properties class – a utility class designed for handling this type of configuration file.

That’s what we’ll focus on in this article.

2. Loading Properties

2.1. From Properties Files

Let’s start with an example for loading key-value pairs from properties files; we’re loading two files we have available on our classpath:

app.properties:

version=1.0
name=TestApp
date=2016-11-12

And catalog:

c1=files
c2=images
c3=videos

Notice that although it's recommended to use the “.properties” suffix for properties files, it isn't necessary.

We can now load them very simply into a Properties instance:

String rootPath = Thread.currentThread().getContextClassLoader().getResource("").getPath();
String appConfigPath = rootPath + "app.properties";
String catalogConfigPath = rootPath + "catalog";

Properties appProps = new Properties();
appProps.load(new FileInputStream(appConfigPath));

Properties catalogProps = new Properties();
catalogProps.load(new FileInputStream(catalogConfigPath));

As long as a file's content meets the properties file format requirements, it can be parsed correctly by the Properties class. Here are more details on the Property file format.
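
As a side note, when the properties file ships inside the jar, it's often more robust to read it from the classpath directly instead of building a file-system path; here's a minimal sketch, assuming app.properties sits at the classpath root:

Properties appProps = new Properties();
try (InputStream input = Thread.currentThread().getContextClassLoader()
  .getResourceAsStream("app.properties")) {
    appProps.load(input);
}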

2.2. Load From XML Files

Besides properties files, the Properties class can also load XML files which conform to the specific DTD specification.

Here is an example for loading key-value pairs from an XML file – icons.xml:

<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>xml example</comment>
    <entry key="fileIcon">icon1.jpg</entry>
    <entry key="imageIcon">icon2.jpg</entry>
    <entry key="videoIcon">icon3.jpg</entry>
</properties>

Now, let’s load it:

String rootPath = Thread.currentThread().getContextClassLoader().getResource("").getPath();
String iconConfigPath = rootPath + "icons.xml";
Properties iconProps = new Properties();
iconProps.loadFromXML(new FileInputStream(iconConfigPath));

3. Get Properties

We can use getProperty(String key) and getProperty(String key, String defaultValue) to get value by its key.

If the key-value pair exists, the two methods will both return the corresponding value. But if there is no such key-value pair, the former will return null, and the latter will return defaultValue instead.

Example code:

String appVersion = appProps.getProperty("version");
String appName = appProps.getProperty("name", "defaultName");
String appGroup = appProps.getProperty("group", "baeldung");
String appDownloadAddr = appProps.getProperty("downloadAddr");
System.out.println(appVersion);
System.out.println(appName);
System.out.println(appGroup);
System.out.println(appDownloadAddr);

Note that although the Properties class inherits the get() method from Hashtable, I wouldn't recommend using it to get a value: get() returns an Object, and since the stored values are Strings, casting it to anything else fails at runtime, while getProperty() already handles the raw Object value properly for you.

The code below will throw a ClassCastException, because the stored value is a String:

float appVerFloat = (float) appProps.get("version");
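
If a numeric value is needed, the safe route is to parse the String returned by getProperty():

float appVerFloat = Float.parseFloat(appProps.getProperty("version"));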

4. Set Properties

We can use the setProperty() method to update an existing key-value pair or add a new one.

Example code:

appProps.setProperty("name", "NewAppName"); // update an old value
appProps.setProperty("downloadAddr", "www.baeldung.com/downloads"); // add new key-value pair
String newAppName = appProps.getProperty("name");
String newAppDownloadAddr = appProps.getProperty("downloadAddr");
System.out.println("new app name: " + newAppName);
System.out.println("new app downloadAddr: " + newAppDownloadAddr);

Note that although the Properties class inherits the put() and putAll() methods from Hashtable, I wouldn't recommend using them, for the same reason as with get(): only String values should be used in Properties.

The code below will not work as you might expect: when you use getProperty() to get its value, it will return null:

appProps.put("version", 2);
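
Sticking to setProperty() with a String value avoids the problem:

appProps.setProperty("version", "2"); // stored as a String, so getProperty() works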

5. Remove Properties

If you want to remove a key-value pair, you can use the remove() method.

Example Code:

System.out.println("before removal, version is: " + appProps.getProperty("version"));
appProps.remove("version");
System.out.println("after removal, version is: " + appProps.getProperty("version"));

6. Store

6.1. Store to Properties Files

Properties class provides a store() method to output key-value pairs.

Example code:

String newAppConfigPropertiesFile = rootPath + "newApp.properties";
appProps.store(new FileWriter(newAppConfigPropertiesFile), "store to properties file");

The second parameter is for a comment. If you don't want to write any comment, simply pass null for it.

6.2. Store to XML Files

Properties class also provides a storeToXML() method to output key-value pairs in XML format.

Example code:

String newAppConfigXmlFile = rootPath + "newApp.xml";
appProps.storeToXML(new FileOutputStream(newAppConfigXmlFile), "store to xml file");

The second parameter serves the same purpose as in the store() method.

7. Other Common Operations

The Properties class also provides some other methods to operate on the properties.

Example code:

appProps.list(System.out); // list all key-value pairs

Enumeration<Object> valueEnumeration = appProps.elements();
while (valueEnumeration.hasMoreElements()) {
    System.out.println(valueEnumeration.nextElement());
}

Enumeration<Object> keyEnumeration = appProps.keys();
while (keyEnumeration.hasMoreElements()) {
    System.out.println(keyEnumeration.nextElement());
}

int size = appProps.size();
System.out.println(size);
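
In addition, stringPropertyNames() returns the keys as a Set<String> – including keys coming from the default list covered in the next section – which is usually more convenient than the raw Enumeration:

for (String key : appProps.stringPropertyNames()) {
    System.out.println(key + " = " + appProps.getProperty(key));
}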

8. Default Property List

A Properties object can contain another Properties object as its default property list. The default property list will be searched if the property key is not found in the original one.

Besides “app.properties“, we have another file – “default.properties” – on our classpath:

default.properties:

site=www.google.com
name=DefaultAppName
topic=Properties
category=core-java

Example Code:

String rootPath = Thread.currentThread().getContextClassLoader().getResource("").getPath();

String defaultConfigPath = rootPath + "default.properties";
Properties defaultProps = new Properties();
defaultProps.load(new FileInputStream(defaultConfigPath));

String appConfigPath = rootPath + "app.properties";
Properties appProps = new Properties(defaultProps);
appProps.load(new FileInputStream(appConfigPath));

System.out.println(appProps.getProperty("name"));    // TestApp – the main list overrides the default
System.out.println(appProps.getProperty("version")); // 1.0 – only present in the main list
System.out.println(appProps.getProperty("site"));    // www.google.com – falls back to the default list

9. Properties and Encoding

By default, properties files are expected to be ISO-8859-1 (Latin-1) encoded, so characters outside of ISO-8859-1 shouldn't generally be used.

We can work around that limitation with the help of tools such as the JDK native2ascii tool or explicit encodings on files, if necessary.
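
Another straightforward option, available since Java 6, is the load(Reader) overload, which lets us choose the encoding explicitly; here's a minimal sketch, assuming the file itself is saved as UTF-8:

Properties utf8Props = new Properties();
try (Reader reader = new InputStreamReader(
  new FileInputStream(appConfigPath), StandardCharsets.UTF_8)) {
    utf8Props.load(reader);
}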

For XML files, the loadFromXML() method and the storeToXML() method use UTF-8 character encoding by default.

However, when reading an XML file encoded differently, we can specify that in the XML declaration; writing is also flexible enough – we can specify the encoding in a third parameter of the storeToXML() API.

10. Conclusion

In this article, we have discussed basic Properties class usage: how to load and store key-value pairs in both properties and XML format, how to operate on the key-value pairs in a Properties object – retrieving values, updating values, getting its size – and how to use a default list for a Properties object.

The complete source code for the example is available in this GitHub project.


PDF Conversions in Java


1. Introduction

In this quick article, we’ll focus on doing programmatic conversion between PDF files and other formats in Java.

More specifically, we'll describe how to save PDFs as image files such as PNG or JPEG, convert PDFs to Microsoft Word documents, export them to HTML, and extract their text, using multiple Java open-source libraries.

2. Maven Dependencies

The first library we’ll look at is Pdf2Dom. Let’s start with the Maven dependencies we need to add to our project:

<dependency>
    <groupId>org.apache.pdfbox</groupId>
    <artifactId>pdfbox-tools</artifactId>
    <version>2.0.3</version>
</dependency>
<dependency>
    <groupId>net.sf.cssbox</groupId>
    <artifactId>pdf2dom</artifactId>
    <version>1.6</version>
</dependency>

We’re going to use the first dependency to load the selected PDF file. The second dependency is responsible for the conversion itself. The latest versions can be found here: pdfbox-tools and pdf2dom.

What’s more, we’ll use iText to extract the text from a PDF file and POI to create the .docx document.

Let’s take a look at Maven dependencies that we need to include in our project:

<dependency>
    <groupId>com.itextpdf</groupId>
    <artifactId>itextpdf</artifactId>
    <version>5.5.10</version>
</dependency>
<dependency>
    <groupId>com.itextpdf.tool</groupId>
    <artifactId>xmlworker</artifactId>
    <version>5.5.10</version>
</dependency>
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>3.15</version>
</dependency>
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-scratchpad</artifactId>
    <version>3.15</version>
</dependency>

The latest version of iText can be found here and you can look for Apache POI here.

3. PDF and HTML Conversions

To work with HTML files we'll use Pdf2Dom – a PDF parser that converts the documents to an HTML DOM representation. The obtained DOM tree can then be serialized to an HTML file or processed further.

To convert HTML to PDF, we'll use XMLWorker, a library provided by iText.

3.1. PDF to HTML

Let’s have a look at a simple conversion from PDF to HTML:

private void generateHTMLFromPDF(String filename)
  throws IOException, ParserConfigurationException {
    PDDocument pdf = PDDocument.load(new File(filename));
    Writer output = new PrintWriter("src/output/pdf.html", "utf-8");
    new PDFDomTree().writeText(pdf, output);

    output.close();
    pdf.close();
}

In the code snippet above we load the PDF file using the load API from PDFBox. With the PDF loaded, we use the parser to parse the file and write to the output specified by java.io.Writer.

Note that converting PDF to HTML is never a 100% pixel-perfect result. The results depend on the complexity and the structure of the particular PDF file.

3.2. HTML to PDF

Now, let’s have a look at conversion from HTML to PDF:

private static void generatePDFFromHTML(String filename)
  throws IOException, DocumentException {
    Document document = new Document();
    PdfWriter writer = PdfWriter.getInstance(document,
      new FileOutputStream("src/output/html.pdf"));
    document.open();
    XMLWorkerHelper.getInstance().parseXHtml(writer, document,
      new FileInputStream(filename));
    document.close();
}

Note that when converting HTML to PDF, you need to ensure that the HTML has all tags properly opened and closed, otherwise the PDF will not be created. The positive aspect of this approach is that the PDF is created to look exactly the same as the HTML file did.

4. PDF to Image Conversions

There are many ways of converting PDF files to an image. One of the most popular solutions is named Apache PDFBox. This library is an open source Java tool for working with PDF documents. For image to PDF conversion, we’ll use iText again.

4.1. PDF to Image

To start converting PDFs to images, we need to use the dependency mentioned in the previous section – pdfbox-tools.

Let’s take a look at the code example:

private void generateImageFromPDF(String filename, String extension) throws IOException {
    PDDocument document = PDDocument.load(new File(filename));
    PDFRenderer pdfRenderer = new PDFRenderer(document);
    for (int page = 0; page < document.getNumberOfPages(); ++page) {
        BufferedImage bim = pdfRenderer.renderImageWithDPI(
          page, 300, ImageType.RGB);
        ImageIOUtil.writeImage(
          bim, String.format("src/output/pdf-%d.%s", page + 1, extension), 300);
    }
    document.close();
}

There are a few important parts in the above-mentioned code. We need to use PDFRenderer in order to render the PDF as a BufferedImage. Also, each page of the PDF file needs to be rendered separately.

Finally, we use ImageIOUtil, from Apache PDFBox Tools, to write an image, with the extension that we specify. Possible file formats are jpeg, jpg, gif, tiff or png.

Note that Apache PDFBox is an advanced tool – we can create our own PDF files from scratch, fill forms inside PDF file, sign and/or encrypt the PDF file.

4.2. Image to PDF

Let’s take a look at the code example:

private static void generatePDFFromImage(String filename, String extension)
  throws IOException, DocumentException {
    Document document = new Document();
    String input = filename + "." + extension;
    String output = "src/output/" + extension + ".pdf";
    FileOutputStream fos = new FileOutputStream(output);

    PdfWriter writer = PdfWriter.getInstance(document, fos);
    writer.open();
    document.open();
    document.add(Image.getInstance((new URL(input))));
    document.close();
    writer.close();
}

Please note that we can provide the image as a file, or load it from a URL, as shown in the example above. Moreover, the extensions of the output file that we can use are jpeg, jpg, gif, tiff or png.

5. PDF to Text Conversions

To extract the raw text out of a PDF file, we'll use Apache PDFBox again. For text to PDF conversion, we are going to use iText.

5.1. PDF to Text

We created a method named generateTxtFromPDF(…) and divided it into three main parts: loading of the PDF file, extraction of text, and final file creation.

Let’s start with loading part:

File f = new File(filename);
String parsedText;
PDFParser parser = new PDFParser(new RandomAccessFile(f, "r"));
parser.parse();

In order to read a PDF file, we use PDFParser with an “r” (read) option. Moreover, we need to call the parser.parse() method, which causes the PDF to be parsed as a stream and populated into a COSDocument object.

Let’s take a look at the extracting text part:

COSDocument cosDoc = parser.getDocument();
PDFTextStripper pdfStripper = new PDFTextStripper();
PDDocument pdDoc = new PDDocument(cosDoc);
parsedText = pdfStripper.getText(pdDoc);

In the first line, we save the COSDocument inside the cosDoc variable. It is then used to construct a PDDocument, which is the in-memory representation of the PDF document. Finally, we use PDFTextStripper to return the raw text of the document. After all of those operations, we need to use the close() method to close the used streams.
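
For completeness, that cleanup is a single call, since closing the PDDocument also closes the underlying COSDocument:

pdDoc.close();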

In the last part, we’ll save text into the newly created file using the simple Java PrintWriter:

PrintWriter pw = new PrintWriter("src/output/pdf.txt");
pw.print(parsedText);
pw.close();

Please note that you cannot preserve formatting in a plain text file because it contains text only.

5.2. Text to PDF

Converting text files to PDF is a bit tricky. In order to maintain the file's formatting, you'll need to apply additional rules.

In the following example, we are not taking into consideration the formatting of the file.

First, we need to define the size of the PDF file, version and output file. Let’s have a look at the code example:

Document pdfDoc = new Document(PageSize.A4);
PdfWriter.getInstance(pdfDoc, new FileOutputStream("src/output/txt.pdf"))
  .setPdfVersion(PdfWriter.PDF_VERSION_1_7);
pdfDoc.open();

In the next step, we'll define the font and also the command used to generate a new paragraph:

Font myfont = new Font();
myfont.setStyle(Font.NORMAL);
myfont.setSize(11);
pdfDoc.add(new Paragraph("\n"));

Finally, we are going to add paragraphs to the newly created PDF file:

BufferedReader br = new BufferedReader(new FileReader(filename));
String strLine;
while ((strLine = br.readLine()) != null) {
    Paragraph para = new Paragraph(strLine + "\n", myfont);
    para.setAlignment(Element.ALIGN_JUSTIFIED);
    pdfDoc.add(para);
}
pdfDoc.close();
br.close();

6. PDF to Docx Conversions

Creating a PDF file from a Word document is not easy, and we'll not cover this topic here; we recommend 3rd party libraries like jWordConvert for that.

To create a Microsoft Word file from a PDF, we'll need two open-source libraries: iText, which is used to extract the text from the PDF file, and POI, which is used to create the .docx document.

Let’s take a look at the code snippet for the PDF loading part:

XWPFDocument doc = new XWPFDocument();
String pdf = filename;
PdfReader reader = new PdfReader(pdf);
PdfReaderContentParser parser = new PdfReaderContentParser(reader);

After loading the PDF, we need to read and render each page separately in a loop, and then write to the output file:

for (int i = 1; i <= reader.getNumberOfPages(); i++) {
    TextExtractionStrategy strategy =
      parser.processContent(i, new SimpleTextExtractionStrategy());
    String text = strategy.getResultantText();
    XWPFParagraph p = doc.createParagraph();
    XWPFRun run = p.createRun();
    run.setText(text);
    run.addBreak(BreakType.PAGE);
}
FileOutputStream out = new FileOutputStream("src/output/pdf.docx");
doc.write(out);
out.close();
reader.close();

Please note that with the SimpleTextExtractionStrategy, we'll lose all formatting rules. In order to improve on this, play with the extraction strategies described here to achieve a more complex solution; for instance, see the sketch below.
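
iText's LocationTextExtractionStrategy keeps the text in its on-page order, which often gives a better starting point; it's a drop-in replacement inside the loop above, with the POI part left unchanged:

TextExtractionStrategy strategy =
  parser.processContent(i, new LocationTextExtractionStrategy());
String text = strategy.getResultantText();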

7. PDF to X Commercial Libraries

In the previous sections, we described open-source libraries. There are a few more libraries worth noting, but they are commercial:

  • jPDFImages – can create images from pages in a PDF document and export them as JPEG, TIFF, or PNG images
  • JPedal – an actively developed and very capable native Java PDF library SDK, used for printing, viewing and conversion of files
  • pdfcrowd – another Web/HTML to PDF and PDF to Web/HTML conversion library, with an advanced GUI

8. Conclusion

In this article, we discussed ways to convert PDF files to and from various formats.

The full implementation of this tutorial can be found in the GitHub project – this is a Maven-based project. In order to test it, simply run the examples and check the results in the output folder.


Introduction to Apache CXF Aegis Data Binding


1. Overview

This tutorial gives an introduction to Aegis data binding, a subsystem that can map between Java objects and XML documents described by XML schemas. Aegis allows detailed control over the mapping process while keeping programming effort to a minimum.

Aegis is part of Apache CXF, but not constrained to be used within this framework only. Instead, this data binding mechanism may be used anywhere and hence in this tutorial we focus on its usage as an independent subsystem.

2. Maven Dependencies

The only dependency required to activate Aegis data binding is:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-databinding-aegis</artifactId>
    <version>3.1.8</version>
</dependency>

The latest version of this artifact can be found here.

3. Type Definitions

This section walks through the definitions of three types used to illustrate Aegis.

3.1. Course

This is the simplest class in our example, which is defined as:

public class Course {
    private int id;
    private String name;
    private String instructor;
    private Date enrolmentDate;

    // standard getters and setters
}

3.2. CourseRepo

CourseRepo is the top-level type in our model. We define it as an interface rather than a class to demonstrate how easy it is to marshal a Java interface, which is impossible in JAXB without a custom adapter:

public interface CourseRepo {
    String getGreeting();
    void setGreeting(String greeting);
    Map<Integer, Course> getCourses();
    void setCourses(Map<Integer, Course> courses);
    void addCourse(Course course);  
}

Note that we declare the getCourses method with the return type Map. This is intentional, to demonstrate another advantage of Aegis over JAXB: the latter cannot marshal a map without a custom adapter, while the former can.

3.3. CourseRepoImpl

This class provides implementation to the CourseRepo interface:

public class CourseRepoImpl implements CourseRepo {
    private String greeting;
    private Map<Integer, Course> courses = new HashMap<>();

    // standard getters and setters

    @Override
    public void addCourse(Course course) {
        courses.put(course.getId(), course);
    }
}

4. Custom Data Binding

In order for the customization to take effect, XML mapping files must be present on the classpath. It is required that those files are placed into a directory whose structure corresponds to the package hierarchy of the associated Java types.

For example, if a class's fully qualified name is package.ClassName, its associated mapping file must be placed in the package subdirectory on the classpath. The name of the mapping file must be the simple name of the associated Java type with an .aegis.xml suffix appended to it.

4.1. CourseRepo Mapping

The CourseRepo interface belongs to the com.baeldung.cxf.aegis package, so its corresponding mapping file is named CourseRepo.aegis.xml and put into the com/baeldung/cxf/aegis directory on the classpath.

In the CourseRepo mapping file, we change the name and namespace of the XML element associated with the CourseRepo interface, as well as the style of its greeting property:

<mappings xmlns:ns="http://courserepo.baeldung.com">
    <mapping name="ns:Baeldung">
        <property name="greeting" style="attribute"/>
    </mapping>
</mappings>

4.2. Course Mapping

Similar to the CourseRepo type, the mapping file of class Course is named Course.aegis.xml and located in the com/baeldung/cxf/aegis directory as well.

In this mapping file, we instruct Aegis to ignore the instructor property of the Course class when marshaling, so that its value is not available in the object recreated from the output XML document:

<mappings>
    <mapping>
        <property name="instructor" ignore="true"/>
    </mapping>
</mappings>

Aegis’ home page is where we can find more customization options.

5. Testing

This section is a step-by-step guide to set up and execute a test case that illustrates the usage of Aegis data bindings.

To facilitate the testing process, we declare two fields within the test class:

public class BaeldungTest {
    private AegisContext context;
    private String fileName = "baeldung.xml";

    // other methods
}

These fields have been defined here to be used by other methods of this class.

5.1. AegisContext Initialization

First, an AegisContext object must be created:

context = new AegisContext();

That AegisContext instance is then configured and initialized. Here is how we set root classes for the context:

Set<Type> rootClasses = new HashSet<Type>();
rootClasses.add(CourseRepo.class);
context.setRootClasses(rootClasses);

Aegis creates an XML mapping element for each Type within the Set<Type> object. In this tutorial, we set only CourseRepo as a root type.

Now, let’s set the implementation map for the context to specify the proxy class for the CourseRepo interface:

Map<Class<?>, String> beanImplementationMap = new HashMap<>();
beanImplementationMap.put(CourseRepoImpl.class, "CourseRepo");
context.setBeanImplementationMap(beanImplementationMap);

The last configuration for the Aegis context is telling it to set the xsi:type attribute in the corresponding XML document. This attribute carries the actual type name of the associated Java object unless overridden by the mapping file:

context.setWriteXsiTypes(true);

Our AegisContext instance is now ready to be initialized:

context.initialize();

To keep the code clean, we collect all code snippets from this subsection into one helper method:

private void initializeContext() {
    context = new AegisContext();

    Set<Type> rootClasses = new HashSet<Type>();
    rootClasses.add(CourseRepo.class);
    context.setRootClasses(rootClasses);

    Map<Class<?>, String> beanImplementationMap = new HashMap<>();
    beanImplementationMap.put(CourseRepoImpl.class, "CourseRepo");
    context.setBeanImplementationMap(beanImplementationMap);

    context.setWriteXsiTypes(true);
    context.initialize();
}

5.2. Simple Data Setup

Due to the simple nature of this tutorial, we generate sample data right in memory rather than relying on a persistence solution. Let's populate the course repo using the setup logic below:

private CourseRepoImpl initCourseRepo() {
    Course restCourse = new Course();
    restCourse.setId(1);
    restCourse.setName("REST with Spring");
    restCourse.setInstructor("Eugen");
    restCourse.setEnrolmentDate(new Date(1234567890000L));
    
    Course securityCourse = new Course();
    securityCourse.setId(2);
    securityCourse.setName("Learn Spring Security");
    securityCourse.setInstructor("Eugen");
    securityCourse.setEnrolmentDate(new Date(1456789000000L));
    
    CourseRepoImpl courseRepo = new CourseRepoImpl();
    courseRepo.setGreeting("Welcome to Baeldung!");
    courseRepo.addCourse(restCourse);
    courseRepo.addCourse(securityCourse);
    return courseRepo;
}

5.3. Binding Java Objects and XML Elements

The steps that need to be taken to marshal Java objects to XML elements are illustrated with the following helper method:

private void marshalCourseRepo(CourseRepo courseRepo) throws Exception {
    AegisWriter<XMLStreamWriter> writer = context.createXMLStreamWriter();
    AegisType aegisType = context.getTypeMapping().getType(CourseRepo.class);
    XMLStreamWriter xmlWriter = XMLOutputFactory.newInstance()
      .createXMLStreamWriter(new FileOutputStream(fileName));
    
    writer.write(courseRepo, 
      new QName("http://aegis.cxf.baeldung.com", "baeldung"), false, xmlWriter, aegisType);
    
    xmlWriter.close();
}

As we can see, the AegisWriter and AegisType objects must be created from the AegisContext instance. The AegisWriter object then marshals the given Java instance to the specified output.

In this case, this is an XMLStreamWriter object associated with a file named after the value of the fileName class-level field in the file system.

The following method unmarshals an XML document to a Java object of the given type:

private CourseRepo unmarshalCourseRepo() throws Exception {       
    AegisReader<XMLStreamReader> reader = context.createXMLStreamReader();
    XMLStreamReader xmlReader = XMLInputFactory.newInstance()
      .createXMLStreamReader(new FileInputStream(fileName));
    
    CourseRepo courseRepo = (CourseRepo) reader.read(
      xmlReader, context.getTypeMapping().getType(CourseRepo.class));
    
    xmlReader.close();
    return courseRepo;
}

Here, an AegisReader object is generated from the AegisContext instance. The AegisReader object then creates a Java object out of the provided input. In this example, that input is an XMLStreamReader object backed by the file we generated in the marshalCourseRepo method described right above.

5.4. Assertions

Now, it is time to combine all helper methods defined in previous subsections into a test method:

@Test
public void whenMarshalingAndUnmarshalingCourseRepo_thenCorrect()
  throws Exception {
    initializeContext();
    CourseRepo inputRepo = initCourseRepo();
    marshalCourseRepo(inputRepo);
    CourseRepo outputRepo = unmarshalCourseRepo();
    Course restCourse = outputRepo.getCourses().get(1);
    Course securityCourse = outputRepo.getCourses().get(2);

    // JUnit assertions
}

We first create a CourseRepo instance, then marshal it to an XML document and finally unmarshal the document to recreate the original object. Let’s verify that the recreated object is what we expect:

assertEquals("Welcome to Baeldung!", outputRepo.getGreeting());
assertEquals("REST with Spring", restCourse.getName());
assertEquals(new Date(1234567890000L), restCourse.getEnrolmentDate());
assertNull(restCourse.getInstructor());
assertEquals("Learn Spring Security", securityCourse.getName());
assertEquals(new Date(1456789000000L), securityCourse.getEnrolmentDate());
assertNull(securityCourse.getInstructor());

It is clear that, except for the instructor property, all the others have their values recovered, including the enrolmentDate property with values of type Date. This is exactly what we expect, as we have instructed Aegis to ignore the instructor property when marshaling Course objects.

5.5. Output XML Document

To make the effect of Aegis mapping files explicit, we show the XML document without customization below:

<ns1:baeldung xmlns:ns1="http://aegis.cxf.baeldung.com"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="ns1:CourseRepo">
    <ns1:courses>
        <ns2:entry xmlns:ns2="urn:org.apache.cxf.aegis.types">
            <ns2:key>1</ns2:key>
            <ns2:value xsi:type="ns1:Course">
                <ns1:enrolmentDate>2009-02-14T06:31:30+07:00
                </ns1:enrolmentDate>
                <ns1:id>1</ns1:id>
                <ns1:instructor>Eugen</ns1:instructor>
                <ns1:name>REST with Spring</ns1:name>
            </ns2:value>
        </ns2:entry>
        <ns2:entry xmlns:ns2="urn:org.apache.cxf.aegis.types">
            <ns2:key>2</ns2:key>
            <ns2:value xsi:type="ns1:Course">
                <ns1:enrolmentDate>2016-03-01T06:36:40+07:00
                </ns1:enrolmentDate>
                <ns1:id>2</ns1:id>
                <ns1:instructor>Eugen</ns1:instructor>
                <ns1:name>Learn Spring Security</ns1:name>
            </ns2:value>
        </ns2:entry>
    </ns1:courses>
    <ns1:greeting>Welcome to Baeldung!</ns1:greeting>
</ns1:baeldung>

Compare this with the case when Aegis custom mapping is in action:

<ns1:baeldung xmlns:ns1="http://aegis.cxf.baeldung.com"
    xmlns:ns="http://courserepo.baeldung.com"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:type="ns:Baeldung" greeting="Welcome to Baeldung!">
    <ns:courses>
        <ns2:entry xmlns:ns2="urn:org.apache.cxf.aegis.types">
            <ns2:key>1</ns2:key>
            <ns2:value xsi:type="ns1:Course">
                <ns1:enrolmentDate>2009-02-14T06:31:30+07:00
                </ns1:enrolmentDate>
                <ns1:id>1</ns1:id>
                <ns1:name>REST with Spring</ns1:name>
            </ns2:value>
        </ns2:entry>
        <ns2:entry xmlns:ns2="urn:org.apache.cxf.aegis.types">
            <ns2:key>2</ns2:key>
            <ns2:value xsi:type="ns1:Course">
                <ns1:enrolmentDate>2016-03-01T06:36:40+07:00
                </ns1:enrolmentDate>
                <ns1:id>2</ns1:id>
                <ns1:name>Learn Spring Security</ns1:name>
            </ns2:value>
        </ns2:entry>
    </ns:courses>
</ns1:baeldung>

You may find this XML structure in the baeldung.xml, right inside the project’s main directory after running the test defined in this section.

You will see that the type attribute and namespace of the XML element corresponding to the CourseRepo object change in accordance with what we set in the CourseRepo.aegis.xml file. The greeting property is also transformed into an attribute, and the instructor property of Course objects disappears as expected.

It is worth noting that, by default, Aegis converts a basic Java type to the best matching schema type, e.g. from Date objects to xsd:dateTime elements, as shown in this tutorial. However, we can change that particular binding by setting the configuration in the corresponding mapping file.

Please navigate to the Aegis home page for more information.

6. Conclusion

This tutorial illustrates the use of the Apache CXF Aegis data binding as a standalone subsystem. It demonstrates how Aegis may be used to map Java objects to XML elements, and vice versa.

The tutorial also focuses on how to customize data binding behaviors.

And, as always, the implementation of all these examples and code snippets can be found in the GitHub project.


Java Web Weekly, Issue 151


1. Spring and Java

>> What Future Java Might Look Like [sitepoint.com]

The plans for Java beyond version 9 are very interesting and clearly quite ambitious. Some huge features in the works.

>> Another post-processor for Spring Boot [frankel.ch]

Some fun digging into the internals of Spring (and Spring Boot), going beyond using the framework and towards actually understanding it.

>> Spring Kafka Producer/Consumer sample [java-allandsundry.com]

Clean and to the point examples introducing Spring Kafka.

>> (J)Unit Testing Principles [codecentric.de]

Quick post going over some of the foundations of unit testing. This is not ground-breaking stuff, but these are exactly the things that are so often overlooked.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> ValueObject [martinfowler.com]

Getting back to basics, especially on things we think we understand, is almost always a good idea.

These concepts are foundational for a reason – we build everything else on top of them, so it’s well worth having clarity when we look at the building blocks of our work.

>> How to run Continuous Integration on EC2 without breaking the bank [giorgiosironi.com]

A solid guide to setting up a CI pipeline on EC2 in a way that makes economical sense. Lots of good stuff here, especially as you scale.

>> Most popular relational databases – 2016 edition [plumbr.eu]

This kind of field data is always interesting to give us a sense of what the overall market looks like.

>> Continuous Delivery Patterns: Building your application inside a Docker container [codecentric.de]

The way we're building a CD pipeline now has certainly changed from the way we used to do it just a few years ago. And Docker has certainly been a big part of that, along with the newer DSLs in Jenkins.

Also worth reading:

3. Musings

>> How to be productive (as a developer) [sebastian-daschner.com]

Lots of good advice here on how to get productive as a developer. “Throw away the mouse” is hard-to-follow but fantastic advice.

>> Resolving Conflict [queue.acm.org]

Conflict is one of those things you’d rather not worry about.

But, as I'm growing a team, that's not really an option – so it's worth giving it some real thought and having an intelligent approach to dealing with it (rather than a gut reaction).

>> How to Deliver Software Projects on Time [daedtech.com]

That's a question that doesn't have a simple answer. Which of course doesn't mean we should stop trying to get better at the process.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> It makes me uncomfortable when they enjoy working [dilbert.com]

>> Security Standards [dilbert.com]

>> I hope I’m not the only one who joined this group just for the laughs [dilbert.com]

5. Pick of the Week

>> The Perfect Morning Routine (Backed by Science) [taylorpearson.me]
