
Introducing nudge4j


1. Overview

nudge4j allows developers to see the impact of any operation straight-away and provides an environment in which they can explore, learn, and ultimately spend less time debugging and redeploying their application.

In this article, we will explore what nudge4j is, how it works, and how any Java application in development might benefit from it.

2. How nudge4j Works

2.1. A REPL In Disguise

nudge4j is essentially a read-eval-print-loop (REPL) in which you talk to your Java application within a browser window via a simple page containing just two elements:

  • an editor
  • the Execute on JVM button

You can talk to your JVM in a typical REPL cycle:

  • Type any code into the editor and press Execute on JVM
  • The browser posts the code to your JVM, which then runs the code
  • The result is returned (as a string) and displayed below the button

nudge4j comes with a few examples to try straight-away, like querying how long the JVM has been running and how much memory is currently available. I suggest you start with these before writing your own code.

2.2. The JavaScript Engine

The code which is sent by the browser to the JVM is JavaScript that manipulates Java objects (not to be confused with any JavaScript that runs on the browser). The JavaScript is executed by the built-in JavaScript engine Nashorn.

Don’t worry if you don’t know (or like) JavaScript – for your nudge4j needs, you can just think of it as an untyped Java dialect.
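For example, the following snippet, written in that dialect, asks the JVM how much free memory it has (the variable name is arbitrary; the result comes back as a string, as described above):

rt = java.lang.Runtime.getRuntime();
"free memory: " + rt.freeMemory() + " bytes";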

Note that I am aware that saying that “JavaScript is untyped Java” is a huge simplification. But I want Java developers (who may be prejudiced towards anything that is JavaScript) to give nudge4j a fair chance.

2.3. Scope of JVM Interaction

nudge4j lets you access any Java class which is accessible from your JVM, allowing you to call methods, create objects, etc. This is very powerful, but it might not be sufficient while working with your application.

In some situations, you might want to reach one or more objects, specific to your application only, so that you can manipulate them. nudge4j allows for that. Any object that needs to be exposed can be passed as an argument at instantiation time.

2.4. Exception Handling

The design of nudge4j recognizes the possibility that users of the tool might make mistakes or cause errors on the JVM. In both of these cases, the tool is designed to report the full stack trace in order to guide the user to rectify the mistake or error.

For example, when an executed snippet of code results in an Exception being thrown, the full stack trace is displayed below the Execute on JVM button.

3. Adding nudge4j to Your Application

3.1. Just Copy and Paste

The integration with nudge4j is achieved somewhat unconventionally, as there are no jar files to add to your classpath, and there are no dependencies to add to a Maven or Gradle build.

Instead, you simply copy and paste a small snippet of Java code – around 100 lines – anywhere into your own code before you run it.

You’ll find the snippet on the nudge4j home page – there’s even a button on the page that you can click to copy the snippet to your clipboard.

This snippet of code might appear quite abstruse at first. There are a few reasons for that:

  • The nudge4j snippet can be dropped into any class; therefore, it cannot make any assumptions about imports, and any class it contains has to be fully qualified
  • To avoid potential clashes with variables already defined, the code is wrapped in a function
  • Access to the built-in JDK HttpServer is done via introspection in order to avoid restrictions which exist in some IDEs (e.g. Eclipse) for packages beginning with “com.sun.*”

So, even though Java is already a verbose language, it had to be made even more verbose to provide for a seamless integration.

3.2. Sample Application

Let’s start with a standard JVM application where we pretend that a simple java.util.HashMap holds most of the information that we want to play with:

public class MyApp {
    public static void main(String args[]) {
        Map map = new HashMap();
        map.put("health", 60);
        map.put("strength", 4);
        map.put("tools", Arrays.asList("hammer"));
        map.put("places", Arrays.asList("savannah","tundra"));
        map.put("location-x", -42 );
        map.put("location-y", 32);
 
        // paste original code from nudge4j below
        (new java.util.function.Consumer<Object[]>() {
            public void accept(Object args[]) {
                ...
                ...
            }
        }).accept(new Object[] { 
            5050,  // <-- the port
            map    // <-- the map is passed as a parameter.
        });
    }
}

As you can see from this example, you simply paste the nudge4j snippet at the end of your own code. The anonymous Consumer block at the end of the example serves as a placeholder for an abbreviated version of the snippet.

Now, let’s point the browser to http://localhost:5050/. The map is now accessible as args[1] in the editor from the browser by simply typing:

args[1];

This will provide a summary of our Map (in this case relying on the toString() method of the Map and its keys and values).

Suppose we want to examine and modify the Map entry with the key value “tools”.

To get a list of all available tools in the Map, you would write:

map = args[1];
map.get("tools");

And to add a new tool to the Map, you would write:

map = args[1];
map.get("tools").add("axe");

In general, a few lines of code should be sufficient to probe any Java application.

4. Conclusion

By combining two simple APIs within the JDK (Nashorn and the built-in HTTP server), nudge4j gives you the ability to probe into any Java 8 application.

In a way, nudge4j is just a modern take on an old idea: give developers access to the facilities of an existing system via a scripting language – an idea that can change how Java developers spend their day coding.


Intro to Log4j2 – Appenders, Layouts and Filters


1. Overview

Logging events is a critical aspect of software development. While there are lots of frameworks available in the Java ecosystem, Log4J has been the most popular for decades, due to the flexibility and simplicity it provides.

Log4j 2 is a new and improved version of the classic Log4j framework.

In this article, we’ll introduce the most common appenders, layouts, and filters via practical examples.

In Log4J2, an appender is simply a destination for log events; it can be as simple as the console or as complex as an RDBMS. Layouts determine how the log events will be presented, and filters decide which events get processed, according to various criteria.

2. Setup

In order to understand several logging components and their configuration, let’s set up different test use-cases, each consisting of a log4j2.xml configuration file and a JUnit 4 test class.

Two Maven dependencies are common to all examples:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.7</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.7</version>
    <type>test-jar</type>
    <scope>test</scope>
</dependency>

Besides the main log4j-core package, we need to include its ‘test jar’ to gain access to a context rule needed for testing with uncommonly named configuration files.
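That context rule is the JUnit LoggerContextRule from org.apache.logging.log4j.junit. A minimal sketch of pulling it into a test class, assuming a configuration file named log4j2-json-layout.xml (the file name is illustrative):

@Rule
public LoggerContextRule contextRule
  = new LoggerContextRule("log4j2-json-layout.xml");

The rule loads the given configuration file before each test method runs.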

3. Default Configuration

The ConsoleAppender is the default configuration of the Log4J 2 core package. It logs messages to the system console in a simple pattern:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT">
            <PatternLayout 
              pattern="%d [%t] %-5level %logger{36} - %msg%n%throwable"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="ERROR">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Let’s analyze the tags in this simple XML configuration:

  • Configuration: The root element of a Log4J 2 configuration file; its status attribute is the level of the internal Log4J events that we want to log
  • Appenders: This element holds one or more appenders. Here we’ll configure an appender that outputs to the system console at standard out
  • Loggers: This element can consist of multiple configured Logger elements. With the special Root tag, you can configure a nameless standard logger that will receive all log messages from the application. Each logger can be set to a minimum log level
  • AppenderRef: This element defines a reference to an element from the Appenders section. Therefore the attribute ‘ref‘ is linked with an appender’s ‘name‘ attribute

The corresponding unit test will be similarly simple. We’ll obtain a Logger reference and print two messages:

@Test
public void givenLoggerWithDefaultConfig_whenLogToConsole_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger(getClass());
    Exception e = new RuntimeException("This is only a test!");

    logger.info("This is a simple message at INFO level. " +
      "It will be hidden.");
    logger.error("This is a simple message at ERROR level. " +
    "This is the minimum visible level.", e);
}

4. ConsoleAppender with PatternLayout

Let’s define a new console appender with a customized color pattern in a separate XML file, and include that in our main configuration:

<?xml version="1.0" encoding="UTF-8"?>
<Console name="ConsoleAppender" target="SYSTEM_OUT">
    <PatternLayout pattern="%style{%date{DEFAULT}}{yellow}
      %highlight{%-5level}{FATAL=bg_red, ERROR=red, WARN=yellow, INFO=green} 
      %message"/>
</Console>

This file uses some pattern variables that get replaced by Log4J 2 at runtime:

  • %style{…}{colorname}: This will print the text in the first bracket pair in the given color (colorname)
  • %highlight{…}{FATAL=colorname, …}: This is similar to the ‘style’ variable, but a different color can be given for each log level
  • %date{format}: This gets replaced by the current date in the specified format. Here we’re using the ‘DEFAULT’ DateTime format, ‘yyyy-MM-dd HH:mm:ss,SSS’
  • %-5level: Prints the level of the log message, left-aligned and padded to five characters
  • %message: Represents the raw log message

There are many more pattern variables and formatting options for the PatternLayout; you can refer to Log4J 2‘s official documentation for details.

Now we’ll include the defined console appender into our main configuration:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" xmlns:xi="http://www.w3.org/2001/XInclude">
    <Appenders>
        <xi:include href="log4j2-includes/
          console-appender_pattern-layout_colored.xml"/>
    </Appenders>
    <Loggers>
        <Root level="DEBUG">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

The unit test:

@Test
public void givenLoggerWithConsoleConfig_whenLogToConsoleInColors_thanOK() 
  throws Exception {
    Logger logger = LogManager.getLogger("CONSOLE_PATTERN_APPENDER_MARKER");
    logger.trace("This is a colored message at TRACE level.");
    ...
}

5. Async File Appender with JSONLayout and BurstFilter

Sometimes it’s useful to write log messages in an asynchronous manner, for example, when application performance has priority over the availability of logs.

In such use-cases, we can use an AsyncAppender.

For our example, we’re configuring an asynchronous JSON log file. Furthermore, we’ll include a burst filter that limits the log output at a specified rate:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        ...
        <File name="JSONLogfileAppender" fileName="target/logfile.json">
            <JSONLayout compact="true" eventEol="true"/>
            <BurstFilter level="INFO" rate="2" maxBurst="10"/>
        </File>
        <Async name="AsyncAppender" bufferSize="80">
            <AppenderRef ref="JSONLogfileAppender"/>
        </Async>
    </Appenders>
    <Loggers>
        ...
        <Logger name="ASYNC_JSON_FILE_APPENDER" level="INFO"
          additivity="false">
            <AppenderRef ref="AsyncAppender" />
        </Logger>
        <Root level="INFO">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Notice that:

  • The JSONLayout is configured in a way, that writes one log event per row
  • The BurstFilter rate-limits events at ‘INFO’ level and below to an average of two per second, allowing bursts of up to 10 events before it starts dropping them
  • The AsyncAppender is set to a buffer of 80 log messages; after that, the buffer is flushed to the log file

Let’s take a look at the corresponding unit test. We fill the appender’s buffer in a loop, let it write to disk, and inspect the line count of the log file:

@Test
public void givenLoggerWithAsyncConfig_whenLogToJsonFile_thanOK() 
  throws Exception {
    Logger logger = LogManager.getLogger("ASYNC_JSON_FILE_APPENDER");

    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info("This is async JSON message #{} at INFO level.", count);
    }
    
    long logEventsCount 
      = Files.lines(Paths.get("target/logfile.json")).count();
    assertTrue(logEventsCount > 0 && logEventsCount <= count);
}

6. RollingFile Appender and XMLLayout

Next, we’ll create a rolling log file. After a configured file size, the log file gets compressed and rotated.

This time we’re using an XML layout:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <RollingFile name="XMLRollingfileAppender"
          fileName="target/logfile.xml"
          filePattern="target/logfile-%d{yyyy-MM-dd}-%i.log.gz">
            <XMLLayout/>
            <Policies>
                <SizeBasedTriggeringPolicy size="17 kB"/>
            </Policies>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Logger name="XML_ROLLING_FILE_APPENDER" 
       level="INFO" additivity="false">
            <AppenderRef ref="XMLRollingfileAppender" />
        </Logger>
        <Root level="TRACE">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Notice that:

  • The RollingFile appender has a ‘filePattern’ attribute, which is used to name rotated log files and can be configured with placeholder variables. In our example, it should contain a date and a counter before the file suffix.
  • The default configuration of XMLLayout will write single log event objects without the root element.
  • We’re using a size based policy for rotating our log files.

Our unit test class will look like the one from the previous section:

@Test
public void givenLoggerWithRollingFileConfig_whenLogToXMLFile_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("XML_ROLLING_FILE_APPENDER");
    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info(
          "This is rolling file XML message #{} at INFO level.", i);
    }
}

7. Syslog Appender

Let’s say we need to send logged events to a remote machine over the network. The simplest way to do that with Log4J 2 is its Syslog appender:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        ...
        <Syslog name="Syslog" 
          format="RFC5424" host="localhost" port="514" 
          protocol="TCP" facility="local3" connectTimeoutMillis="10000" 
          reconnectionDelayMillis="5000">
        </Syslog>
    </Appenders>
    <Loggers>
        ...
        <Logger name="FAIL_OVER_SYSLOG_APPENDER" 
          level="INFO" 
          additivity="false">
            <AppenderRef ref="FailoverAppender" />
        </Logger>
        <Root level="TRACE">
            <AppenderRef ref="Syslog" />
        </Root>
    </Loggers>
</Configuration>

The attributes in the Syslog tag:

  • name: defines the name of the appender and must be unique, since we can have multiple Syslog appenders for the same application and configuration
  • format: can be set to either BSD or RFC5424, and the Syslog records will be formatted accordingly
  • host & port: the hostname and port of the remote Syslog server machine
  • protocol: whether to use TCP or UDP
  • facility: the Syslog facility the event will be written to
  • connectTimeoutMillis: the time period to wait for an established connection, defaults to zero
  • reconnectionDelayMillis: the time to wait before re-attempting a connection

8. FailoverAppender

Now there may be instances where one appender fails to process the log events, and we do not want to lose the data. In such cases, the FailoverAppender comes in handy.

For example, if the Syslog appender fails to send events to the remote machine, instead of losing that data we might fall back to a FileAppender temporarily.

The FailoverAppender takes a primary appender and a number of secondary appenders. In case the primary fails, it tries to process the log event with the secondary ones, in order, until one succeeds or there are no more secondaries to try:

<Failover name="FailoverAppender" primary="Syslog">
    <Failovers>
        <AppenderRef ref="ConsoleAppender" />
    </Failovers>
</Failover>

Let’s test it:

@Test
public void givenLoggerWithFailoverConfig_whenLog_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("FAIL_OVER_SYSLOG_APPENDER");
    Exception e = new RuntimeException("This is only a test!"); 

    logger.trace("This is a syslog message at TRACE level.");
    logger.debug("This is a syslog message at DEBUG level.");
    logger.info("This is a syslog message at INFO level. 
      This is the minimum visible level.");
    logger.warn("This is a syslog message at WARN level.");
    logger.error("This is a syslog message at ERROR level.", e);
    logger.fatal("This is a syslog message at FATAL level.");
}

9. JDBC Appender

The JDBC appender sends log events to an RDBMS using standard JDBC. The connection can be obtained either from a JNDI DataSource or from any connection factory.

The basic configuration consists of a DataSource or ConnectionFactory, ColumnConfigs, and tableName:

<JDBC name="JDBCAppender" tableName="logs">
    <ConnectionFactory 
      class="com.baeldung.logging.log4j2.tests.jdbc.ConnectionFactory" 
      method="getConnection" />
    <Column name="when" isEventTimestamp="true" />
    <Column name="logger" pattern="%logger" />
    <Column name="level" pattern="%level" />
    <Column name="message" pattern="%message" />
    <Column name="throwable" pattern="%ex{full}" />
</JDBC>

Now let’s try it out:

@Test
public void givenLoggerWithJdbcConfig_whenLogToDataSource_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("JDBC_APPENDER");
    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info("This is JDBC message #{} at INFO level.", count);
    }

    Connection connection = ConnectionFactory.getConnection();
    ResultSet resultSet = connection.createStatement()
      .executeQuery("SELECT COUNT(*) AS ROW_COUNT FROM logs");
    int logCount = 0;
    if (resultSet.next()) {
        logCount = resultSet.getInt("ROW_COUNT");
    }
    assertTrue(logCount == count);
}

10. Conclusion

This article shows very simple examples of how you can use different logging appenders, filters, and layouts with Log4J 2, and ways to configure them.

The examples that accompany the article are available over on GitHub.

Java 9 – Exploring the REPL


1. Introduction

This article is about jshell, an interactive REPL (Read-Evaluate-Print-Loop) console that is bundled with the JDK for the upcoming Java 9 release. For those not familiar with the concept, a REPL allows us to interactively run arbitrary snippets of code and evaluate their results.

A REPL can be useful for things such as quickly checking the viability of an idea or figuring out, for example, a format string for String.format() or SimpleDateFormat.
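For instance, checking a date pattern takes a single line in the shell (the output shown here is, of course, illustrative):

jshell> new java.text.SimpleDateFormat("yyyy-MM-dd").format(new Date())
$1 ==> "2017-02-26"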

2. Running

To get started we need to run the REPL, which is done by invoking:

$JAVA_HOME/bin/jshell

If more detailed messaging from the shell is desired, a -v flag can be used:

$JAVA_HOME/bin/jshell -v

Once it is ready, we will be greeted by a friendly message and a familiar Unix-style prompt at the bottom.

3. Defining and Invoking Methods

Methods can be added by typing their signature and body:

jshell> void helloWorld() { System.out.println("Hello world");}
|  created method helloWorld()

Here we defined the ubiquitous “hello world” method.  It can be invoked using normal Java syntax:

jshell> helloWorld()
Hello world

4. Variables

Variables can be defined with the normal Java declaration syntax:

jshell> int i = 0;
i ==> 0
|  created variable i : int

jshell> String company = "Baeldung"
company ==> "Baeldung"
|  created variable company : String

jshell> Date date = new Date()
date ==> Sun Feb 26 06:30:16 EST 2017
|  created variable date : Date

Note that semicolons are optional.  Variables can also be declared without initialization:

jshell> File file
file ==> null
|  created variable file : File

5. Expressions

Any valid Java expression is accepted and the result of the evaluation will be shown. If no explicit receiver of the result is provided, “scratch” variables will be created:

jshell> String.format("%d of bottles of beer on the wall", 100)
$6 ==> "100 of bottles of beer on the wall"
|  created scratch variable $6 : String

The REPL is quite helpful here, informing us that it created a scratch variable named $6, whose value is “100 of bottles of beer on the wall” and whose type is String.

Multi-line expressions are also possible.  Jshell is smart enough to know when an expression is incomplete and will prompt the user to continue on a new line:

jshell> int i =
   ...> 5;
i ==> 5
|  modified variable i : int
|    update overwrote variable i : int

Note how the prompt changed to an indented …> to signify the continuation of an expression.

6. Commands

Jshell provides quite a few meta-commands that aren’t related to evaluating Java statements.  They all start with a forward-slash (/) to be distinguished from normal operations. For example, we can request a list of all available commands by issuing /help or /?.

Let’s take a look at some of them.

6.1. Imports

To list all the imports active in the current session we can use the /import command:

jshell> /import
|    import java.io.*
|    import java.math.*
|    import java.net.*
|    import java.nio.file.*
|    import java.util.*
|    import java.util.concurrent.*
|    import java.util.function.*
|    import java.util.prefs.*
|    import java.util.regex.*
|    import java.util.stream.*

As we can see, the shell starts with quite a few useful imports already added.

6.2. Lists

Working in a REPL is not nearly as easy as having a full-featured IDE at our fingertips: it is easy to forget what variables have which values, what methods have been defined and so on.  To check the state of the shell we can use /var, /methods, /list or /history:

jshell> /var
| int i = 0
| String company = "Baeldung"
| Date date = Sun Feb 26 06:30:16 EST 2017
| File file = null
| String $6 = "100 of bottles of beer on the wall"

jshell> /methods
| void helloWorld()

jshell> /list

 1 : void helloWorld() { System.out.println("Hello world");}
 2 : int i = 0;
 3 : String company = "Baeldung";
 4 : Date date = new Date();
 5 : File file;
 6 : String.format("%d of bottles of beer on the wall", 100)

jshell> /history

void helloWorld() { System.out.println("Hello world");}
int i = 0;
String company = "Baeldung"
Date date = new Date()
File file
String.format("%d of bottles of beer on the wall", 100)
/var
/methods
/list
/history

The difference between /list and /history is that the latter shows commands in addition to expressions.

6.3. Saving

To save the expression history the /save command can be used:

jshell> /save repl.java

This saves our expression history into repl.java in the same directory from which we ran the jshell command.

6.4. Loading

To load a previously saved file we can use the /open command:

jshell> /open repl.java

A loaded session can then be verified by issuing /var, /methods or /list.

6.5. Exiting

When we are done with the work, the /exit command can terminate the shell:

jshell> /exit
|  Goodbye

Goodbye jshell.

7. Conclusion

In this article, we took a look at Java 9 REPL. Since Java has been around for over 20 years already, perhaps it arrived a little late. However, it should prove to be another valuable tool in our Java toolbox.

Spring Security – Redirect to the Previous URL After Login


1. Overview

This article will focus on how to redirect a user back to the originally requested URL – after they log in.

Previously, we’ve seen how to redirect to different pages after login with Spring Security for different types of users and covered various types of redirections with Spring MVC.

The article is based on top of the Spring Security Login tutorial.

2. Common Practice

The most common ways to implement redirection logic after login are:

  • using HTTP Referer header
  • saving the original request in the session
  • appending original URL to the redirected login URL

Using the HTTP Referer header is a straightforward way, as most browsers and HTTP clients set Referer automatically. However, since the Referer is forgeable and relies on the client implementation, using the HTTP Referer header to implement redirection is generally not recommended.

Saving the original request in the session is a safer and more robust approach. Besides the original URL, we can store original request attributes and any custom properties in the session.

Appending the original URL to the login redirect URL is usually seen in SSO implementations. When authenticated via an SSO service, users will be redirected to the originally requested page, with the URL appended. We must ensure the appended URL is properly encoded.

Another similar implementation is to put the original request URL in a hidden field inside the login form. But this is no better than using the HTTP Referer header.

In Spring Security, the first two approaches are natively supported.
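Under the hood, the session-based approach relies on Spring Security’s RequestCache. As a minimal sketch, assuming the default HttpSessionRequestCache and the current request/response pair, the saved request can be read back manually like this:

RequestCache requestCache = new HttpSessionRequestCache();
SavedRequest savedRequest = requestCache.getRequest(request, response);
if (savedRequest != null) {
    // redirect back to the URL that triggered authentication
    response.sendRedirect(savedRequest.getRedirectUrl());
}

We’ll see in section 3.3 where Spring Security populates this cache.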

3. AuthenticationSuccessHandler

In form-based authentication, redirection happens right after login, which is handled in an AuthenticationSuccessHandler instance in Spring Security.

Three default implementations are provided: SimpleUrlAuthenticationSuccessHandler, SavedRequestAwareAuthenticationSuccessHandler and ForwardAuthenticationSuccessHandler. We’ll focus on the first two implementations.

3.1. SavedRequestAwareAuthenticationSuccessHandler

SavedRequestAwareAuthenticationSuccessHandler makes use of the saved request stored in the session. After a successful login, users will be redirected to the URL saved in the original request.

For form login, SavedRequestAwareAuthenticationSuccessHandler is used as the default AuthenticationSuccessHandler.

@Configuration
@EnableWebSecurity
public class RedirectionSecurityConfig extends WebSecurityConfigurerAdapter {

    //...

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
          .authorizeRequests()
          .antMatchers("/login*")
          .permitAll()
          .anyRequest()
          .authenticated()
          .and()
          .formLogin();
    }
    
}

And the equivalent XML would be:

<http>
    <intercept-url pattern="/login" access="permitAll"/>
    <intercept-url pattern="/**" access="isAuthenticated()"/>
    <form-login />
</http>

Suppose we have a secured resource at location “/secured”. For the first time access to the resource, we’ll be redirected to the login page; after filling in credentials and posting the login form, we’ll be redirected back to our originally requested resource location:

@Test
public void givenAccessSecuredResource_whenAuthenticated_thenRedirectedBack() 
  throws Exception {
 
    MockHttpServletRequestBuilder securedResourceAccess = get("/secured");
    MvcResult unauthenticatedResult = mvc
      .perform(securedResourceAccess)
      .andExpect(status().is3xxRedirection())
      .andReturn();

    MockHttpSession session = (MockHttpSession) unauthenticatedResult
      .getRequest()
      .getSession();
    String loginUrl = unauthenticatedResult
      .getResponse()
      .getRedirectedUrl();
    mvc
      .perform(post(loginUrl)
        .param("username", userDetails.getUsername())
        .param("password", userDetails.getPassword())
        .session(session)
        .with(csrf()))
      .andExpect(status().is3xxRedirection())
      .andExpect(redirectedUrlPattern("**/secured"))
      .andReturn();

    mvc
      .perform(securedResourceAccess.session(session))
      .andExpect(status().isOk());
}

3.2. SimpleUrlAuthenticationSuccessHandler

Compared to the SavedRequestAwareAuthenticationSuccessHandler, SimpleUrlAuthenticationSuccessHandler gives us more options on redirection decisions.

We can enable Referer-based redirection by calling setUseReferer(true):

public class RefererRedirectionAuthenticationSuccessHandler 
  extends SimpleUrlAuthenticationSuccessHandler
  implements AuthenticationSuccessHandler {

    public RefererRedirectionAuthenticationSuccessHandler() {
        super();
        setUseReferer(true);
    }

}

Then use it as the AuthenticationSuccessHandler in RedirectionSecurityConfig:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
      .authorizeRequests()
      .antMatchers("/login*")
      .permitAll()
      .anyRequest()
      .authenticated()
      .and()
      .formLogin()
      .successHandler(new RefererRedirectionAuthenticationSuccessHandler());
}

And for XML configuration:

<http>
    <intercept-url pattern="/login" access="permitAll"/>
    <intercept-url pattern="/**" access="isAuthenticated()"/>
    <form-login authentication-success-handler-ref="refererHandler" />
</http>

<beans:bean 
  class="RefererRedirectionAuthenticationSuccessHandler" 
  name="refererHandler"/>

3.3. Under the Hood

There is no magic in these easy-to-use features in Spring Security. When a secured resource is being requested, the request will be filtered by a chain of various filters. Authentication principals and permissions will be checked. If the request session is not authenticated yet, an AuthenticationException will be thrown.

The AuthenticationException will be caught in the ExceptionTranslationFilter, in which an authentication process will be commenced, resulting in a redirection to the login page:

public class ExceptionTranslationFilter extends GenericFilterBean {

    //...

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
        //...

        handleSpringSecurityException(request, response, chain, ase);

        //...
    }

    private void handleSpringSecurityException(HttpServletRequest request,
      HttpServletResponse response, FilterChain chain, RuntimeException exception)
      throws IOException, ServletException {

        if (exception instanceof AuthenticationException) {

            sendStartAuthentication(request, response, chain,
              (AuthenticationException) exception);

        }

        //...
    }

    protected void sendStartAuthentication(HttpServletRequest request,
      HttpServletResponse response, FilterChain chain,
      AuthenticationException reason) throws ServletException, IOException {
       
       SecurityContextHolder.getContext().setAuthentication(null);
       requestCache.saveRequest(request, response);
       authenticationEntryPoint.commence(request, response, reason);
    }

    //... 

}

After login, we can customize behaviors in an AuthenticationSuccessHandler, as shown above.

4. Conclusion

In this Spring Security example, we discussed common practice for redirection after login and explained implementations using Spring Security.

Note that all the implementations we mentioned are vulnerable to certain attacks if no validation or extra method controls are applied. Users might be redirected to a malicious site by such attacks.

OWASP provides a cheat sheet to help us handle unvalidated redirects and forwards; it’s a great help if we need to build such implementations on our own.

The full implementation code of this article can be found over on GitHub.

Java 9 Process API Improvements


1. Overview

The process API in Java had been quite primitive prior to Java 5; the only way to spawn a new process was to use the Runtime.getRuntime().exec() API. Then, in Java 5, the ProcessBuilder API was introduced, which supported a cleaner way of spawning new processes.

Java 9 adds a new way of getting information about the current process and any spawned processes.

In this article, we will look at both of these enhancements.

2. Current Java Process Information

We can now obtain a lot of information about a process via the java.lang.ProcessHandle.Info API:

  • the command used to start the process
  • the arguments of the command
  • the time instant when the process was started
  • the total CPU time spent by it so far
  • the user who created it

Here’s how we can do that:

@Test
public void givenCurrentProcess_whenInvokeGetInfo_thenSuccess() 
  throws IOException {
 
    ProcessHandle processHandle = ProcessHandle.current();
    ProcessHandle.Info processInfo = processHandle.info();
 
    assertNotNull(processHandle.getPid());
    assertEquals(false, processInfo.arguments().isPresent());
    assertEquals(true, processInfo.command().isPresent());
    assertTrue(processInfo.command().get().contains("java"));
    assertEquals(true, processInfo.startInstant().isPresent());
    assertEquals(true, 
      processInfo.totalCpuDuration().isPresent());
    assertEquals(true, processInfo.user().isPresent());
}

It is important to note that java.lang.ProcessHandle.Info is a public interface defined within another interface java.lang.ProcessHandle. The JDK provider (Oracle JDK, Open JDK, Zulu or others) should provide implementations to these interfaces in such a way that these implementations return the relevant information for the processes.

3. Spawned Process Information

It is also possible to get the process information of a newly spawned process. In this case, after we spawn the process and get an instance of the java.lang.Process, we invoke the toHandle() method on it to get an instance of java.lang.ProcessHandle.

The rest of the details remain the same as in the section above:

String javaCmd = ProcessUtils.getJavaCmd().getAbsolutePath();
ProcessBuilder processBuilder = new ProcessBuilder(javaCmd, "-version");
Process process = processBuilder.inheritIO().start();
ProcessHandle processHandle = process.toHandle();

4. Enumerating Live Processes in the System

We can list all the processes currently in the system, which are visible to the current process. The returned list is a snapshot at the time when the API was invoked, so it’s possible that some processes terminated after taking the snapshot or some new processes were added.

In order to do that, we can use the static method allProcesses() available in the java.lang.ProcessHandle interface which returns us a Stream of ProcessHandle:

@Test
public void givenLiveProcesses_whenInvokeGetInfo_thenSuccess() {
    Stream<ProcessHandle> liveProcesses = ProcessHandle.allProcesses();
    liveProcesses.filter(ProcessHandle::isAlive)
      .forEach(ph -> {
 
        assertNotNull(ph.getPid());
        assertEquals(true, ph.info()
          .command()
          .isPresent());
      });
}

5. Enumerating Child Processes

There are two variants to do this:

  • get direct children of the current process
  • get all the descendants of the current process

The former is achieved by using the method children() and the latter is achieved by using the method descendants():

@Test
public void givenProcess_whenGetChildProcess_thenSuccess() 
  throws IOException{
 
    int childProcessCount = 5;
    for (int i = 0; i < childProcessCount; i++){
        String javaCmd = ProcessUtils.getJavaCmd()
          .getAbsolutePath();
        ProcessBuilder processBuilder 
          = new ProcessBuilder(javaCmd, "-version");
        processBuilder.inheritIO().start();
    }
    Stream<ProcessHandle> children
      = ProcessHandle.current().children();

    children.filter(ProcessHandle::isAlive)
      .forEach(ph -> log.info("PID: {}, Cmd: {}",
        ph.getPid(), ph.info().command()));

    // and for descendants
    Stream<ProcessHandle> descendants
      = ProcessHandle.current().descendants();
    descendants.filter(ProcessHandle::isAlive)
      .forEach(ph -> log.info("PID: {}, Cmd: {}",
        ph.getPid(), ph.info().command()));
}

6. Triggering Dependent Actions on Process Termination

We might want to run something on termination of the process. This can be achieved by using the onExit() method in the java.lang.ProcessHandle interface. The method returns us a CompletableFuture which provides the ability to trigger dependent operations when the CompletableFuture is completed.

Here, the CompletableFuture indicates the process has completed, but it doesn’t matter whether the process completed successfully or not. We invoke the get() method on the CompletableFuture to wait for its completion:

@Test
public void givenProcess_whenAddExitCallback_thenSuccess() 
  throws Exception {
 
    String javaCmd = ProcessUtils.getJavaCmd()
      .getAbsolutePath();
    ProcessBuilder processBuilder 
      = new ProcessBuilder(javaCmd, "-version");
    Process process = processBuilder.inheritIO()
      .start();
    ProcessHandle processHandle = process.toHandle();

    log.info("PID: {} has started", processHandle.getPid());
    CompletableFuture<ProcessHandle> onProcessExit 
      = processHandle.onExit();
    onProcessExit.get();
    assertEquals(false, processHandle.isAlive());
    onProcessExit.thenAccept(ph -> {
        log.info("PID: {} has stopped", ph.getPid());
    });
}

The onExit() method is available in the java.lang.Process interface as well.
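So we can attach the callback without going through a ProcessHandle at all. A short sketch (spawning jps here is just an example of a process to run):

Process process = new ProcessBuilder("jps").inheritIO().start();
process.onExit().thenAccept(p -> log.info("exited with {}", p.exitValue()));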

7. Conclusion

In this tutorial, we covered interesting additions to the Process API in Java 9 that give us much more control over the running and spawned processes.

The code used in this article can be found over on GitHub.

Spring Security with Stormpath


1. Overview

Stormpath has developed solid support for Spring Boot and Spring Security – to make the integration with their infrastructure and services quite straightforward.

In this article, we’re going to have a look at a minimalistic setup and integration of Stormpath with Spring Security.

2. Setting Up Stormpath

Before we can really integrate Stormpath, we need to create an API key in Stormpath’s cloud. For that, we need to sign up over on their website. Remember that, for development purposes, we’ll need to sign up as a developer, which gives us 10,000 API calls per month on the free plan.

Of course, if we already have an active Stormpath account, we can use that and log in directly.

Now, we need to create the API keys; by clicking the “Manage API Keys” link inside Developer Tools, we’ll see a button named “Create API Key”.

We need to click on this button to generate the API key. When clicking, we’ll be prompted to download a properties file containing the API key details. The content will look like this:

apiKey.id = xxxxxxxxxxx
apiKey.secret = xxxxxxxxxxxx

We need to store these details very carefully, since they can’t be fetched again from the server.

3. Building The Application

3.1. Maven Dependencies

In order to use Stormpath API, we need to use their Java SDK. For that, we need to integrate the following dependency in the pom.xml:

<dependency>
    <groupId>com.stormpath.spring</groupId>
    <artifactId>stormpath-default-spring-boot-starter</artifactId>
    <version>1.5.4</version>
</dependency>

You can find the latest version of the stormpath-default-spring-boot-starter in the Maven Central repository.

3.2. Spring Security Configuration

One of the advantages of using Stormpath is that we don’t need to add much boilerplate code to configure Spring Security. The following couple of lines of code are all we need to fully configure the application:

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.apply(stormpath());
    }
}

Here, stormpath() is a static method, and applying it is actually enough for a simple integration with Spring Security.
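For reference, stormpath() is, to the best of our knowledge, the static helper shipped with the starter, imported along these lines (verify against the SDK version you use):

import static com.stormpath.spring.config.StormpathWebSecurityConfigurer.stormpath;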

What’s even more interesting here is that we don’t have to create any additional HTML pages to design login, sign-up, etc. Stormpath will generate those pages; however, depending on our need, we may create custom pages and integrate Stormpath’s functionalities.

3.3. Application.properties

We are almost done building this bare-bones application. We just need to add the API key details we created earlier to the application.properties file:

stormpath.client.apiKey.id = // your api id
stormpath.client.apiKey.secret = // your api secret

As per the Stormpath guidelines, it’s always best practice to put sensitive data in JVM environment variables instead of in the application.properties file.

We can declare them as JVM parameters:

-Dstormpath.client.apiKey.id=[api_id] -Dstormpath.client.apiKey.secret=[api_secret]

Now, we’re ready to start the application and see the results. We can check the following URLs to test Stormpath’s functionalities:

  • /login – Login page
  • /register – Registration page
  • /forgot – Forgot password page

3.4. Other Options

There’s also an interesting option to check on the login page: the Forgot Password link at the login box. When clicking this link, we’ll be redirected to the /forgot page, where we can provide the email address we used to sign up. This will trigger an automatic email containing a link to reset the password.

However, we need to do the following configuration in the Stormpath Admin Panel to enable this:

  • Click on the Directories link on top of the page. It should show all of the directories created with this account. By default, after sign up, Stormpath automatically creates a directory named Stormpath Administrator. However, we can create other directories and use them.
  • In the left panel click on the Workflows & Emails link to see a password reset option. By default, it’s disabled. We need to click on the Enabled button to use it.
  • In the Link Base URL, we need to give the URL of our application and this URL will be attached to the password reset email.

4. Conclusion

In this quick article, we learned how to easily integrate Spring Security with Stormpath.

There are plenty of other options, like email verification, that can be configured via the Stormpath Admin Console; using those, we can build a secure application quite quickly.

And, like always, you can find the full source code on GitHub.

Guide To Solr in Java With Apache SolrJ


1. Overview

Apache Solr is an open-source search platform built on top of Lucene. Apache SolrJ is a Java-based client for Solr that provides interfaces for the main features of search like indexing, querying, and deleting documents.

In this article, we’re going to explore how to interact with an Apache Solr server using SolrJ.

2. Setup

In order to install a Solr server on your machine, please refer to the Solr QuickStart Guide.

The installation process is simple — just download the zip/tar package, extract the contents, and start the server from the command line. For this article, we’ll create a Solr server with a core called ‘bigboxstore’:

bin/solr start
bin/solr create -c 'bigboxstore'

By default, Solr listens on port 8983 for incoming HTTP queries. You can verify that it launched successfully by opening the http://localhost:8983/solr/#/bigboxstore URL in a browser and observing the Solr Dashboard.

3. Maven Configuration

Now that we have our Solr server up and running, let’s jump straight to the SolrJ Java client. To use SolrJ in your project, you will need to have the following Maven dependency declared in your pom.xml file:

<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-solrj</artifactId>
    <version>6.4.0</version>
</dependency>

You can always find the latest version hosted by Maven Central.

4. Apache SolrJ Java API

Let’s initiate the SolrJ client by connecting to our Solr server:

String urlString = "http://localhost:8983/solr/bigboxstore";
HttpSolrClient solr = new HttpSolrClient.Builder(urlString).build();
solr.setParser(new XMLResponseParser());

Note: SolrJ uses a binary format, rather than XML, as its default response format. For compatibility with Solr, it’s necessary to explicitly set XML as the parser by invoking setParser(), as shown above. More details on this can be found here.

4.1. Indexing Documents

Let’s define the data to be indexed using a SolrInputDocument and add it to our index using the add() method:

SolrInputDocument document = new SolrInputDocument();
document.addField("id", "123456");
document.addField("name", "Kenmore Dishwasher");
document.addField("price", "599.99");
solr.add(document);
solr.commit();

Note: Any action that modifies the Solr database requires the action to be followed by commit().

4.2. Indexing with Beans

You can also index Solr documents using beans. Let’s define a ProductBean whose properties are annotated with @Field:

public class ProductBean {

    String id;
    String name;
    String price;

    @Field("id")
    protected void setId(String id) {
        this.id = id;
    }

    @Field("name")
    protected void setName(String name) {
        this.name = name;
    }

    @Field("price")
    protected void setPrice(String price) {
        this.price = price;
    }

    // getters and constructor omitted for space
}

Then, let’s add the bean to our index:

solrClient.addBean( new ProductBean("888", "Apple iPhone 6s", "299.99") );
solrClient.commit();

4.3. Querying Indexed Documents by Field and Id

Let’s verify our document is added by using SolrQuery to query our Solr server.

The QueryResponse from the server will contain a list of SolrDocument objects matching any query with the format field:value. In this example, we query by price:

SolrQuery query = new SolrQuery();
query.set("q", "price:599.99");
QueryResponse response = solr.query(query);

SolrDocumentList docList = response.getResults();
assertEquals(docList.getNumFound(), 1);

for (SolrDocument doc : docList) {
     assertEquals((String) doc.getFieldValue("id"), "123456");
     assertEquals((Double) doc.getFieldValue("price"), (Double) 599.99);
}

A simpler option is to query by Id using getById(), which will return only one document if a match is found:

SolrDocument doc = solr.getById("123456");
assertEquals((String) doc.getFieldValue("name"), "Kenmore Dishwasher");
assertEquals((Double) doc.getFieldValue("price"), (Double) 599.99);

4.4. Deleting Documents

When we want to remove a document from the index, we can use deleteById() and verify it has been removed:

solr.deleteById("123456");
solr.commit();
SolrQuery query = new SolrQuery();
query.set("q", "id:123456");
QueryResponse response = solr.query(query);
SolrDocumentList docList = response.getResults();
assertEquals(docList.getNumFound(), 0);

We also have the option to deleteByQuery(), so let’s try deleting any document with a specific name:

solr.deleteByQuery("name:Kenmore Dishwasher");
solr.commit();
SolrQuery query = new SolrQuery();
query.set("q", "id:123456");
QueryResponse response = solr.query(query);
SolrDocumentList docList = response.getResults();
assertEquals(docList.getNumFound(), 0);

5. Conclusion

In this quick article, we’ve seen how to use the SolrJ Java API to perform some of the common interactions with the Apache Solr full-text search engine.

You can check out the examples provided in this article over on GitHub.

Java Web Weekly, Issue 166


Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Spring Framework 5.0 M5 Update [spring.io]

Very interesting functionality in the latest Spring 5 pre-release.

>> A use-case for local class declaration [frankel.ch]

From the engineering point of view, there are some nice use cases for defining classes locally but those should be used with caution because they might violate PoLA.

>> Integration testing strategies for Spring Boot microservices part 2 [codecentric.de]

The 2nd part from the series on testing strategies for microservices architectures done in Spring Boot.

>> How to encrypt and decrypt data with Hibernate [vladmihalcea.com]

A short and to-the-point write-up on how to do data encryption with Hibernate.

>> LRU Cache From LinkedHashMap [javaspecialists.eu]

LinkedHashMap can be used for building lightweight LRU caches.

Should you build your own cache? Definitely not, but it’s a fantastic learning tool.

>> Testing RxJava2 [infoq.com]

Testing RxJava is easier than it seems when using dedicated solutions like TestSubscriber, TestScheduler or RxJavaPlugins.

The Awaitility library might come in handy too.

>> Profile-based optimization techniques in the JVM [advancedweb.hu]

A new installment from a deep dive series into optimization techniques for the JVM.

>> The Last Frontier in Java Performance: Remove the Garbage Collector [infoq.com]

Very interesting article about potential ideas for decreasing the GC’s overhead.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> How does MVCC (Multi-Version Concurrency Control) work [vladmihalcea.com]

A short overview of the MVCC technique – applied of course to database systems, but potentially to other types of systems as well.

>> Secrets of Maintainable Codebases [daedtech.com]

Everyone is talking about developing clean and maintainable codebases, but what does it actually mean?

Also worth reading:

3. Musings

>> Excited about a ‘2.0’ tech stack for microservices [christianposta.com]

A few thoughts about a new generation of tools for building microservices.

>> Tech jobs are already largely automated [lemire.me]

Very interesting points regarding the reality of our industry and how software is impacting the overall job market.

>> What’s in a Name? Spelling Matters in Code [daedtech.com]

In the age of advanced IDEs, there is no justification for having grammar errors or typos in your codebase.

>> First steps as a test automation coach [ontestautomation.com]

Thoughts about starting to coach teams towards – in this case, towards better testing.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Are you sure my data is correct? [dilbert.com]

>> Giving 110% [dilbert.com]

>> How to look busy [dilbert.com]

5. Pick of the Week

A really good episode on the important topic of doing deep work:

>> SPI 255: Deep Work with Cal Newport [smartpassiveincome.com]


How to Register a Servlet in Java


1. Introduction

This article will provide an overview of how to register a servlet within Java EE and Spring Boot. Specifically, we will look at two ways to register a Java Servlet in Java EE — one using a web.xml file, and the other using annotations. Then we’ll register servlets in Spring Boot using XML configuration, Java configuration, and through configurable properties.

A great introductory article on servlets can be found here.

2. Registering Servlets in Java EE

Let’s go over two ways to register a servlet in Java EE. First, we can register a servlet via web.xml. Alternatively, we can use the Java EE @WebServlet annotation.

2.1. Via web.xml

The most common way to register a servlet within your Java EE application is to add it to your web.xml file:

<welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
</welcome-file-list>
<servlet>
    <servlet-name>Example</servlet-name>
    <servlet-class>com.baeldung.Example</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Example</servlet-name>
    <url-pattern>/Example</url-pattern>
</servlet-mapping>

As you can see, this involves two steps: (1) adding our servlet to the servlet tag, making sure to also specify the fully qualified name of the class the servlet resides in, and (2) specifying the URL path the servlet will be exposed on in the url-pattern tag.

The Java EE web.xml file is usually found in WebContent/WEB-INF.

2.2. Via Annotations

Now let’s register our servlet using the @WebServlet annotation on our custom servlet class. This eliminates the need for servlet mappings and registration of the servlet in web.xml:

@WebServlet(
  name = "AnnotationExample",
  description = "Example Servlet Using Annotations",
  urlPatterns = {"/AnnotationExample"}
)
public class Example extends HttpServlet {	
 
    @Override
    protected void doGet(
      HttpServletRequest request, 
      HttpServletResponse response) throws ServletException, IOException {
 
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<p>Hello World!</p>");
    }
}

The code above demonstrates how to add that annotation directly to a servlet. The servlet will still be available at the same URL path as before.

3. Registering Servlets in Spring Boot

Now that we’ve shown how to register servlets in Java EE, let’s take a look at several ways to register servlets in a Spring Boot application.

3.1. Programmatic Registration

Spring Boot supports 100% programmatic configuration of a web application.

First, we’ll implement the WebApplicationInitializer interface, while also using a subclass of WebMvcConfigurerAdapter. The latter allows us to override preset defaults instead of having to specify each particular configuration setting, saving time and letting us work with several tried-and-true settings out of the box.

Let’s look at a sample WebApplicationInitializer implementation:

public class WebAppInitializer implements WebApplicationInitializer {
 
    public void onStartup(ServletContext container) throws ServletException {
        AnnotationConfigWebApplicationContext ctx
          = new AnnotationConfigWebApplicationContext();
        ctx.register(WebMvcConfigure.class);
        ctx.setServletContext(container);

        ServletRegistration.Dynamic servlet = container.addServlet(
          "dispatcherExample", new DispatcherServlet(ctx));
        servlet.setLoadOnStartup(1);
        servlet.addMapping("/");
     }
}

Next, let’s extend the WebMvcConfigurerAdapter class:

@Configuration
public class WebMvcConfigure extends WebMvcConfigurerAdapter {

    @Bean
    public ViewResolver getViewResolver() {
        InternalResourceViewResolver resolver
          = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/");
        resolver.setSuffix(".jsp");
        return resolver;
    }

    @Override
    public void configureDefaultServletHandling(
      DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**")
          .addResourceLocations("/resources/").setCachePeriod(3600)
          .resourceChain(true).addResolver(new PathResourceResolver());
    }
}

Above we specify some of the default settings for JSP servlets explicitly in order to support .jsp views and static resource serving.

3.2. XML Configuration

Another way to configure and register servlets within Spring Boot is through web.xml:

<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/spring/dispatcher.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

The web.xml used to specify configuration in Spring is similar to that found in Java EE. Above, you can see how we specify a few more parameters via child elements of the servlet tag.

Here we use another XML to complete the configuration:

<beans ...>
    
    <context:component-scan base-package="com.baeldung"/>

    <bean 
      class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/WEB-INF/jsp/"/>
        <property name="suffix" value=".jsp"/>
    </bean>
</beans>

Remember that your Spring web.xml will usually live in src/main/webapp/WEB-INF.

3.3. Combining XML and Programmatic Registration

Let’s mix an XML configuration approach with Spring’s programmatic configuration:

public void onStartup(ServletContext container) throws ServletException {
    XmlWebApplicationContext xctx = new XmlWebApplicationContext();
    xctx.setConfigLocation("classpath:/context.xml");
    xctx.setServletContext(container);

    ServletRegistration.Dynamic servlet = container.addServlet(
      "dispatcher", new DispatcherServlet(xctx));
    servlet.setLoadOnStartup(1);
    servlet.addMapping("/");
}

And let’s register our initializer in the XML configuration:

<beans ...>

    <context:component-scan base-package="com.baeldung"/>
    <bean class="com.baeldung.configuration.WebAppInitializer"/>
</beans>

3.4. Registration by Bean

We can also programmatically configure and register our servlets using a ServletRegistrationBean. Below we’ll do so in order to register an HttpServlet (which implements the javax.servlet.Servlet interface):

@Bean
public ServletRegistrationBean exampleServletBean() {
    ServletRegistrationBean bean = new ServletRegistrationBean(
      new CustomServlet(), "/exampleServlet/*");
    bean.setLoadOnStartup(1);
    return bean;
}

The main advantage of this approach is that it enables us to register multiple servlets, as well as different kinds of servlets, in a single Spring application.

Instead of merely utilizing a DispatcherServlet (a more specific kind of HttpServlet, and the most common kind used in the WebApplicationInitializer programmatic approach we explored in section 3.1), we’ll use a simpler HttpServlet subclass that exposes the four basic HTTP request operations through four methods: doGet(), doPost(), doPut(), and doDelete(), just like in Java EE.

Remember that HttpServlet is an abstract class (so it can’t be instantiated). We can whip up a custom extension easily, though:

public class CustomServlet extends HttpServlet {
    ...
}
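
As a minimal sketch (the method body below is our own illustration, not part of the original example), such an extension might override just the operations it needs:

public class CustomServlet extends HttpServlet {

    // Handle GET requests; the response body is purely illustrative
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
        resp.setContentType("text/html");
        resp.getWriter().println("<p>Hello from CustomServlet!</p>");
    }
}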

4. Registering Servlets with Properties

Another, though uncommon, way to configure and register your servlets is to use a custom properties file loaded into the app via a PropertyLoader, PropertySource, or PropertySources instance.

This provides an intermediate level of configuration, along with the ability to customize settings beyond the limited direct support that application.properties offers for non-embedded servlets.

4.1. System Properties Approach

We can add some custom settings to our application.properties file or another properties file. Let’s add a few settings to configure our DispatcherServlet:

servlet.name=dispatcherExample
servlet.mapping=/dispatcherExampleURL

Let’s register the location of our custom properties file with the application:

System.setProperty("custom.config.location", "classpath:custom.properties");

And now we can read that location back via:

System.getProperty("custom.config.location");
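
As a sketch of what a loader might then do with that location (a hedged example: DefaultResourceLoader and PropertiesLoaderUtils are standard Spring classes, but this wiring is our own illustration):

String location = System.getProperty("custom.config.location");

// Resolve the "classpath:" location to a Resource and load it;
// note that loadProperties() declares IOException
Resource resource = new DefaultResourceLoader().getResource(location);
Properties props = PropertiesLoaderUtils.loadProperties(resource);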

4.2. Custom Properties Approach

Let’s start with a custom.properties file:

servlet.name=dispatcherExample
servlet.mapping=/dispatcherExampleURL

We can then use a run-of-the-mill Property Loader:

public Properties getProperties(String file) throws IOException {
    Properties prop = new Properties();

    // try-with-resources closes the stream even if load() fails
    try (InputStream input = getClass().getResourceAsStream(file)) {
        prop.load(input);
    }
    return prop;
}

And now we can add these custom properties as constants to our WebApplicationInitializer implementation:

private static final PropertyLoader pl = new PropertyLoader(); 
private static final Properties springProps
  = pl.getProperties("custom_spring.properties"); 

public static final String SERVLET_NAME
  = springProps.getProperty("servlet.name"); 
public static final String SERVLET_MAPPING
  = springProps.getProperty("servlet.mapping");

We can then use them to, for example, configure our dispatcher servlet:

ServletRegistration.Dynamic servlet = container.addServlet(
  SERVLET_NAME, new DispatcherServlet(ctx));
servlet.setLoadOnStartup(1);
servlet.addMapping(SERVLET_MAPPING);

The advantage of this approach is that we avoid .xml maintenance while keeping configuration settings easy to modify without redeploying the codebase.

4.3. The PropertySource Approach

A faster way to accomplish the above is to make use of Spring’s @PropertySource, which allows a configuration file to be accessed and loaded.

PropertyResolver is an interface implemented by ConfigurableEnvironment, which makes application properties available at servlet startup and initialization:

@Configuration 
@PropertySource("classpath:/com/yourapp/custom.properties") 
public class ExampleCustomConfig { 
    @Autowired 
    ConfigurableEnvironment env; 

    public String getProperty(String key) { 
        return env.getProperty(key); 
    } 
}

Above, we autowire the environment into the class and specify the location of our custom properties file. We can then fetch any property we need by calling getProperty() and passing in the key as a String.
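
As a usage sketch (assuming the configuration class above is picked up by component scanning), a client bean could simply inject it:

@Autowired
private ExampleCustomConfig customConfig;

// Read the same keys we defined in custom.properties
String servletName = customConfig.getProperty("servlet.name");
String servletMapping = customConfig.getProperty("servlet.mapping");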

4.4. The PropertySource Programmatic Approach

We can combine the above approach (which involves fetching property values) with the approach below (which allows us to programmatically specify those values):

ConfigurableEnvironment env = new StandardEnvironment(); 
MutablePropertySources props = env.getPropertySources(); 

Map<String, Object> map = new HashMap<>();
map.put("key", "value"); 

props.addFirst(new MapPropertySource("Map", map));

We’ve created a map linking a key to a value, then added that map as a property source, so the environment can resolve it as needed.
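
For example, after the snippet above the key resolves straight from the environment (a small sanity check, assuming a JUnit-style assertion):

assertEquals("value", env.getProperty("key"));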

5. Registering Embedded Servlets

Lastly, we’ll also take a look at basic configuration and registration of embedded servlets within Spring Boot.

An embedded servlet container provides full web container (Tomcat, Jetty, etc.) functionality without having to install or maintain the web container separately.

Wherever such functionality is supported, you can add the required dependencies and configuration for a simple live server deployment painlessly, compactly, and quickly.

We’ll only look at how to do this with Tomcat, but the same approach can be undertaken for Jetty and alternatives.

Let’s specify the dependency for an embedded Tomcat 8 web container in pom.xml:

<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
    <version>8.5.11</version>
</dependency>

Now let’s add the tags required to successfully add Tomcat to the .war produced by Maven at build-time:

<build>
    <finalName>embeddedTomcatExample</finalName>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>appassembler-maven-plugin</artifactId>
            <version>2.0.0</version>
            <configuration>
                <assembleDirectory>target</assembleDirectory>
                <programs>
                    <program>
                        <mainClass>launch.Main</mainClass>
                        <name>webapp</name>
                    </program>
                </programs>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>assemble</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

If you are using Spring Boot, you can instead add Spring’s spring-boot-starter-tomcat dependency to your pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

5.1. Registration Through Properties

Spring Boot supports configuring most Spring settings through application.properties. After adding the necessary embedded servlet dependencies to your pom.xml, you can customize and configure your embedded servlet using several such configuration options:

server.jsp-servlet.class-name=org.apache.jasper.servlet.JspServlet 
server.jsp-servlet.registered=true
server.port=8080
server.servlet-path=/

Above are some of the application settings that can be used to configure the DispatcherServlet and static resource sharing. Settings for embedded servlets, SSL support, and sessions are also available.

There are really too many configuration parameters to list here, but you can see the full list in the Spring Boot documentation.

5.2. Configuration Through YAML

Similarly, we can configure our embedded servlet container using YAML. This requires the use of a specialized YAML property loader, the YamlPropertySourceLoader, which exposes our YAML and makes its keys and values available for use within our app.

YamlPropertySourceLoader sourceLoader = new YamlPropertySourceLoader();
PropertySource<?> yamlProps = sourceLoader.load("yamlProps", resource, null);
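
In the snippet above, resource is a handle to the YAML file itself. A plausible way to obtain it and register the loaded source (a hedged sketch: the file name is hypothetical, and we assume a ConfigurableEnvironment is in scope as environment) would be:

// Hypothetical YAML file on the classpath
Resource resource = new ClassPathResource("application.yml");

YamlPropertySourceLoader sourceLoader = new YamlPropertySourceLoader();
PropertySource<?> yamlProps = sourceLoader.load("yamlProps", resource, null);

// Make the YAML-backed source visible to the environment
environment.getPropertySources().addLast(yamlProps);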

5.3. Programmatic Configuration Through TomcatEmbeddedServletContainerFactory

Programmatic configuration of an embedded servlet container is possible through a subclassed instance of EmbeddedServletContainerFactory. For example, you can use the TomcatEmbeddedServletContainerFactory to configure your embedded Tomcat servlet.

The TomcatEmbeddedServletContainerFactory wraps the org.apache.catalina.startup.Tomcat object providing additional configuration options:

@Bean
public EmbeddedServletContainerFactory servletContainer() {
    TomcatEmbeddedServletContainerFactory tomcatContainerFactory
      = new TomcatEmbeddedServletContainerFactory();
    return tomcatContainerFactory;
}

We can then configure the factory instance, before returning it from the bean method:

tomcatContainerFactory.setPort(9000);
tomcatContainerFactory.setContextPath("/springboottomcatexample");
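
Putting the two together, a combined sketch of the bean might look like this:

@Bean
public EmbeddedServletContainerFactory servletContainer() {
    TomcatEmbeddedServletContainerFactory tomcatContainerFactory
      = new TomcatEmbeddedServletContainerFactory();
    tomcatContainerFactory.setPort(9000);
    tomcatContainerFactory.setContextPath("/springboottomcatexample");
    return tomcatContainerFactory;
}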

Each of those particular settings can be made configurable using any of the methods previously described.

We can also directly access and manipulate the org.apache.catalina.startup.Tomcat object:

Tomcat tomcat = new Tomcat();
tomcat.setPort(port);
tomcat.setContextPath("/springboottomcatexample");
tomcat.start();

6. Conclusion

In this article, we’ve reviewed several ways to register a servlet in Java EE and Spring Boot applications.

The source code used in this tutorial is available in the GitHub project.

Intro To Reactor Core

1. Introduction

Reactor Core is a Java 8 library which implements the reactive programming model. It’s built on top of the Reactive Streams Specification, a standard for building reactive applications.

Coming from a background of non-reactive Java development, going reactive can be quite a steep learning curve. This becomes more challenging when comparing it to the Java 8 Stream API, as the two could be mistaken for the same high-level abstraction.

In this article, we’ll attempt to demystify this paradigm. We’ll take small steps through Reactor until we’ve built a picture of how to compose reactive code, laying the foundation for more advanced articles to come in a later series.

2. Reactive Streams Specification

Before we look at Reactor, we should look at the Reactive Streams Specification. This is what Reactor implements, and it lays the groundwork for the library.

Essentially, Reactive Streams is a specification for asynchronous stream processing.

In other words, a system where lots of events are being produced and consumed asynchronously. Think of a stream of thousands of stock updates per second coming into a financial application, which has to respond to those updates in a timely manner.

One of the main goals of this is to address the problem of back pressure. If we have a producer which is emitting events to a consumer faster than it can process them, then eventually the consumer will be overwhelmed with events, running out of system resources. Backpressure means that our consumer should be able to tell the producer how much data to send in order to prevent this, and this is what is laid out in the specification.

3. Maven Dependencies

Before we get started, let’s add our Maven dependencies:

<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <version>3.0.5.RELEASE</version>
</dependency>

<dependency> 
    <groupId>ch.qos.logback</groupId> 
    <artifactId>logback-classic</artifactId> 
    <version>1.1.3</version> 
</dependency>

We’re also adding Logback as a dependency. This is because we’ll be logging the output of Reactor in order to better understand the flow of data.

4. Producing a Stream of Data

In order for an application to be reactive, the first thing it must be able to do is to produce a stream of data. This could be something like the stock update example that we gave earlier. Without this data, we wouldn’t have anything to react to, which is why this is a logical first step. Reactor Core gives us two data types that enable us to do this.

4.1. Flux

The first way of doing this is with a Flux.  It’s a stream which can emit 0..n elements. Let’s try creating a simple one:

Flux<String> just = Flux.just("1", "2", "3");

In this case, we have a static stream of three elements.

4.2. Mono

The second way of doing this is with a Mono, which is a stream of 0..1 elements. Let’s try instantiating one:

Mono<String> just = Mono.just("foo");

This looks and behaves almost exactly the same as the Flux, only this time we are limited to no more than one element.

4.3. Why Not Just Flux?

Before experimenting further, it’s worth highlighting why we have these two data types.

First, it should be noted that both a Flux and Mono are implementations of the Reactive Streams Publisher interface. Both classes are compliant with the specification, and we could use this interface in their place:

Publisher<String> just = Mono.just("foo");

But really, knowing this cardinality is useful. This is because a few operations only make sense for one of the two types, and because it can be more expressive (imagine findOne() in a repository).
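
To make the cardinality point concrete, the types track it for us: combining a Mono with another Publisher may yield more than one element, so the result is a Flux (a small illustrative snippet):

// Concatenating two single-element publishers can produce two elements,
// so the API returns a Flux rather than a Mono
Flux<String> combined = Mono.just("foo").concatWith(Mono.just("bar"));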

5. Subscribing to a Stream

Now that we have a high-level overview of how to produce a stream of data, we need to subscribe to it in order for it to emit the elements.

5.1. Collecting Elements

Let’s use the subscribe() method to collect all the elements in a stream:

List<Integer> elements = new ArrayList<>();

Flux.just(1, 2, 3, 4)
  .log()
  .subscribe(elements::add);

assertThat(elements).containsExactly(1, 2, 3, 4);

The data won’t start flowing until we subscribe. Notice that we have added some logging as well; this will be helpful when we look at what’s happening behind the scenes.

5.2. The Flow of Elements

With logging in place, we can use it to visualize how the data is flowing through our stream:

20:25:19.550 [main] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | request(unbounded)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(1)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(2)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(3)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onNext(4)
20:25:19.553 [main] INFO  reactor.Flux.Array.1 - | onComplete()

First of all, everything is running on the main thread. Let’s not go into any details about this, as we’ll be taking a further look at concurrency later on in this article. It does make things simple, though, as we can deal with everything in order.

Now let’s go through the sequence that we have logged one by one:

  1. onSubscribe() – This is called when we subscribe to our stream
  2. request(unbounded) – When we call subscribe(), behind the scenes we are creating a Subscription. This subscription requests elements from the stream. In this case, it defaults to unbounded, meaning it requests every single element available
  3. onNext() – This is called on every single element
  4. onComplete() – This is called last, after receiving the last element. There’s actually an onError() as well, which would be called if there is an exception, but in this case, there isn’t

This is the flow laid out in the Subscriber interface as part of the Reactive Streams Specification, and in reality, that’s what’s been instantiated behind the scenes in our call to subscribe(). It’s a useful shortcut, but to better understand what’s happening let’s provide a Subscriber implementation directly:

Flux.just(1, 2, 3, 4)
  .log()
  .subscribe(new Subscriber<Integer>() {
    @Override
    public void onSubscribe(Subscription s) {
      s.request(Long.MAX_VALUE);
    }

    @Override
    public void onNext(Integer integer) {
      elements.add(integer);
    }

    @Override
    public void onError(Throwable t) {}

    @Override
    public void onComplete() {}
});

We can see that each possible stage in the above flow maps to a method in the Subscriber implementation. It just happens that the Flux has provided us with a helper method to reduce this verbosity.

5.3. Comparison to Java 8 Streams

It still might appear that we have something synonymous to a Java 8 Stream doing collect:

List<Integer> collected = Stream.of(1, 2, 3, 4)
  .collect(toList());

Only we don’t.

The core difference is that Reactive is a push model, whereas the Java 8 Streams are a pull model. In the reactive approach, events are pushed to the subscribers as they come in.

The next thing to notice is that a Stream’s terminal operator is just that, terminal, pulling all the data and returning a result. With Reactive we could have an infinite stream coming in from an external resource, with multiple subscribers attached and removed on an ad hoc basis. We can also do things like combine streams, throttle streams, and apply backpressure, which we’ll cover next.

6. Backpressure

The next thing we should consider is backpressure. In our example, the subscriber is telling the producer to push every single element at once. This could end up becoming overwhelming for the subscriber, consuming all of its resources.

Backpressure is when a downstream can tell an upstream to send it less data in order to prevent it from being overwhelmed.

We can modify our Subscriber implementation to apply backpressure. Let’s tell the upstream to only send two elements at a time by using request():

Flux.just(1, 2, 3, 4)
  .log()
  .subscribe(new Subscriber<Integer>() {
    private Subscription s;
    int onNextAmount;

    @Override
    public void onSubscribe(Subscription s) {
        this.s = s;
        s.request(2);
    }

    @Override
    public void onNext(Integer integer) {
        elements.add(integer);
        onNextAmount++;
        if (onNextAmount % 2 == 0) {
            s.request(2);
        }
    }

    @Override
    public void onError(Throwable t) {}

    @Override
    public void onComplete() {}
});

Now if we run our code again, we’ll see that request(2) is called, followed by two onNext() calls, then request(2) again:

23:31:15.395 [main] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
23:31:15.397 [main] INFO  reactor.Flux.Array.1 - | request(2)
23:31:15.397 [main] INFO  reactor.Flux.Array.1 - | onNext(1)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onNext(2)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | request(2)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onNext(3)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onNext(4)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | request(2)
23:31:15.398 [main] INFO  reactor.Flux.Array.1 - | onComplete()

Essentially, this is reactive pull backpressure. We are requesting the upstream to only push a certain amount of elements, and only when we are ready. If we imagine we were being streamed tweets from Twitter, it would then be up to the upstream to decide what to do. If tweets were coming in but there were no requests from the downstream, then the upstream could drop items, store them in a buffer, or use some other strategy.

7. Operating on a Stream

We can also perform operations on the data in our stream, responding to events as we see fit.

7.1. Mapping Data in a Stream

A simple operation that we can perform is applying a transformation. In this case, let’s just double all the numbers in our stream:

Flux.just(1, 2, 3, 4)
  .log()
  .map(i -> i * 2)
  .subscribe(elements::add);

map() will be applied when onNext() is called.
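
If we then assert on the collected elements, each value arrives doubled:

assertThat(elements).containsExactly(2, 4, 6, 8);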

7.2. Combining two Streams

We can then make things more interesting by combining another stream with this one. Let’s try this by using the zipWith() function:

Flux.just(1, 2, 3, 4)
  .log()
  .map(i -> i * 2)
  .zipWith(Flux.range(0, Integer.MAX_VALUE), 
    (two, one) -> String.format("First Flux: %d, Second Flux: %d", one, two))
  .subscribe(elements::add);

assertThat(elements).containsExactly(
  "First Flux: 0, Second Flux: 2",
  "First Flux: 1, Second Flux: 4",
  "First Flux: 2, Second Flux: 6",
  "First Flux: 3, Second Flux: 8");

Here, we are creating another Flux which keeps incrementing by one and streaming it together with our original one. We can see how these work together by inspecting the logs:

20:04:38.064 [main] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
20:04:38.065 [main] INFO  reactor.Flux.Array.1 - | onNext(1)
20:04:38.066 [main] INFO  reactor.Flux.Range.2 - | onSubscribe([Synchronous Fuseable] FluxRange.RangeSubscription)
20:04:38.066 [main] INFO  reactor.Flux.Range.2 - | onNext(0)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onNext(2)
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | onNext(1)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onNext(3)
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | onNext(2)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onNext(4)
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | onNext(3)
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | onComplete()
20:04:38.067 [main] INFO  reactor.Flux.Array.1 - | cancel()
20:04:38.067 [main] INFO  reactor.Flux.Range.2 - | cancel()

Note how we now have one subscription per Flux. The onNext() calls are also alternated, so the index of each element in the stream will match when we apply the zipWith() function.

8. Hot Streams

Currently, we’ve focused primarily on cold streams. These are static, fixed-length streams that are easy to deal with. A more realistic use case for reactive might be something that happens infinitely. For example, we could have a stream of mouse movements that constantly needs to be reacted to, or a Twitter feed. These types of streams are called hot streams, as they are always running and can be subscribed to at any point in time, missing the start of the data.

8.1. Creating a ConnectableFlux

One way to create a hot stream is by converting a cold stream into one. Let’s create a Flux that lasts forever, outputting the results to the console, which would simulate an infinite stream of data coming from an external resource:

ConnectableFlux<Object> publish = Flux.create(fluxSink -> {
    while(true) {
        fluxSink.next(System.currentTimeMillis());
    }
})
  .publish();

By calling publish() we are given a ConnectableFlux. This means that calling subscribe() won’t cause it to start emitting, allowing us to add multiple subscriptions:

publish.subscribe(System.out::println);        
publish.subscribe(System.out::println);

If we try running this code, nothing will happen. It’s not until we call connect() that the Flux will start emitting, regardless of whether anything has subscribed.
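
So, to actually start the flow once the subscriptions are in place:

publish.connect();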

8.2. Throttling

If we run our code, our console will be overwhelmed with logging. This is simulating a situation where too much data is being passed to our consumers. Let’s try getting around this with throttling:

ConnectableFlux<Object> publish = Flux.create(fluxSink -> {
    while(true) {
        fluxSink.next(System.currentTimeMillis());
    }
})
  .sample(ofSeconds(2))
  .publish();

Here, we’ve introduced a sample() method with an interval of two seconds. Now values will only be pushed to our subscriber every two seconds, meaning the console will be a lot less hectic.

Of course, there are multiple strategies to reduce the amount of data sent downstream, such as windowing and buffering, but they are out of scope for this article.

9. Concurrency

All of our above examples have run on the main thread. However, we can control which thread our code runs on if we want. The Scheduler interface provides an abstraction around asynchronous code, for which many implementations are provided for us. Let’s try subscribing on a different thread from main:

Flux.just(1, 2, 3, 4)
  .log()
  .map(i -> i * 2)
  .subscribeOn(Schedulers.parallel())
  .subscribe(elements::add);

The Parallel scheduler will cause our subscription to be run on a different thread, which we can prove by looking at the logs:

20:03:27.505 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
20:03:27.529 [parallel-1] INFO  reactor.Flux.Array.1 - | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | request(unbounded)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(1)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(2)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(3)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onNext(4)
20:03:27.531 [parallel-1] INFO  reactor.Flux.Array.1 - | onComplete()

Concurrency gets more interesting than this, and it will be worth exploring in another article.

10. Conclusion

In this article, we’ve given a high-level, end-to-end overview of Reactor Core. We’ve explained how we can publish and subscribe to streams, apply backpressure, operate on streams, and handle data asynchronously. This should hopefully lay a foundation for us to write reactive applications.

Later articles in this series will cover more advanced concurrency and other reactive concepts. There’s also another article covering Reactor with Spring.

The source code for our application is available over on GitHub; this is a Maven project which should be able to run as is.

Guide to Guava’s Reflection Utilities

1. Overview

In this article, we’ll be looking at the Guava reflection API – which is definitely more versatile compared to the standard Java reflection API.

We’ll be using Guava to capture generic types at runtime, and we’ll be making good use of Invokable as well.

2. Capturing Generic Type at Runtime

In Java, generics are implemented with type erasure. That means that the generic type information is only available at compile time; at runtime, it’s no longer available.

For example, for a List<String>, the information about the generic type gets erased at runtime. Due to that fact, it is not safe to pass around generic Class objects at runtime.

We might end up assigning two lists that have different generic types to the same reference, which is clearly not a good idea:

List<String> stringList = Lists.newArrayList();
List<Integer> intList = Lists.newArrayList();

boolean result = stringList.getClass()
  .isAssignableFrom(intList.getClass());

assertTrue(result);

Because of type erasure, the method isAssignableFrom() cannot know the actual generic type of the lists. It basically compares two types that are just a List, without any information about the actual element type.

By using the standard Java reflection API we can detect the generic types of methods and classes. If we have a method that returns a List<String>, we can use reflection to obtain the return type of that method – a ParameterizedType representing List<String>.
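
As a quick illustrative sketch (the interface below is hypothetical, introduced only to show the mechanism), the generic signature survives in the class metadata:

// Hypothetical interface used only for this illustration
interface StringListSource {
    List<String> strings();
}

Type returnType = StringListSource.class
  .getMethod("strings")
  .getGenericReturnType();

// The return type is a ParameterizedType describing List<String>
assertTrue(returnType instanceof ParameterizedType);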

The TypeToken class uses this workaround to allow the manipulation of generic types. We can use the TypeToken class to capture an actual type of generic list and check if they really can be referenced by the same reference:

TypeToken<List<String>> stringListToken
  = new TypeToken<List<String>>() {};
TypeToken<List<Integer>> integerListToken
  = new TypeToken<List<Integer>>() {};
TypeToken<List<? extends Number>> numberTypeToken
  = new TypeToken<List<? extends Number>>() {};

assertFalse(stringListToken.isSubtypeOf(integerListToken));
assertFalse(numberTypeToken.isSubtypeOf(integerListToken));
assertTrue(integerListToken.isSubtypeOf(numberTypeToken));

Only the integerListToken can be assigned to a reference of type numberTypeToken, because the Integer class extends the Number class.

3. Capturing Complex Types Using TypeToken

Let’s say that we want to create a generic parameterized class, and we want to have information about a generic type at runtime. We can create a class that has a TypeToken as a field to capture that information:

abstract class ParametrizedClass<T> {
    TypeToken<T> type = new TypeToken<T>(getClass()) {};
}

Then, when creating an instance of that class, the generic type will be available at runtime:

ParametrizedClass<String> parametrizedClass = new ParametrizedClass<String>() {};

assertEquals(parametrizedClass.type, TypeToken.of(String.class));

We can also create a TypeToken of a complex type that has more than one generic type, and retrieve information about each of those types at runtime:

TypeToken<Function<Integer, String>> funToken
  = new TypeToken<Function<Integer, String>>() {};

TypeToken<?> funResultToken = funToken
  .resolveType(Function.class.getTypeParameters()[1]);

assertEquals(funResultToken, TypeToken.of(String.class));

We get the actual return type of the Function, which is a String. We can even get the type of the entries in the map:

TypeToken<Map<String, Integer>> mapToken
  = new TypeToken<Map<String, Integer>>() {};

TypeToken<?> entrySetToken = mapToken
  .resolveType(Map.class.getMethod("entrySet")
  .getGenericReturnType());

assertEquals(
  entrySetToken,
  new TypeToken<Set<Map.Entry<String, Integer>>>() {});

Here we use the reflection method getMethod() from the Java standard library to capture the return type of a method.

4. Invokable

Invokable is a fluent wrapper of java.lang.reflect.Method and java.lang.reflect.Constructor. It provides a simpler API on top of the standard Java reflection API. Let’s say that we have a class with two public methods, one of them final:

class CustomClass {
    public void somePublicMethod() {}

    public final void notOverridablePublicMethod() {}
}

Now let’s examine somePublicMethod() using the Guava API and the standard Java reflection API:

Method method = CustomClass.class.getMethod("somePublicMethod");
Invokable<CustomClass, ?> invokable 
  = new TypeToken<CustomClass>() {}
  .method(method);

boolean isPublicStandardJava = Modifier.isPublic(method.getModifiers());
boolean isPublicGuava = invokable.isPublic();

assertTrue(isPublicStandardJava);
assertTrue(isPublicGuava);

There is not much difference between these two variants, but checking if a method is overridable is a really non-trivial task in Java. Fortunately, the isOverridable() method from the Invokable class makes it easier:

Method method = CustomClass.class.getMethod("notOverridablePublicMethod");
Invokable<CustomClass, ?> invokable
 = new TypeToken<CustomClass>() {}.method(method);

boolean isOverridableStandardJava = (!(Modifier.isFinal(method.getModifiers()) 
  || Modifier.isPrivate(method.getModifiers())
  || Modifier.isStatic(method.getModifiers())
  || Modifier.isFinal(method.getDeclaringClass().getModifiers())));
boolean isOverridableGuava = invokable.isOverridable();

assertFalse(isOverridableStandardJava);
assertFalse(isOverridableGuava);

We see that even such a simple operation needs a lot of checks using the standard reflection API. The Invokable class hides this behind an API that is simple to use and very concise.

5. Conclusion

In this article, we looked at the Guava reflection API and compared it to standard Java reflection. We saw how to capture generic types at runtime, and how the Invokable class provides an elegant and easy-to-use API for code that uses reflection.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Mockito’s Java 8 Features

1. Overview

Java 8 introduced a range of new, awesome features, like lambda expressions and streams. And naturally, Mockito leveraged these recent innovations in its second major version.

In this article, we are going to explore everything this powerful combination has to offer.

2. Mocking Interface With a Default Method

From Java 8 onwards we can write method implementations in our interfaces. This might be a great new functionality, but its introduction to the language violated a strong concept that had been part of Java since its conception.

Mockito version 1 was not ready for this change, basically because it didn’t allow us to ask it to call real methods on interfaces.

Imagine that we have an interface with two method declarations: the first one is the old-fashioned method signature we’re all used to, and the other is a brand new default method:

public interface JobService {
 
    Optional<JobPosition> findCurrentJobPosition(Person person);
    
    default boolean assignJobPosition(Person person, JobPosition jobPosition) {
        if(!findCurrentJobPosition(person).isPresent()) {
            person.setCurrentJobPosition(jobPosition);
            
            return true;
        } else {
            return false;
        }
    }
}

Notice that the assignJobPosition() default method has a call to the unimplemented findCurrentJobPosition() method.

Now, suppose we want to test our implementation of assignJobPosition() without writing an actual implementation of findCurrentJobPosition(). We could simply create a mocked version of JobService, then tell Mockito to return a known value from the call to our unimplemented method and call the real method when assignJobPosition() is called:

public class JobServiceUnitTest {
 
    @Mock
    private JobService jobService;

    @Test
    public void givenDefaultMethod_whenCallRealMethod_thenNoExceptionIsRaised() {
        Person person = new Person();

        when(jobService.findCurrentJobPosition(person))
              .thenReturn(Optional.of(new JobPosition()));

        doCallRealMethod().when(jobService)
          .assignJobPosition(
            Mockito.any(Person.class), 
            Mockito.any(JobPosition.class)
        );

        assertFalse(jobService.assignJobPosition(person, new JobPosition()));
    }
}

This is perfectly reasonable and it would work just fine given we were using an abstract class instead of an interface.

However, the inner workings of Mockito 1 were just not ready for this structure. If we were to run this code with a pre-2 version of Mockito, we would get this nicely descriptive error:

org.mockito.exceptions.base.MockitoException:
Cannot call real method on java interface. Interface does not have any implementation!
Calling real methods is only possible when mocking concrete classes.

Mockito is doing its job and telling us it can’t call real methods on interfaces since this operation was unthinkable before Java 8.

The good news is that just by changing the version of Mockito we’re using we can make this error go away. Using Maven, for example, we could use version 2.7.5 (the latest Mockito version can be found here):

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>2.7.5</version>
    <scope>test</scope>
</dependency>

There is no need to make any changes to the code. The next time we run our test, the error will no longer occur.

3. Return Default Values for Optional and Stream

Optional and Stream are two more Java 8 additions. One similarity between the two classes is that each has a special type of value that represents an empty object. This empty object makes it easier to avoid the omnipresent NullPointerException.

3.1. Example with Optional

Consider a service that injects the JobService described in the previous section and has a method that calls JobService#findCurrentJobPosition():

public class UnemploymentServiceImpl implements UnemploymentService {
 
    private JobService jobService;
    
    public UnemploymentServiceImpl(JobService jobService) {
        this.jobService = jobService;
    }

    @Override
    public boolean personIsEntitledToUnemploymentSupport(Person person) {
        Optional<JobPosition> optional = jobService.findCurrentJobPosition(person);
        
        return !optional.isPresent();
    }
}

Now, assume we want to create a test to check that, when a person has no current job position, they are entitled to unemployment support.

In that case, we would force findCurrentJobPosition() to return an empty Optional. Before Mockito 2, we were required to mock the call to that method:

public class UnemploymentServiceImplUnitTest {
 
    @Mock
    private JobService jobService;

    @InjectMocks
    private UnemploymentServiceImpl unemploymentService;

    @Test
    public void givenReturnIsOfTypeOptional_whenMocked_thenValueIsEmpty() {
        Person person = new Person();

        when(jobService.findCurrentJobPosition(any(Person.class)))
          .thenReturn(Optional.empty());
        
        assertTrue(unemploymentService.personIsEntitledToUnemploymentSupport(person));
    }
}

This when(…).thenReturn(…) instruction is necessary because Mockito’s default return value for any method call on a mocked object is null. Version 2 changed that behavior.

Since we rarely handle null values when dealing with Optional, Mockito now returns an empty Optional by default. That is the exact same value as the return of a call to Optional.empty().

So, when using Mockito version 2, we could get rid of that stubbing line and our test would still be successful:

public class UnemploymentServiceImplUnitTest {
 
    @Test
    public void givenReturnIsOptional_whenDefaultValueIsReturned_thenValueIsEmpty() {
        Person person = new Person();
 
        assertTrue(unemploymentService.personIsEntitledToUnemploymentSupport(person));
    }
}

3.2. Example with Stream

The same behavior occurs when we mock a method that returns a Stream.

Let’s add a new method to our JobService interface that returns a Stream representing all the job positions that a person has ever worked at:

public interface JobService {
    Stream<JobPosition> listJobs(Person person);
}

This method is used by another new method that will query whether a person has ever worked at a job that matches a given search string:

public class UnemploymentServiceImpl implements UnemploymentService {
   
    @Override
    public Optional<JobPosition> searchJob(Person person, String searchString) {
        return jobService.listJobs(person)
          .filter((j) -> j.getTitle().contains(searchString))
          .findFirst();
    }
}

So, assume we want to properly test the implementation of searchJob() without having to worry about writing listJobs(), and assume we want to test the scenario where the person hasn’t worked at any jobs yet. In that case, we would want listJobs() to return an empty Stream.

Before Mockito 2, we would need to mock the call to listJobs() to write such a test:

public class UnemploymentServiceImplUnitTest {
 
    @Test
    public void givenReturnIsOfTypeStream_whenMocked_thenValueIsEmpty() {
        Person person = new Person();
        when(jobService.listJobs(any(Person.class))).thenReturn(Stream.empty());
        
        assertFalse(unemploymentService.searchJob(person, "").isPresent());
    }
}

If we upgrade to version 2, we could drop the when(…).thenReturn(…) call, because now Mockito will return an empty Stream on mocked methods by default:

public class UnemploymentServiceImplUnitTest {
 
    @Test
    public void givenReturnIsStream_whenDefaultValueIsReturned_thenValueIsEmpty() {
        Person person = new Person();
        
        assertFalse(unemploymentService.searchJob(person, "").isPresent());
    }
}

4. Leveraging Lambda Expressions

With Java 8’s lambda expressions we can make statements much more compact and easier to read. When working with Mockito, two very nice examples of the simplicity brought in by lambda expressions are ArgumentMatchers and custom Answers.

4.1. Combination of Lambda and ArgumentMatcher

Before Java 8, we needed to create a class that implemented ArgumentMatcher, and write our custom rule in the matches() method.

With Java 8, we can replace the inner class with a simple lambda expression:

public class ArgumentMatcherWithLambdaUnitTest {
 
    @Test
    public void whenPersonWithJob_thenIsNotEntitled() {
        Person peter = new Person("Peter");
        Person linda = new Person("Linda");
        
        JobPosition teacher = new JobPosition("Teacher");

        when(jobService.findCurrentJobPosition(
          ArgumentMatchers.argThat(p -> p.getName().equals("Peter"))))
          .thenReturn(Optional.of(teacher));
        
        assertTrue(unemploymentService.personIsEntitledToUnemploymentSupport(linda));
        assertFalse(unemploymentService.personIsEntitledToUnemploymentSupport(peter));
    }
}

4.2. Combination of Lambda and Custom Answer

The same effect can be achieved when combining lambda expressions with Mockito’s Answer.

For example, if we wanted to simulate calls to the listJobs() method in order to make it return a Stream containing a single JobPosition if the Person‘s name is “Peter”, and an empty Stream otherwise, we would have to create a class (anonymous or inner) that implemented the Answer interface.

Again, the use of a lambda expression allows us to write all the mock behavior inline:

public class CustomAnswerWithLambdaUnitTest {
 
    @Before
    public void init() {
        MockitoAnnotations.initMocks(this);

        when(jobService.listJobs(any(Person.class))).then((i) ->
          Stream.of(new JobPosition("Teacher"))
          .filter(p -> ((Person) i.getArgument(0)).getName().equals("Peter")));
    }
}

Notice that, in the implementation above, there is no need for the PersonAnswer inner class.
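
To see the behavior in action, here’s a quick hedged test (assuming the same @Mock/@InjectMocks wiring as in the earlier examples):

@Test
public void givenPersonIsNotPeter_whenSearchJob_thenNoJobIsFound() {
    Person linda = new Person("Linda");

    // The lambda-based Answer yields an empty Stream for anyone but Peter
    assertFalse(unemploymentService.searchJob(linda, "Teacher").isPresent());
}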

5. Conclusion

In this article, we covered how to leverage new Java 8 and Mockito 2 features together to write cleaner, simpler, and shorter code. If you are not familiar with some of the Java 8 features we saw here, check some of our other Java 8 articles.

Also, check the accompanying code on our GitHub repository.

AngularJS CRUD Application with Spring Data REST

1. Overview

In this tutorial, we’re going to create an example of a simple CRUD application using AngularJS for the front-end and Spring Data REST for the back-end.

2. Creating the REST Data Service

In order to create the support for persistence, we’ll make use of the Spring Data REST specification that will enable us to perform CRUD operations on a data model.

You can find all the necessary information on how to set up the REST endpoints in the introduction to Spring Data REST. In this article, we will reuse the existing project we set up for the introduction tutorial.

For persistence, we will use the H2 in-memory database.

As a data model, the previous article defines a WebsiteUser class, with id, name and email properties and a repository interface called UserRepository.

Defining this interface instructs Spring to create the support for exposing REST collection resources and item resources. Let’s take a closer look at the endpoints available to us now that we will later call from AngularJS.

2.1. The Collection Resources

A list of all the users will be available to us at the endpoint /users. This URL can be called using the GET method and will return JSON objects of the form:

{
  "_embedded" : {
    "users" : [ {
      "name" : "Bryan",
      "age" : 20,
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/users/1"
        },
        "User" : {
          "href" : "http://localhost:8080/users/1"
        }
      }
    }, 
...
    ]
  }
}

2.2. The Item Resources

A single WebsiteUser object can be manipulated by accessing URLs of the form /users/{userID} with different HTTP methods and request payloads.

For retrieving a WebsiteUser object, we can access /users/{userID} with the GET method. This returns a JSON object of the form:

{
  "name" : "Bryan",
  "email" : "bryan@yahoo.com",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users/1"
    },
    "User" : {
      "href" : "http://localhost:8080/users/1"
    }
  }
}

To add a new WebsiteUser, we will need to call /users with POST method. The attributes of the new WebsiteUser record will be added in the request body as a JSON object:

{name: "Bryan", email: "bryan@yahoo.com"}

If there are no errors, this URL returns a status code 201 CREATED.

If we want to update the attributes of the WebsiteUser record, we need to call the URL /users/{UserID} with the PATCH method and a request body containing the new values:

{name: "Bryan", email: "bryan@gmail.com"}

To delete a WebsiteUser record, we can call the URL /users/{UserID} with the DELETE method. If there are no errors, this returns status code 204 NO CONTENT.

2.3. MVC Configuration

We’ll also add a basic MVC configuration to display HTML files in our application:

@Configuration
@EnableWebMvc
public class MvcConfig extends WebMvcConfigurerAdapter{
    
    public MvcConfig(){
        super();
    }
    
    @Override
    public void configureDefaultServletHandling(
      DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
}

2.4. Allowing Cross Origin Requests

If we want to deploy the AngularJS front-end application separately from the REST API, then we need to enable cross-origin requests.

Spring Data REST has added support for this starting with version 1.5.0.RELEASE. To allow requests from a different domain, all you have to do is add the @CrossOrigin annotation to the repository:

@CrossOrigin
@RepositoryRestResource(collectionResourceRel = "users", path = "users")
public interface UserRepository extends CrudRepository<WebsiteUser, Long> {}

As a result, an Access-Control-Allow-Origin header will be added to every response from the REST endpoints.

3. Creating the AngularJS Client

For creating the front end of our CRUD application, we’ll use AngularJS, a well-known JavaScript framework that eases the creation of front-end applications.

In order to use AngularJS, we first need to include the angular.min.js file in our HTML page, which will be called users.html:

<script 
  src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.6/angular.min.js">
</script>

Next, we need to create an Angular module, controller, and service that will call the REST endpoints and display the returned data.

These will be placed in a JavaScript file called app.js that also needs to be included in the users.html page:

<script src="view/app.js"></script>

3.1. Angular Service

First, let’s create an Angular service called UserCRUDService that will make use of the injected AngularJS $http service to make calls to the server. Each call will be placed in a separate method.

Let’s take a look at defining the method for retrieving a user by id using the /users/{userID} endpoint:

app.service('UserCRUDService', [ '$http', function($http) {

    this.getUser = function getUser(userId) {
        return $http({
            method : 'GET',
            url : 'users/' + userId
        });
    }
} ]);

Next, let’s define the addUser method which makes a POST request to the /users URL and sends the user values in the data attribute:

this.addUser = function addUser(name, email) {
    return $http({
        method : 'POST',
        url : 'users',
        data : {
            name : name,
            email: email
        }
    });
}

The updateUser method is similar to the one above, except it will have an id parameter and makes a PATCH request:

this.updateUser = function updateUser(id, name, email) {
    return $http({
        method : 'PATCH',
        url : 'users/' + id,
        data : {
            name : name,
            email: email
        }
    });
}

The method for deleting a WebsiteUser record will make a DELETE request:

this.deleteUser = function deleteUser(id) {
    return $http({
        method : 'DELETE',
        url : 'users/' + id
    })
}

And finally, let’s take a look at the method for retrieving the entire list of users:

this.getAllUsers = function getAllUsers() {
    return $http({
        method : 'GET',
        url : 'users'
    });
}

All of these service methods will be called by an AngularJS controller.

3.2. Angular Controller

We will create a UserCRUDCtrl AngularJS controller that has a UserCRUDService injected and uses the service methods to obtain the response from the server, handle success and error cases, and set $scope variables containing the response data for display in the HTML page.

Let’s take a look at the getUser() function that calls the getUser(userId) service function and defines two callback methods in case of success and error. If the server request succeeds, then the response is saved in a user variable; otherwise, error messages are handled:

app.controller('UserCRUDCtrl', ['$scope','UserCRUDService', 
  function ($scope,UserCRUDService) {
      $scope.getUser = function () {
          var id = $scope.user.id;
          UserCRUDService.getUser($scope.user.id)
            .then(function success(response) {
                $scope.user = response.data;
                $scope.user.id = id;
                $scope.message='';
                $scope.errorMessage = '';
            },
    	    function error (response) {
                $scope.message = '';
                if (response.status === 404){
                    $scope.errorMessage = 'User not found!';
                }
                else {
                    $scope.errorMessage = "Error getting user!";
                }
            });
      };
}]);

The addUser() function will call the corresponding service function and handle the response:

$scope.addUser = function () {
    if ($scope.user != null && $scope.user.name) {
        UserCRUDService.addUser($scope.user.name, $scope.user.email)
          .then (function success(response){
              $scope.message = 'User added!';
              $scope.errorMessage = '';
          },
          function error(response){
              $scope.errorMessage = 'Error adding user!';
              $scope.message = '';
        });
    }
    else {
        $scope.errorMessage = 'Please enter a name!';
        $scope.message = '';
    }
}

The updateUser() and deleteUser() functions are similar to the one above:

$scope.updateUser = function () {
    UserCRUDService.updateUser($scope.user.id, 
      $scope.user.name, $scope.user.email)
      .then(function success(response) {
          $scope.message = 'User data updated!';
          $scope.errorMessage = '';
      },
      function error(response) {
          $scope.errorMessage = 'Error updating user!';
          $scope.message = '';
      });
}

$scope.deleteUser = function () {
    UserCRUDService.deleteUser($scope.user.id)
      .then (function success(response) {
          $scope.message = 'User deleted!';
          $scope.user = null;
          $scope.errorMessage='';
      },
      function error(response) {
          $scope.errorMessage = 'Error deleting user!';
          $scope.message='';
      });
}

And finally, let’s define the function that retrieves a list of users, and stores it in the users variable:

$scope.getAllUsers = function () {
    UserCRUDService.getAllUsers()
      .then(function success(response) {
          $scope.users = response.data._embedded.users;
          $scope.message='';
          $scope.errorMessage = '';
      },
      function error (response) {
          $scope.message='';
          $scope.errorMessage = 'Error getting users!';
      });
}

3.3. HTML Page

The users.html page will make use of the controller functions defined in the previous section and the stored variables.

First, in order to use the Angular module, we need to set the ng-app property:

<html ng-app="app">

Then, to avoid typing UserCRUDCtrl.getUser() every time we use a function of the controller, we can wrap our HTML elements in a div with an ng-controller property set:

<div ng-controller="UserCRUDCtrl">

Let’s create the form that will input and display the values for the WebsiteUser object we want to manipulate. Each of these will have an ng-model attribute set, which binds it to the value of the attribute:

<table>
    <tr>
        <td width="100">ID:</td>
        <td><input type="text" id="id" ng-model="user.id" /></td>
    </tr>
    <tr>
        <td width="100">Name:</td>
        <td><input type="text" id="name" ng-model="user.name" /></td>
    </tr>
    <tr>
        <td width="100">Age:</td>
        <td><input type="text" id="age" ng-model="user.email" /></td>
    </tr>
</table>

Binding the id input to the user.id variable, for example, means that whenever the value of the input is changed, this value is set in the user.id variable and vice versa.

Next, let’s use the ng-click attribute to define the links that will trigger the invoking of each CRUD controller function defined:

<a ng-click="getUser(user.id)">Get User</a>
<a ng-click="updateUser(user.id,user.name,user.email)">Update User</a>
<a ng-click="addUser(user.name,user.email)">Add User</a>
<a ng-click="deleteUser(user.id)">Delete User</a>

Finally, let’s display the entire list of users, showing each name and email:

<a ng-click="getAllUsers()">Get all Users</a><br/><br/>
<div ng-repeat="usr in users">
    {{usr.name}} {{usr.email}}
</div>

4. Conclusion

In this tutorial, we have shown how you can create a CRUD application using AngularJS and the Spring Data REST specification.

The complete code for the above example can be found in the GitHub project.

To run the application, you can use the command mvn spring-boot:run and access the URL /users.html.

A guide to the “when{}” block in Kotlin

1. Introduction

This tutorial introduces the when{} block in Kotlin language and demonstrates the various ways that it can be used.

To understand the material in this article, basic knowledge of the Kotlin language is needed. You can have a look at the introduction to the Kotlin Language article on Baeldung to learn more about the language.

2. Kotlin’s when{} Block

The when{} block is essentially an advanced form of the switch-case statement known from Java.

In Kotlin, if a matching case is found, only the code in the respective case block is executed, and execution continues with the next statement after the when block. This essentially means that no break statements are needed at the end of each case block.

To demonstrate the usage of when{}, let’s define an enum class that holds the first letter in the permissions field for some of the file types in Unix:

enum class UnixFileType {
    D, HYPHEN_MINUS, L
}

Let’s also define a hierarchy of classes that model the respective Unix file types:

sealed class UnixFile {

    abstract fun getFileType(): UnixFileType

    class RegularFile(val content: String) : UnixFile() {
        override fun getFileType(): UnixFileType {
            return UnixFileType.HYPHEN_MINUS
        }
    }

    class Directory(val children: List<UnixFile>) : UnixFile() {
        override fun getFileType(): UnixFileType {
            return UnixFileType.D
        }
    }

    class SymbolicLink(val originalFile: UnixFile) : UnixFile() {
        override fun getFileType(): UnixFileType {
            return UnixFileType.L
        }
    }
}

2.1. When{} as an Expression

A big difference from Java’s switch statement is that the when{} block in Kotlin can be used both as a statement and as an expression. Kotlin follows the principles of other functional languages: flow-control structures are expressions, and the result of their evaluation can be returned to the caller.

If the value returned is assigned to a variable, the compiler will check that the type of the return value is compatible with the type expected by the client and will inform us in case it is not:

@Test
fun testWhenExpression() {
    val directoryType = UnixFileType.D

    val objectType = when (directoryType) {
        UnixFileType.D -> "d"
        UnixFileType.HYPHEN_MINUS -> "-"
        UnixFileType.L -> "l"
    }

    assertEquals("d", objectType)
}

There are two things to notice when using when as an expression in Kotlin.

First, the value that is returned to the caller is the value of the matching case block, or, in other words, the last defined value in the block.

The second thing to notice is that we need to guarantee that the caller gets a value. For this to happen we need to ensure that the cases, in the when block, cover every possible value that can be assigned to the argument.

2.2. When{} as an Expression with Default Case

A default case will match any argument value that is not matched by a normal case, and in Kotlin it is declared using the else clause. Either way, the Kotlin compiler will assume that every possible argument value is covered by the when block and will complain if it is not.

To add a default case in Kotlin’s when expression:

@Test
fun testWhenExpressionWithDefaultCase() {
    val fileType = UnixFileType.L

    val result = when (fileType) {
        UnixFileType.L -> "linking to another file"
        else -> "not a link"
    }

    assertEquals("linking to another file", result)
}

2.3. When{} Expression with a Case that Throws an Exception

In Kotlin, throw returns a value of type Nothing.

In this case, Nothing is used to declare that the expression does not compute a value. Nothing is a subtype of every user-defined and built-in type in Kotlin.

Therefore, since the type is compatible with any argument that we would use in a when block, it is perfectly valid to throw an exception from a case even if the when block is used as an expression.

Let’s define a when expression where one of the cases throws an exception:

@Test(expected = IllegalArgumentException::class)
fun testWhenExpressionWithThrowException() {
    val fileType = UnixFileType.L

    val result: Boolean = when (fileType) {
        UnixFileType.HYPHEN_MINUS -> true
        else -> throw IllegalArgumentException("Wrong type of file")
    }
}

2.4. When{} Used as a Statement

We can also use the when block as a statement.

In this case, we do not need to cover every possible value for the argument and the value computed in each case block, if any, is just ignored. When used as a statement, the when block can be used similarly to how the switch statement is used in Java.

Let’s use the when block as a statement:

@Test
fun testWhenStatement() {
    val fileType = UnixFileType.HYPHEN_MINUS

    when (fileType) {
        UnixFileType.HYPHEN_MINUS -> println("Regular file type")
        UnixFileType.D -> println("Directory file type")
    }
}

We can see from the example that it is not mandatory to cover all possible argument values when we are using when as a statement.

2.5. Combining When{} Cases

Kotlin’s when expression allows us to combine different cases into one by concatenating the matching conditions with a comma.

Only one of the conditions has to match for the respective block of code to be executed, so the comma acts as an OR operator.

Let’s create a case that combines two conditions:

@Test
fun testCaseCombination() {
    val fileType = UnixFileType.D

    val frequentFileType: Boolean = when (fileType) {
        UnixFileType.HYPHEN_MINUS, UnixFileType.D -> true
        else -> false
    }

    assertTrue(frequentFileType)
}

2.6. When{} Used Without an Argument

Kotlin allows us to omit the argument value in the when block.

This essentially turns when into a simple if-else if expression that sequentially checks the cases and executes the block of code of the first matching case. If we omit the argument in the when block, the case expressions should evaluate to either true or false.

Let’s create a when block that omits the argument:

@Test
fun testWhenWithoutArgument() {
    val fileType = UnixFileType.L

    val objectType = when {
        fileType === UnixFileType.L -> "l"
        fileType === UnixFileType.HYPHEN_MINUS -> "-"
        fileType === UnixFileType.D -> "d"
        else -> "unknown file type"
    }

    assertEquals("l", objectType)
}

2.7. Dynamic Case Expressions

In Java, the switch statement can only be used with primitives and their boxed types, enums, and the String class. In contrast, Kotlin allows us to use the when block with any built-in or user-defined type.

In addition, the cases are not required to be constant expressions as in Java. Cases in Kotlin can be dynamic expressions that are evaluated at runtime. For example, a case could be the result of a function call, as long as the function’s return type is compatible with the type of the when block argument.

Let’s define a when block with dynamic case expressions:

@Test
fun testDynamicCaseExpression() {
    val unixFile = UnixFile.SymbolicLink(UnixFile.RegularFile("Content"))

    when {
        unixFile.getFileType() == UnixFileType.D -> println("It's a directory!")
        unixFile.getFileType() == UnixFileType.HYPHEN_MINUS -> println("It's a regular file!")
        unixFile.getFileType() == UnixFileType.L -> println("It's a soft link!")
    }
}

2.8. Range and Collection Case Expressions

It is possible to define a case in a when block that checks if a given collection or a range of values contains the argument.

For this purpose, Kotlin provides the in operator, which is syntactic sugar for the contains() method. This means that, behind the scenes, Kotlin translates the case element in collection to collection.contains(element).

To check if the argument is in a list:

@Test
fun testCollectionCaseExpressions() {
    val regularFile = UnixFile.RegularFile("Test Content")
    val symbolicLink = UnixFile.SymbolicLink(regularFile)
    val directory = UnixFile.Directory(listOf(regularFile, symbolicLink))

    val isRegularFileInDirectory = when (regularFile) {
        in directory.children -> true
        else -> false
    }

    val isSymbolicLinkInDirectory = when {
        symbolicLink in directory.children -> true
        else -> false
    }

    assertTrue(isRegularFileInDirectory)
    assertTrue(isSymbolicLinkInDirectory)
}

To check that the argument is in a range:

@Test
fun testRangeCaseExpressions() {
    val fileType = UnixFileType.HYPHEN_MINUS

    val isCorrectType = when (fileType) {
        in UnixFileType.D..UnixFileType.L -> true
        else -> false
    }

    assertTrue(isCorrectType)
}

Even though the HYPHEN_MINUS type is not explicitly contained in the range, its ordinal is between the ordinals of D and L, and therefore the test is successful.

2.9. Is Case Operator and Smart Cast

We can use Kotlin’s is operator to check if the argument is an instance of a specified type. The is operator is similar to the instanceof operator in Java.

However, Kotlin provides us with a feature called “smart cast”. After we check if the argument is an instance of a given type, we do not have to explicitly cast the argument to that type since the compiler does that for us.

Therefore, we can use the methods and properties defined in the given type directly in the case block.

To use the is operator with the “smart cast” feature in a when block:

@Test
fun testWhenWithIsOperatorWithSmartCase() {
    val unixFile: UnixFile = UnixFile.RegularFile("Test Content")

    val result = when (unixFile) {
        is UnixFile.RegularFile -> unixFile.content
        is UnixFile.Directory -> unixFile.children.map { it.getFileType() }.joinToString(", ")
        is UnixFile.SymbolicLink -> unixFile.originalFile.getFileType()
    }

    assertEquals("Test Content", result)
}

Without explicitly casting unixFile to RegularFile, Directory, or SymbolicLink, we were able to use RegularFile.content, Directory.children, and SymbolicLink.originalFile respectively.

3. Conclusion

In this article, we have seen several examples of how to use the when block offered by the Kotlin language.

Even though it’s not possible to do full pattern matching with when in Kotlin, as is the case with the corresponding structures in Scala and other JVM languages, the when block is versatile enough to make us totally forget about these features.

The complete implementation of the examples for this article can be found over on GitHub.

Intro to Jasypt


1. Overview

In this article, we’ll be looking at the Jasypt (Java Simplified Encryption) library.

Jasypt is a Java library which allows developers to add basic encryption capabilities to projects with minimum effort and without the need for in-depth knowledge of the implementation details of encryption protocols.

2. Using Simple Encryption

Suppose we’re building a web application in which a user submits private account data. We need to store that data in the database, but it would be insecure to store it as plain text.

One way to deal with this is to store encrypted data in the database and, when retrieving that data for a particular user, decrypt it.

To perform encryption and decryption using a very simple algorithm, we can use the BasicTextEncryptor class from the Jasypt library:

BasicTextEncryptor textEncryptor = new BasicTextEncryptor();
String privateData = "secret-data";
textEncryptor.setPasswordCharArray("some-random-data".toCharArray());

Then we can use the encrypt() method to encrypt the plain text:

String myEncryptedText = textEncryptor.encrypt(privateData);
assertNotSame(privateData, myEncryptedText);

If we want to store private data for a given user in the database, we can store myEncryptedText without violating any security restrictions. Should we want to decrypt the data back to plain text, we can use the decrypt() method:

String plainText = textEncryptor.decrypt(myEncryptedText);
 
assertEquals(plainText, privateData);

We see that the decrypted data is equal to the plain text data that was previously encrypted.

3. One-way Encryption

The previous example is not an ideal way to handle authentication, that is, when we want to store a user password. Ideally, we want to encrypt the password without a way to decrypt it. When the user tries to log into our service, we encrypt the supplied password and compare it with the encrypted password that is stored in the database. That way we never need to operate on the plain text password.

We can use the BasicPasswordEncryptor class to perform the one-way encryption:

String password = "secret-pass";
BasicPasswordEncryptor passwordEncryptor = new BasicPasswordEncryptor();
String encryptedPassword = passwordEncryptor.encryptPassword(password);

Then, we can compare the password supplied by the user performing the login with the already encrypted password, without any need to decrypt the password stored in the database:

boolean result = passwordEncryptor.checkPassword("secret-pass", encryptedPassword);

assertTrue(result);

4. Configuring Algorithm for Encryption

We can use a stronger encryption algorithm, but then we need to remember to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for our JVM (installation instructions are included in the download).

In Jasypt we can use strong encryption by using a StandardPBEStringEncryptor class and customize it using a setAlgorithm() method:

StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
String privateData = "secret-data";
encryptor.setPassword("some-random-password");
encryptor.setAlgorithm("PBEWithMD5AndTripleDES");

Here, we set the encryption algorithm to PBEWithMD5AndTripleDES.

Next, the process of encryption and decryption looks the same as the previous one using the BasicTextEncryptor class:

String encryptedText = encryptor.encrypt(privateData);
assertNotSame(privateData, encryptedText);

String plainText = encryptor.decrypt(encryptedText);
assertEquals(plainText, privateData);

5. Using Multi-Threaded Decryption

When we’re operating on a multi-core machine, we want to handle decryption in parallel. To achieve good performance, we can use a PooledPBEStringEncryptor and the setPoolSize() API to create a pool of encryptors, each of which can be used by a different thread in parallel:

PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
encryptor.setPoolSize(4);
encryptor.setPassword("some-random-data");
encryptor.setAlgorithm("PBEWithMD5AndTripleDES");

It’s good practice to set the pool size equal to the number of cores on the machine. The code for encryption and decryption is the same as in the previous examples.
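
For completeness, here’s a minimal sketch of how the pooled encryptor might be used, reusing the encryptor configured above – the API is the same encrypt()/decrypt() pair shown earlier:

String privateData = "secret-data";

// encryption and decryption work exactly as with the single-threaded
// encryptors; the pool is managed internally by Jasypt
String encryptedText = encryptor.encrypt(privateData);
String plainText = encryptor.decrypt(encryptedText);

assertEquals(privateData, plainText);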

6. Usage in Other Frameworks

A quick final note is that the Jasypt library can be integrated with a lot of other libraries, including of course the Spring Framework.

We only need to create a configuration to add encryption support to our Spring application. And if we want to store sensitive data in the database and we are using Hibernate as the data access framework, we can also integrate Jasypt with it.

Instructions for these integrations, as well as for some other frameworks, can be found in the Guides section of Jasypt’s home page.

7. Conclusion

In this article, we looked at the Jasypt library, which helps us create more secure applications by using well-known and tested cryptographic algorithms. It is covered by a simple API that is easy to use.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.


HBase with Java


1. Overview

In this article, we’ll be looking at the Java client library for the HBase database. HBase is a distributed database that uses the Hadoop file system for storing data.

We’ll create a Java example client and a table to which we will add some simple records.

2. HBase Data Structure

In HBase, data is grouped into column families. All column members of a column family have the same prefix.

For example, the columns family1:qualifier1 and family1:qualifier2 are both members of the family1 column family. All column family members are stored together on the filesystem.

Inside the column family, we can put a row that has a specified qualifier. We can think of a qualifier as a kind of column name.

Let’s see an example record from HBase:

Family1:{  
   'Qualifier1':'row1:cell_data',
   'Qualifier2':'row2:cell_data',
   'Qualifier3':'row3:cell_data'
}
Family2:{  
   'Qualifier1':'row1:cell_data',
   'Qualifier2':'row2:cell_data',
   'Qualifier3':'row3:cell_data'
}

We have two column families, each of them has three qualifiers with some cell data in it. Each row has a row key – it is a unique row identifier. We will be using the row key to insert, retrieve and delete the data.

3. HBase Client Maven Dependency

Before we connect to HBase, we need to add the hbase-client and hbase dependencies:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
</dependency>
<dependency>
     <groupId>org.apache.hbase</groupId>
     <artifactId>hbase</artifactId>
     <version>${hbase.version}</version>
</dependency>

4. HBase Setup

We need to set up HBase to be able to connect to it from the Java client library. The installation is out of scope for this article, but you can check out some of the HBase installation guides online.

Next, we need to start an HBase master locally by executing:

hbase master start

5. Connecting to HBase from Java 

To connect programmatically from Java to HBase, we need to define an XML configuration file. We started our HBase instance on localhost so we need to enter that into a configuration file:

<configuration>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
</configuration>

Now we need to point an HBase client to that configuration file:

Configuration config = HBaseConfiguration.create();

String path = this.getClass()
  .getClassLoader()
  .getResource("hbase-site.xml")
  .getPath();
config.addResource(new Path(path));

Next, we’re checking if a connection to HBase was successful – in the case of a failure, the MasterNotRunningException will be thrown:

HBaseAdmin.checkHBaseAvailable(config);

6. Creating a Database Structure

Before we start adding data to HBase, we need to create the data structure for inserting rows. We will create one table with two column families:

private TableName table1 = TableName.valueOf("Table1");
private String family1 = "Family1";
private String family2 = "Family2";

Firstly, we need to create a connection to the database and get an Admin object, which we will use for manipulating the database structure:

Connection connection = ConnectionFactory.createConnection(config);
Admin admin = connection.getAdmin();

Then, we can create a table by passing an instance of the HTableDescriptor class to the createTable() method on the admin object:

HTableDescriptor desc = new HTableDescriptor(table1);
desc.addFamily(new HColumnDescriptor(family1));
desc.addFamily(new HColumnDescriptor(family2));
admin.createTable(desc);

7. Adding and Retrieving Elements 

With the table created, we can add new data to it by obtaining a Table instance from the connection, creating a Put object and calling the put() method on that Table object:

byte[] row1 = Bytes.toBytes("row1");
byte[] qualifier1 = Bytes.toBytes("Qualifier1");

Table table = connection.getTable(table1);
Put p = new Put(row1);
p.addImmutable(family1.getBytes(), qualifier1, Bytes.toBytes("cell_data"));
table.put(p);

Retrieving the previously created row can be achieved by using the Get class:

Get g = new Get(row1);
Result r = table.get(g);
byte[] value = r.getValue(family1.getBytes(), qualifier1);

The row1 value is a row identifier – we can use it to retrieve a specific row from the database. When calling:

Bytes.toString(value)

the returned result will be the previously inserted cell_data.

8. Scanning and Filtering

We can scan the table, retrieving all elements inside a given qualifier, by using a Scan object (note that ResultScanner extends Closeable, so be sure to call close() on it when you’re done):

Scan scan = new Scan();
scan.addColumn(family1.getBytes(), qualifier1);

ResultScanner scanner = table.getScanner(scan);
for (Result result : scanner) {
    System.out.println("Found row: " + result);
}

That operation will print all rows inside qualifier1 with some additional information, like the timestamp:

Found row: keyvalues={Row1/Family1:Qualifier1/1488202127489/Put/vlen=9/seqid=0}

We can retrieve specific records by using filters.

First, we create two filters. The filter1 specifies that the scan will retrieve rows whose key starts with row1, and filter2 specifies that we are interested only in cells whose qualifier is greater than or equal to qualifier1:

Filter filter1 = new PrefixFilter(row1);
Filter filter2 = new QualifierFilter(
  CompareOp.GREATER_OR_EQUAL, 
  new BinaryComparator(qualifier1));
List<Filter> filters = Arrays.asList(filter1, filter2);

Then we can get a result set from a Scan query:

Scan scan = new Scan();
scan.setFilter(new FilterList(Operator.MUST_PASS_ALL, filters));

try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result result : scanner) {
        System.out.println("Found row: " + result);
    }
}

When creating a FilterList, we passed Operator.MUST_PASS_ALL – it means that all filters must be satisfied. We can choose Operator.MUST_PASS_ONE if only one filter needs to be satisfied. In the resulting set, we will have only rows that matched the specified filters.
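
To illustrate the alternative, here’s a minimal sketch in which a row is returned as soon as either of the two filters matches:

// with MUST_PASS_ONE, a result is returned if at least one filter matches
Scan anyMatchScan = new Scan();
anyMatchScan.setFilter(new FilterList(Operator.MUST_PASS_ONE, filters));

try (ResultScanner scanner = table.getScanner(anyMatchScan)) {
    for (Result result : scanner) {
        System.out.println("Found row: " + result);
    }
}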

9. Deleting Rows

Finally, to delete a row, we can use a Delete class:

Delete delete = new Delete(row1);
delete.addColumn(family1.getBytes(), qualifier1);
table.delete(delete);

Here, we’re deleting the qualifier1 cell of row1 inside the family1 column family.

10. Conclusion

In this quick tutorial, we focused on communicating with an HBase database. We saw how to connect to HBase from the Java client library and how to run various basic operations.

The implementation of all these examples and code snippets can be found in the GitHub project; this is a Maven project, so it should be easy to import and run as it is.

Spring Cloud – Tracing Services with Zipkin


1. Overview

In this article, we are going to add Zipkin to our Spring Cloud project. Zipkin is an open source project that provides mechanisms for sending, receiving, storing, and visualizing traces. This allows us to correlate activity between servers and get a much clearer picture of exactly what is happening in our services.

This article is not an introduction to distributed tracing or Spring Cloud. If you would like more information about distributed tracing, read our introduction to Spring Sleuth.

2. Zipkin Service

Our Zipkin service will serve as the store for all our spans. Each span is sent to this service and collected into traces for future identification.

2.1. Setup

Create a new Spring Boot project and add these dependencies to pom.xml:

<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
    <scope>runtime</scope>
</dependency>

For reference: you can find the latest version on Maven Central (zipkin-server, zipkin-autoconfigure-ui). Versions of the dependencies are inherited from spring-boot-starter-parent.

2.2. Enabling Zipkin Server

To enable the Zipkin server, we must add some annotations to the main application class:

@SpringBootApplication
@EnableZipkinServer
public class ZipkinApplication {...}

The new annotation @EnableZipkinServer will set up this server to listen for incoming spans and act as our UI for querying.

2.3. Configuration

First, let’s create a file called bootstrap.properties in src/main/resources. Remember that this file is needed to fetch our configuration from our config server.

Let’s add these properties to it:

spring.cloud.config.name=zipkin
spring.cloud.config.discovery.service-id=config
spring.cloud.config.discovery.enabled=true
spring.cloud.config.username=configUser
spring.cloud.config.password=configPassword

eureka.client.serviceUrl.defaultZone=
  http://discUser:discPassword@localhost:8082/eureka/

Now let’s add a configuration file to our config repo, located at c:\Users\{username}\ on Windows or /Users/{username}/ on *nix.

In this directory let’s add a file named zipkin.properties and add these contents:

spring.application.name=zipkin
server.port=9411
eureka.client.region=default
eureka.client.registryFetchIntervalSeconds=5
logging.level.org.springframework.web=debug

Remember to commit the changes in this directory so that the config service will detect the changes and load the file.

2.4. Run

Now let’s run our application and navigate to http://localhost:9411. We should be greeted with Zipkin’s homepage:

Great! Now we are ready to add some dependencies and configuration to our services that we want to trace.

3. Service Configuration

The setup for the resource servers is pretty much the same. In the following sections, we will detail how to set up the book-service. We will follow that up by explaining the modifications needed to apply these updates to the rating-service and gateway-service.

3.1. Setup

To begin sending spans to our Zipkin server we will add this dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>

For reference: you can find the latest version on Maven Central (spring-cloud-starter-zipkin).

3.2. Spring Config

We need to add some configuration so that book-service will use Eureka to find our Zipkin service. Open BookServiceApplication.java and add this code to the file:

@Autowired
private EurekaClient eurekaClient;
 
@Autowired
private SpanMetricReporter spanMetricReporter;
 
@Autowired
private ZipkinProperties zipkinProperties;
 
@Value("${spring.sleuth.web.skipPattern}")
private String skipPattern;

// ... the main method goes here

@Bean
public ZipkinSpanReporter makeZipkinSpanReporter() {
    return new ZipkinSpanReporter() {
        private HttpZipkinSpanReporter delegate;
        private String baseUrl;

        @Override
        public void report(Span span) {
            InstanceInfo instance = eurekaClient
              .getNextServerFromEureka("zipkin", false);
            if (baseUrl == null
              || !instance.getHomePageUrl().equals(baseUrl)) {
                baseUrl = instance.getHomePageUrl();
                delegate = new HttpZipkinSpanReporter(baseUrl,
                  zipkinProperties.getFlushInterval(),
                  zipkinProperties.getCompression().isEnabled(),
                  spanMetricReporter);
            }

            if (!span.name.matches(skipPattern)) {
                delegate.report(span);
            }
        }
    };
}

The above configuration registers a custom ZipkinSpanReporter that gets its URL from Eureka. This code also keeps track of the existing URL and only updates the HttpZipkinSpanReporter if the URL changes. This way, no matter where we deploy our Zipkin server, we will always be able to locate it without restarting the service.

We also import the default Zipkin properties that are loaded by Spring Boot and use them to manage our custom reporter.

3.3. Configuration

Now let’s add some configuration to our book-service.properties file in the config repository:

spring.sleuth.sampler.percentage=1.0
spring.sleuth.web.skipPattern=(^cleanup.*)

Zipkin works by sampling actions on a server. By setting the spring.sleuth.sampler.percentage to 1.0, we are setting the sampling rate to 100%. The skip pattern is simply a regex used for excluding spans whose name matches.

The skip pattern will block all spans whose name starts with the word ‘cleanup’ from being reported. This stops spans originating from the Spring Session code base.
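
As a quick sanity check, here’s a small sketch (using hypothetical span names) of what that pattern does and doesn’t match:

String skipPattern = "(^cleanup.*)";

// spans created by the Spring Session cleanup job are skipped...
System.out.println("cleanup.expired.sessions".matches(skipPattern)); // true

// ...while ordinary request spans are still reported
System.out.println("http:/book-service/books".matches(skipPattern)); // false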

3.4. Rating Service

Follow the exact same steps from the book-service section above, applying the changes to the equivalent files for rating-service.

3.5. Gateway Service

Follow the same steps as for book-service, but when adding the configuration to gateway.properties, add these properties instead:

spring.sleuth.sampler.percentage=1.0
spring.sleuth.web.skipPattern=(^cleanup.*|.+favicon.*)

This will configure the gateway service to not send spans about the favicon or spring session.

3.6. Run

If you haven’t done so already, start the config, discovery, gateway, book, rating, and zipkin services.

Navigate to http://localhost:8080/book-service/books.

Open a new tab and navigate to http://localhost:9411. Select book-service and press the ‘Find Traces’ button. You should see a trace appear in the search results. Click that trace to open it:

On the trace page, we can see the request broken down by service. The first two spans are created by the gateway and the last is created by the book-service. This shows us how much time the request spent processing on the book-service, 18.379 ms, and on the gateway, 87.961 ms.

4. Conclusion

We have seen how easy it is to integrate Zipkin into our cloud application.

This gives us some much-needed insight into how communication travels through our application. As our application grows in complexity, Zipkin can provide us with much-needed information on where requests are spending their time. This can help us determine where things are slowing down and indicate what areas of our application need improvement.

As always, you can find the source code over on GitHub.

Array Processing with Apache Commons Lang 3


1. Overview

The Apache Commons Lang 3 library provides support for manipulation of core classes of the Java APIs. This support includes methods for handling strings, numbers, dates, concurrency, object reflection and more.

In this quick tutorial, we’ll focus on array processing with the very useful ArrayUtils utility class.

2. Maven Dependency

In order to use the Commons Lang 3 library, just pull it from the central Maven repository using the following dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>

You can find the latest version of this library here.

3. ArrayUtils

The ArrayUtils class provides utility methods for working with arrays. These methods try to handle the input gracefully by preventing an exception from being thrown when a null value is passed in.

This section illustrates some methods defined in the ArrayUtils class. Note that all of these methods can work with any element type.

For convenience, their overloaded flavors are also defined for handling arrays containing primitive types.
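
For instance, here’s a minimal sketch of that null-safe behavior – rather than throwing a NullPointerException, the lookup simply reports that nothing was found:

int[] nullArray = null;
 
// contains() treats a null array as empty instead of throwing
assertFalse(ArrayUtils.contains(nullArray, 1));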

4. add and addAll

The add method copies a given array and inserts a given element at a given position in the new array. If the position is not specified, the new element is added at the end of the array.

The following code fragment inserts the number one at the first position of the oldArray array and verifies the result:

int[] oldArray = { 2, 3, 4, 5 };
int[] newArray = ArrayUtils.add(oldArray, 0, 1);
int[] expectedArray = { 1, 2, 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

If the position is not specified, the additional element is added at the end of oldArray:

int[] oldArray = { 2, 3, 4, 5 };
int[] newArray = ArrayUtils.add(oldArray, 1);
int[] expectedArray = { 2, 3, 4, 5, 1 };
 
assertArrayEquals(expectedArray, newArray);

The addAll method adds all elements at the end of a given array. The following fragment illustrates this method and confirms the result:

int[] oldArray = { 0, 1, 2 };
int[] newArray = ArrayUtils.addAll(oldArray, 3, 4, 5);
int[] expectedArray = { 0, 1, 2, 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

5. remove and removeAll

The remove method removes an element at a specified position from a given array. All subsequent elements are shifted to the left. Note that this is true for all removal operations.

This method returns a new array instead of making changes to the original one:

int[] oldArray = { 1, 2, 3, 4, 5 };
int[] newArray = ArrayUtils.remove(oldArray, 1);
int[] expectedArray = { 1, 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

The removeAll method removes all elements at specified positions from a given array:

int[] oldArray = { 1, 2, 3, 4, 5 };
int[] newArray = ArrayUtils.removeAll(oldArray, 1, 3);
int[] expectedArray = { 1, 3, 5 };
 
assertArrayEquals(expectedArray, newArray);

6. removeElement and removeElements

The removeElement method removes the first occurrence of a specified element from a given array.

Instead of throwing an exception, the removal operation is ignored if such an element does not exist in the given array:

int[] oldArray = { 1, 2, 3, 3, 4 };
int[] newArray = ArrayUtils.removeElement(oldArray, 3);
int[] expectedArray = { 1, 2, 3, 4 };
 
assertArrayEquals(expectedArray, newArray);

The removeElements method removes the first occurrences of specified elements from a given array.

Instead of throwing an exception, the removal operation is ignored if a specified element does not exist in the given array:

int[] oldArray = { 1, 2, 3, 3, 4 };
int[] newArray = ArrayUtils.removeElements(oldArray, 2, 3, 5);
int[] expectedArray = { 1, 3, 4 };
 
assertArrayEquals(expectedArray, newArray);

7. The removeAllOccurences API

The removeAllOccurences method removes all occurrences of the specified element from the given array.

Instead of throwing an exception, the removal operation is ignored if such an element does not exist in the given array:

int[] oldArray = { 1, 2, 2, 2, 3 };
int[] newArray = ArrayUtils.removeAllOccurences(oldArray, 2);
int[] expectedArray = { 1, 3 };
 
assertArrayEquals(expectedArray, newArray);

8. The contains API

The contains method checks if a value exists in a given array. Here is a code example, including verification of the result:

int[] array = { 1, 3, 5, 7, 9 };
boolean evenContained = ArrayUtils.contains(array, 2);
boolean oddContained = ArrayUtils.contains(array, 7);
 
assertEquals(false, evenContained);
assertEquals(true, oddContained);

9. The reverse API

The reverse method reverses the element order within a specified range of a given array. This method makes changes to the passed-in array instead of returning a new one.

Let’s have a look at a quick example:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.reverse(originalArray, 1, 4);
int[] expectedArray = { 1, 4, 3, 2, 5 };
 
assertArrayEquals(expectedArray, originalArray);

If a range is not specified, the order of all elements is reversed:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.reverse(originalArray);
int[] expectedArray = { 5, 4, 3, 2, 1 };
 
assertArrayEquals(expectedArray, originalArray);

10. The shift API

The shift method shifts a series of elements in a given array a number of positions. This method makes changes to the passed-in array instead of returning a new one.

The following code fragment shifts all elements between the elements at index 1 (inclusive) and index 4 (exclusive) one position to the right and confirms the result:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.shift(originalArray, 1, 4, 1);
int[] expectedArray = { 1, 4, 2, 3, 5 };
 
assertArrayEquals(expectedArray, originalArray);

If the range boundaries are not specified, all elements of the array are shifted:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.shift(originalArray, 1);
int[] expectedArray = { 5, 1, 2, 3, 4 };
 
assertArrayEquals(expectedArray, originalArray);

11. The subarray API

The subarray method creates a new array containing elements within a specified range of the given array. Here is an example, including verification of the result:

int[] oldArray = { 1, 2, 3, 4, 5 };
int[] newArray = ArrayUtils.subarray(oldArray, 2, 7);
int[] expectedArray = { 3, 4, 5 };
 
assertArrayEquals(expectedArray, newArray);

Notice that when the passed-in index is greater than the length of the array, it is demoted to the array length rather than having the method throw an exception. Similarly, if a negative index is passed in, it is promoted to zero.
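
Here’s a minimal sketch of that boundary behavior, assuming the same oldArray as above:

int[] oldArray = { 1, 2, 3, 4, 5 };
 
// an end index beyond the array length is demoted to the length,
// and a negative start index is promoted to zero
assertArrayEquals(new int[] { 4, 5 }, ArrayUtils.subarray(oldArray, 3, 99));
assertArrayEquals(new int[] { 1, 2 }, ArrayUtils.subarray(oldArray, -2, 2));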

12. The swap API

The swap method swaps a series of elements at specified positions in the given array.

The following code fragment swaps two groups of elements starting at the indexes 0 and 3, with each group containing two elements:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.swap(originalArray, 0, 3, 2);
int[] expectedArray = { 4, 5, 3, 1, 2 };
 
assertArrayEquals(expectedArray, originalArray);

If no length argument is passed in, only one element at each position is swapped:

int[] originalArray = { 1, 2, 3, 4, 5 };
ArrayUtils.swap(originalArray, 0, 3);
int[] expectedArray = { 4, 2, 3, 1, 5 };
assertArrayEquals(expectedArray, originalArray);

13. Conclusion

This tutorial introduces the core array processing utility in Apache Commons Lang 3 – ArrayUtils.

As always, the implementation of all examples and code snippets given above can be found in the GitHub project.

String Processing with Apache Commons Lang 3


1. Overview

The Apache Commons Lang 3 library provides support for manipulation of core classes of the Java APIs. This support includes methods for handling strings, numbers, dates, concurrency, object reflection and more.

In addition to providing a general introduction to the library, this tutorial demonstrates methods of two of the most commonly used classes, namely ArrayUtils and StringUtils. ArrayUtils is used for operations on arrays, while StringUtils is used for manipulation of String instances.

2. Maven Dependency

In order to use the Commons Lang 3 library, just pull it from the central Maven repository using the following dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>

You can find the latest version of this library here.

3. StringUtils

The StringUtils class provides methods for null-safe operations on strings.

Many methods of this class have corresponding ones defined in class java.lang.String, which are not null-safe. However, this section will instead focus on several methods that do not have equivalents in the String class.
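
As a quick illustration of that null safety, here’s a minimal sketch – where the equivalent java.lang.String call would throw a NullPointerException, StringUtils simply returns a sensible default:

// null-safe: these return null and false instead of throwing
assertNull(StringUtils.capitalize(null));
assertFalse(StringUtils.containsAny(null, 'a', 'b'));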

4. The containsAny Method

The containsAny method checks if a given String contains any character from a given set of characters. The set of characters can be passed in either as a String or as char varargs.

The following code fragment demonstrates the use of two overloaded flavors of this method with result verification:

String string = "baeldung.com";
boolean contained1 = StringUtils.containsAny(string, 'a', 'b', 'c');
boolean contained2 = StringUtils.containsAny(string, 'x', 'y', 'z');
boolean contained3 = StringUtils.containsAny(string, "abc");
boolean contained4 = StringUtils.containsAny(string, "xyz");
 
assertTrue(contained1);
assertFalse(contained2);
assertTrue(contained3);
assertFalse(contained4);

5. The containsIgnoreCase Method

The containsIgnoreCase method checks if a given String contains another String in a case-insensitive manner.

The following code fragment verifies that the String “baeldung.com” comprises “BAELDUNG” when upper and lower case is ignored:

String string = "baeldung.com";
boolean contained = StringUtils.containsIgnoreCase(string, "BAELDUNG");
 
assertTrue(contained);

6. The countMatches Method

The countMatches method counts how many times a character or substring appears in a given String.

The following is a demonstration of this method, confirming that ‘w’ appears four times and “com” appears twice in the String “welcome to www.baeldung.com”:

String string = "welcome to www.baeldung.com";
int charNum = StringUtils.countMatches(string, 'w');
int stringNum = StringUtils.countMatches(string, "com");
 
assertEquals(4, charNum);
assertEquals(2, stringNum);

7. Appending and Prepending Methods

The appendIfMissing and appendIfMissingIgnoreCase methods append a suffix to the end of a given String if it does not already end with any of the passed-in suffixes, in a case-sensitive and case-insensitive manner respectively.

Similarly, the prependIfMissing and prependIfMissingIgnoreCase methods prepend a prefix to the beginning of a given String if it does not start with any of the passed-in prefixes.

In the following example, the appendIfMissing and prependIfMissing methods are used to add a suffix and prefix to the String “baeldung.com” without these affixes being repeated:

String string = "baeldung.com";
String stringWithSuffix = StringUtils.appendIfMissing(string, ".com");
String stringWithPrefix = StringUtils.prependIfMissing(string, "www.");
 
assertEquals("baeldung.com", stringWithSuffix);
assertEquals("www.baeldung.com", stringWithPrefix);

8. Case Changing Methods

The String class already defines methods to convert all characters of a String to uppercase or lowercase. This subsection only illustrates the use of methods changing the case of a String in other ways, including swapCase, capitalize and uncapitalize.

The swapCase method swaps the case of a String, changing uppercase to lowercase and lowercase to uppercase:

String originalString = "baeldung.COM";
String swappedString = StringUtils.swapCase(originalString);
 
assertEquals("BAELDUNG.com", swappedString);

The capitalize method converts the first character of a given String to uppercase, leaving all remaining characters unchanged:

String originalString = "baeldung";
String capitalizedString = StringUtils.capitalize(originalString);
 
assertEquals("Baeldung", capitalizedString);

The uncapitalize method converts the first character of the given String to lowercase, leaving all remaining characters unchanged:

String originalString = "Baeldung";
String uncapitalizedString = StringUtils.uncapitalize(originalString);
 
assertEquals("baeldung", uncapitalizedString);

9. Reversing Methods

The StringUtils class defines two methods for reversing strings: reverse and reverseDelimited. The reverse method rearranges all characters of a String in the opposite order, while the reverseDelimited method reorders groups of characters, separated by a specified delimiter.

The following code fragment reverses the string “baeldung” and validates the outcome:

String originalString = "baeldung";
String reversedString = StringUtils.reverse(originalString);
 
assertEquals("gnudleab", reversedString);

With the reverseDelimited method, characters are reversed in groups instead of individually:

String originalString = "www.baeldung.com";
String reversedString = StringUtils.reverseDelimited(originalString, '.');
 
assertEquals("com.baeldung.www", reversedString);

10. The rotate() Method

The rotate() method circularly shifts characters of a String a number of positions. The code fragment below moves all characters of the String “baeldung” four positions to the right and verifies the result:

String originalString = "baeldung";
String rotatedString = StringUtils.rotate(originalString, 4);
 
assertEquals("dungbael", rotatedString);

11. The difference Method

The difference method compares two strings, returning the remainder of the second String, starting from the position where it is different from the first. The following code fragment compares two Strings: “Baeldung Tutorials” and “Baeldung Courses” in both directions and validates the outcome:

String tutorials = "Baeldung Tutorials";
String courses = "Baeldung Courses";
String diff1 = StringUtils.difference(tutorials, courses);
String diff2 = StringUtils.difference(courses, tutorials);
 
assertEquals("Courses", diff1);
assertEquals("Tutorials", diff2);

12. Conclusion

This tutorial introduces String processing in the Apache Commons Lang 3 and goes over the main APIs we can use out of the StringUtils library class.

As always, the implementation of all examples and code snippets given above can be found in the GitHub project.

Java Web Weekly, Issue 167

$
0
0

Lots of interesting writeups on Spring stuff going on this week.

Let’s jump right in…

1. Spring and Java

>> Spring Boot – Configure Log Level in Runtime Using Actuator Endpoint [codeleak.pl]

Starting with Spring Boot 1.5, we can configure log levels at runtime by performing simple POST requests.

>> 7 Tips and Tricks We Learned From the Java Community [takipi.com]

Community is a great source of knowledge 🙂

>> A use-case for Spring component scan [frankel.ch]

A quick refresher of how @ComponentScan works followed by a practical example.

>> Deep Dive into Java 9’s Stack-Walking API [sitepoint.com]

Java 9 includes a brand new Stack-Walking API that provides access to the execution stack. Hopefully, we’ll no longer need to hack our way through frames.

>> Java EE 8 – February recap [oracle.com]

A short overview of what is going on around Java EE 8.

>> Public Review of JSON-P Specification 1.1 is Now Open [infoq.com]

Very cool – the JSON-P JSR-374 1.1 spec is now public.

>> Getting Started with Thymeleaf 3 Text Templates [codeleak.pl]

And a quick-start guide to templating with Thymeleaf 3.

>> The best way to soft delete with Hibernate [vladmihalcea.com]

With a little bit of effort, soft deletes are achievable with Hibernate.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Easy RSA signatures and encryption with JWK [insaneprogramming.be]

Exchanging data in the REST age can be painful, but the JWK standard makes it much easier. This short tutorial shows how to use asymmetric RSA with JWK.

>> Writing Integration Tests with Docker Compose and JUnit [codecentric.de]

A quick and practical guide to wiring Docker containers up for integration testing.

>> Be aware that bcrypt has a maximum password length [mscharhag.com]

It’s good to remember that bcrypt has its limitations.

>> Continuous Delivery With Kubernetes, Docker, and CircleCI [alexecollins.com]

A presentation of a CD setup with Kubernetes, Docker, and CircleCI. Definitely a useful setup.

Also worth reading:

3. Musings

>> The Whiteboard Interview: Adulthood Deferred [daedtech.com]

The recent “whiteboard interviews” topic has sparked a lot of discussion about the effectiveness of contemporary tech interviews.

>> On false negatives and false positives [ontestautomation.com]

A short write-up on the importance of quickly identifying false negatives and false positives.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> I think I’ll go add value somewhere else [dilbert.com]

>> Self-deprecating joke to underscore my true power [dilbert.com]

>> Big fan of low self-esteem [dilbert.com]

5. Pick of the Week

>> Why we choose profit [m.signalvnoise.com]
