
Intro to Stream Processing with Spring Cloud Data Flow


The Master Class of "Learn Spring Security" is out:

>> CHECK OUT THE COURSE

1. Introduction

Spring Cloud Data Flow is a cloud native programming and operating model for composable data microservices.

With Spring Cloud Data Flow, developers can create and orchestrate data pipelines for common use cases such as data ingest, real-time analytics, and data import/export.

These data pipelines come in two flavors: streaming and batch.

In the first case, an unbounded amount of data is consumed or produced via messaging middleware, while in the second, short-lived tasks process a finite set of data and then terminate.

This article will focus on stream processing.

2. Architectural Overview

The key components of this type of architecture are Applications, the Data Flow Server, and the target runtime.

In addition to these key components, we usually also have a Data Flow Shell and a message broker within the architecture.

Let’s see all these components in more detail.

2.1. Applications

Typically, a streaming data pipeline includes consuming events from external systems, data processing, and polyglot persistence. These phases are commonly referred to as Source, Processor, and Sink in Spring Cloud terminology:

  • Source: the application that consumes events
  • Processor: consumes data from the Source, does some processing on it, and emits the processed data to the next application in the pipeline
  • Sink: either consumes from a Source or Processor and writes the data to the desired persistence layer

These applications can be packaged in two ways:

  • Spring Boot uber-jar hosted in a Maven repository, file, HTTP, or any other Spring resource implementation (this method will be used in this article)
  • Docker

Many ready-to-use source, processor, and sink applications for common use cases (e.g. jdbc, hdfs, http, router) are already provided by the Spring Cloud Data Flow team.

2.2. Runtime

Also, a runtime is needed for these applications to execute. The supported runtimes are:

  • Cloud Foundry
  • Apache YARN
  • Kubernetes
  • Apache Mesos
  • Local Server for development (which will be used in this article)

2.3. Data Flow Server

The component that is responsible for deploying applications to a runtime is the Data Flow Server. There is a Data Flow Server executable jar provided for each of the target runtimes.

The Data Flow Server is responsible for interpreting:

  • A stream DSL that describes the logical flow of data through multiple applications.
  • A deployment manifest that describes the mapping of applications onto the runtime.

2.4. Data Flow Shell

The Data Flow Shell is a client for the Data Flow Server. The shell allows us to run the DSL commands needed to interact with the server.

As an example, the DSL to describe the flow of data from an http source to a jdbc sink would be written as “http | jdbc”. These names in the DSL are registered with the Data Flow Server and map onto application artifacts that can be hosted in Maven or Docker repositories.

Spring also offers a graphical interface, named Flo, for creating and monitoring streaming data pipelines; however, its use is outside the scope of this article.

2.5. Message Broker

As we saw in the example in the previous section, we used the pipe symbol in the definition of the flow of data. The pipe symbol represents communication between two applications via messaging middleware.

This means that we need a message broker up and running in the target environment.

The two messaging middleware brokers that are supported are:

  • Apache Kafka
  • RabbitMQ

Now that we have an overview of the architectural components, it's time to build our first stream processing pipeline.

3. Install a Message Broker

As we have seen, the applications in the pipeline need a messaging middleware to communicate. For the purpose of this article, we’ll go with RabbitMQ.

For the full details of the installation, you can follow the instructions on the official site.

4. The Local Data Flow Server

To make short work of generating our applications, we'll use Spring Initializr; with its help, we can obtain our Spring Boot applications in a few minutes.

After reaching the web site, simply choose a Group, an Artifact name and select Local Data Flow Server from the dependencies search box.

Once this is done, click on the Generate Project button to start the download of the Maven artifact.


After the download is completed, unzip the project and import it as a Maven project into your IDE of choice.

Now we need to annotate the Spring Boot main class with the @EnableDataFlowServer annotation:

@EnableDataFlowServer
@SpringBootApplication
public class SpringDataFlowServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(
          SpringDataFlowServerApplication.class, args);
    }
}

That’s all. Our Local Data Flow Server is ready to be executed:

mvn spring-boot:run

The application will boot up on port 9393.

5. The Data Flow Shell

Again, go to the Spring Initializr and choose a Group, an Artifact name and select Data Flow Shell from the dependencies search box.

Once the project is downloaded and imported, we need to add the @EnableDataFlowShell annotation to the Spring Boot main class:

@EnableDataFlowShell
@SpringBootApplication
public class SpringDataFlowShellApplication {
    
    public static void main(String[] args) {
        SpringApplication.run(SpringDataFlowShellApplication.class, args);
    }
}

We can now run the shell:

mvn spring-boot:run

After the shell is running, we can type the help command in the prompt to see a complete list of commands that we can perform.

6. The Source Application

Similarly, on the Initializr, we'll now create a simple application and select the Stream Rabbit dependency.

We’ll then add the @EnableBinding(Source.class) annotation to the Spring Boot main class:

@EnableBinding(Source.class)
@SpringBootApplication
public class SpringDataFlowTimeSourceApplication {
    
    public static void main(String[] args) {
        SpringApplication.run(
          SpringDataFlowTimeSourceApplication.class, args);
    }
}

Now we need to define the source of the data that must be processed. This source could be any kind of potentially endless workload (internet-of-things sensor data, 24/7 event processing, online transaction data ingest).

In our sample application we produce one event (for simplicity a new timestamp) every 10 seconds with a Poller.

The @InboundChannelAdapter annotation sends a message to the source’s output channel, using the return value as the payload of the message:

@Bean
@InboundChannelAdapter(
  value = Source.OUTPUT, 
  poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "1")
)
public MessageSource<Long> timeMessageSource() {
    return () -> MessageBuilder.withPayload(new Date().getTime()).build();
}

Our data source is ready.
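Conceptually, the message source above is just a supplier of payloads that the poller invokes on a fixed delay. Stripped of the Spring Cloud Stream wiring, the same idea can be sketched in plain Java (the class and field names here are ours, not part of the project):

```java
import java.util.function.Supplier;

public class TimeSupplierSketch {

    // the poller calls this on a fixed delay and wraps the returned
    // value as the payload of the outgoing message
    static final Supplier<Long> TIME_SOURCE = System::currentTimeMillis;

    public static void main(String[] args) {
        System.out.println(TIME_SOURCE.get());
    }
}
```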

7. The Processor Application

Next, we'll create an application with a Stream Rabbit dependency.

We’ll then add the @EnableBinding(Processor.class) annotation to the Spring Boot main class:

@EnableBinding(Processor.class)
@SpringBootApplication
public class SpringDataFlowTimeProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(
          SpringDataFlowTimeProcessorApplication.class, args);
    }
}

Next, we need to define a method to process the data coming from the source application.

To define a transformer, we need to annotate this method with the @Transformer annotation:

@Transformer(inputChannel = Processor.INPUT,
  outputChannel = Processor.OUTPUT)
public Object transform(Long timestamp) {
    DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
    return dateFormat.format(timestamp);
}

It converts a timestamp from the 'input' channel into a formatted date, which is then sent to the 'output' channel.
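As a standalone sketch of this kind of conversion in plain Java, outside of Spring (the class name and the fixed UTC zone are ours, added so the output is deterministic):

```java
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimestampFormatSketch {

    // converts epoch milliseconds to a formatted date string,
    // mirroring what the transformer does with each message payload
    static String format(long timestamp) {
        DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        dateFormat.setTimeZone(TimeZone.getTimeZone("UTC")); // fixed zone for reproducibility
        return dateFormat.format(new Date(timestamp));
    }

    public static void main(String[] args) {
        System.out.println(format(0L)); // prints 1970/01/01 00:00:00
    }
}
```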

8. The Sink Application

The last application to create is the Sink application.

Again, go to the Spring Initializr and choose a Group, an Artifact name and select Stream Rabbit from the dependencies search box.

Then add the @EnableBinding(Sink.class) annotation to the Spring Boot main class:

@EnableBinding(Sink.class)
@SpringBootApplication
public class SpringDataFlowLoggingSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(
          SpringDataFlowLoggingSinkApplication.class, args);
    }
}

Now we need a method to intercept the messages coming from the processor application.

To do this we need to add the @StreamListener(Sink.INPUT) annotation to our method:

private static final Logger logger
  = LoggerFactory.getLogger(SpringDataFlowLoggingSinkApplication.class);

@StreamListener(Sink.INPUT)
public void loggerSink(String date) {
    logger.info("Received: " + date);
}

The method simply prints the timestamp, transformed into a formatted date, to a log file.

9. Register a Stream App

The Spring Cloud Data Flow Shell allows us to register a Stream App with the App Registry using the app register command.

We must provide a unique name, an application type, and a URI that can be resolved to the app artifact. For the type, specify "source", "processor", or "sink".

When providing a URI with the maven scheme, the format should conform to the following:

maven://<groupId>:<artifactId>[:<extension>[:<classifier>]]:<version>

To register the Source, Processor and Sink applications previously created, go to the Spring Cloud Data Flow Shell and issue the following commands from the prompt:

app register --name time-source --type source 
  --uri maven://org.baeldung.spring.cloud:spring-data-flow-time-source:jar:0.0.1-SNAPSHOT

app register --name time-processor --type processor 
  --uri maven://org.baeldung.spring.cloud:spring-data-flow-time-processor:jar:0.0.1-SNAPSHOT

app register --name logging-sink --type sink 
  --uri maven://org.baeldung.spring.cloud:spring-data-flow-logging-sink:jar:0.0.1-SNAPSHOT

10. Create and Deploy the Stream

To create a new stream definition, go to the Spring Cloud Data Flow Shell and execute the following shell command:

stream create --name time-to-log 
  --definition 'time-source | time-processor | logging-sink'

This defines a stream named time-to-log based on the DSL expression 'time-source | time-processor | logging-sink'.

Then to deploy the stream execute the following shell command:

stream deploy --name time-to-log

The Data Flow Server resolves time-source, time-processor, and logging-sink to Maven coordinates and uses those to launch the time-source, time-processor, and logging-sink applications of the stream.

If the stream is correctly deployed you’ll see in the Data Flow Server logs that the modules have been started and tied together:

2016-08-24 12:29:10.516  INFO 8096 --- [io-9393-exec-10] o.s.c.d.spi.local.LocalAppDeployer: deploying app time-to-log.logging-sink instance 0
   Logs will be in PATH_TO_LOG/spring-cloud-dataflow-1276836171391672089/time-to-log-1472034549734/time-to-log.logging-sink
2016-08-24 12:29:17.600  INFO 8096 --- [io-9393-exec-10] o.s.c.d.spi.local.LocalAppDeployer       : deploying app time-to-log.time-processor instance 0
   Logs will be in PATH_TO_LOG/spring-cloud-dataflow-1276836171391672089/time-to-log-1472034556862/time-to-log.time-processor
2016-08-24 12:29:23.280  INFO 8096 --- [io-9393-exec-10] o.s.c.d.spi.local.LocalAppDeployer       : deploying app time-to-log.time-source instance 0
   Logs will be in PATH_TO_LOG/spring-cloud-dataflow-1276836171391672089/time-to-log-1472034562861/time-to-log.time-source

11. Reviewing the Result

In this example, the source sends the current timestamp as a message every ten seconds, the processor formats it, and the log sink outputs the formatted timestamp using the logging framework.

The log files are located within the directory displayed in the Data Flow Server’s log output, as shown above. To see the result we can tail the log:

tail -f PATH_TO_LOG/spring-cloud-dataflow-1276836171391672089/time-to-log-1472034549734/time-to-log.logging-sink/stdout_0.log
2016-08-24 12:40:42.029  INFO 9488 --- [r.time-to-log-1] s.c.SpringDataFlowLoggingSinkApplication : Received: 2016/08/24 11:40:01
2016-08-24 12:40:52.035  INFO 9488 --- [r.time-to-log-1] s.c.SpringDataFlowLoggingSinkApplication : Received: 2016/08/24 11:40:11
2016-08-24 12:41:02.030  INFO 9488 --- [r.time-to-log-1] s.c.SpringDataFlowLoggingSinkApplication : Received: 2016/08/24 11:40:21

12. Conclusion

In this article we have seen how to build a data pipeline for stream processing through the use of Spring Cloud Data Flow.

We also saw the role of the Source, Processor, and Sink applications inside the stream and how to plug and tie these modules together inside a Data Flow Server through the use of the Data Flow Shell.

The example code can be found in the GitHub project.



How to Deploy a WAR File to Tomcat


1. Overview

Apache Tomcat is one of the most popular web servers in the Java community. It ships as a servlet container capable of serving Web ARchives with the WAR extension.

It provides a management dashboard from which you can deploy a new web application, or undeploy an existing one without having to restart the container. This is especially useful in production environments.

In this article, we will do a quick overview of Tomcat and then cover various approaches to deploying a WAR file.

2. Tomcat Structure

Before we begin we should familiarize ourselves with some terminology and environment variables.

2.1. Environment Variables

If you have worked with Tomcat before, these will be very familiar to you:

$CATALINA_HOME

This variable points to the directory where our server is installed.

$CATALINA_BASE

This variable points to the directory of a particular instance of Tomcat; you may have multiple instances installed. If this variable is not set explicitly, then it will be assigned the same value as $CATALINA_HOME.

Web applications are deployed under the $CATALINA_HOME\webapps directory.

2.2. Terminology

Document root. Refers to the top-level directory of a web application, where all the application resources are located, such as JSP files, HTML pages, Java classes, and images.

Context path. Refers to the location which is relative to the server’s address and represents the name of the web application.

For example, if our web application is put under the $CATALINA_HOME\webapps\myapp directory, it will be accessed by the URL http://localhost/myapp, and its context path will be /myapp.

WAR, short for Web ARchive, is the extension of a file that packages a web application directory hierarchy in ZIP format. Java web applications are usually packaged as WAR files for deployment. These files can be created on the command line or with an IDE like Eclipse.

After deploying our WAR file, Tomcat unpacks it and stores all project files in the webapps directory in a new directory named after the project.

3. Tomcat Setup

The Tomcat Apache web server is free software that can be downloaded from their website. It is required that there is a JDK available on the user’s machine and that the JAVA_HOME environment variable is set correctly.

3.1. Start Tomcat

We can start the Tomcat server by simply running the startup script located at $CATALINA_HOME\bin\startup. Every installation includes a .bat and a .sh version.

Choose the appropriate option depending on whether you are using a Windows or Unix based operating system.

3.2. Configure Roles

During the deployment phase, we will have a number of options, one of which is to use Tomcat’s management dashboard. To access this dashboard, we must have an admin user configured with the appropriate roles.

To have access to the dashboard the admin user needs the manager-gui role. Later, we will need to deploy a WAR file using Maven, for this, we need the manager-script role too.

Let's make these changes in $CATALINA_HOME\conf\tomcat-users.xml:

<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<user username="admin" password="password" roles="manager-gui, manager-script"/>

More details about the different Tomcat roles can be found by following this official link.

3.3. Set Directory Permissions

Finally, ensure that there is read/write permission on the Tomcat installation directory.

3.4. Test Installation

To test that Tomcat is set up properly, run the startup script (startup.bat/startup.sh). If no errors are displayed on the console, we can double-check by visiting http://localhost:8080.

If you see the Tomcat landing page, then the server was installed correctly.

3.5. Resolve Port Conflict

By default, Tomcat is set to listen to connections on port 8080. If there is another application that is already bound to this port, the startup console will let us know.

To change the port, we can edit the server configuration file server.xml, located at $CATALINA_HOME\conf\server.xml. By default, the connector configuration is as follows:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />

For instance, if we want to change our port to 8081, then we will have to change the connector’s port attribute like so:

<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />

Sometimes, the port we have chosen is not open by default. In this case, we will need to open the port with the appropriate commands on Unix or by creating the appropriate firewall rules on Windows; how this is done is beyond the scope of this article.

4. Deploy From Maven

If we want to use Maven for deploying our web archives, we must configure Tomcat as a server in Maven’s settings.xml file.

There are two locations where the settings.xml file may be found:

  • The Maven install: ${maven.home}/conf/settings.xml
  • A user’s install: ${user.home}/.m2/settings.xml

Once you have found it, add Tomcat as follows:

<server>
    <id>TomcatServer</id>
    <username>admin</username>
    <password>password</password>
</server>

We will now need to create a basic web application from Maven to test the deployment. Let’s navigate to where we would like to create the application.

Run this command on the console to create a new Java web application:

mvn archetype:generate -DgroupId=com.baeldung -DartifactId=tomcat-war-deployment 
  -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

This will create a complete web application in the directory tomcat-war-deployment which, if we deploy now and access via the browser, prints hello world!.

But before we do that we need to make one change to enable Maven deployment. So head over to the pom.xml and add this plugin:

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.2</version>
    <configuration>
        <url>http://localhost:8080/manager/text</url>
        <server>TomcatServer</server>
        <path>/myapp</path>
    </configuration>
</plugin>

Note that we are using the Tomcat 7 plugin because it works for both versions 7 and 8 without any special changes.

The configuration url is the URL to which we are sending our deployment; Tomcat will know what to do with it. The server element is the name of the server instance that Maven recognizes. Finally, the path element defines the context path of our deployment.

This means that if our deployment succeeds, we will access the web application by hitting http://localhost:8080/myapp.

Now we can run the following commands from Maven.

To deploy the web app:

mvn tomcat7:deploy

To undeploy it:

mvn tomcat7:undeploy

To redeploy after making changes:

mvn tomcat7:redeploy

5. Deploy With Cargo Plugin

Cargo is a versatile library that allows us to manipulate various types of application containers in a standard way.

5.1. Cargo Deployment Setup

In this section, we will look at how to use Cargo’s Maven plugin to deploy a WAR to Tomcat, in this case we will deploy it to a version 7 instance.

To get a firm grip of the whole process, we will start from scratch by creating a new Java web application from the command line:

mvn archetype:generate -DgroupId=com.baeldung -DartifactId=cargo-deploy 
  -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

This will create a complete Java web application in the cargo-deploy directory. If we build, deploy and load this application as is, it will print Hello World! in the browser.

Unlike the Tomcat7 Maven plugin, the Cargo Maven plugin requires a web.xml file to be present in the application.

Since our web application does not contain any servlets, our web.xml file will be very basic. So navigate to the WEB-INF folder of our newly created project and create a web.xml file with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
  xmlns="http://java.sun.com/xml/ns/javaee" 
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
      http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" id="WebApp_ID" version="3.0">

    <display-name>cargo-deploy</display-name>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
</web-app>

To enable Maven to recognize Cargo’s commands without typing the fully qualified name, we need to add the Cargo Maven plugin to a plugin group in Maven’s settings.xml. 

As an immediate child of the root <settings></settings> element, add this:

<pluginGroups>
    <pluginGroup>org.codehaus.cargo</pluginGroup>
</pluginGroups>

5.2. Local Deploy

In this subsection we will edit our pom.xml to suit our new deployment requirements.

Add the plugin as follows:

<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.cargo</groupId>
            <artifactId>cargo-maven2-plugin</artifactId>
            <version>1.5.0</version>
            <configuration>
                <container>
                    <containerId>tomcat7x</containerId>
                    <type>installed</type>
                    <home>Insert absolute path to tomcat 7 installation</home>
                </container>
                <configuration>
                    <type>existing</type>
                    <home>Insert absolute path to tomcat 7 installation</home>
                </configuration>
            </configuration>
       </plugin>
    </plugins>
</build>

The latest version, at the time of writing, is 1.5.0, however the latest version can always be found here.

Notice that we explicitly define the packaging as a WAR; without this, our build will fail. In the plugins section, we then add the cargo-maven2-plugin. Additionally, we add a configuration section where we tell Maven that we are using a Tomcat container and an existing installation.

By setting the container type to installed, we tell Maven that we have an instance installed on the machine, and we provide the absolute path to this installation.

By setting the configuration type to existing, we tell Cargo that we have an existing setup that we are using and that no further configuration is required.

The alternative would be to tell Cargo to download and set up the version specified by providing a URL. However, our focus is on WAR deployment.

It’s worth noting that whether we are using Maven 2.x or Maven 3.x, the cargo maven2 plugin works for both.

We can now install our application by executing:

mvn install

and deploy it by running:

mvn cargo:deploy

If all goes well we should be able to run our web application by loading http://localhost:8080/cargo-deploy.

5.3. Remote Deploy

To do a remote deploy we only need to change the configuration section of our pom.xml. Remote deploy means that we do not have a local installation of Tomcat but have access to the manager dashboard on a remote server.

So let’s change the pom.xml so that the configuration section looks like this:

<configuration>
    <container>
        <containerId>tomcat8x</containerId>
        <type>remote</type>
    </container>
    <configuration>
        <type>runtime</type>
        <properties>
            <cargo.remote.username>admin</cargo.remote.username>
            <cargo.remote.password>admin</cargo.remote.password>
            <cargo.tomcat.manager.url>http://localhost:8080/manager/text
              </cargo.tomcat.manager.url>
        </properties>
    </configuration>
</configuration>

This time, we change the container type from installed to remote and the configuration type from existing to runtime. Finally, we add authentication and remote URL properties to the configuration.

Ensure that the roles and users are already present in $CATALINA_HOME/conf/tomcat-users.xml just as before.

If you are editing the same project for remote deployment, first undeploy the existing WAR:

mvn cargo:undeploy

clean the project:

mvn clean

install it:

mvn install

finally, deploy it:

mvn cargo:deploy

That’s it.

6. Deploy From Eclipse

Eclipse allows us to embed servers in order to add web project deployment in the normal workflow without navigating away from the IDE.

6.1. Embed Tomcat In Eclipse

We can embed an installation into Eclipse by selecting the Window menu item from the menu bar and then Preferences from the drop-down.

We will find a tree of preference items on the left panel of the window that appears. We can then navigate to Server -> Runtime Environments, or just type servers in the search bar.

We then select the installation directory, if not already open for us, and choose the Tomcat version we downloaded.

On the right-hand side of the panel, a configuration page will appear, where we select the Enable option to activate this server version and browse to the installation folder.


We apply the changes, and the next time we open the servers view from Eclipse's Window -> Show View submenu, the newly configured server will be present, and we can start, stop, and deploy applications to it.

6.2. Deploy Web Application In Embedded Tomcat

To deploy a web application to Tomcat, it must exist in our work space.

Open the servers view from Window -> Show View and look for servers. Once open, we can right-click on the server we configured and select Add Deployment from the context menu that appears.

 


From the New Deployment dialog box that appears, open the project drop down and select the web project.

There is a Deploy Type section beneath the Project combo box. When we select Exploded Archive (development mode), our changes in the application will be synced live without having to redeploy; this is the best option during development, as it is very efficient.


Selecting Packaged Archive (production mode) will require us to redeploy every time we make changes before we can see them in the browser. This is best only for production, but still, Eclipse makes it equally easy.

6.3. Deploy Web Application In External Location

We usually choose to deploy a WAR through Eclipse to make debugging easier. There may come a time when we want it deployed to a location other than those used by Eclipse’s embedded servers. The most common instance is where our production server is online and we want to update the web application.

We can do this by deploying in production mode, noting the Deploy Location in the New Deployment dialog box, and picking the WAR from there.

During deployment, instead of selecting an embedded server, we can select the <Externally Launched> option from the servers view alongside the list of embedded servers. We navigate to the webapps directory of an external Tomcat installation.

7. Deploy From IntelliJ IDEA

To deploy a web application to Tomcat from IntelliJ IDEA, Tomcat must already be downloaded and installed.

7.1. Local Configuration

Open the Run menu and click the Edit Configurations options.

 


In the panel on the left, search for Tomcat Server; if it is not there, click the + sign in the menu, search for Tomcat, and select Local. In the Name field, put Tomcat 7/8 (depending on your version).

 


Click the Configure… button, and in the Tomcat Home field, navigate to the home location of your installation and select it.

 


Optionally, set the Startup page to http://localhost:8080/ and the HTTP port to 8080; change the port as appropriate.

Go to the Deployment tab, click on the + symbol, select the artifact you want to add to the server, and click OK.

 


 

7.2. Remote Configuration

Follow the same instructions as for the local Tomcat configuration, but in the server tab, you must enter the remote location of the installation.

8. Deploy By Copying Archive

We have seen how to export a WAR from Eclipse. One of the things we can do is to deploy it by simply dropping it into the $CATALINA_HOME\webapps directory of any Tomcat instance. If the instance is running, the deployment will start instantly as Tomcat unpacks the archive and configures its context path.

If the instance is not running, then the server will deploy the project the next time it is started.
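In shell terms, the whole deployment is a single copy (the paths here are placeholders for your own installation and archive):

```shell
# hot-deploy by dropping the archive into the webapps directory;
# a running Tomcat picks it up and unpacks it automatically
cp myapp.war "$CATALINA_HOME/webapps/"
```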

9. Deploy From Tomcat Manager

Assuming we already have our WAR file to hand, let's deploy it using the management dashboard. You can access the manager dashboard by visiting http://localhost:8080/manager.

The dashboard has five different sections: Manager, Applications, Deploy, Diagnostics and Server Information. If you go to the Deploy section you will find two subsections.

9.1. Deploy Directory or WAR File Located on Server

If the WAR file is located on the server where the Tomcat instance is running, then we can fill in the required Context Path field, preceded by a forward slash "/".

Let’s say we would like our web application to be accessed from the browser with the URL http://localhost:8080/myapp, then our context path field will have /myapp.

We skip the XML Configuration file URL field and head over to the WAR or Directory URL field. Here we enter the absolute URL of the Web ARchive file as it appears on our server. Let's say our file's location is C:/apps/myapp.war; then we enter this location. Don't forget the WAR extension.

After that, we can click the Deploy button. The page will reload, and we should see the message:

OK - Deployed application at context path /myapp

at the top of the page.

Additionally, our application should also appear in the Applications section of the page.
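The dashboard's forms are backed by Tomcat's text-based manager interface, so the same deployment can be scripted. A sketch with curl, assuming a running Tomcat, the admin user from section 3.2 (which needs the manager-script role for this), and a WAR path of our own:

```shell
# upload and deploy a WAR through the text manager interface;
# curl -T issues a PUT with the archive as the request body
curl -u admin:password -T /path/to/myapp.war \
  "http://localhost:8080/manager/text/deploy?path=/myapp&update=true"
```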

9.2. WAR File to Deploy

Simply click the Choose File button, navigate to the location of the WAR file, select it, and then click the Deploy button.

In both situations, if all goes well, the Tomcat console will inform us that the deployment has been successful with a message like the following:

INFO: Deployment of web application archive \path\to\deployed_war has finished in 4,833 ms

10. Conclusion

In this writeup, we focused on deploying a WAR into a Tomcat server.


Java Web Weekly, Issue 140


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Building Spring Cloud Microservices That Strangle Legacy Systems [kennybastani.com]

I still have a lot to go through here, but this is definitely a fantastic practical application of the strangler application pattern that I personally enjoy so much.

This pattern offers such a clear, sensible counter-balance to the unfortunate idea of the Big Rewrite, so this writeup is especially interesting.

>> Check your Spring Security SAML config – XXE security issue [spring.io]

A vulnerability found in sample code, clearly and transparently communicated to the community in case there are implementations out there that copy-pasted the sample.

This is why I like the Spring ecosystem.

>> Replaying Events in An Axon Framework Based Application [geekabyte.blogspot.com]

Replaying the event stream in an Event Sourcing architecture is one of those things that takes a while to sink in.

But once you realize that you can actually do that, yeah – a whole lot of options open up.

>> Using jOOλ to Combine Several Java 8 Collectors into One [jooq.org]

A quick writeup analyzing a code example from the community – and then using jOOλ to make it better (and far cleaner).

I definitely like these kinds of in-depth and to the point looks at code that can be improved (especially when they happen to my code). Lots to learn from here.

>> JUnit Cheat Sheet [zeroturnaround.com]

A practical and no-fluff writeup covering and distilling the main take-aways in JUnit 5.

>> Custom test slice with Spring Boot 1.4 [spring.io]

Testing with Spring and Boot is becoming better and better.

One good example is the segmentation of the Spring context that’s bootstrapped by the test – I always used to do this manually. This is better.

>> Spring Security OAuth2 – Client Authentication Issue [spring.io]

Very interesting and rare scenario of an OAuth2 vulnerability in Spring Security – where a user has the same username as the clientId of the client. Quick and to the point writeup here.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Stop Cross-Site Timing Attacks with SameSite cookies [igvita.com]

A very promising new draft, looking to update RFC6265 (the main HTTP State Management RFC) with a new type of cookie.

If accepted – this would go a long, long way towards mitigating a slew of CSRF attacks and vulnerabilities.

Very exciting proposal, and a great explanation of why we need it in this article.

>> DDD Tutorial – Modelling “Create Organization” [sapiensworks.com]

>> DDD Tutorial – Modelling Query Cases [sapiensworks.com]

Doing DDD (and incidentally Event Sourcing as well) takes a lot of digging, understanding of the system and ultimately modeling work.

These two installments of the series are a good step in that direction.

>> The Fixing-JSON Conversation [tbray.org]

Definitely interesting points on improving JSON (yeah, you read that right).

>> A Proposed Recipe for Designing, Building and Testing Microservices [specto.io]

Lots of good nuggets here if you’re doing microservices (well).

>> How Code Review Saves You Time [daedtech.com]

I think that by now we’re all on the same page with the fact that code reviews are very beneficial. Of course, that doesn’t change the fact that it’s not an easy practice to pick up, especially inside an organization that doesn’t have a culture that’s especially open to new ideas.

In my experience, metrics help a lot here – when a team has a non-trivial jump in some key metrics, the adoption stops being something that needs to be “accepted” and becomes a decision that’s internal to the team.

>> The Dropbox hack is real [troyhunt.com]

Either these big-time breaches are happening more and more these days, or I’m just noticing them more.

Either way, they happen a lot – so it’s nice to read about a company that actually stores the credentials data intelligently, so that when it does happen, it’s not a huge deal.

Also worth reading:

3. Musings

>> Some thoughts on the future of test automation [ontestautomation.com]

A good understanding of the testing ecosystem is oh-so valuable, not only when doing actual coding (half of my own coding work is testing), but generally, when releasing work into the hands of clients.

This writeup definitely has some good take-aways.

>> Why I Introduced Scala In Our Project [techblog.bozho.net]

I am personally a lot more partial to Clojure than Scala; but, similar to the topic of this article – I’ve been doing some Scala work recently and have come to appreciate some of the nicer aspects of the language.

One thing that’s definitely important to glean from this one is – if you don’t have Scala experience but want to try it out, introduce it on a small, side-module, not in the main codebase of your project.

>> My Realizations about Software Consulting [daedtech.com]

Software consulting is changing, no two ways about it. And, like most other things, really moving forward requires a shift in your mindset rather than an increase in your efficiency or skill. Very interesting read.

>> Innovation as a Fringe Activity [lemire.me]

Wall of text? Sure. Good? Yeah.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> The problem is in the part of your brain that handles intelligence [dilbert.com]

>> This is a magic button … [dilbert.com]

>> My faults are suspiciously alphabetical [dilbert.com]

5. Pick of the Week

This book has been a long time coming – Vlad has been working on it for over a year.

It’s finally out and will definitely be the reference book for learning JPA and Hibernate for a number of years to come.

So, if you’re doing Hibernate work, definitely pick this one up, not only to read, but to come back to as reference material as you’re actually doing work:

>> High Performance Java Persistence [leanpub.com]

And the announcement post:

>> High-Performance Java Persistence – Part Three [vladmihalcea.com]


A Guide to Spring Cloud Netflix – Hystrix


1. Overview

This tutorial will cover Spring Cloud Netflix Hystrix, a fault tolerance library. We’ll use the library to implement the Circuit Breaker enterprise pattern, which describes a strategy for preventing failures from cascading across different levels of the service layer.

In simpler terms: how can one service continue functioning when the external services it calls are failing?

The principle is analogous to electronics: Hystrix watches methods for failing calls to related services. If it detects such a failing method, it opens the circuit, which means it forwards the call to a fallback method.

The library will tolerate failures up to a threshold. Beyond that, it leaves the circuit open, which means it forwards all subsequent calls to the fallback method to prevent future failures. This creates a time buffer for the related service to recover from its failing state.
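The core idea can be sketched in a few lines of plain Java. This is a deliberately minimal illustration of the pattern, not Hystrix’s actual implementation (a real breaker also adds a recovery timeout and a half-open state so the circuit can close again):

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch: after 'threshold' consecutive failures
// the circuit opens and all subsequent calls go straight to the fallback.
public class SimpleCircuitBreaker<T> {
    private final int threshold;
    private int consecutiveFailures = 0;

    public SimpleCircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public T call(Supplier<T> primary, Supplier<T> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get(); // circuit open: short-circuit to fallback
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0; // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;   // a failure moves us toward the threshold
            return fallback.get();
        }
    }
}
```

Once the threshold is reached, even a healthy primary call is short-circuited – which is exactly the time buffer described above.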

2. REST Producer

To create a scenario which demonstrates the Circuit Breaker pattern, we first need a ‘related’ service. We’ll name it ‘REST Producer’ because it provides data for the Hystrix-enabled ‘REST Consumer’, which we’ll create in the next step.

If you’re already familiar with the implementation of a REST service, or if you already have one ‘ready-to-go’, you can skip this section.

Otherwise let’s create a new Maven project using the spring-boot-starter-web dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

The project itself is intentionally kept simple. It will consist of a controller interface with one @RequestMapping annotated GET method that simply returns a String, a @RestController implementing this interface, and a @SpringBootApplication.

We’ll begin with the interface:

public interface GreetingController {
    @RequestMapping("/greeting/{username}")
    String greeting(@PathVariable("username") String username);
}

And the implementation:

@RestController
public class GreetingControllerImpl implements GreetingController {
    @Override
    public String greeting(@PathVariable("username") String username) {
        return String.format("Hello %s!\n", username);
    }
}

Next we’ll write down the main application class:

@SpringBootApplication
public class RestProducerApplication {
    public static void main(String[] args) {
        SpringApplication.run(RestProducerApplication.class, args);
    }
}

To complete this section, the only thing left to do is to configure the port on which the application will listen. We won’t use the default port 8080, because it should remain reserved for the application described in the next step.

Furthermore, we’re defining an application name to be able to look up our ‘REST Producer’ from the client application that we’ll introduce later. Let’s create an application.properties with the following content:

server.port=9090
spring.application.name=rest-producer

Now we’re able to test our ‘REST Producer’ using curl:

$> curl http://localhost:9090/greeting/Cid
Hello Cid!

3. REST Consumer with Hystrix

For our demonstration scenario, we’ll be implementing a web-application, which is consuming the REST service from the previous step using RestTemplate and Hystrix. For the sake of simplicity, we’ll call it the ‘REST Consumer’.

Consequently we create a new Maven project with spring-cloud-starter-hystrix, spring-boot-starter-web and spring-boot-starter-thymeleaf as dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-hystrix</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

For the Circuit Breaker to work, Hystrix will scan @Component or @Service annotated classes for @HystrixCommand annotated methods, implement a proxy for them, and monitor their calls.

We’re going to create a @Service class first, which will be autowired into a @Controller. Since we’re building a web application using Thymeleaf, we also need an HTML template to serve as the view.

This will be our injectable @Service implementing a @HystrixCommand with an associated fallback method. The fallback has to have the same signature as the ‘original’ method:

@Service
public class GreetingService {
    @HystrixCommand(fallbackMethod = "defaultGreeting")
    public String getGreeting(String username) {
        return new RestTemplate()
          .getForObject("http://localhost:9090/greeting/{username}", 
          String.class, username);
    }

    private String defaultGreeting(String username) {
        return "Hello User!";
    }
}

RestConsumerApplication will be our main application class. The @EnableCircuitBreaker annotation will scan the classpath for any compatible Circuit Breaker implementation.

To use Hystrix explicitly, we have to annotate this class with @EnableHystrix:

@SpringBootApplication
@EnableCircuitBreaker
public class RestConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(RestConsumerApplication.class, args);
    }
}

We’ll set up the controller using our GreetingService:

@Controller
public class GreetingController {
    @Autowired
    private GreetingService greetingService;

    @RequestMapping("/get-greeting/{username}")
    public String getGreeting(Model model, @PathVariable("username") String username) {
        model.addAttribute("greeting", greetingService.getGreeting(username));
        return "greeting-view";
    }
}

And here’s the HTML template:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
    <head>
        <title>Greetings from Hystrix</title>
    </head>
    <body>
        <h2 th:text="${greeting}"/>
    </body>
</html>

To ensure that the application is listening at a defined port, we put the following in an application.properties file:

server.port=8080

To see a Hystrix circuit breaker in action, we start our ‘REST Consumer’ and point our browser to http://localhost:8080/get-greeting/Cid. Under normal circumstances, the following will be shown:

Hello Cid!

To simulate a failure of our ‘REST Producer’, we’ll simply stop it. After refreshing the browser, we should see a generic message returned from the fallback method in our @Service:

Hello User!

4. REST Consumer with Hystrix and Feign

Now we’re going to modify the project from the previous step to use Spring Netflix Feign as a declarative REST client, instead of Spring RestTemplate.

The advantage is that we’ll later be able to easily refactor our Feign client interface to use Spring Netflix Eureka for service discovery.

To start the new project, we’ll make a copy of our ‘REST Consumer’ and add the ‘REST Producer’ and spring-cloud-starter-feign as dependencies in the pom.xml:

<dependency>
    <groupId>com.baeldung.spring.cloud</groupId>
    <artifactId>spring-cloud-hystrix-rest-producer</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-feign</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>

Now we’re able to create a Feign client that extends our GreetingController. We’ll implement the Hystrix fallback as a static inner class annotated with @Component.

Alternatively, we could define a @Bean annotated method returning an instance of this fallback class.

The name property of the @FeignClient is mandatory. It is used to look up the application either by service discovery via a Eureka client, or by URL if that property is given. For more on using Spring Netflix Eureka for service discovery, have a look at this article:

@FeignClient(
  name = "rest-producer",
  url = "http://localhost:9090", 
  fallback = GreetingClient.GreetingClientFallback.class
)
public interface GreetingClient extends GreetingController {
    
    @Component
    public static class GreetingClientFallback implements GreetingController {
        @Override
        public String greeting(@PathVariable("username") String username) {
            return "Hello User!";
        }
    }
}

In RestConsumerFeignApplication, we’ll add an additional annotation, @EnableFeignClients, to the main application class to enable Feign integration:

@SpringBootApplication
@EnableCircuitBreaker
@EnableFeignClients
public class RestConsumerFeignApplication {
    
    public static void main(String[] args) {
        SpringApplication.run(RestConsumerFeignApplication.class, args);
    }
}

We’re going to modify the controller to use an auto-wired Feign Client, rather than the previously injected @Service, to retrieve our greeting:

@Controller
public class GreetingController {
    @Autowired
    private GreetingClient greetingClient;

    @RequestMapping("/get-greeting/{username}")
    public String getGreeting(Model model, @PathVariable("username") String username) {
        model.addAttribute("greeting", greetingClient.greeting(username));
        return "greeting-view";
    }
}

To distinguish this example from the previous one, we’ll alter the application listening port in the application.properties:

server.port=8082

Finally, we’ll test this Feign-enabled ‘REST Consumer’ like the one from the previous section. The expected result should be the same.

5. Using Scopes

Normally a @HystrixCommand annotated method is executed in a thread pool context. But sometimes it needs to run in a local scope, for example a @SessionScope or a @RequestScope. This can be done by passing arguments to the command annotation:

@HystrixCommand(fallbackMethod = "getSomeDefault", commandProperties = {
  @HystrixProperty(name="execution.isolation.strategy", value="SEMAPHORE")
})
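To see what SEMAPHORE isolation means in practice, here is a plain-Java sketch of the idea (not Hystrix’s internals): the command runs on the caller’s own thread, which is why request and session scope stay available, and a semaphore merely caps how many callers may execute it concurrently:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Sketch of semaphore isolation: no thread pool hand-off, the command
// executes on the calling thread; a semaphore bounds concurrency and
// saturated calls are rejected straight to the fallback.
public class SemaphoreIsolation {
    private final Semaphore permits;

    public SemaphoreIsolation(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    public <T> T execute(Supplier<T> command, Supplier<T> fallback) {
        if (!permits.tryAcquire()) {
            return fallback.get(); // saturated: reject immediately
        }
        try {
            return command.get();  // runs on the calling thread
        } finally {
            permits.release();
        }
    }
}
```

The trade-off versus thread-pool isolation: there is no timeout protection for the caller’s thread, but also no context loss and no hand-off overhead.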

6. The Hystrix Dashboard

An optional nice feature of Hystrix is the ability to monitor its status on a dashboard.

To enable it, we’ll put spring-cloud-starter-hystrix-dashboard and spring-boot-starter-actuator in the pom.xml of our ‘REST Consumer’:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-hystrix-dashboard</artifactId>
    <version>1.1.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>1.4.0.RELEASE</version>
</dependency>

The former needs to be enabled by annotating a @Configuration with @EnableHystrixDashboard, and the latter automatically enables the required metrics within our web application.

After restarting the application, we’ll point a browser at http://localhost:8080/hystrix, input the metrics URL of a ‘hystrix.stream’, and begin monitoring. Finally, we should see something like this:

[Picture of an example Hystrix Dashboard]

Monitoring a single ‘hystrix.stream’ is fine, but if we have to watch multiple Hystrix-enabled applications, it becomes inconvenient. For this purpose, Spring Cloud provides a tool called Turbine, which can aggregate streams to present in one Hystrix Dashboard.

Configuring Turbine is beyond the scope of this write-up, but the possibility should be mentioned here. It’s also possible to collect these streams via messaging, using Turbine Stream.

7. Conclusion

As we’ve seen so far, we’re now able to implement the Circuit Breaker pattern using Spring Netflix Hystrix together with either Spring RestTemplate or Spring Netflix Feign.

This means that we’re able to consume services with included fallback using ‘static’ or rather ‘default’ data and we’re able to monitor the usage of this data.

As usual, you’ll find the sources on GitHub.


Java Web Weekly 141: Guide to Java 9, SpringOne Talks, ElasticSearch Tuning and Troubleshooting on Hard-Mode


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> The Ultimate Guide To Java 9 [codefx.org]

A monster-post all about what’s coming in Java 9.

>> Scratching a JUnit Itch [dannorth.net]

An interesting read on bringing in some concepts from Go into JUnit.

While I haven’t used Go before, I can definitely see how the approach makes sense in a lot of scenarios.

>> How to map encrypted database columns with Hibernate’s @ColumnTransformer annotation [thoughts-on-java.org]

A code-focused explanation of how to store sensitive data in the DB by using an encrypted column.

>> Page Object pattern example [lkrnac.net]

A quick and practical intro to the Page Object pattern; this is very close to the way I now write all my end-to-end tests.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> DDD Tutorial – Modelling Domain Relationship and Domain Service [sapiensworks.com]

The next step in the DDD journey I’ve been documenting here over the last few weeks.

This time focused on a more complex use case to model.

>> Finding the link between heart rate and running pace with Spark ML – Fitting a linear regression model [vanwilgenburg.com]

A practical and cool writeup here – trying to find the relation between heart rate and pace of running.

I have a soft spot for this kind of analysis, after doing a couple of years of Mahout work and digging through similar datasets.

>> Why is troubleshooting so hard? [plumbr.eu]

We earn our troubleshooting chops on hard-mode.

That’s cool, because it turns out that – all of that stress of looking at things failing in production – makes us better engineers.

This quick writeup is a very high-level look at our options, with the goal of better navigating this very nuanced and important landscape.

>> 9 Tips on ElasticSearch Configuration for High Performance [loggly.com]

Some very useful ElasticSearch practical tips from the Loggly team.

Getting some of these right can make or break your solution – as I personally learned the hard way over the last two and a half years of working with ES.

Also worth reading:

3. Musings

>> How to handle unfinished User Stories in Scrum [codecentric.de]

A good answer to a very common question if you’re doing any kind of Scrum – especially in the beginning.

>> Static Analysis for Small Business [daedtech.com]

Static Analysis is one of those tools that, once picked up by a team, won’t be abandoned any time soon. A small team is no different, and this quick writeup explores why it does make sense here as well.

>> Are there too many people? [lemire.me]

I realize this is just scratching the surface and understanding how populations shrink and grow is a deep topic.

That being said, this was a good primer.

>> The Billing Maturity Model [daedtech.com]

If you’re doing consulting, this piece is going to help you out, if only to give you the context to step away from hourly billing.

And if you’re on this path, have a look at the “Hourly Billing is Nuts” book as well.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> This took an ugly turn [dilbert.com]

>> Banning telecommuting [dilbert.com]

>> Do you mind if I check my email? [dilbert.com]

5. Pick of the Week

>> The myth of low-hanging fruit [m.signalvnoise.com]


Java Web Weekly, Issue 142


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Java and Spring

>> Redmonk Analyzes Java Framework Popularity [infoq.com]

Quick and very interesting data about the frameworks we use in the Java ecosystem.

>> Oracle Shares Their Strategy for Java EE with the JCP Executive Committee [infoq.com]

Some further (but minimal) insights into what’s going on with Java EE.

>> Interfacing with Messy Humans [javaspecialists.eu]

Humans are indeed messy and use antiquated systems which of course need to be mapped – sometimes successfully – to concepts in the core libraries of programming languages.

>> Adventures in SEO with Vaadin [frankel.ch]

Java and SEO aren’t two words you’d expect to see together – it’s both interesting and encouraging that some frameworks are actually doing good work in this area.

>> How to store timestamps in UTC using the new hibernate.jdbc.time_zone configuration property [relation.to]

A cool new solution to an old problem.

>> Hibernate Best Practices [thoughts-on-java.org]

A monster of a post that’s certainly going to be a good reference when doing Hibernate work – and a great step towards a correct and idiomatic use of the platform.

>> The Fall of Eclipse [movingfulcrum.com]

Sad, but definitely true – Eclipse has lost, and deservedly so. I’m still a user, but I’m planning to jump ship as well.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Basics of Web Application Security: Protect User Sessions [martinfowler.com]

Another top-notch installment in the running series on web application security – this one focused on the core aspect of sessions.

Also worth reading:

3. Musings

>> The Ergonomics of Type Checking [silvrback.com]

A quick bit of writing that cuts to the core of using a type checked language (or not). Definitely read this one.

>> How to Get Developers to Adopt a Coding Standard [daedtech.com]

The first time I tried to bring a coding standard into my project – it was an absolute disaster. You can learn these lessons the hard way, or you can do some reading and sidestep most of that if you’re so inclined.

>> The Biggest Mistake Static Analysis Prevents [daedtech.com]

Static analysis is always easier to understand with horror stories.

If you lived through those stories yourself, it’s understandably not as fun, so being able to glean insights out of the stories of other developers can shave years off from the natural learning process.

>> Are You an Engineer or a Developer? [swizec.com]

A fun and grounded look at what the engineer vs developer difference really is.

>> Publish every day [swizec.com]

Yeah.

>> Starting high school in 2016 [lemire.me]

The idea that schools are stuck in the last century is probably not new to anyone, but it’s nevertheless quite unfortunate and also interesting to read about from first-hand experience.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> I guess now I’m doing your job too? [dilbert.com]

>> I’ve added my name to your patent application [dilbert.com]

>> Maybe next time we should introduce ourselves [dilbert.com]

5. Pick of the Week

>> The power of positive intent [m.signalvnoise.com]


Java Web Weekly, Issue 143


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Proposed schedule change for JDK 9 [mail.openjdk.java.net]

This had to go first.

Not entirely unexpected, Java 9 got delayed by 4 months, which places GA in July of next year.

>> Private methods in interfaces in Java 9 [joda.org]

Java 9 is adding private methods on interfaces. Yes – you read that right 🙂

>> Java EE 8 Delayed Until End of 2017, Oracle Announces at JavaOne [infoq.com]

More clarity around the schedule and timeline of Java EE 8.

>> Feedback needed – Which constraints should be added? [beanvalidation.org]

A very quick survey asking for feedback on the exact constraints that should become part of the new bean validation standard.

Definitely worth voting on if you’re using the standard (and you probably are).

>> How to simplify JPA and Hibernate integration testing using Java 8 lambdas [relation.to]

Cool to see some Java 8 goodness in Hibernate.

>> JHipster 3.7 Tutorial Pt 3: Secure Service Communication Using OAuth2 [stytex.de]

A quick exploration of the new UAA functionality in JHipster – something I’m planning to make time to experiment with as well.

>> Complete Guide: Inheritance strategies with JPA and Hibernate [thoughts-on-java.org]

A good handle on inheritance strategies is a central aspect of understanding and working with JPA and Hibernate – and this article is a good way to brush up on the fundamentals.

>> JavaOne 2016 – Audience Gets a Glimpse of the Power of JShell [infoq.com]

JShell is one of the features I’m most excited about when Java 9 is going to drop.

>> Resource leakages: command pattern to the rescue [plumbr.eu]

Look – this isn’t new by any stretch of the imagination. Read it anyways – it might lead to code that behaves better.

>> Oracle Gives NetBeans to the Apache Foundation [infoq.com]

An unexpected move from Oracle, with hopefully positive consequences for the project.

That being said, that may also mean that the current Oracle team will slowly but surely be pulled from the project, which will definitely also have an impact.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Musings

>> Are you guilty of over-engineering? [frankel.ch]

This write-up is a good example of why TDD is so very useful. Design is the primary reason to do TDD – a design that almost always leads to fewer unnecessary abstractions like the one described here.

>> Old Geek [tbray.org]

It’s always baffling and sad to realize just how different the tech landscape looks over a certain age. It’s really a shame and easy to forget if you’re not there yet.

>> Here’s how broken today’s web will feel in Chrome’s secure-by-default future [troyhunt.com]

These changes are coming to Chrome (first) sooner rather than later. A bold step in the right direction – and one that’s especially interesting if you own a site.

>> Wherefore Art Thou, Tech Debt? [daedtech.com]

An insightful read about how different shops operate and about common patterns across the industry. Clearly a post born out of experience – and a hint of serenity that comes with that experience.

If you’re not doing consulting you’ve definitely seen some of these yourself but likely not all.

>> RPC vs REST is not in the URL [bizcoder.com]

Interesting position on REST, definitely coming from a different perspective.

>> What 60,000 Customer Searches Taught Us about Logging in JSON [loggly.com]

It’s always fun to read up on how the industry is doing logging, and “start logging in JSON now” is a message I fully agree with.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Is that sarcasm? [dilbert.com]

>> When our CEO visits [dilbert.com]

>> Productivity went down [dilbert.com]

4. Pick of the Week

Petri’s course on testing is 30% off for a few more days, until the first package goes live.

So, right now is a great opportunity to get it, as it’s going to go up in price soon:

>> Get 30% off the “Test with Spring” Course


web.xml vs Initializer with Spring


1. Overview

In this article we’ll cover three different approaches to configuring a DispatcherServlet available in recent versions of the Spring Framework:

  1. We’ll start with an XML configuration and a web.xml file
  2. Then we’ll migrate the Servlet declaration from the web.xml file to Java config, but we’ll leave any other configuration in XML
  3. Finally, in the third step of the refactoring, we’ll have a 100% Java-configured project

2. The DispatcherServlet

One of the core concepts of Spring MVC is the DispatcherServlet. The Spring documentation defines it as:

A central dispatcher for HTTP request handlers/controllers, e.g. for web UI controllers or HTTP-based remote service exporters. Dispatches to registered handlers for processing a web request, providing convenient mapping and exception handling facilities.

Basically, the DispatcherServlet is the entry point of every Spring MVC application. Its purpose is to intercept HTTP requests and dispatch them to the right component that knows how to handle them.
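The front-controller idea behind the DispatcherServlet can be illustrated with a toy dispatcher in plain Java. The names here are hypothetical and this is not Spring’s API – it only shows the pattern of a single entry point routing requests to registered handlers:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy front controller: one entry point (dispatch) receives every
// request path and routes it to whichever handler was registered
// for that path, much like DispatcherServlet routes to controllers.
public class FrontController {
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    public void register(String path, Function<String, String> handler) {
        handlers.put(path, handler);
    }

    public String dispatch(String path, String request) {
        Function<String, String> handler = handlers.get(path);
        if (handler == null) {
            return "404 Not Found"; // no handler mapped for this path
        }
        return handler.apply(request);
    }
}
```

In Spring MVC the registration step is what the handler mappings do for our @Controller beans, and the 404 branch corresponds to the servlet’s default “no handler found” behavior.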

3. Configuration with web.xml

If you deal with legacy Spring projects, it is very common to find XML configuration, and until Spring 3.1 the only way to configure the DispatcherServlet was with the WEB-INF/web.xml file. In this case there are two steps required.

Let’s see an example configuration – the first step is the Servlet declaration:

<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>
        org.springframework.web.servlet.DispatcherServlet
    </servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/spring/dispatcher-config.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

With this block of XML we are declaring a servlet that:

  1. Is named “dispatcher”
  2. Is an instance of org.springframework.web.servlet.DispatcherServlet
  3. Will be initialized with a parameter named contextConfigLocation which contains the path to the configuration XML

load-on-startup is an integer value that specifies the order in which multiple servlets are loaded. So if we need to declare more than one servlet, we can define the order in which they’ll be initialized. Servlets marked with lower integers are loaded before servlets marked with higher integers.
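For illustration, two servlets can be ordered like this (the servlet names and the ConfigServlet class are hypothetical, chosen only to show the ordering):

```xml
<!-- "config" loads before "dispatcher" because it has
     the lower load-on-startup value -->
<servlet>
    <servlet-name>config</servlet-name>
    <servlet-class>com.example.ConfigServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>2</load-on-startup>
</servlet>
```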

Now our servlet is configured. The second step is declaring a servlet-mapping:

<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

With the servlet mapping, we are binding the servlet by its name to a URL pattern that specifies which HTTP requests it will handle.

4. Hybrid Configuration

With the adoption of version 3.0 of the Servlet API, the web.xml file has become optional, and we can now use Java to configure the DispatcherServlet.

We can register a servlet by implementing a WebApplicationInitializer. This is the equivalent of the XML configuration above:

public class MyWebAppInitializer implements WebApplicationInitializer {
    @Override
    public void onStartup(ServletContext container) {
        XmlWebApplicationContext context = new XmlWebApplicationContext();
        context.setConfigLocation("/WEB-INF/spring/dispatcher-config.xml");

        ServletRegistration.Dynamic dispatcher = container
          .addServlet("dispatcher", new DispatcherServlet(context));

        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

In this example we are:

  1. Implementing the WebApplicationInitializer interface
  2. Overriding the onStartup method, in which we create a new XmlWebApplicationContext configured with the same file that was passed as contextConfigLocation in the XML example
  3. Then we are creating an instance of DispatcherServlet with the new context that we just instantiated
  4. And finally we are registering the servlet with a mapping URL pattern

So we used Java to declare the servlet and bind it to a URL mapping, but we kept the configuration in a separate XML file: dispatcher-config.xml.

5. 100% Java Configuration

With the previous approach, our servlet was declared in Java, but we still needed an XML file to configure it. With a WebApplicationInitializer, however, we can achieve a 100% Java configuration.

Let’s see how we can refactor the previous example.

The first thing we will need to do is create the application context for the servlet.

This time we will use an annotation based context so that we can use Java and annotations for configuration and remove the need for XML files like dispatcher-config.xml:

AnnotationConfigWebApplicationContext context
  = new AnnotationConfigWebApplicationContext();

This type of context can then be configured by registering a configuration class:

context.register(AppConfig.class);

Or by setting an entire package that will be scanned for configuration classes:

context.setConfigLocation("com.example.app.config");
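As a rough sketch, a configuration class picked up by that scan might look like the following; AppConfig, its package, and the scanned base package are illustrative assumptions, not part of the example project:

```java
package com.example.app.config;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;

// Hypothetical configuration class replacing dispatcher-config.xml
@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "com.example.app")
public class AppConfig {
    // bean definitions go here
}
```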

Now that our application context is created, we can add a listener to the ServletContext that will load the context:

container.addListener(new ContextLoaderListener(context));

The next step is creating and registering our dispatcher servlet:

ServletRegistration.Dynamic dispatcher = container
  .addServlet("dispatcher", new DispatcherServlet(context));

dispatcher.setLoadOnStartup(1);
dispatcher.addMapping("/");

Now our WebApplicationInitializer should look like this:

public class MyWebAppInitializer implements WebApplicationInitializer {
    @Override
    public void onStartup(ServletContext container) {
        AnnotationConfigWebApplicationContext context
          = new AnnotationConfigWebApplicationContext();
        context.setConfigLocation("com.example.app.config");

        container.addListener(new ContextLoaderListener(context));

        ServletRegistration.Dynamic dispatcher = container
          .addServlet("dispatcher", new DispatcherServlet(context));
        
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

Java and annotation configuration offer many advantages. Usually they lead to shorter and more concise configuration, and annotations provide more context to declarations, as they are co-located with the code that they configure.

But this is not always preferable or even possible. For example, some developers prefer to keep their code and configuration separate, or you may need to work with third-party code that you can’t modify.

6. Conclusion

In this article we covered several ways to configure a DispatcherServlet in Spring 3.2+; it’s up to you to decide which one to use based on your preferences. Spring will accommodate your decision, whatever you choose.

You can find the source code from this article on GitHub.



Guide to Java Reflection


1. Overview

In this article we will explore Java reflection, which allows us to inspect and/or modify the runtime attributes of classes, interfaces, fields and methods. This comes in particularly handy when we don’t know their names at compile time.

Additionally, we can instantiate new objects, invoke methods and get or set field values using reflection.

2. Project Setup

To use Java reflection, we do not need to include any special jars, configuration, or Maven dependencies. The JDK ships with a group of classes bundled in the java.lang.reflect package specifically for this purpose.

So all we need to do is to make the following import into our code:

import java.lang.reflect.*;

and we are good to go.

To get access to the class, method, and field information of an instance, we call the getClass method, which returns the runtime class representation of the object. The returned Class object provides methods for accessing information about the class.
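As a quick, self-contained illustration of this (the class and variable names here are our own), getClass returns the runtime type even when the static type of the reference is Object:

```java
public class GetClassDemo {
    public static void main(String[] args) {
        // The static type is Object, but getClass returns the runtime class
        Object text = "hello";
        Class<?> clazz = text.getClass();

        System.out.println(clazz.getSimpleName()); // String
        System.out.println(clazz.getName());       // java.lang.String

        // The Class object exposes methods, fields, constructors, etc.
        System.out.println(clazz.getDeclaredMethods().length > 0); // true
    }
}
```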

3. Simple Example

To get our feet wet, we are going to take a look at a very basic example that inspects the fields of a simple Java object at runtime.

Let’s create a simple Person class with only name and age fields and no methods at all. Here is the Person class:

public class Person {
    private String name;
    private int age;
}

We will now use Java reflection to discover the names of all fields of this class. To appreciate the power of reflection, we will construct a Person object and use Object as the reference type:

@Test
public void givenObject_whenGetsFieldNamesAtRuntime_thenCorrect() {
    Object person = new Person();
    Field[] fields = person.getClass().getDeclaredFields();

    List<String> actualFieldNames = getFieldNames(fields);

    assertTrue(Arrays.asList("name", "age")
      .containsAll(actualFieldNames));
}

This test shows us that we are able to get an array of Field objects from our person object, even if the reference to the object is a parent type of that object.

In the above example, we were only interested in the names of those fields, but there is much more that can be done and we will see further examples of this in the subsequent sections.

Notice how we use a helper method to extract the actual field names; it’s very basic code:

private static List<String> getFieldNames(Field[] fields) {
    List<String> fieldNames = new ArrayList<>();
    for (Field field : fields)
      fieldNames.add(field.getName());
    return fieldNames;
}

4. Java Reflection Use Cases

Before we proceed to the different features of Java reflection, we will discuss some of its common uses. Java reflection is extremely powerful and can come in handy in a number of ways.

For instance, in many cases we have a naming convention for database tables. We may choose to add consistency by prefixing our table names with tbl_, such that a table with student data is called tbl_student_data.

In such cases, we may name the Java object holding student data Student or StudentData. Then, using the CRUD paradigm, we have one entry point for each operation, such that Create operations only receive an Object parameter.

We then use reflection to retrieve the object name and field names. At this point, we can map this data to a DB table and assign the object field values to the appropriate DB field names.
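The idea above can be sketched as a small, hypothetical mapper; the naming convention, class names, and method names here are illustrative only, not a real persistence framework:

```java
import java.lang.reflect.Field;
import java.util.StringJoiner;

public class TableMapper {

    // Hypothetical convention from the text: tbl_ prefix + lower-cased class name
    public static String tableName(Object entity) {
        return "tbl_" + entity.getClass().getSimpleName().toLowerCase();
    }

    // Column names derived from the object's declared fields.
    // getDeclaredFields typically returns them in declaration order,
    // though the JVM specification does not guarantee it.
    public static String columns(Object entity) {
        StringJoiner joiner = new StringJoiner(", ");
        for (Field field : entity.getClass().getDeclaredFields()) {
            joiner.add(field.getName());
        }
        return joiner.toString();
    }

    public static class StudentData {
        private String name;
        private int age;
    }

    public static void main(String[] args) {
        StudentData student = new StudentData();
        System.out.println(tableName(student)); // tbl_studentdata
        System.out.println(columns(student));   // e.g. name, age
    }
}
```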

5. Inspecting Java Classes

In this section, we will explore the most fundamental component of the Java reflection API: Java Class objects, which, as we mentioned earlier, give us access to the internal details of any object.

We are going to examine internal details such as an object’s class name, its modifiers, fields, methods, implemented interfaces, etc.

5.1. Getting Ready

To get a firm grip on the reflection API as applied to Java classes, and to have examples with some variety, we will create an abstract Animal class that implements the Eating interface. This interface defines the eating behavior of any concrete Animal object we create.

So firstly, here is the Eating interface:

public interface Eating {
    String eats();
}

and then the abstract Animal implementation of the Eating interface:

public abstract class Animal implements Eating {

    public static String CATEGORY = "domestic";
    private String name;

    protected abstract String getSound();

    // constructor, standard getters and setters omitted 
}

Let’s also create another interface called Locomotion which describes how an animal moves:

public interface Locomotion {
    String getLocomotion();
}

We will now create a concrete class called Goat which extends Animal and implements Locomotion. Since the super class implements Eating, Goat will have to implement that interface’s methods as well:

public class Goat extends Animal implements Locomotion {

    @Override
    protected String getSound() {
        return "bleat";
    }

    @Override
    public String getLocomotion() {
        return "walks";
    }

    @Override
    public String eats() {
        return "grass";
    }

    // constructor omitted
}

From this point onward, we will use Java reflection to inspect aspects of the Java objects that appear in the classes and interfaces above.

5.2. Class Names

Let’s start by getting the name of an object from the Class:

@Test
public void givenObject_whenGetsClassName_thenCorrect() {
    Object goat = new Goat("goat");
    Class<?> clazz = goat.getClass();

    assertEquals("Goat", clazz.getSimpleName());
    assertEquals("com.baeldung.reflection.Goat", clazz.getName());
    assertEquals("com.baeldung.reflection.Goat", clazz.getCanonicalName());
}

Note that the getSimpleName method of Class returns the basic name of the object as it would appear in its declaration. The other two methods return the fully qualified class name, including the package declaration.
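As a side note, getName and getCanonicalName are not always identical: they differ for nested and array classes. A small self-contained sketch (the class names are our own):

```java
public class NameDemo {
    static class Inner {}

    public static void main(String[] args) {
        // For a nested class, getName uses '$' while getCanonicalName uses '.'
        System.out.println(Inner.class.getName());          // NameDemo$Inner
        System.out.println(Inner.class.getCanonicalName()); // NameDemo.Inner

        // For an array class, getName returns the JVM descriptor
        System.out.println(int[].class.getName());          // [I
        System.out.println(int[].class.getCanonicalName()); // int[]
    }
}
```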

Let’s also see how we can create an object of the Goat class if we only know its fully qualified class name:

@Test
public void givenClassName_whenCreatesObject_thenCorrect() throws ClassNotFoundException {
    Class<?> clazz = Class.forName("com.baeldung.reflection.Goat");

    assertEquals("Goat", clazz.getSimpleName());
    assertEquals("com.baeldung.reflection.Goat", clazz.getName());
    assertEquals("com.baeldung.reflection.Goat", clazz.getCanonicalName());
}

Notice that the name we pass to the static forName method must include the package information; otherwise, we will get a ClassNotFoundException.

5.3. Class Modifiers

We can determine the modifiers used on a class by calling the getModifiers method, which returns an int. Each modifier is a flag bit that is either set or cleared.

The java.lang.reflect.Modifier class offers static methods that analyze the returned int for the presence or absence of a specific modifier.

Let’s confirm the modifiers of some of the classes we defined above:

@Test
public void givenClass_whenRecognisesModifiers_thenCorrect() throws ClassNotFoundException {
    Class<?> goatClass = Class.forName("com.baeldung.reflection.Goat");
    Class<?> animalClass = Class.forName("com.baeldung.reflection.Animal");

    int goatMods = goatClass.getModifiers();
    int animalMods = animalClass.getModifiers();

    assertTrue(Modifier.isPublic(goatMods));
    assertTrue(Modifier.isAbstract(animalMods));
    assertTrue(Modifier.isPublic(animalMods));
}

We are able to inspect modifiers of any class located in a library jar that we are importing into our project.

In most cases, we may need to use the forName approach rather than full-blown instantiation, since instantiation would be an expensive process in the case of memory-heavy classes.

5.4. Package Information

By using Java reflection, we are also able to get information about the package of any class or object. This data is bundled inside the Package class, which is returned by a call to the getPackage method on the Class object.

Let’s run a test to retrieve the package name:

@Test
public void givenClass_whenGetsPackageInfo_thenCorrect() {
    Goat goat = new Goat("goat");
    Class<?> goatClass = goat.getClass();
    Package pkg = goatClass.getPackage();

    assertEquals("com.baeldung.reflection", pkg.getName());
}

5.5. Super Class

We are also able to obtain the superclass of any Java class by using reflection.

In many cases, especially while using library classes or Java’s built-in classes, we may not know beforehand the superclass of an object we are using; this subsection will show how to obtain that information.

So let’s go ahead and determine the superclass of Goat; additionally, we will also show that the java.lang.String class is a subclass of java.lang.Object:

@Test
public void givenClass_whenGetsSuperClass_thenCorrect() {
    Goat goat = new Goat("goat");
    String str = "any string";

    Class<?> goatClass = goat.getClass();
    Class<?> goatSuperClass = goatClass.getSuperclass();

    assertEquals("Animal", goatSuperClass.getSimpleName());
    assertEquals("Object", str.getClass().getSuperclass().getSimpleName());
}

5.6. Implemented Interfaces

Using Java reflection, we are also able to get the list of interfaces implemented by a given class.

Let’s retrieve the class types of the interfaces implemented by the Goat class and the Animal abstract class:

@Test
public void givenClass_whenGetsImplementedInterfaces_thenCorrect() throws ClassNotFoundException {
    Class<?> goatClass = Class.forName("com.baeldung.reflection.Goat");
    Class<?> animalClass = Class.forName("com.baeldung.reflection.Animal");

    Class<?>[] goatInterfaces = goatClass.getInterfaces();
    Class<?>[] animalInterfaces = animalClass.getInterfaces();

    assertEquals(1, goatInterfaces.length);
    assertEquals(1, animalInterfaces.length);
    assertEquals("Locomotion", goatInterfaces[0].getSimpleName());
    assertEquals("Eating", animalInterfaces[0].getSimpleName());
}

Notice from the assertions that each class implements only a single interface. Inspecting the names of these interfaces, we find that Goat implements Locomotion and Animal implements Eating, just as it appears in our code.

You may have observed that Goat is a subclass of the abstract class Animal and implements the interface method eats(), so Goat also, in effect, implements the Eating interface.

It is therefore worth noting that only those interfaces that a class explicitly declares as implemented with the implements keyword appear in the returned array.

So even if a class effectively implements an interface’s methods because its superclass implements that interface, if the subclass does not directly declare that interface with the implements keyword, it will not appear in the array of interfaces.
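This behavior can be demonstrated with a compact, self-contained sketch that mirrors the article’s classes in simplified form:

```java
interface Eating {
    String eats();
}

abstract class Animal implements Eating {
}

// Goat declares no interface directly; it inherits Eating through Animal
class Goat extends Animal {
    public String eats() {
        return "grass";
    }
}

public class InheritedInterfaceDemo {
    public static void main(String[] args) {
        // Eating does not appear in Goat's own getInterfaces() array...
        System.out.println(Goat.class.getInterfaces().length);         // 0
        // ...it appears on the superclass instead...
        System.out.println(Animal.class.getInterfaces().length);       // 1
        // ...yet assignability still holds for the subclass
        System.out.println(Eating.class.isAssignableFrom(Goat.class)); // true
    }
}
```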

5.7. Constructors, Methods and Fields

With Java reflection, we are able to inspect the constructors of any object’s class as well as its methods and fields.

We will later on be able to see deeper inspections on each of these components of a class but for now, it suffices to just get their names and compare them with what we expect.

Let’s see how to get the constructor of the Goat class:

@Test
public void givenClass_whenGetsConstructor_thenCorrect() throws ClassNotFoundException {
    Class<?> goatClass = Class.forName("com.baeldung.reflection.Goat");

    Constructor<?>[] constructors = goatClass.getConstructors();

    assertEquals(1, constructors.length);
    assertEquals("com.baeldung.reflection.Goat", constructors[0].getName());
}

We can also inspect the fields of the Animal class like so:

@Test
public void givenClass_whenGetsFields_thenCorrect() throws ClassNotFoundException {
    Class<?> animalClass = Class.forName("com.baeldung.reflection.Animal");
    Field[] fields = animalClass.getDeclaredFields();

    List<String> actualFields = getFieldNames(fields);

    assertEquals(2, actualFields.size());
    assertTrue(actualFields.containsAll(Arrays.asList("name", "CATEGORY")));
}

Just like we can inspect the methods of the Animal class:

@Test
public void givenClass_whenGetsMethods_thenCorrect() throws ClassNotFoundException {
    Class<?> animalClass = Class.forName("com.baeldung.reflection.Animal");
    Method[] methods = animalClass.getDeclaredMethods();
    List<String> actualMethods = getMethodNames(methods);

    assertEquals(4, actualMethods.size());
    assertTrue(actualMethods.containsAll(Arrays.asList("getName",
      "setName", "getSound")));
}

Just like getFieldNames, we have added a helper method to retrieve method names from an array of Method objects:

private static List<String> getMethodNames(Method[] methods) {
    List<String> methodNames = new ArrayList<>();
    for (Method method : methods)
      methodNames.add(method.getName());
    return methodNames;
}

6. Inspecting Constructors

With Java reflection, we can inspect the constructors of any class and even create class objects at runtime. This is made possible by the java.lang.reflect.Constructor class.

Earlier on, we only looked at how to get the array of Constructor objects, from which we were able to get the names of the constructors.

In this section, we will focus on how to retrieve specific constructors. In Java, as we know, no two constructors of a class share exactly the same signature. So we will use this uniqueness to get one constructor from among many.

To appreciate the features of this class, we will create a Bird subclass of Animal with three constructors. We will not implement Locomotion so that we can specify that behavior using a constructor argument, to add still more variety:

public class Bird extends Animal {
    private boolean walks;

    public Bird() {
        super("bird");
    }

    public Bird(String name, boolean walks) {
        super(name);
        setWalks(walks);
    }

    public Bird(String name) {
        super(name);
    }

    public boolean walks() {
        return walks;
    }

    // standard setters and overridden methods
}

Let’s confirm by reflection that this class has three constructors:

@Test
public void givenClass_whenGetsAllConstructors_thenCorrect() throws ClassNotFoundException {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Constructor<?>[] constructors = birdClass.getConstructors();

    assertEquals(3, constructors.length);
}

Next, we will retrieve each constructor for the Bird class by passing the constructor’s parameter class types in declared order:

@Test
public void givenClass_whenGetsEachConstructorByParamTypes_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");

    Constructor<?> cons1 = birdClass.getConstructor();
    Constructor<?> cons2 = birdClass.getConstructor(String.class);
    Constructor<?> cons3 = birdClass.getConstructor(String.class, boolean.class);
}

There is no need for an assertion, since when a constructor with the given parameter types in the given order does not exist, we will get a NoSuchMethodException and the test will automatically fail.
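To see this exception in action without the article’s classes, here is a tiny standalone sketch using String, which declares no constructor taking a single int:

```java
public class MissingConstructorDemo {
    public static void main(String[] args) {
        try {
            // String has no constructor with an (int) signature,
            // so this lookup throws NoSuchMethodException
            String.class.getConstructor(int.class);
            System.out.println("constructor found");
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException thrown");
        }
    }
}
```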

In the last test, we will see how to instantiate objects at runtime while supplying their parameters:

@Test
public void givenClass_whenInstantiatesObjectsAtRuntime_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Constructor<?> cons1 = birdClass.getConstructor();
    Constructor<?> cons2 = birdClass.getConstructor(String.class);
    Constructor<?> cons3 = birdClass.getConstructor(String.class,
      boolean.class);

    Bird bird1 = (Bird) cons1.newInstance();
    Bird bird2 = (Bird) cons2.newInstance("Weaver bird");
    Bird bird3 = (Bird) cons3.newInstance("dove", true);

    assertEquals("bird", bird1.getName());
    assertEquals("Weaver bird", bird2.getName());
    assertEquals("dove", bird3.getName());

    assertFalse(bird1.walks());
    assertTrue(bird3.walks());
}

We instantiate class objects by calling the newInstance method of Constructor class and passing the required parameters in declared order. We then cast the result to the required type.

For bird1, we use the default constructor, which, from our Bird code, automatically sets the name to bird, and we confirm that with a test.

We then instantiate bird2 with only a name and test it as well. Remember that when we don’t set the locomotion behavior, it defaults to false, as seen in the last two assertions.

7. Inspecting Fields

Previously, we only inspected the names of fields; in this section, we will show how to get and set their values at runtime.

There are two main methods used to inspect fields of a class at runtime: getFields() and getField(fieldName).

The getFields() method returns all accessible public fields of the class in question. It will return all the public fields in both the class and all super classes.

For instance, when we call this method on the Bird class, we will only get the CATEGORY field of its superclass, Animal, since Bird itself does not declare any public fields:

@Test
public void givenClass_whenGetsPublicFields_thenCorrect() throws ClassNotFoundException {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Field[] fields = birdClass.getFields();

    assertEquals(1, fields.length);
    assertEquals("CATEGORY", fields[0].getName());
}

This method also has a variant called getField, which returns a single Field object given the name of the field:

@Test
public void givenClass_whenGetsPublicFieldByName_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Field field = birdClass.getField("CATEGORY");

    assertEquals("CATEGORY", field.getName());
}

We are not able to access private fields declared in superclasses and not declared in the child class, which is why we cannot access the name field.

However, we can inspect private fields declared in the class we are dealing with by calling the getDeclaredFields method:

@Test
public void givenClass_whenGetsDeclaredFields_thenCorrect() throws ClassNotFoundException {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Field[] fields = birdClass.getDeclaredFields();

    assertEquals(1, fields.length);
    assertEquals("walks", fields[0].getName());
}

We can also use its other variant in case we know the name of the field:

@Test
public void givenClass_whenGetsFieldsByName_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Field field = birdClass.getDeclaredField("walks");

    assertEquals("walks", field.getName());
}

If we get the name of the field wrong or refer to a nonexistent field, we will get a NoSuchFieldException.

We get the field type as follows:

@Test
public void givenClassField_whenGetsType_thenCorrect() throws Exception {
    Field field = Class.forName("com.baeldung.reflection.Bird")
      .getDeclaredField("walks");
    Class<?> fieldClass = field.getType();

    assertEquals("boolean", fieldClass.getSimpleName());
}

Next, we will look at how to access field values and modify them. To be able to get the value of a field, let alone set it, we must first make it accessible by calling the setAccessible method on the Field object and passing it a boolean true:

@Test
public void givenClassField_whenSetsAndGetsValue_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Bird bird = (Bird) birdClass.newInstance();
    Field field = birdClass.getDeclaredField("walks");
    field.setAccessible(true);

    assertFalse(field.getBoolean(bird));
    assertFalse(bird.walks());

    field.set(bird, true);

    assertTrue(field.getBoolean(bird));
    assertTrue(bird.walks());
}

In the above test, we ascertain that indeed the value of the walks field is false before setting it to true.

Notice how we use the Field object to set and get values by passing it the instance of the class we are dealing with and possibly the new value we want the field to have in that object.

One important thing to note about Field objects is that when a field is declared public static, we don’t need an instance of the class containing it; we can just pass null in its place and still obtain the value of the field, like so:

@Test
public void givenClassField_whenGetsAndSetsWithNull_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Field field = birdClass.getField("CATEGORY");
    field.setAccessible(true);

    assertEquals("domestic", field.get(null));
}

8. Inspecting Methods

In a previous example, we used reflection only to inspect method names. However, Java reflection is more powerful than that.

With Java reflection, we can invoke methods at runtime and pass them their required parameters, just like we did for constructors. Similarly, we can invoke overloaded methods by specifying the parameter types of each.

Just like fields, there are two main methods that we use for retrieving class methods. The getMethods method returns an array of all public methods of the class and super classes.

This means that with this method, we can get public methods of the java.lang.Object class like toString, hashCode and notifyAll:

@Test
public void givenClass_whenGetsAllPublicMethods_thenCorrect() throws ClassNotFoundException {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Method[] methods = birdClass.getMethods();
    List<String> methodNames = getMethodNames(methods);

    assertTrue(methodNames.containsAll(Arrays
      .asList("equals", "notifyAll", "hashCode",
        "walks", "eats", "toString")));
}

To get only the methods that the class we are interested in declares itself, regardless of their visibility, we have to use the getDeclaredMethods method:

@Test
public void givenClass_whenGetsOnlyDeclaredMethods_thenCorrect() throws ClassNotFoundException {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    List<String> actualMethodNames
      = getMethodNames(birdClass.getDeclaredMethods());

    List<String> expectedMethodNames = Arrays
      .asList("setWalks", "walks", "getSound", "eats");

    assertEquals(expectedMethodNames.size(), actualMethodNames.size());
    assertTrue(expectedMethodNames.containsAll(actualMethodNames));
    assertTrue(actualMethodNames.containsAll(expectedMethodNames));
}

Each of these methods also has a singular variant that returns a single Method object whose name we know:

@Test
public void givenMethodName_whenGetsMethod_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Method walksMethod = birdClass.getDeclaredMethod("walks");
    Method setWalksMethod = birdClass.getDeclaredMethod("setWalks", boolean.class);

    assertFalse(walksMethod.isAccessible());
    assertFalse(setWalksMethod.isAccessible());

    walksMethod.setAccessible(true);
    setWalksMethod.setAccessible(true);

    assertTrue(walksMethod.isAccessible());
    assertTrue(setWalksMethod.isAccessible());
}

Notice how we retrieve individual methods and specify what parameter types they take. Those that don’t take parameter types are retrieved with an empty variable argument, leaving us with only a single argument, the method name.

Next, we will show how to invoke a method at runtime. We know that by default the walks attribute of the Bird class is false; we want to call its setWalks method and set it to true:

@Test
public void givenMethod_whenInvokes_thenCorrect() throws Exception {
    Class<?> birdClass = Class.forName("com.baeldung.reflection.Bird");
    Bird bird = (Bird) birdClass.newInstance();
    Method setWalksMethod = birdClass.getDeclaredMethod("setWalks", boolean.class);
    Method walksMethod = birdClass.getDeclaredMethod("walks");
    boolean walks = (boolean) walksMethod.invoke(bird);

    assertFalse(walks);
    assertFalse(bird.walks());

    setWalksMethod.invoke(bird, true);

    boolean walks2 = (boolean) walksMethod.invoke(bird);
    assertTrue(walks2);
    assertTrue(bird.walks());
}

Notice how we first invoke the walks method and cast the return type to the appropriate data type and then check its value. We then later invoke the setWalks method to change that value and test again.

9. Conclusion

In this tutorial, we have covered the Java Reflection API and looked at how to use it to inspect classes, interfaces, fields and methods at runtime, without prior knowledge of their internals at compile time.

The full source code and examples for this tutorial can be found over on GitHub.


Deploying Web Applications in Jetty


1. Overview

In this article, we will do a quick overview of the Jetty web server and then cover various approaches to deploying a WAR file.

Jetty is an open source Java HTTP web server and servlet container. Jetty is more commonly used for machine-to-machine communication in the Java ecosystem.

2. Project Setup

The latest version of Jetty can always be downloaded by following this link. We will create a very basic Java web application from the command line with Maven, which we will use for our examples.

In this article, we are using Jetty 9.x, the latest version at the moment.

Let’s head over to our console, navigate to our location of choice and run the following command:

mvn archetype:generate -DgroupId=com.baeldung -DartifactId=jetty-app 
  -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

This command will create a complete Java web app inside a new jetty-app folder in our current location. It is just one of many ways of creating a Java application with Maven and it suits our purpose.

Since we are going to be dealing with WAR files, let’s navigate to the project root and build it:

cd jetty-app

Building with Maven:

mvn package

Then jetty-app.war will be created at location jetty-app/target/jetty-app.war.

3. Jetty Structure

Context path. This refers to the location relative to the server’s address that represents the name of the web application.

For example, if our web application is put under the $JETTY_HOME\webapps\myapp directory, it will be accessed by the URL http://localhost/myapp, and its context path will be /myapp.

WAR. This is the file extension for packaging a web application directory hierarchy in ZIP format; it is short for Web Archive. Java web applications are usually packaged as WAR files for deployment. WAR files can be created on the command line or with an IDE like Eclipse.

4. Deploying by Copying WAR

The easiest way to deploy a web application to Jetty server is probably by copying the WAR file into the $JETTY_HOME/webapps directory.

After copying, we can start the server by navigating to $JETTY_HOME and running the command:

java -jar start.jar

Jetty will scan its $JETTY_HOME/webapps directory at startup for web applications to deploy. Our new app will be deployed at /jetty-app context.

When we load the URL http://localhost:8080/jetty-app from the browser, we should see our app running with Hello world! printed to the screen.

5. Deploying Using Context File

The Jetty web server offers us a way to deploy a web archive located anywhere in the file system by creating a context file for it.

This way, even if our WAR file is located on a desktop or we have chosen to keep it in jetty-app/target where Maven places the package, we can just create its context file inside $JETTY_HOME/webapps.

Let’s undeploy the jetty-app.war we just deployed by deleting it from webapps. We will then create jetty-app.xml with the following code and place it inside webapps:

<?xml version="1.0"  encoding="ISO-8859-1"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" 
  "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
    <Set name="contextPath">/jetty</Set>
    <Set name="war">absolute/path/to/jetty-app.war</Set>
</Configure>

This context file must have the same name as our WAR, with an .xml file extension. Notice that we have set the contextPath attribute to /jetty. This means that we will access our web app at the URL http://localhost:8080/jetty.

This ability to customize the context path is one of the great advantages of the context-file approach to deploying WARs in Jetty, as some app names may not be convenient for this purpose.

6. Deploying with the Jetty Maven Plugin

6.1. Default Deployment

The jetty Maven plugin helps us to do rapid testing and iteration while building Java web applications. To be able to deploy and run applications with it, we only need to add the plugin in pom.xml:

<plugin>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <version>9.3.11.v20160721</version>
</plugin>

The latest version can be found by following this Maven link.

We have to make sure that our instance of Jetty running on port 8080 is stopped before we perform the next step.

To deploy our app after adding the plugin, we navigate to the root where pom.xml is located and run the following command:

mvn jetty:run

This command creates a new jetty instance and the plugin deploys the app to it. We can access it by loading http://localhost:8080.

The jetty Maven plugin continuously scans the web project for any changes and keeps redeploying it.
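The scan behavior is configurable through the plugin's scanIntervalSeconds option, which controls how often the project is re-scanned; the interval below is our own illustrative choice, so consult the plugin documentation for your version:

```xml
<plugin>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <version>9.3.11.v20160721</version>
    <configuration>
        <!-- re-scan the web project for changes every 5 seconds -->
        <scanIntervalSeconds>5</scanIntervalSeconds>
    </configuration>
</plugin>
```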

6.2. Changing the ContextPath

From the previous subsection, the app was deployed under / context. However, if we would like to deploy under a given context path such as /jetty as before, we will have to configure the plugin differently.

We will change our plugin declaration to the following XML:

<plugin>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <version>9.3.11.v20160721</version>
    <configuration>
        <webApp>
            <contextPath>/jetty</contextPath>
        </webApp>
    </configuration>
</plugin>

Notice how we have added a configuration block to further customize our deployment. Several configuration options can be placed inside this block, depending on our needs.

After these changes, we can re-run the plugin as before and access our app through http://localhost:8080/jetty.

6.3. Changing the Port

A scenario we may face is a port-in-use exception. Maybe we have a Jetty instance running on port 8080 for production, but we are still in the development phase and want to benefit from the ease of iteration that comes with deploying using the Maven plugin.

In such cases, we have to run our test server on a different port. Let’s change the plugin configuration to the following XML:

<configuration>
    <webApp>
        <contextPath>/jetty</contextPath>
    </webApp>
    <httpConnector>
        <port>8888</port>
    </httpConnector>
</configuration>

When we re-run our Maven plugin, we will be able to access our app from http://localhost:8888/jetty.

It is worth noting that with the jetty Maven plugin, we do not need to have an instance of jetty installed and running. Rather, it creates its own jetty instance.

7. Deploying With Jetty Runner

Just like jetty Maven plugin, the jetty-runner offers a fast and easy way to deploy and run our web app. With jetty-runner, we also don’t need to install and run a separate instance of a jetty server.

7.1. Jetty Runner Setup

To use jetty-runner in rapid deployment and running of our web apps, we can download the latest version by following this Maven link.

With jetty-runner, we only need to place its downloaded jar anywhere we please and be ready with the file system path to our web archives.

We can pass in configuration parameters from the command line as well as deploy numerous applications at different contexts and bound to different ports with just one command.

We'll place the jetty-runner jar in the same directory hierarchy as the jetty-app directory, that is, the directory containing our web application.

7.2. Basic Deployment

Let’s deploy our WAR using jetty-runner:

java -jar jetty-runner-9.4.0.M1.jar jetty-app/target/jetty-app.war

This command, just like the case of the Maven plugin, creates a jetty instance and deploys the provided WAR to it. The WAR path can be an absolute or a relative path.

We can load this application using http://localhost:8080.

7.3. Deploy With Context Path

To deploy under /jetty context as before:

java -jar jetty-runner-9.4.0.M1.jar --path /jetty jetty-app/target/jetty-app.war

Accessible via http://localhost:8080/jetty.

7.4. Deploy On Given Port

To deploy on a given port number:

java -jar jetty-runner-9.4.0.M1.jar --port 9090 jetty-app/target/jetty-app.war

Accessible via http://localhost:9090.

7.5. Deploy Multiple WARs

To deploy several WARs with the same command, we use the --path argument to make each context unique:

java -jar jetty-runner-9.4.0.M1.jar --path /one one.war --path /two two.war

We would then access one.war via http://localhost:8080/one and two.war via http://localhost:8080/two.

8. Deploy with Cargo Maven Plugin

Cargo is a versatile library that allows us to manipulate various types of application containers in a standard way.

8.1. Cargo Deployment Setup

In this section, we will look at how to use Cargo's Maven plugin to deploy a WAR to Jetty; in this case, we will deploy it to a Jetty 9.x instance.

To get a firm grip of the whole process, we will start from scratch by creating a new Java web application from the command line:

mvn archetype:generate -DgroupId=com.baeldung -DartifactId=cargo-deploy 
  -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

This will create a complete Java web application in the cargo-deploy directory. If we build, deploy and load this application as is, it will print Hello World! in the browser.

Since our web application does not contain any servlets, our web.xml file will be very basic. Let's navigate to the WEB-INF folder of our newly created project and, if it was not auto-created, create a web.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
  xmlns="http://java.sun.com/xml/ns/javaee" 
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
      http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" id="WebApp_ID" version="3.0">

    <display-name>cargo-deploy</display-name>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
</web-app>

To enable Maven to recognize cargo’s commands without typing the fully qualified name, we need to add the cargo Maven plugin in a plugin group in Maven’s settings.xml. 

As an immediate child of the root <settings></settings> element, add this:

<pluginGroups>
    <pluginGroup>org.codehaus.cargo</pluginGroup>
</pluginGroups>

8.2. Local Deploying

In this subsection we will edit our pom.xml to suit our new deployment requirements.

Add the plugin as follows:

<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.cargo</groupId>
            <artifactId>cargo-maven2-plugin</artifactId>
            <version>1.5.0</version>
            <configuration>
                <container>
                    <containerId>jetty9x</containerId>
                    <type>installed</type>
                    <home>Insert absolute path to jetty 9 installation</home>
                </container>
                <configuration>
                    <type>existing</type>
                    <home>Insert absolute path to jetty 9 installation</home>
                </configuration>
            </configuration>
       </plugin>
    </plugins>
</build>

Notice that we explicitly define the packaging as a WAR; without this, our build will fail. In the plugins section, we then add the cargo-maven2-plugin.

The latest version at the time of writing is 1.5.0, however the latest version can always be found here. Additionally, we add a configuration section where we tell Maven that we are using Jetty container and also an existing Jetty installation.

By setting the container type to installed, we tell Maven that we have a Jetty instance installed on the machine, and we provide the absolute path to this installation.

By setting the configuration type to existing, we tell Maven that we have an existing setup that we are using and no further configuration is required.

The alternative would be to tell cargo to download and setup the Jetty version specified by providing a URL. However, our focus is on WAR deployment.

It’s worth noting that whether we are using Maven 2.x or Maven 3.x, the cargo maven2 plugin works for both.

We can now install our application by executing:

mvn install

and deploy it by running:

mvn cargo:deploy

If all goes well in the Maven and Jetty console, then we should be able to run our web application by loading http://localhost:8080/cargo-deploy.

If we check the $JETTY_HOME/webapps folder, we will find a deployment descriptor file (what we earlier called a context file) named cargo-deploy.xml, created by Cargo.

8.3. Remote Deploying

By default, Jetty does not support remote deployment. In order to add such support to Jetty, Cargo uses the Jetty remote deployer web application.

What this means is that we have to download a web application WAR pre-created by the Cargo developers and deploy it to the target Jetty container.

Every time we want to deploy to this remote server using cargo Maven plugin, it will send an HTTP request to the deployer application on the remote server with our WAR for deployment.

This remote deployer can be found here. Head over to the tools section and download the cargo-jetty-7-and-onwards-deployer WAR.

Security considerations

We must set up a security realm in Jetty before this can work, for authentication purposes. Let's create a file called realm.properties in the $JETTY_HOME/etc directory of the remote Jetty server, with the following content:

admin:password,manager

Here, admin is the user name by which the client can access the secured apps, password is the password, and manager is the role the clients must possess before being granted access.

We must also declare our security requirements in the deployer application. We will unpack the WAR we downloaded from jetty downloads page, make some changes and pack it back into a WAR.

After unpacking, head over to WEB-INF/web.xml and uncomment the XML code marked with the "Uncomment in order to activate security" comment, or place the following code there:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Jetty Remote Deployer</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>manager</role-name>
    </auth-constraint>
</security-constraint>

<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>Test Realm</realm-name>
</login-config>

Deploying the deployer

We can now pack the app back into a WAR and copy it to any location on the remote server. We will then deploy it to Jetty.

During deployment, it is best to use a deployment descriptor file so that we are able to create a securityHandler and pass to it a loginService. All secured applications must have a login service or else jetty will fail to deploy them.

Now, let us create a context file in $JETTY_HOME/webapps of the remote jetty instance, remember the rules of naming the context file. Make it the same name as the WAR:

<?xml version="1.0"  encoding="ISO-8859-1"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" 
  "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
    <Set name="contextPath">/deployer</Set>
    <Set name="war">absolute/path/to/cargo-jetty-deployer.war</Set>
    <Get name="securityHandler">
        <Set name="loginService">
            <New class="org.eclipse.jetty.security.HashLoginService">
                <Set name="name">Test Realm</Set>
                <Set name="config"><SystemProperty name="jetty.home" 
                   default="."/>/etc/realm.properties</Set>
            </New>
        </Set>
    </Get>
</Configure>

Start the remote jetty server and if everything goes well, we should be able to load http://localhost:8080/cargo-jetty-deployer. We should then be able to see something like:

HTTP ERROR 400

Problem accessing /cargo-jetty-deployer/. Reason:

    Command / is unknown

Deploying WAR to remote Jetty

To do a remote deploy we only need to change our configuration section of pom.xml. Remote deploy means that we do not have a local installation of Jetty but have authenticated access to the deployer app running on the remote server.

So let’s change the pom.xml so that the configuration section looks like this:

<configuration>
    <container>
        <containerId>jetty9x</containerId>
        <type>remote</type>
    </container>
    <configuration>          
        <type>runtime</type>
        <properties>
            <cargo.hostname>127.0.0.1</cargo.hostname>
            <cargo.servlet.port>8080</cargo.servlet.port>
            <cargo.remote.username>admin</cargo.remote.username>
            <cargo.remote.password>password</cargo.remote.password>
        </properties>
    </configuration>
</configuration>

This time, we change the container type from installed to remote and the configuration type from existing to runtime. Finally, we add hostname, port and authentication properties to the configuration.

Clean the project:

mvn clean

install it:

mvn install

finally, deploy it:

mvn cargo:deploy

That’s it.

9. Deploying from Eclipse

Eclipse allows us to embed servers in order to add web project deployment in the normal workflow without navigating away from the IDE.

9.1. Embedding Jetty in Eclipse

We can embed a Jetty installation into Eclipse by selecting Window from the menu bar and then Preferences from the drop-down menu.

We will find a tree grid of preference items on the left panel of the window that appears. We can then navigate to eclipse -> servers or just type servers in the search bar.

We then select Jetty directory if not already open for us and choose the Jetty version we downloaded.

On the right side of the panel, a configuration page will appear, where we select the enable option to activate this Jetty version and browse to the installation folder.

We apply changes and the next time we open the servers view from eclipse’s windows -> show view submenu, the newly configured server will be present and we can start, stop and deploy applications to it.

9.2. Deploying Web Application in Embedded Jetty

To deploy a web application to the embedded Jetty instance, it must exist in our workspace.

Open the servers view from window -> show view and look for servers. When open, we can just right click on the server we configured and select add deployment from the context menu that appears.

From the New Deployment dialog box that appears, open the project drop down and select the web project.

There is a Deploy Type section beneath the Project combo box. When we select Exploded Archive (development mode), our changes in the application will be synced live without having to redeploy; this is the best option during development, as it is very efficient.

Selecting Packaged Archive (production mode) will require us to redeploy every time we make changes in order to see them in the browser. This is best only for production, but Eclipse still makes it equally easy.

9.3. Deploying Web Application in External Location

We usually choose to deploy a WAR through Eclipse to make debugging easier. There may come a time when we want it deployed to a location other than those used by Eclipse’s embedded servers.

The most common instance is where our production server is online and we want to update the web application.

One way to do this is to deploy in production mode, note the Deploy Location in the New Deployment dialog box, and pick the WAR from there.

During deployment, instead of selecting an embedded server, we can select the <Externally Launched> option from the servers view alongside the list of embedded servers. We navigate to the $JETTY_HOME/webapps directory of an external Jetty installation.

10. Deploying from IntelliJ IDEA

To deploy a web application to Jetty from IntelliJ IDEA, Jetty must already have been downloaded and installed.

10.1. Local Configuration

Open the Run menu and click the Edit Configurations option.

In the panel on the left, search for Jetty Server; if it is not there, click the + sign in the menu, search for Jetty and select Local. In the Name field, put Jetty 9.

Click the Configure… button and in Jetty Home field navigate to the home location of your installation and select it.

Optionally, set the Startup page to be http://localhost:8080/ and HTTP port: 8080, change the port as appropriate.

Go to the Deployment tab, click on the + symbol, select the artifact you want to add to the server, and click OK.

10.2. Remote Configuration

Follow the same instructions as for the local Jetty configuration, but in the Server tab you must enter the remote location of the installation.

11. Conclusion

In this article, we have covered extensively the various ways of deploying a WAR file in Jetty web server.


Guide to the Java ArrayList


1. Overview

In this article, we are going to take a look at the ArrayList class from the Java Collections Framework. We will discuss its properties, common use cases, as well as its advantages and disadvantages.

ArrayList resides within Java Core Libraries, so you don’t need any additional libraries. In order to use it just add the following import statement:

import java.util.ArrayList;

List represents an ordered sequence of values where some value may occur more than one time.

ArrayList is one of the List implementations built atop an array, which is able to dynamically grow and shrink as you add/remove elements. Elements could be easily accessed by their indexes starting from zero. This implementation has the following properties:

  • Random access takes O(1) time
  • Adding element takes amortized constant time O(1)
  • Inserting/Deleting takes O(n) time
  • Searching takes O(n) time for unsorted array and O(log n) for a sorted one
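As a quick, illustrative sketch of these characteristics (the class and values below are our own, not part of any API):

```java
import java.util.ArrayList;
import java.util.List;

public class ComplexityDemo {

    public static List<Integer> buildList() {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            list.add(i);     // amortized O(1): append at the end
        }
        list.add(0, -1);     // O(n): inserting at the head shifts every element
        return list;
    }

    public static void main(String[] args) {
        List<Integer> list = buildList();
        System.out.println(list.get(0));       // O(1): random access by index
        System.out.println(list.indexOf(500)); // O(n): linear search
    }
}
```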

2. Creation

ArrayList has several constructors and we will present them all in this section.

First, notice that ArrayList is a generic class, so you can parameterize it with any type you want and the compiler will ensure that, for example, you will not be able to put Integer values inside a collection of Strings. Also, you don’t need to cast elements when retrieving them from a collection.

Secondly, it is good practice to use generic interface List as a variable type, because it decouples it from a particular implementation.

2.1. Default No-Arg Constructor

List<String> list = new ArrayList<>();
assertTrue(list.isEmpty());

We’re simply creating an empty ArrayList instance.

2.2. Constructor Accepting Initial Capacity

List<String> list = new ArrayList<>(20);

Here you specify the initial length of the underlying array. This may help you avoid unnecessary resizing while adding new items.
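If the expected size only becomes known after construction, the concrete ArrayList type (not the List interface) also exposes ensureCapacity() for the same purpose; a small sketch with an arbitrary size of our own choosing:

```java
import java.util.ArrayList;

public class CapacityDemo {

    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<>();

        // grow the backing array once, up front,
        // instead of resizing repeatedly while adding
        list.ensureCapacity(10000);

        for (int i = 0; i < 10000; i++) {
            list.add("item-" + i);
        }
        System.out.println(list.size()); // 10000
    }
}
```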

2.3. Constructor Accepting Collection

Collection<Integer> numbers 
  = IntStream.range(0, 10).boxed().collect(toSet());

List<Integer> list = new ArrayList<>(numbers);
assertEquals(10, list.size());
assertTrue(numbers.containsAll(list));

Notice that the elements of the Collection instance are used to populate the underlying array.

3. Adding Elements

You may insert an element either at the end or at the specific position:

List<Long> list = new ArrayList<>();

list.add(1L);
list.add(2L);
list.add(1, 3L);

assertThat(Arrays.asList(1L, 3L, 2L), equalTo(list));

You may also insert a collection or several elements at once:

List<Long> list = new ArrayList<>(Arrays.asList(1L, 2L, 3L));
LongStream.range(4, 10).boxed()
  .collect(collectingAndThen(toCollection(ArrayList::new), ys -> list.addAll(0, ys)));
assertThat(Arrays.asList(4L, 5L, 6L, 7L, 8L, 9L, 1L, 2L, 3L), equalTo(list));

4. Iterating the List

There are two types of iterators available: Iterator and ListIterator. While the former gives you an opportunity to traverse the list in one direction, the latter allows you to traverse it in both directions.

Here we will show you only the ListIterator:

List<Integer> list = new ArrayList<>(
  IntStream.range(0, 10).boxed().collect(toCollection(ArrayList::new))
);
ListIterator<Integer> it = list.listIterator(list.size());
List<Integer> result = new ArrayList<>(list.size());
while (it.hasPrevious()) {
    result.add(it.previous());
}

Collections.reverse(list);
assertThat(result, equalTo(list));

You may also search, add or remove elements using iterators.
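For instance, ListIterator's set() method replaces, in place, the element last returned by next(); a minimal sketch of our own:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {

    public static List<Integer> doubleAll(List<Integer> input) {
        List<Integer> list = new ArrayList<>(input);
        ListIterator<Integer> it = list.listIterator();
        while (it.hasNext()) {
            // set() replaces the element last returned by next()
            it.set(it.next() * 2);
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(doubleAll(Arrays.asList(1, 2, 3))); // [2, 4, 6]
    }
}
```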

5. Searching in the List

We will demonstrate how searching works using a collection:

List<String> list = LongStream.range(0, 16)
  .boxed()
  .map(Long::toHexString)
  .collect(toCollection(ArrayList::new));
List<String> stringsToSearch = new ArrayList<>(list);
stringsToSearch.addAll(list);

5.1. Searching an Unsorted Array

In order to find an element you may use indexOf() or lastIndexOf() methods. They both accept an object and return int value:

assertEquals(10, stringsToSearch.indexOf("a"));
assertEquals(26, stringsToSearch.lastIndexOf("a"));

If you want to find all elements satisfying a predicate, you may filter collection using Java 8 Stream API (read more about it here) using Predicate like this:

Set<String> matchingStrings = new HashSet<>(Arrays.asList("a", "c", "9"));

List<String> result = stringsToSearch
  .stream()
  .filter(matchingStrings::contains)
  .collect(toCollection(ArrayList::new));

assertEquals(6, result.size());

It is also possible to use a for loop or an iterator:

Iterator<String> it = stringsToSearch.iterator();
Set<String> matchingStrings = new HashSet<>(Arrays.asList("a", "c", "9"));

List<String> result = new ArrayList<>();
while (it.hasNext()) {
    String s = it.next();
    if (matchingStrings.contains(s)) {
        result.add(s);
    }
}

5.2. Searching a Sorted Array

If you have a sorted array, then you may use the binary search algorithm, which works faster than linear search:

List<String> copy = new ArrayList<>(stringsToSearch);
Collections.sort(copy);
int index = Collections.binarySearch(copy, "f");
assertThat(index, not(equalTo(-1)));

Notice that if an element is not found, a negative value will be returned.
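More precisely, when the key is missing, Collections.binarySearch() returns -(insertion point) - 1, which also tells us where the element would belong; a small sketch with values of our own:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BinarySearchDemo {

    public static void main(String[] args) {
        List<String> sorted = Arrays.asList("a", "c", "f", "x");

        // found: the index of the element
        System.out.println(Collections.binarySearch(sorted, "f")); // 2

        // not found: -(insertion point) - 1; "b" would be inserted at index 1
        System.out.println(Collections.binarySearch(sorted, "b")); // -2
    }
}
```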

6. Removing Elements

In order to remove an element, you should find its index and only then perform the removal via the remove() method. An overloaded version of this method accepts an object, searches for it, and removes the first occurrence of an equal element:

List<Integer> list = new ArrayList<>(
  IntStream.range(0, 10).boxed().collect(toCollection(ArrayList::new))
);
Collections.reverse(list);

list.remove(0);
assertThat(list.get(0), equalTo(8));

list.remove(Integer.valueOf(0));
assertFalse(list.contains(0));

But be careful when working with boxed types such as Integer. In order to remove a particular element, you should first box the int value; otherwise, an element will be removed by its index.

You may as well use the aforementioned Stream API for removing several items, but we won't show it here. For this purpose, we will use an iterator:

Set<String> matchingStrings
  = new HashSet<>(Arrays.asList("a", "b", "c", "d", "e", "f"));

Iterator<String> it = stringsToSearch.iterator();
while (it.hasNext()) {
    if (matchingStrings.contains(it.next())) {
        it.remove();
    }
}
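Since Java 8, the same conditional removal can be done in a single call with removeIf(); a brief sketch (the sample contents are our own):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RemoveIfDemo {

    public static void main(String[] args) {
        List<String> strings = new ArrayList<>(Arrays.asList("a", "b", "x", "y"));
        Set<String> toRemove = new HashSet<>(Arrays.asList("a", "b"));

        // removeIf removes every element matching the predicate
        strings.removeIf(toRemove::contains);

        System.out.println(strings); // [x, y]
    }
}
```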

7. Summary

In this quick article, we had a look at the ArrayList in Java.

We showed how to create an ArrayList instance, how to add, find or remove elements using different approaches.

As usual, you can find all the code samples over on GitHub.


Intro to Feign


1. Overview

In this tutorial we will introduce and explain Feign, a declarative HTTP client developed by Netflix.

Feign aims at simplifying HTTP API clients. Simply put, the developer needs only to declare and annotate an interface while the actual implementation will be provisioned at runtime.

2. Example

We will present an example of a bookstore service REST API that is queried and tested using a Feign HTTP client.

Before we build a sample Feign client, we’ll add the needed dependencies and start-up the REST service.

The book store service example can be cloned from here.

After downloading the service application, we’ll run it with:

$> mvn install spring-boot:run

3. Setup

First, we'll create a new Maven project and include these dependencies:

<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-okhttp</artifactId>
    <version>9.3.1</version>
</dependency>
<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-gson</artifactId>
    <version>9.3.1</version>
</dependency>
<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-slf4j</artifactId>
    <version>9.3.1</version>
</dependency>

Besides the feign-core dependency (which is also pulled in), we’ll use a few plugins, especially: feign-okhttp for internally using Square’s OkHttp client to make requests, feign-gson for using Google’s GSON as JSON processor and feign-slf4j for using the Simple Logging Facade to log requests.

To actually get some log output, you’ll need your favorite, SLF4J-supported logger implementation on your classpath.

Before we continue to create our client interface, we’ll setup a Book model for holding our data:

public class Book {
    private String isbn;
    private String author;
    private String title;
    private String synopsis;
    private String language;

    // standard constructor, getters and setters
}

NOTE: At least a “no arguments constructor” is needed by a JSON processor.

In fact our REST provider is a hypermedia driven API, so we’ll need a simple wrapper class:

public class BookResource {
    private Book book;

    // standard constructor, getters and setters
}

Note: We'll keep the BookResource simple, because our sample Feign client doesn't benefit from hypermedia features.

4. Server Side

To understand how to define a Feign client, we’ll first look into some of the methods and responses supported by our REST provider.

Let’s try it out with a simple curl shell command to list all books. Don’t forget to prefix your calls with ‘/api’, which is the servlet-context defined in the application.properties:

$> curl http://localhost:8081/api/books

We will get the complete book repository represented as JSON:

[
  {
    "book": {
      "isbn": "1447264533",
      "author": "Margaret Mitchell",
      "title": "Gone with the Wind",
      "synopsis": null,
      "language": null
    },
    "links": [
      {
        "rel": "self",
        "href": "http://localhost:8081/api/books/1447264533"
      }
    ]
  },

  ...

  {
    "book": {
      "isbn": "0451524934",
      "author": "George Orwell",
      "title": "1984",
      "synopsis": null,
      "language": null
    },
    "links": [
      {
        "rel": "self",
        "href": "http://localhost:8081/api/books/0451524934"
      }
    ]
  }
]

We can also query an individual Book resource by appending the ISBN to a GET request:

$> curl http://localhost:8081/api/books/1447264533

5. Feign Client

Now we’ll define our Feign client.

We will use the @RequestLine annotation to specify the HTTP verb and a path part as argument, and the parameters will be modeled using the @Param annotation:

public interface BookClient {
    @RequestLine("GET /{isbn}")
    BookResource findByIsbn(@Param("isbn") String isbn);

    @RequestLine("GET")
    List<BookResource> findAll();

    @RequestLine("POST")
    @Headers("Content-Type: application/json")
    void create(Book book);
}

NOTE: Feign clients can be used to consume text-based HTTP APIs only, which means that they cannot handle binary data, e.g. file uploads or downloads.

That’s all! Now we’ll use the Feign.builder() to configure our interface-based client. The actual implementation will be provisioned at runtime:

BookClient bookClient = Feign.builder()
  .client(new OkHttpClient())
  .encoder(new GsonEncoder())
  .decoder(new GsonDecoder())
  .logger(new Slf4jLogger(BookClient.class))
  .logLevel(Logger.Level.FULL)
  .target(BookClient.class, "http://localhost:8081/api/books");

Feign supports various plugins such as JSON/XML encoders and decoders or an underlying HTTP client for making the requests.

6. Unit Test

Let's create a unit test class containing three @Test methods to test our client. The tests will use static imports from the packages org.hamcrest.CoreMatchers.* and org.junit.Assert.*:

@Test
public void givenBookClient_shouldRunSuccessfully() throws Exception {
   List<Book> books = bookClient.findAll().stream()
     .map(BookResource::getBook)
     .collect(Collectors.toList());

   assertTrue(books.size() > 2);
}

@Test
public void givenBookClient_shouldFindOneBook() throws Exception {
    Book book = bookClient.findByIsbn("0151072558").getBook();
    assertThat(book.getAuthor(), containsString("Orwell"));
}

@Test
public void givenBookClient_shouldPostBook() throws Exception {
    String isbn = UUID.randomUUID().toString();
    Book book = new Book(isbn, "Me", "It's me!", null, null);
    bookClient.create(book);
    book = bookClient.findByIsbn(isbn).getBook();

    assertThat(book.getAuthor(), is("Me"));
}

These tests are pretty self-explanatory. To run them, simply execute the Maven test goal:

$> mvn test

7. Further Reading

If you need some kind of fallback, in case of service unavailability, you can add HystrixFeign to your classpath and build your client with the HystrixFeign.builder() instead.

To learn more about Hystrix, please follow this dedicated tutorial series.

If you want to integrate Spring Cloud Netflix Hystrix with Feign, you can read more about this here.

It’s also possible to add client-side load-balancing and/or service discovery to your client.

The former is done by adding Ribbon to your classpath and call the builder like so:

BookClient bookClient = Feign.builder()
  .client(RibbonClient.create())
  .target(BookClient.class, "http://localhost:8081/api/books");

For service discovery you have to build up your service with Spring Cloud Netflix Eureka enabled. Then simply integrate with Spring Cloud Netflix Feign and you get Ribbon load-balancing for free. More about this can be found here.

8. Conclusion

This article explained how to build a declarative HTTP client using Feign to consume text-based APIs.

As usual, you’ll find the sources on GitHub.


Batch Processing with Spring Cloud Data Flow


1. Overview

In the first article of the series, we introduced Spring Cloud Data Flow's architectural components and how to use them to create a streaming data pipeline.

As opposed to a stream pipeline, where an unbounded amount of data is processed, a batch process makes it easy to create short-lived services where tasks are executed on demand.

2. Local Data Flow Server and Shell

The Local Data Flow Server is a component that is responsible for deploying applications, while the Data Flow Shell allows us to perform DSL commands needed for interacting with a server.

In the previous article, we used Spring Initializr to set them both up as Spring Boot applications.

After adding the @EnableDataFlowServer annotation to the server’s main class and the @EnableDataFlowShell annotation to the shell’s main class respectively, they are ready to be launched by performing:

mvn spring-boot:run

The server will boot up on port 9393 and a shell will be ready to interact with it from the prompt.

You can refer to the previous article for the details on how to obtain and use a Local Data Flow Server and its shell client.

3. The Batch Application

As with the server and the shell, we can use Spring Initializr to set up a root Spring Boot batch application.

After reaching the website, simply choose a Group, an Artifact name and select Cloud Task from the dependencies search box.

Once this is done, click on the Generate Project button to start downloading the Maven artifact.

The artifact comes preconfigured and with basic code. Let’s see how to edit it in order to build our batch application.

3.1. Maven Dependencies

First of all, let’s add a couple of Maven dependencies. As this is a batch application, we need to import libraries from the Spring Batch Project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
</dependency>

Also, as Spring Cloud Task uses a relational database to store the results of executed tasks, we need to add a dependency on an RDBMS driver:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>

We’ve chosen to use the H2 in-memory database provided by Spring. This gives us a simple method of bootstrapping development. However, in a production environment, you’ll want to configure your own DataSource.

Keep in mind that artifacts’ versions will be inherited from Spring Boot’s parent pom.xml file.

3.2. Main Class

The key to enabling the desired functionality is to add the @EnableTask and @EnableBatchProcessing annotations to Spring Boot’s main class. These class-level annotations tell Spring Cloud Task and Spring Batch to bootstrap everything:

@EnableTask
@EnableBatchProcessing
@SpringBootApplication
public class BatchJobApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchJobApplication.class, args);
    }
}

3.3. Job Configuration

Lastly, let’s configure a job – in this case a simple print of a String to a log file:

@Configuration
public class JobConfiguration {

    private static Log logger
      = LogFactory.getLog(JobConfiguration.class);

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean
    public Job job() {
        return jobBuilderFactory.get("job")
          .start(stepBuilderFactory.get("jobStep1")
          .tasklet(new Tasklet() {
            
              @Override
              public RepeatStatus execute(StepContribution contribution, 
                ChunkContext chunkContext) throws Exception {
                
                logger.info("Job was run");
                return RepeatStatus.FINISHED;
              }
        }).build()).build();
    }
}

Details on how to configure and define a job are outside the scope of this article. For more information, you can see our Introduction to Spring Batch article.
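As a side note, since Tasklet is a functional interface, the anonymous class above can also be written as a lambda — a purely stylistic variant of the same job definition:

```java
@Bean
public Job job() {
    return jobBuilderFactory.get("job")
      .start(stepBuilderFactory.get("jobStep1")
        .tasklet((contribution, chunkContext) -> {
            // same behavior as the anonymous Tasklet above
            logger.info("Job was run");
            return RepeatStatus.FINISHED;
        })
        .build())
      .build();
}
```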

Finally, our application is ready. Let’s install it inside our local Maven repository. To do this, cd into the project’s root directory and issue the command:

mvn clean install

Now it’s time to put the application inside the Data Flow Server.

4. Registering the Application

To register the application within the App Registry we need to provide a unique name, an application type, and a URI that can be resolved to the app artifact.

Go to the Spring Cloud Data Flow Shell and issue the command from the prompt:

app register --name batch-job --type task 
  --uri maven://org.baeldung.spring.cloud:batch-job:jar:0.0.1-SNAPSHOT

5. Creating a Task

A task definition can be created using the command:

task create myjob --definition batch-job

This creates a new task with the name myjob, pointing to the previously registered batch-job application.

A listing of the current task definitions can be obtained using the command:

task list

6. Launching a Task

To launch a task we can use the command:

task launch myjob

Once the task is launched the state of the task is stored in a relational DB. We can check the status of our task executions with the command:

task execution list

7. Reviewing the Result

In this example, the job simply prints a string in a log file. The log files are located within the directory displayed in the Data Flow Server’s log output.

To see the result we can tail the log:

tail -f PATH_TO_LOG\spring-cloud-dataflow-2385233467298102321\myjob-1472827120414\myjob
[...] --- [main] o.s.batch.core.job.SimpleStepHandler: Executing step: [jobStep1]
[...] --- [main] o.b.spring.cloud.JobConfiguration: Job was run
[...] --- [main] o.s.b.c.l.support.SimpleJobLauncher:
  Job: [SimpleJob: [name=job]] completed with the following parameters: 
    [{}] and the following status: [COMPLETED]

8. Conclusion

In this article, we have shown how to deal with batch processing through the use of Spring Cloud Data Flow.

The example code can be found in the GitHub project.


Filtering a Stream of Optionals in Java


1. Introduction

In this article, we’re going to talk about how to filter out non-empty values from a Stream of Optionals.

We’ll be looking at three different approaches – two using Java 8 and one using the new support in Java 9.

We will be working on the same list in all examples:

List<Optional<String>> listOfOptionals = Arrays.asList(
  Optional.empty(), Optional.of("foo"), Optional.empty(), Optional.of("bar"));

2. Using filter()

One of the options in Java 8 is to filter out the values with Optional::isPresent and then perform mapping with the Optional::get function to extract values:

List<String> filteredList = listOfOptionals.stream()
  .filter(Optional::isPresent)
  .map(Optional::get)
  .collect(Collectors.toList());

3. Using flatMap()

The other option would be to use flatMap with a lambda expression that converts an empty Optional to an empty Stream instance, and non-empty Optional to a Stream instance containing only one element:

List<String> filteredList = listOfOptionals.stream()
  .flatMap(o -> o.isPresent() ? Stream.of(o.get()) : Stream.empty())
  .collect(Collectors.toList());

4. Java 9’s Optional::stream

All this gets quite a bit simpler with the arrival of Java 9, which adds a stream() method to Optional.

This approach is similar to the one shown in Section 3, but this time we use a predefined method for converting an Optional instance into a Stream instance. It returns a stream of one element if the Optional value is present, or zero elements if it isn’t:

List<String> filteredList = listOfOptionals.stream()
  .flatMap(Optional::stream)
  .collect(Collectors.toList());
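Putting the approaches side by side, all three yield the same result. A self-contained example (the class and method names are ours, for illustration; the third method requires Java 9+):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OptionalFiltering {

    static final List<Optional<String>> LIST = Arrays.asList(
      Optional.empty(), Optional.of("foo"), Optional.empty(), Optional.of("bar"));

    // Java 8: filter by presence, then unwrap
    static List<String> viaFilter() {
        return LIST.stream()
          .filter(Optional::isPresent)
          .map(Optional::get)
          .collect(Collectors.toList());
    }

    // Java 8: flatMap each Optional to a zero- or one-element Stream
    static List<String> viaFlatMap() {
        return LIST.stream()
          .flatMap(o -> o.isPresent() ? Stream.of(o.get()) : Stream.empty())
          .collect(Collectors.toList());
    }

    // Java 9+: Optional::stream does the conversion for us
    static List<String> viaOptionalStream() {
        return LIST.stream()
          .flatMap(Optional::stream)
          .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(viaFilter());         // [foo, bar]
        System.out.println(viaFlatMap());        // [foo, bar]
        System.out.println(viaOptionalStream()); // [foo, bar]
    }
}
```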

5. Conclusion

With this, we’ve quickly seen three ways of filtering the present values out of a Stream of Optionals.

The full implementation of code samples can be found on the Github project.


Using a Custom Spring MVC’s Handler Interceptor to Manage Sessions


1. Introduction

In this tutorial, we are going to focus on the Spring MVC HandlerInterceptor. 

More specifically, we will show a more advanced use case for using interceptors – we’ll emulate a session timeout logic by setting custom counters and tracking sessions manually.

If you want to read about the HandlerInterceptor’s basics in Spring, check out this article.

2. Maven Dependencies

In order to use interceptors, we need to include the following dependency in the dependencies section of our pom.xml file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>4.3.2.RELEASE</version>
</dependency>

The latest version can be found here. This dependency only covers Spring Web so don’t forget to add spring-core and spring-context for a full (minimal) web application.

3. Custom Implementation of Session Timeouts

In this example, we will configure maximum inactive time for the users in our system. After that time, they will be logged out automatically from the application.

This logic is just a proof of concept – we can of course easily achieve the same result using session timeouts – but the result is not the point here, the usage of the interceptor is.

And so, we want to make sure that the session is invalidated if the user is not active. For example, if a user forgot to log out, the inactive-time counter will prevent unauthorized users from accessing the account. In order to do that, we need to set a constant for the maximum inactive time:

private static final long MAX_INACTIVE_SESSION_TIME = 5 * 10000;

We set it to 50 seconds for testing purposes; don’t forget that it’s expressed in milliseconds.

Now, we need to keep track of each session in our app, so we autowire Spring’s HttpSession:

@Autowired
private HttpSession session;

Let’s proceed with the preHandle() method.

3.1. preHandle()

In this method we will include following operations:

  • setting timers to check handling time of the requests
  • checking if a user is logged in (using UserInterceptor method from this article)
  • automatic logging out, if the user’s inactive session time exceeds maximum allowed value

Let’s look at the implementation:

@Override
public boolean preHandle(
  HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    log.info("Pre handle method - check handling start time");
    long startTime = System.currentTimeMillis();
    request.setAttribute("executionTime", startTime);
    // session-timeout logic and return statement continue below

In this part of the code, we set the startTime of handling execution. From this moment, we’ll measure the time it takes to finish handling each request. In the next part, we provide the session-timeout logic – applied only if somebody is logged in during their HTTP session:

if (UserInterceptor.isUserLogged()) {
    session = request.getSession();
    log.info("Time since last request in this session: {} ms",
      System.currentTimeMillis() - request.getSession().getLastAccessedTime());
    if (System.currentTimeMillis() - session.getLastAccessedTime()
      > MAX_INACTIVE_SESSION_TIME) {
        log.warn("Logging out, due to inactive session");
        SecurityContextHolder.clearContext();
        request.logout();
        response.sendRedirect("/spring-security-rest-full/logout");
    }
}
return true;

First, we need to get the session from the request.

Next, we do some console logging about who is logged in and how much time has passed since the user last performed any operation in our application. We use session.getLastAccessedTime() to obtain this information, subtract it from the current time, and compare it with our MAX_INACTIVE_SESSION_TIME.

If the elapsed time is longer than we allow, we clear the security context, log out the request, and then (optionally) send a redirect to the default logout view, which is declared in the Spring Security configuration file.
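The comparison at the heart of this check is plain millisecond arithmetic. A minimal, framework-free sketch (the class and method names are ours, for illustration only):

```java
public class SessionTimeoutCheck {

    // 50 seconds, expressed in milliseconds - same value as in the interceptor
    private static final long MAX_INACTIVE_SESSION_TIME = 5 * 10_000;

    // true when the session has been idle longer than the allowed maximum
    static boolean isExpired(long lastAccessedTime, long now) {
        return now - lastAccessedTime > MAX_INACTIVE_SESSION_TIME;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(isExpired(now - 60_000, now)); // true: idle for 60 s
        System.out.println(isExpired(now - 10_000, now)); // false: idle for 10 s
    }
}
```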

To complete the handling-time example, we also implement the postHandle() method, which is described in the next subsection.

3.2. postHandle()

This method’s implementation simply reports how long it took to process the current request. As you saw in the previous code snippet, we set executionTime as a request attribute. Now it’s time to use it:

@Override
public void postHandle(
  HttpServletRequest request, 
  HttpServletResponse response,
  Object handler, 
  ModelAndView model) throws Exception {
    log.info("Post handle method - check execution time of handling");
    long startTime = (Long) request.getAttribute("executionTime");
    log.info("Execution time for handling the request was: {} ms",
      System.currentTimeMillis() - startTime);
}

The implementation is simple – we retrieve the start time and subtract it from the current system time. Just remember to cast the attribute’s value to Long.

Now we can log execution time properly.

4. Config of the Interceptor

To add our newly created interceptor into the Spring configuration, we need to override the addInterceptors() method inside the WebConfig class that extends WebMvcConfigurerAdapter:

@Override
public void addInterceptors(InterceptorRegistry registry) {
    registry.addInterceptor(new SessionTimerInterceptor());
}

We may achieve the same configuration by editing our XML Spring configuration file:

<mvc:interceptors>
    <bean id="sessionTimerInterceptor" class="org.baeldung.web.interceptor.SessionTimerInterceptor"/>
</mvc:interceptors>

Moreover, we need to add a listener in order to automate the creation of the ApplicationContext:

public class ListenerConfig implements WebApplicationInitializer {
    @Override
    public void onStartup(ServletContext sc) throws ServletException {
        sc.addListener(new RequestContextListener());
    }
}

5. Conclusion

This tutorial shows how to intercept web requests using Spring MVC’s HandlerInterceptor in order to manually do session management/timeout.

As usual, all examples and configurations are available here on GitHub.



CSRF Protection with Spring MVC and Thymeleaf


1. Introduction

Thymeleaf is a Java template engine for processing and creating HTML, XML, JavaScript, CSS and plaintext. For an intro to Thymeleaf and Spring, have a look at this writeup.

In this article, we will discuss how to prevent Cross-Site Request Forgery (CSRF) attacks in a Spring MVC application with Thymeleaf. To be more specific, we will test a CSRF attack against an HTTP POST method.

CSRF is an attack which forces an end user to execute unwanted actions in a web application in which they are currently authenticated.

2. Maven Dependencies

First, let us see the configurations required to integrate Thymeleaf with Spring. The thymeleaf-spring library is required in our dependencies:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf-spring4</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>

Note that, for a Spring 3 project, the thymeleaf-spring3 library must be used instead of thymeleaf-spring4. The latest version of the dependencies may be found here.

Moreover, in order to use Spring Security, we need to add following dependencies:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>4.1.3.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>4.1.3.RELEASE</version>
</dependency>

The latest versions of two Spring Security related libraries are available here and here.

3. Java Configuration

In addition to Thymeleaf configuration covered here, we need to add configuration for Spring Security. In order to do that, we need to create the class:

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(securedEnabled = true, prePostEnabled = true)
public class WebMVCSecurity extends WebSecurityConfigurerAdapter {

    @Bean
    @Override
    public AuthenticationManager authenticationManagerBean() throws Exception {
        return super.authenticationManagerBean();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
          .withUser("user1").password("user1Pass")
          .authorities("ROLE_USER");
    }

    @Override
    public void configure(WebSecurity web) throws Exception {
        web.ignoring().antMatchers("/resources/**");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
          .authorizeRequests()
          .anyRequest()
          .authenticated()
          .and()
          .httpBasic();
    }
}

For more details and description of Security configuration, we refer to the Security with Spring series.

CSRF protection is enabled by default with Java configuration. If we ever needed to disable this useful feature, we could add this in the configure(…) method:

.csrf().disable()

In XML configuration we need to specify the CSRF protection manually, otherwise, it will not work:

<security:http 
  auto-config="true"
  disable-url-rewriting="true" 
  use-expressions="true">
    <security:csrf />
     
    <!-- Remaining configuration ... -->
</security:http>

Please also note that if we are using a login page with a login form, we need to manually include the CSRF token in the login form as a hidden parameter:

<input 
  type="hidden" 
  th:name="${_csrf.parameterName}" 
  th:value="${_csrf.token}" />

For the remaining forms, the CSRF token will be added automatically as a hidden input:

<input 
  type="hidden" 
  name="_csrf"
  value="32e9ae18-76b9-4330-a8b6-08721283d048" /> 
<!-- Example token -->

4. Views Configuration

Let’s proceed to the main part of HTML files with form actions and testing procedure creation. In the first view, we try to add new student to the list:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
	xmlns:th="http://www.thymeleaf.org">
<head>
<title>Add Student</title>
</head>
<body>
    <h1>Add Student</h1>
        <form action="#" th:action="@{/saveStudent}" th:object="${student}"
          method="post">
            <ul>
                <li th:errors="*{id}" />
                <li th:errors="*{name}" />
                <li th:errors="*{gender}" />
                <li th:errors="*{percentage}" />
            </ul>
    <!-- Remaining part of HTML -->
    </form>
</body>
</html>

In this view, we are adding a student to the list, by providing id, name, gender and percentage (optionally, as stated in the form validation). Before we can execute this form, we need to provide a username and password to authenticate with the web application.

4.1. Browser CSRF Attack Testing

Now we proceed to the second HTML view. Its purpose is to attempt a CSRF attack:

<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<body>
<form action="http://localhost:8080/spring-thymeleaf/saveStudent" method="post">
    <input type="hidden" name="payload" value="CSRF attack!"/>
    <input type="submit" />
</form>
</body>
</html>

We know that the action URL is http://localhost:8080/spring-thymeleaf/saveStudent. The hacker wants to access this page to perform an attack.

In order to test, open the HTML file in another browser, without logging in to the application. When we try to submit the form, we’ll receive this page:

CSRF attack

Our request was denied because we sent a request without a CSRF token.

Please note that the HTTP session is used to store the CSRF token. When the request is sent, Spring compares the generated token with the token stored in the session, in order to confirm that the request really originates from the authenticated user.
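Conceptually, the server-side validation boils down to comparing the submitted token against the one kept in the session. A simplified, framework-free sketch (this is not Spring Security’s actual implementation; the class and method names are ours) — note the constant-time comparison, which avoids leaking information about the token through response timing:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CsrfCheck {

    // Compares the token submitted with the request against the token
    // stored in the session; MessageDigest.isEqual runs in constant time,
    // so an attacker can't probe the token byte-by-byte via timing.
    static boolean isValid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false;
        }
        return MessageDigest.isEqual(
          sessionToken.getBytes(StandardCharsets.UTF_8),
          requestToken.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String sessionToken = "32e9ae18-76b9-4330-a8b6-08721283d048";
        System.out.println(isValid(sessionToken, sessionToken));   // true
        System.out.println(isValid(sessionToken, "forged-token")); // false
    }
}
```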

4.2. JUnit CSRF Attack Testing

If you don’t want to test CSRF attack using a browser, you can also do it via a quick integration test; let’s start with the Spring config for that test:

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = { 
  WebApp.class, WebMVCConfig.class, WebMVCSecurity.class, InitSecurity.class })
public class CsrfEnabledIntegrationTest {

    // configuration

}

And move on to the actual tests:

@Test
public void addStudentWithoutCSRF() throws Exception {
    mockMvc.perform(post("/saveStudent").contentType(MediaType.APPLICATION_JSON)
      .param("id", "1234567").param("name", "Joe").param("gender", "M")
      .with(testUser())).andExpect(status().isForbidden());
}

@Test
public void addStudentWithCSRF() throws Exception {
    mockMvc.perform(post("/saveStudent").contentType(MediaType.APPLICATION_JSON)
      .param("id", "1234567").param("name", "Joe").param("gender", "M")
      .with(testUser()).with(csrf())).andExpect(status().isOk());
}

The first test will result in a forbidden status due to the missing CSRF token, whereas the second will be executed properly.

5. Conclusion

In this article, we discussed how to prevent CSRF attacks using Spring Security and Thymeleaf framework.

The full implementation of this tutorial can be found in the GitHub project – this is an Eclipse based project, so it should be easy to import and run as it is.


Spring, Hibernate and a JNDI Datasource


1. Overview

In this article, we’ll create a Spring application using Hibernate/JPA with a JNDI datasource.

If you want to rediscover the basics of Spring and Hibernate, check out this article.

2. Declaring the Datasource

2.1. System

Since we’re using a JNDI datasource, we won’t define it in our application, we’ll define it in our application container.

In this example, we’re going to use 8.5.4 version of Tomcat and the 9.5.4 version of the PostgreSQL database.

You should be able to replicate the same steps using any other Java application container and a database of your choice (as long as you have proper JDBC jars for it!).

2.2. Declaring the Datasource on the Application Container

We’ll declare our datasource in <tomcat_home>/conf/server.xml file inside the <GlobalNamingResources> element.

Assuming that the database server is running on the same machine as the application container, that the intended database is named postgres, and that the username is baeldung with the password pass1234, a resource would look like this:

<Resource name="jdbc/BaeldungDatabase" 
  auth="Container"
  type="javax.sql.DataSource" 
  driverClassName="org.postgresql.Driver"
  url="jdbc:postgresql://localhost:5432/postgres"
  username="baeldung" 
  password="pass1234" 
  maxActive="20" 
  maxIdle="10" 
  maxWait="-1"/>

Take note that we’ve named our resource jdbc/BaeldungDatabase. This will be the name to be used when referencing this datasource.

We’ve also had to specify its type and database driver’s class name. For it to work, you must also place the corresponding jar in <tomcat_home>/lib/ (in this case, PostgreSQL’s JDBC jar).

Remaining configuration parameters are:

  • auth=”Container” – means that the container will be signing on to the resource manager on behalf of the application
  • maxActive, maxIdle, and maxWait – are connection pool configuration parameters

We must also define a ResourceLink inside the <Context> element in <tomcat_home>/conf/context.xml, which would look like:

<ResourceLink 
  name="jdbc/BaeldungDatabase" 
  global="jdbc/BaeldungDatabase" 
  type="javax.sql.DataSource"/>

Note that we are using the name we defined in our Resource in server.xml.

3. Using the Resource

3.1. Setting the Application

We’re going to define a simple Spring + JPA + Hibernate application using pure Java config now.

We’ll start by defining the Spring context’s configuration (keep in mind that we are focusing on JNDI here and assuming that you already know the basics of Spring’s configuration):

@Configuration
@EnableTransactionManagement
@PropertySource({ "classpath:persistence-jndi.properties" })
@ComponentScan({ "org.baeldung.persistence" })
@EnableJpaRepositories(basePackages = "org.baeldung.persistence.dao")
public class PersistenceJNDIConfig {

    @Autowired
    private Environment env;

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() 
      throws NamingException {
        LocalContainerEntityManagerFactoryBean em 
          = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(dataSource());
        
        // rest of entity manager configuration
        return em;
    }

    @Bean
    public DataSource dataSource() throws NamingException {
        return (DataSource) new JndiTemplate().lookup(env.getProperty("jdbc.url"));
    }

    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(emf);
        return transactionManager;
    }

    // rest of persistence configuration
}

Note that we have a full example of configuration in the Spring 4 and JPA with Hibernate article.

In order to create our dataSource bean, we need to look up the JNDI resource we defined in our application container. We’ll store the lookup name under the jdbc.url key in persistence-jndi.properties (among other properties):

jdbc.url=java:comp/env/jdbc/BaeldungDatabase

Note that in the jdbc.url property we’re defining a root name to look for: java:comp/env/ (these are defaults and correspond to component and environment) and then the same name we used in server.xml: jdbc/BaeldungDatabase.

3.2. JPA Configuration – Model, DAO and Service

We’re going to use a simple model with the @Entity annotation with a generated id and a name:

@Entity
public class Foo {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "ID")
    private long id;
 
    @Column(name = "NAME")
    private String name;

    // default getters and setters
}

Let’s define a simple repository:

@Repository
public class FooDao {

    @PersistenceContext
    private EntityManager entityManager;

    public List<Foo> findAll() {
        return entityManager
          .createQuery("from " + Foo.class.getName(), Foo.class)
          .getResultList();
    }
}

And lastly, let’s create a simple service:

@Service
@Transactional
public class FooService {

    @Autowired
    private FooDao dao;

    public List<Foo> findAll() {
        return dao.findAll();
    }
}

With this, you have everything you need in order to use your JNDI datasource in your Spring application.

4. Conclusion

In this article, we’ve created an example Spring application with a JPA + Hibernate setup working with a JNDI datasource.

Note that the most important parts are the definition of the resource in the application container and the lookup for the JNDI resource on the configuration.

And, as always, the full project can be found on GitHub.



Guide to Spring Data REST Validators


1. Overview

This article covers a basic introduction to Spring Data REST Validators. If you need to first go over the basics of Spring Data REST, definitely visit this article to brush up on the basics.

Simply put, with Spring Data REST, we can simply add a new entry into the database through the REST API, but we of course also need to make sure the data is valid before actually persisting it.

This article continues on an existing article and we will reuse the existing project we set up there.

2. Using Validators

Starting with Spring 3, the framework features the Validator interface – which can be used to validate objects.

2.1. Motivation

In the previous article, we defined our entity having two properties – name and email.

And so, to create a new resource, we simply need to run:

curl -i -X POST -H "Content-Type:application/json" -d 
  '{ "name" : "Test", "email" : "test@test.com" }' 
  http://localhost:8080/users

This POST request will save the provided JSON object into our database, and the operation will return:

{
  "name" : "Test",
  "email" : "test@test.com",
  "_links" : {
    "self" : {
        "href" : "http://localhost:8080/users/1"
    },
    "websiteUser" : {
        "href" : "http://localhost:8080/users/1"
    }
  }
}

A positive outcome was expected since we provided valid data. But, what will happen if we remove the property name, or just set the value to an empty String?

To test the first scenario, we’ll run a modified version of the previous command, setting an empty string as the value of the name property:

curl -i -X POST -H "Content-Type:application/json" -d 
  '{ "name" : "", "email" : "Baggins" }' http://localhost:8080/users

With that command we’ll get the following response:

{
  "name" : "",
  "email" : "Baggins",
  "_links" : {
    "self" : {
        "href" : "http://localhost:8080/users/1"
    },
    "websiteUser" : {
        "href" : "http://localhost:8080/users/1"
    }
  }
}

For the second scenario, we will remove property name from request:

curl -i -X POST -H "Content-Type:application/json" -d 
  '{ "email" : "Baggins" }' http://localhost:8080/users

For that command we will get this response:

{
  "name" : null,
  "email" : "Baggins",
  "_links" : {
    "self" : {
        "href" : "http://localhost:8080/users/2"
    },
    "websiteUser" : {
        "href" : "http://localhost:8080/users/2"
    }
  }
}

As we can see, both requests succeeded, which we can confirm by the 201 status code and the API link to our object.

This behavior is not acceptable since we want to avoid inserting partial data into a database.

2.2. Spring Data REST Events

During every call on Spring Data REST API, Spring Data REST exporter generates various events which are listed here:

  • BeforeCreateEvent
  • AfterCreateEvent
  • BeforeSaveEvent
  • AfterSaveEvent
  • BeforeLinkSaveEvent
  • AfterLinkSaveEvent
  • BeforeDeleteEvent
  • AfterDeleteEvent

Since all events are handled in a similar way, we will only show how to handle beforeCreateEvent which is generated before a new object is saved into the database.

2.3. Defining a Validator

To create our own validator, we need to implement the org.springframework.validation.Validator interface with the supports and validate methods.

The supports method checks whether the validator supports a given class, while the validate method validates the data provided in the request.

Let’s define a WebsiteUserValidator class:

public class WebsiteUserValidator implements Validator {

    @Override
    public boolean supports(Class<?> clazz) {
        return WebsiteUser.class.equals(clazz);
    }

    @Override
    public void validate(Object obj, Errors errors) {
        WebsiteUser user = (WebsiteUser) obj;
        if (checkInputString(user.getName())) {
            errors.rejectValue("name", "name.empty");
        }
   
        if (checkInputString(user.getEmail())) {
            errors.rejectValue("email", "email.empty");
        }
    }

    private boolean checkInputString(String input) {
        return (input == null || input.trim().length() == 0);
    }
}

The Errors object is a special class designed to contain all errors reported in the validate method. Later in this article, we’ll show how to use the messages contained in the Errors object. To add a new error, we call errors.rejectValue(nameOfField, errorCode).
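The checks themselves don’t depend on Spring at all. Stripped down to plain Java (the class and method names below are ours, for illustration only), the validation logic is:

```java
import java.util.ArrayList;
import java.util.List;

public class WebsiteUserChecks {

    // mirrors the validator's checkInputString helper: a value is
    // rejected when it's null, empty, or whitespace-only
    static boolean isBlank(String input) {
        return input == null || input.trim().length() == 0;
    }

    // collects the same error codes the validator registers via rejectValue
    static List<String> validate(String name, String email) {
        List<String> errors = new ArrayList<>();
        if (isBlank(name)) {
            errors.add("name.empty");
        }
        if (isBlank(email)) {
            errors.add("email.empty");
        }
        return errors;
    }

    public static void main(String[] args) {
        System.out.println(validate("", "Baggins"));           // [name.empty]
        System.out.println(validate("Test", "test@test.com")); // []
    }
}
```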

After we’ve defined the validator, we need to map it to a specific event which is generated after the request is accepted.

For example, in our case, the beforeCreateEvent is generated because we want to insert a new object into our database. And since we want to validate the object in the request, we need to define our validator first.

This can be done in three ways:

  • Add a @Component annotation with the name “beforeCreateWebsiteUserValidator“. Spring will recognize the beforeCreate prefix, which determines the event we want to catch, and it will also recognize the WebsiteUser class from the component name.
    @Component("beforeCreateWebsiteUserValidator")
    public class WebsiteUserValidator implements Validator {
        ...
    }
  • Create Bean in Application Context with @Bean annotation:
    @Bean
    public WebsiteUserValidator beforeCreateWebsiteUserValidator() {
        return new WebsiteUserValidator();
    }
  • Manual registration:
    @SpringBootApplication
    public class SpringDataRestApplication 
      extends RepositoryRestMvcConfiguration {
        public static void main(String[] args) {
            SpringApplication.run(SpringDataRestApplication.class, args);
        }
        
        @Override
        protected void configureValidatingRepositoryEventListener(
          ValidatingRepositoryEventListener v) {
            v.addValidator("beforeCreate", new WebsiteUserValidator());
        }
    }
    • In this case, you don’t need any annotations on the WebsiteUserValidator class.

2.4. Event Discovery Bug

At the moment, a bug exists in Spring Data REST which affects event discovery.

If we send a POST request that generates the beforeCreate event, our application will not call the validator because, due to this bug, the event will not be discovered.

A simple workaround for this problem is to register all events with the Spring Data REST ValidatingRepositoryEventListener class:

@Configuration
public class ValidatorEventRegister implements InitializingBean {

    @Autowired
    ValidatingRepositoryEventListener validatingRepositoryEventListener;

    @Autowired
    private Map<String, Validator> validators;

    @Override
    public void afterPropertiesSet() throws Exception {
        List<String> events = Arrays.asList("beforeCreate");
        for (Map.Entry<String, Validator> entry : validators.entrySet()) {
            events.stream()
              .filter(p -> entry.getKey().startsWith(p))
              .findFirst()
              .ifPresent(
                p -> validatingRepositoryEventListener
               .addValidator(p, entry.getValue()));
        }
    }
}
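The prefix matching inside afterPropertiesSet() can be sketched in isolation; the bean names below are hypothetical stand-ins for the autowired validators:

```java
import java.util.Arrays;
import java.util.List;

public class EventMatchDemo {

    // same prefix matching as in the workaround above: find the event
    // (if any) that a validator bean name starts with
    static String matchEvent(String beanName, List<String> events) {
        return events.stream()
          .filter(beanName::startsWith)
          .findFirst()
          .orElse(null);
    }

    public static void main(String[] args) {
        List<String> events = Arrays.asList("beforeCreate", "beforeSave");

        System.out.println(matchEvent("beforeCreateWebsiteUserValidator", events)); // beforeCreate
        System.out.println(matchEvent("someUnrelatedBean", events)); // null
    }
}
```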

3. Testing

In Section 2.1, we showed that, without a validator, we can add objects without the name property into our database, which is not desired behavior because we don’t check data integrity.

If we try to add the same object without the name property, but this time with the validator in place, we will get this error:

curl -i -X POST -H "Content-Type:application/json" -d 
  '{ "email" : "test@test.com" }' http://localhost:8080/users
{  
   "timestamp":1472510818701,
   "status":406,
   "error":"Not Acceptable",
   "exception":"org.springframework.data.rest.core.
    RepositoryConstraintViolationException",
   "message":"Validation failed",
   "path":"/users"
}

As we can see, the missing data in the request was detected and the object was not saved into the database. Our request was rejected with a 406 HTTP status code and a generic “Validation failed” message.

The error message doesn’t say anything about the cause of the problem in our request. If we want to make it more informative, we will have to modify the response object.

In the Exception Handling in Spring article, we showed how to handle exceptions generated by the framework, so that’s definitely a good read at this point.

Since our application generates a RepositoryConstraintViolationException, we will create a handler for this particular exception which will modify the response message.

This is our RestResponseEntityExceptionHandler class:

@ControllerAdvice
public class RestResponseEntityExceptionHandler extends
  ResponseEntityExceptionHandler {

    @ExceptionHandler({ RepositoryConstraintViolationException.class })
    public ResponseEntity<Object> handleAccessDeniedException(
      Exception ex, WebRequest request) {
          RepositoryConstraintViolationException nevEx = 
            (RepositoryConstraintViolationException) ex;

          String errors = nevEx.getErrors().getAllErrors().stream()
            .map(p -> p.toString()).collect(Collectors.joining("\n"));
          
          return new ResponseEntity<Object>(errors, new HttpHeaders(),
            HttpStatus.PARTIAL_CONTENT);
    }
}

With this custom handler, our return object will have information about all detected errors.
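The joining step in the handler can be sketched standalone; the strings below stand in for what ObjectError.toString() would produce:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ErrorJoinDemo {

    // same joining used in the exception handler: one error per line
    static String joinErrors(List<String> errors) {
        return errors.stream().collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        // hypothetical error descriptions
        List<String> errors = Arrays.asList("name.empty", "email.empty");
        System.out.println(joinErrors(errors));
    }
}
```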

4. Conclusion

In this article, we showed that validators are essential for every Spring Data REST API, as they provide an extra layer of safety for data insertion.

We also illustrated how simple it is to create a new validator with annotations.

As always, the code for this application can be found in the GitHub project.


Spring Boot Application as a Service


1. Overview

This article explores some options of running Spring Boot applications as a service.

Firstly, we are going to explain web applications’ packaging options and system services. In the subsequent sections, we explore the different alternatives we have when setting up a service for both Linux and Windows based systems.

Finally, we will conclude with some references to additional sources of information.

2. Project Setup and Build Instructions

2.1. Packaging

Web applications are traditionally packaged as Web Application aRchives (WAR files) and deployed to a web server.

Spring Boot applications may be packaged both as WAR and JAR files. The latter embeds a web server within the JAR file, which allows you to run applications without installing and configuring an application server.

2.2. Maven Configuration

Let’s start by defining the configuration of our pom.xml file:

<packaging>jar</packaging>

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.0.RELEASE</version>
</parent>

<dependencies>
    ....
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <executable>true</executable>
            </configuration>
        </plugin>
    </plugins>
</build>

The packaging must be set to jar. We are using the latest stable version of Spring Boot at the time of writing, but any version after 1.3 will be enough. You can find more information about available versions here.

Notice that we have set the <executable> parameter to true for the spring-boot-maven-plugin artifact. This makes sure that a MANIFEST.MF file is added to the JAR package. This manifest contains a Main-Class entry that specifies which class defines the main method for your application.

2.3. Building Your Application

Run the following command inside your application’s root directory:

$ mvn clean package

The executable JAR file is now available in the target directory and we may start up the application by executing the following command on the command line:

$ java -jar your-app.jar

At this point, you still need to invoke the Java interpreter with the -jar option. There are many reasons why it would be preferable to start your app by invoking it as a service instead.

3. On Linux

In order to run a program as a background process, we could simply use the nohup Unix command, but this is not the preferred way, for various reasons. A good explanation is provided in this thread.

Instead, we are going to daemonize our process. Under Linux, we may choose to configure a daemon either with a traditional System V init script or with a Systemd configuration file. The former is traditionally the most well-known option but is gradually being replaced by the latter.

You may find more details on this difference here.

For enhanced security we first create a specific user to run the service with and change the executable JAR file permissions accordingly:

$ sudo useradd baeldung
$ sudo passwd baeldung
$ sudo chown baeldung:baeldung your-app.jar
$ sudo chmod 500 your-app.jar

3.1. System V Init

A Spring Boot executable JAR file makes the service setup process very easy:

$ sudo ln -s /path/to/your-app.jar /etc/init.d/your-app

The above command creates a symbolic link to your executable JAR file. You must use the full path to your executable JAR file, otherwise, the symbolic link will not work properly. This link enables you to start the application as a service:

$ sudo service your-app start

The script supports the standard service start, stop, restart and status commands. Moreover:

  • it starts the services running under the user baeldung we have just created
  • it tracks the application’s process ID in /var/run/your-app/your-app.pid
  • it writes console logs to /var/log/your-app.log, which you may want to check in case your application fails to start properly

3.2. Systemd

The systemd service setup is very simple as well. Firstly, we create a script named your-app.service using the following example and put it in /etc/systemd/system directory:

[Unit]
Description=A Spring Boot application
After=syslog.target

[Service]
User=baeldung
ExecStart=/path/to/your-app.jar
SuccessExitStatus=143

[Install] 
WantedBy=multi-user.target

Remember to modify Description, User and ExecStart fields to match your application. You should be able to execute the aforementioned standard service commands at this point as well.

As opposed to the System V init approach described in the previous section, the process ID file and console log file should be configured explicitly using appropriate fields in the service script. An exhaustive list of options may be found here.
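For illustration, a hypothetical extension of the unit file above using the PIDFile= and SyslogIdentifier= options (the paths and names are ours; consult the systemd.service and systemd.exec man pages for the full list of fields):

```
[Service]
User=baeldung
ExecStart=/path/to/your-app.jar
SuccessExitStatus=143
PIDFile=/var/run/your-app/your-app.pid
SyslogIdentifier=your-app
```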

3.3. Upstart

Upstart is an event-based service manager, a potential replacement for System V init that offers more control over the behavior of the different daemons.

The Upstart site has good setup instructions that should work for almost any Linux distribution. When using Ubuntu, you probably have it installed and configured already (check if there are any jobs with a name starting with “upstart” in /etc/init).

We create a job your-app.conf to start our Spring Boot application:

# Place in /home/{user}/.config/upstart

description "Some Spring Boot application"

respawn # attempt service restart if stops abruptly

exec java -jar /path/to/your-app.jar

Now run “start your-app” and your service will start.

Upstart offers many job configuration options, you can find most of them here.

4. On Windows

In this section, we present a couple of options that may be used to run a Java JAR as a Windows service.

4.1. Windows Service Wrapper

Due to difficulties with the GPL license of the Java Service Wrapper (see next subsection) in combination with e.g. the MIT license of Jenkins, the Windows Service Wrapper project, also known as winsw, was conceived.

Winsw provides programmatic means to install/uninstall/start/stop a service. In addition, it may be used to run any kind of executable as a service under Windows, whereas Java Service Wrapper, as implied by its name, only supports Java applications.

First, you download the binaries here. Next, the configuration file that defines our Windows service, MyApp.xml, should look like this:

<service>
    <id>MyApp</id>
    <name>MyApp</name>
    <description>This runs Spring Boot as a Service.</description>
    <env name="MYAPP_HOME" value="%BASE%"/>
    <executable>java</executable>
    <arguments>-Xmx256m -jar "%BASE%\MyApp.jar"</arguments>
    <logmode>rotate</logmode>
</service>

Finally, you have to rename the winsw.exe to MyApp.exe so that its name matches with the MyApp.xml configuration file. Thereafter you can install the service like so:

$ MyApp.exe install

Similarly, you may use uninstall, start, stop, etc.

4.2. Java Service Wrapper

In case you don’t mind the GPL licensing of the Java Service Wrapper project, this alternative may address your needs to configure your JAR file as a Windows service equally well. Basically, the Java Service Wrapper requires you to create a configuration file which specifies how to run your process as a service under Windows.

This article explains in a very detailed way how to set up such an execution of a JAR file as a service under Windows, so there’s no need to repeat the information here.

5. Additional References

Spring Boot applications may also be started as Windows service using Procrun of the Apache Commons Daemon project. Procrun is a set of applications that allow Windows users to wrap Java applications as Windows services. Such a service may be set to start automatically when the machine boots and will continue to run without any user being logged on.

More details on starting Spring Boot applications under Unix may be found here. There are also detailed instructions on how to modify Systemd unit files for Redhat based systems.

Finally, this quick howto describes how to incorporate a Bash script into your JAR file, so that it becomes an executable itself!

6. Conclusion

Services allow you to manage your application state very efficiently and, as we have seen, service setup for Spring Boot applications is now easier than ever.

Just remember to follow the important and simple security measures on user permissions to run your service.


Guide to Elasticsearch in Java


I usually post about Persistence on Twitter - you can follow me there:

1. Overview

In this article, we’re going to dive into some key concepts related to full-text search engines, with a special focus on Elasticsearch.

As this is a Java-oriented article, we’re not going to give a detailed step-by-step tutorial on how to set up Elasticsearch or show how it works under the hood. Instead, we’re going to target the Java client and how to use the main features like index, delete, get and search.

2. Setup

In order to install Elasticsearch on your machine, please refer to the official setup guide.

The installation process is pretty simple: just download the zip/tar package and run the elasticsearch script file (elasticsearch.bat for Windows users).

By default, Elasticsearch listens on port 9200 for incoming HTTP queries. We can verify that it launched successfully by opening the http://localhost:9200/ URL in your favorite browser:

{
    "name" : "Mikey",
    "cluster_name" : "elasticsearch",
    "version" : {
        "number" : "2.3.5",
        "build_hash" : "90f439ff60a3c0f497f91663701e64ccd01edbb4",
        "build_timestamp" : "2016-07-27T10:36:52Z",
        "build_snapshot" : false,
        "lucene_version" : "5.5.0"
    },
    "tagline" : "You Know, for Search"
}

3. Maven Configuration

Now that we have our basic Elasticsearch cluster up and running, let’s jump straight to the Java client. First of all, we need to have the following Maven dependency declared in our pom.xml file:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>2.4.0</version>
</dependency>

You can always check the latest versions hosted by the Maven Central with the link provided before.

4. Java API

Before we jump straight to how to use the main Java API features, we need to initiate the client node using the nodeBuilder():

Node node = nodeBuilder()
  .clusterName("elasticsearch").client(true).node();
Client client = node.client();

4.1. Indexing Documents

The prepareIndex() method allows us to store an arbitrary JSON document and make it searchable:

@Test
public void givenJsonString_whenJavaObject_thenIndexDocument() {
    String jsonObject = "{\"age\":10,\"dateOfBirth\":1471466076564,"
      +"\"fullName\":\"John Doe\"}";
    IndexResponse response = client.prepareIndex("people", "Doe")
      .setSource(jsonObject).get();
      
    String id = response.getId();
    String index = response.getIndex();
    String type = response.getType();
    long version = response.getVersion();
       
    assertTrue(response.isCreated());
    assertEquals(1, version);
    assertEquals("people", index);
    assertEquals("Doe", type);
}

When running the test, make sure to declare the path.home variable, otherwise the following exception may arise:

java.lang.IllegalStateException: path.home is not configured

After running the Maven command mvn clean install -Des.path.home=C:\elastic, the JSON document will be stored with people as the index and Doe as the type.

Note that it is possible to use any JSON Java library to create and process your documents. If you are not familiar with any of these, you can use Elasticsearch helpers to generate your own JSON documents:

XContentBuilder builder = XContentFactory.jsonBuilder()
  .startObject()
  .field("fullName", "Test")
  .field("dateOfBirth", new Date())
  .field("age", "10")
  .endObject();
IndexResponse response = client.prepareIndex("people", "Doe")
  .setSource(builder).get();
  
assertTrue(response.isCreated());

4.2. Querying Indexed Documents

Now that we have a typed searchable JSON document indexed, we can proceed and search using the prepareSearch() method:

SearchResponse response = client.prepareSearch().execute().actionGet();
List<SearchHit> searchHits = Arrays.asList(response.getHits().getHits());
List<Person> results = new ArrayList<Person>();
searchHits.forEach(
  hit -> results.add(JSON.parseObject(hit.getSourceAsString(), Person.class)));

The results returned by the actionGet() method are called Hits; each Hit refers to a JSON document matching the search request.

In this case, the results list contains all the data stored in the cluster. Note that in this example we’re using the FastJson library in order to convert JSON Strings to Java objects.

We can enhance the request by adding additional parameters in order to customize the query using the QueryBuilders methods:

SearchResponse response = client.prepareSearch()
  .setTypes()
  .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
  .setPostFilter(QueryBuilders.rangeQuery("age").from(5).to(15))
  .execute()
  .actionGet();

4.3. Retrieving and Deleting Documents

The prepareGet() and prepareDelete() methods allow us to get or delete a JSON document from the cluster using its id:

GetResponse getResponse = client.prepareGet("people", "Doe", "1").get();
String age = (String) getResponse.getField("age").getValue();
// Process other fields
DeleteResponse deleteResponse = client.prepareDelete("people", "Doe", "5")
  .get();

The syntax is pretty straightforward, you just need to specify the index and the type value alongside the object’s id.

5. QueryBuilders Examples

The QueryBuilders class provides a variety of static methods used as dynamic matchers to find specific entries in the cluster. When using the prepareSearch() method to look for specific JSON documents, we can use query builders to customize the search results.

Here’s a list of the most common uses of the QueryBuilders API.

The matchAllQuery() method returns a QueryBuilder object that matches all documents in the cluster:

QueryBuilder matchAllQuery = QueryBuilders.matchAllQuery();

The rangeQuery() matches documents where a field’s value is within a certain range:

QueryBuilder matchDocumentsWithinRange = QueryBuilders
  .rangeQuery("price").from(15).to(100);

Providing a field name, e.g. fullName, and the corresponding value, e.g. John Doe, the matchQuery() method matches all documents with this exact field value:

QueryBuilder matchSpecificFieldQuery = QueryBuilders
  .matchQuery("fullName", "John Doe");

We can as well use the multiMatchQuery() method to build a multi-fields version of the match query:

QueryBuilder multiMatchQuery = QueryBuilders.multiMatchQuery(
  "Text I am looking for", "field_1", "field_2^3", "*_field_wildcard");

We can use the caret symbol (^) to boost specific fields.

In our example, field_2 has a boost value of three, making it more important than the other fields. Note that it’s possible to use wildcard and regex queries but, performance-wise, beware of memory consumption and response-time delays when dealing with wildcards, because something like *_apples may have a huge impact on performance.

The coefficient of importance is used to order the result set of hits returned after executing the prepareSearch() method.

If you are more familiar with the Lucene queries syntax, you can use the simpleQueryStringQuery() method to customize search queries:

QueryBuilder simpleStringQuery = QueryBuilders
  .simpleQueryStringQuery("+John -Doe OR Janette");

As you can probably guess, we can use Lucene’s Query Parser syntax to build simple, yet powerful queries. Here are some basic operators that can be used alongside the AND/OR/NOT operators to build search queries:

  • The required operator (+): requires that a specific piece of text exists somewhere in fields of a document.
  • The prohibit operator (-): excludes all documents that contain a keyword declared after the (-) symbol.
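As a rough illustration of the + and - semantics (just a toy matcher over a document's text, not Lucene's actual parser):

```java
public class SimpleQueryDemo {

    // toy check: every required (+) term must be present,
    // every prohibited (-) term must be absent
    static boolean matches(String document, String[] required, String[] prohibited) {
        for (String term : required) {
            if (!document.contains(term)) {
                return false;
            }
        }
        for (String term : prohibited) {
            if (document.contains(term)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // "+John -Doe": John required, Doe prohibited
        System.out.println(matches("John Janette", new String[] {"John"}, new String[] {"Doe"})); // true
        System.out.println(matches("John Doe", new String[] {"John"}, new String[] {"Doe"}));     // false
    }
}
```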

6. Conclusion

In this quick article, we’ve seen how to use Elasticsearch’s Java API to perform some of the common features of full-text search engines.

You can check out the example provided in this article in the GitHub project.
