Spring Cloud – Bootstrapping


1. Overview

Spring Cloud is a framework for building robust cloud applications. The framework facilitates the development of applications by providing solutions to many of the common problems faced when moving to a distributed environment.

Applications that run with microservices architecture aim to simplify development, deployment, and maintenance. The decomposed nature of the application allows developers to focus on one problem at a time. Improvements can be introduced without impacting other parts of a system.

On the other hand, different challenges arise when we take on a microservice approach:

  • Externalizing configuration so that it is flexible and does not require a rebuild of the service on change
  • Service discovery
  • Hiding the complexity of services deployed on different hosts

In this article, we will build four microservices: a configuration server, a discovery server, a resource server, and finally a gateway server. These four microservices form a solid base application to begin cloud development and address the aforementioned challenges.

2. Config Server

When developing a cloud application, one issue is maintaining and distributing configuration to our services. We really don’t want to spend time configuring each environment before scaling our service horizontally or risk security breaches by baking our configuration into our application.

To solve this, we will consolidate all of our configuration into a single Git repository and connect that to one application that manages a configuration for all our applications. We are going to be setting up a very simple implementation.

To learn more details and see a more complex example, take a look at the Spring Cloud Configuration article.

2.1. Setup

Navigate to start.spring.io and select Maven and Spring Boot 1.4.x.

Set the artifact to “config”. In the dependencies section, search for “config server” and add that module. Then press the generate button and you will be able to download a zip file with a preconfigured project inside and ready to go.

Alternatively, we can generate a Spring Boot project and add some dependencies to the POM file manually.

These dependencies will be shared between all the projects:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.0.RELEASE</version>
    <relativePath/>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies> 

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Brixton.SR5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

Let’s add a dependency for the config server:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>

For reference: you can find the latest versions on Maven Central (spring-cloud-dependencies, test, config-server).

2.2. Spring Config

To enable the configuration server we must add some annotations to the main application class:

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {...}

@EnableConfigServer will turn our application into a configuration server.

2.3. Properties

Let’s add the application.properties in src/main/resources:

server.port=8081
spring.application.name=config

spring.cloud.config.server.git.uri=file://${user.home}/application-config

The most significant setting for the config server is the git.uri parameter. This is currently set to a path under the user's home directory, which generally resolves to c:\Users\{username}\ on Windows or /Users/{username}/ on *nix. This property points to a Git repository where the property files for all the other applications are stored. It can be set to an absolute file path if necessary.

Tip: On a Windows machine, preface the value with ‘file:///’; on *nix, use ‘file://’.

2.4. Git Repository

Navigate to the location defined by spring.cloud.config.server.git.uri (the user's home directory) and create the folder ‘application-config’. cd into that folder and type git init. This will initialize a Git repository where we can store files and track their changes.

2.5. Run

Let’s run the config server and make sure it is working. From the command line, type mvn spring-boot:run. This will start the server. You should see the following output indicating the server is running:

Tomcat started on port(s): 8081 (http)

3. Discovery

Now that we have configuration taken care of, we need a way for all of our servers to be able to find each other. We will solve this problem by setting up the Eureka discovery server. Since our applications could be running on any IP/port combination, we need a central address registry that can serve as an application address lookup.

When a new server is provisioned it will communicate with the discovery server and register its address so that others can communicate with it. This way other applications can consume this information as they make requests.
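
Later on, any registered application can also consume the registry programmatically. As a quick illustration, here is a minimal sketch using Spring Cloud's DiscoveryClient abstraction, assuming it is injected into a bean of an application that is itself a Eureka client (the findResourceUri helper is purely illustrative):

@Autowired
private DiscoveryClient discoveryClient;

public URI findResourceUri() {
    // ask the registry for instances registered under the application name "resource"
    return discoveryClient.getInstances("resource")
      .stream()
      .findFirst()
      .map(ServiceInstance::getUri)
      .orElseThrow(() -> new IllegalStateException("no 'resource' instance registered"));
}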

To learn more details and see a more complex discovery implementation, take a look at the Spring Cloud Eureka article.

3.1. Setup

Again we’ll navigate to start.spring.io. Set the artifact to ‘discovery’. Search for “eureka server” and add that dependency. Search for “config client” and add that dependency. Generate the project.

Alternatively, we can create a Spring Boot project, copy the contents of the POM from config server and swap in these dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka-server</artifactId>
</dependency>

For reference: you will find the bundles on Maven Central (config-client, eureka-server).

3.2. Spring Config

Let’s add Java config to the main class:

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApplication {...}

@EnableEurekaServer will configure this server as a discovery server using Netflix Eureka. Spring Boot will automatically detect the configuration dependency on the classpath and look up the configuration from the config server.

3.3. Properties

Now we will add two properties files:

bootstrap.properties in src/main/resources

spring.cloud.config.name=discovery
spring.cloud.config.uri=http://localhost:8081

These properties let the discovery server query the config server at startup.

discovery.properties in our Git repository

spring.application.name=discovery
server.port=8082

eureka.instance.hostname=localhost

eureka.client.serviceUrl.defaultZone=http://localhost:8082/eureka/
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

The filename must match the spring.application.name property.

In addition, we are telling this server that it is operating in the default zone, which matches the config client’s region setting. We are also telling the server not to register with another discovery instance.

In production, you would run more than one discovery instance to provide redundancy in the event of failure, and those settings would be set to true.

Let’s commit the file to the Git repository. Otherwise, the file will not be detected.
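
At this point, we can also sanity-check the config server over HTTP: it serves properties at /{application}/{profile}. A quick programmatic check, sketched here with Spring's RestTemplate, could look like this:

RestTemplate restTemplate = new RestTemplate();
// fetches the 'discovery' application's properties for the 'default' profile as JSON
String response = restTemplate
  .getForObject("http://localhost:8081/discovery/default", String.class);
System.out.println(response);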

3.4. Add Dependency to the Config Server

Add this dependency to the config server POM file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>

For reference: you will find the bundle on Maven Central (eureka-client).

Add these properties to the application.properties file in src/main/resources of the config server:

eureka.client.region = default
eureka.client.registryFetchIntervalSeconds = 5
eureka.client.serviceUrl.defaultZone=http://localhost:8082/eureka/

3.5. Run

Start the discovery server using the same command, mvn spring-boot:run. Output from the command line should include:

Fetching config from server at: http://localhost:8081
...
Tomcat started on port(s): 8082 (http)

Stop and rerun the config service. If all is good, the output should look like:

DiscoveryClient_CONFIG/10.1.10.235:config:8081: registering service...
Tomcat started on port(s): 8081 (http)
DiscoveryClient_CONFIG/10.1.10.235:config:8081 - registration status: 204

4. Gateway

Now that we have our configuration and discovery issues resolved, we still have a problem with clients accessing all of our applications.

If we leave everything in a distributed system, then we will have to manage complex CORS headers to allow cross-origin requests on clients. We can resolve this by creating a gateway server. This will act as a reverse proxy, shuttling requests from clients to our back-end servers.

A gateway server is an excellent application in a microservice architecture, as it allows all responses to originate from a single host. This will eliminate the need for CORS and give us a convenient place to handle common problems like authentication.

4.1. Setup

By now we know the drill. Navigate to start.spring.io. Set the artifact to ‘gateway’. Search for “zuul” and add that dependency. Search for “config client” and add that dependency. Search for “eureka discovery” and add that dependency. Generate that project.

Alternatively, we could create a Spring Boot app with these dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zuul</artifactId>
</dependency>

For reference: you will find the bundles on Maven Central (config-client, eureka-client, zuul).

4.2. Spring Config

Let’s add the configuration to the main class:

@SpringBootApplication
@EnableZuulProxy
@EnableEurekaClient
public class GatewayApplication {...}

4.3. Properties

Now we will add two properties files:

bootstrap.properties in src/main/resources

spring.cloud.config.name=gateway
spring.cloud.config.discovery.service-id=config
spring.cloud.config.discovery.enabled=true

eureka.client.serviceUrl.defaultZone=http://localhost:8082/eureka/

gateway.properties in our Git repository

spring.application.name=gateway
server.port=8080

eureka.client.region = default
eureka.client.registryFetchIntervalSeconds = 5
eureka.client.serviceUrl.defaultZone=http://localhost:8082/eureka/

zuul.routes.resource.path=/resource/**
hystrix.command.resource.execution.isolation.thread.timeoutInMilliseconds: 5000

The zuul.routes property allows us to define an application to route certain requests to, based on an Ant-style URL matcher. Our property tells Zuul to route any request that comes in on ‘/resource/**’ to an application with the spring.application.name of ‘resource’. Zuul will then look up the host from the discovery server using the application name and forward the request to that server.

Remember to commit the changes in the repository!

4.4. Run

Run the config and discovery applications and wait until the config application has registered with the discovery server. If they are already running you do not have to restart them. Once that is complete, run the gateway server. The gateway server should start on port 8080 and register itself with the discovery server. Output from the console should contain:

Fetching config from server at: http://10.1.10.235:8081/
...
DiscoveryClient_GATEWAY/10.1.10.235:gateway:8080: registering service...
DiscoveryClient_GATEWAY/10.1.10.235:gateway:8080 - registration status: 204
Tomcat started on port(s): 8080 (http)

One mistake that is easy to make is to start the gateway server before the config server has registered with Eureka. In this case, you will see a log with this output:

Fetching config from server at: http://localhost:8888

This is the default URL and port for a config server, and it indicates that our discovery service did not have an address when the configuration request was made. Just wait a few seconds and try again; once the config server has registered with Eureka, the problem will resolve itself.

5. Resource

A resource server is where the business logic will go. In our application, we will only have one resource server but in theory, we could have as many as we want. Let’s make our final server for this tutorial!

5.1. Setup

One more time. Navigate to start.spring.io. Set the artifact to ‘resource’. Search for “web” and add that dependency. Search for “config client” and add that dependency. Search for “eureka discovery” and add that dependency. Generate that project.

Alternatively, add these dependencies to a project:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

For reference: you will find the bundles on Maven Central (config-client, eureka-client, web).

5.2. Spring Config

Let’s modify our main class:

@SpringBootApplication
@EnableEurekaClient
@RestController
public class ResourceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ResourceApplication.class, args);
    }

    @Value("${resource.returnString}")
    private String returnString;

    @RequestMapping("/hello/cloud")
    public String getString() {
        return returnString;
    }
}

We also added a REST controller and a field set by our properties file to return a value we will set during configuration.

5.3. Properties

Now we just need to add our two properties files:

bootstrap.properties in src/main/resources:

spring.cloud.config.name=resource
spring.cloud.config.discovery.service-id=config
spring.cloud.config.discovery.enabled=true

eureka.client.serviceUrl.defaultZone=http://localhost:8082/eureka/

resource.properties in our Git repository:

spring.application.name=resource
server.port=8083

resource.returnString=hello cloud

eureka.client.region = default
eureka.client.registryFetchIntervalSeconds = 5
eureka.client.serviceUrl.defaultZone=http://localhost:8082/eureka/

Let’s commit the changes to the repository.

5.4. Run

Once all the other applications have started we can start the resource server. The console output should look like:

DiscoveryClient_RESOURCE/10.1.10.235:resource:8083: registering service...
DiscoveryClient_RESOURCE/10.1.10.235:resource:8083 - registration status: 204
Tomcat started on port(s): 8083 (http)

Once it is up, we can use our browser to access the endpoint we just created. Navigate to http://localhost:8080/resource/hello/cloud and we get back a response saying ‘hello cloud’. Notice that we are not accessing the resource directly on port 8083; instead, we are going through the gateway server.
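
We can perform the same check programmatically; here is a minimal sketch with Spring's RestTemplate, assuming all four applications are up:

RestTemplate restTemplate = new RestTemplate();
// the request goes through the gateway on port 8080, which proxies it to the resource server
String greeting = restTemplate
  .getForObject("http://localhost:8080/resource/hello/cloud", String.class);
// greeting is now "hello cloud"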

6. Conclusion

Now we are able to connect the various pieces of Spring Cloud into a functioning microservice application. This forms a base we can use to begin building more complex applications.

As always, you can find this source code over on Github.



MD5 Hashing in Java


1. Overview

MD5 is a widely used cryptographic hash function that produces a 128-bit hash value.

In this article, we will see different approaches to creating MD5 hashes using various Java libraries.

2. MD5 Using MessageDigest Class

The java.security.MessageDigest class provides hashing functionality. The idea is to first instantiate a MessageDigest with the name of the algorithm we want to use as an argument to the static getInstance() factory method:

MessageDigest.getInstance(String Algorithm)

We then keep updating the message digest using the update() method:

public void update(byte [] input)

The above method can be called multiple times, for example when reading a long file. Finally, we call the digest() method to generate the hash:

public byte[] digest()

Below is an example that generates a hash for a password and then verifies it:

@Test
public void givenPassword_whenHashing_thenVerifying() 
  throws NoSuchAlgorithmException {
    String hash = "35454B055CC325EA1AF2126E27707052";
    String password = "ILoveJava";
        
    MessageDigest md = MessageDigest.getInstance("MD5");
    md.update(password.getBytes());
    byte[] digest = md.digest();
    String myHash = DatatypeConverter
      .printHexBinary(digest).toUpperCase();
        
    assertThat(myHash.equals(hash)).isTrue();
}

Similarly, we can also verify the checksum of a file:

@Test
public void givenFile_generatingChecksum_thenVerifying() 
  throws NoSuchAlgorithmException, IOException {
    String filename = "src/test/resources/test_md5.txt";
    String checksum = "5EB63BBBE01EEED093CB22BB8F5ACDC3";
        
    MessageDigest md = MessageDigest.getInstance("MD5");
    md.update(Files.readAllBytes(Paths.get(filename)));
    byte[] digest = md.digest();
    String myChecksum = DatatypeConverter
      .printHexBinary(digest).toUpperCase();
        
    assertThat(myChecksum.equals(checksum)).isTrue();
}

3. MD5 Using Apache Commons

The org.apache.commons.codec.digest.DigestUtils class makes the above operations, which we performed with the MessageDigest class, much simpler.

Let’s see an example of hashing and verifying a password:

@Test
public void givenPassword_whenHashingUsingCommons_thenVerifying()  {
    String hash = "35454B055CC325EA1AF2126E27707052";
    String password = "ILoveJava";

    String md5Hex = DigestUtils
      .md5Hex(password).toUpperCase();
        
    assertThat(md5Hex.equals(hash)).isTrue();
}

4. MD5 Using Guava

Below is another simple approach for generating MD5 checksums, using com.google.common.io.Files.hash:

@Test
public void givenFile_whenChecksumUsingGuava_thenVerifying() 
  throws IOException {
    String filename = "src/test/resources/test_md5.txt";
    String checksum = "5EB63BBBE01EEED093CB22BB8F5ACDC3";
        
    HashCode hash = com.google.common.io.Files
      .hash(new File(filename), Hashing.md5());
    String myChecksum = hash.toString()
      .toUpperCase();
        
    assertThat(myChecksum.equals(checksum)).isTrue();
}
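
Note that Files.hash has been deprecated in more recent Guava releases; an equivalent approach (a sketch, assuming a newer Guava version on the classpath) goes through a ByteSource instead:

HashCode hash = com.google.common.io.Files
  .asByteSource(new File(filename))
  .hash(Hashing.md5());
String myChecksum = hash.toString().toUpperCase();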

5. Conclusion

There are different approaches available in the Java API and in third-party APIs like Apache Commons and Guava. Choose wisely based on the requirements of your project and the dependencies your project is willing to take on.

As always, the code is available over on Github.


Java Web Weekly, Issue 144


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> New in Spring 5: Functional Web Framework [spring.io]

The new reactive framework in Spring 5 is starting to take shape (and getting to the top of my list to test).

>> Ready your Java 8 Reactive apps now, Reactor 3.0 GA is out ! [spring.io]

Speaking of reactive applications, Reactor 3 is out with a major update to the programming model.

>> RXJava by Example [infoq.com]

And – still on reactive – a great intro to RxJava – which will have first class support in Spring 5 as well.

>> Free Thoughts on Java Library – ebooks, cheat sheets and more [thoughts-on-java.org]

A grand library on Hibernate? Cool beans – the convenience of having material that’s well structured and thought out is definitely useful.

>> Java 9, OSGi and the Future of Modularity [infoq.com]

Given that Java 9 is not too far away now, it makes a lot of sense to start understanding modularity beyond the point of just reading about it.

>> The Ingredients and Roadmap of Rebooted Java EE 8 and 9 [adam-bien.com]

There’s finally some direction and clarity around the plans for Java EE 8 (and 9).

That being said, I’m personally not very enthusiastic about “a reboot” – there’s a reason reboots have a bad rap – they generally don’t work.

The proposed list of features looks good, but forcing so many things into a single release, instead of developing them organically, is risky.

>> Should tests be ordered or not? [frankel.ch]

An interesting attempt to challenge the assumption that tests shouldn’t be ordered.

>> Code generating beans – mutable and immutable [joda.org]

Should we be using mutable beans in 2016? No, no, no!

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> How (not) to test RESTful APIs with Selenium WebDriver [ontestautomation.com]

Yes. Definitely. Don’t do it 🙂

>> When to Choose SQL and When to Choose NoSQL [jooq.org]

Pick the right tool for the job. Look at SQL first.

Just remember that the ability to scale isn’t the only reason you might want to look at a NoSQL solution – domain design is a close second.

Also worth reading:

3. Musings

>> I Stopped Contributing To Stackoverflow, But It’s Not Declining [techblog.bozho.net]

An inside look at the StackOverflow community from someone who’s actually on the inside.

I personally never really got into contributing on StackOverflow, but I find these reads about that ecosystem quite interesting nevertheless.

>> Azure Functions in practice [troyhunt.com]

A very fun and informative read about dealing with an ongoing, large-scale DDOS attack.

>> Defining Developer Collaboration [daedtech.com]

Collaboration on a software project can range from herding cats to effortlessly skipping along towards the common goal. I find that the latter scenario usually starts with the hiring process.

>> 7 years of blogging and a lifetime later… [troyhunt.com]

If you’ve been thinking about blogging, stop thinking and start typing.

>> Replacing Bugzilla with Tuleap [waynebeaton.com]

Finally!

>> WTF Is a CTO [matt.aimonetti.net]

>> When to Hire a VP of Engineering [matt.aimonetti.net]

A couple of writeups from the trenches, from an engineer I admire. Highly useful if that’s the direction you’re going on, career-wise.

>> Software Architect as a Developer Pension Plan [daedtech.com]

A fun exploration of the state of our industry on the backdrop of the huge impact our profession has had on the world.

All based on a podcast episode from the Freelancers Show – which I remember listening to not too long ago 🙂

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> I thought I downsized you last week [dilbert.com]

>> I can’t let you leave this cubicle alive [dilbert.com]

>> Criticize the behavior, not the person [dilbert.com]

5. Pick of the Week

My talk from Voxxed Days Bucharest earlier this year – all about CQRS and Event Sourcing:

>> An Architecture with CQRS and Event Sourcing by Eugen Paraschiv [youtube.com]


Database Migrations with Flyway


1. Introduction

This article describes key concepts of Flyway and how we can use this framework to continuously remodel our application’s database schema reliably and easily. At the end, we will present an example of managing a MySQL database using the Flyway Maven plugin.

Flyway updates a database from one version to the next using migrations. We can write migrations either in SQL with database-specific syntax or in Java for advanced database transformations.
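
For reference, a Java-based migration in Flyway 4.x is just a class implementing the JdbcMigration callback, named after the same versioning convention used for SQL scripts (a minimal sketch; the table and values here are purely illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import org.flywaydb.core.api.migration.jdbc.JdbcMigration;

public class V1_2__Add_sample_employee implements JdbcMigration {
    @Override
    public void migrate(Connection connection) throws Exception {
        // runs inside the transaction Flyway opens for this migration
        try (PreparedStatement statement = connection.prepareStatement(
          "INSERT INTO employee (name, email) VALUES ('John', 'john@example.com')")) {
            statement.execute();
        }
    }
}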

Migrations can either be versioned or repeatable. The former have a unique version and are applied exactly once. The latter do not have a version. Instead, they are (re-)applied every time their checksum changes.

Within a single migration run, repeatable migrations are always applied last, after pending versioned migrations have been executed. Repeatable migrations are applied in order of their description. For a single migration, all statements are run within a single database transaction.

In this article, we mainly focus on how to use the Maven plugin to perform database migrations.

2. Flyway Maven Plugin

To install the Flyway Maven plugin, add the following plugin definition to your pom.xml:

<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>4.0.3</version> 
</plugin>

You can check for the latest version of the plugin in the Maven Central repository.

This Maven plugin may be configured in four different ways. Please refer to the documentation to get a list of all configurable properties.

2.1. Plugin Configuration

We may configure the plugin directly by use of the <configuration> tag in the plugin definition of the pom.xml:

<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>4.0.3</version>
    <configuration>
        <user>databaseUser</user>
        <password>databasePassword</password>
        <schemas>
            <schema>schemaName</schema>
        </schemas>
        ...
    </configuration>
</plugin>

2.2. Maven Properties

We may also configure the plugin by specifying configurable properties as Maven properties in pom.xml:

<project>
    ...
    <properties>
        <flyway.user>databaseUser</flyway.user>
        <flyway.password>databasePassword</flyway.password>
        <flyway.schemas>schemaName</flyway.schemas>
        ...
    </properties>
    ...
</project>

2.3. External Configuration File

We may also provide plugin configuration in a separate .properties file:

flyway.user=databaseUser
flyway.password=databasePassword
flyway.schemas=schemaName
...

The default configuration file name is flyway.properties, and it should reside in the same directory as the pom.xml file. The encoding is specified by flyway.encoding (the default is UTF-8).

If you are using any other name (e.g. customConfig.properties) as the configuration file, then it should be specified explicitly when invoking the Maven command:

$ mvn -Dflyway.configFile=customConfig.properties

2.4. System Properties

Finally, all configuration properties may also be specified as system properties when invoking Maven on the command line:

$ mvn -Dflyway.user=databaseUser -Dflyway.password=databasePassword 
  -Dflyway.schemas=schemaName

The following is the order of precedence when a configuration is specified in more than one way:

  1. System properties
  2. External configuration file
  3. Maven properties
  4. Plugin configuration

3. Example Migration

In this section, we walk through the required steps to migrate a database schema for a MySQL database using the Maven plugin. We use an external file to configure Flyway.

This section assumes that you have already created a Maven project in a directory called $PROJECT_ROOT and have a MySQL database instance running on localhost:3306.

3.1. Update POM

Add an appropriate database driver dependency for the MySQL database in the pom.xml:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>6.0.3</version>
</dependency>

You can check for the latest version of the driver in the Maven Central repository. Add the Flyway plugin to the pom.xml as explained in Section 2 above.

3.2. Configure Flyway Using External File

Create a file myFlywayConfig.properties in $PROJECT_ROOT with the following content:

flyway.user=databaseUser
flyway.password=databasePassword
flyway.schemas=app-db
flyway.url=jdbc:mysql://localhost:3306/
flyway.locations=filesystem:db/migration

The above configuration specifies that our migration scripts are located in the db/migration directory. It connects to MySQL instance available on localhost:3306 using databaseUser and databasePassword.

The application database schema is app-db. Please replace flyway.user, flyway.password, and flyway.url with your database username, database password, and database host/port, respectively.

3.3. Define First Migration

Flyway adheres to the following naming convention for migration scripts:

<Prefix><Version>__<Description>.sql

Where:

  • <Prefix> – Default prefix is V, which may be configured in the above configuration file using the flyway.sqlMigrationPrefix property.
  • <Version> – Migration version number. Major and minor versions may be separated by an underscore. The migration version should always start with 1.
  • <Description> – Textual description of the migration. The description needs to be separated from the version numbers with a double underscore.

Example: V1_1_0__my_first_migration.sql

Create a directory db/migration in $PROJECT_ROOT with a migration script named V1_0__create_employee_schema.sql containing the SQL instructions to create, for example, an employee table:

CREATE TABLE IF NOT EXISTS `employee` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `name` varchar(20),
    `email` varchar(50),
    `date_of_birth` timestamp
) ENGINE=InnoDB DEFAULT CHARSET=UTF8;

3.4. Execute Migrations

Invoke the following Maven command from $PROJECT_ROOT to execute database migrations:

$ mvn clean flyway:migrate -Dflyway.configFile=myFlywayConfig.properties

This should result in a first successful migration. The database schema may now be depicted as follows:

employee:
+----+------+-------+---------------+
| id | name | email | date_of_birth |
+----+------+-------+---------------+

Repeat the steps from subsections 3.3. and 3.4. to define and run new migrations at will.

3.5. Define And Execute Second Migration

Create a second migration file named V2_0__create_department_schema.sql containing the following two queries:

CREATE TABLE IF NOT EXISTS `department` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `name` varchar(20)
) ENGINE=InnoDB DEFAULT CHARSET=UTF8;

ALTER TABLE `employee` ADD `dept_id` int AFTER `email`;

Execute the migration with the same command as in Section 3.4 above. The database schema looks like the following after successfully executing the second migration:

employee:
+----+------+-------+---------+---------------+
| id | name | email | dept_id | date_of_birth |
+----+------+-------+---------+---------------+
department:
+----+------+
| id | name |
+----+------+

We may now verify that both migrations were indeed successful by invoking the following Maven command:

$ mvn flyway:info -Dflyway.configFile=myFlywayConfig.properties

4. How Flyway Works

To keep track of which migrations have already been applied, when and by whom, Flyway adds a special bookkeeping table to your schema. This metadata table also tracks migration checksums and whether or not the migrations were successful.

The framework performs the following steps to accommodate evolving database schemas:

  1. It checks a database schema to locate its metadata table (SCHEMA_VERSION by default). If the metadata table does not exist, it will create one
  2. It scans an application classpath for available migrations
  3. It compares migrations against the metadata table. If a version number is lower or equal to a version marked as current, it is ignored
  4. It marks any remaining migrations as pending migrations. These are sorted based on version number and are executed in order
  5. As each migration is applied, the metadata table is updated accordingly

5. Commands

Flyway supports the following basic commands to manage database migrations.

  • Info: Prints current status/version of a database schema. It prints which migrations are pending,  which migrations have been applied, what is the status of applied migrations and when they were applied.
  • Migrate: Migrates a database schema to the current version. It scans the classpath for available migrations and applies pending migrations.
  • Baseline: Baselines an existing database, excluding all migrations up to and including baselineVersion. Baseline helps to start with Flyway in an existing database. Newer migrations can then be applied normally.
  • Validate: Validates current database schema against available migrations.
  • Repair: Repairs metadata table.
  • Clean: Drops all objects in a configured schema. All database objects are dropped. Of course, you should never use clean on any production database.
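
All of these commands are also available programmatically through the Flyway Java API. Here is a minimal sketch, assuming flyway-core 4.x on the classpath and the same connection settings as in our external configuration file:

Flyway flyway = new Flyway();
flyway.setDataSource("jdbc:mysql://localhost:3306/", "databaseUser", "databasePassword");
flyway.setSchemas("app-db");
flyway.setLocations("filesystem:db/migration");

// equivalent to mvn flyway:migrate
flyway.migrate();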

6. Conclusion

In this article, we’ve shown how Flyway works and how we can use this framework to remodel our application database reliably.

The code accompanying this article is available on Github.



Java Annotation Processing and Creating a Builder


1. Introduction

This article is an intro to Java source-level annotation processing and provides examples of using this technique for generating additional source files during compilation.

2. Applications of Annotation Processing

The source-level annotation processing first appeared in Java 5. It is a handy technique for generating additional source files during the compilation stage.

The source files don’t have to be Java files — you can generate any kind of description, metadata, documentation, resources, or any other type of files, based on annotations in your source code.

Annotation processing is actively used in many ubiquitous Java libraries, for instance, to generate metaclasses in QueryDSL and JPA, and to augment classes with boilerplate code in the Lombok library.

An important thing to note is the limitation of the annotation processing API — it can only be used to generate new files, not to change existing ones.

The notable exception is the Lombok library which uses annotation processing as a bootstrapping mechanism to include itself into the compilation process and modify the AST via some internal compiler APIs. This hacky technique has nothing to do with the intended purpose of annotation processing and therefore is not discussed in this article.

3. Annotation Processing API

Annotation processing is done in multiple rounds. Each round starts with the compiler searching for the annotations in the source files and choosing the annotation processors suited for these annotations. Each annotation processor, in turn, is called on the corresponding sources.

If any files are generated during this process, another round is started with the generated files as its input. This process continues until no new files are generated during the processing stage.


The annotation processing API is located in the javax.annotation.processing package. The main interface that you’ll have to implement is the Processor interface, which has a partial implementation in the form of the AbstractProcessor class. This class is the one we’re going to extend to create our own annotation processor.

4. Setting Up the Project

To demonstrate the possibilities of annotation processing, we will develop a simple processor for generating fluent object builders for annotated classes.

We’re going to split our project into two Maven modules. One of them, annotation-processor module, will contain the processor itself together with the annotation, and another, the annotation-user module, will contain the annotated class. This is a typical use case of annotation processing.

The settings for the annotation-processor module are as follows. We’re going to use Google’s auto-service library to generate the processor metadata file, which will be discussed later, and the maven-compiler-plugin tuned for Java 8 source code. The versions of these dependencies are extracted to the properties section.

The latest versions of the auto-service library and the maven-compiler-plugin can be found in the Maven Central repository:

<properties>
    <auto-service.version>1.0-rc2</auto-service.version>
    <maven-compiler-plugin.version>
      3.5.1
    </maven-compiler-plugin.version>
</properties>

<dependencies>

    <dependency>
        <groupId>com.google.auto.service</groupId>
        <artifactId>auto-service</artifactId>
        <version>${auto-service.version}</version>
        <scope>provided</scope>
    </dependency>

</dependencies>

<build>
    <plugins>

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>${maven-compiler-plugin.version}</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>

    </plugins>
</build>

The annotation-user Maven module with the annotated sources does not need any special tuning, except adding a dependency on the annotation-processor module in the dependencies section:

<dependency>
    <groupId>com.baeldung</groupId>
    <artifactId>annotation-processing</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>

5. Defining an Annotation

Suppose we have a simple POJO class in our annotation-user module with several fields:

public class Person {

    private int age;

    private String name;

    // getters and setters …

}

We want to create a builder helper class to instantiate the Person class more fluently:

Person person = new PersonBuilder()
  .setAge(25)
  .setName("John")
  .build();

This PersonBuilder class is an obvious candidate for code generation, as its structure is completely defined by the Person setter methods.

Let’s create a @BuilderProperty annotation in the annotation-processor module for the setter methods. It will allow us to generate the Builder class for each class that has its setter methods annotated:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.SOURCE)
public @interface BuilderProperty {
}

The @Target annotation with the ElementType.METHOD parameter ensures that this annotation can only be put on methods.

The SOURCE retention policy means that this annotation is only available during source processing and is not available at runtime.

The Person class with properties annotated with the @BuilderProperty annotation will look as follows:

public class Person {

    private int age;

    private String name;

    @BuilderProperty
    public void setAge(int age) {
        this.age = age;
    }

    @BuilderProperty
    public void setName(String name) {
        this.name = name;
    }

    // getters …

}

6. Implementing a Processor

6.1. Creating an AbstractProcessor Subclass

We’ll start with extending the AbstractProcessor class inside the annotation-processor Maven module.

First, we should specify annotations that this processor is capable of processing, and also the supported source code version. This can be done either by implementing the methods getSupportedAnnotationTypes and getSupportedSourceVersion of the Processor interface or by annotating your class with @SupportedAnnotationTypes and @SupportedSourceVersion annotations.

The @AutoService annotation is a part of the auto-service library and allows us to generate the processor metadata file, which will be explained in the following sections.

@SupportedAnnotationTypes(
  "com.baeldung.annotation.processor.BuilderProperty")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
@AutoService(Processor.class)
public class BuilderProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, 
      RoundEnvironment roundEnv) {
        return false;
    }
}

You can specify not only the concrete annotation class names but also wildcards, like “com.baeldung.annotation.*” to process annotations inside the com.baeldung.annotation package and all its subpackages, or even “*” to process all annotations.
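
For completeness, the same information can be supplied by overriding the corresponding methods instead of using the annotations; a sketch of that variant:

@Override
public Set<String> getSupportedAnnotationTypes() {
    return Collections.singleton(
      "com.baeldung.annotation.processor.BuilderProperty");
}

@Override
public SourceVersion getSupportedSourceVersion() {
    return SourceVersion.RELEASE_8;
}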

The single method that we’ll have to implement is the process method that does the processing itself. It is called by the compiler for every source file containing the matching annotations.

Annotations are passed as the first Set<? extends TypeElement> annotations argument, and the information about the current processing round is passed as the RoundEnvironment roundEnv argument.

The returned boolean value should be true if your annotation processor has processed all the passed annotations, and you don’t want them to be passed to other annotation processors down the list.

6.2. Gathering Data

Our processor does not really do anything useful yet, so let’s fill it with code.

First, we’ll need to iterate through all annotation types that are found in the class — in our case, the annotations set will have a single element corresponding to the @BuilderProperty annotation, even if this annotation occurs multiple times in the source file.

Still, it’s better to implement the process method as an iteration cycle, for completeness’ sake:

@Override
public boolean process(Set<? extends TypeElement> annotations, 
  RoundEnvironment roundEnv) {

    for (TypeElement annotation : annotations) {
        Set<? extends Element> annotatedElements 
          = roundEnv.getElementsAnnotatedWith(annotation);
        
        // …
    }

    return true;
}

In this code, we use the RoundEnvironment instance to receive all elements annotated with the @BuilderProperty annotation. In the case of the Person class, these elements correspond to the setName and setAge methods.

A user of the @BuilderProperty annotation could erroneously annotate methods that are not actually setters. The setter method name should start with set, and the method should receive a single argument. So let’s separate the wheat from the chaff.

In the following code, we use the Collectors.partitioningBy() collector to split annotated methods into two collections: correctly annotated setters and other erroneously annotated methods:

Map<Boolean, List<Element>> annotatedMethods = annotatedElements.stream().collect(
  Collectors.partitioningBy(element ->
    ((ExecutableType) element.asType()).getParameterTypes().size() == 1
    && element.getSimpleName().toString().startsWith("set")));

List<Element> setters = annotatedMethods.get(true);
List<Element> otherMethods = annotatedMethods.get(false);

Here we use the Element.asType() method to receive an instance of the TypeMirror class which gives us some ability to introspect types even though we are only at the source processing stage.

We should warn the user about incorrectly annotated methods, so let’s use the Messager instance accessible from the AbstractProcessor.processingEnv protected field. The following lines will output an error for each erroneously annotated element during the source processing stage:

otherMethods.forEach(element ->
  processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
    "@BuilderProperty must be applied to a setXxx method " 
      + "with a single argument", element));

Of course, if the correct setters collection is empty, there is no point in continuing the current iteration:

if (setters.isEmpty()) {
    continue;
}

If the setters collection has at least one element, we’re going to use it to get the fully qualified class name from the enclosing element, which in the case of a setter method is the source class itself:

String className = ((TypeElement) setters.get(0)
  .getEnclosingElement()).getQualifiedName().toString();

The last bit of information we need to generate a builder class is a map between the names of the setters and the names of their argument types:

Map<String, String> setterMap = setters.stream().collect(Collectors.toMap(
    setter -> setter.getSimpleName().toString(),
    setter -> ((ExecutableType) setter.asType())
      .getParameterTypes().get(0).toString()
));

6.3. Generating the Output File

Now we have all the information we need to generate a builder class: the name of the source class, all its setter names, and their argument types.

To generate the output file, we’ll use the Filer instance, again provided via the AbstractProcessor.processingEnv protected field:

JavaFileObject builderFile = processingEnv.getFiler()
  .createSourceFile(builderClassName);
try (PrintWriter out = new PrintWriter(builderFile.openWriter())) {
    // writing generated file to out …
}

The complete code of the writeBuilderFile method is provided below. We only need to calculate the package name, fully qualified builder class name, and simple class names for the source class and the builder class. The rest of the code is pretty straightforward.

private void writeBuilderFile(
  String className, Map<String, String> setterMap) 
  throws IOException {

    String packageName = null;
    int lastDot = className.lastIndexOf('.');
    if (lastDot > 0) {
        packageName = className.substring(0, lastDot);
    }

    String simpleClassName = className.substring(lastDot + 1);
    String builderClassName = className + "Builder";
    String builderSimpleClassName = builderClassName
      .substring(lastDot + 1);

    JavaFileObject builderFile = processingEnv.getFiler()
      .createSourceFile(builderClassName);
    
    try (PrintWriter out = new PrintWriter(builderFile.openWriter())) {

        if (packageName != null) {
            out.print("package ");
            out.print(packageName);
            out.println(";");
            out.println();
        }

        out.print("public class ");
        out.print(builderSimpleClassName);
        out.println(" {");
        out.println();

        out.print("    private ");
        out.print(simpleClassName);
        out.print(" object = new ");
        out.print(simpleClassName);
        out.println("();");
        out.println();

        out.print("    public ");
        out.print(simpleClassName);
        out.println(" build() {");
        out.println("        return object;");
        out.println("    }");
        out.println();

        setterMap.entrySet().forEach(setter -> {
            String methodName = setter.getKey();
            String argumentType = setter.getValue();

            out.print("    public ");
            out.print(builderSimpleClassName);
            out.print(" ");
            out.print(methodName);

            out.print("(");

            out.print(argumentType);
            out.println(" value) {");
            out.print("        object.");
            out.print(methodName);
            out.println("(value);");
            out.println("        return this;");
            out.println("    }");
            out.println();
        });

        out.println("}");
    }
}

7. Running the Example

To see the code generation in action, you should either compile both modules from the common parent root or first compile the annotation-processor module and then the annotation-user module.

The generated PersonBuilder class can be found inside the annotation-user/target/generated-sources/annotations/com/baeldung/annotation/PersonBuilder.java file and should look like this:

package com.baeldung.annotation;

public class PersonBuilder {

    private Person object = new Person();

    public Person build() {
        return object;
    }

    public PersonBuilder setName(java.lang.String value) {
        object.setName(value);
        return this;
    }

    public PersonBuilder setAge(int value) {
        object.setAge(value);
        return this;
    }
}

8. Alternative Ways of Registering a Processor

To use your annotation processor during the compilation stage, you have several other options, depending on your use case and the tools you use.

8.1. Using the Annotation Processor Tool

The apt tool was a special command line utility for processing source files. It was part of Java 5, was deprecated in Java 7 in favour of other options, and was removed completely in Java 8. It will not be discussed in this article.

8.2. Using the Compiler Key

The -processor compiler key is a standard JDK facility to augment the source processing stage of the compiler with your own annotation processor.

Note that the processor itself and the annotation have to be already compiled as classes in a separate compilation and present on the classpath, so the first thing you should do is:

javac com/baeldung/annotation/processor/BuilderProcessor.java
javac com/baeldung/annotation/processor/BuilderProperty.java

Then you do the actual compilation of your sources with the -processor key specifying the annotation processor class you’ve just compiled:

javac -processor com.baeldung.annotation.processor.MyProcessor Person.java

To specify several annotation processors in one go, you can separate their class names with commas, like this:

javac -processor package1.Processor1,package2.Processor2 SourceFile.java

8.3. Using Maven

The maven-compiler-plugin allows specifying annotation processors as part of its configuration.

Here’s an example of adding an annotation processor to the compiler plugin configuration. You can also specify the directory to put generated sources into, using the generatedSourcesDirectory configuration parameter.

Note that the BuilderProcessor class should already be compiled, for instance, imported from another jar in the build dependencies:

<build>
    <plugins>

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
                <generatedSourcesDirectory>${project.build.directory}
                  /generated-sources/</generatedSourcesDirectory>
                <annotationProcessors>
                    <annotationProcessor>
                        com.baeldung.annotation.processor.BuilderProcessor
                    </annotationProcessor>
                </annotationProcessors>
            </configuration>
        </plugin>

    </plugins>
</build>

8.4. Adding a Processor Jar to the Classpath

Instead of specifying the annotation processor in the compiler options, you may simply add a specially structured jar with the processor class to the classpath of the compiler.

To pick it up automatically, the compiler has to know the name of the processor class. So you have to specify it in the META-INF/services/javax.annotation.processing.Processor file as a fully qualified class name of the processor:

com.baeldung.annotation.processor.BuilderProcessor

You can also specify several processors from this jar to pick up automatically by separating them with a new line:

package1.Processor1
package2.Processor2
package3.Processor3

If you use Maven to build this jar and try to put this file directly into the src/main/resources/META-INF/services directory, you’ll encounter the following error:

[ERROR] Bad service configuration file, or exception thrown while 
constructing Processor object: javax.annotation.processing.Processor: 
Provider com.baeldung.annotation.processor.BuilderProcessor not found

This is because the compiler tries to use this file during the source-processing stage of the module itself when the BuilderProcessor file is not yet compiled. The file has to be either put inside another resource directory and copied to the META-INF/services directory during the resource copying stage of the Maven build, or (even better) generated during the build.

The Google auto-service library, discussed in the following section, allows generating this file using a simple annotation.

8.5. Using the Google auto-service Library

To generate the registration file automatically, you can use the @AutoService annotation from Google’s auto-service library, like this:

@AutoService(Processor.class)
public class BuilderProcessor extends AbstractProcessor {
    // …
}

This annotation is itself processed by the annotation processor from the auto-service library. This processor generates the META-INF/services/javax.annotation.processing.Processor file containing the BuilderProcessor class name.

9. Conclusion

In this article, we’ve demonstrated source-level annotation processing using an example of generating a Builder class for a POJO. We have also provided several alternative ways of registering annotation processors in your project.

The source code for the article is available on GitHub.


Introduction to the Front Controller Pattern in Java


1. Overview

In this tutorial, we’ll be digging deeper into the Front Controller Pattern, part of the Enterprise Patterns as defined in Martin Fowler‘s book “Patterns of Enterprise Application Architecture”.

Front Controller is defined as “a controller that handles all requests for a Web site”. It stands in front of a web-application and delegates requests to subsequent resources. It also provides an interface to common behavior such as security, internationalization and presenting particular views to certain users.

This enables an application to change its behavior at runtime. Furthermore, it makes an application easier to read and maintain by preventing code duplication.

The Front Controller consolidates all request handling by channeling requests through a single handler object.

2. How Does it Work?

The Front Controller Pattern is mainly divided into two parts: a single dispatching controller and a hierarchy of commands. The following UML diagram depicts the class relations of a generic Front Controller implementation:

[front-controller: UML class diagram of the pattern]

This single controller dispatches requests to commands in order to trigger behavior associated with a request.

To demonstrate its implementation, we’ll implement the controller in a FrontControllerServlet and commands as classes inherited from an abstract FrontCommand.

3. Setup

3.1. Maven Dependencies

First, we’ll set up a new Maven WAR project with javax.servlet-api included:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.0-b01</version>
    <scope>provided</scope>
</dependency>

as well as jetty-maven-plugin:

<plugin>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <version>9.4.0.M1</version>
    <configuration>
        <webApp>
            <contextPath>/front-controller</contextPath>
        </webApp>
    </configuration>
</plugin>

3.2. Model

Next, we’ll define a Model class and a model Repository. We’ll use the following Book class as our model:

public class Book {
    private String author;
    private String title;
    private Double price;

    // standard constructors, getters and setters
}

This will be the repository; you can look up the source code for a concrete implementation or provide one of your own:

public interface Bookshelf {
    default void init() {
        add(new Book("Wilson, Robert Anton & Shea, Robert", 
          "Illuminati", 9.99));
        add(new Book("Fowler, Martin", 
          "Patterns of Enterprise Application Architecture", 27.88));
    }

    Bookshelf getInstance();

    <E extends Book> boolean add(E book);

    Book findByTitle(String title);
}

3.3. FrontControllerServlet

The implementation of the Servlet itself is fairly simple. We’re extracting the command name from the request, dynamically creating a new instance of a command class, and executing it.

This allows us to add new commands without changing the code base of our Front Controller.

Another option is to implement the Servlet using static, conditional logic, which has the advantage of compile-time error checking; a sketch of that variant follows the reflective implementation below:

public class FrontControllerServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, 
      HttpServletResponse response) {
        FrontCommand command = getCommand(request);
        command.init(getServletContext(), request, response);
        command.process();
    }

    private FrontCommand getCommand(HttpServletRequest request) {
        try {
            Class type = Class.forName(String.format(
              "com.baeldung.enterprise.patterns.front." 
              + "controller.commands.%sCommand",
              request.getParameter("command")));
            return (FrontCommand) type
              .asSubclass(FrontCommand.class)
              .newInstance();
        } catch (Exception e) {
            return new UnknownCommand();
        }
    }
}
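
For comparison, the static, conditional variant mentioned above could look roughly like this (a sketch; the hypothetical getCommandStatically method simply maps known command names to concrete commands):

private FrontCommand getCommandStatically(HttpServletRequest request) {
    String commandName = request.getParameter("command");
    // every supported command has to be listed here, but typos are caught at compile time
    if ("Search".equals(commandName)) {
        return new SearchCommand();
    }
    return new UnknownCommand();
}

The trade-off is clear: we gain compile-time checking of the command classes, but lose the ability to add new commands without touching the Servlet.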

3.4. FrontCommand

Let’s implement an abstract class called FrontCommand, which is holding the behavior common to all commands.

This class has access to the ServletContext and its request and response objects. Furthermore, it’ll handle view resolution:

public abstract class FrontCommand {
    protected ServletContext context;
    protected HttpServletRequest request;
    protected HttpServletResponse response;

    public void init(
      ServletContext servletContext,
      HttpServletRequest servletRequest,
      HttpServletResponse servletResponse) {
        this.context = servletContext;
        this.request = servletRequest;
        this.response = servletResponse;
    }

    public abstract void process() throws ServletException, IOException;

    protected void forward(String target) throws ServletException, IOException {
        target = String.format("/WEB-INF/jsp/%s.jsp", target);
        RequestDispatcher dispatcher = context.getRequestDispatcher(target);
        dispatcher.forward(request, response);
    }
}

A concrete implementation of this abstract FrontCommand would be a SearchCommand. It includes conditional logic for the cases where a book was found and where the book is missing:

public class SearchCommand extends FrontCommand {
    @Override
    public void process() throws ServletException, IOException {
        Book book = new BookshelfImpl().getInstance()
          .findByTitle(request.getParameter("title"));
        if (book != null) {
            request.setAttribute("book", book);
            forward("book-found");
        } else {
            forward("book-notfound");
        }
    }
}

If the application is running, we can reach this command by pointing our browser to http://localhost:8080/front-controller/?command=Search&title=patterns.

The SearchCommand resolves to two views; the second view can be tested with the following request: http://localhost:8080/front-controller/?command=Search&title=any-title.

To round up our scenario, we’ll implement a second command, which is fired as a fallback whenever the requested command is unknown to the Servlet:

public class UnknownCommand extends FrontCommand {
    @Override
    public void process() throws ServletException, IOException {
        forward("unknown");
    }
}

This view will be reachable at http://localhost:8080/front-controller/?command=Order&title=any-title or by completely leaving out the URL parameters.

4. Deployment

Because we decided to create a WAR file project, we’ll need a web deployment descriptor. With this web.xml we’re able to run our web-application in any Servlet container:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
  http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
  version="3.1">
    <servlet>
        <servlet-name>front-controller</servlet-name>
        <servlet-class>
            com.baeldung.enterprise.patterns.front.controller.FrontControllerServlet
        </servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>front-controller</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>

As the last step we’ll run ‘mvn install jetty:run’ and inspect our views in a browser.

5. Conclusion

We should now be familiar with the Front Controller Pattern and its implementation as a Servlet and a hierarchy of commands.

As usual, you’ll find the sources on GitHub.

Apache Tiles Integration with Spring MVC

1. Overview

Apache Tiles is a free, open source templating framework purely built on the Composite design pattern.

The Composite design pattern is a structural pattern that composes objects into tree structures to represent part-whole hierarchies, treating individual objects and compositions of objects uniformly. In other words, in Tiles, a page is built by assembling a composition of sub-views called Tiles.

The advantages of this framework over other frameworks include:

  • re-usability
  • ease in configuration
  • low performance overhead

In this article, we’ll focus on integrating Apache Tiles with Spring MVC.

2. Dependency Configuration

The first step here is to add the necessary dependency in the pom.xml:

<dependency>
    <groupId>org.apache.tiles</groupId>
    <artifactId>tiles-jsp</artifactId>
    <version>3.0.7</version>
</dependency>

3. Tiles Layout Files

Now we need to define the template definitions; each specific page then overrides the attributes of the base template definition that it needs:

<tiles-definitions>
    <definition name="template-def" 
           template="/WEB-INF/views/tiles/layouts/defaultLayout.jsp">  
        <put-attribute name="title" value="" />  
        <put-attribute name="header" 
           value="/WEB-INF/views/tiles/templates/defaultHeader.jsp" />  
        <put-attribute name="menu" 
           value="/WEB-INF/views/tiles/templates/defaultMenu.jsp" />  
        <put-attribute name="body" value="" />  
        <put-attribute name="footer" 
           value="/WEB-INF/views/tiles/templates/defaultFooter.jsp" />  
    </definition>  
    <definition name="home" extends="template-def">  
        <put-attribute name="title" value="Welcome" />  
        <put-attribute name="body" 
           value="/WEB-INF/views/pages/home.jsp" />  
    </definition>  
</tiles-definitions>

4. ApplicationConfiguration and Other Classes

As part of the configuration, we will create three Java classes called ApplicationInitializer, ApplicationController and ApplicationConfiguration:

  • ApplicationInitializer initializes and checks the necessary configuration specified in the ApplicationConfiguration class
  • ApplicationConfiguration contains the configuration for integrating Spring MVC with the Apache Tiles framework
  • ApplicationController works in sync with the tiles.xml file and redirects to the necessary pages based on the incoming requests

Let us see each of the classes in action:

@Controller
@RequestMapping("/")
public class ApplicationController {
    @RequestMapping(
      value = { "/"}, 
      method = RequestMethod.GET)
    public String homePage(ModelMap model) {
        return "home";
    }
    @RequestMapping(
      value = { "/apachetiles"}, 
      method = RequestMethod.GET)
    public String productsPage(ModelMap model) {
        return "apachetiles";
    }
 
    @RequestMapping(
      value = { "/springmvc"},
      method = RequestMethod.GET)
    public String contactUsPage(ModelMap model) {
        return "springmvc";
    }
}

public class ApplicationInitializer extends 
  AbstractAnnotationConfigDispatcherServletInitializer {
    
    @Override
    protected Class<?>[] getRootConfigClasses() {
        return new Class[] { ApplicationConfiguration.class };
    }
    
    @Override
    protected Class<?>[] getServletConfigClasses() {
        return null;
    }
    
    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}

There are two important classes which play a key role in configuring Tiles in a Spring MVC application: TilesConfigurer and TilesViewResolver:

  • TilesConfigurer helps in linking the Tiles framework with the Spring framework by providing the path to the tiles configuration file
  • TilesViewResolver is one of the adapter classes provided by the Spring API to resolve Tiles views

Finally, in the ApplicationConfiguration class, we used TilesConfigurer and TilesViewResolver classes to achieve the integration:

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "com.baeldung.tiles.springmvc")
public class ApplicationConfiguration extends WebMvcConfigurerAdapter {
    @Bean
    public TilesConfigurer tilesConfigurer() {
        TilesConfigurer tilesConfigurer = new TilesConfigurer();
        tilesConfigurer.setDefinitions(
          new String[] { "/WEB-INF/views/**/tiles.xml" });
        tilesConfigurer.setCheckRefresh(true);
        
        return tilesConfigurer;
    }
    
    @Override
    public void configureViewResolvers(ViewResolverRegistry registry) {
        TilesViewResolver viewResolver = new TilesViewResolver();
        registry.viewResolver(viewResolver);
    }
    
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/static/**")
          .addResourceLocations("/static/");
    }
}

5. Tiles Template Files

So far, we have finished the configuration of the Apache Tiles framework and the definition of the template and the specific tiles used throughout the application.

In this step, we need to create the specific template files which have been defined in the tiles.xml.

Here is a snippet of the layout which can be used as a base to build specific pages:

<%@ taglib uri="http://tiles.apache.org/tags-tiles" prefix="tiles" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<html>
    <head>
        <meta 
          http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
        <title><tiles:getAsString name="title" /></title>
        <link href="<c:url value='/static/css/app.css' />" 
            rel="stylesheet">
    </head>
    <body>
        <div class="flex-container">
            <tiles:insertAttribute name="header" />
            <tiles:insertAttribute name="menu" />
            <article class="article">
                <tiles:insertAttribute name="body" />
            </article>
            <tiles:insertAttribute name="footer" />
        </div>
    </body>
</html>

6. Conclusion

This concludes the integration of Spring MVC with Apache Tiles.

You can find the full implementation in the following github project.

Generate equals() and hashCode() with Eclipse

1. Introduction

In this article, we explore generating equals() and hashCode() methods using the Eclipse IDE. We will illustrate how powerful and convenient Eclipse’s code auto-generation is, and also emphasize that diligent testing of the code is still necessary.

2. Rules

equals() in Java is used for checking whether two objects are equivalent. A good way to test an implementation is to ensure the relation is reflexive, symmetric, and transitive. That is, for three non-null objects a, b, and c:

  • Reflexive – a.equals(a)
  • Symmetric – a.equals(b) if and only if b.equals(a)
  • Transitive – if a.equals(b) and b.equals(c) then a.equals(c)

hashCode() must obey one rule:

  • two objects which are equal according to equals() must have the same hashCode() value

3. Class with Primitives

Let’s consider a Java class composed of only primitive member variables:

public class PrimitiveClass {

    private boolean primitiveBoolean;
    private int primitiveInt;

    // constructor, getters and setters
}

We use the Eclipse IDE to generate equals() and hashCode() using ‘Source->Generate hashCode() and equals()‘. Eclipse provides a dialog box like this:

eclipse-equals-hascode

We can ensure all member variables are included by choosing ‘Select All’.

Note that the options listed beneath Insertion Point: affect the style of the generated code. Here, we don’t select any of those options, select ‘OK’ and the methods are added to our class:

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + (primitiveBoolean ? 1231 : 1237);
    result = prime * result + primitiveInt;
    return result;
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null) return false;
    if (getClass() != obj.getClass()) return false;
    PrimitiveClass other = (PrimitiveClass) obj;
    if (primitiveBoolean != other.primitiveBoolean) return false;
    if (primitiveInt != other.primitiveInt) return false;
    return true;
}

The generated hashCode() method starts with a declaration of a prime number (31), performs operations on the primitive fields and returns a result based on the object’s state.

equals() checks first if two objects are the same instance (==) and returns true if they are.

Next, it checks that the comparison object is non-null and both objects are of the same class, returning false if they are not.

Finally, equals() checks the equality of each member variable, returning false if any of them is not equal.

So we can write simple tests:

PrimitiveClass aObject = new PrimitiveClass(false, 2);
PrimitiveClass bObject = new PrimitiveClass(false, 2);
PrimitiveClass dObject = new PrimitiveClass(true, 2);

assertTrue(aObject.equals(bObject) && bObject.equals(aObject));
assertTrue(aObject.hashCode() == bObject.hashCode());

assertFalse(aObject.equals(dObject));
assertFalse(aObject.hashCode() == dObject.hashCode());

4. Class with Collections and Generics

Now, let’s consider a more complex Java class with collections and generics:

public class ComplexClass {

    private List<?> genericList;
    private Set<Integer> integerSet;

    // constructor, getters and setters
}

Again we use Eclipse ‘Source->Generate hashCode() and equals()’. Notice that the generated equals() uses instanceof to compare types, because we selected ‘Use ‘instanceof’ to compare types’ in the Eclipse dialog. We get:

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + ((genericList == null)
      ? 0 : genericList.hashCode());
    result = prime * result + ((integerSet == null)
      ? 0 : integerSet.hashCode());
    return result;
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null) return false;
    if (!(obj instanceof ComplexClass)) return false;
    ComplexClass other = (ComplexClass) obj;
    if (genericList == null) {
        if (other.genericList != null)
            return false;
    } else if (!genericList.equals(other.genericList))
        return false;
    if (integerSet == null) {
        if (other.integerSet != null)
            return false;
    } else if (!integerSet.equals(other.integerSet))
        return false;
    return true;
}

The generated hashCode() method relies on the AbstractList.hashCode() and AbstractSet.hashCode() core Java methods. These iterate through a collection, combining the hashCode() values of each item into a result.

Similarly, the generated equals() method uses AbstractList.equals() and AbstractSet.equals(), which compare collections for equality by comparing their elements.
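
This element-based contract is easy to observe directly; as a small illustrative snippet (assuming JUnit, as in the other tests), two List implementations holding the same elements in the same order are equal and share a hash code:

List<String> listOne = new ArrayList<>(Arrays.asList("x", "y"));
List<String> listTwo = new LinkedList<>(Arrays.asList("x", "y"));

// equality and hash code depend only on the elements, not on the implementation class
assertTrue(listOne.equals(listTwo));
assertTrue(listOne.hashCode() == listTwo.hashCode());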

We can verify the robustness by testing some examples:

ArrayList<String> strArrayList = new ArrayList<String>();
strArrayList.add("abc");
strArrayList.add("def");
ComplexClass aObject = new ComplexClass(strArrayList, new HashSet<Integer>(Arrays.asList(45, 67)));
ComplexClass bObject = new ComplexClass(strArrayList, new HashSet<Integer>(Arrays.asList(45, 67)));
		
ArrayList<String> strArrayListD = new ArrayList<String>();
strArrayListD.add("lmn");
strArrayListD.add("pqr");
ComplexClass dObject = new ComplexClass(strArrayListD, new HashSet<Integer>(Arrays.asList(45, 67)));
		
assertTrue(aObject.equals(bObject) && bObject.equals(aObject));
assertTrue(aObject.hashCode() == bObject.hashCode());

assertFalse(aObject.equals(dObject));
assertFalse(aObject.hashCode() == dObject.hashCode());

5. Inheritance

Let’s consider Java classes that use inheritance:

public abstract class Shape {
    public abstract double area();

    public abstract double perimeter();
}

public class Rectangle extends Shape {
    private double width;
    private double length;
   
    @Override
    public double area() {
        return width * length;
    }

    @Override
    public double perimeter() {
        return 2 * (width + length);
    }
    // constructor, getters and setters
}

public class Square extends Rectangle {
    Color color;
    // constructor, getters and setters
}

If we attempt the ‘Source->Generate hashCode() and equals()‘ on the Square class, Eclipse warns us that ‘the superclass ‘Rectangle’ does not redeclare equals() and hashCode() : the resulting code may not function correctly’.

Similarly, we get a warning about the superclass ‘Shape’ when we attempt to generate hashCode() and equals() on the Rectangle class.

Eclipse will allow us to plow forward despite warnings. In the case of Rectangle, it extends an abstract Shape class that cannot implement hashCode() or equals() because it has no concrete member variables. We can ignore Eclipse for that case.

The Square class, however, inherits the width and length member variables from Rectangle, as well as its own color variable. Creating hashCode() and equals() in Square without first doing the same for Rectangle means using only color in equals()/hashCode():

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + ((color == null) ? 0 : color.hashCode());
    return result;
}
@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null) return false;
    if (getClass() != obj.getClass()) return false;
    Square other = (Square) obj;
    if (color == null) {
        if (other.color != null)
            return false;
    } else if (!color.equals(other.color))
        return false;
    return true;
}

A quick test shows us that equals()/hashCode() for Square are not sufficient if only the width differs, because width is not included in the equals()/hashCode() calculations:

Square aObject = new Square(10, Color.BLUE);		
Square dObject = new Square(20, Color.BLUE);

Assert.assertFalse(aObject.equals(dObject));
Assert.assertFalse(aObject.hashCode() == dObject.hashCode());

Let’s fix this by using Eclipse to generate equals()/hashCode() for the Rectangle class:

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    long temp;
    temp = Double.doubleToLongBits(length);
    result = prime * result + (int) (temp ^ (temp >>> 32));
    temp = Double.doubleToLongBits(width);
    result = prime * result + (int) (temp ^ (temp >>> 32));
    return result;
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null) return false;
    if (getClass() != obj.getClass()) return false;
    Rectangle other = (Rectangle) obj;
    if (Double.doubleToLongBits(length)
      != Double.doubleToLongBits(other.length)) return false;
    if (Double.doubleToLongBits(width)
      != Double.doubleToLongBits(other.width)) return false;
    return true;
}

We must re-generate equals()/hashCode() in the Square class, so that Rectangle‘s equals()/hashCode() are invoked. In this generation of the code, we’ve selected all the options in the Eclipse dialog, so we see instanceof comparisons and if blocks:

@Override
public int hashCode() {
    final int prime = 31;
    int result = super.hashCode();
    result = prime * result + ((color == null) ? 0 : color.hashCode());
    return result;
}


@Override
public boolean equals(Object obj) {
    if (this == obj) {
        return true;
    }
    if (!super.equals(obj)) {
        return false;
    }
    if (!(obj instanceof Square)) {
        return false;
    }
    Square other = (Square) obj;
    if (color == null) {
        if (other.color != null) {
            return false;
        }
    } else if (!color.equals(other.color)) {
        return false;
    }
    return true;
}

Re-running our test from above, it now passes because Square‘s hashCode()/equals() are calculated correctly.

6. Conclusion

The Eclipse IDE is very powerful and allows auto-generation of boilerplate code – getters/setters, constructors of various types, equals(), and hashCode().

By understanding what Eclipse is doing, we can decrease time spent on these coding tasks. However, we must still use caution and verify our code with tests to ensure we have handled all the expected cases.

You can find the code for equals()/hashcode() on GitHub.

Spring MVC + Thymeleaf 3.0: New Features

1. Introduction

Thymeleaf is a Java template engine for processing and creating HTML, XML, JavaScript, CSS and plain text. For an intro to Thymeleaf and Spring, have a look at this writeup.

In this article, we will discuss the new features of Thymeleaf 3.0 in a Spring MVC application. Version 3 comes with new features and many under-the-hood improvements. To be more specific, we will cover the topics of natural processing and JavaScript inlining.

Thymeleaf 3.0 includes three new textual template modes: TEXT, JAVASCRIPT, and CSS – which are meant to be used for processing plain text, JavaScript and CSS templates, respectively.

2. Maven Dependencies

First, let us see the configurations required to integrate Thymeleaf with Spring; thymeleaf-spring library is required in our dependencies:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf-spring4</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>

Note that, for a Spring 3 project, the thymeleaf-spring3 library must be used instead of thymeleaf-spring4. The latest version of the dependencies can be found here.

3. Java Thymeleaf Configuration

First, we need to configure the new template engine, view and template resolvers. In order to do that, we need to update the Java config class created here. In addition to the new types of resolvers, our configuration class implements the Spring interface ApplicationContextAware:

@Configuration
@EnableWebMvc
@ComponentScan({ "com.baeldung.thymeleaf" })
public class WebMVCConfig extends WebMvcConfigurerAdapter
  implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    // Java setter

    @Bean
    public ViewResolver htmlViewResolver() {
        ThymeleafViewResolver resolver = new ThymeleafViewResolver();
        resolver.setTemplateEngine(templateEngine(htmlTemplateResolver()));
        resolver.setContentType("text/html");
        resolver.setCharacterEncoding("UTF-8");
        resolver.setViewNames(ArrayUtil.array("*.html"));
        return resolver;
    }
	
    @Bean
    public ViewResolver javascriptViewResolver() {
        ThymeleafViewResolver resolver = new ThymeleafViewResolver();
        resolver.setTemplateEngine(templateEngine(javascriptTemplateResolver()));
        resolver.setContentType("application/javascript");
        resolver.setCharacterEncoding("UTF-8");
        resolver.setViewNames(ArrayUtil.array("*.js"));
        return resolver;
    }
	
    @Bean
    public ViewResolver plainViewResolver() {
        ThymeleafViewResolver resolver = new ThymeleafViewResolver();
        resolver.setTemplateEngine(templateEngine(plainTemplateResolver()));
        resolver.setContentType("text/plain");
        resolver.setCharacterEncoding("UTF-8");
        resolver.setViewNames(ArrayUtil.array("*.txt"));
        return resolver;
    }
}

As we may observe above, we created three different view resolvers – one for HTML views, one for JavaScript files, and one for plain text files. Thymeleaf will differentiate them by checking the filename extensions – .html, .js and .txt, respectively.

We also created a small ArrayUtil class, in order to use its static array() method that creates the required String[] array of view names.
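
The ArrayUtil class isn’t shown in the article; as a minimal sketch, it can simply wrap a varargs parameter:

public final class ArrayUtil {

    private ArrayUtil() {
    }

    // lets us write ArrayUtil.array("*.html") instead of new String[] { "*.html" }
    public static String[] array(String... args) {
        return args;
    }
}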

In the next part of this class, we need to configure the template engine:

private TemplateEngine templateEngine(ITemplateResolver templateResolver) {
    SpringTemplateEngine engine = new SpringTemplateEngine();
    engine.setTemplateResolver(templateResolver);
    return engine;
}

Finally, we need to create three separate template resolvers:

private ITemplateResolver htmlTemplateResolver() {
    SpringResourceTemplateResolver resolver = new SpringResourceTemplateResolver();
    resolver.setApplicationContext(applicationContext);
    resolver.setPrefix("/WEB-INF/views/");
    resolver.setCacheable(false);
    resolver.setTemplateMode(TemplateMode.HTML);
    return resolver;
}
    
private ITemplateResolver javascriptTemplateResolver() {
    SpringResourceTemplateResolver resolver = new SpringResourceTemplateResolver();
    resolver.setApplicationContext(applicationContext);
    resolver.setPrefix("/WEB-INF/js/");
    resolver.setCacheable(false);
    resolver.setTemplateMode(TemplateMode.JAVASCRIPT);
    return resolver;
}
    
private ITemplateResolver plainTemplateResolver() {
    SpringResourceTemplateResolver resolver = new SpringResourceTemplateResolver();
    resolver.setApplicationContext(applicationContext);
    resolver.setPrefix("/WEB-INF/txt/");
    resolver.setCacheable(false);
    resolver.setTemplateMode(TemplateMode.TEXT);
    return resolver;
}

Please note that, for testing, it is better to use non-cached templates; that’s why it is recommended to use the setCacheable(false) method.

JavaScript templates will be stored in the /WEB-INF/js/ folder, plain text files in the /WEB-INF/txt/ folder, and finally the HTML files in the /WEB-INF/views/ folder, matching the prefixes configured in the resolvers above.

4. Spring Controller Configuration

In order to test our new configuration, we created the following Spring controller:

@Controller
public class InliningController {

    @RequestMapping(value = "/html", method = RequestMethod.GET)
    public String getExampleHTML(Model model) {
        model.addAttribute("title", "Baeldung");
        model.addAttribute("description", "<strong>Thymeleaf</strong> tutorial");
        return "inliningExample.html";
    }
	
    @RequestMapping(value = "/js", method = RequestMethod.GET)
    public String getExampleJS(Model model) {
        model.addAttribute("students", StudentUtils.buildStudents());
        return "studentCheck.js";
    }
	
    @RequestMapping(value = "/plain", method = RequestMethod.GET)
    public String getExamplePlain(Model model) {
        model.addAttribute("username", SecurityContextHolder.getContext()
          .getAuthentication().getName());
        model.addAttribute("students", StudentUtils.buildStudents());
        return "studentsList.txt";
    }
}

In the HTML file example, we’ll show how to use the new inlining feature, with and without escaping HTML tags.

For the JS example, we’ll generate an AJAX request which loads a js file with the students’ information. Please note that we are using the simple buildStudents() method inside the StudentUtils class, from this article.

Finally, in the plain text example, we’ll show the student information as a text file. A typical use of the plain text template mode is generating a plain-text email.

As an additional feature, we will use the SecurityContextHolder to obtain the logged-in username.
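
The buildStudents() helper comes from the article linked above and isn’t repeated here; as a rough illustration only (assuming a simple Student bean with id and name properties – the values below are made up), it might look like this:

public class StudentUtils {

    public static List<Student> buildStudents() {
        List<Student> students = new ArrayList<>();
        students.add(new Student(1001, "John Smith"));
        students.add(new Student(1002, "Jane Williams"));
        return students;
    }
}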

5. HTML/JS/TEXT example files

The last part of this tutorial is to create three different types of files, and test the usage of the new Thymeleaf features. Let’s start with HTML file:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Inlining example</title>
</head>
<body>
    <p>Title of tutorial: [[${title}]]</p>
    <p>Description: [(${description})]</p>
</body>
</html>

In this file we use two different approaches. In order to display the title, we use the escaped syntax, which will remove all HTML tags, resulting in displaying only text. In the case of the description, we use the unescaped syntax, to keep the HTML tags. The final result will look like this:

<p>Title of tutorial: Baeldung</p>
<p>Description: <strong>Thymeleaf</strong> tutorial</p>

which will of course be rendered by the browser, displaying the word Thymeleaf in bold.

Next, we proceed to test the JS template features:

var count = [[${students.size()}]];
alert("Number of students in group: " + count);

Expressions in the JAVASCRIPT template mode will be inlined as JavaScript values, which results in creating the js alert. We load this alert, using jQuery AJAX, in the listStudents.html file:

<script th:inline="javascript">
    $(document).ready(function() {
        $.ajax({
            url : "/spring-thymeleaf/js",
            });
        });
</script>

The last, but not least, feature that we want to test is plain text file generation. We created a studentsList.txt file with the following content:

Dear [(${username})],

This is the list of our students:
[# th:each="s : ${students}"]
   - [(${s.name})]. ID: [(${s.id})]
[/]
Thanks,
The Baeldung University

Note that, in the textual template modes, the Standard Dialect includes just one processable element ([# …]) and a set of processable attributes (th:text, th:utext, th:if, th:unless, th:each, etc.). The result will be a text file that we may use, for example, in an email, as mentioned at the end of Section 4.

How to test? Our suggestion is to play with the browser first, then check the existing JUnit test as well.

6. Conclusion

In this article, we discussed new features implemented in Thymeleaf framework with a focus on Version 3.0.

The full implementation of this tutorial can be found in the GitHub project – this is an Eclipse-based project that is easy to test in every modern browser.

Finally, if you’re planning to migrate a project from Version 2 to this latest version, have a look here at the migration guide. And note that your existing Thymeleaf templates are almost 100% compatible with Thymeleaf 3.0 so you will only have to do a few modifications in your configuration.

Introduction to Java Config for Spring Security

1. Overview

This article is an introduction to Java configuration for Spring Security which enables users to easily configure Spring Security without the use of XML.

Java configuration was added to the Spring framework in Spring 3.1 and extended to Spring Security in Spring 3.2; it is defined in a class annotated with @Configuration.

2. Maven Setup

To use Spring Security in a Maven project, we first need to have the spring-security-core dependency in the project pom.xml:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-core</artifactId>
    <version>4.1.3.RELEASE</version>
</dependency>

The latest version can always be found here.

3. Web Security with Java Configuration

Let’s start with a basic example of a Spring Security Java configuration:

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth.inMemoryAuthentication().withUser("user")
          .password("password").roles("USER");
    }
}

As you may have noticed, the configuration sets up a basic in-memory authentication config.

4. HTTP Security

To enable HTTP security in Spring, we need to extend WebSecurityConfigurerAdapter and provide a default configuration in its configure(HttpSecurity http) method:

protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
      .anyRequest().authenticated()
      .and().httpBasic();
}

The above default configuration makes sure any request to the application is authenticated with form based login or HTTP basic authentication.

It is also equivalent to the following XML configuration:

<http>
    <intercept-url pattern="/**" access="authenticated"/>
    <form-login />
    <http-basic />
</http>

5. Form Login

Interestingly, Spring Security generates a login page automatically, based on the features that are enabled and using standard values for the URL which processes the submitted login:

protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
      .anyRequest().authenticated()
      .and().formLogin()
      .loginPage("/login").permitAll();
}

Here the automatically generated login page is convenient to get up and running quickly.

6. Authorization with Roles

Let’s now configure some simple authorization on each URL using roles:

protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
      .antMatchers("/", "/home").access("hasRole('USER')")
      .antMatchers("/admin/**").hasRole("ADMIN")
      .and()
      // some more method calls
      .formLogin();
}

Notice how we’re using both the type-safe API – hasRole – and the expression-based API, via access.

7. Logout

As with many other aspects of Spring Security, logout has some great defaults provided by the framework.

By default, a logout request invalidates the session, clears any authentication caches, clears the SecurityContextHolder and redirects to the login page.

Here is a simple logout config:

protected void configure(HttpSecurity http) throws Exception {
    http.logout();
}

However, if you want to get more control over the available handlers, here’s what a more complete implementation will look like:

protected void configure(HttpSecurity http) throws Exception {
    http.logout().logoutUrl("/my/logout")
      .logoutSuccessUrl("/my/index")
      .logoutSuccessHandler(logoutSuccessHandler) 
      .invalidateHttpSession(true)
      .addLogoutHandler(logoutHandler)
      .deleteCookies(cookieNamesToClear)
      .and()
      // some other method calls
}
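
The logoutSuccessHandler, logoutHandler and cookieNamesToClear references are not defined in the snippet above; one possible way to wire them, purely as an illustration, is with handlers Spring Security already ships:

private LogoutSuccessHandler logoutSuccessHandler = new SimpleUrlLogoutSuccessHandler();
private LogoutHandler logoutHandler = new SecurityContextLogoutHandler();
// deleteCookies() takes the names of the cookies to clear on logout
private String[] cookieNamesToClear = new String[] { "JSESSIONID" };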

8. Authentication

Let’s now have a look at the different ways of configuring authentication with Spring Security.

8.1. In-Memory Authentication

We’ll start with a simple, in-memory configuration:

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) 
  throws Exception {
    auth.inMemoryAuthentication()
      .withUser("user").password("password").roles("USER")
      .and()
      .withUser("admin").password("password").roles("USER", "ADMIN");
}

8.2. JDBC Authentication

To move that to JDBC, all you have to do is to define a data source within the application – and use that directly:

@Autowired
private DataSource dataSource;

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) 
  throws Exception {
    auth.jdbcAuthentication().dataSource(dataSource)
      .withDefaultSchema()
      .withUser("user").password("password").roles("USER")
      .and()
      .withUser("admin").password("password").roles("USER", "ADMIN");
}
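
The dataSource itself isn’t defined in this article; as a minimal sketch (assuming an in-memory H2 database and spring-jdbc on the classpath), it could be exposed as a bean like this:

@Bean
public DataSource dataSource() {
    // an embedded database is enough for withDefaultSchema() to create the user tables in
    return new EmbeddedDatabaseBuilder()
      .setType(EmbeddedDatabaseType.H2)
      .build();
}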

9. Conclusion

In this quick tutorial, we went over the basics of Java Configuration for Spring Security and focused on the code samples that illustrate the simplest configuration scenarios.

Java – Get Random Item/Element From a List

1. Introduction

Picking a random List element is a very basic operation but not so obvious to implement. In this article, we will show the most efficient way of doing this in different contexts.

2. Picking a Random Item/Items

In order to get a random item from a List instance, you need to generate a random index number and then fetch an item at this index using the List.get() method.

The key point here is to remember that you mustn’t use an index that exceeds your List’s size.

2.1. Single Random Item

In order to select a random index, you can use the Random.nextInt(int bound) method:

public void givenList_shouldReturnARandomElement() {
    List<Integer> givenList = Arrays.asList(1, 2, 3);
    Random rand = new Random();
    int randomElement = givenList.get(rand.nextInt(givenList.size()));
}

Instead of the Random class, you can always use the static method Math.random() and multiply it by the list size (Math.random() generates a double value between 0 (inclusive) and 1 (exclusive), so remember to cast the result to int after multiplication).
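
A quick sketch of that variant:

List<Integer> givenList = Arrays.asList(1, 2, 3);
// Math.random() * size is in [0, size), so the cast to int always yields a valid index
int randomIndex = (int) (Math.random() * givenList.size());
int randomElement = givenList.get(randomIndex);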

2.2. Select Random Index in a Multithreaded Environment

When writing multithreaded applications, using a single Random instance shared by all threads might result in contention and in picking the same value in every thread accessing the instance. We can always create a new instance per thread by using the dedicated ThreadLocalRandom class:

int randomElementIndex = ThreadLocalRandom.current().nextInt(listSize);

2.3. Select Random Items With Repetitions

Sometimes you might want to pick a few elements from a list. This is quite straightforward:

public void givenList_whenNumberElementsChosen_shouldReturnRandomElementsRepeat() {
    Random rand = new Random();
    List<String> givenList = Arrays.asList("one", "two", "three", "four");

    int numberOfElements = 2;

    for (int i = 0; i < numberOfElements; i++) {
        int randomIndex = rand.nextInt(givenList.size());
        String randomElement = givenList.get(randomIndex);
    }
}

2.4. Select Random Items Without Repetitions

Here, you need to make sure that the element is removed from the list after selection; note that we copy the list into an ArrayList first, because the list returned by Arrays.asList() is fixed-size and does not support remove():

public void givenList_whenNumberElementsChosen_shouldReturnRandomElementsNoRepeat() {
    Random rand = new Random();
    List<String> givenList = new ArrayList<>(Arrays.asList("one", "two", "three", "four"));

    int numberOfElements = 2;

    for (int i = 0; i < numberOfElements; i++) {
        int randomIndex = rand.nextInt(givenList.size());
        String randomElement = givenList.get(randomIndex);
        givenList.remove(randomIndex);
    }
}

2.5. Select Random Series

In case you would like to obtain a random series of elements, the Collections utility class might be handy:

public void givenList_whenSeriesLengthChosen_shouldReturnRandomSeries() {
    List<Integer> givenList = Arrays.asList(1, 2, 3, 4, 5, 6);
    Collections.shuffle(givenList);

    int randomSeriesLength = 3;

    List<Integer> randomSeries = givenList.subList(0, randomSeriesLength);
}

3. Conclusion

In this article, we explored the most efficient way of fetching random elements from a List instance for different scenarios.

Code examples can be found on GitHub.

Java Web Weekly, Issue 145

Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Ahead-of-Time (AOT) Compilation May Come to OpenJDK HotSpot in Java 9 [infoq.com]

If you’re into the lower level aspects of Java compilation, this one’s short, to the point and highly interesting.

>> Spring-Reactive samples – Mono and Single [java-allandsundry.com]

I like to see these “practical learning” articles starting to bubble up as we get closer and closer to the upcoming reactive support in Spring 5.

>> How Optional Breaks the Monad Laws and Why It Matters [sitepoint.com]

Hmm, I need to read this one a third time.

>> Java 9, OSGi and the Future of Modularity (Part 2) [infoq.com]

Modularity is clearly going to be the focus in Java 9 (and the reason the GA keeps getting pushed). This writeup (and the previous installment) are a solid way to get up to speed with the upcoming release.

>> Concurrency Puzzle – System.arraycopy() [javaspecialists.eu]

I like concurrency, and I like puzzles. Need I say more?

OK, here are some hints as well.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> The hardest thing in computer science [kaczmarzyk.net]

Naming things is hard – no two ways about it.

This writeup doesn’t just mention that (like so many other surface level articles) but actually goes into and explores the topic. There are definitely takeaways you can pull out from this one.

>> How to Choose the Right Log Management Tool? [takipi.com]

A system to handle, display and mine the log data produced by the system – highly useful and unfortunately so overlooked.

Keep in mind that any system will be better than just leaving the logs on the machine.

Also worth reading:

3. Musings

>> Humility in Software Development [mattblodgett.com]

This one takes seconds to read and a lot longer to think about.

>> Habits that Help Code Quality [daedtech.com]

Good code is a journey, and it’s well worth investing time and reading up on these kinds of experience based writeups.

The best code I wrote 5 years ago looks so obviously crappy to me now, which is exactly how it should be.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Listen to the charismatic tone of my deep, confident voice [dilbert.com]

>> Where you saying something about respect? [dilbert.com]

>> Did you close Skype? [dilbert.com]

5. Pick of the Week

“Figure out how this is actually your fault” is the single best piece of advice I got early on:

>> It’s always your fault [m.signalvnoise.com]

FileNotFoundException in Java

1. Introduction

In this article, we’re going to talk about a very common exception in Java – the FileNotFoundException.

We’ll cover the cases when it can occur, possible ways of treating it and some examples.

2. When Is The Exception Thrown?

As indicated on Java’s API documentation, this exception can be thrown when:

  • A file with the specified pathname does not exist
  • A file with the specified pathname does exist but is inaccessible for some reason (requested writing for a read-only file, or permissions don’t allow accessing the file)

3. How to Handle It?

First of all, taking into account that it extends java.io.IOException, which extends java.lang.Exception, you will need to deal with it using a try-catch block, as with any other checked Exception.

Then, what to do (business/logic related) inside the try-catch block actually depends on what you need to do.

You may need to:

  • Raise a business-specific exception: this may be a stop-execution error, but you will leave the decision to the upper layers of the application (don’t forget to include the original exception)
  • Alert the user with a dialogue or error message: this isn’t a stop-execution error, so just notifying is enough
  • Create a file: reading an optional configuration file, not finding it and creating a new one with default values
  • Create a file in another path: you need to write something and, if the first path is not available, you try with a fail-safe one (see the sketch after this list)
  • Just log the error: this error should not stop the execution, but you log it for future analysis
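
As a rough illustration of the fail-safe path option, the following sketch falls back to a second location when the first one cannot be opened (both paths here are made up for the example):

protected void writeWithFallback(String content) throws IOException {
    try (FileWriter writer = new FileWriter("primary/output.txt")) {
        writer.write(content);
    } catch (FileNotFoundException ex) {
        // the primary location is unavailable - try the fail-safe path instead
        try (FileWriter writer = new FileWriter("fallback/output.txt")) {
            writer.write(content);
        }
    }
}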

4. Examples

Now we’ll see some examples, all of which will be based on the following test class:

public class FileNotFoundExceptionTest {

    private static final Logger LOG
      = Logger.getLogger(FileNotFoundExceptionTest.class);
    private String fileName = Double.toString(Math.random());
    
    protected void readFailingFile() throws IOException {
        BufferedReader rd = new BufferedReader(new FileReader(new File(fileName)));
        rd.readLine();
        // no need to close file
    }

    class BusinessException extends RuntimeException {
        public BusinessException(String string, FileNotFoundException ex) {
            super(string, ex);
        }
    }
}

4.1. Logging the Exception

If you run the following code, it will “log” the error in the console:

@Test
public void logError() throws IOException {
    try {
        readFailingFile();
    } catch (FileNotFoundException ex) {
        LOG.error("Optional file " + fileName + " was not found.", ex);
    }
}

4.2. Raising a Business Specific Exception

Next, an example of raising a business-specific exception so that the error can be handled in the upper layers:

@Test(expected = BusinessException.class)
public void raiseBusinessSpecificException() throws IOException {
    try {
        readFailingFile();
    } catch (FileNotFoundException ex) {
        throw new BusinessException(
          "BusinessException: necessary file was not present.", ex);
    }
}

4.3. Creating a File

Finally, we’ll try to create the file so that it can be read (maybe for a thread that is continuously reading a file), again catching the exception and handling the possible second error:

@Test
public void createFile() throws IOException {
    try {
        readFailingFile();
    } catch (FileNotFoundException ex) {
        try {
            new File(fileName).createNewFile();
            readFailingFile();            
        } catch (IOException ioe) {
            throw new RuntimeException(
              "BusinessException: even creation is not possible.", ioe);
        }
    }
}

5. Conclusion

In this quick writeup, we’ve seen when a FileNotFoundException can occur and several options to handle it.

As always, the full examples are over on Github.

How to Read a File in Java

1. Introduction

In this article, we will see how we can read a file from the classpath, from a URL or from inside a JAR file, using standard Java classes.

2. Setup

We will use a set of test examples using core Java classes only, and in the tests, we’ll make assertions using Hamcrest matchers.

Tests will share a common readFromInputStream method that transforms an InputStream to a String for easier assertion of results:

private String readFromInputStream(InputStream inputStream) throws IOException {
    InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
    BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
    StringBuilder resultStringBuilder = new StringBuilder();
    String line;
    while ((line = bufferedReader.readLine()) != null) {
        resultStringBuilder.append(line);
        resultStringBuilder.append("\n");
    }
    
    bufferedReader.close();
    inputStreamReader.close();
    inputStream.close();
    return resultStringBuilder.toString();
}

Note that there are other ways of achieving the same result. You can consult this article for some alternatives.

3. Read File from Classpath

This section explains how to read a file that is available on a classpath. We will read the “fileTest.txt” available under src/main/resources:

@Test
public void givenFileNameAsAbsolutePath_whenUsingClasspath_thenFileData() throws Exception {
    String expectedData = "Hello World from fileTest.txt!!!";
    
    Class clazz = FileOperationsTest.class;
    InputStream inputStream = clazz.getResourceAsStream("/fileTest.txt");
    String data = readFromInputStream(inputStream);

    Assert.assertThat(data, containsString(expectedData));
}

In the above code snippet, we used the current class to load a file using the getResourceAsStream method and passed the absolute path of the file to load.

The same method is available on a ClassLoader instance as well:

ClassLoader classLoader = getClass().getClassLoader();
InputStream inputStream = classLoader.getResourceAsStream("fileTest.txt");
String data = readFromInputStream(inputStream);

We obtain the classLoader of the current class using getClass().getClassLoader().

The main difference is that when using the getResourceAsStream on a ClassLoader instance, the path is treated as absolute starting from the root of the classpath. 

When used against a Class instance, the path could be relative to the package, or an absolute path, which is hinted by the leading slash.
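
To make the difference concrete, here is a small side-by-side sketch (reusing the FileOperationsTest class from above):

Class<FileOperationsTest> clazz = FileOperationsTest.class;

// without a leading slash, the path is resolved relative to the class's package
InputStream relativeStream = clazz.getResourceAsStream("fileTest.txt");

// with a leading slash, the path is resolved from the root of the classpath
InputStream absoluteStream = clazz.getResourceAsStream("/fileTest.txt");

// a ClassLoader always starts at the classpath root, so no leading slash is needed
InputStream loaderStream = clazz.getClassLoader().getResourceAsStream("fileTest.txt");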

4. Read File with JDK7

In JDK7, the NIO package was significantly updated. Let’s look at an example using the Files class and the readAllBytes method. The readAllBytes method accepts a Path; the Path class can be considered an upgrade of java.io.File with some additional operations in place:

@Test
public void givenFilePath_whenUsingFilesReadAllBytes_thenFileData() throws Exception {
   String expectedData = "Hello World from fileTest.txt!!!";
       
   Path path = Paths.get(getClass().getClassLoader()
     .getResource("fileTest.txt").toURI());       
   byte[] fileBytes = Files.readAllBytes(path);
   String data = new String(fileBytes);

   Assert.assertEquals(expectedData, data.trim());
}

5. Read File with JDK8

JDK8 offers the lines() method inside the Files class. It returns a Stream of String elements, reading the file lazily and decoding the bytes using the UTF-8 charset by default.

Let’s look at an example:

@Test
public void givenFilePath_whenUsingFilesLines_thenFileData() throws Exception {
    String expectedData = "Hello World from fileTest.txt!!!";
        
    Path path = Paths.get(getClass().getClassLoader()
      .getResource("fileTest.txt").toURI());
        
    StringBuilder data = new StringBuilder();
    Stream<String> lines = Files.lines(path);
    lines.forEach(line -> data.append(line).append("\n"));
        
    Assert.assertEquals(expectedData, data.toString().trim());
}

6. Read File with FileUtils

Another common option is using the FileUtils class from the Apache Commons IO package. You’ll need to add the following dependency:

<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.5</version>
</dependency>

Make sure to check the latest dependency version here.

The file reading code may look like:

@Test
public void givenFileName_whenUsingFileUtils_thenFileData() throws Exception {
    String expectedData = "Hello World from fileTest.txt!!!";
        
    ClassLoader classLoader = getClass().getClassLoader();
    File file = new File(classLoader.getResource("fileTest.txt").getFile());
    String data = FileUtils.readFileToString(file);
        
    Assert.assertEquals(expectedData, data.trim());
}

Here we pass the File object to the method readFileToString() of FileUtils class. This utility class manages to load the content without the necessity of writing any boilerplate code to create an InputStream instance and read data.

7. Read Content from URL

To read content from a URL, we will use the “http://www.baeldung.com/” URL in our example:

@Test
public void givenURLName_whenUsingURL_thenFileData() throws Exception {
    String expectedData = "Baeldung";

    URL urlObject = new URL("http://www.baeldung.com/");
    URLConnection urlConnection = urlObject.openConnection();
    InputStream inputStream = urlConnection.getInputStream();
    String data = readFromInputStream(inputStream);

    Assert.assertThat(data, containsString(expectedData));
}

There are also alternative ways of connecting to a URL. Here we used the URL and URLConnection classes available in the standard JDK.

8. Read File from JAR

To read a file which is located inside a JAR file, we will need a JAR with a file inside it. For our example we will read “LICENSE.txt” from the “hamcrest-library-1.3.jar” file:

@Test
public void givenFileName_whenUsingJarFile_thenFileData() throws Exception {
    String expectedData = "BSD License";

    Class clazz = Matchers.class;
    InputStream inputStream = clazz.getResourceAsStream("/LICENSE.txt");
    String data = readFromInputStream(inputStream);

    Assert.assertThat(data, containsString(expectedData));
}

Here we want to load LICENSE.txt that resides in the Hamcrest library, so we will use the Matchers class, which helps to get the resource. The same file can be loaded using the class loader too.

9. Conclusion

In this tutorial, we have seen how to read a file from various locations such as the classpath, a URL or a JAR file.

We also demonstrated reading data from files using the core Java API.

You can find the source code in the following GitHub repo.

A Guide To Java Regular Expressions API

1. Overview

In this article, we will discuss the Java Regex API and how regular expressions can be used in Java programming language.

In the world of regular expressions, there are many different flavors to choose from, such as grep, Perl, Python, PHP, awk and many more.

This means that a regular expression that works in one programming language may not work in another. The regular expression syntax in Java is most similar to that found in Perl.

2. Setup

To use regular expressions in Java, we do not need any special setup. The JDK contains a special package java.util.regex totally dedicated to regex operations. We only need to import it into our code.

Moreover, the java.lang.String class also has inbuilt regex support that we commonly use in our code.

3. Java Regex Package

The java.util.regex package consists of three classes: Pattern, Matcher and PatternSyntaxException:

  • Pattern object is a compiled regex. The Pattern class provides no public constructors. To create a pattern, we must first invoke one of its public static compile methods, which will then return a Pattern object. These methods accept a regular expression as the first argument.
  • Matcher object interprets the pattern and performs match operations against an input String. It also defines no public constructors. We obtain a Matcher object by invoking the matcher method on a Pattern object.
  • PatternSyntaxException object is an unchecked exception that indicates a syntax error in a regular expression pattern.

We will explore these classes in detail; however, we must first understand how a regex is constructed in Java.

If you are already familiar with regex from a different environment, you may find certain differences, but they are minimal.

4. Simple Example

Let’s start with the simplest use case for a regex. As we noted earlier, when a regex is applied to a String, it may match zero or more times.

The most basic form of pattern matching supported by the java.util.regex API is the match of a String literal. For example, if the regular expression is foo and the input String is foo, the match will succeed because the Strings are identical:

@Test
public void givenText_whenSimpleRegexMatches_thenCorrect() {
    Pattern pattern = Pattern.compile("foo");
    Matcher matcher = pattern.matcher("foo");
 
    assertTrue(matcher.find());
}

We first create a Pattern object by calling its static compile method and passing it a pattern we want to use.

Then we create a Matcher object by calling the Pattern object’s matcher method and passing it the text we want to check for matches.

After that, we call the method find in the Matcher object.

The find method keeps advancing through the input text and returns true for every match, so we can use it to find the match count as well:

@Test
public void givenText_whenSimpleRegexMatchesTwice_thenCorrect() {
    Pattern pattern = Pattern.compile("foo");
    Matcher matcher = pattern.matcher("foofoo");
    int matches = 0;
    while (matcher.find()) {
        matches++;
    }
 
    assertEquals(matches, 2);
}

Since we will be running more tests, we can abstract the logic for finding the number of matches into a method called runTest:

public static int runTest(String regex, String text) {
    Pattern pattern = Pattern.compile(regex);
    Matcher matcher = pattern.matcher(text);
    int matches = 0;
    while (matcher.find()) {
        matches++;
    }
    return matches;
}

When we get 0 matches, the test should fail, otherwise, it should pass.

5. Meta Characters

Meta characters affect the way a pattern is matched, in a way adding logic to the search pattern. The Java API supports several metacharacters, the most straightforward being the dot “.” which matches any character:

@Test
public void givenText_whenMatchesWithDotMetach_thenCorrect() {
    int matches = runTest(".", "foo");
    
    assertTrue(matches > 0);
}

Consider the previous example, where the regex foo matched the text foo as well as the text foofoo, the latter two times. If we use the dot metacharacter in the regex, we will not get two matches in the second case:

@Test
public void givenRepeatedText_whenMatchesOnceWithDotMetach_thenCorrect() {
    int matches= runTest("foo.", "foofoo");
 
    assertEquals(matches, 1);
}

Notice the dot after foo in the regex. The matcher matches foo followed by any character, so after finding the first match (foof), the remaining oo cannot match again. That is why there is only a single match.

The API supports several other meta characters <([{\^-=$!|]})?*+.> which we will be looking into further in this article.
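
If we need to match one of these metacharacters literally, we can escape it with a backslash (doubled inside a Java String literal). A quick sketch using the runTest helper from above (the test name here is just illustrative):

@Test
public void givenText_whenMatchesEscapedDot_thenOnlyLiteralDotCounts() {
    // the regex \. matches a literal dot, not "any character"
    int matches = runTest("\\.", "foo.bar");

    assertEquals(matches, 1);
}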

6. Character Classes

Browsing through the official Pattern class specification, we will discover summaries of supported regex constructs. Under character classes, we have about 6 constructs.

6.1. OR Class

Constructed as [abc]. Any of the elements in the set is matched:

@Test
public void givenORSet_whenMatchesAny_thenCorrect() {
    int matches = runTest("[abc]", "b");
 
    assertEquals(matches, 1);
}

If they all appear in the text, each is matched separately with no regard to order:

@Test
public void givenORSet_whenMatchesAnyAndAll_thenCorrect() {
    int matches = runTest("[abc]", "cab");
 
    assertEquals(matches, 3);
}

They can also be alternated as part of a String. In the following example, when we create different words by alternating the first letter with each element of the set, they are all matched:

@Test
public void givenORSet_whenMatchesAllCombinations_thenCorrect() {
    int matches = runTest("[bcr]at", "bat cat rat");
 
    assertEquals(matches, 3);
}

6.2. NOR Class

The above set is negated by adding a caret as the first element:

@Test
public void givenNORSet_whenMatchesNon_thenCorrect() {
    int matches = runTest("[^abc]", "g");
 
    assertTrue(matches > 0);
}

Another case:

@Test
public void givenNORSet_whenMatchesAllExceptElements_thenCorrect() {
    int matches = runTest("[^bcr]at", "sat mat eat");
 
    assertTrue(matches > 0);
}

6.3. Range Class

We can define a class that specifies a range within which the matched text should fall using a hyphen (-); likewise, we can also negate a range.

Matching uppercase letters:

@Test
public void givenUpperCaseRange_whenMatchesUpperCase_
  thenCorrect() {
    int matches = runTest(
      "[A-Z]", "Two Uppercase alphabets 34 overall");
 
    assertEquals(matches, 2);
}

Matching lowercase letters:

@Test
public void givenLowerCaseRange_whenMatchesLowerCase_
  thenCorrect() {
    int matches = runTest(
      "[a-z]", "Two Uppercase alphabets 34 overall");
 
    assertEquals(matches, 26);
}

Matching both upper case and lower case letters:

@Test
public void givenBothLowerAndUpperCaseRange_
  whenMatchesAllLetters_thenCorrect() {
    int matches = runTest(
      "[a-zA-Z]", "Two Uppercase alphabets 34 overall");
 
    assertEquals(matches, 28);
}

Matching a given range of numbers:

@Test
public void givenNumberRange_whenMatchesAccurately_
  thenCorrect() {
    int matches = runTest(
      "[1-5]", "Two Uppercase alphabets 34 overall");
 
    assertEquals(matches, 2);
}

Matching another range of numbers:

@Test
public void givenNumberRange_whenMatchesAccurately_
  thenCorrect2(){
    int matches = runTest(
      "[30-35]", "Two Uppercase alphabets 34 overall");
 
    assertEquals(matches, 1);
}

6.4. Union Class

A union character class is a result of combining two or more character classes:

@Test
public void givenTwoSets_whenMatchesUnion_thenCorrect() {
    int matches = runTest("[1-3[7-9]]", "123456789");
 
    assertEquals(matches, 6);
}

The above test will only match 6 out of the 9 integers because the union set skips 4, 5 and 6.

6.5. Intersection Class

Similar to the union class, this class results from picking common elements between two or more sets. To apply intersection, we use the &&:

@Test
public void givenTwoSets_whenMatchesIntersection_thenCorrect() {
    int matches = runTest("[1-6&&[3-9]]", "123456789");
 
    assertEquals(matches, 4);
}

We get 4 matches because the intersection of the two sets has only 4 elements.

6.6. Subtraction Class

We can use subtraction to negate one or more character classes, for example matching a set of odd decimal numbers:

@Test
public void givenSetWithSubtraction_whenMatchesAccurately_thenCorrect() {
    int matches = runTest("[0-9&&[^2468]]", "123456789");
 
    assertEquals(matches, 5);
}

Only 1,3,5,7,9 will be matched.

7. Predefined Character Classes

The Java regex API also accepts predefined character classes. Some of the above character classes can be expressed in a shorter form, though this makes the code less intuitive. One special aspect of the Java version of these classes is the escape character.

As we will see, most of these constructs start with a backslash, which has a special meaning in Java String literals. For these to be compiled by the Pattern class, the leading backslash must be escaped, i.e. \d becomes \\d.
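
As a quick illustration of our own (not one of the article’s tests), the missing backslash is caught at compile time, since \d is not a valid escape sequence in a Java String literal:

// Pattern.compile("\d");               // does not compile: illegal escape character
Pattern digit = Pattern.compile("\\d"); // compiles; the regex engine sees \d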

Matching digits, equivalent to [0-9]:

@Test
public void givenDigits_whenMatches_thenCorrect() {
    int matches = runTest("\\d", "123");
 
    assertEquals(matches, 3);
}

Matching non-digits, equivalent to [^0-9]:

@Test
public void givenNonDigits_whenMatches_thenCorrect() {
    int matches = runTest("\\D", "a6c");
 
    assertEquals(matches, 2);
}

Matching white space:

@Test
public void givenWhiteSpace_whenMatches_thenCorrect() {
    int matches = runTest("\\s", "a c");
 
    assertEquals(matches, 1);
}

Matching non-white space:

@Test
public void givenNonWhiteSpace_whenMatches_thenCorrect() {
    int matches = runTest("\\S", "a c");
 
    assertEquals(matches, 2);
}

Matching a word character, equivalent to [a-zA-Z_0-9]:

@Test
public void givenWordCharacter_whenMatches_thenCorrect() {
    int matches = runTest("\\w", "hi!");
 
    assertEquals(matches, 2);
}

Matching a non-word character:

@Test
public void givenNonWordCharacter_whenMatches_thenCorrect() {
    int matches = runTest("\\W", "hi!");
 
    assertEquals(matches, 1);
}

8. Quantifiers

The Java regex API also allows us to use quantifiers. These enable us to further tweak the match’s behavior by specifying the number of occurrences to match against.

To match a text zero or one time, we use the ? quantifier:

@Test
public void givenZeroOrOneQuantifier_whenMatches_thenCorrect() {
    int matches = runTest("\\a?", "hi");
 
    assertEquals(matches, 3);
}

Alternatively, we can use the brace syntax, also supported by the Java regex API:

@Test
public void givenZeroOrOneQuantifier_whenMatches_thenCorrect2() {
    int matches = runTest("\\a{0,1}", "hi");
 
    assertEquals(matches, 3);
}

This example introduces the concept of zero-length matches. It so happens that if a quantifier’s threshold for matching is zero, it always matches everything in the text including an empty String at the end of every input. This means that even if the input is empty, it will return one zero-length match.

This explains why we get 3 matches in the above example despite having a String of length two. The third match is zero-length empty String.
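
To make this concrete, a small sketch of our own using the runTest helper shows that even an empty input yields exactly one zero-length match:

@Test
public void givenEmptyInput_whenMatchesZeroLength_thenCorrect() {
    int matches = runTest("\\a?", "");

    assertEquals(matches, 1);
}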

To match a text zero or more times, we use the * quantifier, which behaves similarly to ?:

@Test
public void givenZeroOrManyQuantifier_whenMatches_thenCorrect() {
     int matches = runTest("\\a*", "hi");
 
     assertEquals(matches, 3);
}

Supported alternative:

@Test
public void givenZeroOrManyQuantifier_whenMatches_thenCorrect2() {
    int matches = runTest("\\a{0,}", "hi");
 
    assertEquals(matches, 3);
}

The quantifier with a difference is +; it has a matching threshold of 1. If the required String does not occur at all, there will be no match, not even a zero-length String:

@Test
public void givenOneOrManyQuantifier_whenMatches_thenCorrect() {
    int matches = runTest("\\a+", "hi");
 
    assertFalse(matches > 0);
}

Supported alternative:

@Test
public void givenOneOrManyQuantifier_whenMatches_thenCorrect2() {
    int matches = runTest("\\a{1,}", "hi");
 
    assertFalse(matches > 0);
}

As it is in Perl and other languages, the brace syntax can be used to match a given text a number of times:

@Test
public void givenBraceQuantifier_whenMatches_thenCorrect() {
    int matches = runTest("a{3}", "aaaaaa");
 
    assertEquals(matches, 2);
}

In the above example, we get two matches since a match occurs only if a appears three times in a row. However, in the next test we won’t get a match since the text only appears two times in a row:

@Test
public void givenBraceQuantifier_whenFailsToMatch_thenCorrect() {
    int matches = runTest("a{3}", "aa");
 
    assertFalse(matches > 0);
}

When we use a range in the brace, the match will be greedy, matching from the higher end of the range:

@Test
public void givenBraceQuantifierWithRange_whenMatches_thenCorrect() {
    int matches = runTest("a{2,3}", "aaaa");
 
    assertEquals(matches, 1);
}

We’ve specified at least two occurrences but not exceeding three, so we get a single match: the matcher sees a single aaa and a lone a, which can’t be matched.

However, the API allows us to specify a lazy or reluctant approach, so that the matcher starts from the lower end of the range, in which case it matches two occurrences, aa and aa:

@Test
public void givenBraceQuantifierWithRange_whenMatchesLazily_thenCorrect() {
    int matches = runTest("a{2,3}?", "aaaa");
 
    assertEquals(matches, 2);
}

9. Capturing Groups

The API also allows us to treat multiple characters as a single unit through capturing groups.

The API attaches numbers to the capturing groups and allows back referencing using these numbers.

In this section, we will see a few examples on how to use capturing groups in Java regex API.

Let’s use a capturing group that matches only when an input text contains two digits next to each other:

@Test
public void givenCapturingGroup_whenMatches_thenCorrect() {
    int matches = runTest("(\\d\\d)", "12");
 
    assertEquals(matches, 1);
}

The number attached to the above group is 1. We can use a back reference to tell the matcher that we want to match another occurrence of the portion of text the group matched. This way, instead of:

@Test
public void givenCapturingGroup_whenMatches_thenCorrect2() {
    int matches = runTest("(\\d\\d)", "1212");
 
    assertEquals(matches, 2);
}

where there are two separate matches for the input, we can have one match that spans the entire length of the input by using a back reference:

@Test
public void givenCapturingGroup_whenMatchesWithBackReference_
  thenCorrect() {
    int matches = runTest("(\\d\\d)\\1", "1212");
 
    assertEquals(matches, 1);
}

Without back referencing, we would have to repeat the regex to achieve the same result:

@Test
public void givenCapturingGroup_whenMatches_thenCorrect3() {
    int matches = runTest("(\\d\\d)(\\d\\d)", "1212");
 
    assertEquals(matches, 1);
}

Similarly, for any other number of repetitions, back referencing can make the matcher see the input as a single match:

@Test
public void givenCapturingGroup_whenMatchesWithBackReference_
  thenCorrect2() {
    int matches = runTest("(\\d\\d)\\1\\1\\1", "12121212");
 
    assertEquals(matches, 1);
}

But if you change even the last digit, the match will fail:

@Test
public void givenCapturingGroupAndWrongInput_
  whenMatchFailsWithBackReference_thenCorrect() {
    int matches = runTest("(\\d\\d)\\1", "1213");
 
    assertFalse(matches > 0);
}

It is important not to forget to escape the backslashes; this is crucial in Java syntax.

10. Boundary Matchers

The Java regex API also supports boundary matching. If we care about where exactly in the input text the match should occur, then this is what we are looking for. With the previous examples, all we cared about was whether a match was found or not.

To match only when the required regex is true at the beginning of the text, we use the caret ^. 

This test will pass since the text dog can be found at the beginning:

@Test
public void givenText_whenMatchesAtBeginning_thenCorrect() {
    int matches = runTest("^dog", "dogs are friendly");
 
    assertTrue(matches > 0);
}

The following test will fail:

@Test
public void givenTextAndWrongInput_whenMatchFailsAtBeginning_
  thenCorrect() {
    int matches = runTest("^dog", "are dogs are friendly?");
 
    assertFalse(matches > 0);
}

To match only when the required regex is true at the end of the text, we use the dollar character $. A match will be found in the following case:

@Test
public void givenText_whenMatchesAtEnd_thenCorrect() {
    int matches = runTest("dog$", "Man's best friend is a dog");
 
    assertTrue(matches > 0);
}

And no match will be found here:

@Test
public void givenTextAndWrongInput_whenMatchFailsAtEnd_thenCorrect() {
    int matches = runTest("dog$", "is a dog man's best friend?");
 
    assertFalse(matches > 0);
}

If we want a match only when the required text is found at a word boundary, we use \\b regex at the beginning and end of the regex:

Space is a word boundary:

@Test
public void givenText_whenMatchesAtWordBoundary_thenCorrect() {
    int matches = runTest("\\bdog\\b", "a dog is friendly");
 
    assertTrue(matches > 0);
}

The empty string at the beginning of a line is also a word boundary:

@Test
public void givenText_whenMatchesAtWordBoundary_thenCorrect2() {
    int matches = runTest("\\bdog\\b", "dog is man's best friend");
 
    assertTrue(matches > 0);
}

These tests pass because the beginning of a String, as well as the space between one word and another, marks a word boundary; however, the following test shows the opposite:

@Test
public void givenWrongText_whenMatchFailsAtWordBoundary_thenCorrect() {
    int matches = runTest("\\bdog\\b", "snoop dogg is a rapper");
 
    assertFalse(matches > 0);
}

Two word characters appearing in a row do not mark a word boundary, but we can make the test pass by changing the end of the regex to look for a non-word boundary:

@Test
public void givenText_whenMatchesAtWordAndNonBoundary_thenCorrect() {
    int matches = runTest("\\bdog\\B", "snoop dogg is a rapper");
    assertTrue(matches > 0);
}

11. Pattern Class Methods

Previously, we have only created Pattern objects in a basic way. However, this class has another variant of the compile method that accepts a set of flags alongside the regex argument affecting the way the pattern is matched.

These flags are simply abstracted integer values. Let’s overload the runTest method in the test class so that it can take a flag as the third argument:

public static int runTest(String regex, String text, int flags) {
    Pattern pattern = Pattern.compile(regex, flags);
    Matcher matcher = pattern.matcher(text);
    int matches = 0;
    while (matcher.find()){
        matches++;
    }
    return matches;
}
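
To keep the two overloads consistent, the original two-argument runTest could simply delegate to this one with no flags set (a small refactoring sketch of our own, not shown in the article):

public static int runTest(String regex, String text) {
    // 0 means no flags, so this behaves exactly like Pattern.compile(regex)
    return runTest(regex, text, 0);
}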

In this section, we will look at the different supported flags and how they are used.

Pattern.CANON_EQ

This flag enables canonical equivalence. When specified, two characters will be considered to match if, and only if, their full canonical decompositions match.

The concept of canonical equivalence is beyond the scope of this article, you can read more about it here.

Consider the accented Unicode character é. Its composite code point is U+00E9. However, Unicode also has separate code points for its component characters: e (U+0065) and the acute accent (U+0301). In this case, the composite character U+00E9 is indistinguishable from the two-character sequence U+0065 U+0301.

By default, matching does not take canonical equivalence into account:

@Test
public void givenRegexWithoutCanonEq_whenMatchFailsOnEquivalentUnicode_thenCorrect() {
    int matches = runTest("\u00E9", "\u0065\u0301");
 
    assertFalse(matches > 0);
}

But if we add the flag, then the test will pass:

@Test
public void givenRegexWithCanonEq_whenMatchesOnEquivalentUnicode_thenCorrect() {
    int matches = runTest("\u00E9", "\u0065\u0301", Pattern.CANON_EQ);
 
    assertTrue(matches > 0);
}

Pattern.CASE_INSENSITIVE

This flag enables matching regardless of case. By default matching takes case into account:

@Test
public void givenRegexWithDefaultMatcher_whenMatchFailsOnDifferentCases_thenCorrect() {
    int matches = runTest("dog", "This is a Dog");
 
    assertFalse(matches > 0);
}

So using this flag, we can change the default behavior:

@Test
public void givenRegexWithCaseInsensitiveMatcher
  _whenMatchesOnDifferentCases_thenCorrect() {
    int matches = runTest(
      "dog", "This is a Dog", Pattern.CASE_INSENSITIVE);
 
    assertTrue(matches > 0);
}

We can also use the equivalent, embedded flag expression to achieve the same result:

@Test
public void givenRegexWithEmbeddedCaseInsensitiveMatcher
  _whenMatchesOnDifferentCases_thenCorrect() {
    int matches = runTest("(?i)dog", "This is a Dog");
 
    assertTrue(matches > 0);
}

Pattern.COMMENTS

The Java API allows one to include comments using # in the regex. This can help in documenting complex regex that may not be immediately obvious to another programmer.

The comments flag makes the matcher ignore any white space or comments in the regex and only consider the pattern. In the default matching mode the following test would fail:

@Test
public void givenRegexWithComments_whenMatchFailsWithoutFlag_thenCorrect() {
    int matches = runTest(
      "dog$  #check for word dog at end of text", "This is a dog");
 
    assertFalse(matches > 0);
}

This is because the matcher will look for the entire regex in the input text, including the spaces and the # character. When we use the flag, it will ignore the extra spaces, and any text starting with # will be treated as a comment and ignored for each line:

@Test
public void givenRegexWithComments_whenMatchesWithFlag_thenCorrect() {
    int matches = runTest(
      "dog$  #check end of text","This is a dog", Pattern.COMMENTS);
 
    assertTrue(matches > 0);
}

There is also an alternative embedded flag expression for this:

@Test
public void givenRegexWithComments_whenMatchesWithEmbeddedFlag_thenCorrect() {
    int matches = runTest(
      "(?x)dog$  #check end of text", "This is a dog");
 
    assertTrue(matches > 0);
}

Pattern.DOTALL

By default, when we use the dot “.” expression in regex, we are matching every character in the input String until we encounter a new line character.

Using this flag, the match will include the line terminator as well. We will understand better with the following examples. These examples will be a little different. Since we are interested in asserting against the matched String, we will use matcher‘s group method which returns the previous match.

First, we will see the default behavior:

@Test
public void givenRegexWithLineTerminator_whenMatchFails_thenCorrect() {
    Pattern pattern = Pattern.compile("(.*)");
    Matcher matcher = pattern.matcher(
      "this is a text" + System.getProperty("line.separator") 
        + " continued on another line");
    matcher.find();
 
    assertEquals("this is a text", matcher.group(1));
}

As we can see, only the first part of the input before the line terminator is matched.

Now in dotall mode, the entire text including the line terminator will be matched:

@Test
public void givenRegexWithLineTerminator_whenMatchesWithDotall_thenCorrect() {
    Pattern pattern = Pattern.compile("(.*)", Pattern.DOTALL);
    Matcher matcher = pattern.matcher(
      "this is a text" + System.getProperty("line.separator") 
        + " continued on another line");
    matcher.find();
    assertEquals(
      "this is a text" + System.getProperty("line.separator") 
        + " continued on another line", matcher.group(1));
}

We can also use an embedded flag expression to enable dotall mode:

@Test
public void givenRegexWithLineTerminator_whenMatchesWithEmbeddedDotall
  _thenCorrect() {
    
    Pattern pattern = Pattern.compile("(?s)(.*)");
    Matcher matcher = pattern.matcher(
      "this is a text" + System.getProperty("line.separator") 
        + " continued on another line");
    matcher.find();
 
    assertEquals(
      "this is a text" + System.getProperty("line.separator") 
        + " continued on another line", matcher.group(1));
}

Pattern.LITERAL

When in this mode, matcher gives no special meaning to any metacharacters, escape characters or regex syntax. Without this flag, the matcher will match the following regex against any input String:

@Test
public void givenRegex_whenMatchesWithoutLiteralFlag_thenCorrect() {
    int matches = runTest("(.*)", "text");
 
    assertTrue(matches > 0);
}

This is the default behavior we have been seeing in all the examples. However, with this flag, no match will be found, since the matcher will be looking for (.*) instead of interpreting it:

@Test
public void givenRegex_whenMatchFailsWithLiteralFlag_thenCorrect() {
    int matches = runTest("(.*)", "text", Pattern.LITERAL);
 
    assertFalse(matches > 0);
}

Now if we add the required string, the test will pass:

@Test
public void givenRegex_whenMatchesWithLiteralFlag_thenCorrect() {
    int matches = runTest("(.*)", "text(.*)", Pattern.LITERAL);
 
    assertTrue(matches > 0);
}

There is no embedded flag character for enabling literal parsing.
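
That said, if we’d rather not pass the flag, Pattern.quote gives us a comparable effect by producing a literal pattern; a small sketch of our own:

@Test
public void givenRegex_whenMatchesWithQuotedPattern_thenCorrect() {
    int matches = runTest(Pattern.quote("(.*)"), "text(.*)");

    assertTrue(matches > 0);
}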

Pattern.MULTILINE

By default, the ^ and $ metacharacters match at the very beginning and very end of the entire input String, respectively. The matcher disregards any line terminators:

@Test
public void givenRegex_whenMatchFailsWithoutMultilineFlag_thenCorrect() {
    int matches = runTest(
      "dog$", "This is a dog" + System.getProperty("line.separator") 
      + "this is a fox");
 
    assertFalse(matches > 0);
}

The match fails because the matcher searches for dog at the end of the entire String but the dog is present at the end of the first line of the string.

However, with the flag, the same test will pass since the matcher now takes into account line terminators. So the String dog is found just before the line terminates, hence success:

@Test
public void givenRegex_whenMatchesWithMultilineFlag_thenCorrect() {
    int matches = runTest(
      "dog$", "This is a dog" + System.getProperty("line.separator") 
      + "this is a fox", Pattern.MULTILINE);
 
    assertTrue(matches > 0);
}

Here is the embedded flag version:

@Test
public void givenRegex_whenMatchesWithEmbeddedMultilineFlag_
  thenCorrect() {
    int matches = runTest(
      "(?m)dog$", "This is a dog" + System.getProperty("line.separator") 
      + "this is a fox");
 
    assertTrue(matches > 0);
}

12. Matcher Class Methods

In this section, we will look at some useful methods of the Matcher class. We will group them according to functionality for clarity.

12.1. Index Methods

Index methods provide useful index values that show precisely where the match was found in the input String. In the following test, we will confirm the start and end indices of the match for dog in the input String:

@Test
public void givenMatch_whenGetsIndices_thenCorrect() {
    Pattern pattern = Pattern.compile("dog");
    Matcher matcher = pattern.matcher("This dog is mine");
    matcher.find();
 
    assertEquals(5, matcher.start());
    assertEquals(8, matcher.end());
}

12.2. Study Methods

Study methods go through the input String and return a boolean indicating whether or not the pattern is found. Commonly used are matches and lookingAt methods.

The matches and lookingAt methods both attempt to match an input sequence against a pattern. The difference is that matches requires the entire input sequence to be matched, while lookingAt does not.

Both methods start at the beginning of the input String:

@Test
public void whenStudyMethodsWork_thenCorrect() {
    Pattern pattern = Pattern.compile("dog");
    Matcher matcher = pattern.matcher("dogs are friendly");
 
    assertTrue(matcher.lookingAt());
    assertFalse(matcher.matches());
}

The matches method will return true in a case like so:

@Test
public void whenMatchesStudyMethodWorks_thenCorrect() {
    Pattern pattern = Pattern.compile("dog");
    Matcher matcher = pattern.matcher("dog");
 
    assertTrue(matcher.matches());
}

12.3. Replacement Methods

Replacement methods are useful to replace text in an input string. The common ones are replaceFirst and replaceAll.

The replaceFirst and replaceAll methods replace the text that matches a given regular expression. As their names indicate, replaceFirst replaces the first occurrence, and replaceAll replaces all occurrences:

@Test
public void whenReplaceFirstWorks_thenCorrect() {
    Pattern pattern = Pattern.compile("dog");
    Matcher matcher = pattern.matcher(
      "dogs are domestic animals, dogs are friendly");
    String newStr = matcher.replaceFirst("cat");
 
    assertEquals(
      "cats are domestic animals, dogs are friendly", newStr);
}

Replace all occurrences:

@Test
public void whenReplaceAllWorks_thenCorrect() {
    Pattern pattern = Pattern.compile("dog");
    Matcher matcher = pattern.matcher(
      "dogs are domestic animals, dogs are friendly");
    String newStr = matcher.replaceAll("cat");
 
    assertEquals("cats are domestic animals, cats are friendly", newStr);
}

13. Conclusion

In this article, we have learned how to use regular expressions in Java and also explored the most important features of the java.util.regex package.

The full source code for the project including all the code samples used here can be found in the GitHub project.

I just released the Master Class of "Learn Spring Security" Course:

>> CHECK OUT THE COURSE


Introduction To Play In Java

$
0
0

I just released the Master Class of "Learn Spring Security":

>> CHECK OUT THE COURSE

1. Overview

The purpose of this intro tutorial is to explore the Play Framework and teach you how to create a web application with it.

Play is a high-productivity web application framework for programming languages whose code is compiled and run on the JVM, mainly Java and Scala. It integrates the components and APIs we need for modern web application development.

2. Play Framework Setup

Let’s head over to Play framework’s official page and download the latest version of the distribution. At the time of this tutorial, the latest is version 2.5.

This will download a small starter package with a script called activator inside the bin folder. This script will set up the rest of the framework for us incrementally, as and when we need the different components.

To make things simple, we will need to add activator to the PATH variable so that we can run it from the console.

3. Anatomy of Play Applications

In this section, we will get a better understanding of how a Play application is structured and what each file and directory in that structure is used for.

If you would like to challenge yourself to a simple example right away, then skip to the next section.

The app Directory

This directory contains the application sources and all executable resources: Java source code, web templates and compiled assets’ sources. It is the core part of our application.

It contains some important subdirectories each of which packages one part of the MVC architectural pattern, plus optional files and directories:

  • models – this is the application business layer, the files in this package will probably model our database tables and enable us to access the persistence layer
  • views – this package contains HTML templates that can be rendered to the browser
  • controllers – a subdirectory in which we have our controllers. Controllers are Java source files which contain actions to be executed for each API call. Actions are public methods which process HTTP requests and return results of the same as HTTP responses
  • assets – a subdirectory which contains compiled assets such as CSS and JavaScript

The above naming conventions are flexible; we can create our own packages, e.g. an app/utils package. We can also customize the package naming, e.g. app/com/baeldung/controllers.

The bin Directory

Contains the activator scripts we will be using to run our application. Apart from run, it offers other utility commands such as clean, compile and test.

The conf Directory

Where we configure the application. It contains the two most important files.

The application.conf file is where we configure things like logging, database connections, which port the server runs on, and more.

The routes file is where we define our application’s routes and mappings from HTTP URLs to controller actions.

The project Directory

Contains the files that configure the build process based on SBT (Scala Build Tool).

The public Directory

Contains public assets such as CSS, javascript and image files for rendering to the browser.

The test Directory

This directory contains unit test and integration test suites for the Play application.

The build.sbt File

The application build script.

4. Simple Example

In this section, we will create a very basic example of a web application and use it to familiarize ourselves with the fundamentals of Play framework.

Remember that we just set up a skeleton of the framework. Therefore, the first time we execute a new command, Play will need to download all the related dependencies to fulfill the command; this could take quite a while depending on our internet connection.

The next time we execute the same command, Play will have what it needs, so it won’t need to download anything new and will be able to execute the command faster.

Let’s open a command prompt, navigate to our location of choice and execute the following command:

activator new webapp play-java

The above command will create a new web application called webapp from a Java template. In the same way, Scala developers would use play-scala instead of play-java.

When we check webapp directory after the console process has completed, we should see a directory structure as below:


We should be able to understand this now as in a previous section, we inspected each directory and file of webapp individually.

From the console, let’s cd into webapp directory to run the new application. Execute the following command from the project root:

activator run

The above command, after completion of execution, will spawn a server on port number 9000 to expose our API which we can access by loading http://localhost:9000. We should see the following page loaded in the browser:


Our new API has four endpoints which we can now try out in turn from the browser. The first one which we have just loaded is the root endpoint which loads a startup documentation of webapp as we may have already noticed.

The second one, accessible through http://localhost:9000/message will greet us with a Hi! message.

The third at http://localhost:9000/count will load an integer which is incremented each time we hit this endpoint.

The last one, at http://localhost:9000/assets, is meant for downloading files from the server by adding a file name to the path. Since we have not created any files on the server yet, it will load an informative error page.

5. Actions and Controllers

An action is basically a Java method inside a controller class that processes request parameters and produces a result to be sent to the client.

A controller is a Java class that extends play.mvc.Controller that logically groups together actions that may be related to results they produce for the client.

Let’s head over to webapp/app/controllers and pay attention to HomeController.java and CountController.java. Each one contains a single action.

HomeController‘s index action returns a web page with startup tutorials. This web page is the default index template in the views package, and a text is passed to it to be displayed at the top of our page.

Let’s change the text a little:

public Result index() {
    return ok(index.render("REST API with Play by Baeldung"));
}

and reload the page in the browser:


We can do away with the template completely so that we only deal with what we can control and what we are interested in:

public Result index() {
    return ok("REST API with Play by Baeldung");
}

in which case when we reload, we will have only this text in the browser, without any HTML or styling:

REST API with Play by Baeldung

We could just as well include our own HTML, like wrapping the text in <h1></h1> header tags, and receive the appropriate result on reloading; feel free to play around with it. But there is a catch! Let’s see.

CountController uses an inbuilt counter to keep track of how many times the /count endpoint has been hit via the count action:

public Result count() {
    return ok(Integer.toString(counter.nextCount()));
}

Let’s customize it a bit:

public Result count() {
    return ok("<h1>API hit count:" + Integer.toString(
    counter.nextCount())+"</h1>").as("text/html");
}

When we reload, instead of getting a tiny integer, we now have a large text with HTML header formatting.

We have manipulated our response by customizing the response type. We’ll look into this feature in more detail in a later section.

We have also seen two other important features of Play Framework.

First, our code changes are compiled on the fly and reloading the browser reflects the most recent version of our code.

Secondly, Play provides us with helper methods for standard HTTP responses in the play.mvc.Results class. For example, the ok() method returns an HTTP 200 OK response along with the response body we pass to it as a parameter.

There are more such helper methods, like pageNotFound() and badRequest(), which we will explore in the following section.

6. Manipulating Results

We have been benefiting from Play’s content negotiation feature without even realizing it. Play automatically infers the response content type from the response body. We have been returning:

return ok("text to display");

And Play would automatically set Content-Type header to text/plain. We can take over control and customize this.

We customized the response for CountController.count action to text/html like so:

public Result count() {
    return ok("<h1>API hit count:"+Integer.toString
      (counter.nextCount()) + "</h1>").as("text/html");
}

We could have done the same using the response() method of the superclass Controller, like so:

response().setContentType("text/html");
return ok("html formatted text");

This pattern cuts across all kinds of content types. Depending on the format of the data we pass to the helper method ok, we can replace text/html by text/plain or application/json.

If we need to set response headers, the response() helper method still comes in handy:

response().setHeader("headerName","value");

Apart from the ok() helper method, the play.mvc.Results class offers other helper methods for standard HTTP responses, such as pageNotFound(), as we saw in the Actions and Controllers section.
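
As a hedged sketch of our own (the find action and its logic below are made up for illustration and are not part of the generated template), using these helpers is no different from using ok():

public Result find(String name) {
    if (name == null || name.isEmpty()) {
        // returns an HTTP 400 Bad Request with a plain-text body
        return badRequest("name parameter is required");
    }
    // returns an HTTP 200 OK
    return ok("looking up " + name);
}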

7. Conclusion

In this article, we have explored the basics of Play Framework. We have also been able to create a basic Java web application using Play.

To get full source code, you can check out the project over on GitHub.

I just released the Master Class of "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Spring and Thymeleaf 3: Expressions

$
0
0

1. Introduction

Thymeleaf is a Java template engine for processing and creating HTML, XML, JavaScript, CSS and plain text. For an intro to Thymeleaf and Spring, have a look at this write-up.

Besides these basic functions, Thymeleaf offers us a set of utility objects that will help us perform common tasks in our application.

In this article, we’ll discuss a core feature in Thymeleaf 3.0 – Expression Utility Objects in Spring MVC applications.  More specifically, we’ll cover the topic of processing dates, calendars, strings, objects and much more.

2. Maven Dependencies

First, let us see the required configuration needed to integrate Thymeleaf with Spring. The thymeleaf-spring library is required in our dependencies:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf-spring4</artifactId>
    <version>3.0.1.RELEASE</version>
</dependency>

Note that, for a Spring 3 project, the thymeleaf-spring3 library must be used instead of thymeleaf-spring4. The latest version of the dependencies can  be found here.

3. Expression Utility Objects

Before looking at the core focus of this writeup, if you want to take a step back and see how to configure Thymeleaf 3.0 in your web app project, have a look at this tutorial.

For the purpose of the current article, we created a Spring controller and an HTML file to test out all the features we’re going to be discussing. Below is the complete list of available helper objects and their functions; a short sketch using a couple of them follows the list:

  • #dates: utility methods for java.util.Date objects
  • #calendars: similar to #dates, used for java.util.Calendar objects
  • #numbers: utility methods for formatting numeric objects
  • #strings: utility methods for String objects
  • #objects: utility methods for Java Object class in general
  • #bools: utility methods for boolean evaluation
  • #arrays: utility methods for arrays
  • #lists: utility methods for lists
  • #sets: utility methods for sets
  • #maps: utility methods for maps
  • #aggregates: utility methods for creating aggregates on arrays or collections
  • #messages: utility methods for obtaining externalized messages inside variables expressions
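
As a quick taste of the collection-oriented helpers that are not covered in detail below, a small sketch (assuming a list model attribute named myList, which is not part of the article’s controller) could look like this:

<p th:text="${#lists.size(myList)}"></p>
<p th:text="${#lists.isEmpty(myList)}"></p>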

3.1. Dates Objects

The first function that we want to discuss is processing of the java.util.Date objects. The expression utility objects responsible for date processing start with #dates.functionName(). The first function that we want to cover is formatting of a Date object (which is added to the Spring model parameters).

Let’s say we want to use ISO8601 format:

<p th:text="${#dates.formatISO(date)}"></p>

No matter how our date was set on the back-end side, it will be displayed according to this standard. What’s more, if we want a specific format, we can specify it manually:

<p th:text="${#dates.format(date, 'dd-MM-yyyy HH:mm')}"></p>

The function takes two variables as parameters: Date and its format.

Finally, here are a few similarly useful functions we can use:

<p th:text="${#dates.dayOfWeekName(date)}"></p>
<p th:text="${#dates.createNow()}"></p>
<p th:text="${#dates.createToday()}"></p>

In the first, we will receive the name of the day of the week, in the second we will create a new Date object, and finally we will create a new Date with time set to 00:00.

3.2. Calendar Objects

Calendar utilities are very similar to dates processing, except that we are using an instance of the java.util.Calendar object:

<p th:text="${#calendars.formatISO(calendar)}"></p>
<p th:text="${#calendars.format(calendar, 'dd-MM-yyyy HH:mm')}"></p>
<p th:text="${#calendars.dayOfWeekName(calendar)}"></p>

The only difference is when we want to create new Calendar instance:

<p th:text="${#calendars.createNow().getTime()}"></p>
<p th:text="${#calendars.createToday().getFirstDayOfWeek()}"></p>

Please note, that we may use any Calendar class method in order to get requested data.

3.3. Numbers Processing

Another very handy feature is number processing. Let’s focus on a num variable, randomly created with a double type:

<p th:text="${#numbers.formatDecimal(num,2,3)}"></p>
<p th:text="${#numbers.formatDecimal(num,2,3,'COMMA')}"></p>

In the first line, we format decimal number by setting minimum integer digits and exact decimal digits. In the second one, in addition to integer and decimal digits, we specified the decimal separator. The options are POINT, COMMA, WHITESPACE, NONE or DEFAULT (by locale).

There is one more function that we want to present in this paragraph. It is creation of a sequence of integer numbers:

<p th:each="number: ${#numbers.sequence(0,2)}">
    <span th:text="${number}"></span>
</p>
<p th:each="number: ${#numbers.sequence(0,4,2)}">
    <span th:text="${number}"></span>
</p>

In the first example, we had Thymeleaf generate a sequence from 0 to 2, whereas in the second, in addition to the minimum and maximum values, we provided a step definition (in this example, the values will change by two).

Please note, that the interval is closed on both sides.

3.4. Strings Operations

String operations are the most comprehensive feature of the expression utility objects.

We can start the description with the utility of checking empty or null String objects. Quite often, developers would use Java methods inside Thymeleaf tags to do that, which might not be safe for null objects.

Instead, we can do this:

<p th:text="${#strings.isEmpty(string)}"></p>
<p th:text="${#strings.isEmpty(nullString)}"></p>
<p th:text="${#strings.defaultString(emptyString,'Empty String')}"></p>

The first String is not empty, so the method will return false. The second String is null, so we will get true. Finally, we may use the #strings.defaultString(…) method to specify a default value if the String is empty.

There are many more methods. All of them work not only with Strings but also with Java Collections. For example, to use substring-related operations:

<p th:text="${#strings.indexOf(name,frag)}"></p>
<p th:text="${#strings.substring(name,3,5)}"></p>
<p th:text="${#strings.substringAfter(name,prefix)}"></p>
<p th:text="${#strings.substringBefore(name,suffix)}"></p>
<p th:text="${#strings.replace(name,'las','ler')}"></p>

or to use null-safe comparison and concatenation:

<p th:text="${#strings.equals(first, second)}"></p>
<p th:text="${#strings.equalsIgnoreCase(first, second)}"></p>
<p th:text="${#strings.concat(values...)}"></p>
<p th:text="${#strings.concatReplaceNulls(nullValue, values...)}"></p>

Finally, there are text-style related features that keep the formatting consistent:

<p th:text="${#strings.abbreviate(string,5)} "></p>
<p th:text="${#strings.capitalizeWords(string)}"></p>

The first method abbreviates the text so that it has a maximum size of n. If the text is longer, it will be clipped and finished with “…”.

In the second method, we will capitalize words.

3.5. Aggregates

The last but not least feature that we want to discuss here is aggregates. They are null-safe and provide utilities to calculate the average or sum of an array or any other collection:

<p th:text="${#aggregates.sum(array)}"></p>
<p th:text="${#aggregates.avg(array)}"></p>
<p th:text="${#aggregates.sum(set)}"></p>
<p th:text="${#aggregates.avg(set)}"></p>

4. Conclusion

In this article, we discussed Expression Utility Objects features implemented in the Thymeleaf framework, version 3.0.

The full implementation of this tutorial can be found in the GitHub project.

How to test? Our suggestion is to play with a browser first, then check the existing JUnit tests as well.

Please note that the examples do not cover all available utility expressions. If you want to learn about all types of utilities, have a look here.

The Master Class of "Learn Spring Security" is out:

>> CHECK OUT THE COURSE

Routing In Play Applications in Java

$
0
0

The Master Class of "Learn Spring Security" is live:

>> CHECK OUT THE COURSE

1. Overview

In this article, we will explore the aspect of routing in developing web applications with the Play Framework.

Routing is a common concept that appears in most web development frameworks including Spring MVC.

A route is a URL pattern that is mapped to a handler. The handler can be a physical file, such as a downloadable asset in the web application or a class that processes the request, such as a controller in an MVC application.

2. Setup

First, we’ll need to create a Java Play application. The details of how to set up Play Framework on your machine are available in my introductory article. You can read it by following this link.

By the end of the setup, we should be having a working Play application that we are able to access from a browser.

3. HTTP Routing

You may be wondering how Play knows which controller to consult whenever we send an HTTP request. The answer lies in the webapp/conf/routes configuration file.

Play’s router translates HTTP requests into action calls. HTTP requests are considered to be events in MVC architecture and the router reacts to them by consulting the routes file for which controller and which action in that controller to execute.

Each of these events supplies a router with two parameters, i.e. a request path with its query String and a request’s HTTP method.

4. Basic Routing With Play

For the router to do its work, the conf/routes file must define mappings of HTTP methods and URI patterns to appropriate controller actions:

GET     /              controllers.HomeController.index
GET     /count         controllers.CountController.count
GET     /message       controllers.AsyncController.message

GET     /assets/*file  controllers.Assets.versioned(path="/public", file: Asset)

The routes file also maps the static resources in the webapp/public folder, making them available to the client on the /assets endpoint.

Notice the syntax for defining HTTP routes: HTTP method, space, URI pattern, space, controller action.

5. URI Patterns

In this section, we will expound a little on URI patterns.

5.1. Static URI patterns

The first three URI patterns above are static. This means that mapping of the URLs to resources occurs without any further processing in the controller actions.

As long as a controller method is called, it returns a static resource whose content is determined before the request.

5.2. Dynamic URI patterns

The last of the above URI patterns is dynamic. This means that the controller action servicing a request on these URIs needs some information from the request to determine the response. In the above case, it expects a file name.

The normal sequence of events is that the router receives an event, picks the path from the URL, decodes its segments and passes them to the controller.

This is the point where path and query parameters are injected into the controller action as parameters. We will demonstrate this with an example in the next sections.

6. Advanced Routing With Play

In this section, we will be discussing advanced options in routing using Dynamic URI patterns in detail.

6.1. Simple Path Parameters

These are unnamed parameters in a request URL that appear after the host and port and are parsed in order of appearance.

Inside webapp/app/controllers/HomeController.java, let us create a new action like so:

public Result greet(String name) {
    return ok("Hello " + name);
}

We want to be able to pick a path parameter from the request URL and map it to the variable name. Remember path parameters are not named in the URL.

The router will get those values from a route configuration.

So, let’s open webapp/conf/routes and create a mapping for this new action:

GET    /greet/:name     controllers.HomeController.greet(name)

Notice how we inform the router that name is a dynamic path segment with the colon syntax, and then pass it as a parameter to the greet action call.

Now load http://localhost:9000/greet/john in the browser and you will be greeted by name:

Hello john

It so happens that if our action parameter is of String type, we may pass it in the action call without specifying the parameter type; this is not the case for other types.

Let’s spice up our /greet endpoint with age information.

Back to HomeController‘s greet action, change it to:

public Result greet(String name,int age) {
    return ok("Hello " + name + ", you are " + age + " years old");
}

And the route to:

GET    /greet/:name/:age controllers.HomeController.greet(name,age:Integer)

Notice how we specify age parameter type in the action call on the far right, because it’s not a String.

Notice also the Scala syntax for declaring a variable, age:Integer. In Java, we would use Integer age syntax. Play Framework is built in Scala, so there is Scala syntax in a lot of places.

Let’s load http://localhost:9000/greet/john/26:

Hello john, you are 26 years old

6.2. Wildcards in Path Parameters

In our routes configuration file, the last mapping is:

GET  /assets/*file  controllers.Assets.versioned(path="/public", file: Asset)

We use a wildcard in the dynamic part of the path. What we are telling Play is that whatever value replaces *file in the actual request should be parsed as a whole and not decoded like in other cases of path parameters.

In this example, the controller is an inbuilt one, Assets, which allows the client to download files from the webapp/public folder. When we load http://localhost:9000/assets/images/favicon.png, we should see the image of the Play favicon in the browser since it’s present in the /public/images folder.

Let’s create our own example action in HomeController.java:

public Result introduceMe(String data) {
    String[] clientData=data.split(",");
    return ok("Your name is "+clientData[0]+", you are "+clientData[1]+" years old");
}

Notice that in this action, we receive one String parameter and apply our own logic to decode it. In this case, the logic is to split a comma delimited String into an array. Previously, we depended on a router to decode this data for us.

With wildcards, we are on our own. We are hoping that the client gets our syntax correct while passing this data in. Ideally, we should validate the incoming String before using it.
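
For instance, a minimal guard of our own (a sketch, not part of the original example) could reject malformed input with a 400 response before we touch the array:

public Result introduceMe(String data) {
    String[] clientData = data.split(",");
    // reject anything that does not contain exactly a name and an age
    if (clientData.length != 2) {
        return badRequest("expected input in the form name,age");
    }
    return ok("Your name is " + clientData[0]
      + ", you are " + clientData[1] + " years old");
}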

Let’s create a route to this action:

GET   /*data   controllers.HomeController.introduceMe(data)

Now load the URL http://localhost:9000/john,26. This will print:

Your name is john, you are 26 years old

6.3. Regex in Path Parameters

Just like wildcards, we can use regular expressions for the dynamic part. Let’s add an action that receives a number and returns its square:

public Result squareMe(Long num) {
    return ok(num + " Squared is " + (num*num));
}

Let’s add its route:

GET   /square/$num<[0-9]+>   controllers.HomeController.squareMe(num:Long)

Let’s place this route below the introduceMe route to introduce a new concept. With this routing configuration, only URLs where the regex part is a positive integer will be routed to squareMe action.

Now if you placed the route as instructed in the previous paragraph, load http://localhost:9000/square/2. You should be greeted with an ArrayIndexOutOfBoundsException:


If we check the error logs in the server console, we will realize that the action call was actually performed on introduceMe action rather than squareMe action. As said earlier about wildcards, we are on our own and we did not validate incoming data.

Therefore introduceMe was called with a single string “2” instead of a comma delimited String. After splitting it, we got an array with only one element and yet tried to access an element at index 1.

But why was introduceMe called when we routed this call to squareMe? The reason is a Play feature we will cover next, called routing priority.

7. Routing Priority

If there is a conflict between routes as there is between squareMe and introduceMe, then the first route in declaration order is taken.

Why is there a conflict? Because the wildcard context path /*data matches any request URL apart from the base path /. So every route whose URI pattern uses wildcards should appear last in order.

Now change the declaration order of the routes such that the introduceMe route comes after squareMe. Load http://localhost:9000/square/2 again:

2 Squared is 4
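
For reference, the relevant part of the reordered routes file is simply the same two mappings with squareMe declared first:

GET   /square/$num<[0-9]+>   controllers.HomeController.squareMe(num:Long)
GET   /*data                 controllers.HomeController.introduceMe(data)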

To test the power of regular expressions in a route, try loading http://localhost:9000/square/-1; the router will fail to match the squareMe route, will match the introduceMe route instead, and we will get the ArrayIndexOutOfBoundsException again.

This is because -1 is not matched by the provided regular expression, nor is any alphabetic character.

8. Parameters

Up until this point, the only aspect we’ve covered about parameters is how to declare their type in the routes file.

In this section, we will look at more options available to us when dealing with parameters in routes.

8.1. Parameters With Fixed Values

Sometimes we will want to use a fixed value for a parameter. This is our way of telling Play to use the path parameter if one is provided, or, if the request path is /, to use a certain fixed value.

Another way of looking at it is having two endpoints or context paths leading to the same controller action. One endpoint requiring a parameter from the request URL and defaulting to the other in case the said parameter is absent.

To demonstrate this, let’s make a change to HomeController.index() action like so:

public Result index() {
    return ok("Routing in Play by Baeldung");
}

Assume we don’t always want our API to return the String:

Routing in Play by Baeldung

We want to control it by sending the name of an author of the article along with the request, defaulting to the fixed value Baeldung only if the request does not have the author parameter.

So let’s further change the index action by adding a parameter:

public Result index(String author) {
    return ok("REST API with Play by "+author);
}

Let’s also see how to add a fixed value parameter to the route:

GET   /   controllers.HomeController.index(author="Baeldung")
GET   /:author   controllers.HomeController.index(author)

Notice how we now have two separate routes all leading to the HomeController.index action instead of one.

When we now load http://localhost:9000/ from the browser we get:

Routing in Play by Baeldung

While when we load http://localhost:9000/john, we get:

Routing in Play by john

8.2. Parameters with Default Values

Apart from having fixed values, parameters can also have what we call default values. Both provide fallback values to the controller action parameters in case the request does not provide the required values.

The difference between the two is that fixed values are used as a fallback for path parameters while default values are used as a fallback for query parameters.

Path parameters as in http://localhost:9000/param1/param2 and query parameters as in http://localhost:9000/?param1=value1&param2=value2.

The second difference is in the syntax of declaring the two in a route. Fixed value parameters use the assignment operator as in:

author="Baeldung"

While default values use a different type of assignment:

author ?= "Baeldung"

We use the ?= operator which conditionally assigns Baeldung to author in case author is found to contain no value.

To have a complete demonstration, let’s get back to HomeController.index action. Let’s say, apart from the author name which is a path parameter, we also want to pass author id as a query parameter which should default to 1 if not passed in the request.

We will change index action to:

public Result index(String author,int id) {
    return ok("Routing in Play by:"+author+" ID:"+id);
}

and the index routes to:

GET   /   controllers.HomeController.index(author="Baeldung",id:Int?=1)
GET   /:author   controllers.HomeController.index(author,id:Int?=1)

Now loading http://localhost:9000/:

Routing in Play by:Baeldung ID:1

Hitting http://localhost:9000/?id=10:

Routing in Play by:Baeldung ID:10

What about http://localhost:9000/john:

Routing in Play by:john ID:1

and http://localhost:9000/john?id=5:

Routing in Play by:john ID:5

9. Conclusion

In this article, we explored the notion of Routing in Play applications. You can also check out my other article, REST API with Play Framework by following this link.

The source code for the complete project is available on GitHub.

I just released the Master Class of "Learn Spring Security" Course:

>> CHECK OUT THE COURSE

Introduction to the Wicket Framework

$
0
0

The Master Class of "Learn Spring Security" is live:

>> CHECK OUT THE COURSE

1. Overview

Wicket is a component-oriented, server-side Java web framework that aims to simplify building web interfaces by introducing patterns known from desktop UI development.

With Wicket, it is possible to build a web application using only Java code and XHTML-compliant HTML pages. There is no need for JavaScript or XML configuration files.

It provides a layer over the request-response cycle, shielding developers from working at a low level and allowing them to focus on the business logic.

In this article, we will introduce the basics by building the HelloWorld Wicket application, followed by a complete example using two built-in components that communicate with each other.

2. Setup

To run a Wicket project, let’s add the following dependencies:

<dependency>
    <groupId>org.apache.wicket</groupId>
    <artifactId>wicket-core</artifactId>
    <version>7.4.0</version>
</dependency>

You may want to check out the latest version of Wicket in the Maven Central repository, which at the time of your reading may not coincide with the one used here.

Now we are ready to build our first Wicket application.

3. HelloWorld Wicket

Let’s start by subclassing Wicket’s WebApplication class, which, at a minimum, requires overriding the Class<? extends Page> getHomePage() method.

Wicket will use this class as the application’s main entry point. Inside the method, simply return the class object of a class named HelloWorld:

public class HelloWorldApplication extends WebApplication {
    @Override
    public Class<? extends Page> getHomePage() {
        return HelloWorld.class;
    }
}

Wicket favors convention over configuration. Adding a new web page to the application requires creating two files: a Java file and an HTML file with the same name (but different extension) under the same directory. Additional configuration is only needed if you want to change the default behaviour.

In the source code’s package directory, first add the HelloWorld.java:

public class HelloWorld extends WebPage {
    public HelloWorld() {
        add(new Label("hello", "Hello World!"));
    }
}

then HelloWorld.html:

<html>
    <body>
        <span wicket:id="hello"></span>
    </body>
</html>

As a final step, add the filter definition inside the web.xml:

<filter>
    <filter-name>wicket.examples</filter-name>
    <filter-class>
      org.apache.wicket.protocol.http.WicketFilter
    </filter-class>
    <init-param>
        <param-name>applicationClassName</param-name>
        <param-value>
          com.baeldung.wicket.examples.HelloWorldApplication
        </param-value>
    </init-param>
</filter>
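
For the filter to actually intercept requests, a matching filter-mapping is also needed; a minimal sketch that maps all URLs to the filter defined above could look like this:

<filter-mapping>
    <filter-name>wicket.examples</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>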

That’s it. We have just coded our first Wicket web application.

Run the project by building a war file (mvn package from the command line) and deploying it on a servlet container such as Jetty or Tomcat.

Let's access http://localhost:8080/HelloWorld/ in the browser. A page with the message Hello World! should appear.

4. Wicket Components

Components in Wicket are triads consisting of a Java class, the HTML markup, and a model. Models are a facade that components use to access the data.

This structure provides a nice separation of concerns and by decoupling the component from data-centric operations, increases code reuse.
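
As a quick illustration, separate from the example that follows, a label backed by a simple static model could be wired up inside a page's constructor like this (assuming a matching wicket:id exists in the page's markup):

// Model.of wraps a plain value in a static, serializable model
Label greeting = new Label("greeting", Model.of("Hello from a model"));
add(greeting);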

The example that follows demonstrates how to add Ajax behaviour to a component. It consists of a page with two elements: a dropdown menu and a label. When the dropdown selection changes, the label (and only the label) will be updated.

The body of the HTML file CafeSelector.html will be minimal, with only two elements, a dropdown menu, and a label:

<select wicket:id="cafes"></select>
<p>
    Address: <span wicket:id="address">address</span>
</p>

On the Java side, let’s create the label:

Label addressLabel = new Label("address", 
  new PropertyModel<String>(this.address, "address"));
addressLabel.setOutputMarkupId(true);

The first argument in the Label constructor matches the wicket:id assigned in the HTML file. The second argument is the component's model, a wrapper for the underlying data presented in the component.

The setOutputMarkupId method makes the component eligible for modification via Ajax. Let’s now create the dropdown list and add Ajax behavior to it:

DropDownChoice<String> cafeDropdown 
  = new DropDownChoice<>(
    "cafes", 
    new PropertyModel<String>(this, "selectedCafe"), 
    cafeNames);
cafeDropdown.add(new AjaxFormComponentUpdatingBehavior("onchange") {
    @Override
    protected void onUpdate(AjaxRequestTarget target) {
        String name = (String) cafeDropdown.getDefaultModel().getObject();
        address.setAddress(cafeNamesAndAddresses.get(name).getAddress());
        target.add(addressLabel);
    }
});

The creation is similar to the label's: the constructor accepts the wicket id, a model, and a list of cafe names.

Then, an AjaxFormComponentUpdatingBehavior is added with an onUpdate callback method that updates the label's model once the Ajax request is issued. Finally, the label component is set as a target for refreshing.

As you can see, everything is Java; not a single line of JavaScript was necessary. To change what the label displays, we simply modified a POJO. The mechanism by which modifying a Java object translates to a change in the web page happens behind the curtains and is not relevant to the developer.
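
For context, the address object referenced in the snippets above is assumed to be a simple POJO held by the page; a hypothetical version could look like this:

// a hypothetical POJO backing the label's PropertyModel
public class Address implements Serializable {

    private String address;

    public String getAddress() {
        return address;
    }

    public void setAddress(String address) {
        this.address = address;
    }
}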

Wicket offers a big set of AJAX-enabled components out-of-the-box. The catalog of the components with live examples is available here.

5. Conclusion

In this introductory article, we've covered the basics of Wicket, the component-based Java web framework.

Wicket provides a layer of abstraction that aims to do away entirely with the plumbing code.

We’ve included two simple examples, which can be found on GitHub, to give you a taste of what development with this framework looks like.


Java 9 New Features



1. Overview

Java 9 comes with a rich feature set. Although there are no new language concepts, new APIs and diagnostic commands will definitely be interesting to developers.

In this write-up, we're going to take a quick, high-level look at some of the new features; a full list of new features is available here.

2. Modular System – Jigsaw Project

Let’s start with the big one – bringing modularity into the Java platform.

A modular system provides capabilities similar to those of the OSGi framework. Modules have a concept of dependencies, can export a public API, and keep implementation details hidden/private.

One of the main motivations here is to provide a modular JVM that can run on devices with much less available memory. The JVM could run with only those modules and APIs that are required by the application. Check out this link for a description of what these modules are.

Also, JVM internal (implementation) APIs like com.sun.* are no longer accessible from application code.

Simply put, modules are described in a file called module-info.java, located at the top of the Java code hierarchy:

module com.baeldung.java9.modules.car {
    requires com.baeldung.java9.modules.engines;
    exports com.baeldung.java9.modules.car.handling;
}

Our car module requires the engines module to run and exports a package for handling.
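
The engines module itself is not shown in this article; as an assumption for illustration, a minimal descriptor for it could look like this:

module com.baeldung.java9.modules.engines {
    exports com.baeldung.java9.modules.engines;
}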

For a more in-depth example, check the OpenJDK Project Jigsaw: Module System Quick-Start Guide.

3. A New HTTP Client

A long-awaited replacement of the old HttpURLConnection.

The new API is located under the java.net.http package.

It should support both the HTTP/2 protocol and the WebSocket handshake, with performance comparable to that of Apache HttpClient, Netty, and Jetty.

Let's have a look at this new functionality by creating and sending a simple HTTP request.

3.1. Quick GET Request

The API uses the Builder pattern, which makes it really easy for quick use:

URI httpURI = new URI("http://localhost:8080");
HttpRequest request = HttpRequest.create(httpURI).GET();
HttpResponse response = request.response();
String responseBody = response.body(HttpResponse.asString());

4. Process API

The process API has been improved for controlling and managing operating-system processes.

4.1. Process Information

The class java.lang.ProcessHandle contains most of the new functionality:

ProcessHandle self = ProcessHandle.current();
long PID = self.getPid();
ProcessHandle.Info procInfo = self.info();
 
Optional<String[]> args = procInfo.arguments();
Optional<String> cmd =  procInfo.commandLine();
Optional<Instant> startTime = procInfo.startInstant();
Optional<Duration> cpuUsage = procInfo.totalCpuDuration();

The current method returns an object representing the currently running JVM process. The Info subclass provides details about the process.

4.2. Destroying Processes

Now – let’s stop all the running child processes using destroy():

Stream<ProcessHandle> childProc = ProcessHandle.current().children();
childProc.forEach(procHandle -> {
    assertTrue("Could not kill process " + procHandle.getPid(), procHandle.destroy());
});

5. Small Language Modifications

5.1. Try-With-Resources

In Java 7, the try-with-resources syntax requires a fresh variable to be declared for each resource being managed by the statement.

In Java 9 there is an additional refinement: if the resource is referenced by a final or effectively final variable, a try-with-resources statement can manage a resource without a new variable being declared:

MyAutoCloseable mac = new MyAutoCloseable();
try (mac) {
    // do some stuff with mac
}
 
try (new MyAutoCloseable() { }.finalWrapper.finalCloseable) {
   // do some stuff with finalCloseable
} catch (Exception ex) { }

5.2. Diamond Operator Extension

Now we can use diamond operator in conjunction with anonymous inner classes:

FooClass<Integer> fc = new FooClass<>(1) { // anonymous inner class
};
 
FooClass<? extends Integer> fc0 = new FooClass<>(1) { 
    // anonymous inner class
};
 
FooClass<?> fc1 = new FooClass<>(1) { // anonymous inner class
};

5.3. Interface Private Method

Interfaces in the upcoming JVM version can have private methods, which can be used to split lengthy default methods:

interface InterfaceWithPrivateMethods {
    
    private static String staticPrivate() {
        return "static private";
    }
    
    private String instancePrivate() {
        return "instance private";
    }
    
    default void check() {
        String result = staticPrivate();
        InterfaceWithPrivateMethods pvt = new InterfaceWithPrivateMethods() {
            // anonymous class
        };
        result = pvt.instancePrivate();
    }
}

6. JShell Command Line Tool

JShell is a read–eval–print loop – REPL for short.

Simply put, it’s an interactive tool to evaluate declarations, statements, and expressions of Java, together with an API. It is very convenient for testing small code snippets, which otherwise require creating a new class with the main method.

The jshell executable itself can be found in the <JAVA_HOME>/bin folder:

jdk-9\bin>jshell.exe
|  Welcome to JShell -- Version 9
|  For an introduction type: /help intro
jshell> "This is my long string. I want a part of it".substring(8,19);
$5 ==> "my long string"

The interactive shell comes with history and auto-completion; it also provides functionality like saving all or some of the written statements to a file and loading them back:

jshell> /save c:\develop\JShell_hello_world.txt
jshell> /open c:\develop\JShell_hello_world.txt
Hello JShell!

Code snippets are executed upon file loading.

7. JCMD Sub-Commands

Let's explore some of the new subcommands of the jcmd command-line utility. We will get a list of all classes loaded in the JVM and their inheritance structure.

In the example below, we can see the hierarchy of java.net.Socket loaded in a JVM running Eclipse Neon:

jdk-9\bin>jcmd 14056 VM.class_hierarchy -i -s java.net.Socket
14056:
java.lang.Object/null
|--java.net.Socket/null
|  implements java.io.Closeable/null (declared intf)
|  implements java.lang.AutoCloseable/null (inherited intf)
|  |--org.eclipse.ecf.internal.provider.filetransfer.httpclient4.CloseMonitoringSocket
|  |  implements java.lang.AutoCloseable/null (inherited intf)
|  |  implements java.io.Closeable/null (inherited intf)
|  |--javax.net.ssl.SSLSocket/null
|  |  implements java.lang.AutoCloseable/null (inherited intf)
|  |  implements java.io.Closeable/null (inherited intf)

The first parameter of the jcmd command is the process id (PID) of the JVM on which we want to run the command.

Another interesting subcommand is set_vmflag. We can modify some JVM parameters at runtime, without restarting the JVM process or modifying its startup parameters.

You can find all the available VM flags with the subcommand jcmd 14056 VM.flags -all.
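
For example, with the hypothetical PID used above, changing a manageable flag such as HeapDumpOnOutOfMemoryError at runtime could look like this (the diagnostic command is exposed as VM.set_flag):

jcmd 14056 VM.set_flag HeapDumpOnOutOfMemoryError true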

8. Multi-Resolution Image API

The interface java.awt.image.MultiResolutionImage encapsulates a set of images with different resolutions into a single object. We can retrieve a resolution-specific image variant based on a given DPI metric and set of image transformations or retrieve all of the variants in the image.

The java.awt.Graphics class gets a variant from a multi-resolution image based on the current display DPI metric and any applied transformations.

The class java.awt.image.BaseMultiResolutionImage provides a basic implementation:

BufferedImage[] resolutionVariants = ....
MultiResolutionImage bmrImage
  = new BaseMultiResolutionImage(baseIndex, resolutionVariants);
Image testRVImage = bmrImage.getResolutionVariant(16, 16);
assertSame("Images should be the same", testRVImage, resolutionVariants[3]);
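
To retrieve every variant stored in the image, we can also call getResolutionVariants(), a small addition to the snippet above for illustration:

List<Image> allVariants = bmrImage.getResolutionVariants();
assertEquals("All variants should be returned", resolutionVariants.length, allVariants.size());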

9. Variable Handles

The API resides under java.lang.invoke and consists of VarHandle and MethodHandles. It provides equivalents of java.util.concurrent.atomic and sun.misc.Unsafe operations upon object fields and array elements with similar performance.

With the Java 9 modular system, access to sun.misc.Unsafe will no longer be possible from application code.
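
The final shape of the API may still change, but a minimal sketch of atomically updating a field through a VarHandle, using a simple Counter class defined just for this example, could look like this:

public class Counter {

    volatile int value;

    // resolve a VarHandle for the 'value' field once
    private static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup()
              .findVarHandle(Counter.class, "value", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public int getAndIncrement() {
        // atomic increment, comparable to AtomicInteger.getAndIncrement()
        return (int) VALUE.getAndAdd(this, 1);
    }
}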

10. Publish-Subscribe Framework

The class java.util.concurrent.Flow provides interfaces that support the Reactive Streams publish-subscribe framework. These interfaces support interoperability across a number of asynchronous systems running on JVMs.

We can use the utility class SubmissionPublisher to create custom components.
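
As an illustration only, a publisher with a single anonymous subscriber could be wired up as follows:

SubmissionPublisher<String> publisher = new SubmissionPublisher<>();

publisher.subscribe(new Flow.Subscriber<String>() {
    private Flow.Subscription subscription;

    @Override
    public void onSubscribe(Flow.Subscription subscription) {
        this.subscription = subscription;
        subscription.request(1); // ask for the first item
    }

    @Override
    public void onNext(String item) {
        System.out.println("Received: " + item);
        subscription.request(1); // ask for the next item
    }

    @Override
    public void onError(Throwable throwable) {
        throwable.printStackTrace();
    }

    @Override
    public void onComplete() {
        System.out.println("Done");
    }
});

publisher.submit("Hello, Flow!");
publisher.close();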

11. Unified JVM Logging

This feature introduces a common logging system for all components of the JVM. It provides the infrastructure to do the logging, but it does not add the actual logging calls from all JVM components. It also does not add logging to Java code in the JDK.

The logging framework defines a set of tags – for example, gc, compiler, threads, etc. We can use the command line parameter -Xlog to turn on logging during startup.

Let's log messages tagged with the 'gc' tag at the 'debug' level to a file called 'gc.txt', with no decorations:

java -Xlog:gc=debug:file=gc.txt:none ...

-Xlog:help will output possible options and examples. Logging configuration can also be modified at runtime using the jcmd command. Here, we set GC logs to the info level and redirect them to a file, gc_logs:

jcmd 9615 VM.log output=gc_logs what=gc

12. New APIs

12.1. Immutable Set

java.util.Set.of() creates an immutable set of the given elements. In Java 8, creating a Set of several elements would require several lines of code. Now we can do it as simply as:

Set<String> strKeySet = Set.of("key1", "key2", "key3");

The Set returned by this method is a JVM-internal class, java.util.ImmutableCollections.SetN, which extends the public java.util.AbstractSet. It is immutable – if we try to add or remove elements, an UnsupportedOperationException will be thrown.

You can also convert an entire array into a Set with the same method.
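
Since Set.of is a varargs method, passing an existing array works the same way, for example:

String[] keys = {"key1", "key2", "key3"};
Set<String> strKeySetFromArray = Set.of(keys);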

12.2. Optional To Stream

java.util.Optional.stream() gives us an easy way to use the power of Streams on Optional elements:

List<String> filteredList = listOfOptionals.stream()
  .flatMap(Optional::stream)
  .collect(Collectors.toList());
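
For instance, with a hypothetical listOfOptionals built as below, filteredList ends up containing only the present values:

List<Optional<String>> listOfOptionals = Arrays.asList(
  Optional.empty(), Optional.of("foo"), Optional.empty(), Optional.of("bar"));

// after the pipeline above, filteredList contains ["foo", "bar"]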

13. Conclusion

Java 9 will come with a modular JVM and lots of other new and diverse improvements and features.

You can find source code for the examples over on GitHub.

