Channel: Baeldung

Java Web Weekly 50


I usually post about Dev stuff on Twitter - you can follow me there:

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Artfully Benchmarking Java 8 Streams and Lambdas [infoq.com]

A quick, journalistic look at Java 8 Streams performance – something we’re starting to be aware of in the community.

>> Spring Boot Memory Performance [spring.io]

This writeup is going to be referenced for a long time, as this kind of low level information is really missing from the Spring ecosystem.

>> Spring Data JPA Tutorial: Adding Custom Methods Into All Repositories [petrikainulainen.net]

I had to do something similar several times in practice – adding a custom method into a Spring Data repo – so this guide is a welcome reference.

>> 5 reasons why you should consider upgrading your applications to Spring 4 [codeleak.pl]

Short and to the point – upgrading to Spring 4 is a solid productivity boost across the board.

>> Hibernate Logging Guide – Use the right config for development and production [thoughts-on-java.org]

A must read if you’re working with Hibernate and aren’t quite sure how your logging should be set up.

>> The danger of @InjectMocks [blog.frankel.ch]

Just because we can do some low level stuff in Java doesn’t mean we should. Mockito made some choices about all of that, and about what you can and cannot do with the tool.

This is a quick dive into the way mocks can be injected at runtime.

>> 3 Reasons why You Shouldn’t Replace Your for-loops by Stream.forEach()  [jooq.org]

A very interesting and pragmatic look at the Java 8 functional story, now that it’s no longer new and shiny.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Clearing Up the Integrated Tests Scam [thecodewhisperer.com]

I listened to the “Integration Tests Are a Scam” talk and it really opened up my way of thinking about how I did testing back then.

Later on I continued to learn from J.B. live, so I’m excited to see a well thought-out analysis of the topic here. Good stuff.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> I have a reflexive urge to disagree with you [dilbert.com]

>> I value substance over style. How’s that working out? [dilbert.com]

>> Someone was raised with too much self-esteem [dilbert.com]

 

4. Pick of the Week

>> My Favorite Database is the Network [lucumr.pocoo.org]

 




Introduction to Pointcut Expressions in Spring


I just announced the release dates of my upcoming "REST With Spring" Classes:

>> THE "REST WITH SPRING" CLASSES

1. Overview

In this tutorial we will discuss the Spring AOP pointcut expression language.

We will first introduce some terminology used in aspect-oriented programming. A join point is a step of the program execution, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution. A pointcut is a predicate that matches the join points and a pointcut expression language is a way of describing pointcuts programmatically.

2. Usage

A pointcut expression can appear as a value of the @Pointcut annotation:

@Pointcut("within(@org.springframework.stereotype.Repository *)")
public void repositoryClassMethods() {}

The method declaration is called the pointcut signature. It provides a name that can be used by advice annotations to refer to that pointcut.

@Around("repositoryClassMethods()")
public Object measureMethodExecutionTime(ProceedingJoinPoint pjp) throws Throwable {
    ...
}

A pointcut expression could also appear as the value of the expression property of an aop:pointcut tag:

<aop:config>
    <aop:pointcut id="anyDaoMethod" 
      expression="@target(org.springframework.stereotype.Repository)"/>
</aop:config>

3. Pointcut Designators

A pointcut expression starts with a pointcut designator (PCD), which is a keyword telling Spring AOP what to match. There are several pointcut designators, matching things such as the execution of a method, a type, method arguments, or annotations.

3.1 execution

The primary Spring PCD is execution, which matches method execution join points.

@Pointcut("execution(public String org.baeldung.dao.FooDao.findById(Long))")

This example pointcut will match exactly the execution of the findById method of the FooDao class. This works, but it is not very flexible. Suppose we would like to match all the methods of the FooDao class, which may have different signatures, return types, and arguments. To achieve this we may use wildcards:

@Pointcut("execution(* org.baeldung.dao.FooDao.*(..))")

Here the first wildcard matches any return value, the second matches any method name, and the (..) pattern matches any number of parameters (zero or more).

3.2 within

Another way to achieve the same result from the previous section is by using the within PCD, which limits matching to join points of certain types.

@Pointcut("within(org.baeldung.dao.FooDao)")

We could also match any type within the org.baeldung package or a sub-package.

@Pointcut("within(org.baeldung..*)")

3.3 this and target

this limits matching to join points where the bean reference is an instance of the given type, while target limits matching to join points where the target object is an instance of the given type. The former works when Spring AOP creates a CGLIB-based proxy, and the latter is used when a JDK-based proxy is created. Suppose that the target class implements an interface:

public class FooDao implements BarDao {
    ...
}

In this case, Spring AOP will use the JDK-based proxy, and you should use the target PCD because the proxied object will be an instance of the Proxy class and implement the BarDao interface:

@Pointcut("target(org.baeldung.dao.BarDao)")

On the other hand, if FooDao doesn’t implement any interface or the proxyTargetClass property is set to true, then the proxied object will be a subclass of FooDao and the this PCD could be used:

@Pointcut("this(org.baeldung.dao.FooDao)")

3.4 args

This PCD is used for matching particular method arguments:

@Pointcut("execution(* *..find*(Long))")

This pointcut matches any method that starts with find and has only one parameter of type Long. If we want to match a method with any number of parameters but having the first parameter of type Long, we could use the following expression:

@Pointcut("execution(* *..find*(Long,..))")

3.5 @target

The @target PCD (not to be confused with the target PCD described above) limits matching to join points where the class of the executing object has an annotation of the given type:

@Pointcut("@target(org.springframework.stereotype.Repository)")

3.6 @args

This PCD limits matching to join points where the runtime types of the actual arguments passed have annotations of the given type(s). Suppose that we want to trace all the methods accepting beans annotated with the @Entity annotation:

@Pointcut("@args(org.baeldung.aop.annotations.Entity)")
public void methodsAcceptingEntities() {}

To access the argument we should provide a JoinPoint argument to the advice:

@Before("methodsAcceptingEntities()")
public void logMethodAcceptionEntityAnnotatedBean(JoinPoint jp) {
    logger.info("Accepting beans with @Entity annotation: " + jp.getArgs()[0]);
}

3.7 @within

This PCD limits matching to join points within types that have the given annotation:

@Pointcut("@within(org.springframework.stereotype.Repository)")

Which is equivalent to:

@Pointcut("within(@org.springframework.stereotype.Repository *)")

3.8 @annotation

This PCD limits matching to join points where the subject of the join point has the given annotation. For example we may create a @Loggable annotation:

@Pointcut("@annotation(org.baeldung.aop.annotations.Loggable)")
public void loggableMethods() {}

Then we may log execution of the methods marked by that annotation:

@Before("loggableMethods()")
public void logMethod(JoinPoint jp) {
    String methodName = jp.getSignature().getName();
    logger.info("Executing method: " + methodName);
}

4. Combining Pointcut Expressions

Pointcut expressions can be combined using &&, || and ! operators:

@Pointcut("@target(org.springframework.stereotype.Repository)")
public void repositoryMethods() {}

@Pointcut("execution(* *..create*(Long,..))")
public void firstLongParamMethods() {}

@Pointcut("repositoryMethods() && firstLongParamMethods()")
public void entityCreationMethods() {}

5. Conclusion

In this quick intro to Spring AOP and pointcuts, we illustrated some examples of pointcut expressions usage.

The full set of examples can be found in my github project – this is an Eclipse-based project, so it should be easy to import and run as it is.

Sign Up and get 25% Off my upcoming "REST With Spring" classes on launch:

>> CHECK OUT THE CLASSES

Introduction to Advice Types in Spring



1. Overview

In this article we’ll discuss different types of AOP advice that can be created in Spring.

Advice is an action taken by an aspect at a particular join point. Different types of advice include “around,” “before” and “after” advice. The main purpose of aspects is to support cross-cutting concerns, such as logging, profiling, caching, and transaction management.

And if you want to go deeper into pointcut expressions, check out yesterday’s intro to these.

2. Enabling Advice

With Spring, you can declare advice using AspectJ annotations, but you must first apply the @EnableAspectJAutoProxy annotation to your configuration class, which will enable support for handling components marked with AspectJ’s @Aspect annotation.

@Configuration
@ComponentScan(basePackages = {"org.baeldung.dao", "org.baeldung.aop"})
@EnableAspectJAutoProxy
public class TestConfig {
    ...
}

3. Before Advice

This advice, as the name implies, is executed before the join point. It does not prevent the continued execution of the method it advises unless an exception is thrown.

Consider the following aspect that simply logs the method name before it is called:

@Component
@Aspect
public class LoggingAspect {

    private Logger logger = Logger.getLogger(LoggingAspect.class.getName());

    @Pointcut("@target(org.springframework.stereotype.Repository)")
    public void repositoryMethods() {}

    @Before("repositoryMethods()")
    public void logMethodCall(JoinPoint jp) {
        String methodName = jp.getSignature().getName();
        logger.info("Before " + methodName);
    }
}

The logMethodCall advice will be executed before any repository method defined by the repositoryMethods pointcut.

4. After Advice

After advice, declared by using the @After annotation, is executed after a matched method’s execution, whether or not an exception was thrown.

In some ways, it is similar to a finally block. In case you need advice to be triggered only after normal execution, you should use returning advice, declared with the @AfterReturning annotation. If you want your advice to be triggered only when the target method throws an exception, you should use throwing advice, declared with the @AfterThrowing annotation.

Suppose that we wish to notify some application components when a new instance of Foo is created. We could publish an event from FooDao, but this would violate the single responsibility principle. Instead, we can accomplish this by defining the following aspect:

@Component
@Aspect
public class PublishingAspect {

    private ApplicationEventPublisher eventPublisher;

    @Autowired
    public void setEventPublisher(ApplicationEventPublisher eventPublisher) {
        this.eventPublisher = eventPublisher;
    }

    @Pointcut("@target(org.springframework.stereotype.Repository)")
    public void repositoryMethods() {}

    @Pointcut("execution(* *..create*(Long,..))")
    public void firstLongParamMethods() {}

    @Pointcut("repositoryMethods() && firstLongParamMethods()")
    public void entityCreationMethods() {}

    @AfterReturning(value = "entityCreationMethods()", returning = "entity")
    public void logMethodCall(JoinPoint jp, Object entity) throws Throwable {
        eventPublisher.publishEvent(new FooCreationEvent(entity));
    }
}

Notice, first, that by using the @AfterReturning annotation we can access the target method’s return value. Second, by declaring a parameter of type JoinPoint, we can access the arguments of the target method’s invocation.

Next we create a listener which will simply log the event. You may read more about events in this tutorial:

@Component
public class FooCreationEventListener implements ApplicationListener<FooCreationEvent> {

    private Logger logger = Logger.getLogger(getClass().getName());

    @Override
    public void onApplicationEvent(FooCreationEvent event) {
        logger.info("Created foo instance: " + event.getSource().toString());
    }
}

5. Around Advice

Around advice surrounds a join point such as a method invocation.

This is the most powerful kind of advice. Around advice can perform custom behavior both before and after the method invocation. It is also responsible for choosing whether to proceed to the join point or to shortcut the advised method execution by providing its own return value or throwing an exception.

To demonstrate its use, suppose that you want to measure method execution time. For this purpose you may create the following aspect:

@Aspect
@Component
public class PerformanceAspect {

    private Logger logger = Logger.getLogger(getClass().getName());

    @Pointcut("within(@org.springframework.stereotype.Repository *)")
    public void repositoryClassMethods() {}

    @Around("repositoryClassMethods()")
    public Object measureMethodExecutionTime(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        Object retval = pjp.proceed();
        long end = System.nanoTime();
        String methodName = pjp.getSignature().getName();
        logger.info("Execution of " + methodName + " took " + 
          TimeUnit.NANOSECONDS.toMillis(end - start) + " ms");
        return retval;
    }
}

This advice is triggered when any of the join points matched by the repositoryClassMethods pointcut is executed.

This advice takes one parameter of type ProceedingJoinPoint. The parameter gives us an opportunity to take action before the target method call. In this case, we simply save the method start time.

Second, the advice return type is Object since the target method can return a result of any type. If the target method is void, null will be returned. After the target method call, we can measure the timing, log it, and return the method’s result value to the caller.
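That null-for-void behavior mirrors plain Java reflection, which JDK-proxy-based advising builds on: Method.invoke() also returns null when the invoked method is declared void. A small framework-free sketch (VoidInvokeDemo is a made-up name for illustration):

```java
import java.lang.reflect.Method;

public class VoidInvokeDemo {

    private int counter = 0;

    public void bump() {
        counter++;
    }

    // Invokes the void method bump() reflectively and returns the result,
    // which is null for void methods – just like proceeding to an advised
    // method that returns void.
    static Object invokeBump() throws Exception {
        Method bump = VoidInvokeDemo.class.getMethod("bump");
        return bump.invoke(new VoidInvokeDemo());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeBump()); // prints: null
    }
}
```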

6. Conclusion

In this article we’ve learned about the different types of advice in Spring and their declarations and implementations. We defined aspects using AspectJ annotations and provided several possible advice applications.

The implementation of all these examples and code snippets can be found in my github project – this is an Eclipse based project, so it should be easy to import and run as it is.


Java Web Weekly 51



At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Refactoring Code to Load a Document [martinfowler.com]

A well documented, lengthy, “reference-able a year from now” article about one of the hardest problems in software development – managing change well.

Specifically JSON documents/data, published externally to clients.

>> 5 Tips for Reducing Your Java Garbage Collection Overhead [takipi.com]

Some solid, practical tips on improving the memory footprint of your system.

>> Backing Spring Cache with Couchbase [couchbase.com]

A play-by-play on making Couchbase jive with Spring.

I had this one on the content calendar of the site – maybe it’s time to take it off :)

>> How to recognize different types of beans from quite a long way away [next-presso.com]

A deep dive into beans in CDI. If you’re doing Java EE work, this is definitely one to read.

>> OpenJDK 9: Life Without HPROF and jhat [infoq.com]

A quick overview of some of the low level tools that are not going to be part of Java 9, as a result of the modularization cleanup work.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Tracking HTTP/2.0 Adoption [shodan.io]

Very interesting and promising data about the adoption of the various HTTP/1.x alternatives.

>> Why 451? [mnot.net]

It’s not every day that a new HTTP status code gets created – especially one about censorship. A quick and interesting read.

Also worth reading:

3. Musings

>> Escaping Sucker Culture [daedtech.com]

After the highly interesting and popular article from last week, this followup goes into some of the tactics that an employee can keep in mind (and do) when they’re in an over-work culture.

>> BDD: A Three-Headed Monster [lizkeogh.com]

A solid piece about BDD; doing BDD well is going to make my 2016 goals list – and this is the kind of writeup that I need to come back to.

>> The Soul of a New Release: Eating Our Own Dog Food [infoq.com]

Putting out a new version of your system can be smooth sailing if you’re employing some good practices and tactics along the way. This is the way Plumbr did theirs.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> If it weren’t urgent, it would be email… [dilbert.com]

>> All the numbers were wrong [dilbert.com]

>> Should we always ignore what the data said? [dilbert.com]

 

5. Pick of the Week

As you know, I very rarely pick my own stuff here, in the weekly review. But – in a couple of days, my “REST With Spring” course will finally be done and going live. I’ve been working on it for 4 months now, so it feels good to finally set it free:

>> The Master Class of “REST With Spring”

 



“REST With Spring” is Live


10 minutes ago I pushed out the “go-live” announcement of the Master Class of REST With Spring – and I’ve been watching the signup emails as they’re hitting my inbox. And the cool thing is that I recognize some of the names – these are people that I’ve had conversations with over email.

It’s been almost 5 months of work, so it feels good having the full course out there and having so many people go through it.

First – here’s a quick recap I wrote back in November, when I launched the Intermediate Class.

And here’s the structure of these last 3 courses in the Master Class:

Course 7: Document, Discover and Evolve the REST API (55 minutes)

  • Module 0: Introduction
  • Module 1: Document the API with Swagger (11 minutes)
  • Module 2: The Basics of HATEOAS – part 1 (7 minutes)
  • Module 2: The Basics of HATEOAS – part 2 (7 minutes)
  • Module 3: Advanced Scenarios with Spring HATEOAS (10 minutes)
  • Module 4: How To Evolve the API without Breaking Clients – part 1 (7 minutes)
  • Module 4: How To Evolve the API without Breaking Clients – part 2 (9 minutes)
  • Module 4: How To Evolve the API without Breaking Clients – part 3 (4 minutes)
  • Module 5: Versioning – The Last Resort (Bonus Material – To Be Released)

Course 8: Monitoring and API Metrics (43 minutes)

  • Module 1: Fundamentals of Monitoring with Boot (10 minutes)
  • Module 2: Custom Metrics for the API (11 minutes)
  • Module 3: Monitoring Data over JMX – part 1 (11 minutes)
  • Module 3: Monitoring Data over JMX – part 2 (11 minutes)
  • Module 4: Displaying Metrics over HTTP (Bonus Material – To Be Released)
  • Module 5: Production Grade Tools for Monitoring (Bonus Material – To Be Released)

Course 9: CI and CD Pipelines for the API (57 minutes)

  • Module 1: Setting Up Jenkins and The First Job – part 1 (7 minutes)
  • Module 1: Setting Up Jenkins and The First Job – part 2 (8 minutes)
  • Module 2: A Simple Jenkins Pipeline From Scratch – part 1 (6 minutes)
  • Module 2: A Simple Jenkins Pipeline From Scratch – part 2 (9 minutes)
  • Module 3: Remote Deployment for the API (10 minutes)
  • Module 4: From Continuous Integration to Continuous Deployment – part 1 (11 minutes)
  • Module 4: From Continuous Integration to Continuous Deployment – part 2 (6 minutes)
  • Module 5: Load Balance the REST API on EC2 (Bonus Material – To Be Released)
  • REST With Spring – Outro

And of course, if you’re reading this writeup right on the 20th of December – you can still get the early-bird price of 25% Off:

>> The Master Class of REST With Spring

What’s Next

First, some downtime – I’ve been burning the candle at both ends recording the final course and marketing the launch.

Then, the bonus stuff – I’ve been asking for and getting a lot of feedback from the students going through the course – and I’ve been working on sorting that feedback into categories.

Some of that went into the bonus modules I already announced for the existing 9 courses. But the bulk of that feedback actually went towards a new bonus course:

Course 10: Advanced API Tactics

  • Module 1: Creating a REST API with Spring Data REST
  • Module 2: Binary Data Formats in a Spring REST API
  • Module 3: An API Rate Limiting implementation
  • Module 4: Sunsetting a REST API Version
  • Module 5: Intro to Spring MVC and REST Annotations

And that’s it – 5 months in the making, 600 students – “REST With Spring” is out!

Using the CassandraTemplate from Spring Data


I usually post about Persistence on Twitter - you can follow me there:

1. Overview

This is the second article of the Spring Data Cassandra article series. In this article we will mainly focus on CassandraTemplate and CQL queries in the data access layer. You can read more about Spring Data Cassandra in the first article in the series.

Cassandra Query Language (CQL) is the query language for the Cassandra database and CqlTemplate is the low-level data access template in Spring Data Cassandra – it conveniently exposes data manipulation related operations to execute CQL statements.

CassandraTemplate builds on top of the low level CqlTemplate and provides a simple way to query domain objects and map the objects to a persisted data structure in Cassandra.

Let’s start with the configuration and then dive into examples of using the two templates.

2. CassandraTemplate Configuration

CassandraTemplate is available in the Spring context because our main Cassandra Spring config is extending AbstractCassandraConfiguration:

@Configuration
@EnableCassandraRepositories(basePackages = "org.baeldung.spring.data.cassandra.repository")
public class CassandraConfig extends AbstractCassandraConfiguration { ... }

We can then simply wire in the template – either by its exact type, CassandraTemplate, or as the more generic interface, CassandraOperations:

@Autowired
private CassandraOperations cassandraTemplate;

3. Data Access Using CassandraTemplate

Let’s use the CassandraTemplate defined above in our data access layer module to work with persistent data.

3.1. Saving a New Book

We can save a new book to our book store:

Book javaBook = new Book(
  UUIDs.timeBased(), "Head First Java", "O'Reilly Media",
  ImmutableSet.of("Computer", "Software"));
cassandraTemplate.insert(javaBook);

Then we can check the availability of the inserted book in the database:

Select select = QueryBuilder.select().from("book")
  .where(QueryBuilder.eq("title", "Head First Java"))
  .and(QueryBuilder.eq("publisher", "O'Reilly Media"));
Book retrievedBook = cassandraTemplate.selectOne(select, Book.class);

We are using the QueryBuilder here to create a Select, which is then passed to selectOne() on cassandraTemplate. We will discuss the QueryBuilder in more depth in the CQL queries section.

3.2. Saving Multiple Books

We can save multiple books to our book store at once using a list:

Book javaBook = new Book(
  UUIDs.timeBased(), "Head First Java", "O'Reilly Media",
  ImmutableSet.of("Computer", "Software"));
Book dPatternBook = new Book(
  UUIDs.timeBased(), "Head Design Patterns", "O'Reilly Media",
  ImmutableSet.of("Computer", "Software"));
List<Book> bookList = new ArrayList<Book>();
bookList.add(javaBook);
bookList.add(dPatternBook);
cassandraTemplate.insert(bookList);

3.3. Updating an Existing Book

Let’s start by inserting a new book:

Book javaBook = new Book(
  UUIDs.timeBased(), "Head First Java", "O'Reilly Media",
  ImmutableSet.of("Computer", "Software"));
cassandraTemplate.insert(javaBook);

Let’s fetch the book:

Select select = QueryBuilder.select().from("book");
Book retrievedBook = cassandraTemplate.selectOne(select, Book.class);

Then let’s add some additional tags to the retrieved book:

retrievedBook.setTags(ImmutableSet.of("Java", "Programming"));
cassandraTemplate.update(retrievedBook);

3.4. Deleting an Inserted Book

Let’s insert a new book:

Book javaBook = new Book(
  UUIDs.timeBased(), "Head First Java", "O'Reilly Media",
  ImmutableSet.of("Computer", "Software"));
cassandraTemplate.insert(javaBook);

Then delete the book:

cassandraTemplate.delete(javaBook);

3.5. Deleting All Books

Let’s now insert some new books:

Book javaBook = new Book(
  UUIDs.timeBased(), "Head First Java", "O'Reilly Media",
  ImmutableSet.of("Computer", "Software"));
Book dPatternBook = new Book(
  UUIDs.timeBased(), "Head Design Patterns", "O'Reilly Media", 
  ImmutableSet.of("Computer", "Software"));
cassandraTemplate.insert(javaBook);
cassandraTemplate.insert(dPatternBook);

Then delete all the books:

cassandraTemplate.deleteAll(Book.class);

4. Data Access Using CQL Queries

It is always possible to use CQL queries for data manipulation in the data access layer. CQL query handling is performed by the CqlTemplate class, allowing us to execute custom queries as required.

However, as the CassandraTemplate class is an extension of CqlTemplate, we can use it directly to execute those queries.

Let’s look at the different methods we can use to manipulate data using CQL queries.

4.1. Using QueryBuilder

QueryBuilder can be used to build a query for data manipulation in database. Almost all the standard manipulations can be built up using the out-of-the-box building blocks:

Insert insertQueryBuilder = QueryBuilder.insertInto("book")
 .value("isbn", UUIDs.timeBased())
 .value("title", "Head First Java")
 .value("publisher", "OReilly Media")
 .value("tags", ImmutableSet.of("Software"));
cassandraTemplate.execute(insertQueryBuilder);

If you look closely at the code snippet, you may notice that the execute() method is used instead of the relevant operation type (insert, delete, etc.). This is because the type of query is defined by the output of the QueryBuilder.

4.2. Using PreparedStatements

Even though PreparedStatements can be used for any case, this mechanism is usually recommended for multiple inserts for high-speed ingestion.

A PreparedStatement is prepared only once, helping ensure high performance:

UUID uuid = UUIDs.timeBased();
String insertPreparedCql = 
  "insert into book (isbn, title, publisher, tags) values (?, ?, ?, ?)";
List<Object> singleBookArgsList = new ArrayList<>();
List<List<?>> bookList = new ArrayList<>();
singleBookArgsList.add(uuid);
singleBookArgsList.add("Head First Java");
singleBookArgsList.add("OReilly Media");
singleBookArgsList.add(ImmutableSet.of("Software"));
bookList.add(singleBookArgsList);
cassandraTemplate.ingest(insertPreparedCql, bookList);

4.3. Using CQL Statements

We can directly use CQL statements to query data as follows:

UUID uuid = UUIDs.timeBased();
String insertCql = "insert into book (isbn, title, publisher, tags) "
  + "values (" + uuid + ", 'Head First Java', 'OReilly Media', {'Software'})";
cassandraTemplate.execute(insertCql);

5. Conclusion

In this article, we examined various data manipulation strategies using Spring Data Cassandra, including CassandraTemplate and CQL queries.

The implementation of the above code snippets and examples can be found in my GitHub project – this is an Eclipse based project, so it should be easy to import and run as it is.



Guava 18: What’s New?


1. Overview

Google Guava provides libraries with utilities that ease Java development. In this tutorial we will take a look at the new functionality introduced in the Guava 18 release.

2. MoreObjects Utility Class

Guava 18 saw the addition of the MoreObjects class, which contains methods that do not have equivalents in java.util.Objects.

As of release 18, it only contains overloads of the toStringHelper method, which can be used to help you build your own toString() methods:

  • toStringHelper(Class<?> clazz)
  • toStringHelper(Object self)
  • toStringHelper(String className)

Typically, toString() is used when you need to output some information about an object. Usually it should contain details about the current state of the object. By using one of the implementations of toStringHelper, you can easily build a useful toString() message.

Suppose we have a User object containing a few fields that need to be written when toString() is called. We can use the MoreObjects.toStringHelper(Object self) method to do this easily.

public class User {

    private long id;
    private String name;

    public User(long id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public String toString() {
        return MoreObjects.toStringHelper(this)
            .add("id", id)
            .add("name", name)
            .toString();
    }
}

Here we have used the toStringHelper(Object self) method. With this setup, we can create a sample user to see the output that results when calling toString():

User user = new User(12L, "John Doe");
String userState = user.toString();
// userState: User{id=12, name=John Doe}

The other two overloads will return the same String if configured similarly:

toStringHelper(Class<?> clazz)

@Override
public String toString() {
    return MoreObjects.toStringHelper(User.class)
        .add("id", id)
        .add("name", name)
        .toString();
}

toStringHelper(String className)

@Override
public String toString() {
    return MoreObjects.toStringHelper("User")
        .add("id", id)
        .add("name", name)
        .toString();
}

The difference between these methods is evident if you call toString() on extensions of the User class. For example, if you have two kinds of Users: Administrator and Player, they would produce different output.

public class Player extends User {
    public Player(long id, String name) {
        super(id, name);
    }
}

public class Administrator extends User {
    public Administrator(long id, String name) {
        super(id, name);
    }
}

If you use toStringHelper(Object self) in the User class then your Player.toString() will return “Player{id=12, name=John Doe}“. However, if you use toStringHelper(String className) or toStringHelper(Class<?> clazz), Player.toString() will return “User{id=12, name=John Doe}“. The class name listed will be the parent class rather than the subclass.
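The mechanism behind that difference can be sketched without Guava: toStringHelper(Object self) resolves the runtime class name via getClass(), while the String and Class overloads bake a fixed name in. A plain-JDK approximation (not the actual Guava implementation):

```java
class User {

    private final long id;
    private final String name;

    User(long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Approximates MoreObjects.toStringHelper(this): the class name is
    // looked up at runtime, so subclasses report their own name.
    @Override
    public String toString() {
        return getClass().getSimpleName() + "{id=" + id + ", name=" + name + "}";
    }
}

class Player extends User {
    Player(long id, String name) {
        super(id, name);
    }
}

public class ToStringDemo {

    static String describe(User user) {
        return user.toString();
    }

    public static void main(String[] args) {
        // prints: Player{id=12, name=John Doe}
        System.out.println(describe(new Player(12L, "John Doe")));
    }
}
```

Hard-coding "User" in toString() instead of calling getClass().getSimpleName() would reproduce the behavior of the String and Class overloads.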

3. New Methods in FluentIterable

3.1. Overview

FluentIterable is used to operate on Iterable instances in a chained fashion. Let’s see how it can be used.

Suppose you have a list of User objects – as defined in the examples above, but extended with an age field – and you wish to filter that list to include only the users that are aged 18 or older.

List<User> users = new ArrayList<>();
users.add(new User(1L, "John", 45));
users.add(new User(2L, "Michelle", 27));
users.add(new User(3L, "Max", 16));
users.add(new User(4L, "Sue", 10));
users.add(new User(5L, "Bill", 65));

Predicate<User> byAge = user -> user.getAge() >= 18;

List<String> results = FluentIterable.from(users)
                           .filter(byAge)
                           .transform(Functions.toStringFunction())
                           .toList();

The resulting list will contain the information for John, Michelle, and Bill.

3.2. FluentIterable.of(E[])

With this method, you can create a FluentIterable from an array of objects.

User[] usersArray = { new User(1L, "John", 45), new User(2L, "Max", 15) } ;
FluentIterable<User> users = FluentIterable.of(usersArray);

You can now use the methods provided by the FluentIterable class.

3.3. FluentIterable.append(E…)

You can create a new FluentIterable from an existing one by appending more elements to it.

User[] usersArray = {new User(1L, "John", 45), new User(2L, "Max", 15)};

FluentIterable<User> users = FluentIterable.of(usersArray).append(
                                 new User(3L, "Sue", 23),
                                 new User(4L, "Bill", 17)
                             );

As expected, the size of the resultant FluentIterable is 4.

3.4. FluentIterable.append(Iterable<? extends E>)

This method behaves the same as the previous example, but allows you to add the entire contents of any existing implementation of Iterable to a FluentIterable.

User[] usersArray = { new User(1L, "John", 45), new User(2L, "Max", 15) };

List<User> usersList = new ArrayList<>();
usersList.add(new User(3L, "Diana", 32));

FluentIterable<User> users = FluentIterable.of(usersArray).append(usersList);

As expected, the size of the resultant FluentIterable is 3.

3.5. FluentIterable.join(Joiner)

The FluentIterable.join(…) method produces a String representing the entire contents of the FluentIterable, joined by a given String.

User[] usersArray = { new User(1L, "John", 45), new User(2L, "Max", 15) };
FluentIterable<User> users = FluentIterable.of(usersArray);
String usersString = users.join(Joiner.on("; "));

The usersString variable will contain the output of calling the toString() method on each element of the FluentIterable, separated by "; ". The Joiner class provides several options for joining the strings.
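For comparison, the basic joining behavior also exists in the JDK as StringJoiner; Guava's Joiner layers conveniences such as skipNulls() and useForNull(String) on top of it. A minimal plain-JDK sketch of the same idea:

```java
import java.util.StringJoiner;

public class JoinSketch {

    // join toString() representations with "; ", like users.join(Joiner.on("; "))
    static String join(Object... items) {
        StringJoiner joiner = new StringJoiner("; ");
        for (Object item : items) {
            joiner.add(String.valueOf(item));
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(join("John", "Max")); // John; Max
    }
}
```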

4. Hashing.crc32c

A hash function is any function that can be used to map data of arbitrary size to data of a fixed size. It is used in many areas, such as cryptography and checking for errors in transmitted data.

The Hashing.crc32c method returns a HashFunction that implements the CRC32C algorithm.

int receivedData = 123;
HashCode hashCode = Hashing.crc32c().hashInt(receivedData);
// hashCode: 495be649
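Checksums like CRC32C are typically used to detect accidental corruption: the sender computes a checksum over the payload, and the receiver recomputes it and compares. The same workflow can be sketched with the JDK's built-in plain CRC32 (Guava's crc32c() uses the related CRC32C polynomial):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumDemo {

    // compute a CRC32 checksum over a byte payload
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] sent = "important payload".getBytes(StandardCharsets.UTF_8);
        long expected = checksum(sent);

        // the receiver recomputes the checksum over what actually arrived
        byte[] received = sent.clone();
        System.out.println(checksum(received) == expected); // true: data intact
    }
}
```

Any single flipped bit in the received bytes would change the checksum and reveal the corruption.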

5. InetAddresses.decrement(InetAddress)

This method returns a new InetAddress that will be “one less” than its input.

InetAddress address = InetAddress.getByName("127.0.0.5");
InetAddress decrementedAddress = InetAddresses.decrement(address);
// decrementedAddress: 127.0.0.4
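Under the hood, such an operation walks the address bytes from the end, borrowing across octet boundaries (so decrementing 10.0.1.0 yields 10.0.0.255). A minimal plain-Java sketch of that logic; note it skips the validation Guava performs (for example, it would misbehave on 0.0.0.0):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DecrementSketch {

    // decrement an IP address by one, borrowing across octet boundaries
    static String decremented(String ip) {
        try {
            byte[] bytes = InetAddress.getByName(ip).getAddress();
            int i = bytes.length - 1;
            while (i >= 0 && bytes[i] == 0) {
                bytes[i] = (byte) 0xFF; // borrow: 0 wraps around to 255
                i--;
            }
            bytes[i]--;
            return InetAddress.getByAddress(bytes).getHostAddress();
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException(ip, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(decremented("127.0.0.5")); // 127.0.0.4
        System.out.println(decremented("10.0.1.0"));  // 10.0.0.255
    }
}
```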

6. New Executors in MoreExecutors

6.1. Threading Review

In Java you can use multiple threads to execute work. For this purpose, Java has the Thread class and the Runnable interface.

ConcurrentHashMap<String, Boolean> threadExecutions = new ConcurrentHashMap<>();
Runnable logThreadRun = () -> threadExecutions.put(Thread.currentThread().getName(), true);

Thread t = new Thread(logThreadRun);
t.run(); // run(), not start() - executes on the current thread

Boolean isThreadExecuted = threadExecutions.get("main");

As expected, isThreadExecuted will be true. Note that the Runnable ran only on the main thread, because we called run() directly rather than start(). If you want to use multiple threads, you can use different Executors for different purposes.

ExecutorService executorService = Executors.newFixedThreadPool(2);
executorService.submit(logThreadRun);
executorService.submit(logThreadRun);
executorService.shutdown();

Boolean isThread1Executed = threadExecutions.get("pool-1-thread-1");
Boolean isThread2Executed = threadExecutions.get("pool-1-thread-2");
// isThread1Executed: true
// isThread2Executed: true

In this example, all submitted work is executed in ThreadPool threads.

Guava provides different methods in its MoreExecutors class.

6.2. MoreExecutors.directExecutor()

This is a lightweight executor that can run tasks on the thread that calls the execute method.

Executor executor = MoreExecutors.directExecutor();
executor.execute(logThreadRun);

Boolean isThreadExecuted = threadExecutions.get("main");
// isThreadExecuted: true
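The same-thread behavior is easy to reason about in plain Java: any Executor that invokes the task inline behaves like directExecutor(). A minimal stand-in (not the Guava implementation, which adds details such as a proper toString()):

```java
import java.util.concurrent.Executor;

public class DirectExecutorSketch {

    // stand-in for MoreExecutors.directExecutor(): run the task on the calling thread
    static final Executor DIRECT = Runnable::run;

    static String threadTaskRanOn() {
        String[] ranOn = new String[1];
        DIRECT.execute(() -> ranOn[0] = Thread.currentThread().getName());
        return ranOn[0]; // execute() has already returned, so this is populated
    }

    public static void main(String[] args) {
        // the task ran on the caller's thread
        System.out.println(threadTaskRanOn().equals(Thread.currentThread().getName()));
    }
}
```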

6.3. MoreExecutors.newDirectExecutorService()

This method returns an instance of ListeningExecutorService. It is a heavier implementation of Executor that has many useful methods. It is similar to the deprecated sameThreadExecutor() method from previous versions of Guava.

This ExecutorService will run tasks on the thread that calls the execute() method.

ListeningExecutorService executor = MoreExecutors.newDirectExecutorService();
executor.execute(logThreadRun);

This executor has many useful methods such as invokeAll, invokeAny, awaitTermination, submit, isShutdown, isTerminated, shutdown, shutdownNow.

7. Conclusion

Guava 18 introduced several additions and improvements to its growing library of useful features. It is well-worth considering for use in your next project. The code samples in this article are available in the GitHub repository.

Java Web Weekly 52


I usually post about Dev stuff on Twitter - you can follow me there:

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Hibernate aggressive connection release [vladmihalcea.com]

A quick intro and overview of the way Hibernate deals with releasing connections and how that behavior can be configured.

>> Explicit receiver parameters [joda.org]

A cool feature I didn’t know Java 8 had – passing in the this parameter explicitly as a method argument.

>> JEP 277 “Enhanced Deprecation” is Nice. But Here’s a Much Better Alternative [jooq.org]

A very interesting solution to add richer semantics to the way we deprecated APIs.

>> 3 Disasters Which I Solved With JProfiler [petrikainulainen.net]

A good profiling session can make the difference between a solid, well-executing path through the code and a slow one. Here are a few ideas to get started.

>> A Refresher – Top 10 Java EE 7 Backend Features [blog.eisele.net]

A rundown of some of the most useful Java EE features – most of which look quite handy.

>> Jigsaw Hands-On Guide [codefx.org]

A solid, practical deep dive into modularity with the upcoming project Jigsaw.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Rich Domain Doesn’t Mean Fat Objects [sapiensworks.com]

There’s a lot to consider when modeling your domain, and not just the technical aspects.

Also worth reading:

3. Musings

>> React Indie Bundle report, or how we made $31k in a week [swizec.com]

A very cool case study of a product launch, with a lot of tactical info to pay close attention to.

>> Giving less advice [signalvnoise.com]

How close you are to the thing you had success with is really something we should all keep in mind when giving advice. Very sensible position and a solid, quick read.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> You have me finishing two weeks before I start [dilbert.com]

>> My email objecting to your plan [dilbert.com]

>> Two weeks or so [dilbert.com]

 

5. Pick of the Week

>> The One Method I’ve Used to Eliminate Bad Tech Hires [medium.com]

 




Intro to Spring Security LDAP


I just announced the release dates of my upcoming "REST With Spring" Classes:

>> THE "REST WITH SPRING" CLASSES

1. Overview

In this quick tutorial, we will learn how to set up Spring Security LDAP.

Before we start, a note about what LDAP is – it stands for Lightweight Directory Access Protocol and it’s an open, vendor-neutral protocol for accessing directory services over a network.

2. Maven Dependency

First, let's take a look at the Maven dependencies we need:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-ldap</artifactId>
</dependency>

<dependency>
    <groupId>org.apache.directory.server</groupId>
    <artifactId>apacheds-server-jndi</artifactId>
    <version>1.5.5</version>
</dependency>

Note: we used ApacheDS, an extensible and embeddable directory server, as our LDAP server.

3. Java Configuration

Next, let’s discuss our Spring Security Java configuration:

public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.ldapAuthentication()
            .userSearchBase("ou=people")
            .userSearchFilter("(uid={0})")
            .groupSearchBase("ou=groups")
            .groupSearchFilter("member={0}")
            .contextSource()
            .root("dc=baeldung,dc=com")
            .ldif("classpath:users.ldif");
    }
}

This is of course only the LDAP relevant part of the config – the full Java configuration can be found here.
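For completeness, the HTTP side of such a configuration usually just requires authentication for every request; here is a minimal sketch of the corresponding configure(HttpSecurity) override (the exact rules live in the full config linked above):

```java
@Override
protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
        .anyRequest().fullyAuthenticated()
        .and()
        .formLogin();
}
```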

4. XML Configuration

Now, let's take a look at the corresponding XML configuration:

<authentication-manager>
    <ldap-authentication-provider
      user-search-base="ou=people"
      user-search-filter="(uid={0})"
      group-search-base="ou=groups"
      group-search-filter="(member={0})">
    </ldap-authentication-provider>
</authentication-manager>
   
<ldap-server root="dc=baeldung,dc=com" ldif="users.ldif"/>

Again, this is just part of the configuration – the part that is relevant to LDAP; the full XML config can be found here.

5. LDAP Data Interchange Format

LDAP data can be represented using the LDAP Data Interchange Format (LDIF) – here’s an example of our user data:

dn: ou=groups,dc=baeldung,dc=com
objectclass: top
objectclass: organizationalUnit
ou: groups

dn: ou=people,dc=baeldung,dc=com
objectclass: top
objectclass: organizationalUnit
ou: people

dn: uid=baeldung,ou=people,dc=baeldung,dc=com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Jim Beam
sn: Beam
uid: baeldung
userPassword: password

dn: cn=admin,ou=groups,dc=baeldung,dc=com
objectclass: top
objectclass: groupOfNames
cn: admin
member: uid=baeldung,ou=people,dc=baeldung,dc=com

dn: cn=user,ou=groups,dc=baeldung,dc=com
objectclass: top
objectclass: groupOfNames
cn: user
member: uid=baeldung,ou=people,dc=baeldung,dc=com

6. The Application

Finally, here is our simple application:

@Controller
public class MyController {

    @RequestMapping("/secure")
    public String secure(Map<String, Object> model, Principal principal) {
        model.put("title", "SECURE AREA");
        model.put("message", "Only Authorized Users Can See This Page");
        return "home";
    }
}

7. Conclusion

In this quick guide to Spring Security with LDAP, we learned how to provision a basic system with LDIF and configure the security of that system.

The full implementation of this tutorial can be found in the github project – this is an Eclipse based project, so it should be easy to import and run as it is.

Sign Up and get 25% Off my upcoming "REST With Spring" classes on launch:

>> CHECK OUT THE CLASSES

Introduction to Spring Batch


1. Introduction

In this article we’re going to focus on a practical, code-focused intro to Spring Batch. Spring Batch is a processing framework designed for robust execution of jobs.

Its current version, 3.0, supports Spring 4 and Java 8. It also accommodates JSR-352, the new Java specification for batch processing.

Here are a few interesting and practical use-cases of the framework.

2. Workflow Basics

Spring Batch follows the traditional batch architecture, where a job repository does the work of scheduling and interacting with the job.

A job can have more than one step, and every step typically follows the sequence of reading data, processing it, and writing it.

And of course the framework will do most of the heavy lifting for us here – especially when it comes to the low-level persistence work of dealing with the jobs – using SQLite for the job repository.

2.1. Our Example Use Case

The simple use case we're going to tackle here: migrating some financial transaction data from CSV to XML.

The input file has a very simple structure – it contains a transaction per line, made up of: a username, the user id, the date of the transaction and the amount:

username, userid, transaction_date, transaction_amount
devendra, 1234, 31/10/2015, 10000
john, 2134, 3/12/2015, 12321
robin, 2134, 2/02/2015, 23411

3. The Maven pom

The dependencies required for this project are Spring Core, Spring Batch, and the SQLite JDBC connector:

<project xmlns="http://maven.apache.org/POM/4.0.0" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
  http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.baeldung</groupId>
    <artifactId>spring-batch-intro</artifactId>
    <version>0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>spring-batch-intro</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>4.2.0.RELEASE</spring.version>
        <spring.batch.version>3.0.5.RELEASE</spring.batch.version>
        <sqlite.version>3.8.11.2</sqlite.version>
    </properties>

    <dependencies>
        <!-- SQLite database driver -->
        <dependency>
            <groupId>org.xerial</groupId>
            <artifactId>sqlite-jdbc</artifactId>
            <version>${sqlite.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-oxm</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.batch</groupId>
            <artifactId>spring-batch-core</artifactId>
            <version>${spring.batch.version}</version>
        </dependency>
    </dependencies>
</project>

4. Spring Batch Config

First thing we’ll do is to configure Spring Batch with XML:

<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:jdbc="http://www.springframework.org/schema/jdbc" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="
    http://www.springframework.org/schema/beans 
    http://www.springframework.org/schema/beans/spring-beans-4.2.xsd
    http://www.springframework.org/schema/jdbc 
    http://www.springframework.org/schema/jdbc/spring-jdbc-4.2.xsd
">

    <!-- connect to SQLite database -->
    <bean id="dataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="org.sqlite.JDBC" />
        <property name="url" value="jdbc:sqlite:repository.sqlite" />
        <property name="username" value="" />
        <property name="password" value="" />
    </bean>

    <!-- create job-meta tables automatically -->
    <jdbc:initialize-database data-source="dataSource">
        <jdbc:script
          location="org/springframework/batch/core/schema-drop-sqlite.sql" />
        <jdbc:script location="org/springframework/batch/core/schema-sqlite.sql" />
    </jdbc:initialize-database>

    <!-- stored job-meta in memory -->
    <!-- 
    <bean id="jobRepository" 
      class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"> 
        <property name="transactionManager" ref="transactionManager" />
    </bean> 
    -->

    <!-- stored job-meta in database -->
    <bean id="jobRepository"
      class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="transactionManager" ref="transactionManager" />
        <property name="databaseType" value="sqlite" />
    </bean>

    <bean id="transactionManager" class=
      "org.springframework.batch.support.transaction.ResourcelessTransactionManager" />

    <bean id="jobLauncher"
      class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
        <property name="jobRepository" ref="jobRepository" />
    </bean>

</beans>

Of course a Java configuration is also available:

@Configuration
@EnableBatchProcessing
public class SpringConfig {

    @Value("org/springframework/batch/core/schema-drop-sqlite.sql")
    private Resource dropRepositoryTables;

    @Value("org/springframework/batch/core/schema-sqlite.sql")
    private Resource dataRepositorySchema;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.sqlite.JDBC");
        dataSource.setUrl("jdbc:sqlite:repository.sqlite");
        return dataSource;
    }

    @Bean
    public DataSourceInitializer dataSourceInitializer(DataSource dataSource)
      throws MalformedURLException {
        ResourceDatabasePopulator databasePopulator = 
          new ResourceDatabasePopulator();

        databasePopulator.addScript(dropRepositoryTables);
        databasePopulator.addScript(dataRepositorySchema);
        databasePopulator.setIgnoreFailedDrops(true);

        DataSourceInitializer initializer = new DataSourceInitializer();
        initializer.setDataSource(dataSource);
        initializer.setDatabasePopulator(databasePopulator);

        return initializer;
    }

    private JobRepository getJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource());
        factory.setTransactionManager(getTransactionManager());
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }

    private PlatformTransactionManager getTransactionManager() {
        return new ResourcelessTransactionManager();
    }

    public JobLauncher getJobLauncher() throws Exception {
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(getJobRepository());
        jobLauncher.afterPropertiesSet();
        return jobLauncher;
    }
}

5. Spring Batch Job Config

Let’s now write our job description for the CSV to XML work:

<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:batch="http://www.springframework.org/schema/batch" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/batch
    http://www.springframework.org/schema/batch/spring-batch-3.0.xsd
    http://www.springframework.org/schema/beans 
    http://www.springframework.org/schema/beans/spring-beans-4.2.xsd
">

    <import resource="spring.xml" />

    <bean id="record" class="org.baeldung.spring_batch_intro.model.Transaction"></bean>
    <bean id="itemReader"
      class="org.springframework.batch.item.file.FlatFileItemReader">

        <property name="resource" value="input/record.csv" />

        <property name="lineMapper">
            <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
                <property name="lineTokenizer">
                    <bean class=
                      "org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                        <property name="names" value="username,userid,transactiondate,amount" />
                    </bean>
                </property>
                <property name="fieldSetMapper">
                    <bean class="org.baeldung.spring_batch_intro.service.RecordFieldSetMapper" />
                </property>
            </bean>
        </property>
    </bean>

    <bean id="itemProcessor"
      class="org.baeldung.spring_batch_intro.service.CustomItemProcessor" />

    <bean id="itemWriter"
      class="org.springframework.batch.item.xml.StaxEventItemWriter">
        <property name="resource" value="file:xml/output.xml" />
        <property name="marshaller" ref="recordMarshaller" />
        <property name="rootTagName" value="transactionRecord" />
    </bean>

    <bean id="recordMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>org.baeldung.spring_batch_intro.model.Transaction</value>
            </list>
        </property>
    </bean>
    <batch:job id="firstBatchJob">
        <batch:step id="step1">
            <batch:tasklet>
                <batch:chunk reader="itemReader" writer="itemWriter"
                  processor="itemProcessor" commit-interval="10">
                </batch:chunk>
            </batch:tasklet>
        </batch:step>
    </batch:job>
</beans>

And of course, the similar Java-based job config:

public class SpringBatchConfig {
    
    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Value("input/record.csv")
    private Resource inputCsv;

    @Value("file:xml/output.xml")
    private Resource outputXml;

    @Bean
    public ItemReader<Transaction> itemReader()
      throws UnexpectedInputException, ParseException {
        FlatFileItemReader<Transaction> reader = new FlatFileItemReader<Transaction>();
        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
        String[] tokens = { "username", "userid", "transactiondate", "amount" };
        tokenizer.setNames(tokens);
        reader.setResource(inputCsv);
        DefaultLineMapper<Transaction> lineMapper = 
          new DefaultLineMapper<Transaction>();
        lineMapper.setLineTokenizer(tokenizer);
        lineMapper.setFieldSetMapper(new RecordFieldSetMapper());
        reader.setLineMapper(lineMapper);
        return reader;
    }

    @Bean
    public ItemProcessor<Transaction, Transaction> itemProcessor() {
        return new CustomItemProcessor();
    }

    @Bean
    public ItemWriter<Transaction> itemWriter(Marshaller marshaller)
      throws MalformedURLException {
        StaxEventItemWriter<Transaction> itemWriter = 
          new StaxEventItemWriter<Transaction>();
        itemWriter.setMarshaller(marshaller);
        itemWriter.setRootTagName("transactionRecord");
        itemWriter.setResource(outputXml);
        return itemWriter;
    }

    @Bean
    public Marshaller marshaller() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        marshaller.setClassesToBeBound(new Class[] { Transaction.class });
        return marshaller;
    }

    @Bean
    protected Step step1(ItemReader<Transaction> reader,
      ItemProcessor<Transaction, Transaction> processor,
      ItemWriter<Transaction> writer) {
        return steps.get("step1").<Transaction, Transaction> chunk(10)
          .reader(reader).processor(processor).writer(writer).build();
    }

    @Bean(name = "firstBatchJob")
    public Job job(@Qualifier("step1") Step step1) {
        return jobs.get("firstBatchJob").start(step1).build();
    }
}

OK, so now that we have the whole config, let’s break it down and start discussing it.

5.1. Read Data and Create Objects with ItemReader

First, we configured the itemReader, which will read the data from record.csv and convert it into a Transaction object:

@SuppressWarnings("restriction")
@XmlRootElement(name = "transactionRecord")
public class Transaction {
    private String username;
    private int userId;
    private Date transactionDate;
    private double amount;

    /* getters and setters for the attributes */

    @Override
    public String toString() {
        return "Transaction [username=" + username + ", userId=" + userId
          + ", transactionDate=" + transactionDate + ", amount=" + amount
          + "]";
    }
}

To do so – it uses a custom mapper:

public class RecordFieldSetMapper implements FieldSetMapper<Transaction> {

    public Transaction mapFieldSet(FieldSet fieldSet) throws BindException {
        SimpleDateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy");
        Transaction transaction = new Transaction();

        transaction.setUsername(fieldSet.readString("username"));
        transaction.setUserId(fieldSet.readInt(1));
        transaction.setAmount(fieldSet.readDouble(3));
        String dateString = fieldSet.readString(2);
        try {
            transaction.setTransactionDate(dateFormat.parse(dateString));
        } catch (ParseException e) {
            e.printStackTrace();
        }
        return transaction;
    }
}
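Since the mapper above silently swallows ParseException, it's worth spot-checking the date handling in isolation; a small sketch using the same dd/MM/yyyy pattern (here failing loudly instead of printing a stack trace):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateParseDemo {

    // same pattern the RecordFieldSetMapper uses
    static Date parse(String value) {
        try {
            return new SimpleDateFormat("dd/MM/yyyy").parse(value);
        } catch (ParseException e) {
            throw new IllegalArgumentException("bad date: " + value, e);
        }
    }

    public static void main(String[] args) {
        Date d = parse("31/10/2015");
        // format it back to confirm a lossless round trip
        System.out.println(new SimpleDateFormat("dd/MM/yyyy").format(d)); // 31/10/2015
    }
}
```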

5.2. Processing Data with ItemProcessor

We have created our own item processor, CustomItemProcessor. This doesn't process anything related to the transaction object – all it does is pass the original object coming from the reader to the writer:

public class CustomItemProcessor implements ItemProcessor<Transaction, Transaction> {

    public Transaction process(Transaction item) {
        return item;
    }
}

5.3. Writing Objects to the FS with ItemWriter

Finally, we are going to store this transaction into an xml file located at xml/output.xml:

<bean id="itemWriter"
  class="org.springframework.batch.item.xml.StaxEventItemWriter">
    <property name="resource" value="file:xml/output.xml" />
    <property name="marshaller" ref="recordMarshaller" />
    <property name="rootTagName" value="transactionRecord" />
</bean>

5.4. Configuring the Batch Job

So all we have to do is connect the dots with a job – using the batch:job syntax.

Note the commit-interval – that’s the number of transactions to be kept in memory before committing the batch to the itemWriter; it will hold the transactions in memory until that point (or until the end of the input data is encountered):

<batch:job id="firstBatchJob">
    <batch:step id="step1">
        <batch:tasklet>
            <batch:chunk reader="itemReader" writer="itemWriter"
              processor="itemProcessor" commit-interval="10">
            </batch:chunk>
        </batch:tasklet>
    </batch:step>
</batch:job>

5.5. Running the Batch Job

That’s it – let’s now set up and run everything:

public class App {
    public static void main(String[] args) {
        // Spring Java config
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
        context.register(SpringConfig.class);
        context.register(SpringBatchConfig.class);
        context.refresh();
        
        JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
        Job job = (Job) context.getBean("firstBatchJob");
        System.out.println("Starting the batch job");
        try {
            JobExecution execution = jobLauncher.run(job, new JobParameters());
            System.out.println("Job Status : " + execution.getStatus());
            System.out.println("Job completed");
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Job failed");
        }
    }
}

6. Conclusion

This tutorial gives you a basic idea of how to work with Spring Batch and how to use it in a simple use case.

It shows how you can easily develop your batch processing pipeline and how you can customize different stages in reading, processing and writing.

The full implementation of this tutorial can be found in the github project – this is an Eclipse based project, so it should be easy to import and run as it is.

Hiring a Developer to Create Videos


Back in August, I published the first job here on Baeldung to hire a technical content editor for the site. The applications came in almost immediately and I hired someone within two weeks.

This is the next step for the site and my next hire – I’m looking for a developer to create technical videos/screencasts.

And, same as last time – what better way to find someone than reaching out to readers and the Baeldung community.

Who’s the right candidate?

First – you need to be a developer yourself, working or actively involved in the Java and Spring ecosystem. The videos will be code centric, so being in the trenches and able to code is instrumental.

Second – you need to be either a native speaker or have solid delivery in English. Screencasts are mostly audio-centric, so the way they’re delivered is of course a critical aspect.

Finally – you don’t need to necessarily have experience with video, but if you do – it’s definitely going to help. If you don’t have video experience, just expect there’s going to be a learning curve.

What’s the actual job?

You’ll be working on video/screencast content diving into various Java and Spring topics.

Here’s how the process will work for a new video – for example – a 10 min screencast focused on Spring Data JPA.

I’ll provide a quick sample project and a high level overview of the main points to hit in the video.

You’ll go through the project to understand it and then you’ll do the recording. After a quick editing pass you’ll send me the draft and I’ll provide feedback.

Finally – you’ll make some (minimal) changes based on my feedback and do the final editing pass to clean the audio, introduce transitions, etc.

Generally, for 10 minutes of video, expect about 2-3 hours of work.

What’s the budget?

The budget for 1 hour of video is $1000 – which works out to a rate of about $66 / hour. That is assuming an average of 15 hours of work for an hour of final video.

Of course, the process will be slower initially and producing 1 hour of video might take longer, depending on your experience with speaking, editing, etc.

And as you get a handle on it, the process might also be quite a bit faster than this average.

How do I apply?

If you think you're well suited for this type of video work, I'd love to work with you on creating some cool videos for the Java community – get in touch with me over on eugen@baeldung.com. We can talk a bit more in-depth about the video creation process and I'll share the tools and process I use.

Do note that the application itself is going to be a video – basically a quick (2-3 minute) screencast on a technical / coding topic of your choice (as long as it's in the Java ecosystem).

Cheers,

Eugen. 

Java Web Weekly 1


I usually post about Dev stuff on Twitter - you can follow me there:

The third year of curating my Java reading and sharing the best stuff here, on Baeldung. Here we go…

1. Spring and Java

>> If You’ve Written Java Code in 2015 – Here Are the Trends You Couldn’t Have Missed [takipi.com]

Some of the trends of 2015 in the Java ecosystem.

>> Please, Java. Do Finally Support Multiline String Literals [jooq.org]

Yeah – short and to the point – multiline Strings would be a highly useful, simple addition to the language.

>> Android Will Use the OpenJDK [infoq.com]

Interesting turn of events after so many years of problems and legal uncertainty, stretching way back to the days of Sun.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Refactoring is a Development Technique, Not a Project [daedtech.com]

A good way to think about refactoring as an ongoing process instead of a “stop the world” mini-project, once a year.

>> No, you can’t join my wifi network [troyhunt.com]

An interesting rant about home network security.

>> The Best Way to Hire Developers [daedtech.com]

Interviewing well can feel a lot like luck, and having been on both sides of the table I can definitely see why that’s the case.

In my own experience interviewing candidates, the most helpful thing has been not allowing myself to settle into an interviewing rhythm and always keeping my eye on how I can do interviewing better.

>> General Performance Tips [techblog.bozho.net]

Some quick general thoughts about good performance practices.

Also worth reading:

 

3. Comics

And my favorite Dilberts of the week:

>> Is this part of your larger war on knowledge? [dilbert.com]

>> Did I just pay a consultant to recommend his own company’s software? [dilbert.com]

>> Is he behind me? [dilbert.com]

 

4. Pick of the Week

>> The myth of perfect design [scottberkun.com]

 



Java Web Weekly 2



At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> How to add a GitHub commit link to a Spring Boot application [codecentric.de]

A very to the point, practical writeup on displaying a git commit link in a Boot application.

>> A Curious Incidence of a jOOQ API Design Flaw [jooq.org]

This is why API design is so damn hard – very interesting read.

>> Writing Better Tests With JUnit [codecentric.de]

Some good testing principles after the “intro” part of the writeup.

>> Creating a PageRank Analytics Platform Using Spring Boot Microservices [kennybastani.com]

I opened this article not knowing what to expect, and I was definitely surprised to see a full case-study of building out a small but practical (and quite interesting) application with Spring and a bunch of other technologies.

Whenever I get the question “I’m a beginner – how do I get started learning a new {X}” – my usual answer is: Build something with it. Not a trivial toy project, but something that’s actually useful (at least to you). It’s this kind of project that I have in mind. Cool beans. Very cool beans indeed.

>> Writing Unit Tests With Spock Framework: Introduction to Specifications, Part One [petrikainulainen.net]

A solid, quick intro to the Spock framework and specifications.

>> Native Queries – How to call native SQL queries with JPA [thoughts-on-java.org]

A solid intro to writing raw SQL within JPA. Multi-line Strings would come in really handy for this one.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Make better Git commits [arhohuttunen.fi]

Using git well is definitely not easy. Here are some clear things that you absolutely need to do if you want to keep your sanity, especially if you’re working in a large team (and doing code reviews). Have at it.

>> How to Detect Sucker Culture while Interviewing [daedtech.com]

Good advice for interviewing in a way that actually matches and syncs up with your broader life goals.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Avoid saying “unfortunately” when you talk to clients [dilbert.com]

>> Do you understand? Maybe. Is your point that you don’t know how to communicate? [dilbert.com]

>> The servers are down [dilbert.com]

 

4. Pick of the Week

I’ve recently opened up a new position here at Baeldung – for video content creation. Here are the full details, budgets and an example of what it takes to record a video.

Have a look if it’s something that may interest you:

>> Hiring a Developer to Create Videos [baeldung.com]

 



Lambda Expressions and Functional Interfaces: Tips and Best Practices



1. Overview

Now that Java 8 has reached wide usage, patterns and best practices have begun to emerge for some of its headlining features. In this tutorial, we will take a closer look at functional interfaces and lambda expressions.

2. Prefer Standard Functional Interfaces

Functional interfaces, which are gathered in the java.util.function package, satisfy most developers’ needs in providing target types for lambda expressions and method references. Each of these interfaces is general and abstract, making them easy to adapt to almost any lambda expression. Developers should explore this package before creating new functional interfaces.

Consider an interface Foo:

@FunctionalInterface
public interface Foo {
    String method(String string);
}

and a method add() in some class UseFoo, which takes this interface as a parameter:

public String add(String string, Foo foo) {
    return foo.method(string);
}

To execute it, you would write:

Foo foo = parameter -> parameter + " from lambda";
String result = useFoo.add("Message ", foo);

Look closer and you will see that Foo is nothing more than a function that accepts one argument and produces a result. Java 8 already provides such an interface in Function<T,R> from the java.util.function package.

Now we can remove interface Foo completely and change our code to:

public String add(String string, Function<String, String> fn) {
    return fn.apply(string);
}

To execute this, we can write:

Function<String, String> fn = 
  parameter -> parameter + " from lambda";
String result = useFoo.add("Message ", fn);

3. Use the @FunctionalInterface Annotation

Annotate your functional interfaces with @FunctionalInterface. At first, this annotation seems useless. Even without it, your interface will be treated as functional as long as it has just one abstract method.

But imagine a big project with several interfaces – it’s hard to control everything manually. An interface that was designed to be functional could accidentally be changed by the addition of another abstract method, rendering it unusable as a functional interface.

By using the @FunctionalInterface annotation, the compiler will trigger an error in response to any attempt to break the predefined structure of a functional interface. It is also a very handy tool for making your application architecture easier for other developers to understand.

So, use this:

@FunctionalInterface
public interface Foo {
    String method();
}

instead of just:

public interface Foo {
    String method();
}

4. Don’t Overuse Default Methods in Functional Interfaces

You can easily add default methods to a functional interface. This is acceptable under the functional interface contract as long as there is only one abstract method declaration:

@FunctionalInterface
public interface Foo {
    String method();
    default void defaultMethod() {}
}

Functional interfaces can be extended by other functional interfaces if their abstract methods have the same signature. For example:

@FunctionalInterface
public interface FooExtended extends Baz, Bar {}
	
@FunctionalInterface
public interface Baz {	
    String method();	
    default void defaultBaz() {}		
}
	
@FunctionalInterface
public interface Bar {	
    String method();	
    default void defaultBar() {}	
}

Just as with regular interfaces, extending different functional interfaces with the same default method can be problematic. For example, assume that interfaces Bar and Baz both have a default method defaultCommon(). In this case, you will get a compile-time error:

interface FooExtended inherits unrelated defaults for defaultCommon() from types Baz and Bar...

To fix this, the defaultCommon() method should be overridden in the FooExtended interface. You can, of course, provide a custom implementation of this method. However, if you want to use one of the parent interfaces’ implementations (for example, from the Baz interface), add the following line of code to the defaultCommon() method’s body:

Baz.super.defaultCommon();

But be careful. Adding too many default methods to an interface is not a very good architectural decision. It should be viewed as a compromise, only to be used when required for upgrading existing interfaces without breaking backward compatibility.
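To make the conflict concrete, here is a minimal, self-contained sketch; the defaultCommon() bodies below are hypothetical, added purely for illustration:

```java
interface Baz {
    String method(String string);
    // hypothetical default that clashes with Bar's
    default String defaultCommon() { return "from Baz"; }
}

interface Bar {
    String method(String string);
    // hypothetical default with the same signature as Baz's
    default String defaultCommon() { return "from Bar"; }
}

@FunctionalInterface
interface FooExtended extends Baz, Bar {
    // without this override, the code does not compile:
    // "inherits unrelated defaults for defaultCommon()"
    @Override
    default String defaultCommon() {
        return Baz.super.defaultCommon();
    }
}

public class DefaultMethodConflict {
    public static void main(String[] args) {
        FooExtended foo = string -> string;
        System.out.println(foo.defaultCommon()); // from Baz
    }
}
```

Note that FooExtended remains a valid functional interface: the two inherited method(String) declarations have the same signature and count as a single abstract method.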

5. Instantiate Functional Interfaces with Lambda Expressions

The compiler will allow you to use an inner class to instantiate a functional interface. However, this can lead to very verbose code. You should prefer lambda expressions:

Foo foo = parameter -> parameter + " from Foo";

over an inner class:

Foo fooByIC = new Foo() {
    @Override
    public String method(String string) {
        return string + " from Foo";
    }
};

The lambda expression approach can be used for any suitable interface from old libraries. It is usable for interfaces like Runnable, Comparator, and so on. However, this doesn’t mean that you should review your whole older codebase and change everything.
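For instance, Comparator – a single-method interface that predates Java 8 – can be instantiated with a lambda just as easily as with an anonymous inner class:

```java
import java.util.Arrays;
import java.util.List;

public class ComparatorLambda {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("pear", "fig", "apple");
        // sort by length with a lambda instead of an anonymous Comparator
        words.sort((a, b) -> Integer.compare(a.length(), b.length()));
        System.out.println(words); // [fig, pear, apple]
    }
}
```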

6. Avoid Overloading Methods with Functional Interfaces as Parameters 

Use methods with different names to avoid collisions; let’s look at an example:

public interface Adder {
    String add(Function<String, String> f);
    void add(Consumer<Integer> f);
}

public class AdderImpl implements Adder {

    @Override
    public  String add(Function<String, String> f) {
        return f.apply("Something ");
    }

    @Override
    public void add(Consumer<Integer> f) {}
}

At first glance, this seems reasonable. But any attempt to execute any of AdderImpl’s methods:

String r = adderImpl.add(a -> a + " from lambda");

ends with an error with the following message:

reference to add is ambiguous both method 
add(java.util.function.Function<java.lang.String,java.lang.String>) 
in fiandlambdas.AdderImpl and method 
add(java.util.function.Consumer<java.lang.Integer>) 
in fiandlambdas.AdderImpl match

To solve this problem, you have two options. The first is to use methods with different names:

String addWithFunction(Function<String, String> f);

void addWithConsumer(Consumer<Integer> f);

The second is to cast the lambda explicitly. This is not preferred:

String r = adderImpl.add((Function<String, String>) a -> a + " from lambda");

7. Don’t Treat Lambda Expressions as Inner Classes

Despite our previous example, where we essentially substituted an inner class with a lambda expression, the two concepts are different in an important way: scope.

When you use an inner class, it creates a new scope. You can shadow local variables from the enclosing scope by declaring new local variables with the same names. You can also use the keyword this inside your inner class as a reference to its own instance.

However, lambda expressions work with the enclosing scope. You can’t shadow variables from the enclosing scope inside the lambda’s body. In this case, the keyword this refers to the enclosing instance.

For example, in the class UseFoo you have an instance variable value:

private String value = "Enclosing scope value";

Then, in some method of this class, place the following code and execute it:

public String scopeExperiment() {
    Foo fooIC = new Foo() {
        String value = "Inner class value";

        @Override
        public String method(String string) {
            return this.value;
        }
    };
    String resultIC = fooIC.method("");

    Foo fooLambda = parameter -> {
        String value = "Lambda value";
        return this.value;
    };
    String resultLambda = fooLambda.method("");

    return "Results: resultIC = " + resultIC + 
      ", resultLambda = " + resultLambda;
}

If you execute the scopeExperiment() method, you will get the following result: “Results: resultIC = Inner class value, resultLambda = Enclosing scope value”.

As you can see, by calling this.value in the inner class, you access a local variable of its own instance. But in the case of the lambda, the this.value call gives you access to the variable value defined in the UseFoo class, not to the variable value defined inside the lambda’s body.

8. Keep Lambda Expressions Short And Self-explanatory

If possible, use one-line constructions instead of a large block of code. Remember: lambdas should be expressions, not narratives. Despite their concise syntax, lambdas should precisely express the functionality they provide.

This is mainly stylistic advice, as performance will not change drastically. In general, however, such code is much easier to understand and to work with.

This can be achieved in many ways – let’s have a closer look.

8.1. Avoid Blocks of Code in Lambda’s Body

In an ideal situation, lambdas should be written in one line of code. With this approach, the lambda is a self-explanatory construction, which declares what action should be executed with what data (in the case of lambdas with parameters).

If you have a large block of code, the lambda’s functionality is not immediately clear.

With this in mind, do the following:

Foo foo = parameter -> buildString(parameter);
private String buildString(String parameter) {
    String result = "Something " + parameter;
    //many lines of code
    return result;
}

instead of:

Foo foo = parameter -> {
    String result = "Something " + parameter;
    //many lines of code
    return result;
};

However, please don’t use this “one-line lambda” rule as dogma. If you have two or three lines in lambda’s definition, it may not be valuable to extract that code to another method.

8.2. Avoid Specifying Parameter Types

In most cases, the compiler is able to resolve the type of lambda parameters with the help of type inference. Consequently, adding a type to the parameters is optional and can be omitted.

Do this:

(a, b) -> a.toLowerCase() + b.toLowerCase();

instead of this:

(String a, String b) -> a.toLowerCase() + b.toLowerCase();

8.3. Avoid Parentheses Around a Single Parameter

Lambda syntax requires parentheses only when there is more than one parameter, or no parameter at all. That is why it is safe to make your code a little shorter and exclude the parentheses when there is only one parameter.

So, do this:

a -> a.toLowerCase();

instead of this:

(a) -> a.toLowerCase();

8.4.  Avoid Return Statement and Braces

Braces and return statements are optional in one-line lambda bodies. This means they can be omitted for clarity and conciseness.

Do this:

a -> a.toLowerCase();

instead of this:

a -> { return a.toLowerCase(); };

8.5. Use Method References

Very often, even in our previous examples, lambda expressions just call methods which are already implemented elsewhere. In this situation, it is very useful to use another Java 8 feature: method references.

So, the lambda expression:

a -> a.toLowerCase();

could be substituted by:

String::toLowerCase;

This is not always shorter, but it makes code more readable.
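Method references shine when combined with the Stream API. A small illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MethodReferenceDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Grace", "Edsger");
        // String::toLowerCase replaces the lambda s -> s.toLowerCase()
        List<String> lower = names.stream()
          .map(String::toLowerCase)
          .collect(Collectors.toList());
        System.out.println(lower); // [ada, grace, edsger]
    }
}
```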

9. Use “Effectively Final” Variables

Accessing a non-final variable inside lambda expressions will cause a compile-time error. But this doesn’t mean that you should mark every target variable as final.

According to the “effectively final” concept, the compiler treats every variable as final as long as it is assigned only once.

It is safe to use such variables inside lambdas, because the compiler will control their state and trigger a compile-time error immediately after any attempt to change them.

For example, the following code will not compile:

public void method() {
    String localVariable = "Local";
    Foo foo = parameter -> {
        String localVariable = parameter;
        return localVariable;
    };
}

The compiler will inform you that:

Variable 'localVariable' is already defined in the scope.

This approach should simplify the process of making lambda execution thread-safe.

10. Protect Object Variables from Mutation

One of the main purposes of lambdas is use in parallel computing – which means that they’re really helpful when it comes to thread-safety.

The “effectively final” paradigm helps a lot here, but not in every case. Lambdas can’t change the value of a variable from the enclosing scope. But in the case of mutable object variables, state can still be changed inside lambda expressions.

Consider the following code:

int[] total = new int[1];
Runnable r = () -> total[0]++;
r.run();

This code is legal, as the total variable itself remains “effectively final”. But will the object it references have the same state after execution of the lambda? No!

Keep this example as a reminder to avoid code that can cause unexpected mutations.
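A side-effect-free alternative is to let the stream do the accumulation itself, so the lambda never mutates captured state. A minimal sketch:

```java
import java.util.stream.IntStream;

public class NoMutationDemo {
    public static void main(String[] args) {
        // instead of incrementing a captured counter inside a lambda,
        // express the computation as a reduction over the stream
        int total = IntStream.rangeClosed(1, 5).sum();
        System.out.println(total); // 15
    }
}
```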

11. Conclusion

In this tutorial, we saw some best practices and pitfalls in Java 8’s lambda expressions and functional interfaces. Despite the utility and power of these new features, they are just tools. Every developer should pay attention while using them.

The complete source code for the examples is available in this GitHub project – this is a Maven and Eclipse project, so it can be imported and used as-is.



Introduction to Using Thymeleaf in Spring



1. Introduction

Thymeleaf is a Java template engine for processing and creating HTML, XML, JavaScript, CSS and text.

In this article, we will discuss how to use Thymeleaf with Spring, along with some basic use cases in the view layer of a Spring MVC application.

The library is extremely extensible and its natural templating capability ensures templates can be prototyped without a back-end – which makes development very fast when compared with other popular template engines such as JSP.

2. Integrating Thymeleaf with Spring

First, let us see the configuration required to integrate Thymeleaf with Spring. The thymeleaf-spring library is required for the integration.

Add the following dependencies to your Maven POM file:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf-spring4</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>

Note that, for a Spring 3 project, the thymeleaf-spring3 library must be used instead of thymeleaf-spring4.

The SpringTemplateEngine class performs all of the configuration steps. You can configure this class as a bean in the Java configuration file:

@Bean
@Description("Thymeleaf Template Resolver")
public ServletContextTemplateResolver templateResolver() {
    ServletContextTemplateResolver templateResolver = new ServletContextTemplateResolver();
    templateResolver.setPrefix("/WEB-INF/views/");
    templateResolver.setSuffix(".html");
    templateResolver.setTemplateMode("HTML5");

    return templateResolver;
}

@Bean
@Description("Thymeleaf Template Engine")
public SpringTemplateEngine templateEngine() {
    SpringTemplateEngine templateEngine = new SpringTemplateEngine();
    templateEngine.setTemplateResolver(templateResolver());

    return templateEngine;
}

The templateResolver bean properties prefix and suffix indicate the location of the view pages within the webapp directory and their filename extension, respectively.

The ViewResolver interface in Spring MVC maps the view names returned by a controller to actual view objects. ThymeleafViewResolver implements the ViewResolver interface and is used to determine which Thymeleaf views to render, given a view name.

The final step in the integration is to add the ThymeleafViewResolver as a bean:

@Bean
@Description("Thymeleaf View Resolver")
public ThymeleafViewResolver viewResolver() {
    ThymeleafViewResolver viewResolver = new ThymeleafViewResolver();
    viewResolver.setTemplateEngine(templateEngine());
    viewResolver.setOrder(1);
    return viewResolver;
}

3. Displaying Values from Message Source (Property Files)

The th:text=”#{key}” tag attribute can be used to display values from property files. For this to work, the property file must be configured as a messageSource bean:

@Bean
@Description("Spring Message Resolver")
public ResourceBundleMessageSource messageSource() {
    ResourceBundleMessageSource messageSource = new ResourceBundleMessageSource();
    messageSource.setBasename("messages");
    return messageSource;
}
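With this basename, the values are looked up in a messages.properties file on the classpath; a minimal example (the message text here is illustrative):

```properties
welcome.message=Welcome to our Thymeleaf application!
```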

Here is the Thymeleaf HTML code to display the value associated with the key welcome.message:

<span th:text="#{welcome.message}" />

4. Displaying Model Attributes

4.1. Simple Attributes

The th:text=”${attributename}” tag attribute can be used to display the value of model attributes. Let’s add a model attribute with the name serverTime in the controller class:

model.addAttribute("serverTime", dateFormat.format(new Date()));

The HTML code to display the value of serverTime attribute:

Current time is <span th:text="${serverTime}" />

4.2. Collection Attributes

If the model attribute is a collection of objects, the th:each tag attribute can be used to iterate over it. Let’s define a Student model class with two fields, id and name:

public class Student implements Serializable {
    private Integer id;
    private String name;
    // standard getters and setters
}

Now we will add a list of students as a model attribute in the controller class:

List<Student> students = new ArrayList<Student>();
// logic to build student data
model.addAttribute("students", students);

Finally, we can use Thymeleaf template code to iterate over the list of students and display all field values:

<tbody>
    <tr th:each="student: ${students}">
        <td th:text="${student.id}" />
        <td th:text="${student.name}" />
    </tr>
</tbody>

5. Conditional Evaluation

5.1. if and unless

The th:if=”${condition}” attribute is used to display a section of the view if the condition is met. The th:unless=”${condition}” attribute is used to display a section of the view if the condition is not met.

Add a gender field to the Student model:

public class Student implements Serializable {
    private Integer id;
    private String name;
    private Character gender;
    
    // standard getters and setters
}

Suppose this field has two possible values (M or F) to indicate the student’s gender. If we wish to display the words “Male” or “Female” instead of the single character, we could accomplish this by using the following Thymeleaf code:

<td>
    <span th:if="${student.gender} == 'M'" th:text="Male" /> 
    <span th:unless="${student.gender} == 'M'" th:text="Female" />
</td>

5.2. switch and case

The th:switch and th:case attributes are used to display content conditionally using the switch statement structure.

The previous code could be rewritten using the th:switch and th:case attributes:

<td th:switch="${student.gender}">
    <span th:case="'M'" th:text="Male" /> 
    <span th:case="'F'" th:text="Female" />
</td>

6. Handling User Input

Form input can be handled using the th:action=”@{url}” and th:object=”${object}” attributes. The th:action is used to provide the form action URL and th:object is used to specify an object to which the submitted form data will be bound. Individual fields are mapped using the th:field=”*{name}” attribute, where name is the matching property of the object.

For the Student class, we can create an input form:

<form action="#" th:action="@{/saveStudent}" th:object="${student}" method="post">
    <table border="1">
        <tr>
            <td><label th:text="#{msg.id}" /></td>
            <td><input type="number" th:field="*{id}" /></td>
        </tr>
        <tr>
            <td><label th:text="#{msg.name}" /></td>
            <td><input type="text" th:field="*{name}" /></td>
        </tr>
        <tr>
            <td><input type="submit" value="Submit" /></td>
        </tr>
    </table>
</form>

In the above code, /saveStudent is the form action URL and student is the object that holds the form data submitted.

The StudentController class handles the form submission:

@Controller
public class StudentController {
    @RequestMapping(value = "/saveStudent", method = RequestMethod.POST)
    public String saveStudent(@ModelAttribute Student student, BindingResult errors, Model model) {
        // logic to process input data
    }
}

In the code above, the @RequestMapping annotation maps the controller method to the URL provided in the form. The annotated method saveStudent() performs the required processing for the submitted form. The @ModelAttribute annotation binds the form fields to the student object.

7. Displaying Validation Errors

The #fields.hasErrors() function can be used to check whether a field has any validation errors, and the #fields.errors() function can be used to display errors for a particular field. The field name is the input parameter for both of these functions.

HTML code to iterate and display the errors for each of the fields in the form:

<ul>
    <li th:each="err : ${#fields.errors('id')}" th:text="${err}" />
    <li th:each="err : ${#fields.errors('name')}" th:text="${err}" />
</ul>

Instead of a field name, the above functions accept the wildcard character * or the constant all to indicate all fields. The th:each attribute is used to iterate over the multiple errors that may be present for each field.

The previous HTML code re-written using the wildcard *:

<ul>
    <li th:each="err : ${#fields.errors('*')}" th:text="${err}" />
</ul>

or using the constant all:

<ul>
    <li th:each="err : ${#fields.errors('all')}" th:text="${err}" />
</ul>

Similarly, global errors in Spring can be displayed using the global constant.

The HTML code to display global errors:

<ul>
    <li th:each="err : ${#fields.errors('global')}" th:text="${err}" />
</ul>

The th:errors attribute can also be used to display error messages. The previous code to display errors in the form can be re-written using th:errors attribute:

<ul>
    <li th:errors="*{id}" />
    <li th:errors="*{name}" />
</ul>

8. Using Conversions

The double bracket syntax {{}} is used to format data for display. This makes use of the formatters configured for that type of field in the conversionService bean of the context file.

The name field in the Student class is formatted:

<tr th:each="student: ${students}">
    <td th:text="${{student.name}}" />
</tr>

The above code uses the NameFormatter class configured as:

@Override
@Description("Custom Conversion Service")
public void addFormatters(FormatterRegistry registry) {
    registry.addFormatter(new NameFormatter());
}

The NameFormatter class implements the Spring Formatter interface.

The #conversions utility can also be used to convert objects for display. The syntax for the utility function is #conversions.convert(Object, Class) where Object is converted to Class type.

To display the student object’s percentage field with the fractional part removed:

<tr th:each="student: ${students}">
    <td th:text="${#conversions.convert(student.percentage, 'Integer')}" />
</tr>

9. Conclusion

In this tutorial, we’ve seen how to integrate and use Thymeleaf in a Spring MVC application.

We have also seen examples of how to display fields, accept input, display validation errors, and convert data for display. A working version of the code shown in this article is available in a GitHub repository.



Java Web Weekly Issue 107



At the very beginning of 2014, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a week since.

Here we go…

1. Spring and Java

>> If Java Were Designed Today: The Synchronizable Interface [jooq.org]

Yet another cool exploration of what “might be” in Java – this time with a focus on the ol’ “synchronized”.

>> European conferences with strong Spring content [spring.io]

A quick and cool list of conferences in Europe, well represented in the Spring ecosystem. Good stuff, especially now since I’m working on a couple new talks.

>> Beware Of findFirst() And findAny() [codefx.org]

Very cool exploration of the nuances of findFirst and findAny within the Java Streams API.

>> JPA test case templates [in.relation.to]

The Hibernate testing effort is moving forward with this addition of a JPA focused test case that reproduces the issue without being Hibernate specific. That’s certainly the right approach whenever possible.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

Also worth reading:

3. Musings

>> Do you have to love what you do? [signalvnoise.com]

The old adage “you need to love what you do” is certainly seeing some strong counter-examples on the web.

A good way I heard this re-framed is – you need to love helping people, serving your audience, or really, love the game in general. I like that shift of focus – it opens up a lot more options.

>> PayPal and zero dollar invoice spam [troyhunt.com]

A surprising, new type of spam via PayPal – have a quick read even only to be aware of the problem (if you have a PayPal account).

>> Developer Tips for Sublime Productivity [daedtech.com]

Focus is so very important, as more and more – “time is the asset” (Gary V).

These are some basic but solid tips to get you into that state of flow and help you keep it.

>> Testing: Appetite Comes With Eating [techblog.bozho.net]

Some interesting, personal notes on testing and why you can’t afford not to invest into it.

>> New – Scheduled Reserved Instances [aws.amazon.com]

This looks like it could fit into a few interesting usecases – very cool to see so much innovation coming out of AWS.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> My boss keeps micromanaging me [dilbert.com]

>> Genius comes in many forms [dilbert.com]

>> This may look like an ordinary powerpoint slide [dilbert.com]

 

5. Pick of the Week

>> Introducing BDD [dannorth.net]

 



Eliminate Redundancies in RAML with Resource Types and Traits


1. Overview

In our RAML tutorial article, we introduced the RESTful API Modeling Language and created a simple API definition based on a single entity called Foo. Now imagine a real-world API in which you have several entity-type resources, all having the same or similar GET, POST, PUT, and DELETE operations. You can see how your API documentation can quickly become tedious and repetitive.

In this article, we show how the resource types and traits features of RAML can remove redundancies from resource and method definitions by extracting and parameterizing common sections, preventing copy-and-paste errors while making your API definitions more concise.

2. Our API

In order to demonstrate the benefits of resource types and traits, we will expand our original API by adding resources for a second entity type called Bar. Here are the resources that will make up our revised API:

  • GET /api/v1/foos
  • POST /api/v1/foos
  • GET /api/v1/foos/{fooId}
  • PUT /api/v1/foos/{fooId}
  • DELETE /api/v1/foos/{fooId}
  • GET /api/v1/foos/name/{name}
  • GET /api/v1/foos?name={name}&ownerName={ownerName}
  • GET /api/v1/bars
  • POST /api/v1/bars
  • GET /api/v1/bars/{barId}
  • PUT /api/v1/bars/{barId}
  • DELETE /api/v1/bars/{barId}
  • GET /api/v1/bars/fooId/{fooId}

3. Recognizing Patterns

As we read through the list of resources in our API, we begin to see some patterns emerge. For example, there is a pattern for the URIs and methods used to create, read, update, and delete single entities, and there is a pattern for the URIs and methods used to retrieve collections of entities. The collection and collection-item pattern is one of the more common patterns used to extract resource types in RAML definitions.

Let’s look at a couple of sections of our API:

[Note: In the code snippets below, a line containing only three dots (…) indicates that some lines are being skipped for brevity.]

/foos:
  get:
    description: List all foos matching query criteria, if provided;
                 otherwise list all foos
    queryParameters:
      name?: string
      ownerName?: string
    responses:
      200:
        body:
          application/json:
            type: Foo[]
  post:
    description: Create a new foo
    body:
      application/json:
        type: Foo
    responses:
      201:
        body:
          application/json:
            type: Foo
...
/bars:
  get:
    description: List all bars matching query criteria, if provided;
                 otherwise list all bars
    queryParameters:
      name?: string
      ownerName?: string
    responses:
      200:
        body:
          application/json:
            type: Bar[]
  post:
    description: Create a new bar
    body:
      application/json:
        type: Bar
    responses:
      201:
        body:
          application/json:
            type: Bar

When we compare the RAML definitions of the /foos and /bars resources, including the HTTP methods used, we can see several redundancies among the various properties of each, and we again see patterns begin to emerge.

Wherever there is a pattern in either a resource or method definition, there is an opportunity to use a RAML resource type or trait.

4. Resource Types

In order to implement the patterns found in the API, resource types use reserved and user-defined parameters surrounded by double angle brackets (<< and >>).

4.1 Reserved Parameters

Two reserved parameters may be used in resource type definitions:

  • <<resourcePath>> represents the entire URI (following the baseURI), and
  • <<resourcePathName>> represents the part of the URI following the right-most forward slash (/), ignoring any braces { }.

When processed inside a resource definition, their values are calculated based on the resource being defined.

Given the resource /foos, for example, <<resourcePath>> would evaluate to “/foos” and <<resourcePathName>> would evaluate to “foos”.

Given the resource /foos/{fooId}, <<resourcePath>> would evaluate to “/foos/{fooId}” and <<resourcePathName>> would evaluate to “fooId”.
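As a quick illustration of these rules, here is a sketch in plain Java. This helper is purely hypothetical – it is not part of any RAML processor – and simply mimics the evaluation just described:

```java
// Illustrative only: mimics how the two reserved parameters expand,
// following the rules described above.
public class ReservedParams {

    // <<resourcePath>>: the entire URI following the baseURI
    static String resourcePath(String uri) {
        return uri;
    }

    // <<resourcePathName>>: the part after the right-most forward slash,
    // with any surrounding braces stripped
    static String resourcePathName(String uri) {
        String last = uri.substring(uri.lastIndexOf('/') + 1);
        return last.replace("{", "").replace("}", "");
    }

    public static void main(String[] args) {
        System.out.println(resourcePathName("/foos"));         // foos
        System.out.println(resourcePathName("/foos/{fooId}")); // fooId
    }
}
```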

4.2 User-Defined Parameters

A resource type definition may also contain user-defined parameters. Unlike the reserved parameters, whose values are determined dynamically based on the resource being defined, user-defined parameters must be assigned values wherever the resource type containing them is used, and those values do not change.

User-defined parameters may be declared at the beginning of a resource type definition, although doing so is not required and is not common practice, as the reader can usually figure out their intended usage given their names and the contexts in which they are used.

4.3 Parameter Functions

A handful of useful text functions are available for use wherever a parameter is used in order to transform the expanded value of the parameter when it is processed in a resource definition.

Here are the functions available for parameter transformation:

  • !singularize (only available for the US English locale)
  • !pluralize (only available for the US English locale)
  • !uppercase
  • !lowercase
  • !uppercamelcase
  • !lowercamelcase
  • !upperunderscorecase
  • !lowerunderscorecase
  • !upperhyphencase
  • !lowerhyphencase

Functions are applied to a parameter using the following construct:

<<parameterName | !functionName>>

If you need to use more than one function to achieve the desired transformation, you would separate each function name with the pipe symbol (“|”) and prepend an exclamation point (!) before each function used.

For example, given the resource /foos, where <<resourcePathName>> evaluates to “foos”:

  • <<resourcePathName | !singularize>> ==> “foo”
  • <<resourcePathName | !uppercamelcase>> ==> “Foos”
  • <<resourcePathName | !singularize | !uppercamelcase>> ==> “Foo”

And given the resource /bars/{barId}, where <<resourcePathName>> evaluates to “barId”:

  • <<resourcePathName | !uppercase>> ==> “BARID”
  • <<resourcePathName | !uppercamelcase>> ==> “BarId”
  • <<resourcePathName | !lowerhyphencase>> ==> “bar-id”
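To make the transformations concrete, here is a rough sketch of a few of these functions in plain Java. These are naive, illustrative implementations only – real RAML processors handle irregular plurals, locales, and multi-word identifiers more carefully:

```java
// Naive, illustrative versions of a few RAML parameter functions.
public class ParamFunctions {

    // !singularize (simplified: strips a trailing "s")
    static String singularize(String s) {
        return s.endsWith("s") ? s.substring(0, s.length() - 1) : s;
    }

    // !uppercase
    static String uppercase(String s) {
        return s.toUpperCase();
    }

    // !uppercamelcase (simplified: capitalizes the first letter)
    static String upperCamelCase(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    // !lowerhyphencase (splits camelCase words with hyphens, then lowercases)
    static String lowerHyphenCase(String s) {
        return s.replaceAll("([a-z])([A-Z])", "$1-$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(singularize("foos"));       // foo
        System.out.println(uppercase("barId"));        // BARID
        System.out.println(upperCamelCase("barId"));   // BarId
        System.out.println(lowerHyphenCase("barId"));  // bar-id
    }
}
```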

5. Extracting a Resource Type for Collections

Let’s refactor the /foos and /bars resource definitions shown above, using a resource type to capture the common properties. We will use the reserved parameter <<resourcePathName>>, and the user-defined parameter <<typeName>> to represent the data type used.

5.1 Definition

Here is a resource type definition representing a collection of items:

resourceTypes:
  - collection:
      usage: Use this resourceType to represent any collection of items
      description: A collection of <<resourcePathName>>
      get:
        description: Get all <<resourcePathName>>, optionally filtered
        responses:
          200:
            body:
              application/json:
                type: <<typeName>>[]
      post:
        description: Create a new <<resourcePathName|!singularize>>
        responses:
          201:
            body:
              application/json:
                type: <<typeName>>

Note that in our API, because our data types are merely capitalized, singular versions of our base resources’ names, we could have applied functions to the <<resourcePathName>> parameter, instead of introducing the user-defined <<typeName>> parameter, to achieve the same result for this portion of the API:

resourceTypes:
  - collection:
    ...
      get:
      ...
              type: <<resourcePathName|!singularize|!uppercamelcase>>[]
      post:
        ...
              type: <<resourcePathName|!singularize|!uppercamelcase>>

5.2 Application

Using the above definition that incorporates the <<typeName>> parameter, here is how you would apply the “collection” resource type to the resources /foos and /bars:

/foos:
  type: collection
  typeName: Foo
  get:
    queryParameters:
      name?: string
      ownerName?: string
...
/bars:
  type: collection
  typeName: Bar

Notice that we are still able to incorporate the differences between the two resources — in this case, the queryParameters section — while still taking advantage of all that the resource type definition has to offer.

6. Extracting a Resource Type for Single Items of a Collection

Let’s focus now on the portion of our API dealing with single items of a collection: the /foos/{fooId} and /bars/{barId} resources. Here is the code for /foos/{fooId}:

/foos:
...
  /{fooId}:
    get:
      description: Get a foo by fooId
      responses:
        200:
          body:
            application/json:
              type: Foo
              example: !include examples/Foo.json
        404:
          body:
            application/json:
              type: Error
              example: !include examples/Error.json
    put:
      description: Update a foo by fooId
      body:
        application/json:
          type: Foo
          example: !include examples/Foo.json
      responses:
        200:
          body:
            application/json:
              type: Foo
              example: !include examples/Foo.json
        404:
          body:
            application/json:
              type: Error
              example: !include examples/Error.json
    delete:
      description: Delete a foo by fooId
      responses:
        204:
        404:
          body:
            application/json:
              type: Error
              example: !include examples/Error.json

The /bars/{barId} resource definition also has GET, PUT, and DELETE methods and is identical to the /foos/{fooId} definition, other than the occurrences of the strings “foo” and “bar” (and their respective pluralized and/or capitalized forms).

6.1 Definition

Extracting the pattern we just identified, here is how we define a resource type for single items of a collection:

resourceTypes:
...
  - item:
      usage: Use this resourceType to represent any single item
      description: A single <<typeName>>
      get:
        description: Get a <<typeName>> by <<resourcePathName>>
        responses:
          200:
            body:
              application/json:
                type: <<typeName>>
                example: !include examples/<<typeName>>.json
          404:
            body:
              application/json:
                type: Error
                example: !include examples/Error.json
      put:
        description: Update a <<typeName>> by <<resourcePathName>>
        body:
          application/json:
            type: <<typeName>>
            example: !include examples/<<typeName>>.json
        responses:
          200:
            body:
              application/json:
                type: <<typeName>>
                example: !include examples/<<typeName>>.json
          404:
            body:
              application/json:
                type: Error
                example: !include examples/Error.json
      delete:
        description: Delete a <<typeName>> by <<resourcePathName>>
        responses:
          204:
          404:
            body:
              application/json:
                type: Error
                example: !include examples/Error.json

6.2 Application

And here is how we apply the “item” resource type:

/foos:
...
  /{fooId}:
    type: item
    typeName: Foo
...
/bars:
...
  /{barId}:
    type: item
    typeName: Bar

7. Traits

Whereas a resource type is used to extract patterns from resource definitions, a trait is used to extract patterns from method definitions that are common across resources.

7.1 Parameters

Along with <<resourcePath>> and <<resourcePathName>>, one additional reserved parameter is available for use in trait definitions: <<methodName>> evaluates to the HTTP method (GET, POST, PUT, DELETE, etc.) for which the trait is defined. User-defined parameters may also appear within a trait definition; they take on the values assigned to them wherever the trait is applied.

7.2 Definition

Notice that the “item” resource type is still full of redundancies. Let’s see how traits can help eliminate them. We’ll start by extracting a trait for any method containing a request body:

traits:
  - hasRequestItem:
      body:
        application/json:
          type: <<typeName>>
          example: !include examples/<<typeName>>.json

Now let’s extract traits for methods whose normal responses contain bodies:

  - hasResponseItem:
      responses:
          200:
            body:
              application/json:
                type: <<typeName>>
                example: !include examples/<<typeName>>.json
  - hasResponseCollection:
      responses:
          200:
            body:
              application/json:
                type: <<typeName>>[]
                example: !include examples/<<typeName>>.json

Finally, here’s a trait for any method that could return a 404 error response:

  - hasNotFound:
      responses:
          404:
            body:
              application/json:
                type: Error
                example: !include examples/Error.json

7.3 Application

We then apply these traits to our resource types:

resourceTypes:
  - collection:
      usage: Use this resourceType to represent any collection of items
      description: A collection of <<resourcePathName|!uppercamelcase>>
      get:
        description: |
          Get all <<resourcePathName|!uppercamelcase>>,
          optionally filtered
        is: [ hasResponseCollection ]
      post:
        description: Create a new <<resourcePathName|!singularize>>
        is: [ hasRequestItem ]
  - item:
      usage: Use this resourceType to represent any single item
      description: A single <<typeName>>
      get:
        description: Get a <<typeName>> by <<resourcePathName>>
        is: [ hasResponseItem, hasNotFound ]
      put:
        description: Update a <<typeName>> by <<resourcePathName>>
        is: [ hasRequestItem, hasResponseItem, hasNotFound ]
      delete:
        description: Delete a <<typeName>> by <<resourcePathName>>
        is: [ hasNotFound ]
        responses:
          204:

We can also apply traits to methods defined within resources. This is especially useful for “one-off” scenarios where a resource-method combination matches one or more traits but does not match any defined resource type:

/foos:
...
  /name/{name}:
    get:
      description: List all foos with a certain name
      typeName: Foo
      is: [ hasResponseCollection ]

8. Conclusion

In this tutorial, we’ve shown how to significantly reduce or, in some cases, eliminate redundancies from a RAML API definition.

First, we identified the redundant sections of our resources, recognized their patterns, and extracted resource types. Then we did the same for the methods that were common across resources to extract traits. Then we were able to eliminate further redundancies by applying traits to our resource types and to “one-off” resource-method combinations that did not strictly match one of our defined resource types.

As a result, our simple API, with resources for only two entities, was reduced from 177 to just over 100 lines of code. To learn more about RAML resource types and traits, visit the RAML.org 1.0 spec.

The full implementation of this tutorial can be found in the GitHub project.

Here is our final RAML API in its entirety:

#%RAML 1.0
title: Baeldung Foo REST Services API
version: v1
protocols: [ HTTPS ]
baseUri: http://rest-api.baeldung.com/api/{version}
mediaType: application/json
securedBy: basicAuth
securitySchemes:
  - basicAuth:
      description: Each request must contain the headers necessary for
                   basic authentication
      type: Basic Authentication
      describedBy:
        headers:
          Authorization:
            description: |
              Used to send the Base64 encoded "username:password"
              credentials
            type: string
        responses:
          401:
            description: |
              Unauthorized. Either the provided username and password
              combination is invalid, or the user is not allowed to
              access the content provided by the requested URL.
types:
  Foo:   !include types/Foo.raml
  Bar:   !include types/Bar.raml
  Error: !include types/Error.raml
resourceTypes:
  - collection:
      usage: Use this resourceType to represent a collection of items
      description: A collection of <<resourcePathName|!uppercamelcase>>
      get:
        description: |
          Get all <<resourcePathName|!uppercamelcase>>,
          optionally filtered
        is: [ hasResponseCollection ]
      post:
        description: |
          Create a new <<resourcePathName|!uppercamelcase|!singularize>>
        is: [ hasRequestItem ]
  - item:
      usage: Use this resourceType to represent any single item
      description: A single <<typeName>>
      get:
        description: Get a <<typeName>> by <<resourcePathName>>
        is: [ hasResponseItem, hasNotFound ]
      put:
        description: Update a <<typeName>> by <<resourcePathName>>
        is: [ hasRequestItem, hasResponseItem, hasNotFound ]
      delete:
        description: Delete a <<typeName>> by <<resourcePathName>>
        is: [ hasNotFound ]
        responses:
          204:
traits:
  - hasRequestItem:
      body:
        application/json:
          type: <<typeName>>
  - hasResponseItem:
      responses:
          200:
            body:
              application/json:
                type: <<typeName>>
                example: !include examples/<<typeName>>.json
  - hasResponseCollection:
      responses:
          200:
            body:
              application/json:
                type: <<typeName>>[]
                example: !include examples/<<typeName|!pluralize>>.json
  - hasNotFound:
      responses:
          404:
            body:
              application/json:
                type: Error
                example: !include examples/Error.json
/foos:
  type: collection
  typeName: Foo
  get:
    queryParameters:
      name?: string
      ownerName?: string
  /{fooId}:
    type: item
    typeName: Foo
  /name/{name}:
    get:
      description: List all foos with a certain name
      typeName: Foo
      is: [ hasResponseCollection ]
/bars:
  type: collection
  typeName: Bar
  /{barId}:
    type: item
    typeName: Bar
  /fooId/{fooId}:
    get:
      description: Get all bars for the matching fooId
      typeName: Bar
      is: [ hasResponseCollection ]

Injecting Mockito Mocks into Spring Beans


1. Overview

This article will show how to use dependency injection to insert Mockito mocks into Spring Beans for unit testing.

In real-world applications, where components often depend on accessing external systems, it is important to provide proper test isolation so that we can focus on testing the functionality of a given unit without having to involve the whole class hierarchy for each test.

Injecting a mock is a clean way to introduce such isolation.

2. Maven Dependencies

The following Maven dependencies are required for the unit tests and the mock objects:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <version>1.3.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <version>1.3.1.RELEASE</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>1.10.19</version>
</dependency>

We decided to use Spring Boot for this example, but classic Spring will also work fine.

3. Writing the Test

3.1. The Business Logic

First, we are going to write the business logic that will be tested:

@Service
public class NameService {
    public String getUserName(String id) {
        return "Real user name";
    }
}

The NameService class will be injected into:

@Service
public class UserService {

    private NameService nameService;

    @Autowired
    public UserService(NameService nameService) {
        this.nameService = nameService;
    }

    public String getUserName(String id) {
        return nameService.getUserName(id);
    }
}

For the purposes of this tutorial, the given classes return a single name regardless of the id provided. This is done so that we don’t get distracted by testing any complex logic.

We will also need a standard Spring Boot main class to scan the beans and initialize the application:

@SpringBootApplication
public class MocksApplication {
    public static void main(String[] args) {
        SpringApplication.run(MocksApplication.class, args);
    }
}

3.2. The Tests

Now let’s move on to the test logic. First of all, we have to configure the application context for the tests:

@Profile("test")
@Configuration
public class NameServiceTestConfiguration {
    @Bean
    @Primary
    public NameService nameService() {
        return Mockito.mock(NameService.class);
    }
}

The @Profile annotation tells Spring to apply this configuration only when the “test” profile is active. The @Primary annotation makes sure this instance is used for autowiring instead of the real one. The method itself creates and returns a Mockito mock of our NameService class.

Now we can write the unit test:

@ActiveProfiles("test")
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = MocksApplication.class)
public class UserServiceTest {

    @Autowired
    private UserService userService;

    @Autowired
    private NameService nameService;

    @Test
    public void whenUserIdIsProvided_thenRetrievedNameIsCorrect() {
        Mockito.when(nameService.getUserName("SomeId")).thenReturn("Mock user name");
        String testName = userService.getUserName("SomeId");
        Assert.assertEquals("Mock user name", testName);
    }
}

We use the @ActiveProfiles annotation to enable the “test” profile and activate the mock configuration we wrote earlier. Because of this, Spring autowires a real instance of the UserService class, but a mock of the NameService class. The test itself is a fairly typical JUnit+Mockito test. We configure the desired behavior of the mock, then call the method which we want to test and assert that it returns the value that we expect.

It is also possible (though not recommended) to avoid using environment profiles in such tests. To do so, remove the @Profile and @ActiveProfiles annotations and add a @ContextConfiguration(classes = NameServiceTestConfiguration.class) annotation to the UserServiceTest class.

4. Conclusion

The tutorial showed how to inject Mockito mocks into Spring Beans. The implementation can be found in the example GitHub project.

Custom Error Message Handling for REST API



1. Overview

In this tutorial – we’ll discuss how to implement a global error handler for a Spring REST API.

We will use the semantics of each exception to build meaningful error messages for the client, with the clear goal of giving that client all the information needed to easily diagnose the problem.

2. A Custom Error Message

Let’s start by implementing a simple structure for sending errors over the wire – the ApiError:

public class ApiError {

    private HttpStatus status;
    private String message;
    private List<String> errors;

    public ApiError(HttpStatus status, String message, List<String> errors) {
        super();
        this.status = status;
        this.message = message;
        this.errors = errors;
    }

    public ApiError(HttpStatus status, String message, String error) {
        super();
        this.status = status;
        this.message = message;
        errors = Arrays.asList(error);
    }

    // standard getters and setters
}

The information here should be straightforward:

  • status: the HTTP status code
  • message: the error message associated with the exception
  • errors: a list of constructed error messages

And of course, for the actual exception handling logic in Spring, we’ll use the @ControllerAdvice annotation:

@ControllerAdvice
public class CustomRestExceptionHandler extends ResponseEntityExceptionHandler {
    ...
}

3. Handle Bad Request Exceptions

3.1. Handling the Exceptions

Now, let’s see how we can handle the most common client errors – basically scenarios where a client sends an invalid request to the API:

  • BindException: This exception is thrown when fatal binding errors occur.
  • MethodArgumentNotValidException: This exception is thrown when an argument annotated with @Valid fails validation:

@Override
protected ResponseEntity<Object> handleMethodArgumentNotValid(
  MethodArgumentNotValidException ex, 
  HttpHeaders headers, 
  HttpStatus status, 
  WebRequest request) {
    List<String> errors = new ArrayList<String>();
    for (FieldError error : ex.getBindingResult().getFieldErrors()) {
        errors.add(error.getField() + ": " + error.getDefaultMessage());
    }
    for (ObjectError error : ex.getBindingResult().getGlobalErrors()) {
        errors.add(error.getObjectName() + ": " + error.getDefaultMessage());
    }
    
    ApiError apiError = 
      new ApiError(HttpStatus.BAD_REQUEST, ex.getLocalizedMessage(), errors);
    return handleExceptionInternal(ex, apiError, headers, apiError.getStatus(), request);
}

As you can see, we are overriding a base method out of the ResponseEntityExceptionHandler and providing our own custom implementation.

That’s not always going to be the case – sometimes we’re going to need to handle a custom exception that doesn’t have a default implementation in the base class, as we’ll get to see later on here.

Next:

  • MissingServletRequestPartException: This exception is thrown when a part of a multipart request is not found

  • MissingServletRequestParameterException: This exception is thrown when a request is missing a parameter:

@Override
protected ResponseEntity<Object> handleMissingServletRequestParameter(
  MissingServletRequestParameterException ex, HttpHeaders headers, 
  HttpStatus status, WebRequest request) {
    String error = ex.getParameterName() + " parameter is missing";
    
    ApiError apiError = 
      new ApiError(HttpStatus.BAD_REQUEST, ex.getLocalizedMessage(), error);
    return new ResponseEntity<Object>(apiError, new HttpHeaders(), apiError.getStatus());
}
  • ConstraintViolationException: This exception reports the result of constraint violations:

@ExceptionHandler({ ConstraintViolationException.class })
public ResponseEntity<Object> handleConstraintViolation(
  ConstraintViolationException ex, WebRequest request) {
    List<String> errors = new ArrayList<String>();
    for (ConstraintViolation<?> violation : ex.getConstraintViolations()) {
        errors.add(violation.getRootBeanClass().getName() + " " + 
          violation.getPropertyPath() + ": " + violation.getMessage());
    }

    ApiError apiError = 
      new ApiError(HttpStatus.BAD_REQUEST, ex.getLocalizedMessage(), errors);
    return new ResponseEntity<Object>(apiError, new HttpHeaders(), apiError.getStatus());
}
  • TypeMismatchException: This exception is thrown when trying to set a bean property with the wrong type.

  • MethodArgumentTypeMismatchException: This exception is thrown when a method argument is not of the expected type:

@ExceptionHandler({ MethodArgumentTypeMismatchException.class })
public ResponseEntity<Object> handleMethodArgumentTypeMismatch(
  MethodArgumentTypeMismatchException ex, WebRequest request) {
    String error = ex.getName() + " should be of type " + ex.getRequiredType().getName();

    ApiError apiError = 
      new ApiError(HttpStatus.BAD_REQUEST, ex.getLocalizedMessage(), error);
    return new ResponseEntity<Object>(apiError, new HttpHeaders(), apiError.getStatus());
}

3.2. Consuming the API from the Client

Let’s now have a look at a test that runs into a MethodArgumentTypeMismatchException: we’ll send a request with id as a String instead of a long:

@Test
public void whenMethodArgumentMismatch_thenBadRequest() {
    Response response = givenAuth().get(URL_PREFIX + "/api/foos/ccc");
    ApiError error = response.as(ApiError.class);

    assertEquals(HttpStatus.BAD_REQUEST, error.getStatus());
    assertEquals(1, error.getErrors().size());
    assertTrue(error.getErrors().get(0).contains("should be of type"));
}

And finally – considering this same request:

Request method:	GET
Request path:	http://localhost:8080/spring-security-rest/api/foos/ccc

Here’s what this kind of JSON error response will look like: 

{
    "status": "BAD_REQUEST",
    "message": 
      "Failed to convert value of type [java.lang.String] 
       to required type [java.lang.Long]; nested exception 
       is java.lang.NumberFormatException: For input string: \"ccc\"",
    "errors": [
        "id should be of type java.lang.Long"
    ]
}
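The nested NumberFormatException mentioned in the message is the root cause of the conversion failure; we can reproduce it directly in plain Java:

```java
// Reproduces the root cause of the conversion failure shown above:
// Spring ultimately fails to parse the path variable "ccc" as a Long.
public class ConversionDemo {
    public static void main(String[] args) {
        try {
            Long.parseLong("ccc");
        } catch (NumberFormatException e) {
            // e.g. For input string: "ccc"
            System.out.println(e.getMessage());
        }
    }
}
```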

4. Handle NoHandlerFoundException

Next, we can customize our servlet to throw this exception instead of sending a 404 response – as follows:

<servlet>
    <servlet-name>api</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>        
    <init-param>
        <param-name>throwExceptionIfNoHandlerFound</param-name>
        <param-value>true</param-value>
    </init-param>
</servlet>

Then, once this happens, we can simply handle it just like any other exception:

@Override
protected ResponseEntity<Object> handleNoHandlerFoundException(
  NoHandlerFoundException ex, HttpHeaders headers, HttpStatus status, WebRequest request) {
    String error = "No handler found for " + ex.getHttpMethod() + " " + ex.getRequestURL();

    ApiError apiError = new ApiError(HttpStatus.NOT_FOUND, ex.getLocalizedMessage(), error);
    return new ResponseEntity<Object>(apiError, new HttpHeaders(), apiError.getStatus());
}

Here is a simple test:

@Test
public void whenNoHandlerForHttpRequest_thenNotFound() {
    Response response = givenAuth().delete(URL_PREFIX + "/api/xx");
    ApiError error = response.as(ApiError.class);

    assertEquals(HttpStatus.NOT_FOUND, error.getStatus());
    assertEquals(1, error.getErrors().size());
    assertTrue(error.getErrors().get(0).contains("No handler found"));
}

Let’s have a look at the full request:

Request method:	DELETE
Request path:	http://localhost:8080/spring-security-rest/api/xx

And the error JSON response:

{
    "status":"NOT_FOUND",
    "message":"No handler found for DELETE /spring-security-rest/api/xx",
    "errors":[
        "No handler found for DELETE /spring-security-rest/api/xx"
    ]
}

5. Handle HttpRequestMethodNotSupportedException

Next, let’s have a look at another interesting exception – the HttpRequestMethodNotSupportedException – which occurs when you send a request with an unsupported HTTP method:

@Override
protected ResponseEntity<Object> handleHttpRequestMethodNotSupported(
  HttpRequestMethodNotSupportedException ex, 
  HttpHeaders headers, 
  HttpStatus status, 
  WebRequest request) {
    StringBuilder builder = new StringBuilder();
    builder.append(ex.getMethod());
    builder.append(" method is not supported for this request. Supported methods are ");
    ex.getSupportedHttpMethods().forEach(t -> builder.append(t + " "));

    ApiError apiError = new ApiError(HttpStatus.METHOD_NOT_ALLOWED, 
      ex.getLocalizedMessage(), builder.toString());
    return new ResponseEntity<Object>(apiError, new HttpHeaders(), apiError.getStatus());
}

Here is a simple test reproducing this exception:

@Test
public void whenHttpRequestMethodNotSupported_thenMethodNotAllowed() {
    Response response = givenAuth().delete(URL_PREFIX + "/api/foos/1");
    ApiError error = response.as(ApiError.class);

    assertEquals(HttpStatus.METHOD_NOT_ALLOWED, error.getStatus());
    assertEquals(1, error.getErrors().size());
    assertTrue(error.getErrors().get(0).contains("Supported methods are"));
}

And here’s the full request:

Request method:	DELETE
Request path:	http://localhost:8080/spring-security-rest/api/foos/1

And the error JSON response:

{
    "status":"METHOD_NOT_ALLOWED",
    "message":"Request method 'DELETE' not supported",
    "errors":[
        "DELETE method is not supported for this request. Supported methods are GET "
    ]
}

6. Handle HttpMediaTypeNotSupportedException

Now, let’s handle HttpMediaTypeNotSupportedException – which occurs when the client sends a request with an unsupported media type – as follows:

@Override
protected ResponseEntity<Object> handleHttpMediaTypeNotSupported(
  HttpMediaTypeNotSupportedException ex, 
  HttpHeaders headers, 
  HttpStatus status, 
  WebRequest request) {
    StringBuilder builder = new StringBuilder();
    builder.append(ex.getContentType());
    builder.append(" media type is not supported. Supported media types are ");
    ex.getSupportedMediaTypes().forEach(t -> builder.append(t + ", "));

    ApiError apiError = new ApiError(HttpStatus.UNSUPPORTED_MEDIA_TYPE, 
      ex.getLocalizedMessage(), builder.substring(0, builder.length() - 2));
    return new ResponseEntity<Object>(apiError, new HttpHeaders(), apiError.getStatus());
}

Here is a simple test running into this issue:

@Test
public void whenSendInvalidHttpMediaType_thenUnsupportedMediaType() {
    Response response = givenAuth().body("").post(URL_PREFIX + "/api/foos");
    ApiError error = response.as(ApiError.class);

    assertEquals(HttpStatus.UNSUPPORTED_MEDIA_TYPE, error.getStatus());
    assertEquals(1, error.getErrors().size());
    assertTrue(error.getErrors().get(0).contains("media type is not supported"));
}

Finally – here’s a sample request:

Request method:	POST
Request path:	http://localhost:8080/spring-security-
Headers:	Content-Type=text/plain; charset=ISO-8859-1

And the error JSON response:

{
    "status":"UNSUPPORTED_MEDIA_TYPE",
    "message":"Content type 'text/plain;charset=ISO-8859-1' not supported",
    "errors":["text/plain;charset=ISO-8859-1 media type is not supported. 
       Supported media types are text/xml 
       application/x-www-form-urlencoded 
       application/*+xml 
       application/json;charset=UTF-8 
       application/*+json;charset=UTF-8 */"
    ]
}

7. Default Handler

Finally, let’s implement a fall-back handler – a catch-all type of logic that deals with all other exceptions that don’t have specific handlers:

@ExceptionHandler({ Exception.class })
public ResponseEntity<Object> handleAll(Exception ex, WebRequest request) {
    ApiError apiError = new ApiError(
      HttpStatus.INTERNAL_SERVER_ERROR, ex.getLocalizedMessage(), "error occurred");
    return new ResponseEntity<Object>(apiError, new HttpHeaders(), apiError.getStatus());
}

8. Conclusion

Building a proper, mature error handler for a Spring REST API is tough and definitely an iterative process. Hopefully this tutorial will be a good starting point for doing that in your API, as well as a good anchor for how to help the clients of your API quickly and easily diagnose errors and move past them.

The full implementation of this tutorial can be found in the GitHub project – this is an Eclipse-based project, so it should be easy to import and run as it is.


Auditing with JPA, Hibernate, and Spring Data JPA


I usually post about Persistence on Twitter - you can follow me there:

1. Overview

In the context of ORM, database auditing means tracking and logging events related to persistent entities, or simply entity versioning. Inspired by SQL triggers, the events are insert, update and delete operations on entities. The benefits of database auditing are analogous to those provided by source version control.

We will demonstrate three approaches to introducing auditing into an application. First, we will implement it using standard JPA. Next, we will look at two JPA extensions that provide their own auditing functionality: one provided by Hibernate, another by Spring Data.

Here are the sample related entities, Bar and Foo, that will be used in this example:

(Diagram: the sample entities Bar and Foo, related by a one-to-many association)

2. Auditing with JPA

JPA doesn’t explicitly contain an auditing API, but the functionality can be achieved using entity lifecycle events.

2.1. @PrePersist, @PreUpdate and @PreRemove

In a JPA entity class, a method can be designated as a callback that will be invoked during a particular entity lifecycle event. As we’re interested in callbacks that are executed before the corresponding DML operations, the @PrePersist, @PreUpdate and @PreRemove callback annotations are available for our purposes:

@Entity
public class Bar {
      
    @PrePersist
    public void onPrePersist() { ... }
      
    @PreUpdate
    public void onPreUpdate() { ... }
      
    @PreRemove
    public void onPreRemove() { ... }
      
}

Internal callback methods should always return void and take no arguments. They can have any name and any access level but should not be static.

Be aware that the @Version annotation in JPA is not related to our topic; it is used for optimistic locking.

2.2. Implementing the Callback Methods

There is a significant restriction with this approach though. As stated in the JPA 2 specification (JSR 317):

In general, the lifecycle method of a portable application should not invoke EntityManager or Query operations, access other entity instances, or modify relationships within the same persistence context. A lifecycle callback method may modify the non-relationship state of the entity on which it is invoked.

In the absence of an auditing framework, we must maintain the database schema and domain model manually. For our simple use case, let’s add two new properties to the entity, as we can manage only the “non-relationship state of the entity”. An operation property will store the name of the operation performed, and a timestamp property will hold the timestamp of the operation:

@Entity
public class Bar {
     
    ...
     
    @Column(name = "operation")
    private String operation;
     
    @Column(name = "timestamp")
    private long timestamp;
     
    ...
     
    // standard setters and getters for the new properties
     
    ...
     
    @PrePersist
    public void onPrePersist() {
        audit("INSERT");
    }
     
    @PreUpdate
    public void onPreUpdate() {
        audit("UPDATE");
    }
     
    @PreRemove
    public void onPreRemove() {
        audit("DELETE");
    }
     
    private void audit(String operation) {
        setOperation(operation);
        setTimestamp((new Date()).getTime());
    }
     
}
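The bookkeeping in audit() can be exercised in isolation. Here is a hypothetical plain-Java stand-in for the entity (no JPA annotations involved) that mirrors the logic above:

```java
public class AuditDemo {

    // A plain-Java stand-in for the audited entity; the fields and the
    // audit() method mirror the Bar entity above, minus the JPA callbacks
    static class AuditedBean {
        String operation;
        long timestamp;

        void audit(String operation) {
            this.operation = operation;
            this.timestamp = System.currentTimeMillis();
        }
    }

    public static void main(String[] args) {
        AuditedBean bean = new AuditedBean();
        bean.audit("INSERT");
        System.out.println(bean.operation + " at " + bean.timestamp);
    }
}
```

In the real entity, the same audit() method is simply triggered by the @PrePersist, @PreUpdate and @PreRemove callbacks instead of being called directly.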

If you need to add such auditing to multiple classes, you can use @EntityListeners to centralize the code. For example:

@EntityListeners(AuditListener.class)
@Entity
public class Bar { ... }
public class AuditListener {
    
    @PrePersist
    @PreUpdate
    @PreRemove
    private void beforeAnyOperation(Object object) { ... }
    
}

3. Hibernate Envers

With Hibernate, we could make use of Interceptors and EventListeners as well as database triggers to accomplish auditing. But the ORM framework offers Envers, a module implementing auditing and versioning of persistent classes.

3.1. Get Started with Envers

To set up Envers, you need to add the hibernate-envers JAR into your classpath:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-envers</artifactId>
    <version>${hibernate.version}</version>
</dependency>

Then just add the @Audited annotation either on an @Entity (to audit the whole entity) or on specific @Columns (if you need to audit specific properties only):

@Entity
@Audited
public class Bar { ... }

Note that Bar has a one-to-many relationship with Foo. In this case we either need to audit Foo as well by adding @Audited on Foo or set @NotAudited on the relationship’s property in Bar:

@OneToMany(mappedBy = "bar")
@NotAudited
private Set<Foo> fooSet;

3.2. Creating Audit Log Tables

There are several ways to create audit tables:

  • set hibernate.hbm2ddl.auto to create, create-drop or update, so Envers can create them automatically
  • use org.hibernate.tool.EnversSchemaGenerator to export the complete database schema programmatically
  • use an Ant task to generate appropriate DDL statements
  • use a Maven plugin for generating a database schema from your mappings (such as Juplo) to export the Envers schema (works with Hibernate 4 and higher)

We’ll go the first route, as it is the most straightforward, but be aware that using hibernate.hbm2ddl.auto is not safe in production.
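Programmatically, the first option comes down to setting a single Hibernate property before building the session factory; a minimal sketch (hibernate.hbm2ddl.auto is the standard Hibernate setting, and how the Properties object is handed to Hibernate depends on your configuration style):

```java
import java.util.Properties;

public class SchemaConfigDemo {

    public static void main(String[] args) {
        // Ask Hibernate to (re)create the schema on startup,
        // which lets Envers create its audit tables as well
        Properties hibernateProperties = new Properties();
        hibernateProperties.setProperty("hibernate.hbm2ddl.auto", "create");

        System.out.println(hibernateProperties.getProperty("hibernate.hbm2ddl.auto"));
    }
}
```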

In our case, the bar_AUD and foo_AUD (if you’ve set Foo as @Audited as well) tables should be generated automatically. The audit tables contain copies of all audited fields from the entity’s table, plus two extra fields: REVTYPE (values are: “0” for adding, “1” for updating, “2” for removing an entity) and REV.

Besides these, an extra table named REVINFO is generated by default; it includes two important fields, REV and REVTSTMP, and records the timestamp of every revision. And as you can guess, bar_AUD.REV and foo_AUD.REV are actually foreign keys to REVINFO.REV.
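The REVTYPE codes correspond to the values of Envers’ own org.hibernate.envers.RevisionType enum (ADD, MOD, DEL); the stand-alone sketch below mirrors that mapping for illustration:

```java
public class RevTypeDemo {

    // Mirrors the codes stored in the REVTYPE column; Envers itself ships
    // an org.hibernate.envers.RevisionType enum with the same meaning
    enum RevisionType {
        ADD(0), MOD(1), DEL(2);

        final int code;

        RevisionType(int code) {
            this.code = code;
        }

        static RevisionType fromCode(int code) {
            for (RevisionType type : values()) {
                if (type.code == code) {
                    return type;
                }
            }
            throw new IllegalArgumentException("Unknown REVTYPE: " + code);
        }
    }

    public static void main(String[] args) {
        System.out.println(RevisionType.fromCode(2)); // prints DEL - a row written on entity removal
    }
}
```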

3.3. Configuring Envers

You can configure Envers properties just like any other Hibernate property. For example, let’s change the audit table suffix (which defaults to “_AUD“) to “_AUDIT_LOG“. Here is how to set the value of the corresponding property org.hibernate.envers.audit_table_suffix:

Properties hibernateProperties = new Properties();
hibernateProperties.setProperty(
  "org.hibernate.envers.audit_table_suffix", "_AUDIT_LOG");
sessionFactory.setHibernateProperties(hibernateProperties);

A full listing of available properties can be found in the Envers documentation.

3.4. Accessing Entity History

You can query for historic data in a way similar to querying data via the Hibernate Criteria API. The audit history of an entity can be accessed using the AuditReader interface, which can be obtained with an open EntityManager or Session via the AuditReaderFactory:

AuditReader reader = AuditReaderFactory.get(session);

Envers provides AuditQueryCreator (returned by AuditReader.createQuery()) in order to create audit-specific queries. The following query will return all Bar instances in their state at revision #2:

AuditQuery query = reader.createQuery()
    .forEntitiesAtRevision(Bar.class, 2);

Here is how to query for Bar‘s revisions; the result is a list of all Bar instances, in all the states in which they were audited:

AuditQuery query = reader.createQuery()
    .forRevisionsOfEntity(Bar.class, true, true);

If the second parameter is false the result is joined with the REVINFO table, otherwise only entity instances are returned. The last parameter specifies whether to return deleted Bar instances.

Then you can specify constraints using the AuditEntity factory class:

query.addOrder(AuditEntity.revisionNumber().desc());

4. Spring Data JPA

Spring Data JPA is a framework that extends JPA by adding an extra layer of abstraction on top of the JPA provider. This layer supports creating JPA repositories by extending Spring Data repository interfaces. For our purposes, you can extend CrudRepository<T, ID extends Serializable>, the interface for generic CRUD operations. As soon as you’ve created and injected your repository into another component, Spring Data will provide the implementation automatically, and you’re ready to add auditing functionality.

4.1. Enabling JPA Auditing

To start, we want to enable auditing via annotation configuration. In order to do that, just add @EnableJpaAuditing on your @Configuration class:

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories
@EnableJpaAuditing
public class PersistenceConfig { ... }

4.2. Adding Spring’s Entity Callback Listener

As we already know, JPA provides the @EntityListeners annotation to specify callback listener classes. Spring Data provides its own JPA entity listener class: AuditingEntityListener. So let’s specify the listener for the Bar entity:

@Entity
@EntityListeners(AuditingEntityListener.class)
public class Bar { ... }

Now auditing information will be captured by the listener on persisting and updating the Bar entity.

4.3. Tracking Created and Last Modified Dates

Next, we will add two new properties for storing the created and last modified dates to our Bar entity. The properties are annotated with the @CreatedDate and @LastModifiedDate annotations respectively, and their values are set automatically:

@Entity
@EntityListeners(AuditingEntityListener.class)
public class Bar {
    
    ...
    
    @Column(name = "created_date")
    @CreatedDate
    private long createdDate;

    @Column(name = "modified_date")
    @LastModifiedDate
    private long modifiedDate;
    
    ...
    
}

Generally, you would move the properties to a base class (annotated with @MappedSuperclass) which would be extended by all your audited entities. In our example, we add them directly to Bar for the sake of simplicity.

4.4. Auditing the Author of Changes with Spring Security

If your app uses Spring Security, you can not only track when changes were made but also who made them:

@Entity
@EntityListeners(AuditingEntityListener.class)
public class Bar {
    
    ...
    
    @Column(name = "created_by")
    @CreatedBy
    private String createdBy;

    @Column(name = "modified_by")
    @LastModifiedBy
    private String modifiedBy;
    
    ...
    
}

The columns annotated with @CreatedBy and @LastModifiedBy are populated with the name of the principal that created or last modified the entity. The information is pulled from the SecurityContext‘s Authentication instance. If you want to customize the values that are set on the annotated fields, you can implement the AuditorAware<T> interface:

public class AuditorAwareImpl implements AuditorAware<String> {
 
    @Override
    public String getCurrentAuditor() {
        // your custom logic
    }

}

In order to configure the app to use AuditorAwareImpl to look up the current principal, declare a bean of type AuditorAware initialized with an instance of AuditorAwareImpl, and specify the bean’s name as the auditorAwareRef parameter’s value in @EnableJpaAuditing:

@EnableJpaAuditing(auditorAwareRef="auditorProvider")
public class PersistenceConfig {
    
    ...
    
    @Bean
    AuditorAware<String> auditorProvider() {
        return new AuditorAwareImpl();
    }
    
    ...
    
}

5. Conclusion

We have considered three approaches to implementing auditing functionality:

  • The pure JPA approach is the most basic and consists of using lifecycle callbacks. However, you are only allowed to modify the non-relationship state of an entity. This makes the @PreRemove callback useless for our purposes, as any changes you make in the callback are deleted along with the entity itself.
  • Envers is a mature auditing module provided by Hibernate. It is highly configurable and avoids the flaws of the pure JPA implementation: since it logs data into tables separate from the entity’s table, it can audit the delete operation as well.
  • The Spring Data JPA approach abstracts working with JPA callbacks and provides handy annotations for the auditing properties. It is also ready for integration with Spring Security. The disadvantage is that it inherits the same flaws as the JPA approach, so the delete operation cannot be audited.

The examples for this article are available in a GitHub repository.


