
PubSub Messaging with Spring Data Redis


1. Overview

In this second article from the series exploring Spring Data Redis, we’ll have a look at the pub/sub message queues.

In Redis, publishers are not programmed to send their messages to specific subscribers. Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be.

Similarly, subscribers express interest in one or more topics and only receive messages that are of interest, without knowledge of what (if any) publishers there are.

This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.

2. Redis Configuration

Let’s start adding the configuration which is required for the message queues.

First, we’ll define a MessageListenerAdapter bean which contains a custom implementation of the MessageListener interface called RedisMessageSubscriber. This bean acts as a subscriber in the pub-sub messaging model:

@Bean
MessageListenerAdapter messageListener() { 
    return new MessageListenerAdapter(new RedisMessageSubscriber());
}

RedisMessageListenerContainer is a class provided by Spring Data Redis which provides asynchronous behavior for Redis message listeners. It is invoked internally and, according to the Spring Data Redis documentation, “handles the low level details of listening, converting and message dispatching.”

@Bean
RedisMessageListenerContainer redisContainer() {
    RedisMessageListenerContainer container 
      = new RedisMessageListenerContainer(); 
    container.setConnectionFactory(jedisConnectionFactory()); 
    container.addMessageListener(messageListener(), topic()); 
    return container; 
}

We will also create a bean using a custom-built MessagePublisher interface and a RedisMessagePublisher implementation. This way, we can have a generic message-publishing API, and have the Redis implementation take a redisTemplate and topic as constructor arguments:

@Bean
MessagePublisher redisPublisher() { 
    return new RedisMessagePublisher(redisTemplate(), topic());
}
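The jedisConnectionFactory() and redisTemplate() beans referenced above are part of the basic Redis configuration and aren’t shown in this article. A minimal sketch, assuming a default localhost Redis instance and string-valued messages, might look like this:

@Bean
JedisConnectionFactory jedisConnectionFactory() {
    // connects to localhost:6379 by default
    return new JedisConnectionFactory();
}

@Bean
RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory());
    // keep published payloads human-readable by serializing values as strings
    template.setValueSerializer(new GenericToStringSerializer<>(Object.class));
    return template;
}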

Finally, we’ll set up a topic to which the publisher will send messages, and the subscriber will receive them:

@Bean
ChannelTopic topic() {
    return new ChannelTopic("messageQueue");
}

3. Publishing Messages

3.1. Defining the MessagePublisher Interface

Spring Data Redis does not provide a MessagePublisher interface to be used for message distribution. We can define a custom interface, which the implementation will back with redisTemplate:

public interface MessagePublisher {
    void publish(String message);
}

3.2. RedisMessagePublisher Implementation

Our next step is to provide an implementation of the MessagePublisher interface, adding message publishing details and using the functions in redisTemplate.

The template contains a very rich set of functions for a wide range of operations – out of which convertAndSend is capable of sending a message to a queue through a topic:

public class RedisMessagePublisher implements MessagePublisher {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private ChannelTopic topic;

    public RedisMessagePublisher() {
    }

    public RedisMessagePublisher(
      RedisTemplate<String, Object> redisTemplate, ChannelTopic topic) {
      this.redisTemplate = redisTemplate;
      this.topic = topic;
    }

    public void publish(String message) {
        redisTemplate.convertAndSend(topic.getTopic(), message);
    }
}

As you can see, the publisher implementation is straightforward. It uses the convertAndSend() method of the redisTemplate to format and publish the given message to the configured topic.

A topic implements publish and subscribe semantics: when a message is published, it goes to all the subscribers who are registered to listen on that topic.

4. Subscribing to Messages

RedisMessageSubscriber implements the Spring Data Redis-provided MessageListener interface:

@Service
public class RedisMessageSubscriber implements MessageListener {

    public static List<String> messageList = new ArrayList<String>();

    public void onMessage(Message message, byte[] pattern) {
        messageList.add(message.toString());
        System.out.println("Message received: " + message.toString());
    }
}

Note that there is a second parameter called pattern, which we have not used in this example. The Spring Data Redis documentation states that this parameter represents the “pattern matching the channel (if specified)”, but that it can be null.

5. Sending and Receiving Messages

Now we’ll put it all together. Let’s create a message and then publish it using the RedisMessagePublisher:

String message = "Message " + UUID.randomUUID();
redisMessagePublisher.publish(message);

When we call publish(message), the content is sent to Redis, where it is routed to the message queue topic defined in our publisher. Then it is distributed to the subscribers of that topic.

You may already have noticed that RedisMessageSubscriber is a listener, which registers itself to the queue for retrieval of messages.

On the arrival of a message, the subscriber’s onMessage() method is triggered.

In our example, we can verify that we’ve received messages that have been published by checking the messageList in our RedisMessageSubscriber:

assertTrue(RedisMessageSubscriber.messageList.get(0).contains(message));

6. Conclusion

In this article, we examined a pub/sub message queue implementation using Spring Data Redis.

The implementation of the above example can be found in a GitHub project.




Elasticsearch Queries with Spring Data


1. Introduction

In a previous article, we demonstrated how to configure and use Spring Data Elasticsearch for a project. In this article we will examine several query types offered by Elasticsearch and we’ll also talk about field analyzers and their impact on search results.

2. Analyzers

All stored string fields are, by default, processed by an analyzer. An analyzer consists of one tokenizer and several token filters, and is usually preceded by one or more character filters.

The default analyzer splits the string by common word separators (such as spaces or punctuation) and puts every token in lowercase. It also ignores common English words.

Elasticsearch can also be configured to regard a field as analyzed and not-analyzed at the same time.

For example, in an Article class, suppose we store the title field as a standard analyzed field. The same field with the suffix verbatim will be stored as a not-analyzed field:

@MultiField(
  mainField = @Field(type = FieldType.String),
  otherFields = {
      @NestedField(index = FieldIndex.not_analyzed, dotSuffix = "verbatim", type = FieldType.String)
  }
)
private String title;

Here, we apply the @MultiField annotation to tell Spring Data that we would like this field to be indexed in several ways. The main field will use the name title and will be analyzed according to the rules described above.

But we also provide a second annotation, @NestedField, which describes an additional indexing of the title field. We use FieldIndex.not_analyzed to indicate that we do not want to use an analyzer when performing the additional indexing of the field, and that this value should be stored using a nested field with the suffix verbatim.

2.1. Analyzed Fields

Let’s look at an example. Suppose an article with the title “Spring Data Elasticsearch” is added to our index. The default analyzer will break up the string at the space characters and produce lowercase tokens: “spring“, “data“, and “elasticsearch“.

Now we may use any combination of these terms to match a document:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", "elasticsearch data"))
  .build();

2.2. Non-analyzed Fields

A non-analyzed field is not tokenized, so it can only be matched as a whole when using match or term queries:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title.verbatim", "Second Article About Elasticsearch"))
  .build();

Using a match query, we may only search by the full title, which is also case-sensitive.

3. Match Query

A match query accepts text, numbers and dates.

There are three types of “match” query:

  • boolean
  • phrase
  • phrase_prefix

In this section we will explore the boolean match query.

3.1. Matching with Boolean Operators

boolean is the default type of a match query; you can specify which boolean operator to use (or is the default):

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title","Search engines").operator(AND))
  .build();
List<Article> articles = getElasticsearchTemplate()
  .queryForList(searchQuery, Article.class);

This query would return an article with the title “Search engines” by specifying two terms from the title with the and operator. But what will happen if we search with the default (or) operator when only one of the terms matches?

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", "Engines Solutions"))
  .build();
List<Article> articles = getElasticsearchTemplate()
  .queryForList(searchQuery, Article.class);
assertEquals(1, articles.size());
assertEquals("Search engines", articles.get(0).getTitle());

The “Search engines” article is still matched, but it will have a lower score because not all of the terms matched.

The scores of the individual matching terms add up to the total score of each resulting document.

There may be situations in which a document containing a rare term from the query will rank higher than a document which contains several common terms.

3.2. Fuzziness

When the user makes a typo in a word, it is still possible to match it with a search by specifying a fuzziness parameter, which allows inexact matching.

For string fields, fuzziness means the edit distance: the number of one-character changes that need to be made to one string to make it the same as another string. For example, “date” can be turned into “data” with a single substitution.

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", "spring date elasticsearch")
  .operator(AND)
  .fuzziness(Fuzziness.ONE)
  .prefixLength(3))
  .build();

The prefix_length parameter is used to improve performance. In this case, we require that the first three characters should match exactly, which reduces the number of possible combinations.

4. Phrase Search

Phrase search is stricter, although you can control it with the slop parameter. This parameter tells the phrase query how far apart terms are allowed to be while still considering the document a match.

In other words, it represents the number of times you need to move a term in order to make the query and document match:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchPhraseQuery("title", "spring elasticsearch").slop(1))
  .build();

Here the query will match the document with the title “Spring Data Elasticsearch” because we set the slop to one.

5. Multi Match Query

When we want to search in multiple fields, we can use QueryBuilders#multiMatchQuery(), specifying all the fields to match:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(multiMatchQuery("tutorial")
    .field("title")
    .field("tags")
    .type(MultiMatchQueryBuilder.Type.BEST_FIELDS))
  .build();

Here we search the title and tags fields for a match.

Notice that here we use the “best fields” scoring strategy. It will take the maximum score among the fields as the document score.

6. Aggregations

In our Article class we have also defined a tags field, which is non-analyzed. We could easily create a tag cloud by using an aggregation.

Remember that, because the field is non-analyzed, the tags will not be tokenized:

TermsBuilder aggregation = AggregationBuilders.terms("top_tags")
  .field("tags")
  .order(Terms.Order.aggregation("_count", false));
SearchResponse response = client.prepareSearch("blog")
  .setTypes("article")
  .addAggregation(aggregation)
  .execute().actionGet();

Map<String, Aggregation> results = response.getAggregations().asMap();
StringTerms topTags = (StringTerms) results.get("top_tags");

List<String> keys = topTags.getBuckets()
  .stream()
  .map(b -> b.getKey())
  .collect(toList());
assertEquals(asList("elasticsearch", "spring data", "search engines", "tutorial"), keys);

7. Summary

In this article we discussed the difference between analyzed and non-analyzed fields, and how this distinction affects search.

We also learned about several types of queries provided by Elasticsearch, such as the match query, phrase match query, full-text search query, and boolean query.

Elasticsearch provides many other types of queries, such as geo queries, script queries and compound queries. You can read about them in the Elasticsearch documentation and explore the Spring Data Elasticsearch API in order to use these queries in your code.

You can find a project containing the examples used in this article in the GitHub repository.



Java Web Weekly, Issue 116


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Reactor Core 2.5 becomes a unified Reactive Foundation on Java 8 [spring.io]

The focus and the driving force behind Spring 5 is clearly going to be reactive programming.

So, if you’re doing Spring work, definitely have a quick read and see how the ecosystem is growing and what you can do with the new infrastructure.

>> Java app monitoring with ELK – Part I [balamaci.ro]

I’ve been using the ELK stack for logging visualization and analysis for over two years now and it has definitely been one of the nicer and more useful tools I’m working with. There’s so much you can do with clean, rich logging info and a good analysis tool on top of all that data.

>> Jigsaw Finally Arrives in JDK 9 [infoq.com]

Modularity finally made it into the JDK 9 builds – time to play.

>> Caching de luxe with Spring and Guava [codecentric.de]

A long, slightly weird but ultimately interesting read on actually using caching in real-world scenarios, not just setting it up in a toy project.

>> Ceylon Might Just be the Only (JVM) Language that Got Nulls Right [jooq.org]

A nice look at the way Ceylon handles and works with nulls. If you’re a language aficionado and you haven’t done any work in Ceylon before, definitely have a read.

>> Java EE 8 MVC: Working with bean parameters [mscharhag.com]

The exploration of Java EE 8 goes on, this time with mapping bean parameters in an MVC style application.

>> When to write setters [giorgiosironi.com]

A back-to-basic kind of writeup with the benefit of real-world experience.

>> Adding Type Inference to Java: Good or Evil? [beyondjava.net]

>> Java May Adopt (Really Useful) Type Inference at Last [beyondjava.net]

A bit of a deeper look into the newly proposed JEP that may add type inference to the Java language.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> The Most Important Code Metrics You’ve Never Heard Of [daedtech.com]

Developer productivity is, unsurprisingly, very difficult to measure. Putting that aside though – definitely keep track of some of the metrics this writeup talks about – they’re highly useful when determining the overall health of your codebase.

>> Trackers [jacquesmattheij.com]

A concerning (and funny) read about the tracking and data driven culture we’re all living in.

>> 10 Lessons from 10 Years of Amazon Web Services [allthingsdistributed.com] and >> Ten Years in the AWS Cloud – How Time Flies! [aws.amazon.com]

Ten years of running one of the more complex, highly distributed systems out there yielded some very interesting lessons.

>> Impressions from Voxxed Days Bucharest 2016 [vladmihalcea.com]

This was definitely a well put together event and I enjoyed speaking about Event Sourcing and meeting a whole lot of cool people.

>> The First Winter [mdswanson.com]

A quick writeup but rich in takeaways. These little things do add up to a good culture.

>> Writing Tests Doesn’t Have to Be Extra Work [daedtech.com]

Done right, tests can and will definitely speed you up – once you get through the productivity hit that does usually occur in the first few weeks after picking up TDD.

>> Firing People [zachholman.com]

A long and personal read that I’m including in the review just because I enjoy Zach’s writing.

>> The Trouble with Career Sites [daedtech.com]

And since the last article was about firing people, let’s now look at hiring and be brutally honest about the process and what works and doesn’t work.

Also worth reading:

3. Comics

And my favorite Dilberts of the week (absolutely hilarious):

>> BUILD AN ARK! [dilbert.com]

>> An internet hoax [dilbert.com]

>> It’s sort of an abusive relationship? [dilbert.com]

 

4. Pick of the Week

>> How GitHub Works: Be Asynchronous [zachholman.com]

 



Spring and Spring Boot Adoption in March 2016


The 2015 Numbers

Spring 4 came out in December 2013 and it’s been slowly ramping up ever since.

In May 2015, I ran a survey that put Spring 4 adoption at 65% and Spring Boot adoption at 34%.

The New 2016 Spring Numbers

Last week I wrapped up the new “Java and Spring in 2016” survey and received 2253 answers from the community.

In March of 2016, 81.1% of Spring users are using Spring 4 – so definitely a lot of growth since last year’s 65%:

The Spring Boot Numbers

Spring Boot is even more interesting, growing from 34% last year to 52.8%:

Conclusion

The new numbers are quite interesting, especially the Spring Boot ones which clearly show Boot reached critical mass as it broke the 50% mark.

Out of the full 2255 participants, 388 (17.2%) voted as not using Spring – but keep in mind that my audience is skewed towards Spring developers.

And of course Spring 5 is less than a year away and 4.3 is even closer – so things are going to be just as fast-paced in 2016.

More Jackson Annotations


1. Overview

This article covers some additional annotations that were not covered in the previous article, A Guide to Jackson Annotations – we will go through seven of these.

2. @JsonIdentityReference

@JsonIdentityReference is used for customization of references to objects that will be serialized as object identities instead of full POJOs. It works in collaboration with @JsonIdentityInfo to force the use of object identities in every serialization, as opposed to in all but the first occurrence when @JsonIdentityReference is absent. This pair of annotations is most helpful when dealing with circular dependencies among objects. Please refer to section 4 of the Jackson – Bidirectional Relationship article for more information.

In order to demonstrate the use of @JsonIdentityReference, we will define two different bean classes, without and with this annotation.

The bean without @JsonIdentityReference:

@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
public class BeanWithoutIdentityReference {
    private int id;
    private String name;

    // constructor, getters and setters
}

For the bean using @JsonIdentityReference, we choose the id property to be the object identity:

@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
@JsonIdentityReference(alwaysAsId = true)
public class BeanWithIdentityReference {
    private int id;
    private String name;
    
    // constructor, getters and setters
}

In the first case, where @JsonIdentityReference is absent, the bean is serialized with full details on its properties:

BeanWithoutIdentityReference bean 
  = new BeanWithoutIdentityReference(1, "Bean Without Identity Reference Annotation");
String jsonString = mapper.writeValueAsString(bean);

The output of the serialization above:

{
    "id": 1,
    "name": "Bean Without Identity Reference Annotation"
}

When @JsonIdentityReference is used, the bean is serialized as a simple identity instead:

BeanWithIdentityReference bean 
  = new BeanWithIdentityReference(1, "Bean With Identity Reference Annotation");
String jsonString = mapper.writeValueAsString(bean);
assertEquals("1", jsonString);

3. @JsonAppend

The @JsonAppend annotation is used to add virtual properties to an object in addition to regular ones when that object is serialized. This is necessary when we want to add supplementary information directly into a JSON string, rather than changing the class definition. For instance, it might be more convenient to insert the version metadata of a bean to the corresponding JSON document than to provide it with an additional property.

Assume we have a bean without @JsonAppend as follows:

public class BeanWithoutAppend {
    private int id;
    private String name;

    // constructor, getters and setters
}

A test will confirm that in the absence of the @JsonAppend annotation, the serialization output does not contain information on the supplementary version property, despite the fact that we attempt to add it to the ObjectWriter object:

BeanWithoutAppend bean = new BeanWithoutAppend(2, "Bean Without Append Annotation");
ObjectWriter writer 
  = mapper.writerFor(BeanWithoutAppend.class).withAttribute("version", "1.0");
String jsonString = writer.writeValueAsString(bean);

The serialization output:

{
    "id": 2,
    "name": "Bean Without Append Annotation"
}

Now, let’s say we have a bean annotated with @JsonAppend:

@JsonAppend(attrs = { 
  @JsonAppend.Attr(value = "version") 
})
public class BeanWithAppend {
    private int id;
    private String name;

    // constructor, getters and setters
}

A similar test to the previous one will verify that when the @JsonAppend annotation is applied, the supplementary property is included after serialization:

BeanWithAppend bean = new BeanWithAppend(2, "Bean With Append Annotation");
ObjectWriter writer 
  = mapper.writerFor(BeanWithAppend.class).withAttribute("version", "1.0");
String jsonString = writer.writeValueAsString(bean);

The output of that serialization shows that the version property has been added:

{
    "id": 2,
    "name": "Bean With Append Annotation",
    "version": "1.0"
}

4. @JsonNaming

The @JsonNaming annotation is used to choose the naming strategies for properties in serialization, overriding the default. Using the value element, we can specify any strategy, including custom ones.

In addition to the default, which is LOWER_CAMEL_CASE (e.g. lowerCamelCase), the Jackson library provides us with four other built-in property naming strategies for convenience:

  • KEBAB_CASE: Name elements are separated by hyphens, e.g. kebab-case.
  • LOWER_CASE: All letters are lowercase with no separators, e.g. lowercase.
  • SNAKE_CASE: All letters are lowercase with underscores as separators between name elements, e.g. snake_case.
  • UPPER_CAMEL_CASE: All name elements, including the first one, start with a capitalized letter, followed by lowercase ones and there are no separators, e.g. UpperCamelCase.

This example will illustrate the way to serialize properties using snake case names, where a property named beanName is serialized as bean_name.

Given a bean definition:

@JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class)
public class NamingBean {
    private int id;
    private String beanName;

    // constructor, getters and setters
}

The test below demonstrates that the specified naming rule works as required:

NamingBean bean = new NamingBean(3, "Naming Bean");
String jsonString = mapper.writeValueAsString(bean);        
assertThat(jsonString, containsString("bean_name"));

The jsonString variable contains the following data:

{
    "id": 3,
    "bean_name": "Naming Bean"
}
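If none of the built-in strategies fit, the value element also accepts a custom strategy. A minimal sketch, using a hypothetical prefixing strategy of our own:

public class PrefixNamingStrategy
  extends PropertyNamingStrategy.PropertyNamingStrategyBase {

    @Override
    public String translate(String propertyName) {
        // hypothetical rule: prefix every JSON property name
        return "my_" + propertyName;
    }
}

Annotating a bean with @JsonNaming(PrefixNamingStrategy.class) would then serialize beanName as my_beanName.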

5. @JsonPropertyDescription

The Jackson library is able to create JSON schemas for Java types with the help of a separate module called JSON Schema. The schema is useful when we want to specify expected output when serializing Java objects, or to validate a JSON document before deserialization.
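The module ships as a separate artifact; a sketch of its Maven coordinates (the version shown is an assumption and should match your Jackson version):

<dependency>
    <groupId>com.fasterxml.jackson.module</groupId>
    <artifactId>jackson-module-jsonSchema</artifactId>
    <version>2.7.3</version>
</dependency>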

The @JsonPropertyDescription annotation allows a human-readable description to be added to the created JSON schema by providing the description field.

This section makes use of the bean declared below to demonstrate the capabilities of @JsonPropertyDescription:

public class PropertyDescriptionBean {
    private int id;
    @JsonPropertyDescription("This is a description of the name property")
    private String name;

    // getters and setters
}

The method for generating a JSON schema with the addition of the description field is shown below:

SchemaFactoryWrapper wrapper = new SchemaFactoryWrapper();
mapper.acceptJsonFormatVisitor(PropertyDescriptionBean.class, wrapper);
JsonSchema jsonSchema = wrapper.finalSchema();
String jsonString = mapper.writeValueAsString(jsonSchema);
assertThat(jsonString, containsString("This is a description of the name property"));

As we can see, the generation of JSON schema was successful:

{
    "type": "object",
    "id": "urn:jsonschema:com:baeldung:jackson:annotation:extra:PropertyDescriptionBean",
    "properties": 
    {
        "name": 
        {
            "type": "string",
            "description": "This is a description of the name property"
        },

        "id": 
        {
            "type": "integer"
        }
    }
}

6. @JsonPOJOBuilder

The @JsonPOJOBuilder annotation is used to configure a builder class to customize deserialization of a JSON document to recover POJOs when the naming convention is different from the default.

Suppose we need to deserialize the following JSON string:

{
    "id": 5,
    "name": "POJO Builder Bean"
}

That JSON source will be used to create an instance of the POJOBuilderBean:

@JsonDeserialize(builder = BeanBuilder.class)
public class POJOBuilderBean {
    private int identity;
    private String beanName;

    // constructor, getters and setters
}

The names of the bean’s properties are different from those of the fields in the JSON string. This is where @JsonPOJOBuilder comes to the rescue.

The @JsonPOJOBuilder annotation is accompanied by two properties:

  • buildMethodName: The name of the no-arg method used to instantiate the expected bean after binding JSON fields to that bean’s properties. The default name is build.
  • withPrefix: The name prefix for auto-detection of matching between the JSON and bean’s properties. The default prefix is with.

This example makes use of the BeanBuilder class below, which is used on POJOBuilderBean:

@JsonPOJOBuilder(buildMethodName = "createBean", withPrefix = "construct")
public class BeanBuilder {
    private int idValue;
    private String nameValue;

    public BeanBuilder constructId(int id) {
        idValue = id;
        return this;
    }

    public BeanBuilder constructName(String name) {
        nameValue = name;
        return this;
    }

    public POJOBuilderBean createBean() {
        return new POJOBuilderBean(idValue, nameValue);
    }
}

In the code above, we have configured the @JsonPOJOBuilder to use a build method called createBean and the construct prefix for matching properties.

The application of @JsonPOJOBuilder to a bean is described and tested as follows:

String jsonString = "{\"id\":5,\"name\":\"POJO Builder Bean\"}";
POJOBuilderBean bean = mapper.readValue(jsonString, POJOBuilderBean.class);

assertEquals(5, bean.getIdentity());
assertEquals("POJO Builder Bean", bean.getBeanName());

The result shows that a new data object has been successfully re-created from the JSON source, despite a mismatch in property names.

7. @JsonTypeId

The @JsonTypeId annotation is used to indicate that the annotated property should be serialized as the type id when including polymorphic type information, rather than as a regular property. That polymorphic metadata is used during deserialization to recreate objects of the same subtypes as they were before serialization, rather than of the declared supertypes.

For more information on Jackson’s handling of inheritance, see section 2 of the Inheritance in Jackson article.

Let’s say we have a bean class definition as follows:

public class TypeIdBean {
    private int id;
    @JsonTypeId
    private String name;

    // constructor, getters and setters
}

The following test validates that @JsonTypeId works as it is meant to:

mapper.enableDefaultTyping(DefaultTyping.NON_FINAL);
TypeIdBean bean = new TypeIdBean(6, "Type Id Bean");
String jsonString = mapper.writeValueAsString(bean);
        
assertThat(jsonString, containsString("Type Id Bean"));

The serialization process’ output:

[
    "Type Id Bean",
    {
        "id": 6
    }
]

8. @JsonTypeIdResolver

The @JsonTypeIdResolver annotation is used to signify a custom type identity handler in serialization and deserialization. That handler is responsible for the conversion between Java types and the type ids included in a JSON document.

Suppose that we want to embed type information in a JSON string when dealing with the following class hierarchy.

The AbstractBean superclass:

@JsonTypeInfo(
  use = JsonTypeInfo.Id.NAME, 
  include = JsonTypeInfo.As.PROPERTY, 
  property = "@type"
)
@JsonTypeIdResolver(BeanIdResolver.class)
public class AbstractBean {
    private int id;

    protected AbstractBean(int id) {
        this.id = id;
    }

    // no-arg constructor, getter and setter
}

The FirstBean subclass:

public class FirstBean extends AbstractBean {
    String firstName;

    public FirstBean(int id, String name) {
        super(id);
        setFirstName(name);
    }

    // no-arg constructor, getter and setter
}

The LastBean subclass:

public class LastBean extends AbstractBean {
    String lastName;

    public LastBean(int id, String name) {
        super(id);
        setLastName(name);
    }

    // no-arg constructor, getter and setter
}

Instances of those classes are used to populate a BeanContainer object:

public class BeanContainer {
    private List<AbstractBean> beans;

    // getter and setter
}

We can see that the AbstractBean class is annotated with @JsonTypeIdResolver, indicating that it uses a custom TypeIdResolver to decide how to include subtype information in serialization and how to make use of that metadata the other way round.

Here is the resolver class to handle inclusion of type information:

public class BeanIdResolver extends TypeIdResolverBase {
    
    private JavaType superType;

    @Override
    public void init(JavaType baseType) {
        superType = baseType;
    }

    @Override
    public Id getMechanism() {
        return Id.NAME;
    }

    @Override
    public String idFromValue(Object obj) {
        return idFromValueAndType(obj, obj.getClass());
    }

    @Override
    public String idFromValueAndType(Object obj, Class<?> subType) {
        String typeId = null;
        switch (subType.getSimpleName()) {
        case "FirstBean":
            typeId = "bean1";
            break;
        case "LastBean":
            typeId = "bean2";
        }
        return typeId;
    }

    @Override
    public JavaType typeFromId(DatabindContext context, String id) {
        Class<?> subType = null;
        switch (id) {
        case "bean1":
            subType = FirstBean.class;
            break;
        case "bean2":
            subType = LastBean.class;
        }
        return context.constructSpecializedType(superType, subType);
    }
}

The two most notable methods are idFromValueAndType and typeFromId: the former determines how type information is included when serializing POJOs, and the latter determines the subtype of a re-created object from that metadata.

In order to make sure that both serialization and deserialization work well, let’s write a test to validate the complete process.

First, we need to instantiate a bean container and bean classes, then populate that container with bean instances:

FirstBean bean1 = new FirstBean(1, "Bean 1");
LastBean bean2 = new LastBean(2, "Bean 2");

List<AbstractBean> beans = new ArrayList<>();
beans.add(bean1);
beans.add(bean2);

BeanContainer serializedContainer = new BeanContainer();
serializedContainer.setBeans(beans);

Next, the BeanContainer object is serialized and we confirm that the resulting string contains type information:

String jsonString = mapper.writeValueAsString(serializedContainer);
assertThat(jsonString, containsString("bean1"));
assertThat(jsonString, containsString("bean2"));

The output of serialization is shown below:

{
    "beans": 
    [
        {
            "@type": "bean1",
            "id": 1,
            "firstName": "Bean 1"
        },

        {
            "@type": "bean2",
            "id": 2,
            "lastName": "Bean 2"
        }
    ]
}

That JSON structure will be used to re-create objects of the same subtypes as before serialization. Here are the implementation steps for deserialization:

BeanContainer deserializedContainer = mapper.readValue(jsonString, BeanContainer.class);
List<AbstractBean> beanList = deserializedContainer.getBeans();
assertThat(beanList.get(0), instanceOf(FirstBean.class));
assertThat(beanList.get(1), instanceOf(LastBean.class));

9. Conclusion

This tutorial has explained several less-common Jackson annotations in detail. The implementation of these examples and code snippets can be found in a GitHub project.



Java Web Weekly, Issue 117


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> JEP 286 Survey Results for Local Variable Type Inference [infoq.com]

A quick followup to the survey Brian Goetz ran to take the pulse of the community on the best way to implement type inference in Java. Looks like a pretty decisive yes.

>> Simplifying Database Queries with Jinq [infoq.com]

Jinq looks like a clean, nice way to access your SQL data – here’s just a quick example to show you what the library can do.

>> Improve Your JUnit Experience with this Annotation [jooq.org]

Very quick and to the point way to run your tests in a more predictable order – which makes a lot of sense.

I personally like the unpredictable nature of tests – it’s a quick and nice way to flush out any unforeseen connections between them – but I can certainly see the appeal of running them in a clear order.

>> How to call Oracle stored procedures and functions from Hibernate [vladmihalcea.com]

A very practical and useful guide to using stored procedures with Hibernate. A bit annotation-heavy, but if you’re using JPA, you’re already used to that.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Understanding CSRF, the video tutorial edition [troyhunt.com]

Having a solid understanding of CSRF attacks, well beyond the basics, can save your bacon when taking your system to production. Definitely have a look at this one.

>> Uber Bug Bounty: Turning Self-XSS into Good-XSS [fin1te.net]

I enjoy reading through the details of these attacks. I’m saving this one for the weekend but it looks promising, so I’m including it here as well.

>> Writing OpenAPI (Swagger) Specification Tutorial – Part 3 – Simplifying specification file [apihandyman.io]

API documentation is the new hotness, yes, but it’s also necessary. And while I’m using Swagger myself, I’m keeping a close eye on the other tools available out there.

>> Event Sourcing vs CRUD [alexecollins.com]

A very quick and to the point set of questions to ask yourself before deciding if Event Sourcing makes sense for the architecture of your system.

Also worth reading:

3. Musings

>> That Code’s Not Dead — It Went To a Farm Upstate… And You’re Paying For It [daedtech.com]

Removing “dead” code is critical to keep the sanity of your system (and your own while you work on that system).

One of the cleanest, easiest-to-work-with codebases I touched early on in my career was one where the team lead was ruthless about cutting code that wasn’t used immediately.

>> My Passion Was My Weak Spot [jacquesmattheij.com]

Passion is one thing, and allowing it to put you in an unhealthy, one-sided type of work is another.

This piece is definitely worth the read, especially if you’re relatively new to working as a developer.

>> Take a Step Back [techblog.bozho.net]

Some solid advice if there ever was any – think through those little, day-to-day decisions to keep your system and your codebase clean and nimble.

>> AppDynamics vs Dynatrace: Battle of the Enterprise Monitoring Giants [takipi.com]

If you ever asked the monitoring question for the system you’re working on, you’ve asked yourself this exact question more than once.

My only gripe about this one is that it doesn’t include the other major player in the space – New Relic. Other than that – some solid information over here.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> My PowerPoint slides have a little something for everyone [dilbert.com]

>> I’m in the mood to tweet [dilbert.com]

>> You’re exactly what I’m trying to avoid [dilbert.com]

 

5. Pick of the Week

Every year I run a survey to find out how the adoption of new technologies is going. Here are the new numbers for Spring and Spring Boot:

>> Spring and Spring Boot Adoption in March 2016 [baeldung.com]

 



XStream User Guide: Converting Objects to XML


1. Overview

In this tutorial, we’ll learn how to use the XStream library to serialize Java objects to XML.

2. Features

There are quite a few interesting benefits to using XStream to serialize and deserialize XML:

  • Configured properly, it produces very clean XML
  • Provides significant opportunities for customization of the XML output
  • Support for object graphs, including circular references
  • For most use cases, the XStream instance is thread-safe, once configured (there are caveats when using annotations)
  • Clear messages are provided during exception handling to help diagnose issues
  • Starting with version 1.4.7, we have security features available to disallow serialization of certain types

3. Project Setup

In order to use XStream in our project we will add the following Maven dependency:

<dependency>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream</artifactId>
    <version>1.4.9</version>
</dependency>

4. Basic Usage

The XStream class is a facade for the API. When creating an instance of XStream, we need to take care of thread safety issues as well:

XStream xstream = new XStream();

Once an instance is created and configured, it may be shared across multiple threads for marshalling/unmarshalling unless you enable annotation processing.

4.1. Drivers

Several drivers are supported, such as DomDriver, StaxDriver, XppDriver, and more. These drivers have different performance and resource usage characteristics.

The XPP3 driver is used by default, but of course we can easily change the driver:

XStream xstream = new XStream(new StaxDriver());

4.2. Generating XML

Let’s start by defining a simple POJO – Customer:

public class Customer {

    private String firstName;
    private String lastName;
    private Date dob;

    // standard constructor, setters, and getters
}

Let’s now generate an XML representation of the object:

Customer customer = new Customer("John", "Doe", new Date());
String dataXml = xstream.toXML(customer);

Using the default settings, the following output is produced:

<com.baeldung.pojo.Customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</com.baeldung.pojo.Customer>

From this output, we can clearly see that the containing tag uses the fully-qualified class name of Customer by default.

There are many reasons we might decide that the default behavior doesn’t suit our needs. For example, we might not be comfortable exposing the package structure of our application. Also, the XML generated is significantly longer.

5. Aliases

An alias is a name we wish to use for elements rather than using default names.

For example, we can replace com.baeldung.pojo.Customer with customer by registering an alias for the Customer class. We can also add aliases for properties of a class. By using aliases, we can make our XML output much more readable and less Java-specific.

5.1. Class Aliases

Aliases can be registered either programmatically or using annotations.

Let’s now annotate our Customer class with @XStreamAlias:

@XStreamAlias("customer")

Now we need to configure our instance to use this annotation:

xstream.processAnnotations(Customer.class);

Alternatively, if we wish to configure an alias programmatically, we can use the code below:

xstream.alias("customer", Customer.class);

Whether using the alias or programmatic configuration, the output for a Customer object will be much cleaner:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

5.2. Field Aliases

We can also add aliases for fields using the same annotation used for aliasing classes. For example, if we wanted the field firstName to be replaced with fn in the XML representation, we could use the following annotation:

@XStreamAlias("fn")
private String firstName;

Alternatively, we can accomplish the same goal programmatically:

xstream.aliasField("fn", Customer.class, "firstName");

The aliasField method accepts three arguments: the alias we wish to use, the class in which the property is defined, and the property name we wish to alias.

Whichever method is used, the output is the same:

<customer>
    <fn>John</fn>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

5.3. Default Aliases

There are several aliases pre-registered for common classes – here are a few of them:

alias("float", Float.class);
alias("date", Date.class);
alias("gregorian-calendar", Calendar.class);
alias("url", URL.class);
alias("list", List.class);
alias("locale", Locale.class);
alias("currency", Currency.class);

6. Collections

Now we will add a list of ContactDetails inside the Customer class.

private List<ContactDetails> contactDetailsList;

With default settings for collection handling, this is the output:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:05.874 UTC</dob>
    <contactDetailsList>
        <ContactDetails>
            <mobile>6673543265</mobile>
            <landline>0124-2460311</landline>
        </ContactDetails>
        <ContactDetails>
            <mobile>4676543565</mobile>
            <landline>0120-223312</landline>
        </ContactDetails>
    </contactDetailsList>
</customer>

Let’s suppose we need to omit the contactDetailsList parent tags, and we just want each ContactDetails element to be a child of the customer element. Let us modify our example again:

xstream.addImplicitCollection(Customer.class, "contactDetailsList");

Now, when the XML is generated, the contactDetailsList tags are omitted, resulting in the XML below:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:20.541 UTC</dob>
    <ContactDetails>
        <mobile>6673543265</mobile>
        <landline>0124-2460311</landline>
    </ContactDetails>
    <ContactDetails>
        <mobile>4676543565</mobile>
        <landline>0120-223312</landline>
    </ContactDetails>
</customer>

The same can also be achieved using annotations:

@XStreamImplicit
private List<ContactDetails> contactDetailsList;

7. Converters

XStream uses a map of Converter instances, each with its own conversion strategy. These convert supplied data to a particular format in XML and back again.

In addition to using the default converters, we can modify the defaults or register custom converters.

7.1. Modifying an Existing Converter

Suppose we weren’t happy with the way the dob tags were generated using the default settings. We can register XStream’s provided Date converter (DateConverter) with a custom format:

xstream.registerConverter(new DateConverter("dd-MM-yyyy", null));

The above will produce the output in “dd-MM-yyyy” format:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>14-02-1986</dob>
</customer>

7.2. Custom Converters

We can also create a custom converter to accomplish the same output as in the previous section:

public class MyDateConverter implements Converter {

    private SimpleDateFormat formatter = new SimpleDateFormat("dd-MM-yyyy");

    @Override
    public boolean canConvert(Class clazz) {
        return Date.class.isAssignableFrom(clazz);
    }

    @Override
    public void marshal(
      Object value, HierarchicalStreamWriter writer, MarshallingContext arg2) {
        Date date = (Date)value;
        writer.setValue(formatter.format(date));
    }

    // other methods
}

Finally, we register our MyDateConverter class as below:

xstream.registerConverter(new MyDateConverter());

We can also create converters that implement the SingleValueConverter interface, which is designed to convert an object into a string.

public class MySingleValueConverter implements SingleValueConverter {

    @Override
    public boolean canConvert(Class clazz) {
        return Customer.class.isAssignableFrom(clazz);
    }

    @Override
    public String toString(Object obj) {
        SimpleDateFormat formatter = new SimpleDateFormat("dd-MM-yyyy");
        Date date = ((Customer) obj).getDob();
        return ((Customer) obj).getFirstName() + "," 
          + ((Customer) obj).getLastName() + ","
          + formatter.format(date);
    }

    // other methods
}

Finally, we register MySingleValueConverter:

xstream.registerConverter(new MySingleValueConverter());

Using MySingleValueConverter, the XML output for a Customer is as follows:

<customer>John,Doe,14-02-1986</customer>

7.3. Converter Priority

When registering Converter objects, it is possible to set their priority level as well.

From the XStream javadocs:

The converters can be registered with an explicit priority. By default they are registered with XStream.PRIORITY_NORMAL. Converters of same priority will be used in the reverse sequence they have been registered. The default converter, i.e. the converter which will be used if no other registered converter is suitable, can be registered with priority XStream.PRIORITY_VERY_LOW. XStream uses by default the ReflectionConverter as the fallback converter.

The API provides several named priority values:

public static final int PRIORITY_NORMAL = 0;
public static final int PRIORITY_LOW = -10;
public static final int PRIORITY_VERY_LOW = -20;
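For example, registering our custom converter from earlier below normal priority would make XStream prefer any matching normal-priority converter (a minimal sketch):

xstream.registerConverter(new MyDateConverter(), XStream.PRIORITY_LOW);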

8. Omitting Fields

We can omit fields from our generated XML using either annotations or programmatic configuration. In order to omit a field using an annotation, we simply apply the @XStreamOmitField annotation to the field in question:

@XStreamOmitField 
private String firstName;

In order to omit the field programmatically, we use the following method:

xstream.omitField(Customer.class, "firstName");

Whichever method we select, the output is the same:

<customer> 
    <lastName>Doe</lastName> 
    <dob>14-02-1986</dob> 
</customer>

9. Attribute Fields

Sometimes we may wish to serialize a field as an attribute of an element rather than as an element itself. Suppose we add a contactType field:

private String contactType;

If we want to set contactType as an XML attribute, we can use the @XStreamAsAttribute annotation:

@XStreamAsAttribute
private String contactType;

Alternatively, we can accomplish the same goal programmatically:

xstream.useAttributeFor(ContactDetails.class, "contactType");

The output of either of the above methods is the same:

<ContactDetails contactType="Office">
    <mobile>6673543265</mobile>
    <landline>0124-2460311</landline>
</ContactDetails>

10. Concurrency

XStream’s processing model presents some challenges. Once the instance is configured, it is thread-safe.

It is important to note that processing of annotations modifies the configuration just before marshalling/unmarshalling. And so – if we require the instance to be configured on-the-fly using annotations, it is generally a good idea to use a separate XStream instance for each thread.
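One common way to do that (our own sketch, not an XStream facility) is to hold a fully configured instance per thread:

private static final ThreadLocal<XStream> XSTREAM_PER_THREAD = ThreadLocal.withInitial(() -> {
    XStream xstream = new XStream();
    // annotation processing happens once per thread, before any marshalling
    xstream.processAnnotations(Customer.class);
    return xstream;
});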

11. Conclusion

In this article, we covered the basics of using XStream to convert objects to XML. We also learned about customizations we can use to ensure the XML output meets our needs. Finally, we looked at thread-safety problems with annotations.

In the next article in this series, we will learn about converting XML back to Java objects.

The complete source code for this article can be downloaded from the linked GitHub repository.

Java Web Weekly, Issue 118


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Var and val in Java? [joda.org]

An interesting opinion piece about the introduction of local variable type inference in Java.

>> The Spring Boot Dashboard in STS – Part 5: Working with Launch configurations [spring.io]

Launch configs have always been a bit hard to manage in Eclipse – it’s nice to see the new Boot dashboard make some headway toward making these easier to manage.

>> Jenkins 2.0 Beta Available, Adds New Pipeline Build System [infoq.com] and >> Jenkins 2.0 Overview [jenkins.io]

The Jenkins ecosystem is moving forward and we’ve all but forgotten that Hudson was even a thing.

>> Retry handling with Spring-Retry [mscharhag.com]

Retry logic was something I had to roll out by hand many years back – so having out of the box support for it in Spring is highly useful.

>> 10 Features I Wish Java Would Steal From the Kotlin Language [jooq.org]

A fun read and a whole lot of wishful thinking :)

>> JUnit 5 – Architecture [codefx.org]

A deeper look into the architecture of the upcoming JUnit 5, and how the improvements will help in quite a number of scenarios (including IDEs). Cool stuff.

>> Benchmarking High-Concurrency HTTP Servers on the JVM [paralleluniverse.co]

A very detailed and well researched look at the state of concurrency of our HTTP servers running on the JVM.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

Worth reading:

3. Musings

>> Thanks For Ruining Another Game Forever, Computers [codinghorror.com]

If you’ve been at least mildly interested in the ongoing trend of computers defeating human players in games like chess and, more recently, Go – this is a fun and interesting read.

>> So You Want Some Passive Income [daedtech.com]

A quick and practical read if you’re starting to think about passive(ish) income.

Just keep in mind that passive is an umbrella term, a long-term play and an oversimplification. Done right, it’s also a very good way to pay the bills.

>> Software Can’t Live On Its Own [techblog.bozho.net]

The idea of unsupervised software, much like the concept of passive income, doesn’t quite work out in practice.

And so exploring this concept and being realistic about what it takes to actually support a system that’s seeing real-world use is definitely important.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Oh, it’s on now [dilbert.com]

>> Do you mind while I pretend to be helpful? [dilbert.com]

>> I showed an interest in her opinion [dilbert.com]

 

5. Pick of the Week

>> Sleep deprivation is not a badge of honor [signalvnoise.com]

 




Introduction to jOOQ with Spring


1. Overview

This article will introduce Java Object Oriented Querying – jOOQ – and a simple way to set it up in collaboration with the Spring Framework.

Most Java applications have some sort of SQL persistence and access that layer with the help of higher level tools such as JPA. And while that’s useful, in some cases you really need a finer, more nuanced tool to get to your data, or to actually take advantage of everything the underlying DB has to offer.

jOOQ avoids some typical ORM patterns and generates code that allows us to build typesafe queries and gain more complete control over the generated SQL via a clean and powerful fluent API.

2. Maven Dependencies

The following dependencies are necessary to run the code in this tutorial.

2.1. jOOQ

<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>jooq</artifactId>
    <version>3.7.3</version>
</dependency>

2.2. Spring

There are several Spring dependencies required for our example; however, to make things simple, we just need to explicitly include two of them in the POM file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>4.2.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>4.2.5.RELEASE</version>
</dependency>

2.3. Database

To make things easy for our example, we will make use of the H2 embedded database:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.191</version>
</dependency>

3. Code Generation

3.1. Database Structure

Let’s introduce the database structure we will be working with throughout this article. Suppose that we need to create a database for a publisher to store information about the books and authors it manages, where an author may write many books and a book may be co-written by many authors.

To make it simple, we will generate only three tables: book for books, author for authors, and another table called author_book to represent the many-to-many relationship between authors and books. The author table has three columns: id, first_name and last_name. The book table contains only a title column and the id primary key.

The following SQL statements, stored in the intro_schema.sql resource file, will be executed against the database we set up earlier, creating the necessary tables and populating them with sample data:

DROP TABLE IF EXISTS author_book, author, book;

CREATE TABLE author (
  id             INT          NOT NULL PRIMARY KEY,
  first_name     VARCHAR(50),
  last_name      VARCHAR(50)  NOT NULL
);

CREATE TABLE book (
  id             INT          NOT NULL PRIMARY KEY,
  title          VARCHAR(100) NOT NULL
);

CREATE TABLE author_book (
  author_id      INT          NOT NULL,
  book_id        INT          NOT NULL,
  
  PRIMARY KEY (author_id, book_id),
  CONSTRAINT fk_ab_author     FOREIGN KEY (author_id)  REFERENCES author (id)  
    ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT fk_ab_book       FOREIGN KEY (book_id)    REFERENCES book   (id)
);

INSERT INTO author VALUES 
  (1, 'Kathy', 'Sierra'), 
  (2, 'Bert', 'Bates'), 
  (3, 'Bryan', 'Basham');

INSERT INTO book VALUES 
  (1, 'Head First Java'), 
  (2, 'Head First Servlets and JSP'),
  (3, 'OCA/OCP Java SE 7 Programmer');

INSERT INTO author_book VALUES (1, 1), (1, 3), (2, 1);

3.2. Properties Maven Plugin

We will use three different Maven plugins to generate the jOOQ code. The first of these is the Properties Maven plugin.

This plugin is used to read configuration data from a resource file. It is not required since the data may be directly added to the POM, but it is a good idea to manage the properties externally.

In this section, we will define properties for database connections, including the JDBC driver class, database URL, username and password, in a file named intro_config.properties. Externalizing these properties makes it easy to switch the database or just change the configuration data.
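The property keys must match the placeholders referenced by the plugins below (db.driver, db.url, db.username, db.password). A minimal sketch of the file, with values assumed for an H2 setup like ours:

# intro_config.properties – sample values, assuming the H2 embedded database
db.driver=org.h2.Driver
db.url=jdbc:h2:~/jooq-intro
db.username=sa
db.password=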

The read-project-properties goal of this plugin should be bound to an early phase so that the configuration data can be prepared for use by other plugins. In this case, it is bound to the initialize phase:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>properties-maven-plugin</artifactId>
    <version>1.0.0</version>
    <executions>
        <execution>
            <phase>initialize</phase>
            <goals>
                <goal>read-project-properties</goal>
            </goals>
            <configuration>
                <files>
                    <file>src/main/resources/intro_config.properties</file>
                </files>
            </configuration>
        </execution>
    </executions>
</plugin>

3.3. SQL Maven Plugin

The SQL Maven plugin is used to execute SQL statements to create and populate database tables. It will make use of the properties that have been extracted from the intro_config.properties file by the Properties Maven plugin, and take the SQL statements from the intro_schema.sql resource.

The SQL Maven plugin is configured as below:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>sql-maven-plugin</artifactId>
    <version>1.5</version>
    <executions>
        <execution>
            <phase>initialize</phase>
            <goals>
                <goal>execute</goal>
            </goals>
            <configuration>
                <driver>${db.driver}</driver>
                <url>${db.url}</url>
                <username>${db.username}</username>
                <password>${db.password}</password>
                <srcFiles>
                    <srcFile>src/main/resources/intro_schema.sql</srcFile>
                </srcFiles>
            </configuration>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <version>1.4.191</version>
        </dependency>
    </dependencies>
</plugin>

Note that this plugin must be placed later than the Properties Maven plugin in the POM file since their execution goals are both bound to the same phase, and Maven will execute them in the order they are listed.

3.4. jOOQ Codegen Plugin

The jOOQ Codegen plugin generates Java code from a database table structure. Its generate goal should be bound to the generate-sources phase to ensure the correct order of execution. The plugin metadata looks like the following:

<plugin>
    <groupId>org.jooq</groupId>
    <artifactId>jooq-codegen-maven</artifactId>
    <version>${org.jooq.version}</version>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <jdbc>
                    <driver>${db.driver}</driver>
                    <url>${db.url}</url>
                    <user>${db.username}</user>
                    <password>${db.password}</password>
                </jdbc>
                <generator>
                    <target>
                        <packageName>com.baeldung.jooq.introduction.db</packageName>
                        <directory>src/main/java</directory>
                    </target>
                </generator>
            </configuration>
        </execution>
    </executions>
</plugin>

3.5. Generating Code

To finish up the process of source code generation, we need to run the Maven generate-sources phase. In Eclipse, we can do this by right-clicking on the project and choosing Run As –> Maven generate-sources. After the command completes, source files corresponding to the author, book, and author_book tables (and several others for supporting classes) are generated.

Let’s dig into table classes to see what jOOQ produced. Each class has a static field of the same name as the class, except that all letters in the name are capitalized. The following are code snippets taken from the generated classes’ definitions:

The Author class:

public class Author extends TableImpl<AuthorRecord> {
    public static final Author AUTHOR = new Author();

    // other class members
}

The Book class:

public class Book extends TableImpl<BookRecord> {
    public static final Book BOOK = new Book();

    // other class members
}

The AuthorBook class:

public class AuthorBook extends TableImpl<AuthorBookRecord> {
    public static final AuthorBook AUTHOR_BOOK = new AuthorBook();

    // other class members
}

The instances referenced by those static fields will serve as data access objects to represent the corresponding tables when working with other layers in a project.

4. Spring Configuration

4.1. Translating jOOQ Exceptions to Spring

In order to make exceptions thrown from jOOQ execution consistent with Spring support for database access, we need to translate them into subtypes of the DataAccessException class.

Let’s define an implementation of the ExecuteListener interface to convert exceptions:

public class ExceptionTranslator extends DefaultExecuteListener {
    public void exception(ExecuteContext context) {
        SQLDialect dialect = context.configuration().dialect();
        SQLExceptionTranslator translator 
          = new SQLErrorCodeSQLExceptionTranslator(dialect.name());
        context.exception(translator
          .translate("Access database using jOOQ", context.sql(), context.sqlException()));
    }
}

This class will be used by the Spring application context.

4.2. Configuring Spring

This section will go through steps to define a PersistenceContext that contains metadata and beans to be used by the Spring application context.

Let’s get started by applying necessary annotations to the class:

  • @Configuration: Makes the class recognized as a container for bean definitions
  • @ComponentScan: Configures scanning directives, including the value option to declare an array of package names to search for components. In this tutorial, the package to be searched is the one generated by the jOOQ Codegen Maven plugin
  • @EnableTransactionManagement: Enables transactions to be managed by Spring
  • @PropertySource: Indicates the locations of the properties files to be loaded. The value in this article points to the file containing the configuration data and dialect of the database, which is the same file mentioned in subsection 3.2.

@Configuration
@ComponentScan({"com.baeldung.jooq.introduction.db.public_.tables"})
@EnableTransactionManagement
@PropertySource("classpath:intro_config.properties")
public class PersistenceContext {
    // Other declarations
}

Next, use an Environment object to get the configuration data, which is then used to configure the DataSource bean:

@Autowired
private Environment environment;

@Bean
public DataSource dataSource() {
    JdbcDataSource dataSource = new JdbcDataSource();

    dataSource.setUrl(environment.getRequiredProperty("db.url"));
    dataSource.setUser(environment.getRequiredProperty("db.username"));
    dataSource.setPassword(environment.getRequiredProperty("db.password"));
    return dataSource; 
}

Now we define several beans to work with database access operations:

@Bean
public TransactionAwareDataSourceProxy transactionAwareDataSource() {
    return new TransactionAwareDataSourceProxy(dataSource());
}

@Bean
public DataSourceTransactionManager transactionManager() {
    return new DataSourceTransactionManager(dataSource());
}

@Bean
public DataSourceConnectionProvider connectionProvider() {
    return new DataSourceConnectionProvider(transactionAwareDataSource());
}

@Bean
public ExceptionTranslator exceptionTransformer() {
    return new ExceptionTranslator();
}
    
@Bean
public DefaultDSLContext dsl() {
    return new DefaultDSLContext(configuration());
}

Finally, we provide a jOOQ Configuration implementation and declare it as a Spring bean to be used by the DSLContext class:

@Bean
public DefaultConfiguration configuration() {
    DefaultConfiguration jooqConfiguration = new DefaultConfiguration();
    jooqConfiguration.set(connectionProvider());
    jooqConfiguration.set(new DefaultExecuteListenerProvider(exceptionTransformer()));

    String sqlDialectName = environment.getRequiredProperty("jooq.sql.dialect");
    SQLDialect dialect = SQLDialect.valueOf(sqlDialectName);
    jooqConfiguration.set(dialect);

    return jooqConfiguration;
}

5. Using jOOQ with Spring

This section demonstrates the use of jOOQ in common database access queries. There are two tests, one for commit and one for rollback, for each type of “write” operation, including inserting, updating, and deleting data. The use of “read” operations is illustrated by selecting data to verify the “write” queries.

We will begin by declaring an auto-wired DSLContext object and instances of jOOQ generated classes to be used by all testing methods:

@Autowired
private DSLContext dsl;

Author author = Author.AUTHOR;
Book book = Book.BOOK;
AuthorBook authorBook = AuthorBook.AUTHOR_BOOK;

5.1. Inserting Data

The first step is to insert data into tables:

dsl.insertInto(author)
  .set(author.ID, 4)
  .set(author.FIRST_NAME, "Herbert")
  .set(author.LAST_NAME, "Schildt")
  .execute();
dsl.insertInto(book)
  .set(book.ID, 4)
  .set(book.TITLE, "A Beginner's Guide")
  .execute();
dsl.insertInto(authorBook)
  .set(authorBook.AUTHOR_ID, 4)
  .set(authorBook.BOOK_ID, 4)
  .execute();

A SELECT query to extract data:

Result<Record3<Integer, String, Integer>> result = dsl
  .select(author.ID, author.LAST_NAME, DSL.count())
  .from(author)
  .join(authorBook)
  .on(author.ID.equal(authorBook.AUTHOR_ID))
  .join(book)
  .on(authorBook.BOOK_ID.equal(book.ID))
  .groupBy(author.LAST_NAME)
  .fetch();

The above query produces the following output:

+----+---------+-----+
|  ID|LAST_NAME|count|
+----+---------+-----+
|   1|Sierra   |    2|
|   2|Bates    |    1|
|   4|Schildt  |    1|
+----+---------+-----+

The result is confirmed by the Assert API:

assertEquals(3, result.size());
assertEquals("Sierra", result.getValue(0, author.LAST_NAME));
assertEquals(Integer.valueOf(2), result.getValue(0, DSL.count()));
assertEquals("Schildt", result.getValue(2, author.LAST_NAME));
assertEquals(Integer.valueOf(1), result.getValue(2, DSL.count()));

When a failure occurs due to an invalid query, an exception is thrown and the transaction rolls back. In the following example, the INSERT query violates a foreign key constraint, resulting in an exception:

@Test(expected = DataAccessException.class)
public void givenInvalidData_whenInserting_thenFail() {
    dsl.insertInto(authorBook)
      .set(authorBook.AUTHOR_ID, 4)
      .set(authorBook.BOOK_ID, 5)
      .execute();
}

5.2. Updating Data

Now let’s update the existing data:

dsl.update(author)
  .set(author.LAST_NAME, "Baeldung")
  .where(author.ID.equal(3))
  .execute();
dsl.update(book)
  .set(book.TITLE, "Building your REST API with Spring")
  .where(book.ID.equal(3))
  .execute();
dsl.insertInto(authorBook)
  .set(authorBook.AUTHOR_ID, 3)
  .set(authorBook.BOOK_ID, 3)
  .execute();

Get the necessary data:

Result<Record3<Integer, String, String>> result = dsl
  .select(author.ID, author.LAST_NAME, book.TITLE)
  .from(author)
  .join(authorBook)
  .on(author.ID.equal(authorBook.AUTHOR_ID))
  .join(book)
  .on(authorBook.BOOK_ID.equal(book.ID))
  .where(author.ID.equal(3))
  .fetch();

The output should be:

+----+---------+----------------------------------+
|  ID|LAST_NAME|TITLE                             |
+----+---------+----------------------------------+
|   3|Baeldung |Building your REST API with Spring|
+----+---------+----------------------------------+

The following test will verify that jOOQ worked as expected:

assertEquals(1, result.size());
assertEquals(Integer.valueOf(3), result.getValue(0, author.ID));
assertEquals("Baeldung", result.getValue(0, author.LAST_NAME));
assertEquals("Building your REST API with Spring", result.getValue(0, book.TITLE));

In case of a failure, an exception is thrown and the transaction rolls back, which we confirm with a test:

@Test(expected = DataAccessException.class)
public void givenInvalidData_whenUpdating_thenFail() {
    dsl.update(authorBook)
      .set(authorBook.AUTHOR_ID, 4)
      .set(authorBook.BOOK_ID, 5)
      .execute();
}

5.3. Deleting Data

The following method deletes some data:

dsl.delete(author)
  .where(author.ID.lt(3))
  .execute();

Here is the query to read the affected table:

Result<Record3<Integer, String, String>> result = dsl
  .select(author.ID, author.FIRST_NAME, author.LAST_NAME)
  .from(author)
  .fetch();

The query output:

+----+----------+---------+
|  ID|FIRST_NAME|LAST_NAME|
+----+----------+---------+
|   3|Bryan     |Basham   |
+----+----------+---------+

The following test verifies the deletion:

assertEquals(1, result.size());
assertEquals("Bryan", result.getValue(0, author.FIRST_NAME));
assertEquals("Basham", result.getValue(0, author.LAST_NAME));

On the other hand, if a query is invalid, it will throw an exception and the transaction rolls back. The following test will prove that:

@Test(expected = DataAccessException.class)
public void givenInvalidData_whenDeleting_thenFail() {
    dsl.delete(book)
      .where(book.ID.equal(1))
      .execute();
}

6. Conclusion

This tutorial introduced the basics of jOOQ, a Java library for working with databases. It covered the steps to generate source code from a database structure and how to interact with that database using the newly created classes.

The implementation of all these examples and code snippets can be found in a GitHub project.



XStream User Guide: Converting XML to Objects


1. Overview

In a previous article, we learned how to use XStream to serialize Java objects to XML. In this tutorial, we will learn how to do the reverse: deserialize XML to Java objects. These tasks can be accomplished using annotations or programmatically.

To learn about the basic requirements for setting up XStream and its dependencies, please reference the previous article.

2. Deserialize an Object from XML

To start with, suppose we have the following XML:

<com.baeldung.pojo.Customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</com.baeldung.pojo.Customer>

We need to convert this to a Java Customer object:

public class Customer {
 
    private String firstName;
    private String lastName;
    private Date dob;
 
    // standard setters and getters
}

The XML can be input in a number of ways, including FileInputStream, Reader, or String. For simplicity, we’ll assume that we have the XML above in a String object.

XStream xstream = new XStream(); // instance set up as in the previous article
Customer convertedCustomer = (Customer) xstream.fromXML(customerXmlString);
Assert.assertTrue(convertedCustomer.getFirstName().equals("John"));

3. Aliases

In the first example, the XML had the fully-qualified name of the class in the outermost XML tag, matching the location of our Customer class. With this setup, XStream easily converts the XML to our object without any extra configuration. But we may not always have these conditions. We might not have control over the XML tag naming, or we might decide to add aliases for fields.

For example, suppose we modified our XML to not use the fully-qualified class name for the outer tag:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

We can convert this XML by creating aliases.

3.1. Class Aliases

We register aliases with the XStream instance either programmatically or using annotations. We can annotate our Customer class with @XStreamAlias:

@XStreamAlias("customer")
public class Customer {
    //...
}

Now we need to configure our XStream instance to use this annotation:

xstream.processAnnotations(Customer.class);

Alternatively, if we wish to configure an alias programmatically, we can use the code below:

xstream.alias("customer", Customer.class);

3.2. Field Aliases

Suppose we have the following XML:

<customer>
    <fn>John</fn>
    <lastName>Doe</lastName>
    <dob>1986-02-14 03:46:16.381 UTC</dob>
</customer>

The fn tag doesn’t match any fields in our Customer object, so we will need to define an alias for that field if we wish to deserialize it. We can achieve this using the following annotation:

@XStreamAlias("fn")
private String firstName;

Alternatively, we can accomplish the same goal programmatically:

xstream.aliasField("fn", Customer.class, "firstName");

4. Implicit Collections

Let’s say we have the following XML, containing a simple list of ContactDetails:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:20.541 UTC</dob>
    <ContactDetails>
        <mobile>6673543265</mobile>
        <landline>0124-2460311</landline>
    </ContactDetails>
    <ContactDetails>...</ContactDetails>
</customer>

We want to load the list of ContactDetails into a List<ContactDetails> field in our Java object. We can achieve this by using the following annotation:

@XStreamImplicit
private List<ContactDetails> contactDetailsList;

Alternatively, we can accomplish the same goal programmatically:

xstream.addImplicitCollection(Customer.class, "contactDetailsList");
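
For reference, a minimal ContactDetails class consistent with this XML might look like the following; the original article defines it elsewhere, so treat the exact shape as an assumption:

public class ContactDetails {

    private String mobile;
    private String landline;

    // standard setters and getters
}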

5. Ignore Fields

Let’s say we have following XML:

<customer>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
    <dob>1986-02-14 04:14:20.541 UTC</dob>
    <fullName>John Doe</fullName>
</customer>

In the XML above, we have an extra element, <fullName>, which does not exist in our Java Customer object.

If we try to deserialize the XML above without accounting for the extra element, the program throws an UnknownFieldException:

No such field com.baeldung.pojo.Customer.fullName

As the exception clearly states, XStream does not recognize the field fullName.

To overcome this problem we need to configure it to ignore unknown elements:

xstream.ignoreUnknownElements();

6. Attribute Fields

Suppose we have XML with attributes as part of elements that we’d like to deserialize as a field in our object. We will add a contactType attribute to our ContactDetails object:

<ContactDetails contactType="Office">
    <mobile>6673543265</mobile>
    <landline>0124-2460311</landline>
</ContactDetails>

If we want to deserialize the contactType XML attribute, we can use the @XStreamAsAttribute annotation on the field we’d like it to appear in:

@XStreamAsAttribute
private String contactType;

Alternatively, we can accomplish the same goal programmatically:

xstream.useAttributeFor(ContactDetails.class, "contactType");

7. Conclusion

In this article, we explored the options we have available when deserializing XML to Java objects using XStream.

The complete source code for this article can be downloaded from the linked GitHub repository.



File Upload with Spring MVC


1. Overview

In previous articles, we introduced the basics of form handling and explored the form tag library in Spring MVC.

In this article, we focus on what Spring offers for multipart (file upload) support in web applications.

Spring allows us to enable this multipart support with pluggable MultipartResolver objects. The framework provides one MultipartResolver implementation for use with Commons FileUpload and another for use with Servlet 3.0 multipart request parsing.

After configuring the MultipartResolver, we’ll see how to upload a single file and multiple files.

2. Commons FileUpload

To use CommonsMultipartResolver to handle the file upload, we need to add the following dependency:

<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.3.1</version>
</dependency>

Now we can define the CommonsMultipartResolver bean in our Spring configuration.

This MultipartResolver comes with a series of setter methods to define properties such as the maximum size for uploads:

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
    CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
    multipartResolver.setMaxUploadSize(100000);
    return multipartResolver;
}

3. With Servlet 3.0

In order to use Servlet 3.0 multipart parsing, we need to configure a couple of pieces of the application. First, we need to set a MultipartConfigElement in our DispatcherServlet registration:

public class MainWebAppInitializer implements WebApplicationInitializer {

    private String TMP_FOLDER = "/tmp"; 
    private int MAX_UPLOAD_SIZE = 5 * 1024 * 1024; 
    
    @Override
    public void onStartup(ServletContext sc) throws ServletException {
        
        ServletRegistration.Dynamic appServlet = sc.addServlet("mvc", new DispatcherServlet(
          new GenericWebApplicationContext()));

        appServlet.setLoadOnStartup(1);
        
        MultipartConfigElement multipartConfigElement = new MultipartConfigElement(TMP_FOLDER, 
          MAX_UPLOAD_SIZE, MAX_UPLOAD_SIZE * 2, MAX_UPLOAD_SIZE / 2);
        
        appServlet.setMultipartConfig(multipartConfigElement);
    }
}

In the MultipartConfigElement object, we have configured the storage location, maximum individual file size, maximum request size (in case of multiple files in a single request), and the size at which the file upload progress is flushed to the storage location.

These settings must be applied at the servlet registration level, as Servlet 3.0 does not allow them to be registered in the MultipartResolver as is the case with CommonsMultipartResolver.

Once this is done, we can add the StandardServletMultipartResolver to our Spring configuration:

@Bean
public StandardServletMultipartResolver multipartResolver() {
    return new StandardServletMultipartResolver();
}

4. Uploading a File

To upload our file, we can build a simple form in which we use an HTML input tag with type=’file’.

Regardless of the upload handling configuration we have chosen, we need to set the enctype attribute of the form to multipart/form-data. This lets the browser know how to encode the form:

<form:form method="POST" action="/spring-mvc-xml/uploadFile" enctype="multipart/form-data">
    <table>
        <tr>
            <td><form:label path="file">Select a file to upload</form:label></td>
            <td><input type="file" name="file" /></td>
        </tr>
        <tr>
            <td><input type="submit" value="Submit" /></td>
        </tr>
    </table>
</form:form>

To store the uploaded file we can use a MultipartFile variable. We can retrieve this variable from the request parameter inside our controller’s method:

@RequestMapping(value = "/uploadFile", method = RequestMethod.POST)
public String submit(@RequestParam("file") MultipartFile file, ModelMap modelMap) {
    modelMap.addAttribute("file", file);
    return "fileUploadView";
}
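
If we want to persist the upload rather than just display its metadata, MultipartFile can write its content to a destination directly. Here is a minimal sketch; the mapping value and target directory are assumptions:

@RequestMapping(value = "/saveFile", method = RequestMethod.POST)
public String save(@RequestParam("file") MultipartFile file, ModelMap modelMap) 
  throws IOException {
    // transferTo copies the uploaded bytes to the given destination file
    file.transferTo(new File("/tmp/" + file.getOriginalFilename()));
    modelMap.addAttribute("file", file);
    return "fileUploadView";
}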

The MultipartFile class provides access to details about the uploaded file, including file name, file type, and so on. We can use a simple HTML page to display this information:

<h2>Submitted File</h2>
<table>
    <tr>
        <td>OriginalFileName:</td>
        <td>${file.originalFilename}</td>
    </tr>
    <tr>
        <td>Type:</td>
        <td>${file.contentType}</td>
    </tr>
</table>

5. Uploading Multiple Files

To upload multiple files in a single request, we simply put multiple input file fields inside the form:

<form:form method="POST" action="/spring-mvc-java/uploadMultiFile" enctype="multipart/form-data">
    <table>
        <tr>
            <td>Select a file to upload</td>
            <td><input type="file" name="files" /></td>
        </tr>
        <tr>
            <td>Select a file to upload</td>
            <td><input type="file" name="files" /></td>
        </tr>
        <tr>
            <td>Select a file to upload</td>
            <td><input type="file" name="files" /></td>
        </tr>
        <tr>
            <td><input type="submit" value="Submit" /></td>
        </tr>
    </table>
</form:form>

We need to take care that each input field has the same name, so that it can be accessed as an array of MultipartFile:

@RequestMapping(value = "/uploadMultiFile", method = RequestMethod.POST)
public String submit(@RequestParam("files") MultipartFile[] files, ModelMap modelMap) {
    modelMap.addAttribute("files", files);
    return "fileUploadView";
}

Now, we can simply iterate over that array to display the files’ information:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
    <head>
        <title>Spring MVC File Upload</title>
    </head>
    <body>
        <h2>Submitted Files</h2>
	    <table>
            <c:forEach items="${files}" var="file">    
                <tr>
                    <td>OriginalFileName:</td>
                    <td>${file.originalFilename}</td>
                </tr>
                <tr>
                    <td>Type:</td>
                    <td>${file.contentType}</td>
                </tr>
            </c:forEach>
	    </table>
    </body>
</html>

6. Conclusion

In this article we looked at different ways to configure multipart support in Spring. Using these, we can support file uploads in our web applications.

The implementation of this tutorial can be found in a GitHub project. When the project runs locally, the form example can be accessed at: http://localhost:8080/spring-mvc-java/fileUpload



Entity Validation, Optimistic Locking, and Query Consistency in Spring Data Couchbase


1. Introduction

After our introduction to Spring Data Couchbase, in this second tutorial we focus on the support for entity validation (JSR-303), optimistic locking, and different levels of query consistency for a Couchbase document database.

2. Entity Validation

Spring Data Couchbase provides support for JSR-303 entity validation annotations. In order to take advantage of this feature, first we add the JSR-303 library to the dependencies section of our Maven project:

<dependency>
    <groupId>javax.validation</groupId>
    <artifactId>validation-api</artifactId>
    <version>1.1.0.Final</version>
</dependency>

Then we add an implementation of JSR-303. We will use the Hibernate implementation:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.2.4.Final</version>
</dependency>

Finally, we add a validator factory bean and corresponding Couchbase event listener to our Couchbase configuration:

@Bean
public LocalValidatorFactoryBean localValidatorFactoryBean() {
    return new LocalValidatorFactoryBean();
}

@Bean
public ValidatingCouchbaseEventListener validatingCouchbaseEventListener() {
    return new ValidatingCouchbaseEventListener(localValidatorFactoryBean());
}

The equivalent XML configuration looks like this:

<bean id="validator"
  class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean"/>

<bean id="validatingEventListener" 
  class="org.springframework.data.couchbase.core.mapping.event.ValidatingCouchbaseEventListener"/>

Now we add JSR-303 annotations to our entity classes. When a constraint violation is encountered during a persistence operation, the operation will fail, throwing a ConstraintViolationException.

Here is a sample of the constraints that we can enforce involving our Student entities:

@Field
@NotNull
@Size(min=1, max=20)
@Pattern(regexp="^[a-zA-Z .'-]+$")
private String firstName;

...
@Field
@Past
private DateTime dateOfBirth;
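
For instance, a test along these lines would be expected to fail fast; the Student setter and the studentRepository are assumptions based on the entity above:

@Test(expected = ConstraintViolationException.class)
public void whenFirstNameViolatesPattern_thenSaveFails() {
    Student student = new Student();
    student.setFirstName("John@Doe"); // the '@' character violates the @Pattern constraint
    studentRepository.save(student);
}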

3. Optimistic Locking

Spring Data Couchbase does not support multi-document transactions similar to those you can achieve in other Spring Data modules such as Spring Data JPA (via the @Transactional annotation), nor does it provide a rollback feature.

However it does support optimistic locking in much the same way as other Spring Data modules through the use of the @Version annotation:

@Version
private long version;

Under the covers, Couchbase uses what is known as a “compare and swap” (CAS) mechanism to achieve optimistic locking at the datastore level.

Each document in Couchbase has an associated CAS value that is modified automatically any time the document’s metadata or contents are altered. The use of the @Version annotation on a field causes that field to be populated with the current CAS value whenever a document is retrieved from Couchbase.

When you attempt to save the document back to Couchbase, this field is checked against the current CAS value in Couchbase. If the values do not match, the persistence operation will fail with an OptimisticLockingFailureException.

It is extremely important to note that you should never attempt to access or modify this field in your code.
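
To illustrate, a lost-update scenario might play out as follows; the studentRepository and setter names are assumptions:

// Both copies are loaded with the same CAS value in their @Version field
Student first = studentRepository.findOne(id);
Student second = studentRepository.findOne(id);

first.setFirstName("Jane");
studentRepository.save(first);   // succeeds; Couchbase assigns a new CAS value

second.setFirstName("Janet");
studentRepository.save(second);  // fails: the CAS value held here is now stale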

4. Query Consistency

When implementing a persistence layer over Couchbase, you have to consider the possibility of stale reads and writes. This is because when documents are inserted, updated, or deleted, it may take some time before the backing views and indexes are updated to reflect these changes.

And if you have a large dataset backed by a cluster of Couchbase nodes, this can become a significant problem, especially for an OLTP system.

Spring Data provides a robust level of consistency for some repository and template operations, plus a couple of options that let you determine the level of read and write consistency that is acceptable for your application.

4.1. Levels of Consistency

Spring Data allows you to specify various levels of query consistency and staleness for your application via the Consistency enum found in the org.springframework.data.couchbase.core.query package.

This enum defines the following levels of query consistency and staleness, from least to most strict:

  • EVENTUALLY_CONSISTENT
    • stale reads are allowed
    • indexes are updated according to Couchbase standard algorithm
  • UPDATE_AFTER
    • stale reads are allowed
    • indexes are updated after each request
  • DEFAULT_CONSISTENCY (same as READ_YOUR_OWN_WRITES)
  • READ_YOUR_OWN_WRITES
    • stale reads are not allowed
    • indexes are updated after each request
  • STRONGLY_CONSISTENT
    • stale reads are not allowed
    • indexes are updated after each statement

4.2. Default Behavior

Consider the case where you have documents that have been deleted from Couchbase, and the backing views and indexes have not been fully updated.

The CouchbaseRepository built-in method deleteAll() safely ignores documents that were found by the backing view but whose deletion is not yet reflected by the view.

Likewise, the CouchbaseTemplate built-in methods findByView and findBySpatialView offer a similar level of consistency by not returning documents that were initially found by the backing view but which have since been deleted.

For all other template methods, built-in repository methods, and derived repository query methods, according to the official Spring Data Couchbase 2.1.x documentation as of this writing, Spring Data uses a default consistency level of Consistency.READ_YOUR_OWN_WRITES.

It is worth noting that earlier versions of the library used a default of Consistency.UPDATE_AFTER.

Whichever version you are using, if you have any reservations about blindly accepting the default consistency level being provided, Spring offers two methods by which you can declaratively control the consistency level(s) being used, as the following subsections will describe.

4.3. Global Consistency Setting

If you are using Couchbase repositories and your application calls for a stronger level of consistency, or if it can tolerate a weaker level, then you may override the default consistency setting for all repositories by overriding the getDefaultConsistency() method in your Couchbase configuration.

Here is how you can override the global consistency level in your Couchbase configuration class:

@Override
public Consistency getDefaultConsistency() {
    return Consistency.STRONGLY_CONSISTENT;
}

Here is the equivalent XML configuration:

<couchbase:template consistency="STRONGLY_CONSISTENT"/>

Note that the price of stricter levels of consistency is increased latency at query time, so be sure to tailor this setting based on the needs of your application.

For example, a data warehouse or reporting application in which data is often appended or updated only in batch would be a good candidate for EVENTUALLY_CONSISTENT, whereas an OLTP application should probably tend towards the more strict levels such as READ_YOUR_OWN_WRITES or STRONGLY_CONSISTENT.

4.4. Custom Consistency Implementation

If you need more finely tuned consistency settings, you can override the default consistency level on a query-by-query basis by providing your own repository implementation for any queries whose consistency level you want to control independently, making use of the queryView and/or queryN1QL methods provided by CouchbaseTemplate.

Let’s implement a custom repository method called findByFirstNameStartsWith for our Student entity for which we do not want to allow stale reads.

First, create an interface containing the custom method declaration:

public interface CustomStudentRepository {
    List<Student> findByFirstNameStartsWith(String s);
}

Next, implement the interface, setting the Stale setting from the underlying Couchbase Java SDK to the desired level:

public class CustomStudentRepositoryImpl implements CustomStudentRepository {

    @Autowired
    private CouchbaseTemplate template;

    public List<Student> findByFirstNameStartsWith(String s) {
        return template.findByView(ViewQuery.from("student", "byFirstName")
          .startKey(s)
          .stale(Stale.FALSE),
          Student.class);
    }
}

Finally, by having your standard repository interface extend both the generic CrudRepository interface and your custom repository interface, clients will have access to all the built-in and derived methods of your standard repository interface, plus any custom methods you implemented in your custom repository class:

public interface StudentRepository extends CrudRepository<Student, String>,
  CustomStudentRepository {
    ...
}

5. Conclusion

In this tutorial, we showed how to implement JSR-303 entity validation and achieve optimistic locking capability when using the Spring Data Couchbase community project.

We also discussed the need for understanding query consistency in Couchbase, and we introduced the different levels of consistency provided by Spring Data Couchbase.

Finally, we explained the default consistency levels used by Spring Data Couchbase globally and for a few specific methods, and we demonstrated ways to override the global default consistency setting as well as how to override consistency settings on a query-by-query basis by providing your own custom repository implementations.

You can view the complete source code for this tutorial in the github project.

To learn more about Spring Data Couchbase, visit the official Spring Data Couchbase project site.

XStream User Guide: JSON


1. Overview

This is the third article in a series about XStream. If you want to learn about its basic use in converting Java objects to XML and vice versa, please refer to the previous articles.

Beyond its XML-handling capabilities, XStream can also convert Java objects to and from JSON. In this tutorial, we will learn about these features.

2. Prerequisites

Before reading this tutorial, please go through the first article in this series, in which we explain the basics of the library.

3. Dependencies

First, we add the XStream dependency to our POM:

<dependency>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream</artifactId>
    <version>1.4.5</version>
</dependency>

4. JSON Drivers

In the previous articles, we learned how to set up an XStream instance and to select an XML driver. Similarly, there are two drivers available to convert objects to and from JSON: JsonHierarchicalStreamDriver and JettisonMappedXmlDriver.

4.1. JsonHierarchicalStreamDriver

This driver class can serialize objects to JSON, but is not capable of deserializing back to objects. It does not require any extra dependencies; the driver class is self-contained.

4.2. JettisonMappedXmlDriver

This driver class is capable of converting JSON to and from objects. To use it, we need to add an extra dependency for Jettison:

<dependency>
    <groupId>org.codehaus.jettison</groupId>
    <artifactId>jettison</artifactId>
    <version>1.3.7</version>
</dependency>

5. Serializing an Object to JSON

Let’s create a Customer class:

public class Customer {

    private String firstName;
    private String lastName;
    private Date dob;
    private String age;
    private List<ContactDetails> contactDetailsList;
       
    // getters and setters
}

Note that we have (perhaps unexpectedly) created age as a String. We will explain this choice later.

5.1. Using JsonHierarchicalStreamDriver

We will pass a JsonHierarchicalStreamDriver to create an XStream instance.

xstream = new XStream(new JsonHierarchicalStreamDriver());
dataJson = xstream.toXML(customer);

This generates the following JSON:

{
  "com.baeldung.pojo.Customer": {
    "firstName": "John",
    "lastName": "Doe",
    "dob": "1986-02-14 16:22:18.186 UTC",
    "age": "30",
    "contactDetailsList": [
      {
        "mobile": "6673543265",
        "landline": "0124-2460311"
      },
      {
        "mobile": "4676543565",
        "landline": "0120-223312"
      }
    ]
  }
}

5.2. JettisonMappedXmlDriver Implementation

We will pass a JettisonMappedXmlDriver class to create an instance.

xstream = new XStream(new JettisonMappedXmlDriver());
dataJson = xstream.toXML(customer);

This generates the following JSON:

{
  "com.baeldung.pojo.Customer": {
    "firstName": "John",
    "lastName": "Doe",
    "dob": "1986-02-14 16:25:50.745 UTC",
    "age": 30,
    "contactDetailsList": [
      {
        "com.baeldung.pojo.ContactDetails": [
          {
            "mobile": 6673543265,
            "landline": "0124-2460311"
          },
          {
            "mobile": 4676543565,
            "landline": "0120-223312"
          }
        ]
      }
    ]
  }
}

5.3. Analysis

Based on the output from the two drivers, we can clearly see that there are some slight differences in the generated JSON. For example, JettisonMappedXmlDriver omits the double quotes for numeric values despite the data type being java.lang.String:

"mobile": 4676543565,
"age": 30,

JsonHierarchicalStreamDriver, on the other hand, retains the double quotes.

6. Deserializing JSON to an Object

Let’s take the following JSON to convert it back to a Customer object:

{
  "customer": {
    "firstName": "John",
    "lastName": "Doe",
    "dob": "1986-02-14 16:41:01.987 UTC",
    "age": 30,
    "contactDetailsList": [
      {
        "com.baeldung.pojo.ContactDetails": [
          {
            "mobile": 6673543265,
            "landline": "0124-2460311"
          },
          {
            "mobile": 4676543565,
            "landline": "0120-223312"
          }
        ]
      }
    ]
  }
}

Recall that only one of the drivers (JettisonMappedXmlDriver) can deserialize JSON. Attempting to use JsonHierarchicalStreamDriver for this purpose will result in an UnsupportedOperationException.

Using the Jettison driver, we can deserialize the Customer object:

customer = (Customer) xstream.fromXML(dataJson);
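
Note that since the JSON root element here is customer rather than the fully-qualified class name, we presumably also need to register a class alias before converting; here is a sketch based on the alias mechanism covered in the previous article:

XStream xstream = new XStream(new JettisonMappedXmlDriver());
xstream.alias("customer", Customer.class);

Customer customer = (Customer) xstream.fromXML(dataJson);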

7. Conclusion

In this article, we have covered the JSON handling capabilities of XStream, converting objects to and from JSON. We also looked at how we can tweak our JSON output, making it shorter, simpler, and more readable.

As with XStream’s XML processing, there are other ways we can further customize the way JSON is serialized by configuring the instance, using either annotations or programmatic configuration. For more details and examples, please refer to the first article in this series.

The complete source code with examples can be downloaded from the linked GitHub repository.



Java Web Weekly, Issue 120


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> How to Replace Rules in JUnit 5 [codeaffine.com]

I find deep-dives into the upcoming JUnit 5 very interesting.

But, if you’re using rules in JUnit 4 and know they’re going away in version 5 – you’ll find this one particularly useful.

>> Overriding Dependency Versions with Spring Boot [spring.io]

Gone are the days when you had to painstakingly lay out each Spring dependency and its version manually. There are now – and have been for a while – much easier ways to get your dependency tree in working order.

>> Hibernate 5: How to persist LocalDateTime & Co with Hibernate [thoughts-on-java.org]

I remember struggling with this a few years back – I’m glad Hibernate finally supports the new Date classes well.

>> Would We Still Criticise Checked Exceptions, If Java had a Better try-catch Syntax? [jooq.org]

As always, interesting ruminations on improving the Java syntax – this time with better try-catch syntax.

>> JUnit 5 – Extension Model [codefx.org]

Working with JUnit 5 is going to be fun, and extending it is going to be even more so.

Libraries (and IDEs) won’t have to hack around the API any more – which is bound to lead to some good things coming on top of the new JUnit.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> How to run database integration tests 20 times faster [vladmihalcea.com]

I haven’t seen a ram disk in a while 🙂

This writeup is practical and chock full of solid advice if you want to speed up your builds and don’t mind getting your hands a bit dirty with some low level tools.

>> Eric Evans — Tackling Complexity in the Heart of Software [dddeurope.com]

Yeah. Good talk.

Also worth reading:

3. Musings

>> Are Your Arguments Falsifiable? [daedtech.com]

A fun read in general, but particularly if you regularly put your work out there and get feedback on it.

>> How I’ve Avoided Burnout During More Than 3 Decades As A Programmer [thecodist.com]

Interesting advice from someone who’s been doing this for a whole lot longer than most of us.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Why does your agreeing sound like mocking? [dilbert.com]

>> And it’s free? [dilbert.com]

>> Pictures of people who were attacked by bears [dilbert.com]

5. Pick of the Week

Instead of picking something, this week I’m going to ask you a question:

Do you like the new Baeldung design?

Let me know in the comments – and have a great weekend.


Spring Expression Language Guide


1. Overview

The Spring Expression Language (SpEL) is a powerful expression language that supports querying and manipulating an object graph at runtime. It can be used with XML or annotation-based Spring configurations.

There are several operators available in the language:

  • Arithmetic: +, -, *, /, %, ^, div, mod
  • Relational: <, >, ==, !=, <=, >=, lt, gt, eq, ne, le, ge
  • Logical: and, or, not, &&, ||, !
  • Conditional: ?:
  • Regex: matches

2. Operators

For these examples, we will use annotation-based configuration. More details about XML configuration can be found in later sections of this article.

SpEL expressions begin with the # symbol, and are wrapped in braces: #{expression}. Properties can be referenced in a similar fashion, starting with a $ symbol, and wrapped in braces: ${property.name}. Property placeholders cannot contain SpEL expressions, but expressions can contain property references:

#{${someProperty} + 2}

In the example above, assuming someProperty has the value 2, the resulting expression is 2 + 2, which evaluates to 4.
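
Wired into a field, that combination might look like this; combined is an illustrative field name, and someProperty must be defined in a loaded properties file:

@Value("#{${someProperty} + 2}") // Will inject 4 when someProperty is 2
private Integer combined;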

2.1. Arithmetic Operators

All basic arithmetic operators are supported.

@Value("#{19 + 1}") // Will inject 20
private double add; 

@Value("#{'String1 ' + 'string2'}") // Will inject "String1 string2"
private String addString; 

@Value("#{20 - 1}") // Will inject 19
private double subtract;

@Value("#{10 * 2}") // Will inject 20
private double multiply;

@Value("#{36 / 2}") // Will inject 19
private double divide;

@Value("#{36 div 2}") // Will inject 18, the same as for / operator
private double divideAlphabetic; 

@Value("#{37 % 10}") // Will inject 7
private double modulo;

@Value("#{37 mod 10}") // Will inject 7, the same as for % operator
private double moduloAlphabetic; 

@Value("#{2 ^ 9}") // Will inject 512
private double powerOf;

@Value("#{(2 + 2) * 2 + 9}") // Will inject 17
private double brackets;

Divide and modulo operations have alphabetic aliases, div for / and mod for %. The + operator can also be used to concatenate strings.

2.2. Relational and Logical Operators

All basic relational and logical operations are also supported.

@Value("#{1 == 1}") // Will inject true
private boolean equal;

@Value("#{1 eq 1}") // Will inject true
private boolean equalAlphabetic;

@Value("#{1 != 1}") // Will inject false
private boolean notEqual;

@Value("#{1 ne 1}") // Will inject false
private boolean notEqualAlphabetic;

@Value("#{1 < 1}") // Will inject false
private boolean lessThan;

@Value("#{1 lt 1}") // Will inject false
private boolean lessThanAlphabetic;

@Value("#{1 <= 1}") // Will inject true
private boolean lessThanOrEqual;

@Value("#{1 le 1}") // Will inject true
private boolean lessThanOrEqualAlphabetic;

@Value("#{1 > 1}") // Will inject false
private boolean greaterThan;

@Value("#{1 gt 1}") // Will inject 7
private boolean greaterThanAlphabetic;

@Value("#{1 >= 1}") // Will inject true
private boolean greaterThanOrEqual;

@Value("#{1 ge 1}") // Will inject true
private boolean greaterThanOrEqualAlphabetic;

All relational operators have alphabetic aliases, as well. For example, in XML-based configs we can’t use operators containing angle brackets (<, <=, >, >=). Instead, we can use lt (less than), le (less than or equal), gt (greater than), or ge (greater than or equal).

2.3. Logical Operators

SpEL supports all basic logical operations:

@Value("#{250 > 200 && 200 < 4000}") // Will inject true
private boolean and; 

@Value("#{250 > 200 and 200 < 4000}") // Will inject true
private boolean andAlphabetic;

@Value("#{400 > 300 || 150 < 100}") // Will inject true
private boolean or;

@Value("#{400 > 300 or 150 < 100}") // Will inject true
private boolean orAlphabetic;

@Value("#{!true}") // Will inject false
private boolean not;

@Value("#{not true}") // Will inject false
private boolean notAlphabetic;

As with the arithmetic and relational operators, all logical operators also have alphabetic aliases.

2.4. Conditional Operators

Conditional operators are used to inject different values depending on some condition:

@Value("#{2 > 1 ? 'a' : 'b'}") // Will inject "b"
private String ternary;

The ternary operator is used for performing compact if-then-else conditional logic inside the expression. In this example we trying to check if there was true value or not.

Another common use for the ternary operator is to check whether some variable is null and then return the variable value or a default:

@Value("#{someBean.someProperty != null ? someBean.someProperty : 'default'}")
private String ternary;

The Elvis operator, borrowed from the Groovy language, is a shortened form of the ternary operator syntax for the case above. It is also available in SpEL. The code below is equivalent to the code above:

@Value("#{someBean.someProperty ?: 'default'}") // Will inject provided string if someProperty is null
private String elvis;

2.5. Using Regex in SpEL

The matches operator can be used to check whether or not a string matches a given regular expression.

@Value("#{'100' matches '\\d+' }") // Will inject true
private boolean validNumericStringResult;

@Value("#{'100fghdjf' matches '\\d+' }") // Will inject false
private boolean invalidNumericStringResult;

@Value("#{'valid alphabetic string' matches '[a-zA-Z\\s]+' }") // Will inject true
private boolean validAlphabeticStringResult;

@Value("#{'invalid alphabetic string #$1' matches '[a-zA-Z\\s]+' }") // Will inject false
private boolean invalidAlphabeticStringResult;

@Value("#{someBean.someValue matches '\d+'}") // Will inject true if someValue contains only digits
private boolean validNumericValue;

2.6. Accessing List and Map Objects

With the help of SpEL, we can access the contents of any Map or List in the context. We will create a new bean, workersHolder, that will store information about some workers and their salaries in a List and a Map:

@Component("workersHolder")
public class WorkersHolder {
    private List<String> workers = new LinkedList<>();
    private Map<String, Integer> salaryByWorkers = new HashMap<>();

    public WorkersHolder() {
        workers.add("John");
        workers.add("Susie");
        workers.add("Alex");
        workers.add("George");

        salaryByWorkers.put("John", 35000);
        salaryByWorkers.put("Susie", 47000);
        salaryByWorkers.put("Alex", 12000);
        salaryByWorkers.put("George", 14000);
    }

    //Getters and setters
}

Now we can access the values inside the collections using SpEL:

@Value("#{workersHolder.salaryByWorkers['John']}") // Will inject 35000
private Integer johnSalary;

@Value("#{workersHolder.salaryByWorkers['George']}") // Will inject 14000
private Integer georgeSalary;

@Value("#{workersHolder.salaryByWorkers['Susie']}") // Will inject 47000
private Integer susieSalary;

@Value("#{workersHolder.workers[0]}") // Will inject John
private String firstWorker;

@Value("#{workersHolder.workers[3]}") // Will inject George
private String lastWorker;

@Value("#{workersHolder.workers.size()}") // Will inject 4
private Integer numberOfWorkers;

3. Use in Spring Configuration

3.1. Referencing a Bean

In this example we will look at how to use SpEL in XML-based configuration. Expressions can be used to reference beans or bean fields/methods. For example, suppose we have the following classes:

public class Engine {
    private int capacity;
    private int horsePower;
    private int numberOfCylinders;

   // Getters and setters
}

public class Car {
    private String make;
    private int model;
    private Engine engine;
    private int horsePower;

   // Getters and setters
}

Now we create an application context in which expressions are used to inject values:

<bean id="engine" class="com.baeldung.spring.spel.Engine">
   <property name="capacity" value="3200"/>
   <property name="horsePower" value="250"/>
   <property name="numberOfCylinders" value="6"/>
</bean>
<bean id="someCar" class="com.baeldung.spring.spel.Car">
   <property name="make" value="Some make"/>
   <property name="model" value="Some model"/>
   <property name="engine" value="#{engine}"/>
   <property name="horsePower" value="#{engine.horsePower}"/>
</bean>

Take a look at the someCar bean. The engine and horsePower fields of someCar use expressions that are bean references to the engine bean and horsePower field respectively.

To do the same with annotation-based configurations, use the @Value(“#{expression}”) annotation.

3.2. Using Operators in Configuration

Each operator from the first section of this article can be used in XML and annotation-based configurations. However, remember that in XML-based configuration, we can’t use angle bracket operators. Instead, we should use their alphabetic aliases, such as lt (less than) or le (less than or equals). For annotation-based configurations, there are no such restrictions.

public class SpelOperators {
    private boolean equal;
    private boolean notEqual;
    private boolean greaterThanOrEqual;
    private boolean and;
    private boolean or;
    private String addString;

    // Getters and setters

    @Override
    public String toString() {
        // toString including all fields
    }
}

Now we will add a spelOperators bean to the application context:

<bean id="spelOperators" class="com.baeldung.spring.spel.SpelOperators">
   <property name="equal" value="#{1 == 1}"/>
   <property name="notEqual" value="#{1 lt 1}"/>
   <property name="greaterThanOrEqual" value="#{someCar.engine.numberOfCylinders >= 6}"/>
   <property name="and" value="#{someCar.horsePower == 250 and someCar.engine.capacity lt 4000}"/>
   <property name="or" value="#{someCar.horsePower > 300 or someCar.engine.capacity > 3000}"/>
   <property name="addString" value="#{someCar.model + ' manufactured by ' + someCar.make}"/>
</bean>

Retrieving that bean from the context, we can then verify that values were injected properly:

ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
SpelOperators spelOperators = (SpelOperators) context.getBean("spelOperators");

Here we can see the output of the toString method of the spelOperators bean:

[equal=true, notEqual=false, greaterThanOrEqual=true, and=true, 
or=true, addString=Some model manufactured by Some make]

4. Parsing Expressions Programmatically

At times, we may want to parse expressions outside the context of configuration. Fortunately, this is possible using SpelExpressionParser. We can use all of the operators from the previous examples, but without the braces and hash symbol. That is, an expression written as #{1 + 1} in Spring configuration is written simply as 1 + 1 when used outside of configuration.

In the following examples, we will use the Car and Engine beans defined in the previous section.

4.1. Using ExpressionParser

Let’s look at a simple example:

ExpressionParser expressionParser = new SpelExpressionParser();
Expression expression = expressionParser.parseExpression("'Any string'");
String result = (String) expression.getValue();

ExpressionParser is responsible for parsing expression strings. In this example, the SpEL parser will simply evaluate the string literal ‘Any string’ as an expression. Unsurprisingly, the result will be ‘Any string’.

As with using SpEL in configuration, we can use it to call methods, access properties, or call constructors.

Expression expression = expressionParser.parseExpression("'Any string'.length()");
Integer result = (Integer) expression.getValue();

Additionally, instead of directly operating on the literal, we could call the constructor:

Expression expression = expressionParser.parseExpression("new String('Any string').length()");

We can also access the bytes property of the String class in the same way, resulting in the byte[] representation of the string:

Expression expression = expressionParser.parseExpression("'Any string'.bytes");
byte[] result = (byte[]) expression.getValue();

We can chain method calls, just as in normal Java code:

Expression expression = expressionParser.parseExpression("'Any string'.replace(\" \", \"\").length()");
Integer result = (Integer) expression.getValue();

In this case, the result will be 9, because we have replaced the whitespace with an empty string. If we don’t wish to cast the expression result, we can use the generic method T getValue(Class<T> desiredResultType), in which we provide the desired class type to be returned. Note that an EvaluationException will be thrown if the returned value cannot be cast to desiredResultType:

Integer result = expression.getValue(Integer.class);

The most common usage is to provide an expression string that is evaluated against a specific object instance:

Car car = new Car();
car.setMake("Good manufacturer");
car.setModel("Model 3");
car.setYearOfProduction(2014);

ExpressionParser expressionParser = new SpelExpressionParser();
Expression expression = expressionParser.parseExpression("model");

EvaluationContext context = new StandardEvaluationContext(car);
String result = (String) expression.getValue(context);

In this case, the result will be equal to the value of the model field of the car object, “Model 3“. The StandardEvaluationContext class specifies which object the expression will be evaluated against. It cannot be changed after the context object is created. StandardEvaluationContext is expensive to construct, and during repeated usage it builds up cached state that enables subsequent expression evaluations to be performed more quickly. Because of this caching, it is good practice to reuse StandardEvaluationContext where possible, if the root object does not change.
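
A sketch of that reuse against the same car root object:

ExpressionParser expressionParser = new SpelExpressionParser();
EvaluationContext context = new StandardEvaluationContext(car);

// The same context instance serves several evaluations
String make = (String) expressionParser.parseExpression("make").getValue(context);
String model = (String) expressionParser.parseExpression("model").getValue(context);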

However, if the root object is changed repeatedly, we can use the mechanism shown in example below:

Expression expression = expressionParser.parseExpression("model");
String result = (String) expression.getValue(car);

Here, we call the getValue method with an argument that represents the object to which we want to apply a SpEL expression. We can also use the generic getValue method, just as before:

Expression expression = expressionParser.parseExpression("yearOfProduction > 2005");
boolean result = expression.getValue(car, Boolean.class);

4.2. Using ExpressionParser to Set a Value

Using the setValue method on the Expression object returned by parsing an expression, we can set values on objects. SpEL takes care of type conversion: by default, it uses org.springframework.core.convert.ConversionService, which is generics-aware and ships with converters between common types, and we can also register our own custom converters. Let’s take a look at how we can use setValue in practice:

Car car = new Car();
car.setMake("Good manufacturer");
car.setModel("Model 3");
car.setYearOfProduction(2014);

CarPark carPark = new CarPark();
carPark.getCars().add(car);

StandardEvaluationContext context = new StandardEvaluationContext(carPark);

ExpressionParser expressionParser = new SpelExpressionParser();
expressionParser.parseExpression("cars[0].model").setValue(context, "Other model");

The resulting car object will have a model of “Other model”, changed from “Model 3“.
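
If a custom conversion is required, a converter can be registered on the evaluation context. The following is a minimal, purely illustrative sketch (DefaultConversionService and StandardTypeConverter are Spring’s standard implementations, and this particular String-to-Integer conversion is in fact already built in):

DefaultConversionService conversionService = new DefaultConversionService();
// register a converter that turns a String into an Integer
conversionService.addConverter(String.class, Integer.class, Integer::valueOf);

StandardEvaluationContext context = new StandardEvaluationContext(car);
context.setTypeConverter(new StandardTypeConverter(conversionService));

ExpressionParser expressionParser = new SpelExpressionParser();
// the String "2020" is converted to an int before being set on the field
expressionParser.parseExpression("yearOfProduction").setValue(context, "2020");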

4.3. Parser Configuration

In this example, we will continue to use the CarPark class, already seen in the previous example:

public class CarPark {
    private List<Car> cars = new ArrayList<>();

    // Getter and setter
}

It is possible to configure the ExpressionParser by calling its constructor with a SpelParserConfiguration object. For example, if we try to add a car object into the cars collection of the CarPark class without configuring the parser, we will get an error like this:

EL1025E:(pos 4): The collection has '0' elements, index '0' is invalid

We can change the behavior of the parser to allow it to automatically create elements when the referenced value is null (autoGrowNullReferences, the first constructor parameter), and to automatically grow an array or list to accommodate elements beyond its initial size (autoGrowCollections, the second parameter):

SpelParserConfiguration config = new SpelParserConfiguration(true, true);
StandardEvaluationContext context = new StandardEvaluationContext(carPark);

ExpressionParser expressionParser = new SpelExpressionParser(config);
expressionParser.parseExpression("cars[0]").setValue(context, car);

Car result = carPark.getCars().get(0);

The resulting car object will be equal to the car object set as the first element of the cars collection of the carPark object from the previous example.

5. Conclusion

SpEL is a powerful, well-supported expression language that can be used across all the products in the Spring portfolio. It can be used to configure Spring applications or to write parsers to perform more general tasks in any application.

The code samples in this article are available in the linked GitHub repository.



An Intro to Spring HATEOAS


1. Overview

This article explains the process of creating a hypermedia-driven REST web service using the Spring HATEOAS project.

2. Spring-HATEOAS

The Spring HATEOAS project is a library of APIs that we can use to easily create REST representations that follow the principle of HATEOAS (Hypermedia as the Engine of Application State).

Generally speaking, the principle implies that the API should guide the client through the application by returning relevant information about the next potential steps, along with each response.

In this article we are going to build an example using Spring HATEOAS with the goal of decoupling the client and server, and theoretically allowing the API to change its URI scheme without breaking clients.

3. Preparation

First, let’s add the Spring HATEOAS dependency:

<dependency>
    <groupId>org.springframework.hateoas</groupId>
    <artifactId>spring-hateoas</artifactId>
    <version>0.19.0.RELEASE</version>
</dependency>

Next, we have the Customer resource without Spring HATEOAS support:

public class Customer {

    private String customerId;
    private String customerName;
    private String companyName;

    // standard getters and setters
}

And we have a controller class without Spring HATEOAS support:

@RestController
@RequestMapping(value = "/customers")
public class CustomerController {
    @Autowired
    private CustomerService customerService;

    @RequestMapping(value = "/{customerId}", method = RequestMethod.GET)
    public Customer getCustomerById(@PathVariable String customerId) {
        return customerService.getCustomerDetail(customerId);
    }
}

Finally, the customer resource representation:

{
    "customerId": "10A",
    "customerName": "Jane",
    "customerCompany": "ABC Company"
}

4. Adding HATEOAS Support

In a Spring HATEOAS project, we don’t need to look up the Servlet context or concatenate the path variable to the base URI ourselves. Instead, Spring HATEOAS offers three abstractions for creating the URI – ResourceSupport, Link, and ControllerLinkBuilder. These are used to create the metadata and associate it with the resource representation.

4.1. Adding Hypermedia Support to a Resource

The Spring HATEOAS project provides a base class called ResourceSupport to inherit from when creating a resource representation:

public class Customer extends ResourceSupport {
    private String customerId;
    private String customerName;
    private String companyName;
 
    // standard getters and setters
}

The Customer resource extends the ResourceSupport class to inherit its add() method. So, once we create a link, we can easily set that value on the resource representation without adding any new fields to it.

Spring HATEOAS provides a Link object to store the metadata (location or URI of the resource).

We’ll first create a simple link manually:

Link link = new Link("http://localhost:8080/spring-security-rest/api/customers/10A");

The Link object follows the Atom link syntax and consists of a rel attribute, which identifies the relation to the resource, and an href attribute, which is the actual link itself.

Here’s how the Customer resource looks now that it contains the new link:

{
    "links": [{
        "rel": "self",
        "href": "http://localhost:8080/spring-security-rest/api/customers/10A"
    }],
    "customerId": "10A",
    "customerName": "Jane",
    "customerCompany": "ABC Company"
}

The URI associated with the response is qualified as a self link. The semantics of the self relation are clear – it is simply the canonical location at which the Resource can be accessed.

Another very important abstraction offered by the library is the ControllerLinkBuilder – which simplifies building URIs by avoiding hard-coded links.

The following snippet shows building the customer self-link using the ControllerLinkBuilder class:

linkTo(CustomerController.class).slash(customer.getCustomerId()).withSelfRel();

Let’s have a look:

  • the linkTo() method inspects the controller class and obtains its root mapping
  • the slash() method adds the customerId value as the path variable of the link
  • finally, the withSelfRel() method qualifies the relation as a self-link
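
Putting this together, here is a minimal sketch of how the single-customer endpoint might attach the self link (assuming a static import of linkTo from ControllerLinkBuilder):

@RequestMapping(value = "/{customerId}", method = RequestMethod.GET)
public Customer getCustomerById(@PathVariable String customerId) {
    Customer customer = customerService.getCustomerDetail(customerId);
    // build the self link from the controller's root mapping and the customer id
    Link selfLink = linkTo(CustomerController.class)
      .slash(customer.getCustomerId()).withSelfRel();
    customer.add(selfLink);
    return customer;
}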

5. Relations

In the previous section, we showed a self-referencing relation. More complex systems may involve other relations as well.

For example, a customer can have a relationship to orders. The Order class will be modeled as a resource as well:

public class Order extends ResourceSupport {
    private String orderId;
    private double price;
    private int quantity;

    // standard getters and setters
}

At this point, the CustomerController can be extended with a method that returns all the orders of a particular customer:

@RequestMapping(value = "/{customerId}/orders", method = RequestMethod.GET)
public List<Order> getOrdersForCustomer(@PathVariable String customerId) {
    return orderService.getAllOrdersForCustomer(customerId);
}

An important thing to notice here is that the hyperlink for the customer orders depends on the mapping of the getOrdersForCustomer() method. We’ll refer to these types of links as method links and show how the ControllerLinkBuilder can assist in their creation.

The ControllerLinkBuilder offers rich support for Spring MVC controllers. The following example shows how to build HATEOAS hyperlinks based on the getOrdersForCustomer() method of the CustomerController class:

List<Order> methodLinkBuilder = 
  methodOn(CustomerController.class).getOrdersForCustomer(customer.getCustomerId());
Link ordersLink = linkTo(methodLinkBuilder).withRel("allOrders");

The methodOn() method obtains the method mapping by making a dummy invocation of the target method on the proxy controller, and sets the customerId as the path variable of the URI.

6. Spring HATEOAS in Action

Let’s put the self-link and method-link creation together in a getAllCustomers() method:

@RequestMapping(method = RequestMethod.GET)
public List<Customer> getAllCustomers() {
    List<Customer> allCustomers = customerService.allCustomers();
    for (Customer customer : allCustomers) {
        Link selfLink = linkTo(CustomerController.class).slash(customer.getCustomerId()).withSelfRel();
        customer.add(selfLink);
        
        if (orderService.getAllOrdersForCustomer(customer.getCustomerId()).size() > 0) {
            List<Order> methodLinkBuilder = 
              methodOn(CustomerController.class).getOrdersForCustomer(customer.getCustomerId());
            Link ordersLink = linkTo(methodLinkBuilder).withRel("allOrders");
            customer.add(ordersLink);
        }
    }
    return allCustomers;
}

Let’s invoke the getAllCustomers() method:

curl http://localhost:8080/spring-security-rest/api/customers

And examine the result:

[{
    "links": [{
        "rel": "self",
        "href": "http://localhost:8080/spring-security-rest/api/customers/10A"
    }, {
        "rel": "allOrders",
        "href": "http://localhost:8080/spring-security-rest/api/customers/10A/orders"
    }],
    "customerId": "10A",
    "customerName": "Jane",
    "companyName": "ABC Company"
}, {
    "links": [{
        "rel": "self",
        "href": "http://localhost:8080/spring-security-rest/api/customers/20B"
    }, {
        "rel": "allOrders",
        "href": "http://localhost:8080/spring-security-rest/api/customers/20B/orders"
    }],
    "customerId": "20B",
    "customerName": "Bob",
    "companyName": "XYZ Company"
}, {
    "links": [{
        "rel": "self",
        "href": "http://localhost:8080/spring-security-rest/api/customers/30C"
    }],
    "customerId": "30C",
    "customerName": "Tim",
    "companyName": "CKV Company"
}]

Within each resource representation, there is a self link and an allOrders link to extract all the orders of a customer. If a customer does not have orders, then the orders link will not appear.

This example demonstrates how Spring HATEOAS fosters API discoverability in a REST web service. If the link exists, the client can follow it and get all the orders for a customer:

curl http://localhost:8080/spring-security-rest/api/customers/10A/orders
[{
    "links": [{
        "rel": "self",
        "href": "http://localhost:8080/spring-security-rest/api/customers/10A/001A"
    }],
    "orderId": "001A",
    "price": 150.0,
    "quantity": 25
}, {
    "links": [{
        "rel": "self",
        "href": "http://localhost:8080/spring-security-rest/api/customers/10A/002A"
    }],
    "orderId": "002A",
    "price": 250.0,
    "quantity": 15
}]

7. Conclusion

In this tutorial, we have discussed how to build a hypermedia-driven Spring REST web service using the Spring HATEOAS project.

In the example, we see that a client can have a single entry point to the application, and further actions can be taken based on the metadata in the response representation. This allows the server to change its URI scheme without breaking clients. The application can also advertise new capabilities by putting new links or URIs in the representation.

The full implementation of this article can be found in the GitHub project – this is an Eclipse-based project, so it should be easy to import and run as it is.


Java Web Weekly, Issue 121


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Understanding Reactive types [spring.io]

Even more insight into reactive types and semantics, and of course into the upcoming Spring 5 work that’s happening behind the scenes.

>> String Compaction [javaspecialists.eu]

Interesting as always, this one is a low-level exploration of how the JVM deals with memory and Strings.

>> Testing improvements in Spring Boot 1.4 [spring.io]

Testing in a Spring Boot project is getting simpler and more streamlined – especially when it comes to mocking and handling of complex JSON.

>> The Parameterless Generic Method Antipattern [jooq.org]

A very interesting piece on how the Java compiler doesn’t always do the right thing when it comes to using generics.

>> Java EE vs Java SE: Has Oracle Given up on Enterprise Software? [takipi.com]

A well-researched and insightful writeup about the state of Java EE today.

>> Most popular Java EE servers: 2016 edition [plumbr.eu]

And continuing the Java EE thread, some real-world data about the popularity of existing Java EE servers.

>> Exercises in Kotlin: Part 1 – Getting Started [dhananjaynene.com]

>> Exercises in Kotlin: Part 2 – High level syntax and Variables [dhananjaynene.com]

>> Exercises in Kotlin: Part 3 – Functions [dhananjaynene.com]

If you’re curious about Kotlin – this looks like a great place to start.

I haven’t yet gone through the exercises myself, but they’re on my weekend todo list.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Ideal HTTP Performance [mnot.net]

We’re all working with HTTP one way or another, so it really doesn’t hurt to understand the protocol well. This is a great writeup to get us there.

>> Boost Your REST API with HTTP Caching [kennethlange.com]

A quick and practical intro to using caching headers with a REST API.

>> A Beginner’s Guide to Addressing Concurrency Issues [techblog.bozho.net]

Taking a step back before diving head first into a complex architecture problem is fantastic advice.

There is a time when analyzing the transactional semantics of your system and improving them is the right thing to do. And then there are all the other times when it just seems like it is.

Also worth reading:

3. Musings

>> Join me at GeeCON [code-cop.org]

GeeCON is going to be a blast, can’t wait to get there – if you’re coming, make sure you say hi.

>> A Taxonomy of Software Consultants [daedtech.com]

Getting some clarity around the terms we use when talking about the work we do and about ourselves is definitely a useful thing to spend some time on.

>> The powerful hacker culture [lemire.me]

The hacker culture and the drive to tinker, experiment and simply do – is one of the things I like most about our ecosystem, and probably one of the top reasons we’re all in it.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Feel as if you have a strategy [dilbert.com]

>> Must. not. cry. on. the. outside [dilbert.com]

>> I see what you’re doing [dilbert.com]

5. Pick of the Week

>> I’m a boring programmer (and proud of it) [m.signalvnoise.com]


Guide to the Fork/Join Framework in Java


1. Overview

The fork/join framework was introduced in Java 7. It provides tools to help speed up parallel processing by attempting to use all available processor cores – which is accomplished through a divide-and-conquer approach.

In practice, this means that the framework first “forks”, recursively breaking the task into smaller independent subtasks, until they are simple enough to be executed asynchronously.

After that, the “join” part begins, in which results of all subtasks are recursively joined into a single result, or in the case of a task which returns void, the program simply waits until every subtask is executed.

To provide effective parallel execution, the fork/join framework uses a pool of threads called the ForkJoinPool, which manages worker threads of type ForkJoinWorkerThread. 

2. ForkJoinPool

The ForkJoinPool is the heart of the framework. It is an implementation of the ExecutorService that manages worker threads and provides us with tools to get information about the thread pool state and performance.

Worker threads can execute only one task at a time, but the ForkJoinPool doesn’t create a separate thread for every single subtask. Instead, each thread in the pool has its own double-ended queue (or deque, pronounced “deck”) which stores tasks.

This architecture is vital for balancing the threads’ workload with the help of the work-stealing algorithm.

2.1. Work Stealing Algorithm

Simply put – free threads try to “steal” work from deques of busy threads.

By default, a worker thread gets tasks from the head of its own deque. When its own deque is empty, the thread takes a task from the tail of the deque of another busy thread or from the global entry queue, since this is where the biggest pieces of work are likely to be located.

This approach minimizes the possibility that threads will compete for tasks. It also reduces the number of times the thread will have to go looking for work, as it works on the biggest available chunks of work first.

2.2. ForkJoinPool Instantiation

In Java 8, the most convenient way to get access to an instance of the ForkJoinPool is to use its static method commonPool(). As its name suggests, this will provide a reference to the common pool, which is the default thread pool for every ForkJoinTask.

According to Oracle’s documentation, using the predefined common pool reduces resource consumption, since it discourages the creation of a separate thread pool per task:

ForkJoinPool commonPool = ForkJoinPool.commonPool();

The same behavior can be achieved in Java 7 by creating a ForkJoinPool and assigning it to a public static field of a utility class:

public static ForkJoinPool forkJoinPool = new ForkJoinPool(2);

Now it can be easily accessed:

ForkJoinPool forkJoinPool = PoolUtil.forkJoinPool;

With ForkJoinPool’s constructors, it is possible to create a custom thread pool with a specific level of parallelism, thread factory, and exception handler. In the example above, the pool has a parallelism level of 2, which means it will use two processor cores.
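
As a hedged sketch of the fully customized variant (the parallelism level and the printing handler are illustrative choices, not recommendations):

ForkJoinPool customPool = new ForkJoinPool(
  4,                                                   // parallelism level
  ForkJoinPool.defaultForkJoinWorkerThreadFactory,     // worker thread factory
  (thread, throwable) -> throwable.printStackTrace(),  // uncaught exception handler
  false);                                              // asyncMode: false keeps LIFO order for local tasks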

3. ForkJoinTask<V>

ForkJoinTask is the base type for tasks executed inside ForkJoinPool. In practice, one of its two subclasses should be extended: the RecursiveAction for void tasks and the RecursiveTask<V> for tasks that return a value. They both have an abstract method compute() in which the task’s logic is defined.

3.1. RecursiveAction – An Example

In the example below, the unit of work to be processed is represented by a String called workload. For demonstration purposes, the task is a nonsensical one: it simply uppercases its input and logs it.

To demonstrate the forking behavior of the framework, the example splits the task using the createSubtasks() method whenever workload.length() is larger than a specified threshold.

The String is recursively divided into substrings, creating CustomRecursiveAction instances which are based on these substrings.

As a result, the method returns a List<CustomRecursiveAction>.

The list is submitted to the ForkJoinPool using the invokeAll() method:

public class CustomRecursiveAction extends RecursiveAction {

    private String workload = "";
    private static final int THRESHOLD = 4;

    private static Logger logger = 
      Logger.getAnonymousLogger();

    public CustomRecursiveAction(String workload) {
        this.workload = workload;
    }

    @Override
    protected void compute() {
        if (workload.length() > THRESHOLD) {
            ForkJoinTask.invokeAll(createSubtasks());
        } else {
           processing(workload);
        }
    }

    private List<CustomRecursiveAction> createSubtasks() {
        List<CustomRecursiveAction> subtasks = new ArrayList<>();

        String partOne = workload.substring(0, workload.length() / 2);
        String partTwo = workload.substring(workload.length() / 2, workload.length());

        subtasks.add(new CustomRecursiveAction(partOne));
        subtasks.add(new CustomRecursiveAction(partTwo));

        return subtasks;
    }

    private void processing(String work) {
        String result = work.toUpperCase();
        logger.info("This result - (" + result + ") - was processed by " 
          + Thread.currentThread().getName());
    }
}

This pattern can be used to develop your own RecursiveAction classes. To do this, create an object which represents the total amount of work, choose a suitable threshold, define a method to divide the work, and define a method to do the work.

3.2. RecursiveTask<V>

For tasks that return a value, the logic here is similar, except that the result of each subtask is combined into a single result:

public class CustomRecursiveTask extends RecursiveTask<Integer> {
    private int[] arr;

    private static final int THRESHOLD = 20;

    public CustomRecursiveTask(int[] arr) {
        this.arr = arr;
    }

    @Override
    protected Integer compute() {
        if (arr.length > THRESHOLD) {
            return ForkJoinTask.invokeAll(createSubtasks())
              .stream()
              .mapToInt(ForkJoinTask::join)
              .sum();
        } else {
            return processing(arr);
        }
    }

    private Collection<CustomRecursiveTask> createSubtasks() {
        List<CustomRecursiveTask> dividedTasks = new ArrayList<>();
        dividedTasks.add(new CustomRecursiveTask(
          Arrays.copyOfRange(arr, 0, arr.length / 2)));
        dividedTasks.add(new CustomRecursiveTask(
          Arrays.copyOfRange(arr, arr.length / 2, arr.length)));
        return dividedTasks;
    }

    private Integer processing(int[] arr) {
        return Arrays.stream(arr)
          .filter(a -> a > 10 && a < 27)
          .map(a -> a * 10)
          .sum();
    }
}

In this example, the work is represented by an array stored in the arr field of the CustomRecursiveTask class. The createSubtasks() method recursively divides the task into smaller pieces of work until each piece is smaller than the threshold. Then, the invokeAll() method submits the subtasks to the common pool and returns a list of Futures.

To trigger execution, the join() method is called for each subtask.

In this example, this is accomplished using Java 8’s Stream API; the sum() method is used as a representation of combining subresults into the final result.

4. Submitting Tasks to the ForkJoinPool

To submit tasks to the thread pool, a few approaches can be used.

The submit() or execute() methods (their use cases are the same):

forkJoinPool.execute(customRecursiveTask);
int result = customRecursiveTask.join();

The invoke() method forks the task and waits for the result, and doesn’t need any manual joining:

int result = forkJoinPool.invoke(customRecursiveTask);

The invokeAll() method is the most convenient way to submit a sequence of ForkJoinTasks to the ForkJoinPool. It takes tasks as parameters (two tasks, varargs, or a collection), forks them, and returns a collection of Future objects in the order in which they were produced.

Alternatively, you can use the separate fork() and join() methods. The fork() method submits a task to the pool, but it doesn’t trigger its execution; the join() method is used for that purpose. In the case of RecursiveAction, join() returns nothing but null; for RecursiveTask<V>, it returns the result of the task’s execution:

customRecursiveTaskFirst.fork();
result = customRecursiveTaskLast.join();

In our RecursiveTask<V> example, we used the invokeAll() method to submit a sequence of subtasks to the pool. The same job can be done with fork() and join(), though this has consequences for the ordering of the results.

To avoid confusion, it is generally a good idea to use the invokeAll() method to submit more than one task to the ForkJoinPool.
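
For completeness, here is a minimal end-to-end sketch, assuming the CustomRecursiveTask class defined earlier, of submitting work to the common pool and collecting the result:

int[] numbers = new int[50];
Arrays.setAll(numbers, i -> i); // fill the array with 0..49

// invoke() forks the task and blocks until the joined result is ready
ForkJoinPool pool = ForkJoinPool.commonPool();
int result = pool.invoke(new CustomRecursiveTask(numbers));

System.out.println("Sum of processed elements: " + result);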

5. Conclusions

Using the fork/join framework can speed up the processing of large tasks, but to achieve this outcome, some guidelines should be followed:

  • Use as few thread pools as possible – in most cases the best decision is to use one thread pool per application or system
  • Use the default common thread pool, if no specific tuning is needed
  • Use a reasonable threshold for splitting a ForkJoinTask into subtasks
  • Avoid any blocking in your ForkJoinTasks

The examples used in this article are available in the linked GitHub repository.


Introduction to PowerMock


1. Overview

Unit testing with the help of a mocking framework has been recognized as a useful practice for a long time, and the Mockito framework, in particular, has dominated this market in recent years.

However, in order to facilitate decent code design and keep the public API simple, some desired features have been intentionally left out of Mockito. In some cases, these omissions force testers to write cumbersome code just to make the creation of mocks feasible.

This is where the PowerMock framework comes into play.

PowerMockito is PowerMock’s extension API to support Mockito. It provides capabilities to work with the Java Reflection API in a simple way to overcome Mockito’s limitations, such as the inability to mock final, static, or private methods.

This tutorial will give an introduction to the PowerMockito API and how it is applied in tests.

2. Preparing for Testing with PowerMockito

The first step to integrate PowerMock support for Mockito is to include the following two dependencies in the Maven POM file:

<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-module-junit4</artifactId>
    <version>1.6.4</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-api-mockito</artifactId>
    <version>1.6.4</version>
    <scope>test</scope>
</dependency>

Next, we need to prepare our test cases for working with PowerMockito by applying the following two annotations:

@RunWith(PowerMockRunner.class)
@PrepareForTest(fullyQualifiedNames = "com.baeldung.powermockito.introduction.*")

The fullyQualifiedNames element in the @PrepareForTest annotation represents an array of fully qualified names of types we want to mock. In this case, we use a package name with a wildcard to tell PowerMockito to prepare all types within the com.baeldung.powermockito.introduction package for mocking.
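
As a sketch of how these annotations fit together (the test class name here is just a placeholder), a prepared test class could look like this:

@RunWith(PowerMockRunner.class)
@PrepareForTest(fullyQualifiedNames = "com.baeldung.powermockito.introduction.*")
public class PowerMockitoIntroductionTest {
    // the tests shown in the following sections go here
}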

Now we are ready to exploit the power of PowerMockito.

3. Mocking Constructors and Final Methods

In this section, we will demonstrate how to get a mock instance instead of a real one when instantiating a class with the new operator, and then use that object to mock a final method. The collaborating class, whose constructors and final methods will be mocked, is defined as follows:

public class CollaboratorWithFinalMethods {
    public final String helloMethod() {
        return "Hello World!";
    }
}

First, we create a mock object using the PowerMockito API:

CollaboratorWithFinalMethods mock = mock(CollaboratorWithFinalMethods.class);

Next, we set an expectation stating that whenever the no-arg constructor of that class is invoked, a mock instance should be returned rather than a real one:

whenNew(CollaboratorWithFinalMethods.class).withNoArguments().thenReturn(mock);

Let’s see how this construction mocking works in action by instantiating the CollaboratorWithFinalMethods class using its default constructor, and then verifying the behavior of PowerMock:

CollaboratorWithFinalMethods collaborator = new CollaboratorWithFinalMethods();
verifyNew(CollaboratorWithFinalMethods.class).withNoArguments();

In the next step, an expectation is set to the final method:

when(collaborator.helloMethod()).thenReturn("Hello Baeldung!");

This method is then executed:

String welcome = collaborator.helloMethod();

The following assertions confirm that the helloMethod method has been called on the collaborator object, and that it returns the value set by the mocking expectation:

Mockito.verify(collaborator).helloMethod();
assertEquals("Hello Baeldung!", welcome);

If we want to mock a specific final method rather than all the final ones inside an object, the Mockito.spy(T object) method may come in handy. This is illustrated in section 5.

4. Mocking Static Methods

Suppose that we want to mock static methods of a class named CollaboratorWithStaticMethods. This class is declared as follows:

public class CollaboratorWithStaticMethods {
    public static String firstMethod(String name) {
        return "Hello " + name + " !";
    }

    public static String secondMethod() {
        return "Hello no one!";
    }

    public static String thirdMethod() {
        return "Hello no one again!";
    }
}

In order to mock these static methods, we need to register the enclosing class with the PowerMockito API:

mockStatic(CollaboratorWithStaticMethods.class);

Alternatively, we may use the Mockito.spy(Class<T> class) method to mock a specific one as demonstrated in the following section.

Next, expectations can be set to define the values the methods should return when invoked:

when(CollaboratorWithStaticMethods.firstMethod(Mockito.anyString()))
  .thenReturn("Hello Baeldung!");
when(CollaboratorWithStaticMethods.secondMethod()).thenReturn("Nothing special");

Alternatively, an exception may be set to be thrown when calling the thirdMethod method:

doThrow(new RuntimeException()).when(CollaboratorWithStaticMethods.class);
CollaboratorWithStaticMethods.thirdMethod();

Now, it is time to execute the first two methods:

String firstWelcome = CollaboratorWithStaticMethods.firstMethod("Whoever");
String secondWelcome = CollaboratorWithStaticMethods.firstMethod("Whatever");

Instead of calling members of the real class, the above invocations are delegated to the mock’s methods. The following assertions prove that the mock has come into effect:

assertEquals("Hello Baeldung!", firstWelcome);
assertEquals("Hello Baeldung!", secondWelcome);

We are also able to verify behaviors of the mock’s methods, including how many times a method is invoked. In this case, firstMethod has been called twice, while secondMethod has never been called:

verifyStatic(Mockito.times(2));
CollaboratorWithStaticMethods.firstMethod(Mockito.anyString());
        
verifyStatic(Mockito.never());
CollaboratorWithStaticMethods.secondMethod();

Note: The verifyStatic method must be called right before the static method being verified, so that PowerMockito knows the subsequent method invocation is the one that needs to be verified.

Lastly, the static thirdMethod method should throw a RuntimeException as declared on the mock before. It is validated by the expected element of the @Test annotation:

@Test(expected = RuntimeException.class)
public void givenStaticMethods_whenUsingPowerMockito_thenCorrect() {
    // other methods   
       
    CollaboratorWithStaticMethods.thirdMethod();
}

5. Partial Mocking

Instead of mocking an entire class, the PowerMockito API allows for mocking part of it using the spy method. The following class will be used as the collaborator to illustrate the PowerMock support for partial mocking:

public class CollaboratorForPartialMocking {
    public static String staticMethod() {
        return "Hello Baeldung!";
    }

    public final String finalMethod() {
        return "Hello Baeldung!";
    }

    private String privateMethod() {
        return "Hello Baeldung!";
    }

    public String privateMethodCaller() {
        return privateMethod() + " Welcome to the Java world.";
    }
}

Let’s begin by mocking the static method, named staticMethod in the class definition above. First, we use the PowerMockito API to partially mock the CollaboratorForPartialMocking class and set an expectation for its static method:

spy(CollaboratorForPartialMocking.class);
when(CollaboratorForPartialMocking.staticMethod()).thenReturn("I am a static mock method.");

The static method is then executed:

returnValue = CollaboratorForPartialMocking.staticMethod();

The mocking behavior is verified as follows:

verifyStatic();
CollaboratorForPartialMocking.staticMethod();

The following assertion confirms that the mock method has actually been called by comparing the return value against the expectation:

assertEquals("I am a static mock method.", returnValue);

Now it is time to move on to the final and private methods. In order to illustrate the partial mocking of these methods, we need to instantiate the class and tell the PowerMockito API to spy on it:

CollaboratorForPartialMocking collaborator = new CollaboratorForPartialMocking();
CollaboratorForPartialMocking mock = spy(collaborator);

The objects created above are used to demonstrate the mocking of both the final and private methods. We will deal with the final method now by setting an expectation and invoking the method:

when(mock.finalMethod()).thenReturn("I am a final mock method.");
returnValue = mock.finalMethod();

The partial mocking of that method is then verified:

Mockito.verify(mock).finalMethod();

A test verifies that calling the finalMethod method will return a value that matches the expectation:

assertEquals("I am a final mock method.", returnValue);

A similar process applies to the private method. The main difference is that we cannot directly invoke this method from the test case; a private method can only be called by other methods of the same class. In the CollaboratorForPartialMocking class, the privateMethod method is invoked by the privateMethodCaller method, so we will use the latter as a delegate. Let’s start with the expectation and the invocation:

when(mock, "privateMethod").thenReturn("I am a private mock method.");
returnValue = mock.privateMethodCaller();

The mocking of the private method is confirmed:

verifyPrivate(mock).invoke("privateMethod");

The following test makes sure that the return value from invocation of the private method is the same as the expectation:

assertEquals("I am a private mock method. Welcome to the Java world.", returnValue);

6. Conclusion

This tutorial has provided an introduction to the PowerMockito API, demonstrating its use in solving some of the problems developers encounter when using the Mockito framework. The implementation of these examples and code snippets can be found in the linked GitHub project.


Java Web Weekly, Issue 122


At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Event Sourcing in Microservices Using Spring Cloud and Reactor [kennybastani.com]

All the cool kids are doing microservices – and most are left with a complex and hard-to-manage mess on their hands.

But there are ways to avoid that – and Event Sourcing is one of the best ways I found to get there.

>> How to verify equality without equals method [lkrnac.net]

A cool deep-dive into testing the implementation of the equals method using reflection.

>> Exploring CQRS with Axon Framework: Closing thoughts [geekabyte.blogspot.com]

The end of a long running series I’ve been following closely, all about one of my favorite topics – Event Sourcing and CQRS.

CQRS is definitely not the holy grail, but it sure comes close in some scenarios 🙂

>> How to join unrelated entities with JPA and Hibernate [thoughts-on-java.org]

A cool addition to Hibernate I wasn’t aware of.

>> Java EE 8 MVC: Global exception handling [mscharhag.com]

A very quick and to the point intro to handling exceptions if you’re doing work with Java EE.

>> Save Time by Writing Less Test Code [petrikainulainen.net]

Some initial details about a course I’m really excited about (check out this week’s pick for more on that).

 

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Why You Should Do Periodic Reviews of Legacy Code [daedtech.com]

Solid advice for keeping your less-visited code from rotting and getting out of sync with the parts of the system you’re actively working on.

>> Evaluating Delusional Startups [zachholman.com]

A funny read if you’re out of that game, and hopefully a helpful one if you’re not.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Bury something in the woods [dilbert.com]

>> Can’t you find meaning in your personal life? [dilbert.com]

>> That could work [dilbert.com]

4. Pick of the Week

Almost a year ago, when I started to work on my first course, I wrote that we have so few solid courses in our ecosystem. I know from experience that it takes a long time to put together a good, high-quality course – around 6 months of ongoing work – which explains why there are so few of them out there.

 

That’s slowly changing – Petri’s newly announced testing course is definitely going to be reference material:

>> TEST WITH SPRING

 

The packages have been at 50% off all week (ending today) – so if you’re into testing, pick this one up. If you’re not really into testing, then definitely pick this one up.

I’m excited about this one, not just because it’s about testing, but also about Spring (which is very cool).

