
Decode an OkHttp JSON Response


1. Introduction

In this article, we’ll see several techniques for decoding a JSON response while using OkHttp.

2. OkHttp Response

OkHttp is an HTTP client for Java and Android with features like transparent handling of GZIP, response caching, and recovery from network problems.

In spite of these great features, OkHttp doesn’t have a built-in encoder/decoder for JSON, XML, and other content types. However, we can implement these with the help of XML/JSON binding libraries, or we can use high-level libraries like Feign or Retrofit.

To implement our JSON decoder, we need to extract the JSON from the result of the service call. For this, we can access the body via the body() method of the Response object. The ResponseBody class has several options for extracting this data:

  • byteStream(): exposes the raw bytes of the body as an InputStream; we can use this for all formats, but usually it is used for binaries and files
  • charStream(): when we have a text response, charStream() wraps its InputStream in a Reader and handles encoding according to the response's content type, or UTF-8 if a charset isn't set in the response header; however, when using charStream(), we can't change the Reader's encoding
  • string(): returns the whole response body as a String; manages the encoding the same as charStream(), but if we need a different encoding, we can use source().readString(charset) instead, as shown in the sketch below
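For instance, here's a minimal sketch of that last option, reading an already-executed response with an explicit charset via the underlying Okio source (client and request are the ones we build below; the charset is just an example):

ResponseBody body = client.newCall(request).execute().body();
// readString comes from Okio's BufferedSource, exposed by ResponseBody.source()
String json = body.source().readString(StandardCharsets.ISO_8859_1);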

In this article, we’re going to use string() since our response is small and we don’t have memory or performance concerns. The byteStream() and charStream() methods are better choices in production systems when performance and memory matter.

To start, let's add okhttp to our pom.xml file:

<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>okhttp</artifactId> 
    <version>3.14.2</version> 
</dependency>

And then, we model the SimpleEntity to test our decoders:

public class SimpleEntity {
    protected String name;

    public SimpleEntity(String name) {
        this.name = name;
    }
    
    // no-arg constructor, getters, and setters
}

Now, we’re going to initiate our test:

SimpleEntity sampleResponse = new SimpleEntity("Baeldung");

OkHttpClient client = new OkHttpClient(); // a default client is enough here
MockWebServer server = new MockWebServer();
Request request = new Request.Builder().url(server.url("...")).build();

3. Decode the ResponseBody with Jackson

Jackson is one of the most popular libraries for JSON-Object binding.

Let’s add jackson-databind to our pom.xml:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.9</version>
</dependency>

Jackson’s ObjectMapper lets us convert JSON to an object. Thus, we can decode the response using ObjectMapper.readValue():

ObjectMapper objectMapper = new ObjectMapper(); 
ResponseBody responseBody = client.newCall(request).execute().body(); 
SimpleEntity entity = objectMapper.readValue(responseBody.string(), SimpleEntity.class);

Assert.assertNotNull(entity);
Assert.assertEquals(sampleResponse.getName(), entity.getName());
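As an aside, if a hypothetical endpoint returned a JSON array rather than a single object, Jackson's TypeReference would let us keep the generic element type:

// TypeReference comes from com.fasterxml.jackson.core.type
List<SimpleEntity> entities = objectMapper.readValue(
  responseBody.string(), new TypeReference<List<SimpleEntity>>() {});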

4. Decode the ResponseBody with Gson

Gson is another useful library for mapping JSON to Objects and vice versa.

Let’s add gson to our pom.xml file:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.5</version>
</dependency>

Let’s see how we can use Gson.fromJson() to decode the response body:

Gson gson = new Gson(); 
ResponseBody responseBody = client.newCall(request).execute().body();
SimpleEntity entity = gson.fromJson(responseBody.string(), SimpleEntity.class);

Assert.assertNotNull(entity);
Assert.assertEquals(sampleResponse.getName(), entity.getName());
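Similarly, for a hypothetical JSON array response, Gson captures the generic type with a TypeToken:

// TypeToken is from com.google.gson.reflect; Type is java.lang.reflect.Type
Type listType = new TypeToken<List<SimpleEntity>>() {}.getType();
List<SimpleEntity> entities = gson.fromJson(responseBody.string(), listType);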

5. Conclusion

In this article, we’ve explored several ways to decode the JSON response of OkHttp with Jackson and Gson.

The complete sample is available on GitHub.


Spring Session with MongoDB


1. Overview

In this quick tutorial, we'll explore how to use Spring Session backed by MongoDB, both with and without Spring Boot.

Spring Session can also be backed with other stores such as Redis and JDBC.

2. Spring Boot Configuration

First, let’s look at the dependencies and the configuration required for Spring Boot. To start with, let’s add the latest versions of spring-session-data-mongodb and spring-boot-starter-data-mongodb to our project:

<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-data-mongodb</artifactId>
    <version>2.1.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
    <version>2.1.8.RELEASE</version>
</dependency>

After that, to enable Spring Boot auto-configuration, we’ll need to add the Spring Session store-type as mongodb in the application.properties:

spring.session.store-type=mongodb

3. Spring Configuration Without Spring Boot

Now, let’s take a look at the dependencies and the configuration required to store the Spring session in MongoDB without Spring Boot.

Similar to the Spring Boot configuration, we’ll need the spring-session-data-mongodb dependency. However, here we’ll use the spring-data-mongodb dependency to access our MongoDB database:

<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-data-mongodb</artifactId>
    <version>2.1.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-mongodb</artifactId>
    <version>2.1.8.RELEASE</version>
</dependency>

Finally, let’s see how to configure the application:

@EnableMongoHttpSession
public class HttpSessionConfig {

    @Bean
    public JdkMongoSessionConverter jdkMongoSessionConverter() {
        return new JdkMongoSessionConverter(Duration.ofMinutes(30));
    }
}

The @EnableMongoHttpSession annotation enables the configuration required to store the session data in MongoDB.

Also, note that the JdkMongoSessionConverter is responsible for serializing and deserializing the session data.

4. Example Application

Let’s create an application to test the configurations. We’ll be using Spring Boot, as it’s faster and requires less configuration.

We’ll begin by creating the controller to handle requests:

@RestController
public class SpringSessionMongoDBController {

    @GetMapping("/")
    public ResponseEntity<Integer> count(HttpSession session) {

        Integer counter = (Integer) session.getAttribute("count");

        if (counter == null) {
            counter = 1;
        } else {
            counter++;
        }

        session.setAttribute("count", counter);

        return ResponseEntity.ok(counter);
    }
}

As we can see in this example, we're incrementing the counter on every hit to the endpoint and storing its value in a session attribute named count.

5. Testing the Application

Let’s test the application to see if we’re actually able to store the session data in MongoDB.

To do so, we’ll access the endpoint and inspect the cookie that we’ll receive. This will contain a session id. After that, we’ll query the MongoDB collection to fetch the session data using the session id:

@Test
public void givenEndpointIsCalledTwiceAndResponseIsReturned_whenMongoDBIsQueriedForCount_thenCountMustBeSame() {
    HttpEntity<String> response = restTemplate
      .exchange("http://localhost:" + 8080, HttpMethod.GET, null, String.class);
    HttpHeaders headers = response.getHeaders();
    String setCookie = headers.getFirst(HttpHeaders.SET_COOKIE);

    Assert.assertEquals(response.getBody(),
      repository.findById(getSessionId(setCookie)).getAttribute("count").toString());
}

private String getSessionId(String cookie) {
    return new String(Base64.getDecoder().decode(cookie.split(";")[0].split("=")[1]));
}

6. How Does it Work?

Let's take a look at what goes on behind the scenes in Spring Session.

The SessionRepositoryFilter is responsible for most of the work:

  • converts the HttpSession into a MongoSession
  • checks if there’s a Cookie present, and if so, loads the session data from the store
  • saves the updated session data in the store
  • checks the validity of the session

Also, the SessionRepositoryFilter creates a cookie with the name SESSION that is HttpOnly and secure. This cookie contains the session id, which is a Base64-encoded value.

To customize the cookie name or properties, we’ll have to create a Spring bean of type DefaultCookieSerializer.

For instance, here we're disabling the HttpOnly flag of the cookie:

@Bean
public DefaultCookieSerializer customCookieSerializer() {
    DefaultCookieSerializer cookieSerializer = new DefaultCookieSerializer();
        
    cookieSerializer.setUseHttpOnlyCookie(false);
        
    return cookieSerializer;
}

7. Session Details Stored in MongoDB

Let’s query our session collection using the following command in our MongoDB console:

db.sessions.findOne()

As a result, we’ll get a BSON document similar to:

{
    "_id" : "5d985be4-217c-472c-ae02-d6fca454662b",
    "created" : ISODate("2019-05-14T16:45:41.021Z"),
    "accessed" : ISODate("2019-05-14T17:18:59.118Z"),
    "interval" : "PT30M",
    "principal" : null,
    "expireAt" : ISODate("2019-05-14T17:48:59.118Z"),
    "attr" : BinData(0,"rO0ABXNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAFY291bnRzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAC3g=")
}

The _id is a UUID that will be Base64-encoded by the DefaultCookieSerializer and set as a value in the SESSION cookie. Also, note that the attr attribute contains the actual value of our counter.
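To illustrate the relationship, here's a quick sketch using the _id from the document above; encoding it produces exactly the value carried by the SESSION cookie:

// java.util.Base64, mirroring what getSessionId() does in our test
String id = "5d985be4-217c-472c-ae02-d6fca454662b";
String cookieValue = Base64.getEncoder().encodeToString(id.getBytes());
// decoding the SESSION cookie value recovers the MongoDB _id
assertEquals(id, new String(Base64.getDecoder().decode(cookieValue)));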

8. Conclusion

In this tutorial, we’ve explored Spring Session backed with MongoDB — a powerful tool for managing HTTP sessions in a distributed system. With this purpose in mind, it can be very useful in solving the problem of replicating sessions across multiple instances of the application.

As usual, the source code is available over on GitHub.

Working with XML in Groovy


1. Introduction

Groovy provides a substantial number of methods dedicated to traversing and manipulating XML content.

In this tutorial, we’ll demonstrate how to add, edit, or delete elements from XML in Groovy using various approaches. We’ll also show how to create an XML structure from scratch.

2. Defining the Model

Let’s define an XML structure in our resources directory that we’ll use throughout our examples:

<articles>
    <article>
        <title>First steps in Java</title>
        <author id="1">
            <firstname>Siena</firstname>
            <lastname>Kerr</lastname>
        </author>
        <release-date>2018-12-01</release-date>
    </article>
    <article>
        <title>Dockerize your SpringBoot application</title>
        <author id="2">
            <firstname>Jonas</firstname>
            <lastname>Lugo</lastname>
        </author>
        <release-date>2018-12-01</release-date>
    </article>
    <article>
        <title>SpringBoot tutorial</title>
        <author id="3">
            <firstname>Daniele</firstname>
            <lastname>Ferguson</lastname>
        </author>
        <release-date>2018-06-12</release-date>
    </article>
    <article>
        <title>Java 12 insights</title>
        <author id="1">
            <firstname>Siena</firstname>
            <lastname>Kerr</lastname>
        </author>
        <release-date>2018-07-22</release-date>
    </article>
</articles>

And read it into an InputStream variable:

def xmlFile = getClass().getResourceAsStream("articles.xml")

3. XmlParser

Let’s start exploring this stream with the XmlParser class.

3.1. Reading

Reading and parsing an XML file is probably the most common XML operation a developer will have to do. The XmlParser provides a very straightforward interface meant for exactly that:

def articles = new XmlParser().parse(xmlFile)

At this point, we can access attributes and values of the XML structure using GPath expressions.

Let’s now implement a simple test using Spock to check whether our articles object is correct:

def "Should read XML file properly"() {
    given: "XML file"

    when: "Using XmlParser to read file"
    def articles = new XmlParser().parse(xmlFile)

    then: "Xml is loaded properly"
    articles.'*'.size() == 4
    articles.article[0].author.firstname.text() == "Siena"
    articles.article[2].'release-date'.text() == "2018-06-12"
    articles.article[3].title.text() == "Java 12 insights"
    articles.article.find { it.author.'@id'.text() == "3" }.author.firstname.text() == "Daniele"
}

To understand how to access XML values and how to use the GPath expressions, let’s focus for a moment on the internal structure of the result of the XmlParser#parse operation.

The articles object is an instance of groovy.util.Node. Every Node consists of a name, attributes map, value, and parent (which can be either null or another Node).

In our case, the value of articles is a groovy.util.NodeList instance, which is a wrapper class for a collection of Nodes. The NodeList extends the java.util.ArrayList class, which provides extraction of elements by index. To obtain a string value of a Node, we use groovy.util.Node#text().

In the above example, we introduced a few GPath expressions:

  • articles.article[0].author.firstname — get the author's first name for the first article; articles.article[n] would directly access the nth article
  • '*' — get a list of articles' children; it's the equivalent of groovy.util.Node#children()
  • author.'@id' — get the author element's id attribute; author.'@attributeName' accesses the attribute value by its name (the equivalents are author['@id'] and author.@id)

3.2. Adding a Node

Similar to the previous example, let’s read the XML content into a variable first. This will allow us to define a new node and add it to our articles list using groovy.util.Node#append.

Let’s now implement a test which proves our point:

def "Should add node to existing xml using NodeBuilder"() {
    given: "XML object"
    def articles = new XmlParser().parse(xmlFile)

    when: "Adding node to xml"
    def articleNode = new NodeBuilder().article(id: '5') {
        title('Traversing XML in the nutshell')
        author {
            firstname('Martin')
            lastname('Schmidt')
        }
        'release-date'('2019-05-18')
    }
    articles.append(articleNode)

    then: "Node is added to xml properly"
    articles.'*'.size() == 5
    articles.article[4].title.text() == "Traversing XML in the nutshell"
}

As we can see in the above example, the process is pretty straightforward.

Let’s also notice that we used groovy.util.NodeBuilder, which is a neat alternative to using the Node constructor for our Node definition.

3.3. Modifying a Node

We can also modify the values of nodes using the XmlParser. To do so, let’s once again parse the content of the XML file. Next, we can edit the content node by changing the value field of the Node object.

Let's remember that when we use GPath expressions with XmlParser, we always retrieve a NodeList instance, so to modify the first (and only) element, we have to access it by its index.

Let’s check our assumptions by writing a quick test:

def "Should modify node"() {
    given: "XML object"
    def articles = new XmlParser().parse(xmlFile)

    when: "Changing value of one of the nodes"
    articles.article.each { it.'release-date'[0].value = "2019-05-18" }

    then: "XML is updated"
    articles.article.findAll { it.'release-date'.text() != "2019-05-18" }.isEmpty()
}

In the above example, we’ve also used the Groovy Collections API to traverse the NodeList.

3.4. Replacing a Node

Next, let’s see how to replace the whole node instead of just modifying one of its values.

Similarly to adding a new element, we’ll use the NodeBuilder for the Node definition and then replace one of the existing nodes within it using groovy.util.Node#replaceNode:

def "Should replace node"() {
    given: "XML object"
    def articles = new XmlParser().parse(xmlFile)

    when: "Adding node to xml"
    def articleNode = new NodeBuilder().article(id: '5') {
        title('Traversing XML in the nutshell')
        author {
            firstname('Martin')
            lastname('Schmidt')
        }
        'release-date'('2019-05-18')
    }
    articles.article[0].replaceNode(articleNode)

    then: "Node is added to xml properly"
    articles.'*'.size() == 4
    articles.article[0].title.text() == "Traversing XML in the nutshell"
}

3.5. Deleting a Node

Deleting a node using the XmlParser is quite tricky. Although the Node class provides the remove(Node child) method, in most cases, we wouldn’t use it by itself.

Instead, we’ll show how to delete a node whose value fulfills a given condition.

By default, accessing nested elements through a chain of Node/NodeList references returns a copy of the corresponding child nodes. Because of that, we can't use the NodeList#removeAll method directly on our article collection.

To delete a node by a predicate, we have to find all the nodes matching our condition first, and then iterate through them and invoke the groovy.util.Node#remove method on the parent each time.

Let’s implement a test that removes all articles whose author has an id other than 3:

def "Should remove article from xml"() {
    given: "XML object"
    def articles = new XmlParser().parse(xmlFile)

    when: "Removing all articles but the ones with id==3"
    articles.article
      .findAll { it.author.'@id'.text() != "3" }
      .each { articles.remove(it) }

    then: "There is only one article left"
    articles.children().size() == 1
    articles.article[0].author.'@id'.text() == "3"
}

As we can see, as a result of our remove operation, we received an XML structure with only one article, and its id is 3.

4. XmlSlurper

Groovy also provides another class dedicated to working with XML. In this section, we’ll show how to read and manipulate the XML structure using the XmlSlurper.

4.1. Reading

As in our previous examples, let’s start with parsing the XML structure from a file:

def "Should read XML file properly"() {
    given: "XML file"

    when: "Using XmlSlurper to read file"
    def articles = new XmlSlurper().parse(xmlFile)

    then: "Xml is loaded properly"
    articles.'*'.size() == 4
    articles.article[0].author.firstname == "Siena"
    articles.article[2].'release-date' == "2018-06-12"
    articles.article[3].title == "Java 12 insights"
    articles.article.find { it.author.'@id' == "3" }.author.firstname == "Daniele"
}

As we can see, the interface is identical to that of XmlParser. However, the output structure uses groovy.util.slurpersupport.GPathResult, which is a wrapper class for Node. GPathResult provides simplified definitions of methods such as equals() and toString() by wrapping Node#text(). As a result, we can read fields and parameters directly using just their names.

4.2. Adding a Node

Adding a Node is also very similar to using XmlParser. In this case, however, the groovy.util.slurpersupport.GPathResult#appendNode method takes an instance of java.lang.Object as an argument. As a result, we can simplify new Node definitions following the same convention introduced by NodeBuilder:

def "Should add node to existing xml"() {
    given: "XML object"
    def articles = new XmlSlurper().parse(xmlFile)

    when: "Adding node to xml"
    articles.appendNode {
        article(id: '5') {
            title('Traversing XML in the nutshell')
            author {
                firstname('Martin')
                lastname('Schmidt')
            }
            'release-date'('2019-05-18')
        }
    }

    articles = new XmlSlurper().parseText(XmlUtil.serialize(articles))

    then: "Node is added to xml properly"
    articles.'*'.size() == 5
    articles.article[4].title == "Traversing XML in the nutshell"
}

In case we need to modify the structure of our XML with XmlSlurper, we have to reinitialize our articles object to see the results. We can achieve that by combining the groovy.util.XmlSlurper#parseText and groovy.xml.XmlUtil#serialize methods.

4.3. Modifying a Node

As we mentioned before, the GPathResult introduces a simplified approach to data manipulation. That being said, in contrast to the XmlParser, with XmlSlurper we can modify the values directly using the node name or parameter name:

def "Should modify node"() {
    given: "XML object"
    def articles = new XmlSlurper().parse(xmlFile)

    when: "Changing value of one of the nodes"
    articles.article.each { it.'release-date' = "2019-05-18" }

    then: "XML is updated"
    articles.article.findAll { it.'release-date' != "2019-05-18" }.isEmpty()
}

Let’s notice that when we only modify the values of the XML object, we don’t have to parse the whole structure again.

4.4. Replacing a Node

Now let’s move to replacing the whole node. Again, the GPathResult comes to the rescue. We can easily replace the node using groovy.util.slurpersupport.NodeChild#replaceNode, which extends GPathResult and follows the same convention of using the Object values as arguments:

def "Should replace node"() {
    given: "XML object"
    def articles = new XmlSlurper().parse(xmlFile)

    when: "Replacing node"
    articles.article[0].replaceNode {
        article(id: '5') {
            title('Traversing XML in the nutshell')
            author {
                firstname('Martin')
                lastname('Schmidt')
            }
            'release-date'('2019-05-18')
        }
    }

    articles = new XmlSlurper().parseText(XmlUtil.serialize(articles))

    then: "Node is replaced properly"
    articles.'*'.size() == 4
    articles.article[0].title == "Traversing XML in the nutshell"
}

As was the case when adding a node, we’re modifying the structure of the XML, so we have to parse it again.

4.5. Deleting a Node

To remove a node using XmlSlurper, we can reuse the groovy.util.slurpersupport.NodeChild#replaceNode method simply by providing an empty Node definition:

def "Should remove article from xml"() {
    given: "XML object"
    def articles = new XmlSlurper().parse(xmlFile)

    when: "Removing all articles but the ones with id==3"
    articles.article
      .findAll { it.author.'@id' != "3" }
      .replaceNode {}

    articles = new XmlSlurper().parseText(XmlUtil.serialize(articles))

    then: "There is only one article left"
    articles.children().size() == 1
    articles.article[0].author.'@id' == "3"
}

Again, modifying the XML structure requires reinitialization of our articles object.

5. XmlParser vs XmlSlurper

As we showed in our examples, the usages of XmlParser and XmlSlurper are pretty similar. We can more or less achieve the same results with both. However, some differences between them can tilt the scales towards one or the other.

First of all, XmlParser always parses the whole document into the DOM-ish structure. Because of that, we can simultaneously read from and write into it. We can’t do the same with XmlSlurper as it evaluates paths more lazily. As a result, XmlParser can consume more memory.

On the other hand, XmlSlurper uses more straightforward definitions, making it simpler to work with. We also need to remember that any structural changes made to XML using XmlSlurper require reinitialization, which can impose an unacceptable performance hit when making many changes one after another.

The decision of which tool to use should be made with care and depends entirely on the use case.

6. MarkupBuilder

Apart from reading and manipulating the XML tree, Groovy also provides tooling to create an XML document from scratch. Let’s now create a document consisting of the first two articles from our first example using groovy.xml.MarkupBuilder:

def "Should create XML properly"() {
    given: "Node structures"

    when: "Using MarkupBuilderTest to create xml structure"
    def writer = new StringWriter()
    new MarkupBuilder(writer).articles {
        article {
            title('First steps in Java')
            author(id: '1') {
                firstname('Siena')
                lastname('Kerr')
            }
            'release-date'('2018-12-01')
        }
        article {
            title('Dockerize your SpringBoot application')
            author(id: '2') {
                firstname('Jonas')
                lastname('Lugo')
            }
            'release-date'('2018-12-01')
        }
    }

    then: "Xml is created properly"
    XmlUtil.serialize(writer.toString()) == XmlUtil.serialize(xmlFile.text)
}

In the above example, we can see that MarkupBuilder uses the very same approach for the Node definitions we used with NodeBuilder and GPathResult previously.

To compare output from MarkupBuilder with the expected XML structure, we used the groovy.xml.XmlUtil#serialize method.

7. Conclusion

In this article, we explored multiple ways of manipulating XML structures using Groovy.

We looked at examples of parsing, adding, editing, replacing, and deleting nodes using two classes provided by Groovy: XmlParser and XmlSlurper. We also discussed differences between them and showed how we could build an XML tree from scratch using MarkupBuilder.

As always, the complete code used in this article is available over on GitHub.

Hibernate Validator Specific Constraints


1. Overview

In this tutorial, we’re going to review Hibernate Validator constraints, which are built into Hibernate Validator but are outside the Bean Validation spec. For a recap of Bean Validation, please refer to our article on Java Bean Validation Basics.

2. Hibernate Validator Setup

At the very least, we should add Hibernate Validator to our dependencies:

<dependency>
    <groupId>org.hibernate.validator</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>6.0.16.Final</version>
</dependency>

Note that Hibernate Validator does not depend on Hibernate, the ORM, which we’ve covered in many other articles.

Additionally, some of the annotations that we’ll introduce only apply if our project makes use of certain libraries. So, for each one of those, we’ll indicate the necessary dependencies.

3. Validating Money-related Values

3.1. Validating Credit Card Numbers

Valid credit card numbers must satisfy a checksum, which we compute using Luhn’s Algorithm. The @CreditCardNumber constraint succeeds when a string satisfies the checksum.

@CreditCardNumber does not perform any other check on the input string. In particular, it doesn’t check the length of the input. Therefore, it can only detect numbers that are invalid due to a small typo.

Note that, by default, the constraint fails if the string contains characters which aren’t digits, but we can tell it to ignore them:

@CreditCardNumber(ignoreNonDigitCharacters = true)
private String lenientCreditCardNumber;

Then, we can include characters such as spaces or dashes:

validations.setLenientCreditCardNumber("7992-7398-713");
constraintViolations = validator.validateProperty(validations, "lenientCreditCardNumber");
assertTrue(constraintViolations.isEmpty());

3.2. Validating Monetary Values

The @Currency validator checks whether a given monetary amount is in the specified currency:

@Currency("EUR")
private MonetaryAmount balance;

The class MonetaryAmount is part of Java Money. Therefore, @Currency only applies when a Java Money implementation is available.

Once we have set Java Money up correctly, we can check the constraint:

bean.setBalance(Money.of(new BigDecimal("100.0"), Monetary.getCurrency("EUR")));
constraintViolations = validator.validateProperty(bean, "balance");
assertEquals(0, constraintViolations.size());

4. Validating Ranges

4.1. Numeric and Monetary Ranges

The bean validation specification defines several constraints which we can enforce on numeric fields. Besides those, Hibernate Validator provides a handy annotation, @Range, that acts as a combination of @Min and @Max, matching a range inclusively:

@Range(min = 0, max = 100)
private BigDecimal percent;

Like @Min and @Max, @Range is applicable to fields of primitive numeric types and their wrappers, BigInteger and BigDecimal, String representations of the above, and, finally, MonetaryAmount fields.

4.2. Duration of Time

In addition to standard JSR 380 annotations for values that represent points in time, Hibernate Validator includes constraints for Durations as well. Make sure to check out the Period and Duration classes of Java Time first.

So, we can enforce minimum and maximum durations on a property:

@DurationMin(days = 1, hours = 2)
@DurationMax(days = 2, hours = 1)
private Duration duration;

Even if we didn't show them all here, these annotations have parameters for all units of time from nanoseconds to days.

Please note that, by default, minimum and maximum values are inclusive. That is, a value which is exactly the same as the minimum or the maximum will pass validation.

If we want boundary values to be invalid, instead, we define the inclusive property to be false:

@DurationMax(minutes = 30, inclusive = false)

5. Validating Strings

5.1. String Length

We can use two slightly different constraints to enforce that a string is of a certain length.

Generally, we’ll want to ensure a string’s length in characters – the one we measure with the length method – is between a minimum and a maximum. In that case, we use @Length on a String property or field:

@Length(min = 1, max = 3)
private String someString;

However, due to the intricacies of Unicode, sometimes the length in characters and the length in code points differ. When we want to check the latter, we use @CodePointLength:

@CodePointLength(min = 1, max = 3)
private String someString;

For example, the string “aa\uD835\uDD0A” is 4 characters long, but it contains only 3 code points, so it’ll fail the first constraint and pass the second one.
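We can verify this difference in plain Java:

String s = "aa\uD835\uDD0A";
assertEquals(4, s.length());                      // UTF-16 chars
assertEquals(3, s.codePointCount(0, s.length())); // Unicode code points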

Also, with both annotations, we can omit the minimum or the maximum value.

5.2. Checks on Strings of Digits

We’ve already seen how to check that a string is a valid credit card number. However, Hibernate Validator includes several other constraints for strings of digits.

The first one we’re reviewing is @LuhnCheck. This is the generalized version of @CreditCardNumber, in that it performs the same check, but allows for additional parameters:

@LuhnCheck(startIndex = 0, endIndex = Integer.MAX_VALUE, checkDigitIndex = -1)
private String someString;

Here, we’ve shown the default values of the parameters, so the above is equivalent to a simple @LuhnCheck annotation.

But, as we can see, we can perform the check on a substring (startIndex and endIndex) and tell the constraint which digit is the checksum digit, with -1 meaning the last one in the checked substring.

Other interesting constraints include the modulo 10 check (@Mod10Check) and the modulo 11 check (@Mod11Check), which are typically used for barcodes and other codes such as ISBN.

However, for those specific cases, Hibernate Validator happens to provide a constraint to validate ISBN codes, @ISBN, as well as an @EAN constraint for EAN barcodes.
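For instance, hypothetical fields using these dedicated constraints could look like:

@ISBN
private String bookCode;

@EAN
private String barcode;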

5.3. URL and HTML Validation

The @URL constraint verifies that a string is a valid representation of a URL. Additionally, we can check that a specific component of the URL has a certain value:

@URL(protocol = "https")
private String url;

We can thus check the protocol, the host and the port. If that’s not sufficient, there’s a regexp property that we can use to match the URL against a regular expression.
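For example, a hypothetical pattern restricting URLs to a single domain might look like:

@URL(regexp = "^https://example\\.org(/.*)?$")
private String url;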

We can also verify that a property contains “safe” HTML code (for example, without script tags):

@SafeHtml
private String html;

@SafeHtml uses the JSoup library, which must be included in our dependencies.

We can tailor the HTML sanitization to our needs using built-in tag whitelists (the whitelist property of the annotation) and including additional tags and attributes (the additionalTags and additionalTagsWithAttributes parameters).

6. Other Constraints

Let’s mention briefly that Hibernate Validator includes some country and locale-specific constraints, in particular for some Brazilian and Polish identification numbers, taxpayer codes and similar. Please refer to the relevant section of the documentation for a full list.

Also, we can check that a collection does not contain duplicates with @UniqueElements.
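For instance, on a hypothetical list field:

@UniqueElements
private List<String> tags;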

Finally, for complex cases not covered by existing annotations, we can invoke a script written in a JSR-223 compatible scripting engine. We’ve, of course, touched on JSR-223 in our article about Nashorn, the JavaScript implementation included in modern JVMs.

In this case, the annotation is at the class level, and the script is invoked on the entire instance, passed as the variable _this:

@ScriptAssert(lang = "nashorn", script = "_this.valid")
public class AdditionalValidations {
    private boolean valid = true;
    // standard getters and setters
}

Then, we can check the constraint on the whole instance:

bean.setValid(false);
constraintViolations = validator.validate(bean);
assertEquals(1, constraintViolations.size());

7. Conclusion

In this article, we’ve listed the constraints in Hibernate Validator that go beyond the minimal set defined in the Bean Validation specification.

The implementation of all these examples and code snippets can be found in the GitHub repository as a Maven project, so it should be easy to import and run as is.

Java Optional as Return Type


1. Introduction

The Optional type was introduced in Java 8. It provides a clear and explicit way to convey the message that there may not be a value, without using null.

When getting an Optional return type, we’re likely to check if the value is missing, leading to fewer NullPointerExceptions in the applications. However, the Optional type isn’t suitable in all places.

Although we can use it wherever we see fit, in this tutorial, we’ll focus on some best practices of using Optional as a return type.

2. Optional as Return Type

An Optional type can be a return type for most methods except some scenarios discussed later in the tutorial.

Most of the time, returning an Optional is just fine:

public static Optional<User> findUserByName(String name) {
    User user = usersByName.get(name);
    Optional<User> opt = Optional.ofNullable(user);
    return opt;
}

This is handy since we can use the Optional API in the calling method:

public static void changeUserName(String oldFirstName, String newFirstName) {
    findUserByName(oldFirstName).ifPresent(user -> user.setFirstName(newFirstName));
}

It's also appropriate for a static method or utility method to return an Optional value. However, there are many situations where we should not return an Optional type.

3. When to Not Return Optional

Because Optional is a wrapper and value-based class, there are some operations that can't be performed on an Optional object. It's often simply better to return the actual type rather than an Optional type.

Generally speaking, for getters in POJOs, it’s more suitable to return the actual type, not an Optional type. Particularly, it’s important for Entity Beans, Data Models, and DTOs to have traditional getters.

We’ll examine some of the important use cases below.

3.1. Serialization

Let’s imagine we have a simple entity:

public class Sock implements Serializable {
    Integer size;
    Optional<Sock> pair;

    // ... getters and setters
}

This actually won't work at all. If we were to try and serialize this, we'd get a NotSerializableException:

new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(new Sock());

And really, while serializing Optional may work with other libraries, it certainly adds what may be unnecessary complexity.

Let’s take a look at another application of this same serialization mismatch, this time with JSON.

3.2. JSON

Modern applications convert Java objects to JSON all the time. If a getter returns an Optional type, we’ll most likely see some unexpected data structure in the final JSON.

Let’s say we have a bean with an optional property:

private String firstName;

public Optional<String> getFirstName() {
    return Optional.ofNullable(firstName);
}

public void setFirstName(String firstName) {
    this.firstName = firstName;
}

So, if we use Jackson to serialize an instance of this bean, we'll get:

{"firstName":{"present":true}}

But, what we’d really want is:

{"firstName":"Baeldung"}

So, Optional is a pain for serialization use cases. Next, let’s look at the cousin to serialization: writing data to a database.

3.3. JPA

In JPA, the getter, setter, and field should agree in both name and type. For example, a firstName field of type String should be paired with a getter called getFirstName that also returns a String.

Following this convention makes several things simpler, including the use of reflection by libraries like Hibernate, to give us great Object-Relational mapping support.

Let’s take a look at our same use case of an optional first name in a POJO.

This time, though, it’ll be a JPA entity:

@Entity
public class UserOptionalField implements Serializable {
    @Id
    private long userId;

    private Optional<String> firstName;

    // ... getters and setters
}

And let’s go ahead and try to persist it:

UserOptionalField user = new UserOptionalField();
user.setUserId(1L);
user.setFirstName(Optional.of("Baeldung"));
entityManager.persist(user);

Sadly, we run into an error:

Caused by: javax.persistence.PersistenceException: [PersistenceUnit: com.baeldung.optionalReturnType] Unable to build Hibernate SessionFactory
	at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.persistenceException(EntityManagerFactoryBuilderImpl.java:1015)
	at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:941)
	at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory(HibernatePersistenceProvider.java:56)
	at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:79)
	at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:54)
	at com.baeldung.optionalReturnType.PersistOptionalTypeExample.<clinit>(PersistOptionalTypeExample.java:11)
Caused by: org.hibernate.MappingException: Could not determine type for: java.util.Optional, at table: UserOptionalField, for columns: [org.hibernate.mapping.Column(firstName)]

We could try deviating from this standard. For example, we could keep the property as a String, but change the getter:

@Column(nullable = true) 
private String firstName; 

public Optional<String> getFirstName() { 
    return Optional.ofNullable(firstName); 
}

It appears that we could have it both ways: an Optional return type for the getter and a persistable field firstName.

However, now that we are inconsistent with our getter, setter, and field, it’ll be more difficult to leverage JPA defaults and IDE source code tools.

Until JPA has elegant support of Optional type, we should stick to the traditional code. It’s simpler and better:

private String firstName;

// ... traditional getter and setter

Let’s finally take a look at how this affects the front end – check to see if the problem we run into sounds familiar.

3.4. Expression Languages

Preparing a DTO for the front-end presents similar difficulties.

For example, let’s imagine that we are using JSP templating to read our UserOptional DTO’s firstName from the request:

<c:out value="${requestScope.user.firstName}" />

Since it's an Optional, we'll not see "Baeldung". Instead, we'll see the String representation of the Optional type:

Optional[Baeldung]

And this isn’t a problem just with JSP. Any templating language, be it Velocity, Freemarker, or something else, will need to add support for this. Until then, let’s continue to keep our DTOs simple.

4. Conclusion

In this tutorial, we’ve learned how we can return an Optional object, and how to deal with this kind of return value.

On the other hand, we've also learned that there are many scenarios where we would be better off not using an Optional return type for a getter. While we can use the Optional type as a hint that there might be no non-null value, we should be careful not to overuse it, particularly in a getter of an entity bean or a DTO.

The source code of the examples in this tutorial can be found on GitHub.

The Spring Boot Starter Parent


1. Introduction

In this tutorial, we'll learn about spring-boot-starter-parent and how we can benefit from it for better dependency management, default plugin configuration, and faster builds of our Spring Boot applications.

We'll also see how we can override the versions of the dependencies and properties provided by the starter parent.

2. Spring Boot Starter Parent

The spring-boot-starter-parent project is a special starter project that provides default configurations for our application and a complete dependency tree that we can use to quickly build our Spring Boot application.

It also provides default configuration for Maven plugins, such as maven-failsafe-plugin, maven-jar-plugin, maven-surefire-plugin, and maven-war-plugin. Besides that, it also inherits dependency management from spring-boot-dependencies, which is the parent of spring-boot-starter-parent.

We can add this parent starter to our project by declaring it as the parent in our project's pom.xml:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.5.RELEASE</version>
</parent>

We can always get the latest version of spring-boot-starter-parent from Maven Central.

3. Managing Dependencies

Once we've declared the starter parent as our project's parent, we can pull any dependency from the parent by just declaring it in our dependencies tag.

Also, we don't need to define versions for these dependencies; Maven will download the jar files based on the versions defined for the starter parent in the parent tag.

For example, if we’re building a web project, we can add spring-boot-starter-web directly, and we don’t need to specify the version:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

4. The Dependency Management Tag

To manage a different version of a dependency provided by the starter parent, we can declare the dependency and its version explicitly in the dependencyManagement section:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
            <version>2.1.1.RELEASE</version>
        </dependency>
    </dependencies>
</dependencyManagement>

5. Properties

To change the value of any property defined in the starter parent, we can re-declare it in our properties section.

The spring-boot-starter-parent via its parent spring-boot-dependencies uses properties for configuring all the dependencies versions, Java version, and Maven plugin versions.

Therefore, it makes it easy for us to control these configurations by just changing the corresponding property.

If we want to change the version of any dependency that we pull from the starter parent, we can override the corresponding version property directly:

<properties>
    <junit.version>4.11</junit.version>
</properties>

6. Other Property Overrides

We can also use properties for other configurations, such as managing plugin versions, or even base configuration like the Java version and source encoding.

We just need to re-declare the property with a new value.

For example, to change the Java version we can indicate it in the java.version property:

<properties>
    <java.version>1.8</java.version>
</properties>

7. Spring Boot Project Without Starter Parent

Sometimes we have a custom Maven parent. Or, we may prefer to declare all our Maven configurations manually.

In that case, we may opt not to use the spring-boot-starter-parent project. However, we can still benefit from its dependency tree by adding the spring-boot-dependencies dependency to our project in import scope.

Let’s explain this with a simple example in which we want to use another parent other than the starter parent:

<parent>
    <groupId>com.baeldung</groupId>
    <artifactId>spring-boot-parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</parent>

Here, we've used a different project, spring-boot-parent, as our parent.

Now, in this case, we can still get the same benefits of dependency management by adding it in import scope and pom type:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>2.1.1.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Furthermore, we can pull in any dependency by just declaring it in dependencies as we did in our previous examples. No version numbers are needed for those dependencies.

8. Summary

In this tutorial, we have given an overview of spring-boot-starter-parent and the benefit of adding it as a parent in any child project.

Next, we learned how to manage dependencies. We can override dependencies in dependencyManagement or via properties.

Source code for the snippets used in this tutorial is available on GitHub: one project using the starter parent and the other a custom parent.

Integrating Groovy into Java Applications


1. Introduction

In this tutorial, we’ll explore the latest techniques to integrate Groovy into a Java Application.

2. A Few Words About Groovy

The Groovy programming language is a powerful, optionally-typed and dynamic language. It’s supported by the Apache Software Foundation and the Groovy community, with contributions from more than 200 developers.

It can be used to build an entire application, to create a module or an additional library interacting with our Java code, or to run scripts evaluated and compiled on the fly.

For more information, please read Introduction to Groovy Language or go to the official documentation.

3. Maven Dependencies

At the time of writing, the latest stable release is 2.5.7, while Groovy 2.6 and 3.0 (both started in fall ’17) are still in alpha stage.

Similar to Spring Boot, we just need to include the groovy-all pom to add all the dependencies we may need, without worrying about their versions:

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>${groovy.version}</version>
    <type>pom</type>
</dependency>

4. Joint Compilation

Before going into the details of how to configure Maven, we need to understand what we are dealing with.

Our code will contain both Java and Groovy files. Groovy won’t have any problem at all finding the Java classes, but what if we want Java to find Groovy classes and methods?

Here comes joint compilation to the rescue!

Joint compilation is a process designed to compile both Java and Groovy files in the same project, in a single Maven command.

With joint compilation, the Groovy compiler will:

  • parse the source files
  • depending on the implementation, create stubs that are compatible with the Java compiler
  • invoke the Java compiler to compile the stubs along with Java sources – this way Java classes can find Groovy dependencies
  • compile the Groovy sources – now our Groovy sources can find their Java dependencies

Depending on the plugin implementing it, we may be required to separate the files into specific folders or to tell the compiler where to find them.

Without joint compilation, the Java source files would be compiled as if they were Groovy sources. Sometimes this might work since most of the Java 1.7 syntax is compatible with Groovy, but the semantics would be different.
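To make this concrete, here's a minimal sketch of the Java side; the class name CalcDemo and the exact calcSum signature are assumptions, while CalcMath is the Groovy class we'll use later in this article:

// CalcDemo.java lives in src/main/java, CalcMath.groovy in src/main/groovy
public class CalcDemo {
    public static void main(String[] args) {
        // without joint compilation, javac couldn't resolve the Groovy class
        CalcMath calc = new CalcMath();
        System.out.println(calc.calcSum(1, 2));
    }
}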

5. Maven Compiler Plugins

There are a few compiler plugins available that support joint compilation, each with its strengths and weaknesses.

The two most commonly used with Maven are the Groovy-Eclipse Maven plugin and GMavenPlus.

5.1. The Groovy-Eclipse Maven Plugin

The Groovy-Eclipse Maven plugin simplifies joint compilation by avoiding stub generation, which is still a mandatory step for other compilers like GMavenPlus, but it presents some configuration quirks.

To enable retrieval of the newest compiler artifacts, we have to add the Maven Bintray repository:

<pluginRepositories>
    <pluginRepository>
        <id>bintray</id>
        <name>Groovy Bintray</name>
        <url>https://dl.bintray.com/groovy/maven</url>
        <releases>
            <!-- avoid automatic updates -->
            <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </pluginRepository>
</pluginRepositories>

Then, in the plugin section, we tell the Maven compiler which Groovy compiler version it has to use.

In fact, the plugin we’ll use – the Maven compiler plugin – doesn’t actually compile, but instead delegates the job to the groovy-eclipse-batch artifact:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.8.0</version>
    <configuration>
        <compilerId>groovy-eclipse-compiler</compilerId>
        <source>${java.version}</source>
        <target>${java.version}</target>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-eclipse-compiler</artifactId>
            <version>3.3.0-01</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-eclipse-batch</artifactId>
            <version>${groovy.version}-01</version>
        </dependency>
    </dependencies>
</plugin>

The groovy-all dependency version should match the compiler version.

Finally, we need to configure our source autodiscovery: by default, the compiler would look into folders such as src/main/java and src/main/groovy, but if our java folder is empty, the compiler won’t look for our groovy sources.

The same mechanism is valid for our tests.

To force the file discovery, we could add any file in src/main/java and src/test/java, or simply add the groovy-eclipse-compiler plugin:

<plugin>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-eclipse-compiler</artifactId>
    <version>3.3.0-01</version>
    <extensions>true</extensions>
</plugin>

The <extensions> section is mandatory to let the plugin add the extra build phase and goals, containing the two Groovy source folders.

5.2. The GMavenPlus Plugin

The GMavenPlus plugin may have a name similar to the old GMaven plugin, but instead of creating a mere patch, the author made an effort to simplify and decouple the compiler from a specific Groovy version.

To do so, the plugin separates itself from the standard guidelines for compiler plugins.

The GMavenPlus compiler adds support for features that were still not present in other compilers at the time, such as invokedynamic, the interactive shell console, and Android.

On the other side, it presents some complications:

  • it modifies Maven’s source directories to contain both the Java and the Groovy sources, but not the Java stubs
  • it requires us to manage stubs if we don’t delete them with the proper goals

To configure our project, we need to add the gmavenplus-plugin:

<plugin>
    <groupId>org.codehaus.gmavenplus</groupId>
    <artifactId>gmavenplus-plugin</artifactId>
    <version>1.7.0</version>
    <executions>
        <execution>
            <goals>
                <goal>execute</goal>
                <goal>addSources</goal>
                <goal>addTestSources</goal>
                <goal>generateStubs</goal>
                <goal>compile</goal>
                <goal>generateTestStubs</goal>
                <goal>compileTests</goal>
                <goal>removeStubs</goal>
                <goal>removeTestStubs</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <!-- any version of Groovy >= 1.5.0 should work here -->
            <version>2.5.6</version>
            <scope>runtime</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</plugin>

To allow testing of this plugin, we created a second pom file called gmavenplus-pom.xml in the sample.

5.3. Compiling With the Groovy-Eclipse Maven Plugin

Now that everything is configured, we can finally build our classes.

In the example we provided, we created a simple Java application in the source folder src/main/java and some Groovy scripts in src/main/groovy, where we can create Groovy classes and scripts.

Let's build everything with the Groovy-Eclipse Maven plugin:

$ mvn clean compile
...
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ core-groovy-2 ---
[INFO] Changes detected - recompiling the module!
[INFO] Using Groovy-Eclipse compiler to compile both Java and Groovy files
...

Here we see that Groovy is compiling everything.

5.4. Compiling With GMavenPlus

GMavenPlus shows some differences:

$ mvn -f gmavenplus-pom.xml clean compile
...
[INFO] --- gmavenplus-plugin:1.7.0:generateStubs (default) @ core-groovy-2 ---
[INFO] Using Groovy 2.5.7 to perform generateStubs.
[INFO] Generated 2 stubs.
[INFO]
...
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ core-groovy-2 ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 3 source files to XXX\Baeldung\TutorialsRepo\core-groovy-2\target\classes
[INFO]
...
[INFO] --- gmavenplus-plugin:1.7.0:compile (default) @ core-groovy-2 ---
[INFO] Using Groovy 2.5.7 to perform compile.
[INFO] Compiled 2 files.
[INFO]
...
[INFO] --- gmavenplus-plugin:1.7.0:removeStubs (default) @ core-groovy-2 ---
[INFO]
...

We notice right away that GMavenPlus goes through the additional steps of:

  1. Generating stubs, one for each groovy file
  2. Compiling the Java files – stubs and Java code alike
  3. Compiling the Groovy files

By generating stubs, GMavenPlus inherits a weakness that has caused developers many headaches over the years when working with joint compilation.

In the ideal scenario, everything would work just fine, but by introducing more steps we also have more points of failure: for example, the build may fail before it's able to clean up the stubs.

If this happens, old stubs left around may confuse our IDE, which would then show compilation errors where we know everything should be correct.

Only a clean build would then avoid a painful and long witch hunt.

5.5. Packaging Dependencies in the Jar File

To run the program as a jar from the command line, we added the maven-assembly-plugin, which will include all the Groovy dependencies in a "fat jar" named with the postfix defined in the descriptorRef property:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <!-- get all project dependencies -->
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <!-- mainClass in the manifest makes an executable jar -->
        <archive>
            <manifest>
                <mainClass>com.baeldung.MyJointCompilationApp</mainClass>
            </manifest>
        </archive>
    </configuration>
    <executions>
        <execution>
            <id>make-assembly</id>
            <!-- bind to the packaging phase -->
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Once the compilation is complete we can run our code with this command:

$ java -jar target/core-groovy-2-1.0-SNAPSHOT-jar-with-dependencies.jar com.baeldung.MyJointCompilationApp

6. Loading Groovy Code on the Fly

The Maven compilation lets us include Groovy files in our project and reference their classes and methods from Java.

However, this is not enough if we want to change the logic at runtime: the compilation runs outside the runtime stage, so we would still have to restart our application in order to see our changes.

To take advantage of the dynamic power (and risks) of Groovy, we need to explore the techniques available to load our files when our application is already running.

6.1. GroovyClassLoader

To achieve this, we need the GroovyClassLoader, which can parse source code in text or file format and generate the resulting class objects.

When the source is a file, the compilation result is also cached, to avoid overhead when we ask the loader multiple instances of the same class.

Scripts coming directly from a String object, on the other hand, won't be cached; hence, calling the same script multiple times could still cause memory leaks.

GroovyClassLoader is the foundation other integration systems are built on.

The implementation is relatively simple:

private final GroovyClassLoader loader;

private Double addWithGroovyClassLoader(int x, int y) 
  throws IllegalAccessException, InstantiationException, IOException {
    Class calcClass = loader.parseClass(
      new File("src/main/groovy/com/baeldung/", "CalcMath.groovy"));
    GroovyObject calc = (GroovyObject) calcClass.newInstance();
    return (Double) calc.invokeMethod("calcSum", new Object[] { x, y });
}

public MyJointCompilationApp() {
    loader = new GroovyClassLoader(this.getClass().getClassLoader());
    // ...
}

6.2. GroovyShell

GroovyShell's parse() method accepts sources in text or file format and generates an instance of the Script class.

This instance inherits the run() method from Script, which executes the entire file top to bottom and returns the result given by the last line executed.

If we want to, we can also extend Script in our code, and override the default implementation to call directly our internal logic.

The implementation to call Script.run() looks like this:

private Double addWithGroovyShellRun(int x, int y) throws IOException {
    Script script = shell.parse(new File("src/main/groovy/com/baeldung/", "CalcScript.groovy"));
    return (Double) script.run();
}

public MyJointCompilationApp() {
    // ...
    shell = new GroovyShell(loader, new Binding());
    // ...
}

Please note that run() doesn't accept parameters, so we would need to add some global variables to our file and initialize them through the Binding object.

As this object is passed in during the GroovyShell initialization, the variables are shared with all the Script instances.
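
As a minimal sketch of this sharing (the variable names x and y are ours, not part of the original example), we can pre-populate a Binding and let a parsed script read from it:

Binding binding = new Binding();
binding.setVariable("x", 5);
binding.setVariable("y", 10);

// every Script parsed by this shell sees x and y as globals
GroovyShell shellWithVariables = new GroovyShell(binding);
Object sum = shellWithVariables.parse("x + y").run(); // 15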

If we prefer a more granular control, we can use invokeMethod(), which can access our own methods through reflection and pass arguments directly.

Let’s look at this implementation:

private final GroovyShell shell;

private Double addWithGroovyShell(int x, int y) throws IOException {
    Script script = shell.parse(new File("src/main/groovy/com/baeldung/", "CalcScript.groovy"));
    return (Double) script.invokeMethod("calcSum", new Object[] { x, y });
}

public MyJointCompilationApp() {
    // ...
    shell = new GroovyShell(loader, new Binding());
    // ...
}

Under the covers, GroovyShell relies on the GroovyClassLoader for compiling and caching the resulting classes, so the same rules explained earlier apply in the same way.

6.3. GroovyScriptEngine

The GroovyScriptEngine class is particularly suited to applications that rely on reloading a script and its dependencies.

Despite these additional features, the implementation has only a few small differences:

private final GroovyScriptEngine engine;

private void addWithGroovyScriptEngine(int x, int y) throws IllegalAccessException,
  InstantiationException, ResourceException, ScriptException {
    Class<GroovyObject> calcClass = engine.loadScriptByName("CalcMath.groovy");
    GroovyObject calc = calcClass.newInstance();
    Object result = calc.invokeMethod("calcSum", new Object[] { x, y });
    LOG.info("Result of CalcMath.calcSum() method is {}", result);
}

public MyJointCompilationApp() {
    // ...
    URL url = null;
    try {
        url = new File("src/main/groovy/com/baeldung/").toURI().toURL();
    } catch (MalformedURLException e) {
        LOG.error("Exception while creating url", e);
    }
    engine = new GroovyScriptEngine(new URL[] {url}, this.getClass().getClassLoader());
    engineFromFactory = new GroovyScriptEngineFactory().getScriptEngine(); 
}

This time we have to configure source roots, and we refer to the script with just its name, which is a bit cleaner.

Looking inside the loadScriptByName method, we can see right away the check isSourceNewer where the engine checks if the source currently in cache is still valid.

Every time our file changes, GroovyScriptEngine will automatically reload that particular file and all the classes depending on it.

Although this is a handy and powerful feature, it could cause a very dangerous side effect: repeatedly reloading a huge number of files would result in CPU overhead without warning.

If that happens, we may need to implement our own caching mechanism to deal with this issue.

6.4. GroovyScriptEngineFactory (JSR-223)

JSR-223 has provided a standard API for calling scripting frameworks since Java 6.

The implementation looks similar, although we go back to loading via full file paths:

private final ScriptEngine engineFromFactory;

private void addWithEngineFactory(int x, int y) throws IllegalAccessException, 
  InstantiationException, javax.script.ScriptException, FileNotFoundException {
    Class calcClass = (Class) engineFromFactory.eval(
      new FileReader(new File("src/main/groovy/com/baeldung/", "CalcMath.groovy")));
    GroovyObject calc = (GroovyObject) calcClass.newInstance();
    Object result = calc.invokeMethod("calcSum", new Object[] { x, y });
    LOG.info("Result of CalcMath.calcSum() method is {}", result);
}

public MyJointCompilationApp() {
    // ...
    engineFromFactory = new GroovyScriptEngineFactory().getScriptEngine();
}

It’s great if we are integrating our app with several scripting languages, but its feature set is more restricted. For example, it doesn’t support class reloading. As such, if we are only integrating with Groovy, then it may be better to stick with earlier approaches.

7. Pitfalls of Dynamic Compilation

Using any of the methods above, we could create an application that reads scripts or classes from a specific folder outside our jar file.

This would give us the flexibility to add new features while the system is running (unless we require new code in the Java part), thus achieving some sort of Continuous Delivery development.

But beware this double-edged sword: we now need to protect ourselves very carefully from failures that could happen both at compile time and runtime, de facto ensuring that our code fails safely.

8. Pitfalls of Running Groovy in a Java Project

8.1. Performance

We all know that when a system needs to be very performant, there are some golden rules to follow.

Two that may weigh more on our project are:

  • avoid reflection
  • minimize the number of bytecode instructions

Reflection, in particular, is a costly operation due to the process of checking the class, the fields, the methods, the method parameters, and so on.

If we analyze the method calls from Java to Groovy, for example, when running the example addWithCompiledClasses, the stack of operation between .calcSum and the first line of the actual Groovy method looks like:

calcSum:4, CalcScript (com.baeldung)
addWithCompiledClasses:43, MyJointCompilationApp (com.baeldung)
addWithStaticCompiledClasses:95, MyJointCompilationApp (com.baeldung)
main:117, App (com.baeldung)

This is consistent with a plain Java call. The same happens when we cast the object returned by the loader and call its method.

However, this is what the invokeMethod call does:

calcSum:4, CalcScript (com.baeldung)
invoke0:-1, NativeMethodAccessorImpl (sun.reflect)
invoke:62, NativeMethodAccessorImpl (sun.reflect)
invoke:43, DelegatingMethodAccessorImpl (sun.reflect)
invoke:498, Method (java.lang.reflect)
invoke:101, CachedMethod (org.codehaus.groovy.reflection)
doMethodInvoke:323, MetaMethod (groovy.lang)
invokeMethod:1217, MetaClassImpl (groovy.lang)
invokeMethod:1041, MetaClassImpl (groovy.lang)
invokeMethod:821, MetaClassImpl (groovy.lang)
invokeMethod:44, GroovyObjectSupport (groovy.lang)
invokeMethod:77, Script (groovy.lang)
addWithGroovyShell:52, MyJointCompilationApp (com.baeldung)
addWithDynamicCompiledClasses:99, MyJointCompilationApp (com.baeldung)
main:118, MyJointCompilationApp (com.baeldung)

In this case, we can appreciate what’s really behind Groovy’s power: the MetaClass.

A MetaClass defines the behavior of any given Groovy or Java class, so Groovy looks into it whenever there’s a dynamic operation to execute in order to find the target method or field. Once found, the standard reflection flow executes it.

Two golden rules broken with one invoke method!
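
To make that lookup concrete, here's a minimal sketch (reusing the calcClass we loaded earlier with GroovyClassLoader) of the resolution that invokeMethod performs for us:

GroovyObject calc = (GroovyObject) calcClass.newInstance();
MetaClass metaClass = calc.getMetaClass(); // the behavior registry Groovy consults
Object result = metaClass.invokeMethod(calc, "calcSum", new Object[] { 1, 2 });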

If we need to work with hundreds of dynamic Groovy files, how we call our methods will then make a huge performance difference in our system.

8.2. Method or Property Not Found

As mentioned earlier, if we want to deploy new versions of Groovy files in a CD life cycle, we need to treat them like they were an API separate from our core system.

This means putting in place multiple fail-safe checks and code design restrictions so our newly joined developer doesn’t blow up the production system with a wrong push.

Examples of such measures are having a CI pipeline and using method deprecation instead of deletion.

What happens if we don’t? We get dreadful exceptions due to missing methods and wrong argument counts and types.

And if we think that compilation would save us, let’s look at the method calcSum2() of our Groovy scripts:

// this method will fail in runtime
def calcSum2(x, y) {
    // DANGER! The variable "log" may be undefined
    log.info "Executing $x + $y"
    // DANGER! This method doesn't exist!
    calcSum3()
    // DANGER! The logged variable "z" is undefined!
    log.info("Logging an undefined variable: $z")
}

By looking through the entire file, we immediately see two problems: the method calcSum3() and the variable z are not defined anywhere.

Even so, the script is compiled successfully, without even a single warning, both statically in Maven and dynamically in the GroovyClassLoader.

It’ll fail only when we try to invoke it.

Maven’s static compilation will show an error only if our Java code refers directly to calcSum3(), after casting the GroovyObject like we do in the addWithCompiledClasses() method, but it’s still ineffective if we use reflection instead.

9. Conclusion

In this article, we explored how we can integrate Groovy in our Java application, looking at different integration methods and some of the problems we may encounter with mixed languages.

As usual, the source code used in the examples can be found on GitHub.

Copying Sets in Java


1. Overview

A Set is a collection that contains no duplicate elements. In Java, Set is an interface that extends the Collection interface.

In this tutorial, we’ll go through different ways of copying sets in Java.

2. Copy Constructor

One way of copying a Set is to use the copy constructor of a Set implementation:

Set<T> copy = new HashSet<>(original);

A copy constructor is a special type of constructor that is used to create a new object by copying an existing object.

Here, we’re not really cloning the elements of the given set. We’re just copying the object references into the new set. For that reason, each change made in one element will affect both sets.

3. Set.addAll

The Set interface has an addAll method. It adds the elements of the given collection to the target set. Therefore, we can use the addAll method to copy the elements of an existing set to an empty set:

Set<T> copy = new HashSet<>();
copy.addAll(original);

4. Set.clone

Let’s keep in mind that Set is an interface that extends the Collection interface, therefore we need to refer to an object that implements the Set interface to create another instance of a Set. HashSet, TreeSet, LinkedHashSet, and EnumSet are all examples of Set implementations in Java.

All these Set implementations have a clone method since they all implement the Cloneable interface.

So, as another approach to copying a set, we can call the set’s clone method:

Set<T> copy = (Set<T>) original.clone();

Let’s note that cloning originally comes from the Object.clone. Set implementations override the clone method of the Object class. The nature of the clone depends on the actual implementation. For example, HashSet does only a shallow copy, though we can code our way to doing a deep copy.

As we can see, we are forced to typecast the cloned object to Set<T> since the clone method actually returns an Object.

5. JSON

Another approach to copying a set is to serialize it into a JSON String and create a new set from the generated JSON String. It's also worth noting that, for this approach, all the elements in the set and any referenced elements must be serializable, and that we'll be performing a deep copy of all the objects.

In this example, we’ll copy the set by using the serialization and deserialization methods of Google’s Gson library:

Gson gson = new Gson();
String jsonStr = gson.toJson(original);
Set<T> copy = gson.fromJson(jsonStr, Set.class);

6. Apache Commons Lang

Apache Commons Lang has a class SerializationUtils that provides a special method – clone – that can be used to clone a given object. We can make use of this method to copy a set:

Set<T> copy = new HashSet<>();
for (T item : original) {
    copy.add(SerializationUtils.clone(item));
}

Let’s note that SerializationUtils.clone expects its parameter to extend the Serializable class.

7. Collectors.toSet

Or, we can use Java 8’s Stream API with Collectors to clone a set:

Set<T> copy = original.stream()
    .collect(Collectors.toSet());

One advantage of the Stream API is the extra convenience it provides, allowing us to chain operations such as skip and filter along the way.
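
For instance, here's a sketch of copying and filtering in a single pass:

Set<Integer> numbers = new HashSet<>(Arrays.asList(1, 2, 3, 4));
Set<Integer> evenCopy = numbers.stream()
    .filter(n -> n % 2 == 0)
    .collect(Collectors.toSet()); // contains only 2 and 4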

8. Using Java 10

Java 10 brings a new feature into the Set interface that allows us to create an immutable set from the elements of a given collection:

Set<T> copy = Set.copyOf(original);

Note that Set.copyOf expects a non-null parameter.

9. Conclusion

In this quick tutorial, we’ve explored different ways of copying sets in Java.

As always, check out the source code for our examples, including the one for Java 10.


Introduction to SPF4J


1. Overview

Performance testing is an activity often pushed towards the end stages of the software development cycle. We usually rely on Java profilers to help troubleshoot performance issues.

In this tutorial, we’ll go through the Simple Performance Framework for Java (SPF4J). It provides us APIs that can be added to our code. As a result, we can make performance monitoring an integral part of our component.

2. Basic Concepts of Metrics Capture and Visualization

Before we start, let’s try to understand the concepts of metrics capture and visualization using a simple example.

Let’s imagine that we’re interested in monitoring the downloads of a newly launched app in an app store. For the sake of learning, let’s think of doing this experiment manually.

2.1. Capturing the Metrics

First, we need to decide what needs to be measured. The metric we're interested in is downloads/min. Therefore, we'll measure the number of downloads.

Second, how frequently do we need to take the measurements? Let’s decide “once per minute”.

Finally, how long should we monitor? Let’s decide “for one hour”.

With these rules in place, we’re ready to conduct the experiment. Once the experiment is over, we can see the results:

Time	Cumulative Downloads	Downloads/min
----------------------------------------------
T       497                     0  
T+1     624                     127
T+2     676                     52
...     
T+14    19347                   17390
T+15    19427                   80
...  
T+22    27195                   7350
...  
T+41    41321                   11885
...   
T+60    43395                   40

The first two columns – time and cumulative downloads – are direct values we observe. The third column, downloads/min, is a derived value calculated as the difference between current and previous cumulative download values. This gives us the actual number of downloads during that time period.

2.2. Visualizing the Metrics

Let’s plot a simple linear graph of time vs downloads/min.

Downloads per Minute Chart

We can see that there are some peaks indicating a large number of downloads that happened on a few occasions. Due to the linear scale used for downloads axis, the lower values appear as a straight line.

Let’s change downloads axis to use a logarithmic scale (base 10) and plot a log/linear graph.

Now we actually start seeing the lower values, and they're closer to 100 (+/-). Notice that the linear graph indicated an average of 703, as it also included the peaks.

If we were to exclude the peaks as aberrations, we can conclude from our experiment using the log/linear graph:

  • the average downloads/min is in the order of 100s

3. Performance Monitoring of a Function Call

Having understood how to capture a simple metric and analyze it from the previous example, let’s now apply it on a simple Java method — isPrimeNumber:

private static boolean isPrimeNumber(long number) {
    for (long i = 2; i <= number / 2; i++) {
        if (number % i == 0)
            return false;
    }
    return true;
}

Using SPF4J, there are two ways to capture metrics. Let’s explore them in the next section.

4. Setup and Configuration

4.1. Maven Setup

SPF4J provides many different libraries for different purposes, but we only need a few for our simple example.

The core library is spf4j-core, which provides us most of the necessary features.

Let’s add this as a Maven dependency:

<dependency>
    <groupId>org.spf4j</groupId>
    <artifactId>spf4j-core</artifactId>
    <version>8.6.10</version>
</dependency>

There is a more well-suited library for performance monitoring — spf4j-aspects, which uses AspectJ.

We’ll explore this in our example, so let’s add this too:

<dependency>
    <groupId>org.spf4j</groupId>
    <artifactId>spf4j-aspects</artifactId>
    <version>8.6.10</version>
</dependency>

And finally, SPF4J also comes with a simple UI that is quite useful for data visualization, so let’s add spf4j-ui as well:

<dependency>
    <groupId>org.spf4j</groupId>
    <artifactId>spf4j-ui</artifactId>
    <version>8.6.10</version>
</dependency>

4.2. Configuration of Output Files

SPF4J framework writes data into a time-series-database (TSDB) and can optionally also write to a text file.

Let’s configure both of them and set a system property spf4j.perf.ms.config:

public static void initialize() {
    String tsDbFile = System.getProperty("user.dir") + File.separator + "spf4j-performance-monitoring.tsdb2";
    String tsTextFile = System.getProperty("user.dir") + File.separator + "spf4j-performance-monitoring.txt";
    LOGGER.info("\nTime Series DB (TSDB) : {}\nTime Series text file : {}", tsDbFile, tsTextFile);
    System.setProperty("spf4j.perf.ms.config", "TSDB@" + tsDbFile + "," + "TSDB_TXT@" + tsTextFile);
}

4.3. Recorders and Sources

The SPF4J framework’s core capability is to record, aggregate, and save metrics, so that there is no post-processing needed when analyzing it. It does so by using the MeasurementRecorder and MeasurementRecorderSource classes.

These two classes provide two different ways to record a metric. The key difference is that MeasurementRecorder can be invoked from anywhere, whereas MeasurementRecorderSource is used only with annotations.

The framework provides us a RecorderFactory class to create instances of recorder and recorder source classes for different types of aggregations:

  • createScalableQuantizedRecorder() and createScalableQuantizedRecorderSource()
  • createScalableCountingRecorder() and createScalableCountingRecorderSource()
  • createScalableMinMaxAvgRecorder() and createScalableMinMaxAvgRecorderSource()
  • createDirectRecorder() and createDirectRecorderSource()

For our example, let’s choose scalable quantized aggregation.

4.4. Creating a Recorder

First, let’s create a helper method to create an instance of MeasurementRecorder:

public static MeasurementRecorder getMeasurementRecorder(Object forWhat) {
    String unitOfMeasurement = "ms";
    int sampleTimeMillis = 1_000;
    int factor = 10;
    int lowerMagnitude = 0;
    int higherMagnitude = 4;
    int quantasPerMagnitude = 10;

    return RecorderFactory.createScalableQuantizedRecorder(
      forWhat, unitOfMeasurement, sampleTimeMillis, factor, lowerMagnitude, 
      higherMagnitude, quantasPerMagnitude);
}

Let’s look at the different settings:

  • unitOfMeasurement – the unit value being measured – for a performance monitoring scenario, it is generally a unit of time
  • sampleTimeMillis – the time period for taking measurements – or in other words, how often to take measurements
  • factor – the base of the logarithmic scale used for plotting the measured value
  • lowerMagnitude – the minimum value on the logarithmic scale – for log base 10, lowerMagnitude = 0 means 10 to power 0 = 1
  • higherMagnitude – the maximum value on the logarithmic scale – for log base 10, higherMagnitude = 4 means 10 to power 4 = 10,000
  • quantasPerMagnitude – number of sections within a magnitude – if a magnitude ranges from 1,000 to 10,000, then quantasPerMagnitude = 10 means the range will be divided into 10 sub-ranges

We can see that the values can be changed as per our need. So, it might be a good idea to create separate MeasurementRecorder instances for different measurements.

4.5. Creating a Source

Next, let’s create an instance of MeasurementRecorderSource using another helper method:

public static final class RecorderSourceForIsPrimeNumber extends RecorderSourceInstance {
    public static final MeasurementRecorderSource INSTANCE;
    static {
        Object forWhat = App.class + " isPrimeNumber";
        String unitOfMeasurement = "ms";
        int sampleTimeMillis = 1_000;
        int factor = 10;
        int lowerMagnitude = 0;
        int higherMagnitude = 4;
        int quantasPerMagnitude = 10;
        INSTANCE = RecorderFactory.createScalableQuantizedRecorderSource(
          forWhat, unitOfMeasurement, sampleTimeMillis, factor, 
          lowerMagnitude, higherMagnitude, quantasPerMagnitude);
    }
}

Notice that we’ve used the same values for settings as previously.

4.6. Creating a Configuration Class

Let’s now create a handy Spf4jConfig class and put all the above methods inside it:

public class Spf4jConfig {
    public static void initialize() {
        //...
    }

    public static MeasurementRecorder getMeasurementRecorder(Object forWhat) {
        //...
    }

    public static final class RecorderSourceForIsPrimeNumber extends RecorderSourceInstance {
        //...
    }
}

4.7. Configuring aop.xml

SPF4J gives us the option to annotate the methods we want to measure and monitor. It uses the AspectJ library, which lets us add the behavior needed for performance monitoring to existing code without modifying the code itself.

Let's weave our class and aspect using the load-time weaver and put aop.xml under a META-INF folder:

<aspectj>
    <aspects>
        <aspect name="org.spf4j.perf.aspects.PerformanceMonitorAspect" />
    </aspects>
    <weaver options="-verbose">
        <include within="com..*" />
        <include within="org.spf4j.perf.aspects.PerformanceMonitorAspect" />
    </weaver>
</aspectj>

5. Using MeasurementRecorder

Let’s now see how to use the MeasurementRecorder to record the performance metrics of our test function.

5.1. Recording the Metrics

Let’s generate 100 random numbers and invoke the prime check method in a loop. Prior to this, let’s call our Spf4jConfig class to do the initialization and to create an instance of MeasureRecorder class. Using this instance, let’s call the record() method to save the individual time taken for 100 isPrimeNumber() calls:

Spf4jConfig.initialize();
MeasurementRecorder measurementRecorder = Spf4jConfig
  .getMeasurementRecorder(App.class + " isPrimeNumber");
Random random = new Random();
for (int i = 0; i < 100; i++) {
    long numberToCheck = random.nextInt(999_999_999 - 100_000_000 + 1) + 100_000_000;
    long startTime = System.currentTimeMillis();
    boolean isPrime = isPrimeNumber(numberToCheck);
    measurementRecorder.record(System.currentTimeMillis() - startTime);
    LOGGER.info("{}. {} is prime? {}", i + 1, numberToCheck, isPrime);
}

5.2. Running the Code

We’re now ready to test the performance of our simple function isPrimeNumber().

Let’s run the code and see the results:

Time Series DB (TSDB) : E:\Projects\spf4j-core-app\spf4j-performance-monitoring.tsdb2
Time Series text file : E:\Projects\spf4j-core-app\spf4j-performance-monitoring.txt
1. 406704834 is prime? false
...
9. 507639059 is prime? true
...
20. 557385397 is prime? true
...
26. 152042771 is prime? true
...
100. 841159884 is prime? false

5.3. Viewing the Results

Let’s launch the SPF4J UI by running the command from the project folder:

java -jar target/dependency-jars/spf4j-ui-8.6.10.jar

This will bring up a desktop UI application. Then, from the menu let’s choose File > Open. After that, let’s use the browse window to locate the spf4j-performance-monitoring.tsdb2 file and open it.

We can now see a new window open up with a tree view containing our file name and a child item. Let’s click on the child item and then click on the Plot button above it.

This will generate a series of graphs.

The first graph, measurement distribution, is a variation of the log-linear graph we saw earlier. This graph additionally shows a heatmap based on the count.

The second graph shows aggregated data like min, max, and average:

And the last graph shows the count of measurements vs time:

6. Using MeasurementRecorderSource

In the previous section, we had to write extra code around our functionality to record the measurements. In this section, let’s use another approach to avoid this.

6.1. Recording the Metrics

First, we’ll remove the extra code added for capturing and recording metrics:

Spf4jConfig.initialize();
Random random = new Random();
for (int i = 0; i < 50; i++) {
    long numberToCheck = random.nextInt(999_999_999 - 100_000_000 + 1) + 100_000_000;
    isPrimeNumber(numberToCheck);
}

Instead of all that boilerplate, next, let’s annotate the isPrimeNumber() method using @PerformanceMonitor:

@PerformanceMonitor(
  warnThresholdMillis = 1,
  errorThresholdMillis = 100, 
  recorderSource = Spf4jConfig.RecorderSourceForIsPrimeNumber.class)
private static boolean isPrimeNumber(long number) {
    //...
}

Let’s look at the different settings:

  • warnThresholdMillis – maximum time allowed for the method to run without a warning message
  • errorThresholdMillis – maximum time allowed for the method to run without an error message
  • recorderSource – an instance of MeasurementRecorderSource

6.2. Running the Code

Let’s do a Maven build first and then execute the code by passing a Java agent:

java -javaagent:target/dependency-jars/aspectjweaver-1.8.13.jar -jar target/spf4j-aspects-app.jar

We see the results:

Time Series DB (TSDB) : E:\Projects\spf4j-aspects-app\spf4j-performance-monitoring.tsdb2
Time Series text file : E:\Projects\spf4j-aspects-app\spf4j-performance-monitoring.txt

[DEBUG] Execution time 0 ms for execution(App.isPrimeNumber(..)), arguments [555031768]
...
[ERROR] Execution time  2826 ms for execution(App.isPrimeNumber(..)) exceeds error threshold of 100 ms, arguments [464032213]
...

We can see that the SPF4J framework logs the time taken for every method call, and whenever it exceeds the errorThresholdMillis value of 100 ms, it logs it as an error. The arguments passed to the method are also logged.

6.3. Viewing the Results

We can view the results the same way as we did earlier with the SPF4J UI, so we can refer to the previous section.

7. Conclusion

In this article, we talked about the basic concepts of capturing and visualizing metrics.

We then understood the performance monitoring capabilities of SPF4J framework with the help of a simple example. We also used the built-in UI tool to visualize the data.

As always, the examples from this article are available over on GitHub.

Breaking Out of Nested Loops


1. Overview

In this tutorial, we’ll create some examples to show different ways to use break within a loop. Next, we’ll also see how to terminate a loop without using break at all.

2. The Problem

Nested loops are very useful, for instance, to search in a list of lists.

One example would be a list of students, where each student has a list of planned courses. Let’s say we want to find the name of one person that planned course 0.

First, we’d loop over the list of students. Then, inside that loop, we’d loop over the list of planned courses.

When we print the names of the students and courses we’ll get the following result:

student 0
  course 0
  course 1
student 1
  course 0
  course 1

We wanted to find the first student that planned course 0. However, if we just use loops then the application will continue searching after the course is found.

After we find a person who planned the specific course, we want to stop searching. Continuing to search would take more time and resources while we don’t need the extra information. That’s why we want to break out of the nested loop.
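
Here's a sketch of that naive search; the Student type and its getters are hypothetical stand-ins for our model:

String firstMatch = "";
for (Student student : students) {                          // hypothetical Student type
    for (Integer courseId : student.getPlannedCourses()) {  // hypothetical getter
        if (courseId == 0) {
            firstMatch = student.getName(); // found it, yet both loops keep running
        }
    }
}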

3. Break

The first option we have for breaking out of a nested loop is to simply use the break statement:

String result = "";
for (int outerCounter = 0; outerCounter < 2; outerCounter++) {
    result += "outer" + outerCounter;
    for (int innerCounter = 0; innerCounter < 2; innerCounter++) {
        result += "inner" + innerCounter;
        if (innerCounter == 0) {
            break;
        }
    }
}
return result;

We have an outer loop and an inner loop, both with two iterations. If the counter of the inner loop equals 0, we execute the break command. When we run the example, it shows the following result:

outer0inner0outer1inner0

Or we could adjust the code to make it a bit more readable:

outer 0
  inner 0
outer 1
  inner 0

Is this what we want?

Almost: the inner loop is terminated by the break statement after 0 is found, but the outer loop continues, which is not what we want. We want to stop processing completely as soon as we have the answer.

4. Labeled Break

The previous example was a step in the right direction, but we need to improve it a bit. We can do that by using a labeled break:

String result = "";
myBreakLabel:
for (int outerCounter = 0; outerCounter < 2; outerCounter++) {
    result += "outer" + outerCounter;
    for (int innerCounter = 0; innerCounter < 2; innerCounter++) {
        result += "inner" + innerCounter;
        if (innerCounter == 0) {
            break myBreakLabel;
        }
    }
}
return result;

A labeled break will terminate the outer loop instead of just the inner loop. We achieve that by adding the myBreakLabel label before the outer loop and changing the break statement to break myBreakLabel. After we run the example, we get the following result:

outer0inner0

We can read it a bit better with some formatting:

outer 0
  inner 0

If we look at the result we can see that both the inner loop and the outer loop are terminated, which is what we wanted to achieve.

5. Return

As an alternative, we could also use the return statement to directly return the result when it’s found:

String result = "";
for (int outerCounter = 0; outerCounter < 2; outerCounter++) {
    result += "outer" + outerCounter;
    for (int innerCounter = 0; innerCounter < 2; innerCounter++) {
        result += "inner" + innerCounter;
        if (innerCounter == 0) {
            return result;
        }
    }
}
return "failed";

The label is removed and the break statement is replaced by a return statement.

When we execute the code above we get the same result as for the labeled break. Note that for this strategy to work, we typically need to move the block of loops into its own method.

6. Conclusion

So, we’ve just looked at what to do when we need to exit early from a loop, like when we’ve found the item we’re searching for. The break keyword is helpful for single loops, and we can use labeled breaks for nested loops.

Alternatively, we can use a return statement. Using return makes the code more readable and less error-prone, as we don't have to think about the difference between unlabeled and labeled breaks.

Feel free to have a look at the code over on GitHub.

Java Weekly, Issue 288


Here we go…

1. Spring and Java

>> Hiding Services & Runtime Discovery with Spring Cloud Gateway [spring.io]

A solid, ready-to-run example using Spring Cloud’s gateway and registry services. Good stuff.

>> Exercises in Programming Style: maps are objects too [blog.frankel.ch]

A functional solution to the word extraction problem seen previously in the series, this time using an immutable map in Kotlin.

>> Running Gradle inside Maven [andresalmiray.com]

And although Gradle builds don’t participate directly in the Maven reactor, you can execute a Gradle build within a multi-module Maven project with the right combination of plugins.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musing

>> How to use S3 POST signed URLs [advancedweb.hu]

An overview of how to use POST URLs to get around the shortcomings of their PUT counterparts.

>> Moving Control to the Endpoints [mnot.net]

And a look at the obstacles standing in the way of wider adoption of encrypted DNS solutions.

Also worth reading:

3. Comics

>> Measuring Excellence [dilbert.com]

>> When Wally Is Busy [dilbert.com]

>> Zombie Projects [dilbert.com]

4. Pick of the Week

A cool writeup from Datadog focused on how to actually get your log data processed:

>> How to collect, customize, and standardize Java logs [datadoghq.com]

If you’ve ever worked on any sufficiently large application, you know all too well that’s not a trivial task.

Check If a String Is a Valid Date in Java


1. Introduction

In this tutorial, we’ll discuss the various ways to check if a String contains a valid date in Java. We’ll discuss the solutions before Java 8, after Java 8, and using the Apache Commons Validator.

2. Date Validation Overview

Whenever we receive data in any application, we need to verify that it’s valid before doing any further processing.

In the case of date inputs, we may need to verify the following:

  • The input contains the date in a valid format, such as MM/DD/YYYY
  • The various parts of the input are in a valid range
  • The input resolves to a valid date in the calendar

We can use regular expressions to do the above. However, regular expressions to handle various input formats and locales are complex and error-prone. In addition, they can degrade performance.

We’ll discuss the different ways to implement date validations in a flexible, robust, and efficient manner.

First, let’s write an interface for the date validation:

public interface DateValidator {
   boolean isValid(String dateStr);
}

In the next sections, we’ll implement this interface using the various approaches.

3. Validate Using DateFormat

Java has provided facilities to format and parse dates since the beginning. This functionality is in the DateFormat abstract class and its implementation — SimpleDateFormat.

Let’s implement the date validation using the parse method of the DateFormat class:

public class DateValidatorUsingDateFormat implements DateValidator {
    private String dateFormat;

    public DateValidatorUsingDateFormat(String dateFormat) {
        this.dateFormat = dateFormat;
    }

    @Override
    public boolean isValid(String dateStr) {
        DateFormat sdf = new SimpleDateFormat(this.dateFormat);
        sdf.setLenient(false);
        try {
            sdf.parse(dateStr);
        } catch (ParseException e) {
            return false;
        }
        return true;
    }
}

Since the DateFormat and related classes are not thread-safe, we are creating a new instance for each method call.

Next, let’s write the unit test for this class:

DateValidator validator = new DateValidatorUsingDateFormat("MM/dd/yyyy");

assertTrue(validator.isValid("02/28/2019"));        
assertFalse(validator.isValid("02/30/2019"));

This has been the most common solution before Java 8.

4. Validate Using LocalDate

Java 8 introduced an improved Date and Time API. It added the LocalDate class, which represents the date without time. This class is immutable and thread-safe.

LocalDate provides two static methods to parse dates. Both of them use a DateTimeFormatter to do the actual parsing:

public static LocalDate parse(CharSequence text)
// parses dates using DateTimeFormatter.ISO_LOCAL_DATE

public static LocalDate parse(CharSequence text, DateTimeFormatter formatter)
// parses dates using the provided formatter

Let’s use the parse method to implement the date validation:

public class DateValidatorUsingLocalDate implements DateValidator {
    private DateTimeFormatter dateFormatter;
    
    public DateValidatorUsingLocalDate(DateTimeFormatter dateFormatter) {
        this.dateFormatter = dateFormatter;
    }

    @Override
    public boolean isValid(String dateStr) {
        try {
            LocalDate.parse(dateStr, this.dateFormatter);
        } catch (DateTimeParseException e) {
            return false;
        }
        return true;
    }
}

The implementation uses a DateTimeFormatter object for formatting. Since this class is thread-safe, we’re using the same instance across different method calls.

Let’s also add a unit test for this implementation:

DateTimeFormatter dateFormatter = DateTimeFormatter.BASIC_ISO_DATE;
DateValidator validator = new DateValidatorUsingLocalDate(dateFormatter);
        
assertTrue(validator.isValid("20190228"));
assertFalse(validator.isValid("20190230"));

5. Validate Using DateTimeFormatter

In the previous section, we saw that LocalDate uses a DateTimeFormatter object for parsing. We can also use the DateTimeFormatter class directly for formatting and parsing.

DateTimeFormatter parses a text in two phases. In Phase 1, it parses the text into various date and time fields based on the configuration. In Phase 2, it resolves the parsed fields into a date and/or time object.

The ResolverStyle attribute controls phase 2. It is an enum having three possible values:

  • LENIENT – resolves dates and times leniently
  • SMART – resolves dates and times in an intelligent manner
  • STRICT – resolves dates and times strictly
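
To see how the last two differ, here's a minimal sketch based on our reading of the resolution rules: SMART adjusts an out-of-range day-of-month to the last valid day, while STRICT rejects it:

DateTimeFormatter smart = DateTimeFormatter.ofPattern("uuuu-MM-dd", Locale.US)
    .withResolverStyle(ResolverStyle.SMART);
LocalDate.parse("2019-02-30", smart); // resolves to 2019-02-28

DateTimeFormatter strict = DateTimeFormatter.ofPattern("uuuu-MM-dd", Locale.US)
    .withResolverStyle(ResolverStyle.STRICT);
LocalDate.parse("2019-02-30", strict); // throws DateTimeParseException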

Now, let’s write the date validation using DateTimeFormatter directly:

public class DateValidatorUsingDateTimeFormatter implements DateValidator {
    private DateTimeFormatter dateFormatter;
    
    public DateValidatorUsingDateTimeFormatter(DateTimeFormatter dateFormatter) {
        this.dateFormatter = dateFormatter;
    }

    @Override
    public boolean isValid(String dateStr) {
        try {
            this.dateFormatter.parse(dateStr);
        } catch (DateTimeParseException e) {
            return false;
        }
        return true;
    }
}

Next, let’s add the unit test for this class:

DateTimeFormatter dateFormatter = DateTimeFormatter.ofPattern("uuuu-MM-dd", Locale.US)
    .withResolverStyle(ResolverStyle.STRICT);
DateValidator validator = new DateValidatorUsingDateTimeFormatter(dateFormatter);
        
assertTrue(validator.isValid("2019-02-28"));
assertFalse(validator.isValid("2019-02-30"));

In the above test, we’re creating a DateTimeFormatter based on pattern and locale. We are using the strict resolution for dates.

6. Validate Using Apache Commons Validator

The Apache Commons project provides a validation framework. This contains validation routines, such as date, time, numbers, currency, IP address, email, and URL.

For our goal in this article, let’s take a look at the GenericValidator class, which provides a couple of methods to check if a String contains a valid date:

public static boolean isDate(String value, Locale locale)
  
public static boolean isDate(String value,String datePattern, boolean strict)

To use the library, let’s add the commons-validator Maven dependency to our project:

<dependency>
    <groupId>commons-validator</groupId>
    <artifactId>commons-validator</artifactId>
    <version>1.6</version>
</dependency>

Next, let’s use the GenericValidator class to validate dates:

assertTrue(GenericValidator.isDate("2019-02-28", "yyyy-MM-dd", true));
assertFalse(GenericValidator.isDate("2019-02-29", "yyyy-MM-dd", true));

7. Conclusion

In this article, we looked at the various ways to check if a String contains a valid date. As usual, the full source code can be found over on GitHub.

Key Value Store with Chronicle Map


1. Overview

In this tutorial, we’re going to see how we can use the Chronicle Map for storing key-value pairs. We’ll also be creating short examples to demonstrate its behavior and usage.

2. What is a Chronicle Map?

Following the documentation, “Chronicle Map is a super-fast, in-memory, non-blocking, key-value store, designed for low-latency, and/or multi-process applications”.

In a nutshell, it’s an off-heap key-value store. The map doesn’t require a large amount of RAM for it to function properly. It can grow based on the available disk capacity. Furthermore, it supports replication of the data in a multi-master server setup.

Let’s now see how we can set up and work with it.

3. Maven Dependency

To get started, we’ll need to add the chronicle-map dependency to our project:

<dependency>
    <groupId>net.openhft</groupId>
    <artifactId>chronicle-map</artifactId>
    <version>3.17.2</version>
</dependency>

4. Types of Chronicle Map

We can create a map in two ways: either as an in-memory map or as a persisted map.

Let’s see both of these in detail.

4.1. In-Memory Map

An in-memory Chronicle Map is a map store that is created within the physical memory of the server. This means it’s accessible only within the JVM process in which the map store is created.

Let’s see a quick example:

ChronicleMap<LongValue, CharSequence> inMemoryCountryMap = ChronicleMap
  .of(LongValue.class, CharSequence.class)
  .name("country-map")
  .entries(50)
  .averageValue("America")
  .create();

For the sake of simplicity, we're creating a map that stores 50 country ids and their names. As we can see in the code snippet, the creation is pretty straightforward except for the averageValue() configuration, which sets the average number of bytes taken by map entry values.

In other words, when creating the map, the Chronicle Map determines the average number of bytes taken by the serialized form of values. It does this by serializing the given average value using the configured value marshallers. It will then allocate the determined number of bytes for the value of each map entry.

One thing we have to note when it comes to the in-memory map is that the data is accessible only when the JVM process is alive. The library will clear the data when the process terminates.

4.2. Persisted Map

Unlike an in-memory map, the implementation will save a persisted map to disk. Let’s now see how we can create a persisted map:

ChronicleMap<LongValue, CharSequence> persistedCountryMap = ChronicleMap
  .of(LongValue.class, CharSequence.class)
  .name("country-map")
  .entries(50)
  .averageValue("America")
  .createPersistedTo(new File(System.getProperty("user.home") + "/country-details.dat"));

This will create a file called country-details.dat in the folder specified. If this file is already available in the specified path, then the builder implementation will open a link to the existing data store from this JVM process.

We can make use of the persisted map in cases where we want it to:

  • survive beyond the creator process; for example, to support hot application redeployment
  • make it global in a server; for example, to support multiple concurrent process access
  • act as a data store that we’ll save to the disk

5. Size Configuration

It’s mandatory to configure the average value and average key while creating a Chronicle Map, except in the case where our key/value type is either a boxed primitive or a value interface. In our example, we’re not configuring the average key since the key type LongValue is a value interface.

Now, let’s see what the options are for configuring the average number of key/value bytes:

  • averageValue() – The value from which the average number of bytes to be allocated for the value of a map entry is determined
  • averageValueSize() – The average number of bytes to be allocated for the value of a map entry
  • constantValueSizeBySample() – The number of bytes to be allocated for the value of a map entry when the size of the value is always the same
  • averageKey() – The key from which the average number of bytes to be allocated for the key of a map entry is determined
  • averageKeySize() – The average number of bytes to be allocated for the key of a map entry
  • constantKeySizeBySample() – The number of bytes to be allocated for the key of a map entry when the size of the key is always the same
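
For instance, when every key has exactly the same size, a sketch of the sample-based constant-size option could look like this (the map name and sample key are our own, not from the original example):

ChronicleMap<String, CharSequence> isoCountryMap = ChronicleMap
  .of(String.class, CharSequence.class)
  .name("iso-country-map")
  .entries(50)
  .constantKeySizeBySample("GB") // every ISO alpha-2 country code is two characters
  .averageValue("America")
  .create();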

6. Key And Value Types

There are certain standards that we need to follow when creating a Chronicle Map, especially when defining the key and value. The map works best when we create the key and value using the recommended types.

Here are some of the recommended types:

  • Value interfaces
  • Any class implementing Byteable interface from Chronicle Bytes
  • Any class implementing BytesMarshallable interface from Chronicle Bytes; the implementation class should have a public no-arg constructor
  • byte[] and ByteBuffer
  • CharSequence, String, and StringBuilder
  • Integer, Long, and Double
  • Any class implementing java.io.Externalizable; the implementation class should have a public no-arg constructor
  • Any type implementing java.io.Serializable, including boxed primitive types (except those listed above) and array types
  • Any other type, if custom serializers are provided

7. Querying a Chronicle Map

Chronicle Map supports single-key queries as well as multi-key queries.

7.1. Single-Key Queries

Single-key queries are the operations that deal with a single key. ChronicleMap supports all the operations from the Java Map interface and ConcurrentMap interface:

LongValue qatarKey = Values.newHeapInstance(LongValue.class);
qatarKey.setValue(1);
inMemoryCountryMap.put(qatarKey, "Qatar");

//...

CharSequence country = inMemoryCountryMap.get(qatarKey);

In addition to the normal get and put operations, ChronicleMap adds a special operation, getUsing(), that reduces the memory footprint while retrieving and processing an entry. Let’s see this in action:

LongValue key = Values.newHeapInstance(LongValue.class);
StringBuilder country = new StringBuilder();
key.setValue(1);
persistedCountryMap.getUsing(key, country);
assertThat(country.toString(), is(equalTo("Romania")));

key.setValue(2);
persistedCountryMap.getUsing(key, country);
assertThat(country.toString(), is(equalTo("India")));

Here we’ve used the same StringBuilder object for retrieving values of different keys by passing it to the getUsing() method. It basically reuses the same object for retrieving different entries. In our case, the getUsing() method is equivalent to:

country.setLength(0);
country.append(persistedCountryMap.get(key));

7.2. Multi-Key Queries

There may be use cases where we need to deal with multiple keys at the same time. For this, we can use the queryContext() functionality. The queryContext() method will create a context for working with a map entry.

Let’s first create a multimap and add some values to it:

Set<Integer> averageValue = IntStream.of(1, 2).boxed().collect(Collectors.toSet());
ChronicleMap<Integer, Set<Integer>> multiMap = ChronicleMap
  .of(Integer.class, (Class<Set<Integer>>) (Class) Set.class)
  .name("multi-map")
  .entries(50)
  .averageValue(averageValue)
  .create();

Set<Integer> set1 = new HashSet<>();
set1.add(1);
set1.add(2);
multiMap.put(1, set1);

Set<Integer> set2 = new HashSet<>();
set2.add(3);
multiMap.put(2, set2);

To work with multiple entries, we have to lock those entries to prevent inconsistency that may occur due to a concurrent update:

try (ExternalMapQueryContext<Integer, Set<Integer>, ?> firstContext = multiMap.queryContext(1)) {
    try (ExternalMapQueryContext<Integer, Set<Integer>, ?> secondContext = multiMap.queryContext(2)) {
        firstContext.updateLock().lock();
        secondContext.updateLock().lock();

        MapEntry<Integer, Set<Integer>> firstEntry = firstContext.entry();
        Set<Integer> firstSet = firstEntry.value().get();
        firstSet.remove(2);

        MapEntry<Integer, Set<Integer>> secondEntry = secondContext.entry();
        Set<Integer> secondSet = secondEntry.value().get();
        secondSet.add(4);

        firstEntry.doReplaceValue(firstContext.wrapValueAsData(firstSet));
        secondEntry.doReplaceValue(secondContext.wrapValueAsData(secondSet));
    }
} finally {
    assertThat(multiMap.get(1).size(), is(equalTo(1)));
    assertThat(multiMap.get(2).size(), is(equalTo(2)));
}
    }
} finally {
    assertThat(multiMap.get(1).size(), is(equalTo(1)));
    assertThat(multiMap.get(2).size(), is(equalTo(2)));
}

8. Closing the Chronicle Map

Now that we’ve finished working with our maps, let’s call the close() method on our map objects to release the off-heap memory and the resources associated with it:

persistedCountryMap.close();
inMemoryCountryMap.close();
multiMap.close();

One thing to keep in mind here is that all the map operations must be completed before closing the map. Otherwise, the JVM might crash unexpectedly.

9. Conclusion

In this tutorial, we’ve learned how to use a Chronicle Map to store and retrieve key-value pairs. Even though the community version is available with most of the core functionalities, the commercial version has some advanced features like data replication across multiple servers and remote calls.

All the examples we’ve discussed here can be found over the Github project.

Checking for Empty or Blank Strings in Java


1. Introduction

In this tutorial, we’ll go through some ways of checking for empty or blank strings in Java. We’ve got some native language approaches as well as a couple of libraries.

2. Empty vs. Blank

It’s, of course, pretty common to know when a string is empty or blank, but let’s make sure we’re on the same page with our definitions.

We consider a string to be empty if it's either null or a string without any length. If a string consists of whitespace only, then we call it blank.

For Java, whitespaces are characters like spaces, tabs and so on. Have a look at Character.isWhitespace for examples.

3. Empty Strings

3.1. With Java 6 and Above

If we are at least on Java 6, then the simplest way to check for an empty string is String#isEmpty:

boolean isEmptyString(String string) {
    return string == null || string.isEmpty();
}

To make it null-safe, we've added an extra check for null, since String#isEmpty alone would throw a NullPointerException on a null input.

3.2. With Java 5 and Below

String#isEmpty was introduced with Java 6. For Java 5 and below, we can use String#length instead:

boolean isEmptyString(String string) {
    return string == null || string.length() == 0;
}

In fact, String#isEmpty is just a shortcut to String#length.

4. Blank Strings

Both String#isEmpty and String#length can be used to check for empty strings.

If we also want to detect blank strings, we can achieve this with the help of String#trim. It will remove all leading and trailing whitespace before performing the check:

boolean isBlankString(String string) {
    return string == null || string.trim().isEmpty();
}

To be precise, String#trim will remove all leading and trailing characters with a Unicode code less than or equal to U+0020.

And also remember that Strings are immutable, so calling trim won’t actually change the underlying string.
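
A two-line sketch shows this immutability in action:

String padded = "  baeldung  ";
padded.trim(); // returns a new String; the result is discarded here
System.out.println("[" + padded + "]"); // prints [  baeldung  ]; padded is unchanged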

5. Bean Validation

Another way to check for blank strings is to use regular expressions. This comes in handy, for instance, with Java Bean Validation:

@Pattern(regexp = "\\A(?!\\s*\\Z).+")
String someString;

The given regular expression ensures that empty or blank strings will not validate: \A anchors the match at the start of the input, the negative lookahead (?!\s*\Z) rejects input that is nothing but whitespace up to the end, and .+ requires at least one character.

6. With Apache Commons

If it’s ok to add dependencies, we can use Apache Commons Lang. This has a host of helpers for Java.

If we use Maven, we need to add the commons-lang3 dependency to our pom:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
</dependency>

Among other things, this gives us StringUtils.

This class comes with methods like isEmpty, isBlank and so on:

StringUtils.isBlank(string)

This call does the same as our own isBlankString method. It’s null-safe and also checks for whitespaces.

7. With Guava

Another well-known library that brings certain string-related utilities is Google's Guava. Starting with version 23.1, there are two flavors of Guava: android and jre. The Android flavor targets Android and Java 7, whereas the JRE flavor goes for Java 8.

If we’re not targeting Android, we can just add the JRE flavor to our pom:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>28.0-jre</version>
</dependency>

Guava's Strings class comes with a method Strings.isNullOrEmpty:

Strings.isNullOrEmpty(string)

It checks whether a given string is null or empty, but it will not check for whitespace-only strings.

8. Conclusion

There are several ways to check whether a string is empty or not. Often, we also want to check if a string is blank, meaning that it consists of only whitespace characters.

The most convenient way is to use Apache Commons Lang, which provides helpers such as StringUtils.isBlank. If we want to stick to plain Java, we can use a combination of String#trim with either String#isEmpty or String#length. For Bean Validation, regular expressions can be used instead.

Make sure to check out all these samples over on GitHub.

Checking If a String Is a Repeated Substring


1. Introduction

In this tutorial, we’ll show how we can check in Java if a String is a sequence of repeated substrings.

2. The Problem

Before we continue with the implementation, let’s set up some conditions. First, we’ll assume that our String has at least two characters. Second, there’s at least one repetition of a substring.

This is best illustrated with some examples by checking out a few repeated substrings:

"aa"
"ababab"
"barrybarrybarry"

And a few non-repeated ones:

"aba"
"cbacbac"
"carlosxcarlosy"

We’ll now show a few solutions to the problem.

3. A Naive Solution

Let’s implement the first solution.

The process is rather simple: we’ll check the String‘s length and eliminate the single character Strings at the very beginning.

Then, since the length of a repeated substring can't be larger than half of the string's length, we'll iterate through the first half of the String and create the substring in every iteration by appending the next character to the previous substring.

We’ll next remove those substrings from the original String and check if the length of the “stripped” one is zero. That would mean that it’s made only of its substrings:

public static boolean containsOnlySubstrings(String string) {

    if (string.length() < 2) {
        return false;
    }

    StringBuilder substr = new StringBuilder();
    for (int i = 0; i < string.length() / 2; i++) {
        substr.append(string.charAt(i));

        String clearedFromSubstrings = string.replaceAll(substr.toString(), "");

        if (clearedFromSubstrings.length() == 0) {
            return true;
        }
    }

    return false;
}

Let’s create some Strings to test our method:

String validString = "aa";
String validStringTwo = "ababab";
String validStringThree = "baeldungbaeldung";

String invalidString = "aca";
String invalidStringTwo = "ababa";
String invalidStringThree = "baeldungnonrepeatedbaeldung";

And, finally, we can easily check its validity:

assertTrue(containsOnlySubstrings(validString));
assertTrue(containsOnlySubstrings(validStringTwo));
assertTrue(containsOnlySubstrings(validStringThree));

assertFalse(containsOnlySubstrings(invalidString));
assertFalse(containsOnlySubstrings(invalidStringTwo));
assertFalse(containsOnlySubstrings(invalidStringThree));

Although this solution works, it's not very efficient, since we iterate through half of the String and call the replaceAll() method in every iteration.

Obviously, this comes at a cost in performance. It'll run in O(n^2) time.

4. The Efficient Solution

Now, we’ll illustrate another approach.

Namely, we should make use of the fact that a String is made of the repeated substrings if and only if it’s a nontrivial rotation of itself.

The rotation here means that we remove some characters from the beginning of the String and put them at the end. For example, “eldungba” is the rotation of “baeldung”. If we rotate a String and get the original one, then we can apply this rotation over and over again and get the String consisting of the repeated substrings.

Next, we need to check if this is the case with our example. To accomplish this, we’ll make use of the theorem which says that if String A and String B have the same length, then A is a rotation of B if and only if A is a substring of BB. If we go with the example from the previous paragraph, we can confirm this theorem: “eldungba” is indeed a substring of “baeldungbaeldung”.

Since we know that our String A will always be a substring of AA, we then only need to check if the String A is a substring of AA excluding the first character:

public static boolean containsOnlySubstringsEfficient(String string) {
    return ((string + string).indexOf(string, 1) != string.length());
}

We can test this method the same way as the previous one. This time, we have O(n) time complexity.
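
For instance, a quick sketch reusing the sample Strings from the naive test:

assertTrue(containsOnlySubstringsEfficient("ababab"));
assertTrue(containsOnlySubstringsEfficient("baeldungbaeldung"));

assertFalse(containsOnlySubstringsEfficient("ababa"));
assertFalse(containsOnlySubstringsEfficient("baeldungnonrepeatedbaeldung"));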

We can find some useful theorems about the topic in String analysis research.

5. Conclusion

In this article, we illustrated two ways of checking if a String consists only of its substrings in Java.

All code samples used in the article are available over on GitHub.


Java Multi-line String

1. Overview

Since there is no native multi-line string construct in Java yet, it's a little bit tricky to create and use multi-line strings.

In this tutorial, we'll walk through several methods to create and use multi-line strings in Java.

2. Getting the Line Separator

Each operating system can have its own way of defining and recognizing new lines. In Java, it's very easy to get the operating system line separator:

String newLine = System.getProperty("line.separator");

We're going to use this newLine in the following sections to create multi-line strings.
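
As a side note, since Java 7 we can obtain the same value directly via System.lineSeparator(), without the property lookup by name:

String newLine = System.lineSeparator();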

3. String Concatenation

String concatenation is a straightforward native approach we can use to create multi-line strings:

public String stringConcatenation() {
    return "Get busy living"
            .concat(newLine)
            .concat("or")
            .concat(newLine)
            .concat("get busy dying.")
            .concat(newLine)
            .concat("--Stephen King");
}

Using the + operator is another way of achieving the same thing. Note that the compiler doesn't turn + into concat() calls; it typically compiles the + operator into StringBuilder operations (or, since Java 9, into a single invokedynamic-based concatenation), but the result is equivalent:

public String stringConcatenation() {
    return "Get busy living"
            + newLine
            + "or"
            + newLine
            + "get busy dying."
            + newLine
            + "--Stephen King";
}

4. String Join

Java 8 introduced String#join, which takes a delimiter along with some strings as arguments. It returns a final string having all input strings joined together with the delimiter:

public String stringJoin() {
    return String.join(newLine,
                       "Get busy living",
                       "or",
                       "get busy dying.",
                       "--Stephen King");
}

5. String Builder

StringBuilder is a helper class for building Strings. It was introduced in Java 5 as an unsynchronized alternative to StringBuffer, and it's a good choice for building huge strings in a loop:

public String stringBuilder() {
    return new StringBuilder()
            .append("Get busy living")
            .append(newLine)
            .append("or")
            .append(newLine)
            .append("get busy dying.")
            .append(newLine)
            .append("--Stephen King")
            .toString();
}

6. String Writer

StringWriter is another class we can utilize to create a multi-line string. We don't need newLine here, because we wrap the writer in a PrintWriter, whose println method automatically adds the line separator:

public String stringWriter() {
    StringWriter stringWriter = new StringWriter();
    PrintWriter printWriter = new PrintWriter(stringWriter);
    printWriter.println("Get busy living");
    printWriter.println("or");
    printWriter.println("get busy dying.");
    printWriter.println("--Stephen King");
    return stringWriter.toString();
}

7. Guava Joiner

Using an external library just for a simple task like this doesn't make much sense. However, if the project already uses the library for other purposes, we can utilize it. For example, Google's Guava library is very popular, and it has a Joiner class that is able to build multi-line strings:

public String guavaJoiner() {
    return Joiner.on(newLine).join(ImmutableList.of("Get busy living",
        "or",
        "get busy dying.",
        "--Stephen King"));
}

8. Loading from a File

Java reads files exactly as they are. This means that if we have a multi-line string in a text file, we'll get the same string when we read the file. There are a lot of ways to read from a file in Java.

Actually, it's a good practice to separate long strings from code:

public String loadFromFile() throws IOException {
    return new String(Files.readAllBytes(Paths.get("src/main/resources/stephenking.txt")));
}
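
As a side note, if we can assume Java 11 or later, Files.readString makes this a one-liner:

public String loadFromFile() throws IOException {
    return Files.readString(Paths.get("src/main/resources/stephenking.txt"));
}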

9. Using IDE Features

Many modern IDEs support multi-line copy/paste. Eclipse and IntelliJ IDEA are examples of such IDEs. We can simply copy our multi-line string and paste it between two double quotes in these IDEs.

Obviously, this method doesn't work for string creation at runtime, but it's a quick and easy way to get a multi-line string.

10. Conclusion

In this tutorial, we learned several methods to build multi-line strings in Java.

The good news is that Java 13 is set to bring native support for multi-line strings via Text Blocks (initially as a preview feature). Needless to say, all the methods above will still work in Java 13.

The code for all the methods in this article is available over on GitHub.

Why Choose Spring as Your Java Framework?

1. Overview

In this article, we’ll go through the main value proposition of Spring as one of the most popular Java frameworks.

More importantly, we’ll try to understand the reasons for Spring being our framework of choice. Details of Spring and its constituent parts have been widely covered in our previous tutorials. Hence, we’ll skip the introductory “how” parts and mostly focus on the “why”s.

2. Why Use Any Framework?

Before we begin any discussion on Spring in particular, let’s first understand why we need to use any framework at all.

A general-purpose programming language like Java is capable of supporting a wide variety of applications. Not to mention that Java is actively being worked upon and improving every day.

Moreover, there are countless open source and proprietary libraries to support Java in this regard.

So why do we need a framework after all? Honestly, it isn’t absolutely necessary to use a framework to accomplish a task. But, it’s often advisable to use one for several reasons:

  • Helps us focus on the core task rather than the boilerplate associated with it
  • Brings together years of wisdom in the form of design patterns
  • Helps us adhere to the industry and regulatory standards
  • Brings down the total cost of ownership for the application

We’ve just scratched the surface here, and we must say that the benefits are difficult to ignore. But it can’t be all positives, so what’s the catch:

  • Forces us to write an application in a specific manner
  • Binds to a specific version of language and libraries
  • Adds to the resource footprint of the application

Frankly, there are no silver bullets in software development, and frameworks are certainly no exception to that. So the choice of which framework to use, or none at all, should be driven by the context.

Hopefully, we’ll be better placed to make this decision with respect to Spring in Java by the end of this article.

3. Brief Overview of Spring Ecosystem

Before we begin our qualitative assessment of the Spring Framework, let’s take a closer look at what the Spring ecosystem looks like.

Spring came into existence around 2003, at a time when Java Enterprise Edition was evolving fast and developing an enterprise application was exciting but nonetheless tedious!

Spring started out as an Inversion of Control (IoC) container for Java. We still relate Spring mostly to it, and in fact, it forms the core of the framework and of the other projects that have been developed on top of it.

3.1. Spring Framework

The Spring framework is divided into modules, which makes it really easy to pick and choose the parts to use in any application:

  • Core: Provides core features like DI (Dependency Injection), Internationalisation, Validation, and AOP (Aspect Oriented Programming)
  • Data Access: Supports data access through JTA (Java Transaction API), JPA (Java Persistence API), and JDBC (Java Database Connectivity)
  • Web: Supports both the Servlet API (Spring MVC) and, more recently, the Reactive API (Spring WebFlux), and additionally supports WebSockets, STOMP, and WebClient
  • Integration: Supports integration to Enterprise Java through JMS (Java Message Service), JMX (Java Management Extension), and RMI (Remote Method Invocation)
  • Testing: Wide support for unit and integration testing through Mock Objects, Test Fixtures, Context Management, and Caching

3.2. Spring Projects

But what makes Spring much more valuable is a strong ecosystem that has grown around it over the years and that continues to evolve actively. These are structured as Spring projects which are developed on top of the Spring framework.

Although the list of Spring projects is a long one and it keeps changing, there are a few worth mentioning:

  • Boot: Provides us with a set of highly opinionated but extensible templates for creating various projects based on Spring in almost no time. It makes it really easy to create standalone Spring applications with embedded Tomcat or a similar container.
  • Cloud: Provides support to easily develop some of the common distributed system patterns like service discovery, circuit breaker, and API gateway. It helps us cut down the effort to deploy such boilerplate patterns in local, remote or even managed platforms.
  • Security: Provides a robust mechanism to develop authentication and authorization for projects based on Spring in a highly customizable manner. With minimal declarative support, we get protection against common attacks like session fixation, click-jacking, and cross-site request forgery.
  • Mobile: Provides capabilities to detect the device and adapt the application behavior accordingly. Additionally, supports device-aware view management for optimal user experience, site preference management, and site switcher.
  • Batch: Provides a lightweight framework for developing batch applications for enterprise systems like data archival. Has intuitive support for scheduling, restart, skipping, collecting metrics, and logging. Additionally, supports scaling up for high-volume jobs through optimization and partitioning.

Needless to say, this is quite an abstract introduction to what Spring has to offer. But it gives us enough ground with respect to Spring’s organization and breadth to take our discussion further.

4. Spring in Action

It is customary to add a hello-world program to understand any new technology.

Let’s see how Spring can make it a cakewalk to write a program which does more than just hello-world. We’ll create an application that will expose CRUD operations as REST APIs for a domain entity like Employee backed by an in-memory database. What’s more, we’ll protect our mutation endpoints using basic auth. Finally, no application can really be complete without good, old unit tests.

4.1. Project Set-up

We’ll set up our Spring Boot project using Spring Initializr, which is a convenient online tool to bootstrap projects with the right dependencies. We’ll add Web, JPA, H2, and Security as project dependencies to get the Maven configuration set-up correctly.
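
As a quick sketch, the generated pom.xml would contain starters along these lines (the versions are managed by the Spring Boot parent):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>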

More details on bootstrapping are available in one of our previous articles.

4.2. Domain Model and Persistence

With so little to be done, we are already ready to define our domain model and persistence.

Let’s first define the Employee as a simple JPA entity:

@Entity
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @NotNull
    private String firstName;
    @NotNull
    private String lastName;
    // Standard constructor, getters and setters
}

Note the auto-generated id we’ve included in our entity definition.

Now we have to define a JPA repository for our entity. This is where Spring makes it really simple:

public interface EmployeeRepository 
  extends CrudRepository<Employee, Long> {
    List<Employee> findAll();
}

All we have to do is define an interface like this, and Spring JPA will provide us with an implementation fleshed out with default and custom operations. Quite neat! Find more details on working with Spring Data JPA in our other articles.
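
For example, simply by following Spring Data’s method naming conventions, we could declare a hypothetical derived query in the interface and have its implementation generated for us:

List<Employee> findByLastName(String lastName);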

4.3. Controller

Now we have to define a web controller to route and handle our incoming requests:

@RestController
public class EmployeeController {
    @Autowired
    private EmployeeRepository repository;
    @GetMapping("/employees")
    public List<Employee> getEmployees() {
        return repository.findAll();
    }
    // Other CRUD endpoints handlers
}

Really, all we had to do was annotate the class and define routing meta information along with each handler method.

Working with Spring REST controllers is covered in great detail in our previous article.

4.4. Security

So we have defined everything now, but what about securing operations like create or delete employees? We don’t want unauthenticated access to those endpoints!

Spring Security really shines in this area:

@EnableWebSecurity
public class WebSecurityConfig 
  extends WebSecurityConfigurerAdapter {
 
    @Override
    protected void configure(HttpSecurity http) 
      throws Exception {
        http
          .authorizeRequests()
            .antMatchers(HttpMethod.GET, "/employees", "/employees/**")
            .permitAll()
          .anyRequest()
            .authenticated()
          .and()
            .httpBasic();
    }
    // other necessary beans and definitions
}

There are more details here that require attention to understand fully, but the most important point to note is the declarative manner in which we’ve allowed only GET operations to remain unrestricted.
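
As a sketch of the omitted definitions, we might register a single in-memory user for basic auth (the username, password, and role here are just placeholders):

@Bean
@Override
public UserDetailsService userDetailsService() {
    UserDetails user = User.withDefaultPasswordEncoder()
      .username("user")
      .password("password")
      .roles("USER")
      .build();
    return new InMemoryUserDetailsManager(user);
}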

4.5. Testing

Now we’ve done everything, but wait, how do we test this?

Let’s see if Spring can make it easy to write unit tests for REST controllers:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureMockMvc
public class EmployeeControllerTests {
    @Autowired
    private MockMvc mvc;
    @Test
    @WithMockUser()
    public void givenNoEmployee_whenCreateEmployee_thenEmployeeCreated() throws Exception {
        mvc.perform(post("/employees")
          .content(new ObjectMapper().writeValueAsString(new Employee("First", "Last")))
          .with(csrf())
          .contentType(MediaType.APPLICATION_JSON)
          .accept(MediaType.APPLICATION_JSON))
          .andExpect(MockMvcResultMatchers.status()
            .isCreated())
          .andExpect(jsonPath("$.firstName", is("First")))
          .andExpect(jsonPath("$.lastName", is("Last")));
    }
    // other tests as necessary
}

As we can see, Spring provides us with the necessary infrastructure to write simple unit and integration tests which otherwise depend on the Spring context to be initialized and configured.

4.6. Running the Application

Finally, how do we run this application? This is another interesting aspect of Spring Boot. Although we could package it as a regular application and deploy it traditionally on a Servlet container, where is the fun in that?

Spring Boot comes with an embedded Tomcat server:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

This is a class which comes pre-created as part of the bootstrap and has all the necessary details to start this application using the embedded server.

Moreover, this is highly customizable.

5. Alternatives to Spring

While choosing to use a framework is relatively easy, choosing between frameworks can often be daunting given the choices we have. But for that, we must have at least a rough understanding of the alternatives available for the features that Spring has to offer.

As we discussed previously, the Spring framework together with its projects offer a wide choice for an enterprise developer to pick from. If we do a quick assessment of contemporary Java frameworks, they don’t even come close to the ecosystem that Spring provides us.

However, for specific areas, they do make a compelling argument as alternatives:

  • Guice: Offers a robust IoC container for Java applications
  • Play: Quite aptly fits in as a Web framework with reactive support
  • Hibernate: An established framework for data access with JPA support

Other than these, there are some recent additions that offer support broader than a specific domain but still do not cover everything that Spring has to offer:

  • Micronaut: A JVM-based framework tailored towards cloud-native microservices
  • Quarkus: A new age Java stack which promises to deliver faster boot time and a smaller footprint

Obviously, it’s neither necessary nor feasible to iterate over the list completely but we do get the broad idea here.

6. So, Why Choose Spring?

Finally, we’ve built all the required context to address our central question, why Spring? We understand the ways a framework can help us in developing complex enterprise applications.

Moreover, we do understand the options we’ve got for specific concerns like web, data access, integration in terms of framework, especially for Java.

Now, where does Spring shine among all these? Let’s explore.

6.1. Usability

One of the key aspects of any framework’s popularity is how easy it is for developers to use. Through multiple configuration options and Convention over Configuration, Spring makes it really easy for developers to start a project and then configure exactly what they need.

Projects like Spring Boot have made bootstrapping a complex Spring project almost trivial. Not to mention, it has excellent documentation and tutorials to help anyone get on-boarded.

6.2. Modularity

Another key aspect of Spring’s popularity is its highly modular nature. We have the option to use the entire Spring framework or just the modules necessary. Moreover, we can optionally include one or more Spring projects depending upon the need.

What’s more, we’ve got the option to use other frameworks like Hibernate or Struts as well!

6.3. Conformance

Although Spring does not support all of the Java EE specifications, it supports many of their technologies, often improving the support over the standard specification where necessary. For instance, Spring supports JPA-based repositories and hence makes it trivial to switch providers.

Moreover, Spring supports industry specifications like Reactive Streams under Spring Web Reactive and HATEOAS under Spring HATEOAS.

6.4. Testability

Adoption of any framework also largely depends on how easy it is to test an application built on top of it. Spring at its core advocates and supports Test-Driven Development (TDD).

A Spring application is mostly composed of POJOs, which naturally makes unit testing relatively much simpler. However, Spring does provide mock objects for scenarios like MVC, where unit testing would otherwise get complicated.

6.5. Maturity

Spring has a long history of innovation, adoption, and standardization. Over the years, it’s matured into a default solution for the most common problems faced in the development of large-scale enterprise applications.

What’s even more exciting is how actively it’s being developed and maintained. Support for new language features and enterprise integration solutions are being developed every day.

6.6. Community Support

Last but not least, any framework or even library survives in the industry through innovation, and there’s no better place for innovation than the community. Spring is an open-source project led by Pivotal Software and backed by a large consortium of organizations and individual developers.

This has meant that it remains contextual and often futuristic, as evidenced by the number of projects under its umbrella.

7. Reasons Not to Use Spring

There is a wide variety of applications that can benefit from different levels of Spring usage, and that variety is changing as fast as Spring is growing.

However, we must understand that Spring like any other framework is helpful in managing the complexity of application development. It helps us to avoid common pitfalls and keeps the application maintainable as it grows over time.

This comes at the cost of an additional resource footprint and learning curve, however small those may be. If there really is an application that is simple enough and not expected to grow complex, perhaps it may benefit more from not using any framework at all!

8. Conclusion

In this article, we discussed the benefits of using a framework in application development. We then briefly discussed the Spring Framework in particular.

While on the subject, we also looked into some of the alternate frameworks available for Java.

Finally, we discussed the reasons which can compel us to choose Spring as the framework of choice for Java.

We should end this article with a note of advice, though. However compelling it may sound, there is usually no single, one-size-fits-all solution in software development.

Hence, we must apply our wisdom in selecting the simplest of solutions for the specific problems we target to solve.

Setting the MySQL JDBC Timezone Using Spring Boot Configuration

1. Overview

Sometimes, when we’re storing dates in MySQL, we realize that the date from the database is different from our system or JVM.

Other times, we just need to run our app with another timezone.

In this tutorial, we’re going to see different ways to change the timezone of MySQL using Spring Boot configuration.

2. Timezone as a URL Param

One way we can specify the timezone is in the connection URL string as a parameter.

By default, the MySQL JDBC driver uses useLegacyDatetimeCode=true. In order to select our timezone, we have to change this property to false. And of course, we also add the serverTimezone property to specify the timezone:

spring:
  datasource:
    url: jdbc:mysql://localhost:3306/test?serverTimezone=UTC&useLegacyDatetimeCode=false
    username: root
    password:

Also, we can, of course, configure the datasource with Java configuration instead.
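
A minimal sketch of such a Java configuration, using Spring Boot’s DataSourceBuilder and mirroring the URL above:

@Bean
public DataSource dataSource() {
    return DataSourceBuilder.create()
      .url("jdbc:mysql://localhost:3306/test?serverTimezone=UTC&useLegacyDatetimeCode=false")
      .username("root")
      .password("")
      .build();
}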

We have more information about this property and others in the MySQL official documentation.

3. Spring Boot Property

Or, instead of indicating the timezone via the serverTimezone URL parameter, we can specify the time_zone property in our Spring Boot configuration:

spring.jpa.properties.hibernate.jdbc.time_zone=UTC

Or with YAML:

spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          time_zone: UTC

But, it’s still necessary to add useLegacyDatetimeCode=false in the URL as we’ve seen before.

4. JVM Default Timezone

And of course, we can update the default timezone that Java has.

Again, we add useLegacyDatetimeCode=false in the URL as before. And then we just need to add a simple method:

@PostConstruct
void started() {
  TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
}

But, this solution could generate other problems since it’s application-wide. Perhaps other parts of the applications need another timezone. For example, we may need to connect to different databases and they, for some reason, need dates to be stored in different timezones.
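
As a side note, instead of setting the default in code, we can pass the standard user.timezone system property when starting the JVM (the jar name here is just a placeholder):

java -Duser.timezone=UTC -jar app.jar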

5. Conclusion

In this tutorial, we saw a few different ways to configure the MySQL JDBC timezone in Spring. We did it with a URL param, with a property, and by changing the JVM default timezone.

The full set of examples is over on GitHub.

A Guide to NanoHTTPD

1. Introduction

NanoHTTPD is an open-source, lightweight web server written in Java.

In this tutorial, we’ll create a few REST APIs to explore its features.

2. Project Setup

Let’s add the NanoHTTPD core dependency to our pom.xml:

<dependency>
    <groupId>org.nanohttpd</groupId>
    <artifactId>nanohttpd</artifactId>
    <version>2.3.1</version>
</dependency>

To create a simple server, we need to extend NanoHTTPD and override its serve method:

public class App extends NanoHTTPD {
    public App() throws IOException {
        super(8080);
        start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
    }

    public static void main(String[] args ) throws IOException {
        new App();
    }

    @Override
    public Response serve(IHTTPSession session) {
        return newFixedLengthResponse("Hello world");
    }
}

We defined 8080 as our running port and configured the server to run as a daemon (no read timeout).

Once we start the application, the URL http://localhost:8080/ will return the Hello world message. We’re using the NanoHTTPD#newFixedLengthResponse method as a convenient way of building a NanoHTTPD.Response object.

Let’s try our project with cURL:

> curl 'http://localhost:8080/'
Hello world

3. REST API

In terms of HTTP methods, NanoHTTPD allows GET, POST, PUT, DELETE, HEAD, TRACE, and several others.

Simply put, we can find the supported HTTP verbs via the Method enum. Let’s see how this plays out.

3.1. HTTP GET

First, let’s take a look at GET. Say, for example, that we want to return content only when the application receives a GET request.

Unlike Java Servlet containers, we don’t have a doGet method available – instead, we just check the value via getMethod:

@Override
public Response serve(IHTTPSession session) {
    if (session.getMethod() == Method.GET) {
        String itemIdRequestParameter = session.getParameters().get("itemId").get(0);
        return newFixedLengthResponse("Requested itemId = " + itemIdRequestParameter);
    }
    return newFixedLengthResponse(Response.Status.NOT_FOUND, MIME_PLAINTEXT, 
        "The requested resource does not exist");
}

That was pretty simple, right? Let’s run a quick test by curling our new endpoint and see that the request parameter itemId is read correctly:

> curl 'http://localhost:8080/?itemId=23Bk8'
Requested itemId = 23Bk8

3.2. HTTP POST

We previously reacted to a GET and read a parameter from the URL.

In order to cover the two most popular HTTP methods, it’s time for us to handle a POST (and thus read the request body):

@Override
public Response serve(IHTTPSession session) {
    if (session.getMethod() == Method.POST) {
        try {
            session.parseBody(new HashMap<>());
            String requestBody = session.getQueryParameterString();
            return newFixedLengthResponse("Request body = " + requestBody);
        } catch (IOException | ResponseException e) {
            // handle
        }
    }
    return newFixedLengthResponse(Response.Status.NOT_FOUND, MIME_PLAINTEXT, 
      "The requested resource does not exist");
}

Notice that, before asking for the request body, we first called the parseBody method. That’s because we wanted to load the request body for later retrieval.

We’ll include a body in our cURL command:

> curl -X POST -d 'deliveryAddress=Washington nr 4&quantity=5' 'http://localhost:8080/'
Request body = deliveryAddress=Washington nr 4&quantity=5

The remaining HTTP methods are very similar in nature, so we’ll skip those.

4. Cross-Origin Resource Sharing

Using CORS, we enable cross-domain communication. The most common use case is AJAX calls from a different domain.

The first approach that we can use is to enable CORS for all our APIs. Using the --cors argument, we’ll allow access to all domains. We can also define which domains we allow with --cors="http://dashboard.myapp.com http://admin.myapp.com".

The second approach is to enable CORS for individual APIs. Let’s see how to use addHeader to achieve this:

@Override
public Response serve(IHTTPSession session) {
    Response response = newFixedLengthResponse("Hello world");
    response.addHeader("Access-Control-Allow-Origin", "*");
    return response;
}

Now when we cURL, we’ll get our CORS header back:

> curl -v 'http://localhost:8080'
HTTP/1.1 200 OK 
Content-Type: text/html
Date: Thu, 13 Jun 2019 03:58:14 GMT
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 11

Hello world

5. File Upload

NanoHTTPD has a separate dependency for file uploads, so let’s add it to our project:

<dependency>
    <groupId>org.nanohttpd</groupId>
    <artifactId>nanohttpd-apache-fileupload</artifactId>
    <version>2.3.1</version>
</dependency>
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.1</version>
    <scope>provided</scope>
</dependency>

Please note that the servlet-api dependency is also needed (otherwise we’ll get a compilation error).

What NanoHTTPD exposes is a class called NanoFileUpload:

@Override
public Response serve(IHTTPSession session) {
    try {
        List<FileItem> files
          = new NanoFileUpload(new DiskFileItemFactory()).parseRequest(session);
        int uploadedCount = 0;
        for (FileItem file : files) {
            try {
                String fileName = file.getName();
                byte[] fileContent = file.get();
                Files.write(Paths.get(fileName), fileContent);
                uploadedCount++;
            } catch (Exception exception) {
                // handle
            }
        }
        return newFixedLengthResponse(Response.Status.OK, MIME_PLAINTEXT,
          "Uploaded files " + uploadedCount + " out of " + files.size());
    } catch (IOException | FileUploadException e) {
        return newFixedLengthResponse(
          Response.Status.BAD_REQUEST, MIME_PLAINTEXT, "Error when uploading");
    }
}

Hey, let’s try it out:

> curl -F 'filename=@/pathToFile.txt' 'http://localhost:8080'
Uploaded files 1 out of 1

6. Multiple Routes

A nanolet is like a servlet but has a very low profile. We can use them to define many routes served by a single server (unlike previous examples with one route).

Firstly, let’s add the required dependency for nanolets:

<dependency>
    <groupId>org.nanohttpd</groupId>
    <artifactId>nanohttpd-nanolets</artifactId>
    <version>2.3.1</version>
</dependency>

And now we’ll have our main class extend RouterNanoHTTPD, define the running port, and have the server run as a daemon.

The addMappings method is where we’ll define our handlers:

public class MultipleRoutesExample extends RouterNanoHTTPD {
    public MultipleRoutesExample() throws IOException {
        super(8080);
        addMappings();
        start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
    }
 
    @Override
    public void addMappings() {
        // todo fill in the routes
    }
}

The next step is to define our addMappings method. Let’s define a few handlers. 

The first one is an IndexHandler class mapped to the “/” path. This class comes with the NanoHTTPD library and returns a Hello World message by default. We can override the getText method when we want a different response, as sketched a bit further below:

addRoute("/", IndexHandler.class); // inside addMappings method

And to test our new route we can do:

> curl 'http://localhost:8080' 
<html><body><h2>Hello world!</h3></body></html>
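
If we wanted a different message, a minimal, hypothetical override could look like this (we’d then register CustomIndexHandler instead of IndexHandler):

public static class CustomIndexHandler extends IndexHandler {
    @Override
    public String getText() {
        return "<html><body><h2>Welcome!</h2></body></html>";
    }
}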

Secondly, let’s create a new UserHandler class which extends the existing DefaultHandler. The route for it will be /users. Here we played around with the text, MIME type, and the status code returned:

public static class UserHandler extends DefaultHandler {
    @Override
    public String getText() {
        return "UserA, UserB, UserC";
    }

    @Override
    public String getMimeType() {
        return MIME_PLAINTEXT;
    }

    @Override
    public Response.IStatus getStatus() {
        return Response.Status.OK;
    }
}

To call this route we’ll issue a cURL command again:

> curl -X POST 'http://localhost:8080/users' 
UserA, UserB, UserC

Finally, we can explore the GeneralHandler with a new StoreHandler class. We modified the returned message to include the storeId section of the URL:

public static class StoreHandler extends GeneralHandler {
    @Override
    public Response get(
      UriResource uriResource, Map<String, String> urlParams, IHTTPSession session) {
        return newFixedLengthResponse("Retrieving store for id = "
          + urlParams.get("storeId"));
    }
}

Let’s check our new API:

> curl 'http://localhost:8080/stores/123' 
Retrieving store for id = 123

7. HTTPS

In order to use HTTPS, we’ll need a certificate. Please refer to our article on SSL for more in-depth information.

We could use a service like Let’s Encrypt or we can simply generate a self-signed certificate as follows:

> keytool -genkey -keyalg RSA -alias selfsigned \
  -keystore keystore.jks -storepass password \
  -keysize 2048 -ext SAN=DNS:localhost,IP:127.0.0.1 -validity 9999

Next, we’d copy this keystore.jks to a location on our classpath, like say the src/main/resources folder of a Maven project.

After that, we can reference it in a call to NanoHTTPD#makeSSLSocketFactory:

public class HttpsExample  extends NanoHTTPD {

    public HttpsExample() throws IOException {
        super(8443);
        makeSecure(NanoHTTPD.makeSSLSocketFactory(
          "/keystore.jks", "password".toCharArray()), null);
        start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
    }

    // main and serve methods
}

And now we can try it out. Please notice the use of the --insecure parameter, because cURL won’t be able to verify our self-signed certificate by default:

> curl --insecure 'https://localhost:8443'
HTTPS call is a success

8. WebSockets

NanoHTTPD supports WebSockets.

Let’s create the simplest implementation of a WebSocket. For this, we’ll need to extend the NanoWSD class. We’ll also need to add the NanoHTTPD dependency for WebSocket:

<dependency>
    <groupId>org.nanohttpd</groupId>
    <artifactId>nanohttpd-websocket</artifactId>
    <version>2.3.1</version>
</dependency>

For our implementation, we’ll just reply with a simple text payload:

public class WsdExample extends NanoWSD {
    public WsdExample() throws IOException {
        super(8080);
        start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
    }

    public static void main(String[] args) throws IOException {
        new WsdExample();
    }

    @Override
    protected WebSocket openWebSocket(IHTTPSession ihttpSession) {
        return new WsdSocket(ihttpSession);
    }

    private static class WsdSocket extends WebSocket {
        public WsdSocket(IHTTPSession handshakeRequest) {
            super(handshakeRequest);
        }

        //override onOpen, onClose, onPong and onException methods

        @Override
        protected void onMessage(WebSocketFrame webSocketFrame) {
            try {
                send(webSocketFrame.getTextPayload() + " to you");
            } catch (IOException e) {
                // handle
            }
        }
    }
}

Instead of cURL this time, we’ll use wscat:

> wscat -c localhost:8080
hello
hello to you
bye
bye to you

9. Conclusion

To sum it up, we’ve created a project that uses the NanoHTTPD library. Next, we defined RESTful APIs and explored more HTTP related functionalities. In the end, we also implemented a WebSocket.

The implementation of all these snippets is available over on GitHub.

A Guide to Apache Mesos

1. Overview

We usually deploy various applications on the same cluster of machines. For example, it’s common nowadays to have a distributed processing engine like Apache Spark or Apache Flink with distributed databases like Apache Cassandra in the same cluster.

Apache Mesos is a platform that allows effective resource sharing between such applications.

In this article, we’ll first discuss a few problems of resource allocation within applications deployed on the same cluster. Later, we’ll see how Apache Mesos provides better resource utilization between applications.

2. Sharing the Cluster

Many applications need to share a cluster. By and large, there are two common approaches:

  • Partition the cluster statically and run an application on each partition
  • Allocate a set of machines to an application

Although these approaches allow applications to run independently of each other, they don’t achieve high resource utilization.

For instance, consider an application that runs only for a short period followed by a period of inactivity. Now, since we have allocated static machines or partitions to this application, we have unutilized resources during the inactive period.

We can optimize resource utilization by reallocating free resources during the inactive period to other applications.

Apache Mesos helps with dynamic resource allocation between applications.

3. Apache Mesos

With both cluster sharing approaches we discussed above, applications are only aware of the resources of a particular partition or machine they are running. However, Apache Mesos provides an abstract view of all the resources in the cluster to applications.

As we’ll see shortly, Mesos acts as an interface between machines and applications. It provides applications with the available resources on all machines in the cluster. It frequently updates this information to include resources that are freed up by applications that have reached completion status. This allows applications to make the best decision about which task to execute on which machine.

In order to understand how Mesos works, let’s have a look at its architecture:

This image is part of the official documentation for Mesos. Here, Hadoop and MPI are two applications that share the cluster.

We’ll talk about each component shown here in the next few sections.

3.1. Mesos Master

The master is the core component in this setup and stores the current state of the resources in the cluster. Additionally, it acts as an orchestrator between the agents and the applications by passing information about things like resources and tasks.

Since any failure of the master results in the loss of state about resources and tasks, we deploy it in a high-availability configuration. As can be seen in the diagram above, Mesos deploys standby master daemons along with one leader. These daemons rely on ZooKeeper for recovering state in case of a failure.

3.2. Mesos Agents

A Mesos cluster must run an agent on every machine. These agents report their resources to the master periodically and in turn, receive tasks which an application has scheduled to run. This cycle repeats after the scheduled task is either complete or lost.

We’ll see how applications schedule and execute tasks on these agents in the following sections.

3.3. Mesos Frameworks

Mesos allows applications to implement an abstract component that interacts with the Master to receive the available resources in the cluster and moreover make scheduling decisions based on them. These components are known as frameworks.

A Mesos framework is composed of two sub-components:

  • Scheduler – Enables applications to schedule tasks based on available resources on all the agents
  • Executor – Runs on all agents and contains all the information necessary to execute any scheduled task on that agent

This entire process is depicted with this flow:

First, the agents report their resources to the master. At this instant, the master offers these resources to all registered schedulers. This process is known as a resource offer, and we’ll discuss it in detail in the next section.

The scheduler then picks the best agent and executes various tasks on it through the master. As soon as the executor completes the assigned task, the agents re-publish their resources to the master. The master repeats this process of resource sharing for all frameworks in the cluster.

Mesos allows applications to implement their custom scheduler and executor in various programming languages. A Java implementation of a scheduler must implement the Scheduler interface:

public class HelloWorldScheduler implements Scheduler {
 
    @Override
    public void registered(SchedulerDriver schedulerDriver, Protos.FrameworkID frameworkID, 
      Protos.MasterInfo masterInfo) {
    }
 
    @Override
    public void reregistered(SchedulerDriver schedulerDriver, Protos.MasterInfo masterInfo) {
    }
 
    @Override
    public void resourceOffers(SchedulerDriver schedulerDriver, List<Offer> list) {
    }
 
    @Override
    public void offerRescinded(SchedulerDriver schedulerDriver, OfferID offerID) {
    }
 
    @Override
    public void statusUpdate(SchedulerDriver schedulerDriver, Protos.TaskStatus taskStatus) {
    }
 
    @Override
    public void frameworkMessage(SchedulerDriver schedulerDriver, Protos.ExecutorID executorID, 
      Protos.SlaveID slaveID, byte[] bytes) {
    }
 
    @Override
    public void disconnected(SchedulerDriver schedulerDriver) {
    }
 
    @Override
    public void slaveLost(SchedulerDriver schedulerDriver, Protos.SlaveID slaveID) {
    }
 
    @Override
    public void executorLost(SchedulerDriver schedulerDriver, Protos.ExecutorID executorID, 
      Protos.SlaveID slaveID, int i) {
    }
 
    @Override
    public void error(SchedulerDriver schedulerDriver, String s) {
    }
}

As can be seen, it mostly consists of various callback methods for communication with the master in particular.

Similarly, the implementation of an executor must implement the Executor interface:

public class HelloWorldExecutor implements Executor {
    @Override
    public void registered(ExecutorDriver driver, Protos.ExecutorInfo executorInfo, 
      Protos.FrameworkInfo frameworkInfo, Protos.SlaveInfo slaveInfo) {
    }
  
    @Override
    public void reregistered(ExecutorDriver driver, Protos.SlaveInfo slaveInfo) {
    }
  
    @Override
    public void disconnected(ExecutorDriver driver) {
    }
  
    @Override
    public void launchTask(ExecutorDriver driver, Protos.TaskInfo task) {
    }
  
    @Override
    public void killTask(ExecutorDriver driver, Protos.TaskID taskId) {
    }
  
    @Override
    public void frameworkMessage(ExecutorDriver driver, byte[] data) {
    }
  
    @Override
    public void shutdown(ExecutorDriver driver) {
    }
}

We’ll see an operational version of scheduler and executor in a later section.

4. Resource Management

4.1. Resource Offers

As we discussed earlier, agents publish their resource information to the master. In turn, the master offers these resources to the frameworks running in the cluster. This process is known as a resource offer.

A resource offer consists of two parts – resources and attributes.

Resources are used to publish hardware information of the agent machine such as memory, CPU, and disk.

There are five predefined resources for every Agent:

  • cpu
  • gpus
  • mem
  • disk
  • ports

The values for these resources can be defined in one of the three types:

  • Scalar – Used to represent numerical information using floating point numbers to allow fractional values such as 1.5G of memory
  • Range – Used to represent a range of scalar values – for example, a port range
  • Set – Used to represent multiple text values

By default, the Mesos agent tries to detect these resources from the machine.

However, in some situations, we can configure custom resources on an agent. The values for such custom resources should again be of any one of the types discussed above.

For instance, we can start our agent with these resources:

--resources='cpus:24;gpus:2;mem:24576;disk:409600;ports:[21000-24000,30000-34000];bugs(debug_role):{a,b,c}'

As can be seen, we’ve configured the agent with a few of the predefined resources and one custom resource named bugs, which is of the set type.

In addition to resources, agents can publish key-value attributes to the master. These attributes act as additional metadata for the agent and help frameworks in scheduling decisions.

A useful example can be to add agents into different racks or zones and then schedule various tasks on the same rack or zone to achieve data locality:

--attributes='rack:abc;zone:west;os:centos5;level:10;keys:[1000-1500]'

Similar to resources, values for attributes can be either a scalar, a range, or a text type.

4.2. Resource Roles

Many modern-day operating systems support multiple users. Similarly, Mesos also supports multiple users in the same cluster. These users are known as roles. We can consider each role as a resource consumer within a cluster.

Due to this, Mesos agents can partition the resources under different roles based on different allocation strategies. Furthermore, frameworks can subscribe to these roles within the cluster and have fine-grained control over resources under different roles.

For example, consider a cluster hosting applications which are serving different users in an organization. So by dividing the resources into roles, every application can work in isolation from one another.

Additionally, frameworks can use these roles to achieve data locality.

For instance, suppose we have two applications in the cluster named producer and consumer. Here, the producer writes data to a persistent volume, which the consumer can read afterward. We can optimize the consumer application by sharing the volume with the producer.

Since Mesos allows multiple applications to subscribe to the same role, we can associate the persistent volume with a resource role. Furthermore, the frameworks for both producer and consumer will both subscribe to the same resource role. Therefore, the consumer application can now launch the data reading task on the same volume as the producer application.

4.3. Resource Reservation

Now the question may arise as to how Mesos allocates cluster resources into different roles. Mesos allocates the resources through reservations.

There are two types of reservations:

  • Static Reservation
  • Dynamic Reservation

Static reservation is similar to the resource allocation on agent startup we discussed in the earlier sections:

 --resources="cpus:4;mem:2048;cpus(baeldung):8;mem(baeldung):4096"

The only difference here is that now the Mesos agent reserves eight CPUs and 4096m of memory for the role named baeldung.

Dynamic reservation allows us to reshuffle the resources within roles, unlike static reservation. Mesos allows frameworks and cluster operators to dynamically change the allocation of resources, either via framework messages as a response to a resource offer or via HTTP endpoints.
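
As an illustrative sketch of the operator endpoint (the agent ID and principal are placeholders, and the exact payload shape can vary between Mesos versions), a dynamic reservation request might look like:

> curl -X POST http://localhost:5050/master/reserve \
    -d slaveId=<agent-id> \
    -d resources='[
      {
        "name": "cpus",
        "type": "SCALAR",
        "scalar": { "value": 8 },
        "role": "baeldung",
        "reservation": { "principal": "test-principal" }
      }
    ]'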

Mesos allocates all resources without any role into a default role named (*). The master offers such resources to all frameworks, whether or not they have subscribed to it.

4.4. Resource Weights and Quotas

Generally, the Mesos master offers resources using a fairness strategy. It uses the weighted Dominant Resource Fairness (wDRF) to identify the roles that lack resources. The master then offers more resources to the frameworks that have subscribed to these roles.

Even though fair sharing of resources between applications is an important characteristic of Mesos, it’s not always necessary. Suppose a cluster hosts applications that have a low resource footprint along with those that have a high resource demand. In such deployments, we would want to allocate resources based on the nature of the application.

Mesos allows frameworks to demand more resources by subscribing to roles and adding a higher value of weight for that role. Therefore, if there are two roles, one of weight 1 and another of weight 2, Mesos will allocate twice the fair share of resources to the second role.

Similar to resources, we can configure weights via HTTP endpoints.
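
For instance, a sketch of updating a role’s weight through the master’s /weights endpoint (the role name is a placeholder):

> curl -X PUT http://localhost:5050/weights \
    -d '[{"role": "analytics", "weight": 2.0}]'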

Besides ensuring a fair share of resources to a role with weights, Mesos also ensures that the minimum resources for a role are allocated.

Mesos allows us to add quotas to the resource roles. A quota specifies the minimum amount of resources that a role is guaranteed to receive.

5. Implementing Framework

As we discussed in an earlier section, Mesos allows applications to provide framework implementations in a language of their choice. In Java, a framework is implemented using a main class, which acts as an entry point for the framework process, along with the implementations of the Scheduler and Executor discussed earlier.

5.1. Framework Main Class

Before we implement a scheduler and an executor, we’ll first implement the entry point for our framework that:

  • Registers itself with the master
  • Provides executor runtime information to agents
  • Starts the scheduler

We’ll first add a Maven dependency for Mesos:

<dependency>
    <groupId>org.apache.mesos</groupId>
    <artifactId>mesos</artifactId>
    <version>0.28.3</version>
</dependency>

Next, we’ll implement the HelloWorldMain for our framework. One of the first things we’ll do is configure how the executor process will be started on the Mesos agent:

public static void main(String[] args) {
  
    String path = System.getProperty("user.dir")
      + "/target/libraries2-1.0.0-SNAPSHOT.jar";
  
    CommandInfo.URI uri = CommandInfo.URI.newBuilder().setValue(path).setExtract(false).build();
  
    String helloWorldCommand = "java -cp libraries2-1.0.0-SNAPSHOT.jar com.baeldung.mesos.executors.HelloWorldExecutor";
    CommandInfo commandInfoHelloWorld = CommandInfo.newBuilder()
      .setValue(helloWorldCommand)
      .addUris(uri)
      .build();
  
    ExecutorInfo executorHelloWorld = ExecutorInfo.newBuilder()
      .setExecutorId(Protos.ExecutorID.newBuilder()
      .setValue("HelloWorldExecutor"))
      .setCommand(commandInfoHelloWorld)
      .setName("Hello World (Java)")
      .setSource("java")
      .build();
}

Here, we first configured the executor binary location. The Mesos agent downloads this binary upon framework registration and then runs the given command to start the executor process.

Next, we’ll initialize our framework and start the scheduler:

FrameworkInfo.Builder frameworkBuilder = FrameworkInfo.newBuilder()
  .setFailoverTimeout(120000)
  .setUser("")
  .setName("Hello World Framework (Java)");
 
frameworkBuilder.setPrincipal("test-framework-java");
 
MesosSchedulerDriver driver = new MesosSchedulerDriver(new HelloWorldScheduler(),
  frameworkBuilder.build(), args[0]);

Finally, we’ll start the MesosSchedulerDriver that registers itself with the Master. For successful registration, we must pass the IP of the Master as a program argument args[0] to this main class:

int status = driver.run() == Protos.Status.DRIVER_STOPPED ? 0 : 1;

driver.stop();

System.exit(status);

In the class shown above, CommandInfo, ExecutorInfo, and FrameworkInfo are all Java representations of protobuf messages between master and frameworks.

5.2. Implementing Scheduler

Since Mesos 1.0, we can invoke an HTTP endpoint from any Java application to send and receive messages from the Mesos master. Some of these messages include, for example, framework registration, resource offers, and offer rejections.

For Mesos 0.28 or earlier, we instead need to implement the Scheduler interface, as shown above.

For the most part, we’ll only focus on the resourceOffers method of the Scheduler. Let’s see how a scheduler receives resources and initializes tasks based on them.

First, we’ll see how the scheduler allocates resources for a task:

@Override
public void resourceOffers(SchedulerDriver schedulerDriver, List<Offer> list) {

    for (Offer offer : list) {
        List<TaskInfo> tasks = new ArrayList<TaskInfo>();
        Protos.TaskID taskId = Protos.TaskID.newBuilder()
          .setValue(Integer.toString(launchedTasks++)).build();

        System.out.println("Launching printHelloWorld " + taskId.getValue() + " Hello World Java");

        Protos.Resource.Builder cpus = Protos.Resource.newBuilder()
          .setName("cpus")
          .setType(Protos.Value.Type.SCALAR)
          .setScalar(Protos.Value.Scalar.newBuilder()
            .setValue(1));

        Protos.Resource.Builder mem = Protos.Resource.newBuilder()
          .setName("mem")
          .setType(Protos.Value.Type.SCALAR)
          .setScalar(Protos.Value.Scalar.newBuilder()
            .setValue(128));

Here, we allocated 1 CPU and 128M of memory for our task. Next, we’ll use the SchedulerDriver to launch the task on an agent:

        TaskInfo printHelloWorld = TaskInfo.newBuilder()
          .setName("printHelloWorld " + taskId.getValue())
          .setTaskId(taskId)
          .setSlaveId(offer.getSlaveId())
          .addResources(cpus)
          .addResources(mem)
          .setExecutor(ExecutorInfo.newBuilder(helloWorldExecutor))
          .build();

        List<OfferID> offerIDS = new ArrayList<>();
        offerIDS.add(offer.getId());

        tasks.add(printHelloWorld);

        schedulerDriver.launchTasks(offerIDS, tasks);
    }
}

Alternatively, the Scheduler often needs to reject resource offers. For example, if the Scheduler cannot launch a task on an agent due to a lack of resources, it must immediately decline that offer:

schedulerDriver.declineOffer(offer.getId());

5.3. Implementing Executor

As we discussed earlier, the executor component of the framework is responsible for executing application tasks on the Mesos agent.

Just as Mesos 1.0 lets us use HTTP endpoints for the scheduler, we can use an HTTP endpoint for the executor as well. Here, though, we’ll stick to the Executor interface.

In an earlier section, we discussed how a framework configures an agent to start the executor process:

java -cp libraries2-1.0.0-SNAPSHOT.jar com.baeldung.mesos.executors.HelloWorldExecutor

Notably, this command considers HelloWorldExecutor as the main class. We’ll implement this main method to initialize the MesosExecutorDriver that connects with Mesos agents to receive tasks and share other information like task status:

public class HelloWorldExecutor implements Executor {
    public static void main(String[] args) {
        MesosExecutorDriver driver = new MesosExecutorDriver(new HelloWorldExecutor());
        System.exit(driver.run() == Protos.Status.DRIVER_STOPPED ? 0 : 1);
    }
}

The last thing to do now is to accept tasks from the framework and launch them on the agent. The information to launch any task is self-contained within the HelloWorldExecutor:

public void launchTask(ExecutorDriver driver, TaskInfo task) {
 
    Protos.TaskStatus status = Protos.TaskStatus.newBuilder()
      .setTaskId(task.getTaskId())
      .setState(Protos.TaskState.TASK_RUNNING)
      .build();
    driver.sendStatusUpdate(status);
 
    System.out.println("Execute Task!!!");
 
    status = Protos.TaskStatus.newBuilder()
      .setTaskId(task.getTaskId())
      .setState(Protos.TaskState.TASK_FINISHED)
      .build();
    driver.sendStatusUpdate(status);
}

Of course, this is just a simple implementation, but it explains how an executor shares task status with the master at every stage and then executes the task before sending a completion status.

In some cases, executors can also send data back to the scheduler:

String myStatus = "Hello Framework";
driver.sendFrameworkMessage(myStatus.getBytes());

6. Conclusion

In this article, we briefly discussed resource sharing between applications running in the same cluster. We also discussed how Apache Mesos helps applications achieve maximum utilization through an abstract view of the cluster’s resources, like CPU and memory.

Later on, we discussed the dynamic allocation of resources between applications based on various fairness policies and roles. Mesos allows applications to make scheduling decisions based on resource offers from Mesos agents in the cluster.

Finally, we saw an implementation of a Mesos framework in Java.

As usual, all examples are available over on GitHub.
