Channel: Baeldung

Using Guava CountingOutputStream


1. Overview

In this tutorial, we’re going to look at the CountingOutputStream class and how to use it.

The class can be found in popular libraries like Apache Commons or Google Guava. We're going to focus on the implementation in the Guava library.

2. CountingOutputStream

2.1. Maven Dependency

CountingOutputStream is part of Google’s Guava package.

Let’s start by adding the dependency to the pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>24.1-jre</version>
</dependency>

The latest version of the dependency can be checked here.

2.2. Class Details

The class extends java.io.FilterOutputStream, overrides the write() and close() methods, and provides the new method getCount().

The constructor takes another OutputStream object as an input parameter. While writing data, the class then counts the number of bytes written into this OutputStream.

In order to get the count, we can simply call getCount() to return the current number of bytes:

/** Returns the number of bytes written. */
public long getCount() {
    return count;
}
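Conceptually, the class simply increments a counter every time a byte passes through it. The following hand-rolled sketch illustrates the idea; it's a simplified illustration of the same technique, not Guava's actual source (the class name SimpleCountingOutputStream is ours):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Simplified, hand-rolled equivalent of Guava's CountingOutputStream,
// for illustration only: count each byte written to the wrapped stream.
class SimpleCountingOutputStream extends FilterOutputStream {
    private long count;

    SimpleCountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    public long getCount() {
        return count;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        SimpleCountingOutputStream cos = new SimpleCountingOutputStream(target);
        cos.write("hello".getBytes());
        System.out.println(cos.getCount()); // prints 5
    }
}
```

Note that Guava's real implementation also overrides write(byte[], int, int) for efficiency, rather than relying on FilterOutputStream's byte-by-byte delegation.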

3. Use Case

Let’s use CountingOutputStream in a practical use case. For the sake of example, we’re going to put the code into a JUnit test to make it executable.

In our case, we’re going to write data to an OutputStream and check if we have reached a limit of MAX bytes.

Once we reach the limit, we want to break the execution by throwing an exception:

public class GuavaCountingOutputStreamTest {
    static int MAX = 5;

    @Test(expected = RuntimeException.class)
    public void givenData_whenCountReachesLimit_thenThrowException()
      throws Exception {
 
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        CountingOutputStream cos = new CountingOutputStream(out);

        byte[] data = new byte[1024];
        ByteArrayInputStream in = new ByteArrayInputStream(data);
        
        int b;
        while ((b = in.read()) != -1) {
            cos.write(b);
            if (cos.getCount() >= MAX) {
                throw new RuntimeException("Write limit reached");
            }
        }
    }
}

4. Conclusion

In this quick article, we’ve looked at the CountingOutputStream class and its usage. The class provides the additional method getCount() that returns the number of bytes written to the OutputStream so far.

Finally, as always, the code used during the discussion can be found over on GitHub.


Displaying Money Amounts in Words


1. Overview

In this tutorial, we'll see how we can convert a monetary amount into its word representation in Java.

We'll also see what an implementation could look like via an external library – Tradukisto.

2. Implementation

Let’s first start with our own implementation. The first step is to declare two String arrays with the following elements:

public static String[] ones = { 
  "", "one", "two", "three", "four", 
  "five", "six", "seven", "eight", 
  "nine", "ten", "eleven", "twelve", 
  "thirteen", "fourteen", "fifteen", 
  "sixteen", "seventeen", "eighteen", 
  "nineteen" 
};

public static String[] tens = {
  "",          // 0
  "",          // 1
  "twenty",    // 2
  "thirty",    // 3
  "forty",     // 4
  "fifty",     // 5
  "sixty",     // 6
  "seventy",   // 7
  "eighty",    // 8
  "ninety"     // 9
};

When we receive an input, we’ll need to handle the invalid values (zero and negative values). Once a valid input is received, we can extract the number of dollars and cents into variables:

long dollars = (long) money;
long cents = Math.round((money - dollars) * 100);

If the given number is less than 20, then we'll get the appropriate element from the ones array based on the index:

if (n < 20) {
    return ones[(int) n];
}

We'll use a similar approach for numbers less than 100, but now we have to use the tens array as well:

if (n < 100) {
    return tens[(int) n / 10] 
      + ((n % 10 != 0) ? " " : "") 
      + ones[(int) n % 10];
}

We do this similarly for numbers that are less than one thousand.

Next, we use recursive calls to deal with numbers that are less than one million, as shown below:

if (n < 1_000_000) {
    return convert(n / 1000) + " thousand" + ((n % 1000 != 0) ? " " : "") 
      + convert(n % 1000);
}

The same approach is used for numbers that are less than one billion, and so on.
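Putting these branches together, here's a hypothetical, self-contained sketch of the whole convert() helper (the class name ConvertSketch is ours; in the article's project this logic lives in NumberWordConverter):

```java
// Hypothetical self-contained sketch of the convert() helper described above.
class ConvertSketch {
    static String[] ones = { "", "one", "two", "three", "four",
      "five", "six", "seven", "eight", "nine", "ten", "eleven",
      "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
      "seventeen", "eighteen", "nineteen" };
    static String[] tens = { "", "", "twenty", "thirty", "forty",
      "fifty", "sixty", "seventy", "eighty", "ninety" };

    static String convert(long n) {
        if (n < 20) {
            return ones[(int) n];
        }
        if (n < 100) {
            return tens[(int) n / 10]
              + ((n % 10 != 0) ? " " : "") + ones[(int) n % 10];
        }
        if (n < 1000) {
            return ones[(int) n / 100] + " hundred"
              + ((n % 100 != 0) ? " " : "") + convert(n % 100);
        }
        if (n < 1_000_000) {
            return convert(n / 1000) + " thousand"
              + ((n % 1000 != 0) ? " " : "") + convert(n % 1000);
        }
        if (n < 1_000_000_000) {
            return convert(n / 1_000_000) + " million"
              + ((n % 1_000_000 != 0) ? " " : "") + convert(n % 1_000_000);
        }
        return convert(n / 1_000_000_000) + " billion"
          + ((n % 1_000_000_000 != 0) ? " " : "") + convert(n % 1_000_000_000);
    }

    public static void main(String[] args) {
        System.out.println(convert(342)); // three hundred forty two
    }
}
```

Each branch peels off the highest group (hundreds, thousands, millions, billions) and recurses on the remainder, appending a separating space only when the remainder is non-zero.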

Here is the main method that can be called to do this conversion:

public static String getMoneyIntoWords(double money) {
    long dollars = (long) money;
    long cents = Math.round((money - dollars) * 100);
    if (money == 0D) {
        return "";
    }
    if (money < 0) {
        return INVALID_INPUT_GIVEN;
    }
    String dollarsPart = "";
    if (dollars > 0) {
        dollarsPart = convert(dollars) 
          + " dollar" 
          + (dollars == 1 ? "" : "s");
    }
    String centsPart = "";
    if (cents > 0) {
        if (dollarsPart.length() > 0) {
            centsPart = " and ";
        }
        centsPart += convert(cents) + " cent" + (cents == 1 ? "" : "s");
    }
    return dollarsPart + centsPart;
}

Let’s test our code to make sure it works:

@Test
public void whenGivenDollarsAndCents_thenReturnWords() {
    String expectedResult
     = "nine hundred twenty four dollars and sixty cents";
    
    assertEquals(
      expectedResult, 
      NumberWordConverter.getMoneyIntoWords(924.6));
}

@Test
public void whenTwoBillionDollarsGiven_thenReturnWords() {
    String expectedResult 
      = "two billion one hundred thirty three million two hundred" 
        + " forty seven thousand eight hundred ten dollars";
 
    assertEquals(
      expectedResult, 
      NumberWordConverter.getMoneyIntoWords(2_133_247_810));
}

@Test
public void whenThirtyMillionDollarsGiven_thenReturnWords() {
    String expectedResult 
      = "thirty three million three hundred forty eight thousand nine hundred seventy eight dollars";
    assertEquals(
      expectedResult, 
      NumberWordConverter.getMoneyIntoWords(33_348_978));
}

Let’s also test some edge cases, and make sure we have covered them as well:

@Test
public void whenZeroDollarsGiven_thenReturnEmptyString() {
    assertEquals("", NumberWordConverter.getMoneyIntoWords(0));
}

@Test
public void whenNoDollarsAndNineFiveNineCents_thenCorrectRounding() {
    assertEquals(   
      "ninety six cents", 
      NumberWordConverter.getMoneyIntoWords(0.959));
}
  
@Test
public void whenNoDollarsAndOneCent_thenReturnCentSingular() {
    assertEquals(
      "one cent", 
      NumberWordConverter.getMoneyIntoWords(0.01));
}

3. Using a Library

Now that we’ve implemented our own algorithm, let’s do this conversion by using an existing library.

Tradukisto is a library for Java 8+, which can help us convert numbers to their word representations. First, we need to import it into our project (the latest version of this library can be found here):

<dependency>
    <groupId>pl.allegro.finance</groupId>
    <artifactId>tradukisto</artifactId>
    <version>1.0.1</version>
</dependency>

We can now use MoneyConverters' asWords() method to do this conversion:

public String getMoneyIntoWords(String input) {
    MoneyConverters converter = MoneyConverters.ENGLISH_BANKING_MONEY_VALUE;
    return converter.asWords(new BigDecimal(input));
}

Let’s test this method with a simple test case:

@Test
public void whenGivenDollarsAndCents_thenReturnWordsVersionTwo() {
    assertEquals(
      "three hundred ten £ 00/100", 
      NumberWordConverter.getMoneyIntoWords("310"));
}

We could also use the ICU4J library to do this, but it’s a large one and comes with many other features that are out of the scope of this article.

However, have a look at it if Unicode and globalization support is needed.

4. Conclusion

In this quick article, we saw two approaches to converting a sum of money into words.

The code for all the examples explained here, and much more can be found over on GitHub.

Getting Started with Java and Zookeeper


1. Overview

Apache ZooKeeper is a distributed coordination service which eases the development of distributed applications. It’s used by projects like Apache Hadoop, HBase and others for different use cases like leader election, configuration management, node coordination, server lease management, etc.

Nodes within a ZooKeeper cluster store their data in a shared hierarchical namespace, similar to a standard file system or a tree data structure.

In this article, we'll explore how to use the Java API of Apache ZooKeeper to store, update, and delete information stored within ZooKeeper.

2. Setup

The latest version of the Apache ZooKeeper Java library can be found here:

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.11</version>
</dependency>

3. ZooKeeper Data Model – ZNode

ZooKeeper has a hierarchical namespace, much like a distributed file system, where it stores coordination data like status information, coordination information, location information, etc. This information is stored on different nodes.

Every node in a ZooKeeper tree is referred to as ZNode.

Each ZNode maintains version numbers and timestamps for any data or ACL changes. This allows ZooKeeper to validate its cache and to coordinate updates.

4. Installation

4.1. Download

The latest ZooKeeper release can be downloaded from here. Before doing that, we need to make sure we meet the system requirements described here.

4.2. Standalone Mode

For this article, we’ll be running ZooKeeper in a standalone mode as it requires minimal configuration. Follow the steps described in the documentation here.

Note: In standalone mode, there's no replication, so if the ZooKeeper process fails, the service will go down.

5. ZooKeeper CLI Examples

We’ll now use the ZooKeeper Command Line Interface (CLI) to interact with ZooKeeper:

bin/zkCli.sh -server 127.0.0.1:2181

The command above connects the CLI to the local standalone instance. Let's now look at how to create a ZNode and store information within ZooKeeper:

[zk: localhost:2181(CONNECTED) 0] create /MyFirstZNode ZNodeVal
Created /MyFirstZNode

We just created a ZNode 'MyFirstZNode' at the root of the ZooKeeper hierarchical namespace and wrote 'ZNodeVal' to it.

Since we’ve not passed any flag, a created ZNode will be persistent.

Let’s now issue a ‘get’ command to fetch the data as well as metadata associated with a ZNode:

[zk: localhost:2181(CONNECTED) 1] get /MyFirstZNode

ZNodeVal
cZxid = 0x7f
ctime = Sun Feb 18 16:15:47 IST 2018
mZxid = 0x7f
mtime = Sun Feb 18 16:15:47 IST 2018
pZxid = 0x7f
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 8
numChildren = 0

We can update the data of an existing ZNode using the set operation.

For example:

set /MyFirstZNode ZNodeValUpdated

This will update the data at MyFirstZNode from ZNodeVal to ZNodeValUpdated.
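Re-running the get command should then show the new value (illustrative output, stat fields omitted):

```
[zk: localhost:2181(CONNECTED) 2] get /MyFirstZNode
ZNodeValUpdated
...
```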

6. ZooKeeper Java API Example

Let's now look at the ZooKeeper Java API and create a node, update the node, and retrieve some data.

6.1. Java Packages

The ZooKeeper Java bindings are composed mainly of two Java packages:

  1. org.apache.zookeeper: defines the main class of the ZooKeeper client library, along with many static definitions of the ZooKeeper event types and states
  2. org.apache.zookeeper.data: defines the characteristics associated with ZNodes, such as Access Control Lists (ACL), IDs, stats, and so on

There are also ZooKeeper Java APIs used in the server implementation, such as org.apache.zookeeper.server, org.apache.zookeeper.server.quorum, and org.apache.zookeeper.server.upgrade.

However, they’re beyond the scope of this article.

6.2. Connecting to a ZooKeeper Instance

Let's now create a ZKConnection class which will be used to connect to and disconnect from an already running ZooKeeper:

public class ZKConnection {
    private ZooKeeper zoo;
    CountDownLatch connectionLatch = new CountDownLatch(1);

    // ...

    public ZooKeeper connect(String host) 
      throws IOException, 
      InterruptedException {
        zoo = new ZooKeeper(host, 2000, new Watcher() {
            public void process(WatchedEvent we) {
                if (we.getState() == KeeperState.SyncConnected) {
                    connectionLatch.countDown();
                }
            }
        });

        connectionLatch.await();
        return zoo;
    }

    public void close() throws InterruptedException {
        zoo.close();
    }
}

To use a ZooKeeper service, an application must first instantiate an object of ZooKeeper class, which is the main class of ZooKeeper client library.

In the connect method, we're instantiating an instance of the ZooKeeper class. Also, we've registered a callback method to process the WatchedEvent from ZooKeeper for connection acceptance, and accordingly finish the connect method using the countDown() method of CountDownLatch.

Once a connection to a server is established, a session ID gets assigned to the client. To keep the session valid, the client should periodically send heartbeats to the server.

The client application can call ZooKeeper APIs as long as its session ID remains valid.

6.3. Client Operations

We’ll now create a ZKManager interface which exposes different operations like creating a ZNode and saving some data, fetching and updating the ZNode Data:

public interface ZKManager {
    public void create(String path, byte[] data)
      throws KeeperException, InterruptedException;
    public Object getZNodeData(String path, boolean watchFlag)
      throws KeeperException, InterruptedException;
    public void update(String path, byte[] data) 
      throws KeeperException, InterruptedException;
}

Let’s now look at the implementation of the above interface:

public class ZKManagerImpl implements ZKManager {
    private static ZooKeeper zkeeper;
    private static ZKConnection zkConnection;

    public ZKManagerImpl() {
        initialize();
    }

    private void initialize() {
        try {
            zkConnection = new ZKConnection();
            zkeeper = zkConnection.connect("localhost");
        } catch (IOException | InterruptedException e) {
            throw new IllegalStateException("Unable to connect to ZooKeeper", e);
        }
    }

    public void closeConnection() throws InterruptedException {
        zkConnection.close();
    }

    public void create(String path, byte[] data) 
      throws KeeperException, 
      InterruptedException {
 
        zkeeper.create(
          path, 
          data, 
          ZooDefs.Ids.OPEN_ACL_UNSAFE, 
          CreateMode.PERSISTENT);
    }

    public Object getZNodeData(String path, boolean watchFlag) 
      throws KeeperException, 
      InterruptedException {
 
        byte[] b = zkeeper.getData(path, watchFlag, null);
        return new String(b, StandardCharsets.UTF_8);
    }

    public void update(String path, byte[] data) throws KeeperException, 
      InterruptedException {
        int version = zkeeper.exists(path, true).getVersion();
        zkeeper.setData(path, data, version);
    }
}

In the above code, connect and disconnect calls are delegated to the earlier created ZKConnection class. Our create method is used to create a ZNode at the given path from the byte array data. For demonstration purposes only, we've kept the ACL completely open.

Once created, the ZNode is persistent and doesn’t get deleted when the client disconnects.

The logic to fetch ZNode data from ZooKeeper in our getZNodeData method is quite straightforward.

Finally, with the update method, we first check for the ZNode's existence and fetch its current version. Then we invoke the setData method with the ZNode's path, the new data, and the current version as parameters. ZooKeeper will update the data only if the passed version matches the latest version.

7. Conclusion

When developing distributed applications, Apache ZooKeeper plays a critical role as a distributed coordination service, especially for use cases like storing shared configuration, electing the master node, and so on.

ZooKeeper also provides elegant Java-based APIs to be used in client-side application code for seamless communication with ZooKeeper ZNodes.

And as always, all sources for this tutorial can be found over on GitHub.

Java Weekly, Issue 221


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> Java 10 Released [infoq.com]

Yes. Java 10 is out. Nuff said.

>> Java 11 Will Include More Than Just Features [blog.takipi.com]

The next Java release will be the first LTS release after Java 8.

>> Micrometer: Spring Boot 2’s new application metrics collector [spring.io]

Spring Boot 2.0 features a new metrics collector – this is a good opportunity to explore the new functionality.

>> Java Environment Management [blog.frankel.ch]

Nowadays, given we might need to switch between different JDKs a lot – this tool might come in handy.

>> JUnit 5 Tutorial: Writing Assertions With JUnit 5 Assertion API [petrikainulainen.net]

JUnit 5 features a slightly revised way of writing assertions. Good stuff.

>> Servlet and Reactive Stacks in Spring Framework 5 [infoq.com]

“Going reactive” isn’t just about using new APIs – the reactive stack handles higher concurrency with fewer hardware resources.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical and Musings

>> Mock? What, When, How? [blog.codecentric.de]

Be careful what you mock, sometimes it might make your codebase un-refactorable.

Also worth reading:

3. Comics

And my favorite Dilberts of the week:

>> Unplugged Server [dilbert.com]

>> Bob is Proud of His Flip Phone [dilbert.com]

>> Tina Wants to Borrow Wally’s Phone [dilbert.com]

4. Pick of the Week

>> Get specific! [sivers.org]

A Guide to Flips for Spring


1. Overview

In this tutorial, we’ll have a look at Flips, a library that implements feature flags in the form of powerful annotations for Spring Core, Spring MVC, and Spring Boot applications.

Feature flags (or toggles) are a pattern for delivering new features quickly and safely. These toggles allow us to modify application behavior without changing or deploying new code. Martin Fowler’s blog has a very informative article about feature flags here.

2. Maven Dependency

Before we get started, we need to add the Flips library to our pom.xml:

<dependency>
    <groupId>com.github.feature-flip</groupId>
    <artifactId>flips-core</artifactId>
    <version>1.0.1</version>
</dependency>

Maven Central has the latest version of the library, and the GitHub project is here.

Of course, we also need to include Spring:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.5.10.RELEASE</version>
</dependency>

Since Flips isn't yet compatible with Spring 5.x, we're going to use the latest version of Spring Boot based on the Spring 4.x branch.

3. A Simple REST Service for Flips

Let’s put together a simple Spring Boot project for adding and toggling new features and flags.

Our REST application will provide access to Foo resources:

public class Foo {
    private String name;
    private int id;

    // standard constructors, getters and setters
}

We’ll simply create a Service that maintains a list of Foos:

@Service
public class FlipService {

    private List<Foo> foos;

    public List<Foo> getAllFoos() {
        return foos;
    }

    public Foo getNewFoo() {
        return new Foo("New Foo!", 99);
    }
}

We’ll refer to additional service methods as we go, but this snippet should be enough to illustrate what FlipService does in the system.

And of course, we need to create a Controller:

@RestController
public class FlipController {

    private FlipService flipService;

    // constructors

    @GetMapping("/foos")
    public List<Foo> getAllFoos() {
        return flipService.getAllFoos();
    }
}

4. Control Features Based on Configuration

The most basic use of Flips is to enable or disable a feature based on configuration. Flips has several annotations for this.

4.1. Environment Property

Let’s imagine we added a new capability to FlipService; retrieving Foos by their id.

Let’s add the new request to the controller:

@GetMapping("/foos/{id}")
@FlipOnEnvironmentProperty(
  property = "feature.foo.by.id", 
  expectedValue = "Y")
public Foo getFooById(@PathVariable int id) {
    return flipService.getFooById(id)
      .orElse(new Foo("Not Found", -1));
}

The @FlipOnEnvironmentProperty controls whether or not this API is available.

Simply put, when feature.foo.by.id is Y, we can make requests by id. If it isn't (or isn't defined at all), Flips will disable the API method.
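For example, we could enable the feature in our application.properties file (the property name comes from the annotation above):

```
feature.foo.by.id=Y
```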

If a feature isn’t enabled, Flips will throw FeatureNotEnabledException and Spring will return “Not Implemented” to the REST client.

When we call the API with the property set to N, this is what we see:

Status = 501
Headers = {Content-Type=[application/json;charset=UTF-8]}
Content type = application/json;charset=UTF-8
Body = {
    "errorMessage": "Feature not enabled, identified by method 
      public com.baeldung.flips.model.Foo
      com.baeldung.flips.controller.FlipController.getFooById(int)",
    "className":"com.baeldung.flips.controller.FlipController",
    "featureName":"getFooById"
}

As expected, Spring catches the FeatureNotEnabledException and returns status 501 to the client.

4.2. Active Profile

Spring has long given us the ability to map beans to different profiles, such as dev, test, or prod. Extending this capability to map feature flags to the active profile makes intuitive sense.

Let’s see how features are enabled or disabled based on the active Spring Profile:

@RequestMapping(value = "/foos", method = RequestMethod.GET)
@FlipOnProfiles(activeProfiles = "dev")
public List getAllFoos() {
    return flipService.getAllFoos();
}

The @FlipOnProfiles annotation accepts a list of profile names. If the active profile is in the list, the API is accessible.
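For local testing, we could activate the dev profile in application.properties (or equivalently via -Dspring.profiles.active=dev on the command line):

```
spring.profiles.active=dev
```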

4.3. Spring Expressions

Spring's Expression Language (SpEL) is a powerful mechanism for manipulating the runtime environment. Flips gives us a way to toggle features with it as well.

@FlipOnSpringExpression toggles a method based on a SpEL expression that returns a boolean.

Let’s use a simple expression to control a new feature:

@FlipOnSpringExpression(expression = "(2 + 2) == 4")
@GetMapping("/foo/new")
public Foo getNewFoo() {
    return flipService.getNewFoo();
}

4.4. Disable

To disable a feature completely, use @FlipOff:

@GetMapping("/foo/first")
@FlipOff
public Foo getFirstFoo() {
    return flipService.getLastFoo();
}

In this example, getFirstFoo() is completely inaccessible.

As we’ll see below, we can combine Flips annotations, making it possible to use @FlipOff to disable a feature based on the environment or other criteria.

5. Control Features with Date/Time

Flips can toggle a feature based on a date/time or the day of the week. Tying the availability of a new feature to the day or date has obvious advantages.

5.1. Date and Time

@FlipOnDateTime accepts the name of a property whose value is a date/time in ISO 8601 format.

So let’s set a property indicating a new feature that will be active on March 1st:

first.active.after=2018-03-01T00:00:00Z

Then we’ll write an API for retrieving the first Foo:

@GetMapping("/foo/first")
@FlipOnDateTime(cutoffDateTimeProperty = "first.active.after")
public Foo getFirstFoo() {
    return flipService.getLastFoo();
}

Flips will check the named property. If the property exists and the specified date/time has passed, the feature is enabled.

5.2. Day of Week

The library provides @FlipOnDaysOfWeek, which is useful for operations such as A/B testing:

@GetMapping("/foo/{id}")
@FlipOnDaysOfWeek(daysOfWeek={DayOfWeek.MONDAY, DayOfWeek.WEDNESDAY})
public Foo getFooByNewId(@PathVariable int id) {
    return flipService.getFooById(id).orElse(new Foo("Not Found", -1));
}

getFooByNewId() is only available on Mondays and Wednesdays.

6. Replace a Bean

Switching methods on and off is useful, but we may want to introduce new behavior via new objects. @FlipBean directs Flips to call a method in a new bean.

A Flips annotation can work on any Spring @Component. So far we've only modified our @RestController; let's try modifying our Service.

We’ll create a new service with different behavior from FlipService:

@Service
public class NewFlipService {
    public Foo getNewFoo() {
        return new Foo("Shiny New Foo!", 100);
    }
}

We will replace the old service’s getNewFoo() with the new version:

@FlipBean(with = NewFlipService.class)
public Foo getNewFoo() {
    return new Foo("New Foo!", 99);
}

Flips will direct calls to getNewFoo() to NewFlipService. @FlipBean is another toggle that is most useful when combined with others. Let's look at that now.

7. Combining Toggles

We combine toggles by specifying more than one annotation. Flips evaluates these in sequence with implicit "AND" logic, so all of them must be true to toggle the feature on.

Let’s combine two of our previous examples:

@FlipBean(
  with = NewFlipService.class)
@FlipOnEnvironmentProperty(
  property = "feature.foo.by.id", 
  expectedValue = "Y")
public Foo getNewFoo() {
    return new Foo("New Foo!", 99);
}

We’ve made use of the new service configurable.

8. Conclusion

In this brief guide, we created a simple Spring Boot service and toggled APIs on and off using Flips annotations. We saw how features are toggled using configuration information and date/time, and also how features can be toggled by swapping beans at runtime.

Code samples, as always, can be found over on GitHub.

Hamcrest Bean Matchers


1. Overview

Hamcrest is a library that provides methods, called matchers, to help developers write simpler unit tests. There are plenty of matchers; you can get started by reading about some of them here.

In this article, we'll explore bean matchers.

2. Setup

To get Hamcrest, we just need to add the following Maven dependency to our pom.xml:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>java-hamcrest</artifactId>
    <version>2.0.0.0</version>
    <scope>test</scope>
</dependency>

The latest Hamcrest version can be found on Maven Central.

3. Bean Matchers

Bean matchers are extremely useful for checking conditions over POJOs, something that is frequently required when writing unit tests.

Before getting started, we’ll create a class that will help us through the examples:

public class City {
    String name;
    String state;

    // standard constructor, getters and setters

}

Now that we're all set, let's see bean matchers in action!

3.1. hasProperty

This matcher basically checks whether a bean contains a specific property, identified by the property's name:

@Test
public void givenACity_whenHasProperty_thenCorrect() {
    City city = new City("San Francisco", "CA");
    
    assertThat(city, hasProperty("state"));
}

So, this test will pass because our City bean has a property named state.

Following this idea, we can also test whether a bean has a certain property and that property has a certain value:

@Test
public void givenACity_whenHasPropertyWithValueEqualTo_thenCorrect() {
    City city = new City("San Francisco", "CA");
        
    assertThat(city, hasProperty("name", equalTo("San Francisco")));
}

As we can see, hasProperty is overloaded and can be used with a second matcher to check a specific condition over a property.

So, we can also do this:

@Test
public void givenACity_whenHasPropertyWithValueEqualToIgnoringCase_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city, hasProperty("state", equalToIgnoringCase("ca")));
}

Useful, right? We can take this idea one step further with the matcher that we’ll explore next.

3.2. samePropertyValuesAs

Sometimes when we have to do checks over a lot of properties of a bean, it may be simpler to create a new bean with the desired values. Then, we can check for equality between the tested bean and the new one. Of course, Hamcrest provides a matcher for this situation:

@Test
public void givenACity_whenSamePropertyValuesAs_thenCorrect() {
    City city = new City("San Francisco", "CA");
    City city2 = new City("San Francisco", "CA");

    assertThat(city, samePropertyValuesAs(city2));
}

This results in fewer assertions and simpler code. In the same way, we can test the negative case:

@Test
public void givenACity_whenNotSamePropertyValuesAs_thenCorrect() {
    City city = new City("San Francisco", "CA");
    City city2 = new City("Los Angeles", "CA");

    assertThat(city, not(samePropertyValuesAs(city2)));
}

Next, we'll see a couple of utility methods to inspect class properties.

3.3. getPropertyDescriptor

There are scenarios where being able to explore a class's structure comes in handy. Hamcrest provides some utility methods to do so:

@Test
public void givenACity_whenGetPropertyDescriptor_thenCorrect() {
    City city = new City("San Francisco", "CA");
    PropertyDescriptor descriptor = getPropertyDescriptor("state", city);

    assertThat(descriptor
      .getReadMethod()
      .getName(), is(equalTo("getState")));
}

The descriptor object retrieves a lot of information about the property state. In this case, we've extracted the getter's name and asserted that it's equal to the expected value. Note that we can also apply other text matchers.

Moving on, the last method we will explore is a more general case of this same idea.

3.4. propertyDescriptorsFor

This method does basically the same as the one in the previous section, but for all the properties of the bean. We also need to specify how high we want to go in the class hierarchy:

@Test
public void givenACity_whenGetPropertyDescriptorsFor_thenCorrect() {
    City city = new City("San Francisco", "CA");
    PropertyDescriptor[] descriptors = propertyDescriptorsFor(
      city, Object.class);
 
    List<String> getters = Arrays.stream(descriptors)
      .map(x -> x.getReadMethod().getName())
      .collect(toList());

    assertThat(getters, containsInAnyOrder("getName", "getState"));
}

So, what we did here is: get all the property descriptors from the bean city and stop at the Object level.

Then, we just used Java 8's Stream API to map each descriptor to the name of its getter method.

Finally, we used collections matchers to check something over the getters list. You can find more information about collections matchers here.

4. Conclusion

Hamcrest matchers make up a great set of tools to be used across any project. They're easy to learn and become extremely useful in a short time.

Bean matchers in particular provide an effective way of making assertions over POJOs, something that is frequently required when writing unit tests.

To get the complete implementation of these examples, please refer to the GitHub project.

A Quick Guide to the Spring @Lazy Annotation


1. Overview

By default, Spring creates all singleton beans eagerly at the startup/bootstrapping of the application context. The reason behind this is simple: to avoid and detect all possible errors immediately rather than at runtime.

However, there are cases when we need to create a bean not at application context startup, but when we request it.

In this quick tutorial, we’re going to discuss Spring’s @Lazy annotation.

2. Lazy Initialization

The @Lazy annotation has been present since Spring version 3.0. There are several ways to tell the IoC container to initialize a bean lazily.

2.1. @Configuration Class

When we put the @Lazy annotation on a @Configuration class, it indicates that all methods annotated with @Bean should be loaded lazily.

This is the equivalent of the XML-based configuration's default-lazy-init="true" attribute.

Let’s have a look here:

@Lazy
@Configuration
@ComponentScan(basePackages = "com.baeldung.lazy")
public class AppConfig {

    @Bean
    public Region getRegion(){
        return new Region();
    }

    @Bean
    public Country getCountry(){
        return new Country();
    }
}

Let’s now test the functionality:

@Test
public void givenLazyAnnotation_whenConfigClass_thenLazyAll() {

    AnnotationConfigApplicationContext ctx
     = new AnnotationConfigApplicationContext();
    ctx.register(AppConfig.class);
    ctx.refresh();
    ctx.getBean(Region.class);
    ctx.getBean(Country.class);
}

As we see, all beans are created only when we request them for the first time:

Bean factory for ...AnnotationConfigApplicationContext: 
...DefaultListableBeanFactory: [...];
// application context started
Region bean initialized
Country bean initialized

To apply this to only a specific bean, let's remove @Lazy from the configuration class.

Then we add it to the config of the desired bean:

@Bean
@Lazy(true)
public Region getRegion(){
    return new Region();
}

2.2. With @Autowired

Before going ahead, check out these guides for @Autowired and @Component annotations.

Here, in order to initialize a lazy bean, we reference it from another one.

The bean that we want to load lazily:

@Lazy
@Component
public class City {
    public City() {
        System.out.println("City bean initialized");
    }
}

And it’s reference:

public class Region {

    @Lazy
    @Autowired
    private City city;

    public Region() {
        System.out.println("Region bean initialized");
    }

    public City getCityInstance() {
        return city;
    }
}

Note that @Lazy is mandatory in both places.

Now, with the @Component annotation on the City class and the @Autowired reference in place, let's test it:

@Test
public void givenLazyAnnotation_whenAutowire_thenLazyBean() {
    // load up ctx application context
    Region region = ctx.getBean(Region.class);
    region.getCityInstance();
}

Here, the City bean is initialized only when we call the getCityInstance() method.

3. Conclusion

In this quick tutorial, we learned the basics of Spring’s @Lazy annotation. We examined several ways to configure and use it.

As usual, the complete code for this guide is available over on GitHub.

Working with Kotlin and JPA


1. Introduction

One of Kotlin’s characteristics is the interoperability with Java libraries, and JPA is certainly one of these.

In this tutorial, we’ll explore how to use Kotlin Data Classes as JPA entities.

2. Dependencies

To keep things simple, we’ll use Hibernate as our JPA implementation; we’ll need to add the following dependencies to our Maven project:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.2.15.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-testing</artifactId>
    <version>5.2.15.Final</version>
    <scope>test</scope>
</dependency>

We’ll also use an H2 embedded database to run our tests:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
    <scope>test</scope>
</dependency>

For Kotlin we’ll use the following:

<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-stdlib-jdk8</artifactId>
    <version>1.2.30</version>
</dependency>

Of course, the most recent versions of Hibernate, H2, and Kotlin can be found in Maven Central.

3. Compiler Plugins (jpa-plugin)

To use JPA, the entity classes need a constructor without parameters.

By default, Kotlin data classes don't have one, and to generate it we'll need to use the jpa-plugin:

<plugin>
    <artifactId>kotlin-maven-plugin</artifactId>
    <groupId>org.jetbrains.kotlin</groupId>
    <version>1.2.30</version>
    <configuration>
        <compilerPlugins>
        <plugin>jpa</plugin>
        </compilerPlugins>
        <jvmTarget>1.8</jvmTarget>
    </configuration>
    <dependencies>
        <dependency>
        <groupId>org.jetbrains.kotlin</groupId>
        <artifactId>kotlin-maven-noarg</artifactId>
        <version>1.2.30</version>
        </dependency>
    </dependencies>
    <!--...-->
</plugin>
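To see why this matters: a no-arg constructor is exactly what a JPA provider looks up reflectively when instantiating entities. A small plain-Java sketch (the KotlinStyleEntity class is a hypothetical stand-in, not the article's entity):

```java
public class NoArgConstructorCheck {

    // hypothetical stand-in for a Kotlin data class compiled
    // WITHOUT the jpa-plugin: only a parameterized constructor exists
    public static class KotlinStyleEntity {
        final String name;

        public KotlinStyleEntity(String name) {
            this.name = name;
        }
    }

    // essentially the check a JPA provider performs before instantiating an entity
    public static boolean hasNoArgConstructor(Class<?> type) {
        try {
            type.getDeclaredConstructor(); // looks up the zero-arg constructor
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasNoArgConstructor(KotlinStyleEntity.class)); // false
        System.out.println(hasNoArgConstructor(String.class));            // true
    }
}
```

With the jpa-plugin enabled, the compiler generates that zero-arg constructor for us and the lookup succeeds.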

4. JPA with Kotlin Data Classes

After the previous setup is done, we’re ready to use JPA with data classes.

Let’s start creating a Person data class with two attributes – id and name, like this:

@Entity
data class Person(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Int,
    
    @Column(nullable = false)
    val name: String
)

As we can see, we can freely use the annotations from JPA like @Entity, @Column and @Id.

To see our entity in action, we’ll create the following test:

@Test
fun givenPerson_whenSaved_thenFound() {
    doInHibernate(({ this.sessionFactory() }), { session ->
        val personToSave = Person(0, "John")
        session.persist(personToSave)
        val personFound = session.find(Person::class.java, personToSave.id)
        session.refresh(personFound)

        assertTrue(personToSave == personFound)
    })
}

After running the test with logging enabled, we can see the following results:

Hibernate: insert into Person (id, name) values (null, ?)
Hibernate: select person0_.id as id1_0_0_, person0_.name as name2_0_0_ from Person person0_ where person0_.id=?

That's an indicator that all is going well.

It's important to note that if we don't use the jpa-plugin, we'll get an InstantiationException at runtime, due to the lack of a default constructor:

javax.persistence.PersistenceException: org.hibernate.InstantiationException: No default constructor for entity: : com.baeldung.entity.Person

Now, we’ll test again with null values. To do this, let’s extend our Person entity with a new attribute email and a @OneToMany relationship:

    //...
    @Column(nullable = true)
    val email: String? = null,

    @Column(nullable = true)
    @OneToMany(cascade = [CascadeType.ALL])
    val phoneNumbers: List<PhoneNumber>? = null

As we can see, the email and phoneNumbers fields are nullable and thus declared with a question mark.

The PhoneNumber entity has two attributes – id and number:

@Entity
data class PhoneNumber(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Int,

    @Column(nullable = false)
    val number: String
)

Let’s verify this with a test:

@Test
fun givenPersonWithNullFields_whenSaved_thenFound() {
    doInHibernate(({ this.sessionFactory() }), { session ->
        val personToSave = Person(0, "John", null, null)
        session.persist(personToSave)
        val personFound = session.find(Person::class.java, personToSave.id)
        session.refresh(personFound)

        assertTrue(personToSave == personFound)
    })
}

This time, we’ll get one insert statement:

Hibernate: insert into Person (id, email, name) values (null, ?, ?)
Hibernate: select person0_.id as id1_0_1_, person0_.email as email2_0_1_, person0_.name as name3_0_1_, phonenumbe1_.Person_id as Person_i1_1_3_, phonenumbe2_.id as phoneNum2_1_3_, phonenumbe2_.id as id1_2_0_, phonenumbe2_.number as number2_2_0_ from Person person0_ left outer join Person_PhoneNumber phonenumbe1_ on person0_.id=phonenumbe1_.Person_id left outer join PhoneNumber phonenumbe2_ on phonenumbe1_.phoneNumbers_id=phonenumbe2_.id where person0_.id=?

Let’s test one more time but without null data to verify the output:

@Test
fun givenPersonWithFullData_whenSaved_thenFound() {
    doInHibernate(({ this.sessionFactory() }), { session ->
        val personToSave = Person(
          0, 
          "John", 
          "john@test.com", 
          Arrays.asList(PhoneNumber(0, "202-555-0171"), PhoneNumber(0, "202-555-0102")))
        session.persist(personToSave)
        val personFound = session.find(Person::class.java, personToSave.id)
        session.refresh(personFound)

        assertTrue(personToSave == personFound)
    })
}

And, as we can see, now we get three insert statements:

Hibernate: insert into Person (id, email, name) values (null, ?, ?)
Hibernate: insert into PhoneNumber (id, number) values (null, ?)
Hibernate: insert into PhoneNumber (id, number) values (null, ?)

5. Conclusion

In this quick article, we saw an example of how to integrate Kotlin data classes with JPA using the jpa-plugin and Hibernate.

As always, the source code is available over on GitHub.


Build an MVC Web Application with Grails


1. Overview

In this tutorial, we’ll learn how to create a simple web application using Grails.

Grails (more precisely, its latest major version) is a framework built on top of the Spring Boot project and uses the Apache Groovy language to develop web apps.

It’s inspired by the Rails Framework for Ruby and is built around the convention-over-configuration philosophy which allows reducing boilerplate code.

2. Setup

First of all, let’s head over to the official page to prepare the environment. At the time of this tutorial, the latest version is 3.3.3.

Simply put, there are two ways of installing Grails: via SDKMAN or by downloading the distribution and adding binaries to the PATH environment variable.

We won’t cover the setup step by step because it is well documented in the Grails Docs.

3. Anatomy of a Grails App

In this section, we will get a better understanding of the Grails application structure. As we mentioned earlier, Grails prefers convention over configuration, therefore the location of files defines their purpose. Let’s see what we have in the grails-app directory:

  • assets – a place where we store static asset files like styles, JavaScript files or images
  • conf – contains project configuration files:
    • application.yml contains standard web app settings like data source, mime types, and other Grails or Spring related settings
    • resources.groovy contains spring bean definitions
    • logback.groovy contains logging configuration
  • controllers – responsible for handling requests and generating responses or delegating them to the views. By convention, when a file name ends with *Controller, the framework creates a default URL mapping for each action defined in the controller class
  • domain – contains the business model of the Grails application. Each class living here will be mapped to database tables by GORM
  • i18n – used for internationalization support
  • init – an entry point of the application
  • services – the business logic of the application will live here. By convention, Grails will create a Spring singleton bean for each service
  • taglib – the place for custom tag libraries
  • views – contains views and templates

4. A Simple Web Application

In this section, we will create a simple web app for managing Students. Let's start by invoking the CLI command for creating an application skeleton:

grails create-app

When the basic structure of the project has been generated, let’s move on to implementing actual web app components.

4.1. Domain Layer

As we are implementing a web application for handling Students, let’s start with generating a domain class called Student:

grails create-domain-class com.baeldung.grails.Student

And finally, let’s add the firstName and lastName properties to it:

class Student {
    String firstName
    String lastName
}

Grails applies its conventions and will set up an object-relational mapping for all classes located in the grails-app/domain directory.

Moreover, thanks to the GormEntity trait, all domain classes will have access to all CRUD operations, which we’ll use in the next section for implementing services.

4.2. Service Layer

Our application will handle the following use cases:

  • Viewing a list of students
  • Creating new students
  • Removing existing students

Let’s implement these use cases. We will start by generating a service class:

grails create-service com.baeldung.grails.Student

Let’s head over to the grails-app/services directory, find our newly created service in the appropriate package and add all necessary methods:

@Transactional
class StudentService {

    def get(id){
        Student.get(id)
    }

    def list() {
        Student.list()
    }

    def save(student){
        student.save()
    }

    def delete(id){
        Student.get(id).delete()
    }
}

Note that services don’t support transactions by default. We can enable this feature by adding the @Transactional annotation to the class.

4.3. Controller Layer

In order to make the business logic available to the UI, let’s create a StudentController by invoking the following command:

grails create-controller com.baeldung.grails.Student

By default, Grails injects beans by name. This means that we can easily inject the StudentService singleton instance into our controller by declaring an instance variable called studentService.

We can now define actions for reading, creating and deleting students:

class StudentController {

    def studentService

    def index() {
        respond studentService.list()
    }

    def show(Long id) {
        respond studentService.get(id)
    }

    def create() {
        respond new Student(params)
    }

    def save(Student student) {
        studentService.save(student)
        redirect action:"index", method:"GET"
    }

    def delete(Long id) {
        studentService.delete(id)
        redirect action:"index", method:"GET"
    }
}

By convention, the index() action from this controller will be mapped to the URI /student/index, the show() action to /student/show and so on.

4.4. View Layer

Having set up our controller actions, we can now proceed to create the UI views. We will create three Groovy Server Pages for listing, creating and removing Students.

By convention, Grails will render a view based on the controller name and action. For example, the index() action from StudentController will resolve to /grails-app/views/student/index.gsp.

Let’s start with implementing the view /grails-app/views/student/index.gsp, which will display a list of students. We’ll use the tag <f:table/> to create an HTML table displaying all students returned from the index() action in our controller.

By convention, when we respond with a list of objects, Grails will add the “List” suffix to the model name so that we can access the list of student objects with the variable studentList:

<!DOCTYPE html>
<html>
    <head>
        <meta name="layout" content="main" />
    </head>
    <body>
        <div class="nav" role="navigation">
            <ul>
                <li><g:link class="create" action="create">Create</g:link></li>
            </ul>
        </div>
        <div id="list-student" class="content scaffold-list" role="main">
            <f:table collection="${studentList}" 
                properties="['firstName', 'lastName']" />
        </div>
    </body>
</html>

We’ll now proceed to the view /grails-app/views/student/create.gsp, which allows the user to create new Students. We’ll use the built-in <f:all/> tag, which displays a form for all properties of a given bean:

<!DOCTYPE html>
<html>
    <head>
        <meta name="layout" content="main" />
    </head>
    <body>
        <div id="create-student" class="content scaffold-create" role="main">
            <g:form resource="${this.student}" method="POST">
                <fieldset class="form">
                    <f:all bean="student"/>
                </fieldset>
                <fieldset class="buttons">
                    <g:submitButton name="create" class="save" value="Create" />
                </fieldset>
            </g:form>
        </div>
    </body>
</html>

Finally, let’s create the view /grails-app/views/student/show.gsp for viewing and eventually deleting students.

Among other tags, we’ll take advantage of <f:display/>, which takes a bean as an argument and displays all its fields:

<!DOCTYPE html>
<html>
    <head>
        <meta name="layout" content="main" />
    </head>
    <body>
        <div class="nav" role="navigation">
            <ul>
                <li><g:link class="list" action="index">Students list</g:link></li>
            </ul>
        </div>
        <div id="show-student" class="content scaffold-show" role="main">
            <f:display bean="student" />
            <g:form resource="${this.student}" method="DELETE">
                <fieldset class="buttons">
                    <input class="delete" type="submit" value="delete" />
                </fieldset>
            </g:form>
        </div>
    </body>
</html>

4.5. Unit Tests

Grails mainly takes advantage of Spock for testing purposes. If you are not familiar with Spock, we highly recommend reading this tutorial first.

Let’s start with unit testing the index() action of our StudentController. 

We’ll mock the list() method from StudentService and test if index() returns the expected model:

void "Test the index action returns the correct model"() {
    given:
    controller.studentService = Mock(StudentService) {
        list() >> [new Student(firstName: 'John',lastName: 'Doe')]
    }
 
    when:"The index action is executed"
    controller.index()

    then:"The model is correct"
    model.studentList.size() == 1
    model.studentList[0].firstName == 'John'
    model.studentList[0].lastName == 'Doe'
}

Now, let’s test the delete() action. We’ll verify if delete() was invoked from StudentService and verify redirection to the index page:

void "Test the delete action with an instance"() {
    given:
    controller.studentService = Mock(StudentService) {
      1 * delete(2)
    }

    when:"The domain instance is passed to the delete action"
    request.contentType = FORM_CONTENT_TYPE
    request.method = 'DELETE'
    controller.delete(2)

    then:"The user is redirected to index"
    response.redirectedUrl == '/student/index'
}

4.6. Integration Tests

Next, let’s have a look at how to create integration tests for the service layer. Mainly we’ll test integration with a database configured in grails-app/conf/application.yml. 

By default, Grails uses the in-memory H2 database for this purpose.

First of all, let’s start with defining a helper method for creating data to populate the database:

private Long setupData() {
    new Student(firstName: 'John',lastName: 'Doe')
      .save(flush: true, failOnError: true)
    new Student(firstName: 'Max',lastName: 'Foo')
      .save(flush: true, failOnError: true)
    Student student = new Student(firstName: 'Alex',lastName: 'Bar')
      .save(flush: true, failOnError: true)
    student.id
}

Thanks to the @Rollback annotation on our integration test class, each method will run in a separate transaction, which will be rolled back at the end of the test.

Take a look at how we implemented the integration test for our list() method:

void "test list"() {
    setupData()

    when:
    List<Student> studentList = studentService.list()

    then:
    studentList.size() == 3
    studentList[0].lastName == 'Doe'
    studentList[1].lastName == 'Foo'
    studentList[2].lastName == 'Bar'
}

Also, let’s test the delete() method and validate if the total count of students is decremented by one:

void "test delete"() {
    Long id = setupData()

    expect:
    studentService.list().size() == 3

    when:
    studentService.delete(id)
    sessionFactory.currentSession.flush()

    then:
    studentService.list().size() == 2
}

5. Running and Deploying

Running and deploying apps can be done by invoking a single command via the Grails CLI.

For running the app use:

grails run-app

By default, Grails will set up Tomcat on port 8080.

Let’s navigate to http://localhost:8080/student/index to see what our web application looks like:

If you want to deploy your application to a servlet container, use:

grails war

to create a ready-to-deploy war artifact.

6. Conclusion

In this article, we focused on how to create a Grails web application using the convention-over-configuration philosophy. We also saw how to perform unit and integration tests with the Spock framework.

As always, all the code used here can be found over on GitHub.

Introduction to RxRelay for RxJava


1. Introduction

The popularity of RxJava has led to the creation of multiple third-party libraries that extend its functionality.

Many of those libraries were an answer to typical problems that developers were dealing with when using RxJava. RxRelay is one of these solutions.

2. Dealing with a Subject

Simply put, a Subject acts as a bridge between Observable and Observer. Since it’s an Observer, it can subscribe to one or more Observables and receive events from them.

Also, given it’s at the same time an Observable, it can reemit events or emit new events to its subscribers. More information about the Subject can be found in this article.

One of the issues with Subject is that after it receives onComplete() or onError() – it’s no longer able to move data. Sometimes it’s the desired behavior, but sometimes it’s not.

In cases when such behavior isn’t desired, we should consider using RxRelay.

3. Relay

A Relay is basically a Subject, but without the ability to call onComplete() and onError(), thus it’s constantly able to emit data.

This allows us to create bridges between different types of API without worrying about accidentally triggering the terminal state.

To use RxRelay we need to add the following dependency to our project:

<dependency>
  <groupId>com.jakewharton.rxrelay2</groupId>
  <artifactId>rxrelay</artifactId>
  <version>2.0.0</version>
</dependency>

4. Types of Relay

There’re three different types of Relay available in the library. We’ll quickly explore all three here.

4.1. PublishRelay

This type of Relay emits events to an Observer only once it has subscribed.

The events will be emitted to all subscribers:

@Test
public void whenObserverSubscribedToPublishRelay_itReceivesEmittedEvents() {
    PublishRelay<Integer> publishRelay = PublishRelay.create();
    TestObserver<Integer> firstObserver = TestObserver.create();
    TestObserver<Integer> secondObserver = TestObserver.create();
    
    publishRelay.subscribe(firstObserver);
    firstObserver.assertSubscribed();
    publishRelay.accept(5);
    publishRelay.accept(10);
    publishRelay.subscribe(secondObserver);
    secondObserver.assertSubscribed();
    publishRelay.accept(15);
    firstObserver.assertValues(5, 10, 15);
    
    // second receives only the last event
    secondObserver.assertValue(15);
}

There’s no buffering of events in this case, so this behavior is similar to a cold Observable.

4.2. BehaviorRelay

This type of Relay will reemit the most recently observed event and all subsequent events once the Observer has subscribed:

@Test
public void whenObserverSubscribedToBehaviorRelay_itReceivesEmittedEvents() {
    BehaviorRelay<Integer> behaviorRelay = BehaviorRelay.create();
    TestObserver<Integer> firstObserver = TestObserver.create();
    TestObserver<Integer> secondObserver = TestObserver.create();
    behaviorRelay.accept(5);     
    behaviorRelay.subscribe(firstObserver);
    behaviorRelay.accept(10);
    behaviorRelay.subscribe(secondObserver);
    behaviorRelay.accept(15);
    firstObserver.assertValues(5, 10, 15);
    secondObserver.assertValues(10, 15);
}

When we’re creating the BehaviorRelay we can specify the default value, which will be emitted, if there’re no other events to emit.

To specify the default value we can use createDefault() method:

@Test
public void whenObserverSubscribedToBehaviorRelay_itReceivesDefaultValue() {
    BehaviorRelay<Integer> behaviorRelay = BehaviorRelay.createDefault(1);
    TestObserver<Integer> firstObserver = new TestObserver<>();
    behaviorRelay.subscribe(firstObserver);
    firstObserver.assertValue(1);
}

If we don’t want to specify the default value, we can use the create() method:

@Test
public void whenObserverSubscribedToBehaviorRelayWithoutDefaultValue_itIsEmpty() {
    BehaviorRelay<Integer> behaviorRelay = BehaviorRelay.create();
    TestObserver<Integer> firstObserver = new TestObserver<>();
    behaviorRelay.subscribe(firstObserver);
    firstObserver.assertEmpty();
}

4.3. ReplayRelay

This type of Relay buffers all events it has received and then reemits them to every Observer that subscribes:

@Test
public void whenObserverSubscribedToReplayRelay_itReceivesEmittedEvents() {
    ReplayRelay<Integer> replayRelay = ReplayRelay.create();
    TestObserver<Integer> firstObserver = TestObserver.create();
    TestObserver<Integer> secondObserver = TestObserver.create();
    replayRelay.subscribe(firstObserver);
    replayRelay.accept(5);
    replayRelay.accept(10);
    replayRelay.accept(15);
    replayRelay.subscribe(secondObserver);
    firstObserver.assertValues(5, 10, 15);
    secondObserver.assertValues(5, 10, 15);
}

All elements are buffered and all subscribers will receive the same events, so this behavior is similar to a cold Observable.

When we’re creating the ReplayRelay we can provide maximal buffer size and time to live for events.

To create the Relay with limited buffer size we can use the createWithSize() method. When there’re more events to be buffered than the set buffer size, previous elements will be discarded:

@Test
public void whenObserverSubscribedToReplayRelayWithLimitedSize_itReceivesEmittedEvents() {
    ReplayRelay<Integer> replayRelay = ReplayRelay.createWithSize(2);
    TestObserver<Integer> firstObserver = TestObserver.create();
    replayRelay.accept(5);
    replayRelay.accept(10);
    replayRelay.accept(15);
    replayRelay.accept(20);
    replayRelay.subscribe(firstObserver);
    firstObserver.assertValues(15, 20);
}
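The size-bounded buffering can be sketched in plain Java with a deque that drops its oldest element when full (an illustration of the semantics only, not RxRelay's implementation):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// A plain-Java sketch of a size-bounded replay buffer:
// when capacity is exceeded, the oldest event is discarded.
public class BoundedReplayBuffer<T> {
    private final int maxSize;
    private final Deque<T> buffer = new ArrayDeque<>();

    public BoundedReplayBuffer(int maxSize) {
        this.maxSize = maxSize;
    }

    public void accept(T event) {
        if (buffer.size() == maxSize) {
            buffer.removeFirst(); // drop the oldest buffered event
        }
        buffer.addLast(event);
    }

    // what a new subscriber would be handed on subscription
    public List<T> replay() {
        return new ArrayList<>(buffer);
    }

    public static void main(String[] args) {
        BoundedReplayBuffer<Integer> relay = new BoundedReplayBuffer<>(2);
        relay.accept(5);
        relay.accept(10);
        relay.accept(15);
        relay.accept(20);
        System.out.println(relay.replay()); // [15, 20]
    }
}
```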

We can also create a ReplayRelay with a maximum time-to-live for buffered events using the createWithTime() method:

@Test
public void whenObserverSubscribedToReplayRelayWithMaxAge_thenItReceivesEmittedEvents()
  throws InterruptedException {
    SingleScheduler scheduler = new SingleScheduler();
    ReplayRelay<Integer> replayRelay =
      ReplayRelay.createWithTime(2000, TimeUnit.MILLISECONDS, scheduler);
    long current =  scheduler.now(TimeUnit.MILLISECONDS);
    TestObserver<Integer> firstObserver = TestObserver.create();
    replayRelay.accept(5);
    replayRelay.accept(10);
    replayRelay.accept(15);
    replayRelay.accept(20);
    Thread.sleep(3000);
    replayRelay.subscribe(firstObserver);
    firstObserver.assertEmpty();
}

5. Custom Relay

All the types described above extend the common abstract class Relay, which gives us the ability to write our own custom Relay class.

To create a custom Relay we need to implement three methods: accept(), hasObservers() and subscribeActual(). 

Let’s write a simple Relay that will reemit event to one of the subscribers chosen at random:

public class RandomRelay extends Relay<Integer> {
    Random random = new Random();

    List<Observer<? super Integer>> observers = new ArrayList<>();

    @Override
    public void accept(Integer integer) {
        // nextInt(bound) yields an index in [0, size), avoiding negative values
        int observerIndex = random.nextInt(observers.size());
        observers.get(observerIndex).onNext(integer);
    }

    @Override
    public boolean hasObservers() {
        return !observers.isEmpty();
    }

    @Override
    protected void subscribeActual(Observer<? super Integer> observer) {
        observers.add(observer);
        observer.onSubscribe(Disposables.fromRunnable(
          () -> System.out.println("Disposed")));
    }
}

We can now test that only one subscriber will receive the event:

@Test
public void whenTwoObserversSubscribedToRandomRelay_thenOnlyOneReceivesEvent() {
    RandomRelay randomRelay = new RandomRelay();
    TestObserver<Integer> firstObserver = TestObserver.create();
    TestObserver<Integer> secondObserver = TestObserver.create();
    randomRelay.subscribe(firstObserver);
    randomRelay.subscribe(secondObserver);
    randomRelay.accept(5);
    if(firstObserver.values().isEmpty()) {
        secondObserver.assertValue(5);
    } else {
        firstObserver.assertValue(5);
        secondObserver.assertEmpty();
    }
}

6. Conclusion

In this tutorial, we had a look at RxRelay, a type similar to Subject but without the ability to trigger the terminal state.

More information can be found in the documentation. And, as always all the code samples can be found over on GitHub.

A Simple Tagging Implementation with MongoDB


1. Overview

In this tutorial, we’ll take a look at a simple tagging implementation using Java and MongoDB.

For those unfamiliar with the concept, a tag is a keyword used as a “label” to group documents into different categories. This allows users to quickly navigate through similar content, and it's especially useful when dealing with large amounts of data.

That being said, it's not surprising that this technique is very commonly used in blogs. In this scenario, each post has one or more tags according to the topics covered. When users finish reading, they can follow one of the tags to view more content related to that topic.
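Conceptually, tagging is an inverted index from tag to documents. The grouping idea can be sketched with plain Java streams (hypothetical data, not this article's MongoDB model):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TagGrouping {

    // invert "post -> tags" into "tag -> posts carrying it"
    public static Map<String, List<String>> groupByTag(Map<String, List<String>> postTags) {
        return postTags.entrySet().stream()
          .flatMap(e -> e.getValue().stream()
            .map(tag -> Map.entry(tag, e.getKey())))
          .collect(Collectors.groupingBy(Map.Entry::getKey,
            Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
    }

    public static void main(String[] args) {
        Map<String, List<String>> postTags = Map.of(
          "Java 8 and MongoDB", List.of("Java", "MongoDB"),
          "JUnit 5 with Java", List.of("Java", "JUnit5"));

        // both posts carry the "Java" tag
        System.out.println(groupByTag(postTags).get("Java").size()); // 2
    }
}
```

MongoDB does this navigation for us at query time, as we'll see below.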

Let’s see how we can implement this scenario.

2. Dependency

In order to query the database, we’ll have to include the MongoDB driver dependency in our pom.xml:

<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.6.3</version>
</dependency>

The current version of this dependency can be found here.

3. Data Model

First of all, let’s start by planning out what a post document should look like.

To keep it simple, our data model will only have a title, which we’ll also use as the document id, an author, and some tags.

We’ll store the tags inside an array since a post will probably have more than just one:

{
    "_id" : "Java 8 and MongoDB",
    "author" : "Donato Rimenti",
    "tags" : ["Java", "MongoDB", "Java 8", "Stream API"]
}

We’ll also create the corresponding Java model class:

public class Post {
    private String title;
    private String author;
    private List<String> tags;

    // getters and setters
}

4. Updating Tags

Now that we have set up the database and inserted a couple of sample posts, let’s see how we can update them.

Our repository class will include two methods to handle the addition and removal of tags by using the title to find them. We’ll also return a boolean to indicate whether the query updated an element or not:

public boolean addTags(String title, List<String> tags) {
    UpdateResult result = collection.updateOne(
      new BasicDBObject(DBCollection.ID_FIELD_NAME, title), 
      Updates.addEachToSet(TAGS_FIELD, tags));
    return result.getModifiedCount() == 1;
}

public boolean removeTags(String title, List<String> tags) {
    UpdateResult result = collection.updateOne(
      new BasicDBObject(DBCollection.ID_FIELD_NAME, title), 
      Updates.pullAll(TAGS_FIELD, tags));
    return result.getModifiedCount() == 1;
}

We used the addEachToSet method instead of push for the addition so that if the tags are already there, we won’t add them again.

Notice also that the addToSet operator wouldn’t work either since it would add the new tags as a nested array which is not what we want.
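The difference between the two update operators boils down to set versus list semantics, which we can illustrate with plain Java collections (an analogy only, not the driver's implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class AddEachToSetAnalogy {

    public static void main(String[] args) {
        List<String> tags = Arrays.asList("Java", "MongoDB");

        // $push-like behavior: blindly appends, allowing duplicates
        List<String> pushed = new ArrayList<>(tags);
        pushed.addAll(Arrays.asList("Java", "Stream API"));
        System.out.println(pushed); // [Java, MongoDB, Java, Stream API]

        // $addToSet with $each-like behavior: adds each element, skipping duplicates
        Set<String> added = new LinkedHashSet<>(tags);
        added.addAll(Arrays.asList("Java", "Stream API"));
        System.out.println(added); // [Java, MongoDB, Stream API]
    }
}
```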

Another way we can perform our updates is through the Mongo shell. For instance, let's update the post JUnit 5 with Java. In particular, we want to add the tags Java and JUnit5 and remove the tags Spring and REST:

db.posts.updateOne(
    { _id : "JUnit 5 with Java" }, 
    { $addToSet : 
        { "tags" : 
            { $each : ["Java", "JUnit5"] }
        }
});

db.posts.updateOne(
    {_id : "JUnit 5 with Java" },
    { $pull : 
        { "tags" : { $in : ["Spring", "REST"] }
    }
});

5. Queries

Last but not least, let’s go through some of the most common queries we may be interested in while working with tags. For this purpose, we’ll take advantage of three array operators in particular:

  • $in – returns the documents where a field contains any value of the specified array
  • $nin – returns the documents where a field doesn’t contain any value of the specified array
  • $all – returns the documents where a field contains all the values of the specified array
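The semantics of these operators map to familiar set predicates; here's a plain-Java sketch of what each one matches (an illustration of the logic only, not the driver API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class TagQuerySemantics {

    // $in: the document's tags share at least one value with the query
    public static boolean matchesAny(List<String> docTags, List<String> query) {
        return !Collections.disjoint(docTags, query);
    }

    // $all: the document's tags contain every queried value
    public static boolean matchesAll(List<String> docTags, List<String> query) {
        return docTags.containsAll(query);
    }

    // $nin: the document's tags share no value with the query
    public static boolean matchesNone(List<String> docTags, List<String> query) {
        return Collections.disjoint(docTags, query);
    }

    public static void main(String[] args) {
        List<String> tags = Arrays.asList("Java", "MongoDB", "Stream API");
        System.out.println(matchesAny(tags, Arrays.asList("MongoDB", "Scala")));  // true
        System.out.println(matchesAll(tags, Arrays.asList("Java", "MongoDB")));   // true
        System.out.println(matchesNone(tags, Arrays.asList("Groovy", "Scala"))); // true
    }
}
```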

We’ll define three methods to query the posts in relation to a collection of tags passed as arguments. They will return the posts which match at least one tag, all the tags and none of the tags. We’ll also create a mapping method to handle the conversion between a document and our model using Java 8’s Stream API:

public List<Post> postsWithAtLeastOneTag(String... tags) {
    FindIterable<Document> results = collection
      .find(Filters.in(TAGS_FIELD, tags));
    return StreamSupport.stream(results.spliterator(), false)
      .map(TagRepository::documentToPost)
      .collect(Collectors.toList());
}

public List<Post> postsWithAllTags(String... tags) {
    FindIterable<Document> results = collection
      .find(Filters.all(TAGS_FIELD, tags));
    return StreamSupport.stream(results.spliterator(), false)
      .map(TagRepository::documentToPost)
      .collect(Collectors.toList());
}

public List<Post> postsWithoutTags(String... tags) {
    FindIterable<Document> results = collection
      .find(Filters.nin(TAGS_FIELD, tags));
    return StreamSupport.stream(results.spliterator(), false)
      .map(TagRepository::documentToPost)
      .collect(Collectors.toList());
}

private static Post documentToPost(Document document) {
    Post post = new Post();
    post.setTitle(document.getString(DBCollection.ID_FIELD_NAME));
    post.setAuthor(document.getString("author"));
    post.setTags((List<String>) document.get(TAGS_FIELD));
    return post;
}

Again, let’s take a look at the equivalent shell queries. We’ll fetch three different post collections: the posts tagged with either MongoDB or Stream API, those tagged with both Java 8 and JUnit 5, and those tagged with neither Groovy nor Scala:

db.posts.find({
    "tags" : { $in : ["MongoDB", "Stream API" ] } 
});

db.posts.find({
    "tags" : { $all : ["Java 8", "JUnit 5" ] } 
});

db.posts.find({
    "tags" : { $nin : ["Groovy", "Scala" ] } 
});

6. Conclusion

In this article, we showed how to build a tagging mechanism. Of course, we can use and readapt this same methodology for other purposes apart from a blog.

If you are interested further in learning MongoDB, we encourage you to read this introductory article.

As always, all the code in the example is available over on the GitHub project.

Hamcrest Object Matchers


1. Overview

Hamcrest provides static matchers for making unit test assertions simpler and more legible. You can get started exploring some of the available matchers here.

In this quick tutorial, we’ll dive deeper into object matchers.

2. Setup

To get Hamcrest, we just need to add the following Maven dependency to our pom.xml:

<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>java-hamcrest</artifactId>
    <version>2.0.0.0</version>
    <scope>test</scope>
</dependency>

The latest Hamcrest version can be found over on Maven Central.

3. Object Matchers

Object matchers are meant to perform checks over object’s properties.

Before looking into the matchers, we’ll create a couple of beans to make the examples simple to understand.

Our first object is called Location and has no properties:

public class Location {}

We’ll name our second bean City and add the following implementation to it:

public class City extends Location {
    
    String name;
    String state;

    // standard constructor, getters and setters

    @Override
    public String toString() {
        if (this.name == null && this.state == null) {
            return null;
        }
        StringBuilder sb = new StringBuilder();
        sb.append("[");
        sb.append("Name: ");
        sb.append(this.name);
        sb.append(", ");
        sb.append("State: ");
        sb.append(this.state);
        sb.append("]");
        return sb.toString();
    }
}

Note that City extends Location. We’ll make use of that later. Now, let’s start with the object matchers!

3.1. hasToString

As the name says, the hasToString method verifies that a certain object’s toString method returns a specific String:

@Test
public void givenACity_whenHasToString_thenCorrect() {
    City city = new City("San Francisco", "CA");
    
    assertThat(city, hasToString("[Name: San Francisco, State: CA]"));
}

So, we’re creating a City and verifying that its toString method returns the String that we want. We can take this one step further and instead of checking for equality, check for some other condition:

@Test
public void givenACity_whenHasToStringEqualToIgnoringCase_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city, hasToString(
      equalToIgnoringCase("[NAME: SAN FRANCISCO, STATE: CA]")));
}

As we can see, hasToString is overloaded and can receive both a String or a text matcher as a parameter. So, we can also do things like:

@Test
public void givenACity_whenHasToStringEmptyOrNullString_thenCorrect() {
    City city = new City(null, null);
    
    assertThat(city, hasToString(emptyOrNullString()));
}

You can find more information on text matchers here. Now let’s move to the next object matcher.

3.2. typeCompatibleWith

This matcher represents an is-a relationship. This is where our Location superclass comes into play:

@Test
public void givenACity_whenTypeCompatibleWithLocation_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city.getClass(), is(typeCompatibleWith(Location.class)));
}

This is saying that City is-a Location, which is true, so this test should pass. Also, if we wanted to test the negative case:

@Test
public void givenACity_whenTypeNotCompatibleWithString_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city.getClass(), is(not(typeCompatibleWith(String.class))));
}

Of course, our City class is not a String.

Finally, note that all Java objects should pass the following test:

@Test
public void givenACity_whenTypeCompatibleWithObject_thenCorrect() {
    City city = new City("San Francisco", "CA");

    assertThat(city.getClass(), is(typeCompatibleWith(Object.class)));
}

Please remember that the matcher is() consists of a wrapper over another matcher, with the purpose of making the whole assertion more readable.

4. Conclusion

Hamcrest provides a simple and clean way of creating assertions. There is a wide variety of matchers that make every developer’s life simpler as well as every project more readable.

And object matchers are definitely a straightforward way of checking class properties. 

As always, you’ll find the full implementation over on the GitHub project.

Introduction to CheckStyle


1. Overview

Checkstyle is an open source tool that checks code against a configurable set of rules.

In this tutorial, we’re going to look at how to integrate Checkstyle into a Java project via Maven and by using IDE plugins.

The plugins mentioned in the sections below aren’t dependent on each other and can be integrated individually into our build or IDEs. For example, the Maven plugin isn’t needed in our pom.xml to run the validations in our Eclipse IDE.

2. Checkstyle Maven Plugin

2.1. Maven Configuration

To add Checkstyle to a project, we need to add the plugin in the reporting section of a pom.xml:

<reporting>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-checkstyle-plugin</artifactId>
            <version>3.0.0</version>
            <configuration>
                <configLocation>checkstyle.xml</configLocation>
            </configuration>
        </plugin>
    </plugins>
</reporting>

This plugin comes with two predefined checks, a Sun-style check, and a Google-style check. The default check for a project is sun_checks.xml.

To use our custom configuration, we can specify our configuration file as shown in the sample above. Using this config, the plugin will now read our custom configuration instead of the default one provided.

The latest version of the plugin can be found on Maven Central.

2.2. Report Generation

Now that our Maven plugin is configured, we can generate a report for our code by running the mvn site command. Once the build finishes, the report is available in the target/site folder under the name checkstyle.html.

There are three major parts to a Checkstyle report:

Files: This section of the report provides us with the list of files in which the violations have happened. It also shows us the counts of the violations against their severity levels. Here is what the files section of the report looks like:

Rules: This part of the report gives us an overview of the rules that were used to check for violations. It shows the category of the rules, the number of violations and the severity of those violations. Here is a sample of the report that shows the rules section:

Details: Finally, the details section of the report provides us with the details of the violations that have happened. The details provided are at the line number level. Here is a sample details section of the report:

2.3. Build Integration

If there’s a need to have stringent checks on the coding style, we can configure the plugin in such a way that the build fails when the code doesn’t adhere to the standards.

We do this by adding an execution goal to our plugin definition:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <version>${checkstyle-maven-plugin.version}</version>
    <configuration>
        <configLocation>checkstyle.xml</configLocation>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The configLocation attribute defines which configuration file to refer to for the validations.

In our case, the config file is checkstyle.xml. The goal check mentioned in the execution section asks the plugin to run in the verify phase of the build and forces a build failure when a violation of coding standards occurs.

Now, if we run the mvn clean install command, it will scan the files for violations and the build will fail if any violations are found.

3. Eclipse Plugin

3.1. Configurations

Just like with the Maven integration, Eclipse enables us to use our custom configuration.

To import our configuration, go to Window -> Preferences -> Checkstyle. At the Global Check Configurations section, click on New.

This will open up a dialogue which will provide us options to specify our custom configuration file.

3.2. Reports Browsing

Now that our plugin is configured we can use it to analyze our code.

To check coding style for a project, right-click the project in the Eclipse Project Explorer and select CheckStyle -> Check Code with Checkstyle.

The plugin will give us feedback on our Java code within the Eclipse text editor. It will also generate the violation report for the project, which is available as a view in Eclipse.

To view the violation report, go to Window -> Show View -> Other, and search for Checkstyle. Options for Violations and Violations Chart should be displayed.

Selecting either option will give us a representation of violations grouped by type. Here is the violation pie chart for a sample project:

Clicking on a section of the pie chart would take us to the list of actual violations in the code.

Alternatively, we can open the Problems view of the Eclipse IDE and check the problems reported by the plugin.

Here is a sample Problems view of the Eclipse IDE:

Clicking on any of the warnings will take us to the code where the violation has happened.

4. IntelliJ IDEA Plugin

4.1. Configuration

Like Eclipse, IntelliJ IDEA also enables us to use our own custom configurations with a project.

In the IDE, open Settings and search for Checkstyle. A window is shown that has the option to select our checks. Click on the + button, and a window will open which will let us specify the location of the file to be used.

Now, we select a configuration XML file and click Next. This will open up the previous window and show our newly added custom configuration option. We select the new configuration and click on OK to start using it in our project.

4.2. Reports Browsing

Now that our plugin is configured, let’s use it to check for violations. To check a particular project for violations, go to Analyze -> Inspect Code.

The Inspections Results will give us a view of the violations under the Checkstyle section. Here is a sample report:

Clicking on the violations will take us to the exact lines on the file where the violations have happened.

5. Custom Checkstyle Configuration

In the Maven report generation section (Section 2.2), we used a custom configuration file to perform our own coding standard checks.

We have a way to create our own custom configuration XML file if we don’t want to use the packaged Google or Sun checks.

Here is the custom configuration file used for the above checks:

<!DOCTYPE module PUBLIC
  "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
  "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<module name="Checker">
    <module name="TreeWalker">
        <module name="AvoidStarImport">
            <property name="severity" value="warning" />
        </module>
    </module>
</module>

5.1. DOCTYPE Definition 

The first line of the file, i.e. the DOCTYPE definition, is an important part: it tells the parser where to download the DTD from so that the configuration can be understood by the system.

If we don’t include this definition, the file won’t be a valid configuration file.

5.2. Modules

A config file is primarily composed of Modules. A module has an attribute name which represents what the module does. The value of the name attribute corresponds to a class in the plugin’s code which is executed when the plugin is run.

Let’s learn about the different modules present in the config above.

5.3. Module Details

  • Checker: Modules are structured in a tree that has the Checker module at the root. This module defines the properties that are inherited by all other modules of the configuration.
  • TreeWalker: This module checks the individual Java source files and defines properties that are applicable to checking such files.
  • AvoidStarImport: This module sets a standard for not using Star imports in our Java code. It also has a property that asks the plugin to report the severity of such issues as a warning. Thus, whenever such violations are found in the code, a warning will be flagged against them.

To read more about custom configurations follow this link.

6. Report Analysis for the Spring-Rest Project

In this section, we’re going to shed some light on an analysis done by Checkstyle on the spring-rest project available on GitHub, using the custom configuration created in section 5 above.

6.1. Violation Report Generation

We’ve imported the configuration to Eclipse IDE and here is the violation report that is generated for the project:

The warnings reported here say that wildcard imports should be avoided in the code. We have two files that don’t comply with this standard. When we click on a warning, it takes us to the Java file which has the violation.

Here is how the HeavyResourceController.java file shows the warning reported:

6.2. Issue Resolution

Using Star imports is not a good practice in general as it can create conflicts when two or more packages contain the same class.

As an example, consider the class List, which is available in both the java.util and java.awt packages. If we use both the imports java.util.* and java.awt.*, our compiler will fail to compile the code, as List is available in both packages.

To resolve the issue mentioned above we organize the imports in both files and save them. Now when we run the plugin again we don’t see the violations and our code is now following the standards set in our custom configuration.
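To make the fix concrete, here is a minimal, hypothetical sketch (the class name is made up) of what the organized imports look like: each class is imported explicitly, so the compiler always knows which List we mean, and the AvoidStarImport check has nothing to flag:

```java
// explicit imports instead of java.util.*: no wildcard, no ambiguity
import java.util.ArrayList;
import java.util.List;

public class ImportDemo {

    // returns a java.util.List, unambiguously
    public static List<String> tags() {
        List<String> tags = new ArrayList<>();
        tags.add("Checkstyle");
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(tags()); // [Checkstyle]
    }
}
```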

7. Conclusion

In this article, we’ve covered basics for integrating Checkstyle in our Java project.

We’ve learned that it is a simple yet powerful tool that’s used to make sure that developers adhere to the coding standards set by the organization.

The sample code we used for static analysis is available over on GitHub.

The “final” Keyword in Java


1. Overview

While inheritance enables us to reuse existing code, sometimes we do need to set limitations on extensibility for various reasons; the final keyword allows us to do exactly that.

In this tutorial, we’ll take a look at what the final keyword means for classes, methods, and variables.

2. Final Classes

Classes marked as final can’t be extended. If we look at the code of Java core libraries, we’ll find many final classes there. One example is the String class.

Consider the situation if we can extend the String class, override any of its methods, and substitute all the String instances with the instances of our specific String subclass.

The result of the operations over String objects will then become unpredictable. And given that the String class is used everywhere, it’s unacceptable. That’s why the String class is marked as final.

Any attempt to inherit from a final class will cause a compiler error. To demonstrate this, let’s create the final class Cat:

public final class Cat {

    private int weight;

    // standard getter and setter
}

And let’s try to extend it:

public class BlackCat extends Cat {
}

We’ll see the compiler error:

The type BlackCat cannot subclass the final class Cat

Note that the final keyword in a class declaration doesn’t mean that the objects of this class are immutable. We can change the fields of Cat object freely:

Cat cat = new Cat();
cat.setWeight(1);

assertEquals(1, cat.getWeight());

We just can’t extend it.

If we follow the rules of good design strictly, we should create and document a class carefully or declare it final for safety reasons. However, we should use caution when creating final classes.

Notice that making a class final means that no other programmer can improve it. Imagine that we’re using a class and don’t have the source code for it, and there’s a problem with one method.

If the class is final, we can’t extend it to override the method and fix the problem. In other words, we lose extensibility, one of the benefits of object-oriented programming.

3. Final Methods

Methods marked as final cannot be overridden. When we design a class and feel that a method shouldn’t be overridden, we can make this method final. We can also find many final methods in Java core libraries.

Sometimes we don’t need to prohibit a class extension entirely, but only to prevent the overriding of some methods. A good example of this is the Thread class. It’s legal to extend it and thus create a custom thread class. But its isAlive() method is final.

This method checks if a thread is alive. It’s impossible to override the isAlive() method correctly for many reasons. One of them is that this method is native. Native code is implemented in another programming language and is often specific to the operating system and hardware it’s running on.

Let’s create a Dog class and make its sound() method final:

public class Dog {
    public final void sound() {
        // ...
    }
}

Now let’s extend the Dog class and try to override its sound() method:

public class BlackDog extends Dog {
    public void sound() {
    }
}

We’ll see the compiler error:

- overrides
com.baeldung.finalkeyword.Dog.sound
- Cannot override the final method from Dog

The sound() method is final and can’t be overridden.

If some methods of our class are called by other methods, we should consider making the called methods final. Otherwise, overriding them can affect the work of callers and cause surprising results.

If our constructor calls other methods, we should generally declare these methods final for the above reason.
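To see why, consider this minimal sketch (the class names are made up for illustration). The constructor of the superclass dispatches to the subclass override, which runs before the subclass fields are initialized:

```java
public class ConstructorCallDemo {

    public static class Parent {
        Parent() {
            describe(); // dispatches to the Child override, not this class's method
        }

        void describe() {
        }
    }

    public static class Child extends Parent {
        public static String seenDuringConstruction = "unset";
        private String name = "Child";

        @Override
        void describe() {
            // Parent's constructor runs before Child's field initializers,
            // so name is still null at this point
            seenDuringConstruction = name;
        }
    }

    public static void main(String[] args) {
        new Child();
        System.out.println(Child.seenDuringConstruction); // null
    }
}
```

Declaring describe() final in Parent would prevent the override, and with it the surprising null.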

What’s the difference between making all methods of the class final and marking the class itself final? In the first case, we can extend the class and add new methods to it.

In the second case, we can’t do this.
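A short sketch, with made-up class names, illustrates the first case: all methods are final, yet the class can still be extended with new behavior:

```java
public class FinalMethodsDemo {

    // the class itself is not final, but all its methods are
    public static class Printer {
        final String greet() {
            return "hello";
        }
    }

    // extending is still allowed: we can add new methods,
    // but overriding greet() would be a compiler error
    public static class LoudPrinter extends Printer {
        String greetLoudly() {
            return greet().toUpperCase();
        }
    }

    public static void main(String[] args) {
        System.out.println(new LoudPrinter().greetLoudly()); // HELLO
    }
}
```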

4. Final Variables

Variables marked as final can’t be reassigned. Once a final variable is initialized, it can’t be altered.

4.1. Final Primitive Variables

Let’s declare a primitive final variable i, then assign 1 to it.

And let’s try to assign a value of 2 to it:

public void whenFinalVariableAssign_thenOnlyOnce() {
    final int i = 1;
    //...
    i = 2;
}

The compiler says:

The final local variable i may already have been assigned

4.2. Final Reference Variables

If we have a final reference variable, we can’t reassign it either. But this doesn’t mean that the object it refers to is immutable. We can change the properties of this object freely.

To demonstrate this, let’s declare the final reference variable cat and initialize it:

final Cat cat = new Cat();

If we try to reassign it we’ll see a compiler error:

The final local variable cat cannot be assigned. It must be blank and not using a compound assignment

But we can change the properties of Cat instance:

cat.setWeight(5);

assertEquals(5, cat.getWeight());

4.3. Final Fields

Final fields can be either constants or write-once fields. To distinguish them, we should ask a question: would we include this field if we were to serialize the object? If not, then it’s not part of the object, but a constant.

Note that according to naming conventions, class constants should be uppercase, with components separated by underscore (“_”) characters:

static final int MAX_WIDTH = 999;

Note that any final field must be initialized before the constructor completes.

For static final fields, this means that we can initialize them:

  • upon declaration as shown in the above example
  • in the static initializer block

For instance final fields, this means that we can initialize them:

  • upon declaration
  • in the instance initializer block
  • in the constructor

Otherwise, the compiler will give us an error.
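A small sketch (the class name is made up) shows both a static final field initialized in a static initializer block and a write-once instance final field assigned in the constructor:

```java
public class InitDemo {

    // static final field, initialized in a static initializer block
    public static final int MAX_WIDTH;

    static {
        MAX_WIDTH = 999;
    }

    // instance final field: write-once, assigned in the constructor
    private final String name;

    public InitDemo(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        System.out.println(MAX_WIDTH + " " + new InitDemo("cat").getName());
    }
}
```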

4.4. Final Arguments

The final keyword is also legal to put before method arguments. A final argument can’t be changed inside a method:

public void methodWithFinalArguments(final int x) {
    x = 1;
}

The above assignment causes the compiler error:

The final local variable x cannot be assigned. It must be blank and not using a compound assignment

5. Conclusion

In this article, we learned what the final keyword means for classes, methods, and variables. Although we may not use the final keyword often in our internal code, it may be a good design solution.

As always, the complete code for this article can be found in the GitHub project.

Headers, Cookies and Parameters with REST-assured


1. Overview

In this quick tutorial, we’ll explore some advanced REST-assured scenarios. We explored REST-assured before, in our Guide to REST-assured tutorial.

To continue, we’ll cover examples that show how to set headers, cookies, and parameters for our requests.

The setup is the same as the previous article, so let’s dive into our examples.

2. Setting Parameters

Now, let’s discuss how to specify different parameters to our request – starting with path parameters.

2.1. Path Parameters

We can use pathParam(parameter-name, value) to specify a path parameter:

@Test
public void whenUsePathParam_thenOK() {
    given().pathParam("user", "eugenp")
      .when().get("/users/{user}/repos")
      .then().statusCode(200);
}

To add multiple path parameters we’ll use the pathParams() method:

@Test
public void whenUseMultiplePathParam_thenOK() {
    given().pathParams("owner", "eugenp", "repo", "tutorials")
      .when().get("/repos/{owner}/{repo}")
      .then().statusCode(200);

    given().pathParams("owner", "eugenp")
      .when().get("/repos/{owner}/{repo}","tutorials")
      .then().statusCode(200);
}

In this example, we’ve used named path parameters, but we can also add unnamed parameters, and even combine the two:

given().pathParams("owner", "eugenp")
  .when().get("/repos/{owner}/{repo}", "tutorials")
  .then().statusCode(200);

The resulting URL, in this case, is https://api.github.com/repos/eugenp/tutorials.

Note that the unnamed parameters are index-based.

2.2. Query Parameters

Next, let’s see how we can specify query parameters using queryParam():

@Test
public void whenUseQueryParam_thenOK() {
    given().queryParam("q", "john").when().get("/search/users")
      .then().statusCode(200);

    given().param("q", "john").when().get("/search/users")
      .then().statusCode(200);
}

The param() method will act like queryParam() with GET requests.

For adding multiple query parameters, we can either chain several queryParam() methods, or add the parameters to a queryParams() method:

@Test
public void whenUseMultipleQueryParam_thenOK() {
 
    int perPage = 20;
    given().queryParam("q", "john").queryParam("per_page",perPage)
      .when().get("/search/users")
      .then().body("items.size()", is(perPage));   
     
    given().queryParams("q", "john","per_page",perPage)
      .when().get("/search/users")
      .then().body("items.size()", is(perPage));
}

2.3. Form Parameters

Finally, we can specify form parameters using formParams():

@Test
public void whenUseFormParam_thenSuccess() {
 
    given().formParams("username", "john","password","1234").post("/");

    given().params("username", "john","password","1234").post("/");
}

The param() method will act like formParam() for POST requests.

Also note that formParam() adds a Content-Type header with the value “application/x-www-form-urlencoded“.

3. Setting Headers

Next, we can customize our request headers using header():

@Test
public void whenUseCustomHeader_thenOK() {
 
    given().header("User-Agent", "MyAppName").when().get("/users/eugenp")
      .then().statusCode(200);
}

In this example, we’ve used header() to set the User-Agent header.

We can also add a header with multiple values using the same method:

@Test
public void whenUseMultipleHeaderValues_thenOK() {
 
    given().header("My-Header", "val1", "val2")
      .when().get("/users/eugenp")
      .then().statusCode(200);
}

In this example, we’ll have a request with two headers: My-Header:val1 and My-Header:val2.

For adding multiple headers, we’ll use the headers() method:

@Test
public void whenUseMultipleHeaders_thenOK() {
 
    given().headers("User-Agent", "MyAppName", "Accept-Charset", "utf-8")
      .when().get("/users/eugenp")
      .then().statusCode(200);
}

4. Adding Cookies

We can also add a custom cookie to our request using cookie():

@Test
public void whenUseCookie_thenOK() {
 
    given().cookie("session_id", "1234").when().get("/users/eugenp")
      .then().statusCode(200);
}

We can also customize our cookie using the Cookie.Builder:

@Test
public void whenUseCookieBuilder_thenOK() {
    Cookie myCookie = new Cookie.Builder("session_id", "1234")
      .setSecured(true)
      .setComment("session id cookie")
      .build();

    given().cookie(myCookie)
      .when().get("/users/eugenp")
      .then().statusCode(200);
}

5. Conclusion

In this article, we’ve shown how we can specify request parameters, headers, and cookies when using REST-assured.

And, as always, the full source code for the examples is available over on GitHub.


JSON Schema Validation with REST-assured


1. Overview

The REST-assured library provides support for testing REST APIs, usually in JSON format.

From time to time, it may be desirable, without analyzing the response in detail, to know first off whether the JSON body conforms to a certain JSON format.

In this quick tutorial, we’ll take a look at how we can validate a JSON response based on a predefined JSON schema.

2. Setup

The initial REST-assured setup is the same as our previous article.

In addition, we also need to include the json-schema-validator module in the pom.xml file:

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>3.0.0</version>
</dependency>

To ensure you have the latest version, follow this link.

We also need another library with the same name, but a different author and functionality. It’s not a module of REST-assured; rather, it’s used under the hood by the json-schema-validator module to perform the validation:

<dependency>
    <groupId>com.github.fge</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>2.2.6</version>
</dependency>

Its latest version can be found here.

The library, json-schema-validator, may also need the json-schema-core dependency:

<dependency>
    <groupId>com.github.fge</groupId>
    <artifactId>json-schema-core</artifactId>
    <version>1.2.5</version>
</dependency>

And the latest version is always found here.

3. JSON Schema Validation

Let’s have a look at an example.

As a JSON schema, we’ll use a JSON saved in a file called event_0.json, which is present in the classpath:

{
    "id": "390",
    "data": {
        "leagueId": 35,
        "homeTeam": "Norway",
        "visitingTeam": "England"
    },
    "odds": [{
        "price": "1.30",
        "name": "1"
    },
    {
        "price": "5.25",
        "name": "X"
    }]
}

Assuming that this is the general format followed by all data returned by our REST API, we can check a JSON response for conformance like so:

@Test
public void givenUrl_whenJsonResponseConformsToSchema_thenCorrect() {
    get("/events?id=390").then().assertThat()
      .body(matchesJsonSchemaInClasspath("event_0.json"));
}

Notice that we statically import matchesJsonSchemaInClasspath from io.restassured.module.jsv.JsonSchemaValidator.

4. JSON Schema Validation Settings

4.1. Validate a Response

The json-schema-validator module of REST-assured gives us the power to perform fine-grained validation by defining our own custom configuration rules.

Say we want our validation to always use the JSON schema version 4:

@Test
public void givenUrl_whenValidatesResponseWithInstanceSettings_thenCorrect() {
    JsonSchemaFactory jsonSchemaFactory = JsonSchemaFactory.newBuilder()
      .setValidationConfiguration(
        ValidationConfiguration.newBuilder()
          .setDefaultVersion(SchemaVersion.DRAFTV4).freeze())
            .freeze();
    get("/events?id=390").then().assertThat()
      .body(matchesJsonSchemaInClasspath("event_0.json")
        .using(jsonSchemaFactory));
}

We do this by building a JsonSchemaFactory with a version 4 SchemaVersion and using it when validating the response of the request.

4.2. Check Validations

By default, the json-schema-validator runs checked validations on the JSON response String. This means that if the schema defines odds as an array as in the following JSON:

{
    "odds": [{
        "price": "1.30",
        "name": "1"
    },
    {
        "price": "5.25",
        "name": "X"
    }]
}

then the validator will always expect an array as the value for odds; hence, a response where odds is a String will fail validation. If we would like to be less strict with our responses, we can add a custom rule during validation by first making the following static import:

io.restassured.module.jsv.JsonSchemaValidatorSettings.settings;

then execute the test with the validation check set to false:

@Test
public void givenUrl_whenValidatesResponseWithStaticSettings_thenCorrect() {
    get("/events?id=390").then().assertThat().body(matchesJsonSchemaInClasspath
      ("event_0.json").using(settings().with().checkedValidation(false)));
}

4.3. Global Validation Configuration

These customizations are very flexible, but with a large number of tests, we would have to define a validation for each test. This is cumbersome and not very maintainable.

To avoid this, we have the freedom to define our configuration just once and let it apply to all tests.

We’ll configure the validation to be unchecked and to always use it against JSON schema version 3:

JsonSchemaFactory factory = JsonSchemaFactory.newBuilder()
  .setValidationConfiguration(
   ValidationConfiguration.newBuilder()
    .setDefaultVersion(SchemaVersion.DRAFTV3)
      .freeze()).freeze();
JsonSchemaValidator.settings = settings()
  .with().jsonSchemaFactory(factory)
      .and().with().checkedValidation(false);

then to remove this configuration call the reset method:

JsonSchemaValidator.reset();

5. Conclusion

In this article, we’ve shown how we can validate a JSON response against a schema when using REST-assured.

As always, the full source code for the example is available over on GitHub.

Handling Daylight Savings Time in Java


1. Overview

Daylight Saving Time, or DST, is the practice of advancing clocks during summer months in order to leverage an additional hour of natural light (saving heating and lighting power, enhancing the mood, and so on).

It’s used by several countries and needs to be taken into account when working with dates and timestamps.

In this tutorial, we’ll see how to correctly handle DST in Java according to different locations.

2. JRE and DST Mutability

First, it’s extremely important to understand that worldwide DST zones change very often and there’s no central authority coordinating it.

A country, or in some cases even a city, can decide if and how to apply or revoke it.

Every time this happens, the change is recorded in the IANA Time Zone Database, and the update will be rolled out in a future release of the JRE.

In case it’s not possible to wait, we can force the modified Time Zone data containing the new DST settings into the JRE through an official Oracle tool called Java Time Zone Updater Tool, available on the Java SE download page.

3. The Wrong Way: Three-Letter Timezone ID

Back in the JDK 1.1 days, the API allowed three-letter time zone IDs, but this led to several problems.

First, the same three-letter ID could refer to multiple time zones. For example, CST could be U.S. “Central Standard Time”, but also “China Standard Time”. The Java platform could then only recognize one of them.

Another issue was that standard time zones never take Daylight Saving Time into account. Multiple areas, regions, and cities can apply their local DST inside the same standard time zone, so standard time doesn’t observe it.

Due to backward compatibility, it’s still possible to instantiate a java.util.TimeZone with a three-letter ID. However, this method is deprecated and shouldn’t be used anymore.
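To see the ambiguity in action, here's a minimal JDK-only sketch; the expected offset assumes the usual tzdata backward-compatibility mapping, in which CST resolves to U.S. Central Time:

```java
import java.util.TimeZone;

public class ThreeLetterIdDemo {
    public static void main(String[] args) {
        // "CST" is ambiguous, but the JDK silently picks one meaning:
        // U.S. Central Standard Time (GMT-6), not China Standard Time
        TimeZone cst = TimeZone.getTimeZone("CST");

        System.out.println(cst.getRawOffset() / 3600000); // -6
    }
}
```

The lookup succeeds without any warning, which is exactly why a full TZDB ID is preferable.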

4. The Right Way: TZDB Timezone ID

The right way to handle DST in Java is to instantiate a TimeZone with a specific TZDB time zone ID, e.g. “Europe/Rome”.

Then, we’ll use this in conjunction with time-specific classes like java.util.Calendar to get a proper configuration of the TimeZone’s raw offset (to the GMT time zone), and automatic DST shift adjustments.

Let’s see how the shift from GMT+1 to GMT+2 (which happens in Italy on March 25, 2018, at 02:00 am) is automatically handled when using the right TimeZone:

TimeZone tz = TimeZone.getTimeZone("Europe/Rome");
TimeZone.setDefault(tz);
Calendar cal = Calendar.getInstance(tz, Locale.ITALIAN);
DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.ITALIAN);
Date dateBeforeDST = df.parse("2018-03-25 01:55");
cal.setTime(dateBeforeDST);
 
assertThat(cal.get(Calendar.ZONE_OFFSET)).isEqualTo(3600000);
assertThat(cal.get(Calendar.DST_OFFSET)).isEqualTo(0);

As we can see, ZONE_OFFSET is 60 minutes (because Italy is GMT+1) while DST_OFFSET is 0 at that time.

Let’s add ten minutes to the Calendar:

cal.add(Calendar.MINUTE, 10);

Now DST_OFFSET has become 60 minutes too, and the country has transitioned its local time from CET (Central European Time) to CEST (Central European Summer Time) which is GMT+2:

Date dateAfterDST = cal.getTime();
 
assertThat(cal.get(Calendar.DST_OFFSET))
  .isEqualTo(3600000);
assertThat(dateAfterDST)
  .isEqualTo(df.parse("2018-03-25 03:05"));

If we display the two dates in the console, we’ll see the time zone change as well:

Before DST (00:55 UTC - 01:55 GMT+1) = Sun Mar 25 01:55:00 CET 2018
After DST (01:05 UTC - 03:05 GMT+2) = Sun Mar 25 03:05:00 CEST 2018

As a final test, we can measure the distance between the two Dates, 1:55 and 3:05:

Long deltaBetweenDatesInMillis = dateAfterDST.getTime() - dateBeforeDST.getTime();
Long tenMinutesInMillis = (1000L * 60 * 10);
 
assertThat(deltaBetweenDatesInMillis)
  .isEqualTo(tenMinutesInMillis);

As we’d expect, the distance is 10 minutes instead of 70.

We’ve seen how to avoid falling into the common pitfalls that we can encounter when working with Date through the correct usage of TimeZone and Locale.

5. The Best Way: Java 8 Date/Time API

Working with these thread-unsafe and not always user-friendly java.util classes has always been tough, especially due to compatibility concerns which prevented them from being properly refactored.

For this reason, Java 8 introduced a brand new package, java.time, and a whole new API set, the Date/Time API. This is ISO-centric, fully thread-safe and heavily inspired by the famous library Joda-Time.

Let’s take a closer look at these new classes, starting with the successor of java.util.Date, java.time.LocalDateTime:

LocalDateTime localDateTimeBeforeDST = LocalDateTime
  .of(2018, 3, 25, 1, 55);
 
assertThat(localDateTimeBeforeDST.toString())
  .isEqualTo("2018-03-25T01:55");

We can observe that a LocalDateTime conforms to the ISO-8601 profile, a standard and widely adopted date-time notation.

It’s completely unaware of zones and offsets, though, which is why we need to convert it into a fully DST-aware java.time.ZonedDateTime:

ZoneId italianZoneId = ZoneId.of("Europe/Rome");
ZonedDateTime zonedDateTimeBeforeDST = localDateTimeBeforeDST
  .atZone(italianZoneId);
 
assertThat(zonedDateTimeBeforeDST.toString())
  .isEqualTo("2018-03-25T01:55+01:00[Europe/Rome]"); 

As we can see, now the date incorporates two fundamental trailing pieces of information: +01:00 is the ZoneOffset, while [Europe/Rome] is the ZoneId.
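Both pieces of information can also be read back programmatically. Here's a minimal JDK-only sketch (the class name is just for illustration):

```java
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class ZoneParts {
    public static void main(String[] args) {
        ZonedDateTime zdt = ZonedDateTime.of(2018, 3, 25, 1, 55, 0, 0,
          ZoneId.of("Europe/Rome"));

        ZoneOffset offset = zdt.getOffset(); // the offset before the DST switch
        ZoneId zone = zdt.getZone();

        System.out.println(offset); // +01:00
        System.out.println(zone);   // Europe/Rome
    }
}
```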

Like in the previous example, let’s trigger DST through the addition of ten minutes:

ZonedDateTime zonedDateTimeAfterDST = zonedDateTimeBeforeDST
  .plus(10, ChronoUnit.MINUTES);
 
assertThat(zonedDateTimeAfterDST.toString())
  .isEqualTo("2018-03-25T03:05+02:00[Europe/Rome]");

Again, we see how both the time and the zone offset shift forward while still keeping the same distance:

Long deltaBetweenDatesInMinutes = ChronoUnit.MINUTES
  .between(zonedDateTimeBeforeDST,zonedDateTimeAfterDST);
assertThat(deltaBetweenDatesInMinutes)
  .isEqualTo(10);
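The new API also lets us query a zone's transition rules directly via java.time.zone.ZoneRules, to check whether a given instant falls in DST. Here's a small sketch using the same Rome transition:

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.zone.ZoneRules;

public class DstRulesCheck {
    public static void main(String[] args) {
        ZoneId rome = ZoneId.of("Europe/Rome");
        ZoneRules rules = rome.getRules();

        ZonedDateTime beforeDst = ZonedDateTime.of(2018, 3, 25, 1, 55, 0, 0, rome);
        ZonedDateTime afterDst = beforeDst.plusMinutes(10);

        // 01:55 local time is still CET, 03:05 is already CEST
        System.out.println(rules.isDaylightSavings(beforeDst.toInstant())); // false
        System.out.println(rules.isDaylightSavings(afterDst.toInstant()));  // true
    }
}
```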

6. Conclusion

We’ve seen what Daylight Saving Time is and how to handle it through some practical examples in different versions of Java core API.

When working with Java 8 and above, the usage of the new java.time package is encouraged thanks to the ease of use and to its standard, thread-safe nature.

As always, the full source code is available over on GitHub.

Combining Observables in RxJava


1. Introduction

In this quick tutorial, we’ll discuss different ways of combining Observables in RxJava.

If you’re new to RxJava, definitely check out this intro tutorial first.

Now, let’s jump right in.

2. Observables

Observable sequences, or simply Observables, are representations of asynchronous data streams.

These are based on the Observer pattern, wherein an object called an Observer subscribes to items emitted by an Observable.

The subscription is non-blocking, as the Observer stands ready to react to whatever the Observable will emit in the future. This, in turn, facilitates concurrency.

Here’s a simple demonstration in RxJava:

Observable
  .from(new String[] { "John", "Doe" })
  .subscribe(name -> System.out.println("Hello " + name));

3. Combining Observables

When programming using a reactive framework, it’s a common use-case to combine various Observables.

In a web application, for example, we may need to get two sets of asynchronous data streams that are independent of each other.

Instead of waiting for the previous stream to complete before requesting the next stream, we can call both at the same time and subscribe to the combined streams.

In this section, we’ll discuss some of the different ways we can combine multiple Observables in RxJava and the different use-cases to which each method applies.

3.1. Merge

We can use the merge operator to combine the output of multiple Observables so that they act like one:

@Test
public void givenTwoObservables_whenMerged_shouldEmitCombinedResults() {
    TestSubscriber<String> testSubscriber = new TestSubscriber<>();

    Observable.merge(
      Observable.from(new String[] {"Hello", "World"}),
      Observable.from(new String[] {"I love", "RxJava"})
    ).subscribe(testSubscriber);

    testSubscriber.assertValues("Hello", "World", "I love", "RxJava");
}

3.2. MergeDelayError

The mergeDelayError method is the same as merge in that it combines multiple Observables into one, but if errors occur during the merge, it allows error-free items to continue before propagating the errors:

@Test
public void givenMultipleObservablesOneThrows_whenMerged_thenCombineBeforePropagatingError() {
    TestSubscriber<String> testSubscriber = new TestSubscriber<>();
        
    Observable.mergeDelayError(
      Observable.from(new String[] { "hello", "world" }),
      Observable.error(new RuntimeException("Some exception")),
      Observable.from(new String[] { "rxjava" })
    ).subscribe(testSubscriber);

    testSubscriber.assertValues("hello", "world", "rxjava");
    testSubscriber.assertError(RuntimeException.class);
}

The above example emits all the error-free values:

hello
world
rxjava

Note that if we use merge instead of mergeDelayError, the String “rxjava” won’t be emitted because merge immediately stops the flow of data from Observables when an error occurs.

3.3. Zip

The zip operator brings together two sequences of values as pairs:

@Test
public void givenTwoObservables_whenZipped_thenReturnCombinedResults() {
    List<String> zippedStrings = new ArrayList<>();

    Observable.zip(
      Observable.from(new String[] { "Simple", "Moderate", "Complex" }), 
      Observable.from(new String[] { "Solutions", "Success", "Hierarchy"}),
      (str1, str2) -> str1 + " " + str2).subscribe(zippedStrings::add);
        
    assertThat(zippedStrings).isNotEmpty();
    assertThat(zippedStrings.size()).isEqualTo(3);
    assertThat(zippedStrings).contains("Simple Solutions", "Moderate Success", "Complex Hierarchy");
}

3.4. Zip with Interval

In this example, we’ll zip a stream with interval, which in effect delays the emission of the first stream’s elements:

@Test
public void givenAStream_whenZippedWithInterval_shouldDelayStreamEmission() {
    TestSubscriber<String> testSubscriber = new TestSubscriber<>();
        
    Observable<String> data = Observable.just("one", "two", "three", "four", "five");
    Observable<Long> interval = Observable.interval(1L, TimeUnit.SECONDS);
        
    Observable
      .zip(data, interval, (strData, tick) -> String.format("[%d]=%s", tick, strData))
      .toBlocking().subscribe(testSubscriber);
        
    testSubscriber.assertCompleted();
    testSubscriber.assertValueCount(5);
    testSubscriber.assertValues("[0]=one", "[1]=two", "[2]=three", "[3]=four", "[4]=five");
}

4. Summary

In this article, we’ve seen a few of the methods for combining Observables with RxJava. You can learn about other methods like combineLatest, join, groupJoin, and switchOnNext in the official RxJava documentation.

As always, the source code for this article is available in our GitHub repo.

The Spring @Controller and @RestController Annotations


1. Overview

In this quick tutorial, we’ll discuss the difference between @Controller and @RestController annotations in Spring MVC.

The first annotation is used for traditional Spring controllers and has been part of the framework for a very long time.

The @RestController annotation was introduced in Spring 4.0 to simplify the creation of RESTful web services. It’s a convenience annotation that combines @Controller and @ResponseBody – which eliminates the need to annotate every request handling method of the controller class with the @ResponseBody annotation.

2. Spring MVC @Controller

Classic controllers can be annotated with the @Controller annotation. This is simply a specialization of the @Component annotation, which allows implementation classes to be autodetected through classpath scanning.

@Controller is typically used in combination with a @RequestMapping annotation used on request handling methods.

Let’s see a quick example of the Spring MVC controller:

@Controller
@RequestMapping("books")
public class SimpleBookController {

    @GetMapping(value = "/{id}", produces = "application/json")
    public @ResponseBody Book getBook(@PathVariable int id) {
        return findBookById(id);
    }

    private Book findBookById(int id) {
        // ...
    }
}

The request handling method is annotated with @ResponseBody. This annotation enables automatic serialization of the return object into the HttpResponse.

3. Spring MVC @RestController

@RestController is a specialized version of the controller. It includes the @Controller and @ResponseBody annotations and as a result, simplifies the controller implementation:

@RestController
@RequestMapping("books-rest")
public class SimpleBookRestController {
    
    @GetMapping(value = "/{id}", produces = "application/json")
    public Book getBook(@PathVariable int id) {
        return findBookById(id);
    }

    private Book findBookById(int id) {
        // ...
    }
}

The controller is annotated with the @RestController annotation, therefore the @ResponseBody isn’t required.

Every request handling method of the controller class automatically serializes return objects into HttpResponse.

4. Conclusion

In this article, we saw the classic and specialized REST controllers available in the Spring Framework.

The complete source code for the example is available in the GitHub project; this is a Maven project, so it can be imported and used as-is.

Command-Line Arguments in Spring Boot


1. Overview

In this quick tutorial, we’ll discuss how to pass command-line arguments to a Spring Boot application.

We can use command-line arguments to configure our application, override application properties or pass custom arguments.

2. Maven Command-Line Arguments

First, let’s see how we can pass arguments while running our application using Maven Plugin.

Later, we’ll see how to access the arguments in our code.

2.1. Spring Boot 1.x

For Spring Boot 1.x, we can pass the arguments to our application using -Drun.arguments:

mvn spring-boot:run -Drun.arguments=--customArgument=custom

We can also pass multiple parameters to our app:

mvn spring-boot:run -Drun.arguments=--spring.main.banner-mode=off,--customArgument=custom

Note that:

  • Arguments should be comma separated
  • Each argument should be prefixed with --
  • We can also pass configuration properties, like spring.main.banner-mode shown in the example above

2.2. Spring Boot 2.x

For Spring Boot 2.x, we can pass the arguments using -Dspring-boot.run.arguments:

mvn spring-boot:run -Dspring-boot.run.arguments=--spring.main.banner-mode=off,--customArgument=custom

3. Gradle Command-Line Arguments

Next, let’s discover how to pass arguments while running our application using Gradle Plugin.

We’ll need to configure our bootRun task in build.gradle file:

bootRun {
    if (project.hasProperty('args')) {
        args project.args.split(',')
    }
}

Now, we can pass the command-line arguments as follows:

./gradlew bootRun -Pargs=--spring.main.banner-mode=off,--customArgument=custom

4. Overriding System Properties

Other than passing custom arguments, we can also override system properties.

For example, here’s our application.properties file:

server.port=8081
spring.application.name=SampleApp

To override the server.port value, we need to pass the new value in the following manner (for Spring Boot 1.x):

mvn spring-boot:run -Drun.arguments=--server.port=8085

Similarly for Spring Boot 2.x:

mvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8085

Note that:

  • Spring Boot converts command-line arguments to properties and adds them to the Spring Environment
  • We can use short command-line arguments --port=8085 instead of --server.port=8085 by using a placeholder in our application.properties:
    server.port=${port:8080}
  • Command-line arguments take precedence over application.properties values

If needed, we can stop our application from converting command-line arguments to properties:

@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(Application.class);
        application.setAddCommandLineProperties(false);
        application.run(args);
    }
}

5. Accessing Command-Line Arguments

Let’s see how we can access the command-line arguments from our application’s main() method:

@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    public static void main(String[] args) {
        for(String arg:args) {
            System.out.println(arg);
        }
        SpringApplication.run(Application.class, args);
    }
}

This will print the arguments we passed to our application from the command line, but we could also use them later in our application.

6. Conclusion

In this article, we learned how to pass arguments to our Spring Boot application from the command line, and how to do it using both Maven and Gradle.

We’ve also shown how you can access those arguments from your code, in order to configure your application.
