
Spring Batch – Tasklets vs Chunks


1. Introduction

Spring Batch provides two different ways of implementing a job: using tasklets and chunks.

In this article, we’ll learn how to configure and implement both methods using a simple real-life example.

2. Dependencies

Let’s get started by adding the required dependencies:

<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-core</artifactId>
    <version>4.0.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-test</artifactId>
    <version>4.0.0.RELEASE</version>
    <scope>test</scope>
</dependency>

To get the latest version of spring-batch-core and spring-batch-test, please refer to Maven Central.

3. Our Use Case

Let’s consider a CSV file with the following content:

Mae Hodges,10/22/1972
Gary Potter,02/22/1953
Betty Wise,02/17/1968
Wayne Rose,04/06/1977
Adam Caldwell,09/27/1995
Lucille Phillips,05/14/1992

The first position of each line represents a person’s name and the second position represents his/her date of birth.

Our use case is to generate another CSV file that contains each person’s name and age:

Mae Hodges,45
Gary Potter,64
Betty Wise,49
Wayne Rose,40
Adam Caldwell,22
Lucille Phillips,25

Now that our domain is clear, let’s go ahead and build a solution using both approaches. We’ll start with tasklets.

4. Tasklets Approach

4.1. Introduction and Design

Tasklets are meant to perform a single task within a step. Our job will consist of several steps that execute one after the other. Each step should perform only one defined task.

Our job will consist of three steps:

  1. Read lines from the input CSV file.
  2. Calculate age for every person in the input CSV file.
  3. Write name and age of each person to a new output CSV file.

Now that the big picture is ready, let’s create one class per step.

LinesReader will be in charge of reading data from the input file:

public class LinesReader implements Tasklet {
    // ...
}

LinesProcessor will calculate the age for every person in the file:

public class LinesProcessor implements Tasklet {
    // ...
}

Finally, LinesWriter will have the responsibility of writing names and ages to an output file:

public class LinesWriter implements Tasklet {
    // ...
}

At this point, all our steps implement the Tasklet interface, which forces us to implement its execute method:

@Override
public RepeatStatus execute(StepContribution stepContribution, 
  ChunkContext chunkContext) throws Exception {
    // ...
}

This method is where we’ll add the logic for each step. Before starting with that code, let’s configure our job.

4.2. Configuration

We need to add some configuration to Spring’s application context. After adding standard bean declarations for the classes created in the previous section, we’re ready to create our job definition:

@Configuration
@EnableBatchProcessing
public class TaskletsConfig {

    @Autowired 
    private JobBuilderFactory jobs;

    @Autowired 
    private StepBuilderFactory steps;

    @Bean
    protected Step readLines() {
        return steps
          .get("readLines")
          .tasklet(linesReader())
          .build();
    }

    @Bean
    protected Step processLines() {
        return steps
          .get("processLines")
          .tasklet(linesProcessor())
          .build();
    }

    @Bean
    protected Step writeLines() {
        return steps
          .get("writeLines")
          .tasklet(linesWriter())
          .build();
    }

    @Bean
    public Job job() {
        return jobs
          .get("taskletsJob")
          .start(readLines())
          .next(processLines())
          .next(writeLines())
          .build();
    }

    // ...

}
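
For reference, the bean declarations hidden behind the // ... placeholder are just standard definitions for the three tasklet classes; a minimal sketch might look like this:

@Bean
public LinesReader linesReader() {
    return new LinesReader();
}

@Bean
public LinesProcessor linesProcessor() {
    return new LinesProcessor();
}

@Bean
public LinesWriter linesWriter() {
    return new LinesWriter();
}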

This means that our “taskletsJob” will consist of three steps. The first one (readLines) will execute the tasklet defined in the bean linesReader and move on to the next step: processLines. That step will perform the tasklet defined in the bean linesProcessor and go to the final step: writeLines.

Our job flow is defined, and we’re ready to add some logic!

4.3. Model and Utils

As we’ll be manipulating lines in a CSV file, we’re going to create a class Line:

public class Line implements Serializable {

    private String name;
    private LocalDate dob;
    private Long age;

    // standard constructor, getters, setters and toString implementation

}

Please note that Line implements Serializable. That is because Line will act as a DTO to transfer data between steps. According to Spring Batch, objects that are transferred between steps must be serializable.

Next, we can start thinking about reading and writing lines.

For that, we’ll make use of OpenCSV:

<dependency>
    <groupId>com.opencsv</groupId>
    <artifactId>opencsv</artifactId>
    <version>4.1</version>
</dependency>

Look for the latest OpenCSV version in Maven Central.

Once OpenCSV is included, we’re also going to create a FileUtils class. It will provide methods for reading and writing CSV lines:

public class FileUtils {

    public Line readLine() throws Exception {
        if (csvReader == null) 
          initReader();
        String[] line = csvReader.readNext();
        if (line == null) 
          return null;
        return new Line(
          line[0], 
          LocalDate.parse(
            line[1], 
            DateTimeFormatter.ofPattern("MM/dd/yyyy")));
    }

    public void writeLine(Line line) throws Exception {
        if (csvWriter == null) 
          initWriter();
        String[] lineStr = new String[2];
        lineStr[0] = line.getName();
        lineStr[1] = line
          .getAge()
          .toString();
        csvWriter.writeNext(lineStr);
    }

    // ...
}

Notice that readLine acts as a wrapper over OpenCSV’s readNext method and returns a Line object.

In the same way, writeLine wraps OpenCSV’s writeNext, receiving a Line object. The full implementation of this class can be found in the GitHub project.
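
For reference, the helpers elided behind // ... might look roughly like the following sketch. The field names and the file handling here are assumptions; the real version is in the GitHub project:

private String fileName;
private CSVReader csvReader;
private CSVWriter csvWriter;

public FileUtils(String fileName) {
    this.fileName = fileName;
}

private void initReader() throws Exception {
    if (csvReader == null)
      csvReader = new CSVReader(new FileReader(fileName));
}

private void initWriter() throws Exception {
    if (csvWriter == null)
      csvWriter = new CSVWriter(new FileWriter(fileName, true));
}

public void closeReader() {
    try {
        if (csvReader != null)
          csvReader.close();
    } catch (IOException e) {
        // nothing sensible to do here; a real implementation would log this
    }
}

public void closeWriter() {
    try {
        if (csvWriter != null)
          csvWriter.close();
    } catch (IOException e) {
        // nothing sensible to do here; a real implementation would log this
    }
}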

At this point, we’re all set to start with each step implementation.

4.4. LinesReader

Let’s go ahead and complete our LinesReader class:

public class LinesReader implements Tasklet, StepExecutionListener {

    private final Logger logger = LoggerFactory
      .getLogger(LinesReader.class);

    private List<Line> lines;
    private FileUtils fu;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        lines = new ArrayList<>();
        fu = new FileUtils(
          "taskletsvschunks/input/tasklets-vs-chunks.csv");
        logger.debug("Lines Reader initialized.");
    }

    @Override
    public RepeatStatus execute(StepContribution stepContribution, 
      ChunkContext chunkContext) throws Exception {
        Line line = fu.readLine();
        while (line != null) {
            lines.add(line);
            logger.debug("Read line: " + line.toString());
            line = fu.readLine();
        }
        return RepeatStatus.FINISHED;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        fu.closeReader();
        stepExecution
          .getJobExecution()
          .getExecutionContext()
          .put("lines", this.lines);
        logger.debug("Lines Reader ended.");
        return ExitStatus.COMPLETED;
    }
}

In beforeStep, LinesReader creates a FileUtils instance pointing at the input file path. Then, execute adds lines to a list until there are no more lines to read.

Our class also implements StepExecutionListener, which provides two extra methods: beforeStep and afterStep. We’ll use them to initialize and close resources before and after execute runs.

If we take a look at afterStep code, we’ll notice the line where the result list (lines) is put in the job’s context to make it available for the next step:

stepExecution
  .getJobExecution()
  .getExecutionContext()
  .put("lines", this.lines);

At this point, our first step has already fulfilled its responsibility: load CSV lines into a List in memory. Let’s move to the second step and process them.

4.5. LinesProcessor

LinesProcessor will also implement StepExecutionListener and of course, Tasklet. That means that it will implement beforeStep, execute and afterStep methods as well:

public class LinesProcessor implements Tasklet, StepExecutionListener {

    private Logger logger = LoggerFactory.getLogger(
      LinesProcessor.class);

    private List<Line> lines;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        ExecutionContext executionContext = stepExecution
          .getJobExecution()
          .getExecutionContext();
        this.lines = (List<Line>) executionContext.get("lines");
        logger.debug("Lines Processor initialized.");
    }

    @Override
    public RepeatStatus execute(StepContribution stepContribution, 
      ChunkContext chunkContext) throws Exception {
        for (Line line : lines) {
            long age = ChronoUnit.YEARS.between(
              line.getDob(), 
              LocalDate.now());
            logger.debug("Calculated age " + age + " for line " + line.toString());
            line.setAge(age);
        }
        return RepeatStatus.FINISHED;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        logger.debug("Lines Processor ended.");
        return ExitStatus.COMPLETED;
    }
}

The logic is straightforward: it loads the lines list from the job’s context and calculates the age of each person.

There’s no need to put another result list in the context as modifications happen on the same object that comes from the previous step.

And we’re ready for our last step.

4.6. LinesWriter

LinesWriter‘s task is to go over the lines list and write each name and age to the output file:

public class LinesWriter implements Tasklet, StepExecutionListener {

    private final Logger logger = LoggerFactory
      .getLogger(LinesWriter.class);

    private List<Line> lines;
    private FileUtils fu;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        ExecutionContext executionContext = stepExecution
          .getJobExecution()
          .getExecutionContext();
        this.lines = (List<Line>) executionContext.get("lines");
        fu = new FileUtils("output.csv");
        logger.debug("Lines Writer initialized.");
    }

    @Override
    public RepeatStatus execute(StepContribution stepContribution, 
      ChunkContext chunkContext) throws Exception {
        for (Line line : lines) {
            fu.writeLine(line);
            logger.debug("Wrote line " + line.toString());
        }
        return RepeatStatus.FINISHED;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        fu.closeWriter();
        logger.debug("Lines Writer ended.");
        return ExitStatus.COMPLETED;
    }
}

We’re done with our job’s implementation! Let’s create a test to run it and see the results.

4.7. Running the Job

To run the job, we’ll create a test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TaskletsConfig.class)
public class TaskletsTest {

    @Autowired 
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void givenTaskletsJob_whenJobEnds_thenStatusCompleted()
      throws Exception {
 
        JobExecution jobExecution = jobLauncherTestUtils.launchJob();
        assertEquals(ExitStatus.COMPLETED, jobExecution.getExitStatus());
    }
}

The @ContextConfiguration annotation points to the Spring context configuration class that contains our job definition.

We’ll need to add a couple of extra beans before running the test:

@Bean
public JobLauncherTestUtils jobLauncherTestUtils() {
    return new JobLauncherTestUtils();
}

@Bean
public JobRepository jobRepository() throws Exception {
    MapJobRepositoryFactoryBean factory
      = new MapJobRepositoryFactoryBean();
    factory.setTransactionManager(transactionManager());
    return (JobRepository) factory.getObject();
}

@Bean
public PlatformTransactionManager transactionManager() {
    return new ResourcelessTransactionManager();
}

@Bean
public JobLauncher jobLauncher() throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository());
    return jobLauncher;
}

Everything is ready! Go ahead and run the test!

After the job has finished, output.csv has the expected content and logs show the execution flow:

[main] DEBUG o.b.t.tasklets.LinesReader - Lines Reader initialized.
[main] DEBUG o.b.t.tasklets.LinesReader - Read line: [Mae Hodges,10/22/1972]
[main] DEBUG o.b.t.tasklets.LinesReader - Read line: [Gary Potter,02/22/1953]
[main] DEBUG o.b.t.tasklets.LinesReader - Read line: [Betty Wise,02/17/1968]
[main] DEBUG o.b.t.tasklets.LinesReader - Read line: [Wayne Rose,04/06/1977]
[main] DEBUG o.b.t.tasklets.LinesReader - Read line: [Adam Caldwell,09/27/1995]
[main] DEBUG o.b.t.tasklets.LinesReader - Read line: [Lucille Phillips,05/14/1992]
[main] DEBUG o.b.t.tasklets.LinesReader - Lines Reader ended.
[main] DEBUG o.b.t.tasklets.LinesProcessor - Lines Processor initialized.
[main] DEBUG o.b.t.tasklets.LinesProcessor - Calculated age 45 for line [Mae Hodges,10/22/1972]
[main] DEBUG o.b.t.tasklets.LinesProcessor - Calculated age 64 for line [Gary Potter,02/22/1953]
[main] DEBUG o.b.t.tasklets.LinesProcessor - Calculated age 49 for line [Betty Wise,02/17/1968]
[main] DEBUG o.b.t.tasklets.LinesProcessor - Calculated age 40 for line [Wayne Rose,04/06/1977]
[main] DEBUG o.b.t.tasklets.LinesProcessor - Calculated age 22 for line [Adam Caldwell,09/27/1995]
[main] DEBUG o.b.t.tasklets.LinesProcessor - Calculated age 25 for line [Lucille Phillips,05/14/1992]
[main] DEBUG o.b.t.tasklets.LinesProcessor - Lines Processor ended.
[main] DEBUG o.b.t.tasklets.LinesWriter - Lines Writer initialized.
[main] DEBUG o.b.t.tasklets.LinesWriter - Wrote line [Mae Hodges,10/22/1972,45]
[main] DEBUG o.b.t.tasklets.LinesWriter - Wrote line [Gary Potter,02/22/1953,64]
[main] DEBUG o.b.t.tasklets.LinesWriter - Wrote line [Betty Wise,02/17/1968,49]
[main] DEBUG o.b.t.tasklets.LinesWriter - Wrote line [Wayne Rose,04/06/1977,40]
[main] DEBUG o.b.t.tasklets.LinesWriter - Wrote line [Adam Caldwell,09/27/1995,22]
[main] DEBUG o.b.t.tasklets.LinesWriter - Wrote line [Lucille Phillips,05/14/1992,25]
[main] DEBUG o.b.t.tasklets.LinesWriter - Lines Writer ended.

That’s it for Tasklets. Now we can move on to the Chunks approach.

5. Chunks Approach

5.1. Introduction and Design

As the name suggests, this approach performs actions over chunks of data. That is, instead of reading, processing and writing all the lines at once, it’ll read, process and write a fixed number of records (a chunk) at a time.

Then, it’ll repeat the cycle until there’s no more data in the file.

As a result, the flow will be slightly different:

  1. While there are lines:
    •  Do the following for X lines:
      • Read one line
      • Process one line
    • Write those X lines.

So, we also need to create three beans for the chunk-oriented approach:

public class LineReader {
     // ...
}
public class LineProcessor {
    // ...
}
public class LinesWriter {
    // ...
}

Before moving to implementation, let’s configure our job.

5.2. Configuration

The job definition will also look different:

@Configuration
@EnableBatchProcessing
public class ChunksConfig {

    @Autowired 
    private JobBuilderFactory jobs;

    @Autowired 
    private StepBuilderFactory steps;

    @Bean
    public ItemReader<Line> itemReader() {
        return new LineReader();
    }

    @Bean
    public ItemProcessor<Line, Line> itemProcessor() {
        return new LineProcessor();
    }

    @Bean
    public ItemWriter<Line> itemWriter() {
        return new LinesWriter();
    }

    @Bean
    protected Step processLines(ItemReader<Line> reader,
      ItemProcessor<Line, Line> processor, ItemWriter<Line> writer) {
        return steps.get("processLines").<Line, Line> chunk(2)
          .reader(reader)
          .processor(processor)
          .writer(writer)
          .build();
    }

    @Bean
    public Job job() {
        return jobs
          .get("chunksJob")
          .start(processLines(itemReader(), itemProcessor(), itemWriter()))
          .build();
    }

}

In this case, there’s only one step performing only one tasklet.

However, that tasklet defines a reader, a writer and a processor that will act over chunks of data.

Note that the commit interval indicates the number of records to be processed in one chunk. Our job will read, process and write two lines at a time.

Now we’re ready to add our chunk logic!

5.3. LineReader

LineReader will be in charge of reading one record and returning a Line instance with its content.

To become a reader, our class has to implement the ItemReader interface:

public class LineReader implements ItemReader<Line> {
     @Override
     public Line read() throws Exception {
         Line line = fu.readLine();
         if (line != null) 
           logger.debug("Read line: " + line.toString());
         return line;
     }
}

The code is straightforward: it just reads one line and returns it. We’ll also implement StepExecutionListener for the final version of this class:

public class LineReader implements 
  ItemReader<Line>, StepExecutionListener {

    private final Logger logger = LoggerFactory
      .getLogger(LineReader.class);
 
    private FileUtils fu;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        fu = new FileUtils("taskletsvschunks/input/tasklets-vs-chunks.csv");
        logger.debug("Line Reader initialized.");
    }

    @Override
    public Line read() throws Exception {
        Line line = fu.readLine();
        if (line != null) logger.debug("Read line: " + line.toString());
        return line;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        fu.closeReader();
        logger.debug("Line Reader ended.");
        return ExitStatus.COMPLETED;
    }
}

Note that beforeStep and afterStep execute before and after the whole step, respectively.

5.4. LineProcessor

LineProcessor follows pretty much the same logic as LineReader.

However, in this case, we’ll implement ItemProcessor and its method process():

public class LineProcessor implements ItemProcessor<Line, Line> {

    private Logger logger = LoggerFactory.getLogger(LineProcessor.class);

    @Override
    public Line process(Line line) throws Exception {
        long age = ChronoUnit.YEARS
          .between(line.getDob(), LocalDate.now());
        logger.debug("Calculated age " + age + " for line " + line.toString());
        line.setAge(age);
        return line;
    }

}

The process() method takes an input line, processes it and returns an output line. Again, we’ll also implement StepExecutionListener:

public class LineProcessor implements 
  ItemProcessor<Line, Line>, StepExecutionListener {

    private Logger logger = LoggerFactory.getLogger(LineProcessor.class);

    @Override
    public void beforeStep(StepExecution stepExecution) {
        logger.debug("Line Processor initialized.");
    }
    
    @Override
    public Line process(Line line) throws Exception {
        long age = ChronoUnit.YEARS
          .between(line.getDob(), LocalDate.now());
        logger.debug(
          "Calculated age " + age + " for line " + line.toString());
        line.setAge(age);
        return line;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        logger.debug("Line Processor ended.");
        return ExitStatus.COMPLETED;
    }
}

5.5. LinesWriter

Unlike the reader and processor, LinesWriter writes an entire chunk of lines, so it receives a List of Lines:

public class LinesWriter implements 
  ItemWriter<Line>, StepExecutionListener {

    private final Logger logger = LoggerFactory
      .getLogger(LinesWriter.class);
 
    private FileUtils fu;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        fu = new FileUtils("output.csv");
        logger.debug("Line Writer initialized.");
    }

    @Override
    public void write(List<? extends Line> lines) throws Exception {
        for (Line line : lines) {
            fu.writeLine(line);
            logger.debug("Wrote line " + line.toString());
        }
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        fu.closeWriter();
        logger.debug("Line Writer ended.");
        return ExitStatus.COMPLETED;
    }
}

LinesWriter code speaks for itself. And again, we’re ready to test our job.

5.6. Running the Job

We’ll create a new test, same as the one we created for the tasklets approach:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ChunksConfig.class)
public class ChunksTest {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void givenChunksJob_whenJobEnds_thenStatusCompleted() 
      throws Exception {
 
        JobExecution jobExecution = jobLauncherTestUtils.launchJob();
 
        assertEquals(ExitStatus.COMPLETED, jobExecution.getExitStatus()); 
    }
}

After configuring ChunksConfig as explained above for TaskletsConfig, we’re all set to run the test!

Once the job is done, we can see that output.csv contains the expected result again, and the logs describe the flow:

[main] DEBUG o.b.t.chunks.LineReader - Line Reader initialized.
[main] DEBUG o.b.t.chunks.LinesWriter - Line Writer initialized.
[main] DEBUG o.b.t.chunks.LineProcessor - Line Processor initialized.
[main] DEBUG o.b.t.chunks.LineReader - Read line: [Mae Hodges,10/22/1972]
[main] DEBUG o.b.t.chunks.LineReader - Read line: [Gary Potter,02/22/1953]
[main] DEBUG o.b.t.chunks.LineProcessor - Calculated age 45 for line [Mae Hodges,10/22/1972]
[main] DEBUG o.b.t.chunks.LineProcessor - Calculated age 64 for line [Gary Potter,02/22/1953]
[main] DEBUG o.b.t.chunks.LinesWriter - Wrote line [Mae Hodges,10/22/1972,45]
[main] DEBUG o.b.t.chunks.LinesWriter - Wrote line [Gary Potter,02/22/1953,64]
[main] DEBUG o.b.t.chunks.LineReader - Read line: [Betty Wise,02/17/1968]
[main] DEBUG o.b.t.chunks.LineReader - Read line: [Wayne Rose,04/06/1977]
[main] DEBUG o.b.t.chunks.LineProcessor - Calculated age 49 for line [Betty Wise,02/17/1968]
[main] DEBUG o.b.t.chunks.LineProcessor - Calculated age 40 for line [Wayne Rose,04/06/1977]
[main] DEBUG o.b.t.chunks.LinesWriter - Wrote line [Betty Wise,02/17/1968,49]
[main] DEBUG o.b.t.chunks.LinesWriter - Wrote line [Wayne Rose,04/06/1977,40]
[main] DEBUG o.b.t.chunks.LineReader - Read line: [Adam Caldwell,09/27/1995]
[main] DEBUG o.b.t.chunks.LineReader - Read line: [Lucille Phillips,05/14/1992]
[main] DEBUG o.b.t.chunks.LineProcessor - Calculated age 22 for line [Adam Caldwell,09/27/1995]
[main] DEBUG o.b.t.chunks.LineProcessor - Calculated age 25 for line [Lucille Phillips,05/14/1992]
[main] DEBUG o.b.t.chunks.LinesWriter - Wrote line [Adam Caldwell,09/27/1995,22]
[main] DEBUG o.b.t.chunks.LinesWriter - Wrote line [Lucille Phillips,05/14/1992,25]
[main] DEBUG o.b.t.chunks.LineProcessor - Line Processor ended.
[main] DEBUG o.b.t.chunks.LinesWriter - Line Writer ended.
[main] DEBUG o.b.t.chunks.LineReader - Line Reader ended.

We get the same result with a different flow: the logs make evident how the job executes following this approach.

6. Conclusion

Different contexts will show the need for one approach or the other. While Tasklets feel more natural for ‘one task after the other’ scenarios, chunks provide a simple solution to deal with paginated reads or situations where we don’t want to keep a significant amount of data in memory.

The complete implementation of this example can be found in the GitHub project.


Shuffling Collections In Java


1. Overview

In this quick article, we’ll see how we can shuffle a collection in Java. Java has a built-in method for shuffling List objects — we’ll utilize it for other collections as well.

2. Shuffling a List

We’ll use the method java.util.Collections.shuffle, which takes as input a List and shuffles it in-place. By in-place, we mean that it shuffles the same list as passed in input instead of creating a new one with shuffled elements.

Let’s look at a quick example showing how to shuffle a List:

List<String> students = Arrays.asList("Foo", "Bar", "Baz", "Qux");
Collections.shuffle(students);

There’s a second version of java.util.Collections.shuffle that also accepts as input a custom source of randomness. This can be used to make shuffling a deterministic process if we have such a requirement for our application.

Let’s use this second variant to achieve the same shuffling on two lists:

List<String> students_1 = Arrays.asList("Foo", "Bar", "Baz", "Qux");
List<String> students_2 = Arrays.asList("Foo", "Bar", "Baz", "Qux");

int seedValue = 10;

Collections.shuffle(students_1, new Random(seedValue));
Collections.shuffle(students_2, new Random(seedValue));

assertThat(students_1).isEqualTo(students_2);

When using identical sources of randomness (initialized from the same seed value), the generated random number sequence will be the same for both shuffles. Thus, after shuffling, both lists will contain elements in the exact same order.

3. Shuffling Elements of Unordered Collections

We may want to shuffle other collections as well, such as Set, Map, or Queue. However, these collections are unordered and don’t maintain any specific order.

Other implementations, such as LinkedHashMap, or a Set built with a Comparator, maintain a fixed order, so we cannot shuffle them either.

However, we can still access their elements randomly by converting them first into a List, then shuffling this List.

Let’s see a quick example of shuffling elements of a Map:

Map<Integer, String> studentsById = new HashMap<>();
studentsById.put(1, "Foo");
studentsById.put(2, "Bar");
studentsById.put(3, "Baz");
studentsById.put(4, "Qux");

List<Map.Entry<Integer, String>> shuffledStudentEntries
 = new ArrayList<>(studentsById.entrySet());
Collections.shuffle(shuffledStudentEntries);

List<String> shuffledStudents = shuffledStudentEntries.stream()
  .map(Map.Entry::getValue)
  .collect(Collectors.toList());

Similarly, we can shuffle elements of a Set:

Set<String> students = new HashSet<>(
  Arrays.asList("Foo", "Bar", "Baz", "Qux"));
List<String> studentList = new ArrayList<>(students);
Collections.shuffle(studentList);
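
A Queue can be handled the same way; here’s a quick sketch using an ArrayDeque:

Queue<String> studentQueue = new ArrayDeque<>(
  Arrays.asList("Foo", "Bar", "Baz", "Qux"));
List<String> shuffledStudents = new ArrayList<>(studentQueue);
Collections.shuffle(shuffledStudents);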

4. Conclusion

In this quick tutorial, we saw how to use java.util.Collections.shuffle to shuffle various collections in Java.

This naturally works directly with a List, and we can utilize it indirectly to randomize the order of elements in other collections as well. We can also control the shuffling process by providing a custom source of randomness and make it deterministic.

As usual, all code demonstrated in this article is available over on GitHub.

Java Weekly, Issue 217


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> Package by layer for Spring project is obsolete [lkrnac.net]

In the world of Microservices and DDD, package-by-layer doesn’t seem to make much sense anymore.

>> Designing, Implementing and Using Reactive APIs [infoq.com]

Before pursuing a reactive approach, ensure that going reactive is not introducing unnecessary complexity.

>> Spring Data Projections [blog.sourced-bvba.be]

It turns out we can easily create custom projections with Spring Data. Very nice.

>> JUnit and Cucumber test reports based on source code and behavior [advancedweb.hu]

Detailed failure messages for Java tests without the use of complex assertion libraries – definitely a cool addition to the JUnit and Cucumber stack.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> The Practical Test Pyramid [martinfowler.com]

Regardless of the type of tests you’re implementing, testing the observable behavior (instead of implementation details) will save a lot of frustration later on.

>> Virtual Panel: Succeeding with Event Sourcing [infoq.com]

Event Sourcing in isolation is definitely useful, but its power and potential are amplified when it’s used to complement a CQRS architecture and Domain Driven Design – it’s important to respect the boundaries of our bounded-contexts.

>> Generic Platform – The Rule of Three [scottlogic.com]

Premature genericisation can contribute to the “legacy code” you have in your system.

>> Model Actions, not Data [amundsen.com]

Relying on your data model as any guide for your API design is almost always a bad idea. Words of wisdom here.

Also worth reading:

3. Musings

>> Promoting Test Driven Development with a Remote Team [daedtech.com]

Distributed teams can benefit greatly from adopting TDD – it’s well worth investing in that adoption.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Not Morons [dilbert.com]

>> Clear Direction [dilbert.com]

>> Option One [dilbert.com]

5. Pick of the Week

>> The world needs more modest, linear growth companies. Please make some. [m.signalvnoise.com]

Spring Boot Security Auto-Configuration


1. Introduction

In this article, we’ll have a look at Spring Boot’s opinionated approach to security.

Simply put, we’re going to focus on the default security configuration and how we can disable or customize it if we need to.

2. Default Security Setup

In order to add security to our Spring Boot application, we need to add the security starter dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

This will include the SecurityAutoConfiguration class – containing the initial/default security configuration.

Notice how we didn’t specify the version here, with the assumption that the project is already using Boot as the parent.

Simply put, by default, the application will get Basic Authentication enabled. There are some predefined properties, such as:

security.user.name=user
security.basic.enabled=true

If we start the application, we’ll notice that the default password is randomly generated and printed in the console log:

Using default security password: c8be15de-4488-4490-9dc6-fab3f91435c6

For more defaults, see the security properties section of the Spring Boot Common Application Properties.

3. Disabling the Auto-Configuration

To discard the security auto-configuration and add our own configuration, we need to exclude the SecurityAutoConfiguration class.

This can be done via a simple exclusion:

@SpringBootApplication(exclude = { SecurityAutoConfiguration.class })
public class SpringBootSecurityApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootSecurityApplication.class, args);
    }
}

Or by adding some configuration into the application.properties file:

spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.SecurityAutoConfiguration

There are also some particular cases in which this setup isn’t quite enough.

For example, almost every Spring Boot application starts with Actuator on the classpath. This causes problems because another auto-configuration class needs the one we’ve just excluded, so the application will fail to start.

In order to fix this issue, we need to exclude that class as well; specific to the Actuator situation, that means excluding ManagementWebSecurityAutoConfiguration.
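
Putting both exclusions together, a sketch of the resulting application class might look like this (class names as in Spring Boot 1.x):

@SpringBootApplication(exclude = { 
  SecurityAutoConfiguration.class, 
  ManagementWebSecurityAutoConfiguration.class })
public class SpringBootSecurityApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootSecurityApplication.class, args);
    }
}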

3.1. Disabling vs. Overriding Security Auto-Configuration

There’s a significant difference between disabling the auto-configuration and overriding it.

Disabling it is just like adding the Spring Security dependency and doing the whole setup from scratch. This can be useful in several cases:

  1. Integrating application security with a custom security provider
  2. Migrating a legacy Spring application with an existing security setup to Spring Boot

But, most of the time we won’t need to fully disable the security auto-configuration.

Spring Boot also lets us override the auto-configured security by adding our own custom configuration classes. This is typically easier, as we’re just customizing an existing security setup to fulfill our needs.

4. Configuring Spring Boot Security

If we’ve chosen the path of disabling security auto-configuration, we naturally need to provide our own configuration.

Alternatively, as we’ve discussed before, if we keep the default security configuration, we can customize it simply by modifying the properties file.

We can, for example, override the default password by adding our own:

security.user.password=password

If we want a more flexible configuration, with multiple users and roles for example, we need to make use of a full @Configuration class:

@Configuration
@EnableWebSecurity
public class BasicConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth)
      throws Exception {
        auth
          .inMemoryAuthentication()
          .withUser("user")
            .password("password")
            .roles("USER")
            .and()
          .withUser("admin")
            .password("admin")
            .roles("USER", "ADMIN");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
          .authorizeRequests()
          .anyRequest()
          .authenticated()
          .and()
          .httpBasic();
    }
}

The @EnableWebSecurity annotation is crucial if we disable the default security configuration.

If missing, the application will fail to start. The annotation is only optional if we’re just overriding the default behavior using a WebSecurityConfigurerAdapter.

Now, we should verify that our security configuration applies correctly with a couple of quick live tests:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = RANDOM_PORT)
public class BasicConfigurationIntegrationTest {

    TestRestTemplate restTemplate;
    URL base;
    @LocalServerPort int port;

    @Before
    public void setUp() throws MalformedURLException {
        restTemplate = new TestRestTemplate("user", "password");
        base = new URL("http://localhost:" + port);
    }

    @Test
    public void whenLoggedUserRequestsHomePage_ThenSuccess()
     throws IllegalStateException, IOException {
        ResponseEntity<String> response 
          = restTemplate.getForEntity(base.toString(), String.class);
 
        assertEquals(HttpStatus.OK, response.getStatusCode());
        assertTrue(response
          .getBody()
          .contains("Baeldung"));
    }

    @Test
    public void whenUserWithWrongCredentials_thenUnauthorizedPage() 
      throws Exception {
 
        restTemplate = new TestRestTemplate("user", "wrongpassword");
        ResponseEntity<String> response 
          = restTemplate.getForEntity(base.toString(), String.class);
 
        assertEquals(HttpStatus.UNAUTHORIZED, response.getStatusCode());
        assertTrue(response
          .getBody()
          .contains("Unauthorized"));
    }
}

The idea is that behind Spring Boot Security is, in fact, Spring Security, so any security configuration that can be done with Spring Security, or any integration it supports, can also be implemented in Spring Boot.

5. Spring Boot OAuth2 Auto-Configuration

Spring Boot has dedicated auto-configuration support for OAuth2.

Before we get to that, let’s add the Maven dependency to start setting up our application:

<dependency>
   <groupId>org.springframework.security.oauth</groupId>
   <artifactId>spring-security-oauth2</artifactId>
</dependency>

This dependency includes a set of classes that are capable of triggering the auto-configuration mechanism defined in the OAuth2AutoConfiguration class.

Now, we have multiple choices to continue, depending on the scope of our application.

5.1. OAuth2 Authorization Server Auto-Configuration

If we want our application to be an OAuth2 provider, we can use @EnableAuthorizationServer.
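
A minimal sketch of such an application (the class name here is ours):

@SpringBootApplication
@EnableAuthorizationServer
public class AuthorizationServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthorizationServerApplication.class, args);
    }
}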

On startup, we’ll notice in the logs that the auto-configuration classes generate a client id and a client secret for our authorization server and, of course, a random password for basic authentication:

Using default security password: a81cb256-f243-40c0-a585-81ce1b952a98
security.oauth2.client.client-id = 39d2835b-1f87-4a77-9798-e2975f36972e
security.oauth2.client.client-secret = f1463f8b-0791-46fe-9269-521b86c55b71

These credentials can be used to obtain an access token:

curl -X POST -u 39d2835b-1f87-4a77-9798-e2975f36972e:f1463f8b-0791-46fe-9269-521b86c55b71 \
 -d grant_type=client_credentials -d username=user -d password=a81cb256-f243-40c0-a585-81ce1b952a98 \
 -d scope=write  http://localhost:8080/oauth/token

5.2. Other Spring Boot OAuth2 Auto-Configuration Settings

There are some other use cases covered by Spring Boot OAuth2 auto-configuration, such as:

  1. Resource Server – @EnableResourceServer
  2. Client Application – @EnableOAuth2Sso or @EnableOAuth2Client

If we need our application to be one of the types above, we just have to add some configuration to the application properties.
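
For instance, for an SSO client application, a hedged sketch of the relevant properties might look like this (the values are placeholders, and the exact set depends on the provider):

security.oauth2.client.client-id=our-client-id
security.oauth2.client.client-secret=our-client-secret
security.oauth2.client.access-token-uri=https://provider.example.com/oauth/token
security.oauth2.client.user-authorization-uri=https://provider.example.com/oauth/authorize
security.oauth2.sso.login-path=/login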

All OAuth2 specific properties can be found at Spring Boot Common Application Properties.

6. Conclusion

In this article, we focused on the default security configuration provided by Spring Boot. We saw how the security auto-configuration mechanism can be disabled or overridden and how a new security configuration can be applied.

The source code can be found over on GitHub.

Feature Flags with Spring


1. Overview

In this article, we’ll briefly define feature flags and propose an opinionated and pragmatic approach to implement them in Spring Boot applications. Then, we’ll dig into more sophisticated iterations taking advantage of different Spring Boot features.

We’ll discuss various scenarios that might require feature flagging and talk about possible solutions. We’ll do this using a Bitcoin Miner example application.

2. Feature Flags

Feature Flags – sometimes called feature toggles – are a mechanism that allows us to enable or disable specific functionality of our application without having to modify code or, ideally, redeploy our app.

Depending on the dynamics required by a given feature flag, we might need to configure them globally, per app instance, or more granularly – perhaps per user or request.

As with many situations in Software Engineering, it’s important to try to use the most straightforward approach that tackles the problem at hand without adding unnecessary complexity.

Feature flags are a potent tool that, when used wisely, can bring reliability and stability to our system. However, when they’re misused or under-maintained, they can quickly become sources of complexity and headaches.

There are many scenarios where feature flags could come in handy:

Trunk-based development and nontrivial features

In trunk-based development, particularly when we want to keep integrating frequently, we might find ourselves not ready to release a certain piece of functionality. Feature flags come in handy here, enabling us to keep releasing without making our changes available until they’re complete.

Environment-specific configuration

We might find ourselves requiring certain functionality to reset our DB for an E2E testing environment.

Alternatively, we might need to use a different security configuration for non-production environments from that used in the production environment.

Hence, we could take advantage of feature flags to toggle the right setup in the right environment.

A/B testing

Releasing multiple solutions for the same problem and measuring the impact is a compelling technique that we could implement using feature flags.

Canary releasing

When deploying new features, we might decide to do it gradually, starting with a small group of users, and expanding its adoption as we validate the correctness of its behavior. Feature flags allow us to achieve this.

In the following sections, we’ll try to provide a practical approach to tackle the above-mentioned scenarios.

Let’s break down different strategies to feature flagging, starting with the simplest scenario to then move into a more granular and more complex setup.

3. Application-Level Feature Flags

If we need to tackle any of the first two use cases, application-level feature flags are a simple way of getting things working.

A simple feature flag would typically involve a property and some configuration based on the value of that property.

3.1. Feature Flags Using Spring Profiles

In Spring we can take advantage of profiles. Conveniently, profiles enable us to configure certain beans selectively. With a few constructs around them, we can quickly create a simple and elegant solution for application-level feature flags.

Let’s pretend we’re building a Bitcoin mining system. Our software is already in production, and we’re tasked with creating an experimental, improved mining algorithm.

In our JavaConfig we could profile our components:

@Configuration
public class ProfiledMiningConfig {

    @Bean
    @Profile("!experimental-miner")
    public BitcoinMiner defaultMiner() {
        return new DefaultBitcoinMiner();
    }

    @Bean
    @Profile("experimental-miner")
    public BitcoinMiner experimentalMiner() {
        return new ExperimentalBitcoinMiner();
    }
}

Then, with the previous configuration in place, we simply need to include our profile to opt in to our new functionality. There are many ways of configuring our app in general and enabling profiles in particular. Likewise, there are testing utilities to make our lives easier, as shown below.
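
For instance, here’s a sketch of an integration test that activates the profile with @ActiveProfiles (the test class and assertion are ours):

@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("experimental-miner")
public class ExperimentalMinerIntegrationTest {

    @Autowired
    private BitcoinMiner miner;

    @Test
    public void givenExperimentalProfile_thenExperimentalMinerIsWired() {
        assertThat(miner).isInstanceOf(ExperimentalBitcoinMiner.class);
    }
}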

As long as our system is simple enough, we could then create an environment-based configuration to determine which feature flags to apply and which ones to ignore.

Let’s imagine we have a new UI based on cards instead of tables, together with the previous experimental miner.

We’d like to enable both features in our acceptance environment (UAT). We could create an application-uat.yml file:

spring:
  profiles:
    include: experimental-miner,ui-cards

# More config here

With the previous file in place, we’d just need to enable the UAT profile in the UAT environment to get the desired set of features.

It’s also important to understand how to take advantage of spring.profiles.include. Compared to spring.profiles.active, the former enables us to include profiles in an additive manner.

In our case, we want the uat profile also to include experimental-miner and ui-cards.
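
To then enable the profile in the UAT environment, we could, for example, pass it at startup (the jar name is a placeholder):

java -jar bitcoin-miner.jar --spring.profiles.active=uat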

3.2. Feature Flags Using Custom Properties

Profiles are a great and simple way to get the job done. However, we might require profiles for other purposes. Or perhaps, we might want to build a more structured feature flag infrastructure.

For these scenarios, custom properties might be a desirable option.

Let’s rewrite our previous example taking advantage of @ConditionalOnProperty and our namespace:

@Configuration
public class CustomPropsMiningConfig {

    @Bean
    @ConditionalOnProperty(
      name = "features.miner.experimental", 
      havingValue = "false", 
      matchIfMissing = true)
    public BitcoinMiner defaultMiner() {
        return new DefaultBitcoinMiner();
    }

    @Bean
    @ConditionalOnProperty(
      name = "features.miner.experimental", 
      havingValue = "true")
    public BitcoinMiner experimentalMiner() {
        return new ExperimentalBitcoinMiner();
    }
}

The previous example builds on top of Spring Boot’s conditional configuration and configures one component or the other, depending on whether the property is set to true or false (or omitted altogether, in which case the default miner is used).

The result is very similar to the one in 3.1, but now, we have our namespace. Having our namespace allows us to create meaningful YAML/properties files:

#[...] Some Spring config

features:
  miner:
    experimental: true
  ui:
    cards: true
    
#[...] Other feature flags

Also, this new setup allows us to prefix our feature flags – in our case, using the features prefix.

It might seem like a small detail, but as our application grows and complexity increases, this simple iteration will help us keep our feature flags under control.

Let’s talk about other benefits of this approach.

3.3. Using @ConfigurationProperties

As soon as we get a prefixed set of properties, we can create a POJO decorated with @ConfigurationProperties to get a programmatic handle in our code.

Following our ongoing example:

@Component
@ConfigurationProperties(prefix = "features")
public class ConfigProperties {

    private MinerProperties miner;
    private UIProperties ui;

    // standard getters and setters

    public static class MinerProperties {
        private boolean experimental;
        // standard getters and setters
    }

    public static class UIProperties {
        private boolean cards;
        // standard getters and setters
    }
}

By putting our feature flags’ state in a cohesive unit, we open up new possibilities, allowing us to easily expose that information to other parts of our system, such as the UI, or to downstream systems.

3.4. Exposing Feature Configuration

Our Bitcoin mining system got a UI upgrade which is not entirely ready yet. For that reason, we decided to feature-flag it. We might have a single-page app using React, Angular, or Vue.

Regardless of the technology, we need to know what features are enabled so that we can render our page accordingly.

Let’s create a simple endpoint to serve our configuration so that our UI can query the backend when needed:

@RestController
public class FeaturesConfigController {

    private ConfigProperties properties;

    // constructor

    @GetMapping("/feature-flags")
    public ConfigProperties getProperties() {
        return properties;
    }
}

There might be more sophisticated ways of serving this information, such as creating custom actuator endpoints. But for the sake of this guide, a controller endpoint feels like a good enough solution.

3.5. Keeping the Camp Clean

Although it might sound obvious, once we’ve implemented our feature flags thoughtfully, it’s equally important to remain disciplined in getting rid of them once they’re no longer needed.

Feature flags for the first use case – trunk-based development and non-trivial features – are typically short-lived. This means that we’re going to need to make sure that our ConfigProperties, our Java configuration, and our YAML files stay clean and up-to-date.

4. More Granular Feature Flags

Sometimes we find ourselves in more complex scenarios. For A/B testing or canary releases, our previous approach is simply not enough.

To get feature flags at a more granular level, we may need to create our own solution. This could involve customizing our user entity to include feature-specific information, or perhaps extending our web framework.

Polluting our users with feature flags might not be appealing for everybody, however, and there are other solutions.

As an alternative, we could take advantage of an existing tool such as Togglz. This library adds some complexity but offers a nice out-of-the-box solution and provides first-class integration with Spring Boot.

Togglz supports different activation strategies:

  1. Username: Flags associated with specific users
  2. Gradual rollout: Flags enabled for a percentage of the user base. This is useful for Canary releases, for example, when we want to validate the behavior of our features
  3. Release date: We could schedule flags to be enabled at a certain date and time. This might be useful for a product launch, a coordinated release, or offers and discounts
  4. Client IP: Flagged features based on clients’ IPs. This might come in handy when applying specific configuration to specific customers, given that they have static IPs
  5. Server IP: In this case, the IP of the server is used to determine whether a feature should be enabled or not. This might be useful for canary releases too, with a slightly different approach than the gradual rollout – like when we want to assess performance impact in our instances
  6. ScriptEngine: We could enable feature flags based on arbitrary scripts. This is arguably the most flexible option
  7. System Properties: We could set certain system properties to determine the state of a feature flag. This would be quite similar to what we achieved with our most straightforward approach
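
To give a taste of the integration, here’s a hedged sketch of a Togglz feature enum for our miner. The feature names are ours, while Feature, @Label, @EnabledByDefault and FeatureContext come from togglz-core:

public enum MinerFeatures implements Feature {

    @Label("Experimental miner")
    EXPERIMENTAL_MINER,

    @EnabledByDefault
    @Label("UI cards")
    UI_CARDS;

    public boolean isActive() {
        return FeatureContext.getFeatureManager().isActive(this);
    }
}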

5. Summary

In this article, we had a chance to talk about feature flags. Additionally, we discussed how Spring could help us achieve some of this functionality without adding new libraries.

We started by defining how this pattern can help us with a few common use cases.

Next, we built a few simple solutions using Spring and Spring Boot out-of-the-box tools. With that, we came up with a simple yet powerful feature flagging construct.

We then compared a couple of alternatives, moving from the simpler and less flexible solution to a more sophisticated, although more complex, pattern.

Finally, we briefly provided a few guidelines to build more robust solutions. This is useful when we need a higher degree of granularity.

RxJava 2 – Flowable


1. Introduction

RxJava is a Java implementation of Reactive Extensions that allows us to write event-driven and asynchronous applications. More information on how to use RxJava can be found in our intro article here.

RxJava 2 was rewritten from scratch, which brought multiple new features, some of which were created in response to issues that existed in the previous version of the framework.

One such feature is io.reactivex.Flowable.

2. Observable vs. Flowable

In the previous version of RxJava, there was only one base class for dealing with backpressure-aware and non-backpressure-aware sources – Observable.

RxJava 2 introduced a clear distinction between these two kinds of sources – backpressure-aware sources are now represented using a dedicated class – Flowable.

Observable sources don’t support backpressure. Because of that, we should use it for sources that we merely consume and can’t influence.

Also, if we’re dealing with a large number of elements, two possible problems connected with backpressure can occur, depending on the type of the Observable.

In the case of a so-called “cold Observable”, events are emitted lazily, so we’re safe from overflowing an observer.

When using a “hot Observable”, however, the source will keep emitting events even if the consumer can’t keep up.

More about backpressure can be found in our backpressure-focused article.

3. Creating a Flowable

There are different ways to create a Flowable. Conveniently for us, those methods look similar to the methods in Observable in the first version of RxJava.

3.1. Simple Flowable

We can create a Flowable using the just() method, just as we could with Observable:

Flowable<Integer> integerFlowable = Flowable.just(1, 2, 3, 4);

Even though using just() is quite simple, creating a Flowable from static data isn’t very common; it’s mostly used for testing purposes.

3.2. Flowable from Observable

When we have an Observable, we can easily transform it into a Flowable using the toFlowable() method:

Observable<Integer> integerObservable = Observable.just(1, 2, 3);
Flowable<Integer> integerFlowable = integerObservable
  .toFlowable(BackpressureStrategy.BUFFER);

Notice that to be able to perform the conversion, we need to enrich the Observable with a BackpressureStrategy. We’ll describe available strategies in the next section.

3.3. Flowable from FlowableOnSubscribe

RxJava 2 introduced a functional interface FlowableOnSubscribe, which represents a Flowable that starts emitting events after the consumer subscribes to it.

Due to that, all clients will receive the same set of events, which makes FlowableOnSubscribe backpressure-safe.

When we have the FlowableOnSubscribe we can use it to create the Flowable:

FlowableOnSubscribe<Integer> flowableOnSubscribe
 = flowable -> flowable.onNext(1);
Flowable<Integer> integerFlowable = Flowable
  .create(flowableOnSubscribe, BackpressureStrategy.BUFFER);

The documentation describes many more methods to create Flowable.

4. Flowable BackpressureStrategy

Some methods, like toFlowable() or create(), take a BackpressureStrategy as an argument.

BackpressureStrategy is an enumeration that defines the backpressure behaviour we’ll apply to our Flowable.

It can cache or drop events, or not implement any behaviour at all; in the last case, we will be responsible for defining it using backpressure operators.

BackpressureStrategy is similar to BackpressureMode present in the previous version of RxJava.

There are five different strategies available in RxJava 2.

4.1. Buffer

If we use BackpressureStrategy.BUFFER, the source will buffer all the events until the subscriber can consume them:

@Test
public void thenAllValuesAreBufferedAndReceived() {
    List<Integer> testList = IntStream.range(0, 100000)
      .boxed()
      .collect(Collectors.toList());
 
    Observable<Integer> observable = Observable.fromIterable(testList);
    TestSubscriber<Integer> testSubscriber = observable
      .toFlowable(BackpressureStrategy.BUFFER)
      .observeOn(Schedulers.computation())
      .test();

    testSubscriber.awaitTerminalEvent();

    List<Integer> receivedInts = testSubscriber.getEvents()
      .get(0)
      .stream()
      .mapToInt(object -> (int) object)
      .boxed()
      .collect(Collectors.toList());

    assertEquals(testList, receivedInts);
}

It’s similar to invoking the onBackpressureBuffer() method on Flowable, but it doesn’t allow us to define a buffer size or the onOverflow action explicitly.
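
For comparison, a quick sketch of the operator with an explicit buffer size and overflow action (the values here are arbitrary):

Flowable<Integer> flowable = Flowable.range(1, 100000)
  .onBackpressureBuffer(100, () -> System.out.println("Buffer overflowed!"));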

4.2. Drop

We can use the BackpressureStrategy.DROP to discard the events that cannot be consumed instead of buffering them.

Again this is similar to using onBackpressureDrop() on Flowable:

@Test
public void whenDropStrategyUsed_thenOnBackpressureDropped() {

    Observable<Integer> observable = Observable.fromIterable(testList);
    TestSubscriber<Integer> testSubscriber = observable
      .toFlowable(BackpressureStrategy.DROP)
      .observeOn(Schedulers.computation())
      .test();
    testSubscriber.awaitTerminalEvent();
    List<Integer> receivedInts = testSubscriber.getEvents()
      .get(0)
      .stream()
      .mapToInt(object -> (int) object)
      .boxed()
      .collect(Collectors.toList());

    assertThat(receivedInts.size()).isLessThan(testList.size());
    assertThat(receivedInts).doesNotContain(100000);
}

4.3. Latest

Using the BackpressureStrategy.LATEST will force the source to keep only the latest events, thus overwriting any previous values if the consumer can’t keep up:

@Test
public void whenLatestStrategyUsed_thenTheLastElementReceived() {

    Observable<Integer> observable = Observable.fromIterable(testList);
    TestSubscriber<Integer> testSubscriber = observable
      .toFlowable(BackpressureStrategy.LATEST)
      .observeOn(Schedulers.computation())
      .test();

    testSubscriber.awaitTerminalEvent();
    List<Integer> receivedInts = testSubscriber.getEvents()
      .get(0)
      .stream()
      .mapToInt(object -> (int) object)
      .boxed()
      .collect(Collectors.toList());

    assertThat(receivedInts.size()).isLessThan(testList.size());
    assertThat(receivedInts).contains(100000);
}

BackpressureStrategy.LATEST and BackpressureStrategy.DROP look very similar when we look at the code.

However, BackpressureStrategy.LATEST will overwrite elements that our subscriber can’t handle and keep only the latest ones, hence the name.

BackpressureStrategy.DROP, on the other hand, will discard elements that can’t be handled. This means that the newest elements won’t necessarily be emitted.

4.4. Error

When we’re using the BackpressureStrategy.ERROR, we’re simply saying that we don’t expect backpressure to occur. Consequently, a MissingBackpressureException should be thrown if the consumer can’t keep up with the source:

@Test
public void whenErrorStrategyUsed_thenExceptionIsThrown() {
    Observable<Integer> observable = Observable.range(1, 100000);
    TestSubscriber<Integer> subscriber = observable
      .toFlowable(BackpressureStrategy.ERROR)
      .observeOn(Schedulers.computation())
      .test();

    subscriber.awaitTerminalEvent();
    subscriber.assertError(MissingBackpressureException.class);
}

4.5. Missing

If we use the BackpressureStrategy.MISSING, the source will push elements without discarding or buffering.

The downstream will have to deal with overflows in this case:

@Test
public void whenMissingStrategyUsed_thenException() {
    Observable<Integer> observable = Observable.range(1, 100000);
    TestSubscriber<Integer> subscriber = observable
      .toFlowable(BackpressureStrategy.MISSING)
      .observeOn(Schedulers.computation())
      .test();
    subscriber.awaitTerminalEvent();
    subscriber.assertError(MissingBackpressureException.class);
}

In our tests, we’re expecting a MissingBackpressureException for both the ERROR and MISSING strategies, as both of them will throw such an exception when the source’s internal buffer overflows.

However, it’s worth noting that the two have different purposes.

We should use the former when we don’t expect backpressure at all, and we want the source to throw an exception if it does occur.

The latter could be used when we don’t want to fix a default behavior at the creation of the Flowable, because we’re going to define it later on with backpressure operators.
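
For instance, a minimal sketch of deferring that decision – converting with MISSING and only then picking a strategy with an operator:

Flowable<Integer> flowable = Observable.range(1, 100000)
  .toFlowable(BackpressureStrategy.MISSING)
  .onBackpressureBuffer(); // the strategy is defined by the operator, not at creation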

5. Summary

In this tutorial, we’ve presented the new class introduced in RxJava 2 called Flowable.

To find more information about the Flowable itself and its API, we can refer to the documentation.

As always, all the code samples can be found over on GitHub.

A Simple Tagging Implementation with JPA

1. Overview

Tagging is a standard design pattern that allows us to categorize and filter items in our data model.

In this article, we’ll implement tagging using Spring and JPA. We’ll be using Spring Data to accomplish the task, and this implementation will also be useful if you’re working with Hibernate.

This is the second article in a series on implementing tagging. To see how to implement it with Elasticsearch, go here.

2. Adding Tags

First, we’re going to explore the most straightforward implementation of tagging: a List of Strings. We can implement tags by adding a new field to our entity like this:

@Entity
public class Student {
    // ...

    @ElementCollection
    private List<String> tags = new ArrayList<>();

    // ...
}

Notice the use of the @ElementCollection annotation on our new field. Since our tags will live in a data store, we need to tell it how to store them.

If we didn’t add the annotation, they’d be stored in a single blob, which would be harder to work with. This annotation creates another table called STUDENT_TAGS (i.e., <entity>_<field>), which will make our queries more robust.

This creates a One-To-Many relationship between our entity and its tags! We’re implementing the simplest version of tagging here, so we’ll potentially end up with a lot of duplicate tags (one for each entity that has them). We’ll talk more about this concern later.
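
If we ever need control over the generated names instead of those defaults, standard JPA annotations let us spell them out. Here’s a quick sketch – the table and column names are illustrative:

@ElementCollection
@CollectionTable(
  name = "STUDENT_TAGS",
  joinColumns = @JoinColumn(name = "STUDENT_ID"))
@Column(name = "TAG")
private List<String> tags = new ArrayList<>();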

3. Building Queries

Tags allow us to perform some interesting queries on our data. We can search for entities with a specific tag, filter a table scan, or even limit what results come back in a particular query. Let’s take a look at each of these cases.

3.1. Searching Tags

The tag field we added to our data model can be searched just like any other field on our model – we only have to keep in mind that the tags live in a separate table when building the query.

Here is how we search for an entity containing a specific tag:

@Query("SELECT s FROM Student s JOIN s.tags t WHERE t = LOWER(:tag)")
List<Student> retrieveByTag(@Param("tag") String tag);

Because the tags are stored in another table, we need to JOIN them in our query – this will return all of the Student entities with a matching tag.

First, let’s set up some test data:

Student student = new Student(0, "Larry");
student.setTags(Arrays.asList("full time", "computer science"));
studentRepository.save(student);

Student student2 = new Student(1, "Curly");
student2.setTags(Arrays.asList("part time", "rocket science"));
studentRepository.save(student2);

Student student3 = new Student(2, "Moe");
student3.setTags(Arrays.asList("full time", "philosophy"));
studentRepository.save(student3);

Student student4 = new Student(3, "Shemp");
student4.setTags(Arrays.asList("part time", "mathematics"));
studentRepository.save(student4);

Next, let’s test it and make sure it works:

// Grab only the first result
Student student2 = studentRepository.retrieveByTag("full time").get(0);
assertEquals("name incorrect", "Larry", student2.getName());

We’ll get back the first student in the repository with the full time tag. This is exactly what we wanted.

In addition, we can extend this example to show how to filter a larger dataset. Here is the example:

List<Student> students = studentRepository.retrieveByTag("full time");
assertEquals("size incorrect", 2, students.size());

With a little refactoring, we can modify the repository to take in multiple tags as a filter so we can refine our results even more.
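
A minimal sketch of that refactoring – the retrieveByTags method is illustrative, not part of the article’s repository:

@Query("SELECT DISTINCT s FROM Student s JOIN s.tags t WHERE t IN :tags")
List<Student> retrieveByTags(@Param("tags") List<String> tags);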

3.2. Filtering A Query

Another useful application of our simple tagging is applying a filter to a specific query. While the previous examples also allowed us to do filtering, they worked on all of the data in our table.

Since we also need to filter other searches, let’s look at an example:

@Query("SELECT s FROM Student s JOIN s.tags t WHERE s.name = LOWER(:name) AND t = LOWER(:tag)")
List<Student> retrieveByNameFilterByTag(@Param("name") String name, @Param("tag") String tag);

We can see that this query is nearly identical to the one above. A tag is nothing more than another constraint to use in our query.

Our usage example is also going to look familiar:

Student student2 = studentRepository.retrieveByNameFilterByTag(
  "Moe", "full time").get(0);
assertEquals("name incorrect", "Moe", student2.getName());

Consequently, we can apply the tag filter to any query on this entity. This gives the user a lot of power in the interface to find the exact data they need.

4. Advanced Tagging

Our simple tagging implementation is a great place to start. But, due to the One-To-Many relationship, we can run into some issues.

First, we’ll end up with a table full of duplicate tags. This won’t be a problem on small projects, but larger systems could end up with millions (or even billions) of duplicate entries.

Also, our Tag model isn’t very robust. What if we wanted to keep track of when the tag was initially created? In our current implementation, we have no way of doing that.

Finally, we can’t share our tags across multiple entity types. This can lead to even more duplication that can impact our system performance.

Many-To-Many relationships will solve most of these problems. To learn how to use the @ManyToMany annotation, check out this article, since a full treatment is beyond the scope of this one.
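
While the full treatment is out of scope here, a minimal sketch of the direction – a dedicated, shareable Tag entity with its own metadata – could look like this (all names are illustrative):

@Entity
public class Tag {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private LocalDateTime created; // now we can track when the tag was created

    // ...
}

// and in Student, instead of List<String>:
@ManyToMany
private List<Tag> tags = new ArrayList<>();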

5. Conclusion

Tagging is a simple and straightforward way to query data, and combined with the Java Persistence API, it gives us a powerful filtering feature that’s easy to implement.

Although the simple implementation may not always be the most appropriate, we’ve highlighted the routes to take to help resolve that situation.

As always, the code used in this article can be found over on GitHub.

Comparing Strings in Java

1. Overview

In this article, we’ll talk about the different ways of comparing Strings in Java.

As String is one of the most used data types in Java, this is naturally a very commonly used operation.

2. String Comparison with String Class

2.1. Using “==” Comparison Operator

Using the “==” operator for comparing text values is one of the most common mistakes Java beginners make. This is incorrect because “==” only checks the referential equality of two Strings, meaning if they reference the same object or not.

Let’s see an example of this behavior:

String string1 = "using comparison operator";
String string2 = "using comparison operator";
String string3 = new String("using comparison operator");
 
assertThat(string1 == string2).isTrue();
assertThat(string1 == string3).isFalse();

In the example above, the first assertion is true because the two variables point to the same String literal.

On the other hand, the second assertion is false because string1 is created with a literal and string3 is created using the new operator – therefore they reference different objects.

2.2. Using equals()

The String class overrides the equals() inherited from Object. This method compares two Strings character by character, ignoring their address.

It considers them equal if they are of the same length and the characters are in the same order:

String string1 = "using equals method";
String string2 = "using equals method";
        
String string3 = "using EQUALS method";
String string4 = new String("using equals method");

assertThat(string1.equals(string2)).isTrue();
assertThat(string1.equals(string4)).isTrue();

assertThat(string1.equals(null)).isFalse();
assertThat(string1.equals(string3)).isFalse();

In this example, string1, string2, and string4 variables are equal because they have the same case and value irrespective of their address.

For string3 the method returns false, as it’s case-sensitive.

Also, if any of the two strings is null, then the method returns false.

2.3. Using equalsIgnoreCase()

The equalsIgnoreCase() method returns a boolean value. As the name suggests this method ignores casing in characters while comparing Strings:

String string1 = "using equals ignore case";
String string2 = "USING EQUALS IGNORE CASE";

assertThat(string1.equalsIgnoreCase(string2)).isTrue();

2.4. Using compareTo()

The compareTo() method returns an int type value and compares two Strings character by character lexicographically based on a dictionary or natural ordering.

This method returns 0 if the two Strings are equal, a negative number if the first String comes before the argument, and a number greater than zero if the first String comes after the argument String.

Let’s see an example:

String author = "author";
String book = "book";
String duplicateBook = "book";

assertThat(author.compareTo(book))
  .isEqualTo(-1);
assertThat(book.compareTo(author))
  .isEqualTo(1);
assertThat(duplicateBook.compareTo(book))
  .isEqualTo(0);

2.5. Using compareToIgnoreCase()

The compareToIgnoreCase() is similar to the previous method, except it ignores case:

String author = "Author";
String book = "book";
String duplicateBook = "BOOK";

assertThat(author.compareToIgnoreCase(book))
  .isEqualTo(-1);
assertThat(book.compareToIgnoreCase(author))
  .isEqualTo(1);
assertThat(duplicateBook.compareToIgnoreCase(book))
  .isEqualTo(0);

3. String Comparison with Objects Class

Objects is a utility class which contains a static equals() method, useful in this scenario – to compare two Strings.

The method returns true if the two Strings are equal, first comparing them by reference, i.e., “==”. Consequently, if both arguments are null, it returns true, and if exactly one argument is null, it returns false.

Otherwise, it simply calls the equals() method of the first argument’s class – which in our case is the String class’s equals() method – and that makes it case-sensitive.

Let’s test this:

String string1 = "using objects equals";
String string2 = "using objects equals";
String string3 = new String("using objects equals");

assertThat(Objects.equals(string1, string2)).isTrue();
assertThat(Objects.equals(string1, string3)).isTrue();

assertThat(Objects.equals(null, null)).isTrue();
assertThat(Objects.equals(null, string1)).isFalse();

4. String Comparison with Apache Commons

The Apache Commons library contains a utility class called StringUtils for String-related operations; this also has some very beneficial methods for String comparison.

4.1. Using equals() and equalsIgnoreCase()

The equals() method of StringUtils class is an enhanced version of the String class method equals(), which also handles null values:

assertThat(StringUtils.equals(null, null))
  .isTrue();
assertThat(StringUtils.equals(null, "equals method"))
  .isFalse();
assertThat(StringUtils.equals("equals method", "equals method"))
  .isTrue();
assertThat(StringUtils.equals("equals method", "EQUALS METHOD"))
  .isFalse();

The equalsIgnoreCase() method of StringUtils returns a boolean value. This works similarly to equals(), except it ignores casing of characters in Strings:

assertThat(StringUtils.equals("equals method", "equals method"))
  .isTrue();
assertThat(StringUtils.equals("equals method", "EQUALS METHOD"))
  .isTrue();

4.2. Using equalsAny() and equalsAnyIgnoreCase()

The equalsAny() method’s first argument is a String and the second is a varargs of type CharSequence. The method returns true if any of the other given Strings matches the first String case-sensitively.

Otherwise, false is returned:

assertThat(StringUtils.equalsAny(null, null, null))
  .isTrue();
assertThat(StringUtils.equalsAny("equals any", "equals any", "any"))
  .isTrue();
assertThat(StringUtils.equalsAny("equals any", null, "equals any"))
  .isTrue();
assertThat(StringUtils.equalsAny(null, "equals", "any"))
  .isFalse();
assertThat(StringUtils.equalsAny("equals any", "EQUALS ANY", "ANY"))
  .isFalse();

The equalsAnyIgnoreCase() method works similarly to the equalsAny() method, but also ignores casing:

assertThat(StringUtils.equalsAnyIgnoreCase("ignore case", "IGNORE CASE", "any")).isTrue();

4.3. Using compare() and compareIgnoreCase()

The compare() method in StringUtils class is a null-safe version of the compareTo() method of String class and handles null values by considering a null value less than a non-null value. Two null values are considered equal.

Furthermore, this method can be used to sort a list of Strings with null entries:

assertThat(StringUtils.compare(null, null))
  .isEqualTo(0);
assertThat(StringUtils.compare(null, "abc"))
  .isEqualTo(-1);
assertThat(StringUtils.compare("abc", "bbc"))
  .isEqualTo(-1);
assertThat(StringUtils.compare("bbc", "abc"))
  .isEqualTo(1);
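
That null-safety is also what makes the method usable as a Comparator for the sorting mentioned above; a quick sketch:

List<String> strings = Arrays.asList("b", null, "a");
strings.sort(StringUtils::compare);
// nulls are considered less by default, so we get: [null, "a", "b"]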

The compareIgnoreCase() method behaves similarly, except it ignores casing:

assertThat(StringUtils.compareIgnoreCase("Abc", "bbc"))
  .isEqualTo(-1);
assertThat(StringUtils.compareIgnoreCase("bbc", "ABC"))
  .isEqualTo(1);
assertThat(StringUtils.compareIgnoreCase("abc", "ABC"))
  .isEqualTo(0);

The two methods can also be used with a nullIsLess option. This is a third boolean argument which decides if null values should be considered less or not.

A null value is lower than another String if nullIsLess is true and higher if nullIsLess is false.

Let’s try it out:

assertThat(StringUtils.compare(null, "abc", true))
  .isEqualTo(-1);
assertThat(StringUtils.compare(null, "abc", false))
  .isEqualTo(1);

The compareIgnoreCase() method with a third boolean argument works similarly, except it ignores case.
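
For instance, assuming the same nullIsLess semantics as above:

assertThat(StringUtils.compareIgnoreCase(null, "ABC", true))
  .isEqualTo(-1);
assertThat(StringUtils.compareIgnoreCase(null, "ABC", false))
  .isEqualTo(1);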

5. Conclusion

In this quick tutorial, we discussed different ways of comparing Strings.

And, as always, the source code for the examples can be found over on GitHub.


A Guide to Infinispan in Java

1. Overview

In this guide, we’ll learn about Infinispan, an in-memory key/value data store that ships with a more robust set of features than other tools of the same niche.

To understand how it works, we’ll build a simple project showcasing the most common features and check how they can be used.

2. Project Setup

To get started, we’ll need to add its dependency to our pom.xml.

The latest version can be found in the Maven Central repository:

<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>9.1.5.Final</version>
</dependency>

All the necessary underlying infrastructure will be handled programmatically from now on.

3. CacheManager Setup

The CacheManager is the foundation of the majority of features that we’ll use. It acts as a container for all declared caches, controlling their lifecycle, and is responsible for the global configuration.

Infinispan ships with a really easy way to build the CacheManager:

public DefaultCacheManager cacheManager() {
    return new DefaultCacheManager();
}

Now we’re able to build our caches with it.

4. Caches Setup

A cache is defined by a name and a configuration. The necessary configuration can be built using the class ConfigurationBuilder, already available in our classpath.

To test our caches, we’ll build a simple method that simulates some heavy query:

public class HelloWorldRepository {
    public String getHelloWorld() {
        try {
            System.out.println("Executing some heavy query");
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            // ...
            e.printStackTrace();
        }
        return "Hello World!";
    }
}

Also, to be able to check for changes in our caches, Infinispan provides a simple annotation @Listener.

When defining our cache, we can pass some object interested in any event happening inside it, and Infinispan will notify it when handling the cache:

@Listener
public class CacheListener {
    @CacheEntryCreated
    public void entryCreated(CacheEntryCreatedEvent<String, String> event) {
        this.printLog("Adding key '" + event.getKey() 
          + "' to cache", event);
    }

    @CacheEntryExpired
    public void entryExpired(CacheEntryExpiredEvent<String, String> event) {
        this.printLog("Expiring key '" + event.getKey() 
          + "' from cache", event);
    }

    @CacheEntryVisited
    public void entryVisited(CacheEntryVisitedEvent<String, String> event) {
        this.printLog("Key '" + event.getKey() + "' was visited", event);
    }

    @CacheEntryActivated
    public void entryActivated(CacheEntryActivatedEvent<String, String> event) {
        this.printLog("Activating key '" + event.getKey() 
          + "' on cache", event);
    }

    @CacheEntryPassivated
    public void entryPassivated(CacheEntryPassivatedEvent<String, String> event) {
        this.printLog("Passivating key '" + event.getKey() 
          + "' from cache", event);
    }

    @CacheEntryLoaded
    public void entryLoaded(CacheEntryLoadedEvent<String, String> event) {
        this.printLog("Loading key '" + event.getKey() 
          + "' to cache", event);
    }

    @CacheEntriesEvicted
    public void entriesEvicted(CacheEntriesEvictedEvent<String, String> event) {
        StringBuilder builder = new StringBuilder();
        event.getEntries().forEach(
          (key, value) -> builder.append(key).append(", "));
        System.out.println("Evicting following entries from cache: " 
          + builder.toString());
    }

    private void printLog(String log, CacheEntryEvent event) {
        if (!event.isPre()) {
            System.out.println(log);
        }
    }
}

Before printing our message, we check whether the event being notified has already happened because, for some event types, Infinispan sends two notifications: one before and one right after it’s been processed.

Now let’s build a method to handle the cache creation for us:

private <K, V> Cache<K, V> buildCache(
  String cacheName, 
  DefaultCacheManager cacheManager, 
  CacheListener listener, 
  Configuration configuration) {

    cacheManager.defineConfiguration(cacheName, configuration);
    Cache<K, V> cache = cacheManager.getCache(cacheName);
    cache.addListener(listener);
    return cache;
}

Notice how we pass the configuration to the CacheManager and then use the same cacheName to get the object corresponding to the wanted cache. Note also how we register the listener on the cache object itself.

We’ll now check five different cache configurations, and we’ll see how we can set them up and make the best use of them.

4.1. Simple Cache

The simplest type of cache can be defined in one line, using our method buildCache:

public Cache<String, String> simpleHelloWorldCache(
  DefaultCacheManager cacheManager, 
  CacheListener listener) {
    return this.buildCache(SIMPLE_HELLO_WORLD_CACHE, 
      cacheManager, listener, new ConfigurationBuilder().build());
}

We can now build a Service:

public String findSimpleHelloWorld() {
    String cacheKey = "simple-hello";
    return simpleHelloWorldCache
      .computeIfAbsent(cacheKey, k -> repository.getHelloWorld());
}

Note how we use the cache: first we check whether the wanted entry is already cached. If it isn’t, we’ll need to call our Repository and then cache the result.

Let’s add a simple method in our tests to time our methods:

protected <T> long timeThis(Supplier<T> supplier) {
    long millis = System.currentTimeMillis();
    supplier.get();
    return System.currentTimeMillis() - millis;
}

Testing it, we can check the time between executing two method calls:

@Test
public void whenGetIsCalledTwoTimes_thenTheSecondShouldHitTheCache() {
    assertThat(timeThis(() -> helloWorldService.findSimpleHelloWorld()))
      .isGreaterThanOrEqualTo(1000);

    assertThat(timeThis(() -> helloWorldService.findSimpleHelloWorld()))
      .isLessThan(100);
}

4.2. Expiration Cache

We can define a cache in which all entries have a lifespan, in other words, elements will be removed from the cache after a given period. The configuration is quite simple:

private Configuration expiringConfiguration() {
    return new ConfigurationBuilder().expiration()
      .lifespan(1, TimeUnit.SECONDS)
      .build();
}

Now we build our cache using the above configuration:

public Cache<String, String> expiringHelloWorldCache(
  DefaultCacheManager cacheManager, 
  CacheListener listener) {
    
    return this.buildCache(EXPIRING_HELLO_WORLD_CACHE, 
      cacheManager, listener, expiringConfiguration());
}

And finally, use it in a similar method from our simple cache above:

public String findExpiringHelloWorld() {
    String cacheKey = "simple-hello";
    String helloWorld = expiringHelloWorldCache.get(cacheKey);
    if (helloWorld == null) {
        helloWorld = repository.getHelloWorld();
        expiringHelloWorldCache.put(cacheKey, helloWorld);
    }
    return helloWorld;
}

Let’s test our times again:

@Test
public void whenGetIsCalledTwoTimesQuickly_thenTheSecondShouldHitTheCache() {
    assertThat(timeThis(() -> helloWorldService.findExpiringHelloWorld()))
      .isGreaterThanOrEqualTo(1000);

    assertThat(timeThis(() -> helloWorldService.findExpiringHelloWorld()))
      .isLessThan(100);
}

Running it, we see that in quick succession the second call hits the cache. Now, to show that the expiration is relative to the entry’s put time, let’s force our entry to expire between calls:

@Test
public void whenGetIsCalledTwiceSparsely_thenNeitherHitsTheCache()
  throws InterruptedException {

    assertThat(timeThis(() -> helloWorldService.findExpiringHelloWorld()))
      .isGreaterThanOrEqualTo(1000);

    Thread.sleep(1100);

    assertThat(timeThis(() -> helloWorldService.findExpiringHelloWorld()))
      .isGreaterThanOrEqualTo(1000);
}

After running the test, note how after the given time our entry was expired from the cache. We can confirm this by looking at the printed log lines from our listener:

Executing some heavy query
Adding key 'simple-hello' to cache
Expiring key 'simple-hello' from cache
Executing some heavy query
Adding key 'simple-hello' to cache

Note that the entry is expired when we try to access it. Infinispan checks for an expired entry in two moments: when we try to access it or when the reaper thread scans the cache.
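
The reaper’s interval is tunable too. A sketch, assuming Infinispan 9’s ExpirationConfigurationBuilder API – the 500 ms value is arbitrary:

private Configuration expiringWithFasterReaperConfiguration() {
    return new ConfigurationBuilder().expiration()
      .lifespan(1, TimeUnit.SECONDS)
      .wakeUpInterval(500, TimeUnit.MILLISECONDS)
      .build();
}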

We can use expiration even in caches that don’t define it in their main configuration – the put method accepts extra arguments:

simpleHelloWorldCache.put(cacheKey, helloWorld, 10, TimeUnit.SECONDS);

Or, instead of a fixed lifespan, we can give our entry a maximum idleTime:

simpleHelloWorldCache.put(cacheKey, helloWorld, -1, TimeUnit.SECONDS, 10, TimeUnit.SECONDS);

Using -1 for the lifespan attribute, the entry won’t expire because of its lifespan, but since we combine it with an idleTime of 10 seconds, we tell Infinispan to expire this entry unless it’s visited within that timeframe.

4.3. Cache Eviction

In Infinispan we can limit the number of entries in a given cache with the eviction configuration:

private Configuration evictingConfiguration() {
    return new ConfigurationBuilder()
      .memory().evictionType(EvictionType.COUNT).size(1)
      .build();
}

In this example, we’re limiting the maximum entries in this cache to one, meaning that, if we try to enter another one, it’ll be evicted from our cache.

Again, the method is similar to the ones already presented:

public String findEvictingHelloWorld(String key) {
    String value = evictingHelloWorldCache.get(key);
    if(value == null) {
        value = repository.getHelloWorld();
        evictingHelloWorldCache.put(key, value);
    }
    return value;
}

Let’s build our test:

@Test
public void whenTwoAreAdded_thenFirstShouldntBeAvailable() {

    assertThat(timeThis(
      () -> helloWorldService.findEvictingHelloWorld("key 1")))
      .isGreaterThanOrEqualTo(1000);

    assertThat(timeThis(
      () -> helloWorldService.findEvictingHelloWorld("key 2")))
      .isGreaterThanOrEqualTo(1000);

    assertThat(timeThis(
      () -> helloWorldService.findEvictingHelloWorld("key 1")))
      .isGreaterThanOrEqualTo(1000);
}

Running the test, we can look at our listener log of activities:

Executing some heavy query
Adding key 'key 1' to cache
Executing some heavy query
Evicting following entries from cache: key 1, 
Adding key 'key 2' to cache
Executing some heavy query
Evicting following entries from cache: key 2, 
Adding key 'key 1' to cache

Check how the first key was automatically removed from the cache when we inserted the second one, and how the second one was then removed as well, to make room for our first key again.

4.4. Passivation Cache

Cache passivation is one of the powerful features of Infinispan. By combining passivation and eviction, we can create a cache that doesn’t occupy a lot of memory, without losing information.

Let’s have a look at a passivation configuration:

private Configuration passivatingConfiguration() {
    return new ConfigurationBuilder()
      .memory().evictionType(EvictionType.COUNT).size(1)
      .persistence() 
      .passivation(true)    // activating passivation
      .addSingleFileStore() // in a single file
      .purgeOnStartup(true) // clean the file on startup
      .location(System.getProperty("java.io.tmpdir")) 
      .build();
}

We’re again forcing just one entry in our cache memory, but telling Infinispan to passivate the remaining entries, instead of just removing them.

Let’s see what happens when we try to fill more than one entry:

public String findPassivatingHelloWorld(String key) {
    return passivatingHelloWorldCache.computeIfAbsent(key, k -> 
      repository.getHelloWorld());
}

Let’s build our test and run it:

@Test
public void whenTwoAreAdded_thenTheFirstShouldBeAvailable() {

    assertThat(timeThis(
      () -> helloWorldService.findPassivatingHelloWorld("key 1")))
      .isGreaterThanOrEqualTo(1000);

    assertThat(timeThis(
      () -> helloWorldService.findPassivatingHelloWorld("key 2")))
      .isGreaterThanOrEqualTo(1000);

    assertThat(timeThis(
      () -> helloWorldService.findPassivatingHelloWorld("key 1")))
      .isLessThan(100);
}

Now let’s look at our listener activities:

Executing some heavy query
Adding key 'key 1' to cache
Executing some heavy query
Passivating key 'key 1' from cache
Evicting following entries from cache: key 1, 
Adding key 'key 2' to cache
Passivating key 'key 2' from cache
Evicting following entries from cache: key 2, 
Loading key 'key 1' to cache
Activating key 'key 1' on cache
Key 'key 1' was visited

Note how many steps it took to keep our cache at just one entry. Also, note the order of the steps – passivation, eviction, and then loading followed by activation. Let’s see what those steps mean:

  • Passivation – our entry is stored somewhere else, away from the main storage of Infinispan (in this case, the memory)
  • Eviction – the entry is removed, to free memory and keep the configured maximum number of entries in the cache
  • Loading – when we try to reach the passivated entry, Infinispan checks its stored contents and loads the entry into memory again
  • Activation – the entry is accessible in Infinispan again

4.5. Transactional Cache

Infinispan ships with powerful transaction control. Like its database counterpart, it’s useful for maintaining integrity while more than one thread is trying to write to the same entry.

Let’s see how we can define a cache with transactional capabilities:

private Configuration transactionalConfiguration() {
    return new ConfigurationBuilder()
      .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
      .lockingMode(LockingMode.PESSIMISTIC)
      .build();
}

To make it possible to test it, let’s build two methods – one that finishes its transaction rapidly, and one that takes a while:

public Integer getQuickHowManyVisits() {
    try {
        TransactionManager tm = transactionalCache
          .getAdvancedCache().getTransactionManager();
        tm.begin();
        Integer howManyVisits = transactionalCache.get(KEY);
        howManyVisits++;
        System.out.println("I'll try to set HowManyVisits to " + howManyVisits);
        StopWatch watch = new StopWatch();
        watch.start();
        transactionalCache.put(KEY, howManyVisits);
        watch.stop();
        System.out.println("I was able to set HowManyVisits to " + howManyVisits +
          " after waiting " + watch.getTotalTimeSeconds() + " seconds");

        tm.commit();
        return howManyVisits;
    } catch (Exception e) {
        // the JTA API throws several checked exceptions, so we wrap them
        throw new RuntimeException(e);
    }
}

public void startBackgroundBatch() {
    try {
        TransactionManager tm = transactionalCache
          .getAdvancedCache().getTransactionManager();
        tm.begin();
        transactionalCache.put(KEY, 1000);
        System.out.println("HowManyVisits should now be 1000, " +
          "but we are holding the transaction");
        Thread.sleep(1000L);
        tm.rollback();
        System.out.println("The slow batch suffered a rollback");
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

Now let’s create a test that executes both methods and check how Infinispan will behave:

@Test
public void whenLockingAnEntry_thenItShouldBeInaccessible() throws InterruptedException {
    Runnable backGroundJob = () -> transactionalService.startBackgroundBatch();
    Thread backgroundThread = new Thread(backGroundJob);
    transactionalService.getQuickHowManyVisits();
    backgroundThread.start();
    Thread.sleep(100); // let's wait for our thread to warm up

    assertThat(timeThis(() -> transactionalService.getQuickHowManyVisits()))
      .isGreaterThan(500).isLessThan(1000);
}

Executing it, we’ll see the following activities in our console again:

Adding key 'key' to cache
Key 'key' was visited
I'll try to set HowManyVisits to 1
I was able to set HowManyVisits to 1 after waiting 0.001 seconds
HowManyVisits should now be 1000, but we are holding the transaction
Key 'key' was visited
I'll try to set HowManyVisits to 2
I was able to set HowManyVisits to 2 after waiting 0.902 seconds
The slow batch suffered a rollback

Note the wait time on the main thread: the second quick call had to wait for the end of the transaction created by the slow method.

5. Conclusion

In this article, we’ve seen what Infinispan is, along with its leading features and capabilities as a cache within an application.

As always, the code can be found over on GitHub.

Building Microservices with Eclipse MicroProfile

1. Overview

In this article, we’ll focus on building a microservice based on Eclipse MicroProfile.

We’ll look at how to write a RESTful web application using JAX-RS, CDI and JSON-P APIs.

2. A Microservice Architecture

Simply put, microservices are a software architecture style that forms a complete system as a collection of several independent services.

Each one focuses on one functional perimeter and communicates to the others with a language-agnostic protocol, such as REST.

3. Eclipse MicroProfile

Eclipse MicroProfile is an initiative that aims to optimize Enterprise Java for the Microservices architecture. It’s based on a subset of Java EE WebProfile APIs, so we can build MicroProfile applications like we build Java EE ones.

The goal of MicroProfile is to define standard APIs for building microservices and deliver portable applications across multiple MicroProfile runtimes.

4. Maven Dependencies

All dependencies required to build an Eclipse MicroProfile application are provided by this BOM (Bill Of Materials) dependency:

<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>1.2</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>

The scope is set as provided because the MicroProfile runtime already includes the API and the implementation.

5. Representation Model

Let’s start by creating a quick resource class:

public class Book {
    private String id;
    private String name;
    private String author;
    private Integer pages;
    // ...
}

As we can see, there’s no annotation on this Book class.

6. Using CDI

Simply put, CDI is an API that provides dependency injection and lifecycle management. It simplifies the use of Enterprise beans in Web Applications.

Let’s now create a CDI managed bean as a store for the book representation:

@ApplicationScoped
public class BookManager {

    private ConcurrentMap<String, Book> inMemoryStore
      = new ConcurrentHashMap<>();

    public String add(Book book) {
        // ...
    }

    public Book get(String id) {
        // ...
    }

    public List<Book> getAll() {
        // ...
    }
}

We annotate this class with @ApplicationScoped because we need only one instance, whose state is shared by all clients. For that, we used a ConcurrentMap as a thread-safe in-memory data store. Then we added methods for the CRUD operations.
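
For completeness, here’s a minimal sketch of the elided method bodies – the UUID-based id generation and the Book setter are assumptions, not part of the article:

public String add(Book book) {
    String id = UUID.randomUUID().toString();
    book.setId(id); // assuming a standard setter on Book
    inMemoryStore.put(id, book);
    return id;
}

public Book get(String id) {
    return inMemoryStore.get(id);
}

public List<Book> getAll() {
    return new ArrayList<>(inMemoryStore.values());
}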

Now our bean is CDI-ready and can be injected into the BookEndpoint bean using the @Inject annotation.

7. JAX-RS API

To create a REST application with JAX-RS, we need to create an Application class annotated with @ApplicationPath and a resource annotated with @Path.

7.1. JAX RS Application

The JAX-RS Application identifies the base URI under which we expose the resource in a Web Application.

Let’s create the following JAX-RS Application:

@ApplicationPath("/library")
public class LibraryApplication extends Application {
}

In this example, all JAX-RS resource classes in the Web Application are associated with the LibraryApplication, placing them under the same library path – the value of the @ApplicationPath annotation.

This annotated class tells the JAX-RS runtime that it should find the resources automatically and expose them.

7.2. JAX RS Endpoint

An Endpoint class, also called a Resource class, should define one resource, although many of the same type are technically possible.

Each Java class annotated with @Path, or having at least one method annotated with @Path or @HttpMethod is an Endpoint.

Now, we’ll create a JAX-RS Endpoint that exposes that representation:

@Path("books")
@RequestScoped
public class BookEndpoint {

    @Inject
    private BookManager bookManager;
 
    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getBook(@PathParam("id") String id) {
        return Response.ok(bookManager.get(id)).build();
    }
 
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getAllBooks() {
        return Response.ok(bookManager.getAll()).build();
    }
 
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response add(Book book) {
        String bookId = bookManager.add(book);
        return Response.created(
          UriBuilder.fromResource(this.getClass())
            .path(bookId).build())
            .build();
    }
}

At this point, we can access the BookEndpoint Resource under the /library/books path in the web application.

7.3. JAX RS JSON Media Type

JAX-RS supports many media types for communicating with REST clients, but Eclipse MicroProfile restricts us to JSON, as it specifies the use of the JSON-P API. As such, we need to annotate our methods with @Consumes(MediaType.APPLICATION_JSON) and @Produces(MediaType.APPLICATION_JSON).

The @Consumes annotation restricts the accepted formats – in this example, only JSON data format is accepted. The HTTP request header Content-Type should be application/json.

The same idea lies behind the @Produces annotation. The JAX RS Runtime should marshal the response to JSON format. The request HTTP header Accept should be application/json.

8. JSON-P

The JAX-RS runtime supports JSON-P out of the box, so we can use JsonObject as a method input parameter or return type.

But in the real world, we often work with POJO classes, so we need a way to map between JsonObject and POJOs. Here’s where the JAX-RS entity providers come into play.

To unmarshal a JSON input stream into the Book POJO – that is, to invoke a resource method with a parameter of type Book – we need to create a class BookMessageBodyReader:

@Provider
@Consumes(MediaType.APPLICATION_JSON)
public class BookMessageBodyReader implements MessageBodyReader<Book> {

    @Override
    public boolean isReadable(
      Class<?> type, Type genericType, 
      Annotation[] annotations, 
      MediaType mediaType) {
 
        return type.equals(Book.class);
    }

    @Override
    public Book readFrom(
      Class<Book> type, Type genericType, 
      Annotation[] annotations,
      MediaType mediaType, 
      MultivaluedMap<String, String> httpHeaders, 
      InputStream entityStream) throws IOException, WebApplicationException {
 
        return BookMapper.map(entityStream);
    }
}
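
The BookMapper used above isn’t shown here; a minimal JSON-P-based sketch could look like the following – the Book accessors are assumptions:

public class BookMapper {

    // JSON -> POJO
    public static Book map(InputStream entityStream) {
        try (JsonReader jsonReader = Json.createReader(entityStream)) {
            JsonObject json = jsonReader.readObject();
            Book book = new Book();
            book.setId(json.getString("id", null));
            book.setName(json.getString("name", null));
            book.setAuthor(json.getString("author", null));
            book.setPages(json.getInt("pages", 0));
            return book;
        }
    }

    // POJO -> JSON
    public static JsonObject map(Book book) {
        return Json.createObjectBuilder()
          .add("id", book.getId())
          .add("name", book.getName())
          .add("author", book.getAuthor())
          .add("pages", book.getPages())
          .build();
    }
}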

We follow the same process to marshal a Book to a JSON output stream – that is, to invoke a resource method whose return type is Book – by creating a BookMessageBodyWriter:

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class BookMessageBodyWriter 
  implements MessageBodyWriter<Book> {
 
    @Override
    public boolean isWriteable(
      Class<?> type, Type genericType, 
      Annotation[] annotations, 
      MediaType mediaType) {
 
        return type.equals(Book.class);
    }
 
    // ...
 
    @Override
    public void writeTo(
      Book book, Class<?> type, 
      Type genericType, 
      Annotation[] annotations, 
      MediaType mediaType, 
      MultivaluedMap<String, Object> httpHeaders, 
      OutputStream entityStream) throws IOException, WebApplicationException {
 
        JsonWriter jsonWriter = Json.createWriter(entityStream);
        JsonObject jsonObject = BookMapper.map(book);
        jsonWriter.writeObject(jsonObject);
        jsonWriter.close();
    }
}

As BookMessageBodyReader and BookMessageBodyWriter are annotated with @Provider, they’re registered automatically by the JAX RS runtime.

9. Building and Running the Application

A MicroProfile application is portable and should run in any compliant MicroProfile runtime. We’ll explain how to build and run our application in Open Liberty, but we could use any compliant Eclipse MicroProfile runtime.

We configure the Open Liberty runtime through the server.xml config file:

<server description="OpenLiberty MicroProfile server">
    <featureManager>
        <feature>jaxrs-2.0</feature>
        <feature>cdi-1.2</feature>
        <feature>jsonp-1.0</feature>
    </featureManager>
    <httpEndpoint httpPort="${default.http.port}" httpsPort="${default.https.port}"
      id="defaultHttpEndpoint" host="*"/>
    <applicationManager autoExpand="true"/>
    <webApplication context-root="${app.context.root}" location="${app.location}"/>
</server>

Let’s add the plugin liberty-maven-plugin to our pom.xml:

<plugin>
    <groupId>net.wasdev.wlp.maven.plugins</groupId>
    <artifactId>liberty-maven-plugin</artifactId>
    <version>2.1.2</version>
    <configuration>
        <assemblyArtifact>
            <groupId>io.openliberty</groupId>
            <artifactId>openliberty-runtime</artifactId>
            <version>17.0.0.4</version>
            <type>zip</type>
        </assemblyArtifact>
        <configFile>${basedir}/src/main/liberty/config/server.xml</configFile>
        <packageFile>${package.file}</packageFile>
        <include>${packaging.type}</include>
        <looseApplication>false</looseApplication>
        <installAppPackages>project</installAppPackages>
        <bootstrapProperties>
            <app.context.root>/</app.context.root>
            <app.location>${project.artifactId}-${project.version}.war</app.location>
            <default.http.port>9080</default.http.port>
            <default.https.port>9443</default.https.port>
        </bootstrapProperties>
    </configuration>
    <executions>
        <execution>
            <id>install-server</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>install-server</goal>
                <goal>create-server</goal>
                <goal>install-feature</goal>
            </goals>
        </execution>
        <execution>
            <id>package-server-with-apps</id>
            <phase>package</phase>
            <goals>
                <goal>install-apps</goal>
                <goal>package-server</goal>
            </goals>
        </execution>
    </executions>
</plugin>

This plugin is configurable through a set of properties:

<properties>
    <!--...-->
    <app.name>library</app.name>
    <package.file>${project.build.directory}/${app.name}-service.jar</package.file>
    <packaging.type>runnable</packaging.type>
</properties>

The configuration above produces an executable jar file, so that our application will be an independent microservice that can be deployed and run in isolation. We can also deploy it as a Docker image.

To create an executable jar, run the following command:

mvn package

And to run our microservice, we use this command:

java -jar target/library-service.jar

This will start the Open Liberty runtime and deploy our service. We can then access our Endpoint and get all the books at this URL:

curl http://localhost:9080/library/books

The result is a JSON:

[
  {
    "id": "0001-201802",
    "isbn": "1",
    "name": "Building Microservice With Eclipse MicroProfile",
    "author": "baeldung",
    "pages": 420
  }
]

To get a single book, we request this URL:

curl http://localhost:9080/library/books/0001-201802

And the result is JSON:

{
    "id": "0001-201802",
    "isbn": "1",
    "name": "Building Microservice With Eclipse MicroProfile",
    "author": "baeldung",
    "pages": 420
}

Now we’ll add a new Book by interacting with the API:

curl \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"isbn": "22", "name": "Gradle in Action","author": "baeldung","pages": 420}' \
  http://localhost:9080/library/books

As we can see, the status of the response is 201, indicating that the book was successfully created, and the Location is the URI by which we can access it:

< HTTP/1.1 201 Created
< Location: http://localhost:9080/library/books/0009-201802

10. Conclusion

This article demonstrated how to build a simple microservice based on Eclipse MicroProfile, discussing JAX RS, JSON-P and CDI.

The code is available over on GitHub; this is a Maven-based project, so it should be simple to import and run as it is.

Method Overloading and Overriding in Java

1. Overview

Method overloading and overriding are key concepts of the Java programming language, and as such, they deserve an in-depth look.

In this article, we’ll learn the basics of these concepts and see in what situations they can be useful.

2. Method Overloading

Method overloading is a powerful mechanism that allows us to define cohesive class APIs. To better understand why method overloading is such a valuable feature, let’s see a simple example.

Suppose that we’ve written a naive utility class that implements different methods for multiplying two numbers, three numbers, and so on.

If we’ve given the methods misleading or ambiguous names, such as multiply2(), multiply3(), multiply4(), then that would be a badly designed class API. Here’s where method overloading comes into play.

Simply put, we can implement method overloading in two different ways:

  • implementing two or more methods that have the same name but take different numbers of arguments
  • implementing two or more methods that have the same name but take arguments of different types

2.1. Different Numbers of Arguments

The Multiplier class shows, in a nutshell, how to overload the multiply() method by simply defining two implementations that take different numbers of arguments:

public class Multiplier {
    
    public int multiply(int a, int b) {
        return a * b;
    }
    
    public int multiply(int a, int b, int c) {
        return a * b * c;
    }
}

2.2. Arguments of Different Types

Similarly, we can overload the multiply() method by making it accept arguments of different types:

public class Multiplier {
    
    public int multiply(int a, int b) {
        return a * b;
    }
    
    public double multiply(double a, double b) {
        return a * b;
    }
}

Furthermore, it’s legitimate to define the Multiplier class with both types of method overloading:

public class Multiplier {
    
    public int multiply(int a, int b) {
        return a * b;
    }
    
    public int multiply(int a, int b, int c) {
        return a * b * c;
    }
    
    public double multiply(double a, double b) {
        return a * b;
    }
}

It’s worth noting, however, that it’s not possible to have two method implementations that differ only in their return types.

To understand why – let’s consider the following example:

public int multiply(int a, int b) { 
    return a * b; 
}
 
public double multiply(int a, int b) { 
    return a * b; 
}

In this case, the code simply wouldn’t compile because of the method call ambiguity – the compiler wouldn’t know which implementation of multiply() to call.

2.3. Type Promotion

One neat feature provided by method overloading is the so-called type promotion, a.k.a. widening primitive conversion.

In simple terms, one given type is implicitly promoted to another one when there’s no match between the types of the arguments passed to the overloaded method and a specific method implementation.

To understand more clearly how type promotion works, consider the following implementations of the multiply() method:

public double multiply(int a, long b) {
    return a * b;
}

public int multiply(int a, int b, int c) {
    return a * b * c;
}

Now, calling the method with two int arguments will result in the second argument being promoted to long, as in this case there isn’t a matching implementation of the method with two int arguments.

Let’s see a quick unit test to demonstrate type promotion:

@Test
public void whenCalledMultiplyAndNoMatching_thenTypePromotion() {
    assertThat(multiplier.multiply(10, 10)).isEqualTo(100.0);
}

Conversely, if we call the method with a matching implementation, type promotion just doesn’t take place:

@Test
public void whenCalledMultiplyAndMatching_thenNoTypePromotion() {
    assertThat(multiplier.multiply(10, 10, 10)).isEqualTo(1000);
}

Here’s a summary of the type promotion rules that apply for method overloading:

  • byte can be promoted to short, int, long, float, or double
  • short can be promoted to int, long, float, or double
  • char can be promoted to int, long, float, or double
  • int can be promoted to long, float, or double
  • long can be promoted to float or double
  • float can be promoted to double

2.4. Static Binding

The ability to associate a specific method call to the method’s body is known as binding.

In the case of method overloading, the binding is performed statically at compile time, hence it’s called static binding.

The compiler can effectively set the binding at compile time by simply checking the methods’ signatures.

3. Method Overriding

Method overriding allows us to provide fine-grained implementations in subclasses for methods defined in a base class.

While method overriding is a powerful feature – considering that it’s a logical consequence of using inheritance, one of the biggest pillars of OOP – when and where to utilize it should be analyzed carefully, on a per-use-case basis.

Let’s see now how to use method overriding by creating a simple, inheritance-based (“is-a”) relationship.

Here’s the base class:

public class Vehicle {
    
    public String accelerate(long mph) {
        return "The vehicle accelerates at : " + mph + " MPH.";
    }
    
    public String stop() {
        return "The vehicle has stopped.";
    }
    
    public String run() {
        return "The vehicle is running.";
    }
}

And here’s a contrived subclass:

public class Car extends Vehicle {

    @Override
    public String accelerate(long mph) {
        return "The car accelerates at : " + mph + " MPH.";
    }
}

In the hierarchy above, we’ve simply overridden the accelerate() method in order to provide a more refined implementation for the subtype Car.

Here, it’s easy to see that if an application uses instances of the Vehicle class, then it can also work with instances of Car, as both implementations of the accelerate() method have the same signature and the same return type.

Let’s write a few unit tests to check the Vehicle and Car classes (they live in separate test classes, so the duplicate method names below don’t clash):

@Test
public void whenCalledAccelerate_thenOneAssertion() {
    assertThat(vehicle.accelerate(100))
      .isEqualTo("The vehicle accelerates at : 100 MPH.");
}
    
@Test
public void whenCalledRun_thenOneAssertion() {
    assertThat(vehicle.run())
      .isEqualTo("The vehicle is running.");
}
    
@Test
public void whenCalledStop_thenOneAssertion() {
    assertThat(vehicle.stop())
      .isEqualTo("The vehicle has stopped.");
}

@Test
public void whenCalledAccelerate_thenOneAssertion() {
    assertThat(car.accelerate(80))
      .isEqualTo("The car accelerates at : 80 MPH.");
}
    
@Test
public void whenCalledRun_thenOneAssertion() {
    assertThat(car.run())
      .isEqualTo("The vehicle is running.");
}
    
@Test
public void whenCalledStop_thenOneAssertion() {
    assertThat(car.stop())
      .isEqualTo("The vehicle has stopped.");
}

Now, let’s see some unit tests that show how the run() and stop() methods, which aren’t overridden, return equal values for both Car and Vehicle:

@Test
public void givenVehicleCarInstances_whenCalledRun_thenEqual() {
    assertThat(vehicle.run()).isEqualTo(car.run());
}
 
@Test
public void givenVehicleCarInstances_whenCalledStop_thenEqual() {
   assertThat(vehicle.stop()).isEqualTo(car.stop());
}

In our case, we have access to the source code for both classes, so we can clearly see that calling the accelerate() method on a base Vehicle instance and calling accelerate() on a Car instance will return different values for the same argument.

Therefore, the following test demonstrates that the overridden method is invoked for an instance of Car:

@Test
public void whenCalledAccelerateWithSameArgument_thenNotEqual() {
    assertThat(vehicle.accelerate(100))
      .isNotEqualTo(car.accelerate(100));
}

3.1. Type Substitutability

A core principle in OOP is that of type substitutability, which is closely associated with the Liskov Substitution Principle (LSP).

Simply put, the LSP states that if an application works with a given base type, then it should also work with any of its subtypes. That way, type substitutability is properly preserved.

The biggest problem with method overriding is that some specific method implementations in the derived classes might not fully adhere to the LSP and therefore fail to preserve type substitutability.

Of course, it’s valid to make an overridden method accept arguments of different types and return a different type as well, but with full adherence to these rules:

  • If a method in the base class takes argument(s) of a given type, the overridden method should take the same type or a supertype (a.k.a. contravariant method arguments)
  • If a method in the base class returns void, the overridden method should return void
  • If a method in the base class returns a primitive, the overridden method should return the same primitive
  • If a method in the base class returns a certain type, the overridden method should return the same type or a subtype (a.k.a. covariant return type)
  • If a method in the base class throws an exception, the overridden method must throw the same exception or a subtype of the base class exception

3.2. Dynamic Binding

Considering that method overriding can only be implemented with inheritance, where there’s a hierarchy of a base type and subtype(s), the compiler can’t determine at compile time which method to call, as both the base class and the subclasses define the same methods.

As a consequence, the JVM needs to check the actual type of the object at runtime to know which method should be invoked.

As this checking happens at runtime, method overriding is a typical example of dynamic binding.
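
To make the dynamic dispatch visible, here’s a small sketch reusing the Vehicle and Car classes from section 3:

Vehicle vehicle = new Car(); // static type: Vehicle, runtime type: Car

// the call is resolved at runtime against the actual Car object,
// so the overridden implementation runs
assertThat(vehicle.accelerate(100))
  .isEqualTo("The car accelerates at : 100 MPH.");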

4. Conclusion

In this tutorial, we learned how to implement method overloading and method overriding, and we explored some typical situations where they’re useful.

As usual, all the code samples shown in this article are available over on GitHub.

Object Type Casting in Java

1. Overview

The Java type system is made up of two kinds of types: primitives and references.

We covered primitive conversions in this article, and we’ll focus on references casting here, to get a good understanding of how Java handles types.

2. Primitive vs. Reference

Although primitive conversions and reference variable casting may look similar, they’re quite different concepts.

In both cases, we’re “turning” one type into another. But, in a simplified way, a primitive variable contains its value, and conversion of a primitive variable means irreversible changes in its value:

double myDouble = 1.1;
int myInt = (int) myDouble;
        
assertNotEquals(myDouble, myInt);

After the conversion in the above example, myInt variable is 1, and we can’t restore the previous value 1.1 from it.

Reference variables are different; the reference variable only refers to an object but doesn’t contain the object itself.

And casting a reference variable doesn’t touch the object it refers to, but only labels this object in another way, expanding or narrowing opportunities to work with it. Upcasting narrows the list of methods and properties available to this object, and downcasting can extend it.

A reference is like a remote control to an object. The remote control has more or fewer buttons depending on its type, and the object itself is stored in a heap. When we do casting, we change the type of the remote control but don’t change the object itself.

3. Upcasting

Casting from a subclass to a superclass is called upcasting. Typically, the upcasting is implicitly performed by the compiler.

Upcasting is closely related to inheritance – another core concept in Java. It’s common to use reference variables to refer to a more specific type. And every time we do this, implicit upcasting takes place.

To demonstrate upcasting let’s define an Animal class:

public class Animal {

    public void eat() {
        // ... 
    }
}

Now let’s extend Animal:

public class Cat extends Animal {

    public void eat() {
         // ... 
    }

    public void meow() {
         // ... 
    }
}

Now we can create an object of Cat class and assign it to the reference variable of type Cat:

Cat cat = new Cat();

And we can also assign it to the reference variable of type Animal:

Animal animal = cat;

In the above assignment, implicit upcasting takes place. We could do it explicitly:

animal = (Animal) cat;

But there’s no need to do an explicit cast up the inheritance tree. The compiler knows that cat is an Animal and doesn’t display any errors.

Note that a reference can refer to any subtype of the declared type.

Using upcasting, we’ve restricted the number of methods available to the Cat instance, but we haven’t changed the instance itself. Now we can’t do anything that’s specific to Cat – we can’t invoke meow() on the animal variable.

Although the Cat object remains a Cat object, calling meow() would cause a compiler error:

// animal.meow(); The method meow() is undefined for the type Animal

To invoke meow() we’ll need to downcast animal, and we’ll do that later.

But first, let’s describe what upcasting gives us. Thanks to upcasting, we can take advantage of polymorphism.

3.1. Polymorphism

Let’s define another subclass of Animal, a Dog class:

public class Dog extends Animal {

    public void eat() {
         // ... 
    }
}

Now we can define the feed() method which treats all cats and dogs like animals:

public class AnimalFeeder {

    public void feed(List<Animal> animals) {
        animals.forEach(animal -> {
            animal.eat();
        });
    }
}

We don’t want AnimalFeeder to care about which animal is on the list – a Cat or a Dog. In the feed() method they are all animals.

Implicit upcasting occurs when we add objects of a specific type to the animals list:

List<Animal> animals = new ArrayList<>();
animals.add(new Cat());
animals.add(new Dog());
new AnimalFeeder().feed(animals);

We add cats and dogs and they are upcast to Animal type implicitly. Each Cat is an Animal and each Dog is an Animal. They’re polymorphic.

By the way, all Java objects are polymorphic because each object is an Object at least. We can assign an instance of Animal to the reference variable of Object type and the compiler won’t complain:

Object object = new Animal();

That’s why all Java objects we create already have Object specific methods, for example, toString().

Upcasting to an interface is also common.

We can create Mew interface and make Cat implement it:

public interface Mew {
    public void meow();
}

public class Cat extends Animal implements Mew {
    
    public void eat() {
         // ... 
    }

    public void meow() {
         // ... 
    }
}

Now any Cat object can also be upcast to Mew:

Mew mew = new Cat();

Cat is a Mew, so the upcasting is legal and done implicitly.

Thus, Cat is a Mew, Animal, Object, and Cat. It can be assigned to reference variables of all four types in our example.
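
To make this concrete, here’s a minimal sketch showing the same Cat instance assigned to reference variables of all four types – every assignment after the first is an implicit upcast:

Cat cat = new Cat();
Mew mew = cat;       // upcast to the Mew interface
Animal animal = cat; // upcast to the superclass
Object object = cat; // upcast to Object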

3.2. Overriding

In the example above, the eat() method is overridden. This means that although eat() is called on the variable of the Animal type, the work is done by methods invoked on real objects – cats and dogs:

public void feed(List<Animal> animals) {
    animals.forEach(animal -> {
        animal.eat();
    });
}

If we add some logging to our classes, we’ll see that Cat’s and Dog’s methods are called:

web - 2018-02-15 22:48:49,354 [main] INFO com.baeldung.casting.Cat - cat is eating
web - 2018-02-15 22:48:49,363 [main] INFO com.baeldung.casting.Dog - dog is eating

To sum up:

  • A reference variable can refer to an object if the object is of the same type as the variable or if it is a subtype
  • Upcasting happens implicitly
  • All Java objects are polymorphic and can be treated as objects of supertype due to upcasting

4. Downcasting

What if we want to use the variable of type Animal to invoke a method available only to Cat class? Here comes the downcasting. It’s the casting from a superclass to a subclass.

Let’s take an example:

Animal animal = new Cat();

We know that animal variable refers to the instance of Cat. And we want to invoke Cat’s meow() method on the animal. But the compiler complains that meow() method doesn’t exist for the type Animal.

To call meow() we should downcast animal to Cat:

((Cat) animal).meow();

The inner parentheses and the type they contain are sometimes called the cast operator. Note that the outer parentheses are also needed for the code to compile.

Let’s rewrite the previous AnimalFeeder example with meow() method:

public class AnimalFeeder {

    public void feed(List<Animal> animals) {
        animals.forEach(animal -> {
            animal.eat();
            if (animal instanceof Cat) {
                ((Cat) animal).meow();
            }
        });
    }
}

Now we gain access to all methods available to Cat class. Look at the log to make sure that meow() is actually called:

web - 2018-02-16 18:13:45,445 [main] INFO com.baeldung.casting.Cat - cat is eating
web - 2018-02-16 18:13:45,454 [main] INFO com.baeldung.casting.Cat - meow
web - 2018-02-16 18:13:45,455 [main] INFO com.baeldung.casting.Dog - dog is eating

Note that in the above example we’re trying to downcast only those objects which are really instances of Cat. To do this, we use the operator instanceof.

4.1. instanceof Operator

We often use instanceof operator before downcasting to check if the object belongs to the specific type:

if (animal instanceof Cat) {
    ((Cat) animal).meow();
}

4.2. ClassCastException

If we hadn’t checked the type with the instanceof operator, the compiler wouldn’t have complained. But at runtime, there would be an exception.

To demonstrate this let’s remove the instanceof operator from the above code:

public void uncheckedFeed(List<Animal> animals) {
    animals.forEach(animal -> {
        animal.eat();
        ((Cat) animal).meow();
    });
}

This code compiles without issues. But if we try to run it we’ll see an exception:

java.lang.ClassCastException: com.baeldung.casting.Dog cannot be cast to com.baeldung.casting.Cat

This means that we are trying to convert an object which is an instance of Dog into a Cat instance.

A ClassCastException is always thrown at runtime if the type we downcast to doesn’t match the type of the real object.

Note that if we try to downcast to an unrelated type, the compiler won’t allow it:

Animal animal;
String s = (String) animal;

The compiler says “Cannot cast from Animal to String”.

For the code to compile, both types should be in the same inheritance tree.

Let’s sum up:

  • Downcasting is necessary to gain access to members specific to subclass
  • Downcasting is done using cast operator
  • To downcast an object safely, we need instanceof operator
  • If the real object doesn’t match the type we downcast to, then ClassCastException will be thrown at runtime

5. Conclusion

In this foundational tutorial, we’ve explored what upcasting and downcasting are, how to use them, and how these concepts can help us take advantage of polymorphism.

As always, the code for this article is available over on GitHub.

Code Analysis with SonarQube


1. Overview

In this article, we’re going to be looking at static source code analysis with SonarQube – which is an open-source platform for ensuring code quality.

Let’s start with a core question – why analyze source code in the first place? Very simply put, to ensure quality, reliability, and maintainability over the life-span of the project; a poorly written codebase is always more expensive to maintain.

Alright, now let’s get started by downloading the latest LTS version of SonarQube from the download page and setting up our local server as outlined in this quick start guide.

2. Analyzing Source Code

Now that we’re logged in, we’re required to create a token by specifying a name – which can be our username or any other name of choice – and click on the generate button.

We’ll use the token later at the point of analyzing our project(s). We also need to select the primary language (Java) and the build technology of the project (Maven).

Let’s define the plugin in the pom.xml:

<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.sonarsource.scanner.maven</groupId>
                <artifactId>sonar-maven-plugin</artifactId>
                <version>3.4.0.905</version>
            </plugin>
        </plugins>
    </pluginManagement>
</build>

The latest version of the plugin is available here. Now, we need to execute this command from the root of our project directory to scan it:

mvn sonar:sonar -Dsonar.host.url=http://localhost:9000 
  -Dsonar.login=the-generated-token

We need to replace the-generated-token with the token from above.

The project that we used in this article is available here.

We specified the host URL of the SonarQube server and the login (generated token) as parameters for the Maven plugin.

After executing the command, the results will be available on the Projects dashboard – at http://localhost:9000.

There are other parameters that we can pass to the Maven plugin or even set from the web interface; sonar.host.url, sonar.projectKey, and sonar.sources are mandatory while others are optional.

Other analysis-parameters and their default values are here. Also, note that each language-plugin has rules for analyzing compatible source code.
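
For instance, a single invocation that sets the mandatory parameters explicitly might look like the sketch below; the project key and source directory here are illustrative values, not ones required by this article’s project:

mvn sonar:sonar -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.login=the-generated-token \
  -Dsonar.projectKey=com.baeldung:sonar-demo \
  -Dsonar.sources=src/main/java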

3. Analysis Result

Now that we’ve analyzed our first project, we can go to the web interface at http://localhost:9000 and refresh the page.

There we’ll see the report summary:

Discovered issues can either be a Bug, Vulnerability, Code Smell, Coverage or Duplication. Each category has a corresponding number of issues or a percentage value.

Moreover, issues can have one of five different severity levels: blocker, critical, major, minor and info. Just in front of the project name is an icon that displays the Quality Gate status – passed (green) or failed (red).

Clicking on the project name will take us to a dedicated dashboard where we can explore issues particular to the project in greater detail.

We can see the project code, activity and perform administration tasks from the project dashboard – each available on a separate tab.

Though there is a global Issues tab, the Issues tab on the project dashboard displays only issues specific to the project concerned:

The Issues tab always displays the category, severity level, tag(s), and the calculated effort (in terms of time) it will take to rectify an issue.

From the issues tab, it’s possible to assign an issue to another user, comment on it, and change its severity level. Clicking on the issue itself will show more detail about the issue.

The Issues tab comes with sophisticated filters on the left. These are good for pinpointing issues. So how can one know whether the codebase is healthy enough for deployment into production? That’s what a Quality Gate is for.

4. SonarQube Quality Gate

In this section, we’re going to look at a key feature of SonarQube – Quality Gate. Then we’ll see an example of how to set up a custom one.

4.1. What is a Quality Gate?

A Quality Gate is a set of conditions the project must meet before it can qualify for production release. It answers one question: can I push my code to production in its current state or not?

Ensuring the quality of “new” code while fixing existing issues is one good way to maintain a good codebase over time. The Quality Gate facilitates setting up rules for validating every new piece of code added to the codebase in subsequent analyses.

The conditions set in the Quality Gate don’t affect unmodified code segments. If we can prevent new issues from arising, over time, we’ll eliminate all issues.

This approach is comparable to fixing the water leakage from the source. This brings us to a particular term – Leakage Period. This is the period between two analyses/versions of the project.

If we rerun the analysis, on the same project, the overview tab of the project dashboard will show results for the leak period:

From the web interface, the Quality Gates tab is where we can access all the defined quality gates. By default, the SonarQube way gate comes preinstalled with the server.

The default configuration for SonarQube way flags the code as failed if:

  • the coverage on new code is less than 80%
  • percentage of duplicated lines on new code is greater than 3%
  • maintainability, reliability or security rating is worse than A

With this understanding, we can create a custom Quality Gate.

4.2. Adding Custom Quality Gate

First, we need to click on the Quality Gates tab and then click on the Create button which is on the left of the page. We’ll need to give it a name – baeldung.

Now we can set the conditions we want:

From the Add Condition drop-down, let’s choose Blocker Issues – it’ll immediately show up on the list of conditions.

We’ll specify is greater than as the Operator, set zero (0) for the Error column, and check the Over Leak Period column:

Then we’ll click on the Add button to effect the changes. Let’s add another condition following the same procedure as above.

We’ll select issues from the Add Condition drop-down and check the Over Leak Period column.

The value of the Operator column will be set to “is less than”, and we’ll add one (1) as the value for the Error column. This means that if the number of issues in the new code is less than 1, the Quality Gate is marked as failed.

This doesn’t make technical sense, but let’s use it for learning’s sake. Don’t forget to click the Add button to save the rule.

As one final step, we need to attach a project to our custom Quality Gate. We can do so by scrolling down the page to the Projects section.

There we need to click on All and then mark our project of choice. We can also set it as the default Quality Gate from the top-right corner of the page.

We’ll scan the project source code, again, as we did before with Maven command. When that’s done, we’ll go to the projects tab and refresh.

This time, the project will not meet the Quality Gate criteria and will fail. Why? Because in one of our rules we specified that it should fail if there are no new issues.

Let’s go back to the Quality Gates tab and change the condition for issues to is greater than. We need to click the update button to effect this change.

A new scan of the source code will pass this time around.

5. Integrating SonarQube into a CI

Making SonarQube part of a Continuous Integration process is possible. This will automatically fail the build if the code analysis did not satisfy the Quality Gate condition.

For us to achieve this, we’re going to be using SonarCloud, which is the cloud-hosted version of the SonarQube server. We can create an account here.

From My Account > Organizations, we can see the organization key, and it will usually be in the form xxxx-github or xxxx-bitbucket.

Also from My Account > Security, we can generate a token as we did in the local instance of the server. Take note of both the token and the organization key for later use.

In this article, we’ll be using Travis CI, and we’ll create an account here with an existing GitHub profile. It will load all our projects, and we can flip the switch on any of them to activate Travis CI.

We need to add the token we generated on SonarCloud to Travis environment variables. We can do this by clicking on the project we’ve activated for CI.

Then, we’ll click “More Options” > “Settings” and then scroll down to “Environment Variables”:

We’ll add a new entry with the name SONAR_TOKEN and use the token generated, on SonarCloud, as the value. Travis CI will encrypt and hide it from public view:

Finally, we need to add a .travis.yml file to the root of our project with the following content:

language: java
sudo: false
install: true
addons:
  sonarcloud:
    organization: "your_organization_key"
    token:
      secure: "$SONAR_TOKEN"
jdk:
  - oraclejdk8
script:
  - mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent package sonar:sonar
cache:
  directories:
    - '$HOME/.m2/repository'
    - '$HOME/.sonar/cache'

Remember to substitute your_organization_key with the organization key described above. Committing the new code and pushing it to the GitHub repo will trigger the Travis CI build and, in turn, the Sonar scanning as well.

6. Conclusion

In this tutorial, we’ve looked at how to set up a SonarQube server locally and how to use Quality Gate to define the criteria for the fitness of a project for production release.

The SonarQube documentation has more information about other aspects of the platform.

A Practical Guide to DecimalFormat


1. Overview

In this article, we’re going to explore the DecimalFormat class along with its practical usages.

This is a subclass of NumberFormat, which allows formatting decimal numbers’ String representation using predefined patterns.

It can also be used inversely, to parse Strings into numbers.

2. How Does it Work?

In order to format a number, we have to define a pattern, which is a sequence of special characters potentially mixed with text.

There are 11 Special Pattern Characters, but the most important are:

  • 0 – prints a digit if provided, 0 otherwise
  • # – prints a digit if provided, nothing otherwise
  • . – indicates where to put the decimal separator
  • , – indicates where to put the grouping separator

When the pattern gets applied to a number, its formatting rules are executed, and the result is printed according to the DecimalFormatSymbols of our JVM’s Locale unless a specific Locale is specified.

The following examples’ outputs are from a JVM running on an English Locale.

3. Basic Formatting

Let’s now see which outputs are produced when formatting the same number with the following patterns.

3.1. Simple Decimals

double d = 1234567.89;    
assertThat(
  new DecimalFormat("#.##").format(d)).isEqualTo("1234567.89");
assertThat(
  new DecimalFormat("0.00").format(d)).isEqualTo("1234567.89");

As we can see, the integer part is never discarded, even if the pattern is smaller than the number.

assertThat(new DecimalFormat("#########.###").format(d))
  .isEqualTo("1234567.89");
assertThat(new DecimalFormat("000000000.000").format(d))
  .isEqualTo("001234567.890");

If the pattern instead is bigger than the number, zeros get added, while hashes get dropped, both in the integer and in the decimal parts.

3.2. Rounding

If the decimal part of the pattern can’t contain the whole precision of the input number, it gets rounded.

Here, the .89 part has been rounded to .90, then the 0 has been dropped:

assertThat(new DecimalFormat("#.#").format(d))
  .isEqualTo("1234567.9");

Here, the .89 part has been rounded to 1.00, then the .00 has been dropped and the 1 has been added to the 7:

assertThat(new DecimalFormat("#").format(d))
  .isEqualTo("1234568");

The default rounding mode is HALF_EVEN, but it can be customized through the setRoundingMode method.
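
For instance, here’s a quick sketch that switches to RoundingMode.FLOOR, so the extra precision is truncated instead of rounded half-even:

DecimalFormat decimalFormat = new DecimalFormat("#.#");
decimalFormat.setRoundingMode(RoundingMode.FLOOR); // java.math.RoundingMode
assertThat(decimalFormat.format(d)).isEqualTo("1234567.8");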

3.3. Grouping

The grouping separator is used to specify a sub-pattern which gets repeated automatically:

assertThat(new DecimalFormat("#,###.#").format(d))
  .isEqualTo("1,234,567.9");
assertThat(new DecimalFormat("#,###").format(d))
  .isEqualTo("1,234,568");

3.4. Multiple Grouping Patterns

Some countries have a variable number of grouping patterns in their numbering systems.

The Indian Numbering System uses the format #,##,###.##, in which only the first grouping separator holds three numbers, while all the others hold two numbers.

This isn’t possible to achieve using the DecimalFormat class, which keeps only the latest pattern encountered from left to right, and applies it to the whole number, ignoring previous grouping patterns.

An attempt to use the pattern #,##,##,##,### would result in a regroup to #######,### and end in a redistribution to #,###,###,###.

To achieve multiple grouping pattern matching, it’s necessary to write our own String manipulation code, or alternatively to try Icu4J’s DecimalFormat, which allows that.
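
As a quick sketch, assuming the com.ibm.icu:icu4j artifact is on the classpath, ICU’s own DecimalFormat honors the variable grouping sizes:

// ICU's DecimalFormat supports a secondary grouping size
com.ibm.icu.text.DecimalFormat icuFormat =
  new com.ibm.icu.text.DecimalFormat("#,##,###.##");
assertThat(icuFormat.format(d)).isEqualTo("12,34,567.89");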

3.5. Mixing String Literals

It’s possible to mix String literals within the pattern:

assertThat(new DecimalFormat("The # number")
  .format(d))
  .isEqualTo("The 1234568 number");

It’s also possible to use special characters as String literals, through escaping:

assertThat(new DecimalFormat("The '#' # number")
  .format(d))
  .isEqualTo("The # 1234568 number");

4. Localized Formatting

Many countries don’t use English symbols and use the comma as decimal separator and the dot as grouping separator.

Running the #,###.## pattern on a JVM with an Italian Locale, for example, would output 1.234.567,89.

While this could be a useful i18n feature in some cases, in others we might want to enforce a specific, JVM-independent format.

Here’s how we can do that:

assertThat(new DecimalFormat("#,###.##", 
  new DecimalFormatSymbols(Locale.ENGLISH)).format(d))
  .isEqualTo("1,234,567.89");
assertThat(new DecimalFormat("#,###.##", 
  new DecimalFormatSymbols(Locale.ITALIAN)).format(d))
  .isEqualTo("1.234.567,89");

If the Locale we’re interested in is not among the ones covered by the DecimalFormatSymbols constructor, we can specify it with the getInstance method:

Locale customLocale = new Locale("it", "IT");
assertThat(new DecimalFormat(
  "#,###.##", 
   DecimalFormatSymbols.getInstance(customLocale)).format(d))
  .isEqualTo("1.234.567,89");

5. Scientific Notations

Scientific Notation represents a number as the product of a mantissa and an exponent of ten. The number 1234567.89 can also be represented as 12.3456789 * 10^5 (the dot is shifted by 5 positions).

5.1. E-Notation

It’s possible to express a number in Scientific Notation using the E pattern character representing the exponent of ten:

assertThat(new DecimalFormat("00.#######E0").format(d))
  .isEqualTo("12.3456789E5");
assertThat(new DecimalFormat("000.000000E0").format(d))
  .isEqualTo("123.456789E4");

We should keep in mind that the number of characters after the exponent is relevant, so if we need to express 10^12, we need E00 and not E0.

5.2. Engineering Notation

It’s common to use a particular form of Scientific Notation called Engineering Notation, which adjusts results in order to be expressed as multiple of three, for example when using measuring units like Kilo (10^3), Mega (10^6), Giga (10^9), and so on.

We can enforce this kind of notation by adjusting the maximum number of integer digits (the characters expressed with the # and on the left of the decimal separator) so that it’s higher than the minimum number (the one expressed with the 0) and higher than 1.

This forces the exponent to be a multiple of the maximum number, so for this use-case we want the maximum number to be three:

assertThat(new DecimalFormat("##0.######E0")
  .format(d)).isEqualTo("1.23456789E6");		
assertThat(new DecimalFormat("###.000000E0")
  .format(d)).isEqualTo("1.23456789E6");

6. Parsing

Let’s see how it’s possible to parse a String into a Number with the parse method:

assertThat(new DecimalFormat("", new DecimalFormatSymbols(Locale.ENGLISH))
  .parse("1234567.89"))
  .isEqualTo(1234567.89);
assertThat(new DecimalFormat("", new DecimalFormatSymbols(Locale.ITALIAN))
  .parse("1.234.567,89"))
  .isEqualTo(1234567.89);

Since the type of the returned value isn’t inferred from the presence of a decimal separator, we can use methods like .doubleValue() or .longValue() on the returned Number object to enforce a specific primitive as output.

We can also obtain a BigDecimal as follows:

NumberFormat nf = new DecimalFormat(
  "", 
  new DecimalFormatSymbols(Locale.ENGLISH));
((DecimalFormat) nf).setParseBigDecimal(true);
 
assertThat(nf.parse("1234567.89"))
  .isEqualTo(BigDecimal.valueOf(1234567.89));

7. Thread-Safety

DecimalFormat isn’t thread-safe, thus we should pay special attention when sharing the same instance between threads.
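
If we do need to format numbers in concurrent code, a common workaround – sketched below – is to give each thread its own instance through a ThreadLocal:

private static final ThreadLocal<DecimalFormat> formatter =
  ThreadLocal.withInitial(() -> new DecimalFormat("#,###.##"));

// each thread gets its own DecimalFormat instance
String formatted = formatter.get().format(1234567.89);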

8. Conclusion

We’ve seen the major usages of the DecimalFormat class, along with its strengths and weaknesses.

As always, the full source code is available over on GitHub.

Intro to Google Cloud Storage with Java


1. Overview

Google Cloud Storage offers online storage tailored to an individual application’s needs based on location, the frequency of access, and cost. Unlike Amazon Web Services, Google Cloud Storage uses a single API for high, medium, and low-frequency access.

Like most cloud platforms, Google offers a free tier of access; the pricing details are here.

In this tutorial, we’ll connect to storage, create a bucket, write, read, and update data. While using the API to read and write data, we’ll also use the gsutil cloud storage utility.

2. Google Cloud Storage Setup

2.1. Maven Dependency

We need to add a single dependency to our pom.xml:

<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage</artifactId>
    <version>1.17.0</version>
</dependency>

Maven Central has the latest version of the library.

2.2. Create Authentication Key

Before we can connect to Google Cloud, we need to configure authentication. Google Cloud Platform (GCP) applications load a private key and configuration information from a JSON configuration file. We generate this file via the GCP console. Access to the console requires a valid Google Cloud Platform Account.

We create our configuration by:

  1. Going to the Google Cloud Platform Console
  2. If we haven’t yet defined a GCP project, we click the create button and enter a project name, such as “baeldung-cloud-tutorial”
  3. Select “new service account” from the drop-down list
  4. Add a name such as “baeldung-cloud-storage” into the account name field.
  5. Under “role” select Project, and then Owner in the submenu.
  6. Select create, and the console downloads a private key file.

The role in step #5 authorizes the account to access project resources. For the sake of simplicity, we gave this account complete access to all project resources.

For a production environment, we would define a role that corresponds to the access the application needs.

2.3. Install the Authentication Key

Next, we copy the file downloaded from GCP console to a convenient location and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at it. This is the easiest way to load the credentials, although we’ll look at another possibility below.

For Linux or Mac:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/file"

For Windows:

set GOOGLE_APPLICATION_CREDENTIALS="C:\path\to\file"

2.4. Install Cloud Tools

Google provides several tools for managing their cloud platform. We’re going to use gsutil during this tutorial to read and write data alongside the API.

We can do this in two easy steps:

  1. Install the Cloud SDK from the instructions here for our platform.
  2. Follow the Quickstart for our platform here. In step 4 of Initialize the SDK, we select the project name created in step 2 of section 2.2 above (“baeldung-cloud-tutorial” or whichever name you used).

gsutil is now installed and configured to read data from our cloud project.

3. Connecting to Storage and Creating a Bucket

3.1. Connect to Storage

Before we can use Google Cloud storage, we have to create a service object. If we’ve already set up the GOOGLE_APPLICATION_CREDENTIALS environment variable, we can use the default instance:

Storage storage = StorageOptions.getDefaultInstance().getService();

If we don’t want to use the environment variable, we have to create a Credentials instance and pass it to Storage with the project name:

Credentials credentials = GoogleCredentials
  .fromStream(new FileInputStream("path/to/file"));
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
  .setProjectId("baeldung-cloud-tutorial").build().getService();

3.2. Creating a Bucket

Now that we’re connected and authenticated, we can create a bucket. Buckets are containers that hold objects. They can be used to organize and control data access.

There is no limit to the number of objects in a bucket. GCP limits the number of operations on buckets and encourages application designers to emphasize operations on objects rather than on buckets.

Creating a bucket requires a BucketInfo:

Bucket bucket = storage.create(BucketInfo.of("baeldung-bucket"));

For this simple example, we supply a bucket name and accept the default properties. Bucket names must be globally unique. If we choose a name that is already used, create() will fail – which is why the gsutil listings below use baeldung-1-bucket instead of baeldung-bucket.

3.3. Examining a Bucket with gsutil

Since we have a bucket now, we can examine it with gsutil.

Let’s open a command prompt and take a look:

$ gsutil ls -L -b gs://baeldung-1-bucket/
gs://baeldung-1-bucket/ :
	Storage class:			STANDARD
	Location constraint:		US
	Versioning enabled:		None
	Logging configuration:		None
	Website configuration:		None
	CORS configuration: 		None
	Lifecycle configuration:	None
	Requester Pays enabled:		None
	Labels:				None
	Time created:			Sun, 11 Feb 2018 21:09:15 GMT
	Time updated:			Sun, 11 Feb 2018 21:09:15 GMT
	Metageneration:			1
	ACL:
	  [
	    {
	      "entity": "project-owners-385323156907",
	      "projectTeam": {
	        "projectNumber": "385323156907",
	        "team": "owners"
	      },
	      "role": "OWNER"
	    },
	    ...
	  ]
	Default ACL:
	  [
	    {
	      "entity": "project-owners-385323156907",
	      "projectTeam": {
	        "projectNumber": "385323156907",
	        "team": "owners"
	      },
	      "role": "OWNER"
	    },
            ...
	  ]

gsutil looks a lot like shell commands, and anyone familiar with the Unix command line should feel very comfortable here. Notice we passed in the path to our bucket as a URL: gs://baeldung-1-bucket/, along with a few other options.

The ls option produces a listing of objects or buckets, and the -L option indicates that we want a detailed listing – so we received details about the bucket including the creation times and access controls.

Let’s add some data to our bucket!

4. Reading, Writing and Updating Data

In Google Cloud Storage, objects are stored in Blobs. Blob names can contain any Unicode character, limited to 1024 characters.

4.1. Writing Data

Let’s save a String to our bucket:

String value = "Hello, World!";
byte[] bytes = value.getBytes(UTF_8);
Blob blob = bucket.create("my-first-blob", bytes);

As we can see, objects are simply arrays of bytes in the bucket, so we store a String by operating on its raw bytes.

4.2. Reading Data with gsutil

Now that we have a bucket with an object in it, let’s take a look at gsutil.

Let’s start by listing the contents of our bucket:

$ gsutil ls gs://baeldung-1-bucket/
gs://baeldung-1-bucket/my-first-blob

We passed gsutil the ls option again but omitted -b and -L, so we asked for a brief listing of objects. We receive a list of URIs, one for each object – just one in our case.

Let’s examine the object:

$ gsutil cat gs://baeldung-1-bucket/my-first-blob
Hello, World!

Cat concatenates the contents of the object to standard output. We see the String we wrote to the Blob.

4.3. Reading Data

Blobs are assigned a BlobId upon creation.

The easiest way to retrieve a Blob is with the BlobId:

Blob blob = storage.get(blobId);
String value = new String(blob.getContent());

We pass the id to Storage and get the Blob in return, and getContent() returns the bytes.

If we don’t have the BlobId, we can search the Bucket by name:

Page<Blob> blobs = bucket.list();
for (Blob blob: blobs.getValues()) {
    if (name.equals(blob.getName())) {
        return new String(blob.getContent());
    }
}
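
Alternatively, the Bucket class offers a direct lookup by name – a minimal sketch:

// fetches the Blob directly from this bucket by its name
Blob blob = bucket.get("my-first-blob");
String value = new String(blob.getContent());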

4.4. Updating Data

We can update a Blob by retrieving it and then accessing its WriteableByteChannel:

String newString = "Bye now!";
Blob blob = storage.get(blobId);
WritableByteChannel channel = blob.writer();
channel.write(ByteBuffer.wrap(newString.getBytes(UTF_8)));
channel.close();

Let’s examine the updated object:

$ gsutil cat gs://baeldung-1-bucket/my-first-blob
Bye now!

4.5. Save an Object to File, Then Delete

Let’s save the updated object to a file:

$ gsutil copy gs://baeldung-1-bucket/my-first-blob my-first-blob
Copying gs://baeldung-1-bucket/my-first-blob...
/ [1 files][    9.0 B/    9.0 B]
Operation completed over 1 objects/9.0 B.
Grovers-Mill:~ egoebelbecker$ cat my-first-blob
Bye now!

As expected, the copy option copies the object to the filename specified on the command line.

gsutil can copy any object from Google Cloud Storage to the local file system, assuming there is enough space to store it. 

We’ll finish by cleaning up:

$ gsutil rm gs://baeldung-1-bucket/my-first-blob
Removing gs://baeldung-1-bucket/my-first-blob...
/ [1 objects]
Operation completed over 1 objects.
$ gsutil ls gs://baeldung-1-bucket/
$

rm (del works too) deletes the specified object.
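
We could perform the same cleanup from Java as well; as a minimal sketch, the Storage service accepts a BlobId for deletion and returns whether the object was removed:

// deletes the object and reports success
boolean deleted = storage.delete(blob.getBlobId());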

5. Conclusion

In this brief tutorial, we created credentials for Google Cloud Storage and connected to the infrastructure. We created a bucket, wrote data, and then read and modified it. While working with the API, we also used gsutil to examine cloud storage as we created and read data.

We also discussed how to use buckets and write and modify data efficiently.

Code samples, as always, can be found over on GitHub.


An Intro to Spring Cloud Task


1. Overview

The goal of Spring Cloud Task is to provide the functionality of creating short-lived microservices for Spring Boot applications.

In Spring Cloud Task, we’ve got the flexibility of running any task dynamically, allocating resources on demand and retrieving the results after the Task completion.

Tasks are a new primitive within Spring Cloud Data Flow, allowing users to execute virtually any Spring Boot application as a short-lived task.

2. Developing a Simple Task Application

2.1. Adding Relevant Dependencies

To start, we can add the dependency management section with spring-cloud-task-dependencies:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-task-dependencies</artifactId>
            <version>1.2.2.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

This dependency management manages versions of dependencies through the import scope.

We need to add the following dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-task</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-task-core</artifactId>
</dependency>

This is the link to the Maven Central of spring-cloud-task-core.

Now, to start our Spring Boot application, we need spring-boot-starter with the relevant parent.

We’re going to use Spring Data JPA as an ORM tool, so we need to add the dependency for that as well:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>1.5.10</version>
</dependency>

The details of bootstrapping a simple Spring Boot application with Spring Data JPA are available here.

We can check the newest version of the spring-boot-starter-parent on Maven Central.

2.2. The @EnableTask Annotation

To bootstrap the functionality of Spring Cloud Task, we need to add @EnableTask annotation:

@SpringBootApplication
@EnableTask
public class TaskDemo {
    // ...
}

The annotation brings the SimpleTaskConfiguration class into the picture, which in turn registers the TaskRepository and its infrastructure. By default, an in-memory map is used to store the status of the TaskRepository.

The primary information of the TaskRepository is modeled in the TaskExecution class. Its notable fields are taskName, startTime, endTime, and exitMessage. The exitMessage stores the available information at exit time.

If an exit is caused by a failure in any event of the application, the complete exception stack trace will be stored here.

Spring Boot provides an interface ExitCodeExceptionMapper which maps uncaught exceptions to exit codes, allowing for fine-grained debugging. Spring Cloud Task stores the information in the data source for future analysis.

2.3. Configuring a DataSource For TaskRepository

The in-memory map storing the TaskRepository will vanish once the task ends, and we’ll lose the data related to Task events. To use permanent storage, we’re going to use MySQL as a data source with Spring Data JPA.

The data source is configured in the application.yml file. To configure Spring Cloud Task to use the provided data source as the storage of the TaskRepository, we need to create a class that extends DefaultTaskConfigurer.

Now, we can pass the configured DataSource as a constructor argument to the superclass’ constructor:

public class HelloWorldTaskConfigurer extends DefaultTaskConfigurer {
    public HelloWorldTaskConfigurer(DataSource dataSource) {
        super(dataSource);
    }
}

To put the above configuration into action, we annotate a DataSource field with the @Autowired annotation and inject it as the constructor argument of the HelloWorldTaskConfigurer bean defined above:

@Autowired
private DataSource dataSource;

@Bean
public HelloWorldTaskConfigurer getTaskConfigurer() {
    return new HelloWorldTaskConfigurer(dataSource);
}

This completes the configuration to store TaskRepository to MySQL database.

2.4. Implementation

In Spring Boot, we can execute any Task just before the application finishes its startup. We can use the ApplicationRunner or CommandLineRunner interfaces to create a simple Task.

We need to implement the run method of these interfaces and declare the implementing class as a bean:

@Component
public static class HelloWorldApplicationRunner 
  implements ApplicationRunner {
 
    @Override
    public void run(ApplicationArguments arg0) throws Exception {
        System.out.println("Hello World from Spring Cloud Task!");
    }
}

Now, if we run our application, we should see our task produce the necessary output, and the required tables will be created in our MySQL database to log the event data of the Task.

3. Life-cycle of a Spring Cloud Task

In the beginning, we create an entry in the TaskRepository. This is the indication that all beans are ready to be used in the application and that the run method of the Runner interface is ready to be executed.

Upon completion of the execution of the run method or in any failure of ApplicationContext event, TaskRepository will be updated with another entry.

During the task life-cycle, we can register listeners available from the TaskExecutionListener interface. We need a class implementing the interface with three methods – onTaskStartup, onTaskEnd, and onTaskFailed – triggered by the respective events of the Task.
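
A minimal sketch of such a listener might look like this (the log messages are illustrative):

public class TaskListener implements TaskExecutionListener {

    @Override
    public void onTaskStartup(TaskExecution taskExecution) {
        System.out.println("Task started: " + taskExecution.getTaskName());
    }

    @Override
    public void onTaskEnd(TaskExecution taskExecution) {
        System.out.println("Task ended: " + taskExecution.getTaskName());
    }

    @Override
    public void onTaskFailed(TaskExecution taskExecution, Throwable throwable) {
        System.out.println("Task failed: " + throwable.getMessage());
    }
}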

We need to declare the bean of the implementing class in our TaskDemo class:

@Bean
public TaskListener taskListener() {
    return new TaskListener();
}

4. Integration with Spring Batch

We can execute a Spring Batch Job as a Task and log the events of the Job execution using Spring Cloud Task. To enable this feature, we need to add the Batch dependencies pertaining to Boot and Cloud:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-task-batch</artifactId>
</dependency>

Here is the link to the Maven Central of spring-cloud-task-batch.

To configure a job as a Task we need to have the Job bean registered in the JobConfiguration class:

@Bean
public Job job2() {
    return jobBuilderFactory.get("job2")
      .start(stepBuilderFactory.get("job2step1")
      .tasklet(new Tasklet(){
          @Override
          public RepeatStatus execute(
            StepContribution contribution,
            ChunkContext chunkContext) throws Exception {
            System.out.println("This job is from Baeldung");
                return RepeatStatus.FINISHED;
          }
    }).build()).build();
}

We need to decorate the TaskDemo class with @EnableBatchProcessing annotation:

//..Other Annotation..
@EnableBatchProcessing
public class TaskDemo {
    // ...
}

The @EnableBatchProcessing annotation enables Spring Batch features with a base configuration required to set up batch jobs.

Now, if we run the application, the @EnableBatchProcessing annotation will trigger the Spring Batch Job execution, and Spring Cloud Task will log the events of all batch job executions, along with the other Tasks executed, in the springcloud database.

5. Launching a Task from Stream

We can trigger Tasks from Spring Cloud Stream. To serve this purpose, we have the @EnableTaskLauncher annotation. Once we add the annotation to a Spring Boot app, a TaskSink will be available:

@SpringBootApplication
@EnableTaskLauncher
public class StreamTaskSinkApplication {
    public static void main(String[] args) {
        SpringApplication.run(StreamTaskSinkApplication.class, args);
    }
}

The TaskSink receives a message from a stream – a GenericMessage containing a TaskLaunchRequest as its payload. It then triggers a Task based on the coordinates provided in the Task launch request.

For the TaskSink to be functional, we require a configured bean that implements the TaskLauncher interface. For testing purposes, we’re mocking the implementation here:

@Bean
public TaskLauncher taskLauncher() {
    return mock(TaskLauncher.class);
}

We need to note here that the TaskLauncher interface is only available after adding the spring-cloud-deployer-local dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-deployer-local</artifactId>
    <version>1.3.1.RELEASE</version>
</dependency>

We can test whether the Task is launched by invoking the input channel of the Sink interface:

public class StreamTaskSinkApplicationTests {
   
    @Autowired
    private Sink sink; 
    
    //
}

Now, we create an instance of TaskLaunchRequest and send it as the payload of a GenericMessage<TaskLaunchRequest> object. Then we can invoke the input channel of the Sink, putting the GenericMessage object on the channel.
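
As a sketch – assuming the TaskLaunchRequest constructor of the 1.2.x line, which takes the artifact URI, command-line arguments, environment properties, deployment properties, and an application name – the test body might look like this (the Maven coordinate is a hypothetical example):

TaskLaunchRequest request = new TaskLaunchRequest(
  "maven://org.baeldung:hello-world-task:jar:1.0.0", // hypothetical artifact
  null, null, null, null);
GenericMessage<TaskLaunchRequest> message = new GenericMessage<>(request);
this.sink.input().send(message);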

6. Conclusion

In this tutorial, we’ve explored how Spring Cloud Task performs and how to configure it to log its events in a database. We also observed how a Spring Batch job is defined and executed as a Task. Lastly, we explained how we can trigger a Task from within Spring Cloud Stream.

As always, the code is available over on GitHub.

How to Detect the OS Using Java


1. Introduction

There are a couple of ways to figure out the OS our code is running on.

In this brief article, we’re going to focus on how to do OS detection in Java.

2. Implementation

One way is to make use of System.getProperty("os.name") to obtain the name of the operating system.

The second way is to make use of SystemUtils from the Apache Commons Lang API.

Let’s see both of them in action.

2.1. Using System Properties

We can make use of the System class to detect the OS.

Let’s check it out:

public String getOperatingSystem() {
    String os = System.getProperty("os.name");
    System.out.println("Using System Property: " + os);
    return os;
}

2.2. SystemUtils – Apache Commons Lang

SystemUtils from Apache Commons Lang is another popular option to try. It’s a nice API that gracefully takes care of such details.

Let’s find out the OS using SystemUtils:

public String getOperatingSystemSystemUtils() {
    String os = SystemUtils.OS_NAME;
    System.out.println("Using SystemUtils: " + os);
    return os;
}
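
SystemUtils also exposes boolean flags for common operating systems, which saves us from parsing the name ourselves – a minimal sketch:

public String getOperatingSystemType() {
    // SystemUtils evaluates these flags once, based on os.name
    if (SystemUtils.IS_OS_WINDOWS) {
        return "Windows";
    } else if (SystemUtils.IS_OS_MAC) {
        return "Mac";
    } else if (SystemUtils.IS_OS_LINUX) {
        return "Linux";
    }
    return "Other";
}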

3. Result

Executing the code in our environment gives us the same result:

Using SystemUtils: Windows 10
Using System Property: Windows 10

4. Conclusion

In this quick article, we saw how to detect the OS programmatically from Java.

As always, the code examples for this article are available over on GitHub.

Combining Publishers in Project Reactor


1. Overview

In this article, we’ll take a look at various ways of combining Publishers in Project Reactor.

2. Maven Dependencies

Let’s set up our example with the Project Reactor dependencies:

<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <version>3.1.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-test</artifactId>
    <version>3.1.4.RELEASE</version>
    <scope>test</scope>
</dependency>

3. Combining Publishers

Given a scenario where one has to work with Flux<T> or Mono<T>, there are different ways to combine streams.

Let’s create a few examples to illustrate the usage of methods in the Flux<T> class such as concat, concatWith, merge, zip and combineLatest.

Our examples will make use of two publishers of type Flux<Integer>, namely evenNumbers, which is a Flux of Integer and holds a sequence of even numbers starting with 1 (min variable) and limited by 5 (max variable).

We’ll create oddNumbers, also a Flux of type Integer of odd numbers:

Flux<Integer> evenNumbers = Flux
  .range(min, max)
  .filter(x -> x % 2 == 0); // i.e. 2, 4

Flux<Integer> oddNumbers = Flux
  .range(min, max)
  .filter(x -> x % 2 > 0);  // i.e. 1, 3, 5

3.1. concat()

The concat method executes a concatenation of the inputs, forwarding elements emitted by the sources downstream.

The concatenation is achieved by sequentially subscribing to the first source then waiting for it to complete before subscribing to the next, and so on until the last source completes. Any error interrupts the sequence immediately and is forwarded downstream.

Here is a quick example:

@Test
public void givenFluxes_whenConcatIsInvoked_thenConcat() {
    Flux<Integer> fluxOfIntegers = Flux.concat(
      evenNumbers, 
      oddNumbers);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

3.2. concatWith()

Using the concatWith method, which is an instance method of Flux<T>, we’ll produce a concatenation of two sources as a result:

@Test
public void givenFluxes_whenConcatWithIsInvoked_thenConcatWith() {
    Flux<Integer> fluxOfIntegers = evenNumbers.concatWith(oddNumbers);
        
    // same stepVerifier as in the concat example above
}

3.3. combineLatest()

The Flux static method combineLatest will generate data provided by the combination of the most recently published value from each of the Publisher sources.

Here’s an example of the usage of this method with two Publisher sources and a BiFunction as parameters:

@Test
public void givenFluxes_whenCombineLatestIsInvoked_thenCombineLatest() {
    Flux<Integer> fluxOfIntegers = Flux.combineLatest(
      evenNumbers, 
      oddNumbers, 
      (a, b) -> a + b);

    StepVerifier.create(fluxOfIntegers)
      .expectNext(5) // 4 + 1
      .expectNext(7) // 4 + 3
      .expectNext(9) // 4 + 5
      .expectComplete()
      .verify();
}

We can see here that combineLatest applied the function “a + b” using the latest element of evenNumbers (4) and the elements of oddNumbers (1, 3, 5), thus generating the sequence 5, 7, 9.

3.4. merge()

The merge function executes a merging of the data from Publisher sequences contained in an array into an interleaved merged sequence:

@Test
public void givenFluxes_whenMergeIsInvoked_thenMerge() {
    Flux<Integer> fluxOfIntegers = Flux.merge(
      evenNumbers, 
      oddNumbers);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

An interesting thing to note is that, as opposed to concat (lazy subscription), the sources are subscribed to eagerly.

Here, we can see a different outcome of the merge function if we insert a delay between the elements of the Publishers:

@Test
public void givenFluxes_whenMergeWithDelayedElementsIsInvoked_thenMergeWithDelayedElements() {
    Flux<Integer> fluxOfIntegers = Flux.merge(
      evenNumbers.delayElements(Duration.ofMillis(500L)), 
      oddNumbers.delayElements(Duration.ofMillis(300L)));
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(1)
      .expectNext(2)
      .expectNext(3)
      .expectNext(5)
      .expectNext(4)
      .expectComplete()
      .verify();
}

3.5. mergeSequential()

The mergeSequential method merges data from Publisher sequences provided in an array into an ordered merged sequence.

Unlike concat, sources are subscribed to eagerly.

Also, unlike merge, their emitted values are merged into the final sequence in subscription order:

@Test
public void testMergeSequential() {
    Flux<Integer> fluxOfIntegers = Flux.mergeSequential(
      evenNumbers, 
      oddNumbers);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

3.6. mergeDelayError()

The mergeDelayError method merges data from Publisher sequences contained in an array into an interleaved merged sequence.

Unlike concat, sources are subscribed to eagerly.

This variant of the static merge method will delay any error until after the rest of the merge backlog has been processed.

Here is an example of mergeDelayError:

@Test
public void givenFluxes_whenMergeDelayErrorIsInvoked_thenMergeDelayError() {
    Flux<Integer> fluxOfIntegers = Flux.mergeDelayError(1, 
      evenNumbers.delayElements(Duration.ofMillis(500L)), 
      oddNumbers.delayElements(Duration.ofMillis(300L)));
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(1)
      .expectNext(2)
      .expectNext(3)
      .expectNext(5)
      .expectNext(4)
      .expectComplete()
      .verify();
}

3.7. mergeWith()

The instance method mergeWith merges data from this Flux and a Publisher into an interleaved merged sequence.

Again, unlike concat, inner sources are subscribed to eagerly:

@Test
public void givenFluxes_whenMergeWithIsInvoked_thenMergeWith() {
    Flux<Integer> fluxOfIntegers = evenNumbers.mergeWith(oddNumbers);
        
    // same StepVerifier as in "3.4. Merge"
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)
      .expectNext(4)
      .expectNext(1)
      .expectNext(3)
      .expectNext(5)
      .expectComplete()
      .verify();
}

3.8. zip()

The static method zip agglutinates multiple sources together, i.e., waits for all the sources to emit one element and combines these elements into an output value (constructed by the provided combinator function).

The operator will continue doing so until any of the sources completes:

@Test
public void givenFluxes_whenZipIsInvoked_thenZip() {
    Flux<Integer> fluxOfIntegers = Flux.zip(
      evenNumbers, 
      oddNumbers, 
      (a, b) -> a + b);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(3) // 2 + 1
      .expectNext(7) // 4 + 3
      .expectComplete()
      .verify();
}

As there is no element left from evenNumbers to pair up with, the element 5 from the oddNumbers Publisher is ignored.

3.9. zipWith()

The zipWith method does the same thing zip does, but as an instance method combining this Flux with only one other Publisher:

@Test
public void givenFluxes_whenZipWithIsInvoked_thenZipWith() {
    Flux<Integer> fluxOfIntegers = evenNumbers
     .zipWith(oddNumbers, (a, b) -> a * b);
        
    StepVerifier.create(fluxOfIntegers)
      .expectNext(2)  // 2 * 1
      .expectNext(12) // 4 * 3
      .expectComplete()
      .verify();
}

4. Conclusion

In this quick tutorial, we’ve discovered multiple ways of combining Publishers.

As always, the examples are available in over on GitHub.

Java Weekly, Issue 218


Here we go…

1. Spring and Java

>> Monitor and troubleshoot Java applications and services with Datadog 

Optimize performance with end-to-end tracing and out-of-the-box support for popular Java frameworks, application servers, and databases. Try it free.

>> Brian Goetz Speaks to InfoQ on Data Classes for Java [infoq.com]

A super interesting dive into data classes – showing what challenges creators of Java need to face when designing the language.

>> How Java 10 will CHANGE the Way You Code [blog.takipi.com]

Local Variable Type Inference is another exciting upcoming feature of Java – let’s hope it won’t be abused 🙂

>> Putting Bean Validation Constraints to Guava’s Multimap [in.relation.to]

We can now apply constraints to the contents of collections. Nice.

>> How to Order Versioned File Names Semantically in Java [blog.jooq.org]

Finally, a proper Comparator implementation for comparing semantically versioned filenames.

>> How To Use Multi-release JARs To Target Multiple Java Versions [blog.codefx.org]

DevOps life made easier – multi-release JARs can contain bytecode for different Java versions and JVMs.

>> Spring Cloud Stream 2.0 – Polled Consumers [spring.io]

Spring Cloud Stream 2.0 applications can control the rate at which messages are consumed.

>> JDK 10 [openjdk.java.net]

Here’s how you can keep track of the JDKs in Java 10.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Tackling Restbucks with Clean Architecture, episode 1 [blog.sourced-bvba.be]

The beginning of an interesting series showcasing Clean Architecture principles by example.

Also worth reading:

3. Musings

>> Breaking and Mending Compatibility [michaelfeathers.silvrback.com]

Sometimes it makes more sense to mess up observable behaviors of your system so that users don’t make false assumptions about the contract.

>> Tech Stack, Framework, Library or API: How Not to Specialize [daedtech.com]

Business clients often don’t care about the tools you use to solve their problems 🙂

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Ted Dies By Software [dilbert.com]

>> Meeting Moth [dilbert.com]

>> Dogbert Consults [dilbert.com]

5. Pick of the Week

>> Rock stars have a boss? [sivers.org]

The Checker Framework – Pluggable Type Systems for Java


1. Overview

From the Java 8 release onwards, it’s possible to compile programs using the so-called Pluggable Type Systems – which can apply stricter checks than the ones applied by the compiler.

We only need to use the annotations provided by the several Pluggable Type Systems available.

In this quick article, we’ll explore the Checker Framework, courtesy of the University of Washington.

2. Maven

To start working with the Checker Framework, we need to first add it into our pom.xml:

<dependency>
    <groupId>org.checkerframework</groupId>
    <artifactId>checker-qual</artifactId>
    <version>2.3.2</version>
</dependency>
<dependency>
    <groupId>org.checkerframework</groupId>
    <artifactId>checker</artifactId>
    <version>2.3.2</version>
</dependency>
<dependency>
    <groupId>org.checkerframework</groupId>
    <artifactId>jdk8</artifactId>
    <version>2.3.2</version>
</dependency>

The latest version of the libraries can be checked on Maven Central.

The first two dependencies contain the code of The Checker Framework while the latter is a custom version of the Java 8 classes, in which all types have been properly annotated by the developers of The Checker Framework.

We then have to properly tweak the maven-compiler-plugin to use The Checker Framework as a pluggable Type System:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.6.1</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <compilerArguments>
            <Xmaxerrs>10000</Xmaxerrs>
            <Xmaxwarns>10000</Xmaxwarns>
        </compilerArguments>
        <annotationProcessors>
            <annotationProcessor>
                org.checkerframework.checker.nullness.NullnessChecker
            </annotationProcessor>
            <annotationProcessor>
                org.checkerframework.checker.interning.InterningChecker
            </annotationProcessor>
            <annotationProcessor>
                org.checkerframework.checker.fenum.FenumChecker
            </annotationProcessor>
            <annotationProcessor>
                org.checkerframework.checker.formatter.FormatterChecker
            </annotationProcessor>
        </annotationProcessors>
        <compilerArgs>
            <arg>-AprintErrorStack</arg>
            <arg>-Awarns</arg>
        </compilerArgs>
    </configuration>
</plugin>

The main point here is the content of the <annotationProcessors> tag. Here we listed all the checkers that we want to run against our sources.

3. Avoiding NullPointerExceptions

The first scenario in which The Checker Framework can help us is identifying the pieces of code where a NullPointerException could originate:

private static int countArgs(@NonNull String[] args) {
    return args.length;
}

public static void main(@Nullable String[] args) {
    System.out.println(countArgs(args));
}

In the above example, we declared with the @NonNull annotation that the args argument of countArgs() has to be not null.

Regardless of this constraint, in main(), we invoke the method passing an argument that can indeed be null, because it’s been annotated with @Nullable.

When we compile the code, The Checker Framework duly warns us that something in our code could be wrong:

[WARNING] /checker-plugin/.../NonNullExample.java:[12,38] [argument.type.incompatible]
 incompatible types in argument.
  found   : null
  required: @Initialized @NonNull String @Initialized @NonNull []
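
The Nullness Checker performs flow-sensitive type refinement, so one way to satisfy it is a plain null check before the call; inside the check, the @Nullable reference is refined to @NonNull. A minimal sketch:

public static void main(@Nullable String[] args) {
    if (args != null) {
        // within this branch the checker treats args as @NonNull
        System.out.println(countArgs(args));
    }
}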

4. Proper Use of Constants as Enumerations

Sometimes we use a series of constants as if they were items of an enumeration.

Let’s suppose we need a series of countries and planets. We can then annotate these items with the @Fenum annotation to group all the constants that are part of the same “fake” enumeration:

static final @Fenum("country") String ITALY = "IT";
static final @Fenum("country") String US = "US";
static final @Fenum("country") String UNITED_KINGDOM = "UK";

static final @Fenum("planet") String MARS = "Mars";
static final @Fenum("planet") String EARTH = "Earth";
static final @Fenum("planet") String VENUS = "Venus";

After that, when we write a method that should accept a String that is a “planet”, we can properly annotate the argument:

void greetPlanet(@Fenum("planet") String planet){
    System.out.println("Hello " + planet);
}

By mistake, we can invoke greetPlanet() with a string that hasn’t been defined as a possible value for a planet, such as:

public static void main(String[] args) {
    obj.greetPlanet(US);
}

The Checker Framework can spot the error:

[WARNING] /checker-plugin/.../FakeNumExample.java:[29,26] [argument.type.incompatible]
 incompatible types in argument.
  found   : @Fenum("country") String
  required: @Fenum("planet") String

5. Regular Expressions

Let’s suppose we know a String variable has to store a regular expression with at least one matching group.

We can leverage the Checker Framework and declare such variable like that:

@Regex(1) private static String FIND_NUMBERS = "\\d*";

This is obviously a potential error because the regular expression we assigned to FIND_NUMBERS does not have any matching group.

Indeed, the Checker Framework will diligently inform us about our error at compile time:

[WARNING] /checker-plugin/.../RegexExample.java:[7,51] [assignment.type.incompatible]
incompatible types in assignment.
  found   : @Regex String
  required: @Regex(1) String

6. Conclusion

The Checker Framework is a useful tool for developers that want to go beyond the standard compiler and improve the correctness of their code.

It’s able to detect, at compile time, several typical errors that can usually only be detected at runtime, and it can even halt compilation by raising a compilation error.

There are many more standard checks than the ones we covered in this article; check out the checks available in The Checker Framework’s official manual here, or even write your own.

As always, the source code for this tutorial, with some more examples, can be found over on GitHub.
