
Java Flow Control Interview Questions (+ Answers)


1. Introduction

Control flow statements allow developers to use decision making, looping and branching to conditionally change the flow of execution of particular blocks of code.

In this article, we’ll go through some flow control interview questions that might pop up during an interview and, where appropriate, we’ll implement examples to understand their answers better.

2. Questions

Q1. Describe the if-then and if-then-else statements. What types of expressions can be used as conditions?

Both statements tell our program to execute the code inside of them only if a particular condition evaluates to true. However, the if-then-else statement provides a secondary path of execution in case the if clause evaluates to false:

if (age >= 21) {
    // ...
} else {
    // ...
}

Unlike other programming languages, Java only supports boolean expressions as conditions. If we try to use a different type of expression, we’ll get a compilation error.
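For instance, the following snippet does not compile, because the condition is an int rather than a boolean:

int flag = 1;
// if (flag) { ... }  // compilation error: int cannot be converted to boolean
if (flag != 0) {
    // ...
}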

Q2. Describe the switch statement. What object types can be used in the switch clause?

Switch allows the selection of several execution paths based on a variable’s value.

Each path is labeled with case or default. The switch statement evaluates each case expression for a match and executes all statements that follow the matching label until a break statement is found. If it can’t find a match, the default block is executed instead:

switch (yearsOfJavaExperience) {
    case 0:
        System.out.println("Student");
        break;
    case 1:
        System.out.println("Junior");
        break;
    case 2:
        System.out.println("Middle");
        break;
    default:
        System.out.println("Senior");
}

We can use byte, short, char, int, their wrapper classes, enums and Strings as switch values.

Q3. What happens when we forget to put a break statement in a case clause of a switch?

The switch statement falls through. This means that it will continue the execution of all case labels until it finds a break statement, even though those labels don’t match the expression’s value.

Here’s an example to demonstrate this:

int operation = 2;
int number = 10;

switch (operation) {
    case 1:
        number = number + 10;
        break;
    case 2:
        number = number - 4;
    case 3:
        number = number / 3;
    case 4:
        number = number * 10;
        break;
}

After running the code, number holds the value 20, instead of 6. This can be useful in situations when we want to associate the same action with multiple cases.

Q4. When is it preferable to use a switch over an if-then-else statement and vice versa?

A switch statement is better suited when testing a single variable against many single values or when several values will execute the same code:

switch (month) {
    case 1:
    case 3:
    case 5:
    case 7:
    case 8:
    case 10:
    case 12:
        days = 31;
        break;
    case 2:
        days = 28;
        break;
    default:
        days = 30;
}

An if-then-else statement is preferable when we need to check ranges of values or multiple conditions:

if (aPassword == null || aPassword.isEmpty()) {
    // empty password
} else if (aPassword.length() < 8 || aPassword.equals("12345678")) {
    // weak password
} else {
    // good password
}

Q5. What types of loops does Java support?

Java offers three different types of loops: for, while, and do-while.

A for loop provides a way to iterate over a range of values. It’s most useful when we know in advance how many times a task is going to be repeated:

for (int i = 0; i < 10; i++) {
     // ...
}

A while loop can execute a block of statements while a particular condition is true:

while (iterator.hasNext()) {
    // ...
}

A do-while is a variation of a while statement in which the evaluation of the boolean expression is at the bottom of the loop. This guarantees that the code will execute at least once:

do {
    // ...
} while (choice != -1);

Q6. What is an enhanced for loop?

It’s another syntax of the for statement designed to iterate through all the elements of a collection, an array or any object implementing the Iterable interface:

for (String aString : arrayOfStrings) {
    // ...
}

Q7. How can you exit from a loop early?

Using the break statement, we can terminate the execution of a loop immediately:

for (int i = 0; ; i++) {
    if (i > 10) {
        break;
    }
}

Q8. What is the difference between an unlabeled and a labeled break statement?

An unlabeled break statement terminates the innermost switch, for, while or do-while statement, whereas a labeled break ends the execution of an outer statement.

Let’s create an example to demonstrate this:

int[][] table = { { 1, 2, 3 }, { 25, 37, 49 }, { 55, 68, 93 } };
boolean found = false;
int loopCycles = 0;

outer: for (int[] rows : table) {
    for (int row : rows) {
        loopCycles++;
        if (row == 37) {
            found = true;
            break outer;
        }
    }
}

When the number 37 is found, the labeled break statement terminates the outermost for loop, and no more cycles are executed. Thus, loopCycles ends with the value of 5.

However, an unlabeled break only ends the innermost statement, returning the flow of control to the outer for, which continues looping to the next row in the table variable, making loopCycles end with a value of 8.

Q9. What is the difference between an unlabeled and a labeled continue statement?

An unlabeled continue statement skips to the end of the current iteration in the innermost for, while, or do-while loop, whereas a labeled continue skips to an outer loop marked with the given label.

Here’s an example that demonstrates this:

int[][] table = { { 1, 15, 3 }, { 25, 15, 49 }, { 15, 68, 93 } };
int loopCycles = 0;

outer: for (int[] rows : table) {
    for (int row : rows) {
        loopCycles++;
        if (row == 15) {
            continue outer;
        }
    }
}

The reasoning is similar to that of the previous question, but the labeled continue statement only skips the rest of the current iteration of the outer for loop, rather than terminating it.

Thus, loopCycles ends holding the value 5, whereas the unlabeled version only terminates the innermost statement, making the loopCycles end with a value of 9.

Q10. Describe the execution flow inside a try-catch-finally construct.

When a program has entered the try block, and an exception is thrown inside it, the execution of the try block is interrupted, and the flow of control continues with a catch block that can handle the exception being thrown.

If no such block exists then the current method execution stops, and the exception is thrown to the previous method on the call stack. Alternatively, if no exception occurs, all catch blocks are ignored, and program execution continues normally.

The finally block is always executed, whether or not an exception was thrown inside the body of the try block.

Q11. In which situations may the finally block not be executed?

When the JVM is terminated while executing the try or catch blocks, for instance, by calling System.exit(), or when the executing thread is killed abruptly, the finally block is not executed.
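For example, in the following snippet the finally block never runs, because System.exit() terminates the JVM before the try block completes:

try {
    System.out.println("Inside the try block");
    System.exit(0);
} finally {
    System.out.println("This line is never printed");
}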

Q12. What is the result of executing the following code?

public static int assignment() {
    int number = 1;
    try {
        number = 3;
        if (true) {
            throw new Exception("Test Exception");
        }
        number = 2;
    } catch (Exception ex) {
        return number;
    } finally {
        number = 4;
    }
    return number;
}

System.out.println(assignment());

The code outputs the number 3. Even though the finally block is always executed, this happens only after the try block exits.

In the example, the return statement is executed before the try-catch block ends. Thus, the assignment to number in the finally block has no effect, since the value was already returned to the calling code of the assignment method.

Q13. In which situations might a try-finally block be used even when exceptions are not expected to be thrown?

This block is useful when we want to ensure we don’t accidentally bypass the cleanup of resources used in the code by encountering a break, continue or return statement:

HeavyProcess heavyProcess = new HeavyProcess();
try {
    // ...
    return heavyProcess.heavyTask();
} finally {
    heavyProcess.doCleanUp();
}

Also, we may face situations in which we can’t locally handle the exception being thrown, or we want the current method to still throw the exception while allowing us to free up resources:

public void doDangerousTask(Task task) throws ComplicatedException {
    try {
        // ...
        task.gatherResources();
        if (task.isComplicated()) {
            throw new ComplicatedException("Too difficult");
        }
        // ...
    } finally {
        task.freeResources();
    }
}

Q14. How does try-with-resources work?

The try-with-resources statement declares and initializes one or more resources before executing the try block and closes them automatically at the end of the statement regardless of whether the block completed normally or abruptly. Any object implementing AutoCloseable or Closeable interfaces can be used as a resource:

try (StringWriter writer = new StringWriter()) {
    writer.write("Hello world!");
}

3. Conclusion

In this article, we covered some of the most frequently asked questions appearing in technical interviews for Java developers, regarding control flow statements. This should only be treated as the start of further research and not as an exhaustive list.

Good luck in your interview.


An Introduction to ThreadLocal in Java


1. Overview

In this article, we will be looking at the ThreadLocal construct from the java.lang package. This gives us the ability to store data individually for the current thread – and simply wrap it within a special type of object.

2. ThreadLocal API

The ThreadLocal construct allows us to store data that will be accessible only by a specific thread.

Let’s say that we want to have an Integer value that will be bundled with the specific thread:

ThreadLocal<Integer> threadLocalValue = new ThreadLocal<>();

Next, when we want to use this value from a thread, we only need to call the get() or set() method. Simply put, we can think of ThreadLocal as storing data inside of a map – with the thread as the key.

Due to that fact, when we call a get() method on the threadLocalValue we will get an Integer value for the requesting thread:

threadLocalValue.set(1);
Integer result = threadLocalValue.get();

We can construct an instance of the ThreadLocal by using the withInitial() static method and passing a supplier to it:

ThreadLocal<Integer> threadLocal = ThreadLocal.withInitial(() -> 1);

To remove value from the ThreadLocal we can call a remove() method:

threadLocal.remove();

To see how to use ThreadLocal properly, we’ll first look at an example that does not use ThreadLocal, and then rewrite it to leverage that construct.

3. Storing User Data in a Map

Let’s consider a program that needs to store user-specific Context data per given user id:

public class Context {
    private String userName;

    public Context(String userName) {
        this.userName = userName;
    }
}

We want to have one thread per user id. We’ll create a SharedMapWithUserContext class that implements a Runnable interface. The implementation in the run() method calls some database through the UserRepository class that returns a Context object for a given userId.

Next, we store that context in a ConcurrentHashMap keyed by userId:

public class SharedMapWithUserContext implements Runnable {
 
    public static Map<Integer, Context> userContextPerUserId
      = new ConcurrentHashMap<>();
    private Integer userId;
    private UserRepository userRepository = new UserRepository();

    @Override
    public void run() {
        String userName = userRepository.getUserNameForUserId(userId);
        userContextPerUserId.put(userId, new Context(userName));
    }

    // standard constructor
}

We can easily test our code by creating and starting two threads for two different userIds and asserting that we have two entries in the userContextPerUserId map:

SharedMapWithUserContext firstUser = new SharedMapWithUserContext(1);
SharedMapWithUserContext secondUser = new SharedMapWithUserContext(2);
new Thread(firstUser).start();
new Thread(secondUser).start();

assertEquals(SharedMapWithUserContext.userContextPerUserId.size(), 2);

4. Storing User Data in ThreadLocal

We can rewrite our example to store the user Context instance using a ThreadLocal. Each thread will then have its own value inside that ThreadLocal instance.

When using ThreadLocal we need to be very careful because every ThreadLocal instance is associated with a particular thread. In our example, we have a dedicated thread for each particular userId and this thread is created by us so we have full control over it.

The run() method will fetch the user context and store it into the ThreadLocal variable using the set() method:

public class ThreadLocalWithUserContext implements Runnable {
 
    private static ThreadLocal<Context> userContext 
      = new ThreadLocal<>();
    private Integer userId;
    private UserRepository userRepository = new UserRepository();

    @Override
    public void run() {
        String userName = userRepository.getUserNameForUserId(userId);
        userContext.set(new Context(userName));
        System.out.println("thread context for given userId: " 
          + userId + " is: " + userContext.get());
    }
    
    // standard constructor
}

We can test it by starting two threads that will execute the action for a given userId:

ThreadLocalWithUserContext firstUser 
  = new ThreadLocalWithUserContext(1);
ThreadLocalWithUserContext secondUser 
  = new ThreadLocalWithUserContext(2);
new Thread(firstUser).start();
new Thread(secondUser).start();

After running this code we’ll see on the standard output that ThreadLocal was set per given thread:

thread context for given userId: 1 is: Context{userNameSecret='18a78f8e-24d2-4abf-91d6-79eaa198123f'}
thread context for given userId: 2 is: Context{userNameSecret='e19f6a0a-253e-423e-8b2b-bca1f471ae5c'}

We can see that each of the users has its own Context.

5. Do not use ThreadLocal with ExecutorService

If we want to use an ExecutorService and submit a Runnable to it, using ThreadLocal will yield non-deterministic results – because we do not have a guarantee that every Runnable action for a given userId will be handled by the same thread every time it is executed.

Because of that, the values in our ThreadLocal would be shared among different userIds. That’s why we should not use a ThreadLocal together with an ExecutorService. It should only be used when we have full control over which thread will pick which runnable action to execute.
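To make the problem visible, here’s a minimal, deliberately contrived sketch: with a single-threaded pool, the second task reuses the thread of the first task and observes a value it never set:

ThreadLocal<Integer> currentUserId = new ThreadLocal<>();
ExecutorService service = Executors.newFixedThreadPool(1);

// the first task stores a value for "its" thread
service.submit(() -> currentUserId.set(1));

// the second task runs on the same pooled thread and sees the stale value
service.submit(() -> System.out.println("leaked userId: " + currentUserId.get()));

service.shutdown();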

6. Conclusion

In this quick article, we looked at the ThreadLocal construct. We first implemented logic using a ConcurrentHashMap shared between threads to store the context associated with a particular userId. Then we rewrote our example to leverage ThreadLocal to store data that is associated with a particular userId and a particular thread.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

JVM Log Forging


1. Overview

In this quick article, we’ll explore one of the most common security issues in JVM world – Log Forging. We’ll also show an example technique that can protect us from this security concern.

2. What Is Log Forging?

According to OWASP, log forging is one of the most common attack techniques.

Log forging vulnerabilities occur when data enters an application from an untrusted source or the data is written to an application/system log file by some external entity.

As per the OWASP guidelines, log forging or injection is a technique of writing unvalidated user input to log files, allowing an attacker to forge log entries or inject malicious content into the logs.

Simply put, by log forging, an attacker tries to add or modify record content by exploiting security loopholes in the application.

3. Example

Consider an example where a user submits a payment request from the web. From the application level, once this request gets processed, one entry will be logged with the amount:

private final Logger logger 
  = LoggerFactory.getLogger(LogForgingDemo.class);

public void addLog( String amount ) {
    logger.info( "Amount credited = {}" , amount );
}

public static void main( String[] args ) {
    LogForgingDemo demo = new LogForgingDemo();
    demo.addLog( "300" );
}

If we look at the console, we will see something like this:

web - 2017-04-12 17:45:29,978 [main] 
  INFO  com.baeldung.logforging.LogForgingDemo - Amount credited = 300

Now, suppose an attacker provides the input “\n\nweb - 2017-04-12 17:47:08,957 [main] INFO Amount reversed successfully”; then the log will be:

web - 2017-04-12 17:52:14,124 [main] INFO  com.baeldung.logforging.
  LogForgingDemo - Amount credited = 300

web - 2017-04-12 17:47:08,957 [main] INFO Amount reversed successfully

This way, the attacker is able to create a forged entry in the application log, which corrupts the value of the logs and confuses any audit-type activities in the future. This is the essence of log forging.

4. Prevention

The most obvious solution is not to write any user input into log files.

But that might not be possible in all circumstances, since user-given data is often necessary for debugging or auditing the application activity in the future.

We have to use some other alternative for tackling this kind of scenario.

4.1. Introduce Validation

One of the easiest solutions is to always validate the input before logging. One problem with this approach is that we will have to validate a lot of data at runtime, which will impact the overall system performance.
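As a rough sketch, assuming the amount should always be numeric, a validation step could simply reject anything else before it reaches the logger (the addLogValidated() name and the exact policy here are ours, purely for illustration):

public void addLogValidated(String amount) {
    if (amount == null || !amount.matches("\\d+(\\.\\d{1,2})?")) {
        logger.warn("Rejected a suspicious amount value");
        return;
    }
    logger.info("Amount credited = {}", amount);
}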

Also, if the validation fails, the data will not be logged and is lost forever, which is often not an acceptable scenario.

4.2. Database Logging

Another option is to log the data into a database. That is more secure than the previous approach, since a ‘\n’ or newline means nothing in this context. However, this raises another performance concern, since a massive number of database connections will be used for logging user data.

What’s more, this technique introduces another security vulnerability – namely SQL Injection. To tackle this, we might end up writing many extra lines of code.

4.3. ESAPI

Using ESAPI is probably the most widely shared and advisable technique in this context. Here, every piece of user data is encoded before being written into the logs. ESAPI is an open source API available from OWASP:

<dependency>
    <groupId>org.owasp.esapi</groupId>
    <artifactId>esapi</artifactId>
    <version>2.1.0.1</version>
</dependency>

It’s available in the Central Maven Repository.

We can encode the data using ESAPI‘s Encoder interface:

public String encode(String message) {
    message = message.replace( '\n' ,  '_' ).replace( '\r' , '_' )
      .replace( '\t' , '_' );
    message = ESAPI.encoder().encodeForHTML( message );
    return message;
}

Here, we have created one wrapper method which replaces all carriage returns and line feeds with underscores and encodes the modified message.

In the earlier example, if we encode the message using this wrapper function, the log should look something like this:

web - 2017-04-12 18:15:58,528 [main] INFO  com.baeldung.logforging.
  LogForgingDemo - Amount credited = 300
__web - 2017-04-12 17:47:08,957 [main] INFO Amount reversed successfully

Here, the corrupted string fragment is encoded and can be easily identified.

One important point to note is that to use ESAPI, we need to include the ESAPI.properties file in the classpath; otherwise, the ESAPI API will throw an exception at runtime. It’s available here.

5. Conclusion

In this quick tutorial, we learned about log forging and techniques to overcome this security concern.

As always, the full source code is available over on GitHub.

Introduction to Apache Flink with Java


1. Overview

Apache Flink is a Big Data processing framework that allows programmers to process vast amounts of data in a very efficient and scalable manner.

In this article, we’ll introduce some of the core API concepts and standard data transformations available in the Apache Flink Java API. The fluent style of this API makes it easy to work with Flink’s central construct – the distributed collection.

First, we will take a look at Flink’s DataSet API transformations and use them to implement a word count program. Then we will take a brief look at Flink’s DataStream API, which allows you to process streams of events in a real-time fashion.

2. Maven Dependency

To get started we’ll need to add Maven dependencies to flink-java and flink-test-utils libraries:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-test-utils_2.10</artifactId>
    <version>1.2.0</version>
    <scope>test</scope>
</dependency>

3. Core API Concepts

When working with Flink, we need to know a couple of things related to its API:

  • Every Flink program performs transformations on distributed collections of data. A variety of functions for transforming data are provided, including filtering, mapping, joining, grouping, and aggregating
  • A sink operation in Flink triggers the execution of a stream to produce the desired result of the program, such as saving the result to the file system or printing it to the standard output
  • Flink transformations are lazy, meaning that they are not executed until a sink operation is invoked (see the sketch after this list)
  • The Apache Flink API supports two modes of operations — batch and real-time. If you are dealing with a limited data source that can be processed in batch mode, you will use the DataSet API. Should you want to process unbounded streams of data in real-time, you would need to use the DataStream API
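As a quick sketch of that laziness (using the ExecutionEnvironment introduced in the next section), nothing is computed until a sink operation such as collect() is invoked:

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Integer> numbers = env.fromElements(1, 2, 3);

// only builds the execution plan - no data is processed yet
DataSet<Integer> doubled = numbers.map(n -> n * 2);

// collect() is a sink operation, so only now does Flink execute the plan
List<Integer> result = doubled.collect();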

4. DataSet API Transformations

The entry point to the Flink program is an instance of the ExecutionEnvironment class — this defines the context in which a program is executed.

Let’s create an ExecutionEnvironment to start our processing:

ExecutionEnvironment env
  = ExecutionEnvironment.getExecutionEnvironment();

Note that when you launch the application on the local machine, it will perform processing on the local JVM. Should you want to start processing on a cluster of machines, you would need to install Apache Flink on those machines and configure the ExecutionEnvironment accordingly.

4.1. Creating a DataSet

To start performing data transformations, we need to supply our program with the data.

Let’s create an instance of the DataSet class using our ExecutionEnvironment:

DataSet<Integer> amounts = env.fromElements(1, 29, 40, 50);

You can create a DataSet from multiple sources, such as Apache Kafka, a CSV file, or virtually any other data source.

4.2. Filter and Reduce

Once you create an instance of the DataSet class, you can apply transformations to it.

Let’s say that you want to filter numbers that are above a certain threshold and then sum them all. You can use the filter() and reduce() transformations to achieve this:

int threshold = 30;
List<Integer> collect = amounts
  .filter(a -> a > threshold)
  .reduce((integer, t1) -> integer + t1)
  .collect();

assertThat(collect.get(0)).isEqualTo(90);

Note that the collect() method is a sink operation that triggers the actual data transformations.

4.3. Map

Let’s say that you have a DataSet of Person objects:

private static class Person {
    private int age;
    private String name;

    // standard constructors/getters/setters
}

Next, let’s create a DataSet of these objects:

DataSet<Person> personDataSource = env.fromCollection(
  Arrays.asList(
    new Person(23, "Tom"),
    new Person(75, "Michael")));

Suppose that you want to extract only the age field from every object of the collection. You can use the map() transformation to get only a specific field of the Person class:

List<Integer> ages = personDataSource
  .map(p -> p.age)
  .collect();

assertThat(ages).hasSize(2);
assertThat(ages).contains(23, 75);

4.4. Join 

When you have two datasets, you may want to join them on some id field. For this, you can use the join() transformation.

Let’s create collections of transactions and addresses of a user:

Tuple3<Integer, String, String> address
  = new Tuple3<>(1, "5th Avenue", "London");
DataSet<Tuple3<Integer, String, String>> addresses
  = env.fromElements(address);

Tuple2<Integer, String> firstTransaction 
  = new Tuple2<>(1, "Transaction_1");
DataSet<Tuple2<Integer, String>> transactions 
  = env.fromElements(firstTransaction, new Tuple2<>(12, "Transaction_2"));

The first field in both tuples is of an Integer type, and this is an id field on which we want to join both data sets.

To perform the actual joining logic, we need to implement a KeySelector interface for address and transaction:

private static class IdKeySelectorTransaction 
  implements KeySelector<Tuple2<Integer, String>, Integer> {
    @Override
    public Integer getKey(Tuple2<Integer, String> value) {
        return value.f0;
    }
}

private static class IdKeySelectorAddress 
  implements KeySelector<Tuple3<Integer, String, String>, Integer> {
    @Override
    public Integer getKey(Tuple3<Integer, String, String> value) {
        return value.f0;
    }
}

Each selector is only returning the field on which the join should be performed.

Unfortunately, it’s not possible to use lambda expressions here because Flink needs generic type info.

Next, let’s implement merging logic using those selectors:

List<Tuple2<Tuple2<Integer, String>, Tuple3<Integer, String, String>>>
  joined = transactions.join(addresses)
  .where(new IdKeySelectorTransaction())
  .equalTo(new IdKeySelectorAddress())
  .collect();

assertThat(joined).hasSize(1);
assertThat(joined).contains(new Tuple2<>(firstTransaction, address));

4.5. Sort

Let’s say that you have the following collection of Tuple2:

Tuple2<Integer, String> secondPerson = new Tuple2<>(4, "Tom");
Tuple2<Integer, String> thirdPerson = new Tuple2<>(5, "Scott");
Tuple2<Integer, String> fourthPerson = new Tuple2<>(200, "Michael");
Tuple2<Integer, String> firstPerson = new Tuple2<>(1, "Jack");
DataSet<Tuple2<Integer, String>> transactions = env.fromElements(
  fourthPerson, secondPerson, thirdPerson, firstPerson);

If you want to sort this collection by the first field of the tuple, you can use the sortPartition() transformation:

List<Tuple2<Integer, String>> sorted = transactions
  .sortPartition(new IdKeySelectorTransaction(), Order.ASCENDING)
  .collect();

assertThat(sorted)
  .containsExactly(firstPerson, secondPerson, thirdPerson, fourthPerson);

5. Word Count 

The word count problem is one that is commonly used to showcase the capabilities of Big Data processing frameworks. The basic solution involves counting word occurrences in a text input. Let’s use Flink to implement a solution to this problem.

As the first step in our solution, we create a LineSplitter class that splits our input into tokens (words), collecting for each token a Tuple2 of key-value pairs. In each of these tuples, the key is a word found in the text, and the value is the integer one (1).

This class implements the FlatMapFunction interface that takes String as an input and produces a Tuple2<String, Integer>:

public class LineSplitter implements FlatMapFunction<String, Tuple2<String, Integer>> {

    @Override
    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
        Stream.of(value.toLowerCase().split("\\W+"))
          .filter(t -> t.length() > 0)
          .forEach(token -> out.collect(new Tuple2<>(token, 1)));
    }
}

We call the collect() method on the Collector class to push data forward in the processing pipeline.

Our next and final step is to group the tuples by their first elements (words) and then perform a sum aggregate on the second elements to produce a count of the word occurrences:

public static DataSet<Tuple2<String, Integer>> startWordCount(
  ExecutionEnvironment env, List<String> lines) throws Exception {
    DataSet<String> text = env.fromCollection(lines);

    return text.flatMap(new LineSplitter())
      .groupBy(0)
      .aggregate(Aggregations.SUM, 1);
}

We are using three types of Flink transformations: flatMap(), groupBy(), and aggregate().

Let’s write a test to assert that the word count implementation is working as expected:

List<String> lines = Arrays.asList(
  "This is a first sentence",
  "This is a second sentence with a one word");

DataSet<Tuple2<String, Integer>> result = WordCount.startWordCount(env, lines);

List<Tuple2<String, Integer>> collect = result.collect();
 
assertThat(collect).containsExactlyInAnyOrder(
  new Tuple2<>("a", 3), new Tuple2<>("sentence", 2), new Tuple2<>("word", 1),
  new Tuple2<>("is", 2), new Tuple2<>("this", 2), new Tuple2<>("second", 1),
  new Tuple2<>("first", 1), new Tuple2<>("with", 1), new Tuple2<>("one", 1));

6. DataStream API

6.1. Creating a DataStream

Apache Flink also supports the processing of streams of events through its DataStream API. If we want to start consuming events, we first need to use the StreamExecutionEnvironment class:

StreamExecutionEnvironment executionEnvironment
 = StreamExecutionEnvironment.getExecutionEnvironment();

Next, we can create a stream of events using the executionEnvironment from a variety of sources. It could be some message bus like Apache Kafka, but in this example, we will simply create a source from a couple of string elements:

DataStream<String> dataStream = executionEnvironment.fromElements(
  "This is a first sentence", 
  "This is a second sentence with a one word");

We can apply transformations to every element of the DataStream like in the normal DataSet class:

SingleOutputStreamOperator<String> upperCase = dataStream.map(String::toUpperCase);

To trigger the execution, we need to invoke a sink operation such as print(), which will just print the result of the transformations to the standard output, followed by the execute() method on the StreamExecutionEnvironment class:

upperCase.print();
executionEnvironment.execute();

It will produce the following output:

1> THIS IS A FIRST SENTENCE
2> THIS IS A SECOND SENTENCE WITH A ONE WORD

6.2. Windowing of Events

When processing a stream of events in real time, you may sometimes need to group events together and apply some computation on a window of those events.

Suppose we have a stream of events, where each event is a pair consisting of the event number and the timestamp when the event was sent to our system, and that we can tolerate events that are out-of-order but only if they are no more than twenty seconds late.

For this example, let’s first create a stream simulating two events that are several minutes apart and define a timestamp extractor that specifies our lateness threshold:

SingleOutputStreamOperator<Tuple2<Integer, Long>> windowed
  = env.fromElements(
  new Tuple2<>(16, ZonedDateTime.now().plusMinutes(25).toInstant().getEpochSecond()),
  new Tuple2<>(15, ZonedDateTime.now().plusMinutes(2).toInstant().getEpochSecond()))
  .assignTimestampsAndWatermarks(
    new BoundedOutOfOrdernessTimestampExtractor
      <Tuple2<Integer, Long>>(Time.seconds(20)) {
 
        @Override
        public long extractTimestamp(Tuple2<Integer, Long> element) {
          return element.f1 * 1000;
        }
    });

Next, let’s define a window operation to group our events into five-second windows and apply a transformation on those events:

SingleOutputStreamOperator<Tuple2<Integer, Long>> reduced = windowed
  .windowAll(TumblingEventTimeWindows.of(Time.seconds(5)))
  .maxBy(0, true);
reduced.print();

It will emit the maximum element of every five-second window, so it prints out:

1> (15,1491221519)

Note that we do not see the second event because it arrived later than the specified lateness threshold.

7. Conclusion

In this article, we introduced the Apache Flink framework and looked at some of the transformations supplied with its API.

We implemented a word count program using Flink’s fluent and functional DataSet API. Then we looked at the DataStream API and implemented a simple real-time transformation on a stream of events.

The implementation of all these examples and code snippets can be found over on GitHub – this is a Maven project, so it should be easy to import and run as it is.

Java Exceptions Interview Questions (+ Answers)


1. Overview

Exceptions are an essential topic that every Java developer should be familiar with. This article provides answers to some of the questions that might pop up during an interview.

2. Questions

Q1. What is an exception?

An exception is an abnormal event that occurs during the execution of a program and disrupts the normal flow of the program’s instructions.

Q2. What is the purpose of the throw and throws keywords?

The throws keyword is used to specify that a method may raise an exception during its execution. It enforces explicit exception handling when calling a method that declares checked exceptions:

public void simpleMethod() throws Exception {
    // ...
}

The throw keyword allows us to throw an exception object to interrupt the normal flow of the program. This is most commonly used when a program fails to satisfy a given condition:

if (task.isTooComplicated()) {
    throw new TooComplicatedException("The task is too complicated");
}

Q3. How can you handle an exception?

By using a try-catch-finally statement:

try {
    // ...
} catch (ExceptionType1 ex) {
    // ...
} catch (ExceptionType2 ex) {
    // ...
} finally {
    // ...
}

The block of code in which an exception may occur is enclosed in a try block. This block is also called “protected” or “guarded” code.

If an exception occurs, the catch block that matches the exception being thrown is executed; if not, all catch blocks are ignored.

The finally block is always executed after the try block exits, whether an exception was thrown or not inside it.

Q4. How can you catch multiple exceptions?

There are three ways of handling multiple exceptions in a block of code.

The first is to use a catch block that can handle all exception types being thrown:

try {
    // ...
} catch (Exception ex) {
    // ...
}

You should keep in mind that the recommended practice is to use exception handlers that are as accurate as possible.

Exception handlers that are too broad can make your code more error-prone, catch exceptions that weren’t anticipated, and cause unexpected behavior in your program.

The second way is implementing multiple catch blocks:

try {
    // ...
} catch (FileNotFoundException ex) {
    // ...
} catch (EOFException ex) {
    // ...
}

Note that if the exceptions have an inheritance relationship, the child type must come first and the parent type later. If we fail to do this, it will result in a compilation error.

The third is to use a multi-catch block:

try {
    // ...
} catch (FileNotFoundException | EOFException ex) {
    // ...
}

This feature, first introduced in Java 7, reduces code duplication and makes the code easier to maintain.

Q5. What is the difference between a checked and an unchecked exception?

A checked exception must be handled within a try-catch block or declared in a throws clause, whereas an unchecked exception is not required to be handled or declared.

Checked and unchecked exceptions are also known as compile-time and runtime exceptions respectively.

All exceptions are checked exceptions, except those indicated by Error, RuntimeException, and their subclasses.
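As a short illustration, the compiler forces us to deal with a checked exception, but not with an unchecked one:

// checked: must be caught or declared, otherwise compilation fails
try {
    FileReader reader = new FileReader("config.txt");
} catch (FileNotFoundException ex) {
    // ...
}

// unchecked: compiles fine without any handling
int[] numbers = new int[1];
numbers[2] = 7; // throws ArrayIndexOutOfBoundsException at runtime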

Q6. What is the difference between an exception and error?

An exception is an event that represents a condition from which it is possible to recover, whereas an error represents an external situation that is usually impossible to recover from.

All errors thrown by the JVM are instances of Error or one of its subclasses, the more common ones include but are not limited to:

  • OutOfMemoryError – thrown when the JVM cannot allocate more objects because it is out of memory, and the garbage collector was unable to make more available
  • StackOverflowError – occurs when the stack space for a thread has run out, typically because an application recurses too deeply
  • ExceptionInInitializerError – signals that an unexpected exception occurred during the evaluation of a static initializer
  • NoClassDefFoundError – is thrown when the classloader tries to load the definition of a class and couldn’t find it, usually because the required class files were not found in the classpath
  • UnsupportedClassVersionError – occurs when the JVM attempts to read a class file and determines that the version in the file is not supported, normally because the file was generated with a newer version of Java

Although an error can be handled with a try statement, this is not a recommended practice since there is no guarantee that the program will be able to do anything reliably after the error was thrown.
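For instance, a method that recurses without a base case will eventually exhaust the thread’s stack and cause the JVM to throw a StackOverflowError:

public static void recurse() {
    recurse(); // no base case, so the stack eventually overflows
}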

Q7. What exception will be thrown executing the following code block?

Integer[][] ints = { { 1, 2, 3 }, { null }, { 7, 8, 9 } };
System.out.println("value = " + ints[1][1].intValue());

It throws an ArrayIndexOutOfBoundsException since we’re trying to access a position greater than the length of the array.

Q8. What is exception chaining?

Exception chaining occurs when one exception is thrown in response to another exception. This allows us to discover the complete history of the raised problem:

try {
    task.readConfigFile();
} catch (FileNotFoundException ex) {
    throw new TaskException("Could not perform task", ex);
}
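Later, the code handling the TaskException can walk the chain to find the original cause (assuming a doTask() method that wraps and rethrows as shown above):

try {
    doTask();
} catch (TaskException ex) {
    Throwable rootCause = ex.getCause(); // the original FileNotFoundException
}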

Q9. What is a stacktrace and how does it relate to an exception?

A stack trace provides the names of the classes and methods that were called, from the start of the application to the point an exception occurred.

It’s a very useful debugging tool since it enables us to determine exactly where the exception was thrown in the application and the original causes that led to it.
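For example, we can print the stack trace of a caught exception to the standard error stream with a single call:

try {
    // ...
} catch (Exception ex) {
    ex.printStackTrace();
}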

Q10. Why would you want to subclass an exception?

If the exception type isn’t represented by those that already exist in the Java platform, or if you need to provide more information to client code to treat it in a more precise manner, then you should create a custom exception.

Deciding whether a custom exception should be checked or unchecked depends entirely on the business case. However, as a rule of thumb; if the code using your exception can be expected to recover from it, then create a checked exception otherwise make it unchecked.

Also, you should inherit from the most specific Exception subclass that closely relates to the one you want to throw. If there is no such class, then choose Exception as the parent.

Q11. What are some advantages of exceptions?

Traditional error detection and handling techniques often lead to spaghetti code that is hard to maintain and difficult to read. However, exceptions enable us to separate the core logic of our application from the details of what to do when something unexpected happens.

Also, since the JVM searches backward through the call stack to find any methods interested in handling a particular exception; we gain the ability to propagate an error up in the call stack without writing additional code.

Also, because all exceptions thrown in a program are objects, they can be grouped or categorized based on their class hierarchy. This allows us to catch a group of exceptions in a single exception handler by specifying the exception’s superclass in the catch block.

Q12. Can you throw any exception inside a lambda expression’s body?

When using a standard functional interface already provided by Java, you can only throw unchecked exceptions because standard functional interfaces do not have a “throws” clause in method signatures:

List<Integer> integers = Arrays.asList(3, 9, 7, 0, 10, 20);
integers.forEach(i -> {
    if (i == 0) {
        throw new IllegalArgumentException("Zero not allowed");
    }
    System.out.println(Math.PI / i);
});

However, if you are using a custom functional interface, throwing checked exceptions is possible:

@FunctionalInterface
public static interface CheckedFunction<T> {
    void apply(T t) throws Exception;
}
public void processTasks(
  List<Task> tasks, CheckedFunction<Task> checkedFunction) {
    for (Task task : tasks) {
        try {
            checkedFunction.apply(task);
        } catch (Exception e) {
            // ...
        }
    }
}

processTasks(taskList, t -> {
    // ...
    throw new Exception("Something happened");
});

Q13. What are the rules we need to follow when overriding a method that throws an exception?

Several rules dictate how exceptions must be declared in the context of inheritance.

When the parent class method doesn’t throw any exceptions, the child class method can’t throw any checked exception, but it may throw any unchecked exception.

Here’s an example code to demonstrate this:

class Parent {
    void doSomething() {
        // ...
    }
}

class Child extends Parent {
    void doSomething() throws IllegalArgumentException {
        // ...
    }
}

The next example will fail to compile since the overriding method throws a checked exception not declared in the overridden method:

class Parent {
    void doSomething() {
        // ...
    }
}

class Child extends Parent {
    void doSomething() throws IOException {
        // Compilation error
    }
}

When the parent class method throws one or more checked exceptions, the child class method can throw any unchecked exception, as well as all, none or a subset of the declared checked exceptions, and even a greater number of them, as long as they have the same scope or a narrower one.

Here’s an example code that successfully follows the previous rule:

class Parent {
    void doSomething() throws IOException, ParseException {
        // ...
    }

    void doSomethingElse() throws IOException {
        // ...
    }
}

class Child extends Parent {
    void doSomething() throws IOException {
        // ...
    }

    void doSomethingElse() throws FileNotFoundException, EOFException {
        // ...
    }
}

Note that both methods respect the rule. The first throws fewer exceptions than the overridden method, and the second, even though it throws more, throws exceptions that are narrower in scope.

However, if we try to throw a checked exception that the parent class method doesn’t declare, or we throw one with a broader scope, we’ll get a compilation error:

class Parent {
    void doSomething() throws FileNotFoundException {
        // ...
    }
}

class Child extends Parent {
    void doSomething() throws IOException {
        // Compilation error
    }
}

When the parent class method has a throws clause with an unchecked exception, the child class method can throw none or any number of unchecked exceptions, even though they are not related.

Here’s an example that honors the rule:

class Parent {
    void doSomething() throws IllegalArgumentException {
        // ...
    }
}

class Child extends Parent {
    void doSomething()
      throws ArithmeticException, BufferOverflowException {
        // ...
    }
}

Q14. Will the following code compile?

void doSomething() {
    // ...
    throw new RuntimeException(new Exception("Chained Exception"));
}

Yes. When chaining exceptions, the compiler only cares about the first one in the chain and, because it detects an unchecked exception, we don’t need to add a throws clause.

Q15. Is there any way of throwing a checked exception from a method that does not have a throws clause?

Yes. We can take advantage of the type erasure performed by the compiler and make it think we are throwing an unchecked exception when, in fact, we’re throwing a checked exception:

public <T extends Throwable> T sneakyThrow(Throwable ex) throws T {
    throw (T) ex;
}

public void methodWithoutThrows() {
    this.<RuntimeException>sneakyThrow(new Exception("Checked Exception"));
}

3. Conclusion

In this article, we’ve explored some of the questions that are likely to appear in technical interviews for Java developers, regarding exceptions. This is not an exhaustive list, and it should be treated only as the start of further research.

We, at Baeldung, wish you success in any upcoming interviews.

Guide to sun.misc.Unsafe


1. Overview

In this article, we’ll have a look at a very interesting class provided by the JDK – Unsafe from the sun.misc package. It provides us with low-level mechanisms that were designed to be used only by the core Java library and not by standard users.

2. Obtaining an Instance of the Unsafe 

Firstly, to be able to use the Unsafe class, we need to get an instance – which is not straightforward, given that the class was designed only for internal usage.

The way to obtain the instance is via the static method getUnsafe(). The caveat is that, by default, this will throw a SecurityException.

Fortunately, we can obtain the instance using reflection:

Field f = Unsafe.class.getDeclaredField("theUnsafe");
f.setAccessible(true);
unsafe = (Unsafe) f.get(null);

3. Instantiating a Class Using Unsafe

Let’s say that we have a simple class with a constructor that sets a variable value when the object is created:

class InitializationOrdering {
    private long a;

    public InitializationOrdering() {
        this.a = 1;
    }

    public long getA() {
        return this.a;
    }
}

When we initialize that object using the constructor, the getA() method will return a value of 1:

InitializationOrdering o1 = new InitializationOrdering();
assertEquals(o1.getA(), 1);

But we can use the allocateInstance() method from Unsafe instead. It will only allocate the memory for our class and will not invoke the constructor:

InitializationOrdering o3 
  = (InitializationOrdering) unsafe.allocateInstance(InitializationOrdering.class);
 
assertEquals(o3.getA(), 0);

Notice the constructor was not invoked and due to that fact, the getA() method returned the default value for the long type – which is 0.

4. Altering Private Fields

Let’s say that we have a class that holds a secret private value:

class SecretHolder {
    private int SECRET_VALUE = 0;

    public boolean secretIsDisclosed() {
        return SECRET_VALUE == 1;
    }
}

Using the putInt() method from Unsafe, we can change a value of the private SECRET_VALUE field, changing/corrupting the state of that instance:

SecretHolder secretHolder = new SecretHolder();

Field f = secretHolder.getClass().getDeclaredField("SECRET_VALUE");
unsafe.putInt(secretHolder, unsafe.objectFieldOffset(f), 1);

assertTrue(secretHolder.secretIsDisclosed());

Once we get a field by the reflection call, we can alter its value to any other int value using the Unsafe.

5. Throwing an Exception

The code that is invoked via Unsafe is not examined in the same way by the compiler as regular Java code. We can use the throwException() method to throw any exception without restricting the caller to handle that exception, even if it’s a checked exception:

@Test(expected = IOException.class)
public void givenUnsafeThrowException_whenThrowCheckedException_thenNotNeedToCatchIt() {
    unsafe.throwException(new IOException());
}

After throwing an IOException, which is checked, we don’t need to catch it nor specify it in the method declaration.

6. Off-Heap Memory

If an application is running out of available memory on the JVM, we could end up forcing the GC process to run too often. Ideally, we would want a special memory region that is off the heap and not controlled by the GC process.

The allocateMemory() method from the Unsafe class gives us the ability to allocate huge objects off the heap, meaning that this memory will not be seen and taken into account by the GC and the JVM.

This can be very useful, but we need to remember that this memory needs to be managed manually and properly reclaimed with freeMemory() when it is no longer needed.

Let’s say that we want to create the large off-heap memory array of bytes. We can use the allocateMemory() method to achieve that:

class OffHeapArray {
    private final static int BYTE = 1;
    private long size;
    private long address;

    public OffHeapArray(long size) throws NoSuchFieldException, IllegalAccessException {
        this.size = size;
        address = getUnsafe().allocateMemory(size * BYTE);
    }

    private Unsafe getUnsafe() throws IllegalAccessException, NoSuchFieldException {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }

    public void set(long i, byte value) throws NoSuchFieldException, IllegalAccessException {
        getUnsafe().putByte(address + i * BYTE, value);
    }

    public int get(long idx) throws NoSuchFieldException, IllegalAccessException {
        return getUnsafe().getByte(address + idx * BYTE);
    }

    public long size() {
        return size;
    }
    
    public void freeMemory() throws NoSuchFieldException, IllegalAccessException {
        getUnsafe().freeMemory(address);
    }
}

In the constructor of the OffHeapArray, we’re initializing the array that is of a given size. We are storing the beginning address of the array in the address field. The set() method is taking the index and the given value that will be stored in the array. The get() method is retrieving the byte value using its index that is an offset from the start address of the array.

Next, we can allocate that off-heap array using its constructor:

long SUPER_SIZE = (long) Integer.MAX_VALUE * 2;
OffHeapArray array = new OffHeapArray(SUPER_SIZE);

We can put a number of byte values into this array and then retrieve those values, summing them up to test that our addressing works correctly:

int sum = 0;
for (int i = 0; i < 100; i++) {
    array.set((long) Integer.MAX_VALUE + i, (byte) 3);
    sum += array.get((long) Integer.MAX_VALUE + i);
}

assertEquals(array.size(), SUPER_SIZE);
assertEquals(sum, 300);

In the end, we need to release the memory back to the OS by calling freeMemory().

7. CompareAndSwap Operation

The very efficient constructs from the java.concurrent package, like AtomicInteger, are using the compareAndSwap() methods out of Unsafe underneath, to provide the best possible performance. This construct is widely used in the lock-free algorithms that can leverage the CAS processor instruction to provide great speedup compared to the standard pessimistic synchronization mechanism in Java.

We can construct the CAS based counter using the compareAndSwapLong() method from Unsafe:

class CASCounter {
    private Unsafe unsafe;
    private volatile long counter = 0;
    private long offset;

    private Unsafe getUnsafe() throws IllegalAccessException, NoSuchFieldException {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }

    public CASCounter() throws Exception {
        unsafe = getUnsafe();
        offset = unsafe.objectFieldOffset(CASCounter.class.getDeclaredField("counter"));
    }

    public void increment() {
        long before = counter;
        while (!unsafe.compareAndSwapLong(this, offset, before, before + 1)) {
            before = counter;
        }
    }

    public long getCounter() {
        return counter;
    }
}

In the CASCounter constructor, we are using the objectFieldOffset() method to get the memory offset of the counter field, to be able to use it later in the increment() method. That field needs to be declared as volatile, to be visible to all threads that are writing and reading this value.

The most important part of this class is the increment() method. We’re using compareAndSwapLong() in a while loop to increment the previously fetched value, checking whether that value has changed since we fetched it.

If it did, then we are retrying that operation until we succeed. There is no blocking here, which is why this is called a lock-free algorithm.

We can test our code by incrementing the shared counter from multiple threads:

int NUM_OF_THREADS = 1_000;
int NUM_OF_INCREMENTS = 10_000;
ExecutorService service = Executors.newFixedThreadPool(NUM_OF_THREADS);
CASCounter casCounter = new CASCounter();

IntStream.rangeClosed(0, NUM_OF_THREADS - 1)
  .forEach(i -> service.submit(() -> IntStream
    .rangeClosed(0, NUM_OF_INCREMENTS - 1)
    .forEach(j -> casCounter.increment())));

Next, to assert that the state of the counter is correct, we wait for all submitted tasks to finish and only then read the counter value:

service.shutdown();
service.awaitTermination(1, TimeUnit.MINUTES);

assertEquals(NUM_OF_INCREMENTS * NUM_OF_THREADS, casCounter.getCounter());

8. Park/Unpark

There are two very interesting methods in the Unsafe API that are used by the JVM to block and resume threads. When a thread is waiting for some action, the JVM can block it by using the park() method from the Unsafe class.

It is very similar to the Object.wait() method, but it is calling the native OS code, thus taking advantage of some architecture specifics to get the best performance.

When the thread is blocked and needs to be made runnable again, the JVM uses the unpark() method. We’ll often see those method invocations in thread dumps, especially in the applications which use thread pools.
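We usually don’t call these Unsafe methods directly; application code typically reaches them through java.util.concurrent.locks.LockSupport, which delegates to Unsafe.park() and Unsafe.unpark() under the hood. A minimal sketch:

Thread worker = new Thread(() -> {
    System.out.println("parking the worker thread");
    LockSupport.park(); // blocks here until another thread unparks us
    System.out.println("worker unparked, continuing");
});
worker.start();

Thread.sleep(500); // give the worker time to park (sketch only)
LockSupport.unpark(worker); // makes the worker runnable again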

9. Conclusion

In this article, we looked at the Unsafe class and its most useful constructs.

We saw how to access private fields, how to allocate off-heap memory, and how to use the compare-and-swap construct to implement lock-free algorithms.

The implementation of all these examples and code snippets can be found over on GitHub – this is a Maven project, so it should be easy to import and run as it is.

JVM Garbage Collectors


1. Overview

In this quick tutorial, we will show the basics of different JVM Garbage Collection (GC) implementations. Additionally, we’ll find out how to enable a particular type of Garbage Collection in our applications.

2. Brief Introduction to Garbage Collection

From the name, it looks like Garbage Collection deals with finding and deleting the garbage from memory. However, in reality, Garbage Collection tracks each and every object available in the JVM heap space and removes unused ones.

In simple words, GC works in two simple steps known as Mark and Sweep:

  • Mark – it is where the garbage collector identifies which pieces of memory are in use and which are not
  • Sweep – this step removes objects identified during the “mark” phase

Advantages:

  • No manual memory allocation/deallocation handling because unused memory space is automatically handled by GC
  • No overhead of handling Dangling Pointer
  • Automatic Memory Leak management (GC on its own can’t guarantee a foolproof solution to memory leaks; however, it takes care of a good portion of them)

Disadvantages:

  • Since the JVM has to keep track of object reference creation/deletion, this activity requires more CPU power in addition to that of the original application. It may affect the performance of requests that require large amounts of memory
  • Programmers have no control over the scheduling of CPU time dedicated to freeing objects that are no longer needed
  • Using some GC implementations might result in the application stopping unpredictably
  • Automated memory management will not be as efficient as proper manual memory allocation/deallocation

3. GC Implementations

The JVM has four types of GC implementations:

  • Serial Garbage Collector
  • Parallel Garbage Collector
  • CMS Garbage Collector
  • G1 Garbage Collector

3.1. Serial Garbage Collector

This is the simplest GC implementation, as it basically works with a single thread. As a result, this GC implementation freezes all application threads when it runs. Hence, it is not a good idea to use it in multi-threaded applications like server environments.

However, there was an excellent talk by Twitter engineers at QCon 2012 on the performance of Serial Garbage Collector – which is a good way to understand this collector better.

The Serial GC is the garbage collector of choice for most applications that do not have small pause time requirements and run on client-style machines. To enable Serial Garbage Collector, we can use the following argument:

java -XX:+UseSerialGC -jar Application.java

3.2. Parallel Garbage Collector

It’s the default GC of the JVM and is sometimes called the Throughput Collector. Unlike Serial Garbage Collector, it uses multiple threads for managing heap space. But it also freezes other application threads while performing GC.

If we use this GC, we can specify the maximum number of garbage collection threads, as well as the pause time, throughput, and footprint (heap size).

The number of garbage collector threads can be controlled with the command-line option -XX:ParallelGCThreads=<N>.

The maximum pause time goal (the gap, in milliseconds, between two GCs) is specified with the command-line option -XX:MaxGCPauseMillis=<N>.

The maximum throughput target (measured as the ratio of time spent doing garbage collection to time spent outside of garbage collection) is specified by the command-line option -XX:GCTimeRatio=<N>.

Maximum heap footprint (the amount of heap memory that a program requires while running) is specified using the option -Xmx<N>.
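
For instance, a hypothetical invocation combining these tuning flags could look like this (the values are illustrative, not recommendations):

java -XX:+UseParallelGC -XX:ParallelGCThreads=4 -XX:MaxGCPauseMillis=200 -XX:GCTimeRatio=99 -Xmx2g -jar Application.jar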

To enable Parallel Garbage Collector, we can use the following argument:

java -XX:+UseParallelGC -jar Application.jar

3.3. CMS Garbage Collector

The Concurrent Mark Sweep (CMS) implementation uses multiple garbage collector threads for garbage collection. It’s designed for applications that prefer shorter garbage collection pauses, and that can afford to share processor resources with the garbage collector while the application is running.

Simply put, applications using this type of GC respond slower on average but do not stop responding to perform garbage collection.

A quick point to note here is that since this GC is concurrent, an invocation of explicit garbage collection, such as System.gc(), while the concurrent process is working will result in a Concurrent Mode Failure/Interruption.

If more than 98% of the total time is spent in CMS garbage collection and less than 2% of the heap is recovered, then an OutOfMemoryError is thrown by the CMS collector. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

This collector also has a mode known as incremental mode, which is deprecated in Java SE 8 and may be removed in a future major release.

To enable the CMS Garbage Collector, we can use the following flag:

java -XX:+UseConcMarkSweepGC -jar Application.jar

3.4. G1 Garbage Collector

G1 (Garbage First) Garbage Collector is designed for applications running on multi-processor machines with large memory space. It has been available since JDK 7 Update 4.

The G1 collector is planned to replace the CMS collector, since it’s more performance-efficient.

Unlike other collectors, the G1 collector partitions the heap into a set of equal-sized heap regions, each a contiguous range of virtual memory. When performing garbage collections, G1 uses a concurrent global marking phase (i.e. phase 1, known as Marking) to determine the liveness of objects throughout the heap.

After the mark phase is completed, G1 knows which regions are mostly empty. It collects these regions first, which usually yields a significant amount of free space (i.e. phase 2, known as Sweeping). This is why this method of garbage collection is called Garbage-First.

To enable G1 Garbage Collector, we can use the following argument:

java -XX:+UseG1GC -jar Application.java

3.5. Java 8 Changes

Java 8u20 introduced one more JVM parameter for reducing the unnecessary use of memory caused by creating too many instances of the same String. It optimizes the heap memory by deduplicating String values into a single global char[] array.

This parameter can be enabled by adding -XX:+UseStringDeduplication as a JVM parameter.
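
Note that string deduplication is implemented only for the G1 collector, so the flag needs to be combined with it:

java -XX:+UseG1GC -XX:+UseStringDeduplication -jar Application.jar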

4. Conclusion

In this quick tutorial, we had a look at the different JVM Garbage Collection implementations and their use cases.

More detailed documentation can be found here.

A Guide to MyBatis

1. Introduction

MyBatis is an open source persistence framework which simplifies the implementation of database access in Java applications. It provides support for custom SQL, stored procedures, and different types of mapping relations.

Simply put, it’s an alternative to JDBC and Hibernate.

2. Maven Dependencies

To make use of MyBatis we need to add the dependency to our pom.xml:

<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis</artifactId>
    <version>3.4.4</version>
</dependency>

The latest version of the dependency can be found here.

3. Java APIs

3.1. SqlSessionFactory

SqlSessionFactory is the core class for every MyBatis application. It is instantiated by using SqlSessionFactoryBuilder’s build() method, which loads a configuration XML file:

String resource = "mybatis-config.xml";
InputStream inputStream = Resources.getResourceAsStream(resource);
SqlSessionFactory sqlSessionFactory
  = new SqlSessionFactoryBuilder().build(inputStream);

The Java configuration includes settings like the data source definition, transaction manager details, and a list of mappers that define relations between entities; together, these are used to build the SqlSessionFactory instance:

public static SqlSessionFactory buildSqlSessionFactory() {
    DataSource dataSource 
      = new PooledDataSource(DRIVER, URL, USERNAME, PASSWORD);

    Environment environment 
      = new Environment("Development", new JdbcTransactionFactory(), dataSource);
        
    Configuration configuration = new Configuration(environment);
    configuration.addMapper(PersonMapper.class);
    // ...

    SqlSessionFactoryBuilder builder = new SqlSessionFactoryBuilder();
    return builder.build(configuration);
}

3.2. SqlSession

SqlSession contains methods for performing database operations, obtaining mappers, and managing transactions. It can be obtained from the SqlSessionFactory. Instances of this class are not thread-safe.

After performing the database operation the session should be closed. Since SqlSession implements the AutoCloseable interface, we can use the try-with-resources block:

try(SqlSession session = sqlSessionFactory.openSession()) {
    // do work
}
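
For instance, assuming the PersonMapper interface defined in the next section and a Person class with a suitable constructor, a typical unit of work looks like this:

try (SqlSession session = sqlSessionFactory.openSession()) {
    // obtain a mapper implementation backed by this session
    PersonMapper mapper = session.getMapper(PersonMapper.class);
    mapper.save(new Person("John"));
    // flush the changes to the database
    session.commit();
}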

4. Mappers

Mappers are Java interfaces that map methods to the corresponding SQL statements. MyBatis provides annotations for defining database operations:

public interface PersonMapper {

    @Insert("Insert into person(name) values (#{name})")
    public Integer save(Person person);

    // ...

    @Select(
      "Select personId, name from Person where personId=#{personId}")
    @Results(value = {
      @Result(property = "personId", column = "personId"),
      @Result(property="name", column = "name"),
      @Result(property = "addresses", javaType = List.class,
        column = "personId", many=@Many(select = "getAddresses"))
    })
    public Person getPersonById(Integer personId);

    // ...
}

5. MyBatis Annotations

Let’s see some of the main annotations provided by MyBatis:

  • @Insert, @Select, @Update, @Delete – these annotations represent SQL statements to be executed by calling the annotated methods:
    @Insert("Insert into person(name) values (#{name})")
    public Integer save(Person person);
    
    @Update("Update Person set name= #{name} where personId=#{personId}")
    public void updatePerson(Person person);
    
    @Delete("Delete from Person where personId=#{personId}")
    public void deletePersonById(Integer personId);
    
    @Select("SELECT person.personId, person.name FROM person 
      WHERE person.personId = #{personId}")
    Person getPerson(Integer personId);
  • @Results – a list of result mappings that contain the details of how the database columns are mapped to Java class attributes:
    @Select("Select personId, name from Person where personId=#{personId}")
    @Results(value = {
      @Result(property = "personId", column = "personId")
        // ...   
    })
    public Person getPersonById(Integer personId);
  • @Result – it represents a single instance of Result out of the list of results retrieved from @Results. It includes the details like mapping from database column to Java bean property, Java type of the property and also the association with other Java objects:
    @Results(value = {
      @Result(property = "personId", column = "personId"),
      @Result(property="name", column = "name"),
      @Result(property = "addresses", javaType =List.class) 
        // ... 
    })
    public Person getPersonById(Integer personId);
  • @Many – it specifies a mapping of one object to a collection of the other objects:
    @Results(value ={
      @Result(property = "addresses", javaType = List.class, 
        column = "personId",
        many=@Many(select = "getAddresses"))
    })

    Here, getAddresses is the method which returns a collection of Address objects by querying the Address table.

    @Select("select addressId, streetAddress, personId from address 
      where personId=#{personId}")
    public Address getAddresses(Integer personId);

    Similar to the @Many annotation, we have the @One annotation, which specifies a one-to-one mapping relationship between objects.

  • @MapKey – this is used to convert a list of records to a Map of records, with the key defined by the value attribute:
    @Select("select * from Person")
    @MapKey("personId")
    Map<Integer, Person> getAllPerson();
  • @Options – this annotation specifies a wide range of switches and configuration settings, so that instead of defining them on each statement, we can use @Options to define them:
    @Insert("Insert into address (streetAddress, personId) 
      values(#{streetAddress}, #{personId})")
    @Options(useGeneratedKeys = false, flushCache=true)
    public Integer saveAddress(Address address);

6. Dynamic SQL

Dynamic SQL is a very powerful feature provided by MyBatis. With it, we can structure complex SQL accurately.

With traditional JDBC code, we have to concatenate SQL statements by hand, getting the spaces and commas exactly right. This is error prone and, in the case of large SQL statements, very difficult to debug.

Let’s explore how we can use dynamic SQL in our application:

@SelectProvider(type=MyBatisUtil.class, method="getPersonByName")
public Person getPersonByName(String name);

Here we have specified a class and a method name that constructs and generates the final SQL:

public class MyBatisUtil {
 
    // ...
 
    public String getPersonByName(String name){
        return new SQL() {{
            SELECT("*");
            FROM("person");
            WHERE("name like #{name} || '%'");
        }}.toString();
    }
}

Dynamic SQL provides all the SQL constructs as methods of the SQL class, e.g. SELECT, WHERE, etc. With this, we can dynamically change the generation of the WHERE clause.

7. Stored Procedure Support

We can also execute a stored procedure using the @Select annotation. Here we need to pass the name of the stored procedure and the parameter list, and use an explicit CALL to that procedure:

@Select(value= "{CALL getPersonByProc(#{personId,
  mode=IN, jdbcType=INTEGER})}")
@Options(statementType = StatementType.CALLABLE)
public Person getPersonByProc(Integer personId);

8. Conclusion

In this quick tutorial, we’ve seen the different features provided by MyBatis and how it eases the development of database-facing applications. We have also seen the various annotations provided by the library.

The complete code for this article is available over on GitHub.


Creating a Custom Starter with Spring Boot

1. Overview

The core Spring Boot developers provide starters for most of the popular open source projects, but we are not limited to these.

We can also write our own custom starters. If we have an internal library for use within our organization, it would be a good practice to also write a starter for it if it’s going to be used in Spring Boot context.

These starters enable developers to avoid lengthy configuration and quickly jumpstart their development. However, with a lot of things happening in the background, it sometimes becomes difficult to understand how an annotation or just including a dependency in the pom.xml enables so many features.

In this article, we’ll demystify the Spring Boot magic to see what’s going on behind the scenes. Then we will use these concepts to create a starter for our own custom library.

2. Demystifying Spring Boot’s Autoconfiguration

2.1. Auto Configuration Classes

When Spring Boot starts up, it looks for a file named spring.factories in the classpath. This file is located in the META-INF directory. Let’s look at a snippet of this file from the spring-boot-autoconfigure project:

# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration,\
org.springframework.boot.autoconfigure.cassandra.CassandraAutoConfiguration,\
org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration,\
org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration

This file maps a name to different configuration classes which Spring Boot will try to run. So, as per this snippet, Spring Boot will try to run all the configuration classes for RabbitMQ, Cassandra, MongoDB and Hibernate.

Whether or not these classes will actually run will depend on the presence of dependent classes on the classpath. For example, if the classes for MongoDB are found on the classpath, MongoAutoConfiguration will run and all the mongo related beans will be initialized.

This conditional initialization is enabled by the @ConditionalOnClass annotation. Let’s look at the code snippet from MongoAutoConfiguration class to see its usage:

@Configuration
@ConditionalOnClass(MongoClient.class)
@EnableConfigurationProperties(MongoProperties.class)
@ConditionalOnMissingBean(type = "org.springframework.data.mongodb.MongoDbFactory")
public class MongoAutoConfiguration {
    // configuration code
}

Now, if MongoClient is available on the classpath, this configuration class will run, populating the Spring bean factory with a MongoClient initialized with default config settings.

2.2. Custom Properties from application.properties File

Spring Boot initializes the beans using some pre-configured defaults. To override those defaults, we generally declare them in the application.properties file with some specific name. These properties are automatically picked up by the Spring Boot container.

Let’s see how that works.

In the code snippet for MongoAutoConfiguration, @EnableConfigurationProperties annotation is declared with the MongoProperties class which acts as the container for custom properties:

@ConfigurationProperties(prefix = "spring.data.mongodb")
public class MongoProperties {

    private String host;

    // other fields with standard getters and setters
}

The prefix plus the field name form the names of the properties in the application.properties file. So, to set the host for MongoDB, we only need to write the following in the property file:

spring.data.mongodb.host = localhost

Similarly, values for other fields in the class can be set using the property file.
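
For instance, assuming MongoProperties also defines a port field, we could override the default port the same way:

spring.data.mongodb.port = 27017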

3. Creating a Custom Starter

Based on the concepts in section 2, to create a custom starter we need to write the following components:

  1. An auto-configure class for our library along with a properties class for custom configuration.
  2. A starter pom to bring in the dependencies of the library and the autoconfigure project.

For demonstration, we have created a simple greeting library that will take in a greeting message for different times of day as configuration parameters and output the greeting message. We will also create a sample Spring Boot application to demonstrate the usage of our autoconfigure and starter modules.

3.1. The Autoconfigure Module

We’ll call our auto-configure module greeter-spring-boot-autoconfigure. This module will have two main classes: GreeterProperties, which enables setting custom properties through the application.properties file, and GreeterAutoConfiguration, which creates the beans for the greeter library.

Let’s look at the code for both the classes:

@ConfigurationProperties(prefix = "baeldung.greeter")
public class GreeterProperties {

    private String userName;
    private String morningMessage;
    private String afternoonMessage;
    private String eveningMessage;
    private String nightMessage;

    // standard getters and setters

}
@Configuration
@ConditionalOnClass(Greeter.class)
@EnableConfigurationProperties(GreeterProperties.class)
public class GreeterAutoConfiguration {

    @Autowired
    private GreeterProperties greeterProperties;

    @Bean
    @ConditionalOnMissingBean
    public GreetingConfig greeterConfig() {

        String userName = greeterProperties.getUserName() == null
          ? System.getProperty("user.name") 
          : greeterProperties.getUserName();
        
        // ..

        GreetingConfig greetingConfig = new GreetingConfig();
        greetingConfig.put(USER_NAME, userName);
        // ...
        return greetingConfig;
    }

    @Bean
    @ConditionalOnMissingBean
    public Greeter greeter(GreetingConfig greetingConfig) {
        return new Greeter(greetingConfig);
    }
}

We also need to add a spring.factories file in the src/main/resources/META-INF directory with the following content:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.baeldung.greeter.autoconfigure.GreeterAutoConfiguration

On application startup, the GreeterAutoConfiguration class will run if the Greeter class is present on the classpath. If run successfully, it will populate the Spring application context with the GreetingConfig and Greeter beans, reading the properties via the GreeterProperties class.

The @ConditionalOnMissingBean annotation will ensure that these beans will only be created if they don’t already exist. This enables developers to completely override the auto-configured beans by defining their own in one of the @Configuration classes.
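
For example, a consumer of our starter could override the auto-configured Greeter by declaring its own bean. Here is a minimal sketch, reusing the Greeter and GreetingConfig classes from above:

@Configuration
public class CustomGreeterConfiguration {

    // GreeterAutoConfiguration declares its beans with @ConditionalOnMissingBean,
    // so this user-defined bean takes precedence and the auto-configured one backs off
    @Bean
    public Greeter greeter(GreetingConfig greetingConfig) {
        return new Greeter(greetingConfig);
    }
}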

3.2. Creating pom.xml

Now let’s create the starter pom which will bring in the dependencies for the auto-configure module and the greeter library.

As per the naming convention, all starters which are not managed by the core Spring Boot team should start with the library name, followed by the suffix -spring-boot-starter. So we’ll call our starter greeter-spring-boot-starter:

<project ...>
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.baeldung</groupId>
    <artifactId>greeter-spring-boot-starter</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <greeter.version>0.0.1-SNAPSHOT</greeter.version>
        <spring-boot.version>1.5.2.RELEASE</spring-boot.version>
    </properties>

    <dependencies>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
            <version>${spring-boot.version}</version>
        </dependency>

        <dependency>
            <groupId>com.baeldung</groupId>
            <artifactId>greeter-spring-boot-autoconfigure</artifactId>
            <version>${project.version}</version>
        </dependency>

        <dependency>
            <groupId>com.baeldung</groupId>
            <artifactId>greeter</artifactId>
            <version>${greeter.version}</version>
        </dependency>

    </dependencies>

</project>

3.3. Using the Starter

Let’s create greeter-spring-boot-sample-app which will use the starter. In the pom.xml we need to add it as a dependency:

<dependency>
    <groupId>com.baeldung</groupId>
    <artifactId>greeter-spring-boot-starter</artifactId>
    <version>${greeter-starter.version}</version>
</dependency>

Spring Boot will automatically configure everything and we will have a Greeter bean ready to be injected and used.

Let’s also change some of the default values of the GreeterProperties by defining them in the application.properties file with the baeldung.greeter prefix:

baeldung.greeter.userName=Baeldung
baeldung.greeter.afternoonMessage=Woha\ Afternoon

Finally, let’s use the Greeter bean in our application:

@SpringBootApplication
public class GreeterSampleApplication implements CommandLineRunner {

    @Autowired
    private Greeter greeter;

    public static void main(String[] args) {
        SpringApplication.run(GreeterSampleApplication.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        String message = greeter.greet();
        System.out.println(message);
    }
}

4. Conclusion

In this quick tutorial, we focused on rolling out a custom Spring Boot starter, and on how these starters, together with the autoconfigure mechanism, work in the background to eliminate a lot of manual configuration.

The complete source code for all the modules we created in this article can be found over on GitHub.

Generics in Kotlin

1. Overview

In this article, we’ll look at generic types in the Kotlin language.

They are very similar to those from the Java language, but the Kotlin language creators tried to make them a little bit more intuitive and understandable by introducing special keywords like out and in.

2. Creating Parameterized Classes

Let’s say that we want to create a parameterized class. We can easily do this in Kotlin by using generic types:

class ParameterizedClass<A>(private val value: A) {

    fun getValue(): A {
        return value
    }
}

We can create an instance of such a class by setting a parameterized type explicitly when using the constructor:

val parameterizedClass = ParameterizedClass<String>("string-value")

val res = parameterizedClass.getValue()

assertTrue(res is String)

Happily, Kotlin can infer the generic type from the parameter type, so we can omit it when using the constructor:

val parameterizedClass = ParameterizedClass("string-value")

val res = parameterizedClass.getValue()

assertTrue(res is String)

3. Kotlin out and in Keywords

3.1. The out Keyword

Let’s say that we want to create a producer class that will produce a result of some type T. Sometimes, we want to assign that produced value to a reference of a supertype of the type T.

To achieve that using Kotlin, we need to use the out keyword on the generic type. It means that we can assign this reference to any of its supertypes. The out value can only be produced by the given class, not consumed:

class ParameterizedProducer<out T>(private val value: T) {
    fun get(): T {
        return value
    }
}

We defined a ParameterizedProducer class that can produce a value of type T.

Next, we can assign an instance of the ParameterizedProducer class to a reference of a supertype of it:

val parameterizedProducer = ParameterizedProducer("string")

val ref: ParameterizedProducer<Any> = parameterizedProducer

assertTrue(ref is ParameterizedProducer<Any>)

If the type T in the ParameterizedProducer class were not declared as out, the given statement would produce a compiler error.

3.2. The in Keyword

Sometimes, we have the opposite situation: we have a reference of type T, and we want to be able to assign it to a reference of a subtype of T.

We can use the in keyword on the generic type if we want to assign it to the reference of its subtype. The in keyword can be used only on the parameter type that is consumed, not produced:

class ParameterizedConsumer<in T> {
    fun toString(value: T): String {
        return value.toString()
    }
}

We declare that a toString() method will only be consuming a value of type T.

Next, we can assign a ParameterizedConsumer of type Number to a reference parameterized with its subtype – Double:

val parameterizedConsumer = ParameterizedConsumer<Number>()

val ref: ParameterizedConsumer<Double> = parameterizedConsumer

assertTrue(ref is ParameterizedConsumer<Double>)

If the type T in the ParameterizedConsumer were not declared as in, the given statement would produce a compiler error.

4. Type Projections

4.1. Copy an Array of Subtypes to an Array of Supertypes

Let’s say that we have an array of some type, and we want to copy the whole array into the array of Any type. It is a valid operation, but to allow the compiler to compile our code we need to annotate the input parameter with the out keyword.

This lets the compiler know that the input argument can be an array of any type that is a subtype of Any:

fun copy(from: Array<out Any>, to: Array<Any?>) {
    assert(from.size == to.size)
    for (i in from.indices)
        to[i] = from[i]
}

If the from parameter were not of the out Any type, we would not be able to pass an array of Int as an argument:

val ints: Array<Int> = arrayOf(1, 2, 3)
val any: Array<Any?> = arrayOfNulls(3)

copy(ints, any)

assertEquals(any[0], 1)
assertEquals(any[1], 2)
assertEquals(any[2], 3)

4.2. Adding Elements of a Subtype to an Array of its Supertype

Let’s say that we have the following situation – we have an array of Any type that is a supertype of Int and we want to add an Int element to this array. We need to use the in keyword as a type of the destination array to let the compiler know that we can copy the Int value to this array:

fun fill(dest: Array<in Int>, value: Int) {
    dest[0] = value
}

Then, we can copy a value of the Int type to the array of Any:

val objects: Array<Any?> = arrayOfNulls(1)

fill(objects, 1)

assertEquals(objects[0], 1)

4.3. Star Projections

There are situations when we do not care about the specific type of the value. Let’s say that we just want to print all the elements of an array and it does not matter what the type of the elements in this array is.

To achieve that, we can use a star projection:

fun printArray(array: Array<*>) { 
    array.forEach { println(it) }
}

Then, we can pass an array of any type to the printArray() method:

val array = arrayOf(1,2,3) 
printArray(array)

When using the star projection reference type, we can read values from it, but we cannot write them because it will cause a compilation error.

5. Generic Constraints

Let’s say that we want to sort an array of elements, and each element type should implement a Comparable interface. We can use the generic constraints to specify that requirement:

fun <T: Comparable<T>> sort(list: List<T>): List<T> {
    return list.sorted()
}

In the given example, we specified that all elements of type T need to implement the Comparable interface. Otherwise, if we try to pass a list of elements that do not implement this interface, it will cause a compiler error.

We defined a sort function that takes as an argument a list of elements that implement Comparable, so we can call the sorted() method on it. Let’s look at the test case for that method:

val listOfInts = listOf(5,2,3,4,1)

val sorted = sort(listOfInts)

assertEquals(sorted, listOf(1,2,3,4,5))

We can easily pass a list of Ints because the Int type implements the Comparable interface.

6. Conclusion

In this article, we looked at Kotlin generic types. We saw how to use the out and in keywords properly. We used type projections and defined a generic method that uses generic constraints.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Introduction to Groovy Language

1. Overview

Groovy is a dynamic scripting language for the JVM. It compiles to bytecode and blends seamlessly with Java code and libraries.

In this article, we’re going to take a look at some of the essential features of Groovy, including basic syntax, control structures, and collections.

Then we will look at some of the main features that make it an attractive language, including null safety, implicit truth, operators, and strings.

2. Environment

If we want to use Groovy in Maven projects, we need to add the following to the pom.xml:

<build>
    <plugins>
        // ...
        <plugin>
            <groupId>org.codehaus.gmavenplus</groupId>
            <artifactId>gmavenplus-plugin</artifactId>
            <version>1.5</version>
       </plugin>
   </plugins>
</build>
<dependencies>
    // ...
    <dependency>
        <groupId>org.codehaus.groovy</groupId>
        <artifactId>groovy-all</artifactId>
        <version>2.4.10</version>
    </dependency>
</dependencies>

The most recent Maven plugin can be found here and the latest version of the groovy-all here.

3. Basic Features

There are many useful features in Groovy. Now, let’s look at the basic building blocks of the language and how it differs from Java.

3.1. Dynamic Typing

One of the most important features of Groovy is its support for dynamic typing.

Type definitions are optional and actual types are determined at runtime. Let’s take a look at these two classes:

class Duck {
    String getName() {
        'Duck'
    }
}
class Cat {
    String getName() {
        'Cat'
    }
}

Those two classes define the same getName method, but it is not defined explicitly in a contract.

Now, imagine that we have a list of objects containing ducks and cats that have the getName method. With Groovy, we can do the following:

Duck duck = new Duck()
Cat cat = new Cat()

def list = [duck, cat]
list.each { obj ->
    println obj.getName()
}

The code will compile, and the output of the code above would be:

Duck
Cat

3.2. Implicit Truthy Conversion

Like in JavaScript, Groovy evaluates every object to a boolean if required, e.g. when using it inside an if statement or when negating the value:

if("hello") {...}
if(15) {...}
if(someObject) {...}

There are a few simple rules to remember about this conversion:

  • Non-empty Collections, arrays, maps evaluate to true
  • Matcher with at least one match evaluates to true
  • Iterators and Enumerations with further elements are coerced to true
  • Non-empty Strings, GStrings and CharSequences, are coerced to true
  • Non-zero numbers are evaluated to true
  • Non-null object references are coerced to true

If we want to customize the implicit truthy conversion, we can define our own asBoolean() method.

3.3. Imports

Some packages get imported by default, and we don’t need to import them explicitly:

import java.lang.* 
import java.util.* 
import java.io.* 
import java.net.* 

import groovy.lang.* 
import groovy.util.* 

import java.math.BigInteger 
import java.math.BigDecimal

4. AST Transforms

AST (Abstract Syntax Tree) transforms allow us to hook into the Groovy compilation process and customize it to meet our needs. This is done at compilation time, so there is no performance penalty when running the application. We can create our own AST transformations, or we can use the built-in ones.

Let’s take a look at some annotations worth knowing.

4.1. Annotation TypeChecked

This annotation is used to force the compiler into strict type checking for the annotated pieces of code. The type checking mechanism is extensible, so we can provide even stricter type checking than is available in Java when desired.

Let’s take a look at the example below:

class Universe {
    @TypeChecked
    int answer() { "forty two" }
}

If we try to compile this code, we’ll observe the following error:

[Static type checking] - Cannot return value of type java.lang.String on method returning type int

The @TypeChecked annotation can be applied to classes and methods.

4.2. Annotation CompileStatic

This annotation allows the compiler to execute compile-time checks as it is done with Java code. After that, the compiler performs a static compilation, thus bypassing the Groovy metaobject protocol.

When a class is annotated, all methods, properties, fields, inner classes, etc. of the annotated class will be type-checked. When a method is annotated, static compilation is applied only to those items (closures and anonymous inner classes) that are enclosed by that method.

5. Properties

In Groovy, we can create POGOs (Plain Old Groovy Objects) that work the same way as POJOs in Java, although they’re more compact because getters and setters are automatically generated for public properties during compilation. It’s important to remember that they will be generated only if they’re not already defined.

This gives us the flexibility of defining attributes as open fields while retaining the ability to override the behavior when setting or getting the values.

Consider this object:

class Person {
    String name
    String lastName
}

Since the default scope for classes, fields, and methods is public – this is a public class, and the two fields are public.

The compiler will convert these into private fields and add getName(), setName(), getLastName() and setLastName() methods. If we define the setter and getter for a particular field, the compiler will not create a public method.

5.1. Shortcut Notations

Groovy offers a shortcut notation for getting and setting properties. Instead of the Java-way of calling getters and setters, we can use a field-like access notation:

resourceGroup.getResourcePrototype().getName() == SERVER_TYPE_NAME
resourceGroup.resourcePrototype.name == SERVER_TYPE_NAME

resourcePrototype.setName("something")
resourcePrototype.name = "something"

6. Operators

Let’s now take a look at new operators added on top of those known from plain Java.

6.1. Null-Safe Dereference

The most popular one is the null-safe dereference operator “?.”, which allows us to avoid a NullPointerException when calling a method or accessing a property of a null object. It’s especially useful in chained calls where a null value could occur at some point in the chain.

For example, we can safely call:

String name = person?.organization?.parent?.name

In the example above, if person, person.organization, or organization.parent is null, then null is returned.

6.2. Elvis Operator

The Elvis operator “?:” lets us condense ternary expressions. These two are equivalent:

String name = person.name ?: defaultName

and

String name = person.name ? person.name : defaultName

They both assign the value of person.name to the name variable if it is Groovy-true (in this case, not null and of non-zero length); otherwise, defaultName is assigned.

6.3. Spaceship Operator

The spaceship operator “<=>” is a relational operator that performs like Java’s compareTo() which compares two objects and returns -1, 0, or +1 depending on the values of both arguments.

If the left argument is greater than the right, the operator returns 1. If the left argument is less than the right, the operator returns −1. If the arguments are equal, 0 is returned.

The greatest advantage of using the comparison operators is the smooth handling of nulls such that x <=> y will never throw a NullPointerException:

println 5 <=> null

The above example will print 1 as a result.

7. Strings

There are multiple ways of expressing string literals. The double-quoted strings familiar from Java are supported, but single quotes are also allowed when preferred.

Multi-line strings, sometimes called heredocs in other languages, are also supported, using triple quotes (either single or double).

Strings defined with double quotes support interpolation using the ${} syntax:

def name = "Bill Gates"
def greeting = "Hello, ${name}"

In fact, any expression can be placed inside the ${}:

def name = "Bill Gates"
def greeting = "Hello, ${name.toUpperCase()}"

A String with double quotes is called a GString if it contains an expression ${}, otherwise, it is a plain String object.

The code below will run without failing the test:

def a = "hello" 
assert a.class.name == 'java.lang.String'

def b = 'hello'
assert b.class.name == 'java.lang.String'

def c = "${b}"
assert c.class.name == 'org.codehaus.groovy.runtime.GStringImpl'

8. Collections and Maps

Let’s take a look at how some basic data structures are handled.

8.1. Lists

Here’s some code to add a few elements to a new instance of ArrayList in Java:

List<String> list = new ArrayList<>();
list.add("Hello");
list.add("World");

And here’s the same operation in Groovy:

List list = ['Hello', 'World']

Lists are by default of type java.util.ArrayList and can also be declared explicitly by calling the corresponding constructor.

There isn’t a separate syntax for a Set, but we can use type coercion for that. Either use:

Set greeting = ['Hello', 'World']

or:

def greeting = ['Hello', 'World'] as Set

8.2. Map

The syntax for a Map is similar, albeit a bit more verbose, because we need to be able to specify keys and values delimited with colons:

def key = 'Key3'
def aMap = [
    'Key1': 'Value 1', 
    Key2: 'Value 2',
    (key): 'Another value'
]

After this initialization, we will get a new LinkedHashMap with the entries: Key1 -> Value 1, Key2 -> Value 2, Key3 -> Another value.

We can access entries in the map in many ways:

println aMap['Key1']
println aMap[key]
println aMap.Key1

9. Control Structures

9.1. Conditionals: if-else

Groovy supports the conditional if/else syntax as expected:

if (...) {
    // ...
} else if (...) {
    // ...
} else {
    // ...
}

9.2. Conditionals: switch-case

The switch statement is backward compatible with Java code so that we can fall through cases sharing the same code for multiple matches.

The most important difference is that a switch can perform matching against multiple different value types:

def x = 1.23
def result = ""

switch ( x ) {
    case "foo":
        result = "found foo"
        break

    case "bar":
        result += "bar"
        break

    case [4, 5, 6, 'inList']:
        result = "list"
        break

    case 12..30:
        result = "range"
        break

    case Number:
        result = "number"
        break

    case ~/fo*/: 
        result = "foo regex"
        break

    case { it < 0 }: // or { x < 0 }
        result = "negative"
        break

    default:
        result = "default"
}

println(result)

The example above will print number.

9.3. Loops: while

Groovy supports the usual while loops like Java does:

def x = 0
def y = 5

while ( y-- > 0 ) {
    x++
}

9.4. Loops: for

Groovy embraces simplicity and strongly encourages for loops of the following structure:

for (variable in iterable) { body }

The for loop iterates over iterable. Frequently used iterables are ranges, collections, maps, arrays, iterators, and enumerations. In fact, any object can be an iterable.

Braces around the body are optional if it consists of only one statement. Below are examples of iterating over a range, list, array, map, and strings:

def x = 0
for ( i in 0..9 ) {
    x += i
}

x = 0
for ( i in [0, 1, 2, 3, 4] ) {
    x += i
}

def array = (0..4).toArray()
x = 0
for ( i in array ) {
    x += i
}

def map = ['abc':1, 'def':2, 'xyz':3]
x = 0
for ( e in map ) {
    x += e.value
}

x = 0
for ( v in map.values() ) {
    x += v
}

def text = "abc"
def list = []
for (c in text) {
    list.add(c)
}

Object iteration makes the Groovy for-loop a sophisticated control structure. It’s a valid counterpart to using methods that iterate over an object with closures, such as using Collection’s each method.

The main difference is that the body of a for loop isn’t a closure; it is a plain block:

for (x in 0..9) { println x }

whereas this body is a closure:

(0..9).each { println it }

Even though they look similar, they’re very different in construction.

A closure is an object of its own and has different features. It can be constructed in a different place and passed to the each method. However, the body of the for-loop is directly generated as bytecode at its point of appearance. No special scoping rules apply.

10. Exception Handling

The big difference from Java is that checked exception handling is not enforced.

To handle general exceptions, we can place the potentially exception-causing code in a try/catch block:

try {
    someActionThatWillThrowAnException()
} catch (e) {
    // log the error message, and/or handle in some way
}

By not declaring the type of exception we catch, any exception will be caught here.

11. Closures

Simply put, a closure is an anonymous block of executable code which can be passed to variables and has access to data in the context where it was defined.

They’re also similar to anonymous inner classes, although they don’t implement an interface or extend a base class. They are similar to lambdas in Java.

Interestingly, Groovy can take full advantage of the JDK additions that have been introduced to support lambdas, especially the streaming API. We can always use closures where lambda expressions are expected.

Let’s consider the example below:

def helloWorld = {
    println "Hello World"
}

The variable helloWorld now holds a reference to the closure, and we can execute it by calling its call method:

helloWorld.call()

Groovy lets us use a more natural method call syntax – it invokes the call method for us:

helloWorld()

11.1. Parameters

Like methods, closures can have parameters. There are three variants.

In the simplest variant, because nothing is declared, there is only one parameter with the default name it. A closure that prints whatever it is sent would be:

def printTheParam = { println it }

We could call it like this:

printTheParam('hello')
printTheParam 'hello'

We can also expect parameters in closures and pass them when calling:

def power = { int x, int y ->
    return Math.pow(x, y)
}
println power(2, 3)

The type definition of parameters is the same as for variables. If we define a type, we can only use this type; we can also omit it and pass in anything we want:

def say = { what ->
    println what
}
say "Hello World"

11.2. Optional Return

The last statement of a closure may be implicitly returned without the need to write a return statement. This can be used to reduce the boilerplate code to a minimum. Thus a closure that calculates the square of a number can be shortened as follows:

def square = { it * it }
println square(4)

This closure makes use of the implicit parameter it and the optional return statement.

12. Conclusion

This article provided a quick introduction to the Groovy language and its key features. We started by introducing simple concepts such as basic syntax, conditional statements, and operators. We also demonstrated some more advanced features such as AST transformations and closures.

If you want to find more information about the language and its semantics, you may go directly to the official site.

Java Web Weekly, Issue 173

Lots of interesting writeups on Java 9 this week.

Here we go…

1. Spring and Java

>> Spring Tips: Season 2 Recap [spring.io]

A summary of the Spring Tips series, including integrations with jOOQ, Couchbase, MyBatis, and a lot more.

>> Light at the End of the Long Tunnel for Java EE 8 [infoq.com]

Looks like the wait for Java EE 8 is finally coming to an end.

>> Custom collectors in Java 8 [frankel.ch]

Java 8 comes with an overwhelming set of collectors for the Stream API but sometimes even this is not enough and you need to create your own collectors – which might be more complicated than you thought.

>> Togglz aspect with Spring Boot [insaneprogramming.be]

A quick and practical guide to using Togglz with Boot.

>> Java 9 modules – JPMS basics [joda.org]

Another solid guide to modularity in Java 9.

>> Critical Deficiencies in Jigsaw (JSR-376, Java Platform Module System) [developer.jboss.org]

The Red Hat team raised multiple issues regarding the current implementation of Project Jigsaw. It looks like multiple compromises were made when developing the new module system for Java.

>> 8 Ways to use the features of your database with Hibernate [thoughts-on-java.org]

There are quite a few common misconceptions about Hibernate – one of which is that it can only be used for simple mapping. It turns out you can call database functions, stored procedures, map views, and quite a bit more.

>> Want to Know What’s in a GC Pause? Go Look at the GC Log! [infoq.com]

GC logs can be a source of crucial information if you know how to read them – which can be tricky because GC logging is not thread-safe (prior to Java 9).

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Stop sweeping your failing tests under the RUG [ontestautomation.com]

Instead of retrying your tests till they green out, it might be a better move to invest in fixing problems with the system, or with the test itself.

Also worth reading:

3. Musings

>> Elements of Helpful Code Documentation [daedtech.com]

Discovering APIs by experimenting with them is fun but not very efficient (especially from the customer side of things). By taking care of documentation, we can get a lot more productive in the long term.

>> Alternatives to Lines of Code [daedtech.com]

It’s not a secret that measuring productivity using LoC/day is less than ideal and can be hacked easily, although it still appears quite attractive to some managers.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> My mother taught herself Ruby on Rails over the weekend [dilbert.com]

>> Does the strategy create itself? [dilbert.com]

>> Setting up my dev environment [dilbert.com]

5. Pick of the Week

>> Unnecessary Qualifiers [m.signalvnoise.com]

A Guide to Java EE Web-Related Annotations

1. Overview

Java EE annotations make developers’ lives easier by allowing them to specify how application components should behave in a container. They are modern alternatives to XML descriptors and basically make it possible to avoid boilerplate code.

In this article, we’ll focus on annotations introduced with Servlet API 3.1 in Java EE 7. We will examine their purpose and look at their usage.

2. Web Annotations

Servlet API 3.1 introduced a new set of annotation types that can be used in Servlet classes:

  • @WebServlet
  • @WebInitParam
  • @WebFilter
  • @WebListener
  • @ServletSecurity
  • @HttpConstraint
  • @HttpMethodConstraint
  • @MultipartConfig

We’ll examine them in detail in the next sections.

3. @WebServlet

Simply put, this annotation allows us to declare Java classes as servlets:

@WebServlet("/account")
public class AccountServlet extends javax.servlet.http.HttpServlet {
    
    public void doGet(HttpServletRequest request, HttpServletResponse response) 
      throws IOException {
        // ...
    }
 
    public void doPost(HttpServletRequest request, HttpServletResponse response) 
      throws IOException {
        // ...
    }
}

3.1. Using Attributes of @WebServlet Annotation

@WebServlet has a set of attributes that allow us to customize the servlet:

  • name
  • description
  • urlPatterns
  • initParams

We can use these as shown in the example below:

@WebServlet(
  name = "BankAccountServlet", 
  description = "Represents a Bank Account and it's transactions", 
  urlPatterns = {"/account", "/bankAccount" }, 
  initParams = { @WebInitParam(name = "type", value = "savings")})
public class AccountServlet extends javax.servlet.http.HttpServlet {

    String accountType = null;

    public void init(ServletConfig config) throws ServletException {
        // ...
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) 
      throws IOException {
        // ... 
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response) 
      throws IOException {
        // ...  
    }
}

The name attribute overrides the default servlet name which is the fully qualified class name by default. If we want to provide a description of what the servlet does, we can use the description attribute.

The urlPatterns attribute is used to specify the URL(s) at which the servlet is available (multiple values can be provided to this attribute as shown in the code example).

4. @WebInitParam

This annotation is used within the initParams attribute of the @WebServlet annotation to specify the servlet’s initialization parameters.

In this example, we set the servlet initialization parameter type to the value of ‘savings’:

@WebServlet(
  name = "BankAccountServlet", 
  description = "Represents a Bank Account and it's transactions", 
  urlPatterns = {"/account", "/bankAccount" }, 
  initParams = { @WebInitParam(name = "type", value = "savings")})
public class AccountServlet extends javax.servlet.http.HttpServlet {

    String accountType = null;

    public void init(ServletConfig config) throws ServletException {
        accountType = config.getInitParameter("type");
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response) 
      throws IOException {
        // ...
    }
}

5. @WebFilter

If we want to alter the request and response of a servlet without touching its internal logic, we can use the @WebFilter annotation. We can associate filters with a servlet, or with a group of servlets and static content, by specifying a URL pattern.

In the example below we are using the @WebFilter annotation to redirect any unauthorized access to the login page:

@WebFilter(
  urlPatterns = "/account/*",
  filterName = "LoggingFilter",
  description = "Filter all account transaction URLs")
public class LogInFilter implements javax.servlet.Filter {
    
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void doFilter(
        ServletRequest request, ServletResponse response, FilterChain chain) 
          throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;

        res.sendRedirect(req.getContextPath() + "/login.jsp");
        chain.doFilter(request, response);
    }

    public void destroy() {
    }

}

6. @WebListener

Should we want knowledge or control over how and when a servlet and its requests are initialized or changed, we can use the @WebListener annotation.

To write a web listener, we need to implement one or more of the following interfaces:

  • ServletContextListener  – for notifications about the ServletContext lifecycle
  • ServletContextAttributeListener – for notifications when a ServletContext attribute is changed
  • ServletRequestListener – for notifications whenever a request for a resource is made
  • ServletRequestAttributeListener – for notifications when an attribute is added, removed or changed in a ServletRequest
  • HttpSessionListener – for notifications when a new session is created and destroyed
  • HttpSessionAttributeListener – for notifications when a new attribute is being added to or removed from a session

Below is an example of how we can use a ServletContextListener to configure a web application:

@WebListener
public class BankAppServletContextListener 
  implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) { 
        sce.getServletContext().setAttribute("ATTR_DEFAULT_LANGUAGE", "english"); 
    } 
    
    public void contextDestroyed(ServletContextEvent sce) { 
        // ... 
    } 
}
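
Similarly, here is a minimal sketch of an HttpSessionListener that counts active sessions (the class name and counter field are our own, purely for illustration):

@WebListener
public class SessionCounterListener implements HttpSessionListener {

    // hypothetical counter, not part of the Servlet API
    private final AtomicInteger activeSessions = new AtomicInteger();

    public void sessionCreated(HttpSessionEvent se) {
        activeSessions.incrementAndGet();
    }

    public void sessionDestroyed(HttpSessionEvent se) {
        activeSessions.decrementAndGet();
    }
}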

7. @ServletSecurity

When we want to specify the security model for our servlet, including roles, access control, and authentication requirements, we use the @ServletSecurity annotation.

In this example we will restrict access to our AccountServlet using the @ServletSecurity annotation:

@WebServlet(
  name = "BankAccountServlet", 
  description = "Represents a Bank Account and it's transactions", 
  urlPatterns = {"/account", "/bankAccount" }, 
  initParams = { @WebInitParam(name = "type", value = "savings")})
@ServletSecurity(
  value = @HttpConstraint(rolesAllowed = {"Member"}),
  httpMethodConstraints = {@HttpMethodConstraint(value = "POST", rolesAllowed = {"Admin"})})
public class AccountServlet extends javax.servlet.http.HttpServlet {

    String accountType = null;

    public void init(ServletConfig config) throws ServletException {
        // ...
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) 
      throws IOException {
       // ...
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response) 
      throws IOException {        
        double accountBalance = 1000d;

        String paramDepositAmt = request.getParameter("dep");
        double depositAmt = Double.parseDouble(paramDepositAmt);
        accountBalance = accountBalance + depositAmt;

        PrintWriter writer = response.getWriter();
        writer.println("<html> Balance of " + accountType + " account is: " + accountBalance 
        + "</html>");
        writer.flush();
    }
}

In this case, when invoking the AccountServlet, the browser pops up a login screen for the user to enter a valid username and password.

We can use the @HttpConstraint and @HttpMethodConstraint annotations to specify values for the value and httpMethodConstraints attributes of the @ServletSecurity annotation.

The @HttpConstraint annotation applies to all HTTP methods. In other words, it specifies the default security constraint.

@HttpConstraint has three attributes:

  • value
  • rolesAllowed
  • transportGuarantee

Out of these attributes, the most commonly used attribute is rolesAllowed. In the example code snippet above, users who belong to the role Member are allowed to invoke all HTTP methods.

The @HttpMethodConstraint annotation allows us to specify the security constraints of a particular HTTP method.

@HttpMethodConstraint has the following attributes:

  • value
  • emptyRoleSemantic
  • rolesAllowed
  • transportGuarantee

The example code snippet above shows how the doPost method is restricted to users who belong to the Admin role, allowing the deposit function to be performed only by an Admin user.

8. @MultipartConfig

This annotation is used when we need to annotate a servlet to handle multipart/form-data requests (typically used for a File Upload servlet).

This exposes the getParts() and getPart(name) methods of HttpServletRequest, which can be used to access all parts as well as an individual part.

The uploaded file can be written to disk by calling the write(fileName) method of the Part object.

Now we will look at an example servlet UploadCustomerDocumentsServlet that demonstrates its usage:

@WebServlet(urlPatterns = { "/uploadCustDocs" })
@MultipartConfig(
  fileSizeThreshold = 1024 * 1024 * 20,
  maxFileSize = 1024 * 1024 * 20,
  maxRequestSize = 1024 * 1024 * 25,
  location = "./custDocs")
public class UploadCustomerDocumentsServlet extends HttpServlet {

    protected void doPost(HttpServletRequest request, HttpServletResponse response) 
      throws ServletException, IOException {
        for (Part part : request.getParts()) {
            part.write("myFile");
        }
    }

}

@MultipartConfig has four attributes:

  • fileSizeThreshold – This is the size threshold when saving the uploaded file temporarily. If the uploaded file’s size is greater than this threshold, it will be stored on the disk. Otherwise, the file is stored in memory (size in bytes)
  • maxFileSize  – This is the maximum size of the uploaded file (size in bytes)
  • maxRequestSize – This is the highest size of the request, including both uploaded files and other form data (size in bytes)
  • location – This is the directory where uploaded files are stored

9. Conclusion

In this article, we looked at some Java EE annotations introduced with the Servlet API 3.1, and examined their purpose and usage.

Source code related to this article can be found over on GitHub.

Introduction to Apache Commons Math

1. Overview

We’re frequently in need of using mathematical tools, and sometimes java.lang.Math is simply not enough. Fortunately, Apache Commons has the goal of filling in the gaps of the standard library, with Apache Commons Math.

Apache Commons Math is the biggest open-source library of mathematical functions and utilities for Java. Given that this article is just an introduction, we will give an overview of the library and present the most compelling use cases.

2. Starting with Apache Commons Math

2.1. The Usages of Apache Commons Math

Apache Commons Math consists of mathematical functions (erf for instance), structures representing mathematical concepts (like complex numbers, polynomials, vectors, etc.), and algorithms that we can apply to these structures (root finding, optimization, curve fitting, computation of intersections of geometrical figures, etc.).

2.2. Maven Configuration

If you’re using Maven, simply add this dependency:

<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-math3</artifactId>
  <version>3.6.1</version>
</dependency>

2.3. Package Overview

Apache Commons Math is divided into several packages:

  • org.apache.commons.math3.stat – statistics and statistical tests
  • org.apache.commons.math3.distribution – probability distributions
  • org.apache.commons.math3.random – random numbers, strings and data generation
  • org.apache.commons.math3.analysis – root finding, integration, interpolation, polynomials, etc.
  • org.apache.commons.math3.linear – matrices, solving linear systems
  • org.apache.commons.math3.geometry – geometry (Euclidean spaces and binary space partitioning)
  • org.apache.commons.math3.transform – transform methods (fast Fourier)
  • org.apache.commons.math3.ode – ordinary differential equations integration
  • org.apache.commons.math3.fitting – curve fitting
  • org.apache.commons.math3.optim – function maximization or minimization
  • org.apache.commons.math3.genetics – genetic algorithms
  • org.apache.commons.math3.ml – machine learning (clustering and neural networks)
  • org.apache.commons.math3.util – common math/stat functions extending java.lang.Math
  • org.apache.commons.math3.special – special functions (Gamma, Beta)
  • org.apache.commons.math3.complex – complex numbers
  • org.apache.commons.math3.fraction – rational numbers

3. Statistics, Probabilities, and Randomness

3.1. Statistics

The package org.apache.commons.math3.stat provides several tools for statistical computations. For example, to compute mean, standard deviation, and many more, we can use DescriptiveStatistics:

double[] values = new double[] { 65, 51, 16, 11, 6519, 191, 0, 98, 19854, 1, 32 };
DescriptiveStatistics descriptiveStatistics = new DescriptiveStatistics();
for (double v : values) {
    descriptiveStatistics.addValue(v);
}

double mean = descriptiveStatistics.getMean();
double median = descriptiveStatistics.getPercentile(50);
double standardDeviation = descriptiveStatistics.getStandardDeviation();

In this package, we can find tools for computing the covariance, correlation, or to perform statistical tests (using TestUtils).
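
For instance, here is a minimal sketch of computing the covariance and the Pearson correlation of two data sets, using Covariance and PearsonsCorrelation from org.apache.commons.math3.stat.correlation (the sample arrays are made up for illustration):

double[] x = { 1, 2, 3, 4, 5 };
double[] y = { 2, 4, 5, 4, 5 };

// pairwise covariance and Pearson correlation of the two samples
double covariance = new Covariance().covariance(x, y);
double correlation = new PearsonsCorrelation().correlation(x, y);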

3.2. Probabilities and Distributions

In core Java, Math.random() can be used for generating random values, but these values are uniformly distributed between 0 and 1.

Sometimes, we want to produce a random value using a more complex distribution. For this, we can use the framework provided by org.apache.commons.math3.distribution.

Here is how to generate random values according to the normal distribution with the mean of 10 and the standard deviation of 3:

NormalDistribution normalDistribution = new NormalDistribution(10, 3);
double randomValue = normalDistribution.sample();

Or we can obtain the probability P(X = x) of getting a value for discrete distributions, or the cumulative probability P(X <= x) for continuous distributions.
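
For example, reusing the normalDistribution defined above, and assuming a hypothetical binomial distribution with 20 trials and a 0.5 success probability, a quick sketch could look like this:

// P(X <= 13) for the continuous normal distribution defined above
double cumulative = normalDistribution.cumulativeProbability(13);

// P(X = 7) for a discrete binomial distribution (20 trials, p = 0.5)
BinomialDistribution binomialDistribution = new BinomialDistribution(20, 0.5);
double mass = binomialDistribution.probability(7);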

4. Analysis

Analysis related functions and algorithms can be found in org.apache.commons.math3.analysis.

4.1. Root Finding

A root is a value where a function has the value of 0. Commons-Math includes implementations of several root-finding algorithms.

Here, we try to find the root of v -> (v * v) - 2:

UnivariateFunction function = v -> Math.pow(v, 2) - 2;
UnivariateSolver solver = new BracketingNthOrderBrentSolver(1.0e-12, 1.0e-8, 5);
double c = solver.solve(100, function, -10.0, 10.0, 0);

First, we start by defining the function, then we define the solver, and we set the desired accuracy. Finally, we call the solve() API.

The root-finding operation will be performed using several iterations, so it’s a matter of finding a compromise between execution time and accuracy.

4.2. Calculating Integrals

The integration works almost like root finding:

UnivariateFunction function = v -> v;
UnivariateIntegrator integrator = new SimpsonIntegrator(1.0e-12, 1.0e-8, 1, 32);
double i = integrator.integrate(100, function, 0, 10);

We start by defining a function, we choose an integrator among the available integration solutions, we set the desired accuracy, and finally, we integrate.

5. Linear Algebra

If we have a linear system of equations under the form AX = B, where A is a matrix of real numbers and B a vector of real numbers, Commons Math provides structures to represent both the matrix and the vector, and also provides solvers to find the value of X:

RealMatrix a = new Array2DRowRealMatrix(
  new double[][] { { 2, 3, -2 }, { -1, 7, 6 }, { 4, -3, -5 } },
  false);
RealVector b = new ArrayRealVector(
  new double[] { 1, -2, 1 },
  false);

DecompositionSolver solver = new LUDecomposition(a).getSolver();

RealVector solution = solver.solve(b);

The case is pretty straightforward: we define a matrix a from a two-dimensional array of doubles, and a vector b from an array of doubles.

Then, we create an LUDecomposition, which provides a solver for equations under the form AX = B. As its name states, LUDecomposition relies on the LU decomposition, and thus works only with square matrices.

For other matrices, different solvers exist, usually solving the equation using the least square method.

6. Geometry

The package org.apache.commons.math3.geometry provides several classes for representing geometrical objects and several tools to manipulate them. It is important to note that this package is divided into different sub-packages, depending on the kind of geometry we want to use:

  • org.apache.commons.math3.geometry.euclidean.oned – 1D Euclidean geometry
  • org.apache.commons.math3.geometry.euclidean.twod – 2D Euclidean geometry
  • org.apache.commons.math3.geometry.euclidean.threed – 3D Euclidean geometry
  • org.apache.commons.math3.geometry.spherical.oned – 1D spherical geometry
  • org.apache.commons.math3.geometry.spherical.twod – 2D spherical geometry

The most useful classes are probably Vector2D, Vector3D, Line, and Segment. They are used for representing 2D vectors (or points), 3D vectors, lines, and segments respectively.

When using the classes mentioned above, it is possible to perform some computations. For instance, the following code calculates the intersection of two 2D lines:

Line l1 = new Line(new Vector2D(0, 0), new Vector2D(1, 1), 0);
Line l2 = new Line(new Vector2D(0, 1), new Vector2D(1, 1.5), 0);

Vector2D intersection = l1.intersection(l2);

It is also feasible to use these structures to get the distance of a point to a line, or the closest point of a line to another line (in 3D).
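
As a quick sketch of the first case, here is how we could compute the distance from a point to the line l1 defined above (the point coordinates are arbitrary):

// distance between an arbitrary point and the line l1
Vector2D point = new Vector2D(2, 3);
double distance = l1.distance(point);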

7. Optimization, Genetic Algorithms, and Machine Learning

Commons-Math also provides some tools and algorithms for more complex tasks related to optimization and machine learning.

7.1. Optimization

Optimization usually consists of minimizing or maximizing cost functions. Algorithms for optimization can be found in org.apache.commons.math3.optim and org.apache.commons.math3.optimization. They include linear and nonlinear optimization algorithms.

We can note that there are duplicate classes in the optim and optimization packages: the optimization package is mostly deprecated and will be removed in Commons Math 4.

7.2. Genetic Algorithms

Genetic algorithms are a kind of meta-heuristic: they are a way of finding an acceptable solution to a problem when deterministic algorithms are too slow. An overview of genetic algorithms can be found here.

The package org.apache.commons.math3.genetics provides a framework to perform computations using genetic algorithms. It contains structure that can be used to represent a population and a chromosome, and standard algorithms to perform mutation, crossover, and selection operations.

Classes such as GeneticAlgorithm, Population, and Chromosome give a good starting point.

7.3. Machine Learning

Machine learning in Commons-Math is divided into two parts: clustering and neural networks.

The clustering part consists of putting a label on vectors according to their similarity regarding a distance metric. The clustering algorithms provided are based on the K-means algorithm.
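
As an illustration, here is a minimal sketch that clusters a few 2D points into two groups using KMeansPlusPlusClusterer and DoublePoint from org.apache.commons.math3.ml.clustering (the sample points are made up):

List<DoublePoint> points = Arrays.asList(
  new DoublePoint(new double[] { 1, 1 }),
  new DoublePoint(new double[] { 1.5, 2 }),
  new DoublePoint(new double[] { 10, 10 }),
  new DoublePoint(new double[] { 11, 9 }));

// split the points into 2 clusters using K-Means++
KMeansPlusPlusClusterer<DoublePoint> clusterer = new KMeansPlusPlusClusterer<>(2);
List<CentroidCluster<DoublePoint>> clusters = clusterer.cluster(points);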

The neural network part gives classes to represent networks (Network) and neurons (Neuron). One may note that the provided functions are limited compared to the most common neural network frameworks, but it can still be useful for small applications with low requirements.

8. Utilities

8.1. FastMath

FastMath is a class of static methods located in org.apache.commons.math3.util that works exactly like java.lang.Math.

Its purpose is to provide at least the same functions that we can find in java.lang.Math, but with faster implementations. So, when a program relies heavily on mathematical computations, it is a good idea to replace calls to Math.sin() (for instance) with calls to FastMath.sin() to improve the performance of the application. On the other hand, note that FastMath is less accurate than java.lang.Math.
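
Since FastMath mirrors the java.lang.Math API, swapping one for the other is a one-word change; a trivial sketch:

// drop-in replacement: FastMath exposes the same method signatures
double fast = FastMath.sin(0.5);
double standard = Math.sin(0.5); // the results agree closely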

8.2. Common and Special Functions

Commons-Math provides standard mathematical functions that are not implemented in java.lang.Math (like factorial). Most of these functions can be found in the packages org.apache.commons.math3.special and org.apache.commons.math3.util.

For instance, if we want to compute the factorial of 10 we can simply do:

long factorial = CombinatoricsUtils.factorial(10);

Functions related to arithmetic (gcd, lcm, etc.) can be found in ArithmeticUtils, and functions related to combinatorics can be found in CombinatoricsUtils. Some other special functions, like erf, can be accessed in org.apache.commons.math3.special.
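
For instance, a quick sketch of the gcd and erf functions could look like this (the input values are arbitrary):

// greatest common divisor, from ArithmeticUtils
int gcd = ArithmeticUtils.gcd(1024, 656); // 16

// error function, from org.apache.commons.math3.special.Erf
double erfValue = Erf.erf(1.0); // roughly 0.8427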

8.3. Fraction and Complex Numbers

It is also possible to handle more complex types using commons-math: fractions and complex numbers. These structures allow us to perform specific computations on these kinds of numbers.

For instance, we can compute the sum of two fractions and display the result as a string representation of a fraction (i.e. under the form “a / b”):

Fraction lhs = new Fraction(1, 3);
Fraction rhs = new Fraction(2, 5);
Fraction sum = lhs.add(rhs);

String str = new FractionFormat().format(sum);

Or, we can quickly raise one complex number to the power of another:

Complex first = new Complex(1.0, 3.0);
Complex second = new Complex(2.0, 5.0);

Complex power = first.pow(second);

9. Conclusion

In this tutorial, we presented a few of the interesting things you can do using Apache Commons Math.

Unfortunately, this article can’t cover the whole field of analysis or linear algebra, and thus, only provides examples for the most common situations.

However, for more information, we can read the well-written documentation, which provides a lot of details for all aspects of the library.

And, as always, the code samples can be found here on GitHub.

A Quick Guide to Spring @Value


1. Overview

In this quick article, we’re going to have a look at the @Value Spring annotation.

This annotation can be used for injecting values into fields in Spring-managed beans and it can be applied at the field or constructor/method parameter level.

2. Setting up the Application

To describe different kinds of usage for this annotation, we need to configure a simple Spring application configuration class.

And naturally, we’ll need a properties file to define the values we want to inject with the @Value annotation. And so, we’ll first need to define a @PropertySource in our configuration class – with the properties file name.
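
For instance, a minimal configuration class might look like the following sketch; the class name and the values.properties file name are assumptions for illustration:

@Configuration
@PropertySource("classpath:values.properties") // hypothetical file name
public class ValuesConfig {
    // beans with @Value-annotated fields go here
}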

Let’s define the properties file:

value.from.file=Value got from the file
priority=Properties file
listOfValues=A,B,C

3. Usage Examples

As a basic and mostly useless usage example, we can inject “string value” from the annotation into the field:

@Value("string value")
private String stringValue;

Using the @PropertySource annotation allows us to work with values from properties files with the @Value annotation. In the following example we get “Value got from the file” assigned to the field:

@Value("${value.from.file}")
private String valueFromFile;

We can also set the value from system properties with the same syntax. Let’s assume that we have defined a system property named systemValue and look at the following sample:

@Value("${systemValue}")
private String systemValue;

Default values can be provided for properties that might not be defined. In this example the value “some default” will be injected:

@Value("${unknown.param:some default}")
private String someDefault;

If the same property is defined as a system property and in the properties file, then the system property would be applied.

Suppose we had a property priority defined as a system property with the value “System property” and defined as something else in the properties file. In the following code the value would be “System property”:

@Value("${priority}")
private String prioritySystemProperty;

Sometimes we need to inject a bunch of values. It would be convenient to define them as comma-separated values for a single property in the properties file, or as a system property, and inject them into an array. In the first section, we defined comma-separated values in the listOfValues property of the properties file, so in the following example the array values would be [“A”, “B”, “C”]:

@Value("${listOfValues}")
private String[] valuesArray;

4. Advanced Examples with SpEL

We can also use SpEL expressions to get the value. If we have a system property named priority, then its value will be applied to the field in the next example:

@Value("#{systemProperties['priority']}")
private String spelValue;

If we have not defined the system property, then the null value will be assigned. To prevent this, we can provide a default value in the SpEL expression. In the following example, we get “some default” value for the field if the system property is not defined:

@Value("#{systemProperties['unknown'] ?: 'some default'}")
private String spelSomeDefault;

Furthermore, we can use a field value from other beans. Suppose we have a bean named someBean with a field someValue equal to 10. Then 10 will be assigned to the field in this example:

@Value("#{someBean.someValue}")
private Integer someBeanValue;

We can manipulate properties to get a List of values. In the following sample, we get a list of string values A, B, and C:

@Value("#{'${listOfValues}'.split(',')}")
private List<String> valuesList;

5. Conclusion

In this quick tutorial, we examined the various possibilities of using the @Value annotation with simple properties defined in the file, with system properties, and with properties calculated with SpEL expressions.

As always the example application is available on GitHub project.


RabbitMQ Message Dispatching with Spring AMQP


1. Introduction

In this article, we’ll explore the concept of fanout and topic exchanges with Spring AMQP and RabbitMQ.

At a high level, fanout exchanges will broadcast the same message to all bound queues, while topic exchanges use a routing key for passing messages to a particular bound queue or queues.

Prior reading of Messaging With Spring AMQP is recommended for this article.

2. Setting Up a Fanout Exchange

Let’s set up one fanout exchange with two bound queues.

Spring AMQP allows us to aggregate all the declarations of queues, exchanges, and bindings from a Java configuration in a single collection, returned as a List<Declarable>:

@Bean
public List<Declarable> fanoutBindings() {
    Queue fanoutQueue1 = new Queue("fanout.queue1", false);
    Queue fanoutQueue2 = new Queue("fanout.queue2", false);
    FanoutExchange fanoutExchange = new FanoutExchange("fanout.exchange");

    return Arrays.asList(
      fanoutQueue1,
      fanoutQueue2,
      fanoutExchange,
      BindingBuilder.bind(fanoutQueue1).to(fanoutExchange),
      BindingBuilder.bind(fanoutQueue2).to(fanoutExchange));
}

With this configuration, we expect that both queues will get all the messages sent to this exchange.

3. Setting Up a Topic Exchange

We’ll also set up a topic exchange with two bound queues each with specific routing keys:

@Bean
public List<Declarable> topicBindings() {
    Queue topicQueue1 = new Queue(topicQueue1Name, false);
    Queue topicQueue2 = new Queue(topicQueue2Name, false);

    TopicExchange topicExchange = new TopicExchange(topicExchangeName);

    return Arrays.asList(
      topicQueue1,
      topicQueue2,
      topicExchange,
      BindingBuilder
        .bind(topicQueue1)
        .to(topicExchange).with("*.important.*"),
      BindingBuilder
        .bind(topicQueue2)
        .to(topicExchange).with("#.error"));
}

With this configuration, we expect topicQueue1 to get all messages with routing keys having a three-word pattern with the middle word being “important” – for example: “user.important.error” or “blog.important.notification”.

The second queue will get all messages with routing keys ending in error; matching examples are “user.important.error” or “blog.post.save.error”.

4. Setting Up a Producer

We’ll need to create a message producer to use the exchanges we configured. The BroadcastMessageProducer class will use a RabbitTemplate and its method convertAndSend to send messages:

@Component
public class BroadcastMessageProducer {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    public void sendMessages(String message) {
        rabbitTemplate.convertAndSend(
          SpringAmqpConfig.fanoutExchangeName, "", message);
        rabbitTemplate.convertAndSend(
          SpringAmqpConfig.topicExchangeName, "user.not-important.info", message);
        rabbitTemplate.convertAndSend(
          SpringAmqpConfig.topicExchangeName, "user.important.error", message);
    }
}

The RabbitTemplate provides many overloaded convertAndSend() methods for different exchange types.

When we send a message to the fanout exchange, the routing key is ignored, and the message is passed to all bound queues.

When we send a message to the topic exchange, we need to pass a routing key. Based on this routing key the message will be delivered to specific queues.

5. Configuring Consumers

Finally, let’s set up four consumers – one for each queue – to pick up the messages produced:

@Component
public class BroadcastMessageConsumers {

    @RabbitListener(queues = {SpringAmqpConfig.fanoutQueue1Name})
    public void receiveMessageFromFanout1(String message) {
    }

    @RabbitListener(queues = {SpringAmqpConfig.fanoutQueue2Name})
    public void receiveMessageFromFanout2(String message) {
    }

    @RabbitListener(queues = {SpringAmqpConfig.topicQueue1Name})
    public void receiveMessageFromTopic1(String message) {
    }

    @RabbitListener(queues = {SpringAmqpConfig.topicQueue2Name})
    public void receiveMessageFromTopic2(String message) {
    }
}

We configure consumers using the @RabbitListener annotation. The only argument passed here is the name of the queue. Consumers are not aware of exchanges or routing keys here.

6. Running the Example

Our sample project is a Spring Boot application, and so it will initialize the application together with a connection to RabbitMQ and set up all queues, exchanges, and bindings.

By default, our application expects a RabbitMQ instance running on the localhost on port 5672. You can modify the defaults in application.yaml.

Our project exposes an HTTP endpoint at the URI /broadcast, which accepts POST requests with a message in the request body.
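
A minimal controller for that endpoint could look like the following sketch; the class name here is an assumption, and it simply delegates to the producer from the earlier section:

@RestController
public class BroadcastController { // hypothetical name

    @Autowired
    private BroadcastMessageProducer messageProducer;

    @PostMapping("/broadcast")
    public void broadcast(@RequestBody String message) {
        messageProducer.sendMessages(message);
    }
}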

When we send a request to this URI with the body “Test”, we should see something similar to this in the log output:

2017-04-14 12:27:59.611  INFO 4534 --- [cTaskExecutor-1] c.b.springamqpsimple.MessageConsumers    : Received fanout 1 message:Test
2017-04-14 12:27:59.611  INFO 4534 --- [cTaskExecutor-1] c.b.springamqpsimple.MessageConsumers    : Received topic 2 message: Test
2017-04-14 12:27:59.611  INFO 4534 --- [cTaskExecutor-1] c.b.springamqpsimple.MessageConsumers    : Received fanout 2 message: Test
2017-04-14 12:27:59.611  INFO 4534 --- [cTaskExecutor-1] c.b.springamqpsimple.MessageConsumers    : Received topic 1 message: Test
2017-04-14 12:27:59.612  INFO 4534 --- [cTaskExecutor-1] c.b.springamqpsimple.MessageConsumers    : Received topic 2 message: Test

The order in which you will see these messages is, of course, not guaranteed.

7. Conclusion

In this quick tutorial, we covered fanout and topic exchanges with Spring AMQP and RabbitMQ.

The complete source code and all code snippets for this article are available on GitHub repository.

Spring Remoting with AMQP


1. Overview

We saw in the previous installments of the series how we can leverage Spring Remoting and the related technologies to enable synchronous Remote Procedure Calls on top of an HTTP channel between a server and a client.

In this article, we’ll explore Spring Remoting on top of AMQP, which makes it possible to execute synchronous RPC while leveraging a medium that is inherently asynchronous.

2. Installing RabbitMQ

There are various messaging systems that are compatible with AMQP that we could use, and we choose RabbitMQ because it’s a proven platform and it’s fully supported in Spring – both products are managed by the same company (Pivotal).

If you’re not acquainted with AMQP or RabbitMQ you can read our quick introduction.

So, the first step is to install and start RabbitMQ. There are various ways to install it – just choose your preferred method following the steps mentioned in the official guide.

3. Maven Dependencies

We are going to set up server and client Spring Boot applications to show how AMQP Remoting works. As is often the case with Spring Boot, we just have to choose and import the correct starter dependencies, as explained here:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>

We explicitly excluded the spring-boot-starter-tomcat because we don’t need any embedded HTTP server – that would be automatically started instead if we allowed Maven to import all the transitive dependencies in the classpath.

4. Server Application

4.1. Expose the Service

As we showed in the previous articles, we’ll expose a CabBookingService that simulates a likely remote service.
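
As a reminder, the service interface looks roughly like this; a sketch based on the earlier installments, with the exact types living in that article's sources:

public interface CabBookingService {
    // books a ride from the given pick-up location
    Booking bookRide(String pickUpLocation) throws BookingException;
}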

Let’s start by declaring a bean that implements the interface of the service we want to make remotely callable. This is the bean that will actually execute the service call at server-side:

@Bean 
CabBookingService bookingService() {
    return new CabBookingServiceImpl();
}

Let’s then define the queue from which the server will retrieve invocations. It’s enough in this case to specify a name for it, providing it in the constructor:

@Bean 
Queue queue() {
    return new Queue("remotingQueue");
}

As we already know from the previous articles, one of the main concepts of Spring Remoting is the Service Exporter, the component that collects the invocation requests from some source (in this case, a RabbitMQ queue) and invokes the desired method on the service implementation.

In this case, we define an AmqpInvokerServiceExporter that, as you can see, needs a reference to an AmqpTemplate. The AmqpTemplate class is provided by the Spring Framework and eases the handling of AMQP-compatible messaging systems, the same way the JdbcTemplate makes it easier to deal with databases.

We won’t explicitly define such an AmqpTemplate bean because it will be automatically provided by Spring Boot’s auto-configuration module:

@Bean AmqpInvokerServiceExporter exporter(
  CabBookingService implementation, AmqpTemplate template) {
 
    AmqpInvokerServiceExporter exporter = new AmqpInvokerServiceExporter();
    exporter.setServiceInterface(CabBookingService.class);
    exporter.setService(implementation);
    exporter.setAmqpTemplate(template);
    return exporter;
}

Finally, we need to define a container that has the responsibility to consume messages from the queue and forward them to some specified listener.

We’ll then connect this container to the service exporter we created in the previous step, to allow it to receive the queued messages. Here the ConnectionFactory is automatically provided by Spring Boot, the same way the AmqpTemplate is:

@Bean 
SimpleMessageListenerContainer listener(
  ConnectionFactory factory, 
  AmqpInvokerServiceExporter exporter, 
  Queue queue) {
 
    SimpleMessageListenerContainer container
     = new SimpleMessageListenerContainer(factory);
    container.setMessageListener(exporter);
    container.setQueueNames(queue.getName());
    return container;
}

4.2. Configuration

Let’s remember to set up the application.properties file to allow Spring Boot to configure the basic objects. Obviously, the values of the parameters will also depend on the way RabbitMQ has been installed.

For instance, the following could be a reasonable configuration when RabbitMQ runs on the same machine where this example runs:

spring.rabbitmq.dynamic=true
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.rabbitmq.host=localhost

5. Client Application

5.1. Invoke the Remote Service

Let’s tackle the client now. Again, we need to define the queue where invocation messages will be written. We need to double-check that both client and server use the same name:

@Bean 
Queue queue() {
    return new Queue("remotingQueue");
}

At client-side, we need a slightly more complex setup than on the server side. In fact, we need to define an Exchange with the related Binding:

@Bean 
DirectExchange directExchange() {
    return new DirectExchange("remoting.exchange");
}

@Bean 
Binding binding(Queue someQueue, DirectExchange directExchange) {
    return BindingBuilder
      .bind(someQueue)
      .to(directExchange)
      .with("remoting.binding");
}

A good intro to the main concepts of RabbitMQ, such as Exchanges and Bindings, is available here.

Since Spring Boot does not auto-configure the AmqpTemplate, we must set one up ourselves, specifying a routing key. In doing so, we need to double-check that the routing key and the exchange match the ones used to define the Exchange and Binding in the previous step:

@Bean RabbitTemplate amqpTemplate(ConnectionFactory factory) {
    RabbitTemplate template = new RabbitTemplate(factory);
    template.setRoutingKey("remoting.binding");
    template.setExchange("remoting.exchange");
    return template;
}

Then, as we did with other Spring Remoting implementations, we define a FactoryBean that will produce local proxies of the service that is remotely exposed. Nothing too fancy here; we just need to provide the interface of the remote service:

@Bean AmqpProxyFactoryBean amqpFactoryBean(AmqpTemplate amqpTemplate) {
    AmqpProxyFactoryBean factoryBean = new AmqpProxyFactoryBean();
    factoryBean.setServiceInterface(CabBookingService.class);
    factoryBean.setAmqpTemplate(amqpTemplate);
    return factoryBean;
}

We can now use the remote service as if it were declared as a local bean:

CabBookingService service = context.getBean(CabBookingService.class);
out.println(service.bookRide("13 Seagate Blvd, Key Largo, FL 33037"));

5.2. Setup

Also for the client application, we have to properly choose the values in the application.properties file. In a common setup, those would exactly match the ones used on the server side.

5.3. Run the Example

This should be enough to demonstrate the remote invocation through RabbitMQ. Let’s then start RabbitMQ, the server application, and the client application that invokes the remote service.

What happens behind the scenes is that the AmqpProxyFactoryBean will build a proxy that implements the CabBookingService.

When a method is invoked on that proxy, it queues a message on RabbitMQ, specifying in it all the parameters of the invocation and the name of a queue to be used to send back the result.

The message is consumed by the AmqpInvokerServiceExporter, which invokes the actual implementation. It then collects the result in a message and places it on the queue whose name was specified in the incoming message.

The AmqpProxyFactoryBean receives the result back and, finally, returns the value that was originally produced on the server side.

6. Conclusion

In this article, we saw how we can use Spring Remoting to provide RPC on top of a messaging system.

It’s probably not the way to go for scenarios where we’d rather leverage the asynchronicity of RabbitMQ, but in some selected and limited scenarios, a synchronous call can be easier to understand and quicker and simpler to develop.

As usual, you’ll find the sources over on GitHub.

Create a Custom Auto-Configuration with Spring Boot


1. Overview

Simply put, the Spring Boot autoconfiguration represents a way to automatically configure a Spring application based on the dependencies that are present on the classpath.

This can make development faster and easier by eliminating the need for defining certain beans that are included in the auto-configuration classes.

In the following section, we’re going to take a look at creating our custom Spring Boot auto-configuration.

2. Maven Dependencies

Let’s start with the dependencies that we need:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>1.5.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>6.0.6</version>
</dependency>

The latest versions of spring-boot-starter-data-jpa and mysql-connector-java can be downloaded from Maven Central.

3. Creating a Custom Auto-Configuration

To create a custom auto-configuration, we need to create a class annotated with @Configuration and register it.

Let’s create a custom configuration for a MySQL data source:

@Configuration
public class MySQLAutoconfiguration {
    //...
}

The next mandatory step is registering the class as an auto-configuration candidate, by adding the name of the class under the key org.springframework.boot.autoconfigure.EnableAutoConfiguration in the standard file resources/META-INF/spring.factories:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.baeldung.autoconfiguration.MySQLAutoconfiguration

If we want our auto-configuration class to have priority over other auto-configuration candidates, we can add the @AutoConfigureOrder(Ordered.HIGHEST_PRECEDENCE) annotation.
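
As a sketch, the annotation sits directly on the configuration class:

@Configuration
@AutoConfigureOrder(Ordered.HIGHEST_PRECEDENCE)
public class MySQLAutoconfiguration {
    //...
}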

Auto-configuration is designed using classes and beans marked with @Conditional annotations so that the auto-configuration or specific parts of it can be replaced.

Note that the auto-configuration is only in effect if the auto-configured beans are not already defined in the application. If we define our own bean, it will override the default one.

3.1. Class Conditions

Class conditions allow us to specify that a configuration bean will be included if a specified class is present using the @ConditionalOnClass annotation, or if a class is absent using the @ConditionalOnMissingClass annotation.

Let’s specify that our MySQLAutoconfiguration will only be loaded if the class DataSource is present, in which case we can assume the application will use a database:

@Configuration
@ConditionalOnClass(DataSource.class)
public class MySQLAutoconfiguration {
    //...
}

3.2. Bean Conditions

If we want to include a bean only if a specified bean is present or not, we can use the @ConditionalOnBean and @ConditionalOnMissingBean annotations.

To exemplify this, let’s add an entityManagerFactory bean to our configuration class, and specify we only want this bean to be created if a bean called dataSource is present and if a bean called entityManagerFactory is not already defined:

@Bean
@ConditionalOnBean(name = "dataSource")
@ConditionalOnMissingBean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean em
      = new LocalContainerEntityManagerFactoryBean();
    em.setDataSource(dataSource());
    em.setPackagesToScan("com.baeldung.autoconfiguration.example");
    em.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
    if (additionalProperties() != null) {
        em.setJpaProperties(additionalProperties());
    }
    return em;
}

Let’s also configure a transactionManager bean that will only be loaded if a bean of type JpaTransactionManager is not already defined:

@Bean
@ConditionalOnMissingBean(type = "JpaTransactionManager")
JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(entityManagerFactory);
    return transactionManager;
}

3.3. Property Conditions

The @ConditionalOnProperty annotation is used to specify if a configuration will be loaded based on the presence and value of a Spring Environment property.

First, let’s add a property source file for our configuration that will determine where the properties will be read from:

@PropertySource("classpath:mysql.properties")
public class MySQLAutoconfiguration {
    //...
}

We can configure the main DataSource bean that will be used to create connections to the database in such a way that it will only be loaded if a property called usemysql is present.

We can use the attribute havingValue to specify certain values of the usemysql property that have to be matched.

Let’s define the dataSource bean with default values that connect to a local database called myDb if the usemysql property is set to local:

@Bean
@ConditionalOnProperty(
  name = "usemysql", 
  havingValue = "local")
@ConditionalOnMissingBean
public DataSource dataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
 
    dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://localhost:3306/myDb?createDatabaseIfNotExist=true");
    dataSource.setUsername("mysqluser");
    dataSource.setPassword("mysqlpass");

    return dataSource;
}

If the usemysql property is set to custom, the dataSource bean will be configured using custom properties values for the database URL, user, and password:

@Bean(name = "dataSource")
@ConditionalOnProperty(
  name = "usemysql", 
  havingValue = "custom")
@ConditionalOnMissingBean
public DataSource dataSource2() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
        
    dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
    dataSource.setUrl(env.getProperty("mysql.url"));
    dataSource.setUsername(env.getProperty("mysql.user") != null 
      ? env.getProperty("mysql.user") : "");
    dataSource.setPassword(env.getProperty("mysql.pass") != null 
      ? env.getProperty("mysql.pass") : "");
        
    return dataSource;
}

The mysql.properties file will contain the usemysql property:

usemysql=local

If an application that uses the MySQLAutoconfiguration wishes to override the default properties, all it needs to do is add different values for the mysql.url, mysql.user and mysql.pass properties and the usemysql=custom line in the mysql.properties file.

3.4. Resource Conditions

Adding the @ConditionalOnResource annotation means that the configuration will only be loaded when a specified resource is present.

Let’s define a method called additionalProperties() that will return a Properties object containing Hibernate-specific properties to be used by the entityManagerFactory bean, only if the resource file mysql.properties is present:

@ConditionalOnResource(
  resources = "classpath:mysql.properties")
@Conditional(HibernateCondition.class)
Properties additionalProperties() {
    Properties hibernateProperties = new Properties();

    hibernateProperties.setProperty("hibernate.hbm2ddl.auto", 
      env.getProperty("mysql-hibernate.hbm2ddl.auto"));
    hibernateProperties.setProperty("hibernate.dialect", 
      env.getProperty("mysql-hibernate.dialect"));
    hibernateProperties.setProperty("hibernate.show_sql", 
      env.getProperty("mysql-hibernate.show_sql") != null 
      ? env.getProperty("mysql-hibernate.show_sql") : "false");
    return hibernateProperties;
}

We can add the Hibernate specific properties to the mysql.properties file:

mysql-hibernate.dialect=org.hibernate.dialect.MySQLDialect
mysql-hibernate.show_sql=true
mysql-hibernate.hbm2ddl.auto=create-drop

3.5. Custom Conditions

If we don’t want to use any of the conditions available in Spring Boot, we can also define custom conditions by extending the SpringBootCondition class and overriding the getMatchOutcome() method.

Let’s create a condition called HibernateCondition for our additionalProperties() method that will verify whether a HibernateEntityManager class is present on the classpath:

static class HibernateCondition extends SpringBootCondition {

    private static final String[] CLASS_NAMES
      = { "org.hibernate.ejb.HibernateEntityManager", 
          "org.hibernate.jpa.HibernateEntityManager" };

    @Override
    public ConditionOutcome getMatchOutcome(ConditionContext context, 
      AnnotatedTypeMetadata metadata) {
 
        ConditionMessage.Builder message
          = ConditionMessage.forCondition("Hibernate");
        return Arrays.stream(CLASS_NAMES)
          .filter(className -> ClassUtils.isPresent(className, context.getClassLoader()))
          .map(className -> ConditionOutcome
            .match(message.found("class")
            .items(Style.NORMAL, className)))
          .findAny()
          .orElseGet(() -> ConditionOutcome
            .noMatch(message.didNotFind("class", "classes")
            .items(Style.NORMAL, Arrays.asList(CLASS_NAMES))));
    }
}

Then we can add the condition to the additionalProperties() method:

@Conditional(HibernateCondition.class)
Properties additionalProperties() {
  //...
}

3.6. Application Conditions

We can also specify that the configuration can be loaded only inside/outside a web context, by adding the @ConditionalOnWebApplication or @ConditionalOnNotWebApplication annotation.
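
For instance, a configuration restricted to web applications would look like this sketch (the class name is hypothetical):

@Configuration
@ConditionalOnWebApplication
public class MySQLWebConfiguration { // hypothetical class
    //...
}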

4. Testing the Auto-Configuration

Let’s create a very simple example to test our auto-configuration. We will create an entity class called MyUser, and a MyUserRepository interface using Spring Data:

@Entity
public class MyUser {
    @Id
    private String email;

    // standard constructor, getters, setters
}

public interface MyUserRepository 
  extends JpaRepository<MyUser, String> { }

To enable auto-configuration, we can use one of the @SpringBootApplication or @EnableAutoConfiguration annotations:

@SpringBootApplication
public class AutoconfigurationApplication {
    public static void main(String[] args) {
        SpringApplication.run(AutoconfigurationApplication.class, args);
    }
}

Next, let’s write a JUnit test that saves a MyUser entity:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(
  classes = AutoconfigurationApplication.class)
@EnableJpaRepositories(
  basePackages = { "com.baeldung.autoconfiguration.example" })
public class AutoconfigurationTest {

    @Autowired
    private MyUserRepository userRepository;

    @Test
    public void whenSaveUser_thenOk() {
        MyUser user = new MyUser("user@email.com");
        userRepository.save(user);
    }
}

Since we have not defined our DataSource configuration, the application will use the auto-configuration we have created to connect to a MySQL database called myDb.

The connection string contains the createDatabaseIfNotExist=true property, so the database does not need to exist. However, the user mysqluser (or the one specified through the mysql.user property, if present) needs to be created beforehand.

We can check the application log to see that the MySQL data source is being used:

web - 2017-04-12 00:01:33,956 [main] INFO  o.s.j.d.DriverManagerDataSource - Loaded JDBC driver: com.mysql.cj.jdbc.Driver

5. Disabling Auto-Configuration Classes

If we wanted to exclude the auto-configuration from being loaded, we could add the @EnableAutoConfiguration annotation with exclude or excludeName attribute to a configuration class:

@Configuration
@EnableAutoConfiguration(
  exclude={MySQLAutoconfiguration.class})
public class AutoconfigurationApplication {
    //...
}

Another option to disable specific auto-configurations is by setting the spring.autoconfigure.exclude property:

spring.autoconfigure.exclude=com.baeldung.autoconfiguration.MySQLAutoconfiguration

6. Conclusions

In this tutorial, we’ve shown how to create a custom Spring Boot auto-configuration. The full source code of the example can be found over on GitHub.

The JUnit test can be run using the autoconfiguration profile: mvn clean install -Pautoconfiguration.

A Guide to Java SynchronousQueue


1. Overview

In this article, we’ll be looking at the SynchronousQueue from the java.util.concurrent package.

Simply put, this implementation allows us to exchange information between threads in a thread-safe manner.

2. API Overview

The two core operations of the SynchronousQueue are take() and put(), and both of them are blocking.

For example, when we want to add an element to the queue, we need to call the put() method. That method will block until some other thread calls the take() method, signaling that it is ready to take an element.

Although the SynchronousQueue implements the Queue interface, we should think about it as an exchange point for a single element between two threads, in which one thread is handing off an element, and another thread is taking that element.

3. Implementing Handoffs Using a Shared Variable

To see why the SynchronousQueue can be so useful, we will first implement the logic using a shared variable between two threads; next, we will rewrite that logic using a SynchronousQueue, making our code a lot simpler and more readable.

Let’s say that we have two threads – a producer and a consumer – and when the producer is setting a value of a shared variable, we want to signal that fact to the consumer thread. Next, the consumer thread will fetch a value from a shared variable.

We will use a CountDownLatch to coordinate those two threads, to prevent a situation where the consumer accesses the value of the shared variable before it has been set.

We will define a sharedState variable and a CountDownLatch that will be used for coordinating processing:

ExecutorService executor = Executors.newFixedThreadPool(2);
AtomicInteger sharedState = new AtomicInteger();
CountDownLatch countDownLatch = new CountDownLatch(1);

The producer will save a random integer to the sharedState variable, and execute the countDown() method on the countDownLatch, signaling to the consumer that it can fetch a value from the sharedState:

Runnable producer = () -> {
    Integer producedElement = ThreadLocalRandom
      .current()
      .nextInt();
    // log the handoff, matching the output shown below
    System.out.println("Saving an element: " + producedElement + " to the exchange point");
    sharedState.set(producedElement);
    countDownLatch.countDown();
};

The consumer will wait on the countDownLatch using the await() method. When the producer signals that the variable was set, the consumer will fetch it from the sharedState:

Runnable consumer = () -> {
    try {
        countDownLatch.await();
        Integer consumedElement = sharedState.get();
        // log the handoff, matching the output shown below
        System.out.println("consumed an element: " + consumedElement + " from the exchange point");
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
};

Last but not least, let’s start our program:

executor.execute(producer);
executor.execute(consumer);

executor.awaitTermination(500, TimeUnit.MILLISECONDS);
executor.shutdown();
assertEquals(0, countDownLatch.getCount());

It will produce the following output:

Saving an element: -1507375353 to the exchange point
consumed an element: -1507375353 from the exchange point

We can see that this is a lot of code to implement such a simple functionality as exchanging an element between two threads. In the next section, we will try to make it better.

4. Implementing Handoffs Using the SynchronousQueue

Let’s now implement the same functionality as in the previous section, but with a SynchronousQueue. It has a double effect because we can use it for exchanging state between threads and for coordinating that action so that we don’t need to use anything besides SynchronousQueue.

Firstly, we will define a queue:

ExecutorService executor = Executors.newFixedThreadPool(2);
SynchronousQueue<Integer> queue = new SynchronousQueue<>();

The producer will call a put() method that will block until some other thread takes an element from the queue:

Runnable producer = () -> {
    Integer producedElement = ThreadLocalRandom
      .current()
      .nextInt();
    try {
        // log the handoff, matching the output shown below
        System.out.println("Saving an element: " + producedElement + " to the exchange point");
        queue.put(producedElement);
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
};

The consumer will simply retrieve that element using the take() method:

Runnable consumer = () -> {
    try {
        Integer consumedElement = queue.take();
        // log the handoff, matching the output shown below
        System.out.println("consumed an element: " + consumedElement + " from the exchange point");
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
};

Next, we will start our program:

executor.execute(producer);
executor.execute(consumer);

executor.awaitTermination(500, TimeUnit.MILLISECONDS);
executor.shutdown();
assertEquals(0, queue.size());

It will produce the following output:

Saving an element: 339626897 to the exchange point
consumed an element: 339626897 from the exchange point

We can see that a SynchronousQueue is used as an exchange point between the threads, which is a lot better and more understandable than the previous example which used the shared state together with a CountDownLatch.

5. Conclusion

In this quick tutorial, we looked at the SynchronousQueue construct. We created a program that exchanges data between two threads using shared state, and then rewrote that program to leverage the SynchronousQueue construct. This serves as an exchange point that coordinates the producer and the consumer thread.

The implementation of all these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.

Logout in a OAuth Secured Application


1. Overview

In this quick tutorial, we’re going to show how we can add logout functionality to an OAuth Spring Security application.

We’ll, of course, use the OAuth application described in a previous article – Creating a REST API with OAuth2.

2. Remove the Access Token

Simply put, logging out in an OAuth-secured environment involves rendering the user’s Access Token invalid – so it can no longer be used.

In a JdbcTokenStore-based implementation, this means removing the token from the TokenStore.

Let’s implement a delete operation for the token. We’re going to use the primary /oauth/token URL structure here and simply introduce a new DELETE operation for it.

Now, because we’re actually using the /oauth/token URI here – we need to handle it carefully. We won’t be able to simply add this to any controller – because the framework already has operations mapped to that URI – with POST and GET.

Instead, what we need to do is define this as a @FrameworkEndpoint – so that it gets picked up and resolved by the FrameworkEndpointHandlerMapping instead of the standard RequestMappingHandlerMapping. That way we won’t run into any partial matches and we won’t have any conflicts:

@FrameworkEndpoint
public class RevokeTokenEndpoint {

    @Resource(name = "tokenServices")
    ConsumerTokenServices tokenServices;

    @RequestMapping(method = RequestMethod.DELETE, value = "/oauth/token")
    @ResponseBody
    public void revokeToken(HttpServletRequest request) {
        String authorization = request.getHeader("Authorization");
        if (authorization != null && authorization.contains("Bearer")){
            String tokenId = authorization.substring("Bearer".length()+1);
            tokenServices.revokeToken(tokenId);
        }
    }
}

Notice how we’re extracting the token out of the request, simply using the standard Authorization header.

3. Remove the Refresh Token

In a previous article on Handling the Refresh Token we have set up our application to be able to refresh the Access Token, using a Refresh Token. This implementation makes use of a Zuul proxy – with a CustomPostZuulFilter to add the refresh_token value received from the Authorization Server to a refreshToken cookie.

When revoking the Access Token, as shown in the previous section, the Refresh Token associated with it is also invalidated. However, the httpOnly cookie will remain set on the client, given that we can’t remove it via JavaScript – so we need to remove it from the server side.

Let’s enhance the CustomPostZuulFilter implementation so that it intercepts DELETE requests to the /oauth/token URL and removes the refreshToken cookie when encountering such a request:

@Component
public class CustomPostZuulFilter extends ZuulFilter {
    //...
    @Override
    public Object run() {
        //...
        String requestMethod = ctx.getRequest().getMethod();
        if (requestURI.contains("oauth/token") && requestMethod.equals("DELETE")) {
            Cookie cookie = new Cookie("refreshToken", "");
            cookie.setMaxAge(0);
            cookie.setPath(ctx.getRequest().getContextPath() + "/oauth/token");
            ctx.getResponse().addCookie(cookie);
        }
        //...
    }
}

4. Remove the Access Token from the AngularJS Client

Besides revoking the access token from the token store, the access_token cookie will also need to be removed from the client side.

Let’s add a method to our AngularJS controller that clears the access_token cookie and calls the /oauth/token DELETE mapping:

$scope.logout = function() {
    logout($scope.loginData);
}
function logout(params) {
    var req = {
        method: 'DELETE',
        url: "oauth/token"
    }
    $http(req).then(
        function(data){
            $cookies.remove("access_token");
            window.location.href="login";
        },function(){
            console.log("error");
        }
    );
}

This function will be called when clicking on the Logout link:

<a class="btn btn-info" href="#" ng-click="logout()">Logout</a>

5. Conclusion

In this quick but in-depth tutorial, we’ve shown how we can log out a user from an OAuth-secured application and invalidate that user’s tokens.

The full source code of the examples can be found over on GitHub.
