
Concurrent Strategies using MDBs


1. Introduction

Message Driven Beans, also known as “MDBs”, handle message processing in an asynchronous context. We can learn the basics of MDBs in this article.

This tutorial will discuss some strategies and best practices to implement concurrency using Message Driven Beans.

If you want to understand more about the basics of concurrency using Java, you can get started here.

In order to better use MDBs and concurrency, there are some considerations to make. It’s important to keep in mind that those considerations should be driven by the business rules and the needs of our application.

2. Tuning the Thread Pool

Tuning the Thread Pool is probably the main point of attention. To make good use of concurrency, we must tune the number of MDB instances available to consume messages. When one instance is busy handling a message, other instances are able to pick up the next ones.

The MessageListener thread is responsible for executing the onMessage method of an MDB. This thread is part of the MessageListener thread pool, which means that it’s pooled and reused over and over again. This pool also has a configuration that allows us to set the number of threads, which may impact performance:

  • setting a small pool size will cause messages to be consumed slowly (“MDB Throttling”)
  • setting a very large pool size might decrease performance – or not even work at all.

On Wildfly, we can set this value by accessing the management console. JMS capability isn’t enabled on the default standalone profile; we need to start the server using the full profile.

Usually, on a local installation, we access it through http://127.0.0.1:9990/console/index.html. After that, we need to access Configuration / Subsystems / Messaging / Server, select our server and click “View”.

Choose the “Attributes” tab, click on “Edit” and change the value of “Thread Pool Max Size”. The default value is 30.

3. Tuning Max Sessions

Another configurable property to be aware of is Maximum Sessions. This defines the concurrency for a particular listener port. Usually, this defaults to 1 but increasing it can give more scalability and availability to the MDB application.

We can configure it either by annotations or .xml descriptors. Through annotations, we use the @ActivationConfigProperty:

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(
        propertyName = "maxSession", propertyValue = "50"
    )
})

If the chosen method of configuration is .xml descriptors we can configure maxSession like this:

<activation-config>
    <activation-config-property>
        <activation-config-property-name>maxSession</activation-config-property-name>
        <activation-config-property-value>50</activation-config-property-value>
    </activation-config-property>
</activation-config>
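For context, a complete MDB using the annotation-based configuration might look like the following sketch; the queue name jms/ordersQueue and the processing logic are hypothetical placeholders:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(
        propertyName = "destinationLookup", propertyValue = "jms/ordersQueue"),
    @ActivationConfigProperty(
        propertyName = "maxSession", propertyValue = "50")
})
public class OrderProcessorBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // up to 50 sessions may execute this method concurrently
        try {
            if (message instanceof TextMessage) {
                process(((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(String payload) {
        // the actual business logic would go here
    }
}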

4. Deployment Environment

When we have a requirement for high availability, we should consider deploying the MDB on an application server cluster. Thus, it can execute on any of the servers in the cluster and many application servers can invoke it concurrently, which also improves scalability.

For this particular case, we have an important choice to make:

  • make all servers in the cluster eligible to receive messages, which lets us use all of the cluster’s processing power, or
  • ensure message processing in a sequential manner by allowing just one server to receive them at a time

If we use an enterprise bus, a good practice is to deploy the MDB to the same server or cluster as the bus member to optimize the messaging performance.

5. Message Model and Message Types

Although this isn’t as clear as just setting another value to a pool, the message model and the message type might affect one of the best advantages of using concurrency: performance.

When choosing XML for a message type, for instance, the size of the message can affect the time spent to process it. This is an important consideration especially if the application handles a large number of messages.

Regarding the message model, if the application needs to send the same message to a lot of consumers, a publish-subscribe model might be the right choice. This would reduce the overhead of processing the messages, providing better performance.

To consume from a Topic on a publish-subscribe model, we can use annotations:

@ActivationConfigProperty(
  propertyName = "destinationType", 
  propertyValue = "javax.jms.Topic")

Again, we can also configure those values in a .xml deployment descriptor:

<activation-config>
    <activation-config-property>
        <activation-config-property-name>destinationType</activation-config-property-name>
        <activation-config-property-value>javax.jms.Topic</activation-config-property-value>
    </activation-config-property>
</activation-config>

If sending the very same message to many consumers isn’t a requirement, the regular PTP (Point-to-Point) model would suffice.

To consume from a Queue, we set the annotation as:

@ActivationConfigProperty(
  propertyName = "destinationType", 
  propertyValue = "javax.jms.Queue")

If we’re using .xml deployment descriptor, we can set it:

<activation-config>
    <activation-config-property>
        <activation-config-property-name>destinationType</activation-config-property-name>
        <activation-config-property-value>javax.jms.Queue</activation-config-property-value>
    </activation-config-property>
</activation-config>

6. Conclusion

As many computer scientists and IT writers have stated, processor speeds are no longer increasing at a fast pace. To make our programs run faster, we need to take advantage of the many processors and cores available today.

This article discussed some best practices for getting the most out of concurrency using MDBs.


Set the Time Zone of a Date in Java


1. Overview

In this quick tutorial, we’ll see how to set the time zone of a date using Java 7, Java 8 and the Joda-Time library.

2. Using Java 8

Java 8 introduced a new Date-Time API for working with dates and times which was largely based on the Joda-Time library.

The Instant class from Java Date Time API models a single instantaneous point on the timeline in UTC. This represents the count of nanoseconds since the epoch of the first moment of 1970 UTC.

First, we’ll obtain the current Instant from the system clock and ZoneId for a time zone name:

Instant nowUtc = Instant.now();
ZoneId asiaSingapore = ZoneId.of("Asia/Singapore");

Finally, the ZoneId and Instant can be utilized to create a date-time object with time-zone details. The ZonedDateTime class represents a date-time with a time-zone in the ISO-8601 calendar system:

ZonedDateTime nowAsiaSingapore = ZonedDateTime.ofInstant(nowUtc, asiaSingapore);

We’ve used Java 8’s ZonedDateTime to represent a date-time with a time zone.
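As a follow-up sketch, if we later need the same instant in a different time zone, ZonedDateTime can convert it directly:

ZonedDateTime nowEuropeParis = nowAsiaSingapore
  .withZoneSameInstant(ZoneId.of("Europe/Paris"));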

3. Using Java 7

In Java 7, setting the time-zone is a bit tricky. The Date class (which represents a specific instant in time) doesn’t contain any time zone information.

First, let’s get the current UTC date and a TimeZone object:

Date nowUtc = new Date();
TimeZone asiaSingapore = TimeZone.getTimeZone("Asia/Singapore");

In Java 7, we need to use the Calendar class to represent a date with a time zone.

Finally, we can create a nowUtc Calendar with the asiaSingapore TimeZone and set the time:

Calendar nowAsiaSingapore = Calendar.getInstance(asiaSingapore);
nowAsiaSingapore.setTime(nowUtc);
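To verify that the zone is applied, we can format the Calendar’s time with a SimpleDateFormat bound to the same TimeZone – a small sketch:

DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");
formatter.setTimeZone(asiaSingapore);
System.out.println(formatter.format(nowAsiaSingapore.getTime()));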

It’s recommended to avoid the Java 7 date-time API in favor of the Java 8 date-time API or the Joda-Time library.

4. Using Joda-Time

If Java 8 isn’t an option, we can still get the same kind of result from Joda-Time, a de-facto standard for date-time operations in the pre-Java 8 world.

First, we need to add the Joda-Time dependency to pom.xml:

<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>2.10</version>
</dependency>

To represent an exact point on the timeline we can use Instant from org.joda.time package. Internally, the class holds one piece of data, the instant as milliseconds from the Java epoch of 1970-01-01T00:00:00Z:

Instant nowUtc = Instant.now();

We’ll use DateTimeZone to represent a time-zone (for the specified time zone id):

DateTimeZone asiaSingapore = DateTimeZone.forID("Asia/Singapore");

Now the nowUtc time will be converted to a DateTime object using the time zone information:

DateTime nowAsiaSingapore = nowUtc.toDateTime(asiaSingapore);

This is how the Joda-Time API can be used to combine date and time zone information.

5. Conclusion

In this article, we found out how to set the time zone in Java using Java 7, 8 and Joda-Time API. To learn more about Java 8’s date-time support check out our Java 8 date-time intro.

As always the code snippet is available in the GitHub repository.

Why String is Immutable in Java?


1. Introduction

In Java, Strings are immutable. An obvious question that is quite prevalent in interviews is “Why are Strings designed as immutable in Java?”

James Gosling, the creator of Java, was once asked in an interview when one should use immutables, to which he answered:

I would use an immutable whenever I can.

He further supports his argument stating features that immutability provides, such as caching, security, easy reuse without replication, etc.

In this tutorial, we’ll further explore why the Java language designers decided to keep String immutable.

2. What is an Immutable Object?

An immutable object is an object whose internal state remains constant after it has been entirely created. This means that once the object has been created, we cannot mutate its internal state by any means.

We have a separate article that discusses immutable objects in detail. For more information, read the Immutable Objects in Java article.

3. Why is String Immutable in Java?

The key benefits of keeping this class as immutable are caching, security, synchronization, and performance.

Let’s discuss how these things work.

3.1. Introduction to the String Pool

The String is the most widely used data structure. Caching String literals and reusing them saves a lot of heap space because different String variables refer to the same object in the String pool. The String intern pool serves exactly this purpose.

The Java String Pool is a special memory region where Strings are stored by the JVM. Since Strings are immutable in Java, the JVM optimizes the amount of memory allocated for them by storing only one copy of each literal String in the pool. This process is called interning:

String s1 = "Hello World";
String s2 = "Hello World";
         
assertThat(s1 == s2).isTrue();

Because of the presence of the String pool in the preceding example, two different variables point to the same String object in the pool, thus saving a crucial memory resource.
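By contrast, a String created with the new operator bypasses the pool, unless we intern it explicitly:

String s1 = "Hello World";
String s3 = new String("Hello World");

assertThat(s1 == s3).isFalse();
assertThat(s1 == s3.intern()).isTrue();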

We have a separate article dedicated to Java String Pool. For more information, head on over to that article.

3.2. Security

The String is widely used in Java applications to store sensitive pieces of information like usernames, passwords, connection URLs, network connections, etc. It’s also used extensively by JVM class loaders while loading classes.

Hence securing String class is crucial regarding the security of the whole application in general. For example, consider this simple code snippet:

void criticalMethod(String userName) {
    // perform security checks
    if (!isAlphaNumeric(userName)) {
        throw new SecurityException(); 
    }
	
    // do some secondary tasks
    initializeDatabase();
	
    // critical task
    connection.executeUpdate("UPDATE Customers SET Status = 'Active' " +
      " WHERE UserName = '" + userName + "'");
}

In the above code snippet, let’s say that we received a String object from an untrustworthy source. We’re doing all necessary security checks initially to check if the String is only alphanumeric, followed by some more operations.

Remember that the caller method, our untrustworthy source, still holds a reference to this userName object.

If Strings were mutable, then by the time we execute the update, we couldn’t be sure that the String we received, even after performing security checks, would be safe. The untrustworthy caller method still has the reference and could change the String between integrity checks, making our query prone to SQL injection. So mutable Strings could lead to a degradation of security over time.

It could also happen that the String userName is visible to another thread, which could then change its value after the integrity check.

In general, immutability comes to our rescue in this case because it’s easier to operate with sensitive code when values don’t change because there are fewer interleavings of operations that might affect the result.

3.3. Synchronization

Being immutable automatically makes Strings thread-safe, since they won’t be changed when accessed from multiple threads.

Hence immutable objects, in general, can be shared across multiple threads running simultaneously. If a thread needs a different value, a new String is created in the String pool instead of the existing one being modified. Hence, Strings are safe for multi-threading.

3.4. Hashcode Caching

Since String objects are abundantly used as a data structure, they’re also widely used in hash implementations like HashMap, Hashtable, HashSet, etc. When operating on these hash implementations, the hashCode() method is called quite frequently for bucketing.

Immutability guarantees that a String’s value won’t change. So the hashCode() method is overridden in the String class to facilitate caching: the hash is calculated and cached during the first hashCode() call, and the same value is returned ever after.
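Conceptually, the caching follows the pattern below – a simplified sketch, not the actual JDK source:

public final class CachedHashExample {

    private final char[] value;
    private int hash; // defaults to 0, computed lazily

    public CachedHashExample(String s) {
        this.value = s.toCharArray();
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0 && value.length > 0) {
            for (char c : value) {
                h = 31 * h + c;
            }
            hash = h; // cached for all subsequent calls
        }
        return h;
    }
}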

This, in turn, improves the performance of collections that use hash implementations when operating with String objects.

On the other hand, a mutable String would produce two different hashcodes at insertion and retrieval if its contents were modified in between, potentially losing the value object in the Map.

3.5. Performance

As we saw previously, the String pool exists because Strings are immutable. In turn, it enhances performance by saving heap memory and enabling faster access to hash implementations when working with Strings.

Since String is the most widely used data structure, improving the performance of String has a considerable effect on the performance of the whole application in general.

4. Conclusion

Through this article, we can conclude that Strings are immutable precisely so that their references can be treated as normal variables and passed around, between methods and across threads, without worrying about whether the actual String object they point to will change.

We also examined the other reasons that prompted the Java language designers to make this class immutable.

Running JUnit Tests Programmatically, from a Java Application


1. Overview

In this tutorial, we’ll show how to run JUnit tests directly from Java code – there are scenarios where this option comes in handy.

If you are new to JUnit, or if you want to upgrade to JUnit 5, you can check some of the many tutorials we have on the topic.

2. Maven Dependencies

We’ll need a couple of basic dependencies to run both JUnit 4 and JUnit 5 tests:

<dependencies>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-engine</artifactId>
        <version>5.2.0</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.junit.platform</groupId>
        <artifactId>junit-platform-launcher</artifactId>
        <version>1.2.0</version>
    </dependency>
</dependencies>

<!-- for JUnit 4 -->
<dependency> 
    <groupId>junit</groupId> 
    <artifactId>junit</artifactId> 
    <version>4.12</version> 
    <scope>test</scope> 
</dependency>

The latest versions of JUnit 4, JUnit 5, and JUnit Platform Launcher can be found on Maven Central.

3. Running JUnit 4 Tests

3.1. Test Scenario

For both JUnit 4 and JUnit 5, we’ll set up a few “placeholder” test classes which will be enough to demonstrate our examples:

public class FirstUnitTest {

    @Test
    public void whenThis_thenThat() {
        assertTrue(true);
    }

    @Test
    public void whenSomething_thenSomething() {
        assertTrue(true);
    }

    @Test
    public void whenSomethingElse_thenSomethingElse() {
        assertTrue(true);
    }
}
public class SecondUnitTest {

    @Test
    public void whenSomething_thenSomething() {
        assertTrue(true);
    }

    @Test
    public void whensomethingElse_thenSomethingElse() {
        assertTrue(true);
    }
}

When using JUnit 4, we create test classes by adding the @Test annotation to every test method.

We can also add other useful annotations, such as @Before or @After, but that’s not in the scope of this tutorial.

3.2. Running a Single Test Class

To run JUnit tests from Java code, we can use the JUnitCore class (with an addition of TextListener class, used to display the output in System.out):

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));
junit.run(FirstUnitTest.class);

On the console, we’ll see a very simple message indicating successful tests:

Running one test class:
..
Time: 0.019
OK (2 tests)

3.3. Running Multiple Test Classes

If we want to specify multiple test classes with JUnit 4, we can use the same code as for a single class, and simply add the additional classes:

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));

Result result = junit.run(
  FirstUnitTest.class, 
  SecondUnitTest.class);

resultReport(result);

Note that the result is stored in an instance of JUnit’s Result class, which we’re printing out using a simple utility method:

public static void resultReport(Result result) {
    System.out.println("Finished. Result: Failures: " +
      result.getFailureCount() + ". Ignored: " +
      result.getIgnoreCount() + ". Tests run: " +
      result.getRunCount() + ". Time: " +
      result.getRunTime() + "ms.");
}
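The Result object also exposes the individual failures through JUnit’s Failure class, so we could extend the report to list them – a small sketch:

for (Failure failure : result.getFailures()) {
    System.out.println("Failed: " + failure.getTestHeader()
      + " - " + failure.getMessage());
}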

3.4. Running a Test Suite

If we need to group some test classes in order to run them, we can create a TestSuite. This is just an empty class where we specify all classes using JUnit annotations:

@RunWith(Suite.class)
@Suite.SuiteClasses({
  FirstUnitTest.class,
  SecondUnitTest.class
})
public class MyTestSuite {
}

To run these tests, we’ll again use the same code as before:

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));
Result result = junit.run(MyTestSuite.class);
resultReport(result);

3.5. Running Repeated Tests

One of the interesting features of JUnit is that we can repeat tests by creating instances of RepeatedTest. This can be really helpful when we’re testing random values, or for performance checks.

In the next example, we’ll run the tests from FirstUnitTest five times:

Test test = new JUnit4TestAdapter(FirstUnitTest.class);
RepeatedTest repeatedTest = new RepeatedTest(test, 5);

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));

junit.run(repeatedTest);

Here, we’re using JUnit4TestAdapter as a wrapper for our test class.

We can even create suites programmatically, applying repeated testing:

TestSuite mySuite = new ActiveTestSuite();

JUnitCore junit = new JUnitCore();
junit.addListener(new TextListener(System.out));

mySuite.addTest(new RepeatedTest(new JUnit4TestAdapter(FirstUnitTest.class), 5));
mySuite.addTest(new RepeatedTest(new JUnit4TestAdapter(SecondUnitTest.class), 3));

junit.run(mySuite);

4. Running JUnit 5 Tests

4.1. Test Scenario

With JUnit 5, we’ll use the same sample test classes as in the previous demo – FirstUnitTest and SecondUnitTest – with some minor differences due to the different version of the JUnit framework, such as the package for @Test and the assertion methods.

4.2. Running Single Test Class

To run JUnit 5 tests from Java code, we’ll set up an instance of LauncherDiscoveryRequest. It uses a builder class where we set selectors – here, a class selector – to pick the test classes that we want to run.

This request is then associated with a launcher and, before executing the tests, we’ll also set up a test plan and an execution listener.

Both of these will offer information about the tests to be executed and of the results:

public class RunJUnit5TestsFromJava {
    SummaryGeneratingListener listener = new SummaryGeneratingListener();

    public void runOne() {
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
          .selectors(selectClass(FirstUnitTest.class))
          .build();
        Launcher launcher = LauncherFactory.create();
        TestPlan testPlan = launcher.discover(request);
        launcher.registerTestExecutionListeners(listener);
        launcher.execute(request);
    }
    // main method...
}

4.3. Running Multiple Test Classes

We can set selectors and filters to the request to run multiple test classes.

Let’s see how we can set package selectors and testing class name filters, to get all test classes that we want to run:

public void runAll() {
    LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
      .selectors(selectPackage("com.baeldung.junit5.runfromjava"))
      .filters(includeClassNamePatterns(".*Test"))
      .build();
    Launcher launcher = LauncherFactory.create();
    TestPlan testPlan = launcher.discover(request);
    launcher.registerTestExecutionListeners(listener);
    launcher.execute(request);
}

4.4. Test Output

In the main() method, we call our class, and we also use the listener to get the result details. This time the result is stored as a TestExecutionSummary.

The simplest way to extract its info is to print it to a console output stream:

public static void main(String[] args) {
    RunJUnit5TestsFromJava runner = new RunJUnit5TestsFromJava();
    runner.runAll();

    TestExecutionSummary summary = runner.listener.getSummary();
    summary.printTo(new PrintWriter(System.out));
}

This will give us the details of our test run:

Test run finished after 177 ms
[         7 containers found      ]
[         0 containers skipped    ]
[         7 containers started    ]
[         0 containers aborted    ]
[         7 containers successful ]
[         0 containers failed     ]
[        10 tests found           ]
[         0 tests skipped         ]
[        10 tests started         ]
[         0 tests aborted         ]
[        10 tests successful      ]
[         0 tests failed          ]

5. Conclusion

In this article, we’ve shown how to run JUnit tests programmatically from Java code, covering JUnit 4 as well as the recent JUnit 5 version of this testing framework.

As always, the implementation of the examples shown here is available over on GitHub for both the JUnit 5 examples, as well as JUnit 4.

Default Password Encoder in Spring Security 5


1. Overview

In Spring Security 4, it was possible to store passwords in plain text using in-memory authentication.

A major overhaul of the password management process in version 5 has introduced a more secure default mechanism for encoding and matching passwords. This means that if your Spring application stores passwords in plain text, upgrading to Spring Security 5 may cause problems.

In this short tutorial, we’ll describe one of those potential problems and demonstrate a solution to the issue.

2. Spring Security 4

We’ll start by showing a standard security configuration that provides simple in-memory authentication (valid for Spring 4):

@Configuration
public class InMemoryAuthWebSecurityConfigurer 
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        auth.inMemoryAuthentication()
          .withUser("spring")
          .password("secret")
          .roles("USER");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
          .antMatchers("/private/**")
          .authenticated()
          .antMatchers("/public/**")
          .permitAll()
          .and()
          .httpBasic();
    }
}

This configuration defines authentication for all /private/ mapped methods and public access for everything under /public/.

If we use the same configuration under Spring Security 5, we’d get the following error:

java.lang.IllegalArgumentException: There is no PasswordEncoder mapped for the id "null"

The error tells us that the given password couldn’t be decoded since no password encoder was configured for our in-memory authentication.

3. Spring Security 5

We can fix this error by defining a DelegatingPasswordEncoder with the PasswordEncoderFactories class.

We use this encoder to configure our user with the AuthenticationManagerBuilder:

@Configuration
public class InMemoryAuthWebSecurityConfigurer 
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) 
      throws Exception {
        PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
        auth.inMemoryAuthentication()
          .withUser("spring")
          .password(encoder.encode("secret"))
          .roles("USER");
    }
}

Now, with this configuration we’re storing our in-memory password using BCrypt in the following format:

{bcrypt}$2a$10$MF7hYnWLeLT66gNccBgxaONZHbrSMjlUofkp50sSpBw2PJjUqU.zS

Although we can define our own set of password encoders, it’s recommended to stick with the default encoders provided in PasswordEncoderFactories.
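For illustration only, a hand-built DelegatingPasswordEncoder could look like the following sketch, mapping encoder ids to PasswordEncoder instances:

Map<String, PasswordEncoder> encoders = new HashMap<>();
encoders.put("bcrypt", new BCryptPasswordEncoder());
encoders.put("sha256", new StandardPasswordEncoder());

PasswordEncoder delegatingEncoder = new DelegatingPasswordEncoder("bcrypt", encoders);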

3.1. Migrating Existing Passwords

We can update existing passwords to the recommended Spring Security 5 standards by:

  • Updating plain text stored passwords with their value encoded:
String encoded = new BCryptPasswordEncoder().encode(plainTextPassword);
  • Prefixing hashed stored passwords with their known encoder identifier:
{bcrypt}$2a$10$MF7hYnWLeLT66gNccBgxaONZHbrSMjlUofkp50sSpBw2PJjUqU.zS
{sha256}97cde38028ad898ebc02e690819fa220e88c62e0699403e94fff291cfffaf8410849f27605abcbc0
  • Requesting users to update their passwords when the encoding-mechanism for stored passwords is unknown

4. Conclusion

In this quick example, we updated a valid Spring Security 4 in-memory authentication configuration to Spring Security 5 using the new password storage mechanism.

As always, you can find the source code over on the GitHub project.

Extracting Principal and Authorities using Spring Security OAuth


1. Overview

In this tutorial, we’ll illustrate how to create an application that delegates user authentication to a third party, as well as to a custom authorization server, using Spring Boot and Spring Security OAuth.

Also, we’ll demonstrate how to extract both Principal and Authorities using Spring’s PrincipalExtractor and AuthoritiesExtractor interfaces.

For an introduction to Spring Security OAuth2 please refer to these articles.

2. Maven Dependencies

To get started, we need to add the spring-security-oauth2-autoconfigure dependency to our pom.xml:

<dependency>
    <groupId>org.springframework.security.oauth.boot</groupId>
    <artifactId>spring-security-oauth2-autoconfigure</artifactId>
    <version>2.0.1.RELEASE</version>
</dependency>

3. OAuth Authentication using Github

Next, let’s create the security configuration of our application:

@Configuration
@EnableOAuth2Sso
public class SecurityConfig 
  extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) 
      throws Exception {
 
        http.antMatcher("/**")
          .authorizeRequests()
          .antMatchers("/login**")
          .permitAll()
          .anyRequest()
          .authenticated()
          .and()
          .formLogin().disable();
    }
}

In short, we’re saying that anyone can access the /login endpoint and that all other endpoints will require user authentication.

We’ve also annotated our configuration class with @EnableOAuth2Sso, which converts our application into an OAuth client and creates the necessary components for it to behave as such.

While Spring creates most of the components for us by default, we still need to configure some properties:

security.oauth2.client.client-id=89a7c4facbb3434d599d
security.oauth2.client.client-secret=9b3b08e4a340bd20e866787e4645b54f73d74b6a
security.oauth2.client.access-token-uri=https://github.com/login/oauth/access_token
security.oauth2.client.user-authorization-uri=https://github.com/login/oauth/authorize
security.oauth2.client.scope=read:user,user:email
security.oauth2.resource.user-info-uri=https://api.github.com/user

Instead of dealing with user account management, we’re delegating it to a third party – in this case, Github – thus enabling us to focus on the logic of our application.

4. Extracting Principal and Authorities

When acting as an OAuth client and authenticating users through a third party, there are three steps we need to consider:

  1. User authentication – the user authenticates with the third party
  2. User authorization – this follows authentication; it’s when the user allows our application to perform certain operations on their behalf; this is where scopes come in
  3. Fetch user data – use the OAuth token we’ve obtained to retrieve the user’s data

Once we retrieve the user’s data, Spring is able to automatically create the user’s Principal and Authorities.

While that may be acceptable, more often than not we find ourselves in a scenario where we want to have complete control over them.

To do so, Spring gives us two interfaces we can use to override its default behavior:

  • PrincipalExtractor – Interface we can use to provide our custom logic to extract the Principal
  • AuthoritiesExtractor – Similar to PrincipalExtractor, but it’s used to customize Authorities extraction instead

By default, Spring provides two components – FixedPrincipalExtractor and FixedAuthoritiesExtractor – that implement these interfaces and have a pre-defined strategy to create them for us.

4.1. Customizing Github’s Authentication

In our case, we’re aware of what Github’s user data looks like and what we can use to tailor it to our needs.

As such, to override Spring’s default components we just need to create two Beans that also implement these interfaces.

For our application’s Principal we’re simply going to use the user’s Github username:

@Component
public class GithubPrincipalExtractor 
  implements PrincipalExtractor {

    @Override
    public Object extractPrincipal(Map<String, Object> map) {
        return map.get("login");
    }
}

Depending on our user’s Github subscription – free, or otherwise – we’ll give them a GITHUB_USER_SUBSCRIBED, or a GITHUB_USER_FREE authority:

@Component
public class GithubAuthoritiesExtractor 
  implements AuthoritiesExtractor {
    List<GrantedAuthority> GITHUB_FREE_AUTHORITIES
     = AuthorityUtils.commaSeparatedStringToAuthorityList(
     "GITHUB_USER,GITHUB_USER_FREE");
    List<GrantedAuthority> GITHUB_SUBSCRIBED_AUTHORITIES 
     = AuthorityUtils.commaSeparatedStringToAuthorityList(
     "GITHUB_USER,GITHUB_USER_SUBSCRIBED");

    @Override
    public List<GrantedAuthority> extractAuthorities
      (Map<String, Object> map) {
 
        if (Objects.nonNull(map.get("plan"))) {
            if (!((LinkedHashMap) map.get("plan"))
              .get("name")
              .equals("free")) {
                return GITHUB_SUBSCRIBED_AUTHORITIES;
            }
        }
        return GITHUB_FREE_AUTHORITIES;
    }
}

4.2. Using a Custom Authorization Server

We can also use our own Authorization Server for our users – instead of relying on a third party.

Regardless of which authorization server we decide to use, the components we need to customize both Principal and Authorities remain the same: a PrincipalExtractor and an AuthoritiesExtractor.

We just need to be aware of the data returned by the user-info-uri endpoint and use it as we see fit.

Let’s change our application to authenticate our users using the authorization server described in this article:

security.oauth2.client.client-id=SampleClientId
security.oauth2.client.client-secret=secret
security.oauth2.client.access-token-uri=http://localhost:8081/auth/oauth/token
security.oauth2.client.user-authorization-uri=http://localhost:8081/auth/oauth/authorize
security.oauth2.resource.user-info-uri=http://localhost:8081/auth/user/me
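For illustration, let’s assume the user-info endpoint returns a payload shaped like this – a hypothetical example that simply matches the keys our extractors read below:

{
    "name": "john.doe",
    "authorities": [
        { "authority": "ROLE_USER" }
    ]
}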

Now that we’re pointing to our authorization server, we need to create both extractors; in this case, our PrincipalExtractor is going to extract the Principal from the Map using the name key:

@Component
public class BaeldungPrincipalExtractor 
  implements PrincipalExtractor {

    @Override
    public Object extractPrincipal(Map<String, Object> map) {
        return map.get("name");
    }
}

As for authorities, our Authorization Server is already placing them in its user-info-uri’s data.

As such, we’re going to extract and enrich them:

@Component
public class BaeldungAuthoritiesExtractor 
  implements AuthoritiesExtractor {

    @Override
    public List<GrantedAuthority> extractAuthorities
      (Map<String, Object> map) {
        return AuthorityUtils
          .commaSeparatedStringToAuthorityList(asAuthorities(map));
    }

    private String asAuthorities(Map<String, Object> map) {
        List<String> authorities = new ArrayList<>();
        authorities.add("BAELDUNG_USER");
        List<LinkedHashMap<String, String>> authz = 
          (List<LinkedHashMap<String, String>>) map.get("authorities");
        for (LinkedHashMap<String, String> entry : authz) {
            authorities.add(entry.get("authority"));
        }
        return String.join(",", authorities);
    }
}

5. Conclusion

In this article, we’ve implemented an application that delegates user authentication to a third party, as well as to a custom authorization server, and demonstrated how to customize both Principal and Authorities.

As usual, the implementation of this example can be found over on Github.

When running locally, you can run and test the application at localhost:8082

Server-Sent Events (SSE) In JAX-RS


1. Overview

Server-Sent Events (SSE) is an HTTP-based specification that provides a way to establish a long-running, mono-channel connection from the server to the client.

The client initiates the SSE connection by using the media type text/event-stream in the Accept header.

The client then receives updates automatically, without having to poll the server.

We can find more details about the specification in the official spec.

In this tutorial, we’ll introduce the new JAX-RS 2.1 implementation of SSE.

Hence, we’ll look at how we can publish events with the JAX-RS Server API. Also, we’ll explore how we can consume them either by the JAX-RS Client API or just by an HTTP client like the curl tool.

2. Understanding SSE Events

An SSE Event is a block of text composed of the following fields:

  • Event: the event’s type. The server can send many messages of different types, and the client may only listen for a particular type or process each event type differently
  • Data: the message sent by the server. We can have many data lines for the same event
  • Id: the id of the event, used to send the Last-Event-ID header, after a connection retry. It is useful as it can prevent the server from sending already sent events
  • Retry: the time, in milliseconds, for the client to establish a new connection when the current is lost. The last received Id will be automatically sent through the Last-Event-ID header
  • ‘:’: a line beginning with a colon is a comment and is ignored by the client

Also, two consecutive events are separated by a double newline ‘\n\n’.

Additionally, the data in the same event can be written in many lines as can be seen in the following example:

event: stock
id: 1
: price change
retry: 4000
data: {"dateTime":"2018-07-14T18:06:00.285","id":1,
data: "name":"GOOG","price":75.7119}

event: stock
id: 2
: price change
retry: 4000
data: {"dateTime":"2018-07-14T18:06:00.285","id":2,"name":"IBM","price":83.4611}

In JAX RS, an SSE event is abstracted by the SseEvent interface, or more precisely, by the two subinterfaces OutboundSseEvent and InboundSseEvent.

While the OutboundSseEvent is used on the Server API and represents a sent event, the InboundSseEvent is used by the Client API and abstracts a received event.

3. Publishing SSE Events

Now that we’ve discussed what an SSE event is, let’s see how we can build and send it to an HTTP client.

3.1. Project Setup

We already have a tutorial about setting up a JAX RS-based Maven project. Feel free to have a look there to see how to set dependencies and get started with JAX RS.

3.2. SSE Resource Method

An SSE Resource method is a JAX RS method that:

  • Can produce a text/event-stream media type
  • Has an injected SseEventSink parameter, where events are sent
  • May also have an injected Sse parameter which is used as an entry point to create an event builder
@GET
@Path("prices")
@Produces("text/event-stream")
public void getStockPrices(@Context SseEventSink sseEventSink, @Context Sse sse) {
    //...
}

Consequently, the client should make the first HTTP request with the following HTTP header:

Accept: text/event-stream
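For example, assuming the prices endpoint of our sample app is exposed at /sse/stock/prices, we could open the stream with curl:

curl -N -H "Accept: text/event-stream" http://localhost:9080/sse-jaxrs-server/sse/stock/prices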

3.3. The SSE Instance

An SSE instance is a context bean that the JAX RS Runtime will make available for injection.

We could use it as a factory to create:

  • OutboundSseEvent.Builder – allows us to create events then
  • SseBroadcaster – allows us to broadcast events to multiple subscribers

Let’s see how that works:

@Context
public void setSse(Sse sse) {
    this.sse = sse;
    this.eventBuilder = sse.newEventBuilder();
    this.sseBroadcaster = sse.newBroadcaster();
}

Now, let’s focus on the event builder. OutboundSseEvent.Builder is responsible for creating the OutboundSseEvent:

OutboundSseEvent sseEvent = this.eventBuilder
  .name("stock")
  .id(String.valueOf(lastEventId))
  .mediaType(MediaType.APPLICATION_JSON_TYPE)
  .data(Stock.class, stock)
  .reconnectDelay(4000)
  .comment("price change")
  .build();

As we can see, the builder has methods to set values for all event fields shown above. Additionally, the mediaType() method is used to serialize the data field Java object to a suitable text format.

By default, the media type of the data field is text/plain. Hence, it doesn’t need to be explicitly specified when dealing with the String data type.

Otherwise, if we want to handle a custom object, we need to specify the media type or to provide a custom MessageBodyWriter. The JAX RS Runtime provides MessageBodyWriters for the most known media types.

The Sse instance also has two builder shortcuts for creating an event with only the data field, or the type and data fields:

OutboundSseEvent sseEvent = sse.newEvent("cool Event");
OutboundSseEvent sseEvent = sse.newEvent("typed event", "data Event");

3.4. Sending a Simple Event

Now that we know how to build events and understand how an SSE resource works, let’s send a simple event.

The SseEventSink interface abstracts a single HTTP connection. The JAX-RS Runtime can make it available only through injection in the SSE resource method.

Sending an event is then as simple as invoking SseEventSink.send(). 

In the next example, we’ll send a bunch of stock updates and eventually close the event stream:

@GET
@Path("prices")
@Produces("text/event-stream")
public void getStockPrices(@Context SseEventSink sseEventSink /*..*/) {
    int lastEventId = //..;
    while (running) {
        Stock stock = stockService.getNextTransaction(lastEventId);
        if (stock != null) {
            OutboundSseEvent sseEvent = this.eventBuilder
              .name("stock")
              .id(String.valueOf(lastEventId))
              .mediaType(MediaType.APPLICATION_JSON_TYPE)
              .data(Stock.class, stock)
              .reconnectDelay(3000)
              .comment("price change")
              .build();
            sseEventSink.send(sseEvent);
            lastEventId++;
        }
     //..
    }
    sseEventSink.close();
}

After sending all events, the server closes the connection either by explicitly invoking the close() method or, preferably, by using try-with-resources, as SseEventSink extends the AutoCloseable interface:

try (SseEventSink sink = sseEventSink) {
    OutboundSseEvent sseEvent = //..
    sink.send(sseEvent);
}

In our sample app we can see this running if we visit:

http://localhost:9080/sse-jaxrs-server/sse.html

3.5. Broadcasting Events

Broadcasting is the process by which events are sent to multiple clients simultaneously. This is accomplished by the SseBroadcaster API, and it is done in three simple steps:

First, we create the SseBroadcaster object from an injected Sse context as shown previously:

SseBroadcaster sseBroadcaster = sse.newBroadcaster();

Then, clients should subscribe to be able to receive Sse Events. This is generally done in an SSE resource method where an SseEventSink context instance is injected:

@GET
@Path("subscribe")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void listen(@Context SseEventSink sseEventSink) {
    this.sseBroadcaster.register(sseEventSink);
}

And finally, we can trigger the event publishing by invoking the broadcast() method:

@GET
@Path("publish")
public void broadcast() {
    OutboundSseEvent sseEvent = //...;
    this.sseBroadcaster.broadcast(sseEvent);
}

This will send the same event to each registered SseEventSink.

To showcase the broadcasting, we can access this URL:

http://localhost:9080/sse-jaxrs-server/sse-broadcast.html

And then we can trigger the broadcasting by invoking the broadcast() resource method:

curl -X GET http://localhost:9080/sse-jaxrs-server/sse/stock/publish

4. Consuming SSE Events

To consume an SSE event sent by the server, we can use any HTTP client, but for this tutorial, we’ll use the JAX RS client API.

4.1. JAX RS Client API for SSE

To get started with the client API for SSE, we need to provide dependencies for JAX RS Client implementation.

Here, we’ll use Apache CXF client implementation:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-client</artifactId>
    <version>${cxf-version}</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-sse</artifactId>
    <version>${cxf-version}</version>
</dependency>

The SseEventSource is the heart of this API, and it is constructed from a WebTarget.

We start by listening for incoming events, which are abstracted by the InboundSseEvent interface:

Client client = ClientBuilder.newClient();
WebTarget target = client.target(url);
try (SseEventSource source = SseEventSource.target(target).build()) {
    source.register((inboundSseEvent) -> System.out.println(inboundSseEvent));
    source.open();
}

Once the connection is established, the registered event consumer will be invoked for each received InboundSseEvent.

We can then use the readData() method to read the original data String:

String data = inboundSseEvent.readData();

Or we can use the overloaded version to get a deserialized Java object using a suitable media type:

Stock stock = inboundSseEvent.readData(Stock.class, MediaType.APPLICATION_JSON_TYPE);

Here, we just provided a simple event consumer that prints the incoming event to the console.

5. Conclusion

In this tutorial, we focused on how to use Server-Sent Events in JAX-RS 2.1. We provided an example that showcases how to send events to a single client, as well as how to broadcast events to multiple clients.

Finally, we consumed these events using the JAX-RS client API.

As usual, the code of this tutorial can be found over on Github.

Jersey MVC Support


1. Overview

Jersey is an open-source framework for developing RESTful Web Services.

As well as serving as the JAX-RS reference implementation, it also includes a number of extensions to further simplify web application development.

In this tutorial, we’ll create a small example application that uses the Model-View-Controller (MVC) extension offered by Jersey.

To learn how to create an API with Jersey, check out this writeup here.

2. MVC in Jersey

Jersey contains an extension to support the Model-View-Controller (MVC) design pattern.

First of all, in the context of Jersey components, the Controller from the MVC pattern corresponds to a resource class or method.

Likewise, the View corresponds to a template bound to a resource class or method. Finally, the model represents a Java object returned from a resource method (Controller).

To use the capabilities of Jersey MVC in our application, we first need to register the MVC module extension that we wish to use.

In our example, we’re going to use the popular Java template engine Freemarker. This is one of the rendering engines supported by Jersey out of the box, along with Mustache and standard JavaServer Pages (JSP).

For more information about how MVC works, please refer to this tutorial.

3. Application Setup

In this section, we’ll start by configuring the necessary Maven dependencies in our pom.xml.

Then, we’ll take a look at how to configure and run our server using a simple embedded Grizzly server.

3.1. Maven Dependencies

Let’s start by adding the Jersey MVC Freemarker extension.

We can get the latest version from Maven Central:

<dependency>
    <groupId>org.glassfish.jersey.ext</groupId>
    <artifactId>jersey-mvc-freemarker</artifactId>
    <version>2.27</version>
</dependency>

We’re also going to need the Grizzly servlet container.

Again we can find the latest version in Maven Central:

<dependency>
    <groupId>org.glassfish.jersey.containers</groupId>
    <artifactId>jersey-container-grizzly2-servlet</artifactId>
    <version>2.27</version>
</dependency>

3.2. Configuring the Server

To make use of the Jersey MVC templating support in our application we need to register the specific JAX-RS features provided by the MVC modules.

With this in mind, we define a custom resource configuration:

public class ViewApplicationConfig extends ResourceConfig {    
    public ViewApplicationConfig() {
        packages("com.baeldung.jersey.server");
        property(FreemarkerMvcFeature.TEMPLATE_BASE_PATH, "templates/freemarker");
        register(FreemarkerMvcFeature.class);
    }
}

In the above example we configure three items:

  • First, we use the packages method to tell Jersey to scan the com.baeldung.jersey.server package for classes annotated with @Path. This will register our FruitResource
  • Next, we configure the base path in order to resolve our templates. This tells Jersey to look in /src/main/resources/templates/freemarker for Freemarker templates
  • Finally, we register the feature that handles the Freemarker rendering via the FreemarkerMvcFeature class

3.3. Running the Application

Now let’s look at how to run our web application. We’ll use the exec-maven-plugin to configure our pom.xml to execute our embedded web server:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <configuration>                
        <mainClass>com.baeldung.jersey.server.http.EmbeddedHttpServer</mainClass>
    </configuration>
</plugin>

Let’s now compile and run our application using Maven:

mvn clean compile exec:java
...
Jul 28, 2018 6:21:08 PM org.glassfish.grizzly.http.server.HttpServer start
INFO: [HttpServer] Started.
Application started.
Try out http://localhost:8082/fruit
Stop the application using CTRL+C

Go to the browser URL – http://localhost:8082/fruit. Voila, the “Welcome Fruit Index Page!” is displayed.

4. MVC Templates

In Jersey, the MVC API consists of two ways to bind a model to a view, namely the Viewable class and the @Template annotation.

In this section, we’ll explain three different ways of linking templates to our resource methods:

  • Using the Viewable class
  • Using the @Template annotation
  • How to handle errors with MVC and pass them to a specific template

4.1. Using Viewable in a Resource Class

Let’s start by looking at Viewable:

@Path("/fruit")
public class FruitResource {
    @GET
    public Viewable get() {
        return new Viewable("/index.ftl", "Fruit Index Page");
    }
}

In this example, the FruitResource JAX-RS resource class is the controller. The Viewable instance encapsulates the referenced data model which is a simple String.

Furthermore, we also include a named reference to the associated view template – index.ftl.

4.2. Using @Template on a Resource Method

There’s no need to use Viewable every time we want to bind a model to a template.

In this next example, we’ll simply annotate our resource method with @Template:

@GET
@Template(name = "/all.ftl")
@Path("/all")
@Produces(MediaType.TEXT_HTML)
public Map<String, Object> getAllFruit() {
    List<Fruit> fruits = new ArrayList<>();
    fruits.add(new Fruit("banana", "yellow"));
    fruits.add(new Fruit("apple", "red"));
    fruits.add(new Fruit("kiwi", "green"));

    Map<String, Object> model = new HashMap<>();
    model.put("items", fruits);
    return model;
}

In this example, we’ve used the @Template annotation. This avoids wrapping our model directly in a template reference via Viewable and makes our resource method more readable.

The model is now represented by the return value of our annotated resource method – a Map<String, Object>. This is passed directly to the template all.ftl which simply displays our list of fruit.
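For reference, a minimal all.ftl could iterate over the items entry like this – a sketch that assumes the Fruit class exposes a name getter:

<ul>
<#list model.items as fruit>
    <li>${fruit.name}</li>
</#list>
</ul>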

4.3. Handling Errors with MVC

Now let’s take a look at how to handle errors using the @ErrorTemplate annotation:

@GET
@ErrorTemplate(name = "/error.ftl")
@Template(name = "/named.ftl")
@Path("{name}")
@Produces(MediaType.TEXT_HTML)
public String getFruitByName(@PathParam("name") String name) {
    if (!"banana".equalsIgnoreCase(name)) {
        throw new IllegalArgumentException("Fruit not found: " + name);
    }
    return name;
}

Generally speaking, the purpose of the @ErrorTemplate annotation is to bind the model to an error view. This error handler will take care of rendering the response when an exception is thrown during the processing of a request.

In our simple Fruit API example if no errors occur during processing then the named.ftl template is used to render the page. Otherwise, if an exception is raised then the error.ftl template is shown to the user.

In this case, the model is the thrown exception itself. This means from within our template we can call methods directly on the exception object.

Let’s take a quick look at a snippet from our error.ftl template to highlight this:

<body>
    <h1>Error - ${model.message}!</h1>
</body>

In our final example, we’ll take a look at a simple unit test:

@Test
public void testErrorTemplate() {
    String response = target("/fruit/orange").request()
      .get(String.class);
    assertThat(response, containsString("Error - Fruit not found: orange!"));
}

In the above example, we use the response from our fruit resource. We check that the response contains the message from the IllegalArgumentException that was thrown.

5. Conclusion

In this article, we’ve explored the Jersey framework MVC extension.

We started by introducing how MVC works in Jersey. Next, we took a look at how to configure, run and set up an example web application.

Finally, we looked at three ways of using MVC templates with Jersey and Freemarker and how to handle errors.

As always, the full source code of the article is available over on GitHub.


Introduction to Hazelcast Jet


1. Introduction

In this tutorial, we’ll learn about Hazelcast Jet. It’s a distributed data processing engine provided by Hazelcast, Inc. and is built on top of Hazelcast IMDG.

If you want to learn about Hazelcast IMDG, here is an article for getting started.

2. What is Hazelcast Jet?

Hazelcast Jet is a distributed data processing engine that treats data as streams. It can process data that is stored in a database or files as well as the data that is streamed by a Kafka server.

It can perform aggregate functions over infinite data streams by dividing the streams into subsets and applying aggregation over each subset. This concept is known as windowing in the Jet terminology.

We can deploy Jet in a cluster of machines and then submit our data processing jobs to it. Jet will make all the members of the cluster automatically process the data. Each member of the cluster consumes a part of the data, and that makes it easy to scale up to any level of throughput.

Here are the typical use cases for Hazelcast Jet:

  • Real-Time Stream Processing
  • Fast Batch Processing
  • Processing Java 8 Streams in a distributed way
  • Data processing in Microservices

3. Setup

To setup Hazelcast Jet in our environment, we just need to add a single Maven dependency to our pom.xml.

Here’s how we do it:

<dependency>
    <groupId>com.hazelcast.jet</groupId>
    <artifactId>hazelcast-jet</artifactId>
    <version>0.6</version>
</dependency>

Including this dependency will download a 10 MB jar file which provides us with all the infrastructure we need to build a distributed data processing pipeline.

The latest version for Hazelcast Jet can be found here.

4. Sample Application

In order to learn more about Hazelcast Jet, we’ll create a sample application that takes a list of sentences and a word to find in them, and returns the count of that word across the sentences.

4.1. The Pipeline

A Pipeline forms the basic construct for a Jet application. Processing within a pipeline follows these steps:

  • draw data from a source
  • transform the data
  • drain the data into a sink

For our application, the pipeline will draw from a distributed List, apply the transformation of grouping and aggregation and finally drain to a distributed Map.

Here’s how we write our pipeline:

private Pipeline createPipeLine() {
    Pipeline p = Pipeline.create();
    p.drawFrom(Sources.<String> list(LIST_NAME))
      .flatMap(
        word -> traverseArray(word.toLowerCase().split("\\W+")))
      .filter(word -> !word.isEmpty())
      .groupingKey(wholeItem())
      .aggregate(counting())
      .drainTo(Sinks.map(MAP_NAME));
    return p;
}

Once we’ve drawn from the source, we traverse the data and split it into words using a regular expression (\W+, i.e. any run of non-word characters). After that, we filter out the blanks.

Lastly, we group the words, aggregate them and drain the results to a Map. 

4.2. The Job

Now that our pipeline is defined, we create a job for executing the pipeline.

Here’s how we write a countWord function, which accepts the sentences and the word to search for, and returns the count:

public Long countWord(List<String> sentences, String word) {
    long count = 0;
    JetInstance jet = Jet.newJetInstance();
    try {
        List<String> textList = jet.getList(LIST_NAME);
        textList.addAll(sentences);
        Pipeline p = createPipeLine();
        jet.newJob(p)
          .join();
        Map<String, Long> counts = jet.getMap(MAP_NAME);
        count = counts.get(word);
    } finally {
        Jet.shutdownAll();
    }
    return count;
}

We create a Jet instance first in order to create our job and use the pipeline. Next, we copy the input List to a distributed list so that it’s available over all the instances.

We then submit a job using the pipeline that we have built above. The method newJob() returns an executable job that is started by Jet asynchronously. The join method waits for the job to complete and throws an exception if the job completes with an error.

When the job completes the results are retrieved in a distributed Map, as we defined in our pipeline. So, we get the Map from the Jet instance and get the counts of the word against it.

Lastly, we shut down the Jet instance. It’s important to shut it down after our execution has ended, as the Jet instance starts its own threads. Otherwise, our Java process would still be alive even after our method has exited.

Here is a unit test that tests the code we have written for Jet:

@Test
public void whenGivenSentencesAndWord_ThenReturnCountOfWord() {
    List<String> sentences = new ArrayList<>();
    sentences.add("The first second was alright, but the second second was tough.");
    WordCounter wordCounter = new WordCounter();
    long countSecond = wordCounter.countWord(sentences, "second");
    assertTrue(countSecond == 3);
}

5. Conclusion

In this article, we’ve learned about Hazelcast Jet. To learn more about it and its features, refer to the manual.

As usual, the code for the example used in this article can be found over on Github.

How to Configure Spring Boot Tomcat


1. Overview

Spring Boot web applications include a pre-configured, embedded web server by default. In some situations though, we’d like to modify the default configuration to meet custom requirements.

In this tutorial, we’ll look at a few common use cases for configuring the Tomcat embedded server through the application.properties file.

2. Common Embedded Tomcat Configurations

2.1. Server Address And Port

The most common configuration we may wish to change is the port number:

server.port=80

If we don’t provide the server.port parameter, it’s set to 8080 by default.

In some cases, we may wish to set a network address to which the server should bind. In other words, we define an IP address where our server will listen:

server.address=my_custom_ip

By default, the value is set to 0.0.0.0 which allows connection via all IPv4 addresses. Setting another value, for example, localhost – 127.0.0.1 – will make the server more selective.

2.2. Error Handling

By default, Spring Boot provides a standard error web page, called the Whitelabel Error Page. It’s enabled by default, but if we don’t want to display any error information, we can disable it:

server.error.whitelabel.enabled=false

The default path of the error page is /error. We can customize it by setting the server.error.path parameter:

server.error.path=/user-error

We can also set properties that will determine which information about the error is presented. For example, we can include the error message and the stack trace:

server.error.include-exception=true
server.error.include-stacktrace=always

Our tutorials Exception Message Handling for REST and Customize Whitelabel Error Page explain more about handling errors in Spring Boot.

2.3. Server Connections

When running on a low-resource container, we might like to decrease the CPU and memory load. One way of doing that is to limit the number of simultaneous requests that our application handles. Conversely, we can increase this value to use more of the available resources and get better performance.

In Spring Boot, we can define the maximum amount of Tomcat worker threads:

server.tomcat.max-threads=200

When configuring a web server, it also might be useful to set the server connection timeout. This represents the maximum amount of time the server will wait for the client to make their request after connecting before the connection is closed:

server.connection-timeout=5s

We can also define the maximum size of a request header:

server.max-http-header-size=8KB

The maximum size of a request body:

server.tomcat.max-swallow-size=2MB

Or a maximum size of the whole post request:

server.tomcat.max-http-post-size=2MB

2.4. SSL

To enable SSL support in our Spring Boot application we need to set the server.ssl.enabled property to true and define an SSL protocol:

server.ssl.enabled=true
server.ssl.protocol=TLS

We should also configure the password, type, and path to the key store that holds the certificate:

server.ssl.key-store-password=my_password
server.ssl.key-store-type=keystore_type
server.ssl.key-store=keystore-path

And we must also define the alias that identifies our key in the key store:

server.ssl.key-alias=tomcat

For more information about SSL configuration, visit our HTTPS using self-signed certificate in Spring Boot article.

2.5. Tomcat Server Access Logs

Tomcat access logs are very useful when trying to measure page hit counts, user session activity, and so on.

To enable access logs, simply set:

server.tomcat.accesslog.enabled=true

We should also configure other parameters such as directory name, prefix, suffix, and date format appended to log files:

server.tomcat.accesslog.directory=logs
server.tomcat.accesslog.file-date-format=yyyy-MM-dd
server.tomcat.accesslog.prefix=access_log
server.tomcat.accesslog.suffix=.log

3. Conclusion

In this tutorial, we’ve learned a few common Tomcat embedded server configurations. To view more possible configurations, please visit the official Spring Boot application properties docs page.

As always, the source code for these examples is available over on GitHub.

Spring Data Web Support


1. Overview

Spring MVC and Spring Data each do a great job simplifying application development in their own right. But, what if we put them together?

In this tutorial, we’ll take a look at Spring Data’s web support and how its resolvers can reduce boilerplate and make our controllers more expressive.

Along the way, we’ll peek at Querydsl and what its integration with Spring Data looks like.

2. A Bit of Background

Spring Data’s web support is a set of web-related features implemented on top of the standard Spring MVC platform, aimed at adding extra functionality to the controller layer.

Spring Data web support’s functionality is built around several resolver classes. Resolvers streamline the implementation of controller methods that interoperate with Spring Data repositories and also enrich them with additional features.

These features include fetching domain objects from the repository layer, without having to explicitly call the repository implementations, and constructing controller responses that can be sent to clients as segments of data that support pagination and sorting.

Also, requests to controller methods that take one or more request parameters can be internally resolved to Querydsl queries.

3. A Demo Spring Boot Project

To understand how we can use Spring Data web support to improve our controllers’ functionality, let’s create a basic Spring Boot project.

Our demo project’s Maven dependencies are fairly standard, with a few exceptions that we’ll discuss later on:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

In this case, we included spring-boot-starter-web, as we’ll use it for creating a RESTful controller, spring-boot-starter-data-jpa for implementing the persistence layer, and spring-boot-starter-test for testing the controller API.

Since we’ll use H2 as the underlying database, we included com.h2database as well.

Let’s keep in mind that spring-boot-starter-web enables Spring Data web support by default. Hence, we don’t need to create any additional @Configuration classes to get it working within our application.

Conversely, for non-Spring Boot projects, we’d need to define a @Configuration class and annotate it with the @EnableWebMvc and @EnableSpringDataWebSupport annotations.
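
For instance, such a configuration class could look like this minimal sketch (the class name here is ours):

@Configuration
@EnableWebMvc
@EnableSpringDataWebSupport
public class WebConfig {
    // additional web-related configuration, if needed
}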

3.1. The Domain Class

Now, let’s add a simple User JPA entity class to the project, so we can have a working domain model to play with:

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;
    private String name;

    // standard constructors / getters / toString

}

3.2. The Repository Layer

To keep the code simple, the functionality of our demo Spring Boot application will be narrowed to just fetching some User entities from an H2 in-memory database.

Spring Boot makes it easy to create repository implementations that provide minimal CRUD functionality out-of-the-box. Therefore, let’s define a simple repository interface that works with the User JPA entities:

@Repository
public interface UserRepository extends PagingAndSortingRepository<User, Long> {}

There’s nothing inherently complex in the definition of the UserRepository interface, except that it extends PagingAndSortingRepository.

This signals Spring Data to enable automatic paging and sorting capabilities on database records.
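
For reference, here’s roughly what PagingAndSortingRepository adds on top of CrudRepository, in a paraphrased sketch of the Spring Data Commons interface:

public interface PagingAndSortingRepository<T, ID> extends CrudRepository<T, ID> {

    // returns all entities sorted by the given options
    Iterable<T> findAll(Sort sort);

    // returns a Page of entities meeting the paging restriction in the Pageable
    Page<T> findAll(Pageable pageable);
}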

3.3. The Controller Layer

Now, we need to implement at least a basic RESTful controller that acts as the middle tier between the client and the repository layer.

Therefore, let’s create a controller class, which takes a UserRepository instance in its constructor and adds a single method for finding User entities by id:

@RestController
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @GetMapping("/users/{id}")
    public User findUserById(@PathVariable("id") User user) {
        return user;
    }
}

3.4. Running the Application

Finally, let’s define the application’s main class and populate the H2 database with a few User entities:

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    CommandLineRunner initialize(UserRepository userRepository) {
        return args -> {
            Stream.of("John", "Robert", "Nataly", "Helen", "Mary").forEach(name -> {
                User user = new User(name);
                userRepository.save(user);
            });
            userRepository.findAll().forEach(System.out::println);
        };
    }
}

Now, let’s run the application. As expected, we see the list of persisted User entities printed out to the console on startup:

User{id=1, name=John}
User{id=2, name=Robert}
User{id=3, name=Nataly}
User{id=4, name=Helen}
User{id=5, name=Mary}

4. The DomainClassConverter Class

For now, the UserController class only implements the findUserById() method.

At first sight, the method implementation looks fairly simple. But it actually encapsulates a lot of Spring Data web support functionality behind the scenes.

Since the method takes a User instance as an argument, we might end up thinking that we need to explicitly pass the domain object in the request. But, we don’t.

Spring MVC uses the DomainClassConverter class to convert the id path variable into the domain class’s id type and uses it for fetching the matching domain object from the repository layer. No further lookup is necessary.

For instance, a GET HTTP request to the http://localhost:8080/users/1 endpoint will return the following result:

{
  "id":1,
  "name":"John"
}

Hence, we can create an integration test and check the behavior of the findUserById() method:

@Test
public void whenGetRequestToUsersEndPointWithIdPathVariable_thenCorrectResponse() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/users/{id}", "1")
      .contentType(MediaType.APPLICATION_JSON_UTF8))
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.jsonPath("$.id").value("1"));
}

Alternatively, we can use a REST API test tool, such as Postman, to test the method.

The nice thing about DomainClassConverter is that we don’t need to explicitly call the repository implementation in the controller method.

By simply specifying the id path variable, along with a resolvable domain class instance, we’ve automatically triggered the domain object’s lookup.

5. The PageableHandlerMethodArgumentResolver Class

Spring MVC supports the use of Pageable types in controllers and repositories.

Simply put, a Pageable instance is an object that holds paging information. Therefore, when we pass a Pageable argument to a controller method, Spring MVC uses the PageableHandlerMethodArgumentResolver class to resolve the Pageable instance into a PageRequest object, which is a simple Pageable implementation.

5.1. Using Pageable as a Controller Method Parameter

To understand how the PageableHandlerMethodArgumentResolver class works, let’s add a new method to the UserController class:

@GetMapping("/users")
public Page<User> findAllUsers(Pageable pageable) {
    return userRepository.findAll(pageable);
}

In contrast to the findUserById() method, here we need to call the repository implementation to fetch all the User JPA entities persisted in the database.

Since the method takes a Pageable instance, it returns a subset of the entire set of entities, stored in a Page<User> object.

A Page object is a sublist of a list of objects that exposes several methods we can use for retrieving information about the paged results, including the total number of result pages, and the number of the page that we’re retrieving.

By default, Spring MVC uses the PageableHandlerMethodArgumentResolver class to construct a PageRequest object, with the following request parameters:

  • page: the index of the page that we want to retrieve – the parameter is zero-indexed and its default value is 0
  • size: the number of records per page that we want to retrieve – the default value is 20
  • sort: one or more properties that we can use for sorting the results, using the following format: property1,property2(,asc|desc) – for instance, ?sort=name&sort=email,asc

For example, a GET request to the http://localhost:8080/users endpoint will return the following output:

{
  "content":[
    {
      "id":1,
      "name":"John"
    },
    {
      "id":2,
      "name":"Robert"
    },
    {
      "id":3,
      "name":"Nataly"
    },
    {
      "id":4,
      "name":"Helen"
    },
    {
      "id":5,
      "name":"Mary"
    }],
  "pageable":{
    "sort":{
      "sorted":false,
      "unsorted":true,
      "empty":true
    },
    "pageSize":5,
    "pageNumber":0,
    "offset":0,
    "unpaged":false,
    "paged":true
  },
  "last":true,
  "totalElements":5,
  "totalPages":1,
  "numberOfElements":5,
  "first":true,
  "size":5,
  "number":0,
  "sort":{
    "sorted":false,
    "unsorted":true,
    "empty":true
  },
  "empty":false
}

As we can see, the response includes the first, pageSize, totalElements, and totalPages JSON elements. This is really useful since a front-end can use these elements for easily creating a paging mechanism.

In addition, we can use an integration test to check the findAllUsers() method:

@Test
public void whenGetRequestToUsersEndPoint_thenCorrectResponse() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/users")
      .contentType(MediaType.APPLICATION_JSON_UTF8))
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.jsonPath("$['pageable']['paged']").value("true"));
}

5.2. Customizing the Paging Parameters

In many cases, we’ll want to customize the paging parameters. The simplest way to accomplish this is by using the @PageableDefault annotation:

@GetMapping("/users")
public Page<User> findAllUsers(@PageableDefault(value = 2, page = 0) Pageable pageable) {
    return userRepository.findAll(pageable);
}

Alternatively, we can use PageRequest‘s of() static factory method to create a custom PageRequest object and pass it to the repository method:

@GetMapping("/users")
public Page<User> findAllUsers() {
    Pageable pageable = PageRequest.of(0, 5);
    return userRepository.findAll(pageable);
}

The first parameter is the zero-based page index, while the second one is the size of the page that we want to retrieve.

In the example above, we created a PageRequest object of User entities, starting with the first page (0), with the page having 5 entries.

Additionally, we can build a PageRequest object using the page and size request parameters:

@GetMapping("/users")
public Page<User> findAllUsers(@RequestParam("page") int page, 
  @RequestParam("size") int size, Pageable pageable) {
    return userRepository.findAll(pageable);
}

Using this implementation, a GET request to the http://localhost:8080/users?page=0&size=2 endpoint will return the first page of User objects, and the size of the result page will be 2:

{
  "content": [
    {
      "id": 1,
      "name": "John"
    },
    {
      "id": 2,
      "name": "Robert"
    }
  ],
   
  // continues with pageable metadata
  
}

6. The SortHandlerMethodArgumentResolver Class

Paging is the de-facto approach for efficiently managing large numbers of database records. But, on its own, it’s pretty useless if we can’t sort the records in some specific way.

To this end, Spring MVC provides the SortHandlerMethodArgumentResolver class. The resolver automatically creates Sort instances from request parameters or from @SortDefault annotations.

6.1. Using the sort Controller Method Parameter

To get a clear idea of how the SortHandlerMethodArgumentResolver class works, let’s add the findAllUsersSortedByName() method to the controller class:

@GetMapping("/sortedusers")
public Page<User> findAllUsersSortedByName(@RequestParam("sort") String sort, Pageable pageable) {
    return userRepository.findAll(pageable);
}

In this case, the SortHandlerMethodArgumentResolver class will create a Sort object by using the sort request parameter.

As a result, a GET request to the http://localhost:8080/sortedusers?sort=name endpoint will return a JSON array, with the list of User objects sorted by the name property:

{
  "content": [
    {
      "id": 4,
      "name": "Helen"
    },
    {
      "id": 1,
      "name": "John"
    },
    {
      "id": 5,
      "name": "Mary"
    },
    {
      "id": 3,
      "name": "Nataly"
    },
    {
      "id": 2,
      "name": "Robert"
    }
  ],
  
  // continues with pageable metadata
  
}

6.2. Using the Sort.by() Static Factory Method

Alternatively, we can create a Sort object by using the Sort.by() static factory method, which takes a non-null, non-empty array of String properties to be sorted.

In this case, we’ll sort the records only by the name property:

@GetMapping("/sortedusers")
public Page<User> findAllUsersSortedByName() {
    Pageable pageable = PageRequest.of(0, 5, Sort.by("name"));
    return userRepository.findAll(pageable);
}

Of course, we could use multiple properties, as long as they’re declared in the domain class.
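
For instance, here’s a small sketch that sorts by name first and then, for entries with equal names, by id (the method name is ours):

@GetMapping("/sortedusers")
public Page<User> findAllUsersSortedByNameAndId() {
    // sorts by name, then by id, using the Sort.by(String...) overload
    Pageable pageable = PageRequest.of(0, 5, Sort.by("name", "id"));
    return userRepository.findAll(pageable);
}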

6.3. Using the @SortDefault Annotation

Likewise, we can use the @SortDefault annotation and get the same results:

@GetMapping("/sortedusers")
public Page<User> findAllUsersSortedByName(@SortDefault(sort = "name", 
  direction = Sort.Direction.ASC) Pageable pageable) {
    return userRepository.findAll(pageable);
}

Finally, let’s create an integration test to check the method’s behavior:

@Test
public void whenGetRequestToSortedUsersEndPoint_thenCorrectResponse() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/sortedusers")
      .contentType(MediaType.APPLICATION_JSON_UTF8))
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.jsonPath("$['sort']['sorted']").value("true"));
}

7. Querydsl Web Support

As we mentioned in the introduction, Spring Data web support allows us to use request parameters in controller methods to build Querydsl‘s Predicate types and to construct Querydsl queries.

To keep things simple, we’ll just see how Spring MVC converts a request parameter into a Querydsl BooleanExpression, which in turn is passed to a QuerydslPredicateExecutor.

To accomplish this, first we need to add the querydsl-apt and querydsl-jpa Maven dependencies to the pom.xml file:

<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-apt</artifactId>
</dependency>
<dependency>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-jpa</artifactId>
</dependency>

Next, we need to refactor our UserRepository interface, which must also extend the QuerydslPredicateExecutor interface:

@Repository
public interface UserRepository extends PagingAndSortingRepository<User, Long>,
  QuerydslPredicateExecutor<User> {
}

Finally, let’s add the following method to the UserController class:

@GetMapping("/filteredusers")
public Iterable<User> getUsersByQuerydslPredicate(@QuerydslPredicate(root = User.class) 
  Predicate predicate) {
    return userRepository.findAll(predicate);
}

Although the method implementation looks fairly simple, it actually exposes a lot of functionality beneath the surface.

Let’s say that we want to fetch from the database all the User entities that match a given name. We can achieve this by just calling the method and specifying a name request parameter in the URL:

http://localhost:8080/filteredusers?name=John

As expected, the request will return the following result:

[
  {
    "id": 1,
    "name": "John"
  }
]
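
Under the hood, the name request parameter is resolved into a predicate roughly equivalent to the following hand-written Querydsl expression; this sketch assumes the standard QUser query type that Querydsl generates for our User entity:

// roughly what @QuerydslPredicate builds for ?name=John
Predicate predicate = QUser.user.name.eq("John");
Iterable<User> users = userRepository.findAll(predicate);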

As we did before, we can use an integration test to check the getUsersByQuerydslPredicate() method:

@Test
public void whenGetRequestToFilteredUsersEndPoint_thenCorrectResponse() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/filteredusers")
      .param("name", "John")
      .contentType(MediaType.APPLICATION_JSON_UTF8))
      .andExpect(MockMvcResultMatchers.status().isOk())
      .andExpect(MockMvcResultMatchers.jsonPath("$[0].name").value("John"));
}

This is just a basic example of how Querydsl web support works. But it actually doesn’t reveal all of its power.

Now, let’s say that we want to fetch a User entity that matches a given id. In such a case, we just need to pass an id request parameter in the URL:

http://localhost:8080/filteredusers?id=2

In this case, we’ll get this result:

[
  {
    "id": 2,
    "name": "Robert"
  }
]

Clearly, Querydsl web support is a very powerful feature that we can use to fetch database records matching a given condition.

In all the cases, the whole process boils down to just calling a single controller method with different request parameters.

8. Conclusion

In this tutorial, we took an in-depth look at Spring Data web support’s key components and learned how to use them within a demo Spring Boot project.

As usual, all the examples shown in this tutorial are available over on GitHub.

Anonymous Classes in Java


1. Introduction

In this tutorial, we’ll consider anonymous classes in Java.

We’ll describe how we can declare and create instances of them. We’ll also briefly discuss their properties and limitations.

2. Anonymous Class Declaration

Anonymous classes are inner classes with no name. Since they have no name, we can’t refer to them by a type name elsewhere in the code. As a result, we have to declare and instantiate an anonymous class in a single expression at the point of use.

We may either extend an existing class or implement an interface.

2.1. Extend a Class

When we instantiate an anonymous class from an existing one, we use the following syntax, where in the parentheses we specify the parameters required by the constructor of the class that we are extending:

new Book("Design Patterns") {
    @Override
    public String description() {
        return "Famous GoF book.";
    }
}

Naturally, if the parent class constructor accepts no arguments, we should leave the parentheses empty.

2.2. Implement an Interface

We may instantiate an anonymous class from an interface as well. Since Java’s interfaces have no constructors, the parentheses always remain empty, and the class body simply implements the interface’s methods:

new Runnable() {
    @Override
    public void run() {
        ...
    }
}

Once we have instantiated an anonymous class, we can assign that instance to a variable in order to be able to reference it somewhere later.

We can do this using the standard syntax for Java expressions:

Runnable action = new Runnable() {
    @Override
    public void run() {
        ...
    }
};

As we already mentioned, an anonymous class declaration is an expression, hence it must be a part of a statement. This explains why we have put a semicolon at the end of the statement.

Obviously, we can avoid assigning the instance to a variable if we create that instance inline:

List<Runnable> actions = new ArrayList<Runnable>();
actions.add(new Runnable() {
    @Override
    public void run() {
        ...
    }
});

We should use this syntax with great care, as it can easily hurt code readability, especially when the implementation of the run() method takes up a lot of space.

3. Anonymous Class Properties

There are certain particularities in using anonymous classes with respect to usual top-level classes. Here, we briefly touch on the most practical issues. For the most precise and up-to-date information, we may always look at the Java Language Specification.

3.1. Constructor

The syntax of anonymous classes does not allow us to make them implement multiple interfaces. Since an anonymous class is instantiated at the very point of its declaration, it can never be abstract. Since it has no name, we can’t extend it. For the same reason, anonymous classes cannot have explicitly declared constructors.

In fact, the absence of a constructor doesn’t represent any problem for us for the following reasons:

  1. we create anonymous class instances at the same moment as we declare them
  2. from anonymous class instances, we can access local variables and enclosing class’s members

3.2. Static Members

Anonymous classes cannot have any static members except for those that are constant.

For example, this won’t compile:

new Runnable() {
    static final int x = 0;
    static int y = 0; // compilation error!

    @Override
    public void run() {...}
};

Instead, we’ll get the following error:

The field y cannot be declared static in a non-static inner type, unless initialized with a constant expression

3.3. Scope of Variables

Anonymous classes capture local variables that are in the scope of the block in which we have declared the class:

int count = 1;
Runnable action = new Runnable() {
    @Override
    public void run() {
        System.out.println("Runnable with captured variables: " + count);
    }           
};

As we see, the local variables count and action are defined in the same block. For this reason, we can access count from within the class declaration.

Note that in order to be able to use local variables, they must be effectively final. Since JDK 8, we’re no longer required to declare such variables with the keyword final; nevertheless, they must be effectively final. Otherwise, we get a compilation error:

[ERROR] local variables referenced from an inner class must be final or effectively final

For the compiler to decide that a variable is, in fact, immutable, there should be only one place in the code where we assign a value to it. We can find more information about effectively final variables in our article “Why Do Local Variables Used in Lambdas Have to Be Final or Effectively Final?”

Let’s also mention that, like every inner class, an anonymous class can access all members of its enclosing class.
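
Here’s a minimal sketch illustrating this (the class and field names are ours):

public class Outer {

    private int value = 42;

    public Runnable createPrinter() {
        return new Runnable() {
            @Override
            public void run() {
                // the anonymous class reads the enclosing instance's private field
                System.out.println(value);
            }
        };
    }
}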

4. Anonymous Class Use Cases

There might be a big variety of applications of anonymous classes. Let’s explore some possible use cases.

4.1. Class Hierarchy and Encapsulation

We should use inner classes for general use cases and anonymous ones for very specific ones in order to achieve a cleaner class hierarchy in our application. When using inner classes, we may achieve a finer encapsulation of the enclosing class’s data. If we defined the inner class functionality in a top-level class instead, the enclosing class would have to give public or package visibility to some of its members. Naturally, there are situations where this is not appreciated or even acceptable.

4.2. Cleaner Project Structure

We usually use anonymous classes when we have to modify the implementation of some class’s methods on the fly. In this case, we can avoid adding new *.java files to the project in order to define top-level classes. This is especially true if that top-level class would be used just once.

4.3. UI Event Listeners

In applications with a graphical interface, the most common use case of anonymous classes is to create various event listeners. For example, in the following snippet:

button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        ...
    }
});

we create an instance of an anonymous class that implements the ActionListener interface. Its actionPerformed() method gets triggered when a user clicks the button.

Since Java 8, however, lambda expressions are usually the preferred way to do this.
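
For instance, the listener above collapses to the following, assuming the same button variable:

button.addActionListener(e -> {
    // handle the click, as in actionPerformed() above
});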

5. General Picture

Anonymous classes that we considered above are just a particular case of nested classes. Generally, a nested class is a class that is declared inside another class or interface.

Anonymous classes, along with local and non-static member classes, form the so-called inner classes. Together with static member classes, they make up the nested classes.

6. Conclusion

In this article, we’ve considered various aspects of Java anonymous classes. We’ve also described the general hierarchy of nested classes.

As always, the complete code is available over in our GitHub repository.

Persisting Maps with Hibernate


1. Introduction

In Hibernate, we can represent one-to-many relationships in our Java beans by having one of our fields be a List.

In this quick tutorial, we’ll explore various ways of doing this with a Map instead.

2. Maps Are Different from Lists

Using a Map to represent a one-to-many relationship is different from a List because we have a key.

This key turns our entity relationship into a ternary association, where each key refers to a simple value, an embeddable object, or an entity. Because of this, to use a Map, we’ll always need a join table that stores the foreign key referencing the parent entity, the key, and the value.

But this join table will be a bit different from other join tables in that the primary key won’t necessarily be foreign keys to the parent and the target. Instead, we’ll have the primary key be a composite of a foreign key to the parent and a column that is the key to our Map.

The key-value pair in the Map may be of two types: Value Type and Entity Type. In the following sections, we’ll look at the ways to represent these associations in Hibernate.

3. Using @MapKeyColumn

Let’s say we have an Order entity and we want to keep track of name and price of all the items in an order. So, we want to introduce a Map<String, Double> to Order which will map the item’s name to its price:

@Entity
@Table(name = "orders")
public class Order {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private int id;

    @ElementCollection
    @CollectionTable(name = "order_item_mapping", 
      joinColumns = {@JoinColumn(name = "order_id", referencedColumnName = "id")})
    @MapKeyColumn(name = "item_name")
    @Column(name = "price")
    private Map<String, Double> itemPriceMap;

    // standard getters and setters
}

We need to indicate to Hibernate where to get the key and the value. For the key, we’ve used @MapKeyColumn, indicating that the Map‘s key is the item_name column of our join table, order_item_mapping. Similarly, @Column specifies that the Map’s value corresponds to the price column of the join table.

Also, since itemPriceMap is a value-type map, we must use the @ElementCollection annotation.

In addition to basic value type objects, @Embeddable objects can also be used as the Map‘s values in a similar fashion.

4. Using @MapKey

As we all know, requirements change over time — so, now, let’s say we need to store some more attributes of Item along with itemName and itemPrice:

@Entity
@Table(name = "item")
public class Item {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private int id;

    @Column(name = "name")
    private String itemName;

    @Column(name = "price")
    private double itemPrice;

    @Column(name = "item_type")
    @Enumerated(EnumType.STRING)
    private ItemType itemType;

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "created_on")
    private Date createdOn;
   
    // standard getters and setters
}

Accordingly, let’s change Map<String, Double> to Map<String, Item> in the Order entity class:

@Entity
@Table(name = "orders")
public class Order {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private int id;

    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "order_item_mapping", 
      joinColumns = {@JoinColumn(name = "order_id", referencedColumnName = "id")},
      inverseJoinColumns = {@JoinColumn(name = "item_id", referencedColumnName = "id")})
    @MapKey(name = "itemName")
    private Map<String, Item> itemMap;

}

Note that this time, we’ll use the @MapKey annotation so that Hibernate will use Item#itemName as the map key column instead of introducing an additional column in the join table. So, in this case, the join table order_item_mapping doesn’t have a key column — instead, it refers to the Item‘s name.

This is in contrast to @MapKeyColumn. When we use @MapKeyColumn, the map key resides in the join table. This is the reason why we can’t define our entity mapping using both annotations in conjunction.

Also, itemMap is an entity type map, therefore we have to annotate the relationship using @OneToMany or @ManyToMany.

5. Using @MapKeyEnumerated and @MapKeyTemporal

Whenever we specify an enum as the Map key, we use @MapKeyEnumerated. Similarly, for temporal values, @MapKeyTemporal is used. The behavior is quite similar to the standard @Enumerated and @Temporal annotations respectively.

By default, these are similar to @MapKeyColumn in that a key column will be created in the join table. If we want to reuse the value already stored in the persisted entity, we should additionally mark the field with @MapKey.
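
As an illustration, here’s a hedged sketch of an enum-keyed, value-type map on the Order entity, reusing the ItemType enum from above; the field and table names here are ours:

@ElementCollection
@CollectionTable(name = "order_type_price_mapping",
  joinColumns = {@JoinColumn(name = "order_id", referencedColumnName = "id")})
@MapKeyEnumerated(EnumType.STRING)
@MapKeyColumn(name = "item_type")
@Column(name = "price")
private Map<ItemType, Double> itemTypePriceMap;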

6. Using @MapKeyJoinColumn

Next, let’s say we also need to keep track of the seller of each item. One way we might do this is to add a Seller entity and tie that to our Item entity:

@Entity
@Table(name = "seller")
public class Seller {

    @Id
    @GeneratedValue
    @Column(name = "id")
    private int id;

    @Column(name = "name")
    private String sellerName;
   
    // standard getters and setters

}
@Entity
@Table(name = "item")
public class Item {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private int id;

    @Column(name = "name")
    private String itemName;

    @Column(name = "price")
    private double itemPrice;

    @Column(name = "item_type")
    @Enumerated(EnumType.STRING)
    private ItemType itemType;

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "created_on")
    private Date createdOn;

    @ManyToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "seller_id")
    private Seller seller;
 
    // standard getters and setters
}

In this case, let’s assume our use-case is to group all Order‘s Items by Seller. Hence, let’s change Map<String, Item> to Map<Seller, Item>:

@Entity
@Table(name = "orders")
public class Order {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private int id;

    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "order_item_mapping", 
      joinColumns = {@JoinColumn(name = "order_id", referencedColumnName = "id")},
      inverseJoinColumns = {@JoinColumn(name = "item_id", referencedColumnName = "id")})
    @MapKeyJoinColumn(name = "seller_id")
    private Map<Seller, Item> sellerItemMap;

    // standard getters and setters

}

We need to add @MapKeyJoinColumn to achieve this since that annotation allows Hibernate to keep the seller_id column (the map key) in the join table order_item_mapping along with the item_id column. So then, at the time of reading the data from the database, we can perform a GROUP BY operation easily.

7. Conclusion

In this article, we learned about the several ways of persisting Map in Hibernate depending upon the required mapping.

As always, the source code of this tutorial can be found over on GitHub.

Types of SQL Joins


1. Introduction

In this tutorial, we’ll show different types of SQL joins and how they can be easily implemented in Java.

2. Defining the Model

Let’s start by creating two simple tables:

CREATE TABLE AUTHOR
(
  ID int NOT NULL PRIMARY KEY,
  FIRST_NAME varchar(255),
  LAST_NAME varchar(255)
);

CREATE TABLE ARTICLE
(
  ID int NOT NULL PRIMARY KEY,
  TITLE varchar(255) NOT NULL,
  AUTHOR_ID int,
  FOREIGN KEY(AUTHOR_ID) REFERENCES AUTHOR(ID)
);

And fill them with some test data:

INSERT INTO AUTHOR VALUES 
(1, 'Siena', 'Kerr'),
(2, 'Daniele', 'Ferguson'),
(3, 'Luciano', 'Wise'),
(4, 'Jonas', 'Lugo');

INSERT INTO ARTICLE VALUES
(1, 'First steps in Java', 1),
(2, 'SpringBoot tutorial', 1),
(3, 'Java 12 insights', null),
(4, 'SQL JOINS', 2),
(5, 'Introduction to Spring Security', 3);

Note that in our sample data set, not all authors have articles, and vice-versa. This will play a big part in our examples, which we’ll see later.

Let’s also define a POJO that we’ll use for storing the results of JOIN operations throughout our tutorial:

class ArticleWithAuthor {

    private String title;
    private String authorFirstName;
    private String authorLastName;

    // standard constructor, setters and getters
}

In our examples, we’ll extract a title from the ARTICLE table and author data from the AUTHOR table.

3. Configuration

For our examples, we’ll use an external PostgreSQL database running on port 5432. Apart from the FULL JOIN, which is not supported in either MySQL or H2, all provided snippets should work with any SQL provider.

For our Java implementation, we’ll need a PostgreSQL driver:

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.5</version>
    <scope>test</scope>
</dependency>

Let’s first configure a java.sql.Connection to work with our database:

Class.forName("org.postgresql.Driver");
Connection connection = DriverManager.
  getConnection("jdbc:postgresql://localhost:5432/myDb", "user", "pass");

Next, let’s create a DAO class and some utility methods:

class ArticleWithAuthorDAO {

    private final Connection connection;

    // constructor

    private List<ArticleWithAuthor> executeQuery(String query) {
        try (Statement statement = connection.createStatement()) {
            ResultSet resultSet = statement.executeQuery(query);
            return mapToList(resultSet);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return new ArrayList<>();
    }

    private List<ArticleWithAuthor> mapToList(ResultSet resultSet) throws SQLException {
        List<ArticleWithAuthor> list = new ArrayList<>();
        while (resultSet.next()) {
            ArticleWithAuthor articleWithAuthor = new ArticleWithAuthor(
              resultSet.getString("TITLE"),
              resultSet.getString("FIRST_NAME"),
              resultSet.getString("LAST_NAME")
            );
            list.add(articleWithAuthor);
        }
        return list;
    }
}

In this article, we won’t dive into the details of using ResultSet, Statement, and Connection. These topics are covered in our JDBC-related articles.

Let’s start exploring SQL joins in sections below.

4. Inner Join

Let’s start with possibly the simplest type of join. The INNER JOIN is an operation that selects rows matching a provided condition from both tables. The query consists of at least three parts: select columns, join tables and join condition.

Bearing that in mind, the syntax itself becomes pretty straightforward:

SELECT ARTICLE.TITLE, AUTHOR.LAST_NAME, AUTHOR.FIRST_NAME
  FROM ARTICLE INNER JOIN AUTHOR 
  ON AUTHOR.ID=ARTICLE.AUTHOR_ID

We can also picture the result of the INNER JOIN as the intersection of the two sets.

Let’s now implement the method for the INNER JOIN in the ArticleWithAuthorDAO class:

List<ArticleWithAuthor> articleInnerJoinAuthor() {
    String query = "SELECT ARTICLE.TITLE, AUTHOR.LAST_NAME, AUTHOR.FIRST_NAME "
      + "FROM ARTICLE INNER JOIN AUTHOR ON AUTHOR.ID=ARTICLE.AUTHOR_ID";
    return executeQuery(query);
}

And test it:

@Test
public void whenQueryWithInnerJoin_thenShouldReturnProperRows() {
    List<ArticleWithAuthor> articleWithAuthorList = articleWithAuthorDAO.articleInnerJoinAuthor();

    assertThat(articleWithAuthorList).hasSize(4);
    assertThat(articleWithAuthorList)
      .noneMatch(row -> row.getAuthorFirstName() == null || row.getTitle() == null);
}

As we mentioned before, the INNER JOIN selects only the common rows matching the provided condition. Looking at our inserts, we see that we have one article without an author and one author without an article. These rows are skipped because they don’t fulfill the provided condition. As a result, we retrieve four joined results, and none of them has empty author data or an empty title.

5. Left Join

Next, let’s focus on the LEFT JOIN. This kind of join selects all rows from the first table and matches the corresponding rows from the second table. When there is no match, the columns are filled with null values.

Graphically, the result of the LEFT JOIN includes every record from the set representing the first table, together with the intersecting values from the second table.

Now, let’s move to the Java implementation:

List<ArticleWithAuthor> articleLeftJoinAuthor() {
    String query = "SELECT ARTICLE.TITLE, AUTHOR.LAST_NAME, AUTHOR.FIRST_NAME "
      + "FROM ARTICLE LEFT JOIN AUTHOR ON AUTHOR.ID=ARTICLE.AUTHOR_ID";
    return executeQuery(query);
}

The only difference to the previous example is that we used the LEFT keyword instead of the INNER keyword.

Before we test our LEFT JOIN method, let’s again take a look at our inserts. In this case, we’ll receive all the records from the ARTICLE table and their matching rows from the AUTHOR table. As we mentioned before, not every article has an author yet, so we expect to have null values in place of author data:

@Test
public void whenQueryWithLeftJoin_thenShouldReturnProperRows() {
    List<ArticleWithAuthor> articleWithAuthorList = articleWithAuthorDAO.articleLeftJoinAuthor();

    assertThat(articleWithAuthorList).hasSize(5);
    assertThat(articleWithAuthorList).anyMatch(row -> row.getAuthorFirstName() == null);
}

6. Right Join

The RIGHT JOIN is much like the LEFT JOIN, but it returns all rows from the second table and matches rows from the first table. As in the case of the LEFT JOIN, empty matches are replaced by null values.

The graphical representation of this kind of join is a mirror reflection of the one we described for the LEFT JOIN.

Let’s implement the RIGHT JOIN in Java:

List<ArticleWithAuthor> articleRightJoinAuthor() {
    String query = "SELECT ARTICLE.TITLE, AUTHOR.LAST_NAME, AUTHOR.FIRST_NAME "
      + "FROM ARTICLE RIGHT JOIN AUTHOR ON AUTHOR.ID=ARTICLE.AUTHOR_ID";
    return executeQuery(query);
}

Again, let’s look at our test data. Since this join operation retrieves all records from the second table, we expect to retrieve five rows. And because not every author has written an article yet, we expect some null values in the TITLE column:

@Test
public void whenQueryWithRightJoin_thenShouldReturnProperRows() {
    List<ArticleWithAuthor> articleWithAuthorList = articleWithAuthorDAO.articleRightJoinAuthor();

    assertThat(articleWithAuthorList).hasSize(5);
    assertThat(articleWithAuthorList).anyMatch(row -> row.getTitle() == null);
}

7. Full Outer Join

This join operation is probably the trickiest one. The FULL JOIN selects all rows from both the first and the second table, regardless of whether the condition is met.

We can picture the result of the FULL JOIN as the union of both sets.

Let’s have a look at the Java implementation:

List<ArticleWithAuthor> articleOuterJoinAuthor() {
    String query = "SELECT ARTICLE.TITLE, AUTHOR.LAST_NAME, AUTHOR.FIRST_NAME "
      + "FROM ARTICLE FULL JOIN AUTHOR ON AUTHOR.ID=ARTICLE.AUTHOR_ID";
    return executeQuery(query);
}

Now, we can test our method:

@Test
public void whenQueryWithFullJoin_thenShouldReturnProperRows() {
    List<ArticleWithAuthor> articleWithAuthorList = articleWithAuthorDAO.articleOuterJoinAuthor();

    assertThat(articleWithAuthorList).hasSize(6);
    assertThat(articleWithAuthorList).anyMatch(row -> row.getTitle() == null);
    assertThat(articleWithAuthorList).anyMatch(row -> row.getAuthorFirstName() == null);
}

Once more, let’s look at the test data. We have five different articles, one of which has no author, and four authors, one of which has no assigned article. As a result of the FULL JOIN, we expect to retrieve six rows. Four of them are matched against each other, and the remaining two are not. For that reason, we also assume that there will be at least one row with null values in both AUTHOR data columns and one with a null value in the TITLE column.

8. Conclusion

In this article, we explored the basic types of SQL joins. We looked at examples of four types of joins and how they can be implemented in Java.

As always, the complete code used in this article is available over on GitHub.

Spring Data JPA Repository Populators


1. Introduction

In this quick article, we’ll explore Spring Data JPA repository populators with a quick example. The repository populator is a great alternative to the data.sql script.

The Spring Data JPA repository populator supports the JSON and XML file formats. In the following sections, we’ll see how to use it.

2. Sample Application

First of all, let’s say we have a Fruit entity class and an inventory of fruits to populate our database:

@Entity
public class Fruit {
    @Id
    private long id;
    private String name;
    private String color;
    
    // getters and setters
}

We’ll extend JpaRepository to read Fruit data from the database:

@Repository
public interface FruitRepository extends JpaRepository<Fruit, Long> {
    // ...
}

In the following section, we’ll use the JSON format to store and populate the initial fruit data.

3. JSON Repository Populators

Let’s create a JSON file with Fruit data. We’ll create this file in src/main/resources and call it fruit-data.json:

[
    {
        "_class": "com.baeldung.entity.Fruit",
        "name": "apple",
        "color": "red",
        "id": 1
    },
    {
        "_class": "com.baeldung.entity.Fruit",
        "name": "guava",
        "color": "green",
        "id": 2
    }
]

The entity class name should be given in the _class field of each JSON object. The remaining keys map to columns of our Fruit entity.

Now, we’ll add the jackson-databind dependency in the pom.xml:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.8</version>
</dependency>

Finally, we’ll have to add a repository populator bean. This repository populator bean will read the data from the fruit-data.json file and populate it into the database when the application starts:

@Bean
public Jackson2RepositoryPopulatorFactoryBean getRespositoryPopulator() {
    Jackson2RepositoryPopulatorFactoryBean factory = new Jackson2RepositoryPopulatorFactoryBean();
    factory.setResources(new Resource[]{new ClassPathResource("fruit-data.json")});
    return factory;
}

We’re all set to unit test our configuration:

@Test
public void givenFruitJsonPopulatorThenShouldInsertRecordOnStart() {
    List<Fruit> fruits = fruitRepository.findAll();
    assertEquals("record count is not matching", 2, fruits.size());

    fruits.forEach(fruit -> {
        if (1 == fruit.getId()) {
            assertEquals("apple", fruit.getName());
            assertEquals("red", fruit.getColor());
        } else if (2 == fruit.getId()) {
            assertEquals("guava", fruit.getName());
            assertEquals("green", fruit.getColor());
        }
    });
}

4. XML Repository Populators

In this section, we’ll see how to use XML files with repository populators. Firstly, we’ll create an XML file with the required Fruit details.

Here, each XML file represents a single fruit’s data.

apple-fruit-data.xml:

<fruit>
    <id>1</id>
    <name>apple</name>
    <color>red</color>
</fruit>

guava-fruit-data.xml:

<fruit>
    <id>2</id>
    <name>guava</name>
    <color>green</color>
</fruit>

Again, we’re storing these XML files in src/main/resources.

Also, we’ll add the spring-oxm Maven dependency to the pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-oxm</artifactId>
    <version>5.1.5.RELEASE</version>
</dependency>

In addition, we need to add @XmlRootElement annotation to our entity class:

@XmlRootElement
@Entity
public class Fruit {
    // ...
}

Finally, we’ll define a repository populator bean. This bean will read the XML file and populate the data:

@Bean
public UnmarshallerRepositoryPopulatorFactoryBean repositoryPopulator() {
    Jaxb2Marshaller unmarshaller = new Jaxb2Marshaller();
    unmarshaller.setClassesToBeBound(Fruit.class);

    UnmarshallerRepositoryPopulatorFactoryBean factory = new UnmarshallerRepositoryPopulatorFactoryBean();
    factory.setUnmarshaller(unmarshaller);
    factory.setResources(new Resource[] { new ClassPathResource("apple-fruit-data.xml"), 
      new ClassPathResource("guava-fruit-data.xml") });
    return factory;
}

We can unit test the XML repository populator just like we can with the JSON populator.

5. Conclusion

In this tutorial, we learned how to use Spring Data JPA repository populator. The complete source code used for this tutorial is available over on GitHub.


Groovy def Keyword


1. Overview

In this quick tutorial, we’ll explore the concept of the def keyword in Groovy. It provides an optional typing feature to this dynamic JVM language.

2. Meaning of the def Keyword

The def keyword is used to define an untyped variable or a function in Groovy, as it is an optionally-typed language.

When we’re unsure of the type of a variable or field, we can leverage def to let Groovy decide types at runtime based on the assigned values:

def firstName = "Samwell"  
def listOfCountries = ['USA', 'UK', 'FRANCE', 'INDIA']

Here, firstName will be a String, and listOfCountries will be an ArrayList.

We can also use the def keyword to define the return type of a method:

def multiply(x, y) {
    return x*y
}

Here, multiply can return any type of object, depending on the parameters we pass to it.

3. def Variables

Let’s understand how def works for variables.

When we use def to declare a variable, Groovy declares it as a NullObject and assigns a null value to it:

def list
assert list.getClass() == org.codehaus.groovy.runtime.NullObject
assert list.is(null)

The moment we assign a value to the list, Groovy defines its type based on the assigned value:

list = [1,2,4]
assert list instanceof ArrayList

Now, let’s say that we want our variable’s type to be dynamic and to change with each assignment. A plain typed declaration won’t allow that:

int rate = 20
rate = [12] // GroovyCastException
rate = "nill" // GroovyCastException

We cannot assign a List or a String to an int-typed variable, as this will throw a runtime exception.

So, to overcome this problem and invoke the dynamic nature of Groovy, we’ll use the def keyword:

def rate
assert rate == null
assert rate.getClass() == org.codehaus.groovy.runtime.NullObject

rate = 12
assert rate instanceof Integer
        
rate = "Not Available"
assert rate instanceof String
        
rate = [1, 4]
assert rate instanceof List

4. def Methods

The def keyword is further used to define the dynamic return type of a method. This is handy when we can have different types of return values for a method:

def divide(int x, int y) {
    if (y == 0) {
        return "Should not divide by 0"
    } else {
        return x/y
    }
}

assert divide(12, 3) instanceof BigDecimal
assert divide(1, 0) instanceof String

We can also use def to define a method with no explicit returns:

def greetMsg() {
    println "Hello! I am Groovy"
}

5. def vs. Type

Let’s discuss some of the best practices surrounding the use of def.

Although we may use both def and type together while declaring a variable:

def int count
assert count instanceof Integer

The def keyword will be redundant there, so we should use either def or a type.

Additionally, we should avoid using def for untyped parameters in a method.

Therefore, instead of:

void multiply(def x, def y)

We should prefer:

void multiply(x, y)

Furthermore, we should avoid using def when defining constructors.

6. Groovy def vs. Java Object

Having seen most of the features of the def keyword and its uses through examples, we might wonder if it’s similar to declaring something using the Object class in Java. Indeed, def can be considered similar to Object:

def fullName = "Norman Lewis"

Similarly, we can use Object in Java:

Object fullName = "Norman Lewis";

7. def vs. @TypeChecked

As many of us would be from the world of strictly-typed languages, we may wonder how to force compile-time type checking in Groovy. We can easily achieve this using the @TypeChecked annotation.

For example, we can use @TypeChecked over a class to enable type checking for all of its methods and properties:

@TypeChecked
class DefUnitTest extends GroovyTestCase {

    def multiply(x, y) {
        return x * y
    }
    
    int divide(int x, int y) {
        return x / y
    }
}

Here, the DefUnitTest class will be type checked, and compilation will fail due to the multiply method being untyped. The Groovy compiler will display an error:

[Static type checking] - Cannot find matching method java.lang.Object#multiply(java.lang.Object).
Please check if the declared type is correct and if the method exists.

So, to skip type checking for a particular method, we can use TypeCheckingMode.SKIP:

@TypeChecked(TypeCheckingMode.SKIP)
def multiply(x, y)

8. Conclusion

In this quick tutorial, we’ve seen how to use the def keyword to invoke the dynamic feature of the Groovy language and have it determine the types of variables and methods at runtime.

This keyword can be handy in writing dynamic and robust code.

As usual, the code implementations of this tutorial are available on the GitHub project.

Generic Constructors in Java


1. Overview

We previously discussed the basics of Java Generics. In this tutorial, we’ll have a look at Generic Constructors in Java.

A generic constructor is a constructor that has at least one parameter of a generic type.

We’ll see that generic constructors don’t have to be in a generic class, and not all constructors in a generic class have to be generic.

2. Non-Generic Class

First, we have a simple class Entry, which is not a generic class:

public class Entry {
    private String data;
    private int rank;
}

In this class, we’ll add two constructors: a basic constructor with two parameters, and a generic constructor.

2.1. Basic Constructor

The first Entry constructor is a simple constructor with two parameters:

public Entry(String data, int rank) {
    this.data = data;
    this.rank = rank;
}

Now, let’s use this basic constructor to create an Entry object:

@Test
public void givenNonGenericConstructor_whenCreateNonGenericEntry_thenOK() {
    Entry entry = new Entry("sample", 1);
    
    assertEquals("sample", entry.getData());
    assertEquals(1, entry.getRank());
}

2.2. Generic Constructor

Next, our second constructor is a generic constructor:

public <E extends Rankable & Serializable> Entry(E element) {
    this.data = element.toString();
    this.rank = element.getRank();
}

Although the Entry class isn’t generic, it has a generic constructor, as it has a parameter element of type E.

The generic type E is bounded and should implement both Rankable and Serializable interfaces.

Now, let’s have a look at the Rankable interface, which has one method:

public interface Rankable {
    public int getRank();
}

And, suppose we have a class Product that implements the Rankable interface:

public class Product implements Rankable, Serializable {
    private String name;
    private double price;
    private int sales;

    public Product(String name, double price) {
        this.name = name;
        this.price = price;
    }

    @Override
    public int getRank() {
        return sales;
    }

    // standard getters and setters
}

We can then use the generic constructor to create Entry objects using a Product:

@Test
public void givenGenericConstructor_whenCreateNonGenericEntry_thenOK() {
    Product product = new Product("milk", 2.5);
    product.setSales(30);
 
    Entry entry = new Entry(product);
    
    assertEquals(product.toString(), entry.getData());
    assertEquals(30, entry.getRank());
}

3. Generic Class

Next, we’ll have a look at a generic class called GenericEntry:

public class GenericEntry<T> {
    private T data;
    private int rank;
}

We’ll add the same two types of constructors as in the previous section to this class as well.

3.1. Basic Constructor

First, let’s write a simple, non-generic constructor for our GenericEntry class:

public GenericEntry(int rank) {
    this.rank = rank;
}

Even though GenericEntry is a generic class, this is a simple constructor that doesn’t have a parameter of a generic type.

Now, we can use this constructor to create a GenericEntry<String>:

@Test
public void givenNonGenericConstructor_whenCreateGenericEntry_thenOK() {
    GenericEntry<String> entry = new GenericEntry<String>(1);
    
    assertNull(entry.getData());
    assertEquals(1, entry.getRank());
}

3.2. Generic Constructor

Next, let’s add the second constructor to our class:

public GenericEntry(T data, int rank) {
    this.data = data;
    this.rank = rank;
}

This is a generic constructor, as it has a data parameter of the generic type T. Note that we don’t need to add <T> in the constructor declaration, as it’s implicitly there.
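
If we did write <T> explicitly in the declaration, we’d actually introduce a brand new type parameter that hides the class’ T. Here’s a minimal sketch of that pitfall, using a hypothetical ShadowingEntry class purely for illustration:

public class ShadowingEntry<T> {
    private T data;

    // this <T> declares a NEW type parameter that shadows the class' T,
    // so the compiler treats the two as unrelated types
    public <T> ShadowingEntry(T data) {
        // this.data = data; // would not compile: incompatible types
    }
}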

Now, let’s test our generic constructor:

@Test
public void givenGenericConstructor_whenCreateGenericEntry_thenOK() {
    GenericEntry<String> entry = new GenericEntry<String>("sample", 1);
    
    assertEquals("sample", entry.getData());
    assertEquals(1, entry.getRank());        
}

4. Generic Constructor with Different Type

In our generic class, we can also have a constructor with a generic type that’s different from the class’ generic type:

public <E extends Rankable & Serializable> GenericEntry(E element) {
    this.data = (T) element;
    this.rank = element.getRank();
}

This GenericEntry constructor has a parameter element with type E, which is different from the T type. Let’s see it in action:

@Test
public void givenGenericConstructorWithDifferentType_whenCreateGenericEntry_thenOK() {
    Product product = new Product("milk", 2.5);
    product.setSales(30);
 
    GenericEntry<Serializable> entry = new GenericEntry<Serializable>(product);

    assertEquals(product, entry.getData());
    assertEquals(30, entry.getRank());
}

Note that:

  • In our example, we used Product (E) to create a GenericEntry of type Serializable (T)
  • We can only use this constructor when the parameter of type E can be cast to T, as the sketch below shows
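
To make the second point concrete, here’s a short sketch of what goes wrong when E can’t be cast to T. Because of type erasure, the unchecked cast succeeds at construction time, and the failure only surfaces when we read the data:

// Product implements Rankable and Serializable, but it is not a String
GenericEntry<String> entry = new GenericEntry<String>(product); // compiles with an unchecked warning

String data = entry.getData(); // throws ClassCastException at runtime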

5. Multiple Generic Types

Next, we have the generic class MapEntry with two generic types:

public class MapEntry<K, V> {
    private K key;
    private V value;

    public MapEntry(K key, V value) {
        this.key = key;
        this.value = value;
    }
}

MapEntry has one generic constructor with two parameters, each of a different type. Let’s use it in a simple unit test:

@Test
public void givenGenericConstructor_whenCreateGenericEntryWithTwoTypes_thenOK() {
    MapEntry<String,Integer> entry = new MapEntry<String,Integer>("sample", 1);
    
    assertEquals("sample", entry.getKey());
    assertEquals(1, entry.getValue().intValue());        
}

6. Wildcards

Finally, we can use wildcards in a generic constructor:

public GenericEntry(Optional<? extends Rankable> optional) {
    if (optional.isPresent()) {
        this.data = (T) optional.get();
        this.rank = optional.get().getRank();
    }
}

Here, we used a bounded wildcard in the GenericEntry constructor to restrict the Optional’s type parameter to Rankable and its subtypes:

@Test
public void givenGenericConstructorWithWildCard_whenCreateGenericEntry_thenOK() {
    Product product = new Product("milk", 2.5);
    product.setSales(30);
    Optional<Product> optional = Optional.of(product);
 
    GenericEntry<Serializable> entry = new GenericEntry<Serializable>(optional);
    
    assertEquals(product, entry.getData());
    assertEquals(30, entry.getRank());
}

Note that we should be able to cast the optional parameter type (in our case, Product) to the GenericEntry type (in our case, Serializable).

7. Conclusion

In this article, we learned how to define and use generic constructors in both generic and non-generic classes.

The full source code can be found over on GitHub.

Spring Data JPA Delete and Relationships


1. Overview

In this tutorial, we’ll have a look at how deleting is done in Spring Data JPA.

2. Sample Entity

As we know from the Spring Data JPA reference documentation, repository interfaces provide us with basic support for entities.

If we have an entity, like a Book:

@Entity
public class Book {

    @Id
    @GeneratedValue
    private Long id;
    private String title;

    // standard constructors

    // standard getters and setters
}

Then, we can extend Spring Data JPA’s CrudRepository to give us access to CRUD operations on Book:

@Repository
public interface BookRepository extends CrudRepository<Book, Long> {}

3. Delete from Repository

Among others, CrudRepository contains two methods: deleteById and deleteAll.

Let’s test these methods directly from our BookRepository:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class DeleteFromRepositoryUnitTest {

    @Autowired
    private BookRepository repository;

    Book book1;
    Book book2;
    List<Book> books;

    // data initialization

    @Test
    public void whenDeleteByIdFromRepository_thenDeletingShouldBeSuccessful() {
        repository.deleteById(book1.getId());
        assertThat(repository.count()).isEqualTo(1);
    }

    @Test
    public void whenDeleteAllFromRepository_thenRepositoryShouldBeEmpty() {
        repository.deleteAll();
        assertThat(repository.count()).isEqualTo(0);
    }
}

And even though we are using CrudRepository, note that these same methods exist for other Spring Data JPA interfaces like JpaRepository or PagingAndSortingRepository.

4. Derived Delete Query

We can also derive query methods for deleting entities. There is a set of rules for writing them, but let’s just focus on the simplest example.

A derived delete query must start with deleteBy, followed by the name of the selection criteria. These criteria must be provided in the method call.

Let’s say that we want to delete Books by title. Using the naming convention, we’d start with deleteBy and list title as our criteria:

@Repository
public interface BookRepository extends CrudRepository<Book, Long> {
    long deleteByTitle(String title);
}

The return value, of type long, indicates how many records the method deleted.

Let’s write a test and make sure that is correct:

@Test
@Transactional
public void whenDeleteFromDerivedQuery_thenDeletingShouldBeSuccessful() {
    long deletedRecords = repository.deleteByTitle("The Hobbit");
    assertThat(deletedRecords).isEqualTo(1);
}

Persisting and deleting objects in JPA requires a transaction. That’s why we should use a @Transactional annotation when using these derived delete queries, to make sure a transaction is running. This is explained in detail in the ORM with Spring documentation.

5. Custom Delete Query

The method names for derived queries can get quite long, and they are limited to just a single table.

When we need something more complex, we can write a custom query using @Query and @Modifying together.

Let’s check the equivalent code for our derived method from earlier:

@Modifying
@Query("delete from Book b where b.title=:title")
void deleteBooks(@Param("title") String title);

Again, we can verify it works with a simple test:

@Test
@Transactional
public void whenDeleteFromCustomQuery_thenDeletingShouldBeSuccessful() {
    repository.deleteBooks("The Hobbit");
    assertThat(repository.count()).isEqualTo(1);
}

Both solutions presented above are similar and achieve the same result. However, they take a slightly different approach.

The @Query method creates a single JPQL query against the database. By comparison, the deleteBy methods execute a read query, then delete each of the items one by one.
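
Since the deleteBy variant fetches the entities before removing them, Spring Data also lets a derived delete method return the removed entities instead of a count. As a sketch, we could declare our method like this instead:

List<Book> deleteByTitle(String title);

This can be handy when we need to log or post-process whatever was just removed.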

6. Delete in Relationships

Let’s see now what happens when we have relationships with other entities.

Assume we have a Category entity, that has a OneToMany association with the Book entity:

@Entity
public class Category {

    @Id
    @GeneratedValue
    private Long id;
    private String name;

    @OneToMany(mappedBy = "category", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Book> books;

    // standard constructors

    // standard getters and setters
}

The CategoryRepository can just be an empty interface that extends CrudRepository:

@Repository
public interface CategoryRepository extends CrudRepository<Category, Long> {}

We should also modify the Book entity to reflect this association:

@ManyToOne
private Category category;

Let’s now add two categories and associate them with the books we currently have. If we then try to delete the categories, the books will also be deleted:

@Test
public void whenDeletingCategories_thenBooksShouldAlsoBeDeleted() {
    categoryRepository.deleteAll();
    assertThat(bookRepository.count()).isEqualTo(0);
    assertThat(categoryRepository.count()).isEqualTo(0);
}

This is not bi-directional, though. That means that if we delete the books, the categories are still there:

@Test
public void whenDeletingBooks_thenCategoriesShouldNotBeDeleted() {
    bookRepository.deleteAll();
    assertThat(bookRepository.count()).isEqualTo(0);
    assertThat(categoryRepository.count()).isEqualTo(2);
}

We can change this behavior by changing the properties of the relationship, such as the CascadeType.
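
For instance, here’s a minimal sketch of a narrower mapping: if we drop CascadeType.REMOVE (included in CascadeType.ALL) and orphanRemoval, deleting a Category no longer deletes its Books:

@OneToMany(mappedBy = "category", cascade = { CascadeType.PERSIST, CascadeType.MERGE })
private List<Book> books;

With this mapping, we’d have to delete the books or clear their category references ourselves before removing a Category, or the foreign key constraint would fail.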

7. Conclusion

In this article, we looked at different ways to delete entities in Spring Data JPA. We looked at the provided delete methods from CrudRepository, as well as our derived queries or custom ones using @Query annotation.

We also had a look at how deleting is done in relationships. As always, all of the code snippets mentioned in this article can be found on our GitHub repository.

Guide to Google Tink


1. Introduction

Nowadays, many developers use cryptographic techniques to protect user data.

In cryptography, small implementation errors can have serious consequences, and understanding how to implement cryptography correctly is a complex and time-consuming task.

In this tutorial, we’re going to describe Tink – a multi-language, cross-platform cryptographic library that can help us to implement secure, cryptographic code.

2. Dependencies

We can use Maven or Gradle to import Tink.

For our tutorial, we’ll just add Tink’s Maven dependency:

<dependency>
    <groupId>com.google.crypto.tink</groupId>
    <artifactId>tink</artifactId>
    <version>1.2.2</version>
</dependency>

Though we could have used Gradle instead:

dependencies {
  compile 'com.google.crypto.tink:tink:1.2.2'
}

3. Initialization

Before using any of Tink’s APIs, we need to initialize them.

If we need to use all implementations of all primitives in Tink, we can use the TinkConfig.register() method:

TinkConfig.register();

If, for example, we only need the AEAD primitive, we can use the AeadConfig.register() method:

AeadConfig.register();

A customizable initialization is provided for each implementation, too.

4. Tink Primitives

The main objects the library uses are called primitives, which, depending on their type, contain different cryptographic functionality.

A primitive can have multiple implementations:

Primitive             Implementations
AEAD                  AES-EAX, AES-GCM, AES-CTR-HMAC, KMS Envelope, CHACHA20-POLY1305
Streaming AEAD        AES-GCM-HKDF-STREAMING, AES-CTR-HMAC-STREAMING
Deterministic AEAD    AES-SIV
MAC                   HMAC-SHA2
Digital Signature     ECDSA over NIST curves, ED25519
Hybrid Encryption     ECIES with AEAD and HKDF, (NaCl CryptoBox)

We can obtain a primitive by calling the getPrimitive() method of the corresponding factory class, passing it a KeysetHandle:

Aead aead = AeadFactory.getPrimitive(keysetHandle);

4.1. KeysetHandle

In order to provide cryptographic functionality, each primitive needs a key structure that contains all the key material and parameters.

Tink provides an object – KeysetHandle – which wraps a keyset with some additional parameters and metadata.

So, before instantiating a primitive, we need to create a KeysetHandle object:

KeysetHandle keysetHandle = KeysetHandle.generateNew(AeadKeyTemplates.AES256_GCM);

And after generating a key, we might want to persist it:

String keysetFilename = "keyset.json";
CleartextKeysetHandle.write(keysetHandle, JsonKeysetWriter.withFile(new File(keysetFilename)));

Then, we can subsequently load it:

String keysetFilename = "keyset.json";
KeysetHandle keysetHandle = CleartextKeysetHandle.read(JsonKeysetReader.withFile(new File(keysetFilename)));

5. Encryption

Tink provides multiple ways of applying the AEAD algorithm. Let’s take a look.

5.1. AEAD

AEAD provides Authenticated Encryption with Associated Data which means that we can encrypt plaintext and, optionally, provide associated data that should be authenticated but not encrypted.

Note that this algorithm ensures the authenticity and integrity of the associated data but not its secrecy.

To encrypt data with one of the AEAD implementations, as we previously saw, we need to initialize the library and create a keysetHandle:

AeadConfig.register();
KeysetHandle keysetHandle = KeysetHandle.generateNew(
  AeadKeyTemplates.AES256_GCM);

Once we’ve done that, we can get the primitive and encrypt the desired data:

String plaintext = "baeldung";
String associatedData = "Tink";

Aead aead = AeadFactory.getPrimitive(keysetHandle); 
byte[] ciphertext = aead.encrypt(plaintext.getBytes(), associatedData.getBytes());

Next, we can decrypt the ciphertext using the decrypt() method:

String decrypted = new String(aead.decrypt(ciphertext, associatedData.getBytes()));

5.2. Streaming AEAD

Similarly, when the data to be encrypted is too large to be processed in a single step, we can use the streaming AEAD primitive:

StreamingAeadConfig.register();
KeysetHandle keysetHandle = KeysetHandle.generateNew(
  StreamingAeadKeyTemplates.AES128_CTR_HMAC_SHA256_4KB);
StreamingAead streamingAead = StreamingAeadFactory.getPrimitive(keysetHandle);

FileChannel cipherTextDestination = new FileOutputStream("cipherTextFile").getChannel();
WritableByteChannel encryptingChannel =
  streamingAead.newEncryptingChannel(cipherTextDestination, associatedData.getBytes());

ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE); // CHUNK_SIZE: a buffer size of our choice
InputStream in = new FileInputStream("plainTextFile");

int bytesRead;
while ((bytesRead = in.read(buffer.array())) > 0) {
    // expose only the bytes that were actually read, then write them to the channel
    buffer.position(0);
    buffer.limit(bytesRead);
    encryptingChannel.write(buffer);
}

encryptingChannel.close();
in.close();

Basically, we needed a WritableByteChannel to achieve this.

So, to decrypt the cipherTextFile, we’d want to use a ReadableByteChannel:

FileChannel cipherTextSource = new FileInputStream("cipherTextFile").getChannel();
ReadableByteChannel decryptingChannel =
  streamingAead.newDecryptingChannel(cipherTextSource, associatedData.getBytes());

OutputStream out = new FileOutputStream("plainTextFile");
int cnt;
buffer.clear();
while ((cnt = decryptingChannel.read(buffer)) > 0) {
    // write only the bytes that were actually decrypted
    out.write(buffer.array(), 0, cnt);
    buffer.clear();
}

decryptingChannel.close();
out.close();

6. Hybrid Encryption

In addition to symmetric encryption, Tink implements a couple of primitives for hybrid encryption.

With Hybrid Encryption we can get the efficiency of symmetric keys and the convenience of asymmetric keys.

Simply put, we’ll use a symmetric key to encrypt the plaintext and a public key to encrypt the symmetric key only.

Notice that it provides secrecy only, not authenticity of the sender’s identity.

So, let’s see how to use HybridEncrypt and HybridDecrypt:

TinkConfig.register();

KeysetHandle privateKeysetHandle = KeysetHandle.generateNew(
  HybridKeyTemplates.ECIES_P256_HKDF_HMAC_SHA256_AES128_CTR_HMAC_SHA256);
KeysetHandle publicKeysetHandle = privateKeysetHandle.getPublicKeysetHandle();

String plaintext = "baeldung";
String contextInfo = "Tink";

HybridEncrypt hybridEncrypt = HybridEncryptFactory.getPrimitive(publicKeysetHandle);
HybridDecrypt hybridDecrypt = HybridDecryptFactory.getPrimitive(privateKeysetHandle);

byte[] ciphertext = hybridEncrypt.encrypt(plaintext.getBytes(), contextInfo.getBytes());
byte[] plaintextDecrypted = hybridDecrypt.decrypt(ciphertext, contextInfo.getBytes());

The contextInfo is implicit public data from the context; it can be null or empty, and it’s used as the “associated data” input for the AEAD encryption or as the “CtxInfo” input for HKDF.

The ciphertext allows for checking the integrity of contextInfo but not its secrecy or authenticity.

7. Message Authentication Code

Tink also supports Message Authentication Codes, or MACs.

A MAC is a block of a few bytes that we can use to authenticate a message.

Let’s see how we can create a MAC and then verify its authenticity:

TinkConfig.register();

KeysetHandle keysetHandle = KeysetHandle.generateNew(
  MacKeyTemplates.HMAC_SHA256_128BITTAG);

String data = "baeldung";

Mac mac = MacFactory.getPrimitive(keysetHandle);

byte[] tag = mac.computeMac(data.getBytes());
mac.verifyMac(tag, data.getBytes());

In the event that the data isn’t authentic, the method verifyMac() throws a GeneralSecurityException.
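
For example, here’s a small sketch of handling a tampered message; the tag was computed for one input, so verifying different data fails:

byte[] authenticTag = mac.computeMac("baeldung".getBytes());
try {
    mac.verifyMac(authenticTag, "tampered".getBytes());
} catch (GeneralSecurityException e) {
    // the data doesn't match the tag, so verification fails
}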

8. Digital Signature

As well as encryption APIs, Tink supports digital signatures.

To implement digital signatures, the library uses the PublicKeySign primitive for the signing of data, and PublicKeyVerify for verification:

TinkConfig.register();

KeysetHandle privateKeysetHandle = KeysetHandle.generateNew(SignatureKeyTemplates.ECDSA_P256);
KeysetHandle publicKeysetHandle = privateKeysetHandle.getPublicKeysetHandle();

String data = "baeldung";

PublicKeySign signer = PublicKeySignFactory.getPrimitive(privateKeysetHandle);
PublicKeyVerify verifier = PublicKeyVerifyFactory.getPrimitive(publicKeysetHandle);

byte[] signature = signer.sign(data.getBytes()); 
verifier.verify(signature, data.getBytes());

Similar to the previous encryption method, when the signature is invalid, we’ll get a GeneralSecurityException.

9. Conclusion

In this article, we introduced the Google Tink library using its Java implementation.

We’ve seen how to use it to encrypt and decrypt data, and how to protect its integrity and authenticity. Moreover, we’ve seen how to sign data using the digital signature APIs.

As always, the sample code is available over on GitHub.

Set Operations in Java


1. Introduction

A set is a handy way to represent a unique collection of items.

In this tutorial, we’ll learn more about what that means and how we can use one in Java.

2. A Bit of Set Theory

2.1. What Is a Set?

A set is simply a group of unique things. So, a significant characteristic of any set is that it does not contain duplicates.

We can put anything we like into a set. However, we typically use sets to group together things which have a common trait. For example, we could have a set of vehicles or a set of animals.
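
In Java, a Set enforces this uniqueness for us; for instance, the add method returns false when the element is already present. A quick sketch:

Set<String> vehicles = new HashSet<>();
vehicles.add("car"); // returns true: the set changed
vehicles.add("car"); // returns false: duplicates are ignored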

Let’s use two sets of integers as a simple example:

setA : {1, 2, 3, 4}

setB : {2, 4, 6, 8}

We can show sets as a diagram by simply putting the values into circles:
A Venn Diagram of Two Sets

Diagrams like these are known as Venn diagrams and give us a useful way to show interactions between sets as we’ll see later.

2.2. The Intersection of Sets

The term intersection means the common values of different sets.

We can see that the integers 2 and 4 exist in both sets. So the intersection of setA and setB is 2 and 4 because these are the values which are common to both of our sets.

setA intersection setB = {2, 4}

In order to show the intersection in a diagram, we merge our two sets and highlight the area that is common to both of our sets:
A Venn Diagram of the Intersection

2.3. The Union of Sets

The term union means combining the values of different sets.

So let’s create a new set which is the union of our example sets. We already know that we can’t have duplicate values in a set. However, our sets have some duplicate values (2 and 4). So when we combine the contents of both sets, we need to ensure we remove duplicates. So we end up with 1, 2, 3, 4, 6 and 8.

setA union setB = {1, 2, 3, 4, 6, 8}

Again we can show the union in a diagram. So let’s merge our two sets and highlight the area that represents the union:
A Venn Diagram of Union

2.4. The Relative Complement of Sets

The term relative complement means the values from one set that are not in another. It is also referred to as the set difference.

Now let’s create new sets which are the relative complements of setA and setB.

relative complement of setA in setB = {6, 8}

relative complement of setB in setA = {1, 3}

And now, let’s highlight the area in setA that is not part of setB. This gives us the relative complement of setB in setA:
A Venn Diagram of Relative Complement

2.5. The Subset and Superset

A subset is simply part of a larger set, and the larger set is called a superset. When we have a subset and superset, the union of the two is equal to the superset, and the intersection is equal to the subset.
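
In Java, we can check these relationships with Set’s containsAll method; here’s a short sketch that reuses the setOf helper defined in the next section:

Set<Integer> subset = setOf(2, 4);
assertTrue(setB.containsAll(subset)); // setB is a superset of {2, 4}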

3. Implementing Set Operations with java.util.Set

In order to see how we perform set operations in Java, we’ll take the example sets and implement the intersection, union and relative complement. So let’s start by creating our sample sets of integers:

private Set<Integer> setA = setOf(1,2,3,4);
private Set<Integer> setB = setOf(2,4,6,8);
    
private static Set<Integer> setOf(Integer... values) {
    return new HashSet<Integer>(Arrays.asList(values));
}

3.1. Intersection

First, we’re going to use the retainAll method to create the intersection of our sample sets. Because retainAll modifies the set directly, we’ll make a copy of setA called intersectSet. Then we’ll use the retainAll method to keep the values that are also in setB:

Set<Integer> intersectSet = new HashSet<>(setA);
intersectSet.retainAll(setB);
assertEquals(setOf(2,4), intersectSet);

3.2. Union

Now let’s use the addAll method to create the union of our sample sets. The addAll method adds all the members of the supplied set to the other. Again as addAll updates the set directly, we’ll make a copy of setA called unionSet, and then add setB to it:

Set<Integer> unionSet = new HashSet<>(setA);
unionSet.addAll(setB);
assertEquals(setOf(1,2,3,4,6,8), unionSet);

3.3. Relative Complement

Finally, we’ll use the removeAll method to create the relative complement of setB in setA. We know that we want the values that are in setA that don’t exist in setB. So we just need to removeAll elements from setA that are also in setB:

Set<Integer> differenceSet = new HashSet<>(setA);
differenceSet.removeAll(setB);
assertEquals(setOf(1,3), differenceSet);

4. Implementing Set Operations with Streams

4.1. Intersection

Let’s create the intersection of our sets using Streams.

First, we’ll get the values from setA into a stream. Then we’ll filter the stream to keep all values that are also in setB. And lastly, we’ll collect the results into a new Set:

Set<Integer> intersectSet = setA.stream()
    .filter(setB::contains)
    .collect(Collectors.toSet());
assertEquals(setOf(2,4), intersectSet);

4.2. Union

Now let’s use the static method Stream.concat to add the values of our sets into a single stream.

In order to get the union from the concatenation of our sets, we need to remove any duplicates. We’ll do this by simply collecting the results into a Set:

Set<Integer> unionSet = Stream.concat(setA.stream(), setB.stream())
    .collect(Collectors.toSet());
assertEquals(setOf(1,2,3,4,6,8), unionSet);

4.3. Relative Complement

Finally, we’ll create the relative complement of setB in setA.

As we did with the intersection example we’ll first get the values from setA into a stream. This time we’ll filter the stream to remove any values that are also in setB. Then, we’ll collect the results into a new Set:

Set<Integer> differenceSet = setA.stream()
    .filter(val -> !setB.contains(val))
    .collect(Collectors.toSet());
assertEquals(setOf(1,3), differenceSet);

5. Utility Libraries for Set Operations

Now that we’ve seen how to perform basic set operations with pure Java, let’s use a couple of utility libraries to perform the same operations. One nice thing about using these libraries is that the method names clearly tell us what operation is being performed.

5.1. Dependencies

In order to use the Guava Sets and Apache Commons Collections SetUtils we need to add their dependencies:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>27.1-jre</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.3</version>
</dependency>

5.2. Guava Sets

Let’s use the Guava Sets class to perform intersection and union on our example sets. In order to do this we can simply use the static methods union and intersection of the Sets class:

Set<Integer> intersectSet = Sets.intersection(setA, setB);
assertEquals(setOf(2,4), intersectSet);

Set<Integer> unionSet = Sets.union(setA, setB);
assertEquals(setOf(1,2,3,4,6,8), unionSet);
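
The Sets class also covers the relative complement through its difference method; here’s a quick sketch:

Set<Integer> differenceSet = Sets.difference(setA, setB);
assertEquals(setOf(1,3), differenceSet);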

Take a look at our Guava Sets article to find out more.

5.3. Apache Commons Collections

Now let’s use the intersection and union static methods of the SetUtils class from the Apache Commons Collections:

Set<Integer> intersectSet = SetUtils.intersection(setA, setB);
assertEquals(setOf(2,4), intersectSet);

Set<Integer> unionSet = SetUtils.union(setA, setB);
assertEquals(setOf(1,2,3,4,6,8), unionSet);
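
SetUtils likewise offers a difference method for the relative complement:

Set<Integer> differenceSet = SetUtils.difference(setA, setB);
assertEquals(setOf(1,3), differenceSet);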

Take a look at our Apache Commons Collections SetUtils tutorial to find out more.

6. Conclusion

We’ve seen an overview of how to perform some basic operations on sets, as well as details of how to implement these operations in a number of different ways.

All of the code examples can be found over on GitHub.
