Channel: Baeldung

Java Web Weekly, Issue 108


I usually post about Dev stuff on Twitter - you can follow me there:

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Programming with modularity and Project Jigsaw. A Tutorial Using the Latest Early Access Build [infoq.com]

A solid and quite exhaustive writeup about the new modularity work coming to Java 9.

>> Spring Batch Tutorial: Introduction [petrikainulainen.net]

A quick, solid intro to what Spring Batch is and what it’s super useful for.

>> How does JPA and Hibernate define the AUTO flush mode [vladmihalcea.com]

Very cool and to the point guide to how flushing – and in particular auto-flushing – behaves differently between Hibernate and JPA.

>> Using Exceptions to Write Robust Software for Stable Production [codecentric.de]

A high-level writeup about using exceptions to control execution flow, and about taking a disciplined approach to how logging is done and how these exceptions flow through the system.

>> Use JUnit’s expected exceptions sparingly [jooq.org]

A look at using annotations for flow control. And an entertaining peek into Lukas’ “love” for Java annotations.

>> Redefining java.lang.System With Byte Buddy [tersesystems.com]

A super cool look into JVM level security.

>> Introduction to CompletableFutures [kennethjorgensen.com]

A straightforward introduction to using the new(ish) CompletableFuture in Java 8.

>> Don’t tell me what to make, tell me how to make it [radicaljava.com]

You think Java object instantiation is simple? Think again.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Feature Toggles [martinfowler.com] and >> Categories of toggles [martinfowler.com]

The feature toggle is such a powerful technique when done right. It can save you oh-so much merging time, testing work and just general sanity that it’s not even funny.

This article will be one to follow (because it’s getting published in installments) and come back to.

>> How a Smell in the Tests Points to a Risk in the Design [thecodewhisperer.com]

As always, a solid deep dive into the nuances of testing and maintainable code, and as always – a good read.

Also worth reading:

3. Musings

>> Enough with the IoT Naysaying Already [daedtech.com]

It’s sometimes easier to be negative about new technology that you don’t fully understand. But after being wrong a few times, it’s a good idea to re-evaluate that approach.

On a personal note – I didn’t get Twitter in the early years and it really took a while until I came around, so now I tend to think twice before making any decision on something new (Snapchat?).

>> What To Avoid When Doing Code Reviews [daedtech.com]

Very insightful thoughts on doing code review in a way that is genuinely helpful and helps the developer receiving that feedback grow. Which is very hard to do, but also very worthwhile to strive for.

>> Microservices Use Cases [techblog.bozho.net]

The “microservice craze of 2015” (as it will be referred to by historians) is dissipating as sobering, experience-anchored tales are being published.

Here are some valid use cases for microservices. On a personal note, I do think that there are a good few more valid use cases where it’s worth paying the complexity cost.

But generally speaking, I wholeheartedly agree with the sentiment of – don’t jump into microservices because you think it would be cool, as that rarely works out.

Also worth reading:

 

4. Comics

And my favorite Dilberts of the week:

>> You did a good job on the high notes [dilbert.com]

>> My phone took care of it [dilbert.com]

>> A newly discovered stone age tribe that has never used skype [dilbert.com]

 

5. Pick of the Week

>> What’s in a Story? [dannorth.net]

 




Introduction to Using FreeMarker in Spring MVC


1. Overview

FreeMarker is a Java based template engine from the Apache Software Foundation. Like other template engines, FreeMarker is designed to support HTML web pages in applications following the MVC pattern. This tutorial illustrates how to configure FreeMarker for use in Spring MVC as an alternative to JSP.

The article will not discuss the basics of Spring MVC usage. For an in-depth look at that, please refer to this article. Additionally, this is not intended to be a detailed look at FreeMarker’s extensive capabilities. For more information on FreeMarker usage and syntax, please visit its website.

2. Maven Dependencies

Since this is a Maven-based project, we first add the required dependencies to the pom.xml:

<dependency>
    <groupId>org.freemarker</groupId>
    <artifactId>freemarker</artifactId>
    <version>2.3.23</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context-support</artifactId>
    <version>${spring.version}</version>
</dependency>

3. Configurations

Now let’s dive into the configuration of the project. This is an annotation-based Spring project, so we will not demonstrate the XML-based configuration.

3.1. Spring Web Configuration

Let’s create a class to configure the web components. For that we need to annotate the class with @EnableWebMvc, @Configuration and @ComponentScan.

@EnableWebMvc
@Configuration
@ComponentScan({"com.baeldung.freemarker"})
public class SpringWebConfig extends WebMvcConfigurerAdapter {
    // All web configuration will go here.
}

3.2. Configure ViewResolver

Spring MVC Framework provides the ViewResolver interface, which maps view names to actual views. We will create an instance of FreeMarkerViewResolver, which belongs to the spring-webmvc dependency.

That object needs to be configured with the required values that will be used at run-time. For example, we will configure the view resolver to use FreeMarker for views ending in .ftl:

@Bean 
public FreeMarkerViewResolver freemarkerViewResolver() { 
    FreeMarkerViewResolver resolver = new FreeMarkerViewResolver(); 
    resolver.setCache(true); 
    resolver.setPrefix(""); 
    resolver.setSuffix(".ftl"); 
    return resolver; 
}

Notice how we can also control the caching mode here – caching should only be disabled for debugging and development.

3.3. FreeMarker Template Path Configuration

Next, we will set the template path, which indicates where the templates are located in the web context:

@Bean 
public FreeMarkerConfigurer freemarkerConfig() { 
    FreeMarkerConfigurer freeMarkerConfigurer = new FreeMarkerConfigurer(); 
    freeMarkerConfigurer.setTemplateLoaderPath("/WEB-INF/views/ftl/");
    return freeMarkerConfigurer; 
}

3.4. Spring Controller Configuration

Now we can use a Spring Controller to process a FreeMarker template for display. This is simply a conventional Spring Controller:

@RequestMapping(value = "/cars", method = RequestMethod.GET)
public String init(@ModelAttribute("model") ModelMap model) {
    model.addAttribute("carList", carList);
    return "index";
}

The FreeMarkerViewResolver and path configurations defined previously will take care of translating the view name index to the proper FreeMarker view.
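The controller above references a carList that isn’t defined in the snippet; a minimal sketch of a backing Car bean and sample list (names assumed, with just the make and model properties the template uses) could look like this:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical bean backing the "carList" model attribute.
class Car {
    private final String make;
    private final String model;

    Car(String make, String model) {
        this.make = make;
        this.model = model;
    }

    public String getMake() { return make; }
    public String getModel() { return model; }

    // A sample list the controller could expose as "carList".
    static List<Car> sampleCarList() {
        return Arrays.asList(new Car("Honda", "Civic"), new Car("Ford", "Focus"));
    }
}
```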

4. FreeMarker HTML Template

4.1. Create Simple HTML Template View

It is now time to create an HTML template with FreeMarker. In our example, we added a list of cars to the model. FreeMarker can access that list and display it by iterating over its contents.

When a request is made for the /cars URI, Spring will process the template using the model that it is provided. In our template, the #list directive indicates that FreeMarker should loop over the carList object from the model, using car to refer to the current element, and render the content within that block.

The following code also includes FreeMarker expressions to refer to the attributes of each element in carList; for example, to display the current car element’s make property, we use the expression ${car.make}.

<div id="header">
  <h2>FreeMarker Spring MVC Hello World</h2>
</div>
<div id="content">
  <fieldset>
    <legend>Add Car</legend>
    <form name="car" action="add" method="post">
      Make : <input type="text" name="make" /><br/>
      Model: <input type="text" name="model" /><br/>
      <input type="submit" value="Save" />
    </form>
  </fieldset>
  <br/>
  <table class="datatable">
    <tr>
      <th>Make</th>
      <th>Model</th>
    </tr>
    <#list model["carList"] as car>
      <tr>
        <td>${car.make}</td>
        <td>${car.model}</td>
      </tr>
    </#list>
  </table>
</div>

After styling the output with CSS, the processed FreeMarker template generates a form and list of cars:


5. Conclusion 

In this article, we discussed how to integrate FreeMarker in a Spring MVC application. FreeMarker’s capabilities go far beyond what we demonstrated, so please visit the Apache FreeMarker website for more detailed information on its use.

The sample code in this article is available in a project on Github.

A Guide to CSRF Protection in Spring Security



1. Overview

In this tutorial, we will discuss Cross-Site Request Forgery (CSRF) attacks and how to prevent them using Spring Security.

2. Two Simple CSRF Attacks

There are multiple forms of CSRF attacks – let’s discuss some of the most common ones.

2.1. GET Examples

Let’s consider the following GET request used by a logged-in user to transfer money to a specific bank account, “1234”:

GET http://bank.com/transfer?accountNo=1234&amount=100

If the attacker wants to transfer money from the victim’s account to his own account instead – “5678” – he needs to make the victim trigger the request:

GET http://bank.com/transfer?accountNo=5678&amount=1000

There are multiple ways to make that happen:

  • Link: The attacker can convince the victim to click on this link, for example, to execute the transfer:
<a href="http://bank.com/transfer?accountNo=5678&amount=1000">
Show Kittens Pictures
</a>
  • Image: The attacker may use an <img/> tag with the target URL as the image source – so the click isn’t even necessary. The request will be automatically executed when the page loads:
<img src="http://bank.com/transfer?accountNo=5678&amount=1000"/>

2.2. POST Example

If the main request needs to be a POST request – for example:

POST http://bank.com/transfer
accountNo=1234&amount=100

Then the attacker needs to have the victim submit a similar request:

POST http://bank.com/transfer
accountNo=5678&amount=1000

Neither the <a> nor the <img/> tag will work in this case. The attacker will need a <form> – as follows:

<form action="http://bank.com/transfer" method="POST">
    <input type="hidden" name="accountNo" value="5678"/>
    <input type="hidden" name="amount" value="1000"/>
    <input type="submit" value="Show Kittens Pictures"/>
</form>

However, the form can be submitted automatically using JavaScript – as follows:

<body onload="document.forms[0].submit()">
<form>
...

2.3. Practical Simulation

Now that we understand what a CSRF attack looks like, let’s simulate these examples within a Spring app.

We’re going to start with a simple controller implementation – the BankController:

@Controller
public class BankController {
    private Logger logger = LoggerFactory.getLogger(getClass());

    @RequestMapping(value = "/transfer", method = RequestMethod.GET)
    @ResponseBody
    public String transfer(@RequestParam("accountNo") int accountNo, 
      @RequestParam("amount") final int amount) {
        logger.info("Transfer to {}", accountNo);
        ...
    }

    @RequestMapping(value = "/transfer", method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.OK)
    public void transfer2(@RequestParam("accountNo") int accountNo, 
      @RequestParam("amount") final int amount) {
        logger.info("Transfer to {}", accountNo);
        ...
    }
}

And let’s also have a basic HTML page that triggers the bank transfer operation:

<html>
<body>
    <h1>CSRF test on Origin</h1>
    <a href="transfer?accountNo=1234&amount=100">Transfer Money to John</a>
	
    <form action="transfer" method="POST">
        <label>Account Number</label> 
        <input name="accountNo" type="number"/>

        <label>Amount</label>         
        <input name="amount" type="number"/>

        <input type="submit">
    </form>
</body>
</html>

This is the page of the main application, running on the origin domain.

Note that we’ve simulated both a GET through a simple link as well as a POST through a simple <form>.

Now – let’s see what the attacker page looks like:

<html>
<body>
    <a href="http://localhost:8080/transfer?accountNo=5678&amount=1000">Show Kittens Pictures</a>
    
    <img src="http://localhost:8080/transfer?accountNo=5678&amount=1000"/>
	
    <form action="http://localhost:8080/transfer" method="POST">
        <input name="accountNo" type="hidden" value="5678"/>
        <input name="amount" type="hidden" value="1000"/>
        <input type="submit" value="Show Kittens Picture">
    </form>
</body>
</html>

This page will run on a different domain – the attacker domain.

Finally, let’s run the two applications – the original and the attacker application – locally, and let’s access the original page first:

http://localhost:8081/spring-security-rest-full/csrfHome.html

Then, let’s access the attacker page:

http://localhost:8081/spring-security-rest/api/csrfAttacker.html

Tracking the exact requests that originate from the attacker page, we’ll be able to immediately spot the problematic request hitting the original application, fully authenticated.

3. Spring Security Configuration

In order to use the Spring Security CSRF protection, we’ll first need to make sure we use the proper HTTP methods for anything that modifies state (PATCH, POST, PUT, and DELETE – not GET).

3.1. Java Configuration

CSRF protection is enabled by default in the Java configuration. We can still disable it if we need to:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
      .csrf().disable();
}

3.2. XML Configuration

In the older XML config (pre Spring Security 4), CSRF protection was disabled by default and we could enable it as follows:

<http>
    ...
    <csrf />
</http>

Starting from Spring Security 4.x – the CSRF protection is enabled by default in the XML configuration as well; we can of course still disable it if we need to:

<http>
    ...
    <csrf disabled="true"/>
</http>

3.3. Extra Form Parameters

Finally, with CSRF protection enabled on the server side, we’ll need to include the CSRF token in our requests on the client side as well:

<input type="hidden" name="${_csrf.parameterName}" value="${_csrf.token}"/>

3.4. Using JSON

We can’t submit the CSRF token as a parameter if we’re using JSON; instead we can submit the token within the header.

We’ll first need to include the token in our page – and for that we can use meta tags:

<meta name="_csrf" content="${_csrf.token}"/>
<meta name="_csrf_header" content="${_csrf.headerName}"/>

Then we’ll construct the header:

var token = $("meta[name='_csrf']").attr("content");
var header = $("meta[name='_csrf_header']").attr("content");

$(document).ajaxSend(function(e, xhr, options) {
    xhr.setRequestHeader(header, token);
});

4. CSRF Disabled Test

With all of that in place, let’s move on to some testing.

Let’s first try to submit a simple POST request when CSRF is disabled:

@ContextConfiguration(classes = { SecurityWithoutCsrfConfig.class, ...})
public class CsrfDisabledIntegrationTest extends CsrfAbstractIntegrationTest {

    @Test
    public void givenNotAuth_whenAddFoo_thenUnauthorized() throws Exception {
        mvc.perform(
          post("/foos").contentType(MediaType.APPLICATION_JSON)
                       .content(createFoo())
          ).andExpect(status().isUnauthorized());
    }

    @Test 
    public void givenAuth_whenAddFoo_thenCreated() throws Exception {
        mvc.perform(
          post("/foos").contentType(MediaType.APPLICATION_JSON)
                       .content(createFoo())
                       .with(testUser())
        ).andExpect(status().isCreated()); 
    } 
}

As you might have noticed, we’re using a base class to hold the common testing helper logic – the CsrfAbstractIntegrationTest:

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
public class CsrfAbstractIntegrationTest {
    @Autowired
    private WebApplicationContext context;

    @Autowired
    private Filter springSecurityFilterChain;

    protected MockMvc mvc;

    @Before
    public void setup() {
        mvc = MockMvcBuilders.webAppContextSetup(context)
                             .addFilters(springSecurityFilterChain)
                             .build();
    }

    protected RequestPostProcessor testUser() {
        return user("user").password("userPass").roles("USER");
    }

    protected String createFoo() throws JsonProcessingException {
        return new ObjectMapper().writeValueAsString(new Foo(randomAlphabetic(6)));
    }
}

Note that, when the user had the right security credentials, the request was successfully executed – no extra information was required.

That means that the attacker can simply use any of the previously discussed attack vectors to easily compromise the system.

5. CSRF Enabled Test

Now, let’s enable the CSRF protection and see the difference:

@ContextConfiguration(classes = { SecurityWithCsrfConfig.class, ...})
public class CsrfEnabledIntegrationTest extends CsrfAbstractIntegrationTest {

    @Test
    public void givenNoCsrf_whenAddFoo_thenForbidden() throws Exception {
        mvc.perform(
          post("/foos").contentType(MediaType.APPLICATION_JSON)
                       .content(createFoo())
                       .with(testUser())
          ).andExpect(status().isForbidden());
    }

    @Test
    public void givenCsrf_whenAddFoo_thenCreated() throws Exception {
        mvc.perform(
          post("/foos").contentType(MediaType.APPLICATION_JSON)
                       .content(createFoo())
                       .with(testUser()).with(csrf())
         ).andExpect(status().isCreated());
    }
}

Note how this test is using a different security configuration – one that has the CSRF protection enabled.

Now, the POST request will simply fail if the CSRF token isn’t included, which of course means that the earlier attacks are no longer an option.

Finally, notice the csrf() method in the test; this creates a RequestPostProcessor that will automatically populate a valid CSRF token in the request for testing purposes.

6. Conclusion

In this article, we discussed a couple of CSRF attacks and how to prevent them using Spring Security.

The full implementation of this tutorial can be found in the github project – this is an Eclipse based project, so it should be easy to import and run as it is.



Working with Tree Model Nodes in Jackson



1. Overview

This tutorial will focus on working with tree model nodes in Jackson.

We’ll use JsonNode for various conversions as well as adding, modifying and removing nodes.

2. Creating a Node

The first step in the creation of a node is to instantiate an ObjectMapper object by using the default constructor:

ObjectMapper mapper = new ObjectMapper();

Since the creation of an ObjectMapper object is expensive, it’s recommended to reuse the same one for multiple operations.

Next, we have three different ways to create a tree node once we have our ObjectMapper.

2.1. Construct a node from scratch

The most common way to create a node out of nothing is as follows:

JsonNode node = mapper.createObjectNode();

In addition, since JsonNode itself is abstract, we can instantiate one of its concrete subtypes, such as ObjectNode, directly:

JsonNode node = new ObjectNode(JsonNodeFactory.instance);

Alternatively, we can also create a node via the JsonNodeFactory:

JsonNode node = JsonNodeFactory.instance.objectNode();

2.2. Parse from a JSON source

This method is well covered in the Jackson – Marshall String to JsonNode article. Please refer to it if you need more info.
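As a quick reminder, parsing boils down to a single readTree call; a minimal sketch, using an inline JSON String:

```java
import java.io.IOException;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

class ParseExample {
    // readTree also accepts an InputStream, Reader, File, byte[] ...
    static JsonNode parse() throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readTree("{\"k1\":\"v1\",\"k2\":\"v2\"}");
    }
}
```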

2.3. Convert from an object

A node may be converted from a Java object by calling the valueToTree(Object fromValue) method on the ObjectMapper:

JsonNode node = mapper.valueToTree(fromValue);

The convertValue API is also helpful here:

JsonNode node = mapper.convertValue(fromValue, JsonNode.class);

Let’s see how it works in practice. Assume we have a class named NodeBean:

public class NodeBean {
    private int id;
    private String name;

    public NodeBean() {
    }

    public NodeBean(int id, String name) {
        this.id = id;
        this.name = name;
    }

    // standard getters and setters
}

Let’s write a test that makes sure that the conversion happens correctly:

@Test
public void givenAnObject_whenConvertingIntoNode_thenCorrect() {
    NodeBean fromValue = new NodeBean(2016, "baeldung.com");

    JsonNode node = mapper.valueToTree(fromValue);

    assertEquals(2016, node.get("id").intValue());
    assertEquals("baeldung.com", node.get("name").textValue());
}

3. Transforming a Node

3.1. Write out as JSON

The basic method to transform a tree node into a JSON string is the following:

mapper.writeValue(destination, node);

where the destination can be a File, an OutputStream or a Writer.
By reusing the NodeBean class declared in section 2.3, a test makes sure this method works as expected:

final String pathToTestFile = "node_to_json_test.json";

@Test
public void givenANode_whenWritingOutAsAJson_thenCorrect() throws IOException {
    NodeBean fromValue = new NodeBean(2016, "baeldung.com");
    JsonNode node = mapper.valueToTree(fromValue);

    mapper.writeValue(new File(pathToTestFile), node);
    JsonNode rereadNode = mapper.readTree(new File(pathToTestFile));

    assertEquals(node, rereadNode);
}

3.2. Convert to an object

The most convenient way to convert a JsonNode into a Java object is the treeToValue API:

NodeBean toValue = mapper.treeToValue(node, NodeBean.class);

Which is functionally equivalent to:

NodeBean toValue = mapper.convertValue(node, NodeBean.class);

We can also do that through a token stream:

JsonParser parser = mapper.treeAsTokens(node);
NodeBean toValue = mapper.readValue(parser, NodeBean.class);

Finally, let’s implement a test that verifies the conversion process:

@Test
public void givenANode_whenConvertingIntoAnObject_thenCorrect()
  throws JsonProcessingException {
    JsonNode node = mapper.createObjectNode();
    ((ObjectNode) node).put("id", 2016);
    ((ObjectNode) node).put("name", "baeldung.com");

    NodeBean toValue = mapper.treeToValue(node, NodeBean.class);

    assertEquals(2016, toValue.getId());
    assertEquals("baeldung.com", toValue.getName());
}

4. Manipulating Tree Nodes

The following JSON elements, contained in a file named example.json, serve as the base structure for the operations discussed in this section:

{
    "name": 
        {
            "first": "Tatu",
            "last": "Saloranta"
        },

    "title": "Jackson founder",
    "company": "FasterXML"
}

This JSON file, located on the classpath, is parsed into a model tree:

public class ExampleStructure {
    private static ObjectMapper mapper = new ObjectMapper();

    static JsonNode getExampleRoot() throws IOException {
        InputStream exampleInput = 
          ExampleStructure.class.getClassLoader()
          .getResourceAsStream("example.json");
        
        JsonNode rootNode = mapper.readTree(exampleInput);
        return rootNode;
    }
}

Note that the root of the tree will be used when illustrating operations on nodes in the following sub-sections.

4.1. Locating a Node

Before working on any node, the first thing we need to do is to locate and assign it to a variable.

If the path to the node is known beforehand, that’s pretty easy to do. For example, say we want a node named last, which is under the name node:

JsonNode locatedNode = rootNode.path("name").path("last");

Alternatively, the get or with APIs can also be used instead of path.

If the path isn’t known, the search will of course become more complex and iterative.
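For instance, when only the field name is known, the findValue API performs a depth-first search of the whole subtree and returns the first match; a small sketch against the example structure:

```java
import java.io.IOException;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

class FindNodeExample {
    static String findLastName() throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode rootNode = mapper.readTree(
          "{\"name\":{\"first\":\"Tatu\",\"last\":\"Saloranta\"}}");

        // findValue searches the entire subtree, not just direct children
        JsonNode locatedNode = rootNode.findValue("last");
        return locatedNode.textValue();
    }
}
```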

4.2. Adding a New Node

A node can be added as a child of another node as follows:

ObjectNode newNode = ((ObjectNode) locatedNode).put(fieldName, value);

Many overloaded variants of put may be used to add new nodes of different value types.

Many other similar methods are also available, including putArray, putObject, putPOJO, putRawValue and putNull.

Finally – let’s have a look at an example – where we add an entire structure to the root node of the tree:

"address":
{
    "city": "Seattle",
    "state": "Washington",
    "country": "United States"
}

Here’s the full test going through all of these operations and verifying the results:

@Test
public void givenANode_whenAddingIntoATree_thenCorrect() throws IOException {
    JsonNode rootNode = ExampleStructure.getExampleRoot();
    ObjectNode addedNode = ((ObjectNode) rootNode).putObject("address");
    addedNode
      .put("city", "Seattle")
      .put("state", "Washington")
      .put("country", "United States");

    assertFalse(rootNode.path("address").isMissingNode());
    
    assertEquals("Seattle", rootNode.path("address").path("city").textValue());
    assertEquals("Washington", rootNode.path("address").path("state").textValue());
    assertEquals("United States", rootNode.path("address").path("country").textValue());
}

4.3. Editing a Node

An ObjectNode instance may be modified by invoking the set(String fieldName, JsonNode value) method:

((ObjectNode) locatedNode).set(fieldName, value);

Similar results can be achieved by using the replace or setAll methods on objects of the same type.

To verify that the method works as expected, we’ll change the value of the name field under the root node from an object containing first and last into one consisting of a single nick field:

@Test
public void givenANode_whenModifyingIt_thenCorrect() throws IOException {
    String newString = "{\"nick\": \"cowtowncoder\"}";
    JsonNode newNode = mapper.readTree(newString);

    JsonNode rootNode = ExampleStructure.getExampleRoot();
    ((ObjectNode) rootNode).set("name", newNode);

    assertFalse(rootNode.path("name").path("nick").isMissingNode());
    assertEquals("cowtowncoder", rootNode.path("name").path("nick").textValue());
}

4.4. Removing a Node

A node can be removed by calling the remove(String fieldName) API on its parent node:

JsonNode removedNode = ((ObjectNode) locatedNode).remove(fieldName);

In order to remove multiple nodes at once, we can invoke an overloaded method with the parameter of Collection<String> type, which returns the parent node instead of the one to be removed:

ObjectNode parentNode = ((ObjectNode) locatedNode).remove(fieldNames);

In the extreme case, when we want to delete all subnodes of a given node, the removeAll API comes in handy.
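A minimal sketch of removeAll – clearing every child field of an ObjectNode in one call:

```java
import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

class RemoveAllExample {
    static int clearAllFields() throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode node = (ObjectNode) mapper.readTree(
          "{\"title\":\"Jackson founder\",\"company\":\"FasterXML\"}");

        node.removeAll(); // removes every child field
        return node.size();
    }
}
```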

The following test will focus on the first method mentioned above – which is the most common scenario:

@Test
public void givenANode_whenRemovingFromATree_thenCorrect() throws IOException {
    JsonNode rootNode = ExampleStructure.getExampleRoot();
    ((ObjectNode) rootNode).remove("company");

    assertTrue(rootNode.path("company").isMissingNode());
}

5. Conclusion

This tutorial covered the common APIs and scenarios of working with a tree model in Jackson.

The implementation of all these examples and code snippets can be found in my github project – this is an Eclipse based project, so it should be easy to import and run as it is.



The Double Colon Operator in Java 8



1. Overview

In this quick article we’ll discuss the double colon operator ( :: ) in Java 8 and go over the scenarios where the operator can be used.

2. From Lambdas to Double Colon Operator

With lambda expressions we’ve seen that code can become very concise.

For example, to create a comparator, the following syntax is enough:

Comparator<Computer> c = (Computer c1, Computer c2) -> c1.getAge().compareTo(c2.getAge());

Then, with type inference:

Comparator<Computer> c = (c1, c2) -> c1.getAge().compareTo(c2.getAge());

But can we make the code above even more expressive and readable? Let’s have a look:

Comparator<Computer> c = Comparator.comparing(Computer::getAge);

We’ve used the :: operator as shorthand for lambdas calling a specific method – by name. And the end result is of course even more readable syntax.

3. How Does It Work?

Very simply put, when we are using a method reference – the target reference is placed before the delimiter :: and the name of the method is provided after it.

For example:

Computer::getAge;

We’re looking at a method reference to the method getAge defined in the Computer class.

We can then operate with that function:

Function<Computer, Integer> getAge = Computer::getAge;
Integer computerAge = getAge.apply(c1);

Notice that we’re referencing the function – and then applying it to the right kind of argument.

4. Method References

We can make good use of this operator in quite a number of scenarios.

4.1. A Static Method

First we’re going to make use of a static utility method:

List<Computer> inventory = Arrays.asList(
  new Computer(2015, "white", 35), new Computer(2009, "black", 65));
inventory.forEach(ComputerUtils::repair);

4.2. An Instance Method of an Existing Object

Next, let’s have a look at an interesting scenario – referencing a method of an existing object instance.

We’re going to use the variable System.out – an object of type PrintStream which supports the print method:

Computer c1 = new Computer(2015, "white");
Computer c2 = new Computer(2009, "black");
Computer c3 = new Computer(2014, "black");
Arrays.asList(c1, c2, c3).forEach(System.out::print);

4.3. An Instance Method of an Arbitrary Object of a Particular Type

Computer c1 = new Computer(2015, "white", 100);
Computer c2 = new MacbookPro(2009, "black", 100);
List<Computer> inventory = Arrays.asList(c1, c2);
inventory.forEach(Computer::turnOnPc);

As you can see, we’re referencing the turnOnPc method not on a specific instance, but on the type itself.

In the forEach call, the instance method turnOnPc will be called for every object in inventory.

And this naturally means that – for c1 the method turnOnPc will be called on the Computer instance and for c2 on the MacbookPro instance.

4.4. A Super Method of a Particular Object

Suppose you have the following method in the Computer superclass:

public Double calculateValue(Double initialValue) {
    return initialValue/1.50;
}

and this one in MacbookPro subclass:

@Override
public Double calculateValue(Double initialValue){
    Function<Double, Double> function = super::calculateValue;
    Double pcValue = function.apply(initialValue);
    return pcValue + (initialValue/10) ;
}

A call to the calculateValue method on a MacbookPro instance:

macbookPro.calculateValue(999.99);

will also produce a call to calculateValue on the Computer superclass.
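The combined arithmetic can be checked in isolation – a sketch of the two formulas above as plain static methods (not the actual classes):

```java
class ValueCalculation {
    // Computer.calculateValue
    static double superValue(double initialValue) {
        return initialValue / 1.50;
    }

    // MacbookPro.calculateValue: delegates to the super formula, then adds a tenth
    static double subValue(double initialValue) {
        return superValue(initialValue) + (initialValue / 10);
    }
}
```

For 999.99 this yields 999.99 / 1.50 + 999.99 / 10, i.e. roughly 766.66.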

5. Constructor References

5.1. Create a New Instance

Referencing a constructor to instantiate an object can be quite simple:

@FunctionalInterface
public interface InterfaceComputer {
    Computer create();
}

InterfaceComputer c = Computer::new;
Computer computer = c.create();

What if the constructor has two parameters?

BiFunction<Integer, String, Computer> c4Function = Computer::new; 
Computer c4 = c4Function.apply(2013, "white");

If there are three or more parameters, you have to define a new functional interface:

@FunctionalInterface 
interface TriFunction<A, B, C, R> { 
    R apply(A a, B b, C c); 
    default <V> TriFunction<A, B, C, V> andThen( Function<? super R, ? extends V> after) { 
        Objects.requireNonNull(after); 
        return (A a, B b, C c) -> after.apply(apply(a, b, c)); 
    } 
}

Then, initialize your object:

TriFunction<Integer, String, Integer, Computer> c6Function = Computer::new;
Computer c3 = c6Function.apply(2008, "black", 90);

5.2. Create an Array

Finally, let’s see how to create an array of Computer objects with five elements:

Function<Integer, Computer[]> computerCreator = Computer[]::new;
Computer[] computerArray = computerCreator.apply(5);
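As a quick sanity check, here’s a self-contained sketch (with a bare-bones stand-in Computer class) showing that the constructor reference only allocates the array – the elements themselves start out null:

```java
import java.util.function.Function;

public class ArrayRefDemo {

    // Bare-bones stand-in for the article's Computer class
    static class Computer { }

    public static void main(String[] args) {
        Function<Integer, Computer[]> computerCreator = Computer[]::new;
        Computer[] computerArray = computerCreator.apply(5);

        System.out.println(computerArray.length); // 5
        System.out.println(computerArray[0]);     // null - no Computer instances created
    }
}
```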

6. Conclusion

As we’re starting to see, the double colon operator – introduced in Java 8 – will be very useful in some scenarios, and especially in conjunction with Streams.

It’s also quite important to have a look at functional interfaces for a better understanding of what happens behind the scenes.

The complete source code for the example is available in this github project – this is a Maven and Eclipse project, so it can be imported and used as-is.


Java Web Weekly, Issue 109

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Implied Readability [codefx.org]

A module granting visibility to another module – that’s something cool I wasn’t aware Jigsaw was able to do.

Definitely a step beyond Maven.

>> Dockerizing Spring Boot Application [tomaszdziurko.pl]

Using Docker to deploy a Spring Boot app is like hitting two buzzwords with one stone. It’s also quite useful, just to be clear.

>> Introducing Spring Cloud Task [spring.io]

New Spring project that looks potentially quite useful.

>> How we accidentally doubled our JDBC traffic with Hibernate [plumbr.eu]

A fun read about a Hibernate problem and the solution.

>> Exploring CQRS with Axon Framework: Overview of the Testing infrastructure [geekabyte.com]

Another installment in a series I’m following along with, about CQRS with the Axon framework.

This one is all about testing.

>> Oracle to Deprecate Java Browser Plugin in 2017 [infoq.com]

Good.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> The Basics of Web Application Security [martinfowler.com]

An ambitious article diving deep into what it means to secure a system on the web.

Keep a close eye on this one (use RSS) – it’s an evolving publication that’s going to be a fantastic read when it’s out and done.

>> Writing Unit Tests With Spock Framework: Introduction to Specifications, Part Two [petrikainulainen.net]

Going deeper into testing with Spock in this second installment. Definitely have a read if you feel the trusty JUnit isn’t cutting it any more.

Also worth reading:

3. Musings

>> Why I Strive to be a 0.1x Engineer [benjiweber.co.uk]

Adding value by identifying when not to build something can have huge impact. I think this writeup is on point.

>> The Architect Title Over-Specialization [daedtech.com]

The accepted narrative of the “Architect” is definitely missing the mark.

And it’s by working with people who don’t conform to that narrative – and by striving to be one of those people ourselves for someone else – that we’ll get better results in our industry.

>> Startup Interviewing is Fucked [zachholman.com]

It’s not just “startup” interviews.

>> A eulogy for my 20s [steveklabnik.com]

A more personal post here by someone whose work I follow and admire. Maybe give it a read if you’re turning 30 yourself or just have.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Any difference between trust and stupidity? [dilbert.com]

>> Ninja economics [dilbert.com]

>> Jenny is a food werewolf [dilbert.com]

 

5. Pick of the Week

Thorben ( from thoughts-on-java.org ) has put together a video mini-course about fixing the N+1 select problem with Hibernate.

The material is quite well produced – so if you’re doing any JPA/Hibernate work, definitely give this one a go:

>> Free Mini Course: How to find and fix n+1 select issues with Hibernate [thoughts-on-java.org]

Also note that the early-bird pricing on his in-depth course/training on Hibernate performance tuning is about to expire in a few days.

We’re finally starting to see some high quality material in the Java ecosystem, which is about damn time.

Anyways, if you’re struggling with Hibernate performance, definitely pick that up in the next few days, before the price goes up:

>> Hibernate Performance Tuning

 


Spring REST with a Zuul Proxy

I just announced the Master Class of my "REST With Spring" Course:

>> THE "REST WITH SPRING" CLASSES

1. Overview

In this article we’ll explore the communication between a front-end application and a REST API that are deployed separately.

The goal is to work around CORS and the Same Origin Policy restriction of the browser and allow the UI to call the API even though they don’t share the same origin.

We’ll basically create two separate applications – a UI application and a simple REST API – and we’ll use the Zuul proxy in the UI application to proxy calls to the REST API.

Zuul is a JVM-based router and server-side load balancer by Netflix. And Spring Cloud has a nice integration with an embedded Zuul proxy – which is what we’ll use here.

2. Maven Configuration

First, we need to add a dependency on the Zuul support from Spring Cloud to our UI application’s pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zuul</artifactId>
    <version>1.0.4.RELEASE</version>
</dependency>

3. Zuul Properties

Next – we need to configure Zuul, and since we’re using Spring Boot, we’re going to do that in the application.yml:

zuul:
  routes:
    foos:
      path: /foos/**
      url: http://localhost:8081/spring-zuul-foos-resource/foos

Note that:

  • We are proxying to our resource server Foos.
  • All requests from the UI that start with “/foos/” will be routed to our Foos Resource server at http://localhost:8081/spring-zuul-foos-resource/foos/

4. The API

Our API application is a simple Spring Boot app.

Within this article, we’re going to consider the API deployed in a server running on port 8081.

Let’s first define the basic DTO for the Resource we’re going to be using:

public class Foo {
    private long id;
    private String name;

    // standard getters and setters
}

And a simple controller:

@Controller
public class FooController {

    @RequestMapping(method = RequestMethod.GET, value = "/foos/{id}")
    @ResponseBody
    public Foo findById(
      @PathVariable long id, HttpServletRequest req, HttpServletResponse res) {
        return new Foo(Long.parseLong(randomNumeric(2)), randomAlphabetic(4));
    }
}

5. The UI Application

Our UI application is also a simple Spring Boot application.

Within this article, we’re going to consider the UI application deployed on a server running on port 8080.

Let’s start with the main index.html – using a bit of AngularJS:

<html>
<body ng-app="myApp" ng-controller="mainCtrl">
<script src="angular.min.js"></script>
<script src="angular-resource.min.js"></script>

<script>
var app = angular.module('myApp', ["ngResource"]);

app.controller('mainCtrl', function($scope,$resource,$http) {
    $scope.foo = {id:0 , name:"sample foo"};
    $scope.foos = $resource("/foos/:fooId",{fooId:'@id'});
    
    $scope.getFoo = function(){
        $scope.foo = $scope.foos.get({fooId:$scope.foo.id});
    }  
});
</script>

<div>
    <h1>Foo Details</h1>
    <span>{{foo.id}}</span>
    <span>{{foo.name}}</span>
    <a href="#" ng-click="getFoo()">New Foo</a>
</div>
</body>
</html>

The most important aspect here is how we’re accessing the API using relative URLs!

Keep in mind that the API application isn’t deployed in the same server as the UI application, so relative URLs shouldn’t work, and won’t work without the proxy.

With the proxy however, we’re accessing the Foo resources through the Zuul proxy, which is of course configured to route these requests to wherever the API is actually deployed.

And finally, the actual Boot-enabled application:

@EnableZuulProxy
@SpringBootApplication
public class UiApplication {

    public static void main(String[] args) {
        SpringApplication.run(UiApplication.class, args);
    }
}

Beyond the simple Boot annotation, notice that we’re using the enable-style of annotation for the Zuul proxy as well, which is pretty cool, clean and concise.

6. Test The Routing

Now – let’s test our UI application – as follows:

@Test
public void whenSendRequestToFooResource_thenOK() {
    Response response = RestAssured.get("http://localhost:8080/foos/1");
    assertEquals(200, response.getStatusCode());
}

7. A Custom Zuul Filter

There are multiple Zuul filters available, and we can also create our own custom one:

@Component
public class CustomZuulFilter extends ZuulFilter {

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        ctx.addZuulRequestHeader("Test", "TestSample");
        return null;
    }

    @Override
    public boolean shouldFilter() {
       return true;
    }
    ....
}

This simple filter just adds a header called “Test” to the request – but of course we can get as complex as we need to here when augmenting our requests.

8. Test Custom Zuul Filter

Finally, let’s make sure our custom filter is working – first we’ll modify the FooController on the Foos resource server:

@Controller
public class FooController {

    @RequestMapping(method = RequestMethod.GET, value = "/foos/{id}")
    @ResponseBody
    public Foo findById(
      @PathVariable long id, HttpServletRequest req, HttpServletResponse res) {
        if (req.getHeader("Test") != null) {
            res.addHeader("Test", req.getHeader("Test"));
        }
        return new Foo(Long.parseLong(randomNumeric(2)), randomAlphabetic(4));
    }
}

Now – let’s test it out:

@Test
public void whenSendRequest_thenHeaderAdded() {
    Response response = RestAssured.get("http://localhost:8080/foos/1");
    assertEquals(200, response.getStatusCode());
    assertEquals("TestSample", response.getHeader("Test"));
}

9. Conclusion

In this writeup we focused on using Zuul to route requests from a UI application to a REST API. We successfully worked around CORS and the same-origin policy and we also managed to customize and augment the HTTP request in transit.

The full implementation of this tutorial can be found in the github project – this is an Eclipse based project, so it should be easy to import and run as it is.

The Master Class of my "REST With Spring" Course is finally out:

>> CHECK OUT THE CLASSES

A Guide to the ViewResolver in Spring MVC

I just announced the release of my "REST With Spring" Classes:

>> THE "REST WITH SPRING" CLASSES

1. Overview

All MVC frameworks provide a way to work with views.

Spring does that via the view resolvers, which enable you to render models in the browser without tying the implementation to a specific view technology.

The ViewResolver maps view names to actual views.

And the Spring framework comes with quite a few view resolvers e.g. InternalResourceViewResolver, XmlViewResolver, ResourceBundleViewResolver and a few others.

This is a simple tutorial showing how to set up the most common view resolvers and how to use multiple ViewResolvers in the same configuration.

2. The Spring Web Configuration

Let’s start with the web configuration; we’ll annotate it with @EnableWebMvc, @Configuration and @ComponentScan:

@EnableWebMvc
@Configuration
@ComponentScan("org.baeldung.web")
public class WebConfig extends WebMvcConfigurerAdapter {
    // All web configuration will go here
}

It’s here that we’ll set up our view resolver in the configuration.

3. Add an InternalResourceViewResolver

This ViewResolver allows us to set properties such as prefix or suffix to the view name to generate the final view page URL:

@Bean
public ViewResolver internalResourceViewResolver() {
    InternalResourceViewResolver bean = new InternalResourceViewResolver();
    bean.setViewClass(JstlView.class);
    bean.setPrefix("/WEB-INF/view/");
    bean.setSuffix(".jsp");
    return bean;
}

To keep the example simple, we don’t need a controller to process the request.

We only need a simple jsp page, placed in the /WEB-INF/view folder as defined in the configuration:

<html>
    <head></head>
    <body>
        <h1>This is the body of the sample view</h1>
    </body>
</html>

4. Add a ResourceBundleViewResolver

As the name suggests, a ResourceBundleViewResolver uses bean definitions in a ResourceBundle.

First, we add the ResourceBundleViewResolver to the previous configuration:

@Bean
public ViewResolver resourceBundleViewResolver() {
    ResourceBundleViewResolver bean = new ResourceBundleViewResolver();
    bean.setBasename("views");
    return bean;
}

The bundle is typically defined in a properties file, located in the classpath. Below is the views.properties file:

sample.(class)=org.springframework.web.servlet.view.JstlView
sample.url=/WEB-INF/view/sample.jsp

We can use the simple jsp page defined in the above example for this configuration as well.

5. Add an XmlViewResolver

This implementation of ViewResolver accepts a configuration file written in XML with the same DTD as Spring’s XML bean factories:

@Bean
public ViewResolver xmlViewResolver() {
    XmlViewResolver bean = new XmlViewResolver();
    bean.setLocation(new ClassPathResource("views.xml"));
    return bean;
}

Below is the configuration file, views.xml:

<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
  http://www.springframework.org/schema/beans/spring-beans-4.2.xsd">
 
    <bean id="xmlConfig" class="org.springframework.web.servlet.view.JstlView">
        <property name="url" value="/WEB-INF/view/xmlSample.jsp" />
    </bean>
 
</beans>

As for the previous examples we can use our simple jsp page defined previously.

6. Chaining ViewResolvers and Define an Order Priority

Spring MVC also supports multiple view resolvers.

This allows us to override specific views in some circumstances. We can chain view resolvers simply by adding more than one resolver to the configuration.

Once we’ve done that, we’ll need to define an order for these resolvers. The order property defines the order of invocations in the chain: the higher the order value, the later the view resolver is positioned in the chain.

To define the order we can add the follow line of code to the configuration of the our view resolvers:

bean.setOrder(0);

Be careful with the order priority here: the InternalResourceViewResolver should have the highest order, because it’s intended to represent a very explicit mapping and it resolves every view name it’s given. If other resolvers are positioned after it in the chain, they might never be invoked.
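To make the ordering rule concrete, here’s a plain-Java sketch – not Spring code, just a toy model of the chain’s contract, assuming “the first resolver (in order) to return a non-null view wins”:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ResolverChainDemo {

    // Toy stand-in for a ViewResolver: returns a view path, or null to pass
    interface Resolver {
        String resolve(String viewName);
        int getOrder();
    }

    static String resolveInChain(List<Resolver> chain, String viewName) {
        List<Resolver> sorted = new ArrayList<>(chain);
        // Lower order value = consulted earlier, mirroring Spring's Ordered contract
        sorted.sort(Comparator.comparingInt(Resolver::getOrder));
        for (Resolver resolver : sorted) {
            String view = resolver.resolve(viewName);
            if (view != null) {
                return view;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Resolves only the one view it knows about, like our XML-defined view
        Resolver xml = new Resolver() {
            public String resolve(String name) {
                return "xmlSample".equals(name) ? "/WEB-INF/view/xmlSample.jsp" : null;
            }
            public int getOrder() { return 0; }
        };
        // Resolves *every* name, like InternalResourceViewResolver - so it must go last
        Resolver internal = new Resolver() {
            public String resolve(String name) { return "/WEB-INF/view/" + name + ".jsp"; }
            public int getOrder() { return 1; }
        };

        List<Resolver> chain = Arrays.asList(internal, xml);
        System.out.println(resolveInChain(chain, "xmlSample")); // /WEB-INF/view/xmlSample.jsp
        System.out.println(resolveInChain(chain, "sample"));    // /WEB-INF/view/sample.jsp
    }
}
```

Had the internal resolver been given order 0 instead, it would have caught “xmlSample” first and the XML-defined view would never be used.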

7. Conclusion

In this tutorial we configured a chain of view resolvers using Java configuration. By playing with the order priority we can set the order of their invocation.

The implementation of this simple tutorial can be found in the github project – this is an Eclipse based project, so it should be easy to import and run as it is.

Sign Up to the newly released "REST With Spring" Master Class:

>> CHECK OUT THE CLASSES


Introduction to Spring Data Redis

I usually post about Persistence on Twitter - you can follow me there:

1. Overview

This article is an introduction to Spring Data Redis, which provides the abstractions of the Spring Data platform to Redis – the popular in-memory data structure store.

Redis is driven by a key-value data structure to persist data, and can be used as a database, cache, message broker, etc.

We’ll be able to use the common patterns of Spring Data (templates, etc.), while also having the traditional simplicity of all Spring Data projects.

2. Maven Dependencies

Let’s start by declaring the Spring Data Redis dependencies in the pom.xml:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
    <version>1.6.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.5.1</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.2.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>4.2.2.RELEASE</version>
</dependency>

3. The Redis Configuration

Redis clients are used to define the connection settings between the application client and the Redis server instance.

There are a number of Redis client implementations available for Java. In this tutorial, we’ll use Jedis – a simple and powerful Redis client implementation.

There is good support for both XML and Java configuration in the framework; for this tutorial, we’ll use Java-based configuration.

3.1. Java Configuration

Let’s start with the configuration bean definitions:

@Bean
JedisConnectionFactory jedisConnectionFactory() {
    return new JedisConnectionFactory();
}

@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> template = new RedisTemplate<String, Object>();
    template.setConnectionFactory(jedisConnectionFactory());
    return template;
}

The configuration is quite simple. First, using the Jedis client, a Redis connectionFactory can be defined.

Using the jedisConnectionFactory, a RedisTemplate is defined, which will then be used for querying data with a custom repository.

3.2. Custom Connection Properties

You may have already noticed that the usual connection-related properties are missing in the above configuration – for example, the server address and port. The reason is simple: for our example, we’re using the defaults.

However, if you need to configure the connection details, you can always modify the jedisConnectionFactory configuration as follows:

@Bean
JedisConnectionFactory jedisConnectionFactory() {
    JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
    jedisConFactory.setHostName("localhost");
    jedisConFactory.setPort(6379);
    return jedisConFactory;
}

4. The Redis Repository

Let’s use a Student entity for our examples:

public class Student implements Serializable {
  
    public enum Gender { 
        MALE, FEMALE
    }

    private String id;
    private String name;
    private Gender gender;
    private int grade;
    ...
}

4.1. The Spring Data Repository

Let’s now create the StudentRepository as follows:

public interface StudentRepository {
    
    void saveStudent(Student person);
    ...
}

Note that, unlike other Spring Data Repository interfaces, this is just a standard interface to define a supported method. It doesn’t enable any Spring-related features.

And this is certainly unusual for a Spring Data project. Most of the other Spring Data projects are capable of building repositories based on the common Spring Data interfaces.

For example, Spring Data JPA provides several base repository interfaces that you can extend to get base features such as basic CRUD operations, the ability to generate queries based on method names, etc. In most cases, there’s no need to write an implementation of the repository interface at all.

Spring Data Redis, however, does not have base repository interfaces to extend, nor does it have method name-based query generation. This partly has to do with the team who works on Spring Data Redis simply not having enough time to work on it, and partly has to do with the fact that Redis does not support “queries” the way we typically think of them. You can read more in this StackOverflow answer by one of the maintainers of the Spring Data Redis project.

4.2. A Repository Implementation

The StudentRepositoryImpl implementation uses the redisTemplate defined in the Java configuration above.

Redis supports different data structures such as strings, hashes, lists, sets, and sorted sets. In this example we will use the opsForHash() function, which uses hash-related operations for data manipulation. We will use the string “Student” as the name of the hash in which our Student entities will be stored.

@Repository
public class StudentRepositoryImpl implements StudentRepository {

    private static final String KEY = "Student";
    
    private RedisTemplate<String, Student> redisTemplate;
    private HashOperations hashOps;

    @Autowired
    private StudentRepositoryImpl(RedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @PostConstruct
    private void init() {
        hashOps = redisTemplate.opsForHash();
    }
    
    public void saveStudent(Student student) {
        hashOps.put(KEY, student.getId(), student);
    }

    public void updateStudent(Student student) {
        hashOps.put(KEY, student.getId(), student);
    }

    public Student findStudent(String id) {
        return (Student) hashOps.get(KEY, id);
    }

    public Map<Object, Object> findAllStudents() {
        return hashOps.entries(KEY);
    }

    public void deleteStudent(String id) {
        hashOps.delete(KEY, id);
    }
}

The opsForHash() function returns the operations performed on hash values bound to the given key. It’s defined with three parameters: the key, the hash key, and the hash value type.

Just note that, in addition to opsForHash(), there are similar set-related, list-related, and simple value functions.

5. Data Access using StudentRepository

We have implemented the data access operations in the StudentRepositoryImpl class, so it can be used directly to persist data or to manipulate persisted data.

5.1. Saving a New Student Object

Let’s save a new student object in the data store:

Student student = new Student(
  "Eng2015001", "John Doe", Student.Gender.MALE, 1);
studentRepository.saveStudent(student);

5.2. Retrieving an Existing Student Object

We can verify the correct insertion of the student in the previous section by fetching the inserted student data:

Student retrievedStudent = 
  studentRepository.findStudent("Eng2015001");

5.3. Updating an Existing Student Object

Let’s change the name of the student retrieved above and save it again:

retrievedStudent.setName("Richard Watson");
studentRepository.saveStudent(retrievedStudent);

Finally, we can retrieve the student’s data again and verify that the name is updated in the datastore.

5.4. Deleting Existing Student Data

We can delete the above-inserted student data:

studentRepository.deleteStudent(student.getId());

Now we can search for the student object and verify that the result is null.

5.5. Finding All Student Data

We can insert a few student objects:

Student engStudent = 
  new Student("Eng2015001", "John Doe", Student.Gender.MALE, 1);
Student medStudent = 
  new Student("Med2015001", "Gareth Houston", Student.Gender.MALE, 2);
studentRepository.saveStudent(engStudent);
studentRepository.saveStudent(medStudent);

However, this can also be done by inserting a Map object. For that there is a different method – putAll() – which accepts a single Map object containing multiple student objects to be persisted.
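The putAll() variant takes the hash key plus a Map of entries. Here’s a minimal plain-Java sketch of building that map – Student here is a pared-down stand-in for the article’s entity, and the actual Redis call is left as a comment since it needs a live connection:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BatchSaveDemo {

    // Pared-down stand-in for the article's Student entity
    static class Student {
        final String id;
        final String name;
        Student(String id, String name) { this.id = id; this.name = name; }
    }

    // Build the id -> student map that hashOps.putAll(KEY, entries) expects
    static Map<String, Student> toHashEntries(Student... students) {
        Map<String, Student> entries = new LinkedHashMap<>();
        for (Student student : students) {
            entries.put(student.id, student);
        }
        return entries;
    }

    public static void main(String[] args) {
        Map<String, Student> entries = toHashEntries(
          new Student("Eng2015001", "John Doe"),
          new Student("Med2015001", "Gareth Houston"));
        System.out.println(entries.keySet()); // [Eng2015001, Med2015001]
        // In the repository this would then be a single call:
        // hashOps.putAll(KEY, entries);
    }
}
```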

To find all inserted students:

Map<Object, Object> retrievedStudents = 
  studentRepository.findAllStudents();

Then we can easily check the size of retrievedStudents, or verify at a finer granularity by checking the properties of each object.

6. Conclusion

In this tutorial, we went through the basics of Spring Data Redis. The source code of the examples above can be found in a GitHub project.


Java Web Weekly 110

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Q&A with Aleksey Shipilev on Compact Strings Optimization in OpenJDK 9 [infoq.com]

If you’re interested in the inner workings of JDK 9, this interview is well worth a read.

>> O Java EE 7 Application Servers, Where Art Thou? [antoniogoncalves.org]

Very interesting numbers on the current state of Java EE 7 application servers.

>> Introduction to Spring Rest Docs [yetanotherdevblog.com]

A solid and super-practical intro to a very cool new(ish) project out of the Spring ecosystem – Spring REST Docs.

>> Hystrix To Prevent Hysterix [keyholesoftware.com]

A good intro to Hystrix for building a resilient system architecture.

The writeup is a bit verbose at the beginning, but it gets quite interesting and useful later on.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Thank you Waitrose, now fix your insecure site [troyhunt.com]

The right way to do HTTPS when it comes to sending user credentials over the wire. Not super complicated, but it seems that not everybody’s doing it right.

A useful and fun one to read.

>> Basics of Web Application Security: Encode HTML output [martinfowler.com]

The next installment in the security series I covered last week.

And a quick side-note – this article is what’s called an “evolving publication” – kind of a unique concept and maybe something that shows us that we don’t have to publish work in the same old way we’re used to.

Also worth reading:

3. Musings

>> We’re Not Beasts, So Let’s Not Act Like It [daedtech.com]

Being an employee and being a consultant coming in for a limited amount of time are two very different things. Not only different financially and organizationally, but at a fundamental level that has a lot more to do with mindset.

This writeup explores that difference in a practical and funny way – definitely have a read if you’re skirting the border between employee and consultant (or thinking about it).

>> The Tyranny of the P1 [dandreamsofcoding.com]

Dan is taking a page out of Amy Hoy‘s playbook and getting back to product basics.

Of course the problem is that these aren’t the fun features to build into the product and it takes oh so much discipline to stay away from those.

>> Is Unlimited PTO a Good Deal for Me? [daedtech.com]

The first time I read about the concept of unlimited vacation time, I was excited about the idea for about five minutes, and then started to understand the nuances and implications of what that actually meant.

This piece explores those nuances in a clear and insightful way.

>> The software engineer’s guide to asserting dominance in the workplace [medium.com]

Funniest thing I read all week.

Also worth reading:

4. Comics

And my favorite comics of the week:

>> I didn’t know you could gift-wrap creepiness [dilbert.com]

>> Can we watch? [dilbert.com]

>> Backslashes [xkcd.com]

 

5. Pick of the Week

This week – a fantastic presentation about testing (an oldie but a goodie):

>> Test – Know Your Units (Oredev 2008)

 


Modular RAML Using Includes, Libraries, Overlays and Extensions

1. Introduction

In our first two articles on RAML – the RESTful API Modeling Language – we introduced some basic syntax, including the use of data types and JSON schema, and we showed how to simplify a RAML definition by extracting common patterns into resource types and traits.

In this article, we show how you can break your RAML API definition into modules by making use of includes, libraries, overlays, and extensions.

2. Our API

For the purpose of this article, we shall focus on the portion of our API involving the entity type called Foo.

Here are the resources making up our API:

  • GET /api/v1/foos
  • POST /api/v1/foos
  • GET /api/v1/foos/{fooId}
  • PUT /api/v1/foos/{fooId}
  • DELETE /api/v1/foos/{fooId}

3. Includes

The purpose of an include is to modularize a complex property value in a RAML definition by placing the property value in an external file.

Our first article touched briefly on the use of includes when we were specifying data types and examples whose properties were being repeated inline throughout the API.

3.1. General Usage and Syntax

The !include tag takes a single argument: the location of the external file containing the property value. This location may be an absolute URL, a path relative to the root RAML file, or a path relative to the including file.

A location starting with a forward slash (/) indicates a path relative to the location of the root RAML file, and a location beginning without a slash is interpreted to be relative to the location of the including file.

The logical corollary to the latter is that an included file may itself contain other !include directives.

Here is an example showing all three uses of the !include tag:

#%RAML 1.0
title: Baeldung Foo REST Services API
...
types: !include /types/allDataTypes.raml
resourceTypes: !include allResourceTypes.raml
traits: !include http://foo.com/docs/allTraits.raml

3.2. Typed Fragments

Rather than placing all the types, resource types, or traits in their own respective include files, you can also use special types of includes known as typed fragments to break each of these constructs into multiple include files, specifying a different file for each type, resource type, or trait.

You can also use typed fragments to define user documentation items, named examples, annotations, libraries, overlays, and extensions. We will cover the use of overlays and extensions later in the article.

Although it is not required, the first line of an include file that is a typed fragment may be a RAML fragment identifier of the following format:

#%RAML 1.0 <fragment-type>

For example, the first line of a typed fragment file for a trait would be:

#%RAML 1.0 Trait

If a fragment identifier is used, then the contents of the file MUST contain only valid RAML for the type of fragment being specified.

Let’s look first at a portion of the traits section of our API:

traits:
  - hasRequestItem:
      body:
        application/json:
          type: <<typeName>>
  - hasResponseItem:
      responses:
          200:
            body:
              application/json:
                type: <<typeName>>
                example: !include examples/<<typeName>>.json

In order to modularize this section using typed fragments, we first rewrite the traits section as follows:

traits:
  - hasRequestItem: !include traits/hasRequestItem.raml
  - hasResponseItem: !include traits/hasResponseItem.raml

We would then write the typed fragment file hasRequestItem.raml:

#%RAML 1.0 Trait
body:
  application/json:
    type: <<typeName>>

The typed fragment file hasResponseItem.raml would look like this:

#%RAML 1.0 Trait
responses:
    200:
      body:
        application/json:
          type: <<typeName>>
          example: !include /examples/<<typeName>>.json

4. Libraries

RAML libraries may be used to modularize any number and combination of data types, security schemes, resource types, traits, and annotations.

4.1. Defining a Library

Although usually defined in an external file, which is then referenced as an include, a library may also be defined inline. A library contained in an external file can reference other libraries as well.

Unlike a regular include or typed fragment, a library contained in an external file must declare the top-level element names that are being defined.

Let’s rewrite our traits section as a library file:

#%RAML 1.0 Library
# This is the file /libraries/traits.raml
usage: This library defines some basic traits
traits:
  hasRequestItem:
    usage: Use this trait for resources whose request body is a single item
    body:
      application/json:
        type: <<typeName>>
  hasResponseItem:
    usage: Use this trait for resources whose response body is a single item
    responses:
        200:
          body:
            application/json:
              type: <<typeName>>
              example: !include /examples/<<typeName>>.json

4.2. Applying a Library

Libraries are applied via the top-level uses property, the value of which is one or more objects whose property names are the library names and whose property values make up the contents of the libraries.

Once we have created the libraries for our security schemes, data types, resource types, and traits, we can apply the libraries to the root RAML file:

#%RAML 1.0
title: Baeldung Foo REST Services API
uses:
  mySecuritySchemes: !include libraries/security.raml
  myDataTypes: !include libraries/dataTypes.raml
  myResourceTypes: !include libraries/resourceTypes.raml
  myTraits: !include libraries/traits.raml

4.3. Referencing a Library

A library is referenced by concatenating the library name, a dot (.), and the name of the element (e.g. data type, resource type, trait, etc) being referenced.

You may recall from our previous article how we refactored our resource types using the traits that we had defined. The following example shows how to rewrite our “item” resource type as a library, how to include the traits library file (shown above) within the new library, and how to reference the traits by prefixing the trait names with their library name qualifier (“myTraits“):

#%RAML 1.0 Library
# This is the file /libraries/resourceTypes.raml
usage: This library defines the resource types for the API
uses:
  myTraits: !include traits.raml
resourceTypes:
  item:
    usage: Use this resourceType to represent any single item
    description: A single <<typeName>>
    get:
      description: Get a <<typeName>> by <<resourcePathName>>
      is: [ myTraits.hasResponseItem, myTraits.hasNotFound ]
    put:
      description: Update a <<typeName>> by <<resourcePathName>>
      is: [ myTraits.hasRequestItem, myTraits.hasResponseItem, myTraits.hasNotFound ]
    delete:
      description: Delete a <<typeName>> by <<resourcePathName>>
      is: [ myTraits.hasNotFound ]
      responses:
        204:

5. Overlays and Extensions

Overlays and extensions are modules defined in external files that are used to extend an API. An overlay is used to extend non-behavioral aspects of an API, such as descriptions, usage directions, and user documentation items, whereas an extension is used to extend or override behavioral aspects of the API.

Unlike includes, which other RAML files reference and apply as if their contents were coded inline, every overlay and extension file must contain a reference (via the top-level masterRef property) to the master file to which it applies. The master file can be either a valid RAML API definition or another overlay or extension file.

5.1. Definition

The first line of an overlay file must be formatted as follows:

#%RAML 1.0 Overlay

Similarly, the first line of an extension file must be formatted as follows:

#%RAML 1.0 Extension

5.2. Usage Constraints

When using a set of overlays and/or extensions, all of them must refer to the same master RAML file. In addition, RAML processing tools usually expect the root RAML file and all overlay and extension files to have a common file extension (e.g. “.raml”).

5.3. Use Cases for Overlays

The motivation behind overlays is to provide a mechanism for separating interface from implementation, thus allowing the more human-oriented parts of a RAML definition to change or grow more frequently, while the core behavioral aspects of the API remain stable.

A common use case for overlays is to provide user documentation and other descriptive elements in multiple languages. Let’s rewrite the title of our API and add some user documentation items:

#%RAML 1.0
title: API for REST Services used in the RAML tutorials on Baeldung.com
documentation:
  - title: Overview
    content: |
      This document defines the interface for the REST services
      used in the popular RAML Tutorial series at Baeldung.com.
  - title: Copyright
    content: Copyright 2016 by Baeldung.com. All rights reserved.

Here is how we would define a Spanish language overlay for this section:

#%RAML 1.0 Overlay
# File located at (archivo situado en):
# /overlays/es_ES/documentationItems.raml
masterRef: /api.raml
usage: |
  To provide user documentation and other descriptive text in Spanish
  (Para proporcionar la documentación del usuario y otro texto descriptivo
  en español)
title: |
  API para servicios REST utilizados en los tutoriales RAML
  en Baeldung.com
documentation:
  - title: Descripción general
    content: |
      Este documento define la interfaz para los servicios REST
      utilizados en la popular serie de RAML Tutorial en Baeldung.com.
  - title: Derechos de autor
    content: |
      Derechos de autor 2016 por Baeldung.com.
      Todos los derechos reservados.

Another common use case for overlays is to externalize annotation metadata, which is essentially a way of adding non-standard constructs to an API in order to provide hooks for RAML processors, such as testing and monitoring tools.

5.4. Use Cases for Extensions

As you may infer from the name, extensions are used to extend an API by adding new behaviors and/or modifying existing behaviors of an API. An analogy from the object-oriented programming world would be a subclass extending a superclass, where the subclass can add new methods and/or override existing methods. An extension may also extend an API’s non-functional aspects.

An extension might be used, for example, to define additional resources that are exposed only to a select set of users, such as administrators or users having been assigned a particular role. An extension could also be used to add features for a newer version of an API.

Below is an extension that overrides the version of our API and adds resources that were unavailable in the previous version:

#%RAML 1.0 Extension
# File located at:
# /extensions/en_US/additionalResources.raml
masterRef: /api.raml
usage: This extension defines additional resources for version 2 of the API.
version: v2
/foos:
  /bar/{barId}:
    get:
      description: |
        Get the foo that is related to the bar having barId = {barId}
      typeName: Foo
      queryParameters:
        barId?: integer
      is: [ hasResponseItem ]

And here is a Spanish-language overlay for that extension:

#%RAML 1.0 Overlay
# Archivo situado en:
# /overlays/es_ES/additionalResources.raml
masterRef: /api.raml
usage: |
  Se trata de una superposición en español que describe los recursos
  adicionales para la versión 2 del API.
version: v2
/foos:
  /bar/{barId}:
    get:
      description: |
        Obtener el foo que se relaciona con el bar tomando barId = {barId}

It is worth noting here that although we used an overlay for the Spanish-language overrides in this example because it does not modify any behaviors of the API, we could just as easily have defined this module to be an extension. And it may be more appropriately defined as an extension, given that its purpose is to override properties found in the English-language extension above it.

6. Conclusion

In this tutorial, we have introduced several techniques to make a RAML API definition more modular by separating common constructs into external files.

First, we showed how the include feature in RAML can be used to refactor individual, complex property values into reusable external file modules known as typed fragments. Next, we demonstrated a way of using the include feature to externalize certain sets of elements into reusable libraries. Finally, we extended some behavioral and non-behavioral aspects of an API through the use of overlays and extensions.

To learn even more about RAML modularization techniques, please visit the RAML 1.0 spec.

You can view the full implementation of the API definition used for this tutorial in the github project.


Introduction to Spring Data Elasticsearch


I usually post about Persistence on Twitter - you can follow me there:

1. Overview

In this article we’ll explore the basics of Spring Data Elasticsearch in a code-focused, practical manner.

We’ll show how to index, search, and query Elasticsearch in a Spring application using Spring Data – a Spring module for interaction with a popular open-source, Lucene-based search engine.

While Elasticsearch is schemaless, it can use mappings to specify the types of fields. When a document is indexed, its fields are processed according to their types. For example, a text field will be tokenized and filtered according to mapping rules. You can also create filters and tokenizers of your own.

2. Spring Data

Spring Data helps avoid boilerplate code. For example, if we define a repository interface that extends the ElasticsearchRepository interface provided by Spring Data Elasticsearch, CRUD operations for the corresponding document class will be made available by default.

Additionally, simply by declaring methods with names in a prescribed format, method implementations are generated for you – there is no need to write an implementation of the repository interface.

You can read more about Spring Data here.

2.1. Maven Dependency

Spring Data Elasticsearch provides a Java API for the search engine. In order to use it we need to add a new dependency to the pom.xml:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
    <version>1.3.2.RELEASE</version>
</dependency>

2.2. Defining Repository Interfaces

Next, we need to extend one of the provided repository interfaces, replacing the generic types with our actual document and primary key types.

Notice that ElasticsearchRepository extends PagingAndSortingRepository, which provides built-in support for pagination and sorting.

In our example, we will use the paging feature in our custom search method:

public interface ArticleRepository extends ElasticsearchRepository<Article, String> {

    Page<Article> findByAuthorsName(String name, Pageable pageable);

    @Query("{\"bool\": {\"must\": [{\"match\": {\"authors.name\": \"?0\"}}]}}")
    Page<Article> findByAuthorsNameUsingCustomQuery(String name, Pageable pageable);
}

Notice that we added two custom methods. With the findByAuthorsName method, the repository proxy will create an implementation based on the method name. The resolution algorithm will determine that it needs to access the authors property and then search the name property of each item.

The second method, findByAuthorsNameUsingCustomQuery, uses an Elasticsearch boolean query, defined using the @Query annotation, which requires strict matching between the author’s name and the provided name argument.

2.3. Java Configuration

Let’s now explore the Spring configuration of our persistence layer here:

@Configuration
@EnableElasticsearchRepositories(basePackages = "com.baeldung.spring.data.es.repository")
@ComponentScan(basePackages = {"com.baeldung.spring.data.es.service"})
public class Config {

    @Bean
    public NodeBuilder nodeBuilder() {
        return new NodeBuilder();
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        ImmutableSettings.Builder elasticsearchSettings = 
          ImmutableSettings.settingsBuilder()
          .put("http.enabled", "false") // 1
          .put("path.data", tmpDir.toAbsolutePath().toString()); // 2

        logger.debug(tmpDir.toAbsolutePath().toString());

        return new ElasticsearchTemplate(nodeBuilder()
          .local(true)
          .settings(elasticsearchSettings.build())
          .node()
          .client());
    }
}

Notice that we’re using a standard Spring enable-style annotation – @EnableElasticsearchRepositories  – to scan the provided package for Spring Data repositories.

We are also:

  1. Starting the Elasticsearch node without HTTP transport support
  2. Setting the location of the data files of each index allocated on the node

Finally – we’re also setting up an ElasticsearchOperations bean – elasticsearchTemplate – as our client to work against the Elasticsearch server.

3. Mappings

Let’s now define our first entity – a document called Article with a String id:

@Document(indexName = "blog", type = "article")
public class Article {

    @Id
    private String id;
    
    private String title;
    
    @Field(type = FieldType.Nested)
    private List<Author> authors;
    
    // standard getters and setters
}

Note that in the @Document annotation, we indicate that instances of this class should be stored in Elasticsearch in an index called “blog“, and with a document type of “article“. Documents with many different types can be stored in the same index.

Also notice that the authors field is marked as FieldType.Nested. This allows us to define the Author class separately, but have the individual instances of author embedded in an Article document when it is indexed in Elasticsearch.
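The Author class itself is not shown in the original listing; a minimal sketch (its exact fields are an assumption based on how it is used in the examples) could look like this:

```java
// Hypothetical sketch of the Author class referenced by Article
public class Author {

    private String name;

    public Author() {
    }

    public Author(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```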

4. Indexing Documents

Spring Data Elasticsearch generally auto-creates indexes based on the entities in the project.

However, you can also create an index programmatically, via the client template:

elasticsearchTemplate.createIndex(Article.class);

After the index is available, we can add a document to the index.

Let’s have a quick look at an example – indexing an article with two authors:

Article article = new Article("Spring Data Elasticsearch");
article.setAuthors(asList(new Author("John Smith"), new Author("John Doe")));
articleService.save(article);

5. Querying

5.1. Method Name-Based Query

The repository class we defined earlier had a findByAuthorsName method – which we can use for finding articles by author name:

String nameToFind = "John Smith";
Page<Article> articleByAuthorName
  = articleService.findByAuthorName(nameToFind, new PageRequest(0, 10));

By calling findByAuthorName with a PageRequest object, we obtain the first page of results (page numbering is zero-based), with that page containing at most 10 articles.

5.2. A Custom Query

There are a couple of ways to define custom queries for Spring Data Elasticsearch repositories. One way is to use the @Query annotation, as demonstrated in section 2.2.

Another option is to use a builder for custom query creation.

For example, we could search for articles that have word “data” in the title by building a query with the NativeSearchQueryBuilder:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withFilter(regexpFilter("title", ".*data.*"))
  .build();
List<Article> articles = elasticsearchTemplate.queryForList(searchQuery, Article.class);

6. Updating and Deleting

In order to update or delete a document, we first need to retrieve that document:

String articleTitle = "Spring Data Elasticsearch";
SearchQuery searchQuery = new NativeSearchQueryBuilder()
  .withQuery(matchQuery("title", articleTitle).minimumShouldMatch("75%"))
  .build();

List<Article> articles = elasticsearchTemplate.queryForList(searchQuery, Article.class);

Now, to update the title of the article – we can modify the document, and use the save API:

article.setTitle("Getting started with Search Engines");
articleService.save(article);

As you may have guessed, in order to delete a document you can use the delete method:

articleService.delete(articles.get(0));

7. Conclusion

This was a quick and practical discussion of the basic use of Spring Data Elasticsearch.

To read more about the impressive features of Elasticsearch, you can find its documentation on the official website.

The example used in this article is available as a sample project in GitHub.



Guava 19: What’s New?


1. Overview

Google Guava provides libraries with utilities that ease Java development. In this tutorial, we will take a look at the new functionality introduced in the Guava 19 release.

2. common.base Package Changes

2.1. Added CharMatcher Static Methods

CharMatcher, as its name implies, is used to check whether a string matches a set of requirements.

String inputString = "someString789";
boolean result = CharMatcher.javaLetterOrDigit().matchesAllOf(inputString);

In the example above, result will be true.

CharMatcher can also be used when you need to transform strings.

String number = "8 123 456 123";
String result = CharMatcher.whitespace().collapseFrom(number, '-');

In the example above, result will be “8-123-456-123”.

With the help of CharMatcher, you can count the number of occurrences of a character in a given string:

String number = "8 123 456 123";
int result = CharMatcher.digit().countIn(number);

In the example above, result will be 10.

Previous versions of Guava had matcher constants such as CharMatcher.WHITESPACE and CharMatcher.JAVA_LETTER_OR_DIGIT.

In Guava 19, these have been superseded by equivalent methods (CharMatcher.whitespace() and CharMatcher.javaLetterOrDigit(), respectively). This was changed to reduce the number of classes created when CharMatcher is used.

Using static factory methods allows classes to be created only as needed. In future releases, matcher constants will be deprecated and removed.

2.2. lazyStackTrace Method in Throwables

This method returns a List of the stack trace elements (lines) of a provided Throwable. It can be faster than obtaining the full stack trace (Throwable.getStackTrace()) when only a portion is needed, but can be slower if you end up iterating over the full stack trace.

IllegalArgumentException e = new IllegalArgumentException("Some argument is incorrect");
List<StackTraceElement> stackTraceElements = Throwables.lazyStackTrace(e);

3. common.collect Package Changes

3.1.  Added FluentIterable.toMultiset()

In a previous Baeldung article, What’s New in Guava 18, we looked at FluentIterable. The toMultiset() method is used when you need to convert a FluentIterable to an ImmutableMultiset:

User[] usersArray = {new User(1L, "John", 45), new User(2L, "Max", 15)};
ImmutableMultiset<User> users = FluentIterable.of(usersArray).toMultiset();

A Multiset is a collection that, like a Set, supports order-independent equality. The main difference between a Set and a Multiset is that a Multiset may contain duplicate elements. A Multiset stores equal elements as occurrences of the same single element, so you can call Multiset.count(java.lang.Object) to get the total count of occurrences of a given object.

Let’s take a look at a few examples:

List<String> userNames = Arrays.asList("David", "Eugen", "Alex", "Alex", "David", "David", "David");

Multiset<String> userNamesMultiset = HashMultiset.create(userNames);

assertEquals(7, userNamesMultiset.size());
assertEquals(4, userNamesMultiset.count("David"));
assertEquals(2, userNamesMultiset.count("Alex"));
assertEquals(1, userNamesMultiset.count("Eugen"));
assertThat(userNamesMultiset.elementSet(), anyOf(containsInAnyOrder("Alex", "David", "Eugen")));

You can easily determine the count of duplicate elements, which is far cleaner than with standard Java collections.

3.2. Added RangeSet.asDescendingSetOfRanges() and asDescendingMapOfRanges()

RangeSet is used to operate with nonempty ranges (intervals). We can describe a RangeSet as a set of disconnected, nonempty ranges. When you add a new nonempty range to a RangeSet, any connected ranges will be merged and empty ranges will be ignored.

Let’s take a look at some methods we can use to build new ranges: Range.closed(), Range.openClosed(), Range.closedOpen(), Range.open().

The difference between them is that open ranges don’t include their endpoints. They have different designations in mathematics: open intervals are denoted with “(” or “)”, while closed intervals are denoted with “[” or “]”.

For example (0,5) means “any value greater than 0 and less than 5”, while (0,5] means “any value greater than 0 and less than or equal to 5”:

RangeSet<Integer> rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(1, 10));

Here we added the range [1, 10] to our RangeSet. Now we want to extend it by adding a new range:

rangeSet.add(Range.closed(5, 15));

You can see that these two ranges are connected at 5, so the RangeSet will merge them into a single new range, [1, 15]:

rangeSet.add(Range.closedOpen(10, 17));

These ranges are connected at 10, so they will be merged, resulting in the closed-open range [1, 17). You can check whether a value is included in a range using the contains method:

rangeSet.contains(15);

This will return true, because the range [1,17) contains 15. Let’s try another value:

rangeSet.contains(17);

This will return false, because the range [1,17) doesn’t include its upper endpoint, 17. You can also check whether a range encloses another range using the encloses method:

rangeSet.encloses(Range.closed(2, 3));

This will return true because the range [2,3] falls completely within our range, [1,17).

There are a few more methods that can help you operate with intervals, such as Range.greaterThan(), Range.lessThan(), Range.atLeast(), Range.atMost(). The first two will add open intervals, the last two will add closed intervals. For example:

rangeSet.add(Range.greaterThan(22));

This will add a new interval (22, +∞) to your RangeSet, because it has no connections with other intervals.

With the help of the new methods asDescendingSetOfRanges (for RangeSet) and asDescendingMapOfRanges (for RangeMap), you can view the contents of a RangeSet or RangeMap as a Set or Map, in descending order of the ranges.
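A quick sketch of the RangeSet view (using a fresh RangeSet here so the example is self-contained):

```java
import com.google.common.collect.Range;
import com.google.common.collect.RangeSet;
import com.google.common.collect.TreeRangeSet;
import java.util.Set;

RangeSet<Integer> rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(1, 10));
rangeSet.add(Range.closed(15, 20));

// A view of the disconnected ranges in descending order:
// [15..20] is iterated before [1..10]
Set<Range<Integer>> descendingRanges = rangeSet.asDescendingSetOfRanges();
```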

3.3. Added Lists.cartesianProduct(List…) and Lists.cartesianProduct(List<List<B>>)

A Cartesian product returns every possible combination of two or more collections:

List<String> first = Lists.newArrayList("value1", "value2");
List<String> second = Lists.newArrayList("value3", "value4");

List<List<String>> cartesianProduct = Lists.cartesianProduct(first, second);

List<String> pair1 = Lists.newArrayList("value2", "value3");
List<String> pair2 = Lists.newArrayList("value2", "value4");
List<String> pair3 = Lists.newArrayList("value1", "value3");
List<String> pair4 = Lists.newArrayList("value1", "value4");

assertThat(cartesianProduct, anyOf(containsInAnyOrder(pair1, pair2, pair3, pair4)));

As you can see from this example, the resulting list will contain all possible combinations of provided lists.

3.4. Added Maps.newLinkedHashMapWithExpectedSize(int)

The initial capacity of a standard LinkedHashMap is 16 (you can verify this in the source of LinkedHashMap). When the load factor of the backing HashMap is reached (0.75 by default), the map will rehash and double its size. But if you know that your map will handle many key-value pairs, you can specify an initial capacity greater than 16, allowing you to avoid repeated rehashings:

LinkedHashMap<Object, Object> someLinkedMap = Maps.newLinkedHashMapWithExpectedSize(512);

3.5. Re-added Multisets.removeOccurrences(Multiset, Multiset)

This method is used to remove specified occurrences in Multiset:

Multiset<String> multisetToModify = HashMultiset.create();
Multiset<String> occurrencesToRemove = HashMultiset.create();

multisetToModify.add("John");
multisetToModify.add("Max");
multisetToModify.add("Alex");

occurrencesToRemove.add("Alex");
occurrencesToRemove.add("John");

Multisets.removeOccurrences(multisetToModify, occurrencesToRemove);

After this operation only “Max” will be left in multisetToModify.

Note that, if multisetToModify contained multiple instances of a given element while occurrencesToRemove contains only one instance of that element, removeOccurrences will only remove one instance.
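A small sketch of that edge case (the names here are hypothetical):

```java
import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multiset;
import com.google.common.collect.Multisets;
import java.util.Arrays;

Multiset<String> toModify = HashMultiset.create(Arrays.asList("Alex", "Alex", "John"));
Multiset<String> toRemove = HashMultiset.create(Arrays.asList("Alex"));

// toRemove contains a single "Alex", so only one of the
// two "Alex" occurrences in toModify is removed
Multisets.removeOccurrences(toModify, toRemove);
```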

4. common.hash Package Changes

4.1. Added Hashing.sha384()

The Hashing.sha384() method returns a hash function that implements the SHA-384 algorithm:

int inputData = 15;
        
HashFunction hashFunction = Hashing.sha384();
HashCode hashCode = hashFunction.hashInt(inputData);

The SHA-384 hash for 15 is “0904b6277381dcfbddd…2240a621b2b5e3cda8”.
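
As a rough plain-JDK cross-check, MessageDigest produces the same 384-bit (48-byte) digest size. Note that encoding the int as 4 little-endian bytes below mirrors what Guava's hashInt does with primitive input, but treat that byte-order detail as an assumption rather than a verified equivalence:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.security.MessageDigest;

public class Sha384Demo {
    public static void main(String[] args) throws Exception {
        // Assumption: like Guava's hashInt, encode the int as 4 little-endian bytes
        byte[] input = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(15).array();

        byte[] digest = MessageDigest.getInstance("SHA-384").digest(input);
        System.out.println(digest.length * 8); // 384 bits, i.e. 48 bytes
    }
}
```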

4.2. Added Hashing.concatenating(HashFunction, HashFunction, HashFunction…) and Hashing.concatenating(Iterable<HashFunction>)

With the help of the Hashing.concatenating methods, you can concatenate the results of a series of hash functions:

int inputData = 15;

HashFunction crc32Function = Hashing.crc32();
HashCode crc32HashCode = crc32Function.hashInt(inputData);

HashFunction hashFunction = Hashing.concatenating(Hashing.crc32(), Hashing.crc32());
HashCode concatenatedHashCode = hashFunction.hashInt(inputData);

The resulting concatenatedHashCode will be “4acf27794acf2779”, which is the same as the crc32HashCode (“4acf2779”) concatenated with itself.

In our example, a single hashing algorithm was used twice for clarity, which isn’t particularly useful in practice. Combining two hash functions makes sense when you need a stronger hash, because the concatenated hash can only be broken if both of the underlying hashes are broken. So in most cases, use two different hash functions.
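
As a plain-JDK illustration of the same concatenation idea, the sketch below appends the hex digests of several MessageDigest algorithms run over the same input (concatenatedHex is a hypothetical helper, not part of Guava or the JDK):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ConcatenatingHashDemo {

    // Run each named digest over the same input and append the hex results,
    // mimicking the shape of Hashing.concatenating.
    static String concatenatedHex(byte[] input, String... algorithms) throws NoSuchAlgorithmException {
        StringBuilder result = new StringBuilder();
        for (String algorithm : algorithms) {
            for (byte b : MessageDigest.getInstance(algorithm).digest(input)) {
                result.append(String.format("%02x", b));
            }
        }
        return result.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] input = "15".getBytes(StandardCharsets.UTF_8);

        String md5 = concatenatedHex(input, "MD5");
        String combined = concatenatedHex(input, "MD5", "SHA-256");

        System.out.println(combined.startsWith(md5)); // true: MD5 hex comes first
        System.out.println(combined.length());        // 96: 32 MD5 + 64 SHA-256 hex chars
    }
}
```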

5. common.reflect Package Changes

5.1. Added TypeToken.isSubtypeOf

TypeToken is used to manipulate and query generic types even at runtime, avoiding problems due to type erasure.

Java doesn’t retain generic type information for objects at runtime, so it’s impossible to know whether a given object has a generic type. But with the assistance of reflection, you can detect the generic types of methods and classes. TypeToken uses this workaround to allow you to work with and query generic types without extra code.

In our example, you can see that, without TypeToken, the method isAssignableFrom will return true even though ArrayList<String> is not assignable from ArrayList<Integer>:

ArrayList<String> stringList = new ArrayList<>();
ArrayList<Integer> intList = new ArrayList<>();
boolean isAssignableFrom = stringList.getClass().isAssignableFrom(intList.getClass());
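
A quick, self-contained way to confirm the erasure behavior described above: at runtime both lists share the exact same Class object, so reflection alone has no way to tell their element types apart:

```java
import java.util.ArrayList;

public class ErasureDemo {
    public static void main(String[] args) {
        ArrayList<String> stringList = new ArrayList<>();
        ArrayList<Integer> intList = new ArrayList<>();

        // After erasure, both are plain ArrayLists backed by the very same
        // Class object, so isAssignableFrom must return true.
        System.out.println(stringList.getClass() == intList.getClass()); // true
        System.out.println(stringList.getClass().isAssignableFrom(intList.getClass())); // true
    }
}
```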

To solve this problem, we can perform this check with the help of TypeToken:

TypeToken<ArrayList<String>> stringList = new TypeToken<ArrayList<String>>() { };
TypeToken<ArrayList<Integer>> integerList = new TypeToken<ArrayList<Integer>>() { };

boolean isSupertypeOf = stringList.isSupertypeOf(integerList);

In this example, isSupertypeOf will return false.

In previous versions of Guava there was a method isAssignableFrom for this purpose but, as of Guava 19, it is deprecated in favor of isSupertypeOf. Additionally, the method isSubtypeOf(TypeToken) can be used to determine whether a class is a subtype of another class:

TypeToken<ArrayList<String>> stringList = new TypeToken<ArrayList<String>>() { };
TypeToken<List> list = new TypeToken<List>() { };

boolean isSubtypeOf = stringList.isSubtypeOf(list);

ArrayList is a subtype of List, so the result will be true, as expected.

6. common.io Package Changes

6.1. Added ByteSource.sizeIfKnown()

This method returns the size of the source in bytes, if it can be determined, without opening the data stream:

ByteSource byteSource = Files.asByteSource(file);
Optional<Long> size = byteSource.sizeIfKnown();

6.2. Added CharSource.length()

In previous versions of Guava there was no method to determine the length of a CharSource. Now you can use CharSource.length() for this purpose.

6.3. Added CharSource.lengthIfKnown()

As with ByteSource.sizeIfKnown(), CharSource.lengthIfKnown() lets you determine the length of your source in characters, if it can be determined without opening the stream:

CharSource charSource = Files.asCharSource(file, Charsets.UTF_8);
Optional<Long> length = charSource.lengthIfKnown();
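
For comparison, here is a plain-JDK sketch of getting a file's length in characters the brute-force way, by decoding the whole file and counting; the *IfKnown variants exist precisely to avoid this full read when the length is already known:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CharLengthDemo {
    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("demo", ".txt");
        Files.write(file, "hello".getBytes(StandardCharsets.UTF_8));

        // Length in characters: decode the bytes, then count the chars
        int length = new String(Files.readAllBytes(file), StandardCharsets.UTF_8).length();
        System.out.println(length); // 5

        Files.delete(file);
    }
}
```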

7. Conclusion

Guava 19 introduced many useful additions and improvements to its growing library, and it is well worth considering for your next project.

The code samples in this article are available in the GitHub repository.


Java Web Weekly, Issue 111


I usually post about Dev stuff on Twitter - you can follow me there:

At the very beginning of last year, I decided to track my reading habits and share the best stuff here, on Baeldung. Haven’t missed a review since.

Here we go…

1. Spring and Java

>> Reactive Spring [spring.io]

A quick announcement of the plans for reactive programming in Spring 5.

>> How to enable bytecode enhancement dirty checking in Hibernate [vladmihalcea.com]

An interesting Hibernate 5 feature – using bytecode enhancement to do dirty checking. Quick and to the point.

>> Dear API Designer. Are You Sure, You Want to Return a Primitive? [jooq.org]

Good API design is hard – that much should be clear by now.

But we’re all working towards getting better at it, and this writeup definitely makes some good points towards that.

>> Designing your own Spring Boot starter – part 1 [frankel.ch]

The first steps in putting together a Spring Boot style auto configuration – leveraging the wide array of flexible annotations in Boot.

This is no longer a new concept, but it’s still super powerful, especially if you choose to go beyond what the framework provides out of the box.

>> Preventing Session Hijacking With Spring [broadleafcommerce.com]

Solid read on protecting your system against session fixation attacks with Spring Security.

>> Java for small teams [ncrcoe.gitbooks.io]

This looks like a very useful collection of tactics and general practical advice for your first few years of doing Java work.

I haven’t read the whole thing, but the bits that I did read, I fully agreed with.

>> IntelliJ IDEA Pro Tips [medium.com]

A good array of more advanced tips to using IntelliJ well.

Getting the most out of your IDE can really make a day to day difference in your coding flow. I personally learned the most out of pairing sessions and watching my pair do stuff better than I did.

So this is definitely recommended reading if you’re an IntelliJ user (I’m not).

>> Announcing Extras for Eclipse [codeaffine.com]

And on that note – here’s some Eclipse goodness as well.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Data breaches, vBulletin and weak password hashing [troyhunt.com]

Read up on this if you’re doing any kind of security online. Good stuff.

>> Elasticsearch Cluster in a Jiffy [codecentric.de]

To the point options to bootstrap an Elasticsearch cluster. I’ll definitely give this a try soon, as I’m doing a lot of Elasticsearch work lately.

>> Jepsen: RethinkDB 2.2.3 reconfiguration [aphyr.com]

As always, if you’re interested in the inner workings of persistence, have a read.

This one is about RethinkDB – which I’ve personally never used, which didn’t make this piece any less interesting.

Also worth reading:

3. Musings

>> Costs And Benefits Of Comments [codefx.org]

Another interesting installment in the “comments” series.

This one is on my weekend reading list, but I wanted to include it here because I really enjoyed the past writeups.

>> Working with feature-toggled systems [martinfowler.com]

>> Final part of Feature Toggles [martinfowler.com]

The final two parts in what is now a complete reference article on using feature toggles in a system.

>> Mistakes Dev Managers Make [daedtech.com]

I fully agree that doing a good job as a manager comes down to trust – the trust the manager has in the team and, of course, the way the team trusts (or doesn’t trust) the manager.

>> Taobao’s Security Breach from a Log Perspective [loggly.com]

Yet another security breach story, and of course something that could have been avoided with just a few straightforward safeguards in place.

Looks like I timed the announcement of my next course – Learn Spring Security – at the perfect time :)

>> The 5 Golden Rules of Giving Awesome Customer Support [jooq.org]

Good advice all around.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> You read those same policies last week [dilbert.com]

>> Did you know it was hideous before I told you? [dilbert.com]

>> I don’t do that [dilbert.com]

 

5. Pick of the Week

After a couple of months of winding down after the intensity of writing and recording the Master Class of my last course, I’m finally well rested and ready to announce my next big project:

>> Learn Spring Security

 



A Guide to RESTEasy


I just announced the Master Class of my "REST With Spring" Course:

>> THE "REST WITH SPRING" CLASSES

1. Introduction

JAX-RS (Java API for RESTful Web Services) is a set of Java APIs that provides support for creating REST APIs. The framework makes good use of annotations to simplify the development and deployment of these APIs.

In this tutorial, we’ll use RESTEasy, the JBoss-provided portable implementation of the JAX-RS specification, in order to create a simple RESTful web service.

2. Project Setup

We are going to consider two possible scenarios:

  • Standalone Setup – intended to work on any application server
  • JBoss AS Setup – intended only for deployment on JBoss AS

2.1. Standalone Setup

Let’s start by using JBoss WildFly 10 with standalone setup.

JBoss WildFly 10 comes with RESTEasy version 3.0.11 but, as you’ll see, we’ll configure the pom.xml with the newer 3.0.14 version.

And thanks to the resteasy-servlet-initializer, RESTEasy provides integration with standalone Servlet 3.0 containers via the ServletContainerInitializer integration interface.

Let’s have a look at the pom.xml:

<properties>
    <resteasy.version>3.0.14.Final</resteasy.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-servlet-initializer</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-client</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
</dependencies>

jboss-deployment-structure.xml

Within JBoss, everything that is deployed as a WAR, JAR or EAR is a module. These modules are referred to as dynamic modules.

Besides these, there are also some static modules in $JBOSS_HOME/modules. Since JBoss ships with RESTEasy static modules, for a standalone deployment the jboss-deployment-structure.xml file is mandatory in order to exclude some of them.

In this way, all classes and JAR files contained in our WAR will be loaded:

<jboss-deployment-structure>
    <deployment>
        <exclude-subsystems>
            <subsystem name="resteasy" />
        </exclude-subsystems>
        <exclusions>
            <module name="javaee.api" />
            <module name="javax.ws.rs.api"/>
            <module name="org.jboss.resteasy.resteasy-jaxrs" />
        </exclusions>
        <local-last value="true" />
    </deployment>
</jboss-deployment-structure>

2.2. JBoss AS Setup

If you are going to run RESTEasy on JBoss version 6 or higher, you can choose to adopt the libraries already bundled in the application server, thus simplifying the pom:

<dependencies>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-jaxrs</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
</dependencies>

Notice that jboss-deployment-structure.xml is no longer needed.

3. Server Side Code

3.1. Servlet Version 3 web.xml

Let’s now have a quick look at the web.xml of our simple project here:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
     http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">

   <display-name>RestEasy Example</display-name>

   <context-param>
      <param-name>resteasy.servlet.mapping.prefix</param-name>
      <param-value>/rest</param-value>
   </context-param>

</web-app>

resteasy.servlet.mapping.prefix is needed only if you want to prepend a relative path to the API application.

At this point, it’s very important to notice that we haven’t declared any Servlet in the web.xml, because the resteasy-servlet-initializer has been added as a dependency in pom.xml. The reason for that is that RESTEasy provides the org.jboss.resteasy.plugins.servlet.ResteasyServletInitializer class, which implements javax.servlet.ServletContainerInitializer.

ServletContainerInitializer is an initializer that is executed before any servlet context is ready – you can use this initializer to define servlets, filters or listeners for your app.

3.2. The Application Class

The javax.ws.rs.core.Application class is a standard JAX-RS class that you may implement to provide information on your deployment:

@ApplicationPath("/rest")
public class RestEasyServices extends Application {

    private Set<Object> singletons = new HashSet<Object>();

    public RestEasyServices() {
        singletons.add(new MovieCrudService());
    }

    @Override
    public Set<Object> getSingletons() {
        return singletons;
    }
}

As you can see – this is simply a class that lists all JAX-RS root resources and providers, and it is annotated with the @ApplicationPath annotation.

If you return an empty set for both classes and singletons, the WAR will be scanned for JAX-RS annotated resource and provider classes.

3.3. A Services Implementation Class

Finally, let’s see an actual API definition here:

@Path("/movies")
public class MovieCrudService {

    private Map<String, Movie> inventory = new HashMap<String, Movie>();

    @GET
    @Path("/getinfo")
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Movie movieByImdbId(@QueryParam("imdbId") String imdbId) {
        if (inventory.containsKey(imdbId)) {
            return inventory.get(imdbId);
        } else {
            return null;
        }
    }

    @POST
    @Path("/addmovie")
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Response addMovie(Movie movie) {
        if (null != inventory.get(movie.getImdbId())) {
            return Response
              .status(Response.Status.NOT_MODIFIED)
              .entity("Movie is Already in the database.").build();
        }

        inventory.put(movie.getImdbId(), movie);
        return Response.status(Response.Status.CREATED).build();
    }
}

4. Conclusions

In this quick tutorial we introduced RESTEasy and built a super simple API with it.

The example used in this article is available as a sample project in GitHub.

The Master Class of my "REST With Spring" Course is finally out:

>> CHECK OUT THE CLASSES

Define Custom RAML Properties Using Annotations


1. Introduction

In this, the fourth article in our series on RAML – the RESTful API Modeling Language – we demonstrate how to use annotations to define custom properties for a RAML API specification. This process is also referred to as extending the metadata of the specification.

Annotations may be used to provide hooks for RAML processing tools requiring additional specifications that lie outside the scope of the official language.

2. Declaring Annotation Types

One or more annotation types may be declared using the top-level annotationTypes property.

In the simplest of cases, the annotation type name is all that is needed to specify it, in which case the annotation type value is implicitly defined to be a string:

annotationTypes:
  simpleImplicitStringValueType:

This is equivalent to the more explicit annotation type definition shown here:

annotationTypes:
  simpleExplicitStringValueType:
    type: string

In other cases, an annotation type specification will contain a value object that is considered to be the annotation type declaration.

In these cases, the annotation type is defined using the same syntax as a data type, with the addition of two optional attributes: allowedTargets, whose value is either a string or an array of strings limiting the types of target locations to which an annotation may be applied, and allowMultiple, whose boolean value states whether or not the annotation may be applied more than once within a single target (the default is false).

Here is a brief example declaring an annotation type containing additional properties and attributes:

annotationTypes:
  complexValueType:
    allowMultiple: true
    properties:
      prop1: integer
      prop2: string
      prop3: boolean

2.1. Target Locations Supporting the Use of Annotations

Annotations may be applied to (used in) several root-level target locations, including the root level of the API itself, resource types, traits, data types, documentation items, security schemes, libraries, overlays, extensions, and other annotation types.

Annotations may also be applied to security scheme settings, resources, methods, response declarations, request bodies, response bodies, and named examples.

2.2. Restricting an Annotation Type’s Targets

To restrict an annotation type to one or more specific target location types, you would define its allowedTargets attribute.

When restricting an annotation type to a single target location type, you would assign the allowedTargets attribute a string value representing that target location type:

annotationTypes:
  supportsOnlyOneTargetLocationType:
    allowedTargets: TypeDeclaration

To allow multiple target location types for an annotation type, you would assign the allowedTargets attribute an array of string values representing those target location types:

annotationTypes:
  supportsMultipleTargetLocationTypes:
    allowedTargets: [ Library, Overlay, Extension ]

If the allowedTargets attribute is not defined on an annotation type, then by default, that annotation type may be applied to any of the supporting target location types.

3. Applying Annotation Types

Once you have defined the annotation types at the root level of your RAML API spec, you would apply them to their intended target locations, providing their property values at each instance. The application of an annotation type within a target location is referred to simply as an annotation on that target location.

3.1. Syntax

In order to apply an annotation type, add the annotation type name enclosed in parentheses () as an attribute of the target location and provide the annotation type value properties that the annotation type is to use for that specific target. If the annotation type is in a RAML library, then you would concatenate the library reference followed by a dot (.) followed by the annotation type name.

3.2. Example

Here is an example showing how we might apply some of the annotation types listed in the above code snippets to various resources and methods of our API:

/foos:
  type: myResourceTypes.collection
  (simpleImplicitStringValueType): alpha
  ...
  get:
    (simpleExplicitStringValueType): beta
  ...
  /{fooId}:
    type: myResourceTypes.item
    (complexValueType):
      prop1: 4
      prop2: testing
      prop3: true

4. Use Case

One potential use case for annotations would be defining and configuring test cases for an API.

Suppose we wanted to develop a RAML processing tool that can generate a series of tests against our API based on annotations. We could define the following annotation type:

annotationTypes:
  testCase:
    allowedTargets: [ Method ]
    allowMultiple: true
    usage: |
      Use this annotation to declare a test case.
      You may apply this annotation multiple times per location.
    properties:
      scenario: string
      setupScript?: string[]
      testScript: string[]
      expectedOutput?: string
      cleanupScript?: string[]

We could then configure a series of tests cases for our /foos resource by applying annotations as follows:

/foos:
  type: myResourceTypes.collection
  get:
    (testCase):
      scenario: No Foos
      setupScript: deleteAllFoosIfAny
      testScript: getAllFoos
      expectedOutput: ""
    (testCase):
      scenario: One Foo
      setupScript: [ deleteAllFoosIfAny, addInputFoos ]
      testScript: getAllFoos
      expectedOutput: '[ { "id": 999, "name": "Joe" } ]'
      cleanupScript: deleteInputFoos
    (testCase):
      scenario: Multiple Foos
      setupScript: [ deleteAllFoosIfAny, addInputFoos ]
      testScript: getAllFoos
      expectedOutput: '[ { "id": 998, "name": "Bob" }, { "id": 999, "name": "Joe" } ]'
      cleanupScript: deleteInputFoos

5. Conclusion

In this tutorial, we have shown how to extend the metadata for a RAML API specification through the use of custom properties called annotations.

First, we showed how to declare annotation types using the top-level annotationTypes property and enumerated the types of target locations to which they are allowed to be applied.

Next, we demonstrated how to apply annotations in our API and noted how to restrict the types of target locations to which a given annotation can be applied.

Finally, we introduced a potential use case by defining annotation types that could potentially be supported by a test generation tool and showing how one might apply those annotations to an API.

For more information about the use of annotations in RAML, please visit the RAML 1.0 spec.

You can view the full implementation of the API definition used for this tutorial in the GitHub project.

Java Web Weekly 112



Here we go…

1. Spring and Java

>> JUnit 5 – Setup [codefx.org]

A quick intro to what’s shaping up to become a very good step forward for JUnit – which bodes well for the entire ecosystem.

>> Reactor 2.5 : A Second Generation Reactive Foundation for the JVM [spring.io]

An update on what’s going on with the reactive systems story – seems like a lot of progress is being made.

>> An Ingenious Workaround to Emulate Sum Types in Java [jooq.org]

Some fun pushing the boundaries of java generics.

>> The New Hibernate ORM User Guide [in.relation.to]

A big update to the Hibernate docs, which are now going to 5.1 by default.

>> Memory Leaks: Fallacies and Misconceptions [plumbr.eu]

Some of the basics of what’s healthy and what’s not when looking at the memory consumption of a JVM – simple and to the point.

>> Setting Up Distributed Infinispan Cache with Hibernate and Spring [techblog.bozho.net]

A conversationally written guide on setting up a caching layer for Hibernate with Spring. This will definitely come in handy for at least a few developers out there.

>> The Mute Design Pattern [jooq.org]

Hehe – now let’s have some fun.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Is Your Computer Stable? [codinghorror.com]

A solid set of tests you can (and should) run against your rig to make sure it’s in working order.

>> Stack Overflow: The Architecture – 2016 Edition [nickcraver.com]

Some cool numbers and behind-the-scenes details of running StackOverflow. Very interesting to see what it takes to run SO the old-school way.

Also worth reading:

3. Musings

>> Everything you need to know about the Apple versus FBI case [troyhunt.com]

This is a long read, but an important one given the recent news in the privacy/security world.

>> The Paradox of Autonomy and Recognition [queue.acm.org]

An interesting (yet long) read about office politics and evaluating the work of other developers.

>> High Stakes Programming by Coincidence [daedtech.com]

Committing a fix you don’t quite understand is almost never a good idea, and imagining the stakes are high is an interesting way to think about it and quickly reach a decision.

Also worth reading:

 

4. Comics

And my favorite Dilberts of the week:

>> Why are you picking this vendor? [dilbert.com]

>> Let’s just say I’m “comfortable” [dilbert.com]

>> This is tech support. How may I abuse you? [dilbert.com]

 

5. Pick of the Week

>> Shields Down [randsinrepose.com]

 



RESTEasy Client API



1. Introduction

In the previous article we focused on the RESTEasy server side implementation of JAX-RS 2.0.

JAX-RS 2.0 introduces a new client API so that you can make HTTP requests to your remote RESTful web services. Jersey, Apache CXF, Restlet and RESTEasy are just a few of the most popular implementations.

In this article we’ll explore how to consume a REST API by sending requests with the RESTEasy client API.

2. Project Setup

Add the following dependency to your pom.xml:

<properties>
    <resteasy.version>3.0.14.Final</resteasy.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-client</artifactId>
        <version>${resteasy.version}</version>
    </dependency>
    ...
</dependencies>

3. Client Side Code

The client implementation is quite compact, being made up of 3 main classes:

    • Client
    • WebTarget
    • Response

The Client interface is a builder of WebTarget instances.

WebTarget represents a distinct URL or URL template from which you can build more sub-resource WebTargets or invoke requests on.

There are really two ways to create a Client:

  • The standard way, by using the org.jboss.resteasy.client.ClientRequest
  • RESTEasy Proxy Framework: by using the ResteasyClientBuilder class

We will focus on the RESTEasy Proxy Framework here.

Instead of using JAX-RS annotations to map an incoming request to your RESTful web service method, the client framework uses them to build an HTTP request that it then invokes on the remote RESTful web service.

So let’s start by writing a Java interface, using JAX-RS annotations on the methods and on the interface.

3.1. The ServicesClient Interface

@Path("/movies")
public interface ServicesInterface {

    @GET
    @Path("/getinfo")
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    Movie movieByImdbId(@QueryParam("imdbId") String imdbId);

    @POST
    @Path("/addmovie")
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    Response addMovie(Movie movie);

    @PUT
    @Path("/updatemovie")
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    Response updateMovie(Movie movie);

    @DELETE
    @Path("/deletemovie")
    Response deleteMovie(@QueryParam("imdbId") String imdbId);
}

3.2. The Movie Class

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "movie", propOrder = { "imdbId", "title" })
public class Movie {

    protected String imdbId;
    protected String title;

    // getters and setters
}

3.3. The Request Creation

We’ll now generate a proxy client that we can use to consume the API:

String transformerImdbId = "tt0418279";
Movie transformerMovie = new Movie("tt0418279", "Transformer 2");
final String path = "http://127.0.0.1:8080/RestEasyTutorial/rest"; 
 
ResteasyClient client = new ResteasyClientBuilder().build();
ResteasyWebTarget target = client.target(UriBuilder.fromPath(path));
ServicesInterface proxy = target.proxy(ServicesInterface.class);

// POST
Response moviesResponse = proxy.addMovie(transformerMovie);
System.out.println("HTTP code: " + moviesResponse.getStatus());
moviesResponse.close();

// GET
Movie movie = proxy.movieByImdbId(transformerImdbId);

// PUT
transformerMovie.setTitle("Transformer 4");
moviesResponse = proxy.updateMovie(transformerMovie);
moviesResponse.close();

// DELETE
moviesResponse = proxy.deleteMovie(transformerMovie.getImdbId());
moviesResponse.close();

Note that the RESTEasy client API is based on the Apache HttpClient.

Also note that, after each operation, we’ll need to close the response before we can perform a new operation. This is necessary because, by default, the client only has a single HTTP connection available.

Finally, note how we’re working with the DTOs directly – we’re not dealing with the marshal/unmarshal logic to and from JSON or XML; that happens behind the scenes using JAXB or Jackson since the Movie class was properly annotated.

3.4. The Request Creation with Connection Pool

One note on the previous example: we only had a single connection available. If, for example, we try to do:

Response batmanResponse = proxy.addMovie(batmanMovie);
Response transformerResponse = proxy.addMovie(transformerMovie);

without invoking close() on batmanResponse – an exception will be thrown when the second line is executed:

java.lang.IllegalStateException:
Invalid use of BasicClientConnManager: connection still allocated.
Make sure to release the connection before allocating another one.

Again – this simply happens because the default connection manager used by RESTEasy is org.apache.http.impl.conn.BasicClientConnManager – which of course only makes a single connection available.

Now – to work around that limitation – the ResteasyClient instance must be created differently (with a connection pool):

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
CloseableHttpClient httpClient = HttpClients.custom().setConnectionManager(cm).build();
cm.setMaxTotal(200); // Increase max total connection to 200
cm.setDefaultMaxPerRoute(20); // Increase default max connection per route to 20
ApacheHttpClient4Engine engine = new ApacheHttpClient4Engine(httpClient);

ResteasyClient client = new ResteasyClientBuilder().httpEngine(engine).build();
ResteasyWebTarget target = client.target(UriBuilder.fromPath(path));
ServicesInterface proxy = target.proxy(ServicesInterface.class);

Now we can benefit from a proper connection pool and can have multiple requests running through our client without necessarily having to release the connection each time.

4. Conclusion

In this quick tutorial we introduced the RESTEasy Proxy Framework and built a super simple client API with it.

The framework gives us a few more helper methods to configure a client, and can be seen as the mirror image of the JAX-RS server-side specification.

The example used in this article is available as a sample project in GitHub.

