
A Practical Guide to RecordBuilder in Java


1. Introduction

Java records, introduced in Java 16, offer a concise way to model immutable data. They automatically generate constructors, accessors, equals(), hashCode(), and toString() methods, reducing boilerplate and improving readability.

Despite these benefits, records come with notable limitations. For example, all fields must be declared in the record header, setter methods aren’t allowed, and extensibility is restricted due to their implicit final nature. These constraints make records less than ideal when we need flexible object creation, especially in scenarios involving optional parameters or modifications to existing instances.

To address these limitations, the RecordBuilder library offers a simple yet effective solution. It enhances records with a builder pattern, bridging the gap between the elegance of immutability and the practicality of flexible construction.

2. Getting Started

To begin using RecordBuilder, we first need to add the annotation processor to our build setup. For Maven users, this looks like:

<dependency>
    <groupId>io.soabase.record-builder</groupId>
    <artifactId>record-builder-core</artifactId>
    <version>47</version>
</dependency>

After setup, we’ll annotate our record with @RecordBuilder, and optionally implement the With interface it generates, to enable inline builders and fluent withX() methods.

3. Why Use RecordBuilder?

At first glance, records in their pure form are powerful – they give us concise, immutable data structures with minimal syntax:

public record Person(String name, int age) {}

However, their all-args constructors and lack of setters make them rigid when we need flexibility. In domains where we construct data with optional or varying values, this can quickly become limiting. We could write our own builder to solve this, but doing so reintroduces the very boilerplate that records are meant to eliminate.

RecordBuilder bridges this gap elegantly. With a single annotation, we gain a fluent, safe, and readable way to build and modify record instances. It offers support for staged builders, withX() methods, customization hooks, and more – all while respecting immutability. For example, let’s suppose we annotate our record like this:

@RecordBuilder
public record Person(String name, int age) implements PersonBuilder.With {}

From this, RecordBuilder generates an entire suite of builder methods, withX() accessors, and even static factory helpers, all adhering to the best practices of immutability.
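
For instance, building a Person from scratch could then look like this – a minimal sketch, assuming the default generated naming in which the builder class is called PersonBuilder:

Person person = PersonBuilder.builder()
  .name("foo")
  .age(123)
  .build();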

4. Using the Generated Builder

Let’s walk through some practical ways the builder improves our workflow, using the RecordBuilderDemo class.

First, we construct the initial record in the standard way:

Person p1 = new Person("foo", 123);
assertEquals("foo", p1.name());
assertEquals(123, p1.age());

Next, we update individual fields via generated withName() or withAge() methods:

Person p2 = p1.withName("bar");
assertEquals("bar", p2.name());
assertEquals(123, p2.age());
Person p3 = p2.withAge(456);
assertEquals("bar", p3.name());
assertEquals(456, p3.age());

Here, we preserve the immutability and modify only the intended field with each transformation. Even so, RecordBuilder offers more. We can build an entirely new instance based on a previous one with a fluent builder syntax:

Person p4 = p3.with()
  .age(101)
  .name("baz")
  .build();
assertEquals("baz", p4.name());
assertEquals(101, p4.age());

This style improves clarity when multiple fields need to change. It avoids constructor chaining and makes the intent of each transformation explicit. As we’ll see next, the builder also supports inline consumer-based updates for even more expressive modification logic.

5. Advanced Features

One of the highlights of RecordBuilder is the support for inline, consumer-based modifications. Here’s how we can modify a record with a lambda:

Person p5 = p4.with(p -> p.age(200).name("whatever"));
assertEquals("whatever", p5.name());
assertEquals(200, p5.age());

In addition, we can apply conditional logic in the builder context:

Person p6 = p5.with(p -> {
    if (p.age() > 13) {
        p.name("Teen " + p.name());
    } else {
        p.name("whatever");
    }
});
assertEquals("Teen whatever", p6.name());
assertEquals(200, p6.age());

For more control, we can also use the static builder factory, especially helpful when operating outside the record instance:

Person p7 = PersonBuilder.from(p6)
  .with(p -> p.age(300).name("Manual Copy"));
assertEquals("Manual Copy", p7.name());
assertEquals(300, p7.age());

Alternatively, we can directly apply updates to individual fields:

Person p8 = PersonBuilder.from(p6)
  .withName("boop");
assertEquals("boop", p8.name());
assertEquals(200, p8.age());

This static form is particularly useful when working with detached builders, external data transformations, or in test utilities. As a result, it provides a clean separation between construction logic and business logic, making the builder pattern a versatile tool across service layers.

6. Customization and Options

RecordBuilder goes beyond basic builder generation by offering customization features that let us adapt the builder to our domain-specific needs. For example, it supports staged builders that enforce required fields at compile time, ensuring that essential values are always set before construction. This helps prevent subtle runtime bugs and improves the safety of our object creation logic.

We can also integrate RecordBuilder with testing frameworks or dependency injection tools, making it an excellent choice for setting up fixtures or injecting partially constructed objects. The ability to align builder logic with application architecture – without losing the benefits of immutability – makes RecordBuilder both flexible and production-ready.

7. Comparison: RecordBuilder vs Manual Builder

It’s entirely possible to write our own builder for a record, especially if it has just a few fields. However, as records grow and evolve, manual builders quickly become a maintenance headache. Each change to the record requires updates across the builder: new fields, constructor logic, withX() methods, and build() logic, all need to be manually adjusted.

RecordBuilder removes that burden through annotation processing. It generates the builder at compile time and keeps it perfectly in sync with the record’s structure. Any modification to the record header – adding, removing, or reordering fields – is automatically handled. This eliminates a common source of bugs and improves long-term maintainability.

Additionally, RecordBuilder brings in useful patterns like consumer-based updates and static cloning methods that would require significant effort to implement manually. In terms of developer efficiency, correctness, and consistency, RecordBuilder provides more value than a hand-rolled builder ever could.

8. Conclusion

The RecordBuilder library makes working with Java records easier and more practical. It generates fluent, immutable-friendly builders that remove the need for manual constructors or update methods.

Its real strength is balance: flexible updates without losing immutability, and expressive code without boilerplate. Whether we’re building DTOs, API responses, or config objects, it fits naturally into our workflow. With almost no setup, RecordBuilder becomes a powerful tool for clean, scalable Java development.

The complete source code for this article can be found over on GitHub.


Using Groq Chat with Spring AI


1. Overview

In this tutorial, we’ll learn how to use Spring AI to integrate with the models offered by the LPU-based Groq AI inference engine and build a simple chatbot.

Groq exposes REST APIs that applications can invoke to consume its services. Additionally, SDKs are available in multiple programming languages, such as Python and JavaScript. In Python, popular libraries like LangChain and LiteLLM also support it. Furthermore, the Vercel AI SDK is a JavaScript-based library that has a module for integration with Groq.

Applications can comfortably switch from OpenAI to Groq AI services because Groq is fully compatible with OpenAI client libraries. As a result, Spring AI’s OpenAI chat client can connect to Groq with minimal configuration changes. Moreover, Spring AI doesn’t provide any separate library for Groq.

2. Prerequisites

Spring AI framework provides integration with a myriad of LLM services through its corresponding starter libraries. Similarly, for integrating with Groq services, Spring Boot applications must import the Spring AI OpenAI starter library:

<dependency>
   <groupId>org.springframework.ai</groupId>
   <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
   <version>1.0.0-M6</version>
</dependency>

Typically, we use the Spring Initializr tool to import the necessary libraries optimally.

Finally, we must register on the Groq cloud and create an API key to use in the Spring AI configuration:

Groq API Key

When users register on the Groq cloud, they receive a free subscription to get started with the Groq APIs. However, Groq applies model-specific rate limits on the API requests to ensure fair usage and availability.

3. Key Spring AI Components and Configurations

To effectively utilize Spring AI’s OpenAI library for accessing the Groq service APIs, it’s essential to understand its key classes, such as OpenAiChatModel and OpenAiChatOptions. The OpenAiChatModel class is the client class that takes the Prompt object to call the underlying OpenAI services.

However, for this article, we’ll demonstrate connecting to the Groq service. Additionally, the OpenAiChatOptions class helps specify the model name available on Groq, temperature, maxTokens, and other essential properties. We use it when we have to override the default properties of the chat client by passing it as an argument to the OpenAiChatModel#call() method. Finally, there are a few more general Spring AI classes, such as Prompt and ChatResponse, to help create the chat prompt and receive the response, respectively.

To autoconfigure the OpenAiChatModel, we can specify Spring AI’s OpenAI chat configurations under the spring.ai.openai.chat namespace. At a minimum, we must override the spring.ai.openai.chat.base-url configuration to point to the Groq API’s endpoint, which is api.groq.com/openai. Also, we must set the Groq API key corresponding to the spring.ai.openai.chat.api-key property. As a best practice, we recommend reading it from an environment property or a secure vault and then setting it to the configuration key.

We can also set the URL and the API keys in the spring.ai.openai.base-url and spring.ai.openai.api-key configurations. However, the keys under the spring.ai.openai.chat namespace take precedence.

4. Autoconfigure Groq Client

In this section, we’ll see how the OpenAiAutoConfiguration class creates an OpenAiChatModel bean with the configuration from the application properties file.

First, let’s make a few important OpenAI configuration entries in the application-groq.properties file:

spring.application.name=spring-ai-groq-demo
spring.ai.openai.base-url=https://api.groq.com/openai
spring.ai.openai.api-key=gsk_XXXX
spring.ai.openai.chat.base-url=https://api.groq.com/openai
spring.ai.openai.chat.api-key=gsk_XXXX
spring.ai.openai.chat.options.temperature=0.7
spring.ai.openai.chat.options.model=llama-3.3-70b-versatile

As discussed earlier, in the property file, we’ve configured the Groq API endpoint, API keys, and a large language model. Interestingly, we’ve also specified the API endpoint and keys in the spring.ai.openai namespace. This is because a few Spring AI beans depend on them, and the application would fail to start in their absence.

Next, let’s define a custom Spring GroqChatService class that’ll be responsible for calling the Groq service:

@Service
public class GroqChatService {
    @Autowired
    private OpenAiChatModel groqClient;
    public String chat(String prompt) {
        return groqClient.call(prompt);
    }
    public ChatOptions getChatOptions() {
        return groqClient.getDefaultOptions();
    }
}

The OpenAiAutoConfiguration bean instantiates the OpenAiChatModel class with properties from the configuration file, along with a few predefined default properties. In the service class, we’ve autowired the Spring OpenAiChatModel bean. The GroqChatService#chat() method uses its call() method to invoke the Groq service. Additionally, the GroqChatService#getChatOptions() method returns the ChatOptions object containing the chat client’s configurations.

Finally, let’s see the Groq chat client in action with the help of a JUnit test:

@Test
void whenCallOpenAIClient_thenReturnResponseFromGroq() {
    String prompt = """
      Context:
      Support Ticket #98765:
      Product: XYZ Wireless Mouse
      Issue Description: The mouse connects intermittently to my laptop.
      I've tried changing batteries and reinstalling drivers, 
      but the cursor still freezes randomly for a few seconds before resuming normal movement.
      It affects productivity significantly.
      Question:
      Based on the support ticket, what is the primary technical issue
      the user is experiencing with their 'XYZ Wireless Mouse'?;
     """;
    String response = groqChatService.chat(prompt);
    logger.info("Response from Groq:{}", response);
    assertThat(response.toLowerCase()).isNotNull()
      .isNotEmpty()
      .containsAnyOf("laptop", "mouse", "connect");
    ChatOptions openAiChatOptions = groqChatService.getChatOptions();
    String model = openAiChatOptions.getModel();
    Double temperature = openAiChatOptions.getTemperature();
    assertThat(openAiChatOptions).isInstanceOf(OpenAiChatOptions.class);
    assertThat(model).isEqualTo("llama-3.3-70b-versatile");
    assertThat(temperature).isEqualTo(Double.valueOf(0.7));
}
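
For reference, the surrounding test class might be wired up roughly like the following sketch; the class name and the groq profile are assumptions based on the application-groq.properties file used above:

@SpringBootTest
@ActiveProfiles("groq")
class GroqChatLiveTest {
    private static final Logger logger = LoggerFactory.getLogger(GroqChatLiveTest.class);
    @Autowired
    private GroqChatService groqChatService;
    // the test methods shown in this section live here
}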

The Spring groqChatService bean is autowired into the class containing the test method. The method invokes groqChatService#chat() with a prompt consisting of a question and the pertaining context embedded in it. The context includes information about a support ticket raised for a computer mouse. In a real-world application, the context corresponding to a user query is typically retrieved from a vector DB. Following this, the Groq service responds with an answer referring to the context:

The primary technical issue the user is experiencing with their 
'XYZ Wireless Mouse' is intermittent connectivity, resulting in the
cursor freezing randomly for a few seconds before resuming normal movement.

Finally, towards the end, the method asserts that the chat options, such as the model and temperature, match the configuration values from the property file.

5. Customize Groq Client

So far, we’ve used the Spring AI configurations in the application properties file to help autoconfigure the chat client. However, in real-world applications, we’ll mostly end up customizing it to set properties such as model, temperature, etc.

First, let’s define such a custom OpenAiChatModel bean in a Spring configuration class:

@Configuration(proxyBeanMethods = false)
public class ChatAppConfiguration {
    @Value("${groq.api-key}")
    private String GROQ_API_KEY;
    @Value("${groq.base-url}")
    private String GROQ_API_URL;
    @Bean
    public OpenAiChatModel customGroqChatClient() {
        OpenAiApi groqOpenAiApi = new OpenAiApi.Builder()
          .apiKey(GROQ_API_KEY)
          .baseUrl(GROQ_API_URL)
          .build();
        return OpenAiChatModel.builder()
          .openAiApi(groqOpenAiApi)
          .build();
    }
}

The ChatAppConfiguration#customGroqChatClient() method in the class builds an OpenAiChatModel bean using the low-level OpenAiApi class. Here, we read the API key and the URL from a property file. Moreover, we can modify the class to include more complex logic or to read these values from downstream systems. When the Spring Boot application starts, the chat client object is available as a Spring bean with the name customGroqChatClient.

Next, let’s define a Spring Boot service class where we can autowire the custom OpenAiChatModel bean that we built in the Spring configuration class:

@Service
public class CustomGroqChatService {
    @Autowired
    private OpenAiChatModel customGroqChatClient;
    public String chat(String prompt, String model, Double temperature) {
        ChatOptions chatOptions = OpenAiChatOptions.builder()
          .model(model)
          .temperature(temperature)
          .build();
        return customGroqChatClient.call(new Prompt(prompt, chatOptions))
          .getResult()
          .getOutput()
          .getText();
    }
}

In the chat() method, we set the chat client’s configurations, such as model and temperature, in the ChatOptions object. Further, we pass the prompt and the ChatOptions object into the customGroqChatClient#call() method and finally extract the response text from the ChatResponse object.

Moving on, this time let’s see the custom Groq client in action in a JUnit test:

@Test
void whenCustomGroqClientCalled_thenReturnResponse() {
    String prompt = """
      Context: 
      The Eiffel Tower is one of the most famous landmarks
      in Paris, attracting millions of visitors each year.
      Question:
      In which city is the Eiffel Tower located?
    """;
    String response = customGroqChatService.chat(prompt, "llama-3.1-8b-instant", 0.8);
    assertThat(response)
      .isNotNull()
      .isNotEmpty()
      .contains("Paris");
    logger.info("Response from custom Groq client: {}", response);
}

The test initiates a call to the chat() method of the autowired customGroqChatService bean, passing the prompt (comprising context and a related question), model, and temperature. The CustomGroqChatService#chat() method then provides an answer. Subsequently, we validate that the response accurately addresses the question, In which city is the Eiffel Tower located?.

Finally, let’s take a look at the response from Groq:

The Eiffel Tower is located in Paris, France.

6. Conclusion

In this article, we highlighted the integration of the Groq inference engine with Spring AI’s OpenAI library. Furthermore, the library allows for the use of Groq’s tooling feature to register and invoke external tools for action.

Unfortunately, Groq has limited support for multimodal models, so this capability isn’t available through Spring AI either. In other words, Groq isn’t fully compatible with every OpenAI feature, and we must be aware of these constraints when using its API.

As usual, the code used in this article is available over on GitHub.


How to Implement a Thread-Safe Singleton in Java?


1. Introduction

The Singleton pattern is one of the most widely used design patterns in software development. It ensures that a class has only one instance throughout the application lifecycle and provides global access to that instance.

Common use cases for the Singleton pattern include:

  • Database connection pools that manage limited database connections efficiently
  • Logger instances that centralize logging functionality across an application
  • Configuration managers that store application-wide settings
  • Cache managers that maintain shared data across multiple components
  • Thread pools that manage worker threads for concurrent operations

However, when implementing the Singleton pattern in multi-threaded environments, things quickly become more complex. Without proper thread-safety guarantees, multiple threads may simultaneously create separate instances, breaking the core promise of Singleton and leading to resource conflicts, inconsistent state, and unpredictable application behavior.

In this guide, we’ll explore various approaches to implement thread-safe Singleton patterns in Java, examining their trade-offs and best practices.

2. The Classic Problem with Singleton

Let’s start by examining why the basic lazy-initialized Singleton implementation fails in multi-threaded environments.

Here’s a typical non-thread-safe Singleton implementation:

public class SimpleSingleton {
    private static SimpleSingleton instance;
    private SimpleSingleton() {
    }
    public static SimpleSingleton getInstance() {
        if (instance == null) {
            instance = new SimpleSingleton();
        }
        return instance;
    }
}

This implementation works perfectly in single-threaded applications. However, in multi-threaded environments, a race condition can occur:

  1. Thread A calls getInstance() and finds instance is null
  2. Thread B calls getInstance() simultaneously and finds instance is null
  3. Both threads proceed to create new instances
  4. The application now has instantiated multiple Singleton instances, violating the pattern

Let’s demonstrate this with a test that exposes the race condition using CountDownLatch, which will help us to run the threads in parallel:

@Test
void givenUnsafeSingleton_whenAccessedConcurrently_thenMultipleInstancesCreated() throws InterruptedException {
    int threadCount = 1000;
    Set<SimpleSingleton> instances = ConcurrentHashMap.newKeySet();
    CountDownLatch latch = new CountDownLatch(threadCount);
    for (int i = 0; i < threadCount; i++) {
        new Thread(() -> {
            instances.add(SimpleSingleton.getInstance());
            latch.countDown();
        }).start();
    }
    latch.await();
    assertTrue(instances.size() > 1, "Multiple instances were created");
}

This test demonstrates how concurrent access can create multiple instances, breaking the Singleton contract. In a proper Singleton, the set of instances should contain exactly one element, but due to race conditions, we may end up with several.

3. Synchronized Accessor: Simple and Safe

We can make the getInstance() method synchronized:

public static synchronized SynchronizedSingleton getInstance() { ... }

This guarantees mutual exclusion but introduces performance overhead, as synchronization happens on every access. A minimal sketch of the full class might look like this:
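
public class SynchronizedSingleton {
    private static SynchronizedSingleton instance;
    private SynchronizedSingleton() {
    }
    // synchronized serializes access, so only one thread can run this method at a time
    public static synchronized SynchronizedSingleton getInstance() {
        if (instance == null) {
            instance = new SynchronizedSingleton();
        }
        return instance;
    }
}

The following test confirms that only one instance is created, even when the method is called from many threads in parallel: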

@Test
void givenMultipleThreads_whenUsingSynchronizedSingleton_thenOnlyOneInstanceCreated() {
    Set<Object> instances = ConcurrentHashMap.newKeySet();
    IntStream.range(0, 100).parallel().forEach(i ->
      instances.add(SynchronizedSingleton.getInstance()));
    assertEquals(1, instances.size());
}

This approach is straightforward and effective in low-concurrency scenarios or cases where singleton creation is rarely accessed.

4. Eager Initialization: Thread Safety by Class Loading

An eager Singleton uses static field initialization:

private static final EagerSingleton INSTANCE = new EagerSingleton();

It’s inherently thread-safe, as the JVM guarantees class initialization is atomic. The downside? The instance is created even if never used, which may not be optimal for expensive resources:

@Test
void whenCallingEagerSingleton_thenSameInstanceReturned() {
    assertSame(EagerSingleton.getInstance(), EagerSingleton.getInstance());
}

This pattern is ideal when the singleton is guaranteed to be needed at startup.

5. Double-Checked Locking (DCL): Lazy and Efficient

DCL combines lazy initialization with reduced synchronization:

if (instance == null) {
    synchronized (...) {
        if (instance == null) {
            instance = new Singleton();
        }
    }
}

This pattern is both lazy and thread-safe, but it requires the instance variable to be declared volatile. Putting it together, a minimal version of the class might look like this:
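
public class DoubleCheckedSingleton {
    // volatile guarantees that a fully constructed instance is visible to all threads
    private static volatile DoubleCheckedSingleton instance;
    private DoubleCheckedSingleton() {
    }
    public static DoubleCheckedSingleton getInstance() {
        if (instance == null) {
            synchronized (DoubleCheckedSingleton.class) {
                if (instance == null) {
                    instance = new DoubleCheckedSingleton();
                }
            }
        }
        return instance;
    }
}

The following test verifies that concurrent callers still observe a single instance: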

@Test
void givenDCLSingleton_whenAccessedFromThreads_thenOneInstanceCreated() {
    List<Object> instances = Collections.synchronizedList(new ArrayList<>());
    IntStream.range(0, 100).parallel().forEach(i ->
      instances.add(DoubleCheckedSingleton.getInstance()));
    assertEquals(1, new HashSet<>(instances).size());
}

This approach improves performance by avoiding synchronization once the instance is initialized, while the volatile keyword ensures visibility of changes across threads. It’s a good fit for high-concurrency environments where performance matters.

6. Bill Pugh Singleton: Lazy and Elegant

Bill Pugh Singleton technique uses a static inner class:

public class BillPughSingleton {
    private BillPughSingleton() {
    }
    private static class SingletonHelper {
        private static final BillPughSingleton BILL_PUGH_SINGLETON_INSTANCE = new BillPughSingleton();
    }
    public static BillPughSingleton getInstance() {
        return SingletonHelper.BILL_PUGH_SINGLETON_INSTANCE;
    }
}

The inner helper class isn’t loaded until getInstance() references it, which ensures both laziness and thread safety without explicit synchronization:

@Test
void testThreadSafety() throws InterruptedException {
    int numberOfThreads = 10;
    CountDownLatch latch = new CountDownLatch(numberOfThreads);
    Set<BillPughSingleton> instances = ConcurrentHashMap.newKeySet();
    for (int i = 0; i < numberOfThreads; i++) {
        new Thread(() -> {
            instances.add(BillPughSingleton.getInstance());
            latch.countDown();
        }).start();
    }
    latch.await(5, TimeUnit.SECONDS);
    assertEquals(1, instances.size(), "All threads should get the same instance");
}

7. Enum Singleton: The Simplest Thread-Safe Singleton

Enums provide a robust Singleton solution, since the JVM instantiates each enum constant exactly once. The EnumSingleton used in the test below can be as simple as this sketch:
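
public enum EnumSingleton {
    INSTANCE;
    // any state or behavior the singleton needs can be added as regular fields and methods
}

The following test checks that concurrent access always yields the same constant: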

@Test
void givenEnumSingleton_whenAccessedConcurrently_thenSingleInstanceCreated()
  throws InterruptedException {
    Set<EnumSingleton> instances = ConcurrentHashMap.newKeySet();
    CountDownLatch latch = new CountDownLatch(100);
    for (int i = 0; i < 100; i++) {
        new Thread(() -> {
            instances.add(EnumSingleton.INSTANCE);
            latch.countDown();
        }).start();
    }
    latch.await();
    assertEquals(1, instances.size());
}

It’s also serialization-safe and reflection-safe.

8. Conclusion

Thread safety in Singleton implementations is critical in concurrent Java applications. While synchronized methods are simple to implement, they come at a cost—they don’t scale well under high concurrency. The best options today include:

  • Bill Pugh Singleton for most use cases
  • Double-Checked Locking for performance-critical lazy initialization
  • Enum Singleton for simplicity and safety

Each method solves the same problem with different trade-offs. Choose the one that best fits your application’s requirements. As always, the full source code and tests are available over on GitHub.


How to Fix PatternSyntaxException: “Illegal repetition near index” in Java


1. Overview

Few issues in Java regex are as frustrating as encountering “java.util.regex.PatternSyntaxException: Illegal repetition near index X”. This common exception occurs when regex quantifiers, the powerful symbols that control repetition, are misused or misplaced within the pattern.

As a result, working with regular expressions, which are essential tools for pattern matching, validation, and parsing, can become challenging due to their unforgiving syntax. Consequently, misusing quantifiers can cause both visible exceptions and hidden matching issues.

In this tutorial, we’ll explore what this exception means, examine its common causes, and most importantly, show how to fix it so our regex patterns work smoothly.

2. Understanding the Exception

Java throws a PatternSyntaxException whenever the regex pattern contains a syntax error, stopping the compilation of the pattern. The specific “Illegal repetition near index” message indicates that one of the repetition operators (quantifiers) in the regex is incorrectly used or placed.

Before diving into fixes, it’s important to refresh our understanding of common quantifiers in regex:

  • ‘*’ matches the preceding element zero or more times
  • ‘+’ matches one or more times
  • ‘?’ matches zero or one time
  • ‘{n}‘ matches exactly n times
  • ‘{n,}’ matches n or more times
  • ‘{n,m}’ matches between n and m times

Each quantifier operates on the element immediately preceding it, whether a character, group, or character class.

However, this exception occurs when quantifiers are improperly positioned within the pattern or written with invalid syntax, such as missing braces or incorrect escaping.

Therefore, understanding the strict syntax rules behind quantifiers is crucial to preventing this error.

3. What Causes “Illegal Repetition” in Regex?

This exception can arise from various common mistakes related to quantifiers in regular expression patterns. We’ll examine the most frequent causes and how to fix them, accompanied by illustrative examples.

3.1. Orphaned Quantifier

A quantifier must follow a valid character, group, or character class. If it appears at the beginning of a pattern or follows an invalid element, it has nothing to apply to and causes an error:

Pattern.compile("*[a-z]"); // Invalid: orphaned quantifier

In this example, the asterisk is placed before any valid element, so the engine cannot determine what to repeat. To correct this, we ensure the quantifier follows a valid target:

Pattern.compile("[a-z]*abc"); // Valid

We must always double-check that quantifiers are not orphaned or placed at the beginning.

3.2. Nested Quantifiers Without Grouping

It is not allowed to place one quantifier directly after another without grouping. When quantifiers are stacked without clarity, the engine cannot interpret the pattern correctly:

Pattern.compile("\\d+\\.?\\d+*"); // Invalid: nested quantifiers without grouping

Here, the ‘*’ quantifier is placed directly after the ‘+’ of ‘\\d+’ without grouping, which is an illegal nested repetition. The correct approach is to group the quantified expression so that the outer quantifier applies to the entire repetition:

Pattern.compile("\\d+(\\.\\d+)*"); // Valid

Grouping clarifies what is being quantified, preventing ambiguity and syntax errors.

3.3. Unclosed or Malformed Curly Braces

Curly braces are used for specifying exact or ranged repetition counts. They must follow the correct syntax. Leaving them unclosed or malformed causes a PatternSyntaxException:

Pattern.compile("\\d{2,"); // Invalid: unclosed curly brace

This example is incomplete, and the regex engine cannot parse the repetition instruction. To fix this, use a properly formatted range:

Pattern.compile("\\d{2,4}"); // Valid

We should also avoid using incomplete repetition ranges or missing numbers in the curly braces, such as ‘\\d{,4}’, which Java does not accept.

3.4. Quantifying Unrepeatable or Improper Elements

A quantifier must apply to a valid and complete element. When a quantifier is placed immediately after another, the engine cannot interpret the pattern unless the first part is grouped:

Pattern.compile("\\w+\\s+*"); // Invalid: improper quantification

In this case, the ‘*’ quantifier tries to apply to the preceding quantifier ‘+’ on ‘\\s’ without grouping, which is invalid. The solution is to wrap the quantified expression in parentheses:

Pattern.compile("(\\w+\\s+)*"); // Valid

This tells the regex engine to apply the second quantifier to the entire grouped repetition.

3.5. Escaping Literal Quantifier Characters

Sometimes, we need to match quantifier symbols as literal characters rather than as regular expression operators. Forgetting to escape them causes the regex engine to treat them as quantifiers, which may lead to errors if they’re misused:

Pattern.compile("abc+*"); // Invalid: unescaped literal quantifiers

In this case, the intent might be to match the literal string ‘abc+*’, but the pattern fails because ‘*’ is being treated as a quantifier applied to ‘c+’. To match the symbols, they must be escaped, and in Java strings, this requires double backslashes:

Pattern.compile("abc\\+\\*"); // Valid

Proper escaping ensures that the regex engine treats these characters as literals, not operators.

4. Best Practices to Avoid This Exception

To avoid PatternSyntaxException related to illegal repetition, we can follow a few best practices:

  • Always place quantifiers only after a valid token (character, character class, or group)
  • Never start a regex pattern with a quantifier
  • Use parentheses to group elements when applying multiple quantifiers to the same section
  • Ensure all brace quantifiers are fully written, with both opening and closing braces
  • Escape special characters when the goal is to match them literally
  • Use the index in the exception message to quickly locate errors (see the sketch after this list)
  • Test and validate the regex patterns using trusted online regex testing tools
  • When building regex patterns dynamically, sanitize inputs and validate the final pattern
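
To illustrate the last few points, here’s a small sketch of how we might surface the offending index when compiling a dynamically built pattern; the userPattern value is just a hypothetical bad input:

String userPattern = "*[a-z]"; // hypothetical input with an orphaned quantifier
try {
    Pattern.compile(userPattern);
} catch (PatternSyntaxException e) {
    // getIndex() points at (or near) the illegal repetition, getDescription() explains it
    System.err.printf("Invalid pattern at index %d: %s%n", e.getIndex(), e.getDescription());
}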

5. Conclusion

The “PatternSyntaxException: Illegal repetition near index” is a frequent error encountered when working with Java regular expressions. Fortunately, it’s usually easy to resolve. This exception typically arises from incorrect placement of quantifiers, malformed syntax, or attempts to apply quantifiers to elements that cannot be repeated, such as anchors or certain special characters.

Mastering quantifier usage and systematically reviewing patterns helps efficiently troubleshoot these errors. Important areas to verify include ensuring that quantifiers are correctly positioned, repetition ranges are properly closed, and special characters are appropriately escaped.

As always, the code in the article is available over on GitHub.


Java Weekly, Issue 603


1. Spring and Java

>> Spring Debugger: Working With Dynamic Database Connections Just Got Simpler [blog.jetbrains.com]

A new IntelliJ IDEA plugin simplifies debugging dynamic database connections by automatically detecting and registering Testcontainers or Docker-based DataSources, making them directly accessible within the IDE and streamlining the development and debugging of Spring Boot applications. Nice work.

>> Introducing Quarkus quickjs4j: Seamless JavaScript Integration in Your Quarkus Applications [quarkus.io]

Quarkus quickjs4j enables seamless integration of JavaScript into your application, allowing developers to execute dynamic JS code such as user-defined business rules or algorithms within the Java environment.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Three worthwhile articles yesterday [martinfowler.com]

Exploring the three interconnected forms of coupling – semantic, structural, and usage – that influence software design. Understanding them helps developers manage dependencies, leading to more maintainable and adaptable systems. Always a good weekend read here.

Also worth reading:

3. Pick of the Week

>> Leading your engineers towards an AI-assisted future [thepete.net]


Secure Kafka With SASL/PLAIN Authentication


1. Introduction

In this article, we’ll learn how to implement the SASL/PLAIN authentication mechanism in a Kafka service. We’ll also implement client-side authentication using the support provided by Spring Kafka.

Kafka supports multiple authentication options, providing enhanced security and compatibility. This includes SASL, SSL, and delegated token authentication.

Simple Authentication and Security Layer (SASL) is an authentication framework that allows other authentication mechanisms such as GSSAPI, OAuthBearer, SCRAM, and PLAIN, to be easily integrated.

SASL/PLAIN authentication is not secure! This is because user credentials are exposed over the network as plaintext. However, it’s still useful for local development due to fewer configuration requirements.

We should note that SASL/PLAIN authentication should not be used in production environments unless it’s used in conjunction with SSL/TLS. When SSL is combined with SASL/PLAIN authentication, referred to as SASL-SSL in Kafka, it encrypts traffic, including sensitive credentials between the client and server.

2. Implement Kafka With SASL/PLAIN Authentication

Let’s imagine we need to build a Kafka service that supports SASL/PLAIN authentication in a Docker environment. For that, we’ll utilize a JAAS configuration to add the user credentials required by SASL/PLAIN.

2.1. Configure Kafka Credentials

To configure user credentials in Kafka, we’ll use the PlainLoginModule security implementation.

Let’s include a kafka_server_jaas.conf file to configure the admin and user1 credentials:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_user1="user1-secret";
};

In the above code, we define the admin and user1 users, to be used for Kafka’s inter-broker and external client authentication, respectively. The user1 entry is defined as a one-liner in the user_<username> format, together with its secret.

2.2. Configure Zookeeper Credentials

As we’ve included the client user credentials in the Kafka service, it’s good practice to also secure the Zookeeper service with SASL authentication.

Let’s include a zookeeper_jaas.conf file to configure the zookeeper user credentials:

Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zookeeper"
    password="zookeeper-secret"
    user_zookeeper="zookeeper-secret";
};

In the above configuration, we’re using the Zookeeper-specific security implementation DigestLoginModule instead of Kafka’s PlainLoginModule for improved compatibility.

Additionally, we’ll include the zookeeper credentials in the previously created kafka_server_jaas.conf file:

Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
    username="zookeeper"
    password="zookeeper-secret";
};

The above Client credentials are used by the Kafka service to authenticate with the Zookeeper service.

2.3. Setup Kafka Service With Zookeeper

We can set up our Kafka and Zookeeper services using a Docker Compose file.

First, we’ll implement a Zookeeper service and include the zookeeper_jaas.conf file:

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.6.6
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"
    volumes:
      - ./config/zookeeper_jaas.conf:/etc/kafka/zookeeper_jaas.conf
    ports:
      - 2181

Next, we’ll implement a Kafka service with the SASL/PLAIN authentication:

kafka:
  image: confluentinc/cp-kafka:7.6.6
  depends_on:
    - zookeeper
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENERS: SASL_PLAINTEXT://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  volumes:
    - ./config/kafka_server_jaas.conf:/etc/kafka/kafka_server_jaas.conf
  ports:
    - "9092:9092"

In the above code, we’ve included the previously created kafka_server_jaas.conf file to set up the SASL/PLAIN users.

We should note that the KAFKA_ADVERTISED_LISTENERS property defines the endpoint that Kafka clients use to send messages to and listen on.

Finally, we’ll run the entire Docker setup using the docker compose command:

docker compose up --build

We’ll get similar logs in the Docker console:

kafka-1      | [2025-06-19 14:32:00,441] INFO Session establishment complete on server zookeeper/172.18.0.2:2181, session id = 0x10000004c150001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka-1      | [2025-06-19 14:32:00,445] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
zookeeper-1  | [2025-06-19 14:32:00,461] INFO Successfully authenticated client: authenticationID=zookeeper;  authorizationID=zookeeper. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)

We confirm that the Kafka and Zookeeper services are integrated without any errors.

3. Implement Kafka Client With Spring

We’ll implement the producer and consumer services using the Spring Kafka implementation.

3.1. Maven Dependencies

First, we’ll include the Spring Kafka dependency:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.1.2</version>
</dependency>

Next, we’ll implement a producer service to send messages.

3.2. Kafka Producer

Let’s implement a Kafka producer service using the KafkaTemplate class:

public void sendMessage(String message, String topic) {
    LOGGER.info("Producing message: {}", message);
    kafkaTemplate.send(topic, "key", message)
        .whenComplete((result, ex) -> {
            if (ex == null) {
                LOGGER.info("Message sent to topic: {}", message);
            } else {
                LOGGER.error("Failed to send message", ex);
            }
        });
}

In the above code, we’re sending a message using the send method of KafkaTemplate.
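
For context, the enclosing producer bean can be as simple as the following sketch, with Spring Boot auto-configuring the injected KafkaTemplate:

@Service
public class KafkaProducer {
    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducer.class);
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;
    // the sendMessage(String message, String topic) method shown above lives here
}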

3.3. Kafka Consumer

We’ll use Spring Kafka’s KafkaListener and ConsumerRecord classes to implement the consumer service.

Let’s implement a consumer method with the @KafkaListener annotation:

@KafkaListener(topics = TOPIC)
public void receive(ConsumerRecord<String, String> consumerRecord) {
    LOGGER.info("Received payload: '{}'", consumerRecord.toString());
    messages.add(consumerRecord.value());
}

In the above code, we receive a message and add it to the messages list.

3.4. Configure Spring Application With Kafka

Next, we’ll create an application.yml file and include a few Spring Kafka-related properties:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: test-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Now, let’s run the application and verify the setup:

kafka-1 | [2025-06-19 14:38:33,188] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Failed authentication with /192.168.65.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

As expected, the client application is unable to authenticate with the Kafka server.

3.5. Configure Client With JAAS Config

To resolve the above error, we’ll use the spring.kafka.properties configuration to provide the SASL/PLAIN settings.

Now, we’ll include a few additional configurations related to the user1 credentials and set the sasl.mechanism property to PLAIN:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    properties:
      sasl.mechanism: PLAIN
      sasl.jaas.config: >
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="user1"
        password="user1-secret";
    security:
      protocol: "SASL_PLAINTEXT"

In the above code, we’ve included the matching username and password as part of the sasl.jaas.config property.

Sometimes, we can encounter common errors due to missing or incorrect SASL configuration. For example, we’ll get the below error if the sasl.mechanism property is PLAINTEXT instead of PLAIN:

Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
	... 25 common frames omitted
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to create SaslClient with mechanism PLAINTEXT

We’ll get a different error when the sasl.mechanism property is incorrectly named as security.mechanism:

Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config

Let’s verify the Kafka application with the entire setup.

4. Testing

We’ll use the Testcontainers framework to test the Kafka client application.

First, we’ll create a DockerComposeContainer object using the docker-compose.yml:

@Container
public DockerComposeContainer<?> container =
  new DockerComposeContainer<>("src/test/resources/sasl-plaintext/docker-compose.yml")
    .withExposedService("kafka", 9092, Wait.forListeningPort());

Next, let’s implement a test method to validate the consumer:

@Test
void givenSaslIsConfigured_whenProducerSendsMessageOverSasl_thenConsumerReceivesOverSasl() {
    String message = UUID.randomUUID().toString();
    kafkaProducer.sendMessage(message, "test-topic");
    await().atMost(Duration.ofMinutes(2))
      .untilAsserted(() -> assertThat(kafkaConsumer.messages).containsExactly(message));
}

Finally, we’ll run the test case and verify the output:

16:56:44.525 [kafka-producer-network-thread | producer-1] INFO c.b.saslplaintext.KafkaProducer - Message sent to topic: 82e8a804-0269-40a2-b8ed-c509e6951011
16:56:48.566 INFO  c.b.saslplaintext.KafkaConsumer - Received payload: ConsumerRecord(topic = test-topic, ... key = key, value = 82e8a804-0269-40a2-b8ed-c509e6951011

From the above logs, we can see that the consumer service has successfully received the message.

5. Conclusion

In this tutorial, we’ve learned how to set up SASL/PLAIN authentication in a Kafka service using JAAS config in a Docker environment.

We’ve also implemented producer/consumer services and configured the authentication using a similar JAAS config. Finally, we tested the entire setup by sending and receiving a message using Testcontainers with Docker Compose.

As always, the example code can be found over on GitHub.


Does Spring @Transactional Annotation Work on a Private Method?


1. Overview

In this tutorial, we’ll resolve the question of whether Spring’s @Transactional annotation works on private methods. As a cornerstone of transaction management in Spring applications, @Transactional simplifies ensuring data consistency across database operations. However, developers often encounter unexpected behavior when applying it, leading to questions about its compatibility with method visibility.

We’ll delve into Spring’s transaction management mechanics and provide clear explanations and practical insights to address this.

2. Understanding Spring’s @Transactional Annotation

The @Transactional annotation in Spring is used to define transactional boundaries around methods or classes. It ensures that operations within the annotated scope are executed as a single unit of work. When a method annotated with @Transactional is invoked, Spring’s transaction management creates a transaction, handles commits or rollbacks, and manages resources like database connections.

2.1. Example Service Method

Let’s look at a code example, where we apply the annotation on a service method:

@Service
public class OrderService {
    @Autowired
    private TestOrderRepository repository;
    @Transactional
    public void createOrder(TestOrder order) {
        repository.save(order);
    }
}

When we apply @Transactional to the createOrder method, it doesn’t execute any logic itself; instead, it serves as a marker. Spring’s Aspect-Oriented Programming (AOP) framework processes this marker using proxies to intercept method calls and weave in transactional behavior.

Let’s dive into what these proxies are and how they tie into @Transactional.

2.2. Aspects and Proxies

A proxy acts as an intermediary object that wraps a target object to intercept and control access to its methods. It can enable additional behavior, like transaction management.

To illustrate a custom @Transactional aspect, let’s consider a simplified example:

@Around("@annotation(org.springframework.transaction.annotation.Transactional)")
public Object manageTransaction(ProceedingJoinPoint joinPoint) throws Throwable {
    // Start transaction
    try {
        Object result = joinPoint.proceed(); // Execute the target method, like 'createOrder'
        // Commit transaction
        return result;
    } catch (Throwable t) {
        // Rollback transaction
        throw t;
    }
}

This code example illustrates an interception function, manageTransaction, that mimics @Transactional behavior. Using the @Around advice, it intercepts methods annotated with @Transactional annotation, initiating a transaction before execution, committing it upon success, or rolling it back on failure.

Method visibility – public, protected, package-private, or private – plays a significant role in determining which methods the proxy can intercept.

With this foundation, let’s tackle the core question: Does @Transactional work on private methods?

3. Does @Transactional Work on private Methods?

The short answer is: No, not by default.

To understand why, let’s examine how Spring’s AOP proxies work. These proxies wrap the target object to intercept calls and add transactional logic around the method.

However, proxies can only intercept methods they can access, which traditionally means public methods.

As of Spring 6.0, @Transactional also supports protected and package-visible methods in class-based proxies. But note: For interface-based proxies (JDK dynamic proxies), methods must still be public and defined in the interface.

Why do these limitations exist?

3.1. The Role of Method Visibility

Spring uses two main proxy types for @Transactional: JDK dynamic proxies (interface-based, requiring an interface implementation) and CGLIB proxies (class-based, via subclassing). Neither can intercept private methods – or effectively private ones like final or static methods.

To determine which proxy mode an application uses, note that Spring defaults to JDK dynamic proxies if the bean implements an interface; otherwise, it uses CGLIB. We can force CGLIB by setting proxyTargetClass=true in @EnableTransactionManagement or in our AOP config.
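
As a quick illustration, forcing class-based proxies can be as simple as this configuration sketch:

@Configuration
@EnableTransactionManagement(proxyTargetClass = true) // force CGLIB (class-based) proxies
public class TransactionConfig {
}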

In Java, private methods aren’t inherited and can’t be overridden. So, when we annotate a private method with @Transactional, the proxy can’t see or intercept it, and the transactional behavior is skipped.

3.2. What Happens if We Annotate private Methods Anyway?

To demonstrate, let’s add methods with varying visibilities to our OrderService. Each saves an order and throws an exception. If @Transactional works, the exception should trigger a rollback, leaving the repository empty. If not, the order persists.

First, let’s focus on the private method, as it’s the core case where we expect failure:

@Transactional
private void createOrderPrivate(TestOrder order) {
    repository.save(order);
    throw new RuntimeException("Rollback createOrderPrivate");
}
public void callPrivate(TestOrder order) {
    createOrderPrivate(order);
}

Now, let’s test this. We call the private method indirectly via a public wrapper, since private methods aren’t directly callable from outside:

@Test
void givenPrivateTransactionalMethod_whenCallingIt_thenShouldNotRollbackOnException() {
    assertThat(repository.findAll()).isEmpty();
    assertThatThrownBy(() -> underTest.callPrivate(new TestOrder())).isNotNull();
    assertThat(repository.findAll()).hasSize(1);
}

Here, the repository ends up with one order after the exception, showing no rollback, indicating that @Transactional was ignored because the proxy couldn’t intercept the private method.

Next, let’s contrast this with the behavior of a public method:

@Transactional
public void createOrderPublic(TestOrder order) {
    repository.save(order);
    throw new RuntimeException("Rollback createOrderPublic");
}
@Test
void givenPublicTransactionalMethod_whenCallingIt_thenShouldRollbackOnException() {
    assertThat(repository.findAll()).isEmpty();
    assertThatThrownBy(() -> underTest.createOrderPublic(new TestOrder())).isNotNull();
    assertThat(repository.findAll()).isEmpty();
}

This time, the repository stays empty post-exception, confirming the rollback worked via proxy interception.

Similarly, for package-private (default visibility):

@Transactional
void createOrderPackagePrivate(TestOrder order) {
    repository.save(order);
    throw new RuntimeException("Rollback createOrderPackagePrivate");
}
@Test
void givenPackagePrivateTransactionalMethod_whenCallingIt_thenShouldRollbackOnException() {
    assertThat(repository.findAll()).isEmpty();
    assertThatThrownBy(() -> underTest.createOrderPackagePrivate(new TestOrder())).isNotNull();
    assertThat(testOrderRepository.findAll()).isEmpty();
}

Assuming we’re using class-based proxies (CGLIB), this rolls back as expected in Spring 6.0+.

Finally, let’s test the @Transactional annotation’s behavior for protected methods:

@Transactional
protected void createOrderProtected(TestOrder order) {
    repository.save(order);
    throw new RuntimeException("Rollback createOrderProtected");
}
@Test
void givenProtectedTransactionalMethod_whenCallingIt_thenShouldRollbackOnException() {
    assertThat(repository.findAll()).isEmpty();
    assertThatThrownBy(() -> underTest.createOrderProtected(new TestOrder())).isNotNull();
    assertThat(testOrderRepository.findAll()).isEmpty();
}

Again, rollback succeeds with class-based proxies.

Ultimately, @Transactional is just metadata – a marker for Spring’s runtime to act on. If the proxy can’t detect it (as with private methods), it’s as if the annotation isn’t there: The method runs without transactions.

4. How to Resolve the Issue with private Methods

Since @Transactional doesn’t work on private methods, we need workarounds to avoid issues like missed rollbacks.

Let’s explore a few options.

4.1. Use public Methods

The simplest fix is to move the logic to a public method, which proxies can reliably intercept.

That said, making methods public might break encapsulation. To balance this, design interfaces thoughtfully or use package-private methods (supported in Spring 6.0+ with CGLIB).

4.2. Switch to AspectJ Weaving

For more flexibility, we can use AspectJ, which weaves aspects directly into bytecode at compile- or load-time, bypassing proxy limits.

This approach allows @Transactional to work on private methods but requires additional configuration, such as enabling AspectJ weaving.
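
For example, switching the advice mode might look like the following sketch; the supporting weaving setup (spring-aspects on the classpath plus compile-time or load-time weaving) still has to be configured separately:

@Configuration
@EnableTransactionManagement(mode = AdviceMode.ASPECTJ)
public class AspectJTransactionConfig {
    // with ASPECTJ mode, transactional advice is woven into the bytecode itself,
    // so it also applies to private and self-invoked methods
}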

4.3. Extract to a Separate Bean

Another approach is to extract the transactional logic into a separate Spring bean with a public method. Then, inject this bean into the original service and call it. This preserves encapsulation in the main class while letting the proxy handle the new public method.
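
Here’s a minimal sketch of this idea, reusing the OrderService and TestOrderRepository from earlier; the OrderPersistenceService name is hypothetical:

@Service
public class OrderPersistenceService {
    @Autowired
    private TestOrderRepository repository;
    @Transactional
    public void saveOrder(TestOrder order) {
        repository.save(order);
    }
}

The original service then simply delegates to the new bean through its public, proxy-visible method:

@Service
public class OrderService {
    @Autowired
    private OrderPersistenceService persistenceService;
    // no @Transactional needed here; the proxy around OrderPersistenceService handles it
    public void createOrder(TestOrder order) {
        persistenceService.saveOrder(order);
    }
}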

5. Conclusion

In this article, we explored Spring’s @Transactional annotation and its limitations with private methods, stemming from proxy-based AOP. Proxies can’t access private methods, so the annotation gets ignored.

We can overcome this with public methods, AspectJ weaving, or separate beans. By grasping these mechanics and choosing the right fix, we ensure reliable transactions in our Spring applications.

As always, the entire code used in this article can be found over on GitHub.


Convert JSON Object to JSON Array in Java


1. Overview

JSON (JavaScript Object Notation) is a lightweight, structured format used for data exchange. Modern software widely uses JSON for data exchange, configuration, and API communication. In Java, working with JSON often involves libraries like org.json, Jackson, or Gson. While converting a JSON object to a JSON array may seem straightforward, the correct approach depends on the input structure and the desired output format.

In this tutorial, we’ll demonstrate how to convert a JSON object to a JSON array using different libraries, with code examples and JUnit tests for validation.

2. Use Cases and Considerations

Converting a JSON object to a JSON array is a common requirement in applications that interact with REST APIs, data pipelines, or configuration files. This transformation often becomes necessary when data originally modeled as key-value pairs must be serialized, iterated over, or reformatted for compatibility with front-end frameworks or external systems. For example, APIs may expect an array of objects rather than a map, especially when rendering tabular data or processing form submissions.

Before choosing a conversion method, developers should assess whether they need only the values from the JSON object or both keys and values as distinct entities. We should also consider how the resulting JSON array will be used – whether for display, transmission, or further manipulation – as this directly impacts the choice of library and transformation strategy.

3. Approaches to Conversion

Let’s now walk through three different approaches for converting a JSON object into a JSON array. For each library, we’ll first build a method to encapsulate the logic and then verify its correctness with a corresponding JUnit test.

3.1. Using the org.json Library

The org.json library is simple, widely used, and already included in many Java projects. It provides a minimal API for working with JSON, with straightforward support for both JSONObject and JSONArray. Let’s create a few methods that handle the two common scenarios:

JSONArray convertValuesToArray(JSONObject jsonObject) {
    return new JSONArray(jsonObject.toMap().values());
}

This method converts the JSONObject to a Map, extracts its values() collection, and wraps it in a JSONArray, resulting in an array of values only.

Let’s now validate the method by passing a simple JSONObject and checking that all values appear in the JSONArray:

@Test
void givenFlatJSONObject_whenConvertValues_thenJSONArrayOfValues() {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("id", 1);
    jsonObject.put("name", "Alice");
    OrgJsonConverter converter = new OrgJsonConverter();
    JSONArray result = converter.convertValuesToArray(jsonObject);
    assertEquals(2, result.length());
    assertTrue(result.toList().contains("Alice"));
}

Next, we add another method that constructs a JSONArray where each element is a key-value pair represented as its own JSONObject:

JSONArray convertToEntryArray(JSONObject jsonObject) {
    JSONArray result = new JSONArray();
    for (String key : jsonObject.keySet()) {
        JSONObject entry = new JSONObject();
        entry.put("key", key);
        entry.put("value", jsonObject.get(key));
        result.put(entry);
    }
    return result;
}

This method iterates over each key in the JSONObject and creates a new JSONObject containing two fields, “key” and “value”.

In this test, we verify that each key-value pair is correctly converted into a JSONObject with key and value fields inside the resulting JSONArray:

@Test
void givenFlatJSONObject_whenConvertToEntryArray_thenJSONArrayOfObjects() {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("language", "Java");
    jsonObject.put("framework", "Spring");
    OrgJsonConverter converter = new OrgJsonConverter();
    JSONArray result = converter.convertToEntryArray(jsonObject);
    assertEquals(2, result.length());
    assertEquals("language", result.getJSONObject(0).get("key"));
}

3.2. Using Jackson with Custom Logic

Jackson is a multi-purpose Java library for processing JSON data. It gives us greater control over JSON structures through its tree model and data-binding capabilities. Although Jackson primarily works with its own tree model, we can still integrate it with JSONObject by converting the object internally.

Let’s write a method that accepts a JSONObject, transforms it into Jackson’s internal model, and returns an ArrayNode:

ArrayNode convertToArray(JSONObject jsonObject) {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode jsonNode = mapper.convertValue(jsonObject.toMap(), JsonNode.class);
    ArrayNode result = mapper.createArrayNode();
    jsonNode.fields().forEachRemaining(entry -> {
        ObjectNode obj = mapper.createObjectNode();
        obj.put("key", entry.getKey());
        obj.set("value", entry.getValue());
        result.add(obj);
    });
    return result;
}

We first convert the JSONObject to a Map and then use Jackson’s convertValue() to treat it as a JsonNode. From there, we loop over the fields and build an ArrayNode containing new key-value pairs. This method is powerful for more complex integrations.

Let’s test the Jackson-based method by converting a JSONObject with two key-value pairs and asserting that it structures them as individual key-value objects in the resulting ArrayNode:

@Test
void givenJSONObject_whenConvertToArray_thenArrayNodeOfKeyValueObjects() {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("country", "India");
    jsonObject.put("code", "IN");
    JacksonConverter converter = new JacksonConverter();
    ArrayNode result = converter.convertToArray(jsonObject);
    assertEquals(2, result.size());
    assertEquals("country", result.get(0).get("key").asText());
}
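If we also need the resulting ArrayNode as a JSON string, for example to return it from a REST endpoint, we can serialize it with the same ObjectMapper. Here’s a minimal sketch reusing the converter and jsonObject from the test above:

ObjectMapper mapper = new ObjectMapper();
ArrayNode array = converter.convertToArray(jsonObject);
// writeValueAsString() renders the tree model as a JSON string (throws JsonProcessingException)
String json = mapper.writeValueAsString(array);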

3.3. Using Gson With JSON Object

Gson is another popular Java JSON library. It doesn’t accept org.json’s JSONObject directly, but we can still bridge the two by iterating over the object’s keys and re-parsing each value. Let’s build a method that creates a Gson JsonArray from a JSONObject, where each element is a key-value pair:

JsonArray convertToKeyValueArray(JSONObject jsonObject) {
    JsonArray result = new JsonArray();
    jsonObject.keySet().forEach(key -> {
        JsonObject entry = new JsonObject();
        entry.addProperty("key", key);
        entry.add("value", com.google.gson.JsonParser.parseString(jsonObject.get(key).toString()));
        result.add(entry);
    });
    return result;
}

We iterate through the keys of the JSONObject and create a new Gson JsonObject for each entry. Each new object holds the key and the parsed value. This approach fits naturally with the rest of the Gson API.

Here, we check if each entry in the JSONObject becomes a key-value object in the JsonArray:

@Test
void givenJSONObject_whenConvertToKeyValueArray_thenJsonArrayWithObjects() {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("brand", "Tesla");
    jsonObject.put("year", 2024);
    GsonConverter converter = new GsonConverter();
    JsonArray result = converter.convertToKeyValueArray(jsonObject);
    assertEquals(2, result.size());
    assertEquals("brand", result.get(0).getAsJsonObject().get("key").getAsString());
}

4. Conclusion

In this article, we explored ways to transform a JSON object to a JSON array using the most popular Java JSON libraries. The org.json library provides us with straightforward tools for both value-only and key-value pair conversions. Jackson excels when working with complex data models or integrating with other Jackson-based features. Gson provides a lightweight and clean way to achieve the same in Android or microservice environments.

We should choose an approach based on our application requirements. Each method is ready for use and easy to validate. As always, the code for these examples is available over on GitHub.

The post Convert JSON Object to JSON Array in Java first appeared on Baeldung.
       

Introduction to Jimmer ORM


1. Introduction

In this tutorial, we’re going to review the Jimmer ORM framework. This ORM is relatively new at the time of writing, but it has some promising features. We’ll review Jimmer’s philosophy and then write some examples with it.

2. Overall Architecture

First things first, Jimmer is not a JPA implementation. That means that Jimmer does not implement every JPA feature. For instance, Jimmer does not have a dirty checking mechanism as such. However, it is worth mentioning that Jimmer shares a lot of similar concepts with Hibernate. That is intentional, in order to make the transition from Hibernate smoother. So, in general, JPA knowledge is helpful for understanding Jimmer.

As an example, Jimmer has a concept of an entity, although its shape and design differ extensively from Hibernate’s. However, concepts like lazy loading or cascading are not present in Jimmer as such. The reason is that they don’t really make much sense in Jimmer because of the way it is designed. We’ll see that shortly.

Final note for this section: Jimmer supports multiple databases, including MySQL, Oracle, PostgreSQL, SQL Server, SQLite, and H2.

3. Entity Sample

As mentioned, Jimmer differs a lot from Hibernate and many other ORM frameworks; it is built around several key design principles. The first one is that our entities serve a sole purpose – representing the schema of the underlying database. The important thing here is that we do not specify, via annotations, how we intend to interact with an entity. Instead, Jimmer requires the developer to provide all the information necessary to derive the query at the call site.

So, what does that mean? To understand, let’s review the following Jimmer entity:

import java.time.Instant;
import java.util.List;
import org.babyfish.jimmer.client.TNullable;
import org.babyfish.jimmer.sql.Column;
import org.babyfish.jimmer.sql.Entity;
import org.babyfish.jimmer.sql.GeneratedValue;
import org.babyfish.jimmer.sql.GenerationType;
import org.babyfish.jimmer.sql.Id;
import org.babyfish.jimmer.sql.JoinColumn;
import org.babyfish.jimmer.sql.ManyToOne;
import org.babyfish.jimmer.sql.OneToMany;
@Entity
public interface Book {
    @Id
    @GeneratedValue(strategy = GenerationType.USER)
    long id();
    @Column(name = "title")
    String title();
    @Column(name = "created_at")
    Instant createdAt();
    @ManyToOne
    @JoinColumn(name = "author_id")
    Author author();
    @TNullable
    @Column(name = "rating")
    Long rating();
    @OneToMany(mappedBy = "book")
    List<Page> pages();
    // equals and hashcode implementation
}

As we can notice, it has annotations similar to JPA’s. But one thing is missing – we do not specify any cascading for the relations, such as pages in our case. The same goes for the fetch type (lazy or eager): it is not specified on the declaration side. We also cannot specify the insertable or updatable attributes of the @Column annotation, as we probably would in JPA, and so on.

We do not do that because Jimmer expects us to provide it explicitly when we try to execute the appropriate operation. We’ll see that in detail in the sections below.

4. DTO Language

Another thing we notice immediately is that Book is an interface, not a class. This is intentional: in Jimmer, we’re not supposed to work with entities directly, that is to say, we’re not supposed to instantiate them. Instead, the assumption is that we’re going to both read and write data via DTOs, and those DTOs should have the exact shape that we want to write to or read from the database. Let’s look at an example (let’s not focus on the exact API calls just yet):

public void saveAdHocBookDraft(String title) {
    Book book = BookDraft.$.produce(bookDraft -> {
        bookDraft.setCreatedAt(Instant.now());
        bookDraft.setTitle(title);
        bookDraft.setAuthor(AuthorDraft.$.produce(authorDraft -> {
            authorDraft.setId(1L);
        }));
        bookDraft.setId(1L);
    });
    sqlClient.save(book);
}

In general, in most interactions, we need to use the SqlClient in order to interact with the database.

In the sample above, we’re creating an ad-hoc DTO via the BookDraft interface. Jimmer generated the BookDraft interface, along with AuthorDraft, for us – it is not handwritten code. The generation happens at compile time, via annotation processing if we’re using Java or via Kotlin Symbol Processing if we’re using Kotlin.

These two generated interfaces allow for the construction of a DTO object of an arbitrary shape, which Jimmer internally converts later to a Book entity. So, we’re indeed saving an entity, it is just that we’re not instantiating it ourselves, rather, Jimmer does it for us.

5. Null Handling

Also, Jimmer only saves the properties that are present in the DTO. That is because Jimmer strictly distinguishes between a property that was never set and a property that is explicitly set to null. In other words, if we do not want to include a given scalar property in the generated SQL, we simply create a DTO without setting it. By scalar, we mean fields that do not represent a relation:

public void insertOnlyIdAndAuthorId() {
    Book book = BookDraft.$.produce(
        bookDraft -> {
            bookDraft.setAuthor(AuthorDraft.$.produce(authorDraft -> {
                authorDraft.setId(1L);
            }));
            bookDraft.setId(1L);
        });
    sqlClient.insert(book);
}

The generated INSERT for Book in the case above would look like this:

INSERT INTO BOOK(ID, author_id) VALUES(?, ?)

If we explicitly set a scalar property to null, then Jimmer would include this property in the underlying INSERT/UPDATE statement and assign a null value to it:

public void insertExplicitlySetRatingToNull() {
    Book book = BookDraft.$.produce(bookDraft -> {
        bookDraft.setAuthor(AuthorDraft.$.produce(authorDraft -> {
            authorDraft.setId(1L);
        }));
        bookDraft.setRating(null);
        bookDraft.setId(1L);
    });
    sqlClient.insert(book);
}

The generated INSERT statement would look like this:

INSERT INTO BOOK(ID, author_id, rating) VALUES(?, ?, ?)

Notice that INSERT includes the rating property. The bind value of this rating property would be set to null in the underlying JDBC Statement.

As a final note, for properties that represent relations (non-scalar properties), the behavior is more complicated and deserves a separate article.

6. DTO Explosion Problem

Now, experienced developers may notice a problem: doesn’t Jimmer’s approach imply creating dozens of DTOs, one for each unique operation? The answer is – not quite. Although we would indeed need a lot of DTOs, we can significantly reduce the overhead of writing them manually, thanks to the dedicated DTO language that Jimmer provides. Here is an example of it:

export com.baeldung.jimmer.models.Book
    -> package com.baeldung.jimmer.dto
BookView {
   #allScalars(Book)
   author {
     id
   }
   pages {
    #allScalars(Page)
   }
}

The above example is markup written in Jimmer’s DTO language. The generation of POJOs from this markup happens at compile time, as with the examples in the previous section.

In the markup above, for instance, we’ve asked Jimmer to include all the scalar fields in the generated DTO by using the #allScalars instruction. Apart from them, we’ve also specified that the DTO will only contain the ID of the Author, not the Author itself. The collection of pages will be present in the DTO in its entirety (scalar fields only).

So, in general, with Jimmer, we indeed need a lot of DTOs in order to describe the desired behavior in every case. But we can either create the ad-hoc version or rely on the POJOs that the compiler plugin generates for us during the build.

7. Reading Path

Up until now, we’ve only talked about the ways of saving data into the database. Let’s review the reading path. In order to read the data, we also need to specify exactly what data we need to fetch via the DTO. The very shape of the DTO instructs Jimmer exactly which fields need to be fetched. If a field is not present in the DTO, it will not be fetched:

public List<BookView> findAllByTitleLike(String title) {
    List<BookView> values = sqlClient.createQuery(BookTable.$)
      .where(BookTable.$.title()
        .like(title))
      .select(BookTable.$.fetch(BookView.class))
      .execute();
        
    return values;
}

Here, we’re using the BookView DTO from the previous section. We can also specify the columns we need to read via the Object Fetcher’s ad-hoc API. It is very similar to the one we used when writing to the database:

public List<BookView> findAllByTitleLikeProjection(String title) {
    List<Book> books = sqlClient.createQuery(BookTable.$)
      .where(BookTable.$.title()
        .like(title))
      .select(BookTable.$.fetch(Fetchers.BOOK_FETCHER.title()
        .createdAt()
        .author()))
      .execute();
    return books.stream()
      .map(BookView::new)
      .collect(Collectors.toList());
}

Here, we’re using the Object Fetcher API to construct the DTO that represents the shape of the structure we want to read. But we still specify the columns we want to read at the call site and not the declaration site. This approach is very similar to the ad-hoc creation of a DTO for saving.

8. Transaction Management

Finally, let’s quickly review the way Jimmer manages transactions. Jimmer does not have a built-in transaction management mechanism of its own; instead, it relies heavily on the Spring Framework’s transaction management infrastructure. For instance, let’s consider local (non-distributed) transaction management, which is the most common scenario. In this case, Jimmer relies on Spring’s TransactionSynchronizationManager and on the transactional connection being bound to the current thread.

To sum up, the traditional usage of Spring’s @Transactional works with Jimmer. Imperative transaction management via Spring’s TransactionTemplate is also possible.
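For illustration, here’s a minimal sketch of the declarative approach. It assumes that Spring manages a JSqlClient bean (Jimmer’s Java client) and reuses the Book draft from the earlier examples:

@Service
public class BookService {

    private final JSqlClient sqlClient;

    public BookService(JSqlClient sqlClient) {
        this.sqlClient = sqlClient;
    }

    @Transactional
    public void saveBook(String title) {
        Book book = BookDraft.$.produce(bookDraft -> {
            bookDraft.setTitle(title);
            bookDraft.setCreatedAt(Instant.now());
            bookDraft.setAuthor(AuthorDraft.$.produce(authorDraft -> authorDraft.setId(1L)));
            bookDraft.setId(1L);
        });
        // the save runs on the transactional connection bound to the current thread
        sqlClient.save(book);
    }
}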

9. Conclusion

In this article, we’ve talked about Jimmer ORM. As we saw, Jimmer takes a unique approach to data manipulation. While JPA implementations, and Hibernate in particular, express interactions with the database primarily via annotations, Jimmer requires developers to provide all the information dynamically at the call site. For this, Jimmer uses DTOs, which we would typically generate via Jimmer itself using its DTO language, although we can also create them ad hoc. In terms of transaction management, Jimmer relies on the Spring Framework’s infrastructure.

As always, the source code for this article is available over on GitHub.

The post Introduction to Jimmer ORM first appeared on Baeldung.
       

Using a Different Client Certificate per Connection in Java


1. Introduction

In this tutorial, we’ll configure Java’s SSLContext to use different client certificates based on the target server. We’ll start with a straightforward approach using Apache HttpComponents, then implement a custom solution using a routing KeyManager and TrustManager.

2. Scenario and Setup

We’ll simulate a Java client that needs to make HTTPS calls to two different endpoints requiring different client certificates for mutual TLS authentication. For demonstration, we’ll imagine both https://api.service1/ and https://api.service2/ as intranet services.

2.1. Generating the CA, Keys, and Trust Stores

Our setup uses a client/server key pair for each host. Each key is signed by a shared certificate authority (CA), and both sides of the TLS connection trust the CA.

Creating the CA, signed certificates, keystores, and trust stores manually can be error-prone and repetitive. To simplify this, we provide a helper script that automates the process:

  • Create a private certificate authority (CA) and add it to a truststore; for example, trust.api.service1.p12
  • Generate separate key pairs for the client and server
  • Sign both client and server certificates using the CA
  • Package each certificate and key into a PKCS12 keystore for easy loading; for example, client.api.service1.p12 and server.api.service1.p12

Furthermore, we’ll use an alias with the same value as the hostname of each endpoint. The hostname is always part of each generated file name, so it’s easy to distinguish the files by their prefix. We also use the same password throughout to simplify the examples; in a real scenario, we’d use different passwords.

2.2. Mocking the Servers and Certificate Setup

We’ll use WireMock to simulate our servers, relying on two properties: CERTS_DIR, with the directory containing the p12 files for both servers, and PASSWORD:

private static WireMockServer mockHttpsServer(String hostname, int port) {
    return new WireMockServer(WireMockConfiguration.options()
      .bindAddress(hostname)
      .httpsPort(port)
      .trustStorePath(CERTS_DIR + "/trust." + hostname + ".p12")
      .trustStorePassword(PASSWORD)
      .keystorePath(CERTS_DIR + "/server." + hostname + ".p12")
      .keystorePassword(PASSWORD)
      .keyManagerPassword(PASSWORD)
      .needClientAuth(true));
}

Let’s go through the most important options:

  • Bind address: Each certificate is bound to a specific hostname, so we specify it here
  • HTTPS port: This is the port we’ll use in our tests
  • Keystore path and trust store path: Since our certificates are signed by our own private CA, we also need to specify a trust store that contains it. Additionally, to simplify the example, we’re assuming the file names use the prefix “trust.” for the trust store and “server.” for the server key store
  • Client auth: Enabled explicitly for mTLS
  • Passwords: In this example, we’re using the same password

3. Using Apache HTTP Components

Let’s first examine how we can set up different clients with their own SSL contexts using Apache’s HTTP library.

3.1. Configuring the Client

Since we’re assuming the prefix “trust.” for the trust store and “client.” for the client key store, we only need one parameter to receive the endpoint hostname to build our SSLContext. With the help of the library’s SSLContexts, we can load the trust and key material:

private CloseableHttpClient httpsClient(String host) {
    char[] password = PASSWORD.toCharArray();
    SSLContext context = SSLContexts.custom()
      .loadTrustMaterial(Paths.get(CERTS_DIR + "/trust." + host + ".p12"), password)
      .loadKeyMaterial(Paths.get(CERTS_DIR + "/client." + host + ".p12"), password, password)
      .build();
    // ...
}

Then, we create a connection manager and set our newly created SSL context as a TLS socket strategy. We’ll use DefaultClientTlsStrategy, introduced in HttpComponents Core 5:

var manager = PoolingHttpClientConnectionManagerBuilder.create() 
  .setTlsSocketStrategy(new DefaultClientTlsStrategy(context)) 
  .build();

Finally, we use the manager to return a configured HTTPS client, ready to be used to make secure API calls:

return HttpClients.custom()
  .setConnectionManager(manager)
  .build();

3.2. Calling the Endpoints

With the boilerplate ready, we’ll create different client configurations for each API call:

@Test
void whenBuildingSeparateContexts_thenCorrectCertificateUsed() throws Exception {
    CloseableHttpClient client1 = httpsClient("api.service1");
    HttpGet api1Get = new HttpGet("https://api.service1:10443/test");
    client1.execute(api1Get, response -> {
        assertEquals(HttpStatus.SC_OK, response.getCode());
        return response;
    });
    CloseableHttpClient client2 = httpsClient("api.service2");
    HttpGet api2Get = new HttpGet("https://api.service2:20443/test");
    client2.execute(api2Get, response -> {
        assertEquals(HttpStatus.SC_OK, response.getCode());
        return response;
    });
}

Next, let’s see what this looks like without third-party dependencies, using a single SSLContext for all connections.

4. Creating a Custom KeyManager and TrustManager

The X509ExtendedKeyManager and X509ExtendedTrustManager are abstract classes from Java’s SSL package that we can extend to have complete control over how certificate keys and trust stores are loaded and used during SSL handshakes. We’ll use them to create a RoutingSslContextBuilder class that can choose the correct certificate based on the hostname.

4.1. Loading a KeyStore

Let’s start with utilities to load keys and trust managers. We’ll use this method to load a KeyStore into memory:

public class CertUtils {
    private static KeyStore loadKeyStore(Path path, String password) {
        KeyStore store = KeyStore.getInstance(path.toFile(), password.toCharArray());
        try (InputStream stream = Files.newInputStream(path)) {
            store.load(stream, password.toCharArray());
        }
        return store;
    }
    // ...
}

4.2. Loading a KeyManager and a TrustManager

Now let’s put it together to load a key manager of type X509KeyManager. We use this type because KeyManager is only a marker interface, and X.509 is the standard for certificate formats:

public static X509KeyManager loadKeyManager(Path path, String password) {
    KeyStore store = loadKeyStore(path, password);
    KeyManagerFactory factory = 
      KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
    factory.init(store, password.toCharArray());
    return (X509KeyManager) Stream.of(factory.getKeyManagers())
      .filter(X509KeyManager.class::isInstance)
      .findAny()
      .orElseThrow();
}

And also a trust manager for the trust store:

public static X509TrustManager loadTrustManager(Path path, String password) {
    KeyStore store = loadKeyStore(path, password);
    TrustManagerFactory factory = 
      TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    factory.init(store);
    return (X509TrustManager) Stream.of(factory.getTrustManagers())
      .filter(X509TrustManager.class::isInstance)
      .findAny()
      .orElseThrow();
}

4.3. Creating the RoutingKeyManager

With a custom key manager, we’ll decide which store to use based on the hostname or certificate alias. We’ll do that by storing every keystore in a Map and overriding the methods in X509ExtendedKeyManager.

When extending this class, it’s not always possible to get the hostname. In those cases, we’ll receive the alias in our select() method instead. Therefore, this strategy only works when the hostname of the target server matches the alias of the certificate:

public class RoutingKeyManager extends X509ExtendedKeyManager {
    private final Map<String, X509KeyManager> hostMap = new HashMap<>();
    public void put(String host, X509KeyManager manager) {
        hostMap.put(host, manager);
    }
    private X509KeyManager select(String host) {
        X509KeyManager manager = hostMap.get(host);
        if (manager == null)
            throw new IllegalArgumentException("key manager not found for " + host);
        return manager;
    }
    // ...
}

The chooseEngineClientAlias() method is called to choose an alias for authenticating the client. We get the host from the SSLEngine parameter and delegate the call to its manager’s chooseClientAlias(), ignoring the Socket parameter:

@Override
public String chooseEngineClientAlias(
  String[] keyType, Principal[] issuers, SSLEngine engine) {
    String host = engine.getPeerHost();
    return select(host).chooseClientAlias(keyType, issuers, (Socket) null);
}

Next, we’ll override and delegate getCertificateChain(). Notice that this time we’re selecting the key manager based on the alias:

@Override
public X509Certificate[] getCertificateChain(String alias) {
    return select(alias).getCertificateChain(alias);
}

The last method we need to delegate is getPrivateKey():

@Override
public PrivateKey getPrivateKey(String alias) {
    return select(alias).getPrivateKey(alias);
}

For our purposes, we don’t need the other methods in X509KeyManager, so we’ll throw an UnsupportedOperationException for the remaining overrides:

@Override
public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket socket) {
    throw new UnsupportedOperationException();
}
// ...

4.4. Creating the RoutingTrustManager

Our custom trust manager follows the same idea as our RoutingKeyManager, delegating to one of the registered trust managers based on the hostname:

public class RoutingTrustManager extends X509ExtendedTrustManager {
    private final Map<String, X509TrustManager> hostMap = new HashMap<>();
    public void put(String host, X509TrustManager manager) {
        hostMap.put(host, manager);
    }
    private X509TrustManager select(String host) {
        X509TrustManager manager = hostMap.get(host);
        if (manager == null)
            throw new IllegalArgumentException("trust manager not found for " + host);
        return manager;
    }
    // ...
}

This time, the only implementation we’ll need to care about is the checkServerTrusted() with an SSLEngine parameter:

@Override
public void checkServerTrusted(X509Certificate[] chain, String authType, SSLEngine engine)
  throws CertificateException {
    String host = engine.getPeerHost();
    select(host).checkServerTrusted(chain, authType);
}

Again, we need to throw UnsupportedOperationExceptions for the other overrides:

@Override
public void checkServerTrusted(X509Certificate[] chain, String authType)
  throws CertificateException {
    throw new UnsupportedOperationException();
}

5. Putting It All Together to Create a RoutingSslContextBuilder

The final piece is building the SSL context. Since our implementation handles routing internally, we’ll need a single HttpClient for all API calls, even if they require different certificates.

5.1. Creating a Builder

We’ll start with a builder class to combine our custom managers:

public class RoutingSslContextBuilder {
    private final RoutingKeyManager routingKeyManager;
    private final RoutingTrustManager routingTrustManager;
    public RoutingSslContextBuilder() {
        routingKeyManager = new RoutingKeyManager();
        routingTrustManager = new RoutingTrustManager();
    }
    public static RoutingSslContextBuilder create() {
        return new RoutingSslContextBuilder();
    }
    // ...
}

When building an instance, we’ll call this method for every combination of host and certificate. This loads both the key and trust manager for every server we need to access:

public RoutingSslContextBuilder trust(String host, String certsDir, String password) {
    routingTrustManager.put(host, CertUtils.loadTrustManager(
      Paths.get(certsDir, "trust." + host + ".p12"), password));
    routingKeyManager.put(host, CertUtils.loadKeyManager(
      Paths.get(certsDir, "client." + host + ".p12"), password));
    return this;
}

Lastly, we’ll initialize the SSL context with our custom managers, leaving the SecureRandom parameter null to get the default implementation:

public SSLContext build() throws NoSuchAlgorithmException, KeyManagementException {
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(
      new KeyManager[] { routingKeyManager }, 
      new TrustManager[] { routingTrustManager }, 
      null);
    return context;
}

5.2. Testing With Java’s HttpClient

Since we’re not dependent on the Apache HttpComponents library, we’ll use core Java to make the calls. Let’s start by building the SSL context and an HttpClient:

@Test
void whenBuildingCustomSslContext_thenCorrectCertificateUsedForEachConnection() throws Exception {
    SSLContext context = RoutingSslContextBuilder.create()
      .trust("api.service1", CERTS_DIR, PASSWORD)
      .trust("api.service2", CERTS_DIR, PASSWORD)
      .build();
    HttpClient client = HttpClient.newBuilder()
      .sslContext(context)
      .build();
    // ...
}

Now, let’s make the first call:

HttpRequest request = HttpRequest.newBuilder()
  .uri(URI.create("https://api.service1:10443/test"))
  .GET()
  .build();
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
assertEquals("ok from server 1", response.body());

Then, the second call:

request = HttpRequest.newBuilder()
  .uri(URI.create("https://api.service2:20443/test"))
  .GET()
  .build();
response = client.send(request, HttpResponse.BodyHandlers.ofString());
assertEquals("ok from server 2", response.body());

As expected, we were able to make secure requests to different servers that require different certificates using the same SSL context.

6. Conclusion

In this article, we demonstrated how to use multiple client certificates in Java when interacting with different HTTPS endpoints. We started with a multi-client solution using Apache HttpComponents and then built a custom solution with core Java, using custom KeyManager and TrustManager implementations. This approach is especially useful in systems requiring dynamic TLS credential selection, such as service meshes, multitenant clients, or API gateways.

As always, the source code is available over on GitHub.

The post Using a Different Client Certificate per Connection in Java first appeared on Baeldung.
       

Creating an AI Agent in Java Using Embabel Agent Framework


1. Overview

Modern applications are increasingly using Large Language Models (LLMs) to build solutions that go beyond simple question-answering. To achieve most real-world use cases, we need an AI agent capable of orchestrating complex workflows between LLMs and external tools.

The Embabel Agent Framework, created by Spring Framework founder Rod Johnson, aims to simplify creating AI agents on the JVM by providing a higher level of abstraction on top of Spring AI.

Through the use of Goal-Oriented Action Planning (GOAP), it enables agents to dynamically find paths to achieve goals without explicitly programming every workflow.

In this tutorial, we’ll explore the Embabel agent framework by building a basic quiz generation agent named Quizzard. Our agent fetches content from a blog post URL and then uses it to generate multiple-choice questions.

2. Setting up the Project

Before we start implementing our Quizzard agent, we’ll need to include the necessary dependency and configure our application correctly.

2.1. Dependencies

Let’s start by adding the necessary dependency to our project’s pom.xml file:

<dependency>
    <groupId>com.embabel.agent</groupId>
    <artifactId>embabel-agent-starter</artifactId>
    <version>0.1.0-SNAPSHOT</version>
</dependency>

For our Spring Boot application, we import the embabel-agent-starter dependency, which provides us with all the core classes required to build our Quizzard agent.

Since we’re using a snapshot version of the library, we’ll also need to add the Embabel snapshot repository to our pom.xml:

<repositories>
    <repository>
        <id>embabel-snapshots</id>
        <url>https://repo.embabel.com/artifactory/libs-snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

This is where we’ll be able to access the snapshot artifacts of Embabel, as opposed to the standard Maven Central repository.

2.2. Configuring an LLM Model

Next, let’s configure a chat model in our application.yaml file:

embabel:
  models:
    default-llm: claude-opus-4-20250514

For our demonstration, we specify Claude Opus 4 by Anthropic as our default model for all agent operations, using the claude-opus-4-20250514 model ID.

Notably, Embabel supports using multiple LLMs together, through which we can achieve cost-effectiveness and choose the best LLM for a given task based on its complexity.

Additionally, we’ll have to pass our Anthropic API key in the ANTHROPIC_API_KEY environment variable when running the application.

2.3. Defining a Quiz Generation Prompt Template

Finally, to make sure that our LLM generates high-quality quizzes against a blog’s content, we’ll define a detailed prompt template.

Let’s create a quiz-generation.txt file in the src/main/resources/prompt-templates directory:

Generate multiple choice questions based on the following blog content:
Blog title: %s
Blog content: %s
Requirements:
- Create exactly 5 questions
- Each question must have exactly 4 options
- Each question must have only one correct answer
- The difficulty level of the questions should be intermediate
- Questions should test understanding of key concepts from the blog
- Make the incorrect options plausible but clearly wrong
- Questions should be clear and unambiguous

Here, we clearly outline the requirements for the quiz. We leave two %s placeholders in our prompt template for the blog title and blog content, respectively. We’ll replace them with the actual values in our agent class.

For simplicity, we’re hardcoding the number of questions, options, and difficulty level directly in the prompt template. However, in a production application, we could make these configurable through properties or accept them as user input based on requirements.
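As a rough sketch of what that could look like, we might bind a hypothetical quizzard.quiz prefix to a standard Spring Boot configuration properties record. This isn’t part of Embabel, just regular Boot configuration, and it would need @ConfigurationPropertiesScan or @EnableConfigurationProperties to be picked up:

// hypothetical properties holder; its values could then be interpolated into the prompt template
@ConfigurationProperties(prefix = "quizzard.quiz")
record QuizProperties(int questionCount, int optionCount, String difficulty) {
}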

3. Creating Our Agent

In Embabel, an agent is the central component, which encapsulates a set of capabilities, known as actions, and uses them to achieve goals.

And now that we have our configuration in place, let’s build our Quizzard agent. We’ll first define an action that fetches blog content from the web using a Model Context Protocol (MCP) server and then another that uses our configured LLM to generate a quiz from it.

3.1. Fetching Blog Content via an MCP Server

To extract content from blogs, we’ll use the Fetch MCP server. It provides a tool that fetches content from a URL and converts it to markdown. Alternatively, we could use the Brave Search MCP server instead.

For our demonstration, we’ll add this MCP server to Docker Desktop using the MCP toolkit, which is available in version 4.42 or later. If using an older version, we can add the MCP toolkit as a Docker extension.

To use this MCP server, let’s first configure our application as an MCP client:

@SpringBootApplication
@EnableAgents(mcpServers = {
  McpServers.DOCKER_DESKTOP
})
class Application {
    // ...
}

Using the @EnableAgents annotation, our application will be able to act as an MCP client and connect to tools available through the Docker Desktop integration.

Next, let’s define our agent and its first action:

@Agent(
  name = "quizzard",
  description = "Generate multiple choice quizzes from documents"
)
class QuizGeneratorAgent {
    @Action(toolGroups = CoreToolGroups.WEB)
    Blog fetchBlogContent(UserInput userInput) {
        return PromptRunner
          .usingLlm()
          .createObject(
            "Fetch the blog content from the URL given in the following request: '%s'".formatted(userInput),
            Blog.class
          );
    }
}

Here, we annotate our QuizGeneratorAgent class with the @Agent annotation to declare it as an agent.

We provide its purpose in the description attribute, as it helps Embabel select the correct agent to handle a user’s request, especially when we define multiple agents in an application.

Next, we define the fetchBlogContent() method and annotate it with @Action, which marks it as a capability that the agent can perform. Additionally, to grant our action access to the fetch MCP server we enabled, we specify CoreToolGroups.WEB in the toolGroups attribute.

Inside our method, we use the PromptRunner class to pass a prompt to extract the URL’s content from the user’s input. To populate a Blog object with this extracted information, we pass the Blog class as the second argument in the createObject() method. Embabel automatically adds instructions to our prompt for the LLM to produce a structured output.

It’s worth noting that Embabel provides the UserInput and Blog domain models, so we don’t need to create them ourselves. The UserInput contains the user’s text request and timestamp, while the Blog type includes fields for the blog’s title, content, author, etc.

3.2. Generating Quiz From Fetched Blog

Now that we have an action capable of fetching a blog’s content, the next step is to define an action to generate the quiz.

First, let’s define a record to represent the structure of our quiz:

record Quiz(List<QuizQuestion> questions) {
    record QuizQuestion(
        String question,
        List<String> options,
        String correctAnswer
    ) {
    }
}

Our Quiz record contains a list of nested QuizQuestion records, each holding a question along with its options and the correct answer.

Finally, let’s add the second action to our agent to generate the quiz from the fetched blog content:

@Value("classpath:prompt-templates/quiz-generation.txt")
private Resource promptTemplate;
@Action
@AchievesGoal(description = "Quiz has been generated")
Quiz generateQuiz(Blog blog) {
    String prompt = promptTemplate
      .getContentAsString(Charset.defaultCharset())
      .formatted(
        blog.getTitle(),
        blog.getContent()
      );
    return PromptRunner
      .usingLlm()
      .createObject(
        prompt,
        Quiz.class
      );
}

We create a new generateQuiz() method that takes the Blog object returned by our previous action as an argument.

In addition to @Action, we annotate this method with the @AchievesGoal annotation to indicate that the successful execution of this action fulfills the agent’s primary purpose.

In our method, we replace the placeholders in our promptTemplate with the blog’s title and content. Then, we use the PromptRunner class again to generate a Quiz object based on this prompt.

4. Interacting With Our Agent

Now that we’ve built our agent, let’s interact with it and test it out.

First, let’s enable the interactive shell mode in our application:

@EnableAgentShell
@SpringBootApplication
@EnableAgents(mcpServers = {
  McpServers.DOCKER_DESKTOP
})
class Application {
    // ...
}

We annotate our main Spring Boot class with the @EnableAgentShell annotation, which provides us with an interactive CLI for interacting with our agent.

Alternatively, we can also use the @EnableAgentMcpServer annotation to run our application as an MCP server where Embabel exposes our agent as an MCP-compatible tool, which an MCP client can consume. But for our demonstration, we’ll keep things simple.

Let’s start our application and give our agent a command:

execute 'Generate quiz for this article: https://www.baeldung.com/spring-ai-model-context-protocol-mcp'

Here, we use the execute command to send a request to our Quizzard agent. We provide natural language instructions that contain the URL of the Baeldung article we want to create a quiz for.

Let’s see the generated logs when we execute this command:

[main] INFO  Embabel - formulated plan 
	com.baeldung.quizzard.QuizGeneratorAgent.fetchBlogContent ->
		com.baeldung.quizzard.QuizGeneratorAgent.generateQuiz
[main] INFO  Embabel - executing action com.baeldung.quizzard.QuizGeneratorAgent.fetchBlogContent
[main] INFO  Embabel - (fetchBlogContent) calling tool embabel_docker_mcp_fetch({"url":"https://www.baeldung.com/spring-ai-model-context-protocol-mcp"})
[main] INFO  Embabel - received LLM response com.baeldung.quizzard.QuizGeneratorAgent.fetchBlogContent-com.embabel.agent.domain.library.Blog of type Blog from DefaultModelSelectionCriteria in 11 seconds
[main] INFO  Embabel - executing action com.baeldung.quizzard.QuizGeneratorAgent.generateQuiz
[main] INFO  Embabel - received LLM response com.baeldung.quizzard.QuizGeneratorAgent.generateQuiz-com.baeldung.quizzard.Quiz of type Quiz from DefaultModelSelectionCriteria in 5 seconds
[main] INFO  Embabel - goal com.baeldung.quizzard.QuizGeneratorAgent.generateQuiz achieved in PT16.332321S
You asked: UserInput(content=Generate quiz for this article: https://www.baeldung.com/spring-ai-model-context-protocol-mcp, timestamp=2025-07-09T15:36:20.056402Z)
{
  "questions":[
    {
      "question":"What is the primary purpose of the Model Context Protocol (MCP) introduced by Anthropic?",
      "options":[
        "To provide a standardized way to enhance AI model responses by connecting to external data sources",
        "To replace existing LLM architectures with a new protocol",
        "To create a new programming language for AI development",
        "To establish a security protocol for AI model deployment"
      ],
      "correctAnswer":"To provide a standardized way to enhance AI model responses by connecting to external data sources"
    },
    {
      "question":"In the MCP architecture, what is the relationship between MCP Clients and MCP Servers?",
      "options":[
        "MCP Clients establish many-to-many connections with MCP Servers",
        "MCP Clients establish 1:1 connections with MCP Servers",
        "MCP Servers establish connections to MCP Clients",
        "MCP Clients and Servers communicate through a central message broker"
      ],
      "correctAnswer":"MCP Clients establish 1:1 connections with MCP Servers"
    },
    {
      "question":"Which transport mechanism is used for TypeScript-based MCP servers in the tutorial?",
      "options":[
        "HTTP transport",
        "WebSocket transport",
        "stdio transport",
        "gRPC transport"
      ],
      "correctAnswer":"stdio transport"
    },
    {
      "question":"What annotation is used to expose custom tools in the MCP server implementation?",
      "options":[
        "@Tool",
        "@MCPTool",
        "@Function",
        "@Method"
      ],
      "correctAnswer":"@Tool"
    },
    {
      "question":"What type of transport is used for custom MCP servers in the tutorial?",
      "options":[
        "stdio transport",
        "HTTP transport",
        "SSE transport",
        "TCP transport"
      ],
      "correctAnswer":"SSE transport"
    }
  ]
}
LLMs used: [claude-opus-4-20250514]
Prompt tokens: 3,053, completion tokens: 996
Cost: $0.0141
Tool usage:
	ToolStats(name=embabel_docker_mcp_fetch, calls=1, avgResponseTime=1657 ms, failures=0)

The logs provide a clear view of the agent’s execution flow.

First, we can see that Embabel automatically and correctly formulated a plan to achieve our defined goal.

Next, the agent executes the plan, starting with the fetchBlogContent action. Additionally, we can see that it calls the fetch MCP server with the URL we provided. Once it fetches the content, the agent proceeds to the generateQuiz action.

Finally, the agent confirms that it achieved the goal and prints the final quiz in JSON format, along with useful metrics like token usage and cost.

5. Conclusion

In this article, we’ve explored how to build intelligent agents using the Embabel agent framework.

We built a practical, multi-step agent, Quizzard, that can fetch web content and generate a quiz from it. Our agent was able to dynamically determine the sequence of actions needed to achieve its goal.

As always, all the code examples used in this article are available over on GitHub.

The post Creating an AI Agent in Java Using Embabel Agent Framework first appeared on Baeldung.
       

Executing SQL Scripts in H2 Database


1. Overview

When working with Spring Boot applications, especially in integration tests, using the H2 in-memory database offers a lightweight and fast option for simulating real database interactions. As developers, we often need to initialize schemas, preload data, or execute custom SQL scripts during our tests.

In this tutorial, let’s walk through the common ways we can execute SQL scripts using H2 in a Spring Boot test environment.

2. Specifying Scripts in the JDBC URL

One handy feature of H2 is that it allows us to run SQL scripts automatically when the database is initialized, right from the JDBC URL. Next, let’s demonstrate how this works through an example.

First, let’s create a SQL file init_my_db.sql under the resources/sql directory:

CREATE TABLE TASK_TABLE
(
    ID   INT PRIMARY KEY,
    NAME VARCHAR(255)
);
INSERT INTO TASK_TABLE (ID, NAME) VALUES (1, 'Start the application');
INSERT INTO TASK_TABLE (ID, NAME) VALUES (2, 'Check if data table is filled');

The script creates a table and inserts two records into it. Next, let’s see how to instruct the H2 database to execute this script automatically in a Spring Boot application.

Let’s create a simple Spring Boot YAML configuration:

spring:
  datasource:
    driverClassName: org.h2.Driver
    url: jdbc:h2:mem:demodb;INIT=RUNSCRIPT FROM 'classpath:/sql/init_my_db.sql'
    username: sa
    password:

In this example, we defined a datasource that connects to an in-memory H2 database. Further, we added an INIT clause in the JDBC URL:

INIT=RUNSCRIPT FROM 'classpath:/sql/init_my_db.sql'

RUNSCRIPT FROM <script_path> is an H2 database command to run a given script. Therefore, the line above tells H2 to run the resource/sql/init_my_db.sql script as soon as the in-memory database starts. It’s a great way to bootstrap schema definitions or seed test data.

Next, let’s create a test to verify if the table and expected data exist after the application starts:

List<String> expectedTaskNames = List.of("Start the application", "Check if data table is filled");
List<String> taskNames = entityManager.createNativeQuery("SELECT NAME FROM TASK_TABLE ORDER BY ID")
  .getResultStream()
  .map(Object::toString)
  .toList();
assertEquals(expectedTaskNames, taskNames);

If we run this test, it passes. That is to say, H2 executed the script during initialization.

It’s worth noting that the RUNSCRIPT command understands Java’s classpath. Therefore, we used “classpath:/sql/…” in this example. If we want to provide H2 with the script through an absolute path, we can use the “file:” prefix, something like “RUNSCRIPT FROM ‘file:/path/to/my/script.sql’”.

3. Spring Boot’s Built-in Script Detection: schema.sql and data.sql

We’ve learned that H2 will automatically execute scripts if we add the INIT=RUNSCRIPT FROM … clause to H2’s JDBC URL.

Additionally, when we rely on Spring Data JPA conventions, Spring Boot automatically detects and runs two special files from the classpath:

  • schema.sql – For defining the database schema, such as creating schemas, tables, or views
  • data.sql – For populating initial data

That is to say, if our application is based on Spring Data JPA, we can put the desired SQL statements into the corresponding files. Spring automatically executes them on the application startup after the datasource is created. In this way, we don’t need to modify the H2 database’s JDBC URL.

Next, let’s understand how it works through an example.

First, let’s create a resources/schema.sql file:

CREATE TABLE CITY
(
    ID   INT PRIMARY KEY ,
    NAME VARCHAR(255)
);

Here, we create a CITY table. Then, we’d like to insert some initial data into the CITY table. So, let’s put some INSERT SQL statements in the resources/data.sql file:

INSERT INTO CITY (ID, NAME) VALUES (1, 'New York');
INSERT INTO CITY (ID, NAME) VALUES (2, 'Hamburg');
INSERT INTO CITY (ID, NAME) VALUES (3, 'Shanghai');

It’s essential to note that Spring Boot executes schema.sql first, followed by data.sql.

Now, let’s write a test to verify that the table is created and the expected data is inserted automatically by Spring Boot:

List<String> expectedCityNames = List.of("New York", "Hamburg", "Shanghai");
List<String> cityNames = entityManager.createNativeQuery("SELECT NAME FROM CITY ORDER BY ID")
  .getResultStream()
  .map(Object::toString)
  .toList();
assertEquals(expectedCityNames, cityNames);

If we run this test, it passes. Therefore, Spring Boot executed the two script files as we expected.

Sometimes, we may want to place schema.sql and data.sql in a different directory, rather than at the root of our classpath. Then, we can set the following properties to achieve it:

spring:
  sql:
    init:
      schema-locations: classpath:/the/path/to/schema.sql
      data-locations: classpath:/the/path/to/data.sql

The auto-detection of schema.sql and data.sql is enabled by default. However, if it’s required, we can turn off this default behavior entirely:

spring:
  sql:
    init:
      mode: never

This can be useful to prevent unwanted SQL from being loaded during application startup.

4. Executing SQL Scripts Dynamically via EntityManager

Sometimes, we want to execute existing SQL scripts on an H2 database from our Spring Boot application programmatically, perhaps to simulate specific data scenarios or reset the database state. Then, we can execute H2’s RUNSCRIPT command via a native query.

As usual, let’s see how it works through an example.

For simplicity, we’ll reuse the CITY table. Let’s create the resources/sql/add_cities.sql file to insert three new city records into the CITY table:

INSERT INTO CITY (ID, NAME) VALUES (4, 'Paris');
INSERT INTO CITY (ID, NAME) VALUES (5, 'Berlin');
INSERT INTO CITY (ID, NAME) VALUES (6, 'Tokyo');

Now, let’s execute this script using RUNSCRIPT in a native query:

entityManager.createNativeQuery("RUNSCRIPT FROM 'classpath:/sql/add_cities.sql'")
  .executeUpdate();
List<String> expectedCityNames = List.of("New York", "Hamburg", "Shanghai", "Paris", "Berlin", "Tokyo");
List<String> cityNames = entityManager.createNativeQuery("SELECT NAME FROM CITY ORDER BY ID")
  .getResultStream()
  .map(Object::toString)
  .toList();
assertEquals(expectedCityNames, cityNames);

The test passes if we give it a run.

The RUNSCRIPT command is native to H2 and supports reading SQL files from the classpath or the file system. We can use this inside test methods or setup blocks to execute scripts as needed.

It’s worth noting that when we create a native query with the RUNSCRIPT command, we should call the executeUpdate() method to execute the script. Calling getResultList() or a similar method raises an exception, even if the SQL script file includes SELECT statements.
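For example, here’s a minimal sketch that reloads the script before each test; it assumes a transactional Spring Boot test with an injected EntityManager:

@PersistenceContext
private EntityManager entityManager;

@BeforeEach
void reloadCities() {
    // executeUpdate() is required for RUNSCRIPT, even if the script contains SELECT statements
    entityManager.createNativeQuery("RUNSCRIPT FROM 'classpath:/sql/add_cities.sql'")
      .executeUpdate();
}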

5. Conclusion

In this article, we’ve learned several ways to execute scripts in an H2 database. Whether we initialize the database using the INIT clause in the JDBC URL, execute scripts dynamically via EntityManager, or rely on Spring Boot’s built-in support for schema.sql and data.sql, each approach serves a specific need in the testing lifecycle.

Embracing these strategies empowers us to write cleaner, more maintainable integration tests and, ultimately, more robust applications.

As always, the complete source code for the examples is available over on GitHub.

The post Executing SQL Scripts in H2 Database first appeared on Baeldung.
       

