
Exploring Model Context Protocol (MCP) With Spring AI


1. Overview

Modern web applications increasingly integrate with Large Language Models (LLMs) to build solutions that go beyond general knowledge-based question answering.

To enhance an AI model’s response and make it more context-aware, we can connect it to external sources like search engines, databases, and file systems. However, integrating and managing multiple data sources with different formats and protocols is a challenge.

The Model Context Protocol (MCP), introduced by Anthropic, addresses this integration challenge and provides a standardized way to connect AI-powered applications with external data sources. Through MCP, we can build complex agents and workflows on top of a native LLM.

In this tutorial, we’ll understand the concept of MCP by practically implementing its client-server architecture using Spring AI. We’ll create a simple chatbot and extend its capabilities through MCP servers to perform web searches, execute filesystem operations, and access custom business logic.

2. Model Context Protocol 101

Before we dive into the implementation, let’s take a closer look at MCP and its various components:

Architecture diagram of the model context protocol (MCP) demonstrating the relationship between host, clients, servers, and external sources.

MCP follows a client-server architecture that revolves around several key components:

  • MCP Host: the main application that integrates with an LLM and requires it to connect with external data sources
  • MCP Clients: components that establish and maintain 1:1 connections with MCP servers
  • MCP Servers: components that integrate with external data sources and expose functionalities to interact with them
  • Tools: the executable functions/methods that MCP servers expose for clients to invoke

Additionally, to handle communication between clients and servers, MCP provides two transport channels.

To enable communication through standard input and output streams with local processes and command-line tools, it provides the Standard Input/Output (stdio) transport type. Alternatively, for HTTP-based communication between clients and servers, it provides the Server-Sent Events (SSE) transport type.
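Regardless of the transport, the messages themselves are JSON-RPC 2.0 objects. As a rough, illustrative sketch (simplified from the MCP specification; the tool name and argument shape below are assumptions for demonstration), a client invoking a tool sends a tools/call request like this:

```java
// Hypothetical sketch: builds the JSON-RPC 2.0 payload an MCP client sends
// to invoke a tool. Over stdio, this travels as a line of JSON; over SSE,
// as an HTTP event payload. Hand-rolled string building for illustration only.
class McpMessageSketch {

    static String toolCallRequest(int id, String toolName, String argsJson) {
        return "{\"jsonrpc\":\"2.0\",\"id\":" + id
          + ",\"method\":\"tools/call\",\"params\":{\"name\":\"" + toolName
          + "\",\"arguments\":" + argsJson + "}}";
    }

    public static void main(String[] args) {
        // Example: what a web-search tool invocation might look like
        System.out.println(toolCallRequest(1, "brave_web_search", "{\"query\":\"spring ai\"}"));
    }
}
```

The transport only decides how these bytes move between client and server; the request/response semantics stay the same.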

MCP is a complex and vast topic; refer to the official documentation to learn more.

3. Creating an MCP Host

Now that we’ve got a high-level understanding of MCP, let’s start implementing the MCP architecture practically.

We’ll be building a chatbot using Anthropic’s Claude model, which will act as our MCP host. Alternatively, we can use a local LLM via Hugging Face or Ollama, as the specific AI model is irrelevant for this demonstration.

3.1. Dependencies

Let’s start by adding the necessary dependencies to our project’s pom.xml file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-anthropic-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>

The Anthropic starter dependency is a wrapper around the Anthropic Message API, and we’ll use it to interact with the Claude model in our application.

Additionally, we import the MCP client starter dependency, which will allow us to configure clients inside our Spring Boot application that maintain 1:1 connections with the MCP servers.

Since the current version, 1.0.0-M6, is a milestone release, we’ll also need to add the Spring Milestones repository to our pom.xml:

<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>

This repository is where milestone versions are published, as opposed to the standard Maven Central repository.

Given that we’re using multiple Spring AI starters in our project, let’s also include the Spring AI Bill of Materials (BOM) in our pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0-M6</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

With this addition, we can now remove the version tag from both of our starter dependencies. The BOM eliminates the risk of version conflicts and ensures our Spring AI dependencies are compatible with each other.
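With the BOM in place, the earlier starter declarations simplify to version-less entries:

```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-anthropic-spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId>
</dependency>
```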

Next, let’s configure our Anthropic API key and chat model in the application.yaml file:

spring:
  ai:
    anthropic:
      api-key: ${ANTHROPIC_API_KEY}
      chat:
        options:
          model: claude-3-5-sonnet-20241022

We use the ${} property placeholder to load the value of our API Key from an environment variable.

Additionally, we specify Claude 3.5 Sonnet, Anthropic's most capable model at the time of writing, using the claude-3-5-sonnet-20241022 model ID. Feel free to explore and use a different model based on your requirements.

With the above properties configured, Spring AI automatically creates a bean of type ChatModel, allowing us to interact with the specified model.

3.2. Configuring MCP Clients for Brave Search and Filesystem Servers

Now, let’s configure MCP clients for two pre-built MCP server implementations: Brave Search and Filesystem. These servers will enable our chatbot to perform web searches and filesystem operations.

Let’s start by registering an MCP client for the Brave Search MCP server in the application.yaml file:

spring:
  ai:
    mcp:
      client:
        stdio:
          connections:
            brave-search:
              command: npx
              args:
                - "-y"
                - "@modelcontextprotocol/server-brave-search"
              env:
                BRAVE_API_KEY: ${BRAVE_API_KEY}

Here, we configure a client with stdio transport. We specify the npx command to download and run the TypeScript-based @modelcontextprotocol/server-brave-search package and use the -y flag to confirm all the installation prompts.

Additionally, we provide the BRAVE_API_KEY as an environment variable.

Next, let’s configure an MCP client for the Filesystem MCP server:

spring:
  ai:
    mcp:
      client:
        stdio:
          connections:
            filesystem:
              command: npx
              args:
                - "-y"
                - "@modelcontextprotocol/server-filesystem"
                - "./"

Similar to the previous configuration, we specify the command and arguments required to run the Filesystem MCP server package. This setup allows our chatbot to perform operations like creating, reading, and writing files in the specified directory.

Here, we only configure the current directory (./) to be used for filesystem operations. However, we can specify multiple directories by adding them to the args list.
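The directory argument acts as a trust boundary: the server refuses to touch paths outside it. To make that concrete, here's a self-contained sketch (our own simplification, not the server's actual implementation) of what a write-style filesystem tool has to do before touching disk:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: a filesystem tool resolves the requested path against
// an allowed root and rejects anything that escapes it (e.g. via "..").
class FileToolSketch {

    static String writeFile(Path allowedRoot, String relativePath, String content) throws Exception {
        Path target = allowedRoot.resolve(relativePath).normalize();
        if (!target.startsWith(allowedRoot.normalize())) {
            throw new SecurityException("Path outside allowed directory: " + relativePath);
        }
        Files.writeString(target, content);
        return "Successfully wrote to " + relativePath;
    }
}
```

This is why adding a directory to the args list matters: only paths under a configured root are reachable from the chatbot.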

During application startup, Spring AI will scan our configurations, create the MCP clients, and establish connections with their corresponding MCP servers. It also creates a bean of type SyncMcpToolCallbackProvider, which provides a list of all the tools exposed by the configured MCP servers.

3.3. Building a Basic Chatbot

With our AI model and MCP clients configured, let’s build a simple chatbot:

@Bean
ChatClient chatClient(ChatModel chatModel, SyncMcpToolCallbackProvider toolCallbackProvider) {
    return ChatClient
      .builder(chatModel)
      .defaultTools(toolCallbackProvider.getToolCallbacks())
      .build();
}

We start by creating a bean of type ChatClient using the ChatModel and SyncMcpToolCallbackProvider beans. The ChatClient class will act as our main entry point for interacting with our chat completion model, i.e., Claude 3.5 Sonnet.

Next, let’s inject the ChatClient bean to create a new ChatbotService class:

String chat(String question) {
    return chatClient
      .prompt()
      .user(question)
      .call()
      .content();
}

We create a chat() method where we pass the user’s question to the chat client bean and simply return the AI model’s response.

Now that we’ve implemented our service layer, let’s expose a REST API on top of it:

@PostMapping("/chat")
ResponseEntity<ChatResponse> chat(@RequestBody ChatRequest chatRequest) {
    String answer = chatbotService.chat(chatRequest.question());
    return ResponseEntity.ok(new ChatResponse(answer));
}
record ChatRequest(String question) {}
record ChatResponse(String answer) {}

We’ll use the above API endpoint to interact with our chatbot later in the tutorial.

4. Creating a Custom MCP Server

In addition to using pre-built MCP servers, we can create our own MCP servers to extend the capabilities of our chatbot with our business logic.

Let’s explore how to create a custom MCP server using Spring AI.

We’ll be creating a new Spring Boot application in this section.

4.1. Dependencies

First, let’s include the necessary dependency in our pom.xml file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-server-webmvc-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>

We import Spring AI’s MCP server dependency, which provides the necessary classes for creating a custom MCP server that supports the HTTP-based SSE transport.
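We can also name the server and pin its port in the new application's application.yaml. The spring.ai.mcp.server properties below follow the MCP server starter's conventions, and port 8081 is an assumption chosen to match the client configuration we'll write shortly:

```yaml
spring:
  ai:
    mcp:
      server:
        name: author-tools-server
        version: 1.0.0
server:
  port: 8081
```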

4.2. Defining and Exposing Custom Tools

Next, let’s define a few custom tools that our MCP server will expose.

We’ll create an AuthorRepository class that provides methods to fetch author details:

class AuthorRepository {
    @Tool(description = "Get Baeldung author details using an article title")
    Author getAuthorByArticleTitle(String articleTitle) {
        return new Author("John Doe", "john.doe@baeldung.com");
    }
    @Tool(description = "Get highest rated Baeldung authors")
    List<Author> getTopAuthors() {
        return List.of(
          new Author("John Doe", "john.doe@baeldung.com"),
          new Author("Jane Doe", "jane.doe@baeldung.com")
        );
    }
    record Author(String name, String email) {
    }
}

For our demonstration, we’re returning hardcoded author details, but in a real application, the tools would typically interact with a database or an external API.

We annotate our two methods with the @Tool annotation and provide a brief description for each. The description helps the AI model decide if and when to call a tool based on the user input, and to incorporate the result into its response.
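Conceptually, once the model decides to call a tool, the host resolves the registered callback by name and invokes it with the model-supplied argument. Here's a simplified, hypothetical sketch of that dispatch step (plain Java, not Spring AI's actual types):

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch: a registry mapping tool names to callbacks, as a
// simplified stand-in for what the framework maintains under the hood.
class ToolDispatchSketch {

    static final Map<String, Function<String, String>> TOOLS = Map.of(
      "getAuthorByArticleTitle", title -> "John Doe <john.doe@baeldung.com>"
    );

    // Looks up the callback by the tool's name and invokes it.
    static String dispatch(String toolName, String argument) {
        Function<String, String> tool = TOOLS.get(toolName);
        if (tool == null) {
            return "Unknown tool: " + toolName;
        }
        return tool.apply(argument);
    }
}
```

The @Tool description is what lets the model pick the right key in this lookup: a vague description makes the tool effectively invisible.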

Next, let’s register our author tools with the MCP server:

@Bean
ToolCallbackProvider authorTools() {
    return MethodToolCallbackProvider
      .builder()
      .toolObjects(new AuthorRepository())
      .build();
}

We use the MethodToolCallbackProvider to create a ToolCallbackProvider bean from the tools defined in our AuthorRepository class. The methods annotated with @Tool will get exposed as MCP tools when the application starts up.

4.3. Configuring an MCP Client for Our Custom MCP Server

Finally, to use our custom MCP server in our chatbot application, we need to configure an MCP client against it:

spring:
  ai:
    mcp:
      client:
        sse:
          connections:
            author-tools-server:
              url: http://localhost:8081

In the application.yaml file, we configure a new client against our custom MCP server. Note that we’re using the sse transport type here.

This configuration assumes the MCP server is running at http://localhost:8081; make sure to update the url if it’s running on a different host or port.

With this configuration, our MCP client can now invoke the tools exposed by our custom server, in addition to the tools provided by the Brave Search and Filesystem MCP servers.

5. Interacting With Our Chatbot

Now that we’ve built our chatbot and integrated it with various MCP servers, let’s interact with it and test it out.

We’ll use the HTTPie CLI to invoke the chatbot’s API endpoint:

http POST :8080/chat question="How much was Elon Musk's initial offer to buy OpenAI in 2025?"

Here, we send a simple question to the chatbot about an event that occurred after the LLM’s knowledge cut-off date. Let’s see what we get as a response:

{
    "answer": "Elon Musk's initial offer to buy OpenAI was $97.4 billion. [Source](https://www.reuters.com/technology/openai-board-rejects-musks-974-billion-offer-2025-02-14/)."
}

As we can see, the chatbot was able to perform a web search using the configured Brave Search MCP server and provide an accurate answer along with a source.

Next, let’s verify that the chatbot can perform filesystem operations using the Filesystem MCP server:

http POST :8080/chat question="Create a text file named 'mcp-demo.txt' with content 'This is awesome!'."

We instruct the chatbot to create an mcp-demo.txt file with certain content. Let’s see if it’s able to fulfill the request:

{
    "answer": "The text file named 'mcp-demo.txt' has been successfully created with the content you specified."
}

The chatbot confirms the operation succeeded. We can verify that the file was created in the directory we specified in the application.yaml file.

Finally, let’s verify that the chatbot can call one of the tools exposed by our custom MCP server. We’ll ask for the author details of an article by mentioning its title:

http POST :8080/chat question="Who wrote the article 'Testing CORS in Spring Boot' on Baeldung, and how can I contact them?"

Let’s invoke the API and see if the chatbot response contains the hardcoded author details:

{
    "answer": "The article 'Testing CORS in Spring Boot' on Baeldung was written by John Doe. You can contact him via email at [john.doe@baeldung.com](mailto:john.doe@baeldung.com)."
}

The above response verifies that the chatbot fetches the author details using the getAuthorByArticleTitle() tool that our custom MCP server exposes.

We highly recommend setting up the codebase locally and playing around with the chatbot using different prompts.

6. Conclusion

In this article, we’ve explored the Model Context Protocol and implemented its client-server architecture using Spring AI.

First, we built a simple chatbot using Anthropic’s Claude 3.5 Sonnet model to act as our MCP host.

Then, to provide our chatbot with web search capability and enable it to execute filesystem operations, we configured MCP clients against pre-built MCP server implementations of Brave Search API and Filesystem.

Finally, we created a custom MCP server and configured its corresponding MCP client inside our MCP host application.

As always, all the code examples used in this article are available over on GitHub.

The post Exploring Model Context Protocol (MCP) With Spring AI first appeared on Baeldung.