
1. Overview
Modern web applications increasingly integrate with Large Language Models (LLMs) to build solutions such as chatbots.
DeepSeek is a Chinese AI research company that develops powerful LLMs and has recently disrupted the AI world with its DeepSeek-V3 and DeepSeek-R1 models. In addition to its answer, DeepSeek-R1 exposes its Chain of Thought (CoT), giving us insight into how the model interprets and approaches a given prompt.
In this tutorial, we’ll explore integrating DeepSeek models with Spring AI. We’ll build a simple chatbot capable of engaging in multi-turn textual conversations.
2. Dependencies and Configuration
There are multiple ways to integrate DeepSeek models into our application, and in this section, we’ll discuss a few popular options. We can choose the one that best fits our requirements.
2.1. Using OpenAI APIs
DeepSeek models are fully compatible with the OpenAI APIs and can be accessed with any OpenAI client or library.
Let’s start by adding Spring AI’s OpenAI starter dependency to our project’s pom.xml file:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>
Since the current version, 1.0.0-M6, is a milestone release, we’ll also need to add the Spring Milestones repository to our pom.xml:
<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>
This repository is where milestone versions are published, as opposed to the standard Maven Central repository. We’ll need to add this milestone repository irrespective of the configuration option we choose.
Next, let’s configure our DeepSeek API key and chat model in the application.yaml file:
spring:
  ai:
    openai:
      api-key: ${DEEPSEEK_API_KEY}
      chat:
        options:
          model: deepseek-reasoner
      base-url: https://api.deepseek.com
      embedding:
        enabled: false
Additionally, we specify the DeepSeek API’s base URL and disable embeddings since DeepSeek currently doesn’t offer any embedding-compatible models.
Once we configure the above properties, Spring AI automatically creates a bean of type ChatModel, allowing us to interact with the specified model. We’ll use it to define a few additional beans for our chatbot later in the tutorial.
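For instance, we could inject this auto-configured bean and send a one-off prompt to verify the setup. The following is a minimal sketch, with an example prompt of our choosing:
@Autowired
private ChatModel chatModel;

void verifySetup() {
    // ChatModel exposes a convenience call() method for simple one-shot prompts
    String response = chatModel.call("In one sentence, what is Spring AI?");
    System.out.println(response);
}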
2.2. Using Amazon Bedrock Converse API
Alternatively, we can use the Amazon Bedrock Converse API to integrate the DeepSeek R1 model into our application.
To follow along with this configuration step, we’ll need an active AWS account. The DeepSeek-R1 model is available through the Amazon Bedrock Marketplace and can be hosted using Amazon SageMaker. We can reference this deployment guide to set it up.
Let’s start by adding the Bedrock Converse starter dependency to our pom.xml:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-bedrock-converse-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>
Next, to interact with Amazon Bedrock, we need to configure our AWS credentials for authentication and the region where the DeepSeek model is hosted in the application.yaml file:
spring:
  ai:
    bedrock:
      aws:
        region: ${AWS_REGION}
        access-key: ${AWS_ACCESS_KEY}
        secret-key: ${AWS_SECRET_KEY}
      converse:
        chat:
          options:
            model: arn:aws:sagemaker:REGION:ACCOUNT_ID:endpoint/ENDPOINT_NAME
We use the ${} property placeholder to load the values of our properties from environment variables.
Additionally, we specify the ARN of the SageMaker endpoint where the DeepSeek model is hosted. We should remember to replace the REGION, ACCOUNT_ID, and ENDPOINT_NAME placeholders with the actual values.
Finally, to interact with the model, we’ll need to assign the following IAM policy to the IAM user we’ve configured in our application:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": "arn:aws:bedrock:REGION:ACCOUNT_ID:marketplace/model-endpoint/all-access"
        }
    ]
}
Again, we should remember to replace the REGION and ACCOUNT_ID placeholders with the actual values in the Resource ARN.
2.3. Local Setup With Ollama
For local development and testing, we can run the DeepSeek models via Ollama, which is an open-source tool that allows us to run LLMs on our local machines.
Let’s import the necessary dependency in our project’s pom.xml file:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>
The Ollama starter dependency helps us establish a connection with the Ollama service.
Next, let’s configure our chat model in the application.yaml file:
spring:
  ai:
    ollama:
      chat:
        options:
          model: deepseek-r1
      init:
        pull-model-strategy: when_missing
      embedding:
        enabled: false
Here, we specify the deepseek-r1 model; however, we can also try this implementation with a different available model.
Additionally, we set the pull-model-strategy to when_missing. This ensures that Spring AI pulls the specified model if it’s not available locally.
Spring AI automatically connects to Ollama when running on localhost on its default port of 11434. However, we can override the connection URL using the spring.ai.ollama.base-url property. Alternatively, we can use Testcontainers to set up the Ollama service.
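For instance, a live test could spin up Ollama in a container and point Spring AI at it. The following is a minimal sketch, assuming the org.testcontainers:ollama test dependency is on the classpath and an image tag of our choosing:
@SpringBootTest
@Testcontainers
class ChatbotLiveTest {

    @Container
    static OllamaContainer ollama = new OllamaContainer("ollama/ollama:latest");

    @DynamicPropertySource
    static void overrideOllamaBaseUrl(DynamicPropertyRegistry registry) {
        // Point Spring AI at the containerized Ollama instance
        registry.add("spring.ai.ollama.base-url", ollama::getEndpoint);
    }
}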
Here, again, Spring AI automatically creates the ChatModel bean for us. If, for some reason, we have all three dependencies (OpenAI, Bedrock Converse, and Ollama) on our classpath, we can reference the specific bean we want using the qualifiers openAiChatModel, bedrockProxyChatModel, or ollamaChatModel, respectively.
3. Building a Chatbot
Now that we’ve discussed the various configuration options, let’s build a simple chatbot using the configured DeepSeek model.
3.1. Defining Chatbot Beans
Let’s start by defining the necessary beans for our chatbot:
@Bean
ChatMemory chatMemory() {
    return new InMemoryChatMemory();
}

@Bean
ChatClient chatClient(ChatModel chatModel, ChatMemory chatMemory) {
    return ChatClient
      .builder(chatModel)
      .defaultAdvisors(new MessageChatMemoryAdvisor(chatMemory))
      .build();
}
First, we define a ChatMemory bean using the InMemoryChatMemory implementation, which stores the chat history in memory to maintain conversation context.
Next, we create a ChatClient bean using the ChatModel and ChatMemory beans. The ChatClient class serves as our main entry point for interacting with the DeepSeek model we’ve configured.
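As a quick sanity check, we can already send a prompt through the ChatClient’s fluent API. This throwaway snippet isn’t part of the final chatbot:
String reply = chatClient
  .prompt()
  .user("Say hello!")
  .call()
  .content();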
3.2. Creating a Custom StructuredOutputConverter
As previously mentioned, the DeepSeek-R1 model’s response includes its CoT, and we get the response in the following format:
<think>
Chain of Thought
</think>
Answer
Unfortunately, due to this unique format, all the structured output converters present in the current version of Spring AI fail and throw an exception when we try to parse the response into a Java class.
So let’s create our own custom StructuredOutputConverter implementation to parse the AI model’s answer and CoT separately:
record DeepSeekModelResponse(String chainOfThought, String answer) {
}

class DeepSeekModelOutputConverter implements StructuredOutputConverter<DeepSeekModelResponse> {

    private static final Logger logger = LoggerFactory.getLogger(DeepSeekModelOutputConverter.class);
    private static final String OPENING_THINK_TAG = "<think>";
    private static final String CLOSING_THINK_TAG = "</think>";

    @Override
    public DeepSeekModelResponse convert(@NonNull String text) {
        if (!StringUtils.hasText(text)) {
            throw new IllegalArgumentException("Text cannot be blank");
        }
        int openingThinkTagIndex = text.indexOf(OPENING_THINK_TAG);
        int closingThinkTagIndex = text.indexOf(CLOSING_THINK_TAG);

        if (openingThinkTagIndex != -1 && closingThinkTagIndex != -1 && closingThinkTagIndex > openingThinkTagIndex) {
            String chainOfThought = text.substring(openingThinkTagIndex + OPENING_THINK_TAG.length(), closingThinkTagIndex);
            String answer = text.substring(closingThinkTagIndex + CLOSING_THINK_TAG.length());
            return new DeepSeekModelResponse(chainOfThought, answer);
        } else {
            logger.debug("No <think> tags found in the response. Treating entire text as answer.");
            return new DeepSeekModelResponse(null, text);
        }
    }

    @Override
    public String getFormat() {
        // DeepSeek-R1 emits <think> tags on its own, so no extra format instructions are needed
        return "";
    }
}
Here, our converter extracts the chainOfThought and answer from the AI model’s response and returns them as a DeepSeekModelResponse record.
If the AI response doesn’t contain <think> tags, we treat the entire response as the answer. This ensures compatibility with other DeepSeek models that don’t include CoT in their responses.
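To illustrate the converter’s behavior, here’s a quick sketch with two made-up inputs:
DeepSeekModelOutputConverter converter = new DeepSeekModelOutputConverter();

DeepSeekModelResponse withCot = converter.convert("<think>Reasoning goes here</think>Final answer");
// withCot.chainOfThought() -> "Reasoning goes here"
// withCot.answer() -> "Final answer"

DeepSeekModelResponse withoutCot = converter.convert("Just an answer");
// withoutCot.chainOfThought() -> null
// withoutCot.answer() -> "Just an answer"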
3.3. Implementing the Service Layer
With our configurations in place, let’s create a ChatbotService class. We’ll inject the ChatClient bean we defined earlier to interact with the specified DeepSeek model.
But first, let’s define two simple records to represent the chat request and response:
record ChatRequest(@Nullable UUID chatId, String question) {}
record ChatResponse(UUID chatId, String chainOfThought, String answer) {}
The ChatRequest contains the user’s question and an optional chatId to identify an ongoing conversation.
Similarly, the ChatResponse contains the chatId, along with the chatbot’s chainOfThought and answer.
Now, let’s implement the intended functionality:
ChatResponse chat(ChatRequest chatRequest) {
    UUID chatId = Optional
      .ofNullable(chatRequest.chatId())
      .orElse(UUID.randomUUID());

    DeepSeekModelResponse response = chatClient
      .prompt()
      .user(chatRequest.question())
      .advisors(advisorSpec ->
          advisorSpec.param("chat_memory_conversation_id", chatId))
      .call()
      .entity(new DeepSeekModelOutputConverter());

    return new ChatResponse(chatId, response.chainOfThought(), response.answer());
}
If the incoming request doesn’t contain a chatId, we generate a new one. This allows the user to start a new conversation or continue an existing one.
We pass the user’s question to the chatClient bean and set the chat_memory_conversation_id parameter to the resolved chatId to maintain conversation history.
Finally, we create an instance of our custom DeepSeekModelOutputConverter class and pass it to the entity() method to parse the AI model’s response into a DeepSeekModelResponse record. Then, we extract the chainOfThought and answer from it and return them along with the chatId.
3.4. Interacting With Our Chatbot
Now that we’ve implemented our service layer, let’s expose a REST API on top of it:
@PostMapping("/chat")
ResponseEntity<ChatResponse> chat(@RequestBody ChatRequest chatRequest) {
    ChatResponse chatResponse = chatbotService.chat(chatRequest);
    return ResponseEntity.ok(chatResponse);
}
Let’s use the HTTPie CLI to invoke the above API endpoint and start a new conversation:
http POST :8080/chat question="What was the name of Superman's adoptive mother?"
Here, we send a simple question to the chatbot. Let’s see what we receive as a response:
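The exact wording differs between runs, but the response body looks something like this, with the chainOfThought shortened for brevity and the values illustrative:
{
    "chatId": "1e3c151f-cded-4f10-a5fc-c52c5952411c",
    "chainOfThought": "Okay, the user is asking about Superman's adoptive mother. Superman was raised in Smallville by the Kents...",
    "answer": "Superman's adoptive mother is Martha Kent."
}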
The response contains a unique chatId, as well as the chatbot’s chainOfThought and answer to our question. We can see how the AI model reasons through and approaches the given prompt using the chainOfThought attribute.
Let’s continue this conversation by sending a follow-up question using the chatId from the above response:
http POST :8080/chat question="Which bald billionaire hates him?" chatId="1e3c151f-cded-4f10-a5fc-c52c5952411c"
Let’s see if the chatbot can maintain the context of our conversation and provide a relevant response:
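Again, the exact wording varies, but we receive a response of roughly this shape (illustrative values):
{
    "chatId": "1e3c151f-cded-4f10-a5fc-c52c5952411c",
    "chainOfThought": "The previous question was about Superman, so the user is asking which bald billionaire hates Superman. That would be Lex Luthor...",
    "answer": "The bald billionaire who hates Superman is Lex Luthor."
}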
As we can see, the chatbot does indeed maintain the conversation context. The chatId remains the same, indicating that the follow-up answer is a continuation of the same conversation.
4. Conclusion
In this article, we’ve explored using DeepSeek models with Spring AI.
We discussed various options to integrate DeepSeek models into our application, including one where we use the OpenAI API directly since DeepSeek is compatible with it, and another where we work with Amazon’s Bedrock Converse API. Additionally, we explored setting up a local test environment using Ollama.
Then, we built a simple chatbot capable of multi-turn textual conversations and used a custom StructuredOutputConverter implementation to extract the chain of thought and answer from the AI model’s response.
As always, all the code examples used in this article are available over on GitHub.