In the previous tutorial, we learned how to build a chatbot that can remember conversational context using message history and session IDs. At that stage, we were directly passing lists of messages—human messages, AI messages, and system messages—into the language model.
While that approach works, it is not how real-world chatbot systems are typically designed.
In this tutorial, we take the next step and introduce prompt templates, which allow us to structure conversations more cleanly, add system-level instructions, and support multiple input variables while still preserving chat history.
Why Prompt Templates Matter in Chatbots
Until now, we were passing raw messages directly to the model. This is simple, but it quickly becomes limiting as the chatbot grows in complexity.
Prompt templates solve this problem by acting as a translation layer. They convert raw user input and contextual data into a structured format that the language model can reliably work with. This becomes especially important when:
- You want to inject system instructions
- You want to support multiple input fields (for example, language preferences)
- You want memory and prompts to work together cleanly
In short, prompt templates turn a basic chatbot into a controlled conversational system.
Step 1: Importing Prompt Template Components
To work with prompt templates in LangChain, we start by importing two key components:
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
- ChatPromptTemplate defines the structure of the prompt
- MessagesPlaceholder tells LangChain where the conversation history should be injected
This placeholder is critical when combining prompt templates with chat history.
Step 2: Creating a Prompt Template with a System Message
The first improvement we make is adding a system message. System messages allow us to guide the behavior of the language model.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
        MessagesPlaceholder(variable_name="messages")
    ]
)
Here’s what is happening conceptually:
- The system message sets the assistant’s role and behavior
- The messages placeholder marks where conversational history will be inserted
- We no longer manually pass lists of messages into the model
This structure is far more flexible than passing raw message lists.
Step 3: Creating a Simple Prompt-Based Chain
Once the prompt is defined, we combine it with the model to form a chain.
chain = prompt | model
This chain now represents:
- A system instruction
- A placeholder for chat history
- A language model invocation
At this point, the chain is stateless—memory is not yet involved.
Step 4: Invoking the Chain Using the Messages Placeholder
Because we used a MessagesPlaceholder named "messages", all human input must be passed using that key.
from langchain_core.messages import HumanMessage
response = chain.invoke(
    {
        "messages": [
            HumanMessage(content="Hello, my name is Alex.")
        ]
    }
)
print(response.content)
This is an important shift:
- We no longer pass messages directly to the model
- We pass them through the prompt template
- The template decides how messages are arranged internally
Step 5: Adding Chat History to the Prompt-Based Chain
Now we combine prompt templates with message history, which is where things become powerful.
Assuming we already have a session history function:
store = {}

def get_session_history(session_id):
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]
We wrap the chain using RunnableWithMessageHistory:
from langchain_core.runnables import RunnableWithMessageHistory
chatbot = RunnableWithMessageHistory(
    chain,
    get_session_history
)
At this point:
- The prompt template handles structure
- The message history handles memory
- The model remains stateless
Step 6: Invoking the Chatbot with Session Memory
We now invoke the chatbot using both:
- The messages key
- A session configuration
config = {
    "configurable": {
        "session_id": "chat_3"
    }
}

response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="Hello, my name is Alex.")
        ]
    },
    config=config
)
print(response.content)
If we ask a follow-up question using the same session ID:
response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="What is my name?")
        ]
    },
    config=config
)
print(response.content)
The chatbot remembers the context, because:
- The same session ID is used
- Message history is automatically injected into the prompt
Step 7: Adding Multiple Input Variables to the Prompt
Now let’s make the prompt more sophisticated by introducing multiple input variables.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Answer all questions in {language}."),
        MessagesPlaceholder(variable_name="messages")
    ]
)
chain = prompt | model
Here, {language} becomes an additional input variable alongside messages.
Step 8: Invoking a Multi-Input Prompt
Because the prompt now expects two inputs, we must supply both.
response = chain.invoke(
    {
        "messages": [
            HumanMessage(content="Hello, my name is Alex.")
        ],
        "language": "Hindi"
    }
)
print(response.content)
The model now:
- Uses chat messages for conversation
- Uses the language variable to control response output
Step 9: Combining Multi-Input Prompts with Chat History
When multiple input keys exist, we must explicitly tell LangChain which key represents chat messages.
chatbot = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="messages"
)
This step is crucial. Without it, LangChain would not know which input should be stored as conversational memory.
Step 10: Invoking the Stateful, Multi-Input Chatbot
config = {
    "configurable": {
        "session_id": "chat_4"
    }
}

response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="Hello, my name is Alex.")
        ],
        "language": "Hindi"
    },
    config=config
)
print(response.content)
A follow-up question works as expected:
response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="What is my name?")
        ],
        "language": "Hindi"
    },
    config=config
)
print(response.content)
The chatbot remembers the name and responds in the specified language.
Example
import os
from dotenv import load_dotenv
from langchain_groq import ChatGroq
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.runnables import RunnableWithMessageHistory
# -------------------------------------------------
# Load environment variables
# -------------------------------------------------
load_dotenv()
GROQ_API_KEY = os.getenv("GROQ_API_KEY")
# -------------------------------------------------
# Initialize LLM
# -------------------------------------------------
model = ChatGroq(
    api_key=GROQ_API_KEY,
    model="gemma2-9b-it"
)
# -------------------------------------------------
# Create Prompt Template
# -------------------------------------------------
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant. "
            "Answer all questions to the best of your ability in {language}."
        ),
        MessagesPlaceholder(variable_name="messages")
    ]
)
# -------------------------------------------------
# Create Prompt → Model Chain
# -------------------------------------------------
chain = prompt | model
# -------------------------------------------------
# In-memory session store
# -------------------------------------------------
store = {}
def get_session_history(session_id: str) -> BaseChatMessageHistory:
    """
    Returns chat history for a given session ID.
    Creates a new history if session does not exist.
    """
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]
# -------------------------------------------------
# Wrap Chain with Message History
# -------------------------------------------------
chatbot = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="messages"
)
# -------------------------------------------------
# Session configuration
# -------------------------------------------------
config = {
    "configurable": {
        "session_id": "chat_session_1"
    }
}
# -------------------------------------------------
# Conversation Begins
# -------------------------------------------------
print("\n--- Conversation Start ---\n")
response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="Hello, my name is Alex.")
        ],
        "language": "English"
    },
    config=config
)
print("AI:", response.content)
response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="What is my name?")
        ],
        "language": "English"
    },
    config=config
)
print("AI:", response.content)
response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="Now respond in Hindi.")
        ],
        "language": "Hindi"
    },
    config=config
)
print("AI:", response.content)
response = chatbot.invoke(
    {
        "messages": [
            HumanMessage(content="What is my name?")
        ],
        "language": "Hindi"
    },
    config=config
)
print("AI:", response.content)
print("\n--- Conversation End ---\n")
Output
--- Conversation Start ---

AI: Hello Alex, it's nice to meet you. I'm here to help with any questions or information you may need. Is there something specific on your mind, or would you like to chat?

AI: Your name is Alex.

AI: आपका नाम अलेक्स है।

AI: आपका नाम अलेक्स है।

--- Conversation End ---
