1. Prerequisites and Setup
In the previous video, we:
- Installed LangChain and LangSmith related libraries
- Generated the keys OPENAI_API_KEY, LANGCHAIN_API_KEY, and LANGCHAIN_PROJECT (for LangSmith)
- Stored them in a .env file
Example .env (already created earlier):
OPENAI_API_KEY=sk-...
LANGCHAIN_API_KEY=ls-...
LANGCHAIN_PROJECT=my-langsmith-project
1.1 Required Python packages
Make sure your requirements.txt has at least:
langchain
langchain-openai
python-dotenv
ipykernel
Then install:
pip install -r requirements.txt
ipykernel is needed so Jupyter can run this environment as a kernel.
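If Jupyter doesn’t show this environment as a kernel yet, you can register it manually (the kernel name and display name below are just examples):
python -m ipykernel install --user --name langchain-env --display-name "Python (langchain-env)"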
2. Loading Environment Variables in Python
First, we load variables from .env and ensure they’re available via os.environ.
import os
from dotenv import load_dotenv
# Load all variables from .env into the process environment
load_dotenv()
# Explicitly set variables we care about
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "")
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY", "")
os.environ["LANGCHAIN_TRACING_V2"] = "true" # enable LangSmith tracing
os.environ["LANGCHAIN_PROJECT"] = os.getenv("LANGCHAIN_PROJECT", "")
What each variable does:
- OPENAI_API_KEY – authenticates with OpenAI’s API
- LANGCHAIN_API_KEY – allows LangChain to send traces to LangSmith
- LANGCHAIN_TRACING_V2 – when set to "true", enables LangSmith tracing
- LANGCHAIN_PROJECT – groups all your traces under a named project in LangSmith
At this point, everything you do with LangChain and OpenAI will automatically be logged in LangSmith.
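Before moving on, a quick optional sanity check confirms the keys were actually loaded, without printing the secrets themselves:
# Optional: verify the variables are set without exposing their values
for var in ["OPENAI_API_KEY", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"]:
    print(f"{var}: {'set' if os.environ.get(var) else 'MISSING'}")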
3. Creating a Chat Model with ChatOpenAI
Now we’ll create our LLM object using ChatOpenAI.
from langchain_openai import ChatOpenAI
# Create the LLM (chat) model
llm = ChatOpenAI(
model="gpt-4", # or "gpt-4o" / "gpt-4.1" depending on what your account supports
temperature=0.7
)
print(llm)
If everything is configured correctly, this will print a representation of the ChatOpenAI object.
We are not passing the API key here explicitly.
Since we’ve already set OPENAI_API_KEY in the environment, the library picks it up automatically.
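If you’d rather not rely on the environment variable, ChatOpenAI also accepts the key directly; a minimal sketch, equivalent to the setup above:
# Passing the key explicitly instead of relying on the environment
llm = ChatOpenAI(
    model="gpt-4",
    temperature=0.7,
    api_key=os.getenv("OPENAI_API_KEY"),
)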
4. Making Your First Call: llm.invoke(...)
Let’s ask the model a simple question.
result = llm.invoke("What is generative AI?")
print(result)
You’ll see something like:
AIMessage(content='Generative AI is ...')
The actual text you care about is in result.content:
print(result.content)
Behind the scenes:
- The request goes to OpenAI
- LangChain wraps it as an AIMessage
- Because tracing is enabled, the request and response are also sent to LangSmith
If you open your LangSmith project in the browser, you’ll already see this call logged as a ChatOpenAI request, with:
- Prompt
- Response
- Latency
- Token usage
- Estimated cost
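Much of this metadata is also available locally on the returned message; in recent LangChain versions the AIMessage carries usage information (the exact fields can vary by version):
# Usage information attached to the AIMessage (availability depends on your langchain version)
print(result.usage_metadata)      # e.g. {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...}
print(result.response_metadata)   # model name, finish reason, raw token counts, etc.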
5. Introducing ChatPromptTemplate
Right now we’re sending a plain string to the model.
In real applications, we usually want:
- A fixed system role (how the model should behave)
- A dynamic user message (the user’s question)
ChatPromptTemplate gives us this structure.
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
("system", "You are an expert AI engineer. Provide clear, practical answers."),
("user", "{input}")
])
print(prompt)
Explanation:
"system"– defines the role and behaviour of the model"user"– represents the user’s message{input}– a placeholder that we’ll fill in at runtime
6. Creating a Simple Chain: prompt | llm
LangChain’s “runnable” concept allows us to chain components using the | (pipe) operator.
chain = prompt | llm
What this chain does:
- Takes an input dictionary, e.g. {"input": "What is LangSmith used for?"}
- Fills the prompt template
- Sends the final prompt to the LLM
- Returns an AIMessage
Let’s run it:
response = chain.invoke({"input": "What is LangSmith useful for?"})
print(response)
print(type(response))
You’ll see an AIMessage again.
To get the text only:
print(response.content)
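For intuition, the chain above is roughly equivalent to running the two steps by hand; the pipe just wires them together:
# Roughly what prompt | llm does internally (sketch)
prompt_value = prompt.invoke({"input": "What is LangSmith useful for?"})  # fill the template
message = llm.invoke(prompt_value)                                        # send it to the model
print(message.content)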
Check LangSmith again; you’ll now see a more detailed trace:
- A runnable sequence: ChatPromptTemplate -> ChatOpenAI
- The exact system and user messages
- Tokens, latency, cost
7. Cleaning Up the Output with StrOutputParser
Working with AIMessage objects is fine, but often we just want a plain string.
For that, we use StrOutputParser.
from langchain_core.output_parsers import StrOutputParser
output_parser = StrOutputParser()
Now let’s extend our chain:
# New chain: Prompt → LLM → OutputParser
chain = prompt | llm | output_parser
response = chain.invoke({
"input": "Explain what LangSmith is and why it is helpful."
})
print(response)
print(type(response))
This time, response is a plain str, not an AIMessage.
Why this is useful:
- Easier to log or display in your UI
- Perfect for simple experiments and notebooks
- You can later replace StrOutputParser with a custom parser (e.g., JSON, Pydantic models); see the sketch below
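For example, a minimal sketch of swapping in JsonOutputParser; note that you would also adjust the prompt so the model actually answers in JSON:
from langchain_core.output_parsers import JsonOutputParser

json_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert AI engineer. Answer only with a JSON object containing the keys 'summary' and 'use_cases'."),
    ("user", "{input}")
])

json_chain = json_prompt | llm | JsonOutputParser()
result = json_chain.invoke({"input": "What is LangSmith used for?"})
print(result)        # a Python dict parsed from the model's JSON output
print(type(result))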
In LangSmith, you’ll now see the full runnable stack:
ChatPromptTemplate -> ChatOpenAI -> StrOutputParser
You can inspect each step’s input and output.
