LangSmith allows you to:
- Trace every LLM call
- Inspect prompts and responses
- Measure latency
- Debug chains visually
- Monitor local and production apps
LangSmith works with Ollama and other local LLMs as well; it does not require OpenAI.
1. What LangSmith Tracks in Your App
Once enabled, LangSmith automatically captures:
- Prompt template used
- Input variables (`question`)
- LLM call (Ollama in your case)
- Model output
- Execution time
- Errors (if any)
- Chain structure (`prompt → llm → parser`)
You do not need to change your chain logic.
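For reference, a minimal chain of that shape might look like the sketch below. The model name, prompt text, and the `langchain-ollama` package are assumptions here; adjust them to match your actual app.py.

```python
# A minimal prompt -> llm -> parser chain of the kind LangSmith traces.
# Model name and prompt text are placeholders -- match them to your app.py.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

prompt = ChatPromptTemplate.from_template("Answer the question: {question}")
llm = ChatOllama(model="llama3")  # any locally pulled Ollama model
parser = StrOutputParser()

chain = prompt | llm | parser

# Each invoke() becomes one trace: prompt step, LLM call, parser step.
answer = chain.invoke({"question": "What is LangSmith?"})
print(answer)
```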
2. Create a LangSmith Account
- Go to https://smith.langchain.com
- Sign in using GitHub / Google
- Create a workspace (default is fine)
After login, you’ll see:
- Dashboard
- Traces
- Projects
3. Get Your LangSmith API Key
- Open Settings → API Keys
- Create a new key
- Copy it (you’ll use it as an environment variable)
4. Enable LangSmith via Environment Variables (IMPORTANT)
LangSmith is enabled entirely via environment variables.
Required Variables
| Variable | Purpose |
|---|---|
| LANGCHAIN_TRACING_V2 | Enables tracing |
| LANGCHAIN_API_KEY | Auth with LangSmith |
| LANGCHAIN_PROJECT | Groups related traces |
Windows (Command Prompt)
set LANGCHAIN_TRACING_V2=true
set LANGCHAIN_API_KEY=your_api_key_here
set LANGCHAIN_PROJECT=ollama-streamlit-demo
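If you would rather not manage shell sessions, a common alternative (a sketch, not part of the original setup) is to set the same variables in Python at the very top of app.py, before any chain code runs:

```python
# Set LangSmith variables in-process, before any LangChain code executes.
# The key below is a placeholder -- avoid hardcoding real keys in source control.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your_api_key_here"
os.environ["LANGCHAIN_PROJECT"] = "ollama-streamlit-demo"
```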
5. NO Code Changes Required
Your existing app.py stays exactly the same.
LangChain automatically detects LangSmith when:
- `LANGCHAIN_TRACING_V2=true` is set
- The API key is present
This works because:
- LangChain Core has LangSmith hooks built-in
- Ollama is still treated as an LLM trace
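If traces don't show up later, a quick sanity check (a minimal sketch) is to confirm the variables are actually visible to the Python process running Streamlit:

```python
# Print whether each LangSmith variable is set, without leaking the key itself.
import os

for var in ("LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"):
    print(f"{var}: {'set' if os.environ.get(var) else 'MISSING'}")
```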
6. Run the App Again
Run using the same command:
python -m streamlit run app.py
Now:
- Ask a question in the UI
- Let the model generate a response
7. View Traces in LangSmith Dashboard
- Open https://smith.langchain.com
- Select your project
- Click Traces
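You can also pull recent traces programmatically. The sketch below assumes the `langsmith` SDK is installed (`pip install langsmith`) and that LANGCHAIN_API_KEY is set in your environment:

```python
# Fetch a few recent runs from the project via the LangSmith SDK.
from itertools import islice

from langsmith import Client

client = Client()  # reads LANGCHAIN_API_KEY from the environment
runs = client.list_runs(project_name="ollama-streamlit-demo")
for run in islice(runs, 5):
    print(run.name, run.run_type, run.error or "ok")
```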