Tracking a LangChain App with LangSmith

LangSmith allows you to:

  • Trace every LLM call
  • Inspect prompts and responses
  • Measure latency
  • Debug chains visually
  • Monitor local and production apps

LangSmith works even with Ollama and local LLMs. It does not require OpenAI.
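This walkthrough assumes the usual packages for this stack are already installed. If not, they can be installed with pip (these are the standard package names, not something specific to this article):

pip install langchain langchain-ollama streamlit langsmith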

1. What LangSmith Tracks in Your App

Once enabled, LangSmith automatically captures:

  • Prompt template used
  • Input variables (question)
  • LLM call (Ollama in your case)
  • Model output
  • Execution time
  • Errors (if any)
  • Chain structure (prompt → llm → parser)

You do not need to change your chain logic.
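For reference, a minimal chain of this shape (prompt → llm → parser) might look like the sketch below. The model name llama3 and the langchain-ollama import are assumptions based on a typical local setup, not details taken from this article:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

# Prompt template with a single input variable, "question"
prompt = ChatPromptTemplate.from_template("Answer concisely: {question}")

# Local model served by Ollama; "llama3" is an assumed model name
llm = ChatOllama(model="llama3")

# prompt -> llm -> parser: exactly the chain shape LangSmith records
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What is LangSmith?"}))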

2. Create a LangSmith Account

  1. Go to https://smith.langchain.com
  2. Sign in using GitHub / Google
  3. Create a workspace (default is fine)

After login, you’ll see:

  • Dashboard
  • Traces
  • Projects

3. Get Your LangSmith API Key

  1. Open Settings → API Keys
  2. Create a new key
  3. Copy it (you’ll use it as an environment variable)

4. Enable LangSmith via Environment Variables (IMPORTANT)

LangSmith is enabled entirely via environment variables.

Required Variables

Variable                 Purpose
LANGCHAIN_TRACING_V2     Enables tracing
LANGCHAIN_API_KEY        Authenticates with LangSmith
LANGCHAIN_PROJECT        Groups related traces under one project name

Windows (Command Prompt)

set LANGCHAIN_TRACING_V2=true
set LANGCHAIN_API_KEY=your_api_key_here
set LANGCHAIN_PROJECT=ollama-streamlit-demo
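
The same variables on macOS or Linux are set with export:

macOS / Linux (Terminal)

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=your_api_key_here
export LANGCHAIN_PROJECT=ollama-streamlit-demo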

5. NO Code Changes Required

Your existing app.py stays exactly the same.

LangChain automatically detects LangSmith when:

  • LANGCHAIN_TRACING_V2=true is set
  • API key is present

This works because:

  • LangChain Core has LangSmith tracing hooks built in
  • Ollama calls are recorded as LLM runs, just like any hosted model
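
If you prefer not to manage shell variables, the same settings can also be applied in code before the chain first runs. This inline approach is an alternative sketch, not something the original app requires:

import os

# Must run before the first chain invocation so the tracer picks them up
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your_api_key_here"  # replace with your key
os.environ["LANGCHAIN_PROJECT"] = "ollama-streamlit-demo"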

6. Run the App Again

Run the app with the same command, from the same terminal session where you set the variables (set only applies to the current Command Prompt window):

python -m streamlit run app.py

Now:

  • Ask a question in the UI
  • Let the model generate a response

7. View Traces in LangSmith Dashboard

  1. Open https://smith.langchain.com
  2. Select your project
  3. Open the Traces tab to inspect each run: the prompt, inputs, model output, and latency
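
Traces can also be fetched programmatically with the langsmith Python client. A brief sketch, assuming the project name used earlier:

from langsmith import Client

client = Client()  # reads the API key from the environment

# List recent runs recorded under the project set earlier
for run in client.list_runs(project_name="ollama-streamlit-demo"):
    print(run.name, run.run_type, run.error)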