Author: Editorial Team
-
Exposing LangChain (LCEL) Applications as REST APIs using LangServe
1. What Is LangServe? LangServe is an official tool from the LangChain ecosystem that lets you expose LangChain chains and runnables as REST APIs with almost no extra code. In simple terms: LangServe turns your LangChain logic into a production-ready API. 2. Why Does LangServe Exist? When you build an LLM app using LangChain, you usually…
-
Getting started with Open source models using Groq API
1. What is Groq? Groq is a company that builds specialized AI hardware and software designed specifically for ultra-fast inference of Large Language Models (LLMs). Unlike traditional AI hardware vendors that focus on training, Groq is laser-focused on inference — the phase where trained models generate responses for real users. Groq’s key innovation is the…
-
Getting started with Open source models using Groq API
1. What Is Groq? Groq is an AI infrastructure company focused entirely on high-performance inference for large language models (LLMs). Groq does not train models. Instead, it: In simple terms, Groq makes LLMs run extremely fast in production. 2. Models Available on Groq Groq hosts popular open-source LLMs, including: 3. Groq API: How Developers Use…
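As a hedged sketch of the developer-facing side described above: Groq exposes an OpenAI-style chat-completions API through its `groq` Python SDK. The model name below is one example of the open-source models Groq hosts; the prompt and key handling are illustrative, not from the article.

```python
# Sketch of calling Groq's inference API.
# Assumes: pip install groq ; export GROQ_API_KEY=...
import os

try:
    from groq import Groq
    HAVE_GROQ = True
except ImportError:
    HAVE_GROQ = False  # the sketch still loads without the package

def ask(prompt: str, model: str = "llama-3.1-8b-instant") -> str:
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if HAVE_GROQ and os.environ.get("GROQ_API_KEY"):
    print(ask("Say hello in five words."))
```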
-
What is LCEL (LangChain Expression Language)?
1. What is LCEL (LangChain Expression Language)? LCEL (LangChain Expression Language) is a declarative way of defining how different LangChain components are connected and how data flows between them. Instead of focusing on how to call functions step by step, LCEL focuses on what the pipeline looks like. In simple terms, LCEL allows you to…
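The "what the pipeline looks like" idea can be imitated in plain Python to make it concrete. This is a toy, not the real `langchain` Runnable API: LCEL provides this `|` composition (plus batching, streaming, and async) out of the box.

```python
# Toy illustration of the LCEL idea: components compose with `|`,
# and the resulting pipeline is a declarative description of data flow.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):            # `a | b` chains two steps
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda topic: f"Tell me about {topic}")
fake_llm = Step(lambda text: text.upper())   # stands in for a model call
parser = Step(lambda text: text.strip())

chain = prompt | fake_llm | parser           # what the pipeline looks like
print(chain.invoke("LCEL"))                  # -> TELL ME ABOUT LCEL
```

The point is the shift in focus: you declare the shape of the pipeline once, and invocation details stay out of sight.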
-
Understanding Synchronous and Asynchronous Programming in JavaScript
1. A Real-Life Analogy: The Birthday Cake Story Let’s start with a situation from daily life. Your best friend has a birthday party tonight. He asks you to bring the cake because your mom works at the best pastry shop in town. At 4 PM, you call your mom and ask her to start baking…
-
Tracking LangChain App with LangSmith
LangSmith allows you to: LangSmith works even with Ollama and local LLMs. It does not require OpenAI. 1. What LangSmith Tracks in Your App Once enabled, LangSmith automatically captures: You do not need to change your chain logic. 2. Create a LangSmith Account After login, you’ll see: 3. Get Your LangSmith API Key 4. Enable…
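The "enable" step from the excerpt boils down to setting a few environment variables before your chain runs. The variable names below are the standard `LANGCHAIN_*` ones; the key and project name are placeholders, not values from the article.

```python
import os

# LangSmith tracing is switched on via environment variables; the chain
# code itself does not change. Key and project name are placeholders.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"
os.environ["LANGCHAIN_PROJECT"] = "ollama-demo"  # groups runs in the UI

# Any chain invoked after this point is traced automatically.
print(os.environ["LANGCHAIN_PROJECT"])
```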
-
Simple GenAI App Using Ollama
This tutorial shows how to build a local Generative AI web application using: 1. Prerequisites Before starting, make sure you have: Download the model once using: Once downloaded, it will be reused automatically. 2. requirements.txt (Complete) Create a file named requirements.txt with the following content: Install dependencies: 3. .env file These properties help you to…
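The excerpt is cut off before the actual `requirements.txt` contents; a plausible minimal set for a LangChain + Ollama web app might look like the fragment below (the package list is an assumption for illustration, not the article's own list).

```
langchain
langchain-ollama
streamlit
python-dotenv
```

Install with `pip install -r requirements.txt`; `python-dotenv` is what reads the `.env` file the tutorial mentions next.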
-
Introduction to Ollama
1. What Is Ollama? Ollama is a lightweight runtime that allows you to run large language models (LLMs) locally on your own machine. Instead of calling cloud-based APIs (like OpenAI or Anthropic), Ollama enables you to download open-source models and perform inference completely offline. In simple terms: 2. Why Ollama Exists Most Generative AI tutorials…
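Once a model has been pulled, Ollama serves a local HTTP API on port 11434, which is what "inference completely offline" looks like in practice. The helper below is a sketch against that local endpoint; the model name is illustrative.

```python
# Talk to a locally running Ollama instance over its HTTP API.
# Assumes `ollama serve` is running and the model has been pulled.
import json
import urllib.request

def generate(prompt: str, model: str = "llama3.2") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Uncomment with a local Ollama instance running:
# print(generate("Why is the sky blue?"))
```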
-
Understanding the JVM Code Cache
When studying the JVM and its Just-In-Time (JIT) compilation process, one important component to understand is the code cache. This cache stores the optimized native machine code generated by the JIT compilers. Knowing how the code cache works, how to inspect it, and how to tune its size can help improve application performance, especially in large or…
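The inspecting and tuning mentioned above is done with standard HotSpot flags and `jcmd`; `app.jar` and `<pid>` below are placeholders.

```
# Print code-cache usage when the JVM exits:
java -XX:+PrintCodeCache -version

# Raise the reserved code-cache size for large applications:
java -XX:ReservedCodeCacheSize=256m -jar app.jar

# Inspect a running JVM's code cache by process id:
jcmd <pid> Compiler.codecache
```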
-
Understanding JVM Tiered Compilation
1. JVM Has Two JIT Compilers: C1 and C2 Modern JVMs include two different compilers, each with a specific purpose: C1 (Client Compiler) C2 (Server Compiler) This structure is known as tiered compilation, and it dramatically improves how Java warms up and stabilizes performance. 2. Understanding Compilation Levels (Tiers) Every compiled method receives a compilation…
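Tiered compilation can be observed and controlled with standard HotSpot flags; `app.jar` below is a placeholder.

```
# Log JIT activity; each line includes the compilation level (1-4):
java -XX:+PrintCompilation -jar app.jar

# Disable tiering (C2 only), mainly useful for benchmarking:
java -XX:-TieredCompilation -jar app.jar

# Cap compilation at a given tier, e.g. stop at C1 level 1:
java -XX:TieredStopAtLevel=1 -jar app.jar
```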
