Fresh, Open Source
LLM Observability 🍋

Stop guessing what your agents are doing. Trace execution flows, debug tool calls, and control costs with zero latency overhead. Squeeze the best out of your stack.

Zero config · Super lightweight SDK · No overhead

Unify, Observe, and Control Your LLMs

Transform black-box AI agents into transparent systems. Lelemon gives you complete visibility into every execution: inspect prompts, tool calls, and model decisions effortlessly. Asynchronous data ingestion keeps latency overhead at zero, while real-time cost controls cover everything from token counting to budget alerts. Simplify debugging, optimize performance, and manage your AI spend with confidence.
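To illustrate the idea behind zero-overhead tracing (a minimal sketch, not Lelemon's actual internals): the hot path only appends events to an in-memory queue, and a separate step drains the queue in batches for shipping to the backend, so traced calls never wait on the network.

```typescript
// Illustrative sketch of asynchronous trace ingestion. Names here
// (TraceBuffer, record, flush) are hypothetical, not the SDK's API.
type TraceEvent = { name: string; payload: unknown; ts: number };

class TraceBuffer {
  private queue: TraceEvent[] = [];

  // Called on the request hot path: O(1) push, no I/O, no awaits.
  record(name: string, payload: unknown): void {
    this.queue.push({ name, payload, ts: Date.now() });
  }

  // Called off the hot path (e.g. on a timer): drains the queue as a
  // batch. A real SDK would fire-and-forget this batch over HTTP.
  flush(): TraceEvent[] {
    const batch = this.queue;
    this.queue = [];
    return batch;
  }
}

const buffer = new TraceBuffer();
buffer.record('llm.call', { model: 'gpt-4' });
buffer.record('tool.call', { tool: 'search' });
const batch = buffer.flush();
```

Because `record` does no I/O, the traced application pays only the cost of an array push per event.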

Drop-in integration for Node.js & Vercel AI SDK

app.ts
import { init, observe } from '@lelemondev/sdk';
import OpenAI from 'openai';

// Initialize the SDK once at startup with your Lelemon API key.
init({ apiKey: process.env.LELEMON_API_KEY });

// Wrap the client; every call made through it is traced automatically.
const openai = observe(new OpenAI());

const res = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Built for developers

No complications: straight to the code.

  • Trace the complete flow: prompts, tool calls and outputs.
  • Understand decisions and retries: see why the agent did what it did.
  • Useful metrics: tokens and latency without friction.
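As a sketch of the token metrics above: OpenAI-style chat responses carry a `usage` object with `prompt_tokens` and `completion_tokens`, from which a per-call cost can be derived. The price table below uses placeholder values, not current pricing.

```typescript
// Hypothetical cost calculation from an OpenAI-style `usage` object.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
}

// USD per 1K tokens — illustrative placeholder values only.
const PRICES = {
  'gpt-4': { prompt: 0.03, completion: 0.06 },
} as const;

function costUsd(model: keyof typeof PRICES, usage: Usage): number {
  const p = PRICES[model];
  return (
    (usage.prompt_tokens / 1000) * p.prompt +
    (usage.completion_tokens / 1000) * p.completion
  );
}

// 500 prompt + 200 completion tokens at the placeholder rates.
const cost = costUsd('gpt-4', { prompt_tokens: 500, completion_tokens: 200 });
```

An observability layer can aggregate these per-call costs into the budget alerts mentioned above.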

Works seamlessly with your favorite ingredients:

OpenAI
Anthropic
OpenRouter
Vercel AI SDK
LangChain
Bedrock
Gemini