Stop guessing what your agents are doing. Trace execution flows, debug tool calls, and control costs with zero latency overhead. Squeeze the best out of your stack.
Transform black-box AI agents into transparent systems. Lelemon provides complete visibility into every execution, letting you inspect prompts, tool calls, and model decisions effortlessly. Asynchronous data ingestion adds zero latency to your requests, while real-time cost controls, from token counting to budget alerts, keep your spend in check. Simplify debugging, optimize performance, and manage your AI budget with confidence.
Drop-in integration for Node.js & Vercel AI SDK
import { init, observe } from '@lelemondev/sdk';
import OpenAI from 'openai';

// Initialize Lelemon with your API key
init({ apiKey: process.env.LELEMON_API_KEY });

// Wrap your client once; every call is traced automatically
const openai = observe(new OpenAI());

const res = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

No complications, straight to the code.
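Under the hood, cost control is simple arithmetic on the token counts captured from each call. A minimal sketch of the idea, where the per-token prices, the `estimateCost` helper, and the budget check are illustrative assumptions, not part of the Lelemon SDK:

```typescript
// Illustrative per-1K-token prices in USD; real prices vary by model and date.
const PRICE_PER_1K = { prompt: 0.03, completion: 0.06 };

// Estimate the cost of a single call from its token usage.
function estimateCost(promptTokens: number, completionTokens: number): number {
  return (promptTokens / 1000) * PRICE_PER_1K.prompt +
         (completionTokens / 1000) * PRICE_PER_1K.completion;
}

// Hypothetical budget alert: fire when cumulative spend would cross a threshold.
function overBudget(spendSoFar: number, callCost: number, budget: number): boolean {
  return spendSoFar + callCost > budget;
}

const cost = estimateCost(1200, 400);      // 0.036 + 0.024 = 0.06 USD
console.log(cost.toFixed(3));              // "0.060"
console.log(overBudget(9.95, cost, 10.0)); // true: this call tips past $10
```

Lelemon tracks these running totals for you in real time, so alerts fire as calls happen rather than when the monthly bill arrives.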
Works seamlessly with your favorite ingredients: