Know your AI rate limits before your app does.
Three lines of code. Every provider. One dashboard.
import { delimiter } from '@delimiter/sdk'
delimiter.init('dlm_your_project_key')
const openai = delimiter.wrap(new OpenAI({ apiKey: process.env.OPENAI_KEY }))

How it works
Wrap your AI clients
Pass your OpenAI or Anthropic client to delimiter.wrap(). A lightweight Proxy wraps it transparently. No code changes beyond one line.
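The wrapping step can be sketched with a plain JavaScript Proxy. This is a hypothetical illustration, not Delimiter's actual implementation: the `report` callback stands in for the SDK's internal hook, and a real wrapper would also handle nested namespaces like `openai.chat.completions`.

```typescript
// Hypothetical sketch: wrap a client transparently with a Proxy.
// `report` is a stand-in for the SDK's internal reporting hook.
function wrap<T extends object>(client: T, report: (method: string) => void): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value === "function") {
        // Intercept method calls: forward them unchanged, then report.
        return (...args: unknown[]) => {
          const result = value.apply(target, args);
          report(String(prop));
          return result;
        };
      }
      return value; // non-function properties pass through untouched
    },
  });
}
```

Because the Proxy only observes property access, the wrapped client behaves identically to the original from the caller's point of view.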
We read the headers
Every AI API response includes rate limit headers. Delimiter extracts them silently after each call. Async, fire-and-forget.
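Provider headers differ in name but not in shape. A minimal extractor might look like the sketch below; the header names are OpenAI's documented `x-ratelimit-*` set (Anthropic uses its own `anthropic-ratelimit-*` prefix), and the snapshot type is an assumption for illustration, not Delimiter's real schema.

```typescript
// Minimal sketch: pull OpenAI-style rate-limit headers from a response.
// Header names follow OpenAI's documented x-ratelimit-* convention;
// other providers expose the same information under different names.
interface RateLimitSnapshot {
  limitRequests?: number;
  remainingRequests?: number;
}

function extractRateLimits(headers: Map<string, string>): RateLimitSnapshot {
  const num = (name: string): number | undefined => {
    const raw = headers.get(name);
    return raw === undefined ? undefined : Number(raw);
  };
  return {
    limitRequests: num("x-ratelimit-limit-requests"),
    remainingRequests: num("x-ratelimit-remaining-requests"),
  };
}
```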
See everything in one place
Real-time dashboard shows usage across all providers. Health rings go green, yellow, red. One glance tells you everything.
What it never does
Never touches your API keys — you create your client, we just wrap it
Never modifies requests or responses — your calls work identically
Never adds latency — reporting is async, fire-and-forget
Never fails loudly — if we're down, your app doesn't notice
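The last two guarantees can be sketched as fire-and-forget reporting that swallows its own errors. The `send` callback is a made-up parameter for illustration; in a real SDK it would POST to the reporting endpoint.

```typescript
// Hypothetical sketch: best-effort usage reporting that never blocks
// or breaks the caller. `send` stands in for the network transport.
function reportUsage(
  payload: Record<string, unknown>,
  send: (body: string) => Promise<void>,
): void {
  // Fire-and-forget: don't await, and swallow any failure so the
  // caller's request path is never slowed down or surfaced an error.
  Promise.resolve()
    .then(() => send(JSON.stringify(payload)))
    .catch(() => {
      /* reporting is best-effort; ignore errors */
    });
}
```

Even if `send` rejects (service down, network partition), the caller never sees an exception.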
Supported providers
OpenAI: Supported
Anthropic: Supported
Google Gemini: Coming soon
Mistral: Coming soon

Pricing
Pro
$20/month
Per workspace, billed monthly
- Unlimited providers & apps
- Unlimited reports
- Slack + webhook alerts
- 90-day history