Know your AI rate limits before your app does.

Three lines of code. Every provider. One dashboard.

import OpenAI from 'openai'
import { delimiter } from '@delimiter/sdk'

delimiter.init('dlm_your_project_key')
const openai = delimiter.wrap(new OpenAI({ apiKey: process.env.OPENAI_KEY }))

How it works

Wrap your AI clients

Pass your OpenAI or Anthropic client to delimiter.wrap(). A lightweight Proxy wraps it transparently. No code changes beyond one line.
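As a sketch of the mechanics, a Proxy can observe which method was called and then forward the call untouched. The `wrap` and `report` names below are illustrative, not the SDK's internals:

```typescript
// Illustrative Proxy wrapper: notes the method name, then forwards the
// call unchanged. Assumed mechanics; the real SDK may differ.
function wrap<T extends object>(client: T, report: (method: string) => void): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== "function") return value;
      return (...args: unknown[]) => {
        report(String(prop));             // fire-and-forget observation
        return value.apply(target, args); // the call itself passes through unchanged
      };
    },
  });
}
```

Because the trap only forwards, the wrapped client behaves identically to the original for every property and method.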

We read the headers

Every supported provider includes rate-limit headers on its API responses. Delimiter extracts them silently after each call and reports them asynchronously, fire-and-forget.
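The extraction step amounts to reading a handful of numeric headers. The names below follow OpenAI's documented `x-ratelimit-*` convention; other providers use their own (e.g. Anthropic's `anthropic-ratelimit-*` family). This parser is a sketch, not the SDK's code:

```typescript
interface RateLimitSnapshot {
  limitRequests: number | null;
  remainingRequests: number | null;
  limitTokens: number | null;
  remainingTokens: number | null;
}

// Parse OpenAI-style rate-limit headers from a plain header map.
// Missing headers become null rather than throwing.
function readRateLimitHeaders(headers: Record<string, string>): RateLimitSnapshot {
  const num = (key: string): number | null =>
    key in headers ? Number(headers[key]) : null;
  return {
    limitRequests: num("x-ratelimit-limit-requests"),
    remainingRequests: num("x-ratelimit-remaining-requests"),
    limitTokens: num("x-ratelimit-limit-tokens"),
    remainingTokens: num("x-ratelimit-remaining-tokens"),
  };
}
```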

See everything in one place

Real-time dashboard shows usage across all providers. Health rings go green, yellow, red. One glance tells you everything.
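The ring colors map directly to how much headroom is left. The cutoffs below are illustrative assumptions; the dashboard's actual thresholds are not specified here:

```typescript
type Health = "green" | "yellow" | "red";

// Classify remaining capacity into a health ring color.
// Thresholds (70% / 90% of the limit used) are assumed for illustration.
function healthRing(remaining: number, limit: number): Health {
  const used = 1 - remaining / limit;
  if (used >= 0.9) return "red";
  if (used >= 0.7) return "yellow";
  return "green";
}
```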

What it never does

Never touches your API keys — you create your client, we just wrap it

Never modifies requests or responses — your calls work identically

Never adds latency — reporting is async, fire-and-forget

Never fails loudly — if we're down, your app doesn't notice
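The last two guarantees reduce to one pattern: the report is never awaited, and its failure is swallowed. A minimal sketch, where the `send` callback stands in for the SDK's real HTTP call:

```typescript
// Fire-and-forget reporting: don't await the promise, and swallow any
// rejection so an outage on the reporting side never reaches the host app.
function reportUsage(
  snapshot: Record<string, unknown>,
  send: (body: string) => Promise<unknown>,
): void {
  void send(JSON.stringify(snapshot)).catch(() => {
    // Intentionally empty: "never fails loudly".
  });
}
```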

Supported providers

OpenAI: Supported

Anthropic: Supported

Google Gemini: Coming soon

Mistral: Coming soon

Pricing

Free

$0

Forever, no credit card

  • 1 provider
  • 1 app
  • 1,000 reports/day
  • Email alerts
  • 7-day history

Get started

Pro

$20/month

Per workspace, billed monthly

  • Unlimited providers & apps
  • Unlimited reports
  • Slack + webhook alerts
  • 90-day history

Start free trial