Open source

Never get an
AI blackout.

Monitor rate limits and spend across every AI provider.
Two lines of code. One dashboard.

app.ts
import { delimiter } from '@delimiter/sdk'

delimiter.init('dlm_your_project_key')

// That's it. Every AI API call is now monitored.
How It Works

Three steps. Zero config.

01

Initialize once

Call delimiter.init() at your app's entry point. Two lines of code — that's the entire setup. No wrapping, no per-provider config.

02

We read the headers

Every AI API response includes rate limit headers. Delimiter extracts them silently after each call. Async, fire-and-forget.
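The headers in step 02 are real: OpenAI responses carry `x-ratelimit-remaining-requests` and `x-ratelimit-remaining-tokens`, Anthropic responses carry `anthropic-ratelimit-requests-remaining` and `anthropic-ratelimit-tokens-remaining`. A minimal sketch of what extraction looks like — the `readRateLimits` helper is illustrative, not part of the Delimiter SDK:

```typescript
// Illustrative sketch only. The header names are the ones OpenAI and
// Anthropic actually send; the helper itself is hypothetical.
interface RateLimitSnapshot {
  remainingRequests: number | null
  remainingTokens: number | null
}

// Read whichever rate-limit headers the provider attached to a response.
function readRateLimits(headers: Headers): RateLimitSnapshot {
  const pick = (...names: string[]): number | null => {
    for (const name of names) {
      const value = headers.get(name)
      if (value !== null) return Number(value)
    }
    return null
  }
  return {
    // OpenAI-style first, then Anthropic-style.
    remainingRequests: pick(
      'x-ratelimit-remaining-requests',
      'anthropic-ratelimit-requests-remaining',
    ),
    remainingTokens: pick(
      'x-ratelimit-remaining-tokens',
      'anthropic-ratelimit-tokens-remaining',
    ),
  }
}
```

Because the headers arrive on every response anyway, reading them costs nothing extra on the request path.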

03

See everything in one place

Real-time dashboard shows usage across all providers. Health indicators go green, yellow, red. One glance tells you everything.

Trust & Safety

What it never does

Zero interference with your production traffic.

Never touches your API keys — the SDK only reads response headers, not request headers where keys live

Never modifies requests or responses — your calls work identically

Never adds latency — reporting is async, fire-and-forget

Never fails loudly — if we're down, your app doesn't notice
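The "async, never fails loudly" guarantees above can be sketched in a few lines — the endpoint and payload shape here are assumptions, not the SDK's actual wire format:

```typescript
// Illustrative sketch of fire-and-forget reporting. The URL and event
// shape are hypothetical, not Delimiter's real protocol.
function report(event: Record<string, unknown>): void {
  // Intentionally not awaited: the host app's request path never blocks on us.
  fetch('https://example.invalid/events', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(event),
  }).catch(() => {
    // Swallow every failure. If the collector is unreachable,
    // the host application never notices.
  })
}
```

Because the promise is never awaited and every rejection is caught, a dead collector adds zero latency and throws zero errors.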

Providers

Works with every provider

If it makes an HTTP request to an AI API, Delimiter sees it. No plugins. No configuration. Auto-detected at the network layer.
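One plausible way to detect providers at the network layer is to patch the global `fetch` and match on hostname — a sketch under that assumption; the host list and `instrumentFetch` are illustrative, not Delimiter's implementation:

```typescript
// Illustrative sketch of network-layer auto-detection. The host list is a
// small sample; the patching approach is an assumption about how an SDK
// like this could work, not Delimiter's actual code.
const AI_API_HOSTS = ['api.openai.com', 'api.anthropic.com', 'api.mistral.ai']

function isAiApiCall(url: string): boolean {
  try {
    return AI_API_HOSTS.includes(new URL(url).hostname)
  } catch {
    return false // not an absolute URL — not a provider call
  }
}

// Patch the global fetch once so matching responses can be observed.
// The response is cloned, never modified: the caller sees it unchanged.
function instrumentFetch(onResponse: (host: string, res: Response) => void): void {
  const original = globalThis.fetch
  globalThis.fetch = async (input, init?) => {
    const res = await original(input, init)
    const url =
      typeof input === 'string' ? input : input instanceof URL ? input.href : input.url
    if (isAiApiCall(url)) onResponse(new URL(url).hostname, res.clone())
    return res
  }
}
```

Matching on hostname at the fetch layer is what makes per-provider plugins unnecessary: a new provider is just another host.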

OpenAI
Anthropic
Google Gemini
Mistral
DeepSeek
Meta Llama
Cohere
Groq
xAI
Perplexity
Together AI
Fireworks AI
Replicate
AI21 Labs
Stability AI
Hugging Face
OpenRouter
Azure OpenAI
Amazon Bedrock
Cerebras
SambaNova
Lepton AI
Anyscale
Baseten
Modal
Voyage AI
Reka
Writer

...and any provider that speaks HTTP

Pricing

Simple pricing. All inclusive.

  • $20 per month, per workspace
  • Unlimited projects and providers
  • 50,000 events per month
  • Fallback chains and alerts
  • Priority support included

Solo developer? The free plan includes 3,000 events/month, with unlimited providers and API keys.