
Rate limits

Per-customer and per-endpoint quotas, the 429 response shape, and how to back off gracefully.

Rate limits protect shared infrastructure. They're generous for normal product integrations but exist to prevent runaway loops.

Quotas

Surface                           Limit    Window
POST /events                      500 req  1 second, per customer
POST /eggs, POST /eggs/:id/hatch  60 req   1 minute, per customer
POST /widget-sessions             300 req  1 minute, per customer
All other write endpoints         60 req   1 second, per customer
All read endpoints                300 req  1 second, per customer
Widget bundle fetches (CDN)       not rate-limited

Higher tiers lift the event ingestion ceiling. Talk to sales if you need more than 500 events/s.
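To stay under a quota client-side rather than reacting to 429s, a token bucket is the usual pacing primitive. A minimal sketch (the class and its parameters are illustrative, not part of the SDK):

```typescript
// Token bucket: holds up to `capacity` tokens, refilled at `ratePerSec`.
// tryTake() returns true if a request may be sent right now.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private ratePerSec: number,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  tryTake(now = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Pace POST /events at its 500 req/s quota:
const eventBucket = new TokenBucket(500, 500);
```

A caller would check `eventBucket.tryTake()` before each send and queue or drop the event when it returns false.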

The 429 response

HTTP/1.1 429 Too Many Requests
Retry-After: 2
Content-Type: application/json

{
  "error": {
    "code": "rate_limited",
    "message": "Too many requests on POST /events (500/s)",
    "details": { "retryAfter": 2 },
    "requestId": "…"
  }
}

The SDK surfaces this as RateLimitError with .retryAfter populated (in seconds).

Built-in retry

The SDK retries 429s automatically (honouring Retry-After) up to maxRetries (default 3) with exponential backoff + jitter. You only need manual backoff for sustained overages or custom queue drains.

import { HatchedClient } from '@hatched/sdk-js';

const hatched = new HatchedClient({
  apiKey: process.env.HATCHED_API_KEY!,
  maxRetries: 5, // default 3
});
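The retry schedule can be sketched as a pure function. The base and cap values here are illustrative assumptions, not documented SDK internals; the one documented behaviour is that a server-sent Retry-After takes precedence:

```typescript
// Delay before retry `attempt` (0-based). Retry-After, when present,
// wins outright; otherwise use capped exponential backoff with full
// jitter (uniform in [0, exp)) so concurrent clients spread out.
function retryDelayMs(
  attempt: number,
  retryAfterSec?: number,
  baseMs = 500,   // assumed base delay
  capMs = 8000,   // assumed ceiling
): number {
  if (retryAfterSec !== undefined) return retryAfterSec * 1000;
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp;
}
```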

Backoff pattern

import { RateLimitError } from '@hatched/sdk-js';

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendWithBackoff(event: Record<string, unknown>) {
  for (let attempt = 0; attempt < 5; attempt++) {
    try {
      return await hatched.events.send(event);
    } catch (err) {
      if (err instanceof RateLimitError) {
        // Wait the server-suggested interval plus up to 1s of jitter.
        await sleep((err.retryAfter + Math.random()) * 1000);
        continue;
      }
      throw err;
    }
  }
  throw new Error('rate limit exhausted');
}

Add jitter (the Math.random() term) so concurrent callers don't all retry on the same millisecond.

Bulk ingestion

For high-volume backfills, don't serialise events one at a time through events.send. Instead:

  1. Batch historical events into a single file.
  2. Upload via the bulk-ingest endpoint (see HTTP API).
  3. The bulk endpoint bypasses the per-second cap and processes asynchronously with its own throughput quota.
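Step 1 amounts to serialising events into chunked NDJSON before upload. A generic sketch of the batching (the helper name and chunk size are assumptions; the actual upload call and endpoint path are documented in the HTTP API reference):

```typescript
// Split events into NDJSON strings of at most `maxPerChunk` lines each,
// ready to upload as bulk-ingest payloads.
function toNdjsonChunks(events: object[], maxPerChunk = 10_000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < events.length; i += maxPerChunk) {
    chunks.push(
      events
        .slice(i, i + maxPerChunk)
        .map((e) => JSON.stringify(e))
        .join('\n'),
    );
  }
  return chunks;
}
```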

Headers

Every response includes:

X-RateLimit-Limit: 500
X-RateLimit-Remaining: 498
X-RateLimit-Reset: 1745327400

Use them to pace yourself before a 429 fires.
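One way to use them: when X-RateLimit-Remaining drops below a threshold, pause until X-RateLimit-Reset (a Unix timestamp in seconds). A sketch, where the threshold is a tunable assumption:

```typescript
// Returns how long (ms) to pause before the next request, based on the
// rate-limit headers of the last response. Zero means "go ahead".
function pauseMsFromHeaders(
  headers: Record<string, string>,
  threshold = 5,
  nowMs = Date.now(),
): number {
  const remaining = Number(headers['x-ratelimit-remaining']);
  const resetSec = Number(headers['x-ratelimit-reset']);
  if (!Number.isFinite(remaining) || !Number.isFinite(resetSec)) return 0;
  if (remaining > threshold) return 0;
  return Math.max(0, resetSec * 1000 - nowMs);
}
```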