Rate Limits
The Rekall API enforces rate limits to ensure fair usage and platform stability. Limits are applied per API key and vary by plan tier.
Rate Limit Tiers
Each plan tier has a different rate limit. The limits below apply per API key and are measured in requests per minute.
| Tier | Requests / Minute | Burst Limit | Notes |
|---|---|---|---|
| Free | 100 | 120 | Suitable for personal projects and prototyping |
| Pro | 1,000 | 1,200 | For production applications and teams |
| Team | 5,000 | 6,000 | High-throughput applications and multi-agent systems |
| Sandbox | 50 | 60 | Development and testing environment |
Rate Limit Headers
Every API response includes rate limit information in the response headers. Use these headers to monitor your usage and implement proactive throttling.
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum number of requests allowed per minute for your tier |
| X-RateLimit-Remaining | Number of requests remaining in the current rate limit window |
| X-RateLimit-Reset | Unix timestamp (in seconds) when the rate limit window resets |
```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 987
X-RateLimit-Reset: 1706284800
```
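These headers can drive proactive, client-side throttling: pause before the next request whenever the remaining budget hits zero. A minimal sketch in TypeScript, assuming a standard fetch `Response` (the `waitMsUntilReset` helper is illustrative, not part of any Rekall SDK):

```typescript
// Compute how long to pause before the next request, given the values of
// X-RateLimit-Remaining and X-RateLimit-Reset. Illustrative helper only.
function waitMsUntilReset(remaining: number, resetUnixSeconds: number, nowMs: number): number {
  if (remaining > 0) return 0;                          // budget left: no need to wait
  return Math.max(0, resetUnixSeconds * 1000 - nowMs);  // pause until the window resets
}

// Read the headers from a fetch Response and sleep if the budget is exhausted.
async function throttle(response: Response): Promise<void> {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') ?? '0', 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset') ?? '0', 10);
  const waitMs = waitMsUntilReset(remaining, reset, Date.now());
  if (waitMs > 0) {
    await new Promise(resolve => setTimeout(resolve, waitMs));
  }
}
```

Waiting before the limit is hit avoids 429 responses entirely, rather than reacting to them after the fact.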
Handling 429 Responses
When you exceed the rate limit, the API returns a 429 Too Many Requests status code. The response includes a Retry-After header indicating how many seconds to wait before retrying.
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 12
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1706284812

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Please retry after 12 seconds.",
    "details": {
      "limit": 100,
      "remaining": 0,
      "reset_at": "2025-01-26T12:00:12Z"
    }
  }
}
```
Respect Retry-After
Always check the Retry-After header and wait the specified duration before retrying. Continuing to send requests while rate limited may result in longer backoff periods.
Retry Strategies
We recommend implementing exponential backoff with jitter for handling rate limit errors. This approach distributes retries over time and prevents thundering herd problems.
```typescript
async function fetchWithRetry(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Honor Retry-After as the base delay, doubling it on each attempt,
      // and add up to 1s of random jitter to spread out retries.
      const retryAfter = parseInt(response.headers.get('Retry-After') || '1', 10);
      const backoff = retryAfter * 1000 * 2 ** attempt;
      const jitter = Math.random() * 1000;
      console.log(`Rate limited. Retrying in ${backoff + jitter}ms (attempt ${attempt + 1})`);
      await new Promise(resolve => setTimeout(resolve, backoff + jitter));
      continue;
    }

    return response;
  }
  throw new Error('Max retries exceeded');
}
```
SDK handles retries automatically
The official Rekall SDKs for TypeScript, Python, and Go include built-in retry logic with exponential backoff. You only need to implement custom retry logic if you are using the REST API directly.
Burst Allowance
Each tier includes a short burst allowance that allows temporary spikes above the sustained rate limit. The burst window is 10 seconds, and the burst limit is approximately 20% above the sustained rate. This is useful for batch operations that complete quickly.
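Client-side, this burst behavior can be approximated with a token bucket: capacity set to the burst limit, refilled continuously at the sustained rate. A hedged sketch in TypeScript (the `TokenBucket` class is illustrative; the server's exact accounting may differ):

```typescript
// Illustrative token bucket approximating the documented burst behavior:
// capacity = burst limit, continuous refill at the sustained per-minute rate.
class TokenBucket {
  private ratePerMinute: number;
  private burstCapacity: number;
  private tokens: number;
  private lastRefillMs: number;

  constructor(ratePerMinute: number, burstCapacity: number, nowMs: number) {
    this.ratePerMinute = ratePerMinute;
    this.burstCapacity = burstCapacity;
    this.tokens = burstCapacity; // start with the full burst allowance
    this.lastRefillMs = nowMs;
  }

  // Returns true if a request may be sent now, consuming one token.
  tryAcquire(nowMs: number): boolean {
    // Refill at the sustained rate, capped at the burst capacity.
    const refill = ((nowMs - this.lastRefillMs) * this.ratePerMinute) / 60000;
    this.tokens = Math.min(this.burstCapacity, this.tokens + refill);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With Pro-tier numbers, `new TokenBucket(1000, 1200, Date.now())` permits an initial 1,200-request burst and then sustains roughly 16–17 requests per second.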
If you consistently need higher throughput than your current tier allows, consider upgrading your plan or contact the Rekall team for custom rate limit arrangements.
