# Rate limits ⏱️
Conduit uses rate limiting to ensure fair usage and maintain service quality for everyone. Here's how to work within the limits.
If you are working an active 429 incident, start with Troubleshooting and FAQ for the short recovery path, then use this page for detailed limits, headers, and pacing strategies.
## Quick reference
| Endpoint | Limit | Notes |
|---|---|---|
| File uploads (`POST /v1/files`) | 10/min | Large files count as 1 request |
| Report jobs (`POST /v1/reports/jobs`) | 20/min | Per API key |
| Job status (`GET /v1/jobs/:id`) | 60/min | Use webhooks instead! |
| File operations | 30/min | Get, list, delete |
| Entity operations | 30/min | Get, list, tag |
| Credit checks | 100/min | Balance and transactions |
| Everything else | 100/min | General limit |
Use webhooks instead of polling job status; it saves 90% of your rate limit budget!
## Reading rate limit headers
Every response includes rate limit information:
```
X-RateLimit-Limit: 100        # Total requests allowed per window
X-RateLimit-Remaining: 95     # Requests remaining in this window
X-RateLimit-Reset: 1704067200 # Unix timestamp when limit resets
```
Example:
```typescript
const response = await fetch("https://api.mappa.ai/v1/credits", {
  headers: { "Mappa-Api-Key": apiKey },
});

const limit = response.headers.get("X-RateLimit-Limit");
const remaining = response.headers.get("X-RateLimit-Remaining");
const reset = response.headers.get("X-RateLimit-Reset");

console.info(`${remaining}/${limit} requests remaining`);
console.info(`Resets at ${new Date(Number.parseInt(reset!) * 1000)}`);
```
## Handling 429 responses
When you exceed the limit, you'll get a 429 Too Many Requests response:
```json
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded for this endpoint",
    "retryAfter": 60
  }
}
```
### SDK handling (automatic)
The SDK retries automatically with exponential backoff:
```typescript
import { Conduit } from "@mappa-ai/conduit";

const conduit = new Conduit({
  apiKey: process.env.CONDUIT_API_KEY!,
  maxRetries: 3, // Will retry on 429
});

// SDK handles rate limits automatically
const receipt = await conduit.reports.create({
  source: { mediaId: "media_123" },
  output: { template: "general_report" },
  target: { strategy: "dominant" },
});

console.info(receipt.jobId);
```
### Manual handling (REST API)
If using the REST API directly:
```typescript
// Minimal sleep helper used by the retry loop
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (err: any) {
      if (err.status === 429) {
        // retryAfter is reported in seconds; fall back to exponential backoff
        const retryAfterMs = err.retryAfter
          ? err.retryAfter * 1000
          : Math.pow(2, i) * 1000;
        console.warn(`Rate limited. Retrying after ${retryAfterMs}ms`);
        await sleep(retryAfterMs);
        continue;
      }
      throw err;
    }
  }
  throw new Error("Max retries exceeded");
}

// Usage: fetch does not throw on 429, so throw on non-OK responses
// to let withRetry see the status and retryAfter
const job = await withRetry(async () => {
  const res = await fetch("https://api.mappa.ai/v1/reports/jobs", {
    method: "POST",
    headers: {
      "Mappa-Api-Key": apiKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({...}),
  });

  if (!res.ok) {
    const body = await res.json().catch(() => ({}));
    throw Object.assign(new Error(body.error?.message ?? res.statusText), {
      status: res.status,
      retryAfter: body.error?.retryAfter,
    });
  }

  return res.json();
});
```
## Best practices
### 1. Use webhooks, not polling
❌ Bad: Polling wastes your rate limit
```typescript
// Polls 120 times for a 4-minute job (1,800 requests/hour at this pace!)
while (true) {
  const job = await conduit.primitives.jobs.get(jobId); // Uses rate limit
  if (job.status === "succeeded") break;
  await sleep(2000);
}
```
✅ Good: Webhooks save your rate limit
```typescript
// Single request, webhook notifies when done
const receipt = await conduit.reports.create({
  ...params,
  webhook: { url: "https://yourapp.com/webhooks/conduit" },
  target: { strategy: "dominant" },
});

console.info(receipt.jobId);
// No polling needed!
```
### 2. Monitor rate limit headers
```typescript
async function makeRequest(url: string) {
  const response = await fetch(url, {
    headers: { "Mappa-Api-Key": process.env.CONDUIT_API_KEY! },
  });

  const remaining = Number.parseInt(
    response.headers.get("X-RateLimit-Remaining") || "0"
  );

  if (remaining < 10) {
    console.warn(`⚠️ Only ${remaining} requests remaining!`);
    // Alert your workspace or throttle requests
  }

  return response.json();
}
```
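If alerting is not enough, you can also throttle proactively: when headroom runs low, wait until the window resets before sending the next request. Below is a minimal sketch using the `X-RateLimit-Remaining` and `X-RateLimit-Reset` headers described above; `pauseIfNearLimit` is a hypothetical helper, not part of the SDK.

```typescript
// Sketch: wait out the rest of the window when few requests remain.
// Assumes X-RateLimit-Reset is a Unix timestamp in seconds (see above).
async function pauseIfNearLimit(response: Response, minRemaining = 5) {
  const remaining = Number.parseInt(
    response.headers.get("X-RateLimit-Remaining") || "0"
  );
  const reset = Number.parseInt(
    response.headers.get("X-RateLimit-Reset") || "0"
  );

  if (remaining < minRemaining && reset > 0) {
    const waitMs = Math.max(0, reset * 1000 - Date.now());
    console.warn(`Near the rate limit; pausing ${waitMs}ms until the window resets`);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}
```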
### 3. Implement request queuing
For high-volume apps, queue requests to stay within limits:
```typescript
class RateLimitedQueue {
  private queue: Array<() => Promise<any>> = [];
  private processing = false;
  private requestsPerMinute = 20;
  private interval = 60000 / this.requestsPerMinute; // ms between requests

  async add<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          const result = await fn();
          resolve(result);
        } catch (err) {
          reject(err);
        }
      });

      if (!this.processing) {
        this.process();
      }
    });
  }

  private async process() {
    this.processing = true;

    while (this.queue.length > 0) {
      const fn = this.queue.shift()!;
      await fn();
      await sleep(this.interval);
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue();

// Queues requests to respect rate limit
const job1 = await queue.add(() => conduit.reports.create({...}));
const job2 = await queue.add(() => conduit.reports.create({...}));
```
### 4. Cache responses
Don't re-fetch data that hasn't changed:
```typescript
const cache = new Map<string, { data: any; expiry: number }>();

async function getReportWithCache(reportId: string) {
  const key = `report:${reportId}`;
  const cached = cache.get(key);

  if (cached && Date.now() < cached.expiry) {
    return cached.data; // Return cached data
  }

  // Fetch fresh data
  const report = await conduit.reports.get(reportId);

  // Cache for 5 minutes
  cache.set(key, {
    data: report,
    expiry: Date.now() + 300000,
  });

  return report;
}
```
### 5. Batch operations
Process multiple items concurrently (but within limits):
```typescript
async function batchUploadFiles(files: File[]) {
  const BATCH_SIZE = 5; // Stay under 10/min limit

  for (let i = 0; i < files.length; i += BATCH_SIZE) {
    const batch = files.slice(i, i + BATCH_SIZE);

    // Upload batch concurrently
    const uploads = await Promise.all(
      batch.map(file => conduit.primitives.media.upload({ file }))
    );
    console.info(`Uploaded batch ${i / BATCH_SIZE + 1}`);

    // Wait 1 minute before next batch (if more batches remain)
    if (i + BATCH_SIZE < files.length) {
      await sleep(60000);
    }
  }
}
```
## Increasing limits
Need higher limits for production? Contact us with:
- Use case - What you're building
- Volume - Expected requests per minute/hour
- Timeline - When you need increased limits
We're happy to work with production apps that need higher throughput!
## Debugging rate limit issues
### Check your current usage
```typescript
// Track requests in your app
let requestCount = 0;
let windowStart = Date.now();

async function trackRequest<T>(fn: () => Promise<T>): Promise<T> {
  requestCount++;

  const elapsed = Date.now() - windowStart;
  if (elapsed >= 60000) {
    console.info(`Made ${requestCount} requests in last minute`);
    requestCount = 0;
    windowStart = Date.now();
  }

  return await fn();
}

// Usage
const receipt = await trackRequest(() => conduit.reports.create({...}));
console.info(receipt.jobId);
```
### Common causes of rate limiting
| Issue | Solution |
|---|---|
| Polling job status | Use webhooks instead |
| Uploading many files | Batch uploads with delays |
| Multiple API keys | Consolidate to one key per environment |
| Retry loops without backoff | Implement exponential backoff |
| Not checking headers | Monitor X-RateLimit-Remaining |
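One refinement to the exponential backoff shown in the manual-handling example: if many workers hit a 429 at the same time, plain backoff makes them all retry in lockstep. Adding jitter spreads the retries out. A minimal sketch, reusing the `sleep` helper defined above:

```typescript
// Full-jitter exponential backoff: the delay cap doubles with each attempt,
// but the actual wait is randomized so concurrent clients don't retry together.
function backoffWithJitter(attempt: number, baseMs = 1000, maxMs = 30000): number {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * cap);
}

// e.g. inside the withRetry loop above:
// await sleep(backoffWithJitter(i));
```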
## What's next? 🚀
Optimize your integration:
- Webhooks - Stop polling, save rate limit
- Production Guide - Best practices for production
- Error Handling - Handle 429 errors gracefully
Need help?
- Troubleshooting and FAQ - Recover from 429 spikes and polling pressure
- API Reference - Complete API documentation
- Support - Request higher limits