Rate Limits

API rate limits protect the service and ensure fair usage. Learn how they work and how to handle them.

Rate Limit Overview

Limits by Plan

Plan       | Per Minute | Per Hour | Per Day
Pro        | 60         | 1,000    | 10,000
Enterprise | 300        | 5,000    | 100,000

Per-Key Limits

Limits apply per API key, not per account. Multiple keys have independent limits.

Rate Limit Headers

Every response includes rate limit info:

X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1640000060
Header                | Description
X-RateLimit-Limit     | Maximum requests in the current window
X-RateLimit-Remaining | Requests left in the current window
X-RateLimit-Reset     | Unix timestamp when the window resets

Rate Limit Response

When you exceed a limit, the API returns 429 Too Many Requests:

{
  "success": false,
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 30 seconds.",
    "retry_after": 30
  }
}

The response headers also include:

Retry-After: 30

Handling Rate Limits

Check Headers

Monitor remaining requests:

async function makeRequest(url) {
  const response = await fetch(url, {
    headers: { 'Authorization': `Bearer ${apiKey}` }
  });

  const remaining = response.headers.get('X-RateLimit-Remaining');
  console.log(`Requests remaining: ${remaining}`);

  return response;
}
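
If X-RateLimit-Remaining reaches zero, you can pause until the window resets instead of sending requests that will be rejected. A minimal sketch building on makeRequest above, assuming X-RateLimit-Reset is a Unix timestamp in seconds:

async function makeThrottledRequest(url) {
  const response = await makeRequest(url);

  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const reset = Number(response.headers.get('X-RateLimit-Reset'));

  if (remaining === 0) {
    // Wait until the window resets, plus a small buffer
    const waitMs = Math.max(0, reset * 1000 - Date.now()) + 1000;
    console.log(`Window exhausted. Waiting ${waitMs}ms`);
    await new Promise(resolve => setTimeout(resolve, waitMs));
  }

  return response;
}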

Exponential Backoff

Implement retry with backoff:

async function requestWithRetry(url, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, {
      headers: { 'Authorization': `Bearer ${apiKey}` }
    });

    if (response.status !== 429) {
      return response;
    }

    // Honor Retry-After, then back off exponentially on repeated 429s
    const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 1;
    const delay = retryAfter * 1000 * Math.pow(2, attempt);

    console.log(`Rate limited. Retrying in ${delay}ms`);
    await new Promise(resolve => setTimeout(resolve, delay));
  }

  throw new Error('Max retries exceeded');
}

Queue Requests

For bulk operations, use a queue:

class RateLimitedQueue {
  constructor(requestsPerMinute = 60) {
    this.interval = 60000 / requestsPerMinute;
    this.queue = [];
    this.processing = false;
  }

  add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();
      try {
        const result = await request();
        resolve(result);
      } catch (error) {
        reject(error);
      }
      await new Promise(r => setTimeout(r, this.interval));
    }

    this.processing = false;
  }
}
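
For example, to create many tags while staying under the Pro per-minute limit (tagTitles and createTag are placeholders for your own data and API helper):

const queue = new RateLimitedQueue(60);

// createTag is a placeholder for any function that calls the API
const results = await Promise.all(
  tagTitles.map(title => queue.add(() => createTag(title)))
);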

Reducing API Calls

Batch Operations

Instead of multiple calls:

# ❌ Multiple requests
POST /v1/tags {"title": "Tag 1"}
POST /v1/tags {"title": "Tag 2"}
POST /v1/tags {"title": "Tag 3"}

Use the batch endpoint:

# ✅ Single request
POST /v1/tags/batch
{
  "tags": [
    {"title": "Tag 1"},
    {"title": "Tag 2"},
    {"title": "Tag 3"}
  ]
}
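
The same batch call from JavaScript might look like this (a sketch; it assumes the endpoint accepts the JSON body shown above):

const response = await fetch('/v1/tags/batch', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    tags: [
      { title: 'Tag 1' },
      { title: 'Tag 2' },
      { title: 'Tag 3' }
    ]
  })
});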

Caching

Cache responses when possible:

const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedTag(tagId) {
  const cached = cache.get(tagId);
  if (cached && Date.now() - cached.time < CACHE_TTL) {
    return cached.data;
  }

  const response = await fetch(`/v1/tags/${tagId}`, {
    headers: { 'Authorization': `Bearer ${apiKey}` }
  });
  const data = await response.json();

  cache.set(tagId, { data, time: Date.now() });
  return data;
}

Use Webhooks

Instead of polling:

// ❌ Polling
setInterval(async () => {
  const tags = await fetchTags();
  checkForChanges(tags);
}, 5000);

Set up webhooks:

// ✅ Webhooks
app.post('/webhook', (req, res) => {
  const event = req.body;
  handleTagChange(event);
  res.sendStatus(200);
});

Endpoint-Specific Limits

Some endpoints have additional limits:

Endpoint        | Additional Limit
File upload     | 10/minute
Bulk operations | 5/minute
QR generation   | 30/minute
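
One way to respect these stricter limits is to reuse the RateLimitedQueue class above with a separate queue per endpoint (a sketch; uploadFile and generateQr are hypothetical helpers):

// Separate queues so stricter endpoint limits don't block other calls
const uploadQueue = new RateLimitedQueue(10); // file uploads: 10/minute
const qrQueue = new RateLimitedQueue(30);     // QR generation: 30/minute

// uploadFile and generateQr stand in for your own API helpers
await uploadQueue.add(() => uploadFile(file));
await qrQueue.add(() => generateQr(tagId));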

Monitoring Usage

Dashboard

View API usage:

  1. Go to Account → API Keys
  2. Click on a key
  3. View Usage tab

See:

  • Requests per day
  • Requests per endpoint
  • Error rates

Usage Alerts

Set up notifications:

  1. Go to API Keys → Alerts
  2. Configure thresholds:
    • 80% of daily limit
    • High error rate
    • Sustained rate limiting

Increasing Limits

Upgrade Plan

Higher tier = higher limits:

  • Pro: 60/min, 10K/day
  • Enterprise: 300/min, 100K/day

Request Increase

For special cases:

  1. Contact support@tagd-ai.com
  2. Explain use case
  3. Request limit increase
  4. Custom limits available (Enterprise)

Best Practices

Efficient API Usage

  1. Batch when possible - Use bulk endpoints
  2. Cache responses - Don't re-fetch unchanged data
  3. Use webhooks - Avoid polling
  4. Paginate wisely - Request appropriate page sizes (see the sketch after this list)
  5. Handle errors - Implement proper retry logic
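
Pagination parameters are not covered on this page; a minimal sketch, assuming hypothetical page and per_page query parameters and a has_more field in the response:

async function fetchAllTags() {
  const tags = [];
  let page = 1;

  while (true) {
    // page and per_page are assumed parameter names; check the tags endpoint docs
    const response = await fetch(`/v1/tags?page=${page}&per_page=100`, {
      headers: { 'Authorization': `Bearer ${apiKey}` }
    });
    const body = await response.json();

    tags.push(...(body.tags || []));

    // has_more is an assumed response field indicating further pages
    if (!body.has_more) break;
    page++;
  }

  return tags;
}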

Monitoring

  1. Track remaining - Monitor rate limit headers
  2. Alert early - Warn at 80% usage
  3. Log failures - Track 429 responses (see the sketch below)
  4. Analyze patterns - Optimize high-frequency calls
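
A minimal sketch combining the first and third points, building on the makeRequest wrapper above; where you send the logs is up to you:

async function monitoredRequest(url) {
  const response = await makeRequest(url);

  const limit = Number(response.headers.get('X-RateLimit-Limit'));
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));

  // Alert early: warn once usage passes 80% of the window
  if (limit && remaining / limit < 0.2) {
    console.warn(`Rate limit warning: ${remaining}/${limit} requests left`);
  }

  // Log failures: track 429s so sustained rate limiting shows up in your logs
  if (response.status === 429) {
    console.error(`429 received for ${url} at ${new Date().toISOString()}`);
  }

  return response;
}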

Troubleshooting

Unexpected Rate Limits

If hitting limits unexpectedly:

  1. Check for runaway loops in code
  2. Verify caching is working
  3. Look for duplicate requests
  4. Check whether multiple services share the same key

Inconsistent Limits

Limits are per-key:

  • Multiple keys = independent limits
  • Test and production keys have the same limits
  • Per-minute window rolls continuously

Next Steps