# 429 Too Many Requests

**Client Error** - Rate limit exceeded
## What It Means
The HTTP 429 Too Many Requests status code indicates that you've sent too many requests in a given time window. The server is rate limiting your requests to protect itself from abuse or overload.
## Example Response

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705320000
Content-Type: application/json

{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Try again in 60 seconds."
}
```
## Common Rate Limit Headers

| Header | Meaning |
|---|---|
| `Retry-After` | Seconds to wait (or an HTTP-date) before retrying |
| `X-RateLimit-Limit` | Max requests allowed per window |
| `X-RateLimit-Remaining` | Requests left in current window |
| `X-RateLimit-Reset` | Unix timestamp (seconds) when the limit resets |

Note that the `X-RateLimit-*` headers are a widely used convention, not a standard; names and semantics vary between APIs.
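Monitoring these headers lets a client slow down *before* it gets a 429. The sketch below parses them from a plain object of lowercased header names; the header names and the 10% threshold are assumptions, so adapt them to the API you actually call:

```javascript
// Sketch: summarize common (non-standard) rate-limit headers.
// `headers` is a plain object with lowercased header names.
function parseRateLimit(headers) {
  const limit = Number(headers['x-ratelimit-limit']);
  const remaining = Number(headers['x-ratelimit-remaining']);
  const reset = Number(headers['x-ratelimit-reset']); // Unix seconds

  return {
    limit,
    remaining,
    // Seconds until the window resets, clamped at 0
    resetInSeconds: Math.max(0, reset - Math.floor(Date.now() / 1000)),
    // True when fewer than 10% of requests remain (arbitrary threshold)
    nearLimit: remaining / limit < 0.1,
  };
}
```

A client could check `nearLimit` after each response and pause until `resetInSeconds` elapses, avoiding the 429 entirely.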
## How to Handle 429s

```javascript
async function fetchWithRateLimit(url) {
  const response = await fetch(url);
  if (response.status === 429) {
    // Get retry delay from header (or default to 60s)
    const retryAfter = Number(response.headers.get('Retry-After')) || 60;
    console.log(`Rate limited. Waiting ${retryAfter}s...`);
    await new Promise(r => setTimeout(r, retryAfter * 1000));
    // Retry; in production, cap the number of attempts so this
    // can't recurse forever against a persistently limited endpoint
    return fetchWithRateLimit(url);
  }
  return response;
}
```
## Exponential Backoff

```javascript
async function fetchWithBackoff(url, attempt = 0) {
  const response = await fetch(url);
  if (response.status === 429 && attempt < 5) {
    // Exponential backoff: 1s, 2s, 4s, 8s, 16s
    const delay = Math.pow(2, attempt) * 1000;
    const jitter = Math.random() * 1000; // Add randomness
    await new Promise(r => setTimeout(r, delay + jitter));
    return fetchWithBackoff(url, attempt + 1);
  }
  return response;
}
```
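The two approaches combine naturally: prefer the server's `Retry-After` value when it is present, and fall back to backoff with jitter when it isn't. A sketch of just the delay calculation (the HTTP-date form of `Retry-After` is ignored here for brevity):

```javascript
// Sketch: compute a retry delay in milliseconds.
// Prefers the Retry-After header (in seconds); otherwise
// falls back to exponential backoff with jitter.
function retryDelayMs(attempt, retryAfterHeader) {
  const retryAfter = Number(retryAfterHeader);
  if (Number.isFinite(retryAfter) && retryAfter > 0) {
    return retryAfter * 1000; // The server told us exactly how long to wait
  }
  const base = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s, ...
  return base + Math.random() * 1000;       // Jitter avoids thundering herds
}
```

In `fetchWithBackoff` above, the `delay + jitter` computation could be replaced by `retryDelayMs(attempt, response.headers.get('Retry-After'))`.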
## Rate Limiting Strategies
| Strategy | Description |
|---|---|
| Fixed Window | 100 requests per minute, resets at :00 |
| Sliding Window | 100 requests in last 60 seconds |
| Token Bucket | Tokens refill gradually, burst allowed |
| Leaky Bucket | Fixed output rate, smooths traffic |
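Of the strategies above, the token bucket is the easiest to use client-side as a self-imposed throttle. A minimal sketch (the class name and refill math are illustrative, not from any particular library):

```javascript
// Sketch of a client-side token bucket: holds up to `capacity` tokens,
// refilled continuously at `ratePerSecond`. take() returns true when
// a request may be sent now, false when the caller should wait.
class TokenBucket {
  constructor(capacity, ratePerSecond) {
    this.capacity = capacity;
    this.ratePerSecond = ratePerSecond;
    this.tokens = capacity; // Start full: an initial burst is allowed
    this.lastRefill = Date.now();
  }

  take() {
    // Refill based on time elapsed since the last call
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.ratePerSecond
    );
    this.lastRefill = now;

    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With `new TokenBucket(10, 2)`, a client can burst 10 requests immediately, then sustain 2 per second, which matches how many APIs enforce their limits.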
## Best Practices

- Always respect `Retry-After` headers
- Implement exponential backoff with jitter
- Cache responses to reduce API calls
- Use webhooks instead of polling when available
- Batch requests when the API supports it
- Monitor your rate limit headers proactively
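One of the practices above, caching responses, can be sketched with a tiny in-memory TTL cache so repeated calls for the same URL don't burn quota. The class and its fixed `ttlMs` parameter are illustrative; production caching should honor the API's `Cache-Control` headers:

```javascript
// Sketch: in-memory cache with per-entry time-to-live.
class TtlCache {
  constructor() {
    this.store = new Map();
  }

  set(key, value, ttlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.store.delete(key); // Evict lazily on read
      return undefined;
    }
    return entry.value;
  }
}
```

A fetch wrapper would check `cache.get(url)` before hitting the network and `cache.set(url, body, ttlMs)` after a successful response.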
## Server Implementation

### Express.js with express-rate-limit
```javascript
const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 60 * 1000,   // 1 minute
  max: 100,              // 100 requests per window
  standardHeaders: true, // Send RateLimit-* headers on responses
  handler: (req, res) => {
    // resetTime is a Date; report seconds remaining until the window resets
    res.status(429).json({
      error: 'Too Many Requests',
      retryAfter: Math.ceil((req.rateLimit.resetTime - Date.now()) / 1000)
    });
  }
});

app.use('/api/', limiter);
```