Webhook 429 Too Many Requests — Rate Limits Explained
A complete guide to understanding webhook rate limiting — covering both directions, how to read Retry-After headers, and implementing backoff strategies that actually work.
Quick Answer
A 429 means you're sending too many requests to an endpoint or a provider is throttling your webhook deliveries. Read the Retry-After response header, back off for that duration, then resume with a lower request rate.
What Causes a 429 on a Webhook Endpoint
Rate limiting in webhooks runs in two directions, and it's important to identify which one you're dealing with:
Your endpoint returning 429
Your server is the one applying rate limits. A provider is sending webhook events too fast and your endpoint is throttling the delivery. The provider should back off and retry.
Fix: return a well-formed 429 with a Retry-After header, then increase your endpoint's capacity or add a queue.
An API you call returning 429
Your webhook handler calls a downstream API (Stripe, SendGrid, Slack) and that API is rate-limiting your requests. The 429 comes from the third party, not your endpoint.
Fix: implement backoff in your downstream API calls, and never make those calls synchronously inside the webhook handler.
Both scenarios require the same underlying fix — read the Retry-After header and implement backoff — but the place you implement it differs.
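For the first scenario, the "well-formed 429" your endpoint should return can be produced by a small fixed-window limiter that also computes the Retry-After value. This is an illustrative sketch under assumed names (createRateLimiter is not a library API); a real deployment would usually use a shared store such as Redis rather than in-process state:

```javascript
// Minimal fixed-window rate limiter: allows `limit` calls per `windowMs`,
// and when over the limit reports how many seconds until the window resets.
function createRateLimiter(limit, windowMs) {
  let windowStart = -Infinity; // no window open yet
  let count = 0;

  return function check(now = Date.now()) {
    if (now - windowStart >= windowMs) {
      // open a fresh window
      windowStart = now;
      count = 0;
    }
    count += 1;
    if (count <= limit) return { allowed: true };

    // Over the limit: tell the sender when the current window closes
    const retryAfterSec = Math.ceil((windowStart + windowMs - now) / 1000);
    return { allowed: false, retryAfterSec };
  };
}

// In an HTTP handler you would then respond with the header the sender needs:
//   const result = check();
//   if (!result.allowed) {
//     res.writeHead(429, { 'Retry-After': String(result.retryAfterSec) });
//     return res.end();
//   }
```

The key design point is that the limiter returns the wait time instead of just a boolean, so the 429 response can always carry an accurate Retry-After header.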
How to Read the Retry-After Header
The Retry-After header appears in 429 responses and tells the client how long to wait before retrying. It can take two forms:
```http
# Form 1: seconds to wait
HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712345678

# Form 2: HTTP date to wait until
HTTP/1.1 429 Too Many Requests
Retry-After: Sat, 05 Apr 2026 14:30:00 GMT
```
Here is how to parse both forms in Node.js:
```javascript
function getRetryAfterMs(response) {
  const header = response.headers.get('retry-after');
  if (!header) return 1000; // default 1 second fallback

  // Try parsing as a number (seconds)
  const seconds = parseInt(header, 10);
  if (!isNaN(seconds)) {
    return seconds * 1000; // convert to milliseconds
  }

  // Try parsing as an HTTP date
  const date = new Date(header);
  if (!isNaN(date.getTime())) {
    return Math.max(0, date.getTime() - Date.now());
  }

  return 1000; // fallback
}
```

Note that some APIs also include X-RateLimit-Remaining and X-RateLimit-Reset headers that let you slow down proactively before hitting the 429.
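Those X-RateLimit-* headers can drive proactive pacing. A minimal sketch, assuming the provider exposes a remaining-request count and a Unix-seconds reset time (header names vary by provider, and proactiveDelayMs is a hypothetical helper, not part of any API):

```javascript
// Derive a pacing delay from rate-limit headers so you slow down
// before the provider ever returns a 429.
function proactiveDelayMs(headers, now = Date.now()) {
  const remaining = parseInt(headers.get('x-ratelimit-remaining'), 10);
  const resetEpochSec = parseInt(headers.get('x-ratelimit-reset'), 10);
  if (isNaN(remaining) || isNaN(resetEpochSec)) return 0; // headers absent: no pacing

  const msUntilReset = Math.max(0, resetEpochSec * 1000 - now);
  if (remaining <= 0) return msUntilReset; // budget exhausted: wait for the reset

  // Spread the remaining budget evenly over the time left in the window
  return Math.ceil(msUntilReset / remaining);
}
```

Calling this after every response and sleeping for the returned duration keeps your request rate just under the provider's limit instead of bouncing off it.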
Implementing Exponential Backoff
When Retry-After is not present, use exponential backoff with jitter. Backoff without jitter causes a "thundering herd" — all clients retry at the same moment and immediately re-trigger the rate limit.
```javascript
async function fetchWithBackoff(url, options, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) {
      return response; // success or a different error
    }
    if (attempt === maxAttempts - 1) {
      throw new Error(`Rate limited after ${maxAttempts} attempts`);
    }

    // Honor Retry-After when the server provided one; getRetryAfterMs
    // returns its 1-second fallback when the header is missing, so anything
    // above that threshold means the server named a real wait time.
    const retryAfterMs = getRetryAfterMs(response);
    const backoffMs = retryAfterMs > 1000
      ? retryAfterMs
      : Math.min(1000 * Math.pow(2, attempt), 32000) // cap at 32 seconds
        + Math.random() * 1000; // add jitter

    console.log(`Rate limited. Retrying in ${Math.round(backoffMs)}ms (attempt ${attempt + 1})`);
    await new Promise(resolve => setTimeout(resolve, backoffMs));
  }
}
```

The jitter (Math.random() * 1000) spreads retries across a window so that multiple clients that all got rate-limited at the same time don't all retry at the exact same millisecond.
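The effect is easy to see if the backoff formula is pulled out on its own with an injectable random source (a standalone sketch, not part of the function above; backoffDelayMs is an illustrative name):

```javascript
// Capped exponential backoff with jitter. `rand` is injectable so the
// schedule can be inspected deterministically.
function backoffDelayMs(attempt, { baseMs = 1000, capMs = 32000, rand = Math.random } = {}) {
  return Math.min(baseMs * 2 ** attempt, capMs) + rand() * 1000;
}

// With rand fixed to 0, every client waits the identical sequence
// 1000, 2000, 4000, 8000, ... ms and they all collide on retry.
// With the real Math.random, each client lands somewhere inside a
// 1-second window, spreading the load on the rate-limited server.
```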
Using Requex to Test Rate Limiting
Before deploying your backoff logic to production, test it against a controlled 429 response. Requex lets you configure your webhook URL to return any status code — including 429 — with custom headers.
- Open Requex and go to the Response Configuration panel.
- Set the status code to 429.
- Add a Retry-After: 30 response header.
- Point your webhook sender to the Requex URL and trigger an event.
- Observe how the sender behaves — does it read the header? Does it back off for 30 seconds? Does it retry correctly?
This is particularly useful for testing how retry loops in your own code behave when the downstream service they call is rate-limited. You can simulate a 429 storm and confirm your backoff logic terminates cleanly and doesn't create an infinite retry loop.
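The same check also works entirely offline: inject a stub that returns 429 a fixed number of times and assert that your retry loop gives up cleanly. A simplified synchronous sketch (a real fetch is async; makeFakeFetch and retryUntilOk are illustrative names, not library APIs):

```javascript
// Returns a stub "fetch" that responds 429 `failures` times, then 200.
function makeFakeFetch(failures) {
  let calls = 0;
  return function fakeFetch() {
    calls += 1;
    if (calls <= failures) {
      return { status: 429, headers: new Map([['retry-after', '0']]) };
    }
    return { status: 200, headers: new Map() };
  };
}

// A retry loop with a hard attempt cap, so it can never spin forever.
function retryUntilOk(fetchFn, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = fetchFn();
    if (res.status !== 429) return { status: res.status, attempts: attempt };
    // A real implementation would sleep for the Retry-After duration here
  }
  return { status: 429, attempts: maxAttempts }; // gave up cleanly
}
```

Running this with `makeFakeFetch(10)` simulates a 429 storm that outlasts the retry budget and confirms the loop terminates at the cap instead of retrying indefinitely.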
Common Platform Rate Limits
| Platform | Limit | Headers Provided |
|---|---|---|
| GitHub API | 5,000 requests/hour (authenticated) | X-RateLimit-Remaining, X-RateLimit-Reset |
| Slack API | 1+ request/second (varies by method) | Retry-After |
| Discord API | Per-route limits (varies) | X-RateLimit-Remaining, Retry-After |
| Stripe API | 100 reads/second, 100 writes/second | Retry-After |
Fix Checklist
- ✓ Always read the Retry-After header before deciding how long to wait.
- ✓ Implement exponential backoff with jitter to avoid thundering-herd retry storms.
- ✓ Check X-RateLimit-Remaining proactively and slow down before you hit 0.
- ✓ Move downstream API calls out of the synchronous webhook handler and into a background queue with its own rate-limit-aware scheduler.
- ✓ Set a maximum retry count on your backoff loop — never retry indefinitely.
- ✓ If your own endpoint is returning 429, return a well-formed response with a Retry-After header so senders know when to try again.
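The background-queue item above reduces to one pure decision: given when the last downstream call fired and the minimum gap the API allows, how long must the worker wait before the next call? A sketch with an assumed helper name:

```javascript
// How long a queue worker should wait before its next downstream call,
// given a minimum gap between calls (e.g. 1000 ms for a 1 req/s API).
function nextSendDelayMs(lastSentAt, minGapMs, now = Date.now()) {
  if (lastSentAt === null) return 0; // nothing sent yet: go immediately
  return Math.max(0, minGapMs - (now - lastSentAt));
}

// A worker draining the queue would then do, per job:
//   await sleep(nextSendDelayMs(lastSentAt, 1000));
//   lastSentAt = Date.now();
//   await callDownstreamApi(job); // hypothetical downstream call
```

Keeping this calculation in the queue worker, rather than in the webhook handler, means a burst of incoming events piles up harmlessly in the queue instead of turning into a burst of rate-limited downstream requests.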
Related Resources
Webhook Retry Failed
Understand retry schedules and how providers handle delivery failures
Webhook 408 Timeout
Fix slow handlers that miss provider timeout windows
Webhook 401 Unauthorized
Authentication failures — tokens, API keys, and HMAC signatures
Webhook Simulator
Simulate 429 responses to test your backoff implementation
Test Your Backoff Logic Against a Real 429 Response
Configure Requex to return 429 with a Retry-After header and verify your backoff implementation handles it correctly before going to production.
Open Requex →