Limits by Plan
| Plan | Standard Requests | Batch Operations | Burst Allowance |
|---|---|---|---|
| Developer | 100 req/min | 10 req/min | 20 req/s peak |
| Pro | 500 req/min | 50 req/min | 100 req/s peak |
| Enterprise | Custom | Custom | Custom |
Ops Tip: Standard requests include GET, POST, PATCH, and DELETE on nodes, analytics queries, and webhook management. Batch operations include POST /batches and batch generation endpoints.

Rate Limit Headers
Every API response includes rate limit headers:

| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
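As a minimal sketch, the three headers can be pulled out of a response and converted to integers before making any throttling decision (the `headers` mapping stands in for whatever your HTTP client returns):

```python
def parse_rate_limit(headers: dict) -> dict:
    """Extract the documented X-RateLimit-* values as integers."""
    return {
        "limit": int(headers["X-RateLimit-Limit"]),
        "remaining": int(headers["X-RateLimit-Remaining"]),
        "reset": int(headers["X-RateLimit-Reset"]),  # Unix timestamp
    }

# Example headers for a Developer plan (100 req/min window):
info = parse_rate_limit({
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "37",
    "X-RateLimit-Reset": "1735689600",
})
```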
Handling 429 Too Many Requests
When you exceed the limit, the API returns 429 Too Many Requests with a Retry-After header indicating the number of seconds to wait before retrying.
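One way to handle this is a retry loop that honors Retry-After when present and falls back to exponential backoff with jitter. This is a sketch: `send` is a stand-in for your actual HTTP call and is assumed to return a `(status_code, headers)` pair.

```python
import random
import time

def request_with_retry(send, max_attempts=5, base_delay=1.0):
    """Retry on 429, preferring the server's Retry-After hint."""
    for attempt in range(max_attempts):
        status, headers = send()
        if status != 429:
            return status
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)  # server-provided wait in seconds
        else:
            # Exponential backoff plus jitter as a fallback.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
        time.sleep(delay)
    raise RuntimeError("still rate limited after %d attempts" % max_attempts)
```

Honoring Retry-After exactly avoids extending the penalty window, while the jittered fallback keeps concurrent workers from retrying in lockstep.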
Recommended Retry Strategy
Implement exponential backoff with jitter: double the delay after each consecutive 429 and add a random offset so that concurrent clients do not retry in lockstep.

Best Practices
Check remaining quota before bursting
Read X-RateLimit-Remaining from each response. If you’re approaching zero, throttle proactively instead of hitting the wall.
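The proactive check can be as simple as pausing until the window resets once remaining quota drops below some threshold (the threshold of 5 here is an arbitrary illustration, not a documented value):

```python
import time

def maybe_throttle(remaining: int, reset_ts: int, threshold: int = 5, now=None):
    """Return how many seconds to pause before the next request."""
    now = time.time() if now is None else now
    if remaining > threshold:
        return 0.0            # plenty of quota: proceed immediately
    return max(0.0, reset_ts - now)  # low quota: wait out the window

# 3 requests left, window resets in 12 seconds: pause 12 s.
pause = maybe_throttle(remaining=3, reset_ts=1_000_012, now=1_000_000)
```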
Use bulk endpoints for batch operations
Creating 100 nodes individually burns 100 requests. Use POST /batches to generate them in a single call; it counts as one batch operation.
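Instead of 100 individual POST /nodes calls, the work can be folded into one POST /batches request. A sketch of building such a payload (the payload shape here is illustrative, not the documented schema):

```python
def build_batch_payload(names):
    """Fold many node creations into one batch request body."""
    return {
        "operations": [
            {"method": "POST", "path": "/nodes", "body": {"name": n}}
            for n in names
        ]
    }

payload = build_batch_payload(f"node-{i}" for i in range(100))
# One batch operation instead of 100 standard requests.
```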
Cache read-heavy queries
If you’re polling GET /nodes frequently, cache the response on your side and use webhooks to invalidate the cache when data changes.
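A minimal sketch of this pattern, assuming `fetch_nodes` stands in for the real GET /nodes call and that node webhooks carry a `type` field like `node.updated` (an assumption, not the documented event schema):

```python
class NodeCache:
    """Serve GET /nodes from cache; refetch only after a webhook event."""

    def __init__(self, fetch_nodes):
        self._fetch = fetch_nodes
        self._cached = None

    def get(self):
        if self._cached is None:      # miss: one real API request
            self._cached = self._fetch()
        return self._cached           # hit: zero API requests

    def on_webhook(self, event):
        if event.get("type", "").startswith("node."):
            self._cached = None       # next get() refetches

calls = 0
def fetch():
    global calls
    calls += 1
    return ["node-1", "node-2"]

cache = NodeCache(fetch)
cache.get()
cache.get()                           # served from cache, no API call
cache.on_webhook({"type": "node.updated"})
cache.get()                           # cache invalidated, refetches
```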
Spread requests over time
If you need to update 200 nodes, spread the requests across 2+ minutes rather than firing them all at once. Your integration is more resilient and other tenants are unaffected.
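The pacing arithmetic is straightforward: 200 updates over a 120-second window means one request every 0.6 seconds. A sketch, where `action` stands in for the real PATCH call and `sleep` is injectable so the loop can be tested without waiting:

```python
import time

def paced(items, window_seconds, action, sleep=time.sleep):
    """Spread calls to `action` evenly across a time window."""
    interval = window_seconds / len(items)   # 120 / 200 = 0.6 s apart
    for item in items:
        action(item)
        sleep(interval)

updated = []
paced(list(range(200)), 120, updated.append, sleep=lambda s: None)
```

At 0.6 s per request this stays at 100 req/min, within even the Developer plan's standard limit.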