NearNode enforces rate limits to ensure platform stability across all tenants. Limits are applied per API key and vary by plan tier and operation type.

Limits by Plan

| Plan | Standard Requests | Batch Operations | Burst Allowance |
|---|---|---|---|
| Developer | 100 req/min | 10 req/min | 20 req/s peak |
| Pro | 500 req/min | 50 req/min | 100 req/s peak |
| Enterprise | Custom | Custom | Custom |
Ops Tip: Standard requests include GET, POST, PATCH, and DELETE on nodes, analytics queries, and webhook management. Batch operations include POST /batches and batch generation endpoints.
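As a back-of-the-envelope sketch, the sustained limits above translate into an average spacing between standard requests (plan names and figures are from the table; the calculation itself is illustrative, not part of the API):

```javascript
// Minimum average spacing between standard requests, per plan.
const plans = {
  Developer: { perMinute: 100 },
  Pro: { perMinute: 500 },
};

for (const [name, { perMinute }] of Object.entries(plans)) {
  // 60,000 ms per window divided by the request budget
  const spacingMs = Math.round(60000 / perMinute);
  console.log(`${name}: one request every ${spacingMs} ms on average`);
}
// Developer: one request every 600 ms on average
// Pro: one request every 120 ms on average
```

Note the burst allowance is separate: a Developer key may briefly peak at 20 req/s, but sustained traffic above one request every 600 ms will eventually exhaust the per-minute window.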

Rate Limit Headers

Every API response includes rate limit headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 97
X-RateLimit-Reset: 1707494400
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
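A client can read these headers off any response to track its remaining budget. A minimal sketch (the helper name is ours; the header names are from the table above):

```javascript
// Parse rate limit state from a fetch Response's Headers object.
function parseRateLimit(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit'), 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining'), 10),
    // Convert the Unix timestamp (seconds) to a Date for easier logging
    resetsAt: new Date(parseInt(headers.get('X-RateLimit-Reset'), 10) * 1000),
  };
}

// Example using the header values shown above
const headers = new Headers({
  'X-RateLimit-Limit': '100',
  'X-RateLimit-Remaining': '97',
  'X-RateLimit-Reset': '1707494400',
});
console.log(parseRateLimit(headers).remaining); // 97
```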

Handling 429 Too Many Requests

When you exceed the limit, the API returns:
{
  "data": null,
  "error": "Rate limit exceeded. Retry after 12 seconds."
}
The response includes a Retry-After header with the number of seconds to wait:
HTTP/1.1 429 Too Many Requests
Retry-After: 12
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1707494412
Implement exponential backoff with jitter:
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status !== 429) return response;
    if (attempt === maxRetries) break; // out of retries; don't sleep again

    // Honor Retry-After as the base, double it per attempt, and add
    // random jitter so concurrent clients don't retry in lockstep.
    const retryAfter = parseInt(response.headers.get('Retry-After') || '1', 10);
    const jitter = Math.random() * 1000;
    const delay = retryAfter * 2 ** attempt * 1000 + jitter;

    console.log(`Rate limited. Retrying in ${Math.round(delay / 1000)}s...`);
    await new Promise(resolve => setTimeout(resolve, delay));
  }

  throw new Error('Max retries exceeded');
}

Best Practices

- Read X-RateLimit-Remaining from each response. If you’re approaching zero, throttle proactively instead of hitting the wall.
- Creating 100 nodes individually burns 100 requests. Use POST /batches to generate them in a single call — it counts as one batch operation.
- If you’re polling GET /nodes frequently, cache the response on your side and use webhooks to invalidate the cache when data changes.
- If you need to update 200 nodes, spread the requests across 2+ minutes rather than firing them all at once. Your integration is more resilient and other tenants are unaffected.
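One way to throttle proactively is to spread the remaining budget evenly across the time left in the window. The pacing policy below is a sketch of our own, not a NearNode requirement — it only uses the X-RateLimit-Remaining and X-RateLimit-Reset values documented above:

```javascript
// Compute how long (ms) to wait before the next request so the
// remaining budget is spread evenly across the rest of the window.
function paceDelayMs(remaining, resetUnix, nowMs = Date.now()) {
  const windowLeftMs = Math.max(0, resetUnix * 1000 - nowMs);
  if (remaining <= 0) return windowLeftMs;     // budget exhausted: wait for reset
  return Math.floor(windowLeftMs / remaining); // spread remaining requests evenly
}

// With 30 s left in the window and 60 requests remaining,
// issue one request every 500 ms.
const now = 1707494370000; // 30 s before the reset timestamp used above
console.log(paceDelayMs(60, 1707494400, now)); // 500
```

Call this after each response (feeding in the parsed header values) and sleep for the returned delay before the next request; under steady load the client then glides up to the limit instead of slamming into a 429.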
Need higher limits? Contact us at support@nearnode.io or upgrade to Enterprise for custom rate limits tailored to your workload.