
Rate Limits

Rate limits protect the API from abuse and ensure fair usage for all clients. This guide explains our rate limiting policies and how to handle them in your applications.

Overview

Rate limits are applied per authentication credential (API key or access token) and vary by endpoint and plan tier.

Rate Limit Tiers

Tier       | Requests/Minute | Requests/Hour | Burst
-----------|-----------------|---------------|------
Free       | 60              | 1,000         | 10
Developer  | 300             | 10,000        | 50
Business   | 1,000           | 50,000        | 100
Enterprise | Custom          | Custom        | Custom

Rate Limit Headers

Every API response includes rate limit information:

HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 998
X-RateLimit-Reset: 1704067200
X-RateLimit-Policy: 1000;w=60

Header                | Description
----------------------|-----------------------------------------------------
X-RateLimit-Limit     | Maximum requests allowed in the window
X-RateLimit-Remaining | Requests remaining in the current window
X-RateLimit-Reset     | Unix timestamp at which the limit resets
X-RateLimit-Policy    | Rate limit policy: quota and window length in seconds (e.g. 1000;w=60 means 1,000 requests per 60-second window)
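Your client can read these headers to pace itself before ever hitting a 429. A minimal sketch (the helper names are our own; the header names and semantics are the ones from the table above):

```python
import time

def seconds_until_reset(headers, now=None):
    """Seconds to wait until the window resets. X-RateLimit-Reset is a Unix timestamp."""
    now = now if now is not None else time.time()
    reset = int(headers.get("X-RateLimit-Reset", 0))
    return max(0.0, reset - now)

def remaining_fraction(headers):
    """Fraction of the window's quota still available (0.0 to 1.0)."""
    limit = int(headers.get("X-RateLimit-Limit", 1))
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    return remaining / limit
```

For example, with the headers shown above, remaining_fraction returns 0.998; a client might start throttling when this drops below 0.1.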

Handling Rate Limits

429 Too Many Requests

When you exceed the rate limit, the API returns:

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 45 seconds.",
    "details": {
      "retry_after": 45,
      "limit": 1000,
      "window": 60
    }
  }
}

The response includes a Retry-After header:

HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067245

Implementing Retry Logic

import time
import requests

class RateLimitHandler:
    def __init__(self, api_key):
        self.session = requests.Session()
        self.session.headers['X-API-Key'] = api_key

    def request(self, method, url, **kwargs):
        max_retries = 3

        for attempt in range(max_retries):
            response = self.session.request(method, url, **kwargs)

            if response.status_code == 429:
                # Honor the server's Retry-After header; fall back to 60s
                retry_after = int(response.headers.get('Retry-After', 60))
                print(f"Rate limited. Waiting {retry_after}s...")
                time.sleep(retry_after)
                continue

            return response

        raise Exception("Max retries exceeded")

# Usage
client = RateLimitHandler('ak_live_xxx')
response = client.request('GET', 'https://platform.powerverse.com/inventory-service/sites')

Endpoint-Specific Limits

Some endpoints have additional limits:

Endpoint                                  | Limit | Window
------------------------------------------|-------|---------
POST /auth/token                          | 20    | 1 minute
POST /inventory-service/assets/bulk       | 10    | 1 minute
POST /asset-controls/sessions/{id}/action | 30    | 1 minute
GET /asset-controls/events                | 60    | 1 minute
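To stay under these per-endpoint limits client-side, you can throttle each endpoint independently. One common approach (not part of the API itself, just a client-side sketch) is a token bucket per endpoint, sized from the table above:

```python
import time

class TokenBucket:
    """Client-side throttle: allows `rate` requests per `per` seconds."""
    def __init__(self, rate, per=60.0):
        self.capacity = rate
        self.tokens = float(rate)
        self.refill_rate = rate / per  # tokens regained per second
        self.last = time.monotonic()

    def try_acquire(self):
        # Refill based on elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per endpoint, matching the limits in the table above
buckets = {
    "POST /auth/token": TokenBucket(20),
    "POST /inventory-service/assets/bulk": TokenBucket(10),
    "POST /asset-controls/sessions/{id}/action": TokenBucket(30),
    "GET /asset-controls/events": TokenBucket(60),
}
```

Before each call, check the matching bucket with try_acquire() and wait (or queue the request) when it returns False.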

Best Practices

1. Monitor Rate Limit Headers

Track your usage proactively:

function checkRateLimit(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'));
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'));
  const usage = ((limit - remaining) / limit * 100).toFixed(1);

  console.log(`Rate limit usage: ${usage}%`);

  if (remaining < limit * 0.1) {
    console.warn('Warning: Approaching rate limit!');
  }
}

2. Implement Exponential Backoff

For sustained rate limiting:

import time
import random

def exponential_backoff(attempt, base_delay=1, max_delay=60):
    # Double the delay each attempt, capped at max_delay, plus up to 10% jitter
    delay = min(base_delay * (2 ** attempt), max_delay)
    jitter = random.uniform(0, delay * 0.1)
    return delay + jitter

# Usage (make_request() stands in for your API call)
for attempt in range(5):
    response = make_request()
    if response.status_code == 429:
        delay = exponential_backoff(attempt)
        time.sleep(delay)
    else:
        break

3. Use Bulk Endpoints

Instead of many individual requests:

# ❌ Don't: 100 individual requests
for id in asset_ids:
    GET /inventory-service/assets/{id}

# ✅ Do: Single bulk request
POST /inventory-service/assets/bulk
{
  "ids": ["ast_1", "ast_2", ..., "ast_100"]
}
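As a concrete sketch of the bulk pattern above (the chunk size of 100 and the "data" response key are assumptions; confirm them against the bulk endpoint's reference):

```python
import json
import urllib.request

def chunked(items, size=100):
    """Split a list into chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def fetch_assets_bulk(api_key, asset_ids):
    """Fetch assets in chunks of 100 rather than one request per ID.

    Chunking also keeps each call under the bulk endpoint's own limit
    (10 requests/minute, per the table above).
    """
    results = []
    for chunk in chunked(asset_ids):
        req = urllib.request.Request(
            "https://platform.powerverse.com/inventory-service/assets/bulk",
            data=json.dumps({"ids": chunk}).encode(),
            headers={"X-API-Key": api_key, "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            results.extend(json.load(resp).get("data", []))
    return results
```

Fetching 250 assets this way costs 3 requests instead of 250.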

4. Cache Responses

Reduce unnecessary API calls:

const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedSite(siteId) {
  const cacheKey = `site:${siteId}`;
  const cached = cache.get(cacheKey);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const response = await api.getSite(siteId);
  cache.set(cacheKey, { data: response, timestamp: Date.now() });
  return response;
}

5. Spread Requests Over Time

Avoid bursting:

import asyncio

async def process_items(items, requests_per_second=10):
    delay = 1.0 / requests_per_second

    for item in items:
        await process_item(item)  # process_item() stands in for your per-item API call
        await asyncio.sleep(delay)

Rate Limit Increase

Requesting Higher Limits

To request a limit increase, contact us with:

  1. Business justification: Explain your use case
  2. Current usage: Provide metrics on your API usage
  3. Expected growth: Estimate future needs

Email: api-support@powerverse.com

Enterprise Plans

Enterprise customers can configure:

  • Custom rate limits per endpoint
  • Dedicated rate limit pools
  • Burst allowances
  • Priority queuing

Monitoring & Alerts

Check Current Usage

curl -X GET "https://platform.powerverse.com/usage/rate-limits" \
  -H "Authorization: Bearer YOUR_TOKEN"

Response:

{
  "data": {
    "current_window": {
      "requests": 450,
      "limit": 1000,
      "remaining": 550,
      "reset_at": "2024-01-15T10:30:00Z"
    },
    "hourly": {
      "requests": 8500,
      "limit": 50000,
      "remaining": 41500
    }
  }
}

Webhook Notifications

Configure alerts when approaching limits:

curl -X POST "https://platform.powerverse.com/webhooks" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "url": "https://yourapp.com/webhooks/rate-limit",
    "events": ["rate_limit.warning", "rate_limit.exceeded"],
    "threshold": 80
  }'
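On the receiving side, your endpoint should acknowledge deliveries quickly and act on the event type. A minimal stdlib sketch; the payload field name ("event") is an assumption, so confirm it against the payloads your endpoint actually receives:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_rate_limit_event(payload):
    """Map a webhook payload to a client-side action (names here are our own)."""
    event = payload.get("event")
    if event == "rate_limit.warning":
        return "throttle"   # slow down non-essential traffic
    if event == "rate_limit.exceeded":
        return "pause"      # stop and back off until the window resets
    return "ignore"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        handle_rate_limit_event(payload)
        # Respond 200 promptly so the delivery is not retried
        self.send_response(200)
        self.end_headers()

# To run the receiver:
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```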
