# Rate Limits

Rate limits protect the API from abuse and ensure fair usage for all clients. This guide explains our rate limiting policies and how to handle them in your applications.

## Overview

Rate limits are applied per authentication credential (API key or access token) and vary by endpoint and plan tier.
## Rate Limit Tiers
| Tier | Requests/Minute | Requests/Hour | Burst |
|---|---|---|---|
| Free | 60 | 1,000 | 10 |
| Developer | 300 | 10,000 | 50 |
| Business | 1,000 | 50,000 | 100 |
| Enterprise | Custom | Custom | Custom |
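The per-minute and burst columns can be read as a token bucket: the bucket holds up to the burst size in tokens and refills at the sustained per-minute rate. How the server implements its limiter is not specified here, so treat this as a client-side model only; the tier numbers below come from the table above.

```python
import time


class TokenBucket:
    """Client-side model of a tier limit: `capacity` is the burst
    allowance, and tokens refill at requests_per_minute / 60 per second."""

    def __init__(self, capacity, requests_per_minute):
        self.capacity = capacity
        self.refill_per_sec = requests_per_minute / 60.0
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Free tier: 60 requests/minute with a burst of 10
bucket = TokenBucket(capacity=10, requests_per_minute=60)
allowed = sum(bucket.try_acquire() for _ in range(20))
# allowed == 10: the burst drains immediately, later calls wait for refill
```

Throttling locally like this keeps you from ever hitting a 429 in steady-state traffic, rather than reacting after the fact.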
## Rate Limit Headers

Every API response includes rate limit information:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 998
X-RateLimit-Reset: 1704067200
X-RateLimit-Policy: 1000;w=60
```

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (seconds) when the limit resets |
| `X-RateLimit-Policy` | Rate limit policy: maximum requests, then the window length in seconds (`w=60`) |
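One way to consume these headers is to turn them into a small status record after each response. A minimal sketch (it takes any dict-like header mapping, so it works with `requests` or a plain dict):

```python
from datetime import datetime, timezone


def parse_rate_limit(headers):
    """Extract rate limit state from response headers."""
    limit = int(headers["X-RateLimit-Limit"])
    remaining = int(headers["X-RateLimit-Remaining"])
    # X-RateLimit-Reset is a Unix timestamp in seconds
    reset_at = datetime.fromtimestamp(int(headers["X-RateLimit-Reset"]),
                                      tz=timezone.utc)
    return {
        "limit": limit,
        "remaining": remaining,
        "reset_at": reset_at,
        "used_pct": round((limit - remaining) / limit * 100, 1),
    }


info = parse_rate_limit({
    "X-RateLimit-Limit": "1000",
    "X-RateLimit-Remaining": "998",
    "X-RateLimit-Reset": "1704067200",
})
# info["used_pct"] == 0.2
```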
## Handling Rate Limits

### 429 Too Many Requests

When you exceed the rate limit, the API returns:

```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 45 seconds.",
    "details": {
      "retry_after": 45,
      "limit": 1000,
      "window": 60
    }
  }
}
```

The response also includes a `Retry-After` header:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067245
```
### Implementing Retry Logic

**Python**

```python
import time

import requests


class RateLimitHandler:
    def __init__(self, api_key):
        self.session = requests.Session()
        self.session.headers['X-API-Key'] = api_key

    def request(self, method, url, **kwargs):
        max_retries = 3
        for attempt in range(max_retries):
            response = self.session.request(method, url, **kwargs)
            if response.status_code == 429:
                # Honor the server's Retry-After header, defaulting to 60s
                retry_after = int(response.headers.get('Retry-After', 60))
                print(f"Rate limited. Waiting {retry_after}s...")
                time.sleep(retry_after)
                continue
            return response
        raise Exception("Max retries exceeded")


# Usage
client = RateLimitHandler('ak_live_xxx')
response = client.request('GET', 'https://platform.powerverse.com/inventory-service/sites')
```
**JavaScript**

```javascript
class APIClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
  }

  async request(url, options = {}) {
    const maxRetries = 3;
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      const response = await fetch(url, {
        ...options,
        headers: {
          'X-API-Key': this.apiKey,
          ...options.headers
        }
      });
      if (response.status === 429) {
        // Honor the server's Retry-After header, defaulting to 60s
        const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;
        console.log(`Rate limited. Waiting ${retryAfter}s...`);
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
        continue;
      }
      return response;
    }
    throw new Error('Max retries exceeded');
  }
}

// Usage
const client = new APIClient('ak_live_xxx');
const response = await client.request('https://platform.powerverse.com/inventory-service/sites');
```
**Go**

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

type Client struct {
	apiKey     string
	httpClient *http.Client
}

func (c *Client) Request(method, url string) (*http.Response, error) {
	maxRetries := 3
	for attempt := 0; attempt < maxRetries; attempt++ {
		req, err := http.NewRequest(method, url, nil)
		if err != nil {
			return nil, err
		}
		req.Header.Set("X-API-Key", c.apiKey)

		resp, err := c.httpClient.Do(req)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode == http.StatusTooManyRequests {
			resp.Body.Close() // release the throttled response before retrying
			retryAfter, _ := strconv.Atoi(resp.Header.Get("Retry-After"))
			if retryAfter == 0 {
				retryAfter = 60
			}
			fmt.Printf("Rate limited. Waiting %ds...\n", retryAfter)
			time.Sleep(time.Duration(retryAfter) * time.Second)
			continue
		}
		return resp, nil
	}
	return nil, fmt.Errorf("max retries exceeded")
}
```
## Endpoint-Specific Limits

Some endpoints have additional limits:

| Endpoint | Limit | Window |
|---|---|---|
| `POST /auth/token` | 20 | 1 minute |
| `POST /inventory-service/assets/bulk` | 10 | 1 minute |
| `POST /asset-controls/sessions/{id}/action` | 30 | 1 minute |
| `GET /asset-controls/events` | 60 | 1 minute |
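Because these limits apply per endpoint on top of your tier limit, a client can track them separately. A sketch using a sliding one-minute window keyed by endpoint (the sliding-window semantics are an assumption; the per-endpoint numbers come from the table above):

```python
import time
from collections import defaultdict, deque

# Requests allowed per 60-second window, from the table above
ENDPOINT_LIMITS = {
    "POST /auth/token": 20,
    "POST /inventory-service/assets/bulk": 10,
}


class EndpointLimiter:
    def __init__(self, limits, window=60.0):
        self.limits = limits
        self.window = window
        self.calls = defaultdict(deque)  # endpoint -> recent call timestamps

    def allow(self, endpoint, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls[endpoint]
        # Drop timestamps that have aged out of the window
        while q and now - q[0] >= self.window:
            q.popleft()
        limit = self.limits.get(endpoint)
        if limit is not None and len(q) >= limit:
            return False
        q.append(now)
        return True


limiter = EndpointLimiter(ENDPOINT_LIMITS)
results = [limiter.allow("POST /auth/token", now=0.0) for _ in range(25)]
# First 20 calls allowed, the remaining 5 rejected within the same window
```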
## Best Practices

### 1. Monitor Rate Limit Headers

Track your usage proactively:

```javascript
function checkRateLimit(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  const usage = ((limit - remaining) / limit * 100).toFixed(1);
  console.log(`Rate limit usage: ${usage}%`);
  if (remaining < limit * 0.1) {
    console.warn('Warning: Approaching rate limit!');
  }
}
```
### 2. Implement Exponential Backoff

For sustained rate limiting:

```python
import random
import time

def exponential_backoff(attempt, base_delay=1, max_delay=60):
    # Double the delay each attempt, cap it, and add up to 10% jitter
    delay = min(base_delay * (2 ** attempt), max_delay)
    jitter = random.uniform(0, delay * 0.1)
    return delay + jitter

# Usage
for attempt in range(5):
    response = make_request()
    if response.status_code == 429:
        time.sleep(exponential_backoff(attempt))
    else:
        break
```
### 3. Use Bulk Endpoints

Instead of many individual requests:

```text
# ❌ Don't: 100 individual requests
for id in asset_ids:
    GET /inventory-service/assets/{id}

# ✅ Do: a single bulk request
POST /inventory-service/assets/bulk
{
  "ids": ["ast_1", "ast_2", ..., "ast_100"]
}
```
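When you have more ids than fit in one bulk call, chunk them client-side and issue one request per chunk. A minimal sketch (the 100-id chunk size mirrors the example above and is an assumption, not a documented cap):

```python
def chunked(ids, size=100):
    """Split a list of asset ids into bulk-request payloads."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]


payloads = chunked([f"ast_{i}" for i in range(250)], size=100)
# 3 payloads: 100 + 100 + 50 ids
```

With the `POST /inventory-service/assets/bulk` limit of 10 requests per minute, this also tells you how many assets you can fetch per minute at a given chunk size.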
### 4. Cache Responses

Reduce unnecessary API calls:

```javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedSite(siteId) {
  const cacheKey = `site:${siteId}`;
  const cached = cache.get(cacheKey);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  const response = await api.getSite(siteId);
  cache.set(cacheKey, { data: response, timestamp: Date.now() });
  return response;
}
```
### 5. Spread Requests Over Time

Avoid bursting:

```python
import asyncio

async def process_items(items, requests_per_second=10):
    delay = 1.0 / requests_per_second
    for item in items:
        await process_item(item)
        await asyncio.sleep(delay)
```
## Rate Limit Increases

### Requesting Higher Limits

Contact us for limit increases, including:

- **Business justification**: Explain your use case
- **Current usage**: Provide metrics on your API usage
- **Expected growth**: Estimate future needs

Email: api-support@powerverse.com

### Enterprise Plans

Enterprise customers can configure:

- Custom rate limits per endpoint
- Dedicated rate limit pools
- Burst allowances
- Priority queuing
## Monitoring & Alerts

### Check Current Usage

```shell
curl -X GET "https://platform.powerverse.com/usage/rate-limits" \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Response:

```json
{
  "data": {
    "current_window": {
      "requests": 450,
      "limit": 1000,
      "remaining": 550,
      "reset_at": "2024-01-15T10:30:00Z"
    },
    "hourly": {
      "requests": 8500,
      "limit": 50000,
      "remaining": 41500
    }
  }
}
```
### Webhook Notifications

Configure alerts when approaching limits:

```shell
curl -X POST "https://platform.powerverse.com/webhooks" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://yourapp.com/webhooks/rate-limit",
    "events": ["rate_limit.warning", "rate_limit.exceeded"],
    "threshold": 80
  }'
```
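On the receiving side, your endpoint can branch on the event type and decide how to react. A minimal sketch of that decision logic (the payload shape, including `event` and `usage_pct`, is an assumption mirroring the event names configured above, not a documented schema):

```python
def handle_rate_limit_event(payload):
    """Map a rate-limit webhook payload to a client-side action."""
    event = payload.get("event")
    if event == "rate_limit.exceeded":
        return "pause"  # stop non-essential traffic until the window resets
    if event == "rate_limit.warning":
        usage = payload.get("usage_pct", 0)
        return "throttle" if usage >= 80 else "ok"
    return "ignore"


action = handle_rate_limit_event({"event": "rate_limit.warning", "usage_pct": 85})
# action == "throttle"
```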
## Next Steps

- **Error Handling**: Handle rate limit errors gracefully
- **Pagination**: Efficiently fetch large datasets
- **API Reference**: Full endpoint reference