🚦 Rate Limits

Explanation of what rate limits are and how to handle them.

🤔 What is a Rate Limit?

Your plan has a certain capacity for the number of requests per second your application can make.

Oftentimes, if you send queries too quickly in succession, you will get a rate limit response. In most cases this is totally fine and will not affect your users at all, as long as you handle the errors properly. Some users see these errors and request rate limit increases rather than refactoring their code to handle this case. Check out how to handle rate limits using retries below.

Note: Rate limits are lower for development environment apps than for staging/production environment apps. This is to prevent runaway scripts in development from eating up the entire monthly FCU allowance.

📜 Rate Limit Types

There are two types of rate limits:

  1. Requests per second

  2. Concurrent requests

These may seem like the same thing, but most requests take a few milliseconds so your app's requests per second will be significantly higher than its concurrent requests.
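To see why the two numbers differ, note that average concurrency is roughly request rate times average latency (Little's law). A minimal illustrative sketch in Python (the numbers are hypothetical, not actual plan limits):

```python
def avg_concurrency(requests_per_second: float, avg_latency_seconds: float) -> float:
    """Estimate the average number of in-flight requests (Little's law)."""
    return requests_per_second * avg_latency_seconds

# An app sending 100 requests/second, where each request takes 50 ms,
# only averages about 5 concurrent requests at any instant.
print(avg_concurrency(100, 0.050))
```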

📥 Response

When you exceed your capacity, you will receive a rate limit response. This response will be different depending on whether you are connecting to Alchemy using HTTP or WebSockets.

If you would like to test receiving a 429 response, send a POST request to https://httpstat.us/429.


Over HTTP, you will receive an HTTP 429 (Too Many Requests) response status code.


Over WebSockets, you will receive a JSON-RPC error response with error code 429. For example, the response might be:

```json
{
  "jsonrpc": "2.0",
  "error": {
    "code": 429,
    "message": "Too many concurrent requests"
  }
}
```
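Both signals can be detected with a small helper. A minimal sketch, assuming your client exposes each response as an HTTP status code plus a raw body string:

```python
import json

def is_rate_limited(http_status: int, body: str) -> bool:
    """Return True if a response indicates a rate limit was hit, either
    via an HTTP 429 status or a JSON-RPC error object with code 429."""
    if http_status == 429:
        return True
    try:
        payload = json.loads(body)
    except (json.JSONDecodeError, TypeError):
        return False
    error = payload.get("error") if isinstance(payload, dict) else None
    return isinstance(error, dict) and error.get("code") == 429

print(is_rate_limited(429, ""))  # True: HTTP status alone is enough
print(is_rate_limited(200, '{"jsonrpc": "2.0", "error": {"code": 429, "message": "Too many concurrent requests"}}'))  # True
print(is_rate_limited(200, '{"jsonrpc": "2.0", "result": "0x1"}'))  # False
```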

🤜 Retries

The easiest way to handle rate limits is to retry the request with an exponential backoff. Retrying is a good idea with any API, even if you aren't hitting rate limits, since the internet occasionally drops requests in transit.

Option 1: Alchemy.js

If you're using Web3.js, just use the Alchemy.js wrapper for Web3. We handle all of the retry logic for you!

Option 2: Implement Retries

When you see a 429 response, retry the request with a small delay. We suggest waiting a random interval between 1000 and 1250 milliseconds and sending the request again, up to some maximum number of attempts you are willing to wait.
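This strategy can be sketched as a small loop. This is an illustrative Python example, not an official client; `send_request` stands in for whatever function issues your API call and returns a status code and body:

```python
import random
import time

def retry_with_delay(send_request, max_attempts=5, delay_range=(1.0, 1.25)):
    """Retry on HTTP 429, sleeping a random interval (default 1.0-1.25 s,
    i.e. the 1000-1250 ms suggested above) between attempts.
    `send_request` is any callable returning (status_code, body)."""
    status, body = send_request()
    for _ in range(max_attempts - 1):
        if status != 429:
            break
        time.sleep(random.uniform(*delay_range))
        status, body = send_request()
    return status, body

# Hypothetical usage: a fake endpoint that rate-limits twice, then succeeds.
responses = iter([(429, ""), (429, ""), (200, '{"jsonrpc": "2.0", "result": "0x1", "id": 1}')])
print(retry_with_delay(lambda: next(responses), delay_range=(0.0, 0.0)))
```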

Option 3: Exponential Backoff

Exponential backoff is a standard error-handling strategy for network applications. It is similar to simple retries, except that instead of waiting a random fixed interval, an exponential backoff algorithm increases the waiting time between retries exponentially, up to a maximum backoff time.

Example Algorithm:

  1. Make a request.

  2. If the request fails, wait 1 + random_number_milliseconds seconds and retry the request.

  3. If the request fails, wait 2 + random_number_milliseconds seconds and retry the request.

  4. If the request fails, wait 4 + random_number_milliseconds seconds and retry the request.

  5. And so on, up to a maximum_backoff time.

  6. Continue waiting and retrying up to some maximum number of retries, but do not increase the wait period between retries.


  • The wait time is min(((2^n)+random_number_milliseconds), maximum_backoff), with n incremented by 1 for each iteration (request).

  • random_number_milliseconds is a random number of milliseconds less than or equal to 1000. This helps to avoid cases in which many clients are synchronized by some situation and all retry at once, sending requests in synchronized waves. The value of random_number_milliseconds is recalculated after each retry request.

  • maximum_backoff is typically 32 or 64 seconds. The appropriate value depends on the use case.

The client can continue retrying after it has reached the maximum_backoff time. Retries after this point do not need to continue increasing backoff time. For example, suppose a client uses a maximum_backoff time of 64 seconds. After reaching this value, the client can retry every 64 seconds. At some point, clients should be prevented from retrying indefinitely.
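The wait-time formula above can be sketched directly. An illustrative Python helper (the names are hypothetical, and the random jitter is expressed in seconds):

```python
import random

def backoff_delay(n: int, maximum_backoff: float = 64.0) -> float:
    """Wait time before retry n, per the formula above:
    min((2**n) + random_number_milliseconds, maximum_backoff),
    with the <= 1000 ms jitter expressed here as <= 1.0 s."""
    random_jitter = random.uniform(0, 1.0)
    return min((2 ** n) + random_jitter, maximum_backoff)

# Delays grow 1, 2, 4, 8, ... seconds (plus jitter), then cap at 64 s.
for n in range(8):
    print(f"retry {n}: wait {backoff_delay(n):.2f}s")
```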

Option 4: Batch Requests

A batch request consists of multiple API calls combined into one HTTP request, reducing the total number of requests to help solve rate limit issues. Rather than hitting the rate limit from many individual requests, you can combine them into batches and drastically reduce the total number of requests.

To send several request objects in a batch, you can send an array filled with the request objects. The server will respond with an array containing the corresponding response objects once each request object has been processed. The requests are processed as a set of concurrent tasks, so order is not guaranteed.


  • The response objects may be returned in any order within the array, so you should match request objects to response objects based on their id values.

  • There will not be any response objects for notifications

If the batch call itself is not a valid JSON-RPC request, or does not contain any request objects, the server will respond with a single error response object. If there are no response objects to return at all, the server will return nothing rather than an empty response array.



Example batch request:

```json
[
  {"jsonrpc": "2.0", "method": "sum", "params": [1, 2, 4], "id": "1"},
  {"jsonrpc": "2.0", "method": "notify_hello", "params": [7]},
  {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": "2"},
  {"foo": "boo"},
  {"jsonrpc": "2.0", "method": "foo.get", "params": {"name": "myself"}, "id": "5"},
  {"jsonrpc": "2.0", "method": "get_data", "id": "9"}
]
```


Example batch response:

```json
[
  {"jsonrpc": "2.0", "result": 7, "id": "1"},
  {"jsonrpc": "2.0", "result": 19, "id": "2"},
  {"jsonrpc": "2.0", "error": {"code": -32600, "message": "Invalid Request"}, "id": null},
  {"jsonrpc": "2.0", "error": {"code": -32601, "message": "Method not found"}, "id": "5"},
  {"jsonrpc": "2.0", "result": ["hello", 5], "id": "9"}
]
```
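Since order is not guaranteed, matching responses back to requests by id can be done with a small helper. A minimal sketch in Python (the function name is hypothetical):

```python
def match_by_id(requests, responses):
    """Pair each batch request with its response using the JSON-RPC id.
    Notifications (requests with no id) get no response; unmatched ids map to None."""
    by_id = {r["id"]: r for r in responses if r.get("id") is not None}
    return {req["id"]: by_id.get(req["id"])
            for req in requests if "id" in req}

requests = [
    {"jsonrpc": "2.0", "method": "sum", "params": [1, 2, 4], "id": "1"},
    {"jsonrpc": "2.0", "method": "notify_hello", "params": [7]},  # notification: no response
    {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": "2"},
]
responses = [
    {"jsonrpc": "2.0", "result": 19, "id": "2"},  # arrives out of order
    {"jsonrpc": "2.0", "result": 7, "id": "1"},
]
matched = match_by_id(requests, responses)
print(matched["1"]["result"])  # -> 7
```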

💡 Final Tips

Use a different key for each part of your project (ex: frontend, backend, development), since rate limits vary depending on the type of app (development rate limits are lower; see the note above).

We're here to help! If you have any questions, reach out to us!