API Rate Limits

Per User or Per Application

Rate limiting in version 1.1 of the API is primarily considered on a per-user basis, or more accurately, per access token under your control. If a method allows for 15 requests per rate limit window, then it allows you to make 15 requests per window per access token. This is similar to the way API v1 had per-user/per-access token limits when leveraging OAuth.

When using application-only authentication, rate limits are determined globally for the entire application. If a method allows for 15 requests per rate limit window, then it allows you to make 15 requests per window — on behalf of your application. This limit is considered completely separately from per-user limits.

15 Minute Windows

Rate limits in version 1.1 of the API are divided into 15 minute intervals, a change from the 60 minute blocks in version 1.0. Additionally, all 1.1 endpoints require authentication, so there is no longer a concept of unauthenticated calls and rate limits.

While in version one of the API, an OAuth-enabled application could initiate 350 GET-based requests per hour per access token, API v1.1’s rate limiting model allows for a wider range of requests through per-method request limits. There are two initial buckets available for GET requests: 15 calls every 15 minutes, and 180 calls every 15 minutes.


Search will be rate limited at 180 queries per 15 minute window for the time being, but we may adjust that over time. A friendly reminder that search queries will need to be authenticated in version 1.1.

HTTP Headers and Response Codes

New HTTP headers are returned in v1.1. Ensure that you inspect these headers, as they provide pertinent data on where your application stands against a given rate limit for the method you just used. Please note that these headers are similar, but not identical, to the headers returned in API v1.0’s rate limiting model.

Note that these HTTP headers are contextual. When using app-only auth, they indicate the rate limit for the application context. When using user-based auth, they indicate the rate limit for that user-application context.

  • X-Rate-Limit-Limit: the rate limit ceiling for that given request
  • X-Rate-Limit-Remaining: the number of requests left for the 15 minute window
  • X-Rate-Limit-Reset: the remaining window before the rate limit resets in UTC epoch seconds
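As a sketch, a client might read these headers into usable numbers. The helper below is illustrative and assumes the response headers have already been collected into a plain dict of strings:

```python
import time

def parse_rate_limit_headers(headers):
    """Extract the v1.1 rate limit headers from a dict of response
    headers. Returns (limit, remaining, seconds_until_reset)."""
    limit = int(headers["X-Rate-Limit-Limit"])
    remaining = int(headers["X-Rate-Limit-Remaining"])
    reset_epoch = int(headers["X-Rate-Limit-Reset"])  # UTC epoch seconds
    seconds_until_reset = max(0, reset_epoch - int(time.time()))
    return limit, remaining, seconds_until_reset
```

If `remaining` reaches zero, sleeping for `seconds_until_reset` before the next call avoids tripping the limit.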

When an application exceeds the rate limit for a given API endpoint, the Twitter API will now return an HTTP 429 “Too Many Requests” response code instead of the variety of codes you would find across v1’s Search and REST APIs.

If you hit the rate limit on a given endpoint, this is the body of the HTTP 429 message that you will see:

 { "errors": [ { "code": 88, "message": "Rate limit exceeded" } ] } 
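A defensive client can treat a 429 as a signal to wait out the window before retrying. The sketch below is illustrative: `do_request` is a hypothetical callable standing in for your actual HTTP call, returning a status code, a header dict, and a body:

```python
import time

def request_with_backoff(do_request, max_attempts=3):
    """Retry a request when the API answers 429, waiting until the
    window resets (per X-Rate-Limit-Reset) before trying again.

    `do_request` is a hypothetical callable returning
    (status, headers, body).
    """
    for _ in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return status, body
        # Fall back to a one-minute wait if the header is absent.
        reset_epoch = int(headers.get("X-Rate-Limit-Reset", time.time() + 60))
        time.sleep(max(0, reset_epoch - time.time()))
    return status, body
```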

To better predict the rate limits available to you, consider periodically using GET application / rate_limit_status. Like the rate limiting HTTP headers, this resource’s response will indicate the rate limit status for the calling context — when using app-only auth, the limits will pertain to that auth context. When using user-based auth, the limits will pertain to the application-user context.
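As a sketch, assuming the JSON response from GET application / rate_limit_status has been decoded into a dict (it carries a "resources" map keyed by resource family, then by endpoint path), a lookup could be as simple as:

```python
def remaining_calls(status_payload, family, endpoint):
    """Read the remaining-call count for one endpoint out of a decoded
    rate_limit_status payload: a dict whose "resources" entry maps a
    resource family (e.g. "statuses") to endpoint paths, each carrying
    "limit", "remaining" and "reset" fields."""
    return status_payload["resources"][family][endpoint]["remaining"]
```

For example, `remaining_calls(payload, "statuses", "/statuses/mentions_timeline")` would report how many mention-timeline calls remain in the current window.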

GET and POST Request Limits

Rate limits on “reads” from the system are defined on a per user and per application basis, while rate limits on writes into the system are defined solely at the user level. In other words, for reading rate limits consider the following scenario:

  • If user A launches application Z, and app Z makes 10 calls to user A’s mention timeline in a 15 minute window, then app Z has 5 calls left to make for that window
  • If user A then launches application X, and app X calls user A’s mention timeline 3 times, then app X has 12 calls left for that window
  • The remaining value of calls on application X is isolated from application Z’s, despite the same user A

Contrast this with write allowances, which are defined on a per user basis. So if user A posts 5 Tweets with application Z, those 5 POSTs count against the allowance of any other application acting on behalf of user A during that same window of time.
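The scenario above can be modeled as two kinds of counters: reads keyed by (user, application) and writes keyed by user alone. This is only an illustrative model of the accounting, not how the service is implemented:

```python
from collections import defaultdict

class WindowAccounting:
    """Toy model of one 15 minute window: reads are counted per
    (user, app) pair; writes are counted per user regardless of app."""

    def __init__(self, read_limit=15):
        self.read_limit = read_limit
        self.reads = defaultdict(int)   # (user, app) -> calls used
        self.writes = defaultdict(int)  # user -> POSTs used

    def read(self, user, app):
        self.reads[(user, app)] += 1

    def write(self, user, app):
        # The app identity is deliberately ignored for writes.
        self.writes[user] += 1

    def reads_left(self, user, app):
        return self.read_limit - self.reads[(user, app)]

    def writes_used(self, user):
        return self.writes[user]
```

Replaying the scenario: 10 reads by app Z leave 5 calls for (user A, app Z), while (user A, app X) still has its full allowance until app X spends it.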

Lastly, there may be times in which the rate limit values that we return are inconsistent, or cases where no headers are returned at all. Perhaps memcache has been reset, or one memcache was busy so the system spoke to a different instance: the values may be inconsistent now and again. We will make a best effort to maintain consistency, but we will err toward giving an application extra calls if there is an inconsistency.

Tips to avoid being Rate Limited

The tips below will help you code defensively and reduce the possibility of being rate limited. Some application features that you may want to provide are simply impossible in light of rate limiting, especially around the freshness of results. If real-time information is an aim of your application, look into the Streaming APIs, along with User streams and Site streams.


Caching

Store API responses in your application or on your site if you expect a lot of use. For example, don’t try to call the Twitter API on every page load of your website landing page. Instead, call the API infrequently and load the response into a local cache. When users hit your website, load the cached version of the results.
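A minimal time-based cache illustrates the idea; the five minute TTL and the `fetch` callable are arbitrary placeholders for your own choices:

```python
import time

class TimedCache:
    """Cache API responses for `ttl` seconds so a busy page does not
    trigger a fresh API call on every load."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get(self, key, fetch):
        entry = self._store.get(key)
        now = time.time()
        if entry and entry[0] > now:
            return entry[1]       # still fresh: serve the cached copy
        value = fetch()           # only hit the API on a miss or expiry
        self._store[key] = (now + self.ttl, value)
        return value
```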

Prioritize active users

If your site keeps track of many Twitter users (for example, fetching their current status or statistics about their Twitter usage), consider only requesting data for users who have recently signed into your site.

Adapt to the search results

If your application monitors a high volume of search terms, query less often for searches that have no results than for those that do. By using a back-off you can keep up to date on queries that are popular but not waste cycles requesting queries that very rarely change. Alternatively, consider using the Streaming APIs and filter on your search terms.
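One simple back-off, with illustrative intervals: double the polling interval for a term while its searches come back empty, and snap back to the base interval as soon as results appear:

```python
def next_poll_interval(current, had_results, base=60, ceiling=3600):
    """Decide how long to wait before polling a search term again.
    Reset to `base` seconds when results appear; otherwise double the
    interval, capped at `ceiling` seconds."""
    if had_results:
        return base
    return min(current * 2, ceiling)
```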

Use application-only auth as a “reserve”

Requests using application-only authentication are evaluated separately from an application’s per-user rate limits. For many scenarios, you may want to use this additional rate limit pool as a “reserve” for your typical user-based operations.
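A sketch of that pattern, where `user_request` and `app_request` are hypothetical callables issuing the same request under user auth and app-only auth respectively, each returning a status code and body:

```python
def fetch_with_reserve(user_request, app_request):
    """Try the user-auth context first; on a 429, fall back to the
    application-only context, which draws from a separate pool."""
    status, body = user_request()
    if status == 429:
        status, body = app_request()
    return status, body
```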


We ask that you honor the rate limit. If you or your application abuses the rate limits, we will blacklist it. If you are blacklisted, you will be unable to get a response from the Twitter API. If you or your application has been blacklisted and you think there has been an error, you can contact the email address on our Support page. So that we can get you back online quickly, please include the following information:

  1. If you are using the REST API, make a call to GET application / rate_limit_status from the account or computer which you believe to be blacklisted.
  2. Explain why you think your application was blacklisted.
  3. Describe in detail how you have fixed the problem that you think caused you to be blacklisted.

Streaming API

The Streaming API has rate limiting and access levels that are appropriate for long-lived connections. See the Streaming APIs documentation for more details.

Leveraging the Streaming API is a great way to free-up your rate limits for more inventive uses of the Twitter API.

Rate Limiting information for the Streaming API is detailed on Connecting to a streaming endpoint.

Limits Per Window Per Resource

Our API rate limit windows are currently 15 minutes long. Visit our API Rate Limit: Chart page to see the limits by resource.

Note that endpoints/resources not listed in the above chart default to 15 requests per rate limit window per user.