Workers abuse protection limit throws HTTP 429 status

Hey
I’m using a Worker to build an alternative response to a REST request I make. For stress-test purposes I’m sending every request from the very same IP, and I keep getting ‘error code: 1015’ with an HTTP status code of 429.
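
For context, the Worker is roughly shaped like this (a simplified sketch, not my actual code; the upstream URL and response shape are placeholders):

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    // Forward the incoming request to an external REST endpoint (a subrequest).
    const upstream = await fetch("https://api.example.com/resource", {
      method: request.method,
      headers: request.headers,
    });

    // Build an alternative response around the upstream result.
    const data = await upstream.json();
    return new Response(JSON.stringify({ wrapped: data }), {
      status: upstream.status,
      headers: { "content-type": "application/json" },
    });
  },
};
```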

It seems like Cloudflare’s abuse protection is failing my requests because I’m accessing from a single IP address.
The documentation suggests contacting Cloudflare support to increase this limit (‘or expect your application to incur these errors, contact your Cloudflare account team to increase your limit.’), but I couldn’t find any way to contact support or to increase it myself in the Cloudflare UI.

I’m on the paid Workers plan and have tried both the Bundled and Unbound usage models.
How should I increase this limit?

Thanks

Does your Worker send HTTP requests to an external domain? How many requests per second are you sending with this tool?

You could be running into the per-PoP subrequest rate limit. I believe this limit exists to make sure Workers can’t effectively be used to launch a DDoS attack. I have never heard of anyone running into it in production, but you will almost certainly hit it when load testing from a single location.

I believe zones on the Enterprise plan are exempt from this limit, but you should reach out to sales if you expect to actually handle several thousand requests per second in production.


Hey Albert,
Thanks for the quick reply 🙂

My Worker does send HTTP requests to an external domain, but I have a batching mechanism that should significantly reduce the number of these outbound requests.
Should I expect a 429 status when the subrequest rate limit is hit?
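
To give you an idea of what I mean by batching, it works roughly like this (a simplified sketch with placeholder names and sizes, not my actual code):

```ts
// Instead of one outbound fetch per item, items are buffered in the isolate
// and sent in a single subrequest once the buffer is full.
interface Item {
  id: string;
  payload: unknown;
}

const BATCH_SIZE = 50; // placeholder threshold
let buffer: Item[] = [];

async function flushBatch(items: Item[]): Promise<void> {
  // One subrequest for the whole batch instead of items.length subrequests.
  await fetch("https://api.example.com/batch", { // placeholder endpoint
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ items }),
  });
}

function enqueue(item: Item, ctx: ExecutionContext): void {
  buffer.push(item);
  if (buffer.length >= BATCH_SIZE) {
    const toSend = buffer;
    buffer = [];
    // Flush in the background so the client response isn't delayed.
    // Isolate memory isn't durable, so this is best-effort batching only.
    ctx.waitUntil(flushBatch(toSend));
  }
}
```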

Regarding Cloudflare’s abuse protection: if I understand correctly, you’re saying this limit comes from Cloudflare itself and not necessarily from the Worker?
Unfortunately I’m not on an Enterprise subscription. I also forgot to mention: without the Worker enabled on my domain (i.e. when I delete the route), I don’t get any rate-limit errors, even though I’m using the same benchmark tool.
BTW, is there any other way to increase this limit on a paid, non-Enterprise plan?

Thanks again

The rate limit comes from Cloudflare but it is only applied to Worker subrequests. This is why you only see the error when requests are proxied through a Worker.
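
If you want to confirm where the 429 is coming from, one option is to log the status of the subrequest itself and compare it with what your benchmark tool sees on the inbound request. Something like this untested sketch (the upstream URL is a placeholder):

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    try {
      const upstream = await fetch("https://api.example.com/resource");
      // If the inbound request is what's being limited, the Worker never runs
      // and this line is never reached; if the subrequest is limited, its
      // status shows up here in the logs.
      console.log("subrequest status:", upstream.status);
      return upstream;
    } catch (err) {
      console.log("subrequest failed:", String(err));
      return new Response("upstream error", { status: 502 });
    }
  },
};
```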

How many requests per second are you sending with this tool? How many requests per second do you expect to receive in production?

Remember, this is a local per-PoP limit, not a global one. In a real-world scenario, requests come from users in many different locations and hit different PoPs, so I think you are unlikely to hit this limit in production.

I don’t know if this limit can be manually adjusted - you’d have to reach out to sales. But I personally would not be worried about it for the reasons stated above.


The reason for this benchmark is to understand the side effects of the Worker code on my API’s latency. I’m sending ~2,000 RPS, and I expect these numbers in production as well, although of course, as you mentioned, from a variety of IPs.

Is the PoP limit you’re talking about the limit of 6 simultaneous open connections?

Thanks

‘PoP’ stands for ‘Point of Presence’ and is synonymous with ‘colo’ or ‘colocation’. It is a datacenter where Cloudflare has rented space for their servers. Cloudflare has PoPs in more than 275 cities around the world.

Cloudflare’s IP addresses are anycasted so any one of those PoPs can handle your request. Which PoP you connect to depends on your ISP’s routing, but you generally hit one of the PoPs close to your location. You are likely to hit the same PoP for all requests.

The subrequest rate limit has nothing to do with the client IP address. The limit is shared by all Workers running in the same PoP.

As long as your users aren’t all from the same city you shouldn’t run into issues. Even 2,000 requests per second spread over 20 PoPs is only 100 requests per PoP per second.

Just note that 2,000 requests per second amounts to 63,072,000,000 requests per year. At $0.50/million requests that’s more than $31K per year in usage-based charges. If you’re actually expecting these numbers in production I strongly recommend reaching out to sales for a bulk discount.
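
For reference, the back-of-the-envelope math behind those figures (using the $0.50/million figure above; check the current pricing for your plan):

```ts
const rps = 2_000;
const secondsPerYear = 60 * 60 * 24 * 365;          // 31,536,000
const requestsPerYear = rps * secondsPerYear;        // 63,072,000,000
const costPerMillionRequests = 0.5;                  // USD, figure used above
const annualCost = (requestsPerYear / 1_000_000) * costPerMillionRequests;
console.log(requestsPerYear, annualCost);            // 63072000000, 31536
```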

How does 2,000 RPS test the side effects of Workers on your API latency in a way that 200 RPM wouldn’t?

Got it. Thank you very much for the detailed explanation

You’re basically right 🙂
I’ll decrease that number.
