DDoS protection for REST API service

I first learned about Workers while researching how to protect my REST API service from DDoS attacks.

I read an article that said:
“Powerful DDoS protection. Cloudflare has one of the best DDoS protections available for small businesses included in the price.”

I was wondering if I could move my authentication code to Workers so that I can stop these attacks at the Cloudflare Workers level.

I’m still not sure, as some other threads mention that we are still charged for Worker invocations during a DDoS attack, which would nullify the whole effort.

Also, the Rate Limiting service offered separately on Cloudflare at $5 per 1M requests will be too expensive for us as we scale up.

Initially I wanted to go with Kong as an API gateway, but after seeing that post I felt Workers offer a double benefit: DDoS protection plus rate limiting (in-Worker, using memory, IP-based).

Can someone suggest whether it is ideal to rely on Workers to handle DDoS attacks and rate limiting for a REST API service?

Request rate limiting is only charged for successful connections; an attack categorized as a DDoS wouldn’t count toward that.

Thanks, Thomas, for the reply.

Are you talking about the $5/1M-requests Rate Limiting feature offered separately? If so, that will turn out to be too costly for us, as we don’t bill our customers by API request count.

Will rate limiting set up in Workers (per IP) also work to defend against DDoS attacks?


The only method that I know of is using the RayID and globals in a Worker.

See this for example:

The RayID is a sort of session ID for each visitor: Cloudflare’s way of keeping track of visitors and routing them to the same colo on new requests. Keep in mind, though, that it’s not “definitive”; users might get a new RayID occasionally (for example, if the Worker restarts).
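That per-visitor counting can be sketched with a plain map in the Worker’s global scope. This is only a sketch under the caveats just described: the counts live in a single isolate in a single colo and vanish whenever the Worker restarts, and the window and limit values here are made up for illustration.

```javascript
// In-memory request counts, scoped to this Worker isolate.
// Cleared whenever the isolate restarts; not shared across colos.
const counts = new Map();
const WINDOW_MS = 60_000; // 1-minute window (assumed value)
const LIMIT = 100;        // max requests per key per window (assumed value)

// Returns true if the caller identified by `key` (e.g. the
// CF-Connecting-IP header, or something RayID-derived) is over the limit.
function overLimit(key, now = Date.now()) {
  const entry = counts.get(key);
  if (!entry || now - entry.start >= WINDOW_MS) {
    // First request from this key, or the window rolled over: start fresh.
    counts.set(key, { start: now, n: 1 });
    return false;
  }
  entry.n += 1;
  return entry.n > LIMIT;
}
```

In a fetch handler, something like `overLimit(request.headers.get("CF-Connecting-IP"))` could gate the request and return an early 429 before any origin work happens.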

Just to confirm: do you see it helping defend against a DDoS attack? As I understand it, the Worker will still be invoked during the attack, and those invocations will be billed. Right?

I mean, Workers are built to handle enormous loads of distributed traffic, which is essentially a kind of DDoS but without malicious intent.

So there’s only a limited set of interventions you can do:

  1. Send event samples to a monitoring service
    • So you can see trends forming.
  2. Create session-request limitations (like described above)
    • When the limits are reached, use the Cloudflare Firewall API to block the IP; you can do that in the Worker directly. Since the Firewall runs before your Worker does, you won’t be billed for blocked IPs.
  3. Create a quick kill-switch to enable “I’m under attack” mode.
    • Either via a worker with a custom URL and a button or use the CF dashboard.
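The blocking step in item 2 could look something like the sketch below, assuming Cloudflare’s v4 IP Access Rules endpoint. The zone ID, API token, and helper name are placeholders, not anything from this thread.

```javascript
// Build the request that creates a zone-level "block" IP Access Rule
// via Cloudflare's v4 API. zoneId and apiToken are placeholders.
function blockIpRequest(zoneId, apiToken, ip) {
  return {
    url: `https://api.cloudflare.com/client/v4/zones/${zoneId}/firewall/access_rules/rules`,
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        mode: "block",
        configuration: { target: "ip", value: ip },
        notes: "Auto-blocked by rate-limiting Worker",
      }),
    },
  };
}

// Inside the Worker, once a caller trips the limit, you might do:
//   const { url, init } = blockIpRequest(ZONE_ID, API_TOKEN, offenderIp);
//   event.waitUntil(fetch(url, init)); // don't block the response on it
```

Running the API call through `waitUntil` keeps the blocking side effect off the critical path, so legitimate requests aren’t slowed down while an offender gets written into the Firewall.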

In general, if you’re that prone to DDoS attacks, I’d suggest not building on serverless infrastructure at all and going with dedicated server providers like OVH (which have DDoS protection too), because the volume of requests on serverless will eat your wallet fast unless you’ve prepared for it, as described above.

1 Like

That’s an amazing summary! Will definitely look into it! Thank you :slight_smile:
I’m not aware of defense mechanisms for a REST API endpoint, since it will always be public. For a website, I see it works directly with Cloudflare, but there is no info on how to protect a public REST endpoint, which is what bothers me and why I looked into whether serverless has a solution.

As soon as you go beyond single-server usage, you need an API gateway and a load balancer in any case. That’s why both Cloudflare and Amazon spend so much effort on these services alone: they are the entry point to any service. They are hard to set up securely, and hard to manage and update properly. Sure, there are tutorials that will put up a gateway and load balancer in 5 minutes, but that’s a single machine, not a cluster of servers or even a datacenter (such as Cloudflare’s scale). Adding to that, Cloudflare does all of this on the datacenter edge, for far less cost than Amazon.

I’ve managed servers for more than a decade and am migrating away from all of them in favor of serverless. Clients think I’m crazy; there’s money in running servers. Yes and no: the cost of hiring engineers and sysadmins to manage it all (sysops) eats most of the profit from running a few racks of servers (and those costs keep rising due to the lack of sysadmins!).

I’d rather invest the money into scaling my apps and hiring more programmers to build even more apps. This makes more sense to me, not scaling hardware which is better handled by professionals like Cloudflare or Amazon (with many other alternatives).

TL;DR: Servers look cheap, but that’s not the full picture. I’d choose serverless any day; it’s worth it, but build cost management into it from the start.

Totally makes sense.
We chose serverless because we don’t have the resources to manage servers the traditional way. But one thing that’s really scary is how to stop an attack that could leave us deep in debt when we wake up the next morning :expressionless: (as GCF currently has no billing limits on scaling). On this front, our only option is to shield our REST API endpoints to avoid that nightmare.

As per your answer, I see that Cloudflare Workers with the IP-block API might work better than my initial idea of Kong as an API gateway (which would again need server management once we scale up, totally defeating the purpose of using serverless for the core REST API service). Hope we’re on the same page.

1 Like

If you’re running a CF Worker in front of your current REST API, then it makes sense to add the rate-limiting counter to the origin system (there might be middleware you can plug in) and block rate-limit offenders via the CF Firewall API.
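As a rough illustration of that origin-side counter, here is a hypothetical Express-style middleware. The `onOffender` callback is a stub where a call to the CF Firewall API could go; the limit and window values are made up.

```javascript
// Express-style middleware factory: counts requests per client IP and,
// once `limit` is exceeded within `windowMs`, rejects with 429 and
// reports the offender (e.g. to the CF Firewall API) exactly once.
function rateLimiter({ limit = 100, windowMs = 60_000, onOffender = () => {} } = {}) {
  const hits = new Map();
  return function (req, res, next) {
    // Behind Cloudflare, CF-Connecting-IP holds the real client address.
    const ip = req.headers["cf-connecting-ip"] || req.ip;
    const now = Date.now();
    let entry = hits.get(ip);
    if (!entry || now - entry.start >= windowMs) {
      entry = { start: now, n: 0, reported: false };
      hits.set(ip, entry);
    }
    entry.n += 1;
    if (entry.n > limit) {
      if (!entry.reported) {
        entry.reported = true;
        onOffender(ip); // e.g. queue a CF Firewall block for this IP here
      }
      res.statusCode = 429;
      return res.end("Too Many Requests");
    }
    next();
  };
}
```

Usage would be the usual `app.use(rateLimiter({ limit: 300, onOffender: blockViaFirewall }))`, where `blockViaFirewall` is whatever wrapper you write around the Firewall API.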

I looked at Kong too, before CF Workers existed. Unfortunately it’s not cheap to license when you need more than a single instance. OVH’s load balancer/gateway is much cheaper and fully managed.

1 Like