How to block WordPress search results?

Here’s a more thorough guide on handling DDoS attacks:


I read the guide, but it’s impossible to work through without going insane… I have 160 million events, more than 60 million IPs from all over the world, and a huge number of user agents.

The only recurring pattern seems to be this:

  1. They target the / path
  2. They mainly use HTTP/1.1 or HTTP/1.0
  3. They use the GET method

Now, if I make a rule that blocks these three, won’t I block legitimate users as well?
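For reference, a single rule matching all three of those traits could be sketched in Cloudflare’s firewall expression language roughly as below (field names are from the Cloudflare Rules language; this is illustrative only, since ordinary browsers fetching the homepage over HTTP/1.1 would match it too):

```
http.request.uri.path eq "/"
and http.request.method eq "GET"
and http.request.version in {"HTTP/1.0" "HTTP/1.1"}
```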

(By the way, the attack has been going for 15 hours straight.)

The number of requests doesn’t matter at all. It’s extremely odd that the entropy of the attack is high enough for the data to appear completely random.
Can you show screenshots of your analytics, in particular the Firewall section?

Hello, you can check here, I made 12 screenshots: Imgur: The magic of the Internet
If you need other screenshots, please tell me what you need. Thank you.

@Sandro so I can block them just fine, thanks

I’d consider the following:

  1. Block or challenge the first ASN.
  2. Challenge anything to path / that has a threat score equal to or greater than 8.
  3. Challenge GET requests to path / that have a threat score equal to or greater than 4.
  4. Challenge or block HTTP/1.0 and HTTP/1.1 requests.
  5. Challenge or block non-SSL/TLS requests. I’d set TLS 1.2 as the minimum valid version; many botnets fail to complete a TLS 1.2+ handshake.
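As a rough sketch, suggestions 2–5 could be expressed in Cloudflare’s firewall expression language along the lines below. The `#` lines are annotations, not valid expression syntax, and the availability of fields like `cf.threat_score` or a TLS-version condition may depend on the plan, so treat this as an assumption to verify:

```
# 2. Challenge anything to / with a threat score of 8 or more
http.request.uri.path eq "/" and cf.threat_score ge 8

# 3. Challenge GET requests to / with a threat score of 4 or more
http.request.uri.path eq "/" and http.request.method eq "GET" and cf.threat_score ge 4

# 4. Challenge or block HTTP/1.0 and HTTP/1.1
http.request.version in {"HTTP/1.0" "HTTP/1.1"}

# 5. Challenge or block non-TLS requests
not ssl
```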

Can you show us the most common user agents? The 5 most common aren’t enough in this case.

I actually addressed that part in the article.

I wouldn’t go that far, though.


In my experience, most botnets/attacks will use HTTP/1.0 or 1.1, and given that all modern browsers support HTTP/2, I’d say it’s fine, especially since a major chunk of the requests are 1.1.

Hi, sadly with my plan I can apparently only check 15 user agents at once. But I can exclude them from the list and the other major user agents appear.
I extracted the top 60 user agents, hope they are enough: Imgur: The magic of the Internet

HTTP/1.0, definitely. 1.1 would be a bit risky. Yes, the site is on Cloudflare, most likely has HTTP/2 support, and mainstream browsers do support it, but I’d be careful about assuming it will necessarily be used. If anything, use a JavaScript challenge.


I can agree that blocking is a bit too much; I should have said challenge.

@jeansureau98 Given that the user agents are far too distributed, we’ll discard that approach for now.
Please try a JavaScript challenge on HTTP versions 1.0 and 1.1 and let us know if that helps; if not, we’ll escalate the JS challenge to a CAPTCHA.

@jeansureau98, you might also want to check out [FirewallTip] User agent pt. 2 - It's Mozillaaaaa.

And Search results for '[FirewallTip] #tutorials' - Cloudflare Community in general.

So I tried to set up the rules.
Where you used the term “challenge” I only used a JS challenge.
I also blocked non-SSL requests but couldn’t set a minimum of TLS 1.2; maybe it’s not available on my plan, I don’t know.

Anyway, they were still accessing my site and making searches, so I guess it doesn’t work, or only works partially.
Then I moved HTTP/1.0 and 1.1 to a separate rule and set the CAPTCHA challenge. They can still access the site and make searches.

EDIT: Sorry, never mind. I just noticed I challenged HTTP/1.2 instead of 1.1. After fixing it, it looks like it’s working. Let’s see. Are normal users getting challenged as well? Because every legit user is using 1.1, as far as I can see in the log.

Bad idea.

It looks like everything is normal now. Thank you very much for the settings!

@Sandro yes, soon after that I stopped doing that, thanks.

Hello. A couple of questions:

  1. My CSR (challenge solve rate) for the above rules increased recently and is now 3.79%. Should I keep them enabled?
  2. It looks like the DDoS is still going on. At certain hours I get peaks of traffic and challenges that shouldn’t be there. I think not all the bot requests are being challenged, but the good thing is the server doesn’t go down. Should I worry about this?
  1. That means legitimate traffic is being challenged/blocked; those numbers are certainly higher than I’d like them to be. Mind showing the exact rule that is triggering it? I’d guess it’s due to HTTP/1.1; we can work on the rule to add some extra patterns that affect less legitimate traffic.
  2. It’s normal for some requests to get through during an HTTP DDoS attack; if your server load is reasonable, I wouldn’t worry too much.

Regarding your first issue, I think it’s exactly in scenarios like this that a proposal I made not so long ago would be handy; unfortunately, it didn’t get much attention.

This is the rule with the high CSR… all get a JS challenge for this

Can you remove the ORs and make each of the conditions a rule by itself? That way we’ll be able to see which one is the most effective and tune the ones with the most legitimate user hits.
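As a hypothetical illustration of that split (the actual rule is only visible in the screenshot, so the conditions below are assumed; the `#` lines are annotations, not expression syntax):

```
# Before: one rule with ORs, reporting a single combined CSR
http.request.version eq "HTTP/1.0" or http.request.version eq "HTTP/1.1" or not ssl

# After: three separate rules with the same JS-challenge action,
# each reporting its own CSR in the firewall analytics
http.request.version eq "HTTP/1.0"
http.request.version eq "HTTP/1.1"
not ssl
```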

I did it.
So far the separate HTTP/1.1 challenge has a CSR of 5.71%, while all the others show 0 challenges issued.