How to block WordPress search results?

I addressed that part actually in the article.

That far I wouldn’t go, though.


From my experience, most botnets/attacks will use 1.0 or 1.1, and given that all modern browsers support HTTP/2, I’d say it’s fine, especially since a major chunk of the requests are 1.1.

Hi, sadly with my plan I can apparently only check 15 user agents at once. But I can exclude them from the list and the other major user agents appear.
I extracted these 60 top user agents, hope they are enough: Imgur: The magic of the Internet

1.0, definitely. 1.1 would be a bit risky. Yes, the site is on Cloudflare, most likely has HTTP/2 support, and mainstream browsers do support it, but I’d be careful about assuming it will necessarily be used. If anything, a JavaScript challenge.


I can agree that blocking is a bit too much; I should have mentioned a challenge instead.

@jeansureau98 Given that the user agents are far too distributed, we will discard that approach for now.
Please try a JavaScript challenge on HTTP versions 1.0 and 1.1 and let us know if that helps; if not, we will escalate the JS challenge to a CAPTCHA.
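As a sketch, a rule like that could be written in Cloudflare’s firewall rule expression language as follows (the `http.request.version` field name is assumed from Cloudflare’s Rules language; the action would be set to “JS Challenge” in the dashboard):

```
(http.request.version eq "HTTP/1.0") or (http.request.version eq "HTTP/1.1")
```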

@jeansureau98, you might also want to check out [FirewallTip] User agent pt. 2 - It's Mozillaaaaa.

And Search results for '[FirewallTip] #tutorials' - Cloudflare Community in general.


So I tried to set up the rules.
Where you used the term “challenge” I only used a JS challenge.
I also blocked non-SSL requests, but I couldn’t set a minimum of TLS 1.2; maybe it’s not available on my plan, I don’t know.

Anyway, they were still accessing my site and performing the searches, so I guess it doesn’t work, or only works partially.
Then I moved the HTTP 1.0 and 1.1 matching to a separate rule and set the CAPTCHA challenge. They can still access the site and perform searches.

EDIT: Sorry, never mind. I just noticed I challenged HTTP 1.2 instead of 1.1. Now, after fixing it, it looks like it’s working. Let’s see. Are normal users getting challenged as well? Because every legit user is using 1.1, as far as I can see in the log.

Bad idea.


It looks like everything is normal now. Thank you very much for the settings!

@Sandro yes, soon after that I stopped doing that, thanks


Hello. A couple of questions:

  1. My CSR (challenge solve rate) for the above rules increased recently and is now 3.79%. Should I keep them enabled?
  2. It looks like the DDoS is still going on. At certain hours I get peaks of traffic and challenges that shouldn’t be there. I think not all the bot requests are being challenged, but the good thing is the server doesn’t go down. Should I worry about this?
  1. That means that legitimate traffic is being challenged/blocked; those numbers are certainly higher than I’d like them to be. Mind showing the exact rule that is triggering it? I guess it’s due to HTTP 1.1; we can work on the rule to add some extra patterns that affect legitimate traffic less.
  2. It’s normal that some requests get through during an HTTP DDoS attack; if your server usage is fair, then I wouldn’t worry too much.

Regarding your first issue, I think it’s exactly in scenarios like this that a proposal I made not so long ago would be handy; however, it didn’t get much attention, unfortunately.

This is the rule that has the high CSR… everything matching it gets a JS challenge.

Can you remove the ORs and make each of the conditions a rule by itself? That way we will be able to see which one is the most effective and tune the ones with the most legitimate user hits.
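To illustrate the idea (the actual conditions depend on the rule in question, so these expressions are only placeholders in Cloudflare’s Rules language): a combined rule such as

```
(http.request.version eq "HTTP/1.0") or (http.request.version eq "HTTP/1.1") or (not ssl)
```

would become three separate rules, each with the same JS Challenge action:

```
http.request.version eq "HTTP/1.0"
```

```
http.request.version eq "HTTP/1.1"
```

```
not ssl
```

Each rule then reports its own CSR in the firewall analytics, which is what lets you see which condition is catching legitimate users.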

I did it.
So far the separate HTTP 1.1 challenge rule has a CSR of 5.71%,
while all the others have issued 0 challenges.

That’s good; feel free to merge the rules with 0% CSR back into one.
In the meantime, change the HTTP 1.1 rule to:

  1. Request method is GET
  2. Threat score is greater than or equal to 5
  3. URI equals /

Note that if they find out, they might change the request URI or method, but until then, this should reduce the CSR greatly.
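Combined into a single expression, the refined rule above might look like this sketch (field names such as `cf.threat_score` are assumed from Cloudflare’s Rules language; the action stays JS Challenge):

```
(http.request.version eq "HTTP/1.1")
and (http.request.method eq "GET")
and (cf.threat_score ge 5)
and (http.request.uri.path eq "/")
```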


I set it up as you said, but now the CSR of the HTTP 1.1 rule has grown to 10%

In that case, consider increasing the threat score threshold or matching more of the patterns that show up in your attack.

The threat score values that I’m giving you are wild guesses; Cloudflare is very opaque about this value, and I guess that depending on the traffic you have, certain threat scores work while others don’t.
Try increasing the value to 10.

Also, reading the stats, I noticed that my own server is trying to run the WP cron but is being blocked; the majority of blocked requests are those from my own server… maybe the attack is over?

Oh, in that case, create a firewall rule that ALLOWS your server IP.
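Such a rule could be sketched as follows, with the action set to Allow and placed above the challenge rules so it is evaluated first (203.0.113.10 is a placeholder; substitute your actual server IP):

```
ip.src eq 203.0.113.10
```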


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.