How to block WordPress search results?

This worked, thank you! In the meantime I also upgraded to the Pro plan.
What should I do next? This is only a temporary solution, since it may block something else as well.
Also, this is the third attack in 24 hours against different targets on my website (all over HTTP(S)), so I expect more.

(Also, I’m rate-limited in how often I can reply on the forum.)

Now take a closer look at the search queries that have been blocked. It should show you the query string that wasn’t being blocked by our “s=” attempt, or…hopefully…something else they all have in common.


My bad. Thanks.
I think I see it now, and apparently this was my mistake.
I initially blocked this query string: /?s

While, on double-checking now, you told me to block ?s

Indeed, the query strings blocked with the / URI now all start with ?s and not with /?s

Can you confirm that this small missed detail allowed them to bypass the firewall?

The query string should be: s=
The question mark itself just means that everything after it is the query string.
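To illustrate (a sketch in Cloudflare’s firewall-rules expression language; the query-string field holds only the part after the ?, so patterns like /?s or ?s can never match it, while s= can):

```
http.request.uri.query contains "s="
```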


Hello, this has been going on for 12 hours now. Will it ever stop? Also, in the firewall’s top events by path, the most-hit path is /. How is that possible, when every request is a URI query-string search?

Here’s a more thorough guide on handling DDoS attacks:


I read the guide, but it’s impossible to work through without going insane… I have 160 million events, more than 60 million IPs from all over the world, and an enormous number of user agents.

The only recurring pattern seems to be this:

  - They target the / path.
  - They mainly use HTTP/1.1 or HTTP/1.0.
  - They use the GET method.

Now, if I make a rule that blocks all three, won’t I block legitimate users as well?

(By the way, the attack has now been running for 15 hours straight.)

The number of requests doesn’t matter at all. It’s extremely odd that the entropy of the attack is high enough for the data to appear completely random.
Can you show screenshots of your analytics, in particular the firewall section?


Hello, you can check here; I took 12 screenshots: Imgur: The magic of the Internet
If you need other screenshots, please tell me what you need. Thank you.

@Sandro so I can block them just fine, thanks

I’d consider the following:

  1. Block the first ASN, or challenge it.
  2. Challenge anything to path / with a threat score of 8 or higher.
  3. Challenge GET requests to path / with a threat score of 4 or higher.
  4. Challenge or block HTTP/1.0 and HTTP/1.1 requests.
  5. Challenge or block non-TLS requests. I’d set TLS 1.2 as the minimum; many botnets cannot negotiate TLS 1.2 or higher.
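The list above could be sketched as firewall-rule expressions roughly like the following (field names are from Cloudflare’s documented expression language; AS64496 is a placeholder, so substitute the ASN you actually see in your analytics, and note that a minimum TLS version is usually enforced under the SSL/TLS settings rather than in a firewall rule):

```
(ip.geoip.asnum eq 64496)

(http.request.uri.path eq "/" and cf.threat_score ge 8)

(http.request.method eq "GET" and http.request.uri.path eq "/" and cf.threat_score ge 4)

(http.request.version in {"HTTP/1.0" "HTTP/1.1"})

(not ssl)
```

Each expression would go in its own rule with the Block, JS Challenge, or CAPTCHA action as appropriate.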

Can you show us the most common user agents? The five most common aren’t enough in this case.


Actually, I addressed that part in the article.

That far I wouldn’t go, though.


From my experience, most botnets/attacks use HTTP/1.0 or 1.1, and given that all modern browsers support HTTP/2, I’d say it’s fine, especially since a major chunk of the requests are 1.1.

Hi, sadly with my plan I can apparently only view 15 user agents at once. But I can exclude them from the list, and then the other major user agents appear.
I extracted the top 60 user agents; I hope they are enough: Imgur: The magic of the Internet

1.0, definitely. 1.1 would be a bit riskier. Yes, the site is on Cloudflare and most likely has HTTP/2 support, and mainstream browsers do support it, but I’d be careful about assuming it will necessarily be used. If anything, use a JavaScript challenge.


I agree that blocking is a bit too much; I should have said challenge.

@jeansureau98 Given that the user agents are far too distributed, we’ll discard that approach for now.
Please try a JavaScript challenge on HTTP/1.0 and 1.1 and let us know if that helps; if not, we’ll escalate the JS challenge to a CAPTCHA.

@jeansureau98, you might also want to check out [FirewallTip] User agent pt. 2 - It's Mozillaaaaa.

And Search results for '[FirewallTip] #tutorials' - Cloudflare Community in general.


So I tried to set up the rules.
Where you used the term “challenge”, I only used a JS challenge.
I also blocked non-SSL requests, but I couldn’t set a minimum of TLS 1.2; maybe it’s not on my plan, I don’t know.

Anyway, they were still accessing my site and running the searches, so I guess it doesn’t work, or only works partially.
Then I moved HTTP/1.0 and 1.1 into a separate rule and set the CAPTCHA challenge. They could still access the site and run searches.

EDIT: Sorry, never mind. I just noticed I had challenged HTTP/1.2 instead of 1.1. After fixing it, it looks like it’s working. Let’s see. Are normal users getting challenged as well? Because every legitimate user is on 1.1, as far as I can see in the log.
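For reference, the version field matches exact strings, which is why selecting “HTTP/1.2” instead of “HTTP/1.1” silently missed the traffic (again a sketch assuming Cloudflare’s expression syntax):

```
http.request.version in {"HTTP/1.0" "HTTP/1.1"}
```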

Bad idea.


It looks like everything is normal now. Thank you very much for the settings!

@Sandro Yes, soon after that I stopped doing that, thanks.
