This worked, thank you! In the meantime I also upgraded to the Pro plan.
Now what should I do next? This is only a temporary solution, since it may end up blocking something else.
Also, this is the third attack in 24 hours against different targets on my website (all over HTTP(S)), so I expect more to come.
Now take a closer look at the search queries that have been blocked. It should show you the query string that wasn’t being blocked by our “s=” attempt, or…hopefully…something else they all have in common.
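For reference, the "s=" attempt mentioned above was presumably a firewall rule with an expression along these lines (a sketch in Cloudflare's rules language; the exact field and match value are assumptions, since the original rule isn't shown in the thread):

```
http.request.uri.query contains "s="
```

paired with whatever action was chosen at the time (Block, or a challenge).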
The number of requests doesn't matter at all. It's extremely odd that the entropy of the attack is high enough for the data to appear completely random.
Can you post screenshots of your analytics, in particular the firewall section?
Hi, sadly with my plan I can apparently only check 15 user agents at once. But I can exclude them from the list, and then the other major user agents appear.
I extracted the top 60 user agents, hope they are enough: Imgur: The magic of the Internet
I agree that blocking is a bit much; I should have mentioned a challenge instead.
@jeansureau98 Given that the user agents are way too distributed, we will discard that for now.
So I tried to set the rules.
Where you used the term "challenge" I only used the JS challenge.
I also blocked non-SSL requests, but I couldn't set a minimum TLS version of 1.2; maybe it's not available on my plan, I don't know.
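For anyone trying the same thing, the two rules described here could be sketched in Cloudflare's expression language like this (the `ssl` and `http.request.uri.query` fields are documented; the exact pairing with actions is my assumption).

Rule 1 (action: Block), matching requests that did not arrive over HTTPS:

```
not ssl
```

Rule 2 (action: JS Challenge), matching the suspicious search queries:

```
http.request.uri.query contains "s="
```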
Anyway, they were still accessing my site and running the searches, so I guess it doesn't work, or only works partially.
Then I moved HTTP/1.0 and HTTP/1.1 into a separate rule and set the CAPTCHA challenge. They can still access the site and run searches.
EDIT: Sorry, never mind. I just noticed I challenged HTTP 1.2 instead of 1.1. After fixing it, it now looks like it's working. Let's see. Are normal users getting challenged as well? Because every legit user is on HTTP/1.1, as far as I can see in the logs.
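For anyone following along, the corrected rule would look roughly like this (a sketch; `http.request.version` is a documented Cloudflare field, but the exact value strings and the set membership form shown are assumptions to be checked against your dashboard's dropdown):

```
http.request.version in {"HTTP/1.0" "HTTP/1.1"}
```

with the action set to the CAPTCHA/interactive challenge. And yes, as written this would challenge every legitimate HTTP/1.1 visitor too, which is exactly the trade-off being observed here.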