This worked, thank you! In the meantime I also upgraded to the Pro plan.
Now, what should I do next? This is only a temporary solution, and it may block something legitimate.
Also, this is the third attack in 24 hours against different targets on my website (all over HTTP(S)), so I expect more to come.
Now take a closer look at the search queries that have been blocked. It should show you the query string that wasn’t being blocked by our “s=” attempt, or…hopefully…something else they all have in common.
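For reference, the earlier attempt was a rule along these lines (a sketch in Cloudflare's firewall expression language; the exact match should come from whatever your logged queries actually have in common):

```
Expression: http.request.uri.query contains "s="
Action: Block
```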
Hello, this has been going on for 12 hours now. Will it ever stop? Also, in the firewall's top events by path, the most-hit path is /. How is this possible, since every request is a query-string search?
I read the guide, but it's impossible to do this without going insane… I have 160 million events, more than 60 million IPs from all over the world, and a huge variety of user agents.
The only recurring pattern seems to be this:
They target the / path
They mainly use HTTP/1.1 or HTTP/1.0
They use the GET method
Now, if I make a rule that blocks all three (something like the sketch below), won't I block legitimate users as well?
(By the way, the attack has now been going for 15 hours straight.)
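For concreteness, this is roughly the rule I have in mind, written out for the firewall expression editor (my own sketch, not deployed yet):

```
Expression: (http.request.uri.path eq "/")
            and (http.request.version in {"HTTP/1.0" "HTTP/1.1"})
            and (http.request.method eq "GET")
Action: Block
```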
The number of requests doesn't matter at all; what's extremely odd is that the attack's entropy is high enough for the data to appear completely random.
Can you show screenshots of your analytics, in particular the firewall section?
In my experience, most botnets/attacks will use 1.0 or 1.1, and given that all modern browsers support HTTP/2, I'd say it's fine, especially since a major chunk of the requests are 1.1.
Hi, sadly with my plan I can apparently only view 15 user agents at once. But I can exclude those from the list so that the other major user agents appear.
I extracted the top 60 user agents, hope they're enough: (Imgur link)
1.0, definitely. 1.1 would be a bit risky. Yes, the site is on Cloudflare, which most likely has HTTP/2 support, and mainstream browsers do support it, but I'd be careful about assuming it will necessarily be used. If anything, use a JavaScript challenge.
I agree that blocking is a bit too much; I should have said challenge.
@jeansureau98 Given that the user agents are far too distributed, we'll set that signal aside for now.
Please try a JavaScript challenge on HTTP versions 1.0 and 1.1 and let us know if that helps; if not, we'll escalate the JS challenge to a CAPTCHA. Something along the lines of the sketch below.
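A sketch of that rule (the JS Challenge action is picked from the dropdown in the rule builder; the expression itself is standard firewall-rules syntax):

```
Expression: http.request.version in {"HTTP/1.0" "HTTP/1.1"}
Action: JS Challenge
```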
So I tried setting up the rules.
Where you said "challenge" I used only a JS challenge.
I also blocked non-SSL requests, but I couldn't set a minimum TLS version of 1.2; maybe it's not on my plan, I don't know.
Anyway, they were still accessing my site and running searches, so I guess it doesn't work, or only works partially.
Then I moved HTTP/1.0 and 1.1 into a separate rule and set the CAPTCHA challenge. They can still access the site and run searches.
EDIT: Sorry, never mind. I just noticed I had challenged HTTP/1.2 instead of HTTP/1.1. After fixing it, it looks like it's working. Let's see. Are normal users getting challenged as well? Because every legitimate user is on HTTP/1.1, as far as I can see in the logs. The corrected rules are sketched below.
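For anyone reading later, this is roughly the working setup (reconstructed from my rule list; the CAPTCHA action is labeled something like "Challenge (CAPTCHA)" in the dashboard):

```
Rule 1
  Expression: not ssl
  Action: Block

Rule 2
  Expression: http.request.version in {"HTTP/1.0" "HTTP/1.1"}
  Action: Challenge (CAPTCHA)
```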