Combining Rate Limiting rules

I have rate limiting enabled and the two following rules:

  • A - “subdomain.domain.com/*” - 135 requests per minute = JS challenge
  • B - “*” - 1,000 requests in 10 minutes = block for 1 hour

Both rules have the method set to ANY and the protocol set to include both HTTP and HTTPS.
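
For reference, here is roughly what Rule A looks like if created through the API. This is only a sketch against the legacy rate-limiting endpoint (the zone ID and token are placeholders, and the field names are from the old docs, so double-check them); Rule B would be the same shape with url "*", threshold 1000, period 600, and a ban action with a 3600-second timeout:

```bash
# Sketch of Rule A via Cloudflare's legacy rate-limiting API.
# $ZONE_ID and $CF_API_TOKEN are placeholders.
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rate_limits" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{
    "match": {
      "request": {
        "url": "subdomain.domain.com/*",
        "schemes": ["HTTP", "HTTPS"],
        "methods": ["_ALL_"]
      }
    },
    "threshold": 135,
    "period": 60,
    "action": { "mode": "js_challenge" }
  }'
```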

However, in Analytics I see that Rule A has 2.75K hits in a 6-minute window, while Rule B has 0.

  • Do multiple rules not fire?
  • Does “*” not include subdomains?

I just tried this and it works:

I used “*” for the match and a block after 5 hits in a minute.
I then used curl to hit www.example.com/1, then /2, then /3, and so on. It blocked after /5; a sketch of the test is below.
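
The test loop was essentially this (www.example.com standing in for the real hostname):

```bash
# Hit incrementing paths until the rate limit trips.
for i in $(seq 1 8); do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://www.example.com/$i")
  echo "request $i -> HTTP $code"
done
# With "block after 5 hits in a minute", /1 through /5 come back 200
# and /6 onward comes back 429 (Cloudflare's rate-limit response).
```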

Then I had to move to a Paid Plan to test combined rules. Since I can’t order my rules, I guess it runs through all of them until the attacker crosses any threshold.

  • Block sub.example.com/* for more than 5 hits per minute.
  • Block * for more than 10 hits per minute.

Hitting example.com/test got blocked on the 11th try.
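
The combined test, roughly:

```bash
# Rule A: block sub.example.com/* after 5 hits per minute
# Rule B: block *                 after 10 hits per minute
# Hit a URL that only Rule B matches:
for i in $(seq 1 12); do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://example.com/test")
  echo "request $i -> HTTP $code"
done
# Requests 1-10 return 200; request 11 returns 429,
# i.e. Rule B fires on traffic that Rule A doesn't match.
```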

Thanks for the effort.

Indeed, there’s no order in the rules, and it seems to just go through them all. Could it have anything to do with the JS Challenge resetting the count or something?

So, in your example: sub.example.com/* as a JS challenge after 5 hits.

My goal would be for your second rule (* block after 10) to also block the 11th hit on sub.example.com/test.

Since I don’t know what traffic you’re getting, I only set out to test Rule B with traffic that Rule A doesn’t apply to, and found that it works.

It sounds like you’re expecting Rule B to kick in even though Rule A has already triggered and dealt with the request. While I think your goal is to block a bot that abuses Rule A, it appears that won’t work.

Correct. I want accidental hits of the first limit to be able to solve the JS challenge, but a longer-running hit of that limit, which is probably a bot, to be blocked.

I’ll do some more trial and error… but I guess I’ll have to end up with a blocking rule only.

Regarding this: it does reset the count for that rule. I haven’t been able to find out whether it resets the counts for all rules, but I don’t believe it does. I believe each rule keeps its own count, and changes to a rule cause the reset. When you put a new rule in, it starts counting fresh as well.

Also, if you modify a rate limiting rule, it resets the count again.

What I can tell you is that when we have faced similar traffic patterns, we identified the path structure (if one exists) and installed the limits on subsequent pages. I’d have the block rule on page 1, followed by the challenge on page 2. Normal users have no clue page 1 has a rule; on page 2 they may get challenged. Minimal interruption to real users, and bots/malicious traffic gets blocked at some point. The only downside is bandwidth consumption for page 1. Again, this is valid when a logical path exists, such as a login sequence, and the block will occur at the page specified (useful against brute force attacks). A sketch is below.
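
As a hypothetical sketch (same legacy endpoint as above; /login/step1 and /login/step2 are made-up paths, and the thresholds are arbitrary, so tune them to your real traffic):

```bash
# Layered rules along a known path: hard block on page 1 of the
# sequence, JS challenge on page 2. Placeholders throughout.
api="https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rate_limits"
hdr=(-H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json")

# Page 1: block outright -- normal users never trip this.
curl -s -X POST "$api" "${hdr[@]}" --data '{
  "match": { "request": { "url": "example.com/login/step1" } },
  "threshold": 20,
  "period": 60,
  "action": { "mode": "ban", "timeout": 3600 }
}'

# Page 2: JS challenge -- real users who trip it can still get through.
curl -s -X POST "$api" "${hdr[@]}" --data '{
  "match": { "request": { "url": "example.com/login/step2" } },
  "threshold": 10,
  "period": 60,
  "action": { "mode": "js_challenge" }
}'
```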

Another method is to challenge all traffic via another feature (Security Level) and then set a really long challenge passage time period (if possible). That way you effectively screen out bots with the JS challenge with minimal UX impact, although I understand not wanting to challenge people at all, if possible.
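
Both of those knobs are zone settings; something like this should do it (again a sketch, so check the docs for the currently allowed security_level and challenge_ttl values):

```bash
# Challenge broadly via Security Level ("under_attack" challenges
# everyone; "high" is the less drastic option).
curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/security_level" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{ "value": "under_attack" }'

# Stretch the challenge passage window so real users are rarely
# re-challenged, e.g. one week (604800 seconds).
curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/challenge_ttl" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{ "value": 604800 }'
```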

You’d only have to worry about Rate Limiting for a bot that could solve captchas, which exist but are rare.

I don’t have historical data to tell me whether layered rules work, but from my experience I don’t believe they do. I believe only one gets triggered, and it’s the one requiring the fewest requests to trigger, but that’s a guess. Maybe a Cloudflare SE knows the actual functionality and can share? I’d love to know too!

Your theory that only the rule with the lowest matching limit gets triggered looks accurate.

I’ve disabled my lower-bound JS Challenge rule, and now my upper-bound block rule does get hits.

Since I had some doubts about whether the wildcard matching rule * also includes subdomains, I created an identical block rule with the match pattern *domain.com*. In this case (as all rules seemed to match), the rule getting hit seems to be the TOP one by order, which basically comes down to the last-modified one.
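
(The check itself was nothing fancy, roughly the loop below, then watching which rule’s counter moved in Analytics:)

```bash
# Fire a few requests at the apex and at a subdomain, then compare
# the per-rule hit counts in the Rate Limiting analytics.
for host in domain.com sub.domain.com; do
  for i in $(seq 1 5); do
    curl -s -o /dev/null -w "$host request $i -> HTTP %{http_code}\n" "https://$host/test"
  done
done
```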


Regarding your other tips: my application has a login embedded on every page (and gets brute force attempts basically at random on every page), so there’s no logical path. But I might be able to see if the home page is a popular one!

My Security Level is set to Medium; if we’re still being hit a lot, I’ll see if setting it to High makes a difference!

Thanks all for your help :smiley:

Well, one idea might be to have the “login” link go to a page where you enter the username; you can then send them to a page where they enter their password. That would build in the structure and give you a chance to layer rules. Just a thought.

I always thought that order mattered (and that it’s affected by last-modified), but I didn’t have data to confirm it. Interesting test. If you run into more brute-force-type issues, let me know; I’ve dealt with them for customers of ours. We’ve seen over 100M malicious attempts per day at times. Layering the various controls can go a long way.
