Regarding this: it does reset the count for that rule. I haven't been able to confirm whether it resets the counts for all rules, but I don't believe it does; my understanding is that each rule keeps its own counter. A newly created rule starts counting fresh, and modifying an existing rate limiting rule resets its count as well.
What I can tell you is that when we have faced similar traffic patterns, we identified the path structure [if one exists] and installed the limits on subsequent pages: a block rule on page 1 followed by a challenge rule on page 2 (see the sketch below). Normal users have no clue page 1 has a rule, and on page 2 they may get challenged. The result is minimal interruption to real users, while bots/malicious traffic gets blocked at some point. The only downside is the bandwidth consumed serving page 1. Again, this is only valid when a logical path exists, such as a login sequence, and the block occurs at the page specified (useful against brute force attacks).
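For concreteness, here's a rough sketch of what those two staged rules could look like via Cloudflare's legacy Rate Limiting API. The zone ID, token, URLs, thresholds, and timeouts are all placeholders, and the field names are from my memory of that API, so verify everything against the current docs before relying on it:

```python
# Sketch only: creates two staged rate-limit rules via Cloudflare's legacy
# Rate Limiting API. Zone ID, token, URLs, and thresholds are placeholders.
import requests

ZONE_ID = "YOUR_ZONE_ID"      # placeholder
API_TOKEN = "YOUR_API_TOKEN"  # placeholder
HEADERS = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

def create_rate_limit(rule: dict) -> dict:
    """POST a rate-limit rule to the zone and return the API response."""
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/rate_limits",
        headers=HEADERS,
        json=rule,
    )
    resp.raise_for_status()
    return resp.json()

# Page 1 (e.g. the login form): block outright once the threshold trips.
# Real users never hit the threshold, so they never notice this rule.
page1_rule = {
    "description": "Block heavy hitters on step 1 of the login flow",
    "match": {"request": {"url": "example.com/login", "methods": ["GET"]}},
    "threshold": 30,   # requests per period before the action fires
    "period": 60,      # seconds
    "action": {"mode": "ban", "timeout": 600},
}

# Page 2 (e.g. the credential POST): challenge instead of block, so a real
# user who somehow trips it can still get through.
page2_rule = {
    "description": "Challenge rapid POSTs on step 2 of the login flow",
    "match": {"request": {"url": "example.com/login/verify", "methods": ["POST"]}},
    "threshold": 10,
    "period": 60,
    "action": {"mode": "challenge"},
}

for rule in (page1_rule, page2_rule):
    print(create_rate_limit(rule))
```

The point of the two actions is the one above: the page-1 rule does the heavy blocking invisibly, and the page-2 challenge is the safety net for anything that slips through.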
Another method is to challenge all traffic via a different feature (Security Level) and then set a really long Challenge Passage period. That way you effectively screen out bots with the JS challenge with minimal UX impact, although I understand not wanting to challenge people at all if you can avoid it.
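If it helps, a minimal sketch of flipping those two knobs through the zone-settings API. Again, the zone ID and token are placeholders, and the setting names (security_level, challenge_ttl) and valid values are what I recall, so double-check them in the docs:

```python
# Sketch only: raises the zone's Security Level and stretches the
# Challenge Passage window via the Cloudflare zone-settings API.
import requests

ZONE_ID = "YOUR_ZONE_ID"      # placeholder
API_TOKEN = "YOUR_API_TOKEN"  # placeholder
HEADERS = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}
BASE = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/settings"

def patch_setting(name: str, value) -> dict:
    """PATCH a single zone setting and return the API response."""
    resp = requests.patch(f"{BASE}/{name}", headers=HEADERS, json={"value": value})
    resp.raise_for_status()
    return resp.json()

# Raise the Security Level so more visitors get the JS challenge up front.
print(patch_setting("security_level", "high"))

# Stretch the Challenge Passage TTL (seconds) so a visitor who passes once
# isn't re-challenged for a long time; 2592000 is 30 days.
print(patch_setting("challenge_ttl", 2592000))
```

The long TTL is what keeps the UX impact minimal: a legitimate visitor solves the challenge once and then browses untouched for the whole passage window.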
With that in place, you'd only have to worry about Rate Limiting for bots that can solve captchas, which exist but are rare.
I don't have historical data on whether layered rules (multiple rules matching the same request) work, but from my experience I don't believe they do. My guess is that only one rule triggers, the one with the lower request threshold, but that's only a guess. Maybe a CF SE knows the actual behavior and can share? I'd love to know too!