Firewall rules and bots


I’m using the free tier of Cloudflare to cache a Joomla site that serves roughly 300,000+ requests and 10,000+ unique visitors per day.

I’ve just noticed there are quite a large number of blocked attempts to access some static files on our site, such as our RSS/RDF feeds with files ending in .rdf, .rss and .xml.

I’ve tried adding firewall rules to allow these requests, but not all of them seem to be getting through. All of the conditions are part of a single firewall rule, joined with the “OR” function, such as:

URI Path contains /static-files/file1.rss
URI Path contains /file2.rdf
URI Path contains /static-files/file3.xml
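
Expressed in the rule’s expression editor, the three OR’d conditions above look roughly like this (the file names are just placeholders from my list):

```
(http.request.uri.path contains "/static-files/file1.rss") or
(http.request.uri.path contains "/file2.rdf") or
(http.request.uri.path contains "/static-files/file3.xml")
```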

We’re only using it for caching images. For the site we’re using for caching, I have the following page rules:
Disable Security, Browser Cache TTL: 4 hours, Security Level: Essentially Off, Cache Level: Cache Everything, Edge Cache TTL: a month, Disable Performance

For the site itself, we have the following:
Auto Minify: Off, Cache Level: Bypass, Disable Apps, Disable Performance

I’m reluctant to disable security altogether, but I don’t fully understand why the rules I’ve added don’t seem to be working reliably.

  • Does Cloudflare still block potentially malicious bots despite a firewall rule that allows access to static files? The accesses that are still being blocked are of the form:
    05 Sep, 2020 18:07:03
    Russian Federation
    Browser integrity check

  • I’ve also added an “allow” firewall rule that “contains” simply “/” as the “URI Path”. Should that not effectively disable the firewall? I’ve noticed after adding this rule that it is now allowing access to URLs including query strings, like “?option=com_content&view=article&id=158282”

  • Do the “contains” firewall rules allow wildcards? Or do they match only literal substrings? In other words, if I just allow “/articles”, does that include all of the specific articles within the /articles/ tree?
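
My current understanding (which may be wrong, hence the question) is that “contains” is plain substring matching with no wildcard support; a quick Python sketch of what I mean, as an illustrative stand-in rather than Cloudflare’s actual matching engine:

```python
def uri_path_contains(path: str, needle: str) -> bool:
    # Stand-in for the Firewall Rules "contains" operator:
    # plain substring matching, with no wildcard expansion.
    return needle in path

# Allowing "/articles" would therefore match every path under /articles/ ...
assert uri_path_contains("/articles/index", "/articles")
# ... but also any path that merely embeds the substring elsewhere:
assert uri_path_contains("/old/articles-archive", "/articles")
# A literal "*" in the value is matched as a literal character, not a wildcard:
assert not uri_path_contains("/articles/index", "/articles/*")
```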


There are many security layers within Cloudflare, and allowing some pattern in a Firewall Rule may or may not disable other features. You should familiarize yourself with Firewall Rules and how they work. That should clear up most of your doubts.

You mentioned “browser integrity check”. That is a setting available in the Firewall > Settings section of the dashboard, and it applies to the whole zone (domain and subdomains). You can toggle it off there and re-enable it for specific parts of the zone with page rules.
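
For instance, a page rule re-enabling BIC for one section of a zone could look something like this (the domain and path here are hypothetical):

```
URL pattern: example.com/admin/*
Setting:     Browser Integrity Check: On
```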

Thanks very much for your help. I’ve actually read quite a bit of that already.

For example, the documentation for http.request.uri.path says “The path of the request. Example value: /articles/index”, but it wasn’t entirely clear to me why requests continue to be blocked with “browser integrity check” despite the rule being in place.

Many of the requests that continue to be blocked come from libraries like “LWP::Simple/6.00 libwww-perl/6.05” - is it possible these are being interpreted as malicious when they’re not? Perhaps it’s just a poor implementation of a program using the library?

I’m hesitant to disable the browser integrity check, particularly since there are hundreds of thousands of these requests per day, unless I know for sure they are malicious.

Can you explain the precedence of these rules with the browser integrity check option?

Have a look at the chart on the page below; it gives an idea of what happens when, though new features have since been added that are not in that chart (Bot Fight Mode, for instance).

The top level would be IP Access Rules (under Firewall > Tools), where you can bypass requests by IP/ASN, and, to a certain extent, user agent. These are complete bypasses and should be enabled only for IP addresses you control.

Firewall Rules have recently gained an action that allows you to bypass individual services further down the chain, but if you have more than one firewall rule, you need to adjust accordingly.

Hi, thanks again for your help. That’s not exactly what I meant with my question.

I’m trying to understand why a request to a particular static file is still being blocked by the “browser integrity check” even when I’ve created an explicit rule that allows access to it.

Do the default “browser integrity check” rules override any manual rules otherwise allowing access to a file?

Which has a higher precedence - manual rules or the “browser integrity check” feature, if enabled?

I’d imagine the specific check that led to the access being blocked is not shared for security reasons, but what are the chances that a block due to “browser integrity check” is a false positive?

When I see accesses to a static file on our site being blocked due to “browser integrity check” and the client is “LWP::Simple/6.00 libwww-perl/6.05”, could it be a bad client implementation or is it most definitely a malicious attempt using (or purporting to use) that library?

Here is the specific JSON associated with the blocked attempt:

{
  "action": "drop",
  "clientASNDescription": "LINODE-AP Linode, LLC",
  "clientAsn": "63949",
  "clientCountryName": "US",
  "clientIP": "",
  "clientRequestHTTPHost": "",
  "clientRequestHTTPMethodName": "GET",
  "clientRequestHTTPProtocol": "HTTP/1.1",
  "clientRequestPath": "/linuxsecurity_articles.rdf",
  "clientRequestQuery": "",
  "datetime": "2020-09-07T17:32:35Z",
  "rayName": "5cf22b804e8cfdb1",
  "ruleId": "bic",
  "source": "bic",
  "userAgent": "LWP::Simple/6.00 libwww-perl/6.05",
  "matchIndex": 0,
  "metadata": [],
  "sampleInterval": 1
}

Hi, my previous reply was delayed in being posted by the site admin, so I hoped I could ask again.

Do the default “browser integrity check” rules override any manual rules otherwise allowing access to a file?

As far as I understand them (and that is not very far), they apply at different stages.

In a Firewall Rule you can set the action to Bypass (not the same as Allow) and pick the BIC service. This should ensure that any request matching the rule’s logic will not be subject to BIC.

The JSON you posted shows the request was blocked (“drop”) by BIC. The "source": "bic" field shows that the service that did the blocking was BIC itself, not a firewall rule. So, yes, BIC is doing its job. If you want to disable it for a path, you should use the Bypass action of a Firewall Rule, not Allow.
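
A sketch of such a bypass rule, using the .rdf/.rss/.xml extensions mentioned earlier in the thread (adjust the paths to match your feeds):

```
Action:   Bypass
Products: Browser Integrity Check
Expression:
  (http.request.uri.path contains ".rdf") or
  (http.request.uri.path contains ".rss") or
  (http.request.uri.path contains ".xml")
```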

This topic was automatically closed after 30 days. New replies are no longer allowed.