Ability to set page rules by headers (e.g. user-agent, referrer, etc.)

Is this something you are interested in? Please provide your use case and details on this topic, and vote it up if it is important.

I would be interested in this. My scenario: all visitors to one of my websites send a unique header because they are coming from a set-top box. The ability to allow only my legitimate set-top-box traffic and disallow all other traffic via page rules would be great, and it would be another security feature for my website!

Yes, that would come in handy.

I’d also like to be able to do it based on TLS version, so I can send people to an “upgrade your browser” page if they are using TLS 1.0.
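For reference, Cloudflare Workers expose the negotiated TLS version on `request.cf.tlsVersion`, so a version-based redirect can be sketched roughly like this (the upgrade-page URL is just a placeholder):

```javascript
// Decide whether a request should be sent to an "upgrade your browser"
// page based on the negotiated TLS version. The version strings follow
// the "TLSv1.x" format that Cloudflare Workers report in request.cf.tlsVersion.
const OUTDATED_TLS = new Set(["TLSv1", "TLSv1.0", "TLSv1.1"]);

function needsBrowserUpgrade(tlsVersion) {
  return OUTDATED_TLS.has(tlsVersion);
}

// In a Worker you might wire it up roughly like this (URL is hypothetical):
// addEventListener("fetch", (event) => {
//   if (needsBrowserUpgrade(event.request.cf.tlsVersion)) {
//     event.respondWith(Response.redirect("https://example.com/upgrade-browser", 302));
//   }
// });
```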



If a particular bad user-agent matches, return a DNS error.

Is it possible?

@anon13899255 you want to return a page that looks like the DNS error? That’s actually pretty creative and funny. Not sure bots would get the humor but it could certainly mess with scraping applications. I’ll pass it along.

1 Like

I want exactly that. My assumption is that keeping them clueless is better than hinting at a specific error.

A post was split to a new topic: Country redirections

Yes, I am interested in the HTTP referer header.
Example: if (HTTP referer contains “something”) perform a 302 redirect.


Is there any update on the user-agent sniffing feature? I want to bypass the cache for an evil bot.

I want this because I am facing a real problem due to cached (HIT) responses.

I have a rule like this

# Force HTTPS and avoid a redirect chain, excluding the robots.txt path and the evil bot
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteCond %{HTTP_USER_AGENT} !evil [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
Again, don’t give the evil bot any clue; the aim is simply to return an HTTP 404 response for all of its requests:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} evil [NC]
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteRule .* - [R=404,L]
</IfModule>

Unfortunately, the above user-agent-based rules don’t work as long as I use a Cache Everything page rule.

I can bypass the cache for a few paths, but really, that’s not a complete solution.

I think user-agent sniffing needs to work on the Cloudflare page rule side; only then can I get a complete solution.

Please let me know the answer.

Thanks & Regards,

1 Like

This feature will be very useful.

I have a question: can it be used together with URL matches? For example:
URL pattern: img.example.com/*
Header pattern: referrer: example.com/

I’m thinking of allowing requests only when the referrer is from a certain domain, and otherwise redirecting to example.com/do-not-hotlink.jpg.
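As a sketch of how the hotlink rule above could look as a decision function in a Worker (the domain and placeholder image URL are taken from the example; everything else is illustrative):

```javascript
// Hotlink check: allow image requests whose Referer header comes from
// an allowed domain (or has no referrer at all, e.g. direct visits),
// and send everything else to a placeholder image.
const ALLOWED_REFERRERS = ["example.com", "www.example.com"];
const PLACEHOLDER = "https://example.com/do-not-hotlink.jpg";

// Returns the redirect URL, or null when the request should pass through.
function hotlinkRedirect(referer) {
  if (!referer) return null; // no Referer header: let it through
  let host;
  try {
    host = new URL(referer).hostname;
  } catch {
    return PLACEHOLDER; // malformed Referer: treat as hotlinking
  }
  return ALLOWED_REFERRERS.includes(host) ? null : PLACEHOLDER;
}
```

In a Worker this would run against `request.headers.get("Referer")`, responding with `Response.redirect(...)` when a non-null URL comes back.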

I agree,
and vice versa (allowing everybody except requests with a referer from some domain).

@ryan are you still considering implementing the “rules by headers”?
We are very interested in this feature.

Was this feature ever added?

This sounds like it might be great for pages that display different content based on user-agent. Anything to make the user-agent detection faster!

I would like to see this feature. Use case: externaldomain.com, which I don’t control, is redirecting to my domain. I want a visitor with a referrer of externaldomain.com to go to a specific page within my website.

If I can’t do this in the page rules, I am stuck doing it in my CMS, and if I cache all my HTML, I am stuck doing it on the front-end, which isn’t ideal.

1 Like

This would be great. I wanted to display different content for users who embed my webpage, but there was no way to do it while Cache Everything is on. So I had to use an “x-frame-options: deny” header rule, which was not what I wanted, since I wanted to serve different content instead. A page rule based on the page referrer would be awesome.

Interested in this also. Would love to be able to create page rules based on more than just the URL.
A page rule on user-agent (Google/Bing) to cache everything would be nice, to make the Google PageSpeed test happier.

A page rule on user-agent (Google/Bing) to route their traffic, and that of other crawlers/scrapers, to a specific set of servers, while allowing my live (human) traffic through to the production servers.
Google/Bing crawls account for 60% of my traffic on the sites I host. Having them on an identical platform (just not production) would greatly improve response times for real humans on my sites.

I’d like to be able to turn off caching when a specific header (or a specific header value) is present. For example, we want to provide public access to our APIs. We’d like to provide users with the same URL we use in our apps. The difference is in our apps we want to cache the response at the edge, however when we know it’s a public user, we want those requests to hit our server so we can count it towards their API request limit.
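A sketch of how that decision could be expressed, assuming (hypothetically) that the first-party apps are distinguished by a custom `X-App-Client` header — public requests without it would bypass the edge cache:

```javascript
// Decide whether to bypass the edge cache for a request. The header
// name "x-app-client" is a hypothetical marker our own apps would send;
// anything without it is treated as a public API consumer whose requests
// must reach the origin so they can be counted against the rate limit.
function shouldBypassCache(headers) {
  // Caller is expected to normalize header names to lowercase.
  return !("x-app-client" in headers);
}

// shouldBypassCache({ "x-app-client": "ios-v2" })      -> false (serve from cache)
// shouldBypassCache({ accept: "application/json" })    -> true  (go to origin)
```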

You can do this (and most of the other things in this thread) using Cloudflare Workers. https://developers.cloudflare.com/workers/
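For example, a minimal Worker along the lines of the mod_rewrite rules earlier in the thread might look like this (the “evil” user-agent pattern is from that post; the rest is a sketch, with the decision pulled into a plain function so it is easy to test):

```javascript
// Return true when the request should get a bare 404, mirroring the
// mod_rewrite rules above: the user-agent contains "evil"
// (case-insensitive) and the path is not /robots.txt.
function shouldReturn404(userAgent, path) {
  const isEvil = /evil/i.test(userAgent || "");
  return isEvil && path !== "/robots.txt";
}

// Worker wiring (sketch):
// addEventListener("fetch", (event) => {
//   const { request } = event;
//   const path = new URL(request.url).pathname;
//   if (shouldReturn404(request.headers.get("User-Agent"), path)) {
//     event.respondWith(new Response("Not Found", { status: 404 }));
//   }
//   // otherwise fall through to the default fetch/cache behaviour
// });
```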


I want to serve cached pages to bots and crawlers.


Yes, some kind of rule based on user agent (and other headers) would be awesome.

I’m currently looking into using dynamic rendering for SPAs using something like: https://prerender.io/

…basically what it does is check the browser user agent and:

  • For regular human user browsers: it just returns the regular client-side-rendered SPA
  • For bots/scrapers e.g. Google/Facebook etc: it uses a headless browser on the server to fetch and render the page as static HTML

This means you get server-side-rendering, but only when needed for SEO. Everyone else just gets the regular non-SSR SPA.

And your SPA projects don’t actually need to be built with any kind of built-in SSR support. So you don’t need Next.js/Nuxt etc, you can just build plain React/Vue projects without worrying about SSR stuff.

Being able to keep two separate cached copies of pages on Cloudflare (based on user agent) without hitting the origin server every time would be amazing. I guess doing this is quite a bit more than the main idea being discussed here. But header-based rules would be a great start to getting into doing stuff like this.
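A rough sketch of the user-agent check such a setup relies on (the bot list below is illustrative, not what prerender.io actually uses):

```javascript
// Rough bot detection by user-agent: the kind of check a dynamic-rendering
// setup performs to decide whether to serve the pre-rendered HTML or the
// regular client-side SPA. The pattern list is illustrative, not exhaustive.
const BOT_PATTERNS = [
  /googlebot/i,
  /bingbot/i,
  /facebookexternalhit/i,
  /twitterbot/i,
  /linkedinbot/i,
];

function isBot(userAgent) {
  return BOT_PATTERNS.some((re) => re.test(userAgent || ""));
}

// A Worker could then key the edge cache on this flag to keep two cached
// copies per URL -- one pre-rendered for bots, one plain SPA for humans --
// for example by appending a hypothetical marker like "?__render=bot"
// to the cache key for bot traffic.
```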