Nginx Ingress Whitelisting Doesn't Work With Proxy Enabled

We are trying to secure some of our public cloud (GCP) applications hosted on GKE by using nginx to whitelist our VPN range, so that users can only reach our internal sites while on the VPN. This works when the Cloudflare proxy is disabled, but with the proxy enabled the ingress whitelist check sees Cloudflare’s address instead of the client’s remote IP, so it can’t tell whether the request is on the whitelist and legitimate users get blocked. Is there any way around this? Is there any reason to keep the proxy if we’re using a whitelist?
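For reference, the whitelisting in question is the standard ingress-nginx annotation; a minimal sketch (the names, hostname, and VPN range below are placeholders, not our real values):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app                     # hypothetical name
  annotations:
    # Only clients whose source IP falls in this range are let through
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"
spec:
  rules:
    - host: internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80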

Cloudflare adds a header with the original client IP address; your local nginx can use that to act on the real client request.
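For a plain nginx sitting in front of an app (outside the ingress controller), that header is typically consumed with the realip module. A sketch for the http {} or server {} context, assuming the VPN range used elsewhere in this thread; the Cloudflare ranges are abbreviated here and the full published lists should be used:

# Trust Cloudflare's edge addresses as proxies (abbreviated; use the
# full IPv4/IPv6 lists from https://www.cloudflare.com/ips/)
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 2400:cb00::/32;
# ... remaining Cloudflare ranges ...

# Restore the original client IP from the header Cloudflare adds
real_ip_header CF-Connecting-IP;

# Whitelist checks now see the real client IP
allow 10.0.0.0/8;   # example VPN range
deny all;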

There are some benefits to keeping Cloudflare in front, primarily DDoS protection and possibly caching (many internal sites still have a lot of fixed/static resources).

You could also consider Argo’s smart routing (a paid product) to reduce latency for your end users; this is especially significant if your users are geographically distributed and not otherwise near your main hosting.

Argo is probably difficult to use with Kubernetes deployments. However, we got whitelisting working behind the proxy by adding the Cloudflare CIDR blocks to our controller config, along with some other settings:

controller:
  config:
    proxy-real-ip-cidr: "173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32,10.0.0.0/8"
    # use-proxy-protocol: "true"
    use-forwarded-headers: "true"
    forwarded-for-header: "CF-Connecting-IP"
    # server-snippet: |
    #   real_ip_header CF-Connecting-IP;
  publishService:
    enabled: true
  addHeaders:
    use-forwarded-headers: "true"
    forwarded-for-header: "X-Forwarded-For"
  service:
    externalTrafficPolicy: "Local"
defaultBackend:
  service:
    type: "LoadBalancer"
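If this is the community ingress-nginx Helm chart (an assumption; the release and namespace names below are just examples), values like the above are applied with something along these lines:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace -f values.yaml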

If you want Cloudflare Access to work properly, you need to use CF-Connecting-IP exclusively; you can’t use X-Forwarded-For, because Cloudflare Workers can override it (or at least they could, last I checked).

You also have to authenticate the requests per the Access documentation. IP address whitelisting is NOT sufficient. Cloudflare customers can make arbitrary web requests from Cloudflare IP addresses via Workers.

We are testing our new Cloudflare Access setup and wondering why we’re seeing our own personal IP addresses in the nginx ingress logs when hitting the site. When I hit the site, I authenticate via our Azure AD connection, which works, and I’m then forwarded on to the site, where I get a 403. I get the 403 because our nginx ingress whitelist includes all the Cloudflare IP CIDR blocks, and (according to the logs) I appear to be coming from my personal IP address rather than Cloudflare’s range, so it blocks me. Why is this happening? Shouldn’t I be tunneled through Cloudflare?

You’re proxied by Cloudflare, but Cloudflare does pass your IP address through in the headers. Most servers should be configured to restore this IP address for accurate logging and security.

If you’re using Nginx’s realip module and have it properly configured, most parts of Nginx will see your real IP address rather than Cloudflare’s. You can use $realip_remote_addr to get Cloudflare’s IP address.
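A quick way to sanity-check the realip setup is to log both values side by side. A sketch for the http {} context (the format name and log path here are arbitrary):

# With realip configured, $remote_addr holds the restored client address
# and $realip_remote_addr holds the peer that actually connected
# (i.e. Cloudflare's edge).
log_format cf_debug '$remote_addr (connection from $realip_remote_addr) "$request" $status';
access_log /var/log/nginx/access.log cf_debug;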

Note: IP address whitelisting is NOT sufficient to ensure security with Cloudflare Access. You need to authenticate the requests according to the documentation or use Argo Tunnel. Cloudflare Workers are capable of making mostly-arbitrary requests from Cloudflare IP addresses, meaning that they can bypass IP address whitelisting.
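As a very rough illustration at the nginx layer: Cloudflare Access forwards its JWT in the Cf-Access-Jwt-Assertion header, so a server snippet can at least refuse requests that don’t carry it. This is only a coarse first layer, not the verification the docs call for (validating the token’s signature and audience against your team’s certs endpoint); a minimal sketch:

# Coarse check only: requires the Access JWT header to be present.
# It does NOT validate the token; proper verification of the signature,
# audience and expiry (e.g. against
# https://<your-team>.cloudflareaccess.com/cdn-cgi/access/certs)
# is still required, per the Access documentation.
if ($http_cf_access_jwt_assertion = "") {
    return 403;
}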

NGINX should be configured to only respond to that hostname. Workers aren’t allowed to override the hostname in the request.

Ideally, yes. However, the documentation does make it clear that you should also be authenticating the requests. Security in layers. 🙂

All our apps are in Kubernetes (GKE); I’m not sure Argo Tunnel is feasible here unless it can somehow be set up in the nginx controller. The nginx ingresses are configured to only respond to the hostname. I don’t know how we can use Cloudflare Access here…

As @sdayman said, that should generally be sufficient, but the documentation for Access does encourage you to actually authenticate the requests. It provides examples for doing so; those examples can be adapted to Lua if you want to perform the authentication without leaving Nginx.

On second thought, I need to make sure I’m right about that configuration.

Any ideas on how we can test to confirm this is the case? As far as I know, Kubernetes (with nginx or otherwise) needs the hostname just to know how to route your request to the right pod, so I’m not sure how anyone could get around that, since entering only the IP isn’t sufficient.

A command like this will set the hostname and go directly to the IP:
curl -Ik --resolve example.com:443:12.34.56.78 https://example.com/

Would that IP be the true IP or the CF IP? i.e.

curl -Ik --resolve kayak-dev.bdtrust.org:443:104.31.95.40 https://kayak-dev.bdtrust.org

That 104 address is what the hostname resolves to via the Cloudflare proxy.

The true IP. It should block you if it’s properly set up.

Could anyone actually get the true IP somehow? I thought the proxy prevented that.

Another question: what about API calls to the hostname? Would these be affected by Cloudflare Access, since they’re automated and called by other apps (Google Cloud Scheduler, for example)?

Where are these API calls coming from? You can add an Access Policy to “Bypass” for certain IP addresses.

Neither. You’ll need to perform it from an IP address that will be allowed through your firewall, which, in your case, looks like it might need to be within 10.0.0.0/8.

This won’t actually work as expected. It should always succeed, if I’m not mistaken.

Isn’t that the opposite of what needs to be tested? I think that will include the correct Host header, which is undesirable here, if I’m not mistaken.

It’s always possible to get the underlying IP address. It can be made very difficult, but you should never assume that your IP address is private, and you shouldn’t rely on that for security.

For example, if your web servers send automated email, depending on your email provider, your servers’ IP addresses may be included in the headers of the emails.

In your Nginx configuration, ensure that any listen directive with the default_server flag is within a catch-all server block that does nothing except reject the request. Your “real” server block shouldn’t contain default_server. For example:

server {
	listen 443 ssl default_server;
	listen [::]:443 ssl default_server;
	listen 80 default_server;
	listen [::]:80 default_server;

	server_name _;

	return 404;
}

server {
	listen 443 ssl;
	listen [::]:443 ssl;

	server_name example.com www.example.com;  # This server block will only be used when the Host header equals example.com or www.example.com

	# Normal config here
}