How can I guarantee that a request to my database only comes from a legitimate Cloudflare worker?

Let’s imagine my Cloudflare worker needs to make a query/request to an internal RDBMS (database) to fetch something or get a response.

Obviously, we don’t want to expose our RDS (database) instance to the outside web. It’s currently behind a private VPC and connecting to it is highly restricted to inside the VPC.

If we created a Postgres database user which only had limited, read-only rights, we could consider allowing connections from the public web to our database.

How do we guarantee that a request arriving at a specific endpoint (an attempt to connect to our database) only comes from a legitimate Cloudflare datacenter or worker? Is there a way of restricting access to our database by only allowing through specific IPs and/or specifically signed requests that we can guarantee came from our Cloudflare worker JS?

I guess a bit of context helps. We run a workflow app called Tallyfy which has a REST API. We would like to compute permissions at the edge using a worker, but our database holds who can do what and where. Hence, our worker must connect to our database to compute an answer to an incoming request which has these properties:

  1. A bearer token to a session in our API - which we will validate first.
  2. An HTTP verb, e.g. GET
  3. A request URL - which would be to a method, etc. and would contain an objectID.

As a flow, here are our thoughts - we wanted to see if an entire “initial check” for permissions could be done via a worker, at the edge. It would require the following for each HTTP request to our API - in order, I believe:

  1. A session key to be checked to know that a given session = this user, in this org.
  2. A way to parse the request URL and then check (either back at origin or via the KV store) whether this user can do this method on this object. Going back to the origin would obviously remove the performance gains here, but how else do we cache everything from Postgres in Cloudflare’s KV store while avoiding a stale cache?
  3. Respond by “letting through” the request if it’s allowed, or denying it with a response.

I noticed that Cloudflare itself faced this problem of syncing a “master” Postgres database with a fast key-value store, covered here:

Can that sync between Postgres and Cloudflare KV store be made “a service”?

Does limiting IP range solve the legitimate problem?


I think it might @adaptive - yes.

It leads me to my next (and unrelated) question - can workers be set to run on a scheduled basis, or is it only through inbound HTTP requests?

We’d want to constantly monitor that IP range and keep our checks/array in sync.
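Cloudflare publishes its IP ranges (e.g. at cloudflare.com/ips-v4), so the origin-side allow-list check reduces to matching the client IP against a list of CIDR blocks, refreshed periodically by a scheduled job. A minimal IPv4-only sketch, assuming the range list has already been fetched into an array:

```javascript
// Sketch: match an IPv4 address against CIDR ranges such as those
// published at https://www.cloudflare.com/ips-v4 (IPv4 only; a real
// implementation would also handle the IPv6 list).
function ipToInt(ip) {
  // "173.245.48.5" -> unsigned 32-bit integer
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bits] = cidr.split("/");
  // Build the network mask; guard /0 because JS shifts are modulo 32.
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

function fromCloudflare(ip, ranges) {
  return ranges.some((cidr) => inCidr(ip, cidr));
}
```

The refresh itself could be a cron job at the origin that re-downloads the list, or a Worker Cron Trigger, depending on what fits your stack.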

You can use KV as your session storage.
For example, I write to KV whenever the DB changes.
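The write-through itself can go against the Workers KV REST API (`PUT /accounts/{account}/storage/kv/namespaces/{namespace}/values/{key}`), called from whatever consumes your DB change events (e.g. a Postgres LISTEN/NOTIFY listener). A sketch that just builds the request - the account/namespace IDs and `perm:{userId}` key layout are placeholders:

```javascript
// Sketch: push a changed record into Workers KV via the Cloudflare API.
// Returns the URL and fetch options; the caller does `fetch(url, options)`.
function kvPutRequest(accountId, namespaceId, key, value, apiToken) {
  return {
    url:
      `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
      `/storage/kv/namespaces/${namespaceId}/values/${encodeURIComponent(key)}`,
    options: {
      method: "PUT",
      headers: { Authorization: `Bearer ${apiToken}` },
      body: JSON.stringify(value),
    },
  };
}

// Usage (from a hypothetical DB-change listener):
// const { url, options } = kvPutRequest(acct, ns, `perm:${userId}`, perms, token);
// await fetch(url, options);
```

Note that KV is eventually consistent across datacenters, so edge reads may lag a write by a short period - acceptable for permission caches in many cases, but worth deciding deliberately.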

@adaptive note follow-up question under a different thread!

Is it possible to "spoof" http requests to pretend they're coming from the IP range of Cloudflare workers?

The FAQ lists a few suggestions under the “Security” section.