Cache API requests with previously seen HTTP Idempotency-Key header value

I want Cloudflare to enhance Page Rules to allow serving cached responses to duplicate HTTP POST and PATCH requests that carry a previously seen Idempotency-Key HTTP header value.

This would help developers make their webapps more robust and also save Cloudflare lots of money by greatly reducing traffic between Cloudflare’s network and origin servers.

I frequently use Cloudflare as a proxy and load balancer for my custom-built webapps that provide an HTTP API to clients.

Anyone implementing a robust HTTP API should be incorporating idempotency keys to protect against accidental duplicate API calls causing unintended consequences. Such duplicates often result from momentary internet outages, especially when clients are connecting from mobile networks with intermittent reception.

The widely adopted de facto standard is for the API request to include an HTTP header named “Idempotency-Key” with a randomly generated UUID string, e.g.:

Idempotency-Key: "8e03978e-40d5-43e8-bc93-6894a57f9324"

It is described in the IETF draft “The Idempotency-Key HTTP Header Field”.

If a duplicate HTTP POST or PATCH request is issued with the same idempotency key and payload, it should receive the stored result of the previously completed operation, whether that was a success or an error.

It would be incredibly convenient for Cloudflare to provide this functionality, namely by caching the response to the first API request with a given idempotency key. Cloudflare’s idempotency key cache would expire after a configurable time period, e.g. 24 hours.

I imagine this would be best achieved via an enhancement to the Cloudflare Page Rules, which currently give you the ability to configure caching behaviour for specific URLs based on various criteria.

Without such an enhancement, I think this can only be implemented within Cloudflare’s infrastructure by deploying Cloudflare Workers with custom code. I think the need for this feature is fundamental enough to deserve a simpler mode of deployment.
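For anyone wanting the behaviour today, the core of such a Worker might look roughly like the sketch below. Everything here is an assumption for illustration: `store` stands in for Workers KV or the Cache API, `originFetch` stands in for the Worker's pass-through `fetch(request)`, and a real deployment would persist entries with a TTL (e.g. 24 hours) and also compare payloads.

```javascript
// Hedged sketch of Worker-style idempotency caching. `store` and
// `originFetch` are hypothetical stand-ins for Workers KV / Cache API
// and the origin fetch; this is not Cloudflare's implementation.
async function idempotentFetch(request, store, originFetch) {
  const key = request.headers.get("Idempotency-Key");
  // Only POST/PATCH requests carrying a key participate in replay caching.
  if (!key || !["POST", "PATCH"].includes(request.method)) {
    return originFetch(request);
  }
  const cached = store.get(key);
  if (cached) {
    // Replay the stored response without contacting the origin.
    return new Response(cached.body, { status: cached.status });
  }
  const response = await originFetch(request);
  const body = await response.clone().text();
  // A real Worker would write this with an expiration (e.g. 24h).
  store.set(key, { status: response.status, body });
  return response;
}
```

Writing, deploying, and operating this per-zone is exactly the kind of boilerplate a Page Rules option would eliminate.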

Otherwise, people will keep handling idempotency keys themselves, using their own databases or KV stores, which is not only a hassle but also adds performance overhead and another point of failure.