I would like to ask about the working mechanism of the Cloudflare cache.

I am testing a static website consisting of a single HTML file, with a Page Rule that enables Cache Everything. In my tests, some nodes never hit the cache and always return MISS.
Do only some nodes cache the file? I tested repeatedly on itdog.cn (a global GET test website): no matter how many times the nodes in a certain region are accessed, they never cache the file. Isn't the cache synchronized to all nodes in the world?
I'd also like to understand the CDN caching mechanism in general. When a user's request reaches a CDN node and there is no cached copy, does that node fetch and cache the file itself, or, once one node has cached the file, is it pushed to all CDN servers?

By default, HTML files are not cached…

You can adjust behaviour using cache rules.
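As an illustration, a Cache Rule that makes HTML eligible for caching might match on a filter expression like the one below. The field name follows Cloudflare's Rules language, but treat this as a sketch and check the developer docs for the exact syntax your dashboard expects:

```
(http.request.uri.path.extension eq "html")
```

You would pair that expression with the "Eligible for cache" action and, optionally, an Edge TTL override.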

The caching topology can also be configured…

Basically look in the developer docs, lots of info there.

With the Cache Everything rule, HTML can be cached. In my tests most nodes do cache it, but I don't understand why some nodes never do.

There are various different ways that the cache works.

In a traditional setup each edge location maintains its own cache, and on a cache MISS, that edge location will make cache fill requests to the Origin. Each edge location consists of several servers, and an individual server can use the cache on other servers in the same location (POP) to check for a cached object.
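A rough sketch of that behaviour (the class and names here are illustrative, not Cloudflare's implementation): each POP keeps its own private cache, so the same URL can be a HIT in one location and a MISS in another until that location has filled from the Origin.

```python
# Each POP maintains an independent cache; a MISS triggers a cache-fill
# request to the Origin for that POP only.
class Pop:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # url -> body, private to this POP

    def get(self, url, origin):
        if url in self.cache:
            return "HIT", self.cache[url]
        body = origin[url]       # cache-fill request to the Origin
        self.cache[url] = body
        return "MISS", body

origin = {"/index.html": "<html>hello</html>"}
frankfurt, tokyo = Pop("FRA"), Pop("NRT")

print(frankfurt.get("/index.html", origin)[0])  # MISS (first request at FRA)
print(frankfurt.get("/index.html", origin)[0])  # HIT  (FRA now has a copy)
print(tokyo.get("/index.html", origin)[0])      # MISS (NRT's cache is separate)
```

This is exactly why a test tool that probes many regions can see HIT in one place and MISS in another for the same URL.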

In a Tiered setup, rather than going back to the Origin, there are higher tier POPs that edge servers funnel their requests through. In a simple example, all requests from POPs in Europe could look to the Frankfurt POP for a cached object.

Tiered Cache can either configure the topology itself, or you can manage what the topology looks like.
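Extending the same toy model (again illustrative, not Cloudflare's code), a lower-tier POP that misses asks its upper-tier POP before going anywhere near the Origin; only the upper tier fills from the Origin:

```python
# Hypothetical sketch of tiered caching: lower-tier POPs funnel their
# misses through an upper-tier POP instead of each hitting the Origin.
class Pop:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.cache = name, parent, {}

    def get(self, url, origin):
        if url in self.cache:
            return self.name, "HIT", self.cache[url]
        if self.parent is not None:
            _, _, body = self.parent.get(url, origin)  # ask the upper tier
        else:
            body = origin[url]                         # upper tier fills from Origin
        self.cache[url] = body
        return self.name, "MISS", body

origin = {"/index.html": "<html>hello</html>"}
frankfurt = Pop("FRA")                 # upper tier
paris = Pop("CDG", parent=frankfurt)   # lower tiers in Europe
madrid = Pop("MAD", parent=frankfurt)

print(paris.get("/index.html", origin)[1])   # MISS at CDG; FRA fills from Origin
print(madrid.get("/index.html", origin)[1])  # MISS at MAD, but served from FRA's cache
```

The second lower-tier request is still a MISS at its own POP, but the Origin only sees one fill request in total.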

But no matter what the cache topology, objects that are not popular will be evicted faster than very popular objects.
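An LRU (least recently used) policy is the classic way to get that behaviour; here is a minimal sketch, assuming a simple LRU rather than whatever eviction policy Cloudflare actually runs:

```python
# Minimal LRU cache: the least recently requested object is evicted
# first, so unpopular objects disappear from the cache sooner.
from collections import OrderedDict

class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # mark as recently used
            return self.items[key]
        return None

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LruCache(capacity=2)
cache.put("/popular.css", "body{}")
cache.put("/rare.html", "<html/>")
cache.get("/popular.css")              # the popular object is touched again
cache.put("/new.js", "log()")          # capacity exceeded: /rare.html is evicted
print(cache.get("/rare.html"))         # None
print(cache.get("/popular.css"))       # body{}
```

Scaled up, this is why a rarely requested HTML file can be evicted from a quiet POP between your test probes.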

If you require persistent caching of objects, you could look at using Cache Reserve, where Cloudflare will use R2 to persist your objects on a very long term basis.

That would not be practical for any CDN design. I have a large volume of content that is very popular locally, and in certain anglophone locations around the world. Those locations get a very high cache hit rate for all requests (99%+ for static assets). Synchronising all of my cached content to Ulaanbaatar would be a pointless waste of resources just to serve one or two requests for an object.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.