I am planning an unveiling event that will see many users (up to 10K) hit a certain page at the exact same time. Users will sit on the page watching a countdown timer to an exact time. When that time arrives, the page will refresh automatically (via a browser request). We will purge the Cloudflare cache at that exact moment, because the origin server will begin serving different HTML.
I have a question about how Cloudflare’s caching will behave in this scenario.
It will look a bit like this:
- First Request (Cloudflare calls the origin server as resource will not be in cache)
- Up to 10K requests immediately following (likely before the origin request has responded)
- The call to the origin server responds and the resource is established in cache
My question is about what happens to the 10K requests in the second step. Will Cloudflare queue them up (knowing that a cache fill is in progress), or will it pass the requests through to the origin server?
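For reference, the purge we plan to fire at T-zero is the standard Purge Everything API call. A rough sketch (the zone ID and API token below are placeholders):

```javascript
// Sketch of the scheduled Purge Everything call. ZONE_ID and API_TOKEN
// are placeholders; the endpoint and request body are Cloudflare's
// standard cache-purge API shape.
const ZONE_ID = "YOUR_ZONE_ID";     // placeholder
const API_TOKEN = "YOUR_API_TOKEN"; // placeholder

// Build the request separately so the URL/body can be checked before sending.
function buildPurgeRequest(zoneId, token) {
  return {
    url: `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ purge_everything: true }),
    },
  };
}

// At launch time:
//   const { url, options } = buildPurgeRequest(ZONE_ID, API_TOKEN);
//   await fetch(url, options);
```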
Some things to keep in mind:
- Cloudflare doesn’t cache HTML by default.
- Every server in every data center has its own cache.
- Doing a Purge Everything will probably take 5 seconds (or a bit more) across all data centers.
How will it behave? I bet a bunch will get through to the origin server, because it will take a few moments for the cache to populate. Who will get through?
- The first few to hit each specific edge server, because the first request doesn't guarantee the cache is loaded yet.
- Everyone who hits it at the same instant, because the cache still hasn't been populated at that moment.
So you'll have to do the math. I wouldn't be surprised if that amounts to 100 hits from the datacenter(s) with the most visitors for your site.
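The math looks roughly like this, as a worst-case sketch that assumes no request coalescing at a colo (every number in the example is an assumption, not measured data):

```javascript
// Worst-case back-of-envelope: requests that land at a colo while its
// origin fetch is still in flight can all miss cache and go to origin.
function originHitsEstimate(totalRequests, colos, originLatencyMs, burstWindowMs) {
  // Average arrival rate per colo, in requests per millisecond.
  const perColoRate = totalRequests / colos / burstWindowMs;
  // Misses per colo ~= requests arriving during one origin round trip.
  const missesPerColo = Math.ceil(perColoRate * originLatencyMs);
  return missesPerColo * colos;
}

// e.g. 10K requests spread over a 2 s burst, 20 colos, 300 ms origin latency:
originHitsEstimate(10000, 20, 300, 2000); // → 1500
```

The real number will be much lower if colos coalesce concurrent requests for the same object, which is why the estimate is so sensitive to the assumptions.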
If you want another option, use Workers/Wrangler to host the HTML for that URL, but you’d need to activate that route at the exact time you want it to begin serving.
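One way to avoid having to flip the route at exactly the right second is to bake the time check into the Worker itself. A minimal sketch (the launch time and HTML bodies are placeholders):

```javascript
// Minimal Worker sketch for serving the page entirely from the edge.
// With this in place, no request for this URL reaches the origin at all.
const LAUNCH_AT = Date.parse("2030-01-01T00:00:00Z"); // placeholder

const COUNTDOWN_HTML = "<!doctype html><title>Soon</title><p>Counting down...</p>";
const LIVE_HTML = "<!doctype html><title>Live</title><p>We are live!</p>";

// Pure helper so the switchover logic is easy to test in isolation.
function pickHtml(nowMs) {
  return nowMs >= LAUNCH_AT ? LIVE_HTML : COUNTDOWN_HTML;
}

// In a real Worker (module syntax) this object would be `export default`-ed.
const worker = {
  async fetch(request) {
    return new Response(pickHtml(Date.now()), {
      headers: { "content-type": "text/html;charset=UTF-8" },
    });
  },
};
```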
The answer is yes, but it depends (sort of). Cloudflare by default handles concurrent requests and concurrent streaming of content (see https://blog.cloudflare.com/introducing-concurrent-streaming-acceleration/ for an example).
Cache is maintained on a per-colo basis, however (or, if you are an Enterprise customer using tiered caching, by the tiered caching system). So you would have n colos each making a request for the object, even if y simultaneous requests were made at a given colo.
I was thinking one could do something with Workers as well to get zero hits to origin: write a time check in the code that makes the new content 'active' after a certain time, with a new cache key.
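A sketch of that time-check-plus-cache-key idea (the switch time and the `epoch` naming are assumptions, not anything Cloudflare-specific): before the switchover every colo caches under one key, and after it under another, so the old HTML is never served again even though it was never purged.

```javascript
// Derive a cache-key "epoch" from the clock so content flips at SWITCH_AT
// without any purge. SWITCH_AT and the query-parameter name are placeholders.
const SWITCH_AT = Date.parse("2030-01-01T00:00:00Z"); // placeholder

function cacheKeyFor(url, nowMs) {
  const epoch = nowMs >= SWITCH_AT ? "post" : "pre";
  const sep = url.includes("?") ? "&" : "?";
  return `${url}${sep}epoch=${epoch}`;
}

// Inside a Worker's fetch handler, one could then use the Workers Cache API:
//   const key = new Request(cacheKeyFor(request.url, Date.now()));
//   let response = await caches.default.match(key);
//   if (!response) {
//     response = await fetch(request);
//     await caches.default.put(key, response.clone());
//   }
//   return response;
```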
This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.