Memcached: an In-Memory, Low-Latency Distributed Storage for Dynamically Changing Content


Why does Cloudflare not provide a distributed, Memcached-style in-memory low-latency storage in combination with Workers, to serve dynamic content of, say, 128 kB from inside the Workers?

From my knowledge, companies like Google use a distributed low-latency key/value Memcached solution to serve fast-changing dynamic content around the globe.

Many websites have a big chunk of static content that can be written/stored in Workers/KV, plus a small but very important piece of dynamically changing data that needs to be processed centrally and pushed to the cloud.

For this, a distributed low-latency key/value Memcached solution in combination with Workers would be a perfect fit.

Pushing dynamic content to KV is bad, as it is not low latency.
It takes around 30 seconds until it is synced to all PoPs. That is too long; 1 second at most would be acceptable.

Also, not that much data needs to be updated regularly.
In my case it would be 128 kB of data.

Having the possibility to use Workers in combination with a distributed in-memory key/value Memcached solution, to update dynamically changing data of up to 128 kB, would be really helpful for speeding up websites.


If Memcached is already used by Cloudflare, then only parallel writes need to be implemented for synced in-memory updates from inside the Workers.
Reads from the Workers are local and don't need to be global; only the writing is done globally, in parallel, in the background to the specific PoPs.
That way, dynamically changing content is synchronized quickly around the globe and can be served locally from the edge very fast!
This should be easy to implement.
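To make the idea concrete, here is a minimal sketch of such a parallel write fan-out. Everything here is hypothetical: the endpoint list and the `send` transport are stand-ins, not a real Cloudflare API.

```typescript
// Hypothetical sketch: fan a single key/value write out to many PoPs in
// parallel, while reads stay local. The transport is injected via `send`
// so this stays self-contained; none of this is a real Cloudflare API.
type Sender = (popEndpoint: string, key: string, value: string) => Promise<void>;

async function replicateWrite(
  popEndpoints: string[],
  key: string,
  value: string,
  send: Sender,
): Promise<{ ok: number; failed: number }> {
  // Fire all writes at once; Promise.allSettled waits for every attempt
  // instead of aborting on the first failure.
  const results = await Promise.allSettled(
    popEndpoints.map((ep) => send(ep, key, value)),
  );
  const ok = results.filter((r) => r.status === "fulfilled").length;
  return { ok, failed: results.length - ok };
}
```

Because the writes run concurrently, the wall-clock cost of one update is roughly a single round trip to the slowest region, not the sum of all round trips.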

My experience with Memcached is that it is ephemeral and expensive, as it sits in RAM, and quite volatile.
That said, Memcached might be the solution Cloudflare already uses for the edge cache, as it is not an issue if the cache is lost.

Now, Cloudflare could use Tokio (tokio.rs) for Workers KV and Durable Objects, or maybe they are already using it, as it is a natural choice; it just takes time to offer a stable LTS solution.


CF is designed so that individual PoPs have no dependencies on other PoPs or on long-haul fiber links between PoPs/DCs. That is obviously to decrease inter-PoP traffic. If you compare Amazon to CF: Amazon has only FOUR, that's 4!!!, DCs for the whole USA, while CF has 36-40 DCs for the USA alone, excluding Canada. Edge means edge. TTFB would never be better than 300 ms or 500 ms per request if every HTTP request had to travel long-haul to a per-country CF PoP to make all requests atomic. Your monthly fee won't pay for each JSON request to be multiplied by the CF cloud to 20 or 100 DCs for every last client transaction. CF is supposed to cut inter-city traffic through caching, not generate 100x more inter-city traffic than having one Nginx box facing the web.


A former Cloudflare employee has said here before that they might offer such a low-latency store in the future, but I haven't really heard anything about it since.


Hi Bulk88,

Thanks for your reply.

My understanding of the Cloudflare network is that it has several central KV storage locations.

At the moment, pushing dynamic content to KV takes about 30 seconds or more to be in sync and available worldwide.

What is needed is the possibility to push dynamically changing content and have it synced and available in 1 second or less.

Technically this is easy to do, and Cloudflare is in a position to do it very easily.

My understanding is that the KV database, which is very similar to Memcached's distributed in-memory storage, already has a few central servers from which all other servers are synced.
That way, Cloudflare is able to do fine-grained caching of its services based on requests per region.
As we know, not everything is stored locally in each edge node.
Most data is stored only on a few central servers around the world.

To be able to deliver ultra-fast, low-latency websites with Cloudflare, we need to be able to fetch fast-changing dynamic content from KV without waiting 30 seconds or more for it to become available after the push.

To make this happen, all that is needed is an endpoint to which an API call can be made from the origin server (best option) or from a Worker (less good option, as dynamic data is mostly available on the origin server).
This Cloudflare endpoint would then make a few parallel requests to the central KV/Memcached servers around the world to store, within a few milliseconds, the small data chunk that changes every second and needs to be displayed on the page, making it available worldwide in less than 1 second.
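A rough sketch of what such a push endpoint's logic could look like, assuming the 128 kB budget mentioned above (the region list, `RegionWriter`, and status codes are all illustrative assumptions, not an existing Cloudflare API):

```typescript
// Hypothetical push-endpoint logic: accept a small payload, reject anything
// over the 128 KB budget, then fan it out in parallel to a handful of
// central servers via an injected writer. Not a real Cloudflare API.
const MAX_BYTES = 128 * 1024;

type RegionWriter = (region: string, key: string, body: string) => Promise<void>;

async function handlePush(
  key: string,
  body: string,
  regions: string[],
  write: RegionWriter,
): Promise<{ status: number; replicated: number }> {
  // Reject oversized payloads before doing any network work.
  if (new TextEncoder().encode(body).length > MAX_BYTES) {
    return { status: 413, replicated: 0 }; // 413 Payload Too Large
  }
  const results = await Promise.allSettled(
    regions.map((r) => write(r, key, body)),
  );
  const replicated = results.filter((r) => r.status === "fulfilled").length;
  // Treat the push as accepted if at least one region took the write;
  // since the data is non-atomic, stragglers can catch up in the background.
  return { status: replicated > 0 ? 200 : 502, replicated };
}
```

The "accepted if at least one region succeeded" choice mirrors the point made below that this data does not need atomic, all-or-nothing storage.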

Let's say, as an example, we have a home page that needs to show the latest logged-in users, or the latest forex prices and forex data, all of which is dynamic and only available on the origin server.

This data does not need to be stored atomically.

Actually, we don't need atomic storage for the majority, if not all, of the data that needs to be low latency and available worldwide.

Such a requirement would defy physics and natural law!

For such a web page to display fast-changing dynamic content with low latency, you would, for example, make a call every second from the origin server to the Cloudflare endpoint over the API and send the small chunk of dynamic data to be synced worldwide in less than a second.

Upon success, the endpoint itself would then make 5 or 10 parallel requests to the few central servers around the world (North America, South America, Europe, Asia, Australia) and send the data to these central servers, which would then make the dynamic data available worldwide as low-latency data close to the users, so users in San Francisco would no longer need to make requests to origin servers in Europe or Asia but would already have the data very close.

You would have the static HTML/JS code for the page inside the Worker, and the dynamic data stored in memory on the few central KV servers.
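The static/dynamic split described above could look roughly like this in a Worker. This is only a sketch: `loadDynamic` stands in for whatever replicated store would serve the reads, and the `{{DATA}}` placeholder is an assumption for illustration.

```typescript
// Hypothetical sketch: the static HTML shell ships with the Worker, and only
// the small dynamic chunk is read from the replicated low-latency store.
// `loadDynamic` stands in for a KV/Memcached read at the local PoP.
async function renderPage(
  staticShell: string, // e.g. "<html>...{{DATA}}...</html>", bundled with the Worker
  loadDynamic: (key: string) => Promise<string>,
  key: string,
): Promise<string> {
  const dynamic = await loadDynamic(key);
  // Inject the dynamic chunk into the placeholder in the static shell.
  return staticShell.replace("{{DATA}}", dynamic);
}
```

With this split, only the small dynamic chunk ever needs the fast worldwide sync; the heavy static shell never leaves the edge.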

The point is making dynamic data sync worldwide in less than 1 second and be available with low latency.
The current 30-second delay with KV is bad!!!

Technically this is more than doable, and I really don't understand why it is not already solved.

I forgot to post these tweets earlier:

It seems like the CEO of Cloudflare is hinting that Workers will get new database features in the future.


@arunesh90

Thank you very much, arunesh.
I just found this recent blog post about a KV technology.

I really hope there will be a possibility to push small chunks of data from the origin server
over the API, with Cloudflare doing the sync around the world to the central servers in 1 second or less, so that HTTP GET requests to the origin server no longer slow down the website!

The worldwide sync of in-memory data pushed to the KV database really needs to take less than 1 second, not 30 seconds or more like now.


Heh, the 30-second rule was a "dying SSDs" problem, the "click of death", not saturated inter-city/ocean fiber links. It seems CF's billing is priced around SSD write cycles, not routers/switches/links.

Exactly what I thought: each KV write is a DoS amplification attack on all 155 PoPs. CF has to "debounce" a zone's KV requests at multiple layers. I'm waiting for CF to announce that they now use InfiniBand, have eliminated all SSDs, boot every blade server over the network, and that their whole network is one Nginx Linux process, triple-executed and compared, with shared memory between all 155 PoPs :crazy_face:

