Workers Sites and KV cold-key performance

Hello,

We’ve read that Workers Sites uses KV to store and serve all HTML pages and assets. In our case, our website could contain about 1 million pages, and 99% of them are not requested often enough to count as KV hot keys (7-8 ms reads), so I suppose we’ll get very poor performance (100-300 ms) for at least 99% of our pages, unless Workers Sites sets a special flag on these KV keys to force them to stay hot?
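For context, here is a minimal sketch of how a Workers Sites worker serves assets out of KV with @cloudflare/kv-asset-handler (based on the standard template); the point is that every page view is a KV read, so rarely requested pages hit cold keys:

```ts
// Minimal Workers Sites handler (service-worker syntax), based on the standard template.
// Every asset lookup goes through KV, so pages that are rarely requested hit cold keys.
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleEvent(event));
});

async function handleEvent(event: FetchEvent): Promise<Response> {
  try {
    // Maps the request URL to a key in the site's KV namespace and returns the asset.
    return await getAssetFromKV(event);
  } catch (e) {
    return new Response("Not found", { status: 404 });
  }
}
```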

Our website needs to be fast from Europe and the USA, but also from other target countries (Australia, New Zealand, Singapore, …), and performance is really important for us.

We know we can also use the cache, but the CF cache is far too limited to hold 1 million pages plus their assets.
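For reference, this is roughly the pattern we mean: layering the per-colo Cache API in front of KV reads (the PAGES binding name is just an example). Cached entries are still evicted under pressure, which is exactly the limitation with a million pages:

```ts
// Sketch: serve pages from the edge cache when possible, falling back to KV.
// Assumes a KV namespace bound as the global PAGES (hypothetical binding name).
declare const PAGES: KVNamespace;

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event));
});

async function handle(event: FetchEvent): Promise<Response> {
  const cache = caches.default;
  let response = await cache.match(event.request);
  if (!response) {
    const body = await PAGES.get(new URL(event.request.url).pathname, "text");
    if (body === null) return new Response("Not found", { status: 404 });
    response = new Response(body, {
      headers: { "Content-Type": "text/html", "Cache-Control": "public, max-age=86400" },
    });
    // Store a copy in this colo's cache without blocking the response.
    event.waitUntil(cache.put(event.request, response.clone()));
  }
  return response;
}
```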

Thanks
Regards

EDIT (I wasn’t thinking regionally enough):
Right, so you’d need an external server in each region (a VPS, to be cost-effective) and “crawl” the URLs at regular intervals. Then log the status of the crawls from the worker to a remote logging system, so you know how often you’d need to crawl the pages to keep the KV keys hot.
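Something along these lines, a per-region warming crawler you’d run on each VPS (the URL list, interval, and logging target are all placeholders):

```ts
// Sketch of a per-region "warming" crawler for a VPS (Node 18+, global fetch).
// The URL list, interval, and logging are placeholders for illustration only.
const urls = ["https://example.com/page-1", "https://example.com/page-2"];

async function warm(): Promise<void> {
  for (const url of urls) {
    const start = Date.now();
    const res = await fetch(url);
    // Ship status + latency to your remote logging system instead of console.log.
    console.log(`${url} ${res.status} ${Date.now() - start}ms`);
  }
}

// Re-crawl often enough that the corresponding KV keys stay warm.
setInterval(warm, 5 * 60 * 1000);
void warm();
```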

I would bet that in the near future Cloudflare might add a feature to always keep KV keys hot, but that would probably be more expensive.


Thanks for your feedback @thomas4, so this means Cloudflare Workers and Workers Sites offer very poor performance for hosting websites with a lot of pages unless each page already gets 2-3 requests/second to keep its key warm… bad news! :-1:

Yeah, it isn’t optimal, that’s for sure.

Still, they have drastically cut loading times over the past year, so I’m hopeful it will improve even further.

I’m hoping to see improvements in performance for the Workers KV product in 2021. I use Workers KV for storing encrypted user auth tokens, and the “cold key” issue can definitely be frustrating at times.

It would be nice if you could specify which datacenter you want to store your KV keys in. For example, if you have static assets for your website, you would probably want them in all of the data centers, right…? However, for auth tokens, users only make requests to one main datacenter at a time (the one nearest to them)… so I would only need to store them there.

Personally, I would even pay extra for the ability to have my KV static assets stored at every data center and not evicted after a few minutes of non-use. Same for the ability to store my KV keys at one specified location and not have them evicted (for things like my auth tokens).
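For what it’s worth, this is the kind of KV usage I mean for tokens, just a sketch with a hypothetical TOKENS binding; today every key is replicated globally, which is why a per-datacenter option would help:

```ts
// Sketch of the auth-token-in-KV pattern (hypothetical TOKENS binding name).
declare const TOKENS: KVNamespace;

// Store an already-encrypted token with a TTL so it expires on its own.
async function putToken(userId: string, encryptedToken: string): Promise<void> {
  await TOKENS.put(`token:${userId}`, encryptedToken, { expirationTtl: 60 * 60 * 24 });
}

// Read it back on each request; a cold key here is where the extra latency shows up.
async function getToken(userId: string): Promise<string | null> {
  return TOKENS.get(`token:${userId}`, "text");
}
```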


I’m also storing auth tokens, so I’m in the same boat. However, I’m aiming to use Durable Objects for this, since it should be faster and more reliable (no cold-key wait). Cloudflare has already said they’re aiming for a lower cost than the comparable Google Cloud Storage offering.
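Roughly what I have in mind, just a sketch with hypothetical names (AuthToken class, AUTH_TOKEN binding), one Durable Object per user addressed by idFromName:

```ts
// Sketch: one Durable Object per user holding that user's token (hypothetical names).
export class AuthToken {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    if (request.method === "PUT") {
      await this.state.storage.put("token", await request.text());
      return new Response("stored");
    }
    const token = await this.state.storage.get<string>("token");
    return new Response(token ?? "", { status: token ? 200 : 404 });
  }
}

export default {
  async fetch(request: Request, env: { AUTH_TOKEN: DurableObjectNamespace }): Promise<Response> {
    // idFromName places the object as described in the quote below.
    const userId = new URL(request.url).searchParams.get("user") ?? "anonymous";
    const id = env.AUTH_TOKEN.idFromName(userId);
    return env.AUTH_TOKEN.get(id).fetch(request);
  },
};
```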

The only part that scares me about Durable Objects for storing auth tokens (at least right now) is:

When using string-derived object IDs, the Durable Object is constructed at a Cloudflare point-of-presence chosen based on a hash of the string. The chosen location has no relationship to the location where the object was requested; it could be on the opposite side of the world. In the future, these objects will be constructed nearby the location where they were first requested (although a global lookup will still be needed to verify that the same name hasn’t been used on the other side of the world at the same time).

I think I’ll end up using Durable Objects for real-time types of features (messaging, newsfeeds, notifications, etc.) that don’t necessarily need to be instant.

You mean latency-wise?
I’m expecting it to be at least equal to S3 or Cloud Storage in terms of performance, which means anywhere from ~100 ms (GCS) to 5 seconds (S3, EU <-> US), best to worst case in my tests.

Yep, latency-wise. I haven’t done any tests with Durable Objects yet, so I’m curious what the latency will be from US -> EU.