For me, the most powerful and obvious use of Durable Objects is as a caching layer in front of the API of our SaaS.
For a given incoming URL pattern that’s a GET on our API, my idea is to fetch the data off a durable object, avoiding a call to our origin.
The fact that consistency is guaranteed is the real reason this works; an object with lots of writes would not have worked very well with the KV store.
Our API (upon write) would push the latest object into the durable object. This makes any GET much faster since the cache serves it.
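Roughly the shape I have in mind, as a sketch only; the ApiCache class name and the "payload" key are placeholders I made up:

```ts
// Sketch of the caching object: GETs are served from storage, and our
// origin pushes fresh JSON in with a PUT whenever the primary data changes.
export class ApiCache {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    if (request.method === "PUT") {
      // Our API (upon write) pushes the latest serialized object here.
      await this.state.storage.put("payload", await request.text());
      return new Response("stored");
    }

    // GET: answer straight from the object, never touching our origin.
    const cached = await this.state.storage.get<string>("payload");
    return cached === undefined
      ? new Response("not cached", { status: 404 })
      : new Response(cached, { headers: { "Content-Type": "application/json" } });
  }
}
```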
There are two things I would need from the Cloudflare team:
I need a larger storage limit, somewhere in the 100 KB to 3 MB range. On a brief look at the limits, the per-object limits seemed quite low.
Just like the KV store has auto-expiration of keys/values, I need the same in Durable Objects, e.g. expire this object after a given lifetime, say 3 days, so that no billing would accrue and it simply wouldn’t exist. In addition, this lifetime should be resettable, i.e. on every fetch, reset the expiry countdown back to 3 days. This makes stale cached objects go away automatically.
It seems they have missed updating that page; it changed quite a while ago to 10 MB per KV value.
(They changed it when they launched Workers Sites, because images and scripts were often larger than 2 MB.)
However, we’re talking about Durable Objects here, and I’m not aware of what the limits are for those yet.
I just realized our API, which runs outside of Cloudflare, cannot access a Durable Object directly, which makes this a deal breaker for now.
Upon an update of the primary data, we want to push it to the Durable Object to keep it at the latest version.
Hi folks, sorry for the confusion. To clarify a few things:
Workers KV and Durable Objects have different storage limits.
Workers KV values are limited to 25 MiB as of last week. Prior to that, the limit had been 10 MiB for quite some time. The page @sdayman shared a screenshot of is unfortunately out of date. Thanks for bringing that up, I’ll get it fixed.
Durable Objects have no limit on the size of an individual object.
Durable Objects do limit the size of each individual key and value stored within the object, to 2 KiB for keys and 32 KiB for values.
You cannot talk directly to a Durable Object from outside Cloudflare, but it’s very easy to define a normal Worker that accepts an incoming request and routes it to the desired Durable Object. For instance, that’s what’s done in the counter example in our docs.
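As a minimal sketch of that pattern (the CACHE binding name is just an example you would configure in your wrangler config):

```ts
// A plain Worker that fronts the Durable Object namespace. "CACHE" is an
// example binding name; it would be configured in your wrangler config.
interface Env {
  CACHE: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Derive a stable object name from the request (here, the path), so the
    // same URL is always routed to the same Durable Object.
    const id = env.CACHE.idFromName(url.pathname);
    const stub = env.CACHE.get(id);

    // Forward the original request to the object and return its response.
    return stub.fetch(request);
  },
};
```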
@arobinson thanks for clarifying! Further questions:
For Durable Objects, you limit keys to 2 KiB and values to 32 KiB. Isn’t the value the object itself, which per the line above is unlimited?
I need an object to auto-expire, just like KV has (for caching use cases), so that I don’t run up bills for dead objects that live forever. Any plans for this?
Not quite. The Durable Object is actually a special worker instance routed to by name/ID that has access to its own in-memory state and persistent state, the latter of which is exposed via a key-value interface.
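As an illustrative sketch of that distinction (class and key names are made up):

```ts
// Illustrative only: the object is an ordinary class instance, so it can keep
// plain in-memory fields, while state.storage is the persistent key-value
// interface whose per-key/per-value limits were quoted above.
export class ExampleObject {
  private hits = 0; // in-memory state, lost if the object is evicted

  constructor(private state: DurableObjectState) {}

  async fetch(_request: Request): Promise<Response> {
    this.hits++; // fast, purely in-memory

    // Persistent state: each put() stores one value under one key, and it is
    // that individual value (not the whole object) the 32 KiB limit applies to.
    await this.state.storage.put("hits", this.hits);
    const stored = await this.state.storage.get<number>("hits");

    return new Response(`in-memory=${this.hits}, stored=${stored}`);
  }
}
```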
We do plan on improving the lifecycle management of objects soon, and auto-expiration is very likely to be part of that. I don’t have any specifics to share right now though.
No real update, sorry. Except that in theory you could now build your own TTLs using alarms, but I understand that that’s much more work than having the feature built-in.
Is there any update on raising the max size of a key’s value in a Durable Object? I have exactly the same use case as @tallyfy in caching JSON API responses from our origin, so we’ve obviously reached the same conclusions about what DOs could be great for! Given the size restrictions we may have to look at R2 instead, but that has a maximum reads-per-second limit, and the Workers Cache API is not great for purging selectively.
@arobinson @KentonVarda we look forward to DOs with larger objects and, particularly for us, TTL (time to live), i.e. the object auto-deletes via a lifecycle. It’s exciting as it would shield our origin from serving GET requests and also serve them way faster at the edge. The TTL is important, as we just can’t accumulate or store these DOs forever.
You can build a TTL using alarms. Whenever your object is accessed, you would use setAlarm() to set the alarm time to Date.now() + timeout. There is only one alarm per object, so each call to setAlarm() replaces the previous alarm with the new alarm time. When that time is reached, the alarm handler will be invoked. In your alarm handler, you would use storage.deleteAll() to delete your object.
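A sketch of what that could look like; the 3-day timeout and the class/key names are arbitrary examples:

```ts
// Sketch of a self-expiring object: every access pushes the deadline out,
// and the alarm handler wipes storage once the object has gone quiet.
const TTL_MS = 3 * 24 * 60 * 60 * 1000; // e.g. 3 days; pick whatever fits

export class ExpiringCache {
  constructor(private state: DurableObjectState) {}

  async fetch(_request: Request): Promise<Response> {
    // One alarm per object: this call replaces any previously set alarm,
    // which is exactly what resets the countdown on each access.
    await this.state.storage.setAlarm(Date.now() + TTL_MS);

    const cached = await this.state.storage.get<string>("payload");
    return new Response(cached ?? "empty");
  }

  async alarm(): Promise<void> {
    // Nothing touched this object for TTL_MS, so delete everything; with no
    // stored data left, the object effectively ceases to exist.
    await this.state.storage.deleteAll();
  }
}
```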
Since it’s possible to build TTL using alarms, we probably won’t be adding a separate TTL feature.
Regarding value size limits, there’s work being done on a new storage engine which would have no limits, but at present I cannot make any promises about when or even if that new storage engine will ship. In the meantime you will need to split up large values across multiple keys and use storage.list() (e.g. with a prefix search) to read them all at once.
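A sketch of that workaround; the "blob:" prefix and chunk size are arbitrary choices, and a real implementation would also respect the per-call batch limits on put():

```ts
// Sketch of splitting one large value across several keys and reading it back
// with a prefix scan.
const CHUNK_SIZE = 16 * 1024; // comfortably under the per-value limit

export async function putLarge(storage: DurableObjectStorage, value: string): Promise<void> {
  const entries: Record<string, string> = {};
  for (let i = 0, n = 0; i < value.length; i += CHUNK_SIZE, n++) {
    // Zero-pad the chunk index so list() returns the chunks in order.
    entries[`blob:${String(n).padStart(6, "0")}`] = value.slice(i, i + CHUNK_SIZE);
  }
  await storage.put(entries); // put() also accepts an object of key/value pairs
}

export async function getLarge(storage: DurableObjectStorage): Promise<string> {
  // list() with a prefix returns a Map of matching entries, sorted by key.
  const chunks = await storage.list<string>({ prefix: "blob:" });
  return Array.from(chunks.values()).join("");
}
```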
Is the DO value limit of 128 KB one that can be increased to 256 KB on request? We could obviously do anything with R2 buckets, but the harder global DO structure is appealing for something like close-to-real-time bid/inventory counts across large search result pages.