Durable Objects - larger storage limit per object + time-invalidated storage?

For me, the most powerful and obvious use of Durable Objects is as a caching layer in front of our SaaS’s API.

For a given incoming URL pattern that’s a GET on our API, my idea is to fetch the data from a Durable Object, avoiding a call to our origin.

Guaranteed consistency is the real reason this works: an object with lots of writes would not have worked well with the eventually consistent KV store.

On every write, our API would push the latest data into the Durable Object, so any subsequent GET is served from the cache and is much faster.
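To illustrate the read path I have in mind, here’s a rough sketch of a Worker fronting the API. All names are hypothetical (`API_CACHE` is an assumed Durable Object binding, not a real one), and this is just the shape of the idea, not a working implementation:

```javascript
// Pure helper: derive a stable Durable Object name from the request URL,
// so that one object instance caches one API resource.
function cacheNameFor(url) {
  const u = new URL(url);
  return u.pathname + u.search; // one object per path + query string
}

// Hypothetical Worker fronting the API. `API_CACHE` would be a
// Durable Object binding configured in wrangler.toml.
const worker = {
  async fetch(request, env) {
    if (request.method !== "GET") {
      return fetch(request); // let writes pass straight through to origin
    }
    const id = env.API_CACHE.idFromName(cacheNameFor(request.url));
    const stub = env.API_CACHE.get(id);
    return stub.fetch(request); // served from the object's cached copy
  },
};
```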

There are two things I would need from the Cloudflare team:

  1. I need a larger per-object storage limit, somewhere in the 100 KB - 3 MB range. From a brief look at the limits, the per-object limits seemed quite low.
  2. Just like Workers KV auto-expires keys/values, I need the same in Durable Objects: expire an object after a given lifetime, e.g. 3 days, so that no billing accrues and the object simply ceases to exist. This lifetime should also be resettable, i.e. every fetch resets the expiry countdown to 3 days. That way stale cached objects go away automatically.
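Until something like (2) exists, the resettable expiry could presumably be emulated in application code inside the object (the object itself would still exist and bill, which is exactly why I want the platform feature). A minimal sketch of the sliding-window logic, with hypothetical names:

```javascript
const TTL_MS = 3 * 24 * 60 * 60 * 1000; // the 3-day lifetime from the example

// An entry is fresh only while `now` is before its recorded expiry.
function isFresh(expiresAt, now) {
  return typeof expiresAt === "number" && now < expiresAt;
}

// Every fetch slides the window forward, resetting the countdown.
function nextExpiry(now) {
  return now + TTL_MS;
}

// Inside the object it might be used like this (assumed storage API):
//   const expiresAt = await storage.get("expiresAt");
//   if (!isFresh(expiresAt, Date.now())) { /* treat as a cache miss */ }
//   await storage.put("expiresAt", nextExpiry(Date.now()));
```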

Is this on the roadmap?

Where did you see these limits? I haven’t found the usual listing of specs for Durable Objects, but see that KV allows up to 2MB.

It’s 10MB now for the standard KV.

Don’t believe everything you read.


I believe that page is wrong… @cloonan?


It seems they missed updating that page; it changed to 10 MB per KV value quite a while ago.
(They changed it when they launched Workers Sites, because images and scripts were often larger than 2 MB.)

However, we’re talking about Durable Objects here - I’m not aware of their limits yet.


I would expect that to be the lower bound at least, but that’s just my assumption… I haven’t tested it :confused:

Can’t find where I saw this anymore!


I just realized that our API, which runs outside of Cloudflare, cannot access a Durable Object directly - that’s a deal breaker for now.

When the primary data is updated, we want to push it to the Durable Object to keep it at the latest version.

You could always make a Worker to update the data from outside…
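Something like this, perhaps - a plain Worker that accepts the external push and forwards it into the object. The binding name, URL scheme, and path mapping here are all made up for illustration:

```javascript
// Hypothetical mapping from the public push URL to the object name,
// e.g. POST https://push.example.com/push/v1/items -> "/v1/items".
function objectNameFromPath(pathname) {
  return pathname.replace(/^\/push/, "");
}

// Hypothetical push endpoint; `API_CACHE` is an assumed DO binding.
const pushWorker = {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("method not allowed", { status: 405 });
    }
    const name = objectNameFromPath(new URL(request.url).pathname);
    const id = env.API_CACHE.idFromName(name);
    return env.API_CACHE.get(id).fetch(request); // the object stores the body
  },
};
```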


Hi folks, sorry for the confusion. To clarify a few things:

  • Workers KV and Durable Objects have different storage limits.
  • Workers KV values are limited to 25 MiB as of last week. Prior to that, the limit had been 10 MiB for quite some time. The page @sdayman shared a screenshot of is unfortunately out of date. Thanks for bringing that up, I’ll get it fixed.
  • Durable Objects have no limit on the size of an individual object.
  • Durable Objects do limit the size of each individual key and value stored within the object, to 2 KiB for keys and 32 KiB for values.
  • You cannot talk directly to a Durable Object from outside Cloudflare, but it’s very easy to define a normal Worker that accepts an incoming request and routes it to the desired Durable Object. For instance, that’s what’s done in the counter example in our docs.

@arobinson thanks for clarifying! Further questions:

  1. For Durable Objects, you limit keys to 2 KiB and values to 32 KiB. Isn’t the value the object itself, which per the line above is unlimited?
  2. I need an object to auto-expire, just like KV has this (for use in caching use cases) and so that I don’t run up bills for dead objects that live forever. Any plans for this?
  1. Not quite. A Durable Object is actually a special Worker instance, routed to by name/ID, that has access to its own in-memory state and its persistent state; the latter is exposed via a key-value interface.
  2. We do plan on improving the lifecycle management of objects soon, and auto-expiration is very likely to be part of that. I don’t have any specifics to share right now though.
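If I understand (1) correctly, then since each stored value is capped at 32 KiB but the object as a whole is unbounded, a larger cached payload (like the 100 KB - 3 MB responses discussed above) could presumably be split across several keys. A rough sketch of that idea, with nothing here taken from real docs:

```javascript
const CHUNK_SIZE = 32 * 1024; // the per-value cap mentioned above, in bytes

// Split a byte buffer into <= 32 KiB pieces keyed "chunk:0", "chunk:1", ...
function toChunks(bytes) {
  const entries = [];
  for (let i = 0; i * CHUNK_SIZE < bytes.length; i++) {
    entries.push([
      `chunk:${i}`,
      bytes.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE),
    ]);
  }
  return entries;
}

// Inside the Durable Object the chunks could then be persisted together,
// something like (assumed storage API):
//   await this.state.storage.put(Object.fromEntries(toChunks(body)));
```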