Space-time trade-offs with KV / Workers

I have a question about the 50ms CPU time limit for Workers. I'm trying to accommodate future growth and future-proof my code…

Question is this:
Given that I'm thinking about using MessagePack / JSON for the values I store in KV… let's say I store 512MB, where the serialization and deserialization of that 512MB of data (compressed or otherwise) may be highly processor-intensive.

So, for a large application running inside Workers, do the tech folks recommend storing an optimized (compressed) version of the KV value, or a version that is faster to process, given the 50ms CPU limit?

Please don't tell me I'll never / rarely hit the 50ms CPU time or the 128MB RAM limit, implying I can just write bloated code to match whatever I store.

In summary, the question is: I'm thinking of retrieving a 512MB JSON value from KV, or compressing that 512MB JSON value down to 64MB. How does that interact with the 50ms CPU time limit? Does the transfer of the 512MB count as part of the 50ms of processing time? I'm not sure how to phrase it properly, but I'm hoping for more in-depth insight into whether it's worth optimizing my code versus just doing simple storage processing. If anyone has hit similar limits with Worker processing / KV in production use, please mention that too. Thanks.
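
To make the two options concrete, here's a rough sketch of what I mean. These names are my own placeholders: MY_KV is a hypothetical binding, bigObject stands in for the real payload, and I'm assuming the runtime's CompressionStream / DecompressionStream are available for the gzip variant.

// Sketch of both storage strategies inside a Worker's fetch handler.
export default {
  async fetch(request, env) {
    const bigObject = { items: [/* ... large payload ... */] };

    // Option A: store plain JSON; every read pays the full JSON.parse cost.
    await env.MY_KV.put("mykey-plain", JSON.stringify(bigObject));
    const plain = await env.MY_KV.get("mykey-plain", "json");

    // Option B: store gzipped JSON; reads pay decompression plus parsing,
    // but far fewer bytes move between KV and the Worker.
    const gzipped = new Response(JSON.stringify(bigObject)).body
      .pipeThrough(new CompressionStream("gzip"));
    await env.MY_KV.put("mykey-gzip", gzipped);

    const stored = await env.MY_KV.get("mykey-gzip", "stream");
    const restored = await new Response(
      stored.pipeThrough(new DecompressionStream("gzip"))
    ).json();

    return new Response(JSON.stringify({ ok: plain !== null && restored !== null }));
  },
};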

You won't be able to parse anywhere near that much. Even just doing…

const kvValues = await myKV.get("mykey", "json")

…will consume the full 50ms of CPU time once you load more than about 1.7MB of KV values into a variable.
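
If your Worker doesn't actually need the parsed object itself (for example, it's just serving the value back to the client), you can avoid the JSON.parse cost entirely by reading the value as a stream and passing it through. A minimal sketch, assuming the same myKV binding:

const stream = await myKV.get("mykey", "stream");
// The bytes go straight through; no JSON.parse, so almost no CPU time spent.
return new Response(stream, {
  headers: { "content-type": "application/json" },
});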

In the distributed load testing I did yesterday, 500 items seems to be the limit, not 1,000 as I wrote before.


I'm glad you understood what I was asking; that's exactly the kind of answer I was looking for.

So what's the point of being able to store up to the 10MB limit (is it 10MB?) in KV when processing times out at 50ms? Is this for Enterprise only?

The 50ms of CPU time doesn't include the time the data spends in transit, I think. Or am I wrong?

I wanted to test whether streaming the KV response and parsing it in chunks would help with the CPU time, but I doubt it will make any difference. The built-in functions for parsing the KV value should already be the most efficient method (or Cloudflare has done it wrong).
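
For reference, this is roughly the test I had in mind. A sketch only, assuming a myKV binding; the chunk handling here just counts bytes, since there's no built-in incremental JSON parser to feed the chunks into:

const stream = await myKV.get("mykey", "stream");
const reader = stream.getReader();
let bytes = 0;
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // A real test would feed each chunk into an incremental parser here.
  bytes += value.byteLength;
}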

The problem is that just loading 1.7MB of objects into a new variable takes a considerable amount of CPU time.

What you've described is like saying you can only pull less than 0.85MB of data from KV to have any meaningful processing time left within the 50ms.