I suddenly have a question regarding the 50ms CPU time limit for Workers. I'm trying to future-proof my application against growth…
My question is this:
I'm thinking about using msgpack / JSON to store values inside KV… BUT let's say I store 512MB. The serialization and deserialization of that 512MB of data (compressed or otherwise) may be highly processor-intensive.
So… for a large application running inside Workers, do the tech guys recommend storing a size-optimized (compressed) KV value, or a format that's faster to process, given the 50ms CPU limit?
Please don't just tell me I'll never / rarely hit the 50ms CPU time or the 128MB RAM limit, implying I can write bloated code to go with the stored values.
In summary, the question is: I could retrieve a 512MB JSON value from the KV store, or compress that 512MB JSON down to 64MB. With the 50ms CPU time limit, how does that work? Is the transfer of that 512MB into "CPU working time" counted as part of the 50ms of processing time, or only the actual decoding? I'm not sure how to phrase it properly, but I'm hoping for more in-depth insight on whether it's worth optimizing my code versus just keeping the storage handling simple, that's all. If anyone has come across similar issues hitting limits with Worker processing / KV in production use, do mention that too. thx