I’m trying to design a serverless commenting system based on Workers KV.
List calls look to be fairly expensive, so I really want to avoid doing a list call per request (or spending the CPU time re-rendering the comments each time). It seems smart to cache the rendered comments in a KV object and have the worker update that object on each comment write.
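For concreteness, the write path I have in mind looks roughly like this. This is only a sketch against a minimal in-memory stand-in for KV; the key scheme, `writeComment`, `readComments`, and the `KVLike` interface are all my own placeholders, not real Workers APIs:

```typescript
// Minimal stand-in for a KV namespace (get/put/list), just enough to
// sketch the write path. In a real worker this would be the KV binding.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
  list(prefix: string): Promise<string[]>; // returns matching keys
}

// In-memory stub so the sketch is self-contained (no eventual
// consistency modeled here — a single consistent store).
class MemoryKV implements KVLike {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async put(key: string, value: string) { this.store.set(key, value); }
  async list(prefix: string) {
    return [...this.store.keys()].filter(k => k.startsWith(prefix));
  }
}

// On comment write: store the comment under its own key, then re-list
// all comments for the post and overwrite the rendered cache object.
async function writeComment(kv: KVLike, postId: string, id: string, body: string) {
  await kv.put(`comment:${postId}:${id}`, body);
  const keys = await kv.list(`comment:${postId}:`);
  const comments = await Promise.all(keys.sort().map(k => kv.get(k)));
  await kv.put(`rendered:${postId}`, comments.join("\n")); // cached render
}

// On read: serve the cached render, avoiding a list() per request.
async function readComments(kv: KVLike, postId: string): Promise<string> {
  return (await kv.get(`rendered:${postId}`)) ?? "";
}
```

Against a single consistent store this works fine; the trouble described below only appears once each edge node can see the writes in a different order.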
But Workers KV is eventually consistent, with the last write taking priority, and I’m having a hard time working out how to avoid dropping comments from the cache when two edge nodes receive comments at the same time.
My thought was to make edge nodes always check the list when seeing a new version of the cache key. But what if the updates to the comment list from a node writing an intermediate version of the cache aren’t visible to that edge node yet?
Sequence of this edge case:
- Worker one writes comment to list and updates rendered cache object
- Worker two writes comment to list and updates rendered cache object
- Worker two’s writes become visible to Worker three
- Worker three receives a user request for comments, sees that Worker two’s cache object is new, re-checks the list, and sees no other comments (Worker one’s comment hasn’t replicated yet)
- Worker one’s writes become visible to Worker three. Because Worker two’s cache write is newer, Worker one’s cache write is simply discarded
- Worker three never sees Worker one’s comment (unless another comment comes in later)
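The steps above can be simulated with a toy model of eventual consistency: one per-node view of the data, last-write-wins per key on a timestamp. To be clear, this is a mock, not the real KV API or replication mechanism; the timestamps just stand in for whatever KV uses internally to decide which write wins:

```typescript
type Entry = { value: string; ts: number };

// One edge node's replicated view: per-key last-write-wins on timestamp.
class NodeView {
  data = new Map<string, Entry>();
  apply(key: string, e: Entry) {
    const cur = this.data.get(key);
    if (!cur || e.ts > cur.ts) this.data.set(key, e); // older writes are discarded
  }
  get(key: string) { return this.data.get(key)?.value; }
  listComments() {
    return [...this.data.keys()].filter(k => k.startsWith("comment:")).sort();
  }
}

const three = new NodeView(); // the node Worker three reads from
let clock = 0;

// Worker one writes comment 1 and the rendered cache on its own node.
const workerOneWrites = [
  { key: "comment:1", e: { value: "first!", ts: ++clock } },
  { key: "cache",     e: { value: "first!", ts: ++clock } },
];
// Worker two, concurrently, writes comment 2 without having seen comment 1.
const workerTwoWrites = [
  { key: "comment:2", e: { value: "second!", ts: ++clock } },
  { key: "cache",     e: { value: "second!", ts: ++clock } },
];

// Worker two's writes reach Worker three's node first. Worker three sees a
// new cache version, re-checks the list, and finds only comment 2.
for (const { key, e } of workerTwoWrites) three.apply(key, e);
console.log(three.listComments()); // only ["comment:2"] is visible

// Then Worker one's writes arrive. The comment lands (distinct key), but
// its cache write loses last-write-wins to Worker two's newer cache value.
for (const { key, e } of workerOneWrites) three.apply(key, e);
console.log(three.get("cache"));   // still Worker two's render — comment 1 missing
console.log(three.listComments()); // yet both comments exist in the list
```

So the list ends up complete, but the cache stays stale until some future write happens to rebuild it, which is exactly the dropped-comment problem.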
I also wonder whether there is any guarantee that writes from other workers arrive in order. If I see the cache object from one worker, am I also guaranteed to see its list updates?
Another option I considered was making Worker one watch for updates to the cache that don’t contain its comment, and rewrite the cache when that happens. But what if Worker one gets restarted before Worker two’s update becomes visible to it?
Am I missing something? Is there a way to do this within the documented restrictions?
Or should I just give up and do all KV writes from a centralised server?