I’m trying to design a serverless commenting system based on Workers KV.
List calls look to be fairly expensive, so I really want to avoid doing a list call per request (or spending the CPU time rendering the comments). It seems smart to cache the rendered comments in a KV object and have the worker update that cache on every comment write.
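A minimal sketch of that write path, assuming a lot: the `list:`/`cache:` key names, the `Comment` shape, and `renderComments` are all made up here, and the KV namespace is mocked with a `Map` (real KV `get`/`put` calls are async):

```typescript
// Hypothetical write path: append to the comment list, then re-render and
// overwrite the cached HTML. KV is mocked with a Map for illustration.
type Comment = { id: string; author: string; body: string };

const COMMENTS_KV = new Map<string, string>(); // stand-in for the KV binding

function renderComments(comments: Comment[]): string {
  return comments.map(c => `<p><b>${c.author}</b>: ${c.body}</p>`).join("\n");
}

function postComment(pageId: string, comment: Comment): void {
  const listKey = `list:${pageId}`;
  const cacheKey = `cache:${pageId}`;

  // Read-modify-write on the list key -- this is exactly the step that
  // races under KV's last-write-wins semantics when two edge nodes do it.
  const list: Comment[] = JSON.parse(COMMENTS_KV.get(listKey) ?? "[]");
  list.push(comment);
  COMMENTS_KV.set(listKey, JSON.stringify(list));

  // Refresh the pre-rendered cache so reads never have to list or render.
  COMMENTS_KV.set(cacheKey, renderComments(list));
}
```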
But Workers KV is eventually consistent, with the last write taking priority, and I’m having a hard time working out how to avoid dropping comments from the cache when two edge nodes receive comments at the same time.
My thought was to make edge nodes always check the list when seeing a new version of the cache key. But what if the updates to the comment list from a node writing an intermediate version of the cache aren’t visible to that edge node yet?
Sequence of this edge case:

1. Worker one writes a comment to the list and updates the rendered cache object.
2. Worker two writes a comment to the list and updates the rendered cache object.
3. Worker two's writes become visible to Worker three.
4. Worker three receives a user request for comments, sees that the cache object from Worker two is new, checks the list, and sees no other comments.
5. Worker one's writes become visible to Worker three. Because Worker two's write is newer, Worker one's write is simply discarded.
6. Worker three never sees Worker one's comment (unless another comment comes in later).
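The lost write can be reproduced with a toy last-write-wins model. The `lwwWrite` helper and the explicit timestamps are assumptions standing in for KV's replication; the point is only that the worker whose write lands later wins the whole key:

```typescript
// Toy model of last-write-wins convergence: each key keeps the value with
// the highest write timestamp; an older write is silently discarded.
type Versioned = { value: string; ts: number };
const store = new Map<string, Versioned>();

function lwwWrite(key: string, value: string, ts: number): void {
  const cur = store.get(key);
  if (!cur || ts > cur.ts) store.set(key, { value, ts });
}

// Both workers read the (still empty) list before either write is visible,
// so each appends its own comment to an empty array.
const seenByOne: string[] = [];
const seenByTwo: string[] = [];

// Worker one writes ["A"] at t=1; worker two writes ["B"] at t=2.
lwwWrite("list", JSON.stringify([...seenByOne, "A"]), 1);
lwwWrite("list", JSON.stringify([...seenByTwo, "B"]), 2);

// After convergence the list holds only worker two's comment: "A" is gone.
const finalList: string[] = JSON.parse(store.get("list")!.value);
console.log(finalList); // ["B"]
```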
I also wonder whether there is any guarantee that writes from other workers arrive in order. If I see one worker's cache object, am I also guaranteed to see its list updates?
Another option I considered was making worker one watch for updates to the cache that don't contain its comment. But what if worker one gets restarted before it sees worker two's comment come in?
Am I missing something? Is there a way to do this within the documented restrictions?
Or should I just give up and do all KV writes from a centralised server?
The only thing I could think of is a queue system: either off-site on another server, which posts every x minutes or after y comments. That layer would handle the ordering, and could even compare against the last post to double-check.
Building it in a Cloudflare Worker, or posting to your KV DB directly as a queue, is probably going to be pretty costly.
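The queue layer could be sketched as a single-writer batcher. All names here are hypothetical, and the actual KV write is passed in as a callback so only this one component ever touches the key:

```typescript
// Hypothetical single-writer batcher: comments accumulate in a queue and are
// flushed as one KV write every N comments (a timer would also call flush),
// so the list and cache keys only ever have one writer.
class CommentBatcher {
  private pending: string[] = [];

  constructor(
    private flushEvery: number,
    private writeToKV: (comments: string[]) => void, // the single KV writer
  ) {}

  enqueue(comment: string): void {
    this.pending.push(comment);
    if (this.pending.length >= this.flushEvery) this.flush();
  }

  flush(): void {
    if (this.pending.length === 0) return;
    this.writeToKV(this.pending);
    this.pending = [];
  }
}
```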
I’m fine with eventual consistency. A 60 second delay (or inconsistency, with different nodes showing different comments) is fine for my use-case.
But most eventually consistent systems provide some mechanism to detect write conflicts and allow correction (or at least notification). As far as I can tell, Workers KV just discards and ignores these conflicts.
I think the “easiest” way to solve this is to have an external index that just counts unique IDs, incrementing on every new comment. You then set this ID on each new KV entry. That way comments would always be in order and have a unique ID, and you'd be able to avoid conflicts. Such a counter could scale quite easily, since it's only doing a very simple thing: something like a Golang executable that keeps the counter in RAM and persists it to disk.
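The core of that counter service is tiny. A sketch (in TypeScript here for consistency with the other snippets, though the post suggests Go; the class name is made up and disk persistence is stubbed out):

```typescript
// Sketch of the ID-allocating counter core: a monotonic counter held in
// memory by a single process, so increments can never conflict.
class CommentIdAllocator {
  constructor(private next = 1) {} // would be restored from disk on startup

  allocate(): number {
    return this.next++; // single writer, so IDs are unique and ordered
  }

  snapshot(): number {
    return this.next; // value to persist so a restart resumes safely
  }
}
```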
Thanks for the reply; it's kind of the answer I was expecting.
For anyone else encountering this type of problem in the future, I’m probably going to go with a small external server (or Lambda/Function) that receives completed comments from the workers, renders them and then updates the rendered cache object in Workers KV.
It works around the problem by having all writes to a given key come from a single, centralised source.
I do like @thomas4’s suggestion, as it works around the problem with the minimum amount of external complexity. But for my use-case as soon as I have an external component, I might as well give it other responsibilities to simplify the rest of my design.