OK - so the current latency is expected then… so the answer might be to revisit the current setup and reduce the writes if possible, and/or use event.waitUntil - which I assume lets the Worker return a response to the user whilst the writes happen in the background.
Thanks
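For what it's worth, here's a minimal sketch of how I understand event.waitUntil would work - the namespace name, keys, and TTL are placeholders, and the in-memory KV stub is just so the snippet runs outside Workers:

```javascript
// Tiny in-memory stand-in for a bound KV namespace (placeholder names).
const SESSIONS = {
  store: new Map(),
  async put(key, value, opts) { this.store.set(key, value); },
  async get(key) { return this.store.get(key) ?? null; },
};

async function handleRequest(event) {
  // Start the writes, but don't await them before responding.
  const writes = Promise.all([
    SESSIONS.put('session:abc', 'counter:abc', { expirationTtl: 300 }),
    SESSIONS.put('counter:abc', '1'),
  ]);

  // waitUntil keeps the Worker alive until the writes settle,
  // after the response has already gone back to the user.
  event.waitUntil(writes);

  return new Response('ok');
}
```

So the user-visible latency is just the response itself; the puts finish in the background.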
Are you reading and writing on each, or just writing 2-3 times? If you’re only writing, then you should be able to do the writes async and it should be as fast as writing once.
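i.e. something like this - kick the puts off together instead of awaiting each one in turn (the KV stub and keys are made up for illustration):

```javascript
// In-memory stand-in for a KV namespace so the snippet runs anywhere.
const KV = {
  store: new Map(),
  async put(key, value) { this.store.set(key, value); },
};

// Sequential: each put waits for the previous one, so total
// latency is roughly 3x the latency of a single write.
async function writeSequential() {
  await KV.put('a', '1');
  await KV.put('b', '2');
  await KV.put('c', '3');
}

// Concurrent: all three puts are in flight at once, so total
// latency is roughly that of the slowest single write.
async function writeConcurrent() {
  await Promise.all([
    KV.put('a', '1'),
    KV.put('b', '2'),
    KV.put('c', '3'),
  ]);
}
```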
It looks for a short-lived key (representing a session), whose value contains the key of another KV pair holding a counter. If found, it increments the counter and re-saves the session key with a fresh TTL. If not, it creates two new KV pairs (the session key and the session details).
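If I've understood that right, the per-request logic is roughly the following - key names and the TTL value are invented for illustration, and the KV namespace is stubbed in memory so it runs anywhere:

```javascript
// In-memory stand-in for the bound KV namespace.
const KV = {
  store: new Map(),
  async get(key) { return this.store.get(key) ?? null; },
  async put(key, value, opts) { this.store.set(key, value); },
};

const SESSION_TTL = 300; // seconds; illustrative value

async function touchSession(sessionId) {
  const sessionKey = `session:${sessionId}`;

  // 1. Look up the short-lived session key; its value points at the counter key.
  const counterKey = await KV.get(sessionKey);

  if (counterKey !== null) {
    // 2a. Session exists: bump the counter and refresh the session key's TTL.
    const count = parseInt(await KV.get(counterKey), 10) || 0;
    await Promise.all([
      KV.put(counterKey, String(count + 1)),
      KV.put(sessionKey, counterKey, { expirationTtl: SESSION_TTL }),
    ]);
    return count + 1;
  }

  // 2b. New session: create both pairs (session key + counter).
  const newCounterKey = `counter:${sessionId}`;
  await Promise.all([
    KV.put(sessionKey, newCounterKey, { expirationTtl: SESSION_TTL }),
    KV.put(newCounterKey, '1'),
  ]);
  return 1;
}
```

Note the two writes in each branch are independent, so they can run concurrently via Promise.all.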
I have a very similar PoC running as a Django app on Heroku with Postgres as the datastore - initial testing was showing that the Workers version is around 2x faster. I'm guessing that the Workers version would probably also scale better (i.e. more consistently, with less monitoring and potentially lower cost) than the Heroku version?
Workers are best when you need to scale horizontally. With a normal server there's a limit to how large you can scale it before you need to add additional servers; with Workers you never have to think about scale, as long as you build inside its constraints.
Some constraints:
50ms CPU time per request (don't parse large HTML/XML; no DOM)
Long build times before deploy (even a 200KB script will take ~10 sec)
Unreliable debugging (local testing might not behave the same as the actual Worker)
50 sub-requests per request
Max 6 open connections per request
1MB max script size
128MB memory limit (easily consumed when processing large amounts of data)