Workers KV read / write speed

I have a simple proof-of-concept Worker that’s doing 2 KV reads and 3 KV writes per request.

This appears to increase the round-trip time from ~150ms for a simple hello-world up to ~600ms.

Is this normal, or should I look at doing something different? I’m reading/writing 4-5 lines of JSON per read/write.

Keys look like pv_123_bbccb12c021e115f7a3b67a57b069812d2968b966be5181a0f73462fbc863554_1578590680190

And each write has an expirationTtl passed with it.

Try writing with event.waitUntil(...)
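For example, a minimal sketch of deferring the writes with event.waitUntil. The KV binding (here called SESSIONS) and the fetch event are mocked so the snippet is self-contained; in a real Worker, SESSIONS would be a configured KV namespace and the event would come from the fetch handler.

```javascript
// Mock KV namespace standing in for a real binding (hypothetical name: SESSIONS).
const SESSIONS = {
  store: new Map(),
  async put(key, value, opts) { this.store.set(key, value); },
};

// Stand-in for the Workers fetch event: waitUntil collects promises that
// the runtime would keep alive after the response has been returned.
function makeEvent() {
  const pending = [];
  return { waitUntil: (p) => pending.push(p), pending };
}

async function handle(event) {
  // The write is deferred: the response returns immediately while the
  // put() finishes in the background.
  event.waitUntil(
    SESSIONS.put('pv_example', JSON.stringify({ hits: 1 }), { expirationTtl: 60 })
  );
  return 'ok';
}
```

The point is that the response is no longer blocked on the write latency; the runtime keeps the Worker alive until the deferred promise settles.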


KV is meant for write-low, read-heavy workloads; three writes per request is going to add a bunch of latency.

OK - so the current latency is expected. The answer might be to revisit the current setup and reduce the writes where possible, and/or use event.waitUntil - which I assume returns the response to the user while the writes happen in the background.
Thanks

Are you reading and writing on each request, or just writing 2-3 times? If you’re only writing, you should be able to do the writes asynchronously in parallel, and the total should be about as fast as a single write.
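If the writes don’t depend on each other, they can be issued concurrently with Promise.all. A sketch, with the KV namespace mocked so it runs anywhere (in a Worker it would be a configured binding):

```javascript
// Mock KV namespace (hypothetical binding name: KV).
const KV = {
  store: new Map(),
  async put(key, value) { this.store.set(key, value); },
};

async function writeAll(entries) {
  // Start every put at once and await them together, instead of awaiting
  // each write before starting the next - three puts then cost roughly the
  // latency of the slowest one, not the sum of all three.
  await Promise.all(entries.map(([key, value]) => KV.put(key, value)));
}
```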

It’s doing a few things:

  1. Looks for a short-lived key (representing a session), whose value contains the key of another KV pair holding a counter. If found, it updates the counter and re-saves the session key with a new TTL. If not, it creates two new KV pairs (session key and session details).
  2. Inserts a new KV pair
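Step 1 above could be sketched roughly like this, with a mocked KV namespace and hypothetical key names (the mock ignores TTLs, and the "session details" pair is simplified down to just the counter):

```javascript
// Mock KV namespace; a real Worker would use a configured binding.
const KV = {
  store: new Map(),
  async get(key) { return this.store.get(key) ?? null; },
  async put(key, value, opts = {}) { this.store.set(key, value); }, // TTL ignored in the mock
};

const SESSION_TTL = 1800; // hypothetical 30-minute session window

async function track(sessionKey) {
  // The session key's value points at the counter key.
  const counterKey = await KV.get(sessionKey);
  if (counterKey !== null) {
    // Known session: bump the counter and refresh the session key's TTL.
    const count = Number(await KV.get(counterKey)) + 1;
    await KV.put(counterKey, String(count));
    await KV.put(sessionKey, counterKey, { expirationTtl: SESSION_TTL });
    return count;
  }
  // New session: create the counter pair and the session key pointing at it.
  const newCounterKey = `count_${sessionKey}`;
  await KV.put(newCounterKey, '1');
  await KV.put(sessionKey, newCounterKey, { expirationTtl: SESSION_TTL });
  return 1;
}
```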

In this case there’s no avoiding the delays unfortunately.

You could try using Azure Cosmos DB, there’s a client here:

Keep in mind though, that it’s easy to rack up high costs on Cosmos.


OK, thanks.

I have a very similar PoC running as a Django app on Heroku with Postgres as the datastore - initial testing showed it’s around 2x faster. I’m guessing the Workers version would scale better (i.e. more consistently, with less monitoring and potentially lower cost) than the Heroku version?

Workers are best when you need to scale horizontally. With a normal server there is a limit to how large you can scale it before you need to add additional servers; with Workers you never have to think about scale if you build within its constraints.

Some constraints:

  • 50ms CPU time per request (don’t parse large HTML/XML; no DOM)
  • Long build times before deploy (even a 200kb script will take ~10 sec)
  • Unreliable debugging (local testing might not behave the same as the deployed Worker)
  • 50 sub-requests per request
  • Max 6 open connections per request
  • 1MB max script size
  • 128MB memory limit (easily consumed when processing large amounts of data)
  • 1 write per second per KV key
  • No JS eval or function invocation from strings
  • None of the Node APIs (like crypto and http)

Thanks for the info - though I think you mean horizontally?

Ah, yes, sorry.

Though both apply, since you don’t need to worry about either at all.

And there are no reboots; it’s always up as long as the Cloudflare network is up.

Beware of KV’s consistency rules here; you may lose some counts.
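A sketch of the hazard, using an in-memory mock in place of KV: two concurrent read-modify-write increments can both read the old value, so one update is lost. On KV the same thing happens across edge locations due to eventual consistency.

```javascript
// Mock KV namespace seeded with a counter.
const KV = {
  store: new Map([['counter', '0']]),
  async get(key) { return this.store.get(key); },
  async put(key, value) { this.store.set(key, value); },
};

async function increment() {
  const n = Number(await KV.get('counter')); // both callers may read 0 here
  await new Promise((r) => setTimeout(r, 10)); // simulated propagation delay
  await KV.put('counter', String(n + 1)); // both write '1': one count is lost
}
```

Running two increments concurrently leaves the counter at 1 instead of 2, which is exactly the lost-count behaviour to watch for.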

HTMLRewriter should help with this! HTMLRewriter · Cloudflare Workers docs
