Hello, my application is expecting a lot of writes, so my idea was to use Workers as an intermediary database: the frontend connects to a Worker, which creates 100,000 to 1,000,000 records in a KV namespace.
Then I'd use the new cron triggers feature to send the data to my server. Once the server has saved those entries, I want to delete them from KV. I know I can use the API to delete up to 10,000 entries in one call, but does that count as 1 delete or as 10,000 deletes?
Is this a good use case for Workers? Any experience with anything similar?
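For reference, a minimal sketch of the flow I have in mind (the BUFFER binding, BACKEND_URL var, and rec: key scheme are placeholder names I made up, not anything official):

```ts
// Hypothetical bindings: BUFFER is a KV namespace, BACKEND_URL is my server.
export interface Env {
  BUFFER: KVNamespace;
  BACKEND_URL: string;
}

export default {
  // Frontend POSTs one record per request; each record gets a unique key.
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = `rec:${Date.now()}:${crypto.randomUUID()}`;
    await env.BUFFER.put(key, await request.text());
    return new Response("queued", { status: 202 });
  },

  // Cron trigger: read a page of buffered keys, forward it, then delete.
  async scheduled(_controller: ScheduledController, env: Env): Promise<void> {
    const page = await env.BUFFER.list({ prefix: "rec:", limit: 1000 });
    if (page.keys.length === 0) return;

    const records: Array<{ key: string; value: string | null }> = [];
    for (const { name } of page.keys) {
      records.push({ key: name, value: await env.BUFFER.get(name) });
    }

    const res = await fetch(env.BACKEND_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(records),
    });

    // Only delete once the server has acknowledged the batch.
    if (res.ok) {
      await Promise.all(page.keys.map(({ name }) => env.BUFFER.delete(name)));
    }
  },
};
```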
If you're sending the data to your backend anyway and just want an ingestion buffer so you can acknowledge requests quickly, use a queuing system like AWS SQS or Azure Storage Queue, with a Lambda/serverless function on a built-in cron job to then do the processing. Queue messages also auto-delete after processing.
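On the producer side that's roughly this with the AWS SDK v3 (the queue URL and region are placeholders; the consuming Lambda gets wired up separately):

```ts
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" }); // placeholder region

// Enqueue one record; a scheduled Lambda drains the queue and does the real write.
export async function enqueueRecord(record: unknown): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-buffer", // placeholder
      MessageBody: JSON.stringify(record),
    })
  );
}
```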
This is actually an excellent use case for Workers, and if you have two domains (worker to worker), with one worker to initiate/manage the queue transfer, you can keep excellent flow-rate control over how much and how often the data gets transferred (see the sketch below).
Having actually tried Lambdas on AWS for the same thing, it's not nearly as easy as using Workers with KV to accomplish a similar result. Just keep in mind that the write order will NOT be consistent; entries will land out of order.
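The manager side can be as small as this, assuming the buffer worker exposes a drain endpoint (DRAIN_URL and the limit param are names I'm making up for the sketch):

```ts
// Hypothetical "manager" worker: a cron trigger calls the buffer worker's
// drain endpoint with an explicit batch size, so how much and how often
// data moves is controlled entirely from here.
export default {
  async scheduled(_controller: ScheduledController, env: { DRAIN_URL: string }): Promise<void> {
    await fetch(`${env.DRAIN_URL}?limit=500`, { method: "POST" });
  },
};
```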
And the 1-write-per-second KV limit only applies to the SAME key: you can write as many distinct keys as you want, as long as you don't overwrite any existing ones.
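In other words, as long as every record lands under a fresh key, the limit never bites. A tiny sketch (the rec: key scheme is just an example):

```ts
// Each record gets a brand-new key, so no single key is ever rewritten;
// the 1-write-per-second limit only throttles writes to the SAME key.
async function bufferRecord(kv: KVNamespace, payload: string): Promise<void> {
  const key = `rec:${Date.now()}:${crypto.randomUUID()}`;
  await kv.put(key, payload);
}
```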
TIP: If your database writes are small (1 KB or less), you can use the key metadata to store the entries themselves, and then use the list feature to mass-read the data and transfer a sh*t-ton of entries in one go.
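Concretely, something like this (metadata is capped at 1024 bytes per key; the function names are just for illustration):

```ts
// Store the record itself in the key's metadata (limit: 1024 bytes serialized)
// and leave the value empty.
async function writeSmallRecord(kv: KVNamespace, record: object): Promise<void> {
  const key = `rec:${Date.now()}:${crypto.randomUUID()}`;
  await kv.put(key, "", { metadata: record });
}

// One list() call returns up to 1000 keys WITH their metadata,
// so a whole batch can be read without a single get() per key.
async function readBatch(kv: KVNamespace): Promise<object[]> {
  const page = await kv.list({ prefix: "rec:", limit: 1000 });
  return page.keys.map((k) => k.metadata as object);
}
```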
Thanks Thomas for the tip! Can you give me a bit more information so I can search for this "meta-content" thing? It looks promising, but I don't know how to research it :S