Bulk KV writes from a worker (not through the Bulk API) causing 500, 503, and other error responses


General question: does anyone have experience using a worker to run many KV writes? We are seeing quite a few different errors. Could we be exceeding the 50ms CPU limit? We are not sure how to handle the errors we are seeing.

We currently use a worker plus KV storage to handle a large number (1m+) of redirects across many domains. We do not have access to the actual enterprise Cloudflare account hosting the worker (and therefore no access to the CF Bulk API endpoint). Instead, we have a second worker that acts as an API endpoint and writes whatever key/value pairs we send to it as a JSON array.
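For context, here is a minimal sketch of what such a write endpoint could look like. The binding name `REDIRECTS` and the payload shape (an array of `{ key, value }` objects) are assumptions, not our actual code:

```javascript
// Hypothetical worker endpoint: accepts a JSON array of { key, value }
// pairs and writes each one to a KV namespace bound as REDIRECTS.
const worker = {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }
    let pairs;
    try {
      pairs = await request.json();
    } catch {
      return new Response("Invalid JSON", { status: 400 });
    }
    if (!Array.isArray(pairs)) {
      return new Response("Expected a JSON array", { status: 400 });
    }
    // Each put() is a separate KV write; very large arrays risk hitting
    // worker limits, so callers should keep batches small.
    const results = await Promise.allSettled(
      pairs.map(({ key, value }) => env.REDIRECTS.put(key, value))
    );
    const failed = results.filter((r) => r.status === "rejected").length;
    return new Response(
      JSON.stringify({ written: pairs.length - failed, failed }),
      {
        status: failed ? 500 : 200,
        headers: { "content-type": "application/json" },
      }
    );
  },
};
// In a real Worker this object would be the module's default export.
```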

When testing this on our Bundled + Free hosting account I run into quite a lot of error responses:

getaddrinfo ENOTFOUND (from axios)
Request failed with status code 503
Request failed with status code 500

I have tried calling the endpoint at a number of different intervals; below are a few examples.

Total key/value pairs sent: 500k

  1. 1 request with 1000 kv / sequential
  2. 2 requests with 1000 kv (each) / parallel
    250ms pause between batches
  3. 15 requests with 100 kv (each) / parallel
    5 second pause between batches

On any error I pause the script for up to a minute.
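The client-side logic is roughly the following sketch: chunk the full list into small batches, send them sequentially, and back off exponentially on 5xx or network errors (the endpoint URL, batch size, and retry cap are illustrative assumptions):

```javascript
// Split an array into chunks of at most `size` items.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// POST one batch, retrying with exponential backoff (capped at one
// minute) on 5xx responses and network errors such as ENOTFOUND.
async function sendBatch(url, batch, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(batch),
      });
      if (res.ok) return;
      if (res.status < 500) {
        // 4xx won't succeed on retry; surface it immediately.
        throw new Error(`client error ${res.status}`);
      }
    } catch (err) {
      // Network-level failure; fall through to the backoff below.
    }
    await sleep(Math.min(60_000, 1000 * 2 ** attempt));
  }
  throw new Error("batch failed after retries");
}

// Usage sketch: send 100-pair batches one after another.
async function sendAll(url, pairs) {
  for (const batch of chunk(pairs, 100)) {
    await sendBatch(url, batch);
  }
}
```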

So far #3 works best, but the server still complains quite a lot. Any suggestions on how to approach this?