I have an interesting thing happening. I have a Worker, and that Worker receives multiple POST requests concurrently. For example, let’s say it receives 100 requests in a given second. In this context, I consider an instance of a Worker to be the Worker spun up to handle a single request.
Here’s the weird thing. One instance of the Worker is handling multiple requests, rather than just its own. So 100 requests lead to 100 Workers spinning up to respond, but each of those 100 Workers’ fetch listeners is triggered 20-50 times, so the function the Worker responds with ends up running 2,000-5,000 times in total, rather than just 100 times for the 100 requests, as I would expect.
I’ve established that a given Worker is responding to multiple concurrent requests by collecting the request data inside each Worker, logging it, and seeing that the collected POST requests are indeed duplicates that Worker should not be getting.
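Simplified, the collection pattern looks roughly like this (names are illustrative, not my exact code):

```js
// Collect request bodies for debugging.
const seen = [];

addEventListener('fetch', (event) => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  // Record this request's body and log everything collected so far.
  const body = await request.clone().text();
  seen.push(body);
  console.log(`this Worker has seen ${seen.length} requests`, seen);
  return new Response('ok');
}
```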
When you send an HTTP request to Cloudflare, it is handled by one of the available edge servers (selected at random for each connection) in the point-of-presence your packets are routed to.
Each of these edge servers runs its own instance of the Workers Runtime, so each edge server spins up a separate “isolate” where your Worker script will run. Each isolate has its own global context but may handle more than one request. This should not be a problem unless you are doing stuff in the global scope.
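If it helps to see what I mean, here’s a minimal sketch (hypothetical, Service Worker format) where a global counter makes isolate reuse visible:

```js
// Lives in the isolate's global scope, so it persists across every
// request this isolate handles.
let requestCount = 0;

addEventListener('fetch', (event) => {
  requestCount++;
  // Each request triggers the fetch handler exactly once, but two
  // requests routed to the same isolate will log 1 and 2 here.
  console.log(`request #${requestCount} in this isolate`);
  event.respondWith(new Response(`this isolate has handled ${requestCount} requests`));
});
```

If your Worker pushes request data into a global array instead of a counter, each new request’s log will appear to contain every earlier request that the same isolate served.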
May I ask whether you are using the Service Worker format (`addEventListener('fetch', event => {})`) or the Modules syntax (`export default { async fetch() {} }`)? (Both are shown below for reference.)
If you send only a single request, does that generate more than one log entry?
Could you share the source code of the Worker that is experiencing these issues?
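For reference, the two formats look like this:

```js
// Service Worker format: handlers are registered on the global scope.
addEventListener('fetch', (event) => {
  event.respondWith(new Response('hello'));
});
```

```js
// Modules format: the handler is a method on the default export.
export default {
  async fetch(request, env, ctx) {
    return new Response('hello');
  },
};
```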
It was indeed a global scope issue. Turns out the Worker wasn’t actually running each of those requests, it was just collecting them. Now off to find the actual issue, but good to know it’s not with Workers. Thanks @albert!
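For anyone who lands here later: once the collection happens per request instead of in the global scope, the counts line up. A minimal sketch of the corrected shape (illustrative, not my production code):

```js
addEventListener('fetch', (event) => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  // State declared here is local to this one request, so the log
  // reflects only the request that triggered the handler.
  const body = await request.clone().text();
  console.log('received one request:', body);
  return new Response('ok');
}
```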