I want to know whether Cloudflare routes worker traffic to another edge server with free memory when the nearest worker instance exceeds its memory limit. If not, does enabling Argo route it to the next nearest server?
A worker is limited to 128MB of memory… Workers run on every metal in every Cloudflare datacenter.
A worker request wouldn’t be routed to another datacenter.
@cscharff Thanks for the info. Since the traffic won’t get rerouted to another server even when the nearest worker instance is going over the limit, is Workers Unbound the only option to prevent that?
Well, technically, to prevent it you just don’t keep 128MB of data in memory
I’m not sure of the memory limit for Unbound, but that’s where you pay for the CPU time that you use.
I’d (and I’m sure the team would too) be interested in your use case where you will exceed 128MB per request
I’m not talking about a per-request basis. I’m talking about a high-traffic situation
You mean when a datacenter gets enough traffic that all of its resources get depleted?
Then I believe Cloudflare will route the request to the next available datacenter
I’m not talking about a situation where a specific Cloudflare server gets huge traffic. If that’s the case, then Cloudflare may reroute to another server as you said. But I was talking about a scenario where my worker instance gets high traffic. @cscharff already answered my question. Now what I wanted to know is whether Workers Unbound is the only option to prevent that, since (I think) it doesn’t have any memory restriction in place, or whether something else exists
CF spawns a new worker “process” per datacenter if there isn’t an idle process (waiting on IO/timeout/KV) to accept another “fiber”/request promise. Some people have tested how many processes CF launches per worker in a POP under high load, using a file-level global JS variable, and never observed more than 2 simultaneous worker processes per POP (though the tester was always connecting to CF from a single IP address, over many HTTP connections to the same worker).

If you are hitting the 128 MB limit, you need to process things as streams/block data, not as strings. And if you keep a global JS object/array as an in-memory cache (faster than the Cache API and KV, which are in turn faster than origin), you need to evict entries from that per-process global cache after a hard limit, or demote/promote entries between your global cache and the Cache API/KV.
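To illustrate the “streams, not strings” point: instead of buffering a whole upstream body with `await response.text()`, you can pipe it through a `TransformStream`, so memory use stays at roughly one chunk rather than the full payload. This is only a sketch (the handler name and the identity transform are hypothetical, not from this thread); the same pattern works in a Worker’s `fetch` handler.

```javascript
// Sketch: pass an upstream body through as a stream instead of
// buffering it into a string. Per-chunk processing would go in the
// transform() callback; here it just forwards bytes unchanged.
async function handleRequest(upstreamResponse) {
  const transform = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk); // inspect/modify the chunk here if needed
    },
  });
  // The client starts receiving bytes as they arrive; peak memory is
  // bounded by chunk size, not by the total response size.
  return new Response(upstreamResponse.body.pipeThrough(transform), {
    status: upstreamResponse.status,
    headers: upstreamResponse.headers,
  });
}
```

In a real Worker you would typically do `return handleRequest(await fetch(request))` and keep any per-chunk work inside `transform()`.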
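And for the global-variable cache with a hard limit: a minimal sketch (all names hypothetical) of a per-isolate in-memory cache capped at a fixed number of entries. A JS `Map` preserves insertion order, so evicting the first key approximates “drop the oldest entry”; on a miss you would fall back to the Cache API, KV, or origin.

```javascript
// Per-isolate global cache with a hard entry cap. Each worker
// "process" gets its own copy of this state.
const MAX_ENTRIES = 1000;
const memCache = new Map();

function cachePut(key, value) {
  // Re-insert so recently written keys move to the back of the Map
  if (memCache.has(key)) memCache.delete(key);
  memCache.set(key, value);
  // Hard limit: evict the oldest entries once we exceed the cap
  while (memCache.size > MAX_ENTRIES) {
    const oldestKey = memCache.keys().next().value;
    memCache.delete(oldestKey);
  }
}

function cacheGet(key) {
  // undefined on miss -> caller demotes to Cache API / KV / origin
  return memCache.get(key);
}
```

Capping by entry count is a simplification; if values vary a lot in size, you would track approximate bytes instead so the cache never approaches the 128 MB isolate limit.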