Worker consistently exceeds CPU limit without being killed

I’m experimenting with WASM on Workers and have a script that does a fair bit of processing. I’m on the free plan, and the worker consistently exceeds 150 ms of CPU time without being killed. Is this expected behavior (i.e., can I rely on it)? If not, am I misinterpreting the metrics compared to the stated CPU time limit (10 ms)? I understand that the CPU limit is burstable, but this is a consistent overage on every request.
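
For reference, here's a stripped-down sketch of the worker (module syntax). The hash loop is just a placeholder for the actual WASM processing, so treat the numbers as illustrative:

```ts
// worker.ts — stand-in for the real script.
// The loop below exists only to burn well past the 10 ms CPU limit.
export default {
  async fetch(request: Request): Promise<Response> {
    let acc = 0;
    for (let i = 0; i < 100_000_000; i++) {
      acc = (acc * 31 + i) >>> 0; // cheap integer hashing, pure CPU work
    }
    return new Response(`done: ${acc}`);
  },
};
```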

I think what you’re seeing is due to the “rollover bank” allocated to Workers. It allows workers to run reliably even when some requests go over the limit, to absorb things like V8 incremental optimization and garbage collection. The bank is tracked per runtime instance, and there are many runtime instances in each Cloudflare data center. So if you’re making infrequent requests, enough time may have passed since your worker last ran on a given instance that it has been evicted and needs to be restarted, which resets the rollover bank.
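
To make that concrete, here's a toy model of how a per-instance rollover bank could behave. This is only an illustrative sketch: the constants, the cap, and the idea that a warm instance starts with a seeded bank are assumptions, not Cloudflare's actual accounting.

```ts
// Toy model of a per-instance CPU "rollover bank" (illustrative only).
const CPU_LIMIT_MS = 10;   // free-plan per-request CPU limit
const BANK_CAP_MS = 1000;  // hypothetical cap on banked time
const SEED_BANK_MS = 500;  // hypothetical budget a warm instance holds

class RuntimeInstance {
  // Accumulated rollover; wiped when the worker is evicted from this instance.
  private bankMs: number;

  constructor(initialBankMs: number) {
    this.bankMs = Math.min(BANK_CAP_MS, initialBankMs);
  }

  handleRequest(cpuUsedMs: number): "ok" | "killed" {
    if (cpuUsedMs <= CPU_LIMIT_MS) {
      // Under the limit: bank the unused budget, up to the cap.
      this.bankMs = Math.min(BANK_CAP_MS, this.bankMs + (CPU_LIMIT_MS - cpuUsedMs));
      return "ok";
    }
    const overageMs = cpuUsedMs - CPU_LIMIT_MS;
    if (overageMs <= this.bankMs) {
      this.bankMs -= overageMs; // overage covered by the bank: request survives
      return "ok";
    }
    this.bankMs = 0;
    return "killed"; // bank exhausted: runtime terminates the request
  }

  evict(): void {
    this.bankMs = 0; // a restart on this instance resets the bank
  }
}

// A warm instance with banked time lets 150 ms requests through for a while...
const warm = new RuntimeInstance(SEED_BANK_MS);
console.log(warm.handleRequest(150)); // "ok"  (bank: 500 -> 360)
console.log(warm.handleRequest(150)); // "ok"  (bank: 360 -> 220)

// ...but after eviction the bank is gone and the same request is killed.
warm.evict();
console.log(warm.handleRequest(150)); // "killed"
```

The last line is the key point: once the worker is evicted and restarted, the bank is empty, so a 150 ms request that sailed through before can be terminated.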

So, it’s not unexpected behavior, but you cannot rely on it: the behavior would change if your worker got more requests, if more of your requests happened to be routed to the same instance, or if the load in the data center changed significantly.

See also: Worker CPU Time "rollover"