How do we know the CPU timing?

The CPU time limit depends on the plan, but how do we actually measure a script's CPU time? Deploying just to test isn't a great option, and it would also be good to have an idea of how much headroom there is.

cc @KentonVarda


Hi @matteo,

We are working on providing better measurement tools and analytics.

Note that if you go over your time limit, in the preview you’ll see an error “script exceeded time limit”, and in production you’ll see error code 1102.

In practice, it is very rare for a correctly-functioning script to go over the limit. We see the vast majority of scripts take less than 1ms. When a script goes over the limit, it’s usually because of a bug in the script, and it usually goes way over the limit. We rarely see scripts that “hug the boundary”.

So, in practice, if you aren’t doing anything that you’d expect to take a lot of CPU time, and your script works OK in the preview, then you probably have nothing to worry about.

That said, I do agree we need better tools here, and we’re working on it!



Sorry to reopen such an old question, but it is really on the same topic.

I have a Worker script that runs completely fine in the Preview, but as soon as I open the link in another tab I see a time-limit-exceeded error (though not in every case). The thing is, it's just simple object parsing + restructuring and a reply (with an optional pretty print), nothing fancy. What could that be?

The links are: […] and […]

Pinging @harris as well…

Hi @matteo,

From your first link, it looks like your data is almost 3MB of JSON. If you’re parsing that with JSON.parse(), my guess is that will take quite a while – very possibly more than 50ms. Moreover, it will take lots of RAM, because each value in the JSON becomes a separate object on the heap. A 10x blowup is typical, so you may be using ~30MB of RAM to process this. On the first request after the worker starts, with the heap starting out pretty empty, that may be fine, but once the heap starts filling up, it will trigger GC pauses. My guess is that’s why you see the CPU time limit being hit on subsequent requests.
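To get a rough feel for that cost outside of Workers, here's a quick Node-style sketch. The payload shape and entry count are made up purely for illustration, and the timing will vary a lot by machine and runtime; the point is just that parsing a few MB of JSON is real work:

```javascript
// Build roughly 3MB of JSON text, then time JSON.parse() on it.
// Every value in the parsed result becomes a separate heap object,
// which is where the ~10x RAM blowup Kenton mentions comes from.
function makeJson(entries) {
  const arr = [];
  for (let i = 0; i < entries; i++) {
    arr.push({ id: i, name: "point-" + i, lat: 45.0 + i * 1e-6, lng: 9.0 + i * 1e-6 });
  }
  return JSON.stringify(arr);
}

const payload = makeJson(40000); // ~3MB of JSON text
const start = Date.now();
const parsed = JSON.parse(payload);
const elapsed = Date.now() - start;
console.log(`parsed ${(payload.length / 1e6).toFixed(1)}MB in ${elapsed}ms, ${parsed.length} values`);
```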

If you don’t actually need to process the data, but just re-format it, you might be able to implement that in a way that doesn’t require doing a full JSON parse. For example, you could scan the string for commas, inserting a newline and an indent after each one, with the indent computed by counting the number of open braces that haven’t been closed. This would be a lot more efficient – but of course it will require more code.
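As a rough sketch of that idea (not production code: it assumes well-formed JSON input and doesn't special-case empty objects or arrays), a depth-tracking pretty-printer that never calls JSON.parse() might look like:

```javascript
// Pretty-print a JSON string by scanning characters and tracking nesting
// depth, without building any objects on the heap. String literals are
// skipped so commas and braces inside them don't affect the layout.
function prettyPrintJson(input, indentUnit = "  ") {
  let out = "";
  let depth = 0;
  let inString = false;

  for (let i = 0; i < input.length; i++) {
    const ch = input[i];

    if (inString) {
      out += ch;
      if (ch === "\\") { out += input[++i]; continue; } // keep escaped char
      if (ch === '"') inString = false;
      continue;
    }

    switch (ch) {
      case '"':
        inString = true;
        out += ch;
        break;
      case "{":
      case "[":
        depth++;
        out += ch + "\n" + indentUnit.repeat(depth);
        break;
      case "}":
      case "]":
        depth--;
        out += "\n" + indentUnit.repeat(depth) + ch;
        break;
      case ",":
        out += ",\n" + indentUnit.repeat(depth);
        break;
      case ":":
        out += ": ";
        break;
      default:
        if (!/\s/.test(ch)) out += ch; // drop pre-existing whitespace
    }
  }
  return out;
}
```

For non-empty objects and arrays this produces the same layout as `JSON.stringify(JSON.parse(s), null, 2)`, but in a single pass over the string with O(output) memory instead of a full object graph.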

Thanks for the reply!

It is much less data at the beginning, but still, why would it work in the preview?

My issue is that it would need to transform an array into a GeoJSON object… I don’t think that would work with simple string substitutions.

Would I be able to somehow split the work? Like request from another worker a piece of it?

If you want I can post the code and a sample of the data.

Hi @matteo,

I’m not sure why it would work in the preview. Maybe the preview is seeing different data, or maybe the CPUs in the preview servers perform a little bit better for some reason.

I think you will need to find a way to design your code so that it processes smaller chunks of data. Maybe you could shard the data in some way so that for any particular request, you only need to parse a smaller piece? The details really depend on what you’re trying to do.
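As a sketch of what that sharding could look like (the function name and the `?shard=` parameter are hypothetical, not a Workers API; the right split depends entirely on your data):

```javascript
// Split an array into shardCount roughly equal slices and return one of
// them. Each request would handle a single shard, selected by something
// like a ?shard= query parameter, so no single invocation has to parse
// or transform the whole dataset. A caller (or another Worker) fans out
// shardCount requests and stitches the results back together.
function pickShard(items, shardIndex, shardCount) {
  const size = Math.ceil(items.length / shardCount);
  const start = shardIndex * size;
  return items.slice(start, start + size);
}
```

Note this only helps if the upstream data can be fetched or cached in pieces; if every request still has to download and JSON.parse the full 3MB body just to slice it, the parse cost remains.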

(Or for enterprise customers, we may be able to come up with custom solutions with the help of engineers here.)

The funny thing is that the […]vodafone link has almost double the data of the […]wind one. The structure of the data is identical (and the data isn’t under my control, unfortunately; otherwise it would make no sense to just refactor it in a Worker). The preview can parse, restructure, and pretty-print the larger one, while production cannot even manage the smaller one without the pretty printing… That is the strange thing, but alas.

Otherwise I would have to do the same thing via a Cloud Function (which Google isn’t rolling out worldwide, or at least not to Europe, for some reason)…


I’ll also add that using the testing tab in the Preview works as well. The doubt is: does that run on a preview server or a production server?

I’d like to know that too. Previously the Worker execution time was shown in the ‘Workers’ tab, including the 99th-percentile time, but with the updated analytics that information isn’t there anymore. At least, I cannot find it.

That’s quite a shame, because speed is obviously quite important.