Streaming response throws "Worker exceeded CPU time limit" error

Hi everyone,
I am trying to implement the following flow on Cloudflare Workers:

  1. download an encrypted file
  2. convert it to a byte array
  3. use a TransformStream to send the decrypted data to the response
    let { readable, writable } = new TransformStream();

    method(writable); // deliberately not awaited, so the Response returns immediately

    return new Response(readable, { headers, status: 200 });

    async function method(writable) {
        const writer = writable.getWriter();
        for (…) {
            await writer.write(decrypted);
        }
        await writer.close();
    }
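
For completeness, a runnable version of this pattern looks roughly like the sketch below (module Worker syntax; decryptChunk and the file URL are placeholders, not my real decryption code):

    // Rough, runnable version of the pattern above (module Worker syntax).
    // decryptChunk and the file URL are placeholders, not the real code.
    export default {
        async fetch(request: Request): Promise<Response> {
            const { readable, writable } = new TransformStream<Uint8Array, Uint8Array>();
            const upstream = await fetch("https://example.com/encrypted-file");
            pump(upstream.body!, writable); // intentionally not awaited
            return new Response(readable, { status: 200 });
        },
    };

    async function pump(body: ReadableStream<Uint8Array>, writable: WritableStream<Uint8Array>) {
        const writer = writable.getWriter();
        const reader = body.getReader();
        while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            await writer.write(decryptChunk(value)); // the CPU-heavy part
        }
        await writer.close();
    }

    // Placeholder: the real decryption happens here.
    function decryptChunk(chunk: Uint8Array): Uint8Array {
        return chunk;
    }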

On the client side I get only part of the file content, and in Cloudflare Workers I get this error:

    "className": "Error",
    "description": "Error: Worker exceeded CPU time limit.",
    "preview": {

Can I solve this problem on Cloudflare Workers?

A simple case to reproduce:

    let { readable, writable } = new TransformStream();

    write(writable);

    return getWebResponse(readable, 200);

    const write = async (writable: WritableStream) => {
        const writer = writable.getWriter();
        const encoder = new TextEncoder();
        for (let i = 0; i < 1_000_000_000; i++) {
            console.log('i = ', i);
            await writer.write(encoder.encode("Hello world\n"));
        }
        await writer.close();
    };

The example above breaks after roughly 50,000–70,000 iterations.

Hi there!

This error is thrown when your Worker exceeds the CPU limit for a single invocation. This is a fatal error and there is no way to catch it. You need to either make sure the Worker will not exceed this limit or handle the error client-side when it does.
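
If it helps, here is a rough client-side sketch for detecting a truncated stream. It assumes your Worker sets a Content-Length header up front, and the helper name is purely illustrative:

    // Sketch: detect a truncated streaming response on the client.
    // Assumes the server sent a Content-Length header; purely illustrative.
    async function readWholeBody(url: string): Promise<Uint8Array> {
        const res = await fetch(url);
        const expected = Number(res.headers.get("Content-Length") ?? NaN);
        const reader = res.body!.getReader();
        const chunks: Uint8Array[] = [];
        let received = 0;
        try {
            while (true) {
                const { done, value } = await reader.read();
                if (done) break;
                chunks.push(value);
                received += value.byteLength;
            }
        } catch {
            // A Worker killed mid-stream usually surfaces as a network error here.
            throw new Error(`Stream aborted after ${received} bytes`);
        }
        if (!Number.isNaN(expected) && received < expected) {
            throw new Error(`Truncated: got ${received} of ${expected} bytes`);
        }
        // Concatenate the chunks into one buffer.
        const out = new Uint8Array(received);
        let offset = 0;
        for (const c of chunks) {
            out.set(c, offset);
            offset += c.byteLength;
        }
        return out;
    }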

The limit is 10 ms on the Workers Free plan and 50 ms on the Workers Paid plan. Encryption/decryption is very CPU-intensive, so you will exceed the CPU limit quickly if that’s what you’re doing. If you need more CPU time than this, you should take a look at Workers Unbound, which is billed based on duration and has much higher CPU time limits.

I would also recommend reconsidering whether Workers is the right tool for the job. Real-time decryption of large files will get expensive quite quickly.


Hi @user22058

If your origin server supports Range requests (“Accept-Ranges”, HTTP 206 Partial Content), you can split the workload into chunks, execute them in parallel two at a time, and automatically cycle through Worker invocations as each one’s CPU time is used up. Although you cannot measure CPU time from within a Worker, your workload should be fairly predictable, so you can tune the right amount of work per sub-request so as not to exceed the CPU time limit.
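
To sketch the idea (the chunk size and the URL below are illustrative assumptions, not code from my project):

    // Sketch of the chunk-cycling idea. The front invocation loops over byte
    // ranges and delegates each range to a fresh Worker invocation via a
    // sub-request, so every chunk gets its own CPU time budget. The front
    // invocation itself mostly waits on I/O, which does not count as CPU time.
    // CHUNK_SIZE and the /decrypt-range route are illustrative assumptions.
    const CHUNK_SIZE = 8 * 1024 * 1024; // tune so one chunk stays under the limit

    async function stitchRanges(totalSize: number, writable: WritableStream<Uint8Array>) {
        const writer = writable.getWriter();
        for (let start = 0; start < totalSize; start += CHUNK_SIZE) {
            const end = Math.min(start + CHUNK_SIZE, totalSize) - 1;
            // Each sub-request hits a route that fetches this byte range from
            // the origin (which must honour Range / HTTP 206) and decrypts it.
            const res = await fetch("https://worker.example.com/decrypt-range", {
                headers: { Range: `bytes=${start}-${end}` },
            });
            await writer.write(new Uint8Array(await res.arrayBuffer()));
        }
        await writer.close();
    }

In practice you would keep two of these sub-requests in flight at a time, as mentioned above, to hide the latency of each ranged fetch.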

The figures quoted by @albert are CPU time, and I want to clarify this because, unlike Cloudflare, other serverless platforms (namely GCP and AWS) quote their milliseconds in wall time. The Cloudflare Free and Bundled plans quote CPU time, which is very different. For the Workers Bundled plan, 50 ms of CPU time is actually very generous and should be plenty of time to decrypt at least a few hundred megabytes of your file, obviously depending on your algorithm.

With this approach, you can achieve an equivalent wall time far greater than what you could get even with the Workers Unbound plan, and you can keep a single long-running request open for over 12 hours of real time, despite what Cloudflare officially says in Limits · Cloudflare Workers docs.

I have a real example I can share with you, built on the Workers Bundled plan, which regularly keeps single long-running requests open for over 12 hours even though the CPU time is limited to 50 ms.

Although you may be cycling through multiple sub-requests, the end user and their browser are none the wiser, as the Worker seamlessly stitches the sub-requests together into a single long-running request.

It’s also worth noting something about the CPU time “limit”: it is not actually a hard limit. Even if the CPU time goes over on an individual request, that request won’t be killed, nor will the Worker be terminated. These CPU time “limits” are more of an upper bound that a certain percentile of your requests is expected to stay under.


@user22058, did you get this solved already?

Hi @user2765. In the end, we decided not to decrypt on Cloudflare. Thanks for the detailed responses.