Worker code 1104 - Script not found

I manage a domain where we had workers that were working perfectly in the past, and today they started giving error code 1104 - Script cannot be found. I’ve searched the community archive for quick-fix suggestions with no luck. It is a simple worker that makes a fetch call to a service returning `{"status": "ok"}`.

We have not touched the workers in days, and there is no firewall rule blocking workers from reaching our origin. They were all working well three days ago, before the replication outage.
We tried to redeploy one of the workers, but no luck - error 1104 (Script not found).

On further investigation: any worker that makes a sub-request generates 1104, while workers without origin sub-requests seem to be working fine.

Have you seen this before?
Any ideas?


Hi there,

We’re looking into this. I’m going to DM you some questions.


I’m just seeing this … noob at Discuss. :)

The issue is resolved.
If you are getting 1104 due to wasm memory expanding to 2GB, and you used emscripten to build your wasm, then you can use compile-time options like the following to manage memory settings, e.g.


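Something along these lines (a sketch, not the poster’s exact command: `INITIAL_MEMORY`, `MAXIMUM_MEMORY`, and `ALLOW_MEMORY_GROWTH` are real emscripten settings, but the file names and byte values here are assumptions you should tune for your own worker):

```shell
# Cap the declared "maximum" memory so it stays within the 128MB Workers
# limit instead of defaulting to 2GB. Values are byte counts and must be
# multiples of the 64KiB wasm page size.
emcc worker.c -o worker.js \
  -s ALLOW_MEMORY_GROWTH=1 \
  -s INITIAL_MEMORY=16777216 \
  -s MAXIMUM_MEMORY=134217728
```

(Older emscripten releases spelled these `TOTAL_MEMORY` and `WASM_MEM_MAX`; check your toolchain version.)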
We were able to run our worker with ZZZ=12582912 (12MB).
Life is good.



Just to follow up, the problem here was:

  • Wasm modules can define both an “initial” and a “maximum” memory size.
  • We previously enforced that the “initial” size was within the 128MB memory limit, but we never checked what the declared “maximum” size was. Instead, the worker would simply be shut down if and when it grew past the limit.
  • However, a change in V8 8.7 made it so that V8 started rejecting wasm files whose declared “maximum” memory was over the limit, even if they never actually used that much memory. We didn’t know about this change, and none of our test cases covered this scenario.
  • As a result, when we rolled out V8 8.7, a very small number of Wasm workers started failing at startup. :frowning: Luckily, the vast majority of wasm workers in production have a “maximum” memory value that is within the limit, but for the handful that didn’t, this caused a breakage.
  • We have since rolled out a patch which causes V8 to clamp the “maximum” memory value in a wasm module to 128MB, instead of rejecting the module. Thus, Wasm workers that were broken should now be fixed. So, there’s no need to take any action if you haven’t already.
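To make the numbers concrete: wasm memory is declared in 64KiB pages, so the 128MB limit corresponds to 2048 pages. Here is a runnable sketch of the clamping behavior described above (illustrative only, not Cloudflare’s actual patch):

```javascript
// Wasm memory is sized in 64 KiB pages; 128 MB is therefore 2048 pages.
const WASM_PAGE_SIZE = 64 * 1024;
const MEMORY_LIMIT_PAGES = (128 * 1024 * 1024) / WASM_PAGE_SIZE; // 2048

// The same initial/maximum pair appears in the JS API: this declares one
// committed page, growable up to the 128 MB limit.
const mem = new WebAssembly.Memory({ initial: 1, maximum: MEMORY_LIMIT_PAGES });
console.log(mem.buffer.byteLength); // 65536: one page committed initially

// The patch's behavior, sketched: clamp a module's declared "maximum"
// to the limit instead of rejecting the module outright.
function clampMaximum(declaredMaxPages) {
  return Math.min(declaredMaxPages, MEMORY_LIMIT_PAGES);
}

// A module declaring a 2 GB maximum (32768 pages) is clamped to 2048
// pages; one declaring a small maximum is left alone.
console.log(clampMaximum(32768)); // 2048
console.log(clampMaximum(100)); // 100
```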

I apologize for the breakage. We try very hard to avoid breakages like this, but sometimes a bug gets through, especially if it affects an obscure scenario present in a tiny number of workers. We’re always improving our testing to make this sort of thing less likely going forward.
