Bug: Service Binding error when sub-worker is too large

When I call a worker through a service binding from another worker, these errors occur:

✘ [ERROR] Uncaught (in promise) Error: internal error

✘ [ERROR] Uncaught (in response) Error: internal error

After some experiments, the errors seem to happen when the service-bound worker is too large.
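
For context, the call from the parent worker looks roughly like the sketch below. The binding name TRANSLATE and the wrangler.toml line in the comment are illustrative assumptions, not my exact configuration:

// wrangler.toml of the calling worker (assumed):
//   services = [{ binding = "TRANSLATE", service = "zh-translate" }]

export interface Env {
  TRANSLATE: Fetcher;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Forward the incoming request to the bound sub-worker.
    // This is the call that fails with "Error: internal error".
    return env.TRANSLATE.fetch(request);
  },
};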


Sub-worker that does not work:

$ npm run deploy

> [email protected] deploy
> wrangler publish src/index.ts

 ⛅️ wrangler 2.20.0 (update available 3.0.1)
------------------------------------------------------
Total Upload: 2073.13 KiB / gzip: 526.13 KiB
Uploaded zh-translate (2.88 sec)
Published zh-translate (0.32 sec)
  https://zh-translate.zczc.workers.dev
Current Deployment ID: 83b2188c-ede5-4eb6-af20-ec0932d70877

Sub-worker that does work:

$ npm run deploy

> [email protected] deploy
> wrangler publish src/index.ts

 ⛅️ wrangler 2.20.0 (update available 3.0.1)
------------------------------------------------------
Total Upload: 15.74 KiB / gzip: 3.88 KiB
Uploaded zh-translate (0.89 sec)
Published zh-translate (0.61 sec)
  https://zh-translate.zczc.workers.dev
Current Deployment ID: 74cec5da-51e2-4230-80ef-9b6d9674754b

Hey,

Worker size shouldn’t have anything to do with it here. What’s causing the broken one to be so much bigger? I’m wondering if a module or something else is causing issues.

I imported an npm package that converts between simplified and traditional Chinese, which accounts for the larger file size.
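
For illustration, the import looks something like this; opencc-js is only a hypothetical stand-in for the kind of dictionary-based converter whose bundled conversion tables inflate the upload size:

// Hypothetical example: a dictionary-based converter such as opencc-js
// ships large conversion tables, which can grow the bundle from a few
// KiB to the ~2 MiB upload shown above.
import * as OpenCC from 'opencc-js';

const toTraditional = OpenCC.Converter({ from: 'cn', to: 'tw' });

export default {
  async fetch(request: Request): Promise<Response> {
    const text = await request.text();
    return new Response(toTraditional(text));
  },
};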

After several attempts, including recreating and renaming the worker, I managed to fetch the service successfully. However, there was no meaningful difference between the failed and successful setups, which suggests an intermittent internal bug.

While troubleshooting, I noticed some unusual behavior:

  • Only the service binding experienced issues; requests to the “.workers.dev” URL worked fine.
  • I bound three workers; the “translate” worker consistently failed while the other two succeeded. Neither the order of the workers in wrangler.toml nor the service binding name affected the outcome.
  • I deployed a minimal version of the “translate” worker that only returned “Hello World” for all paths (see the sketch after this list), but the issue still persisted.
  • I even created a new worker with a different name, but the problem persisted.
  • Ultimately, I retried my initial setup and it succeeded.
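
The minimal setup mentioned above was essentially just this (a representative sketch, not the exact file):

export default {
  async fetch(): Promise<Response> {
    // Return the same response for every path.
    return new Response('Hello World');
  },
};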

Since “.workers.dev” worked fine, I don’t think a module is causing the issue.