Thanks for the help @john.spurlock that removed the error.
The problem is that if I use event.waitUntil, the worker waits until I close the writer and then sends the response to the browser with all the chunks at once. What I’m trying to accomplish is sending bits of text to the browser in batches instead of all at once.
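For context, the pattern I’m working with is roughly this (a simplified sketch assuming the classic service-worker syntax and @cloudflare/workers-types; the writeChunks helper and the delay are just for illustration):

```ts
addEventListener('fetch', (event: FetchEvent) => {
  // Respond immediately with the readable side of a TransformStream,
  // then keep the worker alive while chunks are written to the writable side.
  const { readable, writable } = new TransformStream<Uint8Array, Uint8Array>();
  event.respondWith(new Response(readable, { headers: { 'Content-Type': 'text/plain' } }));
  event.waitUntil(writeChunks(writable));
});

async function writeChunks(writable: WritableStream<Uint8Array>): Promise<void> {
  const writer = writable.getWriter();
  const encoder = new TextEncoder();
  for (let i = 0; i < 5; i++) {
    // Each write is a small bit of text; the goal is for the browser to
    // receive these as they are written, not all at once on close().
    await writer.write(encoder.encode(`chunk ${i}\n`));
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  await writer.close();
}
```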
Edit:
It worked!
The issue was that I was sending small bits of text and the stream was waiting for larger chunks.
Edit:
So I created this little repo as an example:
It seems the timing of when chunks reach the browser is very inconsistent. Is there a way to force the stream to flush to the browser at a precise moment, or to define the size at which the stream should be flushed?
See this video for the inconsistent behavior I’m talking about:
As far as I know, there is no explicit flush control on the writer, but you may want to poke around in the streams spec [1].
Note that each writer.write call can also be awaited, but I would not recommend relying on that for message framing, etc. See the note on MDN [2]:
Note that what “success” means is up to the underlying sink; it might indicate simply that the chunk has been accepted, and not necessarily that it is safely saved to its ultimate destination.
There are several buffers (including TCP buffers) that we don’t have explicit control over in a high-level API like this. You’ll need to do your own framing (e.g. with a length prefix) if you want to use this for an HTTP pull system.
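For example, a minimal length-prefix framing sketch (the frame and sendFramed helpers are just illustrative names):

```ts
// Prefix each message with a 4-byte big-endian length so the receiver can
// recover message boundaries regardless of how intermediate buffers split
// or coalesce the bytes on the wire.
function frame(message: Uint8Array): Uint8Array {
  const framed = new Uint8Array(4 + message.length);
  new DataView(framed.buffer).setUint32(0, message.length, false); // length prefix
  framed.set(message, 4);
  return framed;
}

async function sendFramed(writer: WritableStreamDefaultWriter<Uint8Array>, text: string): Promise<void> {
  await writer.write(frame(new TextEncoder().encode(text)));
}
```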
I’ve been reading the spec, and I believe I could control this using ByteLengthQueuingStrategy. Unfortunately, it seems this class is not available in Workers:
ReferenceError: ByteLengthQueuingStrategy is not defined
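For reference, this is roughly what I was trying (the highWaterMark value is just an example); the error is thrown at the ByteLengthQueuingStrategy constructor:

```ts
// Attempted: give the TransformStream byte-length-based queuing strategies,
// so queuing/backpressure is based on byte size rather than chunk count.
const strategy = new ByteLengthQueuingStrategy({ highWaterMark: 1024 });
const { readable, writable } = new TransformStream<Uint8Array, Uint8Array>(
  undefined, // no transformer: identity stream
  strategy,  // writable-side strategy
  strategy   // readable-side strategy
);
```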