Workers are described as “V8 isolates”, “concurrent”, and “spun up automatically”, and a worker “process” (sorry, that's my word) sometimes stays hot, so module-level state like `let reqcnt = 0; addEventListener("fetch", event => { reqcnt++; });` keeps counting across requests. Other times it is killed within a few milliseconds and relaunched (supposedly in 1-5 ms) on each client request. There is also talk that I/O-blocked workers consume no CPU/“resources”. So this is confusing to me.
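The “stays hot” part can be reproduced in plain Node (a stand-in, not the Workers runtime; `onFetch` is a hypothetical stand-in for the fetch event callback): module-level state persists across calls handled by the same instance.

```javascript
// Plain-Node stand-in (not the Workers API) for the "stays hot" case:
// module-level state survives across events handled by the same instance.
let reqcnt = 0;

// Stand-in for the fetch event callback.
function onFetch() {
  reqcnt++;
  return reqcnt;
}

console.log(onFetch()); // 1
console.log(onFetch()); // 2: same "isolate", so the counter persisted
```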
Will a single “process”/“isolate” ever handle 2 different client requests, doing a “stack swap” (though there is no real “stack” in promises) at `await fetch`? That is, if client request 1 is blocked on I/O at `await fetch`, will the CFW runtime automatically start executing client request 2 and let it block on its own `await fetch`?
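To make the concern concrete, here is a plain-Node sketch (hypothetical, not the Workers API; `sleep` stands in for `await fetch`) of two concurrent async handlers sharing one module-level variable and clobbering each other at the await point:

```javascript
// Two concurrent async "requests" sharing one module-level variable,
// with an await standing in for blocking I/O (like await fetch).
let curDomain; // shared across all in-flight "requests" in this instance

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function handle(id, domain, ioDelay) {
  curDomain = domain;   // write shared state
  await sleep(ioDelay); // suspend, letting other "requests" run
  return `${id} saw ${curDomain}`; // read shared state after resuming
}

async function main() {
  // Request 1 starts first but its I/O takes longer, so request 2's
  // write to curDomain lands while request 1 is suspended.
  const [r1, r2] = await Promise.all([
    handle("req1", "example.com", 50),
    handle("req2", "other.net", 10),
  ]);
  console.log(r1); // "req1 saw other.net" (clobbered by req2)
  console.log(r2); // "req2 saw other.net"
}

main();
```

This is ordinary single-threaded JS interleaving at `await` points, not threads; whether the Workers runtime actually overlaps requests in one isolate this way is exactly the question.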
For example, suppose this CFW runs on 2 different domains. I assume each `.on(...)` handler is a promise/event firing after blocking I/O. Would `curDomain`, as read in client request 1's `.on("a", ...)` handler, ever actually see the value written by client request 2's `.on("base", ...)` handler (the global-variable setter)? That is: the same process/isolate was blocked on I/O in client request 1 waiting for origin chunk 2, accepted client request 2, got the first chunk of origin response 2, set the global, switched back to the second I/O chunk of origin response 1, and then read the global and wrote origin 2's domain into the rewritten/post-processed origin response 1.

Basically:

- Can I store a customer ID in an IIFE closure or a JS global in a CFW? Is HTMLRewriter “single-threaded”, or is it a promises-reentrancy stack-swapping soup where 1 “process”/“isolate” gets round-robin 1.5 KB origin stream chunks from different open client request sockets and different open origin sockets, all sharing the same JS globals?
- Is the callback in `addEventListener("fetch", event => { whatever(); })` atomic/non-reentrant even with promises/await?
- Or, the moment the callback hits blocking I/O, does it run again for client request 2 until request 2 blocks, then again for client request 3, then block, then the pending promises of request 1, 2, or 3 resolve in some random order? Is the resume/resolve order totally unpredictable?
```js
let curDomain; // module-level: shared across requests if the isolate stays hot

let rewriter = new HTMLRewriter()
  .on("base", {
    element: function (element) {
      curDomain = new URL(element.getAttribute("href")).hostname;
    },
  })
  .on("a", {
    element: function (element) {
      element.setAttribute(
        "href",
        element.getAttribute("href").replace(curDomain, "cdn." + curDomain)
      );
    },
  });

addEventListener("fetch", (event) => {
  event.respondWith(handleEvent(event.request));
});

async function handleEvent(request) {
  let url = new URL(request.url);
  url.hostname = "myrealorigin.com";
  request = new Request(url.href, request);
  let response = await fetch(request);
  return rewriter.transform(response); // reuses the module-level rewriter
}
```
The easy way out is to construct a new HTMLRewriter object inside each `addEventListener("fetch", ...)` event, which creates a fresh scope for the callback functions. But could my CFW get a faster TTFB/lower latency by reusing the HTMLRewriter object? And is reusing an HTMLRewriter safe or not, given that each `.on(...)` handler is an event callback? Would 2 different client requests ever be interleaved, 1.5 KB TCP packet by 1.5 KB TCP packet, on 1 process inside 1 HTMLRewriter object?
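The “easy way out” pattern can be sketched in plain Node (HTMLRewriter and fetch are stubbed out; `handleRequest`, `onBase`, and `onA` are hypothetical stand-ins; only the per-request scoping is the point):

```javascript
// Per-request alternative: the state (and, in a real Worker, the
// HTMLRewriter) is created inside the handler, so every request gets a
// fresh closure instead of sharing a module-level global.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function handleRequest(id, domain, ioDelay) {
  let curDomain; // fresh binding per invocation, not a module-level global

  // Stand-ins for the .on("base") and .on("a") element callbacks:
  const onBase = () => { curDomain = domain; };
  const onA = () => `${id} rewrote links for cdn.${curDomain}`;

  onBase();
  await sleep(ioDelay); // simulated origin fetch between the two callbacks
  return onA();
}

(async () => {
  const [r1, r2] = await Promise.all([
    handleRequest("req1", "example.com", 50),
    handleRequest("req2", "other.net", 10),
  ]);
  console.log(r1); // "req1 rewrote links for cdn.example.com"
  console.log(r2); // "req2 rewrote links for cdn.other.net"
})();
```

With this scoping, even if the runtime does interleave two requests in one isolate, neither can see the other's `curDomain`; the open question is only whether reusing one shared rewriter buys any measurable latency over this.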