Slow Worker response with Fetch

I’m not too familiar with await/async JavaScript, and was wondering if I could do anything to improve my response time. Basically I have a Worker that runs when end users access a 100-300kb file from my API. When those files are accessed, I send the API key to the origin server, which checks that it’s authorized, increases account usage, etc., and then returns a simple status response. After that I try to retrieve the file from the CF cache, and if that fails I get it from the origin server. For cached content this takes anywhere from 175-600ms, with an average of 350ms. That’s a bit slow and erratic for a use case like panning images. If I remove the fetch for API authorization it drops to 80-150ms. Checking API auth directly from the origin shows a stable response of 60-70ms. I’d expect responses around 175-200ms, not 350ms and higher. Any help would be greatly appreciated!

addEventListener('fetch', event => {
  event.respondWith(handle(event.request))
})

async function handle(request) {
  let originalURL = new URL(request.url)
  let authUrl = new URL(request.url)
  authUrl.pathname = "/api-authorization-page"
  let authResponse = await fetch(authUrl, {
    ...request,
    method: "GET",
    body: null
  })
  
  if (authResponse.status === 200) {
    const modifiedRequest = new Request(originalURL.href, {
      body: request.body,
      headers: request.headers,
      method: request.method,
      redirect: request.redirect
    });
    const cache = caches.default;
    let response = await cache.match(modifiedRequest);
    if (!response) {
      modifiedRequest.headers.set("cache-control", "public, max-age=14400, s-maxage=604800, immutable");
      response = await fetch(modifiedRequest, {cf: { cacheEverything: true, cacheTtl: 2419200, }  });
      response = new Response(response.body, response);
      response.headers.set("cache-control", "public, max-age=14400, s-maxage=604800, immutable");
    }
    return response
  } 
}

@justin4 if the files from the origin server don’t require Authorization headers, you could store them in cache using only fetch + cacheEverything. In that case, you don’t need the Cache API (cache.put, cache.match).

If you really need to use the Cache API, you’ll need to change part of the code:

addEventListener('fetch', event => {
  event.respondWith(handle(event))
})


const handle = async (event) => {
	const {request} = event
	const authUrl = new URL('/api-authorization-page', request.url)
	const authResponse = await fetch(authUrl, {
		...request,
		method: 'GET',
		body: null
	})	
	
	if(authResponse.status === 200)
	{
		const cacheUrl = new URL(request.url);
		
		const cacheKey = new Request(cacheUrl.toString(), {
			...request,
			method: 'GET'
		});
		
		const cache = caches.default;
		let response = await cache.match(cacheKey);
		
		if(!response)
		{
			response = await fetch(cacheKey);
			response = new Response(response.body, response)
			response.headers.set('Cache-Control', 's-maxage=604800')
			
			//here you tell the Worker to compute cache.put without interrupting the response
			event.waitUntil(cache.put(cacheKey, response.clone()))	
		}
		
		return response
	}
	
	return new Response('Forbidden', {
		status: 403,
		statusText: 'Forbidden'
	})
}

My example is a bit simplified, but in my Worker I use the Cache API for deleting files from the cache if the authResponse indicates the file has changed recently. I tried avoiding the Cache API, but other methods for deleting or forcing revalidation on content cached by Cloudflare wouldn’t work. I switched to using cache.match after that since it seemed to behave the same. Unfortunately your changes didn’t make a noticeable difference, but I appreciate the help cleaning up some of the code.

One big issue I just noticed is that all of my cache is served from LAX, even though both the origin server and I are in Texas. Looking at other sites on Cloudflare, they serve cached pages from DFW and have a much better response time. I’m pretty sure my site used to serve from DFW, and I don’t understand why it changed.

And now it has changed to ATL. Checking from the origin server it uses MIA, so it seems it doesn’t want to use the nearest colo at DFW, which isn’t helping latency.

Also, I’ve noticed that when accessing the same origin server file cross-site from another host, a different cache seems to be used. I’ve tried stripping all identifying pieces from cacheKey so that it is basically just an empty GET request with no body or headers. I’ve also stripped most headers out of the cache.put response as well. Still, a cached file has to recache when accessed through another site.

If the Response is always the same you could use a URL String as the cacheKey for both (cache.put and cache.match).
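
For example, roughly like this (a sketch of just the caching part, skipping the auth check, and assuming the same handle(event) shape as above):

addEventListener('fetch', event => {
  event.respondWith(handle(event))
})

async function handle(event) {
  const {request} = event
  const cache = caches.default
  // One plain URL string used for both lookup and store, so nothing from the
  // incoming Request (headers, method, body) can split the cache key.
  const cacheKey = new URL(request.url).toString()

  let response = await cache.match(cacheKey)
  if (!response) {
    response = await fetch(request)
    response = new Response(response.body, response)
    response.headers.set('Cache-Control', 's-maxage=604800')
    event.waitUntil(cache.put(cacheKey, response.clone()))
  }
  return response
}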

Ok, I think I understand the problem a bit more now. My server is able to process the key check in about 50-60ms over the internet, except on the first request, which has to go through DNS resolution, connecting, and TLS setup. That can jump the total time up to over 300ms. It seems like the Cloudflare Worker has to go through this every time it runs, unless I send the exact same request within a few seconds. If I hit it quickly with the same request it shows the 50-60ms timing for the fetch to the origin. So is there a way to have the worker maintain the connection, or not have to reconnect on every worker request?

As for the issue with different cross site caching, I think the cache was a bit behind, which is why it looked like my simplified cacheKey wasn’t working. Now it’s serving the same cache file across all hosts.

As a test today I output the time taken to fetch into the headers, and was able to see real world results of 270-400ms just for the auth fetch. This is even worse than when editing the worker and using console.log, which usually shows the request at around 120-200ms. I’m really struggling to find a solution to this.

Argo is not an option, as it would cost too much unless I was able to restrict it to only auth api traffic.
I’ve looked into gRPC and WebSockets, but I haven’t been able to work out whether they would actually help with my problem of having to restart a request for every Worker call. Other options like Durable Objects or switching to KV would effectively double my Workers costs, which already take a huge fraction of my own API price. Am I missing something else that could be used? I really just need a way to authorize a cached file by API key, which so far I’ve only gotten working through Workers. After that it’s just sending my origin server some basic info and getting a response back. It doesn’t need to be done over HTTP. If anyone has any solutions out there I would be hugely grateful!

Found out something really interesting today in my journal of Worker fetch weirdness. I built a super basic golang hello world server to fetch from. No TLS negotiation, no processing, just a simple hello response.

Using the worker editor, I timed the fetch and spit out to console.

http:// ip : port - 60ms on first fetch, and 30ms on next.
http:// non-cloudflare-dns-name : port - 120ms on first fetch, and 30ms on next.
http:// cloudflare-acct-dns-name : port - 265ms on first fetch, and 175ms on next.

So purely based on DNS, the fetch is 6x slower. 60/30ms, though, would be plenty acceptable.

The big problem is that if I save and deploy using the IP for fetch, I get a 403 “Direct IP access not allowed”. If I deploy using a non-Cloudflare DNS name, it seems to strip the port and then of course fails. Neither error occurs in the editor, only when deployed, and the direct-IP issue at least is evidently a known problem. So best case currently, I have to set up a new non-CF DNS name on an unused IP, and I will still suffer an extra 60ms DNS lookup on about half of the requests. Surely I’m missing something?

Hrm, why don’t you cache every API key in memory? Unintuitively, the in-memory cache lives a long time (as long as the runtime, potentially several days according to a CF dev), which means it gets hit most of the time. Just use a simple JS object at the top level of your script and write into it on the first API key approval.
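
A minimal sketch of that idea (untested; the x-api-key header name is just a placeholder for however you pass the key, and the auth path is the one from your code):

var approvedKeys = {} // lives as long as this worker instance, shared across requests

addEventListener('fetch', event => {
  event.respondWith(handle(event))
})

async function handle(event) {
  const {request} = event
  const apiKey = request.headers.get('x-api-key') || '' // placeholder header name

  if (!approvedKeys[apiKey]) {
    // First time we see this key in this instance: check it at the origin.
    const authUrl = new URL(request.url)
    authUrl.pathname = '/api-authorization-page'
    const authResponse = await fetch(authUrl.toString(), { headers: request.headers })
    if (authResponse.status !== 200) {
      return new Response('Forbidden', { status: 403 })
    }
    approvedKeys[apiKey] = Date.now() // remember the approval in RAM
  }

  // ...serve the cached/origin file as before...
  return fetch(request)
}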

Also, I hope you have the Bundled plan; Free has some restrictions. I think going to Pro instead of Free also helps, as the first request won’t be re-routed to a less congested colo in typical traffic peak times.

  let originalURL = new URL(request.url)
  let authUrl = new URL(request.url)
  authUrl.pathname = "/api-authorization-page"
  let authResponse = await fetch(authUrl, {

Stick the originalURL construction into the dead CPU time between dispatching the fetch and awaiting it:

  let authResponse = fetch(authUrl, { blah: blah});
  let originalURL = new URL(request.url);
  authResponse = await authResponse;

Second, I don’t think a non-CF dev has ever timed it, but “await cache.match(req)” is synchronous blocking I/O. CF wrote about it somewhere on their blog, but the cache is not always on the same rack server’s SSDs/RAM (see “A Tour Inside CloudFlare’s Latest Generation Servers”), so a “cdn cache” hit is a trip inside a POP over ethernet/infiniband: many microseconds, up to a ms or 2 or 3.

If I were writing it, I would call fetch('/api-authorization-page') and cache.match(url) at the same time, WITHOUT await. If await fetch('/api-authorization-page') returns 200, then await on cache.match(url), which is latency-free by now since “cache.match(url)” is sitting in RAM in your worker process.

Only in an auth-failed/error/exceptional situation would you throw away the free (CPU/RAM/IO you don’t pay for) Cache API result. A bad-auth client has NO IDEA the Cache API pulled up his forbidden content and threw it away.
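
Roughly like this (a sketch of the ordering only, assuming the same handle(event) shape used above):

async function handle(event) {
  const {request} = event
  const authUrl = new URL(request.url)
  authUrl.pathname = '/api-authorization-page'

  // Dispatch both at once; neither is awaited yet, so they overlap in time.
  const authPromise = fetch(authUrl.toString(), { headers: request.headers })
  const cachePromise = caches.default.match(request.url)

  // Only the auth result gates the response.
  const authResponse = await authPromise
  if (authResponse.status !== 200) {
    return new Response('Forbidden', { status: 403 })
  }

  // The cache lookup has usually finished by now, so this await is nearly free.
  let response = await cachePromise
  if (!response) {
    response = await fetch(request)
    response = new Response(response.body, response)
    event.waitUntil(caches.default.put(request.url, response.clone()))
  }
  return response
}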

Although this is sketchy as **** and not FBI/CIA-resistant, have CF-to-origin be cleartext HTTP, NOT HTTPS, and whitelist incoming connections to Cloudflare’s published IP ranges. Also check the incoming “Cf-Worker” header on the origin side to make sure YOUR worker is connecting to your origin and not a 3rd-party (hacker) worker; see “Is the CF-Worker header from worker fetch() reliable? - #2 by KentonVarda”, “Workers fetch API adds unwanted headers”, and “Tons of secret headers on outgoing Fetch”.

As others said, you need to write a JS global API auth key cache, so that after the “first” (per worker process) auth check, additional client requests hit the bearer-token cache instead of the origin, unless you have extreme security design requirements (revoking a key/bearer token on password change must be “effective” within milliseconds, globally). You could do a blocking token check to the origin the first time and put it in a JS global var cache; then, for further client requests, if the token is found in the JS global var cache, return the cached asset or the origin asset (both blocking), so request >=2 costs either one round trip to origin for the asset or a lightning-fast (1-2ms) round trip to the CF POP cache. BUT also, with no-await or event.waitUntil, call fetch() to re-check the token each time, and wipe the JS global var token cache slot if it fails. Your ex-good, now-malicious client only succeeds in getting 1 unauthorized HTTP request served before his account is cut off.
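
Something along these lines (a sketch only; x-api-key is a placeholder for however the key actually arrives):

var tokenCache = {} // global bearer-token cache, per worker process

addEventListener('fetch', event => {
  event.respondWith(handle(event))
})

async function handle(event) {
  const {request} = event
  const authUrl = new URL(request.url)
  authUrl.pathname = '/api-authorization-page'
  const apiKey = request.headers.get('x-api-key') || '' // placeholder header name

  if (tokenCache[apiKey]) {
    // Known key: answer right away, re-check the token in the background,
    // and wipe the slot if the origin says it's no longer valid.
    event.waitUntil(
      fetch(authUrl.toString(), { headers: request.headers }).then(r => {
        if (r.status !== 200) delete tokenCache[apiKey]
      })
    )
    return fetch(request) // serve from cache/origin as usual
  }

  // Unknown key: blocking check against the origin, then remember it.
  const authResponse = await fetch(authUrl.toString(), { headers: request.headers })
  if (authResponse.status !== 200) {
    return new Response('Forbidden', { status: 403 })
  }
  tokenCache[apiKey] = true
  return fetch(request)
}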

You also brought up DNS query latency: is your origin server’s DNS name orange-clouded, or is fetch() calling out to a “3rd party” non-CF origin from CF’s point of view? What is the TTL of the DNS record for your origin server? Are you using CF nameservers, AWS, a random registrar, or a dynamic DNS (cable modem) provider?

Also, in addition to using a JS global var as a cache for bearer tokens, you know the Cache API can be used as a poor man’s KV? Except it’s per-POP/per-DC. This way you save the fetch()-to-origin latency check even if your customer is constantly switching between wifi/cell. (That launches a new worker process every couple of requests; or a new worker process spawns for every unique TCP connection, and the 2nd browser-to-CF TCP connection hits another CF front-end server, goes down a different BGP link, or gets load-balanced to another server by CF’s Juniper router; or worker #1 is using CPU, not idle and not IO-blocked, so a 2nd worker spawns instantly on the same front-end server and both stay alive getting round-robin load-balanced so there is always 1 compiled but idle worker available.) Or CF infrastructure kills your worker process every 5-10 seconds because you’re on the “free plan”.
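
For example, something like this (a sketch; the auth-cache.example host is made up purely to form a cache key and is never actually fetched):

// Per-POP "KV": stash the auth verdict for a key in the Cache API under a
// synthetic URL, so later requests hitting the same POP can skip the origin check.
async function getCachedAuth(apiKey) {
  const keyUrl = 'https://auth-cache.example/' + encodeURIComponent(apiKey)
  return caches.default.match(keyUrl)
}

function putCachedAuth(apiKey, ttlSeconds) {
  const keyUrl = 'https://auth-cache.example/' + encodeURIComponent(apiKey)
  const marker = new Response('ok', {
    headers: { 'Cache-Control': 'max-age=' + ttlSeconds }
  })
  // Caller should wrap this in event.waitUntil() so it doesn't delay the response.
  return caches.default.put(keyUrl, marker)
}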

Also, I wonder how efficient CF’s OUTGOING TLS implementation is, and how about your TLS config? Without a TLS session resumption ID or ticket, the origin server must send the FULL 5-15 KB TLS cert to the CF edge server on every incoming CF-to-origin TLS TCP connection. Again, do you use cleartext HTTP with an IP whitelist between the CF edge and the origin? CF has the Argo product to stop the TLS overhead, AFAIK: a Linux/Windows VPN-style app on the origin server makes a 24/7 outgoing connection to the CF edge, no public IP required, and then all CF-edge-to-origin requests go down that connection, cleartext inside the encrypted VPN, and your HTTP server process only sees loopback 127.0.0.1 as the client IP.

@HannesK, that’s good to know about the memory cache, and it gives me ideas for other things. I use the auth origin check to do a lot more though, like analytics and billing tracking, user-adjustable per-IP rate limiting, product limitations, key changing/revoking, and API abuse monitoring. I am indeed using the Bundled plan on a Pro account. So, ideally, I really need to be able to hit the origin server on every request.

@bulk88, I had already added the cache.match in a non-awaited fetch along with the auth request, thanks to a previous post giving me the idea. It looks like cache.match can actually take up to 10ms, so even more of an improvement in my tests! I’m currently testing on HTTP only, since there isn’t a HUGE need for security and I’m trying to eliminate variables atm. TLS would be preferred, but I’d rather have speed if I have to choose, and the security risk would be minimal. Thanks for the CF-Worker header idea, as I’d wondered about that and needed something beyond whitelisting!

Those are some fascinating workarounds, although I don’t think anything beyond the pricey KV storage would be reliable enough for usage based billing purposes. All information I can use though!

In the 3 tests I posted above, the non-cloudflare-dns-name test went to a 3rd-party Namecheap-hosted DNS name, and the cloudflare-acct-dns-name went to an orange-clouded CF-hosted DNS name. I hadn’t thought about a grey-cloud name, but I just ran the test on one with a result of 95-120ms on the first fetch and 30ms on the next in the editor. That helps, and is on par with the Namecheap DNS, but is still much slower than the direct-IP result of 60/30ms. When I deploy with the grey cloud it works, but is reliably 90-100ms. TTL is “auto” on CF DNS, and 4 hours on the third party. So it shouldn’t be forgetting DNS for every worker call when deployed, and in the editor it seems to at least remember for 10 sec (I am on the Pro plan).

Argo does seem like it would help, but would require me to make a separate CF account just for the API auth, since I can’t see any way to restrict Argo to certain traffic and applying it to all bandwidth would cost too much. It is doable though, assuming I’m not breaking any rules, and might be a necessity to get security and speed.

The CFW Preview editor runs on Google Cloud Services. It’s missing the cf object/WAF/Cache API behavior. Don’t use it for performance testing.

“0ms cold start” isn’t true, at least on the CFW free plan. In my experience, after a “publish”/“deploy” the first HTTP request is 1-2 seconds TTFB (let’s pretend the POP downloads your CFW from the global account billing DB in San Francisco). Paid-plan workers might be sent instantly to all POPs in a few seconds and stored in each POP forever, as some CF employees say the architecture works, but I’m on free; there have to be feature limitations beyond the 100K-a-day rule. On the free plan, if I wait 5 or 10 minutes, it’s 100-120 ms TTFB for a Cache API hit over cleartext HTTP. Using a global counter/timestamp in a response header, where the timestamp is from the first HTTP req ever on the worker process, I know when my worker was restarted. HTTP/1.1 cleartext keep-alive requests are answered in 25-40 ms worst case for me. I can’t measure a Cache API vs worker-process global JS var difference; it’s under 10 ms if any, which is the jitter on full-bars LTE or a cable modem.

What I found FASCINATING is that you MUST have a JS global var cache AND a Cache API cache at all times in your worker. A Cache API put() takes 10-100 ms!!! to take effect on a match(), even inside the SAME worker process. For performance reasons, doing an await on put() and delaying the client Response object until the put() promise resolves is criminal, but after sending the client response, the same worker will accept and run again on my next load-testing HTTP req and NOT hit on the Cache API match(). The Cache API is NEVER on the rack server, at least on the free CFW plan. This means that with a CFW and the Cache API alone, under “extreme load testing”, the origin server will be hit with 5 uncached HTTP reqs for the same file until ~50 ms of wall time goes by. Yes, for 50 ms your origin server will get 5 TCP SYN packets, 5 TLS connections, and 5 HTTP/1.1 requests for the same identical file, until the per-POP cache (SAN?) publishes the cache entry back to the client-facing front-end server. When I added a global JS var cache in front of the Cache API, running

for (let i = 0; i < 50; i++) { fetch("http://mysite.myname.workers.dev/filename") }
in the Chrome dev console always gave a 30-50 ms response instead of a 600-700 ms (origin) response after the 1st request in the network tab.

Before the JS global var cache, in the network tab, the first 5-8 responses were 600 ms, or interleaved 40 ms and 600 ms, until the Cache API started returning my data. I don’t have any proper retiring/aging-out logic in my JS global cache, unlike CF’s Cache API which would expire the entry for you perfectly, but the data I need cached is valid for weeks or forever. CF wrote somewhere that it’s officially undefined behavior, but unofficially all workers are restarted every 2 weeks. So a 1-month Cache-Control header + 2 weeks in a CFW in V8’s RAM is a theoretical 6 weeks between origin fetches if the stars align.

The 0ms cold start gimmick is that the CFW is compiled and launched speculatively from the TLS handshake SNI field, before the HTTP path is known and before the CF WAF can apply the route logic. So during the 5 to 30 ms TCP round trips of a TLS handshake, the CFW is compiled and launched in between the layer 4-6 packets, even if it never gets the request because of the HTTP router. It won’t happen for cleartext HTTP like my sites.

That makes a lot more sense why the CFW Preview editor doesn’t line up with live numbers. In addition to using the editor, I am also testing by deploying and getting real numbers through my own timestamp in the resp header. That’s where I get the 90-100ms for deployed grey cloud site, and that is after running it many times to stabilize and get past initial warm up. So it is hard to say if having a direct ip would actually help outside of the editor, since I can’t actually test on a deployed CFW.

I’m having a little trouble wrapping my head around your comments on global variables for cache. Could you give me a code example of what you mean? Are you saying it’s faster to store caches.default in a global variable? Here is a simplified version of my current code:

addEventListener('fetch', event => {
  event.respondWith(handle(event))
})

async function handle(event) {
  const {request} = event
  const originalURL = new URL(request.url)
  const authUrl = new URL(request.url)
  authUrl.pathname = "/api-authorization-page"
  const cacheKey = originalURL.toString();

  let cache = caches.default;
  var start = new Date().getTime();
  
  const responses = await Promise.all(
    [fetch(authUrl, {...request, method: "GET", body: null }), 
     cache.match(cacheKey)]);

  let authResponse = responses[0];
  let response = responses[1];
  if (authResponse.status === 200) {
    // Client is authenticated.
    var end = new Date().getTime();
    var dur = end - start;
    if (!response) {
       response = await fetch(cacheKey, {cf: { cacheEverything: doCache, cacheTtl: cacheTime, }  });
       response = new Response(response.body, response);
       response.headers.set("api-timing", dur);
       event.waitUntil(cache.put(cacheKey, response.clone()))	
    }
    return response
  }
}
var carrierCache = {};
//let v8start;

addEventListener("fetch", event => {
  //  if(!v8start) {v8start = Date.now()};
  event.respondWith(new Promise(async function(resolveCB) {
    //    try {
    let num = new URL(event.request.url).pathname.substring(1);
    if (num == 'favicon.ico') {
      resolveCB(new Response(null, {
        status: 404
      }));
      return;
    }
    //match foo.com/2125551234 or foo.com/2125551 only
    if (!/^\d{7,10}$/.test(num)) {
      resolveCB(new Response(null, {
        status: 400
      }));
      //console.log tracing shows exec continues even though
      //client gets code 400
      return;
    }
    //reformat and cut off last 3 digits if any of num
    num = num.substr(0, 3) + '-' + num.substr(3, 3) + '-' + num.substr(6, 1);
    //console.log('search cache ' + 'http://carrier.example.com/' + num);
    if (carrierCache[num])
    {
      resolveCB(new Response(carrierCache[num], {
        headers: {
          "content-type": "text/javascript",
          "cache-control": "no-transform, max-age=2629800",
          "x-HSCAPI": 'true'
          //,'x-hsdbg': JSON.stringify(carrierCache)
        }
      }));
      return;
    }
    let response = await caches.default.match('http://carrier.example.com/' + num);

    if (response) {

      //response = new Response(response.body, response);
      //response.headers.set('x-i', runI++);
      //response.headers.set('x-CAPI', 'true');
      //response.headers.set('x-hsdbg', JSON.stringify(carrierCache));
      //response.headers.set('x-v8st', v8start);
      resolveCB(response);
      //promote Cache API entry to HS cache
      carrierCache[num] = await response.clone().text();
      return;
    }
    let responseOrigin = fetch('https://www.telcodata.us/search-area-code-exchange-detail?npa=' +
      num.substr(0, 3) + '&exchange=' + num.substr(4, 3));
    let metaCarrier;
    let saw1000s;

    let textBuf;
    let curExch;
    //reformat number to origin-like string

    let rewriter = new HTMLRewriter()
      .on('tr[class="results"]>td:nth-child(1)>a', {
        element: function() {
          textBuf = '';
        },
        text: function(text) {
          textBuf += text.text; // concatenate new text with existing text buffer
          if (text.lastInTextNode) {
            curExch = textBuf;
            //console.log("saw xch "+textBuf);
          }
        }
      })
      .on('tr[class="results"]>td:nth-child(3)>a', {
        element: function() {
          textBuf = '';
        },
        text: async function(text) {
          textBuf += text.text; // concatenate new text with existing text buffer
          if (text.lastInTextNode) {
            metaCarrier ??= textBuf;
            //console.log(textBuf + 'cur xchg ' + curExch + ' match xch ' + num);
            textBuf = 'carrierUpdate("'+textBuf+'")';
            let response = new Response(textBuf, {
              headers: {
                "content-type": "text/javascript",
                "cache-control": "no-transform, max-age=2629800"
                //,"x-i": runI++
                //,'x-hsdbg': JSON.stringify(carrierCache)
              }
            });
            //only add 1000s block entries to cache
            //not the useless whole exchange owner
            if (curExch.length == 9) {
              carrierCache[curExch] = textBuf;
            }
            if (curExch == num) {
              resolveCB(response.clone());
              saw1000s = !0;
            }
            //only add 1000s block entries to cache
            //not the useless whole exchange owner
            if (curExch.length == 9) {
              //console.log('put http://carrier.example.com/' + curExch);
              caches.default.put('http://carrier.example.com/' + curExch, response);
            }
          }
        }
      })
      //originally was origin file end, but cpu/parse time, abandon
      //the 1000s block search an element right after the <table>
      //element
      //   .onDocument({
      //       end: function() {
      //         if (!response) {
      //           response = new Response(metaCarrier, {
      .on('div[id="WSPadding"]', {
        element: function() {
          if (!saw1000s) {
            if (metaCarrier) {
              metaCarrier = 'carrierUpdate("'+metaCarrier+'")';
              let response = new Response(metaCarrier, {
                headers: {
                  "content-type": "text/javascript",
                  "cache-control": "no-transform, max-age=2629800"
                  //,"x-i": runI++,
                  ,'x-hsdbg': JSON.stringify(carrierCache)
                }
              });
              //console.log('meta resp');
              //fill all 1000s blocks with same resp in cache
              for (let i = 0; i <= 9; i++) {
                //let metaurl = 'http://carrier.example.com/' + num.substr(0, 8) + i;
                //console.log('put ' + metaurl);
                carrierCache[num.substr(0, 8) + i] = metaCarrier;
                caches.default.put('http://carrier.example.com/' + num.substr(0, 8) + i, response.clone());
              }
              resolveCB(response.clone());
            } else {
              resolveCB(new Response(null, {
                status: 404
              }));
            }
          }
        }
      });
    //Promise {[[PromiseState]]: "pending", [[PromiseResult]]: undefined}
    responseOrigin.then(function(resp) {
      //without event.waitUntil, after resolveCB() call in a matched 1000s
      //block, runtime will kill this worker, stopping parsing and storing
      //to cache rest of the 1000s block of the exchange
      event.waitUntil(rewriter.transform(resp).arrayBuffer());
      //just in case event.waitUntil returns a promise, toss it away
      return;
    });
    //toss promise just in case
    return;
    //    } catch (e) {
    //      resolveCB(new Response(e));
    //
  }));
})

“CAPI” is the CF Cache API, “HSCAPI” is a global JS var. This CFW looks up the phone company an input telephone number belongs to and returns it JSONP-style for maximum caching at 3 separate layers (browser, CFW RAM, CF POP cache). Random process restarts of the CFW don’t abandon the cache, since the POP cache sits behind the CFW global var cache carrierCache. Very high hit frequency goes through the RAM cache instead of the promise-based POP cache. My code is also partially resistant to the CF POP cache dumping my entry, since my lookup row still lives in V8 JS RAM. I can’t remember if I ever observed a POP cache entry being discarded while my CFW JS RAM kept going. But I skipped pushing the JS RAM entry back to the Cache API on a High Speed cache (JS RAM) hit, since that could cause an infinite loop and, worst case, the data would never be refreshed (bounced between Cache API and JS RAM forever). I’d like to hit the origin server once a week or once a month :wink:

Oh ok, so this is more about speeding up the caching part, as opposed to the authUrl fetch. Cache.match seems to only take 10-30ms at most, which means it should finish before my authUrl fetch and won’t slow anything down. Right? I never thought about cache.put slowing things down though. I thought this was fixed by using “event.waitUntil(cache.put(cacheKey, response.clone()))”, and that it was non blocking and would return the response object, followed by keeping the worker running until the put finished. I changed it to an await, and saw that it could take over 40ms. If it doesn’t block the response from being sent to the browser though, then I don’t understand how I can speed any of the caching part up on my worker since I really HAVE to do the authUrl fetch for my usage tracking.

I went ahead and setup Argo tunnel for testing on another domain. That was able to further get the fetch down in the 60-70ms range, with occasional 90-100ms. I think the tunnel is bouncing between DFW and ATL colos, which would explain the differences. I’d prefer lower, but I’m not sure I’ll get any better than this, and it provides better security in the end.

My main concern was that the match() and the fetch(authUrl) dispatch/execute at the same time, and that you only call put() after your return new Response(...), as I did in

    if (curExch == num) {
      resolveCB(response.clone());

but from your code I think you are blocking, or at least chewing a few hundred more microseconds, to call clone() and create the put() promise/packet and have the CFW runtime send the put() off through the kernel/unix domain port/syscall to the ethernet to the Cache API SAN. In my code the client got a Response object on the wire BEFORE I created and dispatched the put() promise/packet to the SAN (also on the wire, kind of). You might save 1-3 ms by calling clone() and put() after you return the Response object. Note the very non-boilerplate, strange syntax I use to call resolve() manually to hand the CFW runtime a Response object/promise/packet without falling off the bottom of my code through a return;
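
Applied to your simplified code, the ordering would look roughly like this (a sketch only, leaving out the auth check and using the same hand-resolved promise trick as my worker above):

addEventListener('fetch', event => {
  event.respondWith(new Promise(async resolve => {
    const cacheKey = new URL(event.request.url).toString()
    let response = await caches.default.match(cacheKey)
    if (!response) {
      response = await fetch(event.request)
      response = new Response(response.body, response)
      // Hand a copy to the client first...
      resolve(response.clone())
      // ...then build and dispatch the put(); waitUntil keeps the worker alive for it.
      event.waitUntil(caches.default.put(cacheKey, response))
    } else {
      resolve(response)
    }
  }))
})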

Ok, I think I understand a bit better now. Thanks for everything! I’ll see if I can figure out how to make the put and clone more asynchronous based off your example. I’m not used to async programming, so right now your example just looks like magic, lol. I think the biggest thing is going to be battling with getting the initial authUrl fetch down as fast as it can possibly be. I will try and post back if I find any significant settings beyond what I’ve already documented.