Cloudflare Workers failing without pushing any new code

I have deployed my VueJS app to Cloudflare Workers. It’s been working great so far, but we’ve hit a weird problem.

I use the standard Workers Sites setup; my worker script is posted further down in this thread.

The problem is that last night the whole website went down. We hadn’t pushed any new changes in days. The error was Error 1101, which indicates that the Worker threw a JavaScript exception, but as I said, we didn’t push any new code.

So this morning I ran “wrangler publish --env prod” WITHOUT changing anything, and the site worked again.
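For reference, the --env prod flag just selects a named environment from wrangler.toml. A minimal Workers Sites config with such an environment looks roughly like this; the name, route, and IDs below are placeholders, not our real values:

name = "my-vue-app"
type = "webpack"
account_id = "<account id>"
workers_dev = true

[site]
# Directory containing the built Vue app
bucket = "./dist"
# Directory containing the worker script posted below
entry-point = "workers-site"

[env.prod]
name = "my-vue-app-prod"
route = "example.com/*"
zone_id = "<zone id>"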

It looks like it needed a restart or something. We’ve had multiple reports of this before, but this is the first time I’ve been able to verify it myself.

I’ve searched this forum and saw that this sometimes happens when the Worker runs out of RAM. Is that true? What can I do to prevent it?

Thanks

AFAIK, Cloudflare Workers Sites shouldn’t be affected by this at all; you’re not really processing anything in the worker, just serving files. @harris, maybe you know more about this?

And yes, it looks like RAM accumulates while the workers stay alive for a long time. When you deploy the scripts again, you restart them, which clears the RAM.
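To make that concrete, the pattern to look for is state kept at module scope: a Worker isolate can serve thousands of requests over hours or days, and anything that grows per request and is never pruned will slowly eat into the 128 MB memory limit. A contrived sketch of such a leak (not taken from the code in this thread, just an illustration of the pattern):

// Module-scope state persists for the lifetime of the isolate,
// across every request it serves.
const seenUrls = []

addEventListener('fetch', event => {
  // BUG: this array grows on every request and is never pruned, so
  // memory climbs slowly until the isolate hits its limit and is evicted.
  seenUrls.push(event.request.url)

  event.respondWith(new Response(`Requests seen by this isolate: ${seenUrls.length}`))
})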

Yes, it looked like it needed a restart.

This is my configuration in case it helps:

import { getAssetFromKV, mapRequestToAsset } from '@cloudflare/kv-asset-handler'

/**
 * The DEBUG flag will do two things that help during development:
 * 1. we will skip caching on the edge, which makes it easier to
 *    debug.
 * 2. we will return an error message on exception in your Response rather
 *    than the default 404.html page.
 */
const DEBUG = false

addEventListener('fetch', event => {
  try {
    event.respondWith(handleEvent(event))
  } catch (e) {
    if (DEBUG) {
      return event.respondWith(
        new Response(e.message || e.toString(), {
          status: 500,
        }),
      )
    }
    event.respondWith(new Response('Internal Error', { status: 500 }))
  }
})

async function handleEvent(event) {
  const url = new URL(event.request.url)
  let options = {}

  /**
   * You can add custom logic to how we fetch your assets
   * by configuring the function `mapRequestToAsset`
   */
  options.mapRequestToAsset = req => {
    // First let's apply the default handler, which we imported from
    // '@cloudflare/kv-asset-handler' at the top of the file. We do
    // this because the default handler already has logic to detect
    // paths that should map to HTML files, for which it appends
    // `/index.html` to the path.
    req = mapRequestToAsset(req)

    // Now we can detect if the default handler decided to map to
    // index.html in some specific directory.
    if (req.url.endsWith('/index.html')) {
      // Indeed. Let's change it to instead map to the root `/index.html`.
      // This avoids the need to do a redundant lookup that we know will
      // fail.
      return new Request(`${new URL(req.url).origin}/index.html`, req)
    } else {
      // The default handler decided this is not an HTML page. It's probably
      // an image, CSS, or JS file. Leave it as-is.
      return req
    }
  }

  try {
    // Debug and production currently use the same cache settings, so no
    // DEBUG conditional is needed here. `bypassCache: true` skips the
    // edge cache entirely.
    options.cacheControl = {
      bypassCache: true,
      browserTTL: 120,
      edgeTTL: null,
    }
    return await getAssetFromKV(event, options)
  } catch (e) {
    // Fall back to serving `/index.html` on any error, so the SPA
    // router can handle unknown paths client-side.
    return getAssetFromKV(event, {
      mapRequestToAsset: req => new Request(`${new URL(req.url).origin}/index.html`, req),
    })
  }
}

The problem described does sound like it could be caused by a slow memory leak. Another symptom of such a slow memory leak would be gradually increasing response times, up into the multiple-seconds range, so I would ask if there were reports of the site simply getting slower.

That said, I don’t see any such memory leak in the code you posted, or in the kv-asset-handler code. That doesn’t prove there isn’t one, but I am skeptical. This might be worth a support ticket to see if they have any insight on what might have happened.
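If you want to collect some evidence before opening that ticket, one cheap diagnostic is to stamp each response with the age of the current isolate and a per-isolate request count, then check whether the slowdowns or errors correlate with old, high-count isolates. A rough sketch; the wrapper name and header names here are arbitrary, not part of any API:

// Module-scope values reset whenever the isolate restarts (e.g. on a
// redeploy) and persist while it stays alive.
const isolateStart = Date.now()
let requestCount = 0

// Call this from event.respondWith() instead of handleEvent().
async function handleEventWithDiagnostics(event) {
  requestCount++
  const asset = await handleEvent(event) // handleEvent from the script above

  // Responses from getAssetFromKV may have immutable headers, so make a
  // mutable copy before stamping the diagnostics onto it.
  const response = new Response(asset.body, asset)
  response.headers.set('x-isolate-age-ms', String(Date.now() - isolateStart))
  response.headers.set('x-isolate-requests', String(requestCount))
  return response
}

If the age header keeps climbing into days while response times degrade, that is the kind of evidence worth attaching to the support ticket.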