Starting from 1.12.2022 we have been experiencing an issue with our web application (hosted on Heroku, running PHP Laravel) proxied through Cloudflare.
When we send an HTTP request without the Cloudflare proxy, the response works fine, but with the Cloudflare proxy enabled, the response times out at 60 seconds and returns truncated output of approximately 1 MB. This happens in multiple parts of the application where the response contains a larger amount of data.
We have been proxying our app through Cloudflare for a few years already (still on the free plan) and have not changed any settings for months. Starting this month we see issues with receiving data, and we have tracked the issue to Cloudflare.
Is this a Cloudflare bug, was there any update on Cloudflare side, or should we do something on our side, change configuration, upgrade plan, anything?
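For anyone trying to reproduce this, curl's --write-out variables make the 60-second stall and the truncated body size easy to capture in one line (example.com and the path are placeholders for the affected, Cloudflare-proxied page):

```shell
# Time the transfer and record how many bytes actually arrived.
# example.com/large-page is a placeholder for the affected URL.
curl -s -o /dev/null \
     -w 'http_code=%{http_code} time_total=%{time_total}s size_download=%{size_download} bytes\n' \
     "https://example.com/large-page"
```

If the symptom described above occurs, time_total sits near 60 s and size_download stops around 1 MB even for larger documents.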
I cannot provide any insights, but I’ve seen the same thing (and posted about it here). For us, the immediate solution was reducing document size and lots of praying.
It seems to have gotten better over the past days, and now happens much less often (but we now also have much fewer responses > 1MB, so who knows).
I seem to have the same problem with all my CF sites. Since last Thursday the problem has persisted and is also visible in Google Search Console → extreme server response times.
We are seeing the exact same thing and have been for about two weeks. We have tried all sorts of fixes on our IIS .NET application. We came to the conclusion it was network related, and as soon as we stop proxying requests through Cloudflare everything works fine. We sometimes get a blank page and sometimes a partially loaded page, and a cURL request taken from the Chrome network tab shows a malformed request.
Unfortunately I cannot find anything consistent between users or platforms, and it seems totally random!
If you have a HAR file captured while reproducing the issue, let me know, I’d be happy to take a look. You can also right-click on the failing request in the Developer Tools > Network tab of Chrome-based browsers and click “Copy as cURL”, then re-run the request at the command line.
If you add a couple of command-line switches to it, you can force the request via Cloudflare or direct to your server to compare behaviour by modifying the --connect-to ::1.2.3.4 option. For example:
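A sketch of that comparison, assuming example.com stands in for the proxied hostname and 1.2.3.4 for the origin server's real IP (both placeholders):

```shell
# Same request, two network paths -- compare the responses.
# example.com and 1.2.3.4 are placeholders for your hostname and origin IP.

# 1) Via Cloudflare (normal DNS resolution of the proxied record):
curl -sv "https://example.com/big-page.html" -o /dev/null

# 2) Direct to the origin, bypassing Cloudflare entirely.
#    --connect-to HOST:PORT:TARGET-HOST:TARGET-PORT rewires only the TCP
#    connection; the Host header and TLS SNI still say example.com.
curl -sv --connect-to example.com:443:1.2.3.4:443 \
     "https://example.com/big-page.html" -o /dev/null
```

If the direct request completes and the proxied one stalls, that narrows the problem to the Cloudflare hop.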
Brill, I’ll get you that info. How do I send it to you? To reproduce this you have to be logged in, and I don’t fancy posting that in the public domain.
Development mode does not change it for me either.
However, what does change things in my case is content encoding.
Trying with curl, if I do not send any Accept-Encoding headers (and CF correctly does not encode the content), I’ll hit the issue pretty reliably. Sending Accept-Encoding: gzip gets me very few issues (can hardly reproduce), and I haven’t seen an issue with curl while sending Accept-Encoding: br. Accept-Encoding: deflate produces the same results as not sending any Accept-Encoding header: common stalling.
The client’s Accept-Encoding header has no impact on CF’s communication with the backend: CF will always send Accept-Encoding: gzip to the origin and re-encode the content on the fly if the client supports Brotli or doesn’t indicate any compatible encoding algorithm. So it does very much point to Cloudflare.
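A quick way to check the client-facing half of this, assuming example.com stands in for a proxied zone (the Accept-Encoding that Cloudflare sends to the origin can only be confirmed in the origin's own access logs):

```shell
# Print the Content-Encoding Cloudflare returns for different client hints.
# example.com is a placeholder for your Cloudflare-proxied hostname.
for enc in gzip br deflate identity; do
  printf '%s -> ' "$enc"
  curl -s -o /dev/null -D - -H "Accept-Encoding: $enc" "https://example.com/" \
    | tr -d '\r' | grep -i '^content-encoding:' || echo 'content-encoding: (none)'
done
```

Pairing this with the stall observations above shows whether the failures line up with a particular re-encoding path.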
I’ve set up an example for debugging: https://cf.istkaputt.de/index.html (762 kB, plain text) and https://cf.istkaputt.de/error-index.html (704 kB, text wrapped in HTML tags). What’s interesting is that /index.html works fine for me even though it’s larger than /error-index.html, but it contains less HTML (no tags wrapping the lines).
curl -v "https://cf.istkaputt.de/error-index.html" > /dev/null # stalls, but only in content, headers are fine
curl -v -H "Accept-Encoding: gzip" "https://cf.istkaputt.de/error-index.html" > /dev/null # works just fine
curl -v -H "Accept-Encoding: br" "https://cf.istkaputt.de/error-index.html" > /dev/null # works just fine
curl -v "https://cf.istkaputt.de/index.html" > /dev/null # works just fine
curl -v -H "Accept-Encoding: gzip" "https://cf.istkaputt.de/index.html" > /dev/null # works just fine
curl -v -H "Accept-Encoding: br" "https://cf.istkaputt.de/index.html" > /dev/null # works just fine
Hi folks, internally we found an issue with a new version of an HTML parser on some servers that was causing this problem - it has now been fixed.
If you disabled any features (e.g. enabled dev mode) to work around it, you can re-enable them now.
Ever since yesterday, when the random 503 errors were appearing, larger documents have started to hang sporadically. Is anyone else seeing this? Cloudflare Status doesn’t mention anything (but they didn’t mention the increased error count yesterday either, so that’s really not saying much).
Any document larger than ~1 MB seems to be affected; binaries (e.g. images) seem to work. It’s not perfectly reproducible, but it happens across multiple accounts with different origin data centers, and for people in different countries using different CF access points, so it’s unlikely to be something local or related to specific infrastructure.
Reducing document size seems to help, but unfortunately isn’t always an option (e.g. with Symfony forms, complex and detailed forms will produce multi-megabyte HTML).