Random 502 errors for the last 3 days (caused by an illegal request header injected by the CF reverse proxy)

We’ve been seeing an increasing number of 502 errors over the last 3 days, which never happened before.

Although it’s a 502, which people usually read as a timeout, we’re certain it’s not one: every response completes in under 100 ms.

After investigation, we found the error actually comes from our IIS server (but bear with us, it’s CF-related).

The detailed error is 502.3, with Win32 error code 87 (ERROR_INVALID_PARAMETER). Further investigation suggests the cause may be Cloudflare’s proxy writing an illegal header into the request, which would explain why only 1-10% of requests fail. Due to a technical issue on our side, we weren’t able to retain a detailed log showing which header it is.

After temporarily turning off the CF proxy, no errors occurred at all, which is another sign the issue is CF-related.

Further analysis of the logs suggests that all the requests resulting in a 502 error come from specific CF servers.
In the log, the exact same request looks like this:

2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 15
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 30
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 15
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 46
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 15
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 46
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 0
2020-08-31 13:43:44 GET /api/abc - 443 - 502 3 87 15
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 15

The requests that produce errors always come from one CF IP for a period, then switch to another CF IP.
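The log sample above can be filtered with a small script. This is just a sketch: the field positions (status, sub-status, and Win32 status as the 8th-10th space-separated fields) are assumed from the sample lines, and real W3C logs should be checked against their `#Fields:` directive.

```python
# Sketch: pull the 502.3 / Win32 error 87 failures out of an IIS W3C log.
# Assumed field layout (from the sample above):
# date time method uri-stem query port user sc-status sc-substatus sc-win32-status time-taken
SAMPLE = """\
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 15
2020-08-31 13:43:44 GET /api/abc - 443 - 502 3 87 15
2020-08-31 13:43:44 GET /api/abc - 443 - 200 0 0 15
"""

def failed_lines(log_text):
    """Return log lines whose status / sub-status / win32 code is 502 3 87."""
    failures = []
    for line in log_text.splitlines():
        fields = line.split()
        # sc-status is field index 7, sc-substatus 8, sc-win32-status 9
        if len(fields) >= 10 and fields[7:10] == ["502", "3", "87"]:
            failures.append(line)
    return failures

print(failed_lines(SAMPLE))
```

With the client IP column enabled in the log format, the same loop can group failures by IP to reproduce the "always the same CF server for a while" observation.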


That’s definitely worth mentioning to Support. I’d love to hear how this resolves.

To contact Cloudflare Customer Support, log in, go to https://dash.cloudflare.com/?account=support and select Get More Help. If you receive an automatic response that does not help you, please reply and indicate you need more help.

I’m not 100% sure yet, but I found that all the successful requests use the same request headers as documented in https://support.cloudflare.com/hc/en-us/articles/200170986-How-does-Cloudflare-handle-HTTP-Request-headers-.

But the failed requests all use those headers in lower case. Not sure if that’s relevant.

For example:

Success GET request:

CF-RAY: 5cb83a3d3dd506e1-LHR
X-Forwarded-Proto: https
CF-Visitor: {"scheme":"https"}
CF-Request-ID: 04e702ba3e000006e140ba0200000001
CDN-Loop: cloudflare

Failed GET request:

cdn-loop: cloudflare
cf-ipcountry: GB
cf-ray: 5cb74e780cb906e1-LHR
x-forwarded-proto: https
cf-visitor: {"scheme":"https"}
cf-request-id: 04e66f5f07000006e140a73200000001
Transfer-Encoding: chunked

And each failed request contains an extra Transfer-Encoding: chunked header. Not sure if that’s relevant.
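Since HTTP field names are case-insensitive (RFC 7230), the lower-casing by itself shouldn’t break anything; a case-insensitive diff of the two name sets isolates the real difference. A sketch using the names listed above (cf-ipcountry also shows up, but that may simply have been trimmed from the success sample):

```python
# Sketch: compare the two header-name sets case-insensitively.
# Field names are taken from the post; values are omitted because
# only the names matter here.
success = {"CF-RAY", "X-Forwarded-Proto", "CF-Visitor", "CF-Request-ID", "CDN-Loop"}
failed = {"cdn-loop", "cf-ipcountry", "cf-ray", "x-forwarded-proto",
          "cf-visitor", "cf-request-id", "Transfer-Encoding"}

def fold(names):
    """Fold header names to lower case; HTTP field names are case-insensitive."""
    return {name.lower() for name in names}

# Header names present on the failed request but not on the successful one.
extra_in_failed = fold(failed) - fold(success)
print(sorted(extra_in_failed))
```

So once case is normalised away, Transfer-Encoding is the suspicious extra field.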

Just to confirm: “some” of Cloudflare’s servers are adding Transfer-Encoding: chunked to the headers of proxied requests, and this illegal header is the reason for the 502 errors.

This header should only exist in a response; a GET request with no body should not carry it.

By using our own Nginx reverse proxy to strip the added header, the issue is temporarily fixed on our side. But it has to be resolved by Cloudflare.
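For reference, a minimal sketch of that workaround in Nginx: per the Nginx docs, `proxy_set_header` with an empty value means the field is not passed upstream (and since Transfer-Encoding is a hop-by-hop header, Nginx typically wouldn’t forward it anyway once it buffers the request, so the explicit line is belt-and-braces). The backend address is a placeholder for the real IIS upstream:

```nginx
# Sketch of the workaround: sit Nginx between Cloudflare and IIS and
# drop the Transfer-Encoding request header before proxying upstream.
# 127.0.0.1:8080 is a placeholder for the real IIS backend.
server {
    listen 80;

    location / {
        # An empty value tells Nginx not to pass this field upstream.
        proxy_set_header Transfer-Encoding "";
        proxy_pass http://127.0.0.1:8080;
    }
}
```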

Submitted a ticket this Monday, but no one from CF has bothered to respond :woozy_face:

We have also seen 502 errors in the same time frame. We are using an Azure App Service (IIS 10) which is intermittently responding with 502.7.

I have just had a chat with MS support & they informed me:

  • From our logs I can see the error code is 502 with sub_status 7
  • and win32_status 87
  • Win32 Error Description = The parameter is incorrect, which means the parameter sent is incorrect when the request is routed from Cloudflare

This is in response to a GET request for a static image file.

When running the same test bypassing Cloudflare, we get no failures.

The “parameter is incorrect” description and win32_status 87 indicate it’s the same issue I described above: IIS rejects the request when there’s a Transfer-Encoding header in it. And the issue still exists today.

Our temporary solution is to put an Nginx reverse proxy in front of IIS, removing that illegal header.

Tickets always get a response, even if automated. Can you post the ticket #?

After chasing them on Twitter, they finally responded after 4 days. But the person responding gave an irrelevant reply and closed the ticket without resolving it… I have 2 tickets raised: #1966911 and #1965969.

For #1965969, I got an immediate but automatic response, because the ticket has the keyword “502” in it. After I replied to the auto response, they left it hanging for 4 days, until today, when they replied after the Twitter contact.

For #1966911 they replied after 3 days (again, after being contacted on Twitter).

Both replies are meaningless, not much different from the automatic response (because a 502 is, most of the time, a common network issue).

We’ve been hitting this same issue over the same time period. Also using Azure App Service and seeing 502.7 intermittently. Very interested in the resolution to this.

For the moment, until CF fixes the issue, the solution is to use your own Nginx reverse proxy to remove the header…

And just a note: in our case, this error only happens with CF proxy + IIS + ASP.NET Core. It doesn’t happen with classic ASP.NET.


Sorry for my lack of knowledge here, but where would we host that Nginx reverse proxy? The problem we have is that the request dies on the Azure-hosted load balancer before it gets to our App Service, and we’re not able to put anything in front of that on the Azure side. Can we do it on CF after it’s done processing?

Thanks in advance.

I don’t think the request dies at the load balancer.

In our case, we have CF proxy → Azure load balancer → Nginx → IIS (with ASP.NET Core inside IIS).
The request fails within IIS BEFORE it hits the ASP.NET Core process.
I didn’t find an easy way to make IIS remove the header before it passes the request to the internal process, so I used Nginx.

In your case, if you put an Nginx instance after the Azure load balancer and before the App Service, it should work.

We have been seeing a lot of random 502 errors when using Cloudflare to access our Azure web app. The web app functions correctly when accessed directly via the Azure URL. I’ve logged a support ticket with a HAR file attached. Hopefully Cloudflare will address the issue, as it seems to be affecting multiple CF accounts.

Good to see more cases reported.

This is a really tricky problem: a 502 normally comes from the network or the hosting server (even in this case, it technically comes from the hosting server), so it’s difficult to get Cloudflare’s attention.

It cost me an entire day to find the cause…

I have started getting these over the last few days too, in Azure it shows:

Win32 Error Description = The parameter is incorrect

Hi @steven.xi & @alan21, we’ve just had a new release. Could you please try one more time and let us know if you have any issues?


Thank you @stefano1! I no longer appear to be able to replicate the problem. Also, from the logs, the error doesn’t seem to have occurred since 05/09 1am UTC. Looks promising; I will continue to monitor.

The issue seems to be fixed.

@stefano1, so this actually was a Cloudflare issue? A response header in the request? Any details?