Growing quantity of 502 pages from Cloudflare

In the last few days we’ve observed a growing number of 502 errors from Cloudflare being served to our customers and to some of our internal tools that use the public internet. We used to see a few 502s here and there, nothing too bothersome, but over the last few days they have been reported much more often by our monitoring tool (Sentry).

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>cloudflare</center>
</body>
</html>

These requests never reach our infrastructure and they only happen sporadically. I can reproduce it with a simple curl loop and a bit of time (a sketch of such a loop is included after the trace below):

*   Trying 172.67.70.146:443...
* Connected to redacted.domain.dev (172.67.70.146) port 443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Client hello (1):
} [329 bytes data]
* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Certificate (11):
{ [4163 bytes data]
* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, CERT verify (15):
{ [79 bytes data]
* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Finished (20):
{ [52 bytes data]
* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Finished (20):
} [52 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=redacted.domain.dev
*  start date: Aug  3 08:47:01 2023 GMT
*  expire date: Nov  1 08:47:00 2023 GMT
*  subjectAltName: host "redacted.domain.dev" matched cert's "redacted.domain.dev"
*  issuer: C=US; O=Let's Encrypt; CN=E1
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: GET]
* h2h3 [:path: /redacted/url]
* h2h3 [:scheme: https]
* h2h3 [:authority: redacted.domain.dev]
* h2h3 [user-agent: curl/7.87.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x7fa8c5813400)
> GET /redacted/url HTTP/2
> Host: redacted.domain.dev
> user-agent: curl/7.87.0
> accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS == 64)!
< HTTP/2 502 
< server: cloudflare
< date: Tue, 29 Aug 2023 18:48:50 GMT
< content-type: text/html
< content-length: 155
< cf-ray: 7fe6f8767c29a214-YYZ
< 
{ [155 bytes data]
* Connection #0 to host redacted.domain.dev left intact
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>cloudflare</center>
</body>
</html>
http_code: 502: dnslookup: 0.007566 | connect: 0.213097 | total: 0.381508 
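
For reference, a minimal sketch of a loop that produces a summary line like the one above, assuming the timing summary comes from a curl -w write-out format (the URL, interval, and exact format string are guesses here, not the original script):

# hypothetical reproduction loop: request the endpoint once per second and
# print a one-line timing summary for each attempt
while true; do
  curl -sv https://redacted.domain.dev/redacted/url \
    -w 'http_code: %{http_code}: dnslookup: %{time_namelookup} | connect: %{time_connect} | total: %{time_total}\n'
  sleep 1
done

Leaving something like this running for a while and grepping the output for "http_code: 502" is usually enough to catch the sporadic failures.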

Our DNS record is proxied through Cloudflare.

What can I do to debug this?
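
One check that can help narrow down where the 502 is generated (a sketch only: ORIGIN_IP is a placeholder for the server's real address, and this assumes the origin accepts connections that don't come through Cloudflare) is to take the proxy out of the path with curl's --resolve option. The 172.67.70.146 address in the trace above is a Cloudflare edge IP, not the origin:

# hypothetical direct-to-origin request: pin the hostname to the origin's
# real IP so the request bypasses the Cloudflare proxy entirely
# (add -k if the origin only presents a Cloudflare Origin CA certificate)
curl -sv -o /dev/null \
  --resolve redacted.domain.dev:443:ORIGIN_IP \
  -w 'http_code: %{http_code}\n' \
  https://redacted.domain.dev/redacted/url

If the origin answers reliably while the proxied path intermittently returns 502 with "server: cloudflare" and no origin headers, the problem is more likely on the proxy leg than on our infrastructure.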

Welcome to the Cloudflare community!

Thanks for including the response body. In this case, there is a known issue causing elevated 5xx/499 errors that matches that response body (the default nginx-style, plain black-and-white error page).
You can watch the status of this incident here:

I too have had increased 502 errors on my site.
I’m not a dev.
How can I resolve this, or communicate this to my dev?
Thanks.

There is an ongoing incident that may explain the elevated 502s at the moment: https://www.cloudflarestatus.com/incidents/2n51msydpw39

Thanks Vincent, but I’m still experiencing this issue.

The error message faults the host (WPengine).

But strangely, they don’t see the issue on their end.

Looks like the issue is back:

Just keep an eye on cloudflarestatus.com when stuff like this happens.
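
If you want something more proactive than refreshing the status page, cloudflarestatus.com appears to be hosted on Atlassian Statuspage (the /incidents/2n51msydpw39 URL format suggests as much), so the standard Statuspage JSON endpoints should be pollable from a script; this is an assumption about their hosting, not something confirmed in this thread:

# hypothetical status poll, assuming the standard Statuspage v2 API is exposed
curl -s https://www.cloudflarestatus.com/api/v2/incidents/unresolved.json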
