Bypassing HTTP 5XX errors for free (for a hobby project)

I’m setting up my own Git server (running Gitea) and it’s been going great so far. I’ve gotten the thing online and can finally access it from networks other than my own (LOL). There’s just one issue, however. (And I really do mean just one, since literally everything else about Cloudflare is perfect.)

The issue is that when I’m cloning a big repository, such as a mirror of GCC, it fails. Details:

  • I am not using classic port forwarding. I’m using Cloudflare Tunnel (formerly Argo Tunnel) to get my website online. This is for security purposes: I’ve been through a 6-day DDoS attack on my home network and it really ain’t pretty.
  • My server computer is a Raspberry Pi 4B with 8 GB of RAM. It has four ARM Cortex-A72 cores, each clocked at 1.8 GHz, and it stores all my data on a fast SanDisk micro-SD card. It’s running Ubuntu Server 22.04 LTS for arm64. (The storage medium is not the issue here; I verified that with iotop and top across several test runs while the issue was happening.)
  • When I attempt to clone said big repository, Gitea accepts the client’s connection and THEN kicks off a long Git operation that packs all the data to be delivered to the client into a single pack file. Git runs flat out on just one CPU core while the micro-SD card keeps up fine, so single-core performance is the bottleneck here. (This phase is largely single-threaded in practice; see the timing sketch after this list.)
  • Since said repository is so big, Git takes a REALLY long time to pack everything. (Cloning worked when I connected over the local network, since my router imposes no such timeout.) Cloudflare then assumes the server is overloaded (it isn’t; the web UI stays responsive) because Gitea accepts the request but doesn’t send a response until the pack file is ready. About sixty seconds of anxiety later, the client’s Git executable exits, reporting that the Cloudflare proxy returned HTTP 524. The clone is unsuccessful, and not even partial, since the pack file hadn’t even started downloading to the client’s machine yet.
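For reference, you can reproduce the slow step on the server itself by timing the same pack operation a clone triggers. A rough sketch; the repository path is my guess at a typical Gitea layout, so adjust it to wherever your bare repos actually live:

```bash
# Hypothetical path: adjust to your Gitea repository root.
REPO=/var/lib/gitea/data/gitea-repositories/hdg57/gcc.git

# This is essentially what the server does during a clone: enumerate
# every reachable object and stream them out as a single pack.
time git -C "$REPO" pack-objects --all --stdout </dev/null >/dev/null
```

That shows how long the pack step alone takes, independent of any proxy or network.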

I have done some research, and apparently this stuff doesn’t happen when you’re using the Enterprise plan. I currently cannot afford any plan other than Free, and I am not a company LOL. So, is it possible for Cloudflare to please make an exception for my (sub)domain, or to eliminate the timeout limitation for everyone? Technically speaking, keeping these so-called “timed-out” connections alive on the “superfast” proxy servers shouldn’t hurt, since there’s no data flowing through them. (Or maybe I’m misunderstanding and that’s exactly how DDoS attacks work. Please clarify.)

I’m also new to all this server hosting stuff, but I have touched it before and understand what TCP/IP is and how the Internet works (DNS, the difference between HTTP and HTTPS, etc.). I’m not sure if this is a Cloudflare Tunnel issue or a CF proxy issue, but I’m pretty sure it’s the latter. Correct me if I’m wrong, and please throw in a possible solution while you’re at it.

My website is https://gitea.hdg57.eu.org. The root domain, https://hdg57.eu.org, is not active yet.

Unfortunately that setting (the proxy timeout) is 100 seconds, so if your origin server cannot return an HTTP response within that time, you will encounter a 524 error. It can only be customised for Enterprise customers.

Probably the best thing to do would be to find ways to optimise Git on your server so that it can complete that operation in under 100 seconds, or to explore ways to pre-warm that operation if that’s possible (see the sketch below).
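A minimal sketch of the pre-warming idea, assuming you can schedule jobs on the server and that Gitea keeps its bare repositories under the path below (that path is an assumption; check your Gitea configuration):

```bash
# Re-pack every bare repository ahead of time so the clone-time
# pack-objects run can mostly reuse existing packs instead of
# recompressing everything. Bitmaps further speed up serving clones.
# The repository root is an assumption; adjust for your setup.
for repo in /var/lib/gitea/data/gitea-repositories/*/*.git; do
  git -C "$repo" repack -a -d --write-bitmap-index
done
```

Run it nightly from cron; the first pass will be slow, later ones much cheaper. Whether that gets a GCC-sized clone under 100 seconds on a Pi is another question.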

If you can’t do that, in terms of alternatives: you could try exposing your Git service via a TCP tunnel instead (which should skip the HTTP CDN and the 524 timeout) or via Private Networking. Both need something on the client side to reach Git: the WARP application, or, for the TCP tunnel, a local cloudflared access process (sketch below).
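To make the TCP-tunnel route concrete, here’s a sketch of the usual SSH variant. Everything below is an assumption about your setup: ssh.hdg57.eu.org is a made-up hostname that would need its own tunnel ingress rule pointing at ssh://localhost:22 on the Pi:

```bash
# Client side: route git's SSH traffic through cloudflared, which
# bypasses the HTTP proxy and therefore the 100-second timeout.
cat >> ~/.ssh/config <<'EOF'
Host ssh.hdg57.eu.org
  ProxyCommand cloudflared access ssh --hostname %h
EOF

# Then clone over SSH (the repository path is illustrative).
git clone git@ssh.hdg57.eu.org:hdg57/gcc.git
```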


Ok then.

Will grey-clouding the Git server remove the 100-second limit?

Grey-clouding means Cloudflare is only doing DNS, and all traffic goes directly to your own server.

Ok. I tried grey-clouding the entry, but now my browser complains that it can’t find the server. The website still works on my side (using the server computer’s local IP address), so something’s off with my Cloudflare configuration. What do you reckon is happening here?

Cloudflare Tunnel only works with proxied :orange: DNS records.


Alright, so I’ve gotten port forwarding working through UPnP. Now, there is yet another problem.

The port is mapped like this:
```
<public IP>:443 mapped to <server's local IP>:<server software port>
```
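For the record, the mapping itself can also be created by hand with upnpc from the miniupnpc package; the LAN address and Gitea port below are placeholders for my actual values:

```bash
# Ask the router over UPnP/IGD to forward public TCP 443 to the
# Gitea port on the Pi. 192.168.1.50:3000 is a placeholder.
upnpc -e "gitea" -a 192.168.1.50 3000 443 TCP

# List active redirections to confirm the mapping took.
upnpc -l
```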

Web UI traffic goes through nicely whether the subdomain is orange-clouded or not: everything web-UI-related still works, the website is still very responsive, and (when grey-clouded) everything comes directly from my origin server.

So what is so special about the Cloudflare proxy service that Git clones won’t even work without it? When the subdomain is orange-clouded, the Git clone succeeds. When it’s grey-clouded, running the exact same clone command complains that it couldn’t even establish a connection because it timed out while connecting. I’ve tried manually entering my credentials and the server port in the clone URL itself, to no avail. I also waited out the entire TTL after editing the DNS entries, plus an extra five minutes, but again, no luck. Please help.
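For reference, the checks I’ve been running look roughly like this (stock tools, nothing exotic):

```bash
# What does public DNS say the grey-clouded record points to?
dig +short gitea.hdg57.eu.org

# Can this network actually reach that address on port 443?
curl -v --connect-timeout 10 -o /dev/null https://gitea.hdg57.eu.org/
```

DNS resolves to my public IP just fine; it’s the connection itself that times out.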

Ah. Apparently the Xfinity gateway does not like its users relying on NAT loopback (a.k.a. hairpinning). I just had to sign up for a free VPN service (the speed is livable tbh) and now it works. So the server works on every single Internet connection in the world except for mine. :joy:
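(For anyone else who hits this: a hosts-file override on the LAN clients is a lighter workaround than a VPN. The address below is a placeholder for the server’s actual LAN IP:)

```bash
# On LAN machines only: resolve the domain straight to the Pi so
# traffic never has to hairpin back through the router's WAN side.
echo '192.168.1.50 gitea.hdg57.eu.org' | sudo tee -a /etc/hosts
```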

Problem solved.