Tunnelling used to work, but recently requests are being modified, resulting in digest errors

What is the name of the domain?

Not important (will disclose if necessary)

What is the error number?

400 Bad Request

What is the error message?

time="2025-04-25T14:33:49.087428720Z" level=error msg="Upload failed: digest invalid: provided digest did not match uploaded content"

What is the issue you're encountering?

My Harbor registry, which is exposed through a Cloudflare Tunnel, used to work perfectly fine: I could push and pull images with no errors at all. Nothing has changed on my side of the infrastructure, but recently all my pushes are failing because the digest does not match what is expected.

What steps have you taken to resolve the issue?

I'm quite sure the error is on the Cloudflare end, because the hash the registry calculates on the received blob (the one triggering the 400 Bad Request above) is "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", which is the SHA-256 hash of an empty string.
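
(That's easy to double-check: it is exactly the SHA-256 digest of zero bytes of input.)

    import hashlib

    # SHA-256 over an empty byte string; this matches the digest the registry
    # reports, so the blob body must be arriving empty at the origin.
    print(hashlib.sha256(b"").hexdigest())
    # -> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855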

Inspecting the tunnel connector logs gives me a second confirmation: the PUT API calls delivered while pushing the image arrive with "content-length": 0 (which is consistent with the digest above).

Everything leads me to believe that Cloudflare is tampering with my requests and stripping out the payload.

For the record, the image I'm pushing is 40-50 MB (under the 100 MB limit of the free plan), and I have previously been able to push even larger images.

Thinking that my cloudflared connector might be outdated, I updated it to the latest available release (2025.4.0).

Any insights?


Hey, interesting issue — thanks for the detailed breakdown.

That digest e3b0...b855 definitely points to an empty payload being received.

Since your PUT request ends up with content-length: 0, it does sound like Cloudflare Tunnel is stripping or blocking the body. I’ve seen similar behavior in the past when:

  • Content filtering/security features were turned on in Cloudflare
  • POST/PUT requests lacked certain headers (like Content-Type)
  • Chunked encoding wasn’t handled well over the tunnel

A couple of things you could try:

  • Add/force Content-Type: application/octet-stream or whatever matches your payload (see the quick test sketch after this list)
  • Enable chunked encoding if your client supports it
  • Bypass tunnel temporarily (use direct IP mapping or expose through another reverse proxy) to confirm if the issue is tunnel-specific
  • Check cloudflared logs with --loglevel debug — sometimes they show dropped body/payload issues
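
To check the first point in isolation, here's a rough sketch of a test request sent with an explicit Content-Type and a fixed Content-Length; the hostname and path are placeholders, so point it at whatever endpoint sits behind your tunnel:

    # Rough sketch: does a non-empty body survive the tunnel?
    # "registry.example.com" and the path are placeholders.
    import requests

    payload = b"x" * (5 * 1024 * 1024)  # 5 MB of dummy data

    resp = requests.put(
        "https://registry.example.com/test-upload",
        data=payload,  # requests sets Content-Length automatically for bytes
        headers={"Content-Type": "application/octet-stream"},
    )
    print(resp.status_code, "sent", len(payload), "bytes")

If cloudflared still logs content-length: 0 at the origin for that request, the body is being lost in transit rather than by the client.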

Since you're under the 100MB free tier limit and already using the latest connector, the problem may have been introduced by a recent change on Cloudflare's side.

Let us know if you manage to get around it — could be helpful for others running Harbor too.


Thanks sdgsvzve03, appreciate the quick response :slight_smile:

I've established beyond doubt that Cloudflare is the issue here. I temporarily bypassed it using my local DNS, and magically my blobs get pushed without a hitch.

The cloudflared logs at debug level revealed that there was no Content-Type set (as you suggested), but no errors or security/filtering operations show up.

Unfortunately docker (and podman) do not allow header customization on the underlying API calls issued by the push command, so I can't control that, can I?
Just to be safe, after checking the documentation, I used podman to push the image and got the same digest error.
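
(For anyone who wants to reproduce the failing request outside of docker/podman, with full control over the headers, something along these lines should exercise the same monolithic blob upload against the V2 registry API. The registry URL, repository name, and credentials are placeholders, and depending on how Harbor auth is configured you may need a bearer token instead of basic auth.)

    # Sketch of a manual blob push via the registry V2 API, so the exact PUT that
    # fails over the tunnel can be sent with explicit headers.
    import hashlib
    import requests

    REGISTRY = "https://registry.example.com"   # placeholder tunnel hostname
    REPO = "library/tunnel-test"                # placeholder project/repository
    AUTH = ("username", "password")             # placeholder credentials

    blob = b"hello from behind the tunnel " * 100000  # a few MB of test data
    digest = "sha256:" + hashlib.sha256(blob).hexdigest()

    # Step 1: open an upload session; the registry answers with a Location URL.
    start = requests.post(f"{REGISTRY}/v2/{REPO}/blobs/uploads/", auth=AUTH)
    start.raise_for_status()
    location = start.headers["Location"]
    if location.startswith("/"):
        location = REGISTRY + location

    # Step 2: upload the blob in one monolithic PUT, passing the expected digest.
    sep = "&" if "?" in location else "?"
    put = requests.put(
        f"{location}{sep}digest={digest}",
        data=blob,
        headers={"Content-Type": "application/octet-stream"},
        auth=AUTH,
    )
    print(put.status_code, put.text)  # 201 Created means the digest matched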

I was able to try chunked encoding with podman (--compression-format=zstd:chunked), but that doesn't do the job either: same digest error.

I'm wondering if Cloudflare recently added some default security rules that I could disable or relax a bit through my dashboard.


Just to corroborate this: we have seen a similar issue in the last week or so with a stable deployed application (no changes in the last two years) where we send medium-sized (a couple of MB) JSON POSTs from our cloud-based platform over a Cloudflare Tunnel to a remote machine.

I've not been able to narrow it down 100% yet, but reducing the size of the request temporarily fixed it (which was the easiest thing for us to change). I was going to try disabling chunked encoding in the .NET 9 HttpClient we are using next, but after reading this report it looks like we are seeing exactly the same thing:

May 2, 2025 • 11:19:48 AM
{
  "connIndex": 1,
  "path": "/nextevent/venue_1e99b3e4-9dfb-4b23-ba8c-89ce480d7c5b/barcodes",
  "headers": {
    "Cf-Visitor": [
      "{\"scheme\":\"https\"}"
    ],
    "Content-Type": [
      "application/json; charset=utf-8"
    ],
    "Correlation-Context": [
      "workstationId=works_01960f52-7119-4aca-b73f-5132d091bf65, organisationId=organ_76610b61-f709-4947-afe3-124c15ca0c76, channelId=chanl_09aff10c-f004-431a-950f-fab77cd8e03d, userId=user_00000000-0000-0000-0002-000000000001, clientIpAddress=81.178.126.247, requestId=0HNC90RM40876%3A00000001, jobId=0620f2cc-7d27-4058-b5f4-71a9f06afae9"
    ],
    "Authorization": [
      "apikey "
    ],
    "Cdn-Loop": [
      "cloudflare; loops=1"
    ],
    "Cf-Connecting-Ip": [
      "34.147.230.7"
    ],
    "Cf-Ray": [
      "9396c84bb80848c4-LHR"
    ],
    "X-Forwarded-For": [
      "34.147.230.7"
    ],
    "X-Forwarded-Proto": [
      "https"
    ],
    "Accept-Encoding": [
      "gzip, br"
    ],
    "Cf-Ipcountry": [
      "GB"
    ],
    "Cf-Warp-Tag-Id": [
      "96d0b1b3-1d97-45c1-8d15-53a275085b31"
    ],
    "Traceparent": [
      "00-95ca4c44f435930efdd7caab12044d8e-d31dc72d8ee9afb7-01"
    ]
  },
  "content-length": 0,
  "originService": "http://localhost:7766",
  "ingressRule": 0,
  "host": "mansfield-accesscontrol.ktckts-connectors.com"
}

I can see other (smaller) requests land OK with non-zero content-lengths. I'm at a bit of a loss as to how to diagnose this further; I can't easily add packet tracing on the client side for this.
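
One thing I may try next, sketched here in Python just to illustrate the idea (our real client is .NET), is replaying the same JSON body through the tunnel twice, once with a fixed Content-Length and once with chunked transfer encoding, and then checking what actually arrives at the origin in each case. The URL and payload are placeholders:

    # Client-side check (no packet capture needed): send the same body through
    # the tunnel with and without chunked transfer encoding, then compare what
    # the origin service actually receives for each request.
    import json
    import requests

    URL = "https://tunnel-host.example.com/echo"  # placeholder endpoint behind the tunnel
    body = json.dumps({"barcodes": ["x" * 64] * 20000}).encode()  # roughly 1.3 MB

    # 1) Buffered body: requests sets a Content-Length header automatically.
    r1 = requests.post(URL, data=body, headers={"Content-Type": "application/json"})

    # 2) Generator body: requests switches to Transfer-Encoding: chunked.
    def chunks(data, size=64 * 1024):
        for i in range(0, len(data), size):
            yield data[i:i + size]

    r2 = requests.post(URL, data=chunks(body), headers={"Content-Type": "application/json"})

    print("fixed-length:", r1.status_code, "chunked:", r2.status_code)

If only the chunked variant loses its body, that would at least narrow the problem down to chunked transfer encoding over the tunnel.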

Upgraded our (old) cloudflared service (on Windows) to 2025.4.0 but we’re still getting the problem.

Thanks for contributing. Glad to hear it's not just a "me" problem :slight_smile:

Just wanted to share that, as a temporary workaround, I "solved" the issue by setting up a local DNS entry that bypasses Cloudflare.
This was possible because technically the origin and destination of my requests are both inside my network. Still, it puts a new constraint on my infrastructure, and I hope to go back to my old configuration.

So in the end I'm still looking for a proper solution; any other contributions would be appreciated.

Yeah it’s always nice to know you’re not alone! :slightly_smiling_face:

Bypassing Cloudflare isn't an option for us unfortunately; we're using tunnels to expose a machine in the middle of a private network so we can make API calls to it. Good to know though!

I can't believe this isn't affecting more people. Perhaps there is something specific to our stack that means we're affected when others aren't?

Our payload sizes aren't massive, at most a couple of MB, but it seems to be happening now with much smaller ones. We're full .NET on both the client (HttpClient) and the server (ASP.NET Core), which you would have thought would make compatibility easy, but obviously not!

I have raised a support ticket with CF to hopefully get to the bottom of it; if I hear anything useful I'll let you know.

This topic was automatically closed after 15 days. New replies are no longer allowed.