Cloudflare R2 HTTP/2 Error

I am uploading large files (about 150 GiB) to Cloudflare R2.

When downloading with curl, the transfer defaults to HTTP/2. It downloads a few gigabytes, then fails with this error:

curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)

Forcing curl to use HTTP/1.1 with the --http1.1 flag fixes the problem.
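For reference, the failing and working invocations look roughly like this (the URL is a placeholder for the real file):

    # Default: curl negotiates HTTP/2 and the stream dies partway through
    curl -O https://snapshots.example.com/large-file.lz4

    # Workaround: force HTTP/1.1
    curl --http1.1 -O https://snapshots.example.com/large-file.lz4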

Note that I used the S3 CLI to upload the file, piping from stdin. I had to provide an estimate of the file size, and this estimate is higher than the actual file size (I cannot compute the exact size beforehand). I am not sure whether this is related.
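For context, the upload was roughly the following; I'm assuming aws s3 cp here, which reads from stdin when the source is "-" and requires --expected-size for large streamed uploads. The producer side, bucket name, endpoint, and size are all placeholders:

    # Stream of unknown exact size piped to R2 via the S3-compatible API.
    # --expected-size (in bytes) lets the CLI pick multipart part sizes; here
    # it is a deliberate over-estimate because the real size is unknown.
    tar -c data/ | lz4 | aws s3 cp - s3://my-bucket/snapshot.tar.lz4 \
        --endpoint-url "https://<account-id>.r2.cloudflarestorage.com" \
        --expected-size 170000000000   # upper bound, larger than the ~150 GiB file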

Hi! Can anyone have a look at this? I still have issues downloading large (>100 GiB) files from R2, with errors such as “stream was not closed cleanly” or “connection reset by peer”.

I had previously said it only happens on HTTP/2, but HTTP/1.1 shows the same problem after all.

What are you making the HTTP requests to? Is it a custom domain?

Actually, it looks like it isn’t supported, based on a message from an employee on 2022-11-02:

Yeah, it looks like it’s not supported. That’s probably not something we’ll put much effort into in the near future, since I can’t find much information about clients actually letting you use HTTP/2. For example, it’s difficult to find out whether boto even supports HTTP/2. I don’t think support for it is widespread, since AWS S3 doesn’t support HTTP/2 either, as far as I can tell.

Message link

Thanks for your reply. I am simply using curl to download a file hosted on R2 behind a custom domain. curl uses HTTP/2 by default.

However, I have the same problem when forcing HTTP/1.1 with curl --http1.1.

So, simply put, the issue is that downloading a 100 GiB+ file from R2 with curl often fails in the middle of the download.

You may try for yourself at https://polkashots.io

Hi Nicolas, I’d be interested in looking into this. Could you DM me on Discord with more details about the file you’re trying to download? My Discord ID is Frederik#6268.

As an aside, we’ve recently disabled HTTP/2, so clients should automatically use HTTP/1.1 now. But I understand that doesn’t solve your problem.

Hello, are you going to re-enable HTTP/2 on R2 buckets?
We are serving static assets through an R2 bucket, and all requests are HTTP/1.1 now, which hurts frontend performance.

Just to give an update, @fbaetens and I exchanged privately on the matter, and I was told that support for public downloads of large files (>100 GiB) with curl is not currently a priority.

In many cases the download gets interrupted; only rarely does it run to completion. To everyone affected, I recommend implementing a retry loop.
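A minimal sketch of such a loop, assuming a plain download to a file (URL and output name are placeholders; curl -C - resumes from whatever was already written):

    #!/usr/bin/env bash
    # Retry the download until curl exits successfully, resuming each time.
    URL="https://snapshots.example.com/large-file.lz4"   # placeholder
    OUT="large-file.lz4"

    until curl --fail --http1.1 -C - -o "$OUT" "$URL"; do
        echo "download interrupted, retrying in 10s..." >&2
        sleep 10
    done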

This makes R2 an incomplete substitute for S3. I hope the Cloudflare team reconsiders this!

To add a little more context: no internet connection is going to be perfectly reliable. For large file downloads like this, it’s always better to download the file in chunks with range requests and piece it back together on the client. This is what rclone does automatically with multi-threaded downloads, which are enabled by default: Documentation
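For illustration, a bare-bones version of that chunked approach with plain curl could look like this, assuming the server reports Content-Length and honors Range requests (URL, output name, and chunk size are placeholders):

    #!/usr/bin/env bash
    # Download a large file in fixed-size ranges and concatenate the pieces.
    URL="https://snapshots.example.com/large-file.lz4"   # placeholder
    OUT="large-file.lz4"
    CHUNK=$((1024 * 1024 * 1024))                        # 1 GiB per request

    SIZE=$(curl -sI "$URL" | tr -d '\r' | awk 'tolower($1) == "content-length:" {print $2}')
    : > "$OUT"

    start=0
    while [ "$start" -lt "$SIZE" ]; do
        end=$((start + CHUNK - 1))
        [ "$end" -ge "$SIZE" ] && end=$((SIZE - 1))
        # Retry the current range until it downloads completely, then append it.
        until curl --fail --http1.1 -r "$start-$end" -o chunk.part "$URL"; do
            sleep 5
        done
        cat chunk.part >> "$OUT"
        start=$((end + 1))
    done
    rm -f chunk.part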

But Amazon S3 and Google Cloud buckets support my use case without a problem.

wget’s default behavior is to retry upon failure using Range headers; however, this fails for me in many cases. I get:

wget: connection closed prematurely
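For anyone trying the same, those retry knobs can also be set explicitly (URL is a placeholder):

    # --tries=0 retries indefinitely; -c resumes a partial file with a
    # Range request instead of starting over.
    wget --tries=0 -c "https://snapshots.example.com/snapshot.lz4"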

Also, I should specify that I am downloading these large files to stdout and piping to lz4, so rclone is not appropriate, as it expects to write to a file. I would greatly appreciate it if this could be fixed.
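Concretely, the pipeline is along these lines (URL and destination are placeholders), which is why there is no partial file on disk to resume from:

    # Streaming download: the compressed data never hits disk, so an
    # interrupted transfer cannot simply be resumed from a partial file.
    curl --fail --http1.1 -s "https://snapshots.example.com/snapshot.lz4" \
        | lz4 -d > snapshot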

Hi, is there any ETA for enabling HTTP/2 for R2?

I’m getting this issue too, but my JS file is only 4.4 MB. If I disable the proxy (DNS only), the file loads successfully.

If you manage to solve it, please let me know what the solution is.

Thank you in advance