I am uploading large files (about 150Gi) to Cloudflare R2.
When downloading with curl, the download defaults to HTTP/2. It downloads a few gigabytes, then fails with this error:
curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
Forcing curl to use HTTP/1.1 with the --http1.1 flag fixes the problem.
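For reference, the failing and working invocations look roughly like this (the URL is a placeholder for a presigned R2 object URL, not my real endpoint):

```shell
# Reproduces the failure: curl negotiates HTTP/2 by default against R2 and
# dies a few GiB in with "HTTP/2 stream 0 was not closed cleanly".
curl -o bigfile.lz4 "https://<accountid>.r2.cloudflarestorage.com/bucket/bigfile.lz4"

# Workaround: force HTTP/1.1; -C - additionally resumes a partial download
# if the command is re-run after an interruption.
curl --http1.1 -C - -o bigfile.lz4 "https://<accountid>.r2.cloudflarestorage.com/bucket/bigfile.lz4"
```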
Note that I used the S3 CLI to upload the file, piping from stdin. I had to provide an estimate of the file size, and this estimate is higher than the actual file size (I am unable to calculate the exact size beforehand). I am not sure whether this is related.
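The upload pipeline looked roughly like this (the source command, bucket name, and size estimate below are placeholders). --expected-size is the AWS CLI's mechanism for sizing multipart parts when the stream's real length is unknown:

```shell
# Overestimate of the final object size, in bytes (~200 GiB here).
# The CLI uses this to pick a multipart part size large enough to stay
# under S3's 10,000-part limit; it is an estimate, not a hard cap.
EST_BYTES=$((200 * 1024 * 1024 * 1024))

# "some_source_command" stands in for whatever produces the stream.
some_source_command | aws s3 cp - "s3://my-bucket/bigfile.lz4" --expected-size "$EST_BYTES"
```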
Actually, it looks like it isn’t supported, based on a message from an employee on 2022-11-02:
Yeah, looks like it’s not supported. That’s probably not something we’ll put much effort into in the near future, since I can’t find much information about clients actually allowing you to use HTTP/2. E.g. it’s difficult to find information about whether boto even supports HTTP/2. I don’t think support for it is widespread, since AWS S3 doesn’t support HTTP/2 either, as far as I can tell.
To add a little more context to this: no internet connection is going to be perfectly reliable. For large file downloads like this, it’s always better to download the file in chunks with range requests and piece it back together on the client. This is what rclone does automatically with multi-threaded downloads, which are enabled by default: Documentation
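Concretely, the chunked approach boils down to computing byte ranges and issuing one request per range. A minimal sketch (the sizes here are made up, and real code would add retries per chunk):

```shell
# Print "start-end" HTTP byte ranges covering `size` bytes in
# `chunk`-sized pieces, for use with curl's -r/--range option.
ranges() {
  size=$1
  chunk=$2
  start=0
  while [ "$start" -lt "$size" ]; do
    end=$((start + chunk - 1))
    [ "$end" -ge "$size" ] && end=$((size - 1))
    printf '%s-%s\n' "$start" "$end"
    start=$((end + 1))
  done
}

# Example: 25 bytes in 10-byte chunks -> 0-9, 10-19, 20-24
ranges 25 10
```

Each range can then be fetched with `curl -r "$range" -o "part.NNN" "$URL"` (zero-padded part names so they concatenate in order), which is essentially what rclone’s multi-threaded download does for you.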
Also, I should clarify that I am downloading these large files to stdout and piping them to lz4, so rclone is not appropriate, as it expects to write to a file. I would greatly appreciate it if this could be fixed.
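In the meantime, a workaround that keeps the stdout pipeline is to fetch the object sequentially in ranged chunks, so each chunk gets its own connection and retries while lz4 still sees one contiguous stream. A sketch, with a placeholder URL and a hard-coded size (in practice the size could come from a HEAD request, e.g. curl -I):

```shell
URL="https://<accountid>.r2.cloudflarestorage.com/bucket/bigfile.lz4"  # placeholder
SIZE=161061273600    # object size in bytes (150 GiB), known from the upload
CHUNK=$((1 << 30))   # 1 GiB per range request

start=0
while [ "$start" -lt "$SIZE" ]; do
  end=$((start + CHUNK - 1))
  [ "$end" -ge "$SIZE" ] && end=$((SIZE - 1))
  # --fail + --retry: a flaky chunk is retried on its own,
  # without restarting the whole 150 GiB transfer
  curl --fail --silent --retry 5 -r "$start-$end" "$URL" || exit 1
  start=$((end + 1))
done | lz4 -d > restored
```

Because the chunks are emitted strictly in order, the consumer on the other end of the pipe cannot tell the download was split up.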