@aws-sdk/client-s3 v3.729.0 Breaks UploadPart and PutObject R2 S3 API Compatibility

For Workers & Pages, what is the name of the domain?

N/A

What is the error number?

501

What is the error message?

NotImplemented: Header 'x-amz-checksum-crc32' with value 'Cu/HOQ==' not implemented

What is the issue or error you’re encountering?

Upon updating to @aws-sdk/client-s3 v3.729.0, which requires checksum algorithms on UploadPart and PutObject commands, I get this error on every request. I can also reproduce it on v3.726.1 by setting ChecksumAlgorithm to any valid value.

What steps have you taken to resolve the issue?

I have downgraded to @aws-sdk/client-s3 v3.726.1 for now, but this is not sustainable long-term. Please either add support for checksum algorithms on upload, or provide an alternate way to upload files on newer versions of @aws-sdk/client-s3 without this problem.

What are the steps to reproduce the issue?

Install @aws-sdk/client-s3 v3.729.0
Create S3 client, configure it to an R2 account
Use a PutObjectCommand to upload any object to any bucket
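
For reference, here is a minimal reproduction sketch of those steps (the account ID, credentials, bucket, and key below are placeholders, not taken from the original post):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Placeholder account ID, credentials, bucket, and key - substitute your own.
const client = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// With the v3.729.0 defaults, this request carries an x-amz-checksum-crc32
// header, which R2 rejects with 501 NotImplemented.
await client.send(
  new PutObjectCommand({ Bucket: "my-bucket", Key: "test.txt", Body: "hello" })
);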

2 Likes


Attached is a screenshot of the error. For some reason it didn’t upload properly when I made the original post.

1 Like

Same here. I get the same error.

1 Like

A workaround is to set the checksum algorithm header via middleware. It fixed the issue for me:

const headers: Record<string, string> = {
  "x-amz-checksum-algorithm": '"CRC32"',
};

// Add the header at the "build" step so it is in place before the request is signed.
command.middlewareStack.add(
  (next) =>
    async (args): Promise<any> => {
      const request = args.request as RequestInit;

      Object.entries(headers).forEach(
        ([key, value]: [string, string]): void => {
          if (!request.headers) {
            request.headers = {};
          }
          (request.headers as Record<string, string>)[key] = value;
        }
      );

      return next(args);
    },
  { step: "build", name: "customHeaders" }
);
2 Likes

Refer to the link below: Configure custom headers · Cloudflare R2 docs

2 Likes

This also happens with the latest aws/aws-sdk-php, version 3.337.0. Please fix.

Downgrading aws/aws-sdk-php to 3.336.15 makes it work again.

1 Like

Same here. Is Cloudflare working on this?

For PHP it seems to be related to this: Announcement: S3 default integrity change · Issue #3062 · aws/aws-sdk-php · GitHub

https://my.diffend.io/gems/aws-sdk-s3/1.177.0/1.178.0

The same thing happens in the Ruby gem:

  • Feature - This change enhances integrity protections for new SDK requests to S3. S3 SDKs now support the CRC64NVME checksum algorithm, full object checksums for multipart S3 objects, and new default integrity protections for S3 requests.

The team has been communicating in Discord. They said they’re working on checksum support. I’m hoping they publish a status page about this as well; we asked them to do so.
The S3 SDKs should also let you specify when checksums are calculated, which avoids this issue for now. For example, for the client-s3 library:

const S3 = new S3Client({
  region: 'auto',
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: ACCESS_KEY_ID,
    secretAccessKey: SECRET_ACCESS_KEY,
  },
  // Add this config
  requestChecksumCalculation: 'WHEN_REQUIRED',
});

(taken from Harshil Agrawal in Discord)
Or you can downgrade to the versions released before yesterday’s ones.
For the CLI, I think your only option is to downgrade to avoid the issue entirely.

Edit: status page: Cloudflare Status - AWS S3 SDK compatibility inconsistencies with R2
Cloudflare also updated their docs with workarounds.

4 Likes

Even after downgrading aws-sdk-s3 for Ruby, this error is still happening.

Header 'x-amz-checksum-crc32' with value 'awLXQA==' not implemented

Workaround for Laravel:

config/filesystems.php

'r2' => [
    'driver' => 's3',
    'key' => env('R2_ACCESS_KEY_ID'),
    'secret' => env('R2_SECRET_ACCESS_KEY'),
    'region' => env('R2_DEFAULT_REGION'),
    'bucket' => env('R2_BUCKET'),
    'url' => env('R2_URL'),
    'endpoint' => env('R2_ENDPOINT'),
    'use_path_style_endpoint' => env('R2_USE_PATH_STYLE_ENDPOINT', false),
    'visibility' => 'private',
    'throw' => false,
    'options' => [
        'StorageClass' => 'STANDARD',
    ],
    'request_checksum_calculation' => 'when_required',
    'response_checksum_validation' => 'when_required',
],
3 Likes

I use @aws-sdk/client-s3 for Node. The config requestChecksumCalculation: 'WHEN_REQUIRED' works for me when putting files, but it doesn’t work for getting files. So, I needed to create a middleware to also remove the x-amz-checksum-mode header. For me, the combination of the requestChecksumCalculation config and removing the x-amz-checksum-mode header makes everything work great again!
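
Roughly, that middleware looks like the sketch below (assuming the header is attached as a plain request header before signing; the middleware name is my own):

// Sketch: strip the x-amz-checksum-mode header the SDK adds for GetObject.
// Runs late in the "build" step, before the request is signed in "finalizeRequest".
client.middlewareStack.add(
  (next) =>
    async (args: any) => {
      const request = args.request as { headers: Record<string, string> };
      delete request.headers["x-amz-checksum-mode"];
      return next(args);
    },
  { step: "build", priority: "low", name: "removeChecksumMode" }
);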

@rafael.rodrigues also try this

requestChecksumCalculation: 'WHEN_REQUIRED',
responseChecksumValidation: 'WHEN_REQUIRED',

2 Likes

@it-can
It worked great for me without the middleware, thanks!

This solution works for getting and putting files, but it does not currently work for deleting files.
See: s3.deleteObjects sends CRC32 checksum even if `requestChecksumCalculation` is set to `WHEN_REQUIRED` · Issue #6819 · aws/aws-sdk-js-v3 · GitHub

This issue was closed as not planned because they’ve decided not to fix their own broken config flag. It seems the only mitigation right now for those intending to delete files in their R2 buckets is either sketchy header-removing middleware or simply downgrading to v3.726.1 until Cloudflare adds checksum support on their end (see the status page).
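
For anyone going the middleware route for deletes, here is a rough sketch of that header-stripping approach (assuming client-s3 attaches the checksum as plain headers during the "build" step and that R2 accepts the request without them; verify against your SDK version):

import { DeleteObjectsCommand } from "@aws-sdk/client-s3";

// Placeholder bucket/key; "client" is an S3Client configured for R2 as shown earlier.
const command = new DeleteObjectsCommand({
  Bucket: "my-bucket",
  Delete: { Objects: [{ Key: "test.txt" }] },
});

// Run late in the "build" step, after the SDK's checksum middleware but before
// the request is signed in "finalizeRequest".
command.middlewareStack.add(
  (next) =>
    async (args: any) => {
      const headers = (args.request as { headers: Record<string, string> }).headers;
      delete headers["x-amz-checksum-crc32"];
      delete headers["x-amz-sdk-checksum-algorithm"];
      return next(args);
    },
  { step: "build", priority: "low", name: "stripChecksumHeaders" }
);

await client.send(command);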

Worked for my case.

Thank you

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.