403 error when uploading to R2 bucket from client via a pre-signed url

It works fine locally. This is my CORS config:

[
  {
    "AllowedOrigins": [
      "http://localhost:3000",
      "https://www.innersound.art",
      "https://innersound.art",
      "https://www.innersound.vercel.app",
      "https://innersound.vercel.app"
    ],
    "AllowedMethods": [
      "GET",
      "PUT"
    ],
    "AllowedHeaders": [
      "Content-Type"
    ]
  }
]

The OPTIONS preflight request returns 204, which suggests the CORS config is correct, right? But the PUT request to upload the image then fails with a 403.
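
To sanity-check the preflight outside the browser, here is a minimal sketch (assuming Node 18+ global fetch; PRESIGNED_URL is a placeholder for a URL returned by /api/cloudflare/get-upload-url) that sends the same OPTIONS request the browser does and logs the CORS response headers:

// Reproduce the browser's preflight against a placeholder pre-signed URL.
const PRESIGNED_URL = "https://<r2-endpoint>/<bucket>/preview/<id>.jpg?X-Amz-Signature=...";

const res = await fetch(PRESIGNED_URL, {
  method: "OPTIONS",
  headers: {
    // Headers the browser sends before the cross-origin PUT.
    Origin: "https://innersound.art",
    "Access-Control-Request-Method": "PUT",
    "Access-Control-Request-Headers": "content-type",
  },
});

console.log(res.status); // 204 when the CORS rule matches
console.log(res.headers.get("access-control-allow-origin"));
console.log(res.headers.get("access-control-allow-methods"));
console.log(res.headers.get("access-control-allow-headers"));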

I'm using Next.js and Vercel.

This is the code for getting the upload URLs:

import AWS from "aws-sdk";
import { nanoid } from "nanoid";
import { NextResponse } from "next/server";

export async function GET() {
  try {
    const s3 = new AWS.S3({
      endpoint: process.env.CLOUDFLARE_R2_ENDPOINT,
      accessKeyId: process.env.CLOUDFLARE_ACCESS_KEY_ID,
      secretAccessKey: process.env.CLOUDFLARE_SECRET_ACCESS_KEY,
      signatureVersion: "v4",
      region: "auto",
    });

    const params = {
      Bucket: bucketName,
      Expires: 60,
      ContentType: "image/jpeg",
    };

    const resultId = nanoid();
    const printfulKey = `full/${resultId}.jpg`;
    const previewKey = `preview/${resultId}.jpg`;

    const urls = await Promise.all([
      s3.getSignedUrlPromise("putObject", {
        Key: printfulKey,
        ...params,
      }),
      s3.getSignedUrlPromise("putObject", {
        Key: previewKey,
        ...params,
      }),
    ]);

    return NextResponse.json({
      printfulUrl: urls[0],
      previewUrl: urls[1],
      resultId,
    });
  } catch (err: any) {
    return NextResponse.json({ error: err.message }, { status: 500 });
  }
}

The uploaded images will always be image/jpeg.

I upload the images like this:

const uploadUrlResponse = await fetch("/api/cloudflare/get-upload-url", {
  method: "GET",
});
const uploadUrlJson = await uploadUrlResponse.json();
const { printfulUrl, previewUrl, resultId } = uploadUrlJson;

const responses = await Promise.all([
  fetch(printfulUrl, {
    method: "PUT",
    headers: { "Content-Type": "image/jpeg" },
    body: resultBlob,
  }),
  fetch(previewUrl, {
    method: "PUT",
    headers: { "Content-Type": "image/jpeg" },
    body: previewBlob,
  }),
]);

if (!responses[0].ok || !responses[1].ok) {
  throw new Error(responses[0].statusText);
}

Any ideas?

The following configuration works for uploading a file via a pre-signed URL. Attach the CORS policy below; after it you'll find a sample snippet to generate the pre-signed URL and a command to verify that the file uploads properly.

[
  {
    "AllowedOrigins": [
      "*"
    ],
    "AllowedMethods": [
      "PUT"
    ],
    "AllowedHeaders": [
      "Content-Type",
      "Content-Length"
    ]
  }
]

Update AllowedOrigins if you plan to restrict uploads to specific domains; local origins can be added as well. Example:

[
  {
    "AllowedOrigins": [
      "http://localhost:5173",
      "https://jsonstore.online"
    ],
    "AllowedMethods": [
      "PUT"
    ],
    "AllowedHeaders": [
      "Content-Type",
      "Content-Length"
    ]
  }
]
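
If you prefer to set the policy from code instead of the dashboard, here is a minimal sketch using the S3-compatible PutBucketCors API (the R2_* variables are placeholders, the same ones used in the presign snippet below):

import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  region: "auto",
  endpoint: R2_BASE_URL,
  credentials: {
    accessKeyId: R2_ACCESS_KEY_ID,
    secretAccessKey: R2_SECRET_ACCESS_KEY,
  },
});

// Apply the restricted-origins rules from the example above.
await s3Client.send(
  new PutBucketCorsCommand({
    Bucket: R2_BUCKET_NAME,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ["http://localhost:5173", "https://jsonstore.online"],
          AllowedMethods: ["PUT"],
          AllowedHeaders: ["Content-Type", "Content-Length"],
        },
      ],
    },
  })
);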

Snippet for generating a pre-signed URL:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3Client = new S3Client({
  region: "auto",
  endpoint: R2_BASE_URL,
  credentials: {
    accessKeyId: R2_ACCESS_KEY_ID,
    secretAccessKey: R2_SECRET_ACCESS_KEY,
  },
});

const objectKey = "<foldername>/dog.png";

const presignedUrl = await getSignedUrl(
  s3Client,
  new PutObjectCommand({
    Bucket: R2_BUCKET_NAME,
    Key: objectKey,
  }),
  {
    expiresIn: 60 * 5, // 5 minutes
  }
);

console.log({ presignedUrl });

Run the following command to upload the file via the pre-signed URL:

curl -X PUT "<presignedUrl>" --upload-file dog.png

Now you should be able to see the file in the R2 bucket under the folder name you provided.
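
To verify from code instead of the dashboard, a quick sketch (reusing the s3Client, R2_BUCKET_NAME and objectKey placeholders from the presign snippet above) is to issue a HeadObject request for the key; it returns the object's metadata if the upload succeeded and throws if the object doesn't exist:

import { HeadObjectCommand } from "@aws-sdk/client-s3";

try {
  // HeadObject fetches metadata only, so it's a cheap existence check.
  const head = await s3Client.send(
    new HeadObjectCommand({
      Bucket: R2_BUCKET_NAME,
      Key: objectKey,
    })
  );
  console.log("Upload verified:", {
    contentType: head.ContentType,
    contentLength: head.ContentLength,
    etag: head.ETag,
  });
} catch (err) {
  console.error("Object not found (or not accessible):", err);
}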