From d481aa86130ab19dfd1065ee50f23d1a5b804c75 Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Date: Tue, 21 Mar 2023 11:43:35 +0000
Subject: [PATCH] Revert "s3: fix InvalidRequest copying to a locked bucket
 from a source with no MD5SUM"

This reverts commit e5a1bcb1ce771bc80fee0072565bb4bfa1e86dca.

This causes a lot of integration test failures so may need to be
optional.
---
 backend/s3/s3.go | 15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

diff --git a/backend/s3/s3.go b/backend/s3/s3.go
index 785969bd7..4d4994302 100644
--- a/backend/s3/s3.go
+++ b/backend/s3/s3.go
@@ -1912,9 +1912,6 @@ size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
 photos or google docs) they will be uploaded as multipart uploads
 using this chunk size.
 
-Files with no source MD5 will also be uploaded with multipart uploads
-as will all files if --s3-disable-checksum is set.
-
 Note that "--s3-upload-concurrency" chunks of this size are buffered
 in memory per transfer.
 
@@ -1970,11 +1967,7 @@ The minimum is 0 and the maximum is 5 GiB.`,
 Normally rclone will calculate the MD5 checksum of the input before
 uploading it so it can add it to metadata on the object. This is great
 for data integrity checking but can cause long delays for large files
-to start uploading.
-
-Note that setting this flag forces all uploads to be multipart uploads
-as we can't protect the body of the transfer unless we have an MD5.
-`,
+to start uploading.`,
 			Default:  false,
 			Advanced: true,
 		}, {
@@ -5507,12 +5500,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 			}
 		}
 	}
-	// If source MD5SUM not available then do multipart upload
-	// otherwise uploads are not hash protected and locked buckets
-	// will complain #6846
-	if !multipart && md5sumHex == "" {
-		multipart = true
-	}
 	// Set the content type it it isn't set already
 	if req.ContentType == nil {
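
The behaviour this patch reverts can be sketched in isolation. The removed block in `Object.Update` forced a multipart upload whenever the source provided no MD5, so that locked (Object Lock) buckets would not reject the transfer (#6846). This is a minimal standalone sketch, not rclone's actual code; the function name `forceMultipart` is illustrative only:

```go
package main

import "fmt"

// forceMultipart mirrors the decision the reverted commit added to
// Object.Update: if the upload would be single-part but no source MD5
// is available, fall back to multipart so the body stays hash
// protected. (Illustrative helper, not an rclone API.)
func forceMultipart(multipart bool, md5sumHex string) bool {
	if !multipart && md5sumHex == "" {
		return true
	}
	return multipart
}

func main() {
	fmt.Println(forceMultipart(false, ""))         // no MD5: forced to multipart
	fmt.Println(forceMultipart(false, "d41d8cd9")) // MD5 known: stays single-part
	fmt.Println(forceMultipart(true, ""))          // already multipart: unchanged
}
```

Reverting removes this fallback, restoring the pre-#6846 behaviour where uploads without a source MD5 could go out single-part (which is what the failing integration tests expected).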