From c41d0f7d3a2cb051375b21cce6c0b0cdf8e867b3 Mon Sep 17 00:00:00 2001
From: SublimePeace <184005903+SublimePeace@users.noreply.github.com>
Date: Mon, 3 Nov 2025 17:35:33 +0100
Subject: [PATCH] docs: s3: clarify multipart uploads memory usage

Clarified phrasing to avoid confusion. Fixed a typo.

Fixes #8525
---
 docs/content/s3.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/content/s3.md b/docs/content/s3.md
index a630f467e..b5f73ebbf 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -685,9 +685,9 @@ The chunk sizes used in the multipart upload are specified by
 `--s3-chunk-size` and the number of chunks uploaded concurrently is
 specified by `--s3-upload-concurrency`.
 
-Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
-`--s3-chunk-size` extra memory. Single part uploads to not use extra
-memory.
+Multipart uploads will use extra memory equal to: `--transfers` ×
+`--s3-upload-concurrency` × `--s3-chunk-size`. Single part uploads do not
+use extra memory.
 
 Single part transfers can be faster than multipart transfers or slower
 depending on your latency from S3 - the more latency, the more likely
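
Not part of the patch itself, but as a quick sanity check on the formula the new wording spells out, here is a minimal sketch of the memory calculation. The values of `--transfers 4`, `--s3-upload-concurrency 4` and `--s3-chunk-size 5Mi` are illustrative assumptions (they match rclone's usual defaults, but check your version), not something taken from the patch.

```go
package main

import "fmt"

func main() {
	// Illustrative values, assumed rather than taken from the patch:
	transfers := 4         // --transfers
	uploadConcurrency := 4 // --s3-upload-concurrency
	chunkSizeMiB := 5      // --s3-chunk-size (5Mi)

	// Extra memory for multipart uploads per the clarified wording:
	// --transfers × --s3-upload-concurrency × --s3-chunk-size
	extraMiB := transfers * uploadConcurrency * chunkSizeMiB
	fmt.Printf("multipart uploads use about %d MiB of extra memory\n", extraMiB)
	// Single part uploads do not use this extra memory.
}
```

With these assumed values the formula works out to 4 × 4 × 5 MiB = 80 MiB of extra memory on top of rclone's baseline usage.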