Previously an S3 upload with the default repo-storage-upload-chunk-size would only work for files <= 50GiB because of the limited number of chunks allowed. GCS has a smaller default chunk size, so it topped out at 40GiB. Azure allows 50,000 chunks, so it allowed up to 200GiB.

These are all far larger than any file PostgreSQL will create, but these days a data directory may also contain files created by plugins that can be much larger. Since the eventual file size is not known in advance (due to compression), it is hard to pick an appropriate chunk size up front. Instead, dynamically grow the chunk size over time to reach 5TiB for S3 and GCS (their upper limit). Azure allows more chunks, so it will reach 45TiB, which is smaller than its upper limit of 190TiB but seems sufficient for now.

The default buffer size is used for the first GiB (plus some) to provide compatibility with any clones that do not support variable chunk sizes. There is no evidence that this is a problem, but better to be safe.

The minimum values for repo-storage-upload-chunk-size have been increased to match vendor minimums and simplify the chunk size algorithm.
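As a rough sketch of how such dynamic growth could work (this is not pgBackRest's actual implementation; the constants -- a 5MiB default chunk, a 5GiB maximum part size, a 10,000-part budget, and doubling every 1,000 chunks -- are assumptions based on S3's documented multipart limits):

```c
#include <stdint.h>

// Assumed S3-style limits: 5MiB default chunk, 5GiB max part size,
// 10,000 parts per upload. Doubling the chunk size every 1,000 chunks
// lets the fixed part budget cover on the order of 5TiB.
#define CHUNK_SIZE_DEFAULT  ((uint64_t)5 * 1024 * 1024)         // 5MiB
#define CHUNK_SIZE_MAX      ((uint64_t)5 * 1024 * 1024 * 1024)  // 5GiB
#define CHUNKS_PER_DOUBLE   1000

// Return the size to use for the zero-based chunkNo. The first
// CHUNKS_PER_DOUBLE chunks use the default size (covering well past the
// first GiB at the default), then the size doubles every
// CHUNKS_PER_DOUBLE chunks until the vendor maximum is reached.
static uint64_t
chunkSize(unsigned int chunkNo)
{
    // Number of doublings so far, clamped so the shift cannot overflow.
    // 5MiB << 10 == 5GiB, the assumed maximum part size.
    unsigned int doublings = chunkNo / CHUNKS_PER_DOUBLE;

    if (doublings > 10)
        doublings = 10;

    uint64_t size = CHUNK_SIZE_DEFAULT << doublings;

    return size > CHUNK_SIZE_MAX ? CHUNK_SIZE_MAX : size;
}
```

With these assumed constants, the first 1,000 chunks stay at the default size (satisfying the clone-compatibility window for the first GiB and then some), and the full 10,000-chunk budget sums to roughly 4.9TiB -- about the 5TiB S3 object limit. A real implementation would tune the constants per vendor to hit each limit exactly.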