We were already retrying 500 errors, but 503 (rate-limiting) errors were not being retried and caused an instant failure that aborted the command.
S3 currently implements only two 5xx errors, but rather than adding 503 specifically, retry all 5xx errors. This is consistent with the HTTP definition of this error class: "the server failed to fulfill an apparently valid request."
Suggested by Craig A. James.
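As a rough illustration (not pgBackRest's actual code), a retry loop that treats any 5xx status as transient might look like the sketch below, with `send_request` standing in for whatever performs the S3 request:

```python
import time

# Sketch only, not pgBackRest's implementation: send_request() performs one
# S3 request and returns (status, body); any 5xx status is retried with
# exponential backoff rather than special-casing 500 or 503.
def request_with_retry(send_request, max_tries=4, base_delay=1.0):
    for attempt in range(max_tries):
        status, body = send_request()
        if not 500 <= status <= 599:
            return status, body  # success, or a non-retryable client error
        if attempt < max_tries - 1:
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying
    raise RuntimeError('request failed after %d tries: HTTP %d'
                       % (max_tries, status))
```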
% characters caused issues in backup/restore because file names were interpolated directly into a format string.
Reserved XML characters (<>&') caused issues in the S3 driver due to improper escaping.
Add a file with all common special characters to regression testing.
File names with uncommon characters (e.g. @) caused authentication failures because S3 encoded them correctly while the S3 driver did not (see the encoding sketch after this entry).
Reported by Dan Farrell.
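A minimal sketch of the encoding fix, using Python's standard urllib for illustration: S3 computes its signature over an object key with every character percent-encoded except the RFC 3986 unreserved characters and the / path separator, so the driver must encode keys the same way.

```python
from urllib.parse import quote

# Hypothetical helper (not the driver's actual function): percent-encode an
# object key the way S3 does when computing the request signature, leaving
# only RFC 3986 unreserved characters and the '/' separator untouched.
def s3_uri_encode(key):
    return quote(key, safe='/')  # quote() never encodes A-Za-z0-9 _ . - ~

print(s3_uri_encode('backup/db/file@1.txt'))  # backup/db/file%401.txt
```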
pgBackRest currently has no way to request new credentials, so the entire command (e.g. backup, restore) must complete before the credentials expire.
Contributed by Yogesh Sharma.
Previously, functions with sensitive options had to be logged at trace level to avoid exposing them. Trace-level logging may still expose secrets, so use it with caution.
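A hypothetical sketch of the redaction idea (the option names below are examples, not a definitive list): replace the values of known-sensitive options before a function's parameters are written to the log, so the remaining output is safe at any level.

```python
# Illustrative sketch; SENSITIVE_OPTIONS is an example set, not exhaustive.
SENSITIVE_OPTIONS = {'repo-s3-key', 'repo-s3-key-secret', 'db-password'}

def redact_options(options):
    """Return a copy of options safe to write to the log at any level."""
    return {name: '<redacted>' if name in SENSITIVE_OPTIONS else value
            for name, value in options.items()}

print(redact_options({'repo-path': '/var/lib/backup',
                      'repo-s3-key-secret': 'hunter2'}))
# {'repo-path': '/var/lib/backup', 'repo-s3-key-secret': '<redacted>'}
```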
Any literal portion at the beginning of a filter expression is used to generate a search prefix, which often keeps the request small enough to avoid rate limiting.
Suggested by Mihail Shvein.
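The prefix generation can be sketched as follows (the helper name is ours, not pgBackRest's): keep the literal characters at the start of the expression and stop at the first regex metacharacter.

```python
import re

# Sketch of the idea: the literal characters at the start of a filter
# expression become the S3 list prefix; scanning stops at the first
# regex metacharacter.
def literal_prefix(expression):
    expression = expression.lstrip('^')  # a leading anchor adds no literal text
    return re.match(r'[^\\^$.|?*+()\[\]{}]*', expression).group(0)

print(literal_prefix('^backup/db/20170806.*'))  # backup/db/20170806
```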
These options accommodate systems where CAs are not automatically found by IO::Socket::SSL (e.g. RHEL 7) and allow custom CAs to be loaded.
Suggested by Scott Frazer.
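As a hedged example, loading a custom CA bundle might look like the snippet below in pgbackrest.conf; the option name shown is our assumption, so verify the exact spelling against the configuration reference for your pgBackRest version.

```ini
# Illustrative pgbackrest.conf snippet; option name is an assumption.
[global]
repo-s3-ca-file=/etc/pki/tls/certs/custom-ca.crt
```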
* Combine hardlink and compressed/non-compressed permutations in synthetic tests to reduce test time and improve coverage.
* Change log level of hardlink logging to detail.
* Cast size in S3 manifest to integer.