Previously the process id was skipped if it did not exist. Instead, throw an error and handle the errors in downstream code.
This was probably ignored at some point to provide backward compatibility, but that is no longer required, if it ever was.
Sometimes we need to read a lock from another process. This was done in two different ways, and in the case of cmdStop() the approach was definitely hacky.
Centralize the logic to make it easier to read the locks for another process. This will also make it easier to add new lock data.
When archive-mode-check is disabled and archive-push is running from multiple hosts, it is very likely that the file will already exist with the same checksum, so disable the warning.
However, if the checksums do not match, an error will still be thrown.
If a boolean option had an unresolved dependency then the value would be NULL, which meant the dependency would need to be checked in the code to avoid an error. For example, cfgOptionBool(cfgOptOnline) needed to be checked before it was safe to call cfgOptionBool(cfgOptArchiveCheck).
Allow a default for boolean options when they are unresolved to simplify the code. This makes using the options easier and less prone to error. Not all boolean options get a dependency default in this commit, but more may be added in the future.
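As a minimal sketch of the difference, using the option names from the message above (checkArchive() is a hypothetical stand-in for whatever the caller does):

    // Before: the dependency had to be checked first, since querying an
    // unresolved option would error (checkArchive() is hypothetical)
    if (cfgOptionBool(cfgOptOnline) && cfgOptionBool(cfgOptArchiveCheck))
        checkArchive();

    // After: a dependency default of false lets the option be queried
    // directly, even when it is unresolved because online is disabled
    if (cfgOptionBool(cfgOptArchiveCheck))
        checkArchive();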
For PITR with --type=lsn, attempt to auto-select the appropriate backup set based on the --target LSN provided. Pick the most recent backup where backup-lsn-stop is less than or equal to the provided LSN.
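A minimal sketch of the selection rule, assuming LSNs in the standard X/Y text format and a backup list ordered oldest to newest (the real implementation reads backup-lsn-stop from the backup metadata):

    #include <stdint.h>
    #include <stdio.h>

    // Parse a PostgreSQL LSN of the form "X/Y" into a comparable 64-bit value
    static uint64_t
    lsnParse(const char *lsn)
    {
        unsigned int hi = 0, lo = 0;
        sscanf(lsn, "%X/%X", &hi, &lo);
        return ((uint64_t)hi << 32) | lo;
    }

    // Return the index of the most recent backup where backup-lsn-stop is
    // less than or equal to the target LSN, or -1 if none qualifies
    static int
    backupSelect(const char *const lsnStop[], int total, const char *target)
    {
        const uint64_t targetLsn = lsnParse(target);
        int result = -1;

        for (int idx = 0; idx < total; idx++)
            if (lsnParse(lsnStop[idx]) <= targetLsn)
                result = idx;

        return result;
    }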
It is possible that a file will be truncated to zero-length after the backup manifest has been built. We could build logic into backupFile() to handle this case, but it is hard to test well because of the race condition, so tests would need to be written directly against backupFile() and backupJobResult(). It hardly seems worth all that effort for a condition that occurs rarely, if ever.
Instead just remove the manifest check and add tests to restore to make sure it handles bundled zero-length files correctly. Logging will show that the file was bundled so if it happens a lot (which seems very unlikely) then we can think about an alternate implementation.
This rule was added because there were not sufficient tests to demonstrate that the repo-hardlink option could be changed in a backup set.
Remove the restriction and add/update tests to show that it works.
This is necessary now because bundling requires that hardlinking be disabled. Rather than add code complexity, it seems better just to address this limitation.
Check for invalid paths in the repo-* commands, performing validation and throwing an error when appropriate: a path may not contain '//', a trailing '/' is stripped, and an absolute path must fall under the repo path.
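A hypothetical sketch of those rules in that order (repoPathIsValid() and its exact behavior are illustrative, not the shipped implementation):

    #include <stdbool.h>
    #include <string.h>

    // Validate a user-supplied path for the repo-* commands: reject '//',
    // strip a trailing '/', and require absolute paths to fall under repoPath
    static bool
    repoPathIsValid(char *path, const char *repoPath)
    {
        if (strstr(path, "//") != NULL)
            return false;

        size_t length = strlen(path);

        if (length > 1 && path[length - 1] == '/')
            path[length - 1] = '\0';

        if (path[0] == '/' && strncmp(path, repoPath, strlen(repoPath)) != 0)
            return false;

        return true;
    }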
IMDSv2 provides additional security to prevent instance metadata from being read by an attacker.
All AWS instances should provide IMDSv2, but still fall back to IMDSv1 if the IMDSv2 token request fails. This is in case there are any services outside AWS that are emulating IMDSv1 but have not implemented IMDSv2.
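A sketch of the token request using libcurl; the endpoint and headers are the documented AWS instance metadata interface, but the function names and error handling here are illustrative only:

    #include <stdbool.h>
    #include <string.h>
    #include <curl/curl.h>

    // Append the response body (the session token) to a 256-byte buffer
    static size_t
    tokenWrite(char *data, size_t size, size_t nmemb, void *out)
    {
        char *token = out;
        size_t used = strlen(token);
        size_t copy = size * nmemb < 255 - used ? size * nmemb : 255 - used;

        memcpy(token + used, data, copy);
        token[used + copy] = '\0';

        return size * nmemb;
    }

    // Request an IMDSv2 session token. On failure the caller omits the
    // X-aws-ec2-metadata-token header and falls back to IMDSv1.
    static bool
    imdsTokenGet(char token[256])
    {
        CURL *curl = curl_easy_init();
        struct curl_slist *headers = curl_slist_append(
            NULL, "X-aws-ec2-metadata-token-ttl-seconds: 21600");

        token[0] = '\0';
        curl_easy_setopt(curl, CURLOPT_URL, "http://169.254.169.254/latest/api/token");
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, tokenWrite);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, token);
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 5L);

        bool result = curl_easy_perform(curl) == CURLE_OK;

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);

        return result;
    }

Subsequent metadata requests then send the token in an X-aws-ec2-metadata-token header; if the token request failed, the same requests are made without the header, which is plain IMDSv1.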
It seems best for these to be repo options so they can be configured per repo, rather than globally.
Also clarify usage for repo-bundle-size and repo-bundle-limit.
Since files are stored sequentially in a bundle, it is often possible to restore multiple files with a single read. Previously, each restored file required a separate read. Reducing the number of reads is particularly beneficial for object stores, but performance should benefit on any file system.
Currently if there is a gap then a new read is required. In the future we might set a limit for how large a gap we'll skip without starting a new read.
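A hypothetical sketch of the strategy (readBegin() and fileExtract() are illustrative stubs, not the real storage interface):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct BundledFile
    {
        uint64_t offset;                                // offset within the bundle
        uint64_t size;                                  // stored (repo) size
    } BundledFile;

    static void readBegin(uint64_t offset) {(void)offset;}          // stub
    static void fileExtract(const BundledFile *file) {(void)file;}  // stub

    // Restore files from one bundle, sorted by offset, starting a new read
    // only when there is a gap between consecutive files
    static void
    bundleRestore(const BundledFile *fileList, size_t fileTotal)
    {
        uint64_t readEnd = 0;
        bool readOpen = false;

        for (size_t idx = 0; idx < fileTotal; idx++)
        {
            const BundledFile *file = &fileList[idx];

            if (!readOpen || file->offset != readEnd)
            {
                readBegin(file->offset);
                readOpen = true;
            }

            fileExtract(file);
            readEnd = file->offset + file->size;
        }
    }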
Improve the stop command so that when the force and stanza options are specified it terminates only the processes holding lock files for the given stanza. Previously, all processes holding lock files were terminated regardless of stanza.
For very large backups, an update once per percent may not be frequent enough.
Add hundredths to the percent complete logging to provide more timely information.
IMPORTANT NOTE: Repository size reported by the info command is now entirely based on what pgBackRest has written to storage. Previously, in certain cases, pgBackRest could detect if additional compression was being applied by the storage but this is no longer supported.
Bug Fixes:
* Retry errors in S3 batch file delete. (Reviewed by Reid Thompson. Reported by Alex Richman.)
* Allow case-insensitive matching of HTTP connection header values. (Reviewed by Reid Thompson. Reported by Rémi Vidier.)
Features:
* Add support for AWS S3 server-side encryption using KMS. (Contributed by Christoph Berg. Reviewed by David Steele, Tharindu Amila.)
* Add archive-missing-retry option. (Reviewed by Stefan Fercot.)
* Add backup type filter to info command. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Retry on page validation failure during backup. (Reviewed by Stephen Frost, David Christensen.)
* Handle TLS servers that do not close connections gracefully. (Reviewed by Rémi Vidier, David Christensen, Stephen Frost.)
* Add backup LSNs to info command output. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Automatically strip trailing slashes for repo-ls paths. (Contributed by David Christensen. Reviewed by David Steele.)
* Do not retry fatal errors. (Reviewed by Reid Thompson.)
* Remove support for PostgreSQL 8.3/8.4. (Reviewed by Reid Thompson, Stefan Fercot.)
* Remove logic that tried to determine additional file system compression. (Reviewed by Reid Thompson, Stefan Fercot.)
Documentation Bug Fixes:
* Move repo options in TLS documentation to the global section. (Reported by Anton Kurochkin.)
* Remove unused backup-standby option from stanza commands. (Reported by Stefan Fercot.)
* Fix typos in help and release notes. (Fixed by Daniel Gustafsson. Reviewed by David Steele.)
Documentation Improvements:
* Add aliveness check to systemd service configuration. (Suggested by Yogesh Sharma.)
* Add FAQ explaining WAL archive suffix. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Note that replication slots are not restored. (Contributed by Reid Thompson. Reviewed by David Steele, Stefan Fercot. Suggested by Christophe Courtois.)
Some TLS server implementations will simply close the socket rather than correctly closing the TLS connection. This causes problems when Connection: close is specified with no Content-Length or chunked encoding and we are forced to read to EOF. It is hard to know if this is a real EOF or a network error.
In cases where we can parse the content and (hopefully) ensure it is correct, allow the closed socket to serve as EOF. This is not ideal, but the change in 8e1807c means that currently working servers with this issue will stop working after 2.35 is installed, which seems too risky.
da0f3a855 used a workaround to get the documentation building on aarch64 but recent changes to the PGDG yum repo have broken this workaround. Installing the regular way still doesn't work, either.
Reverting for now to get the CI pipeline working again.
Trailing slashes in at least some of the repository storage types were preventing repo-ls from displaying any content (presumably due to storage-specific behavior).
Since the path with the slash should be equivalent to the path without the slash, just remove it if provided by the user.
Checking that pd_upper == 0 is not enough since this field may be corrupted. Still use pd_upper as a quick check, but when it is zero proceed to check the rest of the page to ensure it is also all zeroes.
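A minimal sketch of the combined check, assuming the standard PostgreSQL page layout (names are illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    // pd_upper == 0 is a quick filter for a new page, but since the field may
    // itself be corrupted, confirm that the entire page is zero before
    // skipping checksum validation
    static bool
    pageIsNew(uint16_t pdUpper, const unsigned char *page, size_t pageSize)
    {
        if (pdUpper != 0)
            return false;                   // page is in use -- validate it

        for (size_t idx = 0; idx < pageSize; idx++)
            if (page[idx] != 0)
                return false;               // pd_upper zeroed by corruption

        return true;                        // genuinely all-zero (new) page
    }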
Rather than attempting to filter page checksum failures by LSN, just retry when there is a page checksum failure. If the page has not changed since the last read report it as an error. If the page has changed, then PostgreSQL must be modifying the page so we can ignore the error because a full page write (and possibly updates) will be in the WAL.
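A sketch of the resulting decision rule, assuming both reads are available as raw buffers (the helper name is hypothetical):

    #include <stdbool.h>
    #include <string.h>

    #define PAGE_SIZE 8192

    // Report a checksum failure only when the retried read returns the same
    // invalid page; a changed page means PostgreSQL is writing it and a full
    // page write (and possibly updates) will be in the WAL
    static bool
    pageErrorReport(const unsigned char *firstRead,
                    const unsigned char *retryRead, bool retryChecksumOk)
    {
        if (retryChecksumOk)
            return false;           // the retry validated -- transient read

        if (memcmp(firstRead, retryRead, PAGE_SIZE) != 0)
            return false;           // page changed -- being written, ignore

        return true;                // identical invalid page -- real error
    }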
Also remove tests made redundant by the test merge in b4897077.
When there was an issue with the system library path during building, the build-help rule would fail while executing ./build-help, with the result that main.c would not build.
Break out help.auto.c generation from the build-help stage to allow it to be re-executed when the library path has been corrected.
Retry a WAL segment that was previously reported as missing by the archive-get command. This prevents notifications in the spool path from a prior restore from being used and possibly causing a recovery failure if consistency has not been reached.
Disabling this option allows PostgreSQL to more reliably recognize when the end of the WAL in the archive has been reached, which permits it to switch over to streaming from the primary. With retries enabled, a steady stream of WAL being archived will cause PostgreSQL to continue getting WAL from the archive rather than switch to streaming.
When disabling this option it is important to ensure that the spool path for the stanza is empty. The restore command does this automatically if the spool path is configured at restore time. Otherwise, it is up to the user to ensure the spool path is empty.
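As a hedged example of disabling it in pgbackrest.conf (shown under [global] here; per-stanza placement also works):

    [global]
    archive-missing-retry=n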
Coverity complained that this pass by value was inefficient:
    CID 376402: Performance inefficiencies (PASS_BY_VALUE)
    Passing parameter file of type "ManifestFile" (size 136 bytes) by value.
This was completely intentional since it gives us a copy of the struct that we can change without bothering the caller. However, updating fields is fine and may benefit the caller at some future date, and in any case does no harm now.
And as usual it is easier not to fight with Coverity.
Limit which files can be added to bundles, which allows resume to work reasonably well. On resume, the bundles are removed and any remaining file is eligible to be resumed.
Also reduce the bundle-size default to 20MiB. This is pretty arbitrary, but a smaller default seems better.
Bundle (combine) smaller files during backup to reduce the number of files written to the repository (enable with --bundle). Reducing the number of files is a benefit on all file systems, but especially so on object stores such as S3 that have a high file creation cost. Another benefit is that zero-length files are only stored as metadata in the manifest.
Files are batched up to bundle-size and then compressed/encrypted individually and stored sequentially in the bundle. The bundle id and offset of each file is stored in the manifest so files can be retrieved randomly without needing to read the entire bundle. Files are ordered by timestamp descending when being assigned to bundles to reduce the amount of random access that needs to be done. The idea is that bundles with older files can be read in their entirety on restore and only bundles with newer files will get fragmented.
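A hypothetical sketch of the per-file reference this implies in the manifest (field names are illustrative, not the shipped format):

    #include <stdint.h>

    // Enough information to retrieve one file from a bundle with a single
    // ranged read: bytes [bundleOffset, bundleOffset + sizeRepo) of the
    // numbered bundle object
    typedef struct ManifestFileBundle
    {
        uint64_t bundleId;          // which numbered bundle holds the file
        uint64_t bundleOffset;      // byte offset of the file in the bundle
        uint64_t sizeRepo;          // compressed/encrypted size as stored
    } ManifestFileBundle;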
Bundles are a custom format with metadata stored in the manifest. Tar was considered but it is too limited a format, the major issue being that the size of the file must be known in advance and that is very contrary to how pgBackRest works, especially once we introduce page-level incremental backups.
Bundles are stored numbered in the bundle directory. Some files may still end up in pg_data if they are added after the backup is complete. backup_label is an example.
Currently, only the backup command works in batches. The restore and verify commands use the offsets to pull individual files out of the bundle. It seems better to finalize how this is going to work before optimizing the other commands. Even as is, this is a major step forward, and all commands function with bundling.
One caveat: resume is currently not supported when bundle is enabled.
There is some evidence that retrying fatal errors, especially out of memory errors, may cause lockups. It makes sense to report fatal errors as quickly as possible and bypass retries. This may or may not fix the lockup issue but it is worth doing either way.
For now, the only fatal errors will be AssertError and MemoryError.
If the entire batch failed it would be retried, but individual file errors were not retried. This could cause pgBackRest to terminate during expiration or when removing an unresumable backup.
Rather than retry the entire batch, delete the errored files individually to take advantage of the HTTP retry rather than adding a new retry loop. These errors seem rare enough that it should not be a performance issue.
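A sketch of the fallback (the storage interface here is a stand-in for the real driver, whose remove call already retries at the HTTP level):

    #include <stddef.h>

    typedef struct Storage Storage;
    void storageRemove(Storage *storage, const char *file);     // retries HTTP

    // Delete the files the batch operation reported as errored one at a time
    // so each removal goes through the driver's normal retry logic
    static void
    storageS3DeleteErrored(Storage *storage, const char *const errorList[],
                           size_t errorTotal)
    {
        for (size_t idx = 0; idx < errorTotal; idx++)
            storageRemove(storage, errorList[idx]);
    }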
In theory, the additional stat() call after a file has been copied to the repo can determine if additional compression has been applied by the file system. However, it has been a very long time since we tested this in practice. There are currently no unit tests that accurately test this feature since it requires a compressed file system like ZFS to work, which never seemed worth the extra cost.
It can also add a lot of time to backups if there are a large quantity of small files.
In addition, it stands as a blocker for combining files for small file support since it is no longer possible to get per-file sizes from the viewpoint of the file system. There are several ways this could be reworked but none of them are easy while at the same time maintaining current info functionality.
It doesn't seem worth keeping an untested feature that will only work in some special cases (if it still works) when it is blocking development.
The most recent release of Minio has broken CI builds but there is no logging to indicate what is wrong.
For now, just use the prior release to get CI builds working again. This kind of breakage is not uncommon for Minio but they usually resolve it in the next release.
Update lock code to use standard common/io functions and module patterns. This module was developed before the common/io module existed and our patterns had stabilized.