When loading prior manifests without the new reference list, the code failed to add the current backup to the reference list. Since the current backup is never explicitly referenced, building references from the file list was not sufficient to generate a complete list.
The main problem here was a bad test, fixed in 28f6604. This masked the issue and prevented it from being found. Now it is clear in the test that the current label is missing from the reference list.
Fix by adding the current label to the reference list if a reference list is not stored in the manifest.
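A rough sketch of the load-time fix (hypothetical names and simplified types, not pgBackRest's actual Manifest/StringList API):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define REFERENCE_MAX 16

/* Simplified stand-in for the manifest's reference list */
typedef struct
{
    const char *referenceList[REFERENCE_MAX];
    unsigned int referenceTotal;
    const char *backupLabel;                            /* label of the current backup */
    bool referenceListStored;                           /* was a reference list stored in the manifest? */
} ManifestSketch;

/* Add a reference only if it is not already in the list */
static void
referenceAdd(ManifestSketch *const manifest, const char *const reference)
{
    for (unsigned int refIdx = 0; refIdx < manifest->referenceTotal; refIdx++)
        if (strcmp(manifest->referenceList[refIdx], reference) == 0)
            return;

    manifest->referenceList[manifest->referenceTotal++] = reference;
}

/* When no reference list was stored, build it from the file references -- and since
   the current backup never references itself, add its label explicitly. */
static void
manifestLoadSketch(ManifestSketch *const manifest, const char *const *const fileReference, const unsigned int fileReferenceTotal)
{
    if (!manifest->referenceListStored)
    {
        for (unsigned int fileIdx = 0; fileIdx < fileReferenceTotal; fileIdx++)
            referenceAdd(manifest, fileReference[fileIdx]);

        referenceAdd(manifest, manifest->backupLabel);
    }
}

int
main(void)
{
    const char *fileReference[] = {"20230101-000000F"};
    ManifestSketch manifest = {.backupLabel = "20230102-000000D", .referenceListStored = false};

    manifestLoadSketch(&manifest, fileReference, 1);

    for (unsigned int refIdx = 0; refIdx < manifest.referenceTotal; refIdx++)
        printf("%s\n", manifest.referenceList[refIdx]);

    return 0;
}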
Changing the label of a manifest that already had a label was not a good test. It ended up masking a bug where the current backup label was not added to the reference list on manifest load, because manifestBackupLabelSet() added the label to the reference list and so hid the omission. In fact, manifestBackupLabelSet() should never be called after a manifest load, or even after the label has been set.
Add an assertion to prevent manifestBackupLabelSet() from being called when the label is already set.
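A minimal sketch of the assertion (simplified stand-in types; the real function operates on pgBackRest's Manifest and String objects):

#include <assert.h>
#include <stddef.h>

typedef struct
{
    const char *backupLabel;
} ManifestDataSketch;

/* Setting the label after a load, or setting it twice, indicates a logic error */
static void
manifestBackupLabelSetSketch(ManifestDataSketch *const data, const char *const backupLabel)
{
    assert(backupLabel != NULL);
    assert(data->backupLabel == NULL);                  /* label must not already be set */

    data->backupLabel = backupLabel;
    /* ...the real function also adds the label to the reference list... */
}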
The bug exposed here will be fixed in a subsequent commit.
Hopefully this will make it a little clearer to the user what is wrong when they specify an indexed option without an index.
Also fix an ambiguous use of cfgParseOptionP(). The prior code worked because it set prefixMatch = true, but it was not very readable.
Bug Fixes:
* Fix memory leak in file bundle backup/restore. (Reviewed by John Morris, Oscar. Reported by Oscar.)
* Fix protocol error on short read of remote file. (Reviewed by Stephen Frost.)
Improvements:
* Do not store references for zero-length files when bundling. (Reviewed by Stefan Fercot.)
* Use more generic descriptions for pg_start_backup()/pg_stop_backup(). (Reviewed by Greg Sabino Mullane, David Christensen. Suggested by Greg Sabino Mullane.)
Test Suite Improvements:
* Update test.pl --psql-bin option to match command-line help. (Contributed by Koshi Shibagaki. Reviewed by David Steele.)
The option to specify the path to psql was shown in the command-line help as --psql-bin but the option was actually named --pgsql-bin.
Rename to match the help so they are consistent.
The magic in the header is only required so that command-line openssl will recognize the file as being encrypted. In cases where the encrypted data cannot be read with the command-line tool it makes sense to omit the header magic to save some space.
Unfortunately this cannot be enabled for file bundling because it would break backward compatibility. However, it should be possible to enable it for the combination of bundling and block incremental.
The prior code required coverage in the storage/remote module for all filters that could be used remotely.
Now the filter handlers are set at runtime so any filter list can be used with a remote. This is more flexible and makes coverage testing easier. It also resolves a test dependency.
Move the command/remote unit test near the end so it will have access to all filters without using depends.
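A generic illustration of the approach (not the actual protocol code; names are hypothetical): the remote is handed a handler table at runtime and looks filters up by name, so coverage no longer depends on which filters happen to be compiled into the remote module:

#include <stdio.h>
#include <string.h>

typedef struct
{
    const char *type;                                   /* filter name sent over the protocol */
    void (*handler)(void);                              /* constructor/handler for the filter */
} FilterHandlerSketch;

static void gzCompressNew(void) { puts("gz compress"); }
static void cryptoCipherNew(void) { puts("cipher"); }

/* The handler list is supplied by the caller at runtime, so any filter set can be
   used with a remote. */
static void
filterRun(const FilterHandlerSketch *const handlerList, const unsigned int handlerTotal, const char *const type)
{
    for (unsigned int handlerIdx = 0; handlerIdx < handlerTotal; handlerIdx++)
    {
        if (strcmp(handlerList[handlerIdx].type, type) == 0)
        {
            handlerList[handlerIdx].handler();
            return;
        }
    }

    printf("unknown filter '%s'\n", type);
}

int
main(void)
{
    const FilterHandlerSketch handlerList[] =
    {
        {.type = "gz", .handler = gzCompressNew},
        {.type = "cipher", .handler = cryptoCipherNew},
    };

    filterRun(handlerList, 2, "gz");
    return 0;
}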
This flag skips truncation when opening a file for write on drivers that support it, currently Posix and CIFS. This is convenient for cases where the file needs to be manipulated directly using the file descriptor. Using the file descriptor is not ideal and additional functionality should be added to the storage interface, but for now at least this avoids code duplication, especially on close, which updates the owner, sets the timestamp, syncs, etc.
The remote driver forbids the no-truncate flag because a file descriptor is never available for a remote storage write object.
Update two instances in the current code which benefit from this new functionality, but the primary reason for the change is to support more complex restore deltas in the upcoming block incremental feature.
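On the Posix driver the new flag essentially amounts to omitting O_TRUNC when opening the file, something like this simplified sketch (not the actual driver code; the parameter name is illustrative):

#include <fcntl.h>
#include <stdbool.h>

/* Open a file for write, optionally skipping truncation so existing content can be
   manipulated in place via the file descriptor (e.g. for restore deltas). */
static int
writeOpenSketch(const char *const name, const bool noTruncate)
{
    int flags = O_CREAT | O_WRONLY;

    if (!noTruncate)
        flags |= O_TRUNC;

    return open(name, flags, 0640);
}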
If a remote file read was stopped before the read was complete or if an error occurred in the middle of the read then the protocol would end up in a bad state and produce this error:
ProtocolError: client state is 'data-get' but expected 'idle'
Prevent this by reading the rest of the file on close() or free() to leave the protocol in an idle state for the next command.
This was a possible issue for bundling because the amount to read is known in advance and therefore EOF may not be reached. However, I was only able to reproduce this issue with unreleased code.
On error this issue would cause the original error to be lost. The process may still fail with this fix (if the error comes from another source) but hopefully we'll get better information about the original error.
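Conceptually the fix drains the remaining data when the read is closed or freed so the protocol client ends up idle again (a generic sketch with stand-in state, not the actual protocol implementation):

#include <stdbool.h>

typedef struct
{
    bool eof;                                           /* has the last data message been received? */
    unsigned int messageRemaining;                      /* stand-in for data still queued by the server */
} RemoteReadSketch;

/* Receive (and here simply discard) one data message */
static void
dataGet(RemoteReadSketch *const read)
{
    if (read->messageRemaining > 0 && --read->messageRemaining == 0)
        read->eof = true;
}

/* On close/free, keep reading until EOF so the next command does not find the
   client stuck in the 'data-get' state. */
static void
remoteReadCloseSketch(RemoteReadSketch *const read)
{
    while (!read->eof)
        dataGet(read);
}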
The names were changed in PostgreSQL 15, so update the code and docs to make the naming more generic where needed to avoid using a version-specific name in the logs and documentation.
The prior result was hex-encoded, which is not optimal. This was legacy from the interface with Perl and then the JSON protocol. The new binary protocol natively supports binary so it makes sense to use it and convert to hex where needed.
A number of these hex conversions can now be removed but that will need to be handled in another commit.
This makes it more efficient to read/write (especially read) varint-128 to/from IO.
Update the Pack type to take advantage of the more efficient read and remove some duplicate code.
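For reference, varint-128 stores seven bits per byte with the high bit marking continuation, so writing a value straight into an IO buffer looks roughly like this (standard base-128 varint encoding; function names are illustrative, not pgBackRest's exact API):

#include <stdint.h>
#include <stdio.h>

/* Encode value into buffer (which must hold at least 10 bytes for a uint64_t) and
   return the number of bytes written. */
static unsigned int
varInt128Put(uint8_t *const buffer, uint64_t value)
{
    unsigned int size = 0;

    while (value >= 0x80)
    {
        buffer[size++] = (uint8_t)(value & 0x7f) | 0x80;    /* low 7 bits, continuation bit set */
        value >>= 7;
    }

    buffer[size++] = (uint8_t)value;                        /* final byte, continuation bit clear */
    return size;
}

int
main(void)
{
    uint8_t buffer[10];
    const unsigned int size = varInt128Put(buffer, 16384);

    printf("encoded in %u bytes\n", size);                  /* 16384 encodes in 3 bytes */
    return 0;
}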
The reference list was previously built at load time from whichever references existed in the file list. This was sufficient since the list was for informational purposes only.
The block incremental feature will require a reference list that contains all prior backups, even those that are not explicitly referenced from the manifest. Therefore it makes sense to build and persist the reference list in the manifest rather than building it at load time.
This list can still be used for informational purposes, though it needs to be sorted since the list is still built at load time for older manifest versions and may not be in sorted order.
Add strLstFindIdx() to find references in the list.
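The new lookup simply returns the index of a reference in the list, along these lines (a simplified version using plain C arrays rather than pgBackRest's StringList):

#include <string.h>

/* Return the index of item in list, or (unsigned int)-1 if it is not present */
static unsigned int
strLstFindIdxSketch(const char *const *const list, const unsigned int listTotal, const char *const item)
{
    for (unsigned int listIdx = 0; listIdx < listTotal; listIdx++)
        if (strcmp(list[listIdx], item) == 0)
            return listIdx;

    return (unsigned int)-1;
}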
The prior method was to check a combination of fields to determine if a file needed to be copied, delta'd, or resumed. This was complicated and ultimately imposed a limitation on the number of operations that could be performed.
Introduce copy, delta, and resume flags in the manifest to make it clearer which operations need to be performed and to reduce complex and duplicated logic.
This also allows zero-length bundled files to be completed during manifest build rather than later on during backup processing.
The prior manifestFileUpdate() function was pretty difficult to use since all the parameters had to specified. Instead, pass a ManifestFile struct that has all members set as needed.
When new struct members are added the manifestFileUpdate() call sites will still need to be reviewed, but this should make the process of adding members a bit simpler.
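A sketch of the combined pattern (hypothetical member names, simplified types): the flags live directly in the file struct and the update function takes the whole struct rather than a long positional parameter list:

#include <stdbool.h>
#include <stdint.h>

typedef struct
{
    const char *name;
    uint64_t size;
    bool copy;                                          /* file needs to be copied */
    bool delta;                                         /* file needs to be checksummed to decide on copy */
    bool resume;                                        /* file can be reused from a resumed backup */
    const char *checksumSha1;
} ManifestFileSketch;

/* Update the stored file from the passed struct -- adding a new member means
   updating only the call sites that care about it, not every parameter list. */
static void
manifestFileUpdateSketch(ManifestFileSketch *const current, const ManifestFileSketch *const file)
{
    *current = *file;
}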
This appears to have been an oversight in 34d6495. Storing the reference is not really correct since the file is not stored in a prior backup. It also uses more space.
There is no real harm in storing the reference, since it is always ignored on restore, but the code is simpler if the zero-length files can be dealt with during the manifest and don't need additional handling later on. This is also an important part of some upcoming optimizations.
It is possible to log the bundle info correctly but the information is not useful without the backup reference, which does not appear until later. For now just omit the bundle info so we are not logging something incorrect.
This test exposes a small logging issue. The bundle information for the matched delta on PG_VERSION is not correct. This issue will be fixed in the next commit.
The information stored in the manifest *is* correct so this bug is essentially cosmetic.
The current call site, manifestFileUnpack(), does not know the total buffer size but the buffer has always been maintained in memory so there should be no corruption. However, there are upcoming use cases where the buffer will be read from IO, the buffer size will be known, and additional sanity checking on buffer overruns will be valuable.
Also rename params to align better with cvtUInt64ToVarInt128().
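The bounds-checked read might then look something like this (again standard base-128 varint decoding with a hypothetical signature, shown mainly to illustrate the bufferSize check):

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Decode a varint-128 starting at *bufferPos, never reading past bufferSize, and
   advance *bufferPos past the decoded bytes. */
static uint64_t
varInt128Get(const uint8_t *const buffer, size_t *const bufferPos, const size_t bufferSize)
{
    uint64_t result = 0;
    unsigned int shift = 0;

    for (;;)
    {
        if (*bufferPos >= bufferSize || shift > 63)
            abort();                                    /* buffer overrun or invalid encoding */

        const uint8_t byte = buffer[(*bufferPos)++];
        result |= (uint64_t)(byte & 0x7f) << shift;

        if ((byte & 0x80) == 0)
            break;

        shift += 7;
    }

    return result;
}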
Direct link creation via Posix functions has been moved to the Posix driver.
This change allows adding SFTP softlink creation in the SFTP driver using the standard interface.
Ninja produces quite a bit of output so error messages are often truncated by the default error/log buffers. Use large buffers in the test harness to capture the error even when there is a lot of output.
Ninja has introduced a --quiet option, but it is currently too new to be in any of our test distributions.
Bug Fixes:
* Fix incorrect time expiration being used for non-default repositories. (Reviewed by Stefan Fercot. Reported by Adam Brusselback.)
* Fix issue when listing directories recursively with a filter. (Reviewed by Stephen Frost. Reported by Efremov Egor.)
Features:
* Backup key/value annotations. (Contributed by Stefan Fercot. Reviewed by David Steele. Suggested by Adam Berlin.)
Improvements:
* Support --set in JSON output for info command. (Contributed by Stefan Fercot. Reviewed by David Steele. Suggested by Anton Kurochkin.)
* Update archive.info timestamps after a successful backup. (Reviewed by Stefan Fercot. Suggested by Alex Richman.)
* Move standby timeline check after checkpoint. (Reviewed by Stefan Fercot, Keith Fiske. Suggested by Keith Fiske.)
* Improve warning message on backup resume. (Suggested by Cynthia Shang.)
Documentation Improvements:
* Add absolute path for kill in pgbackrest.service. (Suggested by Don Seiler.)
While recursing and filtering, if the last entry in a directory was another directory containing entries then the parent list would get freed too early, causing a double free error or segfault.
Fix by ensuring that the completed list is at the top of the stack before freeing it. This will defer freeing parent lists until the contents of paths have been processed.
Lifecycle policies can cause the archive.info file and its copy to be removed since they are only updated on a stanza-upgrade. Update the timestamps after a successful backup to prevent this.
This does not mean that lifecycle policies should be used as a replacement for expiration. However, in some cases there may be policies in place that are outside of admin control. If the lifecycle expiration is shorter than the pgBackRest expiration then at the very least the earliest backup will be corrupted, and there may be other corruption that would make the repo unrecoverable.
An error that gets raised all the way to the top TRY block might need to free a lot of resources and any of these callbacks could throw an error and mask the original error. In fact this is pretty likely since we are already in an error state. For example, the Db object will try to close the remote db connection, but if the protocol is in a bad state it will not be able to do so.
Solve this, for now, by not freeing memory or calling callbacks in the CATCH_FATAL() block. This gives us a better chance of being able to report the error without encountering another error first.
For the most part, we don't need to worry about freeing resources (file handles, TLS contexts, etc.) if the program is going to exit immediately. However, it is important to attempt to terminate all active protocol connections, which is done by protocolFree() in main() since the protocol objects live in the top context.
Another way to handle this would be to implement an error stack and that is probably something we will do in the future. But, in the case of a segfault the original error would still be lost. Yet another option would be to still do cleanup but defer it until after the CATCH_FATAL() block.
If a repo is not specified for the expire command then the lowest repo becomes the default. The repo-retention-full value for time was being retrieved from the default rather than a specific repo which led to an incorrect expiration being applied.
Get the value from the specific repo and add a test.
It would be better if the default repo could not be queried in this case but it is not clear how to do that since the repo option is valid for expire (unlike, e.g., archive-push).
Allow key/value annotations to be added with the backup command and added/modified/removed with the new annotate command.
Annotations can be viewed with the info command in text mode when --set is specified and are always included in JSON output.
There are performance benefits to increasing the upload chunk size as long as the tradeoff with additional memory consumption is acceptable.
Make the chunk size configurable for S3, GCS, and Azure, but don't attempt to do any validation of the chunk size beyond some sane limits. The defaults remain as is for each storage type to avoid any unintentional regressions.
Catching individual fatal errors was only used in testing so the tests have been updated to use other errors instead. CATCH_FATAL() is now the only way to catch fatal errors.
This simplifies the logic a bit for upcoming changes to error handling and cleanup.
Also fix an issue where passing errorMessage() directly to THROW*() would attempt to copy the message buffer instead of preserving it, which is undefined behavior. Since there were no instances of this behavior before this commit, this was not a live bug.
All unit and performance tests are now built by the C harness.
Remove all unit/performance test build code from Perl.
Remove code from C harness that is no longer used. This code was included so the C harness could be run separately, but that is no longer needed with this full integration.
The C test harness is used for unit tests from the Perl harness where possible. Currently, unit tests can be run in the C harness when --no-coverage is specified and --profile is not specified.
C harness tests work on meson 0.45.
The C harness runs with valgrind by default. Valgrind can be disabled with --no-valgrind.
Also rebuild containers to add meson and update the documentation so that meson builds will work (even though we don't do them yet).
The standby timeline check was being performed using pg_control data loaded before the backup started. If the backup was started immediately after a promotion the standby might not have executed a checkpoint and written the new timeline to pg_control.
Instead perform the timeline check after the checkpoint is executed. This should ensure that the new timeline is in pg_control.
The prior warning made it sound as if some action was required on the part of the user.
The new message should make it clearer that this action will be performed by pgBackRest.
Build pgbackrest binary and auto-generated code automatically.
Remove --module option and allow modules to run by parameter. This is less verbose and multiple modules can be run at a time.
Allow filtering of modules. Multiple tests can be passed as parameters and if the module ends in / it will be used as a prefix filter. For example, common/ will run all the common modules.
If a test errors the remaining tests will still run but the test process will eventually exit with an error.
CI tests are included but unit tests remain on the development branch.
With these changes all unit tests run except those that specify the define (e.g. common/assert-off) or containerReq (e.g. protocol/protocol) keywords.
Building the C test harness has been simplified:
meson -Dwerror=true -Dfatal-errors=true -Dbuildtype=debug test/build/none pgbackrest
ninja -C test/build/none test/src/test-pgbackrest
To run all modules:
test/build/none/test/src/test-pgbackrest test
Just the common/error module:
test/build/none/test/src/test-pgbackrest test common/error
All info modules:
test/build/none/test/src/test-pgbackrest test info/
Add tzdata package so timezone tests in command/restore work correctly.
Mark default git path as safe. This is a security fix that is not applicable in this environment, but must be set.
Also remove package cleanup, which is inconvenient when new packages need to be installed. It makes sense for containers that will be downloaded from Dockerhub but not so much for a locally-maintained container.