The current call site, manifestFileUnpack(), does not know the total buffer size but the buffer has always been maintained in memory so there should be no corruption. However, there are upcoming use cases where the buffer will be read from IO, the buffer size will be known, and additional sanity checking on buffer overruns will be valuable.
Also rename params to align better with cvtUInt64ToVarInt128().
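A minimal sketch of the kind of bounds check this enables (function and parameter names are illustrative, not the actual pgBackRest API; THROW() is the project's error macro):

// Decode a base-128 varint, erroring rather than reading past bufferSize
static uint64_t
varInt128Decode(const uint8_t *const buffer, const size_t bufferSize, size_t *const bufferPos)
{
    uint64_t result = 0;

    for (unsigned int shift = 0;; shift += 7)
    {
        // With the buffer size known, an overrun can be detected rather than trusted away
        if (*bufferPos >= bufferSize)
            THROW(FormatError, "buffer overrun while decoding varint-128");

        const uint8_t byte = buffer[(*bufferPos)++];
        result |= (uint64_t)(byte & 0x7F) << shift;

        // A clear high bit marks the final byte
        if ((byte & 0x80) == 0)
            break;
    }

    return result;
}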
Direct link creation via Posix functions has been moved to the Posix driver.
This change allows adding SFTP softlink creation in the SFTP driver using the standard interface.
Ninja produces quite a bit of output so error messages are often truncated by the default error/log buffers. Use large buffers in the test harness to capture the error even when there is a lot of output.
Ninja has introduced a --quiet option, but it is currently too new to be in any of our test distributions.
Bug Fixes:
* Fix incorrect time expiration being used for non-default repositories. (Reviewed by Stefan Fercot. Reported by Adam Brusselback.)
* Fix issue when listing directories recursively with a filter. (Reviewed by Stephen Frost. Reported by Efremov Egor.)
Features:
* Backup key/value annotations. (Contributed by Stefan Fercot. Reviewed by David Steele. Suggested by Adam Berlin.)
Improvements:
* Support --set in JSON output for info command. (Contributed by Stefan Fercot. Reviewed by David Steele. Suggested by Anton Kurochkin.)
* Update archive.info timestamps after a successful backup. (Reviewed by Stefan Fercot. Suggested by Alex Richman.)
* Move standby timeline check after checkpoint. (Reviewed by Stefan Fercot, Keith Fiske. Suggested by Keith Fiske.)
* Improve warning message on backup resume. (Suggested by Cynthia Shang.)
Documentation Improvements:
* Add absolute path for kill in pgbackrest.service. (Suggested by Don Seiler.)
While recursing and filtering, if the last entry in a directory was another directory containing entries then the parent list would get freed too early, causing a double free error or segfault.
Fix by ensuring that the completed list is at the top of the stack before freeing it. This will defer freeing parent lists until the contents of paths have been processed.
Lifecycle policies can cause the archive.info file and its copy to be removed since they are only updated on a stanza-upgrade. Update the timestamps after a successful backup to prevent this.
This does not mean that lifecycle policies should be used as a replacement for expiration. However, in some cases there may be policies in place that are out of admin control. If the lifecycle expiration is less than the pgBackRest expiration then corruption of the earliest backup will occur at the very least and there might be other corruption which would make the repo unrecoverable.
An error that gets raised all the way to the top TRY block might need to free a lot of resources and any of these callbacks could throw an error and mask the original error. In fact this is pretty likely since we are already in an error state. For example, the Db object will try to close the remote db connection, but if the protocol is in a bad state it will not be able to do so.
Solve this, for now, by not freeing memory or calling callbacks in the CATCH_FATAL() block. This gives us a better chance of being able to report the error without encountering another error first.
For the most part, we don't need to worry about freeing resources (file handles, TLS contexts, etc.) if the program is going to exit immediately. However, it is important to attempt to terminate all active protocol connections, which is done by protocolFree() in main() since the protocol objects live in the top context.
Another way to handle this would be to implement an error stack and that is probably something we will do in the future. But, in the case of a segfault the original error would still be lost. Yet another option would be to still do cleanup but defer it until after the CATCH_FATAL() block.
If a repo is not specified for the expire command then the lowest repo becomes the default. The repo-retention-full value for time was being retrieved from the default rather than a specific repo which led to an incorrect expiration being applied.
Get the value from the specific repo and add a test.
It would be better if the default repo could not be queried in this case but it is not clear how to do that since the repo option is valid for expire (unlike, e.g., archive-push).
Allow key/value annotations to be added with the backup command and added/modified/removed with the new annotate command.
Annotations can be viewed with the info command in text mode when --set is specified and are always included in JSON output.
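Hedged usage sketch (stanza name, backup label, and annotation values are hypothetical; option spelling assumed from the feature description):

pgbackrest --stanza=demo --annotation=source=nightly-cron backup
pgbackrest --stanza=demo --set=20220505-152445F --annotation=source=adhoc annotate
pgbackrest --stanza=demo --set=20220505-152445F info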
There are performance benefits to increasing the upload chunk size as long as the tradeoff with additional memory consumption is acceptable.
Make the chunk size configurable for S3, GCS, and Azure, but don't attempt to do any validation of the chunk size beyond some sane limits. The defaults remain as is for each storage type to avoid any unintentional regressions.
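A configuration sketch, assuming the option is spelled repo-storage-upload-chunk-size (check the configuration reference for the exact name and limits):

[global]
repo1-type=s3
# Larger chunks mean fewer requests at the cost of more memory per upload
repo1-storage-upload-chunk-size=16MiB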
Catching individual fatal errors was only used in testing so the tests have been updated to use other errors instead. CATCH_FATAL() is now the only way to catch fatal errors.
This simplifies the logic a bit for upcoming changes to error handling and cleanup.
Also fix an issue where passing errorMessage() directly to THROW*() would attempt to copy the message buffer instead of preserving it, which is undefined behavior. Since there were no instances of this behavior before this commit, this was not a live bug.
All unit and performance tests are now built by the C harness.
Remove all unit/performance test build code from Perl.
Remove code from C harness that is no longer used. This code was included so the C harness could be run separately, but that is no longer needed with this full integration.
The C test harness is used for unit tests from the Perl harness where possible. Currently, unit tests can be run in the C harness when --no-coverage is specified and --profile is not specified.
C harness tests work on Meson 0.45.
The C harness runs with valgrind by default. Valgrind can be disabled with --no-valgrind.
Also rebuild containers to add Meson and update the documentation so that Meson builds will work (even though we don't do them yet).
The standby timeline check was being performed using pg_control data loaded before the backup started. If the backup was started immediately after a promotion the standby might not have executed a checkpoint and written the new timeline to pg_control.
Instead perform the timeline check after the checkpoint is executed. This should ensure that the new timeline is in pg_control.
The prior warning made it sound as if some action was required on the part of the user.
The new message should make it clearer that this action will be performed by pgBackRest.
Build pgbackrest binary and auto-generated code automatically.
Remove --module option and allow modules to run by parameter. This is less verbose and multiple modules can be run at a time.
Allow filtering of modules. Multiple tests can be passed as parameters and if the module ends in / it will be used as a prefix filter. For example, common/ will run all the common modules.
If a test errors, the remaining tests will still run, but the test process will eventually exit with an error.
CI tests are included but unit tests remain on the development branch.
With these changes all unit tests run except those that specify the define (e.g. common/assert-off) or containerReq (e.g. protocol/protocol) keywords.
Building the C test harness has been simplified:
meson -Dwerror=true -Dfatal-errors=true -Dbuildtype=debug test/build/none pgbackrest
ninja -C test/build/none test/src/test-pgbackrest
To run all modules:
test/build/none/test/src/test-pgbackrest test
Just the common/error module:
test/build/none/test/src/test-pgbackrest test common/error
All info modules:
test/build/none/test/src/test-pgbackrest test info/
Add tzdata package so timezone tests in command/restore work correctly.
Mark default git path as safe. This is a security fix that is not applicable in this environment, but must be set.
Also remove package cleanup, which is inconvenient when new packages need to be installed. It makes sense for containers that will be downloaded from Dockerhub but not so much for a locally-maintained container.
This was clearly an attempt to set the mode when creating a directory, but it never worked and instead created a "750" directory in the current working directory.
Detected when running in an environment where the current working directory was read-only.
Add harness depends when present.
Include libyaml in all test builds.
Fix mode on paths before trying to remove and set test path with mode 770 to match the Perl test harness.
With these changes all unit tests run except those that specify the define (e.g. common/assert-off), binReq (e.g. command/archive-get), or containerReq (e.g. protocol/protocol) keywords.
Builds and code generation need to be done in advance. The following commands are required for setup:
meson setup -Dwerror=true -Dfatal-errors=true -Dbuildtype=debug build pgbackrest
ninja -C build test/src/test-pgbackrest
build/src/build-code help pgbackrest
build/src/build-code postgres pgbackrest
Now tests can be run, e.g.:
build/test/src/test-pgbackrest --module=postgres/interface
Creating new binaries was convenient at first but has now become a maintenance issue.
Solve this by combining that into a single binary that takes an additional parameter to indicate which code should be built.
Also clean up path handling to make it easier to build code from the command line.
This makes the test code a bit simpler where we are listing a path but not following links.
Links in the repository can be used for testing but should never be committed to the main branch.
NOTE TO PACKAGERS: An experimental Meson build has been added but packagers should continue to use the autoconf/make build for the foreseeable future.
Improvements:
* OpenSSL 3 support. (Reviewed by Stephen Frost.)
* Create snapshot when listing contents of a path. (Reviewed by John Morris, Stephen Frost.)
* Force target-timeline=current when restore type=immediate. (Reviewed by Stephen Frost.)
* Truncate files during delta restore when they are larger than expected. (Reviewed by Stephen Frost.)
* Disable incremental manifest save when resume=n. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Set backup percent complete to zero before copy start. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Use S3 IsTruncated flag to determine list continuation. (Reviewed by John Morris, Soulou. Suggested by Christian Montagne.)
Documentation Bug Fixes:
* Skip internal options in the configuration reference. (Reported by Francisco Miguel Biete.)
Documentation Improvements:
* Add link to PostgreSQL configuration in repository host section. (Reviewed by Stefan Fercot. Suggested by Julien Cigar.)
Test Suite Improvements:
* Add experimental Meson build. (Reviewed by Eli Schwartz, Sam Bassaly.)
* Allow any path to be passed to the --test-path option. (Contributed by Andrey Sokolov. Reviewed by David Steele.)
* Fix compile error when DEBUG_EXEC_TIME is defined without DEBUG. (Contributed by Andrey Sokolov. Reviewed by David Steele.)
Explicitly set target timeline to "current" when type=immediate and PostgreSQL >= 12. We do this because type=immediate means there won't be any actual attempt to change timelines, but if we leave the target timeline as the default of "latest" then PostgreSQL might fail to restore because it can't reach the "latest" timeline in the repository from this backup.
This is really a PostgreSQL bug and will hopefully be addressed there, but we'll handle it here for older versions, at least until they aren't really seen in the wild any longer.
PostgreSQL < 12 defaults to "current" (but does not accept "current" as a parameter) so there is no need to set it explicitly.
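In terms of the recovery settings written for PostgreSQL >= 12, the effect is equivalent to this illustrative postgresql.auto.conf fragment:

# written for a type=immediate restore (PostgreSQL >= 12)
recovery_target = 'immediate'
recovery_target_timeline = 'current'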
Previously a callback was used to list path contents and if no sort was specified then a snapshot was not required. When deleting files from the path some filesystems could omit files that still existed, which meant the path could not be removed.
Filter . out of lists in the Posix driver since this special entry was only used by test code (and filtered everywhere in the core code).
Also remove callbacks from the storage interface and replace with an iterator that should be easier to use and guarantees efficient use of the snapshots.
v0.45 ships with Ubuntu 18.04, which is currently the oldest distro we support. We may never do a Meson release on Ubuntu 18.04 but this allows us to start running unit tests with Meson in the meantime.
Some more granular options are not available so we use buildtype in more places.
The check for an in-tree autoconf/make build had to be removed since the filesystem APIs are not available.
Finally, alias_target was removed. This means that full paths must be used for build targets, which does not seem too bad. For instance, test/src/test-pgbackrest must now be used as a build target instead of simply test-pgbackrest.
Coverage for these checks was dependent on the order the files were read from disk, which made the tests fragile.
Rearrange the checks and add a test that won't depend on order.
Previously we were just checking for the existence of NextContinuationToken, which the S3 documentation indicates will not be present when the list is not truncated. However, recent versions of Scality send a blank NextContinuationToken when IsTruncated is false. Sending the blank continuation token back causes Scality to send another blank continuation token and an infinite loop occurs.
Instead use IsTruncated (which is required to be present) to determine whether NextContinuationToken should be present. Error if NextContinuationToken is then missing or empty, since an empty token caused an infinite loop with the Scality server (which arguably should have errored when passed an empty token).
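A sketch of the resulting control flow, with the hypothetical xmlFieldStr() standing in for the actual XML accessors:

// IsTruncated is required by the S3 API, so use it to decide whether to continue
if (strcmp(xmlFieldStr(response, "IsTruncated"), "true") == 0)
{
    // More results exist, so the token must be present and non-empty -- sending a
    // blank token back is what caused the infinite loop with Scality
    const char *token = xmlFieldStr(response, "NextContinuationToken");

    if (token == NULL || token[0] == '\0')
        THROW(FormatError, "NextContinuationToken is missing or empty");

    // ... pass token as continuation-token on the next list request ...
}
// else the listing is complete, even if a blank NextContinuationToken was sent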
The TEST_STORAGE_LIST() macro is more robust and hides the callback mechanism from the caller.
Add features to TEST_STORAGE_LIST() that hrnStorageInfoListCallback() had.
Update tests to use the abbreviated type output (e.g. path/) generated by TEST_STORAGE_LIST().
Having the test harness in C will allow us to remove duplicated Perl code and test on systems where Perl support is not present.
Custom harnesses and shims are currently not implemented, which means only the following tests in the common module will run: error, stack-trace, type-convert, assert-on, mem-context, time, encode, type-object, type-string, type-list, type-buffer, type-variant, reg-exp, log.
The experimental test harness is being committed with partial functionality so it can be used in Windows development. The remaining features will follow as needed.
The Meson builds are still experimental so for now the configure/make build process is preferred for release builds. This message should help prevent any automated build systems from picking up Meson instead.
Some of the replacements that were being done already existed as constants, so use the constants instead.
Also fix a minor formatting error introduced when testAdd() was renamed to hrnAdd().
This module has dependencies on command/command so it does not make sense for it to be in the common module. Also move protocolFree() to main() since this is a very large dependency.
Adjust the tests so command/exit can be tested later. This is a bit messy but will get adjusted as we improve the test harness.
Both have newer gcc and OpenSSL 3.
Fedora 36 runs horribly slow with valgrind enabled so run the valgrind tests on Ubuntu 22.04. Fedora 36 has a newer gcc so it is still worth testing on.
There are two changes:
* Suppress deprecation warnings so we can build with -Werror and -Wfatal-errors. At some point we'll need to migrate to the new APIs but there does not seem to be a good reason to support two sets of code right now.
* Update the handling for unexpected EOF to handle EOF or error. The error code for EOF has changed and become harder to identify, but we probably don't care whether it is an error or EOF.
Maintaining the version interfaces was complicated by the fact that each interface needed to be in separate compilation unit to avoid type conflicts. This also meant that various build/test files needed to be updated to add the new interfaces.
Solve these problems by auto-generating all the interfaces into a single file. This is made possible by parsing defines and types out of the header files and creating macros to rename the types. At the end of the version interface everything is undef'd. Another benefit is that the auto-generated interfaces can be static and included directly into postgres/interface.c.
Since some code generation is now always required for tests, change --no-gen to --min-gen in test.pl.
It would also make sense to auto-generate the version defines in postgres/version.h, but that will be left for a future commit.
Meson is a new build system that offers simpler syntax and superior performance to autoconf/make. In addition, Windows is supported natively.
The Meson build appears complete, but currently is used only for auto-generation of code and the host build of pgbackrest. Some container upgrades will be required before Meson can be used for container builds.
Also patch the Debian package to force autoconf/make rather than Meson.
Stopping the cluster has started consistently running out of memory on PostgreSQL 9.1. This seems to have happened after pulling in new packages at some point so it might be build related.
Stopping the cluster is not critical for 9.1 so skip it.
These files were never intended to be compiled on their own so the .c extension was a bit misleading. In particular Meson does not like .c files that are not intended to be compiled independently.
Leave header files as is since they are already protected against being included more than once and are never expected to be compiled.
The manifest is saved on a regular basis during a backup so a failed backup can be resumed. For backups that the user has configured/invoked as not resumable, skip the incremental save of the manifest.
Previously the behavior was to download the file from the repository when it was not exactly the same size in PGDATA. However, it may just be that the file was extended and the contents are the same up to the file size recorded in the manifest. This could also be very valuable for files that are always append only, like logs.
Change info.size to file->size in one place. Both are technically correct but file->size makes more sense.
Use the new fileName variable in a few existing places.
Also adjust some existing comments to make them clearer.
Remove VM_OS_REPO since it is no longer required.
Rebalance PostgreSQL versions for more efficient test times.
Always print version of PostgreSQL when testing. This helps verify that new minor releases are being used.
Each mem context can track child contexts, allocations, and a callback. Before this change memory was allocated for tracking all three even if they were not used for a particular context. This made mem contexts unsuitable for String and Variant objects since they are plentiful and need to be as small as possible.
This change allows mem contexts to be configured to track any combination of child contexts, allocations, and a callback. In addition, the mem context can be configured to track a single child context and/or allocation, which saves memory and is a common use case.
Another benefit is that Variants can own objects (e.g. KeyValue) that they encapsulate. All of this makes memory accounting simpler because mem contexts have names while allocations do not. No more memory is used than before since Variants and Strings still had to store the memory context they were originally allocated in so they could be easily freed.
Update the String and Variant objects to use this new functionality. The custom strFree() and varFree() functions are no longer required and can now be a wrapper around objFree().
Lastly, this will allow strMove() and varMove() to be implemented and used in cases where strDup() and varDup() are being used to move a String or Variant to a new context. Since this will be a bit noisy it is saved for a future commit.
Because there is a lot of repetition in this file, changes can look very jumbled with existing data in a diff. Also, it can be hard to tell what is being modified if the diff does not show enough lines before and after.
This change adds labels to the end of the line to localize the diff and make it easier to see what has been changed. Also, remove some linefeeds and make separators more consistent.
The change to parse.auto.c will be committed separately so it can be ignored in history/blame.
Bug Fixes:
* Fix error thrown from FINALLY() causing an infinite loop. (Reviewed by Stephen Frost.)
* Error on all lock failures except another process holding the lock. (Reviewed by Reid Thompson, Geir Råness. Reported by Geir Råness.)
Features:
* Backup file bundling for improved small file support. (Reviewed by Reid Thompson, Stefan Fercot, Chris Bandy.)
* Verify command to validate the contents of a repository. (Contributed by Cynthia Shang, Reid Thompson. Reviewed by David Steele, Stefan Fercot.)
* PostgreSQL 15 support. (Reviewed by Stefan Fercot.)
* Show backup percent complete in info output. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Auto-select backup for restore command --type=lsn. (Contributed by Reid Thompson. Reviewed by Stefan Fercot, David Steele.)
* Suppress existing WAL warning when archive-mode-check is disabled. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Add AWS IMDSv2 support. (Contributed by Nuno Pires. Reviewed by David Steele.)
Improvements:
* Allow repo-hardlink option to be changed after full backup. (Reviewed by Reid Thompson.)
* Increase precision of percent complete logging for backup and restore. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Improve path validation for repo-* commands. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Improve stop command to honor stanza option. (Contributed by Reid Thompson. Reviewed by David Steele. Suggested by ragaoua.)
* Improve error message for invalid repo-azure-key. (Contributed by Reid Thompson. Reviewed by David Steele. Suggested by Seth Daniel.)
* Add hint to check the log on archive-get/archive-push async error. (Reviewed by Reid Thompson.)
* Add ClockError for unexpected clock skew and timezone changes. (Reviewed by Greg Sabino Mullane, Stefan Fercot. Suggested by Greg Sabino Mullane.)
* Strip extensions from history manifest before showing in error message. (Reviewed by Stefan Fercot.)
* Add user:group to lock permission error. (Reviewed by Reid Thompson.)
Documentation Bug Fixes:
* Fix incorrect reference to stanza-update in the user guide. (Fixed by Abubakar Mohammed. Reviewed by David Steele.)
* Fix example for repo-gcs-key-type option in configuration reference. (Reviewed by Reid Thompson.)
* Fix tls-server-auth example and add clarifications. (Reviewed by Reid Thompson.)
Documentation Improvements:
* Simplify messaging around supported versions in the documentation. (Reviewed by Stefan Fercot, Reid Thompson, Greg Sabino Mullane.)
* Add option type descriptions. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Add FAQ about backup types and restore speed. (Contributed by David Christensen. Reviewed by Reid Thompson.)
* Document required base branch for pull requests. (Contributed by David Christensen. Reviewed by Reid Thompson.)
If the user requested the exact repo path then strSub() would be passed an invalid start value leading to an assertion:
$ pgbackrest --stanza=test repo-ls /var/lib/pgbackrest
ASSERT: [025]: start <= this->pub.size (on dev builds)
ASSERT: [025]: string size must be <= 1073741824 bytes (on prod builds)
Fix this by checking if the requested path exactly equals the repo path and returning an empty relative path in this case.
Another issue was that invalid subpaths were not detected if they started with the repo path. For example, /var/lib/pgbackrestsub would not generate an error if the repo path was /var/lib/pgbackrest. Fix this by explicitly checking for a / between the repo path and the subpath. This also requires special handling when the repo path is /.
This is not a live bug since the issues were found in an unreleased feature introduced in 5ae84d5.
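A minimal sketch of the boundary check (variable names and error type illustrative):

const size_t repoLen = strlen(repoPath);

if (strcmp(path, repoPath) == 0)
    relativePath = "";                                          // exact match: empty relative path
else if (strncmp(path, repoPath, repoLen) == 0 &&
         (path[repoLen] == '/' || strcmp(repoPath, "/") == 0))
    relativePath = path + repoLen + (strcmp(repoPath, "/") == 0 ? 0 : 1);
else
    // catches, e.g., /var/lib/pgbackrestsub under repo path /var/lib/pgbackrest
    THROW_FMT(ParamInvalidError, "path '%s' is not in repo path '%s'", path, repoPath);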
The encrypted archive-push and repo tests were running very slowly on 32-bit with Valgrind enabled. This appears to be an issue with a newer version of Valgrind, but it has been going on long enough that bisecting does not seem to be worthwhile.
Reduce the size of the encrypted test segments where possible to improve overall test performance.
Integration expect log testing was originally used as a rough-and-ready way to make sure that certain code paths were being executed before the unit tests existed. Now that we have 100% unit test coverage (with expect log testing) the value of the integration expect tests seems minimal at best.
But they do cause numerous issues:
- Maintenance of the expect code and replacements that are required to keep logs reproducible.
- Even a trivial change can cause massive churn in the expect logs, e.g. d9088b2. These changes should be minutely audited but since the expect logs have little value now it is seldom worth the effort.
- The OS version used to do expect testing (RHEL7) can only be used to test one version of PostgreSQL. This makes it hard to balance the PostgreSQL version testing between OS versions.
- When a commit affects expect logs it is not clear (especially for new developers) how to regenerate them and our contributing guide is silent on the issue.
The goal is to migrate the integration tests to C and expect testing is not part of that plan. It seems best to get rid of them now.
Once upon a time the allocation array was allocated up front so this test was required for the top context, which did not allocate up front.
Now allocations are done on demand so this case is covered for every context that does not allocate memory.
This helps rebalance some of the tests that are running long, i.e. d9 and u20.
It would be better to move more PostgreSQL versions to d9, but the base VM does not contain more versions. New minor versions will be out later in the week so that seems a better time to be rebuilding containers.
The emulation is so slow that running all the unit tests would be too expensive, but this at least shows that the build works and some of the more complex tests run. In particular, it is good to test on one big-endian architecture to be sure that checksums are correct.
Update checksums in the tests where they had gotten out of date since the last time we were testing on s390x. Also use a different test in command/archivePushTest to show the name of the file when a checksum does not match to aid in debugging.
The command/archive-push test was updated but not included because there is also a permissions issue, which looks to be the same as what we see on macOS/FreeBSD. Hopefully we'll be able to fix all of those at the same time.
The function worked fine, but Coverity was unable to determine that the finally block was run, which led to false positives about unfreed memory.
Using a boolean in the block makes it clear to Coverity that the finally block will always be run no matter what else happens.
We'll depend on the compiler to optimize away the boolean if it is not used in a finally block. The cost of the boolean is fairly low in comparison to everything else being done in these macros, so it does not seem worth having a separate block even if the compiler is not able to eliminate the boolean.
This reverts most of 9a271e9 that fixed a bug caused by c5b5b58, which was also attempting to help Coverity understand FINALLY() blocks.
Since the packSize field is 7 bits, it could never fail the check for > 127.
The compiler will catch any packs that are larger than 7 bits and then the pack size will need to be adjusted. For now just adjust the comment to reflect what the test does and give a clearer indication of what to do when a pack grows too large.
This saves a bit of space and should not affect processing speed.
On MacOS (clang) this unexpectedly reduces the size of the binary by 16kiB but on Linux (gcc) there are no savings at all.
The separator parameter in cfgParseCommandRoleName() was useless since it was always set to :, and COLON_STR did not provide any clarity in its single other usage.
In cases where clock skew or timezone issues are preventing backup label generation the user could see an error like this:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I.manifest.gz'
This will happen if the most recent label is drawn from the history. It is cleaner (and probably less confusing) to strip off the extensions so the user sees:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I'
The order of callbacks and frees meant that memory needed during a callback (for logging in all known cases) might end up being freed before a callback needed it.
Requiring callbacks and logging to check the validity of their allocations is pretty risky and it is not clear that all possible cases have been accounted for.
Instead recursively execute all the callbacks first and then come back and recursively free the context. This is safer and it removes the need to check if a context is freeing so a simple active flag (in debug builds) will do. The caller no longer needs this information at all so remove memContextFreeing() and objMemContextFreeing().
In the JSON output the percent complete is stored as an integer equal to the percent complete * 100. So, before display it should be converted to double and divided by 100, or split using integer mod and div.
Note that percent complete will only be displayed on the host where the backup was executed. Remote hosts will show a backup/expire running with no percent complete.
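For example, a JSON value of 4567 represents 45.67%. A minimal conversion sketch:

#include <stdio.h>

int main(void)
{
    // percent complete from the JSON output is scaled by 100, e.g. 4567 means 45.67%
    const unsigned int pctScaled = 4567;

    printf("%u.%02u%%\n", pctScaled / 100, pctScaled % 100);    // prints 45.67%
    return 0;
}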
PostgreSQL 15 drops support for exclusive backup and renames the start/stop backup commands.
This is based on the pgdg-testing repo since beta1 has not been released yet, but it seems unlikely that breaking changes will be made at this point. beta1 should be tagged just before our next release so we'll retest before the release.
This column has been removed in PostgreSQL 15. Rather than add a lot of special handling, it seems better just to update all versions to not depend on this column.
Add centralized functions to identify the type of database (i.e. system or user) by name and use FirstNormalObjectId when a name is not available.
The new query in the db module will still return the prior result for PostgreSQL <= 15, which will be stored in the manifest. This is important to preserve behavior when downgrading pgBackRest. There are no concerns here for PostgreSQL 15 since older versions of pgBackRest won't be able to restore backups for PostgreSQL 15 anyway.
Any error thrown resets execution to the last setjmp(), which means that parts of the try block need to make sure they don't get run again. FINALLY() was not doing this so if it threw an error it would end up back in the FINALLY() block, where the error would likely be thrown again, causing an infinite loop.
Fix this by tracking the state of FINALLY() and only running it once. This requires cleaning the error stack like CATCH*() and clearing the error like TRY_END() depending on the order of execution.
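The hazard and the fix can be sketched with plain setjmp(), greatly simplified relative to the actual TRY/CATCH/FINALLY macros:

#include <setjmp.h>
#include <stdbool.h>

static jmp_buf jumpBuf;

void tryExample(void)
{
    // volatile so the value is preserved across longjmp()
    volatile bool finallyDone = false;

    setjmp(jumpBuf);                // an error longjmp()s execution back to this point

    // ... try/catch logic runs here and may longjmp(jumpBuf, 1) on error ...

    if (!finallyDone)
    {
        // Mark done before running the block so an error thrown from within it
        // cannot cause the block to run again (the prior infinite loop)
        finallyDone = true;

        // ... finally block ...
    }
}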
The archive-get/archive-push commands would not error for, e.g., permissions errors, when attempting to get a lock before launching the async process. Since the async process was not launched there would be no error status file and the user would get a generic failure message. Also, there would be no async log.
Refactor lockAcquireFile() to throw an error when failOnNoLock = false unless the file is locked by another process. This seems to be the original intent of this parameter and there may have been a mistake when porting from Perl. In any case it looks wrong enough to be considered a bug.
The mem context name is used to produce clearer debug errors but it has no purpose in production builds.
Also remove memContextName() and access the struct directly since the name is only used within the common/memContext module.
Note that a few errors that were thrown in production builds (and required the name) are now only thrown in debug builds. In practice we have not seen these errors in production builds due to extensive coverage so it does not seem worth modifying the error to work without the context name.
This saves some memory, which is worthwhile, but the goal is to refactor Strings and Variants to have their own mem contexts and this change will prevent them from using more memory than they are now, along with other changes that will be coming later.
If this error is thrown rather than a specific error returned from the async process, it means the async process is unable to write the status files for some reason and the only way to get the error is out of the async log.
This hint includes the exact async log path and name to make finding errors easier.
Only set -DDEBUG_MEM for the modules currently being tested rather than globally.
Also run tests in a temp mem context. Running in the top context can confuse memory accounting when a new context is created in the top context.
Reuse the section/key/value Strings by truncating them instead of creating a new one every time.
Also add an error for empty sections. This function is only used for loading info files (not config files), which should never contain an empty section.
These functions allow conversion from substrings without needing to create a String or a temporary buffer.
httpDateToTime() no longer requires a temp mem context. Also improve handling of month search to avoid an allocation.
httpUriDecode() no longer requires a temp mem context.
jsonReadStr() no longer requires a temp mem context.
pgLsnFromWalSegment() no longer requires a temp mem context.
pgVersionFromStr() no longer requires a temp mem context. Also do a bit of refactoring.
storageGcsCvtTime() no longer leaks six Strings per call.
storageS3CvtTime() no longer leaks six Strings per call.
Object variables were being allocated in the calling context rather than the object context.
This is not a live bug because Exec objects are currently created and opened in a long-lived context.
It is not clear why these were split out, but it probably had something to do with testing before storageList() could return NULL for an empty directory.
Also remove the tests that depended on a boolean return, which are no longer needed for coverage.
Previously, reading/writing JSON required parsing/rendering via a Variant, which added many memory allocations and loops.
Instead allow JSON to be read/written serially to improve performance and simplify the code. This also allows us to get rid of many String and Variant constants which are no longer required.
The goal is to be able to read/write very large (e.g. gigabyte manifest) JSON structures, which would not be practical with the current code.
Note that external JSON (GCS, S3, etc.) is still handled using variants. Converting these will require more consideration about key ordering, since it cannot be guaranteed as it is in our own formats.
This allows code to run after the return type has been generated in the case where it is an expression.
No new functionality here yet, but this will be used by a future commit that audits memory usage.
All fields should be alphabetical. Currently the read code is tolerant of this, but that will not always be the case.
Fields are always written alphabetically so this is just a test issue introduced by d8d41321.
This is not a very realistic case since archive start/stop are always written, but it appears in many other unit tests so it should also be tested here.
Packs support stronger typing than JSON and are more efficient. For the small result sets that we deal with efficiency is probably not very important, but this removes another place where we are using JSON instead of Pack.
Push checking for result struct (e.g. single row) down into PgClient since it has easy access to this information rather than needing to parse the result set to find out.
Refactor all code downstream that depends on PgClient results.
There have been some behavioral changes in libpq which require changes to the test.
Also update the instructions since it is now a bit easier to run against a real cluster.
There is no need to process the stats so a KeyValue is overkill.
Also remove the performance tests that check the stat totals since this is covered in the unit tests.
A missing field and a NULL field are not exactly the same so it seems best to test both.
Because of the way KeyValue objects work the error is the same, but that will not always be true.
The line number was one less than it should have been, which could cause some confusion.
Since this only affected ini files with JSON values, which are always written programmatically, there is almost zero chance this has ever been a problem in the field.
Previously the process id was skipped if it did not exist. Instead, throw an error and handle the errors in downstream code.
This was probably ignored at some point to provide backward-compatibility, but that is no longer required, if it ever was.
Sometimes we need to read a lock from another process. This was done two different ways and in the case of cmdStop() was definitely hacky.
Centralize the logic to make it easier to read the locks for another process. This will also make it easier to add new lock data.
When archive-mode-check is disabled and archive-push is running from multiple hosts, it is very likely that the file will already exist with the same checksum, so disable the warning.
However, if the checksums do not match, an error will still be thrown.
Using the path variable directly resulted in a path with (null) in it, which caused the remove to fail.
The pathFull variable already exists for this purpose so use it.
Determining the length of arrays that could be calculated at compile time was a bit piecemeal, with special macros used sometimes and with the math done directly other times.
This macro makes the task easier, uses less space, and automatically adjusts when the type changes.
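The macro is the standard sizeof idiom, sketched here assuming the name LENGTH_OF():

#include <stdio.h>

// Element count of a fixed-size array, computed at compile time
#define LENGTH_OF(array) (sizeof(array) / sizeof((array)[0]))

int main(void)
{
    static const char *const commandName[] = {"archive-get", "archive-push", "backup"};

    // Adjusts automatically when elements are added or the element type changes
    for (unsigned int idx = 0; idx < LENGTH_OF(commandName); idx++)
        printf("%s\n", commandName[idx]);

    return 0;
}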
Most of these looked like copy/paste from a prior required strCatFmt() call.
There is no issue here since strCatFmt() works the same in these cases, but using strCat()/strCatZ() is more efficient.
If a boolean option had an unresolved dependency then the value would be NULL, which meant the dependency would need to be checked in the code to avoid an error. For example, cfgOptionBool(cfgOptOnline) needed to be checked before it was safe to call cfgOptionBool(cfgOptArchiveCheck).
Allow a default for boolean options when they are unresolved to simplify the code. This makes using the options easier and less prone to error. Not all boolean options get a dependency default in this commit, but more may be added in the future.
In offline mode the pg_wal directory is copied, but that is not the same as archive-copy, which copies the exact set of WAL required from the archive.
This flag is purely for informational purposes so there is no live bug here, but the prior behavior was certainly misleading.
For PITR with --type=lsn, attempt to auto-select the appropriate backup set based on the --target LSN provided. Pick the most recent backup where backup-lsn-stop is less than or equal to the provided LSN.
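Hedged example (stanza and target LSN are hypothetical):

pgbackrest --stanza=demo --type=lsn --target=0/1C000028 restore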
The unit tests were ignoring stderr but nothing being output there was important. Now a test will fail if there is anything on stderr.
This makes it easier to work with -fsanitize, which outputs to stderr.
The manifest test module was setting a blank value here and causing a stack overflow because memcpy() is used instead of strcpy().
This was really just a test issue but add an assert just in case the same were to happen in production code.
Also update a bogus checksum in the integration tests to the correct length to avoid running afoul of the assert.
Found with -fsanitize=address.
If a variable assigned with STRDEF() is referenced out of scope of the STRDEF() assignment then the value is undefined.
Luckily most of the instances are in tests but there is one in the core code. It is not clear if this is a live bug or not but it certainly needs to be fixed.
Found with -fsanitize=address.
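The hazard pattern, sketched (condition and doSomething() are hypothetical; STRDEF() builds a String in stack storage):

const String *value = NULL;

if (condition)
{
    // STRDEF() places the String in storage scoped to this block...
    value = STRDEF("example");
}

// ...so by this point the storage is out of scope and dereferencing value is undefined
doSomething(value);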
If the value and multiplier were large enough then the return value could overflow unpredictably.
Check the value to make sure it will not overflow with the current multiplier.
It would be better to present an "out of range" error to the user rather than "is not valid" but it doesn't seem worth the effort since the error is extremely unlikely.
Found with -fsanitize=undefined.
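The guard is the usual pre-multiplication check (sketch, names illustrative):

#include <stdbool.h>
#include <stdint.h>

// Report failure instead of overflowing when value * multiplier would exceed UINT64_MAX
static bool
sizeMultiplyCheck(const uint64_t value, const uint64_t multiplier, uint64_t *const result)
{
    if (multiplier != 0 && value > UINT64_MAX / multiplier)
        return false;                       // caller reports the value as not valid

    *result = value * multiplier;
    return true;
}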
It is possible that a file will be truncated to zero-length after the backup manifest has been built. We could build logic into backupFile() to handle this case but it is hard to test well because of the race condition, so tests would need to be written directly against backupFile() and backupJobResult(). It hardly seems worth all that effort for a condition that occurs rarely, if ever.
Instead just remove the manifest check and add tests to restore to make sure it handles bundled zero-length files correctly. Logging will show that the file was bundled so if it happens a lot (which seems very unlikely) then we can think about an alternate implementation.
This rule was added because there were not sufficient tests to demonstrate that the repo-hardlink option could be changed in a backup set.
Remove the restriction and add/update tests to show that it works.
This is necessary now because bundling requires that hardlinking be disabled. Rather than add code complexity, it seems better just to address this limitation.
Check for invalid paths in repo-* commands and throw an error when appropriate. A path may not contain '//', trailing '/' characters are stripped, and an absolute path must fall under the repo path.
IMDSv2 provides additional security to prevent instance metadata from being read by an attacker.
All AWS instances should provide IMDSv2 but still fall back to IMDSv1 if the IMDSv2 token request fails. This is in case there are any services outside AWS that are emulating IMDSv1 but have not implemented IMDSv2.
It seems best for these to be repo options so they can be configured per repo, rather than globally.
Also clarify usage for repo-bundle-size and repo-bundle-limit.
Since files are stored sequentially in a bundle, it is often possible to restore multiple files with a single read. Previously, each restored file required a separate read. Reducing the number of reads is particularly beneficial for object stores, but performance should benefit on any file system.
Currently if there is a gap then a new read is required. In the future we might set a limit for how large a gap we'll skip without starting a new read.
Improve the stop command so that when the force and stanza options are specified it terminates only processes holding lock files for the given stanza. Previously, all processes holding lock files were terminated regardless of stanza.
For very large backups only getting an update per percent may not be often enough.
Add hundredths to the percent complete logging to provide more timely information.
Checking percentage and size in every test can cause quite a bit of churn when changes are made.
Follow the example of the backup tests and replace percentage and size after the first few tests to reduce churn.
These tests were written before the restore command was fully migrated to C so many of them have become redundant.
In the cases where they still provide coverage, add tests to synthetic restores to replace them. In general, these higher-level tests provide better coverage than poking at the restoreFile() function directly.
IMPORTANT NOTE: Repository size reported by the info command is now entirely based on what pgBackRest has written to storage. Previously, in certain cases, pgBackRest could detect if additional compression was being applied by the storage but this is no longer supported.
Bug Fixes:
* Retry errors in S3 batch file delete. (Reviewed by Reid Thompson. Reported by Alex Richman.)
* Allow case-insensitive matching of HTTP connection header values. (Reviewed by Reid Thompson. Reported by Rémi Vidier.)
Features:
* Add support for AWS S3 server-side encryption using KMS. (Contributed by Christoph Berg. Reviewed by David Steele, Tharindu Amila.)
* Add archive-missing-retry option. (Reviewed by Stefan Fercot.)
* Add backup type filter to info command. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Retry on page validation failure during backup. (Reviewed by Stephen Frost, David Christensen.)
* Handle TLS servers that do not close connections gracefully. (Reviewed by Rémi Vidier, David Christensen, Stephen Frost.)
* Add backup LSNs to info command output. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Automatically strip trailing slashes for repo-ls paths. (Contributed by David Christensen. Reviewed by David Steele.)
* Do not retry fatal errors. (Reviewed by Reid Thompson.)
* Remove support for PostgreSQL 8.3/8.4. (Reviewed by Reid Thompson, Stefan Fercot.)
* Remove logic that tried to determine additional file system compression. (Reviewed by Reid Thompson, Stefan Fercot.)
Documentation Bug Fixes:
* Move repo options in TLS documentation to the global section. (Reported by Anton Kurochkin.)
* Remove unused backup-standby option from stanza commands. (Reported by Stefan Fercot.)
* Fix typos in help and release notes. (Fixed by Daniel Gustafsson. Reviewed by David Steele.)
Documentation Improvements:
* Add aliveness check to systemd service configuration. (Suggested by Yogesh Sharma.)
* Add FAQ explaining WAL archive suffix. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Note that replication slots are not restored. (Contributed by Reid Thompson. Reviewed by David Steele, Stefan Fercot. Suggested by Christophe Courtois.)
Some TLS server implementations will simply close the socket rather than correctly closing the TLS connection. This causes problems when connection: close is specified with no content-length or chunked encoding and we are forced to read to EOF. It is hard to know if this is a real EOF or a network error.
In cases where we can parse the content and (hopefully) ensure it is correct, allow the closed socket to serve as EOF. This is not ideal, but the change in 8e1807c means that currently working servers with this issue will stop working after 2.35 is installed, which seems too risky.
This is a bit of legacy from the current Vagrant environment used to do the release, but since it is not as easy to change the user in Vagrant, just make the Docker environment conform.
This allows documentation to be built in a Vagrant environment (or any environment with the same user name) and to be deployed in a Docker environment.
Trailing slashes in at least some of the repository storage types were preventing repo-ls from displaying any content (presumably due to storage-specific behavior).
Since the path with the slash should be equivalent to the path without the slash, just remove it if provided by the user.
Checking that pd_upper == 0 is not enough since this field may be corrupted. Still use pd_upper as a quick check, but when it is zero proceed to check the rest of the page to ensure it is also all zeroes.
Rather than attempting to filter page checksum failures by LSN, just retry when there is a page checksum failure. If the page has not changed since the last read report it as an error. If the page has changed, then PostgreSQL must be modifying the page so we can ignore the error because a full page write (and possibly updates) will be in the WAL.
Also remove tests made redundant by the test merge in b4897077.
There have been cases where pgBackRest has failed on invalid XML but it is not possible to determine what was wrong with the XML.
This will only work for XML up to about 8KiB (which is the error message limit) but it should work in most cases.
Retry a WAL segment that was previously reported as missing by the archive-get command. This prevents notifications in the spool path from a prior restore from being used and possibly causing a recovery failure if consistency has not been reached.
Disabling this option allows PostgreSQL to more reliably recognize when the end of the WAL in the archive has been reached, which permits it to switch over to streaming from the primary. With retries enabled, a steady stream of WAL being archived will cause PostgreSQL to continue getting WAL from the archive rather than switch to streaming.
When disabling this option it is important to ensure that the spool path for the stanza is empty. The restore command does this automatically if the spool path is configured at restore time. Otherwise, it is up to the user to ensure the spool path is empty.
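A configuration sketch for disabling the retry (section placement illustrative):

[global]
# Do not retry WAL segments previously reported as missing so PostgreSQL can
# recognize the end of the archived WAL promptly and switch to streaming
archive-missing-retry=n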
Coverity complained that this pass by value was inefficient:
CID 376402: Performance inefficiencies (PASS_BY_VALUE)
Passing parameter file of type "ManifestFile" (size 136 bytes) by value.
This was completely intentional since it gives us a copy of the struct that we can change without bothering the caller. However, updating fields is fine and may benefit the caller at some future date, and in any case does no harm now.
And as usual it is easier not to fight with Coverity.
As much as possible it is better to get coverage with more realistic tests. Merging these modules will allow the page checksum code to be covered with real backups.
Limit which files can be added to bundles, which allows resume to work reasonably well. On resume, the bundles are removed and any remaining file is eligible to be resumed.
Also reduce the bundle-size default to 20MiB. This is pretty arbitrary, but a smaller default seems better.
Bundle (combine) smaller files during backup to reduce the number of files written to the repository (enable with --bundle). Reducing the number of files is a benefit on all file systems, but especially so on object stores such as S3 that have a high file creation cost. Another benefit is that zero-length files are only stored as metadata in the manifest.
Files are batched up to bundle-size and then compressed/encrypted individually and stored sequentially in the bundle. The bundle id and offset of each file is stored in the manifest so files can be retrieved randomly without needing to read the entire bundle. Files are ordered by timestamp descending when being assigned to bundles to reduce the amount of random access that needs to be done. The idea is that bundles with older files can be read in their entirety on restore and only bundles with newer files will get fragmented.
Bundles are a custom format with metadata stored in the manifest. Tar was considered but it is too limited a format, the major issue being that the size of the file must be known in advance and that is very contrary to how pgBackRest works, especially once we introduce page-level incremental backups.
Bundles are stored numbered in the bundle directory. Some files may still end up in pg_data if they are added after the backup is complete. backup_label is an example.
Currently, only the backup command works in batches. The restore and verify commands use the offsets to pull individual files out of the bundle. It seems better to finalize how this is going to work before optimizing the other commands. Even as is, this is a major step forward, and all commands function with bundling.
One caveat: resume is currently not supported when bundle is enabled.
There is some evidence that retrying fatal errors, especially out of memory errors, may cause lockups. It makes sense to report fatal errors as quickly as possible and bypass retries. This may or may not fix the lockup issue but it is worth doing either way.
For now, the only fatal errors will be AssertError and MemoryError.
If the entire batch failed it would be retried, but individual file errors were not retried. This could cause pgBackRest to terminate during expiration or when removing an unresumable backup.
Rather than retry the entire batch, delete the errored files individually to take advantage of the HTTP retry rather than adding a new retry loop. These errors seem rare enough that it should not be a performance issue.
In theory, the additional stat() call after a file has been copied to the repo can determine if additional compression has been applied by the file system. However, it has been a very long time since we tested this in practice. There are currently no unit tests that accurately test this feature since it requires a compressed file system like ZFS to work, which never seemed worth the extra cost.
It can also add a lot of time to backups if there are a large quantity of small files.
In addition, it stands as a blocker for combining files for small file support since it is no longer possible to get per-file sizes from the viewpoint of the file system. There are several ways this could be reworked but none of them are easy while at the same time maintaining current info functionality.
It doesn't seem worth keeping an untested feature that will only work in some special cases (if it still works) when it is blocking development.
The most recent release of Minio has broken CI builds but there is no logging to indicate what is wrong.
For now, just use the prior release to get CI builds working again. This kind of breakage is not uncommon for Minio but they usually resolve it in the next release.
Update lock code to use standard common/io functions and module patterns. This module was developed before the common/io module existed and our patterns had stabilized.
The /etc/profile.d/lang.sh script was causing issues but it does not exist on amd64, so it seems the easiest thing was to remove it.
Fix how 32-bit VMs are determined now that another 64-bit architecture has been added.
And remove some obsolete VM hashes.
Previously manifest load required two passes through the file list, one to load the data and one to set the defaults. This required each file to be packed twice.
Instead simply note that the file value is default and then set the file defaults when they are loaded from the manifest. This is made possible by the different internal/external representations for files so the same method cannot be applied to paths and links.
This change seems to resolve the performance issues noted in 61ce586 but there is no obvious reason why.
Manifests with a very large number of files can use a considerable amount of memory. There are a lot of zeroes in the data so it can be stored more efficiently by using base-128 varint encoding for the integers and storing the strings in the same allocation.
The downside is that the data needs to be unpacked in order to be used, but in most cases this seems fast enough (about 10% slower than before) except for saving the manifest, which is 10% slower up to 10 million files and then gets about 5x slower by 100 million (two minutes on my M1 Mac). Profiling does not show this slowdown so I wonder if this is related to the change in memory layout. Curiously, the function that increased most was jsonFromStrInternal(), which was not modified. That gives more weight to the idea that there is some kind of memory issue going on here and one hopes that servers would be less affected. Either way, the largest use cases we have seen are for about 6 million files, so if we can improve that case I believe we will be better off.
Further analysis showed that most of the time was taken up writing the size and timestamp fields, which makes almost no sense. The same amount of time was used if they were hard-coded to 0, which points to some odd memory issue on the M1 architecture.
This change has been planned for a while, but the particular impetus at this time is that small file support requires additional fields that would increase manifest memory usage by about 20%, even if the feature is not used.
Note that the Pack code has been updated to use the new varint encoder, but the decoder remains separate because it needs to fetch one byte at a time.
Manifest defaults for user, group, and mode were previously generated by scanning the data to find the most common values. This was very accurate but slow and complicated. It could also lead to surprising changes in the manifest when a default value suddenly changed.
Instead, use the $PGDATA path to generate defaults. In the vast majority of cases the same user/group should own all the path/files and the default file mode is easily derived from the path mode. There may be some edge cases where this generates larger manifests, but in general it reduces time and complexity when saving the manifest.
Remove the MCV code since it is no longer used.
Change the mode back to 0700 earlier to reduce churn in the expect logs.
This will be especially important in a future commit that gets the defaults exclusively from the base path.
This flag was only used by the backup command after manifestNewBuild(). There was a time when it was important for integration testing but the unit tests now fulfill this role.
Since backup is the only code concerned with the primary flag, move the code into the backup module.
We don't have any cross-version testing but this change was tested manually with the most recent version of pgBackRest to make sure it was tolerant of the missing primary info. When an older version of pgBackRest loads a newer manifest the primary flag will always be set to false, which is fine since it is not used.
Updating the manifest this way was not a great idea because it broke abstraction for the object. This meant certain changes to the interface and internals were not possible because the code was modifying internal manifest data.
Instead track the user replacements entirely in the restore module.
This also has the benefit of eliminating a pass over the manifest path/file/link lists.
AWS S3 integrates with AWS Key Management Service (AWS KMS) to provide server side encryption of S3 objects. This integration protects objects under encryption keys that never leave AWS KMS unencrypted.
The range feature allows reading out an arbitrary chunk of a file and will be important for efficient small file support.
Now that all drivers are required to support ranges, remove the storageFeatureLimitRead feature flag, which was implemented only by the Posix driver.
Do the replacement anywhere cfgOptionGroupIdxToKey() is being used to construct a group name in a message. cfgOptionGroupName() is better for this case since it also includes the name of the group so that it does not need to be repeated in each message.
Functionality to copy from IoRead to IoWrite is frequently used so centralize it. This also simplifies coverage testing in places where a loop was required before.
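Conceptually the centralized helper is a simple read/write loop. A minimal sketch approximating the common/io interfaces (the actual function name and signature may differ):

    #include "common/io/io.h"
    #include "common/io/read.h"
    #include "common/io/write.h"
    #include "common/type/buffer.h"

    // Copy everything from a read interface to a write interface with a
    // fixed-size buffer, looping until the reader reports EOF
    static void
    copyReadToWrite(IoRead *const read, IoWrite *const write)
    {
        Buffer *const buffer = bufNew(ioBufferSize());

        while (!ioReadEof(read))
        {
            ioRead(read, buffer);
            ioWrite(write, buffer);
            bufUsedZero(buffer);                    // reuse the buffer for the next block
        }
    }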
The backup LSNs are useful for performing LSN-based PITR. LSNs will not be displayed in the general text output (without --set) because they are probably not useful enough to deserve their own line.
There is no evidence that users need 8.3/8.4 anymore but it does cost us in terms of development and testing, especially now that we have a number of new backup/restore features planned.
It seems to make sense to remove this support now. If there are users who need to use/migrate from these versions they can use an older version of pgBackRest.
Bug Fixes:
* Fix restore delta link mapping when path/file already exists. (Reviewed by Reid Thompson. Reported by Younes Alhroub.)
* Fix socket leak on connection retries. (Reviewed by Reid Thompson. Reported by James Coleman.)
Features:
* Add TLS server. (Reviewed by Stephen Frost, Reid Thompson, Andrew L'Ecuyer.)
* Add --cmd option. (Contributed by Reid Thompson. Reviewed by Stefan Fercot, David Steele. Suggested by Virgile CREVON.)
Improvements:
* Check archive immediately after backup start. (Reviewed by Reid Thompson, David Christensen.)
* Add timeline and checkpoint checks to backup. (Reviewed by Stefan Fercot, Reid Thompson.)
* Check that clusters are alive and correctly configured during a backup. (Reviewed by Stefan Fercot.)
* Error when restore is unable to find a backup to match the time target. (Reviewed by Reid Thompson, Douglas J Hunley. Suggested by Douglas J Hunley.)
* Parse protocol/port in S3/Azure endpoints. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Add warning when checkpoint_timeout exceeds db-timeout. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Add verb to HTTP error output. (Contributed by Christoph Berg. Reviewed by David Steele.)
* Allow y/n arguments for boolean command-line options. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Make backup size logging exactly match info command output. (Contributed by Reid Thompson. Reviewed by David Steele. Suggested by Mahomed Hussein.)
Documentation Improvements:
* Display size option default and allowed values with appropriate units. (Reviewed by Reid Thompson.)
* Fix typos and improve documentation for the tablespace-map-all option. (Reviewed by Reid Thompson. Suggested by Reid Thompson.)
* Remove obsolete statement about future multi-repository support. (Suggested by David Christensen.)
Utilize httpUrlNewParseP() to parse endpoint and port from the URL in the S3 and Azure helpers to avoid issues where protocol was not expected to be part of the URL.
This leak was caused by the file descriptor variable getting clobbered after a long jump. Mark it as volatile to fix.
Testing this is a bit complex because the issue only happens in optimized builds, if at all. Put the test into the performance suite, which is always optimized, until a better idea presents itself.
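For reference, this is the standard C caveat with setjmp()/longjmp(): a non-volatile local modified after setjmp() has an indeterminate value after the jump, so an optimizer may keep it in a register that the jump does not restore. A standalone sketch of the pattern:

    #include <setjmp.h>
    #include <unistd.h>

    static jmp_buf jumpBuf;

    int
    main(void)
    {
        // Without volatile an optimized build may keep fd in a register and
        // restore the stale -1 after longjmp(), leaking the descriptor
        volatile int fd = -1;

        if (setjmp(jumpBuf) != 0)
        {
            // Error path: fd must still hold the descriptor so it can be closed
            if (fd != -1)
                close(fd);

            return 1;
        }

        fd = dup(STDOUT_FILENO);                    // stand-in for the real socket open
        longjmp(jumpBuf, 1);                        // simulate an error being thrown
    }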
If a path/file was remapped to a link using either --link-map or --link-all there would be no effect if the path/file already existed. If a link existed it would be properly updated, and converting a link to a path/file also worked.
The issue happened during delta cleanup, which failed to check if the existing path/file had been remapped to a link.
Add checks for newly mapped path/file links and remove the old path/file as required.
This was previously a warning but the warning is easy to miss so a lot of time may be lost restoring and recovering a backup that will not hit the target.
Since this is technically a breaking change, add an "important note" about the change to the release.
In the backup command, add a warning if start-fast is disabled and the PostgreSQL checkpoint_timeout is greater than db-timeout.
In such cases, we might time out before the checkpoint occurs and the backup really starts.
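The check is roughly this (a sketch with hypothetical names; the real code reads these values from PostgreSQL and the configuration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    // Hypothetical check: without start-fast the backup waits for the next
    // scheduled checkpoint, so warn when that wait can exceed db-timeout
    static void
    warnOnCheckpointTimeout(const bool startFast, const int64_t checkpointTimeoutMs, const int64_t dbTimeoutMs)
    {
        if (!startFast && checkpointTimeoutMs > dbTimeoutMs)
            fprintf(stderr, "WARN: checkpoint_timeout exceeds db-timeout so the backup start may time out\n");
    }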
Fail the backup if a cluster stops or the standby is promoted. Previously, shutting down the primary would cause an error but it was not detected until the end of the backup. Now the error will happen sooner and a promotion on the standby will also cause an error.
SIGHUP allows the configuration to be reloaded. Note that the configuration will not be updated in child processes that have already started.
SIGTERM terminates the server process gracefully and sends SIGTERM to all child processes. This also gives the tests an easy way to stop the server.
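A minimal sketch of the signal wiring (hypothetical handler names; the real server also coordinates shutdown with the protocol code):

    #include <signal.h>

    static volatile sig_atomic_t reloadConfig = 0;
    static volatile sig_atomic_t terminateServer = 0;

    // Handlers only set flags; the accept loop polls them between connections
    static void sigHupHandler(int signalType) { (void)signalType; reloadConfig = 1; }
    static void sigTermHandler(int signalType) { (void)signalType; terminateServer = 1; }

    static void
    serverSignalInit(void)
    {
        signal(SIGHUP, sigHupHandler);              // reload configuration
        signal(SIGTERM, sigTermHandler);            // stop gracefully and signal children
    }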
Add the following checks:
* Checkpoint is updated in pg_control after pg_start_backup(). This helps ensure that PostgreSQL and pgBackRest have a consistent view of the storage and that PGDATA paths match.
* Timeline of backup start WAL file matches pg_control. Hard to see how this one could get hit, but we have the power...
* Standby is on the same timeline as the primary. If not, this standby is not following the primary.
* Last standby checkpoint is not greater than the backup checkpoint. If so, this standby is not following the primary.
This also requires some additional plumbing to read/write timeline/checkpoint from pg_control and parse timelines from WAL filenames. There were some changes in the backup tests caused by the fact that pg_control now has different contents for each backup.
The check to ensure that the required checkpoint was reached on the standby should also be updated to use pg_control (it currently uses pg_control_checkpoint()), but that requires non-trivial changes to the test harness and will need to wait.
A CHECK() worked exactly like ASSERT() except that it was compiled into production code. However, over time many checks have been added that should not throw AssertError, which should be reserved for probable coding errors.
Allow the error code to be specified so other error types can be thrown. Also add a human-readable message since many of these could be seen by users even when there is no coding error.
Update coverage exceptions for CHECK() to match ASSERT() since all conditions will never be covered.
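Conceptually the updated macro looks something like this (a sketch using the project's THROW() style; the exact definition may differ):

    // Like ASSERT(), but always compiled in and able to throw any error type
    // with a human-readable message
    #define CHECK(type, condition, message)                                    \
        do                                                                     \
        {                                                                      \
            if (!(condition))                                                  \
                THROW(type, message);                                          \
        }                                                                      \
        while (0)

    // e.g. a bad protocol response is a protocol error, not a coding error:
    // CHECK(ProtocolError, size != 0, "invalid zero size in protocol response");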
These macros simplify management of pg_control test files.
Centralize time updates for pg_control in the command/backup module. This changed some timestamps in the test logs.
Finally, move the postgres module after the storage module so it can use storage macros.
hrnPgControlToBuffer() and hrnPgWalToBuffer() now generate the system id based on the version of Postgres. If a value less than 100 is specified for systemId then it will be added to the default system id so there can be multiple ids for a single version of PostgreSQL.
Add constants to represent version system ids in tests. These will eventually be auto-generated.
This changes some checksums and we no longer have big-endian test systems, so X those checksums out so it is obvious they are no longer valid.
Tests that run without DEBUG for performance did not have ASSERT() and were using CHECK() instead.
Instead ensure that the ASSERT() macro is always available in tests.
Eliminate summing and passing of copied file sizes for logging backup size.
Instead, utilize infoBackupDataByLabel() to pull the backup size for the log message.
This allows boolean command-line options to work like their config file equivalents.
At least for now this behavior will remain undocumented since all examples in the documentation will continue to use the standard syntax. The idea is that it will "just work" when options are copied out of config files rather than generating an error.
Previously the archive was only checked at the end of the backup to ensure all WAL required to make the backup consistent was present. The problem was that if archiving was not functioning then the backup had to complete before the user found out, which could be a while if the database was large enough.
Add an archive check immediately after backup start so failures are reported earlier.
The trick is to determine which WAL to check. If the repo is new there may not be any WAL in it and pg_start_backup() will not switch the WAL segment if it is empty. These are both likely scenarios when setting up and/or testing pgBackRest.
If the WAL segment is switched by pg_start_backup(), then check the archive for the segment that was detected prior to backup start. This should be common on normal running clusters with regular activity. Note that this might not be the segment immediately prior to the backup start segment if WAL volume is high.
If pg_start_backup() did not switch the WAL then we can force a switch on PostgreSQL >= 9.3 by creating a restore point. In that case the WAL to check will be the backup start WAL. This is most likely to happen on idle systems, during testing, or immediately after a repo switch.
An advantage of this approach, beyond earlier notification, is that the backup directory will not be created, so no resume will be attempted on the next backup.
Note that some additional churn was created in backup.c because the load of archive.info needs to be done earlier.
This is easier to read than using infoBackupDataByLabel() != NULL.
It also allows an assertion to be added to infoBackupDataByLabel() to ensure that a NULL return value is not used unsafely.
This test was lost due to a syntax issue in a58635ac.
Update the test to use system() to better mimic what postgres does and add logging so pgBackRest timing can be determined.
Properly log the size of files copied during the backup, matching the backup size returned from the info command.
In the reference issue, the incremental backup after switchover logs the size of all files evaluated rather than only the size of the files copied in the backup.
This appears to have been an attempt to not delete files that we don't recognize, but it only works in narrow cases and could leave the user in a position of not being able to complete the stanza delete without manual intervention. It seems better just to proceed with the delete, especially since the info files have already been removed.
In addition, deleting the manifests individually could be slow on object stores if there were a very large number of backups.
Size option default and allowed values were displayed in bytes, which was confusing for the user.
This also lays the groundwork for adding units to time options.
Move option parsing functions into a common module so they can be used from the build module.
Allows users to provide an executable to be used when pgbackrest generates command strings that expect to invoke pgbackrest. These generated commands are written to files by pgbackrest, e.g. recovery.conf.
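For example, a restore_command generated for recovery.conf would reference the supplied executable rather than whatever pgbackrest is found on the path (the stanza name and path here are illustrative):

    restore_command = '/usr/local/bin/pgbackrest-custom --stanza=demo archive-get %f "%p"'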
The error handler used a loop to process try, catch, and finally blocks. This worked fine but static analysis tools like Coverity did not understand that the finally block would always run and so there were false positives about double-free, unfreed resource, etc.
This implementation removes the loop, which simplifies everything, and makes it clear that the finally block will always run. This cuts down on Coverity false positives.
This implementation also catches lack of coverage on empty catch blocks so a few test fixes were committed separately in d74fe7a.
A small refactor in backup.c is required because gcc 10.3.1 on Fedora 33 complains that the reason variable may be used uninitialized. It's not clear why this is the case, but reducing the scope of the TRY block fixes the issue.
Rather than converting Strings to StringIds at runtime, store defaults in StringId format in parse.auto.c and convert user input to StringId during parsing.
The compress-type, repo-type and log-level-* options have allow lists, which means it is more efficient to treat them as StringIds.
For compress-type and log-level-* also update the functions that convert them to enums.
The strIdFrom*() functions forced the caller to pick an encoding, which led to a number of TRY...CATCH blocks in the code. In practice the caller does not care which encoding is used as long as the string is valid for some encoding.
Update the strIdFrom*() functions to try all possible encodings and only throw an error when the string is not valid for any of them.
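A sketch of the fallback idea with hypothetical helpers (the real encodings and function names differ):

    #include <stdint.h>

    typedef uint64_t StringId;                      // 64-bit encoded identifier

    // Hypothetical single-encoding helpers that return 0 rather than throwing
    // when the string cannot be represented in that encoding
    StringId strIdTryBit5(const char *str);         // smaller character set
    StringId strIdTryBit6(const char *str);         // larger character set

    // Try each encoding in turn; the caller throws only when none fit
    StringId
    strIdFromAny(const char *const str)
    {
        StringId result = strIdTryBit5(str);

        if (result == 0)
            result = strIdTryBit6(str);

        return result;
    }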
Bug Fixes:
* Allow "global" as a stanza prefix. (Reviewed by Stefan Fercot. Reported by Younes Alhroub.)
* Fix segfault on invalid GCS key file. (Reviewed by Stephen Frost. Reported by Henrik Feldt.)
Improvements:
* Allow link-map option to create new links. (Reviewed by Don Seiler, Stefan Fercot, Chris Bandy. Suggested by Don Seiler.)
* Increase max index allowed for pg/repo options to 256. (Reviewed by Cynthia Shang.)
* Add WebIdentity authentication for AWS S3. (Reviewed by James Callahan, Reid Thompson, Benjamin Blattberg, Andrew L'Ecuyer.)
* Report backup file validation errors in backup.info. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Add recovery start time to online backup restore log. (Reviewed by Tom Swartz, Stefan Fercot. Suggested by Tom Swartz.)
* Report original error and retries on local job failure. (Reviewed by Stefan Fercot.)
* Rename page checksum error to error list in info text output. (Reviewed by Stefan Fercot.)
* Add hints to standby replay timeout message. (Reviewed by Cynthia Shang, Stefan Fercot. Suggested by Leigh Downs.)
Since CentOS 8 will be EOL at the end of the year it makes sense to do this now. The centos:8 image is still used in documentation.xml because changes there require manual testing, which will need to be done at a later date. The changes are not user-facing, however, and can be done at any time.
Also update CentOS references to RHEL since that is what we are emulating for testing purposes.
Currently empty CATCH() blocks are always marked as covered because of the loop structure of error handling.
A prototype implementation of error handling without looping has shown that these CATCH() blocks are not covered without new tests. Whether or not that prototype gets committed it is worth adding the tests.
This is mostly to revert some comment changes in b11ab9f7 that would break the ppc64le patch, while keeping the spelling consistent in all comments and documentation.
Also revert some space changes for the same reason.
Azurite released another breaking change (see fbd018cd, 096829b3, c38d6926, and Azurite issue 1039) so make adjustments as needed to documentation and tests.
Also remove some dead code that hid the repo-storage-host option and was made obsolete by all these changes.
The variants were needed to easily serialize configurations for the Perl code.
Unions are more efficient and will allow us to add new types that are not supported by variants, e.g. StringId.
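An illustrative sketch of the idea (not the actual parse structures):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t StringId;

    // Illustrative only: one fixed-size slot replaces a heap-allocated,
    // type-tagged Variant; the option definition says which member is valid
    typedef union
    {
        bool boolean;
        int64_t integer;
        const char *string;
        StringId stringId;                          // a type Variant could not hold
    } ParseDefaultValue;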
These flags are used for all tests but it was not possible to add them to configure before the change in 046d6643. This is especially important for adhoc tests to ensure the flags are not forgotten.
Remove the flags from test make commands where they were being applied.
There is no change for production builds.
The TLS server is an alternative to using SSH for protocol connections to remote hosts.
This command is currently experimental and intended only for trial and testing. As such, the new commands and options will not show up in the command-line help unless directly requested.
Some tests can generate very large error messages for diffs and they often get cut off before the end.
Also fix a test so it does not create too large a buffer on the stack.