The encrypted archive-push and repo tests were running very slowly on 32-bit with Valgrind enabled. This appears to be an issue with a newer version of Valgrind, but it has been going on long enough that bisecting does not seem to be worthwhile.
Reduce the size of the encrypted test segments where possible to improve overall test performance.
This was an experiment that attempted to create structs that were immutable (at least without casting). It turned out to be a bit burdensome and required unsafe-looking casting in some cases.
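As a rough illustration of the problem (the Example struct and exampleSizeSet() below are hypothetical, not pgBackRest code), a const member blocks normal assignment, so any internal update ends up casting the qualifier away:

/*
Illustrative sketch only: the const member makes the struct immutable to normal code,
but any internal update requires casting away const, which is at best unsafe-looking
(and arguably undefined behavior).
*/
typedef struct Example
{
    const unsigned int size;                    // immutable without a cast
} Example;

static void
exampleSizeSet(Example *const this, const unsigned int size)
{
    // The "unsafe-looking casting" -- discard const to update the field
    *(unsigned int *)&this->size = size;
}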
Integration expect log testing was originally used as a rough-and-ready way to make sure that certain code paths were being executed before the unit tests existed. Now that we have 100% unit test coverage (with expect log testing) the value of the integration expect tests seems minimal at best.
But they do cause numerous issues:
- Maintenance of the expect code and replacements that are required to keep logs reproducible.
- Even a trivial change can cause massive churn in the expect logs, e.g. d9088b2. These changes should be minutely audited but since the expect logs have little value now it is seldom worth the effort.
- The OS version used to do expect testing (RHEL7) can only be used to test one version of PostgreSQL. This makes it hard to balance the PostgreSQL version testing between OS versions.
- When a commit affects expect logs it is not clear (especially for new developers) how to regenerate them and our contributing guide is silent on the issue.
The goal is to migrate the integration tests to C and expect testing is not part of that plan. It seems best to get rid of them now.
Once upon a time the allocation array was allocated up front so this test was required for the top context, which did not allocate up front.
Now allocations are done on demand so this case is covered for every context that does not allocate memory.
This helps rebalance some of the tests that are running long, i.e. d9 and u20.
It would be better to move more PostgreSQL versions to d9, but the base VM does not contain more versions. New minor versions will be out later in the week, so that seems a better time to rebuild the containers.
The emulation is so slow that running all the unit tests would be too expensive, but this at least shows that the build works and some of the more complex tests run. In particular, it is good to test on one big-endian architecture to be sure that checksums are correct.
Update checksums in the tests where they had gotten out of date since the last time we were testing on s390x. Also use a different test in command/archivePushTest so the name of the file is shown when a checksum does not match, to aid in debugging.
The command/archive-push test was updated but not included because there is also a permissions issue, which looks to be the same as what we see on MacOS/FreeBSD. Hopefully we'll be able to fix all of those at the same time.
The version ranges given in the user guides caused confusion. For example, because the user guide for RHEL specified PostgreSQL 9.6-11, users questioned whether pgBackRest worked for PostgreSQL 12 on RHEL.
Remove these ranges and add more explanatory text to the introduction to try and make it clearer how the user guides work and which versions are covered (basically all of them).
The function worked fine, but Coverity was unable to determine that the finally block was run, which led to false positives about unfreed memory.
Using a boolean in the block makes it clear to Coverity that the finally block will always be run no matter what else happens.
We'll depend on the compiler to optimize away the boolean if it is not used in a finally block. The cost of the boolean is fairly low in comparison to everything else being done in these macros, so it does not seem worth having a separate block even if the compiler is not able to eliminate the boolean.
This reverts most of 9a271e9 that fixed a bug caused by c5b5b58, which was also attempting to help Coverity understand FINALLY() blocks.
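A minimal sketch of the idea follows. These simplified macros are not the real TRY*()/FINALLY() implementation (which also handles setjmp()/longjmp() and CATCH*()); they only show how the boolean makes the finally block's execution provable:

#include <stdbool.h>

// Sketch only: the boolean declared by TRY_BEGIN() and tested by FINALLY() gives the
// analyzer an explicit condition showing the finally block always runs
#define TRY_BEGIN()                                                            \
    do                                                                         \
    {                                                                          \
        bool tryFinallyDone = false;                                           \
        (void)tryFinallyDone;       /* unused (and optimized away) when there is no FINALLY() */

#define FINALLY()                                                              \
        tryFinallyDone = true;                                                 \
                                                                               \
        if (tryFinallyDone)         // always true -- but now provable by static analysis

#define TRY_END()                                                              \
    }                                                                          \
    while (0)

// Usage: TRY_BEGIN() { /* try */ } FINALLY() { /* cleanup */ } TRY_END();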
Travis-CI is now strictly a paid service. Multiple attempts to use their "free" service have failed due to lack of community credit and general issues with their plugin.
Remove the configuration so it does not appear we are testing on Travis-CI.
Since the packSize field is 7 bits, it could never fail the check for > 127.
The compiler will catch any packs that are larger than 7 bits and then the pack size will need to be adjusted. For now just adjust the comment to reflect what the test does and give a clearer indication of what to do when a pack grows too large.
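For illustration (the struct and function here are hypothetical, not the actual parse rules), a 7-bit bitfield simply cannot hold a value greater than 127, so a runtime check can never trip while an oversized constant fails at compile time:

#include <stdbool.h>

typedef struct PackSizeSketch
{
    unsigned int packSize : 7;                  // maximum representable value is 127
} PackSizeSketch;

static bool
packSizeTooLarge(const PackSizeSketch sketch)
{
    return sketch.packSize > 127;               // always false -- the check cannot fail
}

// PackSizeSketch bad = {.packSize = 200};      // caught at compile time (overflow warning/error)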
Dividers were used in some files, but not others, and some had section names (which are hard to maintain) and others did not.
Try to make this more consistent by putting a divider in front of every section, variable block, and wherever else seems appropriate.
The idea was to make this file easier to browse and edit, but in fact it is much easier to just search for the command/option needed.
The dividers were never applied consistently and at some point we decided to get rid of the comments because they were hard to keep updated. The result was a mix of styles which did nobody any favors.
This saves a bit of space and should not affect processing speed.
On MacOS (clang) this unexpectedly reduces the size of the binary by 16kiB, but on Linux (gcc) there are no savings at all.
The separator parameter in cfgParseCommandRoleName() was useless since it was always set to : and COLON_STR did not provide any clarity in its single other usage.
Most of the time these were not making the code any clearer.
For cases where they were used to construct Strings and Buffers, replace with constants.
Also clean up unused Buffers and Strings.
In cases where clock skew or timezone issues prevent backup label generation, the user could see an error like this:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I.manifest.gz'
This will happen if the most recent label is drawn from the history. It is cleaner (and probably less confusing) to strip off the extensions so the user sees:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I'
The order of callbacks and frees meant that memory needed during a callback (for logging in all known cases) might end up being freed before a callback needed it.
Requiring callbacks and logging to check the validity of their allocations is pretty risky and it is not clear that all possible cases have been accounted for.
Instead, recursively execute all the callbacks first and then come back and recursively free the context. This is safer and removes the need to check whether a context is freeing, so a simple active flag (in debug builds) will do. The caller no longer needs this information at all, so remove memContextFreeing() and objMemContextFreeing().
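A self-contained sketch of the two-pass approach (the struct and function names below are illustrative, not the actual MemContext API):

#include <stdbool.h>
#include <stdlib.h>

typedef struct SketchContext
{
    struct SketchContext **childList;           // child contexts
    unsigned int childTotal;                    // number of children
    void (*callback)(void *data);               // cleanup callback (may be NULL)
    void *callbackData;                         // argument passed to the callback
    bool active;                                // cleared once freeing begins (debug builds in the real code)
} SketchContext;

// Pass 1: execute all callbacks while the entire tree is still allocated, so anything a
// callback logs or reads has not been freed yet
static void
sketchContextCallbackRecurse(SketchContext *const this)
{
    this->active = false;

    for (unsigned int childIdx = 0; childIdx < this->childTotal; childIdx++)
        sketchContextCallbackRecurse(this->childList[childIdx]);

    if (this->callback != NULL)
        this->callback(this->callbackData);
}

// Pass 2: free the tree only after every callback has run
static void
sketchContextFreeRecurse(SketchContext *const this)
{
    for (unsigned int childIdx = 0; childIdx < this->childTotal; childIdx++)
        sketchContextFreeRecurse(this->childList[childIdx]);

    free(this->childList);
    free(this);
}

// Freeing a context is then callbacks first, frees second
static void
sketchContextFree(SketchContext *const this)
{
    sketchContextCallbackRecurse(this);
    sketchContextFreeRecurse(this);
}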
In the JSON output the percent complete is stored as an integer of the percent complete * 100. So, before display it should be converted to double and divided by 100, or split using integer mod and div.
Note that percent complete will only be displayed on the host where the backup was executed. Remote hosts will show a backup/expire running with no percent complete.
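For example (the value here is illustrative), 4567 represents 45.67% and can be formatted either way:

#include <stdio.h>

int
main(void)
{
    const unsigned int percentComplete = 4567;  // 45.67% as stored in the JSON output

    // Convert to double and divide by 100 ...
    printf("%.2f%%\n", (double)percentComplete / 100);

    // ... or split with integer div and mod to avoid floating point
    printf("%u.%02u%%\n", percentComplete / 100, percentComplete % 100);

    return 0;
}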
PostgreSQL 15 drops support for exclusive backup and renames the start/stop backup commands.
This is based on the pgdg-testing repo since beta1 has not been released yet, but it seems unlikely that breaking changes will be made at this point. beta1 should be tagged just before our next release so we'll retest before the release.
It also makes sense to lock old PRs. They can be manually unlocked if they are needed for some reason.
Also add output logging to make it easier to determine if thread locking is completing.
The old plugin has been defunct for some time so there are currently a lot of unlocked issues.
Running this once per week seems sufficient for now. Worst case it can be run manually if it gets behind.
This column has been removed in PostgreSQL 15. Rather than add a lot of special handling, it seems better just to update all versions to not depend on this column.
Add centralized functions to identify the type of database (i.e. system or user) by name and use FirstNormalObjectId when a name is not available.
The new query in the db module will still return the prior result for PostgreSQL <= 15, which will be stored in the manifest. This is important to preserve behavior when downgrading pgBackRest. There are no concerns here for PostgreSQL 15 since older versions of pgBackRest won't be able to restore backups for PostgreSQL 15 anyway.
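A rough sketch of what such centralized checks might look like (the function names and name matching below are illustrative, not the actual implementation; FirstNormalObjectId is 16384 in PostgreSQL):

#include <stdbool.h>
#include <string.h>

// Oids below FirstNormalObjectId are reserved for system objects
#define FIRST_NORMAL_OBJECT_ID                                      16384

static bool
sketchDbIsSystemId(const unsigned int dbId)
{
    return dbId < FIRST_NORMAL_OBJECT_ID;
}

static bool
sketchDbIsSystem(const char *const dbName, const unsigned int dbId)
{
    // Identify by name when available, else fall back to the oid check
    return strcmp(dbName, "postgres") == 0 || strcmp(dbName, "template0") == 0 ||
        strcmp(dbName, "template1") == 0 || sketchDbIsSystemId(dbId);
}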
Any error thrown resets execution to the last setjmp(), which means that parts of the try block need to make sure they don't get run again. FINALLY() was not doing this so if it threw an error it would end up back in the FINALLY() block, where the error would likely be thrown again, causing an infinite loop.
Fix this by tracking the state of FINALLY() and only running it once. This requires cleaning the error stack like CATCH*() and clearing the error like TRY_END() depending on the order of execution.
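A self-contained illustration of the failure mode and the guard (this is not the real error-handling code, just the bare setjmp()/longjmp() pattern):

#include <setjmp.h>
#include <stdbool.h>
#include <stdio.h>

static jmp_buf tryJump;
static bool finallyDone;

static void
errorThrow(const char *const message)
{
    printf("error: %s\n", message);
    longjmp(tryJump, 1);                        // execution resumes at setjmp() below
}

int
main(void)
{
    finallyDone = false;

    setjmp(tryJump);                            // reached on entry and again after every throw

    if (!finallyDone)                           // run the finally code exactly once
    {
        finallyDone = true;
        errorThrow("thrown from finally");      // without the guard this would loop forever
    }

    return 0;
}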
The archive-get/archive-push commands would not error for, e.g., permissions errors when attempting to get a lock before launching the async process. Since the async process was not launched there would be no error status file and the user would get a generic failure message. Also, there would be no async log.
Refactor lockAcquireFile() to throw an error when failOnNoLock = false unless the file is locked by another process. This seems to be the original intent of this parameter and there may have been a mistake when porting from Perl. In any case it looks wrong enough to be considered a bug.
The mem context name is used to produce clearer debug errors but it has no purpose in production builds.
Also remove memContextName() and access the struct directly since the name is only used within the common/memContext module.
Note that a few errors that were thrown in production builds (and required the name) are now only thrown in debug builds. In practice we have not seen these errors in production builds due to extensive coverage, so it does not seem worth modifying the errors to work without the context name.
This saves some memory, which is worthwhile, but the goal is to refactor Strings and Variants to have their own mem contexts, and this change (along with others coming later) will prevent them from using more memory than they do now.
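A simplified sketch of the idea (the struct and field names are illustrative, not the real MemContext):

typedef struct SketchMemContext
{
#ifdef DEBUG
    const char *name;                           // used only to produce clearer debug errors
#endif

    unsigned int allocTotal;                    // allocation bookkeeping, etc.
} SketchMemContext;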