Once upon a time the allocation array was allocated up front so this test was required for the top context, which did not allocate up front.
Now allocations are done on demand so this case is covered for every context that does not allocate memory.
This helps rebalance some of the tests that are running long, e.g. d9 and u20.
It would be better to move more PostgreSQL versions to d9, but the base VM does not contain more versions. New minor versions will be out later in the week so that seems a better time to be rebuilding containers.
The emulation is so slow that running all the unit tests would be too expensive, but this at least shows that the build works and some of the more complex tests run. In particular, it is good to test on one big-endian architecture to be sure that checksums are correct.
Update checksums in the tests where they had gotten out of date since the last time we were testing on s390x. Also, to aid in debugging, use a different test in command/archivePushTest that shows the name of the file when a checksum does not match.
The command/archive-push test was updated but not included because there is also a permissions issue, which looks to be the same as what we see on MacOS/FreeBSD. Hopefully we'll be able to fix all of those at the same time.
The function worked fine, but Coverity was unable to determine that the finally block was run, which led to false positives about unfreed memory.
Using a boolean in the block makes it clear to Coverity that the finally block will always be run no matter what else happens.
We'll depend on the compiler to optimize away the boolean if it is not used in a finally block. The cost of the boolean is fairly low in comparison to everything else being done in these macros, so it does not seem worth having a separate block even if the compiler is not able to eliminate the boolean.
This reverts most of 9a271e9, which fixed a bug caused by c5b5b58; both were attempts to help Coverity understand FINALLY() blocks.
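For illustration, a minimal sketch of the idea (simplified, not pgBackRest's actual macros): the boolean is set unconditionally on the way into the finally block, so static analysis can see that the cleanup path always executes.

```c
#include <setjmp.h>
#include <stdbool.h>
#include <stdlib.h>

static jmp_buf errorJump;

// Hypothetical TRY/FINALLY macros: tryDone exists so tools like Coverity
// can prove the FINALLY() body runs on every path
#define TRY_BEGIN()                                                         \
    do                                                                      \
    {                                                                       \
        bool tryDone = false;                                               \
                                                                            \
        if (setjmp(errorJump) == 0)                                         \
        {

#define FINALLY()                                                           \
        }                                                                   \
                                                                            \
        if (!tryDone)                                                       \
        {                                                                   \
            tryDone = true;

#define TRY_END()                                                           \
        }                                                                   \
    }                                                                       \
    while (0)

int main(void)
{
    void *mem = malloc(64);

    TRY_BEGIN()
    {
        // work that might throw (longjmp back to the setjmp above)
    }
    FINALLY()
    {
        free(mem); // analysis can now see this always runs
    }
    TRY_END();

    return 0;
}
```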
Since the packSize field is 7 bits, it could never fail the check for > 127.
The compiler will catch any packs that are larger than 7 bits and then the pack size will need to be adjusted. For now just adjust the comment to reflect what the test does and give a clearer indication of what to do when a pack grows too large.
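A minimal illustration (hypothetical struct name) of why the runtime check was dead code: a 7-bit bitfield cannot represent a value greater than 127, and a constant that does not fit draws a compiler diagnostic instead.

```c
#include <stdio.h>

typedef struct
{
    unsigned int packSize : 7; // maximum representable value is 127
} PackData;

int main(void)
{
    PackData data = {.packSize = 127};

    // always false: a 7-bit field can never hold more than 127, so a
    // runtime check like this is dead code
    if (data.packSize > 127)
        puts("unreachable");

    // assigning a constant > 127 (e.g. 128) instead draws a compiler
    // warning such as -Woverflow, which is the check that matters
    printf("%u\n", data.packSize);
    return 0;
}
```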
This saves a bit of space and should not affect processing speed.
On MacOS (clang) this unexpectedly reduces the size of the binary by 16KiB but on Linux (gcc) there are no savings at all.
The separator parameter in cfgParseCommandRoleName() was useless since it was always set to ':' and COLON_STR did not provide any clarity in its single other usage.
In cases where clock skew or timezone issues are preventing backup label generation, the user could see an error like this:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I.manifest.gz'
This will happen if the most recent label is drawn from the history. It is cleaner (and probably less confusing) to strip off the extensions so the user sees:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I'
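A minimal sketch (not the actual implementation) of stripping the extensions for display:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    // the latest label may come from the history with extensions attached
    char label[] = "20220504-222042F_20220504-222141I.manifest.gz";

    // truncate at the first '.' so only the label itself is displayed
    char *dot = strchr(label, '.');

    if (dot != NULL)
        *dot = '\0';

    printf("%s\n", label); // 20220504-222042F_20220504-222141I
    return 0;
}
```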
The order of callbacks and frees meant that memory needed during a callback (for logging in all known cases) might be freed before the callback that needed it was run.
Requiring callbacks and logging to check the validity of their allocations is pretty risky and it is not clear that all possible cases have been accounted for.
Instead recursively execute all the callbacks first and then come back and recursively free the context. This is safer and it removes the need to check if a context is freeing so a simple active flag (in debug builds) will do. The caller no longer needs this information at all so remove memContextFreeing() and objMemContextFreeing().
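A rough sketch of the two-pass approach (hypothetical struct, heavily simplified; a real context has a list of children and its own allocations):

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct MemContext MemContext;

struct MemContext
{
    void (*callbackFunction)(void *);
    void *callbackArgument;
    MemContext *child;
};

// Pass 1: run callbacks depth-first while every allocation is still valid,
// so a callback that logs can safely use memory from any context
static void memContextCallbackRecurse(MemContext *this)
{
    if (this->callbackFunction != NULL)
    {
        this->callbackFunction(this->callbackArgument);
        this->callbackFunction = NULL; // never run a callback twice
    }

    if (this->child != NULL)
        memContextCallbackRecurse(this->child);
}

// Pass 2: free children, then the context itself
static void memContextFreeRecurse(MemContext *this)
{
    if (this->child != NULL)
        memContextFreeRecurse(this->child);

    free(this);
}

static void logName(void *arg)
{
    printf("callback: %s\n", (const char *)arg);
}

int main(void)
{
    MemContext *parent = calloc(1, sizeof(MemContext));
    MemContext *child = calloc(1, sizeof(MemContext));

    parent->child = child;
    child->callbackFunction = logName;
    child->callbackArgument = "child";

    memContextCallbackRecurse(parent); // callbacks first ...
    memContextFreeRecurse(parent);     // ... then all the frees

    return 0;
}
```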
In the JSON output the percent complete is stored as an integer equal to the percent complete * 100. So, before display, it should be converted to double and divided by 100, or split using integer mod and div.
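For example, the conversion for display (the scaled-integer encoding is as described above; variable names are illustrative):

```c
#include <stdio.h>

int main(void)
{
    // percent complete as stored in the JSON output: 42.53% -> 4253
    unsigned int pctComplete = 4253;

    // option 1: convert to double and divide by 100
    printf("%.2f%%\n", (double)pctComplete / 100);

    // option 2: split with integer div and mod
    printf("%u.%02u%%\n", pctComplete / 100, pctComplete % 100);

    return 0;
}
```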
Note that percent complete will only be displayed on the host where the backup was executed. Remote hosts will show a backup/expire running with no percent complete.
PostgreSQL 15 drops support for exclusive backup and renames the start/stop backup commands.
This is based on the pgdg-testing repo since beta1 has not been released yet, but it seems unlikely that breaking changes will be made at this point. beta1 should be tagged just before our next release so we'll retest before the release.
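As a sketch (hypothetical helper functions, though the PostgreSQL function names and version numbering are real), version-aware code must now select the right command name:

```c
#include <stdio.h>

// pg_start_backup/pg_stop_backup were renamed to pg_backup_start/
// pg_backup_stop in PostgreSQL 15 (version number 150000)
static const char *pgBackupStartName(unsigned int pgVersion)
{
    return pgVersion >= 150000 ? "pg_backup_start" : "pg_start_backup";
}

static const char *pgBackupStopName(unsigned int pgVersion)
{
    return pgVersion >= 150000 ? "pg_backup_stop" : "pg_stop_backup";
}

int main(void)
{
    printf("%s/%s\n", pgBackupStartName(150000), pgBackupStopName(150000));
    printf("%s/%s\n", pgBackupStartName(140000), pgBackupStopName(140000));
    return 0;
}
```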
This column has been removed in PostgreSQL 15. Rather than add a lot of special handling, it seems better just to update all versions to not depend on this column.
Add centralized functions to identify the type of database (i.e. system or user) by name and use FirstNormalObjectId when a name is not available.
The new query in the db module will still return the prior result for PostgreSQL <= 14, which will be stored in the manifest. This is important to preserve behavior when downgrading pgBackRest. There are no concerns here for PostgreSQL 15 since older versions of pgBackRest won't be able to restore backups for PostgreSQL 15 anyway.
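A minimal sketch of what such centralized helpers might look like (function names are illustrative; FirstNormalObjectId is PostgreSQL's real constant, 16384):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

// oids below FirstNormalObjectId are reserved for system objects
#define FIRST_NORMAL_OBJECT_ID 16384

// identify a system database by name ...
static bool pgDbIsSystem(const char *name)
{
    return strcmp(name, "postgres") == 0 || strcmp(name, "template0") == 0 ||
        strcmp(name, "template1") == 0;
}

// ... or by oid when a name is not available
static bool pgDbIsSystemId(unsigned int id)
{
    return id < FIRST_NORMAL_OBJECT_ID;
}

int main(void)
{
    printf("%d %d\n", pgDbIsSystem("postgres"), pgDbIsSystem("app"));
    printf("%d %d\n", pgDbIsSystemId(13427), pgDbIsSystemId(16384));
    return 0;
}
```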
Any error thrown resets execution to the last setjmp(), which means that parts of the try block need to make sure they don't get run again. FINALLY() was not doing this so if it threw an error it would end up back in the FINALLY() block, where the error would likely be thrown again, causing an infinite loop.
Fix this by tracking the state of FINALLY() and only running it once. This requires cleaning the error stack like CATCH*() and clearing the error like TRY_END() depending on the order of execution.
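A runnable sketch of the guard (simplified, not the actual macros): the volatile flag survives the longjmp(), so the finally body cannot be entered twice.

```c
#include <setjmp.h>
#include <stdbool.h>
#include <stdio.h>

static jmp_buf tryJump;

int main(void)
{
    // volatile so the value is preserved across longjmp()
    volatile bool finallyDone = false;

    if (setjmp(tryJump) == 0)
        puts("try block");

    // without this guard, an error thrown inside the finally block would
    // longjmp back to the setjmp above and re-enter the block forever
    if (!finallyDone)
    {
        finallyDone = true;
        puts("finally block");
        longjmp(tryJump, 1); // simulate an error thrown in FINALLY()
    }

    puts("error handled once, no infinite loop");
    return 0;
}
```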
The archive-get/archive-push commands would not error for, e.g., permissions errors when attempting to get a lock before launching the async process. Since the async process was not launched there would be no error status file and the user would get a generic failure message. Also, there would be no async log.
Refactor lockAcquireFile() to throw an error when failOnNoLock = false unless the file is locked by another process. This seems to be the original intent of this parameter and there may have been a mistake when porting from Perl. In any case it looks wrong enough to be considered a bug.
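A sketch of the intended behavior (illustrative code, not pgBackRest's implementation): with failOnNoLock = false only "locked by another process" is tolerated, while anything else, such as a permissions problem, is always fatal.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

static int lockAcquireFile(const char *lockPath, bool failOnNoLock)
{
    int fd = open(lockPath, O_WRONLY | O_CREAT, 0640);

    // e.g. permission denied: fatal even when failOnNoLock = false
    if (fd == -1)
    {
        perror("unable to open lock file");
        exit(1);
    }

    if (flock(fd, LOCK_EX | LOCK_NB) == -1)
    {
        bool busy = (errno == EWOULDBLOCK);
        close(fd);

        // only a lock held by another process may be ignored
        if (failOnNoLock || !busy)
        {
            fprintf(stderr, "unable to acquire lock\n");
            exit(1);
        }

        return -1; // held by another process, caller decides what to do
    }

    return fd;
}

int main(void)
{
    int fd = lockAcquireFile("/tmp/demo.lock", false);

    if (fd != -1)
        close(fd);

    return 0;
}
```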
The mem context name is used to produce clearer debug errors but it has no purpose in production builds.
Also remove memContextName() and access the struct directly since the name is only used within the common/memContext module.
Note that a few errors that were thrown in production builds (and required the name) are now only thrown in debug builds. In practice we have not seen these errors in production builds due to extensive coverage so it does not seem worth modifying the error to work without the context name.
This saves some memory, which is worthwhile, but the goal is to refactor Strings and Variants to have their own mem contexts and this change will prevent them from using more memory than they do now, along with other changes that will be coming later.
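Illustration of the debug-only field (hypothetical, much-reduced struct):

```c
#include <stdio.h>

// the context name is only compiled into debug builds, saving memory in
// production where it serves no purpose
typedef struct MemContext
{
#ifdef DEBUG
    const char *name; // used only to produce clearer debug errors
#endif
    unsigned int allocListSize;
    void **allocList;
} MemContext;

int main(void)
{
    // smaller in production builds since the name pointer is absent
    printf("sizeof(MemContext) = %zu\n", sizeof(MemContext));
    return 0;
}
```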
If this error is thrown rather than a specific error returned from the async process, it means the async process is unable to write the status files for some reason and the only way to get the error is out of the async log.
This hint includes the exact async log path and name to make finding errors easier.
Only set -DDEBUG_MEM for the modules currently being tested rather than globally.
Also run tests in a temp mem context. Running in the top context can confuse memory accounting when a new context is created in the top context.
Reuse the section/key/value Strings by truncating them instead of creating new ones every time.
Also add an error for empty sections. This function is only used for loading info files (not config files), which should never contain an empty section.
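A generic sketch of the reuse-by-truncation idea (illustrative types, not the real String API):

```c
#include <stdio.h>
#include <string.h>

typedef struct
{
    char data[64];
    size_t size;
} Str;

static void strTrunc(Str *this)
{
    this->size = 0; // keep the allocation, drop the content
    this->data[0] = '\0';
}

static void strCatZN(Str *this, const char *z, size_t size)
{
    memcpy(this->data + this->size, z, size);
    this->size += size;
    this->data[this->size] = '\0';
}

int main(void)
{
    const char *line[] = {"key1=value1", "key2=value2"};
    Str key = {.size = 0};

    for (unsigned int i = 0; i < 2; i++)
    {
        strTrunc(&key); // reused instead of reallocated per line
        strCatZN(&key, line[i], (size_t)(strchr(line[i], '=') - line[i]));
        printf("%s\n", key.data);
    }

    return 0;
}
```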
These functions allow conversion from substrings without needing to create a String or a temporary buffer (a minimal sketch follows the list of updates below).
httpDateToTime() no longer requires a temp mem context. Also improve handling of month search to avoid an allocation.
httpUriDecode() no longer requires a temp mem context.
jsonReadStr() no longer requires a temp mem context.
pgLsnFromWalSegment() no longer requires a temp mem context.
pgVersionFromStr() no longer requires a temp mem context. Also do a bit of refactoring.
storageGcsCvtTime() no longer leaks six Strings per call.
storageS3CvtTime() no longer leaks six Strings per call.
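The promised sketch (hypothetical function name, no validation):

```c
#include <stdint.h>
#include <stdio.h>

// convert a length-bounded substring to an integer without first copying
// it into a NUL-terminated buffer
static int64_t cvtZSubNToInt64(const char *buffer, size_t offset, size_t size)
{
    int64_t result = 0;

    for (size_t i = 0; i < size; i++)
        result = result * 10 + (buffer[offset + i] - '0');

    return result;
}

int main(void)
{
    const char *label = "20220504-152308F";

    // parse the month directly out of the label, no temp buffer needed
    printf("%lld\n", (long long)cvtZSubNToInt64(label, 4, 2)); // prints 5
    return 0;
}
```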
Object variables were being allocated in the calling context rather than the object context.
This is not a live bug because Exec objects are currently created and opened in a long-lived context.
It is not clear why these were split out, but it probably had something to do with testing before storageList() could return NULL for an empty directory.
Also remove the tests that depended on a boolean return, which are no longer needed for coverage.
Previously, reading/writing JSON required parsing/rendering via a Variant, which added many more memory allocations and loops.
Instead, allow JSON to be read/written serially to improve performance and simplify the code. This also allows us to get rid of many String and Variant constants which are no longer required.
The goal is to be able to read/write very large (e.g. gigabyte manifest) JSON structures, which would not be practical with the current code.
Note that external JSON (GCS, S3, etc) is still handled using variants. Converting these will require more consideration about key ordering since it cannot be guaranteed as in our own formats.
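A rough sketch of serial writing (hypothetical types, far simpler than the real module): values are appended directly to the output instead of building a Variant tree first.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct
{
    char buffer[256];
    size_t size;
    bool first; // track whether a comma is needed before the next key
} JsonWrite;

static void jsonWriteZ(JsonWrite *this, const char *z)
{
    size_t len = strlen(z);
    memcpy(this->buffer + this->size, z, len + 1);
    this->size += len;
}

static void jsonWriteKeyZ(JsonWrite *this, const char *key, const char *value)
{
    if (!this->first)
        jsonWriteZ(this, ",");

    this->first = false;
    jsonWriteZ(this, "\"");
    jsonWriteZ(this, key);
    jsonWriteZ(this, "\":\"");
    jsonWriteZ(this, value);
    jsonWriteZ(this, "\"");
}

int main(void)
{
    JsonWrite json = {.first = true};

    jsonWriteZ(&json, "{");
    jsonWriteKeyZ(&json, "label", "20220504-152308F");
    jsonWriteKeyZ(&json, "type", "full");
    jsonWriteZ(&json, "}");

    // no Variant tree was ever built
    printf("%s\n", json.buffer);
    return 0;
}
```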
This allows code to run after the return type has been generated in the case where it is an expression.
No new functionality here yet, but this will be used by a future commit that audits memory usage.
All fields should be alphabetical. Currently the read code is tolerant of out-of-order fields, but that will not always be the case.
Fields are always written alphabetically so this is just a test issue introduced by d8d41321.
This is not a very realistic case since archive start/stop are always written, but it appears in many other unit tests so it should also be tested here.
Packs support stronger typing than JSON and are more efficient. For the small result sets that we deal with, efficiency is probably not very important, but this removes another place where we are using JSON instead of Pack.
Push the check for the result structure (e.g. single row) down into PgClient, which has easy access to this information, rather than requiring the caller to parse the result set to find out.
Refactor all code downstream that depends on PgClient results.
There have been some behavioral changes in libpq which require changes to the test.
Also update the instructions since it is now a bit easier to run against a real cluster.
There is no need to process the stats so a KeyValue is overkill.
Also remove the performance tests that check the stat totals since this is covered in the unit tests.