LZ4 compresses data faster than gzip but at a lower ratio. This can be a good tradeoff in certain scenarios.
Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest.
This was the interface between Perl and C introduced in 36a5349b, but since f0ef73db it has only been used by the Perl integration tests. This is expensive code to maintain just for testing.
The main dependency was the interface to storage, no matter where it was located, e.g. S3. Replace this with the newly introduced repo commands (d3c83453) that allow access to repo storage via the command line.
The other dependency was on various cfgOption* functions and CFGOPT_ constants that were convenient but not necessary. Replace these with hard-coded strings in most places and create new constants for commonly used values.
Remove all auto-generated Perl code. This means that the error list will no longer be maintained automatically so copy used errors to Common::Exception.pm. This file will need to be maintained manually going forward but there is not likely to be much churn as the Perl integration tests are being retired.
Update test.pl and related code to remove LibC builds.
Ding, dong, LibC is dead.
These commands are generally useful but more importantly they allow removing LibC by providing the Perl integration tests an alternate way to work with repository storage.
All the commands are currently internal only and should not be used on production repositories.
If the command was passed a file it would return no results, since it was originally intended to list files only when passed a path.
However, as a general-purpose command, working directly with files makes sense.
This command only makes sense for the repository storage since other storage (e.g. pg and spool) must be located on a local Posix filesystem and can be listed using standard unix commands. Since the repo storage can be located in many places, having a common way to list it makes sense.
Prefix with repo- to make the scope of this command clear.
Update documentation to reflect this change.
Add compress-type option and deprecate compress option. Since the compress option is boolean it won't work with multiple compression types. Add logic to cfgLoadUpdateOption() to update compress-type if it is not set directly. The compress option should no longer be referenced outside the cfgLoadUpdateOption() function.
Add common/compress/helper module to contain interface functions that work with multiple compression types. Code outside this module should no longer call specific compression drivers, though it may be OK to reference a specific compression type using the new interface (e.g., saving backup history files in gz format).
Unit tests only test compression using the gz format because other formats may not be available in all builds. It is the job of integration tests to exercise all compression types.
Additional compression types will be added in future commits.
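As a rough sketch of how these pieces fit together (the enum and function names below are illustrative only, not the actual helper API):

    #include <stdbool.h>

    /* Hypothetical sketch: a compression type enum owned by the helper and the
       fallback from the deprecated boolean compress option to compress-type.
       In the real code the fallback lives in cfgLoadUpdateOption(). */
    typedef enum
    {
        compressTypeNone,
        compressTypeGz,                     /* the only type exercised by unit tests */
        compressTypeLz4,                    /* additional types arrive in later commits */
    } CompressType;

    static CompressType
    compressTypeEffective(bool compressTypeSet, CompressType compressType, bool compressLegacy)
    {
        /* compress-type wins whenever it was set directly */
        if (compressTypeSet)
            return compressType;

        /* otherwise fall back to the boolean compress option: y -> gz, n -> none */
        return compressLegacy ? compressTypeGz : compressTypeNone;
    }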
All the methods in this module will need to be implemented via the command-line in order to get rid of LibC, so the first step is to reduce the code in the module as much as possible.
First remove storageDb() and use storageTest() instead. Then create storageTest() using pgBackRestTest::Common::Storage which has no dependencies on LibC. Now the only storage using the LibC interface is storageRepo().
Remove all link functions since those operations cannot be performed on a repo unless it is Posix, in which case the LibC interface is not needed. Same for owner().
Remove pathSync() because syncs are not required in the tests. No test data is reused after a crash.
Path create/exists functions should never be explicitly performed on a repo so remove those. File exists can be implemented by calling info() instead.
Remove the encryption detection functions, which were only used by the now-obsolete Backup/Archive::Info reconstruct() functions.
Remove all filters except pgBackRest::Storage::Filter::CipherBlock since they are not being used. That also means there are no filters returning results so remove all the result code.
Move hashSize() and pathAbsolute() into pgBackRest::Storage::Base where they can be shared between pgBackRest::Storage::Storage and pgBackRestTest::Common::Storage.
This was mostly dead code, except for the DB_BACKUP_ADVISORY_LOCK constant, which moved to the real/all test module, and the function that pulls info from pg_control, which moved to ExpireEnvTest.pm.
The postgres/pageChecksum module was designed as an interface to the C structs for the Perl code. The new C code can do this directly so no need for an interface.
Move the remaining test for pgPageChecksum() into the postgres/interface test module.
We were using a customized version which worked fine but was hard to merge with upstream changes. Now this code is maintained much like the types in static.auto.h that we copy and check with each release.
The goal is to eventually build directly against PostgreSQL (either source or libcommon) and this brings us one step closer.
All-zero pages should not have checksums. Not only is this test invalid but it will not work with the stock page checksum implementation in PostgreSQL, which checks for zero pages. Since we will be using that code verbatim soon this test needs to go.
Using static values serves as a better cross-check against the page checksum code. The downside is that these checksums may not work on some big-endian systems, but in that case neither will the unit tests.
We can also remove the page checksum interface from LibC which brings us one step closer to eliminating it.
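A minimal sketch of the zero-page rule mentioned above (simplified; the stock PostgreSQL code performs this check as part of page verification):

    #include <stdbool.h>
    #include <stddef.h>

    /* Sketch: a page that is entirely zero is treated as valid without checking
       the checksum, mirroring the stock PostgreSQL behavior. */
    static bool
    pageIsZero(const unsigned char *page, size_t pageSize)
    {
        for (size_t idx = 0; idx < pageSize; idx++)
            if (page[idx] != 0)
                return false;

        return true;
    }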
pgPageChecksum() must modify the page header in order to calculate the checksum. The modification is temporary but make it clear that it happens by removing the const.
Also make a note about our non-entirely-kosher usage of a const Buffer in the PageChecksum filter. This is safe as currently coded but at the least we need to be aware of what is going on.
Page size is passed around a lot but in fact it can only have one value, PG_PAGE_SIZE_DEFAULT, which is checked when pg_control is loaded. There may be an argument for supporting multiple page sizes in the future but for now just use the constant to simplify the code.
There is also a significant performance benefit. Because pageSize was being used in pageChecksumBlock() the main loop was neither unrolled nor vectorized (-funroll-loops -ftree-vectorize) as it is now with a constant loop boundary.
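A simplified sketch of both points, the temporary header modification and the constant loop bound (this is not the pgBackRest or PostgreSQL checksum code; the mixing step is a placeholder):

    #include <stdint.h>
    #include <string.h>

    #define PG_PAGE_SIZE_DEFAULT 8192

    /* Sketch: (1) pd_checksum (at offset 8 in the page header) is cleared while
       the checksum is calculated and then restored, so the page cannot be const,
       and (2) the loop bound is the constant PG_PAGE_SIZE_DEFAULT, which lets the
       compiler unroll and vectorize the loop. */
    static uint16_t
    pageChecksumSketch(unsigned char *page, uint32_t blockNo)
    {
        uint16_t checksumSaved;
        memcpy(&checksumSaved, page + 8, sizeof(checksumSaved));
        memset(page + 8, 0, sizeof(checksumSaved));

        uint32_t result = blockNo;

        for (unsigned int wordIdx = 0; wordIdx < PG_PAGE_SIZE_DEFAULT / sizeof(uint32_t); wordIdx++)
        {
            uint32_t word;
            memcpy(&word, page + wordIdx * sizeof(uint32_t), sizeof(word));
            result = (result ^ word) * 0x01000193;          /* placeholder mix, not the real algorithm */
        }

        memcpy(page + 8, &checksumSaved, sizeof(checksumSaved));
        return (uint16_t)(result % 65535 + 1);
    }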
This function made validation faster in Perl because fewer calls (and buffer transformations) were required when all checksums were valid.
In C calling pageChecksumTest() directly is just as efficient so there is no longer a need for pageChecksumBufferTest().
These data structures were copied in a few places (but only once in the core code) so put them in a place where everyone can use them.
To do this create a new file, static.auto.h, to contain data types and macros that have stayed the same through all the versions of PostgreSQL that we support. This allows us to have a single, non-versioned set of headers and code for stable data structures like page headers.
Migrate a few types from version.auto.h that are required for page header structures and pull the remaining types from PostgreSQL directly.
We had previously renamed xlog to wal so update those where required since we won't be modifying the PostgreSQL names anymore.
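For reference, the stable page header types in question mirror PostgreSQL's bufpage.h; they are shown here in abbreviated form (the line pointer array is omitted), not as a verbatim copy of static.auto.h:

    #include <stdint.h>

    /* Abbreviated sketch of the PostgreSQL page header types. These have not
       changed across the supported PostgreSQL versions, which is what makes a
       single non-versioned header possible. */
    typedef struct
    {
        uint32_t xlogid;                    /* high bits of the page's last WAL LSN */
        uint32_t xrecoff;                   /* low bits */
    } PageXLogRecPtr;

    typedef struct
    {
        PageXLogRecPtr pd_lsn;              /* LSN of last change to this page */
        uint16_t pd_checksum;               /* page checksum, if checksums are enabled */
        uint16_t pd_flags;                  /* flag bits */
        uint16_t pd_lower;                  /* offset to start of free space */
        uint16_t pd_upper;                  /* offset to end of free space */
        uint16_t pd_special;                /* offset to start of special space */
        uint16_t pd_pagesize_version;       /* page size and layout version */
        uint32_t pd_prune_xid;              /* oldest prunable XID, or zero if none */
    } PageHeaderData;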
The S3 driver depends on being able to generate a common prefix to limit the number of results from list commands, which saves on bandwidth.
The prior implementation could be tricked by an expression like ^ABC|^DEF where there is more than one possible prefix. To fix this, disallow any prefix when another ^ anchor is found in the expression ([^ and \^ are OK since they are not anchors).
Note that this was not an active bug because there are currently no expressions with multiple ^ anchors.
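A sketch of the prefix rule (the function name and the set of terminating metacharacters are illustrative, not the exact implementation):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Sketch: derive the length of a literal prefix from an anchored regular
       expression so a storage listing (e.g. S3) can be limited. Returns 0 when
       no safe prefix exists. */
    static size_t
    regExpPrefixLenSketch(const char *expression)
    {
        /* A prefix is only possible when the expression starts with a ^ anchor */
        if (expression == NULL || expression[0] != '^')
            return 0;

        size_t prefixLen = strlen(expression) - 1;      /* assume all literal until proven otherwise */
        bool terminated = false;

        for (size_t idx = 1; expression[idx] != '\0'; idx++)
        {
            char last = expression[idx - 1];

            /* Another ^ anchor (but not [^ or \^) means more than one prefix is
               possible, so no prefix may be used at all */
            if (expression[idx] == '^' && last != '[' && last != '\\')
                return 0;

            /* The literal prefix ends at the first metacharacter */
            if (!terminated && strchr(".*+?()[]{}|\\$", expression[idx]) != NULL)
            {
                prefixLen = idx - 1;
                terminated = true;
            }
        }

        return prefixLen;
    }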
The restore test function was passing strBackup to the restoreCompare function, but when the restore is expected to pick a backup based on a timestamp, strBackup may not be the backup chosen.
Modify the code so that strBackupExpected is set based on the parameters passed to the function and pass that to restoreCompare.
A sleep before retrieving the timestamp was necessary because a sleep only afterward did not always work; the sleep before ensures the backup stop time of the previous backup does not equal time-recovery-timestamp. The sleep after allows enough time between the timestamp retrieval and dropping important_table so PostgreSQL can consistently recover to a point before the table drop.
Note that these issues were caused by picking a timestamp too close to the restore command or a database operation, not by any problem with backup selection in the restore command.
This option was used for boolean testing but it will soon be deprecated and the semantics changed. To reduce churn it seems easiest to just use other options for testing. This will also be helpful when the option is eventually removed.
These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options.
This was a minor optimization used in protocol layer compression. Even though it was slightly faster, it omitted the crc-32 that is generated during normal compression, which could lead to corrupt data after a bad network transmission. This would be caught on restore by our checksum, but it seems better to catch an issue like this early.
The raw option also made the function signature different than future compression formats which may not support raw, or require different code to support raw.
In general, it doesn't seem worth the extra testing to support a format that has minimal benefit and is seldom used, since protocol compression is only enabled when the transmitted data is uncompressed.
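For context, with zlib the difference is just the windowBits value passed to deflateInit2(): the gzip wrapper adds a header and crc-32 trailer while raw deflate has neither. A sketch assuming zlib (the caller is expected to zero-initialize the stream):

    #include <zlib.h>

    /* Sketch: gzip mode (windowBits + 16) includes the header and crc-32
       trailer; raw mode (negative windowBits) omits both, which is the small
       saving the removed option provided. */
    static int
    deflateInitSketch(z_stream *stream, int level, int raw)
    {
        const int windowBits = raw ? -15 : 15 + 16;

        return deflateInit2(stream, level, Z_DEFLATED, windowBits, 8, Z_DEFAULT_STRATEGY);
    }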
"gz" was used as the extension but "gzip" was generally used for function and type naming.
With a new compression format on the way, it makes sense to standardize on a single abbreviation to represent a compression format in the code. Since the extension is standard and we must use it, also use the extension for all naming.
The prior code used TRY...CATCH blocks to cleanup mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized when the error occurred and ensuring that the prior memory context was restored.
This worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.
Instead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.
One bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros.
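A stripped-down sketch of the idea with hypothetical names (the real implementation in common/memContext.c is considerably more involved):

    #include <stddef.h>

    /* Sketch: new contexts and context switches are pushed onto a stack. On
       success they are popped cheaply; only when an error is thrown is the stack
       unwound to free half-initialized contexts and restore the prior current
       context. */
    typedef enum {stackItemNew, stackItemSwitch} StackItemType;

    typedef struct
    {
        StackItemType type;
        void *context;                      /* new context, or the context to switch back to */
    } StackItem;

    static StackItem stack[128];
    static unsigned int stackTop;

    static void contextFree(void *context) {(void)context;}      /* stand-in for the real free */
    static void contextRestore(void *context) {(void)context;}   /* stand-in for switching back */

    static void
    stackPush(StackItemType type, void *context)
    {
        stack[stackTop++] = (StackItem){.type = type, .context = context};
    }

    static void
    stackPop(void)
    {
        stackTop--;                         /* success path: nothing else to do */
    }

    static void
    stackUnwind(unsigned int stackTopPrior)
    {
        /* error path: undo everything pushed since the failing operation began */
        while (stackTop > stackTopPrior)
        {
            stackTop--;

            if (stack[stackTop].type == stackItemNew)
                contextFree(stack[stackTop].context);
            else
                contextRestore(stack[stackTop].context);
        }
    }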
Bug Fixes:
* Prevent defunct processes in asynchronous archive commands. (Reviewed by Stephen Frost. Reported by Adam Brusselback, ejberdecia.)
* Error when archive-get/archive-push/restore are not run on a PostgreSQL host. (Reviewed by Stephen Frost. Reported by Jesper St John.)
* Read HTTP content to eof when size/encoding not specified. (Reviewed by Cynthia Shang. Reported by Christian ROUX.)
* Fix resume when the resumable backup was created by Perl. In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully. (Reported by Kacey Holston.)
Features:
* Auto-select backup set on restore when time target is specified. Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used. (Contributed by Cynthia Shang.)
Improvements:
* Skip pg_internal.init temp file during backup. (Reviewed by Cynthia Shang. Suggested by Michael Paquier.)
* Add more validations to the manifest on backup. (Reviewed by Cynthia Shang.)
Documentation Improvements:
* Prevent lock-bot from adding comments to locked issues. (Suggested by Christoph Berg.)
If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.
This is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file.
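The check amounts to matching the filename prefix, since the leftover temp file keeps the pg_internal.init name with the pid appended (a sketch; the constant name is illustrative and the real exclusion is applied when the backup file list is built):

    #include <stdbool.h>
    #include <string.h>

    #define PG_FILE_PGINTERNALINIT "pg_internal.init"

    /* Sketch: true when a file should be skipped because it is pg_internal.init
       or a crash-leftover temp copy such as pg_internal.init.12345 */
    static bool
    pgInternalInitSkip(const char *fileName)
    {
        return strncmp(fileName, PG_FILE_PGINTERNALINIT, strlen(PG_FILE_PGINTERNALINIT)) == 0;
    }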
This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.
Restore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required.
The main improvement is a double-fork to prevent zombie processes if the parent process exits after the (child) async process. This is a real possibility since the parent process sticks around to monitor the results of the async process.
In the first fork, ignore SIGCHLD to handle the very unlikely case that the async process exits before the first forked process does. This is probably only possible if the async process exits immediately, perhaps due to a chdir() failure. Set SIGCHLD back to default in the async process so waitpid() will work as expected.
Also update the comment on chdir() to more accurately reflect what is happening.
Finally, add a test in certain debug builds to ensure the first fork exits very quickly. This only works when valgrind is not in use because valgrind makes forking so slow that it is hard to tell if the async process performed work or not (in the case that the second fork goes missing and the async process is a direct child).
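A condensed sketch of the forking sequence described above (error handling and the debug-build timing check are omitted, and runAsync() stands in for the real async work):

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void runAsync(void) {}           /* placeholder for the async process work */

    static void
    forkDetached(void)
    {
        pid_t pidFirst = fork();

        if (pidFirst == 0)
        {
            /* First fork: ignore SIGCHLD in case the async process exits before
               this intermediate process does */
            signal(SIGCHLD, SIG_IGN);

            if (fork() == 0)
            {
                /* Async process: restore default SIGCHLD so waitpid() works */
                signal(SIGCHLD, SIG_DFL);
                runAsync();
                _exit(0);
            }

            /* The intermediate process exits immediately so the async process is
               reparented and can never become a zombie of the parent */
            _exit(0);
        }

        /* Parent: reap the intermediate process, then monitor the async process
           results out of band */
        waitpid(pidFirst, NULL, 0);
    }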