Previously it was not possible to read or write two files at the same time on the same remote because the protocol was entirely taken over by the read or write command. Multiple reads are required to make restores efficient when a list of bundled files is being read but blocks need to be retrieved from a separate file or a different part of the same file.
Improve that situation with sessions that allow related commands to be run with shared state. Also break read/write into separate requests (rather than pushing all data at once) so they can be multiplexed.
The disadvantage for read/write is that they now require more back and forth to transfer a file. This is mitigated by sending asynchronous read/write requests to keep both server and client as busy as possible. Reads that can fit into a single buffer are optimized to transfer in a single command. Reads that transfer the entire file can also skip the close command since it is implicit on end-of-file.
These changes allow the protocol to be simplified to provide one response per request, which makes the data end message obsolete. Any data for a request is now included in its parameters, so nothing needs to be sent to the server separately from the request.
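As a rough sketch of the request pipelining this enables, a client can keep several read requests in flight on one session and rely on the implicit close at end-of-file. All names here (including BUFFER_SIZE and the protocolSession*() functions) are hypothetical, not the actual pgBackRest protocol API:

    /* Hypothetical names -- illustrates pipelined reads on one session. */
    #define READ_AHEAD 4

    static void
    fileReadAsync(ProtocolSession *const session)
    {
        uint64_t nextOffset = 0;
        unsigned int inFlight = 0;

        // Prime the pipeline before waiting on any response
        for (; inFlight < READ_AHEAD; inFlight++)
        {
            protocolSessionReadRequestAsync(session, nextOffset, BUFFER_SIZE);
            nextOffset += BUFFER_SIZE;
        }

        // The simplified protocol returns exactly one response per request
        while (inFlight > 0)
        {
            ProtocolResponse *const response = protocolSessionResponse(session);
            inFlight--;

            // End-of-file implies close, so no close command is needed (a
            // real implementation would drain the remaining responses)
            if (protocolResponseEof(response))
                break;

            protocolSessionReadRequestAsync(session, nextOffset, BUFFER_SIZE);
            nextOffset += BUFFER_SIZE;
            inFlight++;
        }
    }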
Also update the Db protocol to use the new sessions. Previously this code had tracked its own sessions.
IMPORTANT NOTE: The log-level-stderr option default has been changed from warn to off. This makes it easier to capture errors when only redirecting stdout. To preserve the prior behavior, set log-level-stderr=warn.
NOTE TO PACKAGERS: The lz4 library is now required by the meson build.
NOTE TO PACKAGERS: Compiler support for __builtin_clzl() and __builtin_bswap64() is now required by the meson build.
Bug Fixes:
* Fix SFTP renaming failure when file already exists. (Fixed by Reid Thompson. Reviewed by David Steele. Reported by ahmed112212.)
Features:
* Allow backups to run concurrently on different repositories. (Reviewed by Reid Thompson, Stefan Fercot.)
* Support IP-based SANs for TLS certificate validation. (Contributed by David Christensen. Reviewed by David Steele.)
Improvements:
* Default log-level-stderr option to off. (Reviewed by Greg Sabino Mullane, Stefan Fercot.)
* Allow alternative WAL segment sizes for PostgreSQL <= 10. (Contributed by Viktor Kurilko. Reviewed by David Steele.)
* Add hint to check SFTP authorization log. (Contributed by Vitalii Zurian. Reviewed by Reid Thompson, David Steele.)
Documentation Improvements:
* Clarify archive-push multi-repo behavior. (Reviewed by Stefan Fercot.)
Since 1141dc20 it has been possible to request that cfgParse() skip loading the config file. Use this logic to replace the code used to ignore config files in doc/test config load.
Typically we use the oldest Debian/Ubuntu to run 32-bit unit and integration tests. However, 32-bit is no longer fully supported by Ubuntu (multiple packages we need are missing) and apt.postgresql.org no longer packages for any 32-bit version.
To address these changes, do 64-bit integration testing on the oldest Debian/Ubuntu (currently Ubuntu 20.04) and 32-bit unit/integration testing on the oldest Debian (currently 11), using the included version of PostgreSQL for integration testing.
Mock integration tests were removed in d41b21c8, but CI was still trying to run them, so remove them from CI.
Also rename mock to integration in the test unit tests to avoid confusion in the future.
The prior SAN code only recognized DNS-based SANs, which meant that it would not properly validate if using an IP-based SAN.
Add support for IPv4 and IPv6 SANs with exact matching only.
This simplifies testing, since certificate generation tools may have trouble generating a DNS:1.2.3.4-style SAN and prefer to encode it as IP:1.2.3.4.
Reduce redundancy by improving cfgParseCommandId() to work when a command role is present. This way the function does not need to be called twice.
Also, remove the use of StringList in cfgParse() since checking for a colon is faster and saves memory allocations. Modify cfgParseCommandRoleEnum() to accept char * since a String is no longer produced for the role name.
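A minimal sketch of the colon scan described above (signatures simplified and buffer-size checks elided; the setter and role names, e.g. cfgCommandSet() and cfgCmdRoleMain, are assumptions):

    #include <string.h>

    static void
    commandParse(const char *const spec)
    {
        const char *const colon = strchr(spec, ':');

        if (colon == NULL)
        {
            // No role present -- the entire string is the command name
            cfgCommandSet(cfgParseCommandId(spec), cfgCmdRoleMain);
        }
        else
        {
            // Copy only the command portion; the role is parsed directly as
            // char * so no String needs to be allocated
            char command[64];
            const size_t commandSize = (size_t)(colon - spec);

            memcpy(command, spec, commandSize);
            command[commandSize] = '\0';

            cfgCommandSet(cfgParseCommandId(command), cfgParseCommandRoleEnum(colon + 1));
        }
    }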
Errors in code generation can cause the test binary build to fail and then it is not possible to see the generated code.
Instead, generate code first so any errors can be seen and analyzed.
Create mappings between integer, size, time, and stringid option values and their string equivalents. This allows for better error messages and means that the mappings do not need to be stored with defaults, allow lists, etc.
Update error handling for libssh2_sftp_rename_ex() in storageWriteSftpClose() when a file already exists.
The SFTP servers used during development and testing never returned LIBSSH2_FX_FILE_ALREADY_EXISTS, rather they returned LIBSSH2_FX_FAILURE when a file already existed. However, it is clear that some SFTP servers use LIBSSH2_FX_FILE_ALREADY_EXISTS so add support.
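A plausible shape of the updated handling (illustration only, not the actual storageWriteSftpClose() code; error handling and retries on EAGAIN elided):

    #include <libssh2.h>
    #include <libssh2_sftp.h>

    static int
    sftpRename(LIBSSH2_SESSION *const session, LIBSSH2_SFTP *const sftp,
               const char *const from, const char *const to)
    {
        int rc = libssh2_sftp_rename(sftp, from, to);

        if (rc != 0 && libssh2_session_last_errno(session) == LIBSSH2_ERROR_SFTP_PROTOCOL)
        {
            const unsigned long sftpErr = libssh2_sftp_last_error(sftp);

            // Some servers report FILE_ALREADY_EXISTS, others plain FAILURE
            if (sftpErr == LIBSSH2_FX_FAILURE || sftpErr == LIBSSH2_FX_FILE_ALREADY_EXISTS)
            {
                libssh2_sftp_unlink(sftp, to);      // remove the existing file
                rc = libssh2_sftp_rename(sftp, from, to);
            }
        }

        return rc;
    }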
The prior locking only allowed one backup per stanza, which was required by PostgreSQL <= 9.5 and didn't present a problem when only one stanza could be created.
Now that multiple stanzas are allowed, relax this restriction so that backups can run concurrently for PostgreSQL > 9.5. To do this, update the locking to be per stanza and repo rather than per stanza only. Remotes are not aware of which repos require locking, so send an explicit list of files to be locked to the remote. Also remove the advisory lock for PostgreSQL > 9.5.
For info output the running backups are combined into a single progress value in order to avoid changing the JSON format. Per-repo progress definitely makes sense as well, but that will be left for a future commit.
Similar to size options in 038abaa7, time option defaults and allowed values were displayed in seconds, which could be confusing when the values were large.
The time options were not updated in 038abaa7 because doing so required removing the ability to specify fractional seconds, e.g. 0.5 seconds. In theory this could cause breakage for users, but it seems very unlikely. Fractional seconds are used in tests, however, so the tests have been changed to use milliseconds where required, e.g. 500ms.
Rather than using the full enum just use the part of the enum that is unique. This makes the output a bit more readable by removing the repetitive elements. The prefix for each enum is built into its macro.
Writing warnings and errors to stderr by default leads to error messages being lost when the user does not correctly redirect stderr while generating logs for analysis. This happens so often that it seems worth changing the default to increase the quality of the logs we receive.
If the user has explicitly set log-level-stderr then there is no change in behavior.
These were broken while code was being migrated to C and went unnoticed because the options are generally only used when doing performance testing.
The C code can only take one --run param so add a check for that in test.pl.
Refactor the lock module to split command-specific logic from the basic file locking functionality. Command specific logic is now in command/lock.c. This will make it easier to implement new features such as repository locking and updating lock file contents on remotes.
This implementation is essentially a drop-in replacement but there are a few differences. First, the lock names no longer require a path (the path is added in the lock module). Second, the timeout functionality has been removed since it was not being used.
If a file on the primary was larger than on the replica, then the next diff/incr backup would store the primary size instead of the replica size when block incremental was enabled. On the next diff/incr backup this would lead to a "repo size must be > 0 for file" error when validating the manifest.
Fix this by limiting copy based on sizeOriginal rather than size so size can be set to the value expected to be stored in the manifest. As a bonus sizePrior is no longer needed since size can be used for the same purpose.
This means valgrind is no longer built from source, which caused image builds to run for a very long time.
Valgrind is only required in a few images for testing.
Alternative WAL segment sizes can be configured in PostgreSQL <= 10 with compile-time options. We have not allowed them before since alternative segment sizes were not a well-tested feature of PostgreSQL.
However, forks such as Greenplum allow alternative WAL segment sizes at initdb time (which are presumably well-tested) so it makes sense to allow it.
Since the PostgreSQL versions in question are all EOL it is not important to have this restriction in place anymore.
These tests are important for an upcoming bug fix related to differing sizes of a file on a primary vs standby.
The test that demonstrates the bug cannot be included here since it causes a test failure, but this commit introduces the infrastructure and one test to guard against a regression in the bug fix.
lcov does not seem to be very well maintained and is often not compatible with the version of gcc in a new distro until a few months after the distro is released. In any case, lcov is not that useful for us because it generates reports on all coverage, while we are mainly interested in missing coverage during development.
Instead use the JSON output generated by gcov to generate our minimal coverage report and metrics for the documentation.
There are some slight differences in the metrics. The difference in the common module was due to a bug in the old code -- build/common was being added into common as well as being reported separately. The source of the two additional branches in the backup module is unknown but almost certainly comes down to how exclusions are processed with regular expressions. Since there is additional coverage rather than missing coverage, this seems fine.
Since this was pretty much a rewrite it was also a good time to migrate to C.
NOTE TO PACKAGERS: The build system for pgBackRest is now meson. The autoconf/make build will not receive any new features and will be removed after a few releases.
Features:
* Add GCS batch delete support. (Reviewed by Reid Thompson.)
* S3 SSE-C encryption support. (Reviewed by Tim Jones. Suggested by Tim Jones.)
* PostgreSQL 17 support. (Reviewed by Stefan Fercot.)
Improvements:
* Allow explicit disabling of optional dependencies in meson builds. (Contributed by Michael Schout. Reviewed by David Steele.)
* Dynamically find python in meson build. (Contributed by Michael Schout. Reviewed by David Steele.)
* Tag pgbackrest build target in meson as installable. (Contributed by Bradford Boyle. Reviewed by David Steele.)
Documentation Improvements:
* Update start/stop documentation to reflect actual functionality. (Reviewed by Stefan Fercot.)
Update the catalog version for beta 1 so pgbackrest will not work with any prior development versions.
Also improve the integration/all test so the catalog version does not need to be updated again during the beta period.
If a host defaults to IPv6 then it can confuse the tests and lead to connection failures and inconsistent error messages.
For now just hard-code the servers to run on IPv4, but this is an area for later improvement.
Coverage of the documentation code is not important enough to report to users. If it were reported it should be in a separate section (along with test code coverage).
3c8819e1 replaced gmtime/localtime with gmtime_r/localtime_r but did not take into account a subtle difference in how they operate. While gmtime/localtime operate as if tzset() has been called, i.e. they operate on the TZ env variable directly, gmtime_r/localtime_r require tzset() to be called after changing TZ for consistent results.
Rather than call tzset() every time TZ is changed, add hrnTzSet() to encapsulate both operations.
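Since the commit describes hrnTzSet() as encapsulating both operations, the helper is presumably a thin wrapper along these lines (a minimal sketch, not the verbatim harness code):

    /* gmtime_r()/localtime_r() only see a TZ change after an explicit tzset(). */
    #include <stdlib.h>
    #include <time.h>

    void
    hrnTzSet(const char *const tz)
    {
        setenv("TZ", tz, 1);                    // update the environment
        tzset();                                // make the *_r() variants see it
    }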
This was copied from storagePosixInfo() in a474ba54, but there is no guarantee that errno will be valid at this point. In most cases errno was zero, so no system error message was displayed, but when using the Posix driver it could output "[2] No such file or directory". For other drivers errno was generally not set, but a random error message could be output in the case that errno had been set by some unrelated action.
Use THROW_FMT() instead since errno will not always be set correctly and in any case "[2] No such file or directory" is not very useful information since the main error message already says that.
While this is technically a bug it is so harmless that it doesn't merit mention in the release notes.
This was discovered while testing on Fedora 40 which threw "[38] Function not implemented" -- clearly unrelated to missing paths/files.
The GCS driver sent a single file delete request for each file while deleting a path. Depending on latency this could lead to rather long delete times, especially noticeable during expiration.
Improve GCS delete to use batches, which require multipart HTTP, so also add multipart HTTP infrastructure.
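For reference, a batch of deletes in the GCS JSON API looks roughly like this (based on Google's batch documentation, not pgBackRest's code; bucket and object names are placeholders):

    POST /batch/storage/v1 HTTP/1.1
    Host: storage.googleapis.com
    Content-Type: multipart/mixed; boundary=batch_boundary

    --batch_boundary
    Content-Type: application/http

    DELETE /storage/v1/b/example-bucket/o/archive%2Ffile1 HTTP/1.1

    --batch_boundary
    Content-Type: application/http

    DELETE /storage/v1/b/example-bucket/o/archive%2Ffile2 HTTP/1.1

    --batch_boundary--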
This is better than requiring a python3 binary to be on the path, because some installations might only have a versioned binary, e.g. python3.9.
Also add the python3-distutils package to Debian builds to make this work.
This should have been done in 434938e3 but somehow it didn't happen.
Fedora 38 requires 2048-bit keys, so update the VM builds to use them. Update the documentation to use 2048-bit keys as well. This is not technically required by this commit but it makes sense to do it now.
Also update the key location for the yum.p.o repository.
Lastly, shuffle test PostgreSQL versions since PostgreSQL 11 is no longer available in the yum.p.o repository.
This feature (enabled with --repo-s3-sse-customer-key) provides an encryption key to encrypt the data after it has been transmitted to the server.
While not as secure as encrypting data before transmission (--repo-cipher-type), this may be useful in certain configurations.
These should have been removed when the mock integration tests were removed.
Ideally we would also remove filecopy.table.bin but it serves to provide realistic page data for performance testing.
A valid StringId can never be zero so it more or less serves as a NULL value. In most cases zero will not be valid, but it is better to catch this condition with an assert rather than an error in logging.
NOTE TO PACKAGERS: The build system for pgBackRest is now meson. The autoconf/make build will not receive any new features and will be removed after a few releases.
Bug Fixes:
* Skip zero-length files for block incremental delta restore. (Reviewed by Sebastian Krause, René Højbjerg Larsen. Reported by Sebastian Krause.)
* Fix performance regression in storage list. (Reviewed by Stephen Frost. Reported by Maksym Boguk.)
* Fix progress logging when file size changes during backup. (Reviewed by Stephen Frost. Reported by samkingno.)
Improvements:
* Improved support for dual stack connections. (Reviewed by Stephen Frost. Suggested by Timothée Peignier.)
* Make meson the primary build system. (Reviewed by Stephen Frost.)
* Detect files that have not changed during non-delta incremental backup. (Reviewed by Stephen Frost.)
* Prevent invalid recovery when backup_label removed. (Reviewed by Stephen Frost.)
* Improve archive-push WAL segment queue handling. (Reviewed by Stephen Frost.)
* Limit resume functionality to full backups. (Reviewed by Stephen Frost, Stefan Fercot.)
* Update resume functionality for block incremental. (Reviewed by Stephen Frost.)
* Allow --version and --help for version and help. (Reviewed by Greg Sabino Mullane. Suggested by Greg Sabino Mullane.)
* Add detailed backtrace to autoconf/make build. (Reviewed by Stephen Frost.)
Documentation Improvements:
* Update references to recovery.conf. (Reviewed by Stefan Fercot. Suggested by Stephen Frost.)
If the file size changed during backup then the progress percentage in the log would not be accurate.
Fix this by using the original size to increment the progress, since the progress total was calculated from the original file sizes.
For this to be practically useful, secure options must be redacted. Otherwise, no user is likely to share the report.
Since this feature is still internal, there is no real world impact.
Resume was not updated for block incremental so block incremental files were always removed during a resume. Resume worked but was very inefficient with block incremental enabled.
Update resume to preserve block incremental files and add tests.
If backup_label is removed from a restored backup then PostgreSQL will instead use checkpoint information from pg_control to attempt (what it thinks is) crash recovery. This will nearly always result in a corrupt cluster because the checkpoint will not be from the beginning of the backup, and even if it is, the end point will not be specified, which could lead to recovery stopping too early.
To prevent this, invalidate the checkpoint LSN in pg_control on restore. If backup_label is removed then recovery will still fail because PostgreSQL will not be able to find the invalid checkpoint. The LSN of the checkpoint is not logged but it will be visible in pg_controldata output as 0/DEAD. This value is invalid because PostgreSQL always skips the first WAL segment when initializing a cluster.
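Conceptually, the invalidation amounts to the following (the field name is hypothetical):

    /* Any LSN in the first WAL segment can never be a valid checkpoint
       because PostgreSQL always skips that segment when initializing a
       cluster. */
    pgControl.checkpoint = 0xDEAD;              // pg_controldata shows 0/DEAD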
This serves as an additional sanity check to be sure the pg_control format is as expected. The field is useful for being near the end and containing a limited number of discrete values.
This serves as an additional sanity check to be sure the pg_control format is as expected. The field is useful for being all the way at the end and being four bytes that can only have one of two values. Something more distinctive than 0 and 1 would be better, but this is what we have to work with.
Convert PgControl.pageChecksum to unsigned int, rename it to PgControl.pageChecksumVersion, and make all downstream changes required by the new datatype.
Connections are established using the "happy eyeballs" approach from RFC 8305, i.e. new addresses (if available) are tried if the prior address has already had a reasonable time to connect. This prevents waiting too long on a failed connection but does not try all the addresses at once. Prior connections that are still waiting are rechecked periodically if no subsequent connection is successful.
This improves substantially on 39bb8a0, which failed to take into account connection attempts that do not fail (but never connect) and use up all the available time.
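A greatly simplified sketch of the staggered attempts (my own function name; address resolution, per-attempt timeout bookkeeping, and error handling are elided, and SOCK_NONBLOCK assumes Linux):

    #include <netdb.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int
    connectStaggered(const struct addrinfo *addr, const int attemptMs)
    {
        struct pollfd pending[16];
        nfds_t pendingTotal = 0;

        while (addr != NULL || pendingTotal > 0)
        {
            // Begin a non-blocking attempt on the next address, if any
            if (addr != NULL && pendingTotal < 16)
            {
                const int fd = socket(addr->ai_family, SOCK_STREAM | SOCK_NONBLOCK, 0);
                connect(fd, addr->ai_addr, addr->ai_addrlen);

                pending[pendingTotal++] = (struct pollfd){.fd = fd, .events = POLLOUT};
                addr = addr->ai_next;
            }

            // Give prior attempts a reasonable time before starting the next
            poll(pending, pendingTotal, attemptMs);

            // Return the first success; drop attempts that have failed
            for (nfds_t i = 0; i < pendingTotal; i++)
            {
                int error = 0;
                socklen_t errorSize = sizeof(error);

                if (pending[i].revents == 0)
                    continue;

                getsockopt(pending[i].fd, SOL_SOCKET, SO_ERROR, &error, &errorSize);

                if (error == 0 && (pending[i].revents & POLLOUT))
                    return pending[i].fd;

                close(pending[i].fd);
                pending[i--] = pending[--pendingTotal];
            }
        }

        return -1;                              // all addresses failed
    }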
Meson has a lot of advantages over autoconf/make, primarily in ease-of-use and performance. Make meson the only build system used for testing and building the Debian documentation, but leave the RHEL documentation using autoconf/make for now so it gets some testing.
The leak kind is usually definite but sometimes flaps to possible. For stability purposes accept any leak kind.
Note that this is a leak in a specific version of libssh2 and not a bug in pgBackRest.
Resume does not work correctly with delta diff/incr backups because the presence of a reference causes it to remove the file with the idea that it can just be referenced again. This is true for timestamp-based backups but for deltas all existing files need to be rechecked (which requires a reference).
This is fixable, but not without significant effort and new tests, and it calls into question the usefulness of non-full resumes. For diff/incr, if the file was changed since the prior backup there is a good chance it will be modified again before the resume occurs.
In order to keep this feature as useful as possible for the most valuable case, limit resumes to full backups.
02eea55 added code to load a buffer of data from a file being backed up in order to detect files that had been truncated to zero length after manifest generation. This mechanism can also be used to detect files that have not changed since the prior backup.
If the result of the file copy fits into a single buffer, then the size and checksum can be compared to the prior file before anything gets stored. If the file matches then it is referenced to the file in the prior backup.
The size that can be compared for normal copies is limited by the buffer size, but for block incremental this works for files of any size since block incremental produces no output when the file is identical.
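The comparison is conceptually as follows (variable and enum names are hypothetical):

    /* The copy fit in a single buffer, so nothing has been stored yet --
       compare with the file in the prior manifest before storing. */
    if (priorFile != NULL && copySize == priorFile->size &&
        bufEq(copyChecksum, priorFile->checksum))
    {
        copyResult = backupCopyResultNoOp;      // reference the prior file
    }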
Infer the size of all WAL segments from the size of the first segment rather than getting info for all segments (up to queue size). If the segments are not the same size then there are larger issues than the WAL queue.
storageListP() returns a list of entries in a path and should not need to stat/head, etc. in order to get more detailed info. This was broken by 75623d4, which failed to set the level correctly.
Set the correct level and update tests.
There's no easy way to directly test for a regression here but the SFTP tests will fail if more detailed info is requested since it would require script changes.
The Perl integration tests were migrated as faithfully as possible, but there was some cruft and a few unit tests that it did not make sense to migrate.
Also remove all Perl code made obsolete by this migration.
All unit, performance, and integration tests are now written in C but significant parts of the test harness remain to be migrated.
a42614e introduced the capability to preserve smaller-than-expected files for block incremental restore delta, but failed to take into account that zero-length files are both useless and cause the block checksum filter to error.
Fix this by skipping zero-length files during block incremental restore delta.
The core Exec object is efficient but geared toward the specific needs of core rather than the ease-of-use required for build, documentation, and testing.
execOne() works similarly to system() except that it automatically redirects stderr to stdout and captures the output.
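As an illustration only (not the actual implementation), the described behavior can be approximated with popen() by folding stderr into the captured stdout:

    #include <stdio.h>

    static int
    execOneIllustration(const char *const command, FILE *const output)
    {
        // Redirect stderr into stdout so both are captured (NULL and error
        // checks elided for brevity)
        char redirected[1024];
        snprintf(redirected, sizeof(redirected), "%s 2>&1", command);

        FILE *const pipe = popen(redirected, "r");

        char line[256];
        while (fgets(line, sizeof(line), pipe) != NULL)
            fputs(line, output);

        return pclose(pipe);                    // command termination status
    }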
This has never been a problem for performance tests since they do not call functions that log at info level or above, but the upcoming integration tests may do so. In any case it is better to disable this functionality outside of unit tests.
These tests have not been maintained for several years, i.e. no tests for new features have been added. They are highly duplicative of the unit tests but do have the advantage of mixing in different storage drivers. They were allowed to remain because they were not doing any harm even if they were probably not doing any good.
However, the real integration tests (that run directly against PostgreSQL) also test storage drivers and have been updated with new features over time. The real integration tests are now being migrated to C and as part of that effort the mock integration tests need to be removed or migrated, and they do not provide enough value to migrate.
Remove all mock integration tests and a leftover Perl performance test.
Update references to recovery.conf to include postgresql.auto.conf used in newer versions.
Also update a broken recovery URL and point it to the current version (with a hint to select the proper version of PostgreSQL).
The output combined a representation of the files/paths/links in the manifest along with output of what was validated on disk. This was redundant and made maintenance of the tests difficult, especially with all the quoting in the manifest output (which also made it hard to search for output).
Instead use primarily the output created during validation and add fields from the manifest that were missing. Exclude paths from the output when there are files in the path since the path is then implied.
One major change here is that checksums are no longer output. This makes it easier to write tests that work on multiple architectures and all checksums are already verified during validation.
'pgbackrest help help' just displayed the help overview rather than displaying help for the help command.
Fix this by making sure the command is set and routing correctly in the help command.
It is a bit confusing that --help and --version do not work like most command-line programs. For example, git allows either --help or help.
Make these work by making them shortcuts (not actual options) to the applicable commands.
The user will still need to use help (not --help) to get help on specific commands/options, but at least they can get to the main help (which will tell them this) via --help.
Bug Fixes:
* Fix short read in block incremental restore. (Reviewed by Stephen Frost, Brent Graveland. Reported by Adol Rodriguez, Brent Graveland.)
* Fix overflow suppressing backup progress in info output. (Fixed by Robert Donovan. Reviewed by Joe Wildish.)
Improvements:
* Preserve partial files during block incremental delta restore. (Reviewed by Stephen Frost.)
* Add support for alternate compile-time page sizes. (Contributed by Viktor Kurilko. Reviewed by David Steele.)
* Skip files truncated during backup when bundling. (Contributed by Georgy Shelkovy. Reviewed by David Steele.)
* Improve SFTP storage error messages. (Contributed by Reid Thompson. Reviewed by David Steele.)
Tablespaces were enabled for tests that did not have tablespaces, resulting in tablespace_map being present in backups even when it was not needed.
Instead of specifying if tablespaces are present, automatically detect tablespaces in hrnBackupPqScript().
Long scripts followed by a number of tests are really challenging to debug and update.
Instead, break up the scripts to be inline with the tests that they drive. This should make maintenance of the tests much simpler.
Use storageSftpEvalLibSsh2Error() in more locations to provide better error information. Also add storageSftpLibSsh2SessionLastError() for the same reason.
During restore it is possible to read all the blocks out of a compressed super block without reading all the input. This is because the compression format may have trailing bytes that are not required for decompression but are required to indicate that the data has ended. If a buffer happened to align with the compressed data in a certain way, these last bytes might not be read.
Explicitly read out any final bytes at the end of each super block to handle this case. This should never produce additional output data, and we check for that, but it does move the read position to the beginning of the next compressed super block so decompression can begin without error.
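The idea, as a sketch (object and function names hypothetical):

    /* After the last block is decoded, keep reading the decompressed stream;
       this consumes any trailing compression bytes (e.g. the format trailer)
       while verifying that no further output appears. */
    Buffer *const extra = bufNew(1);

    ioRead(superBlockRead, extra);
    ASSERT(bufEmpty(extra));                    // trailer must decode to nothing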
If a file stored with block incremental shrinks below the prior block size then the map is useless and the entire file needs to be stored again.
In this case use the new block incremental values (even if none) rather than preserving the old ones.
This warning has had a note since the C migration that it should be moved below the backup file log message, so do that.
Also update the warning message a bit to correct for tense. This message was likely in a different place originally.
Previously files that were smaller than the expected size were not preserved for block incremental, even though it is possible that block incremental could make use of a partial file.
One example is when a restore encounters an error. On retry the partial file can be used as a starting point rather than copying again from the beginning. Another example is restoring a backup where a file is larger than what already exists in the data directory.
Preserve files of any size when block incremental will be used for the delta, in order to reuse partial files when possible. If the file is smaller than expected then disable the whole-file checksum to reduce overhead.
Writing the sz and szCplt parameters in the lock file used jsonWriteUInt64(), but reading these parameters used jsonReadUInt(). This caused a silent exception for any backup larger than UINT_MAX and prevented the info command from reporting progress.
Correct this so the reads are symmetric and verified before/after with a test.
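Schematically, the symmetric pair (context simplified to the two calls):

    // Write side (lock file)
    jsonWriteUInt64(json, szCplt);

    // Read side -- was jsonReadUInt(), which throws once the value exceeds
    // a 32-bit unsigned int
    szCplt = jsonReadUInt64(json);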
If delta is not enabled, then the timestamp is used to determine if a file has changed. If the timestamp changes but the file is the same then the prior map will be stored unchanged in the new backup. This is not quite as bad as storing the entire file but it is obviously not ideal.
This will be fixed in a future commit, but add the test now to show the current behavior.
In bundle mode pgBackRest skips zero-length files, that is, it does not queue them for copying.
After splitting the files into bundles, pgBackRest launches one or more processes that perform the backup directly, i.e. read the files and, if necessary, write them into the bundles.
If a file of non-zero length is truncated to zero length (for example, when a table is truncated) between the distribution of files among bundles and the copying of the file into a bundle, pgBackRest still unconditionally places the now zero-length file in the bundle, taking up space equal to the size of the headers, and additionally writes the original file size to the manifest.
In debug builds an assertion disallows writing zero-length files to bundles, which leads to an error.
To solve the problem, this patch loads one buffer from each file as it is read to detect whether the file is zero-length. If so, it marks the file as truncated and continues on to the next file.
The advantages of this solution are that, first, the assert will not fire in debug builds and, second, zero-length files will not be placed in bundles, which exactly matches the specification.
The patch adds the backupCopyResultTruncate value to the BackupCopyResult enumeration to indicate that a file of non-zero length was truncated to zero length during the backup.
Alternate page sizes can be selected at compile time, e.g. 4096. While compile-time settings are generally not well tested by core, some established forks such as Greenplum use them.
Older versions of ninja may fail to rebuild correctly when changes are made to the configuration. In this case there is an automatic retry but the unexpected log output would cause the test to fail.
For tests that are expected to succeed, check that the log is empty but also accept a retry message as long as the test does eventually succeed.
Add a new harness function, harnessLogResultEmptyOrContains(), to make this work and also clean up some adjacent code.
If the same port is reused too quickly bind may fail with this error:
FileOpenError: unable to bind socket: [98] Address already in use
We specify SO_REUSEADDR when creating the socket but apparently this is not always enough if the port is reused very rapidly.
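The sockets already enable address reuse, roughly like this (a fragment, assuming fd is the listening socket):

    // Allow rebinding while the old socket lingers in TIME_WAIT
    int option = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &option, sizeof(option));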
Fix this (hopefully) by using a unique port for each test that needs one. This does in theory limit the number of tests that can use ports, but we allow 768 per test, whereas the test that uses the most ports is common/io-tls with 4.
This allows analysis of coverage failures that only happen in CI. It is not ideal since the report needs to be copied from the log output into an HTML file where it can be viewed, but better than nothing.