The pq scripts were pretty static, which had already led to a lot of code duplication in the backup test harness.
Instead, allow the scripts to be built dynamically, which provides much more flexibility and reduces duplication. For now make these changes only in the backup harness, but they may be useful elsewhere.
While we are making big changes, also update the macro/function names to hew closer to our current harness naming conventions.
encodeToStrSizeBase64() is definitely more efficient (pulled from the PostgreSQL implementation).
encodeToStrSizeBase64Url() is probably about as efficient as the prior implementation but is certainly more compact.
Also add tests for zero byte encoding sizes.
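For reference, both sizes can be computed in constant time from the input length. A minimal sketch of the math (the function names match, but these bodies are illustrative, based on standard base64 arithmetic rather than the committed code):

    #include <stddef.h>

    // Padded base64: every 3 input bytes (or partial group) yield 4 output chars
    static size_t
    encodeToStrSizeBase64(const size_t byteSize)
    {
        return (byteSize + 2) / 3 * 4;
    }

    // Unpadded base64url: equivalent to ceil(byteSize * 4 / 3), i.e. 4 chars per
    // full 3-byte group plus 2 or 3 chars for a 1- or 2-byte remainder
    static size_t
    encodeToStrSizeBase64Url(const size_t byteSize)
    {
        return (byteSize * 4 + 2) / 3;
    }

Both formulas correctly return zero for a zero-byte input, which is what the new tests cover.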
Document maintainer options in a separate section with appropriate explanation and caveats.
Also make the pg-version-force option user-visible now that maintainer caveats have been documented.
The reference documentation was still using a very old version of rendering from before the user guide was introduced. This was preserved in the initial C migration to reduce the diff between Perl and C for testing purposes. The old version used hard linefeeds to simulate paragraphs and reduce the amount of markup that needed to be used. In retrospect this was not a great idea.
Instead use more natural rendering that does not depend on using hard linefeeds between paragraphs.
For some reason the internal section id was included in the title. This was probably copied from another section title where it made more sense, e.g. including the option name after the title.
Also add release note missed in 1eb01622.
Migrate generation of these files from help.xml to the intermediate documentation format. This allows us to share a lot of code that is already in C and remove duplicated code in Perl. More duplicate code can be removed in Perl once man generation is migrated.
Also update the unit test harness to allow testing of modules in the doc directory.
This test was failing coverage pretty regularly because the retry in tlsClientOpen() was not always being reached. Make the TLS timeouts longer to ensure reliable coverage.
Bug Fixes:
* Fix issue restoring block incremental without a block list. (Reviewed by Stephen Frost, Burak Yurdakul. Reported by Burak Yurdakul.)
Features:
* Add --repo-storage-tag option to create object tags. (Reviewed by Stephen Frost, Stefan Fercot, Timothée Peignier.)
* Add known hosts checking for SFTP storage driver. (Contributed by Reid Thompson. Reviewed by Stephen Frost, David Steele.)
* Support for dual stack connections. (Reviewed by Stephen Frost.)
* Add backup size completed/total to info command JSON output. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Multi-stanza check command. (Reviewed by Stephen Frost.)
* Retry reads of pg_control until checksum is valid. (Reviewed by Stefan Fercot, Stephen Frost.)
* Optimize WAL segment check after successful backup. (Reviewed by Stephen Frost.)
* Improve GCS multi-part performance. (Reviewed by Reid Thompson.)
* Allow archive-get command to run when stanza is stopped. (Reviewed by Tom Swartz, David Christensen, Reid Thompson.)
* Accept leading tilde in paths for SFTP public/private keys. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Reload GCS credentials before renewing authentication token. (Reviewed by Stephen Frost. Suggested by Daniel Farina.)
Documentation Bug Fixes:
* Fix configuration reference example for the tls-server-address option. (Fixed by Hartmut Goebel. Reviewed by David Steele.)
* Fix command reference example for the filter option.
Test Suite Improvements:
* Allow storage/sftp unit test to run without libssh2 installed. (Contributed by Reid Thompson. Reviewed by David Steele. Suggested by Wu Ning.)
It is currently possible for a block map to be written without any accompanying blocks. This happens when a file timestamp is updated but the file has not changed. On restore, this caused problems when encryption was enabled, the block map was bundled after a file that had been stored without block incremental, and both files were included in a single bundle read. In this case the block map would not be decrypted and the encrypted data was passed to blockMapNewRead() with unpredictable results.
In many cases built-in retries would rectify this problem as long as delta was enabled, since block maps would move to the beginning of the bundle read and be decrypted properly. If enough files were affected, however, it could overwhelm the retries and throw an error. Subsequent delta restores would eventually be able to produce a valid result.
Fix this by moving block map decryption so it works correctly no matter where the block map is located in the read. This has the additional benefit of limiting how far the block map can read so it will error earlier if corrupt. Though in this case there was no repository corruption involved, it appeared that way to blockMapNewRead() since it was reading encrypted data.
Arguably block maps without blocks should not be written at all, but it would be better to consider that as a separate change. This pattern clearly exists in the wild and needs to be handled, plus the new implementation has other benefits.
pg_data/ was appended at the beginning of the filter, which meant that files in tablespaces could never be queried directly.
Update the filter to require the full path, including pg_data/ or pg_tblspc/.
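For example (paths are illustrative), a filter that previously could match on global/pg_control alone must now include the prefix:

    ^pg_data/global/pg_control$
    ^pg_tblspc/16385/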
By default require a known hosts match as part of the SFTP storage driver's authentication process, i.e. repo-sftp-host-key-check-type=strict. The match is expected to be found in the default list or in a list of known hosts files provided by the user. An exception is made if a fingerprint has been manually configured with repo-sftp-host-fingerprint. Alternatively, repo-sftp-host-key-check-type=accept-new can be used to automatically add new hosts.
Also allow host key verification to be skipped, as before, but require the user to explicitly set this (repo-sftp-host-key-check-type=none) rather than it being the default.
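A minimal configuration sketch (the repository index is illustrative):

    [global]
    repo1-type=sftp
    repo1-sftp-host-key-check-type=strict

Setting repo1-sftp-host-key-check-type=accept-new instead adds unknown hosts on first connection, while none skips verification entirely.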
Older versions of meson fail when a build target in a subproject has the same name as another subproject.
This has been fixed in newer versions, but we still need to support older versions. In any case this seems cleaner, and the help build target is already prefixed in this fashion.
This option is intended to eventually create a comprehensive report about the state of the pgBackRest configuration based on the results of the check command.
Implement a detailed report of the configuration options in the environment and configuration files. This should be useful information when debugging configuration errors, since invalid options and configurations are automatically noted. While custom config locations will not be found automatically, it will at least be clear that the config is not in a standard location.
For now keep this option internal since there is a lot of work to be done, but commit it so that it can be used when needed and tested in various environments.
Note that for now, when --report is specified, the check command is not run at all; only the config report is generated. This behavior will be improved in the future.
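A hypothetical invocation while the option remains internal (the stanza name is illustrative):

    pgbackrest --stanza=demo check --report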
This new option allows tags to be added to objects in S3, GCS, and Azure repositories.
This was fairly straightforward for S3 and Azure, but GCS does not allow tags for a simple upload using the JSON interface. If tags are required then the resumable interface must be used even if the file falls below the limit that usually triggers a resumable upload (i.e. size < repo-storage-upload-chunk-size).
This option is structured so that tags must be specified per-repo rather than globally for all repos. This seems logical since the tag keys and values may vary by service, e.g. S3 vs GCS.
These storage tags are independent of backup annotations since they are likely to be used for different purposes, e.g. billing, while the backup annotations are primarily intended for monitoring.
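A configuration sketch (the repository type and key/value pairs are illustrative):

    [global]
    repo1-type=s3
    repo1-storage-tag=team=backup
    repo1-storage-tag=cost-center=1234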
The prior code would only connect to the first address provided by getaddrinfo().
Instead try each address in the list. If all connections fail then wait and try them all again until timeout.
Currently a round robin approach is used where each connection attempt must fail before the next connection is attempted. This works fine, for example, when an IPv6 address has no route to the host, but will work less well when a host answers but doesn't respond in a timely fashion.
We may consider a Happy Eyeballs approach in the future, but since pgBackRest is primarily a background process, it is not clear that slightly improved response time (in the case of failed connections) is worth the extra complexity.
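The round robin idea, sketched here independently of pgBackRest's actual networking code (error handling trimmed; this is not the committed implementation):

    #include <netdb.h>
    #include <stddef.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Try each address returned by getaddrinfo() in turn and return the first
    // socket that connects, or -1 if every address fails
    static int
    connectAny(const char *host, const char *port)
    {
        struct addrinfo hints = {.ai_family = AF_UNSPEC, .ai_socktype = SOCK_STREAM};
        struct addrinfo *list = NULL;

        if (getaddrinfo(host, port, &hints, &list) != 0)
            return -1;

        int fd = -1;

        for (struct addrinfo *addr = list; addr != NULL; addr = addr->ai_next)
        {
            fd = socket(addr->ai_family, addr->ai_socktype, addr->ai_protocol);

            if (fd == -1)
                continue;

            if (connect(fd, addr->ai_addr, addr->ai_addrlen) == 0)
                break;                                      // connected

            close(fd);                                      // failed -- try next
            fd = -1;
        }

        freeaddrinfo(list);
        return fd;
    }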
The prior code did one list command against the storage for each WAL segment. This led to a lot of list calls and was especially inefficient when the WAL (or the majority of it) was already present.
Optimize by keeping the contents of a WAL directory and reusing them on a subsequent search. Leave the single WAL segment optimizations in place since other places still use that mode.
sckHostLookup() only returned the first address record returned from getaddrinfo(). The new AddressInfo object provides a full list of values returned from getaddrinfo(). Freeing the list is also handled by the object so there is no longer a need for FINALLY blocks to ensure the list is freed.
Add the selected address to the client/server names for debugging purposes.
This code does not attempt to connect to multiple addresses. It just lays the groundwork for a future commit to do so.
On certain file systems (e.g. ext4) pg_control may appear torn if there is a concurrent write while reading the file. To prevent an invalid read, retry until the checksum matches the control data.
Special handling is required for the pg-version-force feature since the offset of the checksum is not known. In this case, scan from the default position to the end of the data looking for a checksum match. This is a bit imprecise, but better than nothing, and the chance of a random collision in the control data seems very remote considering the ratio of data size (< 512 bytes) to checksum size (4 bytes).
This was discovered and a possible solution proposed for PostgreSQL in [1]. The proposed solution may work for backup, but pgBackRest needs to be able to read pg_control reliably outside of backup. So no matter what fix is adopted for PostgreSQL, pgBackRest needs retries. Further adjustment may be required as the PostgreSQL fix evolves.
[1] https://www.postgresql.org/message-id/20221123014224.xisi44byq3cf5psi%40awork3.anarazel.de
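In outline the read loop looks something like the following sketch, where all the helpers are assumed rather than taken from the committed code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    // Assumed helpers, not the committed interface
    bool controlRead(uint8_t *buffer, size_t size);     // read raw pg_control
    uint32_t crcStored(const uint8_t *buffer);          // checksum recorded in the data
    uint32_t crcCompute(const uint8_t *buffer);         // checksum computed over the data
    bool waitMore(void *wait);                          // retry window helper

    // Reread pg_control until the stored and computed checksums agree -- a torn
    // concurrent write makes them differ, so a mismatch triggers a retry
    static bool
    controlReadValid(uint8_t *buffer, size_t size, void *wait)
    {
        do
        {
            if (controlRead(buffer, size) && crcStored(buffer) == crcCompute(buffer))
                return true;
        }
        while (waitMore(wait));

        return false;
    }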
If there are a lot of retries then the output might be very large and even be truncated by the error module. Either way, it is not good information for the user.
When a message is repeated, aggregate so that total retries and time range are output for the message. This provides helpful information about what happened without overwhelming the user with data.
The prior code gave a "free" extra iteration at the end of the wait, functionality that was copied directly from the equivalent code in Perl. This works and is mostly negligible except when wait loops are nested, in which case outer loops will always run twice even if an inner loop times out, which has a multiplying effect. For example, three nested wait loops with a timeout of three seconds will result in the inner loop being run four times (for a total of twelve seconds) even if it times out each time.
Instead make waitMore() stop exactly when time is up. This makes more sense because a complete failure and timeout of an inner loop means retrying an outer loop is probably a waste of time since that inner loop will likely continue to fail.
Also make waitRemaining() recalculate the remaining time rather than depending on the prior result.
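A simplified sketch of the revised semantics (hypothetical; the real Wait object also manages sleep intervals):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    typedef struct Wait
    {
        uint64_t beginMs;                               // when the wait started
        uint64_t timeoutMs;                             // total time allowed
    } Wait;

    static uint64_t
    timeNowMs(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
    }

    // Stop exactly when time is up -- the prior code allowed one more iteration
    // after the timeout expired, which multiplied in nested loops
    static bool
    waitMore(const Wait *wait)
    {
        return timeNowMs() < wait->beginMs + wait->timeoutMs;
    }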
Some tests needed to be adjusted to take into account there being one less loop. In general this led to a simplification of the tests.
Reinitialize a begin value in the wait unit tests. This is not related to the current change, but it does make the time measurements more accurate and less likely to fail on an edge case, which has been observed from time to time.
This change appears to have a benefit for test runtime, which seems plausible especially for nested waits, but a larger sample of CI runs is needed to be sure.
The current tests only generate small quantities of WAL per backup but sometimes it is useful to generate large quantities for testing.
Fix the issues with generating large quantities of WAL and also improve memory management.
The prior code avoided uploading a chunk if it was not clear whether the write was complete or not. This was primarily due to the GCS documentation being very vague on what to do in the case of a zero-size chunk.
Now chunks are uploaded as they are available. This should improve performance and also reduces the diff against a future commit that absolutely requires zero-size chunks.
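For reference, the GCS resumable protocol identifies each chunk with a Content-Range header, using * as the total size until the final chunk (offsets below are illustrative for a 256KiB chunk size):

    Content-Range: bytes 0-262143/*              <- intermediate chunk, total unknown
    Content-Range: bytes 262144-393215/393216    <- final chunk, total now known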
The restore command can run while the stanza is stopped so it makes sense for the archive-get command to follow the same rule.
The important thing is to ensure that all commands that write to the repository are stopped when the stanza is stopped.
The tests expect the group name/id to match between the host system and the container. If there is a conflict, rename the group with the required id to the expected name.
This could have unintended consequences but it seems reasonably safe since we control everything that runs in the container and there should never be any system processes running.
The documentation indicates that leading tilde file paths for public/private keys are valid but the functionality was omitted from the original implementation.
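For example (paths and repository index are illustrative):

    repo1-sftp-private-key-file=~/.ssh/id_rsa
    repo1-sftp-public-key-file=~/.ssh/id_rsa.pub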
The check command now checks multiple stanzas when the stanza option is omitted.
The stanza list is extracted from the current configuration rather than scanning the repository like the info command. Scanning the repository is a problem because configuration for each stanza may not be present in the current configuration. Since this functionality is new for check there is no regression.
Add a new section to the user guide to cover multi-stanza configuration and provide additional coverage for this feature.
Also fix a small issue in the parser when an indexed option has a dependency on a non-indexed option. There were no examples of this case in the previous configuration.
Bug Fixes:
* Preserve block incremental info in manifest during delta backup. (Reviewed by Stephen Frost. Reported by Francisco Miguel Biete Banon.)
* Fix block incremental file names in verify command. (Reviewed by Reid Thompson. Reported by Francisco Miguel Biete Banon.)
* Fix spurious automatic delta backup on backup from standby. (Reviewed by Stephen Frost. Reported by krmozejko, Don Seiler.)
* Skip recovery.signal for PostgreSQL >= 12 when recovery type=none. (Reviewed by Stefan Fercot. Reported by T.Anastacio.)
* Fix unique label generation for diff/incr backup. (Fixed by Andrey Sokolov. Reviewed by David Steele.)
* Fix time-based archive expiration when no backups are expired. (Reviewed by Stefan Fercot.)
Improvements:
* Improve performance of SFTP storage driver. (Contributed by Stephen Frost, Reid Thompson. Reviewed by David Steele.)
* Add timezone offset to info command date/time output. (Reviewed by Stefan Fercot, Philip Hurst. Suggested by Philip Hurst.)
* Centralize error handling for unsupported features. (Reviewed by Stefan Fercot.)
Documentation Improvements:
* Clarify preference to install from packages in the user guide. (Reviewed by Stefan Fercot. Suggested by dr-kd.)
When performing backup from standby, the file sizes on the standby may not be equal to file sizes on the primary. This is because replication continues during the backup and by the time the file is copied from the standby it may have changed. Since we cap the size of all files copied from the standby, this practically applies to truncation and in particular truncation of free space maps (at least, every case we have seen so far is an fsm). Free space maps are especially vulnerable since they are only partially replicated, which amplifies the difference between the primary and standby.
On an incremental backup it may look like the size has changed on the primary (because of the final size recorded by the standby in the prior backup) but the timestamp may not have changed on the primary and this will trigger a checksum delta for safety. While this has no impact on backup integrity, checksum delta incrementals can run much longer than regular incrementals and backup schedules may be impacted.
The solution is to preserve the original size in the manifest and use it to do the time/size check. In the case of backup from standby the original size will always be the size on the primary, which makes comparisons against subsequent file sizes on the primary consistent. Original size is only stored in the manifest when it differs from final size, so there should not be any noticeable manifest bloat.
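Conceptually, a manifest entry for such a file now carries both sizes, along these lines (the key names and values are illustrative, not the exact manifest format):

    [target:file]
    pg_data/base/1/16384={"checksum":"...","size":24576,"szo":32768,"timestamp":1700000000}

Here size is the final size recorded when copying from the standby and szo the original size on the primary.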
It was possible for block incremental info to be lost if a file had been modified in such a way that block incremental would be disabled if the file were new, e.g. if the file shrank below the block incremental limit or the file timestamp regressed far enough into the past. In those cases the block incremental info would not be copied in manifestBuildIncr().
Instead always copy the block incremental info in case the file ends up being referenced to a prior backup.
The validation tests were not robust enough to catch this issue so they were improved in 1d42aed.
In the particular case that exposed this bug, a file had a timestamp that was almost four weeks in the past at full backup time. A few days later a failover occurred and the next incremental ran on the new primary (old standby) in delta mode. The same file had a timestamp just a few hours older than in the full backup, but now four weeks older than the current backup. Block incremental was disabled for the file on initial manifest build because of its age, which meant the block incremental info was not copied into the new manifest. The delta then determined the file had not changed and referenced it to the full backup. On restore, the file appeared to be a normal file stored in a bundle but could not be decompressed because it was in fact a block incremental.
The verify command was appending the compression extension rather than the .pgbi extension when verifying block incremental files stored outside a bundle.
Originally the idea was that verify would not need any changes (since it just examines repo-size and checksum) but at some point the new extension was added and broke that assumption.
Use backupFileRepoPathP() to generate the correct filename (just like backup, restore, etc.).
Referenced files were not being checked for validity unless they were hard linked to the current backup (as they were in a lot of the tests). Newer tests with bundling do not have hard links and so missed these checks.
Improve the validation code to check referenced files in the manifest even when they are not hard linked into the current backup.
Add a delta test for bundling/block incremental that includes a file old enough to get a block size of zero. These are good tests by themselves but they also reduce the churn in an upcoming bug fix.
The initial implementation used simple waits when having to loop due to getting a LIBSSH2_ERROR_EAGAIN, but we don't want to just wait some amount of time; we want to wait until we're able to read or write on the fd that we would have blocked on.
This change removes all of the wait code from the SFTP driver and changes the loops to call the newly introduced storageSftpWaitFd(), which in turn checks with libssh2 to determine the appropriate direction to wait on (read, write, or both) and then calls fdReady() to perform the wait using the provided timeout.
This also removes the need to pass ioSession or timeout down into the SFTP read/write code.
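A sketch of the mechanism using libssh2's direction hint, with poll() standing in for fdReady() (the helper name and shape are illustrative):

    #include <poll.h>
    #include <stdbool.h>
    #include <libssh2.h>

    // After LIBSSH2_ERROR_EAGAIN, ask libssh2 which direction(s) would unblock
    // the session, then wait on the fd for exactly those events
    static bool
    waitFd(LIBSSH2_SESSION *session, int fd, int timeoutMs)
    {
        const int directions = libssh2_session_block_directions(session);
        struct pollfd pfd = {.fd = fd, .events = 0};

        if (directions & LIBSSH2_SESSION_BLOCK_INBOUND)
            pfd.events |= POLLIN;

        if (directions & LIBSSH2_SESSION_BLOCK_OUTBOUND)
            pfd.events |= POLLOUT;

        // Ready in the required direction(s), or false on timeout/error
        return poll(&pfd, 1, timeoutMs) > 0;
    }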
In the case that no backups were expired but time-based retention was met, no archive expiration would occur and the following would be logged:
INFO: time-based archive retention not met - archive logs will not be expired
In most cases this was harmless, but when retention was first met or if retention was increased, it would require one additional backup to expire earlier WAL. After that expiration worked as normal.
Even once expiration was working normally the message would continue to be output, which was misleading since retention had in fact been met; there was simply nothing to do.
Bring this code in line with count-based retention, i.e. always log what should be expired at detail level (even if nothing will be expired) and then log info about what was expired (even if nothing is expired). For example:
DETAIL: repo1: 11-1 archive retention on backup 20181119-152138F, start = 000000010000000000000002
INFO: repo1: 11-1 no archive to remove