Integration expect log testing was originally used as a rough-and-ready way to make sure that certain code paths were being executed before the unit tests existed. Now that we have 100% unit test coverage (with expect log testing) the value of the integration expect tests seems minimal at best.
But they do cause numerous issues:
- Maintenance of the expect code and replacements that are required to keep logs reproducible.
- Even a trivial change can cause massive churn in the expect logs, e.g. d9088b2. These changes should be minutely audited but since the expect logs have little value now it is seldom worth the effort.
- The OS version used to do expect testing (RHEL7) can only be used to test one version of PostgreSQL. This makes it hard to balance the PostgreSQL version testing between OS versions.
- When a commit affects expect logs it is not clear (especially for new developers) how to regenerate them and our contributing guide is silent on the issue.
The goal is to migrate the integration tests to C and expect testing is not part of that plan. It seems best to get rid of them now.
The version ranges given in the user guides caused confusion. For example, because the user guide for RHEL specified PostgreSQL 9.6-11, users questioned whether pgBackRest worked for PostgreSQL 12 on RHEL.
Remove these ranges and add more explanatory text to the introduction to try to make it clearer how the user guides work and which versions are covered (basically all of them).
The function worked fine, but Coverity was unable to determine that the finally block was run, which led to false positives about unfreed memory.
Using a boolean in the block makes it clear to Coverity that the finally block will always be run no matter what else happens.
We'll depend on the compiler to optimize away the boolean if it is not used in a finally block. The cost of the boolean is fairly low in comparison to everything else being done in these macros, so it does not seem worth having a separate block even if the compiler is not able to eliminate the boolean.
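A minimal sketch of the pattern (simplified for illustration; not the actual macro internals):

    #include <setjmp.h>
    #include <stdbool.h>

    static jmp_buf errorJump;

    static void tryExample(void)
    {
        volatile bool finallyDone = false;

        if (setjmp(errorJump) == 0)
        {
            // try block
        }

        // The flag gives the analyzer a concrete condition proving this block
        // is always reached (and runs once, even if an error thrown inside it
        // jumps back to the setjmp() above)
        if (!finallyDone)
        {
            finallyDone = true;
            // finally block: free memory, close files, etc.
        }
    }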
This reverts most of 9a271e9 that fixed a bug caused by c5b5b58, which was also attempting to help Coverity understand FINALLY() blocks.
Dividers were used in some files, but not others, and some had section names (which are hard to maintain) and others did not.
Try to make this more consistent by putting a divider in front of every section, variable block, and wherever else seems appropriate.
In cases where clock skew or timezone issues are preventing backup label generation the user could see an error like this:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I.manifest.gz'
This will happen if the most recent label is drawn from the history. It is cleaner (and probably less confusing) to strip off the extensions so the user sees:
new backup label '20220504-152308F' is not later than latest backup label '20220504-222042F_20220504-222141I'
The order of callbacks and frees meant that memory needed during a callback (for logging in all known cases) might end up being freed before a callback needed it.
Requiring callbacks and logging to check the validity of their allocations is pretty risky and it is not clear that all possible cases have been accounted for.
Instead recursively execute all the callbacks first and then come back and recursively free the context. This is safer and it removes the need to check if a context is freeing so a simple active flag (in debug builds) will do. The caller no longer needs this information at all so remove memContextFreeing() and objMemContextFreeing().
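As a rough sketch (the struct and function names here are illustrative, not the real implementation):

    // Pass 1: run every callback depth-first while all memory is still valid
    typedef struct MemContext
    {
        void (*callbackFunction)(void *);
        void *callbackArgument;
        struct MemContext **childList;
        unsigned int childTotal;
    } MemContext;

    static void memContextCallbackRecurse(MemContext *const this)
    {
        if (this->callbackFunction != NULL)
            this->callbackFunction(this->callbackArgument);

        for (unsigned int childIdx = 0; childIdx < this->childTotal; childIdx++)
            if (this->childList[childIdx] != NULL)
                memContextCallbackRecurse(this->childList[childIdx]);
    }

    // Pass 2 (not shown): recursively free the context tree, which is now
    // safe because no callback can run after this point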
In the JSON output the percent complete is stored as an integer representing the percent complete * 100. So, before display it should be converted to double and divided by 100, or split using integer mod and div.
Note that percent complete will only be displayed on the host where the backup was executed. Remote hosts will show a backup/expire running with no percent complete.
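For example, with a hypothetical stored value of 4237:

    #include <stdio.h>

    int main(void)
    {
        const unsigned int pctRaw = 4237;                   // percent complete * 100 from JSON

        printf("%.2f\n", (double)pctRaw / 100.0);           // 42.37 via double conversion
        printf("%u.%02u%%\n", pctRaw / 100, pctRaw % 100);  // 42.37% via integer div/mod

        return 0;
    }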
PostgreSQL 15 drops support for exclusive backup and renames the start/stop backup commands.
This is based on the pgdg-testing repo since beta1 has not been released yet, but it seems unlikely that breaking changes will be made at this point. beta1 should be tagged just before our next release so we'll retest before the release.
This column has been removed in PostgreSQL 15. Rather than add a lot of special handling, it seems better just to update all versions to not depend on this column.
Add centralized functions to identify the type of database (i.e. system or user) by name and use FirstNormalObjectId when a name is not available.
The new query in the db module will still return the prior result for PostgreSQL < 15, which will be stored in the manifest. This is important to preserve behavior when downgrading pgBackRest. There are no concerns here for PostgreSQL 15 since older versions of pgBackRest won't be able to restore backups for PostgreSQL 15 anyway.
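A hedged sketch of the centralized check (FirstNormalObjectId is 16384 in PostgreSQL; the function names here are illustrative):

    #include <stdbool.h>
    #include <string.h>

    #define FIRST_NORMAL_OBJECT_ID 16384

    // Identify a system database by name when a name is available...
    static bool dbIsSystemName(const char *const name)
    {
        return strcmp(name, "postgres") == 0 || strncmp(name, "template", 8) == 0;
    }

    // ...and fall back to the OID boundary when it is not
    static bool dbIsSystemId(const unsigned int id)
    {
        return id < FIRST_NORMAL_OBJECT_ID;
    }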
Any error thrown resets execution to the last setjmp(), which means that parts of the try block need to make sure they don't get run again. FINALLY() was not doing this so if it threw an error it would end up back in the FINALLY() block, where the error would likely be thrown again, causing an infinite loop.
Fix this by tracking the state of FINALLY() and only running it once. This requires cleaning the error stack like CATCH*() and clearing the error like TRY_END() depending on the order of execution.
The archive-get/archive-push commands would not error for, e.g., permission errors when attempting to get a lock before launching the async process. Since the async process was not launched there would be no error status file and the user would get a generic failure message. Also, there would be no async log.
Refactor lockAcquireFile() to throw an error when failOnNoLock = false unless the file is locked by another process. This seems to be the original intent of this parameter and there may have been a mistake when porting from Perl. In any case it looks wrong enough to be considered a bug.
If this error is thrown rather than a specific error returned from the async process, it means the async process is unable to write the status files for some reason and the only way to get the error is out of the async log.
This hint includes the exact async log path and name to make finding errors easier.
Previously reading/writing JSON required parsing/rendering via a variant, which added many more memory allocations and loops.
Instead allow JSON to be read/written serially to improve performance and simplify the code. This also allows us to get rid of many String and Variant constants which are no longer required.
The goal is to be able to read/write very large (e.g. gigabyte manifest) JSON structures, which would not be practical with the current code.
Note that external JSON (GCS, S3, etc) is still handled using variants. Converting these will require more consideration about key ordering since it cannot be guaranteed as in our own formats.
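For illustration only (these function names are assumptions based on the description, not verified signatures), serial writing streams keys and values directly instead of building a Variant tree:

    JsonWrite *const json = jsonWriteNewP();

    jsonWriteObjectBegin(json);
    jsonWriteKeyZ(json, "label");                   // keys written in order
    jsonWriteZ(json, "20220504-152308F");           // values streamed directly
    jsonWriteObjectEnd(json);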
Packs support stronger typing than JSON and are more efficient. For the small result sets that we deal with efficiency is probably not very important, but this removes another place where we are using JSON instead of Pack.
Push checking for result struct (e.g. single row) down into PgClient since it has easy access to this information rather than needing to parse the result set to find out.
Refactor all code downstream that depends on PgClient results.
Previously the process id was skipped if it did not exist. Instead, throw an error and handle the errors in downstream code.
This was probably ignored at some point to provide backward-compatibility, but that is no longer required, if it ever was.
Sometimes we need to read a lock from another process. This was done two different ways and in the case of cmdStop() was definitely hacky.
Centralize the logic to make it easier to read the locks for another process. This will also make it easier to add new lock data.
When archive-mode-check is disabled and archive-push is running from multiple hosts, it is very likely that the file will already exist with the same checksum, so disable the warning.
However, if the checksums do not match, an error will still be thrown.
If a boolean option had an unresolved dependency then the value would be NULL, which meant the dependency would need to be checked in the code to avoid an error. For example, cfgOptionBool(cfgOptOnline) needed to be checked before it was safe to call cfgOptionBool(cfgOptArchiveCheck).
Allow a default for boolean options when they are unresolved to simplify the code. This makes using the options easier and less prone to error. Not all boolean options get a dependency default in this commit, but more may be added in the future.
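For example, assuming the dependency default for archive-check is false and archiveCheck() is a hypothetical caller:

    // Before: the dependency had to be checked explicitly to avoid an error
    if (cfgOptionBool(cfgOptOnline) && cfgOptionBool(cfgOptArchiveCheck))
        archiveCheck();

    // After: when online=n leaves archive-check unresolved it defaults to
    // false, so the option can be read directly
    if (cfgOptionBool(cfgOptArchiveCheck))
        archiveCheck();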
For PITR with --type=lsn, attempt to auto-select the appropriate backup set based on the --target LSN provided. Pick the most recent backup where backup-lsn-stop is less than or equal to the provided LSN.
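A sketch of the selection under assumed types (not actual pgBackRest code):

    #include <stdint.h>

    typedef struct BackupData
    {
        const char *label;
        uint64_t lsnStop;                           // backup-lsn-stop, 0 if unset
    } BackupData;

    // list is ordered oldest to newest
    static const BackupData *backupSelectLsn(
        const BackupData *const list, const unsigned int total, const uint64_t targetLsn)
    {
        const BackupData *result = NULL;

        for (unsigned int idx = 0; idx < total; idx++)
            if (list[idx].lsnStop != 0 && list[idx].lsnStop <= targetLsn)
                result = &list[idx];                // keep the most recent match

        return result;                              // NULL means no backup can replay to the target
    }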
It is possible that a file will be truncated to zero-length after the backup manifest has been built. We could build logic into backupFile() to handle this case but it is hard to test well because of the race condition, so tests would need to be written directly against backupFile() and backupJobResult(). It hardly seems worth all that effort for a condition that occurs rarely, if ever.
Instead just remove the manifest check and add tests to restore to make sure it handles bundled zero-length files correctly. Logging will show that the file was bundled so if it happens a lot (which seems very unlikely) then we can think about an alternate implementation.
This rule was added because there were not sufficient tests to demonstrate that the repo-hardlink option could be changed in a backup set.
Remove the restriction and add/update tests to show that it works.
This is necessary now because bundling requires that hardlinking be disabled. Rather than add code complexity, it seems better just to address this limitation.
Check for invalid paths in the repo-* commands and throw an error when appropriate: a path may not contain '//', a trailing '/' is stripped from the path, and an absolute path must fall under the repo path.
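Sketched with pgBackRest String helpers (treat the exact calls as assumptions):

    if (strstr(strZ(path), "//") != NULL)
        THROW_FMT(ParamInvalidError, "path '%s' cannot contain '//'", strZ(path));

    // Strip a trailing '/' so equivalent paths compare equal
    if (strEndsWithZ(path, "/"))
        path = strSubN(path, 0, strSize(path) - 1);

    // An absolute path must fall under the repo path
    if (strBeginsWithZ(path, "/") && !strBeginsWith(path, repoPath))
        THROW_FMT(ParamInvalidError, "absolute path '%s' is not in repo path '%s'", strZ(path), strZ(repoPath));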
IMDSv2 provides additional security to prevent instance metadata from being read by an attacker.
All AWS instances should provide IMDSv2 but still fall back to IMDSv1 if the IMDSv2 token request fails. This is in case there are any services outside AWS that are emulating IMDSv1 but have not implemented IMDSv2.
It seems best for these to be repo options so they can be configured per repo, rather than globally.
Also clarify usage for repo-bundle-size and repo-bundle-limit.
Since files are stored sequentially in a bundle, it is often possible to restore multiple files with a single read. Previously, each restored file required a separate read. Reducing the number of reads is particularly beneficial for object stores, but performance should benefit on any file system.
Currently if there is a gap then a new read is required. In the future we might set a limit for how large a gap we'll skip without starting a new read.
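The idea, roughly (all names illustrative):

    uint64_t readEnd = 0;
    BundleRead *read = NULL;

    for (unsigned int fileIdx = 0; fileIdx < fileTotal; fileIdx++)
    {
        const BundleFile *const file = &fileList[fileIdx];  // sorted by bundle offset

        // Reuse the open read while files are contiguous; a gap currently
        // forces a new read, though small gaps might be skipped in the future
        if (read == NULL || file->offset != readEnd)
            read = bundleReadOpen(file->bundleId, file->offset);

        restoreFileFromRead(read, file);
        readEnd = file->offset + file->sizeRepo;
    }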
Improve the stop command, when the force and stanza options are specified, to terminate only processes holding lock files for the given stanza. Previously, all processes holding lock files were terminated regardless of stanza.
For very large backups only getting an update per percent may not be often enough.
Add hundredths to the percent complete logging to provide more timely information.
IMPORTANT NOTE: Repository size reported by the info command is now entirely based on what pgBackRest has written to storage. Previously, in certain cases, pgBackRest could detect if additional compression was being applied by the storage but this is no longer supported.
Bug Fixes:
* Retry errors in S3 batch file delete. (Reviewed by Reid Thompson. Reported by Alex Richman.)
* Allow case-insensitive matching of HTTP connection header values. (Reviewed by Reid Thompson. Reported by Rémi Vidier.)
Features:
* Add support for AWS S3 server-side encryption using KMS. (Contributed by Christoph Berg. Reviewed by David Steele, Tharindu Amila.)
* Add archive-missing-retry option. (Reviewed by Stefan Fercot.)
* Add backup type filter to info command. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Retry on page validation failure during backup. (Reviewed by Stephen Frost, David Christensen.)
* Handle TLS servers that do not close connections gracefully. (Reviewed by Rémi Vidier, David Christensen, Stephen Frost.)
* Add backup LSNs to info command output. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Automatically strip trailing slashes for repo-ls paths. (Contributed by David Christensen. Reviewed by David Steele.)
* Do not retry fatal errors. (Reviewed by Reid Thompson.)
* Remove support for PostgreSQL 8.3/8.4. (Reviewed by Reid Thompson, Stefan Fercot.)
* Remove logic that tried to determine additional file system compression. (Reviewed by Reid Thompson, Stefan Fercot.)
Documentation Bug Fixes:
* Move repo options in TLS documentation to the global section. (Reported by Anton Kurochkin.)
* Remove unused backup-standby option from stanza commands. (Reported by Stefan Fercot.)
* Fix typos in help and release notes. (Fixed by Daniel Gustafsson. Reviewed by David Steele.)
Documentation Improvements:
* Add aliveness check to systemd service configuration. (Suggested by Yogesh Sharma.)
* Add FAQ explaining WAL archive suffix. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Note that replication slots are not restored. (Contributed by Reid Thompson. Reviewed by David Steele, Stefan Fercot. Suggested by Christophe Courtois.)
Some TLS server implementations will simply close the socket rather than correctly closing the TLS connection. This causes problems when connection: close is specified with no content-length or chunked encoding and we are forced to read to EOF. It is hard to know if this is a real EOF or a network error.
In cases where we can parse the content and (hopefully) ensure it is correct, allow the closed socket to serve as EOF. This is not ideal, but the change in 8e1807c means that currently working servers with this issue will stop working after 2.35 is installed, which seems too risky.
da0f3a855 used a workaround to get the documentation building on aarch64 but recent changes to the PGDG yum repo have broken this workaround. Installing the regular way still doesn't work, either.
Reverting for now to get the CI pipeline working again.
Docker outputs build info to stderr even when the build is successful. This seems to be especially true on Mac M1.
ContainerTest.pm already does this suppression so add it to the other places where containers are built.
Trailing slashes in at least some of the repository storage types were preventing repo-ls from displaying any content (presumably due to storage-specific behavior).
Since the path with the slash should be equivalent to the path without the slash, just remove it if provided by the user.
Checking that pd_upper == 0 is not enough since this field may be corrupted. Still use pd_upper as a quick check, but when it is zero proceed to check the rest of the page to ensure it is also all zeroes.
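A minimal version of the full check:

    #include <stdbool.h>
    #include <stddef.h>

    // Return true only if every byte in the page is zero. The caller still
    // tests pd_upper == 0 first as a fast filter, since a valid page will
    // almost always have a nonzero pd_upper.
    static bool pgPageIsZero(const unsigned char *const page, const size_t pageSize)
    {
        for (size_t byteIdx = 0; byteIdx < pageSize; byteIdx++)
            if (page[byteIdx] != 0)
                return false;

        return true;
    }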
Rather than attempting to filter page checksum failures by LSN, just retry when there is a page checksum failure. If the page has not changed since the last read report it as an error. If the page has changed, then PostgreSQL must be modifying the page so we can ignore the error because a full page write (and possibly updates) will be in the WAL.
Also remove tests made redundant by the test merge in b4897077.
When there was an issue with the system library path during building, the build-help rule would fail while executing ./build-help, with the effect that main.c wouldn't build.
Break out help.auto.c generation from the build-help stage to allow it to be re-executed when the library path has been corrected.
Retry a WAL segment that was previously reported as missing by the archive-get command. This prevents notifications in the spool path from a prior restore from being used and possibly causing a recovery failure if consistency has not been reached.
Disabling this option allows PostgreSQL to more reliably recognize when the end of the WAL in the archive has been reached, which permits it to switch over to streaming from the primary. With retries enabled, a steady stream of WAL being archived will cause PostgreSQL to continue getting WAL from the archive rather than switch to streaming.
When disabling this option it is important to ensure that the spool path for the stanza is empty. The restore command does this automatically if the spool path is configured at restore time. Otherwise, it is up to the user to ensure the spool path is empty.
Coverity complained that this pass by value was inefficient:
CID 376402: Performance inefficiencies (PASS_BY_VALUE)
Passing parameter file of type "ManifestFile" (size 136 bytes) by value.
This was completely intentional since it gives us a copy of the struct that we can change without bothering the caller. However, updating fields is fine and may benefit the caller at some future date, and in any case does no harm now.
And as usual it is easier not to fight with Coverity.
Limit which files can be added to bundles, which allows resume to work reasonably well. On resume, the bundles are removed and any remaining file is eligible to be resumed.
Also reduce the bundle-size default to 20MiB. This is pretty arbitrary, but a smaller default seems better.
Bundle (combine) smaller files during backup to reduce the number of files written to the repository (enable with --bundle). Reducing the number of files is a benefit on all file systems, but especially so on object stores such as S3 that have a high file creation cost. Another benefit is that zero-length files are only stored as metadata in the manifest.
Files are batched up to bundle-size and then compressed/encrypted individually and stored sequentially in the bundle. The bundle id and offset of each file is stored in the manifest so files can be retrieved randomly without needing to read the entire bundle. Files are ordered by timestamp descending when being assigned to bundles to reduce the amount of random access that needs to be done. The idea is that bundles with older files can be read in their entirety on restore and only bundles with newer files will get fragmented.
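For example, retrieving a single file needs only a seek and a bounded read (plain POSIX shown for clarity; the values come from the manifest entry and are illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    const char *const bundlePath = "bundle/1";      // derived from the bundle id
    const off_t bundleOffset = 262144;              // where the file begins in the bundle
    const size_t sizeRepo = 8192;                   // the file's compressed/encrypted size

    unsigned char buffer[8192];
    const int fd = open(bundlePath, O_RDONLY);

    lseek(fd, bundleOffset, SEEK_SET);              // jump straight to the file
    read(fd, buffer, sizeRepo);                     // read only this file's bytes
    close(fd);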
Bundles are a custom format with metadata stored in the manifest. Tar was considered but it is too limited a format, the major issue being that the size of the file must be known in advance and that is very contrary to how pgBackRest works, especially once we introduce page-level incremental backups.
Bundles are stored numbered in the bundle directory. Some files may still end up in pg_data if they are added after the backup is complete. backup_label is an example.
Currently, only the backup command works in batches. The restore and verify commands use the offsets to pull individual files out of the bundle. It seems better to finalize how this is going to work before optimizing the other commands. Even as is, this is a major step forward, and all commands function with bundling.
One caveat: resume is currently not supported when bundle is enabled.
There is some evidence that retrying fatal errors, especially out of memory errors, may cause lockups. It makes sense to report fatal errors as quickly as possible and bypass retries. This may or may not fix the lockup issue but it is worth doing either way.
For now, the only fatal errors will be AssertError and MemoryError.
If the entire batch failed it would be retried, but individual file errors were not retried. This could cause pgBackRest to terminate during expiration or when removing an unresumable backup.
Rather than retry the entire batch, delete the errored files individually to take advantage of the HTTP retry rather than adding a new retry loop. These errors seem rare enough that it should not be a performance issue.
In theory, the additional stat() call after a file has been copied to the repo can determine if additional compression has been applied by the file system. However, it has been a very long time since we tested this in practice. There are currently no unit tests that accurately test this feature since it requires a compressed file system like ZFS to work, which never seemed worth the extra cost.
It can also add a lot of time to backups if there is a large quantity of small files.
In addition, it stands as a blocker for combining files for small file support since it is no longer possible to get per-file sizes from the viewpoint of the file system. There are several ways this could be reworked but none of them are easy while at the same time maintaining current info functionality.
It doesn't seem worth keeping an untested feature that will only work in some special cases (if it still works) when it is blocking development.
The most recent release of Minio has broken CI builds but there is no logging to indicate what is wrong.
For now, just use the prior release to get CI builds working again. This kind of breakage is not uncommon for Minio but they usually resolve it in the next release.
Update lock code to use standard common/io functions and module patterns. This module was developed before the common/io module existed and our patterns had stabilized.
Previously manifest load required two passes through the file list, one to load the data and one to set the defaults. This required each file to be packed twice.
Instead simply note that the file value is default and then set the file defaults when they are loaded from the manifest. This is made possible by the different internal/external representations for files so the same method cannot be applied to paths and links.
This change seems to resolve the performance issues noted in 61ce586 but there is no obvious reason why.
Centralize these options so they are consistent across clusters.
Also, there were some options that the user doesn't really need to see, e.g. log_line_prefix. These can be set in advance so they don't need to be part of the documentation.
This function (which creates lots of tables) is generally useful for testing (not just stress testing) so create it as soon as the cluster is created.
Also add the data parameter which will insert a single row into the table so the file on disk is not zero bytes.
Manifests with a very large number of files can use a considerable amount of memory. There are a lot of zeroes in the data so it can be stored more efficiently by using base-128 varint encoding for the integers and storing the strings in the same allocation.
The downside is that the data needs to be unpacked in order to be used, but in most cases this seems fast enough (about 10% slower than before) except for saving the manifest, which is 10% slower up to 10 million files and then gets about 5x slower by 100 million (two minutes on my M1 Mac). Profiling does not show this slowdown so I wonder if this is related to the change in memory layout. Curiously, the function that increased most was jsonFromStrInternal(), which was not modified. That gives more weight to the idea that there is some kind of memory issue going on here and one hopes that servers would be less affected. Either way, the largest use cases we have seen are for about 6 million files so if we can improve that case I believe we will be better off.
Further analysis showed that most of the time was taken up writing the size and timestamp fields, which makes almost no sense. The same amount of time was used if they were hard-coded to 0, which points to some odd memory issue on the M1 architecture.
This change has been planned for a while, but the particular impetus at this time is that small file support requires additional fields that would increase manifest memory usage by about 20%, even if the feature is not used.
Note that the Pack code has been updated to use the new varint encoder, but the decoder remains separate because it needs to fetch one byte at a time.
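For reference, base-128 varint encoding stores seven data bits per byte and uses the high bit as a continuation flag, so the manifest's many zero and small values encode to a single byte:

    #include <stddef.h>
    #include <stdint.h>

    static size_t varIntPut(uint8_t *const buffer, uint64_t value)
    {
        size_t size = 0;

        // Emit seven bits at a time, setting the high bit while more remain
        while (value >= 0x80)
        {
            buffer[size++] = (uint8_t)(value | 0x80);
            value >>= 7;
        }

        buffer[size++] = (uint8_t)value;
        return size;                                // e.g. 0 -> 1 byte, 300 -> 2 bytes
    }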
Manifest defaults for user, group, and mode were previously generated by scanning the data to find the most common values. This was very accurate but slow and complicated. It could also lead to surprising changes in the manifest when a default value suddenly changed.
Instead, use the $PGDATA path to generate defaults. In the vast majority of cases the same user/group should own all the path/files and the default file mode is easily derived from the path mode. There may be some edge cases where this generates larger manifests, but in general it reduces time and complexity when saving the manifest.
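A sketch of the derivation (assuming the default file mode is the path mode with the execute bits removed):

    #include <sys/stat.h>

    static void manifestDefaultsFromPath(const char *const pgDataPath)
    {
        struct stat pathStat;
        stat(pgDataPath, &pathStat);                // e.g. "/var/lib/pgsql/data"

        const uid_t userDefault = pathStat.st_uid;  // one owner assumed for all paths/files
        const gid_t groupDefault = pathStat.st_gid;
        const mode_t pathMode = pathStat.st_mode & 0777;    // e.g. 0750
        const mode_t fileMode = pathMode & ~(mode_t)0111;   // 0750 -> 0640

        (void)userDefault; (void)groupDefault; (void)fileMode;
    }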
Remove the MCV code since it is no longer used.
Change the mode back to 0700 earlier to reduce churn in the expect logs.
This will be especially important in a future commit that gets the defaults exclusively from the base path.
This flag was only being used by the backup command after manifestNewBuild() and had no other uses. There was a time when it was important for integration testing but the unit tests now fulfill this role.
Since backup is the only code concerned with the primary flag, move the code into the backup module.
We don't have any cross-version testing but this change was tested manually with the most recent version of pgBackRest to make sure it was tolerant of the missing primary info. When an older version of pgBackRest loads a newer manifest the primary flag will always be set to false, which is fine since it is not used.
BackupJobData has several members that backupProcessQueue() needs so it is more efficient to use them rather than passing them separately or getting them from the configuration.
Coverity pointed out that -1 could be passed to lseek() (added in a79034ae) after a file failed to open because it is missing. Overall it seems simpler to enclose the success code in an else block to prevent any repeats of this mistake in the future.
This was not an active bug because there are currently no cases where we do read offsets in a file that is allowed to be missing.
Also remove the result flag since it is easier to just check that the file descriptor is valid.
Updating the manifest this way was not a great idea because it broke abstraction for the object. This meant certain changes to the interface and internals were not possible because the code was modifying internal manifest data.
Instead track the user replacements entirely in the restore module.
This also has the benefit of eliminating a pass over the manifest path/file/link lists.
AWS S3 integrates with AWS Key Management Service (AWS KMS) to provide server side encryption of S3 objects. This integration protects objects under encryption keys that never leave AWS KMS unencrypted.
The range feature allows reading out an arbitrary chunk of a file and will be important for efficient small file support.
Now that all drivers are required to support ranges remove the storageFeatureLimitRead feature flag that was implemented only by the Posix driver.
Do the replacement anywhere cfgOptionGroupIdxToKey() is being used to construct a group name in a message. cfgOptionGroupName() is better for this case since it also includes the name of the group so that it does not need to be repeated in each message.
The backup LSNs are useful for performing LSN-based PITR. LSNs will not be displayed in the general text output (without --set) because they are probably not useful enough to deserve their own line.
There is no evidence that users need 8.3/8.4 anymore but it does cost us in terms of development and testing, especially now that we have a number of new backup/restore features planned.
It seems to make sense to remove this support now. If there are users who need to use/migrate from these versions they can use an older version of pgBackRest.
Bug Fixes:
* Fix restore delta link mapping when path/file already exists. (Reviewed by Reid Thompson. Reported by Younes Alhroub.)
* Fix socket leak on connection retries. (Reviewed by Reid Thompson. Reported by James Coleman.)
Features:
* Add TLS server. (Reviewed by Stephen Frost, Reid Thompson, Andrew L'Ecuyer.)
* Add --cmd option. (Contributed by Reid Thompson. Reviewed by Stefan Fercot, David Steele. Suggested by Virgile CREVON.)
Improvements:
* Check archive immediately after backup start. (Reviewed by Reid Thompson, David Christensen.)
* Add timeline and checkpoint checks to backup. (Reviewed by Stefan Fercot, Reid Thompson.)
* Check that clusters are alive and correctly configured during a backup. (Reviewed by Stefan Fercot.)
* Error when restore is unable to find a backup to match the time target. (Reviewed by Reid Thompson, Douglas J Hunley. Suggested by Douglas J Hunley.)
* Parse protocol/port in S3/Azure endpoints. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Add warning when checkpoint_timeout exceeds db-timeout. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Add verb to HTTP error output. (Contributed by Christoph Berg. Reviewed by David Steele.)
* Allow y/n arguments for boolean command-line options. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Make backup size logging exactly match info command output. (Contributed by Reid Thompson. Reviewed by David Steele. Suggested by Mahomed Hussein.)
Documentation Improvements:
* Display size option default and allowed values with appropriate units. (Reviewed by Reid Thompson.)
* Fix typos and improve documentation for the tablespace-map-all option. (Reviewed by Reid Thompson. Suggested by Reid Thompson.)
* Remove obsolete statement about future multi-repository support. (Suggested by David Christensen.)
Utilize httpUrlNewParseP() to parse endpoint and port from the URL in the S3 and Azure helpers to avoid issues where protocol was not expected to be part of the URL.
This leak was caused by the file descriptor variable getting clobbered after a long jump. Mark it as volatile to fix.
Testing this is a bit complex because the issue only happens in optimized builds, if at all. Put the test into the performance suite, which is always optimized, until a better idea presents itself.
If a path/file was remapped to a link using either --link-map or --link-all there would be no effect if the path/file already existed. If a link existed it would be properly updated and converting a link to a path/file also worked.
The issue happened during delta cleanup, which failed to check if the existing path/file had been remapped to a link.
Add checks for newly mapped path/file links and remove the old path/file where required.
This was previously a warning but the warning is easy to miss so a lot of time may be lost restoring and recovering a backup that will not hit the target.
Since this is technically a breaking change, add an "important note" about the change to the release.
In the backup command, add a warning if start-fast is disabled and the PostgreSQL checkpoint_timeout is greater than db-timeout.
In such cases, we might time out before the checkpoint occurs and the backup really starts.
Fail the backup if a cluster stops or the standby is promoted. Previously, shutting down the primary would cause an error but it was not detected until the end of the backup. Now the error will happen sooner and a promotion on the standby will also cause an error.
SIGHUP allows the configuration to be reloaded. Note that the configuration will not be updated in child processes that have already started.
SIGTERM terminates the server process gracefully and sends SIGTERM to all child processes. This also gives the tests an easy way to stop the server.
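A generic sketch of the handling described (not the actual server code):

    #include <signal.h>

    static volatile sig_atomic_t reloadRequested = 0;
    static volatile sig_atomic_t stopRequested = 0;

    static void serverSignalHandler(const int signalType)
    {
        if (signalType == SIGHUP)
            reloadRequested = 1;                    // reload config on the next loop pass
        else if (signalType == SIGTERM)
            stopRequested = 1;                      // leave the loop and stop children
    }

    // In the accept loop: on reloadRequested, reload the configuration (child
    // processes that have already started keep their old config); on
    // stopRequested, send SIGTERM to the child processes and exit gracefully.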
Add the following checks:
* Checkpoint is updated in pg_control after pg_start_backup(). This helps ensure that PostgreSQL and pgBackRest have a consistent view of the storage and that PGDATA paths match.
* Timeline of backup start WAL file matches pg_control. Hard to see how this one could get hit, but we have the power...
* Standby is on the same timeline as the primary. If not, this standby is not following the primary.
* Last standby checkpoint is not greater than the backup checkpoint. If so, this standby is not following the primary.
This also requires some additional plumbing to read/write timeline/checkpoint from pg_control and parse timelines from WAL filenames. There were some changes in the backup tests caused by the fact that pg_control now has different contents for each backup.
The check to ensure that the required checkpoint was reached on the standby should also be updated to use pg_control (it currently uses pg_control_checkpoint()), but that requires non-trivial changes to the test harness and will need to wait.
Eliminate summing and passing of copied file sizes for logging backup size.
Instead, utilize infoBackupDataByLabel() to pull the backup size for the log message.
This allows boolean command-line options to work like their config file equivalents.
At least for now this behavior will remain undocumented since all examples in the documentation will continue to use the standard syntax. The idea is that it will "just work" when options are copied out of config files rather than generating an error.
Previously the archive was only checked at the end of the backup to ensure all WAL required to make the backup consistent was present. The problem was that if archiving was not functioning then the backup had to complete before the user found out, which could be a while if the database was large enough.
Add an archive check immediately after backup start so failures are reported earlier.
The trick is to determine which WAL to check. If the repo is new there may not be any WAL in it and pg_start_backup() will not switch the WAL segment if it is empty. These are both likely scenarios when setting up and/or testing pgBackRest.
If the WAL segment is switched by pg_start_backup(), then check the archive for the segment that was detected prior to backup start. This should be common on normal running clusters with regular activity. Note that this might not be the segment immediately prior to the backup start segment if WAL volume is high.
If pg_start_backup() did not switch the WAL then we can force a switch on PostgreSQL >= 9.3 by creating a restore point. In that case the WAL to check will be the backup start WAL. This is most likely to happen on idle systems, during testing, or immediately after a repo switch.
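The decision reduces to roughly this (names illustrative):

    const String *walSegmentCheck;

    if (!strEq(walSegmentAfterStart, walSegmentBeforeStart))
    {
        // pg_start_backup() switched the segment, so the segment detected
        // prior to backup start should be headed to the archive
        walSegmentCheck = walSegmentBeforeStart;
    }
    else
    {
        // No switch happened (e.g. idle cluster) -- force one with a restore
        // point on PostgreSQL >= 9.3 and check for the backup start segment
        dbWalSwitch(db);
        walSegmentCheck = walSegmentAfterStart;
    }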
An advantage of this approach other than earlier notification is that the backup directory will not be created so no resume will be attempted on the next backup.
Note that some additional churn was created in backup.c because the load of archive.info needs to be done earlier.
Properly log the size of files copied during the backup, matching the backup size returned from the info command.
In the reference issue, the incremental backup after switchover logs the size of all files evaluated rather than only the size of the files copied in the backup.
Size option default and allowed values were displayed in bytes, which was confusing for the user.
This also lays the groundwork for adding units to time options.
Move option parsing functions into a common module so they can be used from the build module.
Allows users to provide an executable to be used when pgbackrest generates command strings that expect to invoke pgbackrest. These generated commands are written to files by pgbackrest, e.g. recovery.conf.
The error handler used a loop to process try, catch, and finally blocks. This worked fine but static analysis tools like Coverity did not understand that the finally block would always run and so there were false positives about double-free, unfreed resource, etc.
This implementation removes the loop, which simplifies everything, and makes it clear that the finally block will always run. This cuts down on Coverity false positives.
This implementation also catches lack of coverage on empty catch blocks so a few test fixes were committed separately in d74fe7a.
A small refactor in backup.c is required because gcc 10.3.1 on Fedora 33 complains that the reason variable may be used uninitialized. It's not clear why this is the case, but reducing the scope of the TRY block fixes the issue.
Rather than converting Strings to StringIds at runtime, store defaults in StringId format in parse.auto.c and convert user input to StringId during parsing.
The compress-type, repo-type and log-level-* options have allow lists, which means it is more efficient to treat them as StringIds.
For compress-type and log-level-* also update the functions that convert them to enums.
The strIdFrom*() forced the caller to pick an encoding, which led to a number of TRY...CATCH blocks in the code. In practice the caller does not care which encoding is used as long as the string is valid for some encoding.
Update the strIdFrom*() function to try all possible encodings and only throw an error when the string is not valid for any of them.
Bug Fixes:
* Allow "global" as a stanza prefix. (Reviewed by Stefan Fercot. Reported by Younes Alhroub.)
* Fix segfault on invalid GCS key file. (Reviewed by Stephen Frost. Reported by Henrik Feldt.)
Improvements:
* Allow link-map option to create new links. (Reviewed by Don Seiler, Stefan Fercot, Chris Bandy. Suggested by Don Seiler.)
* Increase max index allowed for pg/repo options to 256. (Reviewed by Cynthia Shang.)
* Add WebIdentity authentication for AWS S3. (Reviewed by James Callahan, Reid Thompson, Benjamin Blattberg, Andrew L'Ecuyer.)
* Report backup file validation errors in backup.info. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Add recovery start time to online backup restore log. (Reviewed by Tom Swartz, Stefan Fercot. Suggested by Tom Swartz.)
* Report original error and retries on local job failure. (Reviewed by Stefan Fercot.)
* Rename page checksum error to error list in info text output. (Reviewed by Stefan Fercot.)
* Add hints to standby replay timeout message. (Reviewed by Cynthia Shang, Stefan Fercot. Suggested by Leigh Downs.)
The new rendering behavior is correct in normal cases, but for the pre-rendered HTML blocks in the command and configuration references it causes a lot of churn. This would be OK if the new HTML was diff-able, but it is not.
Go back to the old behavior of using br tags for this case to reduce churn until a more permanent solution is found.
Since CentOS 8 will be EOL at the end of the year it makes sense to do this now. The centos:8 image is still used in documentation.xml because changes there require manual testing, which will need to be done at a later date. The changes are not user-facing, however, and can be done at any time.
Also update CentOS references to RHEL since that is what we are emulating for testing purposes.
This is mostly to revert some comment changes in b11ab9f7 that will break the ppc64le patch, but at the same time keep the spelling consistent in all comments and documentation.
Also revert some space changes for the same reason.
Azurite released another breaking change (see fbd018cd, 096829b3, c38d6926, and Azurite issue 1039) so make adjustments as needed to documentation and tests.
Also remove some dead code that hid the repo-storage-host option and was made obsolete by all these changes.
Checking the return value is not terribly important here, but if setsockopt() fails it is likely that bind() will fail as well. May as well get it over with and this makes Coverity happy.
3879bc69 added this call and the parameters were not quite right, but in a way that the compiler decided they were OK. It was mostly working but TLS verification was disabled if caPath was NULL, which is not OK.
The variants were needed to easily serialize configurations for the Perl code.
Unions are more efficient and will allow us to add new types that are not supported by variants, e.g. StringId.