Previously, manifest load required two passes through the file list: one to load the data and one to set the defaults. This required each file to be packed twice.
Instead, simply note that the file value is the default and then set the file defaults when they are loaded from the manifest. This is made possible by the different internal/external representations for files; since paths and links do not have that separation, the same method cannot be applied to them.
This change seems to resolve the performance issues noted in 61ce586 but there is no obvious reason why.
Centralize these options so they are consistent across clusters.
Also, there were some options that the user doesn't really need to see, e.g. log_line_prefix. These can be set in advance so they don't need to be part of the documentation.
This function (which creates lots of tables) is generally useful for testing (not just stress testing) so create it as soon as the cluster is created.
Also add the data parameter, which inserts a single row into each table so the file on disk is not zero bytes.
Manifests with a very large number of files can use a considerable amount of memory. There are a lot of zeroes in the data so it can be stored more efficiently by using base-128 varint encoding for the integers and storing the strings in the same allocation.
The downside is that the data needs to be unpacked in order to be used, but in most cases this seems fast enough (about 10% slower than before) except for saving the manifest, which is 10% slower up to 10 million files and then gets about 5x slower by 100 million (two minutes on my M1 Mac). Profiling does not show this slowdown, so I wonder if it is related to the change in memory layout. Curiously, the function that increased most was jsonFromStrInternal(), which was not modified. That gives more weight to the idea that there is some kind of memory issue going on here, and one hopes that servers would be less affected. Either way, the largest use cases we have seen are for about 6 million files, so if we can improve that case I believe we will be better off.
Further analysis showed that most of the time was taken up writing the size and timestamp fields, which makes almost no sense. The same amount of time was used if they were hard-coded to 0, which points to some odd memory issue on the M1 architecture.
This change has been planned for a while, but the particular impetus at this time is that small file support requires additional fields that would increase manifest memory usage by about 20%, even if the feature is not used.
Note that the Pack code has been updated to use the new varint encoder, but the decoder remains separate because it needs to fetch one byte at a time.
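As a rough sketch of the encoding (illustrative only, not the actual Pack/varint implementation), a base-128 varint stores seven bits per byte with the high bit as a continuation flag, and the decoder fetches one byte at a time:

    #include <stddef.h>
    #include <stdint.h>

    // Encode value as a base-128 varint into buffer (up to 10 bytes for a
    // 64-bit value). Returns the number of bytes written.
    static size_t varIntEncode(uint8_t *buffer, uint64_t value)
    {
        size_t size = 0;

        while (value >= 0x80)
        {
            buffer[size++] = (uint8_t)(value | 0x80);   // low seven bits plus continuation bit
            value >>= 7;
        }

        buffer[size++] = (uint8_t)value;                // final byte, continuation bit clear
        return size;
    }

    // Decode a base-128 varint, fetching one byte at a time and advancing *offset.
    static uint64_t varIntDecode(const uint8_t *buffer, size_t *offset)
    {
        uint64_t result = 0;
        unsigned int shift = 0;
        uint8_t byte;

        do
        {
            byte = buffer[(*offset)++];
            result |= (uint64_t)(byte & 0x7F) << shift;
            shift += 7;
        }
        while (byte & 0x80);

        return result;
    }

Small values such as zero encode to a single byte, which is where the memory savings for sparse manifest data comes from.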
Manifest defaults for user, group, and mode were previously generated by scanning the data to find the most common values. This was very accurate but slow and complicated. It could also lead to surprising changes in the manifest when a default value suddenly changed.
Instead, use the $PGDATA path to generate defaults. In the vast majority of cases the same user/group should own all the paths/files and the default file mode is easily derived from the path mode. There may be some edge cases where this generates larger manifests, but in general it reduces time and complexity when saving the manifest.
Remove the MCV code since it is no longer used.
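For illustration (the helper name is hypothetical and the exact rule lives in the manifest build code), deriving the default file mode from the base path mode amounts to clearing the execute bits:

    #include <sys/stat.h>

    // Hypothetical helper: derive the default file mode from the $PGDATA path
    // mode by clearing execute bits, e.g. 0700 -> 0600, 0750 -> 0640.
    static mode_t fileModeDefault(mode_t pathMode)
    {
        return pathMode & (mode_t)~(S_IXUSR | S_IXGRP | S_IXOTH);
    }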
Change the mode back to 0700 earlier to reduce churn in the expect logs.
This will be especially important in a future commit that gets the defaults exclusively from the base path.
This flag was only being used by the backup command after manifestNewBuild() and had no other uses. There was a time when it was important for integration testing but the unit tests now fulfill this role.
Since backup is the only code concerned with the primary flag, move the code into the backup module.
We don't have any cross-version testing but this change was tested manually with the most recent version of pgBackRest to make sure it was tolerant of the missing primary info. When an older version of pgBackRest loads a newer manifest the primary flag will always be set to false, which is fine since it is not used.
BackupJobData has several members that backupProcessQueue() needs so it is more efficient to use them rather than passing them separately or getting them from the configuration.
Coverity pointed out that -1 could be passed to lseek() (added in a79034ae) after a file failed to open because it is missing. Overall it seems simpler to enclose the success code in an else block to prevent any repeats of this mistake in the future.
This was not an active bug because there are currently no cases where we read from an offset in a file that is allowed to be missing.
Also remove the result flag since it is easier to just check that the file descriptor is valid.
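The shape of the fix, sketched with plain POSIX calls rather than the actual storage driver code, keeps the success path in the else block so a missing file can never feed -1 to lseek():

    #include <fcntl.h>
    #include <stdbool.h>
    #include <unistd.h>

    // Open a file that is allowed to be missing and seek to offset before reading.
    // Returns the file descriptor, or -1 if the file is missing.
    static int fileOpenAtOffset(const char *name, off_t offset, bool ignoreMissing)
    {
        int fd = open(name, O_RDONLY);

        if (fd == -1)
        {
            // A missing file is an error unless explicitly allowed
            if (!ignoreMissing)
            {
                // report the error here ...
            }
        }
        else
        {
            // Success code lives in the else block so -1 can never reach lseek()
            if (offset != 0)
                lseek(fd, offset, SEEK_SET);
        }

        return fd;
    }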
Updating the manifest this way was not a great idea because it broke abstraction for the object. This meant certain changes to the interface and internals were not possible because the code was modifying internal manifest data.
Instead track the user replacements entirely in the restore module.
This also has the benefit of eliminating a pass over the manifest path/file/link lists.
AWS S3 integrates with AWS Key Management Service (AWS KMS) to provide server side encryption of S3 objects. This integration protects objects under encryption keys that never leave AWS KMS unencrypted.
The range feature allows reading out an arbitrary chunk of a file and will be important for efficient small file support.
Now that all drivers are required to support ranges, remove the storageFeatureLimitRead feature flag, which was implemented only by the Posix driver.
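For the Posix driver a range read is essentially a positioned read (a sketch using pread(), not the actual storage interface); object stores express the same thing with an HTTP Range header:

    #include <sys/types.h>
    #include <unistd.h>

    // Read up to limit bytes starting at offset -- the essence of a range read
    // for the Posix driver.
    static ssize_t rangeRead(int fd, void *buffer, size_t limit, off_t offset)
    {
        return pread(fd, buffer, limit, offset);
    }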
Do the replacement anywhere cfgOptionGroupIdxToKey() is being used to construct a group name in a message. cfgOptionGroupName() is better for this case since it also includes the name of the group so that it does not need to be repeated in each message.
Functionality to copy from IoRead to IoWrite is frequently used so centralize it. This also simplifies coverage testing in places where a loop was required before.
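The pattern being centralized is the familiar read-until-EOF loop, sketched here with plain POSIX descriptors rather than the IoRead/IoWrite interface:

    #include <unistd.h>

    // Copy everything readable from fdRead to fdWrite. Returns 0 on success and
    // -1 on a read or write error.
    static int copyAll(int fdRead, int fdWrite)
    {
        unsigned char buffer[64 * 1024];
        ssize_t bytesRead;

        while ((bytesRead = read(fdRead, buffer, sizeof(buffer))) > 0)
        {
            ssize_t bytesWritten = 0;

            // A single write() may be partial, so loop until the chunk is flushed
            while (bytesWritten < bytesRead)
            {
                ssize_t result = write(fdWrite, buffer + bytesWritten, (size_t)(bytesRead - bytesWritten));

                if (result == -1)
                    return -1;

                bytesWritten += result;
            }
        }

        return bytesRead == 0 ? 0 : -1;
    }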
The backup LSNs are useful for performing LSN-based PITR. LSNs will not be displayed in the general text output (without --set) because they are probably not useful enough to deserve their own line.
There is no evidence that users need 8.3/8.4 anymore but it does cost us in terms of development and testing, especially now that we have a number of new backup/restore features planned.
It seems to make sense to remove this support now. If there are users who need to use/migrate from these versions they can use an older version of pgBackRest.
Bug Fixes:
* Fix restore delta link mapping when path/file already exists. (Reviewed by Reid Thompson. Reported by Younes Alhroub.)
* Fix socket leak on connection retries. (Reviewed by Reid Thompson. Reported by James Coleman.)
Features:
* Add TLS server. (Reviewed by Stephen Frost, Reid Thompson, Andrew L'Ecuyer.)
* Add --cmd option. (Contributed by Reid Thompson. Reviewed by Stefan Fercot, David Steele. Suggested by Virgile CREVON.)
Improvements:
* Check archive immediately after backup start. (Reviewed by Reid Thompson, David Christensen.)
* Add timeline and checkpoint checks to backup. (Reviewed by Stefan Fercot, Reid Thompson.)
* Check that clusters are alive and correctly configured during a backup. (Reviewed by Stefan Fercot.)
* Error when restore is unable to find a backup to match the time target. (Reviewed by Reid Thompson, Douglas J Hunley. Suggested by Douglas J Hunley.)
* Parse protocol/port in S3/Azure endpoints. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Add warning when checkpoint_timeout exceeds db-timeout. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Add verb to HTTP error output. (Contributed by Christoph Berg. Reviewed by David Steele.)
* Allow y/n arguments for boolean command-line options. (Contributed by Reid Thompson. Reviewed by David Steele.)
* Make backup size logging exactly match info command output. (Contributed by Reid Thompson. Reviewed by David Steele. Suggested by Mahomed Hussein.)
Documentation Improvements:
* Display size option default and allowed values with appropriate units. (Reviewed by Reid Thompson.)
* Fix typos and improve documentation for the tablespace-map-all option. (Reviewed by Reid Thompson. Suggested by Reid Thompson.)
* Remove obsolete statement about future multi-repository support. (Suggested by David Christensen.)
Utilize httpUrlNewParseP() to parse endpoint and port from the URL in the S3 and Azure helpers to avoid issues where protocol was not expected to be part of the URL.
This leak was caused by the file descriptor variable getting clobbered after a long jump. Mark it as volatile to fix.
Testing this is a bit complex because the issue only happens in optimized builds, if at all. Put the test into the performance suite, which is always optimized, until a better idea presents itself.
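The underlying C rule is that an automatic variable modified after setjmp() has an indeterminate value after longjmp() unless it is declared volatile. A minimal sketch of the failure mode (not the actual error-handling macros):

    #include <fcntl.h>
    #include <setjmp.h>
    #include <unistd.h>

    static jmp_buf errorJump;

    static void example(void)
    {
        // Without volatile the compiler may keep fd in a register that is rolled
        // back by longjmp(), losing the open descriptor and leaking it.
        volatile int fd = -1;

        if (setjmp(errorJump) == 0)
        {
            fd = open("/tmp/example", O_RDONLY);

            // ... code that may longjmp(errorJump, 1) on error ...
        }
        else
        {
            // Error path -- fd still holds the real descriptor because it is volatile
            if (fd != -1)
                close(fd);
        }
    }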
If a path/file was remapped to a link using either --link-map or --link-all, there would be no effect if the path/file already existed. If a link existed it would be properly updated, and converting a link to a path/file also worked.
The issue happened during delta cleanup, which failed to check if the existing path/file had been remapped to a link.
Add checks for newly mapped path/file links and remove the old path/file as required.
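A sketch of the cleanup idea (not the actual restore code): before creating the newly mapped link, remove whatever non-link currently occupies the path:

    #include <sys/stat.h>
    #include <unistd.h>

    // If the target of a newly mapped link is currently a regular file or a
    // directory, remove it so the link can be created.
    static void linkTargetClean(const char *linkPath)
    {
        struct stat statInfo;

        // Use lstat() so an existing symlink is not followed
        if (lstat(linkPath, &statInfo) == 0 && !S_ISLNK(statInfo.st_mode))
        {
            if (S_ISDIR(statInfo.st_mode))
                rmdir(linkPath);        // assumes the directory has already been emptied
            else
                unlink(linkPath);
        }
    }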
This was previously a warning but the warning is easy to miss so a lot of time may be lost restoring and recovering a backup that will not hit the target.
Since this is technically a breaking change, add an "important note" about the change to the release.
In the backup command, add a warning if start-fast is disabled and the PostgreSQL checkpoint_timeout is greater than db-timeout.
In such cases, we might time out before the checkpoint occurs and the backup really starts.
Fail the backup if a cluster stops or the standby is promoted. Previously, shutting down the primary would cause an error but it was not detected until the end of the backup. Now the error will happen sooner and a promotion on the standby will also cause an error.
SIGHUP allows the configuration to be reloaded. Note that the configuration will not be updated in child processes that have already started.
SIGTERM terminates the server process gracefully and sends SIGTERM to all child processes. This also gives the tests an easy way to stop the server.
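A minimal sketch of the signal handling (flag names are hypothetical and this is not the actual server loop):

    #include <signal.h>

    static volatile sig_atomic_t reloadRequested = 0;
    static volatile sig_atomic_t terminateRequested = 0;

    static void sigHupHandler(int signalType)
    {
        (void)signalType;
        reloadRequested = 1;            // reload configuration on the next loop iteration
    }

    static void sigTermHandler(int signalType)
    {
        (void)signalType;
        terminateRequested = 1;         // stop accepting connections and exit
    }

    int main(void)
    {
        signal(SIGHUP, sigHupHandler);
        signal(SIGTERM, sigTermHandler);

        while (!terminateRequested)
        {
            if (reloadRequested)
            {
                reloadRequested = 0;
                // reload configuration here -- children that have already started
                // keep the configuration they were launched with
            }

            // accept and service connections ...
        }

        // Propagate SIGTERM to the process group so child processes also exit
        kill(0, SIGTERM);
        return 0;
    }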
Add the following checks:
* Checkpoint is updated in pg_control after pg_start_backup(). This helps ensure that PostgreSQL and pgBackRest have a consistent view of the storage and that PGDATA paths match.
* Timeline of backup start WAL file matches pg_control. Hard to see how this one could get hit, but we have the power...
* Standby is on the same timeline as the primary. If not, this standby is not following the primary.
* Last standby checkpoint is not greater than the backup checkpoint. If so, this standby is not following the primary.
This also requires some additional plumbing to read/write timeline/checkpoint from pg_control and parse timelines from WAL filenames. There were some changes in the backup tests caused by the fact that pg_control now has different contents for each backup.
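For reference, the timeline is the first eight hex digits of the 24-character WAL segment name, so the new checks reduce to simple comparisons against pg_control (a sketch, not the actual PostgreSQL interface code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    // Parse the timeline from a WAL segment name, e.g. "0000000100000002000000A3".
    static uint32_t walSegmentTimeline(const char *walSegment)
    {
        char timelineHex[9] = {0};

        memcpy(timelineHex, walSegment, 8);
        return (uint32_t)strtoul(timelineHex, NULL, 16);
    }

    // The checks then boil down to, e.g.:
    //   walSegmentTimeline(backupStartWal) == timeline from pg_control
    //   standby checkpoint <= backup checkpoint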
The check to ensure that the required checkpoint was reached on the standby should also be updated to use pg_control (it currently uses pg_control_checkpoint()), but that requires non-trivial changes to the test harness and will need to wait.