It seems better to use TEST_PATH in combination with a constant string rather than have a number of different path constants. This improves readability and reduces confusion about which constant should be used.
For tests already updated as part of the macro-replacement effort, the output tests (TEST_ERROR, TEST_RESULT_LOG, TEST_STORAGE_LIST and TEST_RESULT_STR) have been simplified for readability to remove all but the TEST_PATH constants. The ongoing macro-replacement effort will include these changes.
Updated: expireTest, stanzaTest, checkTest, infoTest, verifyTest (infoArchive and infoBackup had no changes).
Switch from a JSON-based to a binary protocol for communicating with local and remote processes. The pack type is used to implement the binary protocol.
There are a number of advantages:
* The pack type is more compact than JSON and more efficient to render/parse.
* Packs are more strictly typed than JSON.
* Each protocol message is written entirely within ProtocolServer/ProtocolClient so is less likely to get interrupted by an error and leave the protocol in a bad state.
* There is no limit on message size. Previously messages were limited by the buffer size unless a custom implementation was used, as was done for reading/writing files.
Some cruft from the Perl days was removed, specifically support for NULL messages and stack traces, which are no longer possible in C.
There is room for improvement here, in particular locking down the allowed sequence of protocol messages and building a state machine to enforce it. This will be useful for resetting the protocol when it gets in a bad state.
Some tests had to be reordered or updated, as follows:
* Reordered tests at lines 317 and 331 to avoid unnecessary file removal.
* Changed the "stanza found" test at line 1735 to reflect a real-life scenario. Originally this test set up the cipher-pass environment key, which caused RepoGrp to be 2 but with no valid repo path. The repo loops then executed for repo2, but since the path was not defined the test reported "none" for the cipher, which is incorrect since the repo IS encrypted.
* Changed the order of HRN_CFG_LOAD() in some tests where that made it possible to avoid using storageTest.
It is better to clear errors after the catch block completes rather than leave them set until the next error. This also makes it possible to tell when an error is currently being handled, which a function further down the stack might use to modify its behavior. Currently this is only useful in testing, but clearing the error seems like a good idea in general.
Two places used errors outside the CATCH() block. Mem context cleanup now uses a FINALLY(), which is a better implementation anyway. The error handling in main() now calls exitSafe() from within the CATCH() block.
Since the pack type was stored in 4 bits, only 15 values were allowed (0 was reserved).
Allow virtually unlimited types by storing type info in a base-128 encoded integer following the tag when the type bits in the tag are set to 0xF.
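For illustration, a minimal sketch of base-128 (varint) encoding, i.e. the general technique rather than the actual pack code: each byte carries seven bits of the value and the high bit indicates that more bytes follow:

#include <stdint.h>
#include <stdio.h>

// Sketch of base-128 (varint) encoding: 7 bits of payload per byte, high bit set
// while more bytes follow. Values < 128 still fit in a single byte.
static size_t base128Encode(uint64_t value, uint8_t *const buffer)
{
    size_t size = 0;

    while (value >= 0x80)
    {
        buffer[size++] = (uint8_t)(value & 0x7F) | 0x80;
        value >>= 7;
    }

    buffer[size++] = (uint8_t)value;
    return size;
}

static uint64_t base128Decode(const uint8_t *const buffer)
{
    uint64_t result = 0;

    for (size_t idx = 0; ; idx++)
    {
        result |= (uint64_t)(buffer[idx] & 0x7F) << (7 * idx);

        if ((buffer[idx] & 0x80) == 0)
            break;
    }

    return result;
}

int main(void)
{
    uint8_t buffer[10];
    const size_t size = base128Encode(300, buffer);

    printf("300 encodes to %zu bytes and decodes to %llu\n", size, (unsigned long long)base128Decode(buffer));
    return 0;
}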
Also separate the type IDs used in the pack (PackTypeMap) from those presented to the user (PackType). The prior PackType enum exposed implementation details to the user, e.g. pckTypeUnknown.
Bug Fixes:
* Fix issues with leftover spool files from a prior restore. (Reviewed by Cynthia Shang, Stefan Fercot, Floris van Nee. Reported by Floris van Nee.)
* Fix issue when checking links for large numbers of tablespaces. (Reviewed by Cynthia Shang, Avinash Vallarapu. Reported by Avinash Vallarapu.)
* Free no longer needed remotes so they do not timeout during restore. (Reviewed by Cynthia Shang. Reported by Francisco Miguel Biete.)
* Fix help when a valid option is invalid for the specified command. (Reviewed by Stefan Fercot. Reported by Cynthia Shang.)
Features:
* Add PostgreSQL 14 support. (Reviewed by Cynthia Shang.)
* Add automatic GCS authentication for GCE instances. (Reviewed by Jan Wieck, Daniel Farina.)
* Add repo-retention-history option to expire backup history. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele.)
* Add db-exclude option. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
Improvements:
* Change archive expiration logging from detail to info level. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Remove stanza archive spool path on restore. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Do not write files atomically or sync paths during backup copy. (Reviewed by Stephen Frost, Stefan Fercot, Cynthia Shang.)
Documentation Improvements:
* Update contributing documentation. (Contributed by Cynthia Shang. Reviewed by David Steele, Stefan Fercot.)
* Consolidate RHEL/CentOS user guide into a single document. (Reviewed by Cynthia Shang.)
* Clarify that repo-s3-role is not an ARN. (Contributed by Isaac Yuen. Reviewed by David Steele.)
HRN_CFG_LOAD() handles the majority of test configuration loads and has various options for special cases.
It was not clear when to use harnessCfgLoadRaw() vs harnessCfgLoad(). Now "raw" functionality is granular and enabled by parameters, e.g. noStd.
The default is to keep all backup history to match the current behavior. In minimal configuration (0 days), unexpired backups are always kept in history.
When a full backup manifest expires, all dependent differential/incremental manifests expire as well.
This allows protocolRemoteExec() to be shimmed, which means the remote can be run as a child of the test process, simplifying coverage testing.
The shim does not need SSH parameters, so also split those out into a separate function and update the tests to match.
Add the executable to the parameter list to avoid the first option being lost. The backup, restore, and verify tests worked OK with their first option being lost because it happened to be job-retry, which worked fine at its default.
Add hrnProtocolLocalShimUninstall() to allow the shim to be uninstalled.
Log shim at debug level to make it obvious in the logs when a shim is in use.
There are no code changes from PostgreSQL 13 so simply add the new version.
Add CATALOG_VERSION_NO_MAX to allow the catalog version to "float" during the PostgreSQL beta/rc period so new pgBackRest versions are not required when the catalog version changes.
Update the integration tests to handle new PostgreSQL startup messages.
manifestLinkCheck() was pretty inefficient, so large numbers of links caused it to use a lot of memory and eventually crash. This is a more efficient implementation which runs in O(n log n) and uses far less memory.
Checking for duplicate file links has been added, which represents a change in behavior, but hopefully a good one.
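One way to get O(n log n) behavior (illustrative only, not necessarily the implementation used here) is to sort the link destinations and then compare only adjacent entries for duplicates:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Illustrative only: sort link destinations, then a single pass over adjacent
// entries finds duplicates without comparing every pair (O(n log n) overall)
static int comparePath(const void *const a, const void *const b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

int main(void)
{
    const char *linkDest[] = {"/ts/2", "/ts/1", "/ts/2", "/ts/3"};
    const size_t linkTotal = sizeof(linkDest) / sizeof(linkDest[0]);

    qsort(linkDest, linkTotal, sizeof(const char *), comparePath);

    for (size_t idx = 1; idx < linkTotal; idx++)
    {
        if (strcmp(linkDest[idx - 1], linkDest[idx]) == 0)
            printf("duplicate link destination: %s\n", linkDest[idx]);
    }

    return 0;
}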
The user guide was split primarily to provide documentation for the stop-auto option in PostgreSQL <= 9.5. Now that 9.5 is EOL there does not seem to be a good reason to generate an extra user guide. The stop-auto function is still documented in the reference.
Leave the stop-auto documentation in the user guide in case we want to manually generate documentation for older versions.
Also rename centos to rhel for most identifiers since that is the core platform we are building for, similar to how we label 'debian' builds even though we generally use Ubuntu. With CentOS set to become an upstream for RHEL later this year, we'll likely need to pick a new test distribution, perhaps Rocky Linux if that gets off the ground.
Replace all instances of strNew("") with strNew() and use strNewZ() for non-empty zero-terminated strings. Besides saving a useless parameter, this will allow smarter memory allocation in a future commit by signaling intent, in general, to append or not.
In the tests use STRDEF() or VARSTRDEF() where more appropriate rather than blindly replacing with strNewZ(). Also replace strLstAdd() with strLstAddZ() where appropriate for the same reason.
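For illustration, the replacements look roughly like this (a sketch assuming the pgBackRest String/StringList API, so not standalone code):

String *empty = strNew();                              // previously strNew("")
String *label = strNewZ("20210322-171211F");           // previously strNew("20210322-171211F")
const String *path = STRDEF("/path/to/repo");          // constant String, no allocation needed
strLstAddZ(list, "repo1");                             // previously strLstAdd() with a constructed String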
Run the local process inside a forked child process instead of exec'ing it. This allows coverage to accumulate in the local process rather than needing to test the local protocol functions directly, resulting in better end-to-end testing and less test duplication. Another advantage is that the pgbackrest binary does not need to be built for the test.
The backup, restore, and verify command tests have been updated to use the new shim for coverage.
getopt_long() requires an exhaustive list of all possible options that may be found on the command line. Because of the way options are indexed (e.g. repo1-4, pg1-8), optionList[] has 827 entries, and it is only that small because we have curtailed the maximum indexes very severely. Another issue is that getopt_long() scans the array sequentially so parsing gets slower as the index maximums increase.
Replace getopt_long() with a custom implementation that behaves the same but allows options to be parsed with a function instead of using optionList[]. This commit leaves the list in place in order to focus on the getopt_long() replacement, but cfgParseOption() could be replaced with a more efficient implementation that removes the need for optionList[].
This implementation also fixes an issue where invalid options were misreported in the error message if they only had one dash, e.g. -config. This seems to have been some kind of problem in getopt_long(), but no investigation was done since the new implementation fixes it.
Tests were added at 0825428, 2b8d2da, 34dd663, and 384f247 to check that previously untested getopt_long() behavior doesn't change.
Remove stanza archive spool path so existing files do not interfere with the new cluster. For instance, old archive-push acknowledgements could cause a new cluster to skip archiving. This should not happen if a new timeline is selected but better to be safe. Missing stanza spool paths are ignored.
Also add new path expression STORAGE_SPOOL_ARCHIVE to easily access this path.
When running on a GCE instance the authentication token can be pulled directly from the instance metadata. This is configured with repo-gcs-key-type=auto.
In a separate commit (26fefa6), move the code that parses the token response into a separate function, storageGcsAuthToken(), since it is now needed by two key types. This drastically improves the readability of the main commit.
927d9adb changed the way CATALOG_VERSION_NO is used to identify PostgreSQL versions since PG_CONTROL_VERSION is generally bumped with each release. The goal was to make the beta/rc period less painful because any CATALOG_VERSION_NO bump renders pgBackRest inoperative.
This worked, but in fact we'd rather be stricter about which CATALOG_VERSION_NO we accept when identifying a version of PostgreSQL. It is not just about identifying a major version, but making sure the build contains all the functions and catalogs we expect to make pgBackRest work correctly. It is better to reject early dev/beta/rc builds that may not work.
Since 927d9adb was relatively recent the chance that this stricter checking will cause a problem seems minimal, so revert to checking CATALOG_VERSION_NO for every PostgreSQL version.
Leave in place the code that pulls CATALOG_VERSION_NO from pg_control rather than the internal constant since the plan is still to allow catalog versions to "float" during the PostgreSQL beta/rc phase, which will be the subject of a future commit.
If an ok file (which indicates the WAL segment was not found) is present on the first iteration of the loop then remove it and spawn the async process to retry. This action also resets the queue.
Also error if no response is received from the async process rather than returning not found. PostgreSQL will respond the same either way, but this allows us to determine when something is going wrong with the async process.
Update archiveAsyncStatus() to allow warnings to be suppressed. It is better to retry if no WAL segment was found before warning because the warning might be stale.
Convert most of the remaining options that benefit from being StringIds. Since all the command modules can include config.h directly it makes sense to auto-generate these values instead of manually creating an enum for each one.
For the time being StringIds are not being auto-generated because the StringId code does not exist in Perl. However, the *_Z zero-terminated constants for each allowed option value are now auto-generated.
Allows removal of backupType()/backupTypeStr() and improves debug logging of the enum.
Move BackupType enum and string constants to info/infoBackup.h so they are available to more modules. Also convert InfoBackup to use BackupType instead of a String.
It is no longer possible to pull news source from the PostgreSQL website so add a sample in the doc directory. Update the release instructions to reflect this change.
Also note that it is no longer necessary to post separately to pgsql-announce.
Using StringId for the client/session type removes String constants and some awkward referencing/dereferencing needed to use a String constant in the interface.
Converting IoSessionRole to StringId removes a conditional in ioSessionToLog() and improves debug logging by outputting client/server instead of 0/1.
Centralize the formatting of the configuration value for display to the user or passing on a command line.
For the new functions, if the value was set by the user via the command line, config, etc., then that exact value will be displayed. This makes it easier for the user to recognize the value and saves having to format it into something reasonable, especially for time and size option types.
Note that cfgOptTypeHash and cfgOptTypeList option types are not supported by these functions, but they are generally not displayed to the user as a whole.
This also fixes a bug in config/load.c where time values were not being formatted correctly in an error message.
Use StringIds for the storage types (e.g. STORAGE_S3_TYPE) and configuration settings, e.g. cfgOptS3KeyType.
Also add new config functions and harness config functions to support StringIds.
There is no need to write the file atomically (e.g. via a temp file on Posix) because checksums are tested on resume after a failed backup. The path does not need to be synced for each file because all paths are synced at the end of the backup.
This functionality was not lost during the migration -- it never existed in the Perl code, though these settings are used in restore. See 59f1353 where backupFile() was migrated to C.
Fix the segfault when help is requested for an internal option by adding help for all internal options that are valid for a default command role.
Also print warnings about internal options in code rather than putting them in each command/option description.
The remotes are no longer needed in the main process after the manifest is loaded. If the restore is long enough the connection will time out and WARN at the end of the restore. This is harmless for the restore but distracting for the user.
To prevent this, free the remotes once they are no longer needed.
Getting help for a valid option that was invalid for the command would segfault.
Add a check to ensure the option is valid for the command's default role.
It is often useful to represent identifiers as strings when they cannot easily be represented as an enum/integer, e.g. because they are distributed among a number of unrelated modules or need to be passed to remote processes. Strings are also more helpful in debugging since they can be recognized without cross-referencing the source. However, strings are awkward to work with in C since they cannot be directly used in switch statements leading to less efficient if-else structures.
A StringId encodes a short string into an integer so it can be used in switch statements but may also be readily converted back into a string for debugging purposes. StringIds may also be suitable for matching user input providing the strings are short enough.
This patch includes a sample of StringId usage by converting protocol commands to StringIds. There are many other possible use cases. To list a few:
* All "types" in storage, filters. IO , etc. These types are primarily for identification and debugging so they fit well with this model.
* MemContext names would work well as StringIds since these are entirely for debugging.
* Option values could be represented as StringIds which would mean we could remove the functions that convert strings to enums, e.g. CipherType.
* There are a number of places where enums need to be converted back to strings for logging/debugging purposes. An example is protocolParallelJobToConstZ. If ProtocolParallelJobState were defined as:
typedef enum
{
    protocolParallelJobStatePending = STRID5("pend", ...),
    protocolParallelJobStateRunning = STRID5("run", ...),
    protocolParallelJobStateDone = STRID5("done", ...),
} ProtocolParallelJobState;
then protocolParallelJobToConstZ() could be replaced with strIdToZ(). This also applies to many enums that we don't convert to strings for logging, such as CipherMode.
As an example of usage, convert all protocol commands from strings to StringIds.
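A minimal, self-contained sketch of the StringId idea (the encoding below is illustrative and not the actual STRID5() implementation): short lower-case strings are packed into an integer that can be compared cheaply and decoded for debugging:

#include <stdint.h>
#include <stdio.h>

// Pack up to 12 lower-case characters into 5-bit groups (illustrative encoding only)
static uint64_t strIdSketch(const char *const str)
{
    uint64_t result = 0;

    for (unsigned int idx = 0; str[idx] != '\0' && idx < 12; idx++)
        result |= (uint64_t)(str[idx] - 'a' + 1) << (5 * idx);

    return result;
}

// Decode back to a string for logging/debugging
static void strIdToZSketch(uint64_t strId, char *const buffer)
{
    unsigned int idx = 0;

    for (; strId != 0; strId >>= 5)
        buffer[idx++] = (char)((strId & 0x1F) + 'a' - 1);

    buffer[idx] = '\0';
}

int main(void)
{
    char buffer[13];
    const uint64_t strId = strIdSketch("done");

    strIdToZSketch(strId, buffer);
    printf("0x%llx decodes to %s\n", (unsigned long long)strId, buffer);

    return 0;
}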
Restore excluding the specified databases. Databases excluded will be restored as sparse, zeroed files to save space but still allow PostgreSQL to perform recovery. After recovery, those databases will not be accessible but can be removed with the drop database command. The --db-exclude option can be passed multiple times to specify more than one database to exclude.
When used in combination with the --db-include option, --db-exclude will only apply to standard system databases (template0, template1, and postgres).
In combination with the thisPub() function, this macro simplifies accessing the public part of a private object struct.
thisPub() asserts this != NULL so the caller does not need to do it.
Introduce a standard pattern for exposing public struct members (as documented in CODING.md) and use it to inline lstSize() which should improve the performance of iterating large lists.
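A self-contained sketch of the pattern (the names below are illustrative, not the exact pgBackRest definitions):

#include <assert.h>
#include <stdio.h>

// Public part of the object, exposed in the header so accessors can be inlined
typedef struct ListPub
{
    unsigned int listSize;
} ListPub;

// Private struct -- the public part must be the first member
typedef struct List
{
    ListPub pub;
    unsigned int listSizeMax;
} List;

// thisPub() asserts this != NULL so the caller does not need to
static inline const void *thisPub(const void *const this)
{
    assert(this != NULL);
    return this;
}

#define THIS_PUB(type) ((const type##Pub *)thisPub(this))

// Inlined accessor -- no function call overhead when iterating large lists
static inline unsigned int lstSize(const List *const this)
{
    return THIS_PUB(List)->listSize;
}

int main(void)
{
    List list = {.pub = {.listSize = 3}, .listSizeMax = 8};

    printf("size = %u\n", lstSize(&list));
    return 0;
}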
Since many functions in these modules are just thin wrappers of other functions, inline where appropriate.
Remove strLstExistsZ() and strLstInsertZ() since they were only used in tests, where the String version of the function is sufficient.
Move strLstNewSplitSizeZ() to command/help/help.c and remove strLstNewSplitSize(). This function has only ever been used by help and does not seem widely applicable.
Bug Fixes:
* Fix option warnings breaking async archive-get/archive-push. (Reviewed by Cynthia Shang. Reported by Lev Kokotov.)
* Fix memory leak in backup during archive copy. (Reviewed by Cynthia Shang. Reported by Christian ROUX, Efremov Egor.)
* Fix stack overflow in cipher passphrase generation. (Reviewed by Cynthia Shang. Reported by bsiara.)
* Fix repo-ls / on S3 repositories. (Reviewed by Cynthia Shang. Reported by Lesovsky Alexey.)
Features:
* Multiple repository support. (Contributed by Cynthia Shang, David Steele. Reviewed by Stefan Fercot, Stephen Frost.)
* GCS support for repository storage. (Reviewed by Cynthia Shang.)
* Add archive-header-check option. (Reviewed by Stephen Frost, Cynthia Shang. Suggested by Hans-Jürgen Schönig.)
Improvements:
* Include recreated system databases during selective restore. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Exclude content-length from S3 signed headers. (Reviewed by Cynthia Shang. Suggested by Brian P Bockelman.)
* Consolidate less commonly used repository storage options. (Reviewed by Cynthia Shang.)
* Allow custom config-path default with ./configure --with-configdir. (Contributed by Michael Schout. Reviewed by David Steele.)
* Log archive copy during backup. (Reviewed by Cynthia Shang, Stefan Fercot.)
Documentation Improvements:
* Update reference to include links to user guide examples. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Update selective restore documentation with caveats. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-type clarification to archive-copy documentation. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-level defaults per compress-type value. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add note about required NFS settings being the same as PostgreSQL. (Contributed by Cynthia Shang. Reviewed by David Steele.)
The command-example and command-example-list elements were removed from the documentation rendering some time ago so these tags were dead code. The tags, however, contained some examples and information that were pertinent to the command, so where possible, the information was included in the description of the command and/or the user-guide and links to the relevant user guide sections were added.
Note that some commands could not be updated with user guide references since doing so would cause a cyclical reference in the user guide. These commands have an internal comment to indicate this.
In addition, some clarifications were added (e.g. expire --set option) where information was lacking.
Enabled by default, this option checks the WAL header against the PostgreSQL version and system identifier to ensure that the WAL is being copied to the correct stanza. This is in addition to checking pg_control against the stanza and verifying that WAL is being copied from the same PostgreSQL data directory where pg_control is located.
Therefore, disabling this check is fairly safe but should only be done when required, e.g. if the WAL is encrypted.
3b8f0ef missed some cases that could cause archive-push to fail:
* Checking archive info.
* Checking to see if a WAL segment already exists.
These cases are now handled so archive-push can succeed on any valid repos.
This improvement reduces the number of errors thrown; these errors will now be reported as a status for the stanza or repo as appropriate. Invalid option configurations are still thrown but all other errors are caught, formatted and reported. This was necessary for multiple repositories so that the command can complete gathering information from each repository and report the results rather than immediately aborting when an error occurs.
Two new error codes were introduced:
6 = requested backup not found
99 = other, which is used to indicate an error has occurred that requires more details to be provided
A new stanza name of "[invalid]" was created for instances where a stanza was not specified and no stanza can be found.
If there is only one repository configured the error will move up to the stanza level with the standard error formatting of 'error (message)' where the message will be "other" and the details of the error will be listed on the next line(s):
stanza: stanza1
    status: error (other)
        [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
        CryptoError: cipher header invalid
        HINT: is or was the repo encrypted?
        FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
        HINT: backup.info cannot be opened and is required to perform a backup.
        HINT: has a stanza-create been performed?
        HINT: use option --stanza if encryption settings are different for the stanza than the global
    cipher: aes-256-cbc
If a backup set is requested but is not found on any repo, a stanza-level status error of 'requested backup not found' is reported when there are no other errors:
pgbackrest info --stanza=demo --set=bogus
stanza: demo
    status: error (requested backup not found)
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none
If there are multiple repositories configured and a single repo is in error but the other repos are ok or have a different error:
pgbackrest info --stanza=demo --set=20210322-171211F
stanza: demo
    status: mixed
        repo1: error
            [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
            CryptoError: cipher header invalid
            HINT: is or was the repo encrypted?
            FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
            HINT: backup.info cannot be opened and is required to perform a backup.
            HINT: has a stanza-create been performed?
            HINT: use option --stanza if encryption settings are different for the stanza than the global
        repo2: ok
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none
    db (current)
        wal archive min/max (12): 000000010000000000000001/000000010000000000000003
        full backup: 20210322-171211F
            timestamp start/stop: 2021-03-22 17:12:11 / 2021-03-22 17:12:28
            wal start/stop: 000000010000000000000002 / 000000010000000000000002
            database size: 23.4MB, database backup size: 23.4MB
            repo2: backup set size: 2.8MB, backup size: 2.8MB
            database list: postgres (13359)
Json output will include the repository information and any error information. If no stanzas are found, then [invalid] will be set as the name:
[
    {
        "archive":[],
        "backup":[],
        "cipher":"none",
        "db":[],
        "name":"[invalid]",
        "repo":[
            {
                "cipher":"none",
                "key":1,
                "status":{
                    "code":99,
                    "message":"[PathOpenError] unable to list file info for path '/var/lib/pgbackrest/repo2/backup': [13] Permission denied"
                }
            }
        ],
        "status":{
            "code":99,
            "lock":{"backup":{"held":false}},
            "message":"other"
        }
    }
]
The content-length header was being signed since it was the only header that didn't need to be and it seemed simpler just to sign it as well. Also, the S3 documentation encourages signing as many headers as possible to avoid tampering.
However, some proxies munge this header causing authentication failure, so skip signing content-length.
Make protocol handlers have one function per command. This allows the logic of finding the handler to be in ProtocolServer, isolates each command to a function, and removes the need to test the "not found" condition for each handler.
S3 returns 200 for HEAD / which indicates it is a file but does not return the expected headers which causes an error.
Rather than fix this for S3, just automatically return / as not existing for any storage that does not support paths.
Also add some defensive checks to prevent this from generating a segfault if it happens again.
Some standard system databases (e.g. postgres) may be recreated by the user and have an OID that makes them look like user databases.
Identify the standard three system databases (template0, template1, postgres) and restore them non-zeroed no matter what OID they have.
Recovery may error unless --type=immediate is specified. This is because after consistency is reached PostgreSQL will flag zeroed pages as errors even for a full-page write.
For PostgreSQL ≥ 13 the ignore_invalid_pages setting may be used to ignore invalid pages. In this case it is important to check the logs after recovery to ensure that no invalid pages were reported in the selected databases.
It is best if the archive-push and backup commands have the same compress-type (e.g. lz4) when using archive-copy. Otherwise, the WAL segments will need to be recompressed with the compress-type used by the backup, which can be fairly expensive depending on how much WAL was generated during the backup.
There was already leakage here but when the compression transcoding was added it became a deluge.
There is some argument to be made that the filters should clean themselves up better but a temp mem context makes sense here anyway so do that.
The stanza-create, stanza-upgrade and stanza-delete commands were required to be run on the repository host. When there was only one repository allowed this was not a problem.
However, with the introduction of multiple repository support this becomes more of a burden to the user, so the stanza-create, stanza-upgrade and stanza-delete commands have been improved to allow them to be run remotely.
Moving to YAML allows the configuration data to be read by C programs.
Also go back to using YAML::XS since it is the only implementation that has proper boolean support.
Up to four repositories may be configured. A potential benefit is the ability to have a local repository for fast restores and a remote repository for redundancy.
Some commands, e.g. stanza-create/stanza-upgrade, will automatically work with all configured repositories while others, e.g. stanza-delete, will require a repository to be specified using the repo option. See the command reference for details on which commands require the repository to be specified.
Note that the repo option is not required when only repo1 is configured in order to maintain backward compatibility. However, the repo option is required when a single repo is configured as, e.g. repo2. This is to prevent command breakage if a new repository is added later.
The archive-push command will always push WAL to the archive in all configured repositories but backups will need to be scheduled individually for each repository. In many cases this is desirable since backup types and retention will vary by repository. Likewise, restores must specify a repository. It is generally better to specify a repository for restores that has low latency/cost even if that means more recovery time. Only restore testing can determine which repository will be most efficient.
For single repository configurations there should be no change in behavior.
The HTML command reference was showing some options that were not valid because it did not properly understand the new role validity system. Also, the custom section for the new repo option was not being honored.
This is a bit messy because it leads to some duplicated code in help.c but there doesn't seem to be any way to fix that with the Perl data structures as they are.
This code is being migrated to C so it doesn't seem worth messing with it too much with the risk of breaking other things.
Some commands (repo-*, verify) still required the --repo option but it makes sense to give them the same treatment as backup and simply use the first repo when one is not specified.
This leaves stanza-delete as the only remaining command that requires --repo. This is by design to enhance safe usage.
The following options are renamed as specified:
repo1-azure-ca-file -> repo1-storage-ca-file
repo1-azure-ca-path -> repo1-storage-ca-path
repo1-azure-host -> repo1-storage-host
repo1-azure-port -> repo1-storage-port
repo1-azure-verify-tls -> repo1-storage-verify-tls
repo1-s3-ca-file -> repo1-storage-ca-file
repo1-s3-ca-path -> repo1-storage-ca-path
repo1-s3-host -> repo1-storage-host
repo1-s3-port -> repo1-storage-port
repo1-s3-verify-tls -> repo1-storage-verify-tls
The old option names (e.g. repo1-s3-port) will continue to work for repo1, but repo2, etc. will require the new names.
The archive-push command will continue to push even after it gets a write error on one or more repos. The idea is to archive to as many repos as possible even if we still need to throw an error to PostgreSQL to prevent it from removing the WAL file.
Add the --with-configdir=DIR option to ./configure, which can be used to override the default configuration directory of /etc/pgbackrest.
Probably in the future it would be better to just leverage ${sysconfdir} which is based on prefix, but since previously the config directory was hard coded to /etc/pgbackrest, we retain that default value by not relying on sysconfdir for now.
The restore command automatically defaults to selecting the latest backup from a single repository. With multiple repositories configured, the restore command will now default to selecting the latest backup from the first repository where backups exist. The order in which the repositories are checked is dictated by the pgbackrest.conf order.
To select from a specific repository, the --repo option can be passed (e.g. --repo=1). The --set option can be passed if a backup other than the latest is desired.
Repositories will be searched in order for the requested archive file.
Errors will be reported as warnings as long as a valid copy of the archive file is found.
Errors are logged to the log file rather than thrown. If, after processing all repos, one or more errors occurred, then a single error will be thrown to indicate there were errors and the log file should be inspected.
Also update log messages to be more consistent with new patterns.
Option warnings will cause the async process to fail because the warning is logged to stdout, which is closed, so the process aborts.
This bug has existed for quite some time, but it was made worse by abb8ebe because now the async role can have different valid options than the default role. Previously at least a warning would be emitted before the async process died.
Fix this by only allowing warnings for the default role. Warnings were already suppressed for local and remote roles so the logic already exists.
The destination buffer on the stack was not large enough to contain the zero-terminating character.
Increase the buffer size and add an assertion to prevent regressions.
Found on arm64 running musl libc. Other architectures and glibc do not seem to be affected though it is clearly a bug.
The expire command has been enhanced to expire backups and archives from all configured repositories by default.
In addition, it will accept the --repo option to expire backups and archives only from the specified repository. When --repo is provided, the --set option is likewise limited to the specified repo. If --set is provided but --repo is not, then all repositories will be searched and retention settings will be applied on each, whether the backup set has been found or not.
Monospaced identifiers could end up running over if LaTeX was not able to find a place to break the line. Using sloppypar forces breaks so monospaced identifiers don't run over or get broken up.
Also add vspace to admonitions so they have some separation from the prior text.
Bug Fixes:
* Fix resume after partial delete of backup by prior resume. (Reviewed by Cynthia Shang. Reported by Tom Swartz.)
Features:
* Add repo-ls command. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add repo-get command. (Contributed by Stefan Fercot, David Steele. Reviewed by Cynthia Shang.)
* Add archive-mode-check option. (Contributed by Stefan Fercot. Reviewed by David Steele, Michael Banck.)
Improvements:
* Improve archive-get performance. (Reviewed by Cynthia Shang.)
In preparation for multi-repo support, a repo tag is added in this commit to the expire command log and error messages. This change also affects the expect logs and the user-guide. The format of the tag is "repoX:" where X is the repo key used in the configuration.
Until multi-repo support has been completed, this tag will always be "repo1:".
The original intention was to enclose complex code in braces but somehow braces got propagated almost everywhere.
Document the standard for braces in switch statements and update the code to reflect the standard.
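As a sketch of what the standard might look like (an assumption based on the original intention described above, not a quote of CODING.md), braces are added only when a case needs its own scope:

#include <stdio.h>

typedef enum {statePending, stateDone} State;

static void switchExample(const State state)
{
    switch (state)
    {
        // Simple case -- no braces
        case statePending:
            printf("pending\n");
            break;

        // Braces only where the case needs its own scope, e.g. to declare a variable
        case stateDone:
        {
            const char *const message = "done";
            printf("%s\n", message);
            break;
        }
    }
}

int main(void)
{
    switchExample(stateDone);
    return 0;
}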
At one time Minio had stability problems with the latest tag but that appears to have been resolved for the last year or so.
Use the latest tag so we'll know if something breaks, since Minio is frequently used in production.
This is phase 2 of verify command development (phase 1 was processing the archives and phase 3 will be reconciling the archives and backups). In this phase the backups are verified by verifying each file listed in the manifest for the backup and creating a result set with the list of invalid files, if any. A summary is then rendered.
Unit tests have been added and duplicate tests have been removed.
The info command provides total sizes for the files in the backup, both as they exist in the database and in the repository. The text output and associated user documentation have been updated to provide more clarity regarding the sizes being displayed.
In addition, the info command is updated to allow the user to optionally specify the repository when requesting a specific backup set. The text output will reflect the status of the stanza, the cipher types, and the archive min/max over all the repositories, or over a single repository when the repo option is specified.
YAML::XS requires libyaml so it is not as portable as pure Perl versions of YAML.
Instead of using YAML::PP just use the general YAML::Any module, which uses whatever implementation is installed. We are not concerned about performance for YAML so whatever works is fine.
This is a more appropriate place for the check and means test.pl can avoid loading any XML files if --no-gen is specified.
The XML::Checker::Parser module originally selected for XML in Perl is not very portable so the requirement reduces the number of platforms where tests can be run.
Multi-repository implementations for the archive-push, check, info, stanza-create, stanza-upgrade, and stanza-delete commands.
Multi-repo configuration is disabled so there should be no behavioral changes between these commands and their current single-repo implementations.
Multi-repo documentation and integration tests are still in the multi-repo development branch. All unit tests work as multi-repo since they are able to bypass the configuration restrictions.
Check that archive files exist in the main process instead of the local process. This means that the archive.info file only needs to be loaded once per execution rather than once per file to get.
Stop looking when a file is missing or in error. PostgreSQL will never request anything past the missing file so there is no point in getting them. This also reduces "unable to find" logging in the async process.
Cache results of storageList() when looking for multiple files to reduce storage I/O.
Look for all requested archive files in the archive-id where the first file is found. They may not all be there, but this reduces the number of list calls. If subsequent files are in another archive id they will be found on the next archive-get call.
Append "asynchronously" to messages when the async process fetched the file (not in the actual async process log, though).
Add "repo1" to make it clear what archive we are talking about. This is not very useful by itself but soon we'll be able to add the archive id, which is very useful.
Add constants for messages that are used multiple times to ensure they stay consistent.
If files other than backup.manifest.copy were left in a backup path by a prior resume then the next resume would skip the backup rather than removing it. Since the backup path still existed, it would be found during backup label generation and cause an error if it appeared to be later than the new backup label. This occurred if the skipped backup was full.
The error was only likely on object stores such as S3 because of the order of file deletion. Posix file systems delete from the bottom up because directories containing files cannot be deleted. Object stores do not have directories so files are deleted in whatever order they are provided by the list command. However, the issue can be reproduced on a Posix file system by manually deleting backup.manifest.copy from a resumable backup path.
Fix the issue by removing the resumable backup if it has no manifest files. Also add a new warning message for this condition.
Note that this issue could be resolved by running expire or a new full backup.
These options specify the number of local worker job retries and the retry interval after one immediate retry.
There is some value in allowing retries to be specified by the user but for the most part these options are for suppressing retries during testing, which can save a lot of time. The bug introduced in d1d25c7 and fixed in 8b86d5e also suggests it is better not to use retries in tests.
Remove the default delayed retries for archive-get/archive-push, leaving only the immediate retry. These commands are retried by PostgreSQL so it doesn't make sense to do too many retries internally.
These options are currently internal.
The pg option only has one current usage, to let the backup local know which pg index it should copy files from.
There are other possible uses for this option, but they need thought, tests, and documentation.
This option was added in advance of the multi-repo functionality but it has no purpose and it is not clear what the validity rules should be.
The option will be added back when multi-repo functionality is committed.
Building on 23f5712, limit option validity by role. This is mostly for options that weren't needed for certain roles but were harmless. However, the upcoming multi repository functionality requires the granularity implemented here.
The remote role benefits since host options can be automatically excluded when building the options. Also, many options that are only required for the default role (e.g. repo-retention-full) no longer need to be passed in tests for other roles.
Testing on Travis-CI has been getting slower (from ~18 minutes to 3-6 hours) and the travis-ci.org service will be terminated at the end of the year. Moving to travis-ci.com is an option but the quotas are too low for our purposes.
Instead use GitHub Actions, which does not currently have quotas and runs our current tests with just a few tweaks.
This still leaves multi-architecture tests on Travis-CI but we may be able to run those and stay within the new quotas.
Also fix a minor bug in restoreTest.c exposed by GitHub Actions using a different name for the user and group.
The pack type is an architecture-independent format for serializing data compactly, inspired by ProtocolBuffers and Avro.
Also add ioReadSmall(), which is optimized for small binary reads, similar to ioReadLineParam().
The C code does not use doubles to represent seconds like the Perl code did so time can be represented as an integer which reduces the number of data types that config has to understand.
Also remove Variant doubles since they are no longer used.
Note that not all double code was removed since we still need to display times to the user in seconds and it is possible for the times to be fractional. In the future this will likely be simplified by storing the original user input and using that value when the time needs to be displayed.
Bug Fixes:
* Allow [, #, and space as the first character in database names. (Reviewed by Stefan Fercot, Cynthia Shang. Reported by Jefferson Alexandre.)
* Create standby.signal only on PostgreSQL 12 when restore type is standby. (Fixed by Stefan Fercot. Reviewed by David Steele. Reported by Keith Fiske.)
Features:
* Expire history files. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Report page checksum errors in info command text output. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Add repo-azure-endpoint option. (Reviewed by Cynthia Shang, Brian Peterson. Suggested by Brian Peterson.)
* Add pg-database option. (Reviewed by Cynthia Shang.)
Improvements:
* Improve info command output when a stanza is specified but missing. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele. Suggested by uspen.)
* Improve performance of large file lists in backup/restore commands. (Reviewed by Cynthia Shang, Oscar.)
* Add retries to PostgreSQL sleep when starting a backup. (Reviewed by Cynthia Shang. Suggested by Vitaliy Kukharik.)
Documentation Improvements:
* Replace RHEL/CentOS 6 documentation with RHEL/CentOS 8.
Update RHEL/CentOS 7 to cover the versions that were previously covered by RHEL/CentOS 6.
Since RHEL/CentOS 7/8 work the same update the documentation logic and labels to reflect this compatibility.
Inaccuracies in sleep time or clock skew might make a single sleep insufficient to reach the next second.
Add a few retries to make the process more reliable but still avoid an infinite loop if something is seriously wrong.
CentOS 6 EOL'd and the mirrors were swiftly deleted, leading to failures in tests and documentation.
Remove CentOS 6 for now to get builds going again with the intention to replace it in the near future with CentOS 8.
Refactor the code to allow a dynamic number of indexes for indexed options, e.g. pg-path. Our reliance on getopt_long() still limits the number of indexes we can have per group, but once this limitation is removed the rest of the code should be happy with dynamic numbers of indexes (with a reasonable maximum).
Add an option to set a default in each group. This was previously handled by the host-id option but now there is a specific option for each group, pg and repo. These remain internal until they can be fully tested with multi-repo support. They are fully tested for internal usage.
Remove the ConfigDefineOption enum and use the ConfigOption enum instead. They are now equal since the indexed options (e.g. cfgOptRepoHost2) have been removed from ConfigOption.
Remove the config/config test module and add required tests to the config/parse test module. Parsing is now the only way to load a config so this removes some redundancy.
Split new internal config structures and functions into a new header file, config.intern.h. More functions will need to be moved over from config.h but that will need to be done in a future commit to reduce churn.
Add repoIdx to repoIsLocal() and storageRepo*(). Multi-repository support requires that repo locality and storage be accessible by index. This allows, for example, multiple repos to be iterated in a loop. This could be done in a separate commit but doesn't seem worth it since the code is related.
Remove the type parameter from storageRepoGet(). This parameter existed solely to provide coverage for the case where the storage type was invalid. A better pattern is to check that the type is S3 once all other types have been ruled out.
Improve locking on remote processes by introducing an exec-id that is unique to the main process and passed to all remote processes. This allows the remote processes to determine if a lock is held by a remote from the same main process. If so, the lock is allowed.
The exec-id is also useful for associating remote logs with main logs for debugging purposes.
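A rough sketch of the idea (the id format and helper names are illustrative assumptions, not the pgBackRest implementation):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

// Generate an exec-id that is unique to the main process and shared by its remotes
static void execIdGenerate(char *const buffer, const size_t bufferSize, const unsigned int random)
{
    snprintf(buffer, bufferSize, "%d-%08x", (int)getpid(), random);
}

// A lock held by a process with the same exec-id was spawned by the same main
// process, so it is not treated as a conflict
static bool lockConflict(const char *const lockHolderExecId, const char *const execId)
{
    return strcmp(lockHolderExecId, execId) != 0;
}

int main(void)
{
    char execId[64];

    execIdGenerate(execId, sizeof(execId), 0x1a2b3c4d);
    printf("exec-id %s, conflict with self = %d\n", execId, lockConflict(execId, execId));

    return 0;
}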
When restore type standby is provided, the recovery.signal isn't needed and may lead to some confusion (see #1236).
Lately, when using pg_basebackup --write-recovery-conf, only the standby.signal file is created. This change would then align with that behaviour.
Three exclamation points are used by convention as a marker for code that needs attention before it can be committed to integration.
If the markers are in this file they come up in every search.
Return a path missing error when a stanza is specified for the info command but the stanza does not exist in the repository.
Previously [] was returned, which is still the case if no stanza is specified and the repository does not exist.
lstRemoveIdx(list, 0) resulted in the entire list being moved down one position, which could take a long time for big lists. This is a common pattern in backup/restore when processing file queues.
Instead simply move the list pointer up when the first item is removed. Then on insert check whether there is space at the beginning when there is no longer space at the end and do the move then. This way, if a list is built and then drained without any new inserts, no move is required.
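A minimal sketch of the idea (illustrative, not the pgBackRest List implementation):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Illustrative list: removing the first item just advances an offset instead of
// memmove()ing everything down (growth and bounds checks elided for brevity)
typedef struct
{
    int *buffer;
    size_t sizeMax;             // allocated slots
    size_t offset;              // index of the first valid item
    size_t size;                // number of valid items
} IntList;

static void intListRemoveFirst(IntList *const list)
{
    if (list->size > 0)
    {
        list->offset++;         // O(1) -- no data is moved
        list->size--;
    }
}

static void intListAdd(IntList *const list, const int value)
{
    // Only move items down when there is no space at the end but space at the start
    if (list->offset + list->size == list->sizeMax && list->offset > 0)
    {
        memmove(list->buffer, list->buffer + list->offset, list->size * sizeof(int));
        list->offset = 0;
    }

    list->buffer[list->offset + list->size] = value;
    list->size++;
}

int main(void)
{
    IntList list = {.buffer = malloc(4 * sizeof(int)), .sizeMax = 4};

    for (int value = 0; value < 4; value++)
        intListAdd(&list, value);

    intListRemoveFirst(&list);  // no memmove() here
    intListAdd(&list, 4);       // one move reclaims the free slot at the front

    for (size_t idx = 0; idx < list.size; idx++)
        printf("%d ", list.buffer[list.offset + idx]);

    printf("\n");
    free(list.buffer);

    return 0;
}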
There were a number of places in the code where "hostId" was used, but hostId is just the option group index + 1 so this led to a lot of +1 and -1 to convert the id to an index and vice versa.
Instead just use the zero based index wherever possible. This is pretty much everywhere except when the host-id option is read or set, or where a message is being formatted for the user.
Also fix a bug in protocolRemoteParam() where remotes spawned from the main process could get process ids that were not 0. Only the locals should spawn remotes with process id > 0. This seems to have been harmless since the process id is only a label, but it could be confusing when debugging.
iniLoad() was trimming lines which meant that a leading space would not pass checksum validation when a manifest was reloaded. Remove the trims since files we write should never contain extraneous spaces. This further diverges the format for the functions that read conf files (e.g. pgbackrest.conf) and those that read info (e.g. manifest) files.
While we are at it also allow [ and # as initial characters. # was reserved for comments but we never put comments into info files. [ denotes a section but we can get around this by never allowing arrays as values in info files, so if a line ends in ] it must be a section. This is currently the case but enforce it by adding an assert to info/info.c.
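A minimal sketch of the section-detection rule (illustrative, not the actual info/ini code):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

// Since array values are not allowed in info files, a line ending in ']' is a section
static bool iniLineIsSection(const char *const line)
{
    const size_t size = strlen(line);

    return size != 0 && line[size - 1] == ']';
}

int main(void)
{
    printf("%d\n", iniLineIsSection("[backup:current]"));      // 1 -- section
    printf("%d\n", iniLineIsSection("#db=\"value\""));          // 0 -- key beginning with '#'
    printf("%d\n", iniLineIsSection("[strange=\"value\""));     // 0 -- key beginning with '['

    return 0;
}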
WAL timeline history files were not being expired because they were small and generally not very plentiful.
However, in some cases large numbers of history files may be generated so it makes sense to remove useless history files to keep things tidy.
The history file for the oldest retained timeline is kept for debugging purposes even though it is not used for recovery.
Instead of using memmove() to manage the internal output buffer for every small read, track the current buffer position and only move data when the small read cannot be satisfied and more data is needed.
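A simplified sketch of the buffering change (illustrative only), where a read position is tracked and data is only moved when a request cannot be satisfied from what is already buffered:

#include <stdio.h>
#include <string.h>

#define BUFFER_SIZE 16

typedef struct
{
    char buffer[BUFFER_SIZE];
    size_t used;                // bytes currently held in the buffer
    size_t pos;                 // read position -- advanced instead of memmove()ing
} SmallRead;

// Satisfy a small read from the buffer when possible; only when more data is
// needed move the remaining bytes to the front and (in real code) refill
static size_t smallRead(SmallRead *const this, char *const out, const size_t size)
{
    if (this->used - this->pos < size)
    {
        memmove(this->buffer, this->buffer + this->pos, this->used - this->pos);
        this->used -= this->pos;
        this->pos = 0;
        // refill from the underlying read here (elided)
    }

    const size_t actual = size < this->used - this->pos ? size : this->used - this->pos;
    memcpy(out, this->buffer + this->pos, actual);
    this->pos += actual;

    return actual;
}

int main(void)
{
    SmallRead read = {.used = 8};
    memcpy(read.buffer, "abcdefgh", 8);

    char out[4];
    printf("%zu\n", smallRead(&read, out, 4));  // satisfied from the buffer, no memmove()
    printf("%zu\n", smallRead(&read, out, 4));  // still no memmove() needed

    return 0;
}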
Group related options together so operations (e.g. valid, test, index total) can be performed on all options in the group.
Previously, options at the top of the hierarchy of the related options were used to do these tests. This was prone to error as option relationships changed and it was not always clear which option (or options) should be used.
Bug Fixes:
* Error with hints when backup user cannot read pg_settings. (Reviewed by Stefan Fercot, Cynthia Shang. Reported by Mohamed Insaf K.)
Features:
* PostgreSQL 13 support. (Reviewed by Cynthia Shang.)
Improvements:
* Improve PostgreSQL version identification. (Reviewed by Cynthia Shang, Stephen Frost.)
* Improve working directory error message. (Reviewed by Stefan Fercot.)
* Add hint about starting the stanza when WAL segment not found. (Contributed by David Christensen. Reviewed by David Steele.)
* Add hint for protocol version mismatch. (Reviewed by Cynthia Shang. Suggested by loop-evgeny.)
Documentation Improvements:
* Add note that pgBackRest versions must match when running remotely. (Reviewed by Cynthia Shang. Suggested by loop-evgeny.)
* Move info command text to the reference and link to user guide. (Reviewed by Cynthia Shang. Suggested by Christophe Courtois.)
* Update yum repository path for CentOS/RHEL user guide. (Contributed by Heath Lord. Reviewed by David Steele.)
Since links are not possible in the command line help just display the name of the linked section.
Also, during reference text rendering there is no out key so make sure it is defined before trying to use it.
This means the same text will appear in both places, which should make it easier to find.
Also update the link code to allow both page and section to be specified rather than only one or the other.
Update the documentation to explicitly state that versions must match across hosts when running remotely.
Add a hint to the protocol version mismatch error to help the user identify the problem.
Add older PostgreSQL versions to the u18 container that were not available before.
This also updates all minor versions for prior versions of PostgreSQL.
Scan the WAL archive for missing or invalid files and build up ranges of WAL that will be used to verify backup integrity. A number of errors and warnings are currently emitted but they should not be considered authoritative (yet).
The command is incomplete so is marked internal.
Previously, catalog versions were fixed for all versions which made maintaining the catalog versions during PostgreSQL beta and release candidate cycles very painful. A version of pgBackRest which was functionally compatible was rendered useless by a catalog version bump in PostgreSQL.
Instead use only the control version to identify a PostgreSQL version when possible. Some older versions require a catalog version to positively identify a PostgreSQL version, so include them when required.
Since the catalog number is required to work with tablespaces it will need to be stored. There's already a copy of it in backup.info so use that (even though we have been ignoring it in the C versions).
This condition used to give a not-very-clear error which we have been intending to improve. But in the meantime the changes in fbff299 resulted in a segfault for this condition instead because the data_directory was assumed to be non-NULL.
Fix this by explicitly throwing an error with hints when any row in pg_settings cannot be selected.
The backup_manifest file is created by pg_basebackup so it might be in the data directory if the cluster was restored from a pg_basebackup backup. Also exclude backup_manifest.tmp since it is possible to find that in the backup directory.
Improve the wording of the error message and add a hint to make it clearer what is wrong and how the user can fix it.
Also change the assert to a regular error since this is not an internal error.
If a stop command has been issued the check command fails due to archiving timing out.
Provide a hint to document this situation and point the user in the proper direction.
If the callback never returned any jobs then protocolParallelDone() would never be true. The reason is that the done state was being set in protocolParallelResult(), which never gets called if there are no results.
Calling protocolParallelResult() doesn't make much sense in this case so instead move the done logic to protocolParallelDone().
For current usage of ProtocolParallel we ensure there are jobs before processing so this is not a live issue, but the new behavior is required for future development.
Bug Fixes:
* Suppress errors when closing local/remote processes. Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened. (Reviewed by Cynthia Shang. Reported by argdenis.)
* Fix issue with = character in file or database names. (Reviewed by Bastian Wegge, Cynthia Shang. Reported by Brad Nicholson, Bastian Wegge.)
Features:
* Automatically retrieve temporary S3 credentials on AWS instances. (Contributed by David Steele, Stephen Frost. Reviewed by Cynthia Shang, David Youatt, Aleš Zelený, Jeanette Bromage.)
* Add archive-mode option to disable archiving on restore. (Reviewed by Stephen Frost. Suggested by Stephen Frost.)
Improvements:
* PostgreSQL 13 beta3 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
* Asynchronous list/remove for S3/Azure storage. (Reviewed by Cynthia Shang, Stephen Frost.)
* Improve memory usage of unlogged relation detection in manifest build. (Reviewed by Cynthia Shang, Stephen Frost, Brad Nicholson, Oscar. Suggested by Oscar, Brad Nicholson.)
* Proactively close file descriptors after forking async process. (Reviewed by Stephen Frost, Cynthia Shang.)
* Delay backup remote connection close until after archive check. (Contributed by Floris van Nee. Reviewed by David Steele.)
* Improve detailed error output. (Reviewed by Cynthia Shang.)
* Improve TLS error reporting. (Reviewed by Cynthia Shang, Stephen Frost.)
Documentation Bug Fixes:
* Add none to compress-type option reference and fix example. (Reported by Ugo Bellavance, Don Seiler.)
* Add missing azure type in repo-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
* Fix typo in repo-cipher-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
Documentation Improvements:
* Clarify that expire must be run regularly when expire-auto is disabled. (Reviewed by Douglas J Hunley. Suggested by Douglas J Hunley.)
When restoring a cluster that will be promoted but is not intended to be the new primary, it is important to disable archiving to avoid polluting the repository with useless WAL. This option makes disabling archiving a bit easier.
Automatically retrieve the role and temporary credentials for S3 when the AWS instance is associated with an IAM role. Credentials are automatically updated when they are <= 5 minutes from expiring.
Basic configuration is to set repo1-s3-key-type=auto. repo1-s3-role can be used to set a specific role, otherwise it will be retrieved automatically.
Add more info (command, version, options) to asserts, and to errors when debug logging is enabled. This won't cover all cases but might mean we get more info in some circumstances.
Currently each module that needs to collect statistics implements custom code to do so. This is cumbersome.
Create a general purpose module for collecting and reporting statistics. Statistics are output in the log at detail level, but there are other uses they could be put to eventually.
No new functionality is added. This is just a drop-in replacement for the current statistics, with the advantage of being more flexible.
The new stats are slower because they involve a list lookup, but performance testing shows stats can be updated at about 40,000/ms which seems fast enough for our purposes.
Improve the performance of list/delete operations by using async requests.
It's questionable whether this will have any impact on Azure deletes since they are sent one at a time with little work done in between, but it doesn't hurt to try.
HTTP/1.0 connections are closed by default after a single response. Other than that, treat 1.0 the same as 1.1.
HTTP/1.0 allows different date formats that we can't parse but for now, at least, we don't need any date headers from 1.0 requests.
Following up on 111d33c, implement the new interfaces for socket client/session. Now HTTP objects can be used over TLS or plain sockets.
This required adding ioSessionFd() and ioSessionRole() to provide the functionality of sckSessionFd() and sckSessionType(). sckClientHost() and sckClientPort() don't make sense in a generic interface so they were replaced with ioSessionName().
Only close the remote connection after verifying that the WAL files have been received. This is necessary if the archive_command on the PostgreSQL host is conditional, i.e. archiving only happens while a backup lock is held, to ensure all WAL segments are archived.
Move sckSessionReadyRead()/Write() into the IoRead/IoWrite interfaces. This is a more logical place for them and the alternative would be to add them to the IoSession interface, which does not seem like a good idea.
This is mostly a refactor, but a big change is the select() logic in fdRead.c has been replaced by ioReadReady(). This was duplicated code that was being used by our protocol but not TLS. Since we have not had any problems with requiring poll() in the field this seems like a good time to remove our dependence on select().
Also, IoFdWrite now requires a timeout so update where required, mostly in the tests.
These interfaces allow the HttpClient and HttpSession objects to work with protocols other than TLS, e.g. plain sockets. This is necessary to allow standard HTTP -- right now only HTTPS is allowed, i.e. HTTP over TLS.
For now only TlsClient and TlsSession have been converted to the new interfaces. SocketClient and SocketSession will also need to be converted but first sckSessionReadyRead() and sckSessionReadyWrite() need to be moved into the IoRead and IoWrite interfaces, since they are not a good fit for IoSession.
Before 9f2d647 TLS errors included additional details in at least some cases. After 9f2d647 a connection to an HTTP server threw `TLS error [1]` instead of `unable to negotiate TLS connection: [336031996] unknown protocol`.
Bring back the detailed messages to make debugging TLS errors easier. Since the error routine is now generic the `unable to negotiate TLS connection context` is not available so the error looks like `TLS error [1:336031996] unknown protocol`.
PostgreSQL may be using most of the available file descriptors when it executes the archive-get/archive-push commands (especially archive-get). This can lead to problems depending on how many file descriptors are needed for parallelism in the async process.
Proactively close file descriptors between 3 and 1023 to help ensure there are enough available for reasonable values of process-max, i.e. <= 300.
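The mechanism is essentially a close() loop over the descriptor range, e.g. (a minimal sketch):

```c
#include <unistd.h>

// Close any inherited descriptors above stderr so they are available to the
// async process; close() on a descriptor that is not open is harmless (EBADF).
static void
fdCloseInherited(void)
{
    for (int fd = 3; fd < 1024; fd++)
        close(fd);
}
```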
This loop was using a lot of memory without freeing it at intervals.
Rewrite to use char arrays when possible to reduce memory that needs to be allocated and freed.
The fix for = characters in info files (039d314) added JSON validation but discarded the resulting Variant which means the JSON is being parsed twice. This nearly doubles the time to load a manifest since a lot of complex JSON is involved.
Time to load a million file manifest:
Before 039d314: 7.8s
039d314: 15.5s
This patch: 7.5s
To fix this regression return the Variant in the callback so the caller does not have to parse it again. The new code appears slightly more efficient overall, probably because there are fewer operations against Strings.
We use the Z suffix in many functions to indicate that we are expecting a zero-terminated string so make this function conform to the pattern.
As a bonus the new name is a bit shorter, which is a good quality in a commonly-used function.
The manifest uses the = character as the key/value separator so = characters in the key cause parsing errors and lead to an error or segfault.
Since the value must be valid JSON we can keep checking the value on the right side of the = and stop building the key when the value is valid. It's a bit hackish but it does seem to do the job without breaking the manifest format.
Unsurprisingly this makes parsing about 50% slower but it's still more than fast enough. Parsing 10 million key/values takes about 6.5s for the old code and 10s for the new code. Since the value is used as JSON downstream we can reclaim most of this time by just passing the JSON value rather than making the callback reparse it. We'll save that for another commit, though.
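A sketch of the scanning idea described above (jsonIsValid() is a hypothetical stand-in for the parser's JSON validation):

```c
#include <stdbool.h>
#include <string.h>

bool jsonIsValid(const char *json);                              // hypothetical validator

// For a manifest line of the form key=value, find the = that starts a valid
// JSON value; any earlier = characters are treated as part of the key.
static const char *
manifestKeyValueSplit(const char *const line)
{
    const char *split = strchr(line, '=');

    while (split != NULL && !jsonIsValid(split + 1))
        split = strchr(split + 1, '=');

    return split;                                                // key is [line, split), value follows
}
```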
Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened.
Also fix the related issue that the local processes were not being shut down when they completed, which meant that they might timeout before being closed when pgbackrest terminated.
Also update the policy in doc/RELEASE.md to get the latest versions at the beginning of the release cycle. The older policy was created when we were getting new versions right before the release.
Bug Fixes:
* Fix restore --force acting like --force --delta. This caused restore to replace files based on timestamp and size rather than overwriting, which meant some files that should have been updated were left unchanged. Normal restore and restore --delta were not affected by this issue. (Reviewed by Cynthia Shang.)
Features:
* Azure support for repository storage. (Reviewed by Cynthia Shang, Don Seiler.)
* Add expire-auto option. This allows automatic expiration after a successful backup to be disabled. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele.)
Improvements:
* Asynchronous S3 multipart upload. (Reviewed by Stephen Frost.)
* Automatic retry for backup, restore, archive-get, and archive-push. (Reviewed by Cynthia Shang.)
* Disable query parallelism in PostgreSQL sessions used for backup control. (Reviewed by Stefan Fercot.)
* PostgreSQL 13 beta2 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
* Improve handling of invalid HTTP response status. (Reviewed by Cynthia Shang.)
* Improve error when pg1-path option missing for archive-get command. (Reviewed by Cynthia Shang.)
* Add hint when checksum delta is enabled after a timeline switch. (Reviewed by Matt Bunter, Cynthia Shang.)
* Use PostgreSQL instead of postmaster where appropriate. (Reviewed by Cynthia Shang.)
Documentation Bug Fixes:
* Fix incorrect example for repo-retention-full-type option. (Reported by Hüseyin Sönmez.)
* Remove internal commands from HTML and man command references. (Reported by Cynthia Shang.)
Documentation Improvements:
* Update PostgreSQL versions used to build user guides. Also add version ranges to indicate that a user guide is accurate for a range of PostgreSQL versions even if it was built for a specific version. (Reviewed by Stephen Frost.)
* Update FAQ for expiring a specific backup set. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Update FAQ to clarify default PITR behavior. (Contributed by Cynthia Shang. Reviewed by David Steele.)
The postgresql.auto.conf file was being used instead of recovery.conf, but there were still instances in the text that used recovery.conf. Update to postgresql.auto.conf for PostgreSQL >= 10 and change wording where needed.
Remove all check and stanza-* tests except for the ones that are intended to succeed. The successful tests show that the queries run with expected results against each version of PG which should also validate queries for the failure tests in the unit tests.
Also remove the tests for --no-online backups since they don't require a database and are well tested in the unit tests.
The prior code was only able to use the main passphrase automatically and expected sub passphrases to be specified for each operation. This was fine for testing but hardly sufficient for a user-facing feature.
Update the code to determine which passphrase to use for any file in the repository and error when an invalid file or location is selected.
The repo-get command is still internal for now, but with this improvement it should be ready to be made public.
If a local command, e.g. backupFile(), fails it will stop the entire process. Instead, retry local commands to deal with transient errors.
Remove special logic in the S3 storage driver to retry RequestTimeTooSkewed errors since this is now handled by the general retry mechanism in the places where it is most likely to happen, i.e. file read/write. Also, this error should have been entirely eliminated by the asynchronous TLS implementation.
A shared access signature (SAS) provides granular, delegated access to resources in a storage account. This is often preferable to using a shared key which provides more access and is a greater security risk if compromised.
This caused restore to replace files based on timestamp and size rather than overwriting, which meant some files that should have been updated were left unchanged. Normal restore and restore --delta were not affected by this issue.
Azure and Azure-compatible object stores can now be used for repository storage.
Currently only shared key authentication is supported but SAS will be added soon.
There don't appear to be any behavioral changes since PostgreSQL 12 and all the tests pass.
Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
There is no need to have parallelism enabled in a backup control session. In particular, 9.6 marks pg_stop_backup() as parallel-safe but an error will be thrown if pg_stop_backup() is run in a worker.
When uploading large files the upload is split into multiple parts which are assembled at the end to create the final file. Previously we waited until each part was acknowledged before starting on the processing (i.e. compression, etc.) of the next part.
Now, the request for each part is sent while processing continues and the response is read just before sending the request for the next part. This asynchronous method allows us to continue processing while the S3 server formulates a response.
Testing from outside AWS in a high-bandwidth, low-latency environment showed a 35% improvement in the upload time of 1GB files. The time spent waiting for multipart notifications was reduced by ~300% (this measurement included the final part which is not uploaded asynchronously).
There are still some possible improvements: 1) the creation of the multipart id could be made asynchronous when it looks like the upload will need to be multipart (this may incur cost if the upload turns out not to be multipart). 2) allow more than one async request (this will use more memory).
A fair amount of refactoring was required to make the HTTP responses asynchronous. This may seem like overkill but having well-defined request, response, and session objects will also be advantageous for the upcoming HTTP server functionality.
Another advantage is that the lifecycle of an HttpSession is better defined. We only want to reuse sessions that complete the request/response cycle successfully, otherwise we consider the session to be in a bad state and would prefer to start clean with a new one. Previously, this required complex notifications to mark a session as "successfully done". Now, ownership of the session is passed to the request and then the response and only returned to the client after a successful response. If an error occurs anywhere along the way the session will be automatically closed by the object destructor when the request/response object is freed (depending on which one currently owns the session).
strPtr() is called more than any other function and during profiling (with or without optimization) it can end up using a disproportionate amount of the total runtime. Even though it is fast, the profiler has a minimum resolution for each function call so strPtr() will often end up towards the top of the list even though the real runtime is quite small.
Instead, inline strPtr() and indicate to gcc that it should be inlined even for non-optimized builds, since that's how profiles are usually generated.
To make strPtr() smaller require "this" to be non-NULL and add another function, strPtrNull(), to deal with the few cases where we need NULL handling.
As a bonus this makes the executable about 1% smaller even when compared to a prior optimized build which would inline some percentage of strPtr() calls.
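A simplified sketch of the pattern (struct layout and attribute usage are illustrative):

```c
typedef struct String
{
    const char *buffer;                                          // field name is illustrative
} String;

// Force inlining even in non-optimized builds so profiling is not dominated
// by millions of tiny accessor calls. Callers must pass a non-NULL String.
__attribute__((always_inline)) static inline const char *
strPtr(const String *const this)
{
    return this->buffer;
}

// NULL-tolerant variant for the few callers that may hold a NULL String
static inline const char *
strPtrNull(const String *const this)
{
    return this == NULL ? NULL : strPtr(this);
}
```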
This aligns better with general PostgreSQL usage and our own documentation (updated in 4bcef702).
Usage in the backup.manifest tests has not been updated since it might break the file format.
Expressions only worked at the first level of recursion because the expression was also being applied to paths so the path had to match the filter in order to recurse.
This is not considered a bug since it does not affect any existing code paths, but it is required for the general-purpose repo-ls command.
A truncated HTTP response status could lead to an unfriendly error message, which would be retried, but could be confusing if the error was persistent and required debugging.
Improve the error handling overall to catch more error cases explicitly and respond better to edge cases.
Also update the terminology in comments to align with the RFC. Variable and function names were not changed because a refactor is intended for HTTP response and it doesn't seem worth the additional code churn.
Bug Fixes:
* Fix issue checking if file links are contained in path links. (Reviewed by Cynthia Shang. Reported by Christophe Cavallié.)
* Allow pg1-path to be optional for synchronous archive-push. (Reviewed by Cynthia Shang. Reported by Jerome Peng.)
* The expire command now checks if a stop file is present. (Fixed by Cynthia Shang. Reviewed by David Steele.)
* Handle missing reason phrase in HTTP response. (Reviewed by Cynthia Shang. Reported by Tenuun.)
* Increase buffer size for lz4 compression flush. (Reviewed by Cynthia Shang. Reported by Eric Radman.)
* Ignore pg-host* and repo-host* options for the remote command. (Reviewed by Cynthia Shang. Reported by Pavel Suderevsky.)
* Fix possibly missing pg1-* options for the remote command. (Reviewed by Cynthia Shang. Reported by Andrew L'Ecuyer.)
Features:
* Time-based retention for full backups. The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days. (Contributed by Cynthia Shang, Pierre Ducroquet. Reviewed by David Steele.)
* Ad hoc backup expiration. Allow the user to remove a specified backup regardless of retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Zstandard compression support. Note that setting compress-type=zst will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* bzip2 compression support. Note that setting compress-type=bz2 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Contributed by Stephen Frost. Reviewed by David Steele, Cynthia Shang.)
* Add backup/expire running status to the info command. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Expire WAL archive only when repo-retention-archive threshold is met. WAL prior to the first full backup was previously expired after the first full backup. Now it is preserved according to retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add local MD5 implementation so S3 works when FIPS is enabled. (Reviewed by Cynthia Shang, Stephen Frost. Suggested by Brian Almeida, John Kelley.)
* PostgreSQL 13 beta1 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace. (Reviewed by Cynthia Shang.)
* Reduce buffer-size default to 1MiB. (Reviewed by Stephen Frost.)
* Throw user-friendly error if expire is not run on repository host. (Contributed by Cynthia Shang. Reviewed by David Steele.)
The purpose of the remote command is to get access to local resources, so a remote should never start another remote. However, this could happen if there were host settings on the remote host, which ended badly with lock errors, loops, etc.
Add pg-local and repo-local options to indicate that the resource is local even if there are host settings.
Note that for the time being these options are internal and not intended for general usage. However, this is likely the direction needed to allow for more symmetric and manageable configurations.
Some pg1-* options are required by the remote so if they are not provided in the remote's configuration file then it may cause a configuration error, depending on the operation. This currently only applies to the pg1-path option.
This is still an issue for repo-* options but the same solution cannot be applied because some repo-* options are secure and cannot be passed on the command-line.
There don't appear to be any behavioral changes since PostgreSQL 12 and all the tests pass.
Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
S3 requires the Content-MD5 header for many requests but MD5 is not available via OpenSSL when FIPS is enabled because it is considered to be insecure.
Even though our usage does not present any security risks, a local MD5 implementation is required to circumvent the over-broad FIPS restriction.
Vendorize the MD5 implementation found at https://openwall.info/wiki/people/solar/software/public-domain-source-code/md5 and add full coverage for the module in the common/crypto unit tests.
The prior default was determined by benchmarking the Perl code prior to the 1.0 release. In general buffer allocation was more expensive in Perl so large buffers gave the best performance. This was due to multiple buffer allocations for each filter in an IO operation.
The C code allocates fixed buffers for each IO operation so the cost for buffer allocation is lower than Perl. That being the case it made sense to benchmark the C code to determine the optimal buffer default.
The performance/storage tests were used to measure the performance of a variety of filters. 1GiB of data was processed by each filter 10 times and the results of the tests were averaged.
While most buffer sizes gave similar performance, 1MiB appeared to perform the best overall. Of course, different architectures are likely to yield different results but this seems like a sensible default. The buffer-size option may still need to be manually configured to give optimal results.
Raw test data for reference:
4MB buffer (prior default)
copy time 1807ms, avg time 180ms, avg throughput: 5942MB/s
md5 time 14200ms, avg time 1420ms, avg throughput: 756MB/s
sha1 time 11431ms, avg time 1143ms, avg throughput: 939MB/s
sha256 time 23463ms, avg time 2346ms, avg throughput: 457MB/s
gzip -6 time 381199ms, avg time 38119ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
1MB buffer (new default)
copy time 1760ms, avg time 176ms, avg throughput: 6100MB/s
md5 time 13739ms, avg time 1373ms, avg throughput: 781MB/s
sha1 time 11025ms, avg time 1102ms, avg throughput: 973MB/s
sha256 time 22539ms, avg time 2253ms, avg throughput: 476MB/s
gzip -6 time 372995ms, avg time 37299ms, avg throughput: 28MB/s
lz4 -1 time 15118ms, avg time 1511ms, avg throughput: 710MB/s
512K buffer
copy time 1782ms, avg time 178ms, avg throughput: 6025MB/s
md5 time 13724ms, avg time 1372ms, avg throughput: 782MB/s
sha1 time 10959ms, avg time 1095ms, avg throughput: 979MB/s
sha256 time 22982ms, avg time 2298ms, avg throughput: 467MB/s
gzip -6 time 378120ms, avg time 37812ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
256K buffer
copy time 1805ms, avg time 180ms, avg throughput: 5948MB/s
md5 time 13706ms, avg time 1370ms, avg throughput: 783MB/s
sha1 time 11074ms, avg time 1107ms, avg throughput: 969MB/s
sha256 time 22588ms, avg time 2258ms, avg throughput: 475MB/s
gzip -6 time 372645ms, avg time 37264ms, avg throughput: 28MB/s
lz4 -1 time 16346ms, avg time 1634ms, avg throughput: 656MB/s
Reason phrases (e.g. OK) are optional in HTTP 1.1 but the space after the status code is not. When the reason phrase was missing the required space was trimmed along with the trailing CR leading to a format error.
Rework the logic to preserve the space and allow empty reason phrases.
Found while testing against the Backblaze S3-compatible API.
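A sketch of the parsing rule described above (the required space is located before the reason phrase, so an empty phrase is preserved rather than trimmed away):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

// Parse "HTTP/1.1 200 OK" or "HTTP/1.1 200 " -- the space after the status
// code is required by the grammar, but the reason phrase may be empty.
static bool
httpStatusLineParse(const char *const line, int *const status, const char **const reason)
{
    const char *space1 = strchr(line, ' ');
    const char *space2 = space1 == NULL ? NULL : strchr(space1 + 1, ' ');

    if (space2 == NULL)
        return false;                                            // required space is missing

    *status = atoi(space1 + 1);                                  // e.g. 200
    *reason = space2 + 1;                                        // "" when the phrase is absent
    return true;
}
```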
Some lz4 versions between r131 and 1.7.5 did not return a sufficient buffer size from LZ4F_compressBound() to allow LZ4F_compressEnd() to complete reliably. While this issue was fixed in lz4 1.7.5 there are affected versions in supported distributions such as CentOS/RHEL 7.
Use one of the hacks suggested in https://github.com/lz4/lz4/issues/290 to increase the buffer size enough for LZ4F_compressEnd() to complete. This means that a slightly larger buffer size is required for all versions but it seems worth it to (hopefully) fix the issue in all lz4 versions.
Update error types thrown by bzip2 to be more consistent with gzip.
Update the bzip2 and gzip error default to be AssertError as that's the more common case in both, and add a 'break;' to the default clause -- we don't intend to just fall through those case statements; even if the default is last, we should be explicit about that.
Clean up some tabs that snuck in, rename a variable to be more clear, and add some comments.
The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days.
The new option will default to 'count' and therefore will not affect current installations. Setting repo-retention-full-type to 'time' will allow the user to use a time period, in days, to indicate full backup retention. Using this method, a full backup can be expired only if the time the backup completed is older than the number of days set with repo-retention-full (calculated from the moment the 'expire' command is run) and at least one full backup meets the retention period. If archive retention has not been configured, then the default settings will expire archives that are prior to the oldest retained full backup. For example, if there are three full backups ending at times that are 25 days old (F1), 20 days old (F2), and 10 days old (F3), and the full retention period is 15 days, then only F1 will be expired; F2 will be retained because F3 is not at least 15 days old.
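Using the example above, an illustrative configuration would be:

```ini
[global]
# interpret repo1-retention-full as a number of days rather than a backup count
repo1-retention-full-type=time
repo1-retention-full=15
```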
Newer versions of sudo output this message to stdout when run in a container:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
See https://github.com/sudo-project/sudo/issues/42 for details.
A simple workaround is to prevent sudo from disabling core dumps. This seems safe enough because if sudo is segfaulting then core files are the least of our worries.
bzip2 is a widely available, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), while being around twice as fast at compression and six times faster at decompression.
bzip2 is currently available on all supported platforms.
Zstandard is a fast lossless compression algorithm targeting real-time compression scenarios at zlib-level and better compression ratios. It's backed by a very fast entropy stage, provided by Huff0 and FSE library.
Zstandard version >= 1.0 is required, which is generally only available on newer distributions.
The previous location was too late to allow --var=s3-all=y to work with --require=/repo-host, which depends on /quickstart/configure-archiving.
Since the section is not included in production documentation, the position is not very important to flow so just move it to where it works.
If the WAL path is absolute then pg1-path should be optional but in fact it was required to load pg_control.
Skip the pg_control check when pg1-path is not specified. The check against the stanza version/system-id remains to protect the repo from corruption.
There is no conflict if the path containing a file link is a parent path of a path link. The Perl code apparently had this right but the migration to C missed it.
Exclude this case when checking for link conflicts.
There have been a number of segfaults reported because a string option expected to be non-null was actually null. This is generally due to options that are expected to be set but are in fact optional.
Protect against this by creating cfgOptionStrNull() to get options that can be null, while changing cfgOptionStr() to always expect non-null. There are relatively few places where nulls are expected.
There is definitely a chance for breakage here as null options might currently be working in the field but will be caught by this new check. Hopefully introducing the check early in the release cycle will allow us to catch any issues.
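A sketch of the intended contract (the types and assert below are stand-ins, not the actual implementation):

```c
#include <assert.h>

typedef struct String String;                                    // opaque stand-in
typedef int ConfigOption;                                        // stand-in for the option enum

const String *cfgOptionStrNull(ConfigOption optionId);           // may return NULL

// The common accessor asserts so an unset option fails loudly at the call site
// instead of causing a segfault somewhere downstream.
const String *
cfgOptionStr(const ConfigOption optionId)
{
    const String *const result = cfgOptionStrNull(optionId);

    assert(result != NULL);

    return result;
}
```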
Previously when retention-archive was set (either by the user or by default), archives prior to the archive-start of the oldest remaining full backup (after backup expiration occurred) would be expired even though the retention-archive threshold had not been met. For example, if there was one full backup remaining after backup expiration and retention-archive was set to 2 with retention-archive-type=full, then archives prior to the archive-start of the remaining full backup would still be removed even though retention-archive required 2 full backups remaining before archives should be expired.
The thought was to keep the archive directory clean and since the full backup did not require prior archives, it was safe to delete them. However, this has caused problems for some users in the past (because they needed the WAL for other purposes) and with the new adhoc and time-based retention features, it was decided that the archives should remain until the threshold was met. The archives will eventually be removed and if having them causes space issues, the expire command and the retention-archive can always be run and adjusted.
The specified backup set (i.e. the backup label provided and all of its dependent backups, if any) will be expired regardless of backup retention rules except that at least one full backup must remain in the repository.
This is implemented by checking for a backup lock on the host where info is running so there are a few limitations:
* It is not currently possible to know which command is running: backup, expire, or stanza-*. The stanza commands are very unlikely to be running so it's pretty safe to guess backup/expire. Command information may be added to the lock file to improve the accuracy of the reported command.
* If the info command is run on a host that is not participating in the backup, e.g. a standby, then there will be no backup lock. This seems like a minor limitation since running info on the repo or primary host is preferred.
Bug Fixes:
* Remove empty subexpression from manifest regular expression. MacOS was not happy about this though other platforms seemed to work fine. (Fixed by David Raftis.)
Improvements:
* Non-blocking TLS implementation. (Reviewed by Slava Moudry, Cynthia Shang, Stephen Frost.)
* Only limit backup copy size for WAL-logged files. The prior behavior could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup. (Reviewed by Cynthia Shang.)
* TCP keep-alive options are configurable. (Suggested by Marc Cousin.)
* Add io-timeout option.
Timeout used for connections and read/write operations.
Note that the entire read/write operation does not need to complete within this timeout but some progress must be made, even if it is only a single byte.
The prior blocking implementation seemed to be prone to locking up on some (especially recent) kernel versions. Since we were unable to reproduce the issue in a development environment we can only speculate as to the cause, but there is a good chance that blocking sockets were the issue or contributed to the issue.
So move to a non-blocking implementation to hopefully clear up these issues. Testing in production environments that were prone to locking shows that the approach is promising and at the very least not a regression.
The main differences from the blocking version are the non-blocking connect() implementation and handling of WANT_READ/WANT_WRITE retries for all SSL*() functions.
Timeouts in the tests needed to be increased because socket connect() and TLS SSL_connect() were not included in the timeout before. The tests don't run any slower, though. In fact, all platforms but Ubuntu 12.04 worked fine with the shorter timeouts.
select() is a bit old-fashioned and cumbersome to use. Since the select() code needed to be modified to handle write ready this seems like a good time to upgrade to poll().
poll() has been around for a long time so there doesn't seem to be any need to provide a fallback to select().
Also change the error on timeout from FileReadError to ProtocolError. This works better for read vs. write and failure to poll() is indicative of a protocol error or unexpected EOF.
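The poll()-based readiness check at the heart of the change reduces to something like this (a sketch with error handling trimmed):

```c
#include <poll.h>
#include <stdbool.h>

// Wait up to timeoutMs for the descriptor to be ready for read (or write).
// Returns true when ready, false on timeout.
static bool
fdReady(const int fd, const bool write, const int timeoutMs)
{
    struct pollfd pollFd =
    {
        .fd = fd,
        .events = write ? POLLOUT : POLLIN,
    };

    return poll(&pollFd, 1, timeoutMs) > 0;
}
```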
The prior behavior introduced in dcddf3a5 could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup since they are copied via tmp files and could change size during the backup.
In general it seems safer to limit this feature to WAL-logged files which will be reconstructed during recovery.
A session looks much the same whether it is initiated from the client or the server, so use the session objects to implement the TLS, HTTP, and S3 test servers.
For TLS, at least, there are some differences between client and server sessions so add a client/server type to SocketSession to determine how the session was initiated.
Aside from reducing code duplication, the main advantage is that the test server will now timeout rather than hanging indefinitely when less input than expected is received.
Previously an error was only thrown when errno was set but in practice this is usually not the case. This may have something to do with getting errno late but attempts to get it earlier have not been successful. It appears that errno usually gets cleared and spot research seems to indicate that other users have similar issues.
An error at this point indicates unexpected EOF so it seems better to just throw an error all the time and be consistent.
To test this properly our test server needs to call SSL_shutdown() except when the client expects this error.
This abstraction allows the session code to be shared between the TLS client and (upcoming) server code.
Session management is no longer implemented in TlsClient so the HttpClient was updated to free and create sessions as needed. No test changes were required for HttpClient so the functionality should be unchanged.
Mechanical changes to the TLS tests were required to use TlsSession where appropriate rather than TlsClient. There should be no change in functionality other than how sessions are managed, i.e. using tlsClientOpen()/tlsSessionFree() rather than just tlsClientOpen().
The errorInternalThrowSys*() functions were marked as returning during coverage testing even when they had no possibility to return, i.e. the error parameter was set to constant true. This meant the compiler would treat the functions as returning even when they would not.
Instead create completely separate functions for coverage to use for THROW_ON_SYS_ERROR*() that can return and leave the regular functions marked __noreturn__.
This abstraction allows the session code to be shared between the socket client and (upcoming) server code. There should be no difference in how the code works -- only the organization has changed. Note that no changes to the tests were required.
This same abstraction will be required for TlsClient but that will be done in a separate commit because it requires test changes.
When the Vagrant file was updated to use pgbackrest/ instead of /backrest/ as the location for executing tests and building the documentation, parts of contributing.xml (and hence CONTRIBUTING.md) were missed. Only some parts of the document are actually executed when CONTRIBUTING.md is built from contributing.xml: the executed parts were updated, but the parts that are not executed were not.
This commit fixes the contributing.xml issue but also removes test/README.md as its contents were out of date and redundant given that they are covered in CONTRIBUTING.md.
The storage driver requires two list functions to be implemented, list and infoList. But the former is a subset of the latter so implementing both in every driver is wasteful. The reason both exist is that in Posix it is cheaper to get a list of names than it is to stat files to get size, time, etc. In S3 these operations are equivalent.
Introduce storageInfoLevelType to determine the amount of information required by the caller. That way Posix can work efficiently and all drivers can return only the data required which saves some bandwidth. The storageList() and storageInfoList() functions remain in the storage interface since they are useful -- the only change is simplifying the drivers with no external impact.
Note that since list() accepted an expression infoList() must now do so. Checking the expression is optional for the driver but can be used to limit results or save IO costs.
Similarly, exists() and pathExists() are just specialized forms of info() so adapt them to call info() instead.
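A sketch of the info level idea (the exact set of levels is illustrative):

```c
// The caller states how much information it needs so, e.g., the Posix driver
// can skip stat() when only names are required and object-store drivers can
// avoid returning fields that would be thrown away.
typedef enum
{
    storageInfoLevelExists,                                      // existence only
    storageInfoLevelType,                                        // + file/path/link type
    storageInfoLevelBasic,                                       // + size and modification time
    storageInfoLevelDetail,                                      // + ownership, mode, link destination
} StorageInfoLevel;
```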
This has been the policy for some time but due to migration pressure only new functions and refactors have been following this rule. Now it seems sensible to make a clean sweep and move all the comments that have not been moved already (i.e. most of them).
Only obvious typos and gross inaccuracies in the comments have been fixed. For the most part this was a copy and paste operation.
Useless comments, e.g. "New object", were not copied. Even so, there are surely many deficient comments left.
Some rearranging was done where needed and functions were placed in the proper sections, e.g. "Constructors", "Functions", etc.
A few function prototypes were found that no longer had an implementation. These were removed, but there may be more.
The coding document has been updated to reflect this policy, which is not new but has never been documented.
This is really a socket option so the new name is clearer.
Since common/io/socket/tcp will contain a mix of options it makes sense to rename it to socket and cascade name changes as needed.
Prior to 2.25 the individual TCP keep-alive options were not being configured due to a missing header. In 2.25 they were being configured incorrectly due to a disconnect between the timeout specified in ms and what was expected by the TCP options, i.e. seconds.
Instead make the TCP keep-alive options directly configurable, with correct units and better testing. Keep-alive is enabled by default (though it can be defaulted to the system setting instead) and the rest of the options are not set by default. This is in line with what PostgreSQL does, though PostgreSQL does not allow keep-alive to be defaulted.
Also move configuration of TCP options before connect() as PostgreSQL does.
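On Linux the options map to SO_KEEPALIVE plus TCP_KEEPCNT/TCP_KEEPIDLE/TCP_KEEPINTVL, set before connect(), roughly as follows (a sketch; the values are examples, not defaults):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Enable keep-alive and set probe count, idle time, and probe interval.
// Note that the TCP_* options take seconds, not milliseconds.
static void
sckKeepAliveSet(const int fd)
{
    const int enable = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &enable, sizeof(enable));

    const int count = 3;                                         // example values only
    const int idle = 10;
    const int interval = 10;

    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
}
```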
Features:
* Add lz4 compression support. Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* Add --dry-run option to the expire command. Use dry-run to see which backups/archive would be removed by the expire command without actually removing anything. (Contributed by Cynthia Shang, Luca Ferrari.)
Improvements:
* Improve performance of remote manifest build. (Suggested by Jens Wilke.)
* Fix detection of keepalive options on Linux. (Contributed by Marc Cousin.)
* Add configure host detection to set standards flags correctly. (Contributed by Marc Cousin.)
* Remove compress/compress-level options from commands where unused. These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options. (Reviewed by Cynthia Shang.)
* Limit backup file copy size to size reported at backup start. If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data. (Reviewed by Cynthia Shang.)
Other storage*InfoList() functions do this but it was missed here.
memResize()/memFree() operations become more expensive as the mem context grows larger so freeing it periodically saves processing time.
If the work or result directories already contain data then the docs might be generated slightly differently. Doing a clean ensures they will always produce the same output (provided the code does not change).
Building the contributing document has some special requirements because it runs Docker in Docker so the repo path must align on the host and all Docker containers. Run `pgbackrest/doc/doc.pl` from within the home directory of the user that will do the doc build, e.g. `/home/vagrant`. If the repo is not located directly in the home directory, e.g. `/home/vagrant/pgbackrest`, then a symlink may be used, e.g. `ln -s /path/to/repo /home/vagrant/pgbackrest`.
Mount the repo in the Vagrantfile at /home/vagrant/pgbackrest but provide a link from the old location at /backrest to make the transition less painful.
If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data.
This also reduces the likelihood of seeing torn pages during the copy. Torn pages can still occur in the middle of the file, though, so they must be handled.
This code stanza was not being included on Linux platforms because of a missing header file.
Also update the order of operations and make the timeout calculations more sensible.
The primary source for project info is now src/version.h.
The pgBackRestDoc::ProjectInfo module loads the project info from src/version.h at runtime so there is no need to update it.
This is consistent with the way BackRest and BackRest test were renamed way back in 18fd2523.
More modules will be moving to pgBackRestDoc soon so renaming now reduces churn later.
This directory was once the home of the production Perl code but since f0ef73db this is no longer true.
Move the modules to test in most cases, except where the module is expected to be useful for the doc engine beyond the expected lifetime of the Perl test code (about a year if all goes well).
The exception is pgBackRest::Version which requires more work to migrate since it is used to track pgBackRest versions.
LZ4 compresses data faster than gzip but at a lower ratio. This can be a good tradeoff in certain scenarios.
Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest.
This was the interface between Perl and C introduced in 36a5349b but since f0ef73db has only been used by the Perl integration tests. This is expensive code to maintain just for testing.
The main dependency was the interface to storage, no matter where it was located, e.g. S3. Replace this with the newly-introduced repo commands (d3c83453) that allow access to repo storage via the command line.
The other dependency was on various cfgOption* functions and CFGOPT_ constants that were convenient but not necessary. Replace these with hard-coded strings in most places and create new constants for commonly used values.
Remove all auto-generated Perl code. This means that the error list will no longer be maintained automatically so copy used errors to Common::Exception.pm. This file will need to be maintained manually going forward but there is not likely to be much churn as the Perl integration tests are being retired.
Update test.pl and related code to remove LibC builds.
Ding, dong, LibC is dead.
These commands are generally useful but more importantly they allow removing LibC by providing the Perl integration tests an alternate way to work with repository storage.
All the commands are currently internal only and should not be used on production repositories.
This command only makes sense for the repository storage since other storage (e.g. pg and spool) must be located on a local Posix filesystem and can be listed using standard unix commands. Since the repo storage can be located in lots of places, having a common way to list it makes sense.
Prefix with repo- to make the scope of this command clear.
Update documentation to reflect this change.
Add compress-type option and deprecate compress option. Since the compress option is boolean it won't work with multiple compression types. Add logic to cfgLoadUpdateOption() to update compress-type if it is not set directly. The compress option should no longer be referenced outside the cfgLoadUpdateOption() function.
Add common/compress/helper module to contain interface functions that work with multiple compression types. Code outside this module should no longer call specific compression drivers, though it may be OK to reference a specific compression type using the new interface (e.g., saving backup history files in gz format).
Unit tests only test compression using the gz format because other formats may not be available in all builds. It is the job of integration tests to exercise all compression types.
Additional compression types will be added in future commits.
Adding a sleep before was necessary since only adding a sleep after did not always work. This helps to ensure the backup stop time for the previous backup does not equal time-recovery-timestamp. The sleep after allows enough time between the time retrieval and dropping important_table so PostgreSQL can consistently recover to before the table drop.
Note that these issues were caused by picking a timestamp too close to the restore command or a database operation, not due to any problem in backup selection of the restore command.
These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options.
"gz" was used as the extension but "gzip" was generally used for function and type naming.
With a new compression format on the way, it makes sense to standardize on a single abbreviation to represent a compression format in the code. Since the extension is standard and we must use it, also use the extension for all naming.
The prior code used TRY...CATCH blocks to cleanup mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized when the error occurred and ensuring that the prior memory context was restored.
This worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.
Instead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.
One bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros.
Bug Fixes:
* Prevent defunct processes in asynchronous archive commands. (Reviewed by Stephen Frost. Reported by Adam Brusselback, ejberdecia.)
* Error when archive-get/archive-push/restore are not run on a PostgreSQL host. (Reviewed by Stephen Frost. Reported by Jesper St John.)
* Read HTTP content to eof when size/encoding not specified. (Reviewed by Cynthia Shang. Reported by Christian ROUX.)
* Fix resume when the resumable backup was created by Perl. In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully. (Reported by Kacey Holston.)
Features:
* Auto-select backup set on restore when time target is specified. Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used. (Contributed by Cynthia Shang.)
Improvements:
* Skip pg_internal.init temp file during backup. (Reviewed by Cynthia Shang. Suggested by Michael Paquier.)
* Add more validations to the manifest on backup. (Reviewed by Cynthia Shang.)
Documentation Improvements:
* Prevent lock-bot from adding comments to locked issues. (Suggested by Christoph Berg.)
If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.
This is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file.
This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.
Restore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required.
The main improvement is a double-fork to prevent zombie processes if the parent process exits after the (child) async process. This is a real possibility since the parent process sticks around to monitor the results of the async process.
In the first fork, ignore SIGCHLD in the very unlikely case that the async process exits before the first fork. This is probably only possible if the async process exits immediately, perhaps due to a chdir() failure. Set SIGCHLD back to default in the async process so waitpid() will work as expected.
Also update the comment on chdir() to more accurately reflect what is happening.
Finally, add a test in certain debug builds to ensure the first fork exits very quickly. This only works when valgrind is not in use because valgrind makes forking so slow that it is hard to tell if the async process performed work or not (in the case that the second fork goes missing and the async process is a direct child).
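A condensed sketch of the fork sequence described above (error handling and the actual async work are omitted):

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Double-fork so the async process is reparented to init and the monitoring
// parent never leaves a zombie behind.
static void
asyncForkDetached(void)
{
    pid_t pid = fork();

    if (pid == 0)
    {
        // Intermediate child: ignore SIGCHLD in case the async process exits
        // before this process does, then fork the async process and exit
        // immediately so it is reparented to init.
        signal(SIGCHLD, SIG_IGN);

        if (fork() == 0)
        {
            signal(SIGCHLD, SIG_DFL);                            // restore so waitpid() works here
            // ... async process work (chdir(), command execution) ...
            _exit(0);
        }

        _exit(0);
    }

    // Parent: reap the short-lived intermediate child
    waitpid(pid, NULL, 0);
}
```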
In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully.
Generally, the content-length or transfer-encoding headers will be used to specify how much content should be expected.
There is a special case where the server sends 'Connection: close' without the content headers and the content may be read up until eof.
This appears to be an atypical usage but it is required by the specification.
Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used.
Currently a limited number of date formats are recognized and timezone names are not allowed, only timezone offsets.
Validate that checksums exist for zero size files. This means that the checksums for zero size files are explicitly set by backup even though they'll always be the same. Also validate that zero length files have the correct checksum.
Validate that repo size is > 0 if size is > 0. No matter what compression type is used a non-zero amount of data cannot be stored in zero bytes.
Bug Fixes:
* Fix missing files corrupting the manifest. If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored. (Reviewed by Cynthia Shang. Reported by Vitaliy Kukharik.)
Improvements:
* Use pkg-config instead of xml2-config for libxml2 build options. (Contributed by David Steele, Adrian Vondendriesch.)
* Validate checksums are set in the manifest on backup/restore. (Reviewed by Cynthia Shang.)
This is a modest start but it addresses the specific issue that was caused by the bug fixed in 45ec694a. This validation will produce an immediate error rather than erroring out partway through the restore.
More validations are planned but this is the most important one and seems safest for this release.
If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.
The issue was that removing files from the manifest invalidated the pointers stored in the processing queues. When a file was removed, all the pointers shifted to the next file in the list, causing a file to be unprocessed. Since the unprocessed file was still in the manifest it would be saved with no checksum, causing a failure on restore.
When process-max was > 1 then the bug would often not express since the file had already been pulled from the queue and updates to the manifest are done by name rather than by pointer.
pkg-config is a generic way to get build options rather than relying on a package-specific utility.
XML2_CONFIG can be used to override this utility for systems that do not ship pkg-config.
Bug Fixes:
* Fix error in timeline conversion. The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed base 10 instead of 16, which led to errors when the timeline was ≥ 0xA. (Reported by Lukas Ertl, Eric Veldhuyzen.)
The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed base 10 instead of 16, which led to errors when the timeline was ≥ 0xA.
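The fix reduces to converting with base 16, e.g. (a minimal sketch):

```c
#include <stdint.h>
#include <stdlib.h>

// WAL segment names encode the timeline as eight hex digits, e.g. "0000000A",
// so the conversion must use base 16 -- base 10 breaks once the timeline
// reaches 0xA.
static uint32_t
walTimelineFromStr(const char *const timelineStr)
{
    return (uint32_t)strtoul(timelineStr, NULL, 16);             // "0000000A" -> 10
}
```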
Bug Fixes:
* Fix options being ignored by asynchronous commands. The asynchronous archive-get/archive-push processes were not loading options configured in command configuration sections, e.g. [global:archive-get]. (Reviewed by Cynthia Shang. Reported by Urs Kramer.)
* Fix handling of \ in filenames. \ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading. Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Features:
* pgBackRest is now pure C.
* Add pg-user option. Specifies the database user name when connecting to PostgreSQL. If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior. (Contributed by Mike Palmiotto.)
* Allow path-style URIs in S3 driver.
Improvements:
* The backup command is implemented entirely in C. (Reviewed by Cynthia Shang.)
The local, remote, archive-get-async, and archive-push-async commands were used to run functionality that was not directly available to the user. Unfortunately that meant they would not pick up options from the command that the user expected, e.g. backup, archive-get, etc.
Remove the internal commands and add roles which allow pgBackRest to determine what functionality is required without implementing special commands. This way the options are loaded from the expected command section.
Since remote is no longer a specific command with its own options, more manipulation is required when calling remote. This might be something we can improve in the config system but it may be worth leaving as is because it is a one-off, for now at least.
Although path-style URIs have been deprecated by AWS, they may still be used with products like Minio because no additional DNS configuration is required.
Path-style URIs must be explicitly enabled since it is not clear how they can be auto-detected reliably. More importantly, faulty detection could cause regressions in current installations.
Previously dates were not being filled by these functions which was fine since dates were not used.
We plan to use dates for the ls command plus it makes sense for the driver to be complete since it will be used as an example.
These are similar to what mktime() and strptime() do but they ignore the local system timezone which saves having to munge the TZ env variable to do time conversions.
This macro was created before the String object existed so subsequent usage with String always included a lot of strPtr() wrapping.
TEST_RESULT_STR_Z() had already been introduced but a wholesale replacement of TEST_RESULT_STR() was not done since the priority was on the C migration.
Update all calls to (old) TEST_RESULT_STR() with one of the following variants: (new) TEST_RESULT_STR(), TEST_RESULT_STR_Z(), TEST_RESULT_Z(), TEST_RESULT_Z_STR().
Specifies the database user name when connecting to PostgreSQL.
If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior.
\ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading.
Use jsonFromStr() to properly quote and escape \.
Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.
Remove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.
Remove "c" option that allowed the remote to tell if it was being called from C or Perl.
Code to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.
Update build and installation instructions in the user guide.
Remove all Perl unit tests.
Remove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.
Rename perlReq to binReq in define.yaml to indicate that the binary is required for a test. This had been the actual meaning for quite some time but the key was never renamed.
For the most part this is a direct migration of the Perl code into C except as noted below.
A backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.
The logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. This has never been an issue in the field that we know of, but we found it in testing.
For online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.
Archive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.
Resume will now work even if hardlink settings have been changed.
Reviewed by Cynthia Shang.
Bug Fixes:
* Fix archive-push/archive-get when PGDATA is symlinked. These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call. (Reported by Stephen Frost, Milosz Suchy.)
* Fix reference list when backup.info is reconstructed in expire command. Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual. This unlikely combination of events means this is probably not a problem in the field.
* Fix segfault on unexpected EOF in gzip decompression. (Reported by Stephen Frost.)
Commit 7168e074 tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked.
If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call.
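The check roughly works as sketched below, using standard POSIX calls; the error handling is simplified and the names are illustrative.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    // Verify that the current working directory and the configured PGDATA path refer
    // to the same directory, even when PGDATA is reached through a symlink.
    static void pgdataPathCheck(const char *configuredPath)
    {
        char cwdFirst[PATH_MAX];

        if (getcwd(cwdFirst, sizeof(cwdFirst)) == NULL)
        {
            perror("getcwd");
            exit(1);
        }

        if (strcmp(cwdFirst, configuredPath) != 0)
        {
            // cwd() disagrees with the configured path (e.g. PGDATA is symlinked), so
            // chdir() to the configured path and require that cwd() then resolves to
            // the same physical directory as the first call.
            char cwdSecond[PATH_MAX];

            if (chdir(configuredPath) != 0 || getcwd(cwdSecond, sizeof(cwdSecond)) == NULL)
            {
                perror("chdir/getcwd");
                exit(1);
            }

            if (strcmp(cwdFirst, cwdSecond) != 0)
            {
                fprintf(stderr, "PGDATA '%s' does not match configured path '%s'\n", cwdFirst, configuredPath);
                exit(1);
            }
        }
    }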
If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.
Change the existing assertion to an error to catch this condition in production gracefully.
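A sketch of the corrected behavior with zlib, under simplified error handling: when the input is exhausted before inflate() reports Z_STREAM_END, the stream is truncated and a format error is raised instead of hitting an assertion.

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    // Decompress a gzip buffer; a stream that ends before Z_STREAM_END is reported as
    // a format error (zero-length or truncated file) rather than asserting.
    static void gzDecompress(const unsigned char *in, size_t inSize)
    {
        z_stream strm = {0};
        unsigned char out[8192];
        int result = Z_OK;

        if (inflateInit2(&strm, 15 + 16) != Z_OK)                       // 15 + 16 selects gzip format
        {
            fprintf(stderr, "inflateInit2 failed\n");
            exit(1);
        }

        strm.next_in = (unsigned char *)in;
        strm.avail_in = (unsigned int)inSize;

        do
        {
            strm.next_out = out;
            strm.avail_out = sizeof(out);

            result = inflate(&strm, Z_NO_FLUSH);

            if (result != Z_OK && result != Z_STREAM_END && result != Z_BUF_ERROR)
            {
                fprintf(stderr, "inflate error %d\n", result);
                exit(1);
            }
        }
        while (result == Z_OK && (strm.avail_in > 0 || strm.avail_out == 0));

        // No more input but the stream never signaled its end -- report an error
        if (result != Z_STREAM_END)
        {
            fprintf(stderr, "unexpected end of compressed data\n");
            exit(1);
        }

        inflateEnd(&strm);
    }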
Adding a manifest to backup.info was migrated to C in 4e4d1f41, but deduplication of the references was missed, leading to a reference for every file being added to backup.info.
Since the backup command is still using the Perl version of reconstruct, this issue will not manifest unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual.
This unlikely combination of events means this is probably not a problem in the field.
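The missing piece was essentially a contains check before adding each file's reference, as in this simplified sketch (the names are hypothetical; the real code uses pgBackRest's list types):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define REFERENCE_MAX 1024

    typedef struct ReferenceList
    {
        const char *list[REFERENCE_MAX];
        size_t size;
    } ReferenceList;

    static bool referenceListContains(const ReferenceList *refList, const char *reference)
    {
        for (size_t refIdx = 0; refIdx < refList->size; refIdx++)
            if (strcmp(refList->list[refIdx], reference) == 0)
                return true;

        return false;
    }

    // Called once per file in the manifest -- without the contains check every file
    // added another copy of its backup reference to backup.info.
    static void referenceListAdd(ReferenceList *refList, const char *reference)
    {
        if (!referenceListContains(refList, reference) && refList->size < REFERENCE_MAX)
            refList->list[refList->size++] = reference;
    }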
Installing lcov 1.14 everywhere turned out to be a problem just as using 1.13 on Ubuntu 19.04 was.
Since we primarily use Ubuntu 18.04 for coverage testing and reporting, we definitely want to make sure that works. So, revert to using the default packaged lcov except when specified otherwise in VmTest.pm.
PostgreSQL minor version releases are also included since all containers have been rebuilt.
This should have been removed when the support for the option was removed in c7333190.
The option cannot be removed entirely because we don't want to error in the case where --force was specified but the stanza is valid.
Adding a dummy struct member which is always set by the P() macro allows a single macro to be used for parameters or no parameters without violating C's prohibition on the empty {} initializer.
-Wmissing-field-initializers remains disabled because it still gives wildly different results between versions of gcc.
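A minimal sketch of the technique, with hypothetical struct and function names: because .dummy is always initialized, the empty {} initializer (which C forbids) never appears, whether or not the caller supplies parameters.

    #include <stdbool.h>

    typedef struct TestParam
    {
        bool dummy;                                             // Always set, so the initializer is never empty
        int timeout;                                            // Example optional parameter
    } TestParam;

    // The same macro works with or without named parameters
    #define P(...) ((TestParam){.dummy = true, __VA_ARGS__})

    static void testRun(TestParam param)
    {
        (void)param.timeout;                                    // Real code would act on the parameters
    }

    int main(void)
    {
        testRun(P());                                           // No parameters
        testRun(P(.timeout = 30));                              // One parameter

        return 0;
    }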
Bug Fixes:
* Fix remote timeout in delta restore. When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout. (Reported by James Sewell, Jens Wilke.)
* Fix handling of repeated HTTP headers. When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior. (Reported by donicrosby.)
Improvements:
* JSON output from the info command is no longer pretty-printed. Monitoring systems can more easily ingest the JSON without linefeeds. External tools such as jq can be used to pretty-print if desired. (Contributed by Cynthia Shang.)
* The check command is implemented entirely in C. (Contributed by Cynthia Shang.)
Documentation Improvements:
* Document how to contribute to pgBackRest. (Contributed by Cynthia Shang.)
* Document maximum version for auto-stop option. (Contributed by Brad Nicholson.)
Test Suite Improvements:
* Fix container test path being used when --vm=none. (Suggested by Stephen Frost.)
* Fix mismatched timezone in expect test. (Suggested by Stephen Frost.)
* Don't autogenerate embedded libc code by default. (Suggested by Stephen Frost.)
When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior.
Reported by donicrosby.
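A simplified sketch of the new behavior (hypothetical names): when a header key is seen again, the values are joined with a comma and space, following the usual HTTP list semantics, instead of raising an error.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // Combine an existing header value with a repeated one into a single
    // comma-separated value. The caller frees the result.
    static char *httpHeaderCombine(const char *existing, const char *repeated)
    {
        size_t size = strlen(existing) + strlen(repeated) + 3;  // ", " plus terminator
        char *combined = malloc(size);

        if (combined != NULL)
            snprintf(combined, size, "%s, %s", existing, repeated);

        return combined;
    }

    int main(void)
    {
        // Two occurrences of the same header with values "1" and "2" become "1, 2"
        char *value = httpHeaderCombine("1", "2");

        printf("%s\n", value);
        free(value);

        return 0;
    }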
We had some problems with newer versions, so we held off on updating. Those problems appear to have been resolved.
In addition, the --compat flag is no longer required. Prior versions of MinIO required all parts of a multi-part upload (except the last) to be of equal size. The --compat flag was introduced to restore the default S3 behavior. Now --compat is only required when ETag is being used for MD5 verification, which we don't do.
This documentation shows how to build a development environment on Ubuntu 19.04 and should work for other Debian-based distros.
Note that this document is not included in automated testing due to some unresolved issues with Docker in Docker on Travis CI. We'll address this in the future when we add contributing documentation to the website.
Mark all pre commands as skip so they won't be run again after the container is built.
Ensure that pre commands added to the container are run as the container user if they are not intended to run as root.
This is only needed when new code is added to the Perl C library, which is becoming rare as the migration progresses.
Also, the code will vary slightly based on the Perl version used for generation so for normal users it is just noise.
Suggested by Stephen Frost.
When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout.
Add keep-alives to prevent remote timeout.
Reported by James Sewell, Jens Wilke.
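Conceptually the fix looks like the sketch below; the helper names are hypothetical stand-ins for the real protocol and file functions.

    #include <stdbool.h>
    #include <stddef.h>
    #include <time.h>

    #define KEEP_ALIVE_INTERVAL_SEC 30                          // Assumed interval well below protocol-timeout

    // Hypothetical stand-ins for the real protocol/file functions
    static void remoteNoOp(void) {}                             // Send a no-op to keep the remote alive
    static bool fileMatchesRepo(const char *file) {(void)file; return false;}
    static void fileFetch(const char *file) {(void)file;}

    void restoreFiles(const char **fileList, size_t fileTotal)
    {
        time_t lastActivity = time(NULL);

        for (size_t fileIdx = 0; fileIdx < fileTotal; fileIdx++)
        {
            // If nothing has been fetched recently, ping the remote so it does not time out
            if (time(NULL) - lastActivity >= KEEP_ALIVE_INTERVAL_SEC)
            {
                remoteNoOp();
                lastActivity = time(NULL);
            }

            if (!fileMatchesRepo(fileList[fileIdx]))
            {
                fileFetch(fileList[fileIdx]);
                lastActivity = time(NULL);                      // A fetch also counts as activity
            }
        }
    }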
Note that building the manifest on each host has been temporarily removed.
This feature will likely be brought back as a non-default option (after the manifest code has been fully migrated to C) since it can be fairly expensive.
Three major changes were required to get this working:
1) Provide the path to pgbackrest in the build directory when running outside a container. Tests in a container will continue to install and run against /usr/bin/pgbackrest.
2) Set a per-test lock path so tests don't conflict on the default /tmp/pgbackrest path. Also set a per-test log-path while we are at it.
3) Use localhost instead of a custom host for TLS test connections. Tests in containers will continue to update /etc/hosts and use the custom host.
Add infrastructure and update harnessCfgLoad*() to get the correct exe and paths loaded for testing.
Since new tests are required to verify that running outside a container works, also rework the tests in Travis CI to provide coverage within a reasonable amount of time. Mainly, break up the doc tests by VM and run an abbreviated unit test suite on co6 and co7.
Features:
* PostgreSQL 12 support.
* Add info command set option for detailed text output. The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations. (Contributed by Cynthia Shang. Suggested by Stephen Frost, ejberdecia.)
* Add standby restore type. This restore type automatically adds standby_mode=on to recovery.conf for PostgreSQL < 12 and creates standby.signal for PostgreSQL ≥ 12, creating a common interface between PostgreSQL versions. (Reviewed by Cynthia Shang.)
Improvements:
* The restore command is implemented entirely in C. (Reviewed by Cynthia Shang.)
Documentation Improvements:
* Document the relationship between db-timeout and protocol-timeout. (Contributed by Cynthia Shang. Suggested by James Chanco Jr.)
* Add documentation clarifications regarding standby repositories. (Contributed by Cynthia Shang.)
* Add FAQ for time-based Point-in-Time Recovery. (Contributed by Cynthia Shang.)
Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.
A comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.
recovery.signal and standby.signal are automatically created based on the recovery settings.
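For example, after a time-based restore the end of postgresql.auto.conf might look roughly like this (the exact comment text, stanza name, and recovery target depend on the configuration used):

    # Recovery settings generated by pgBackRest restore.
    restore_command = 'pgbackrest --stanza=demo archive-get %f "%p"'
    recovery_target_time = '2019-10-15 12:00:00'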
The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.
This information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release.
This restore type automatically adds standby_mode=on to recovery.conf.
This could be accomplished previously by setting --recovery-option=standby_mode=on but PostgreSQL 12 requires standby mode to be enabled by a special file named standby.signal.
The new restore type allows us to maintain a common interface between PostgreSQL versions.
We haven't had the time to complete this documentation and it has suffered bit rot.
This prevents us from building the docs on PostgreSQL >= 11 so just comment it all out until it can be updated.
For the most part this is a direct migration of the Perl code into C.
There is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.
The C code works like this:
If a restore is run as a non-root user (the typical scenario) then all restored files will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group and their ownership cannot be updated, an error will result and the ownership will need to be fixed by a privileged user before the restore can be retried.
If a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name.
Reviewed by Cynthia Shang.
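The decision can be sketched as follows for the user id (the group id works the same way); the helper is a simplification with hypothetical names.

    #include <pwd.h>
    #include <stddef.h>
    #include <sys/types.h>
    #include <unistd.h>

    // Resolve the uid that a restored file should be owned by. manifestUser is the
    // user name recorded in the manifest (may be NULL) and dataDirUser is the owner
    // of the PostgreSQL data directory (may be NULL if it cannot be mapped to a name).
    static uid_t restoreUid(const char *manifestUser, const char *dataDirUser)
    {
        // Non-root restore: everything belongs to the executing user
        if (geteuid() != 0)
            return geteuid();

        // Root restore: prefer the manifest user, then the data directory owner
        const char *candidateList[] = {manifestUser, dataDirUser};

        for (size_t candidateIdx = 0; candidateIdx < 2; candidateIdx++)
        {
            if (candidateList[candidateIdx] != NULL)
            {
                const struct passwd *pwd = getpwnam(candidateList[candidateIdx]);

                if (pwd != NULL)
                    return pwd->pw_uid;
            }
        }

        // Fall back to root when neither name can be resolved locally
        return 0;
    }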
The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes, timestamps, etc. A list of databases is also included for selective restore.
The purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that nothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.
For now, migrate enough functionality to implement the restore command.
Reviewed by Cynthia Shang.