It is better to clear errors after the catch block completes rather than leave them set until the next error. This also makes it possible to tell when an error is currently being handled, which a function further down the stack might use to modify its behavior. Currently this is only useful in testing, but clearing the error seems like a good idea in general.
Two places used errors outside the CATCH() block. Mem context cleanup now uses a FINALLY(), which is a better implementation anyway. The error handling in main() now calls exitSafe() from within the CATCH() block.
Add StringList, which is not a primitive type but rather an array of String types.
Also update pckWriteToLog() to work after pckWriteEnd(), i.e. when this->tagStackTop is NULL.
Move a PackRead or PackWrite object to a new mem context.
Also note that these functions may not work as expected with pack objects created by pckReadNewBuf() and pckWriteNewBuf() since the pack object does not have ownership of the passed buffer and cannot move it.
The hrnErrorThrowP() macro allows errors with specified fields to be generated, which simplifies testing.
Update the common/exit test to use the new macro.
Azurite, which is used for testing, did not enforce this before so the capital letters were not a problem. Now Azurite enforces the same rules as Azure so use lower-case identifiers instead.
These names were only used in integration tests so there was no production impact.
This allows TEST_STORAGE_EXISTS() to be used in most cases where TEST_STORAGE_REMOVE() was used before.
Rename TEST_STORAGE_REMOVE() to HRN_STORAGE_REMOVE() now that it is no longer used as a test. Still allow an error when the file is missing, just to help keep tests tidy.
Since the pack type was stored in 4 bits, only 15 values were allowed (0 was reserved).
Allow virtually unlimited types by storing type info in a base-128 encoded integer following the tag when the type bits in the tag are set to 0xF.
Also separate the type IDs used in the pack (PackTypeMap) from those presented to the user (PackType). The prior PackType enum exposed implementation details to the user, e.g. pckTypeUnknown.
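As a point of reference, base-128 encoding stores seven data bits per byte and uses the high bit as a continuation flag. The sketch below is illustrative only and is not the project's pckWriteU64Internal()/pckReadU64Internal() implementation.

#include <stdint.h>
#include <stddef.h>

// Illustrative base-128 (varint) encoder: seven data bits per byte, high bit set
// when more bytes follow. Returns the number of bytes written to buffer.
static size_t
base128Encode(uint64_t value, uint8_t *const buffer)
{
    size_t size = 0;

    while (value >= 0x80)
    {
        buffer[size++] = (uint8_t)(value | 0x80);                 // Low seven bits plus continuation flag
        value >>= 7;
    }

    buffer[size++] = (uint8_t)value;                              // Final byte, continuation flag clear

    return size;
}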
The functions were named with short integer representations (e.g. I32) but the param structs were using longer ones, e.g. UInt32. Shorten the integer representations in the param structs to match.
Also rename pckReadUInt64Internal() to pckReadU64Internal() for the same reason.
The pg storage must be started before the repo storage to set the max remotes allowed to 2. The protocol helper expects all remotes to have the same type so we are cheating here a bit, but without this ordering the second remote will never be sent an explicit exit and may not save coverage data.
Bug Fixes:
* Fix issues with leftover spool files from a prior restore. (Reviewed by Cynthia Shang, Stefan Fercot, Floris van Nee. Reported by Floris van Nee.)
* Fix issue when checking links for large numbers of tablespaces. (Reviewed by Cynthia Shang, Avinash Vallarapu. Reported by Avinash Vallarapu.)
* Free no longer needed remotes so they do not timeout during restore. (Reviewed by Cynthia Shang. Reported by Francisco Miguel Biete.)
* Fix help when a valid option is invalid for the specified command. (Reviewed by Stefan Fercot. Reported by Cynthia Shang.)
Features:
* Add PostgreSQL 14 support. (Reviewed by Cynthia Shang.)
* Add automatic GCS authentication for GCE instances. (Reviewed by Jan Wieck, Daniel Farina.)
* Add repo-retention-history option to expire backup history. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele.)
* Add db-exclude option. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
Improvements:
* Change archive expiration logging from detail to info level. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Remove stanza archive spool path on restore. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Do not write files atomically or sync paths during backup copy. (Reviewed by Stephen Frost, Stefan Fercot, Cynthia Shang.)
Documentation Improvements:
* Update contributing documentation. (Contributed by Cynthia Shang. Reviewed by David Steele, Stefan Fercot.)
* Consolidate RHEL/CentOS user guide into a single document. (Reviewed by Cynthia Shang.)
* Clarify that repo-s3-role is not an ARN. (Contributed by Isaac Yuen. Reviewed by David Steele.)
HRN_CFG_LOAD() handles the majority of test configuration loads and has various options for special cases.
It was not clear when to use harnessCfgLoadRaw() vs harnessCfgLoad(). Now "raw" functionality is granular and enabled by parameters, e.g. noStd.
Make the macros more consistent in format and make sure that each macro outputs a line number before doing any work so when errors happen it is clear where they happened.
Add noRecurse option to TEST_STORAGE_LIST().
Add comment option to all storage macros.
Instead store the line number in hrnTestLogPrefix() so it doesn't need to be passed to hrnTestResultBegin().
Also add missing linefeed in hrnStorageList().
All instances of storageTest are better represented with storagePg*(), which allows TEST_PATH and TEST_PATH_PG to be omitted.
Also remove some headers which are no longer needed.
The default is to keep all backup history to match the current behavior. In minimal configuration (0 days), unexpired backups are always kept in history.
When a full backup manifest expires, all dependent differential/incremental manifests expire as well.
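For example, a repository that keeps one year of backup history might be configured like this (illustrative snippet using the indexed form of the option):

[global]
repo1-retention-history=365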
Run the remote process inside a forked child process instead of exec'ing it. This allows coverage to accumulate in the remote process rather than needing to test the remote protocol functions directly, resulting in better end-to-end testing and less test duplication. Another advantage is that the pgbackrest binary does not need to be built for the test and the test does not need to run in a container.
This allows protocolRemoteExec() to be shimmed, which means the remote can be run as a child of the test process, simplifying coverage testing.
The shim does not need SSH parameters, so also split those out into a separate function and update the tests to match.
Add the executable to the parameter list to avoid the first option being lost. The backup, restore, and verify tests worked OK with their first option being defaulted because it ended up being job-retry, which worked fine as the default.
Add hrnProtocolLocalShimUninstall() to allow the shim to be uninstalled.
Log shim at debug level to make it obvious in the logs when a shim is in use.
There are no code changes from PostgreSQL 13 so simply add the new version.
Add CATALOG_VERSION_NO_MAX to allow the catalog version to "float" during the PostgreSQL beta/rc period so new pgBackRest versions are not required when the catalog version changes.
Update the integration tests to handle new PostgreSQL startup messages.
manifestLinkCheck() was pretty inefficient, so large numbers of links caused it to use a lot of memory and eventually crash. This is a more efficient implementation which runs in O(n log n) and uses far less memory.
Checking for duplicate file links has been added, which represents a change in behavior, but hopefully a good one.
Test and remove WAL segment 000000010000000100000002 in the test where it is created rather than as a byproduct of a much later test.
Remove incorrect local role from test. It worked, but was not the correct command role to be using when calling cmdArchiveGet().
Move some harness headers down to the correct section.
Comment formatting was not used much but it incurred a heavy cost in each macro to process possible formatting.
Remove formatted comments where they did not contain valuable information and replace with strZ(strNewFmt()) otherwise.
A define was already added for TEST_PATH but it was not widely used. Replace all occurrences of testPath() with TEST_PATH in the tests.
Replace testUser() with TEST_USER, testGroup() with TEST_GROUP, testRepoPath() with HRN_PATH_REPO, testDataPath() with HRN_PATH, testProjectExe() with TEST_PROJECT_EXE, and testScale() with TEST_SCALE.
Replace {[path]}, {[user]}, {[group]}, etc. with defines and remove hrnReplaceKey(). This is better than having two ways to deal with replacements.
In some cases the original test*() getters were kept because they are used by the harness, which does not have access to the new defines. Move them to harnessTest.intern.h to indicate that the tests should no longer use them.
Replace all instances of strNew("") with strNew() and use strNewZ() for non-empty zero-terminated strings. Besides saving a useless parameter, this will allow smarter memory allocation in a future commit by signaling intent, in general, to append or not.
In the tests use STRDEF() or VARSTRDEF() where more appropriate rather than blindly replacing with strNewZ(). Also replace strLstAdd() with strLstAddZ() where appropriate for the same reason.
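A few illustrative calls showing the intended distinction (sketch only; see the String module for the exact signatures):

// Empty String that is expected to be appended to later
String *message = strNew();
strCatZ(message, "backup complete");

// New String copied from a zero-terminated constant
String *label = strNewZ("full");

// Constant String used in place -- no allocation or free required
const String *stanza = STRDEF("demo");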
Run the local process inside a forked child process instead of exec'ing it. This allows coverage to accumulate in the local process rather than needing to test the local protocol functions directly, resulting in better end-to-end testing and less test duplication. Another advantage is that the pgbackrest binary does not need to be built for the test.
The backup, restore, and verify command tests have been updated to use the new shim for coverage.
A shim allows a test harness to access static functions and variables in a C module, and also allows functions to be shimmed (i.e. overridden) for the purposes of testing.
For instance, coverage testing works when a process that is normally exec'd is run as a forked child process instead.
getopt_long() requires an exhaustive list of all possible options that may be found on the command line. Because of the way options are indexed (e.g. repo1-4, pg1-8), optionList[] has 827 entries even though the maximum indexes have been curtailed severely to keep it small. Another issue is that getopt_long() scans the array sequentially so parsing gets slower as the index maximums increase.
Replace getopt_long() with a custom implementation that behaves the same but allows options to be parsed with a function instead of using optionList[]. This commit leaves the list in place in order to focus on the getopt_long() replacement, but cfgParseOption() could be replaced with a more efficient implementation that removes the need for optionList[].
This implementation also fixes an issue where invalid options were misreported in the error message if they only had one dash, e.g. -config. This seems to have been some kind of problem in getopt_long(), but no investigation was done since the new implementation fixes it.
Tests were added at 0825428, 2b8d2da, 34dd663, and 384f247 to check that previously untested getopt_long() behavior doesn't change.
This makes the macro useful when subpaths are present.
Identify types other than files (path, link, etc.) with a single appended character for easier debugging.
Remove stanza archive spool path so existing files do not interfere with the new cluster. For instance, old archive-push acknowledgements could cause a new cluster to skip archiving. This should not happen if a new timeline is selected but better to be safe. Missing stanza spool paths are ignored.
Also add new path expression STORAGE_SPOOL_ARCHIVE to easily access this path.
When running on a GCE instance the authentication token can be pulled directly from the instance metadata. This is configured with repo-gcs-key-type=auto.
In a separate commit (26fefa6), move the code that parses the token response into a separate function, storageGcsAuthToken(), since it is now needed by two key types. This drastically improves the readability of the main commit.
When running outside of our standard Vagrantfile the default will not be set correctly, so require the user to set it.
In any case, this option is primarily useful for reporting so note that in the command line help.
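An illustrative configuration for a GCE instance using automatic authentication (the repository path and bucket name are examples only):

[global]
repo1-type=gcs
repo1-path=/repo
repo1-gcs-bucket=example-bucket
repo1-gcs-key-type=auto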
Some version interface test functions were integrated into the core code because they relied on the PostgreSQL versioned interface. Even though they were compiled out for production builds they cluttered the core code and made it harder to determine what was required by core.
Create a PostgreSQL version interface in a test harness to contain these functions. This does require some duplication but the cleaner core code seems a good tradeoff. It is possible for some of this code to be auto-generated but since it is only updated once per year the matter is not pressing.
If an ok file (which indicates the WAL segment was not found) is present on the first iteration of the loop then remove it and spawn the async process to retry. This action also resets the queue.
Also error if no response is received from the async process rather than returning not found. PostgreSQL will respond the same either way, but this allows us to determine when something is going wrong with the async process.
Update archiveAsyncStatus() to allow warnings to be suppressed. It is better to retry if no WAL segment was found before warning because the warning might be stale.
If an option name has a space at the beginning then it will be considered an invalid command, but a space at the end is an invalid option. Add tests for these conditions.
Spaces in option arguments should be preserved, so add a test to be sure this is true.
Convert most of the remaining option values that benefit from being StringIds. Since all the command modules can include config.h directly, it makes sense to auto-generate these values instead of manually creating an enum for each one.
For the time being StringIds are not being auto-generated because the StringId code does not exist in Perl. However, the *_Z zero-terminated constants for each allowed option value are now auto-generated.
The CentOS 7 documentation test relies on PostgreSQL 9.5 which has been removed from the yum.p.o repository package. Switch the test to CentOS 8 to fix the immediate issue, but a decision on the PostgreSQL 9.5 documentation will need to be made before the next release.
The tests worked fine on multiple architectures, but would only run "bare metal", i.e. tests that required containers could not be run.
Enable basic multi-architecture support by allowing containers to be built using whatever architecture the host supports. Also allow cached containers to be defined for multiple architectures in container.yaml.
Add a Dockerfile which can be used as a container for other containers to provide a consistent development environment.
The primary goal is to allow development on Mac M1 but other architectures should find these improvements useful.
Allows removal of backupType()/backupTypeStr() and improves debug logging of the enum.
Move BackupType enum and string constants to info/infoBackup.h so they are available to more modules. Also convert InfoBackup to use BackupType instead of a String.
Centralize the formatting of the configuration value for display to the user or passing on a command line.
For the new functions, if the value was set by the user via the command line, config, etc., then that exact value will be displayed. This makes it easier for the user to recognize the value and saves having to format it into something reasonable, especially for time and size option types.
Note that cfgOptTypeHash and cfgOptTypeList option types are not supported by these functions, but they are generally not displayed to the user as a whole.
This also fixes a bug in config/load.c where time values were not being formatted correctly in an error message.
Use StringIds for the storage types (e.g. STORAGE_S3_TYPE) and configuration settings, e.g. cfgOptS3KeyType.
Also add new config functions and harness config functions to support StringIds.
There is no need to write the file atomically (e.g. via a temp file on Posix) because checksums are tested on resume after a failed backup. The path does not need to be synced for each file because all paths are synced at the end of the backup.
This functionality was not lost during the migration -- it never existed in the Perl code, though these settings are used in restore. See 59f1353 where backupFile() was migrated to C.
Fix the segfault when help is requested for an internal option by adding help for all internal options that are valid for a default command role.
Also print warnings about internal options in code rather than putting in each command/option description.
The enum truncation observed was due to the value getting passed via a protocol function which silently narrowed the enum.
Even so, add some tests to ensure tested platforms support 64-bit enums.
Although kvAdd() works like kvPut() on the first call, kvPut() is more efficient when a key has a single value.
Update the comment to clarify that kvAdd() is seldom required.
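An illustrative contrast (sketch; see the KeyValue module for the exact signatures):

// Build a KeyValue store (illustrative keys/values)
KeyValue *kv = kvNew();

// kvPut() replaces any existing value, so it is preferred for single-valued keys
kvPut(kv, VARSTRDEF("type"), VARSTRDEF("full"));

// kvAdd() is only needed when a key may accumulate multiple values --
// the second call for a key converts its value to a list
kvAdd(kv, VARSTRDEF("db"), VARSTRDEF("postgres"));
kvAdd(kv, VARSTRDEF("db"), VARSTRDEF("template1"));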
Getting help for a valid option that was invalid for the command would segfault.
Add a check to ensure the option is valid for the command's default role.
This lets the compiler know that these variables are not modified which should lead to better optimization.
Smart compilers should be able to figure this out on their own, but marking parameters const is still good for documentation.
It is often useful to represent identifiers as strings when they cannot easily be represented as an enum/integer, e.g. because they are distributed among a number of unrelated modules or need to be passed to remote processes. Strings are also more helpful in debugging since they can be recognized without cross-referencing the source. However, strings are awkward to work with in C since they cannot be directly used in switch statements leading to less efficient if-else structures.
A StringId encodes a short string into an integer so it can be used in switch statements but may also be readily converted back into a string for debugging purposes. StringIds may also be suitable for matching user input provided the strings are short enough.
This patch includes a sample of StringId usage by converting protocol commands to StringIds. There are many other possible use cases. To list a few:
* All "types" in storage, filters. IO , etc. These types are primarily for identification and debugging so they fit well with this model.
* MemContext names would work well as StringIds since these are entirely for debugging.
* Option values could be represented as StringIds which would mean we could remove the functions that convert strings to enums, e.g. CipherType.
* There are a number of places where enums need to be converted back to strings for logging/debugging purposes. An example is protocolParallelJobToConstZ. If ProtocolParallelJobState were defined as:
typedef enum
{
    protocolParallelJobStatePending = STRID5("pend", ...),
    protocolParallelJobStateRunning = STRID5("run", ...),
    protocolParallelJobStateDone = STRID5("done", ...),
} ProtocolParallelJobState;
then protocolParallelJobToConstZ() could be replaced with strIdToZ(). This also applies to many enums that we don't convert to strings for logging, such as CipherMode.
As an example of usage, convert all protocol commands from strings to StringIds.
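To illustrate the general idea (this sketch is not the project's actual encoding), a short lowercase string can be packed into a 64-bit integer at five bits per character. Because the real STRID5() macro produces a compile-time constant, the resulting values can be used directly in enum definitions and switch cases.

#include <stdint.h>
#include <string.h>

// Pack up to 12 lowercase characters into a uint64_t (a = 1 .. z = 26), five bits
// per character. Illustrative only -- the real encoding differs in detail.
static uint64_t
stringIdSketch(const char *const string)
{
    uint64_t result = 0;
    const size_t size = strlen(string);

    for (size_t charIdx = 0; charIdx < size && charIdx < 12; charIdx++)
        result |= (uint64_t)(string[charIdx] - 'a' + 1) << (charIdx * 5);

    return result;
}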
Restore excluding the specified databases. Databases excluded will be restored as sparse, zeroed files to save space but still allow PostgreSQL to perform recovery. After recovery, those databases will not be accessible but can be removed with the drop database command. The --db-exclude option can be passed multiple times to specify more than one database to exclude.
When used in combination with the --db-include option, --db-exclude will only apply to standard system databases (template0, template1, and postgres).
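For example (illustrative stanza and database names):

pgbackrest --stanza=demo --db-exclude=db1 --db-exclude=db2 restore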
This function has not been used since the switch to the fork/exec model.
lockClear() was still used in one test (other than the lock test) so update the test and remove the function.
Both NDEBUG and DEBUG were used in the code, which was a bit confusing.
Define DEBUG in build.auto.c so it is available in all C and header files and stop using NDEBUG. This is preferable to using NDEBUG everywhere since there are multiple DEBUG* defines, e.g. DEBUG_COVERAGE.
Note that NDEBUG is still required since it is used by the C libraries.
Inline functions are more efficient and if they are not used are automatically omitted from the binary.
This also makes the implementation of these functions easier to find and removes the need for a declaration. That is, the complete implementation is located in the header rather than being spread between the header and C file.
OBJECT_DEFINE_MOVE() and OBJECT_DEFINE_FREE() will be replaced with inlines so this would be the only macro left that is constructing functions.
It is not a great pattern anyway since it makes it hard to find the function implementation.
This macro was originally intended to simplify the creation of simple getters but it has been superseded by the pattern introduced in 79a2d02c.
Remove instances of OBJECT_DEFINE_GET() to avoid confusion with the new pattern.
Introduce a standard pattern for exposing public struct members (as documented in CODING.md) and use it to inline lstSize() which should improve the performance of iterating large lists.
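A hedged sketch of the general pattern using hypothetical names (the real convention is documented in CODING.md):

// Public part of the object, declared in the header so getters can be inlined
typedef struct ListExamplePub
{
    unsigned int size;                                            // Number of items in the list
} ListExamplePub;

typedef struct ListExample ListExample;                           // Full struct stays opaque

// Inline getter reads the public member directly, avoiding a function call per iteration.
// The full struct definition in the C file places the Pub struct as its first member.
static inline unsigned int
lstExampleSize(const ListExample *const this)
{
    return ((const ListExamplePub *)this)->size;
}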
Since many functions in these modules are just thin wrappers of other functions, inline where appropriate.
Remove strLstExistsZ() and strLstInsertZ() since they were only used in tests, where the String version of the function is sufficient.
Move strLstNewSplitSizeZ() to command/help/help.c and remove strLstNewSplitSize(). This function has only ever been used by help and does not seem widely applicable.
Bug Fixes:
* Fix option warnings breaking async archive-get/archive-push. (Reviewed by Cynthia Shang. Reported by Lev Kokotov.)
* Fix memory leak in backup during archive copy. (Reviewed by Cynthia Shang. Reported by Christian ROUX, Efremov Egor.)
* Fix stack overflow in cipher passphrase generation. (Reviewed by Cynthia Shang. Reported by bsiara.)
* Fix repo-ls / on S3 repositories. (Reviewed by Cynthia Shang. Reported by Lesovsky Alexey.)
Features:
* Multiple repository support. (Contributed by Cynthia Shang, David Steele. Reviewed by Stefan Fercot, Stephen Frost.)
* GCS support for repository storage. (Reviewed by Cynthia Shang.)
* Add archive-header-check option. (Reviewed by Stephen Frost, Cynthia Shang. Suggested by Hans-Jürgen Schönig.)
Improvements:
* Include recreated system databases during selective restore. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Exclude content-length from S3 signed headers. (Reviewed by Cynthia Shang. Suggested by Brian P Bockelman.)
* Consolidate less commonly used repository storage options. (Reviewed by Cynthia Shang.)
* Allow custom config-path default with ./configure --with-configdir. (Contributed by Michael Schout. Reviewed by David Steele.)
* Log archive copy during backup. (Reviewed by Cynthia Shang, Stefan Fercot.)
Documentation Improvements:
* Update reference to include links to user guide examples. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Update selective restore documentation with caveats. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-type clarification to archive-copy documentation. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-level defaults per compress-type value. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add note about required NFS settings being the same as PostgreSQL. (Contributed by Cynthia Shang. Reviewed by David Steele.)
hrnReplaceKey() was added to the TEST_ERROR*() macros in 58760486 but some calls to TEST_ERROR*() already used it. This led to the function being called twice on the same buffer, which had no effect but which valgrind definitely did not like.
Remove extraneous calls to make valgrind happy. Since this is test code there are no implications for production.
The command-example and command-example-list elements were removed from the documentation rendering some time ago so these tags were dead code. The tags, however, contained some examples and information that were pertinent to the command, so where possible, the information was included in the description of the command and/or the user-guide and links to the relevant user guide sections were added.
Note that some commands could not be updated with user guide references since doing so would cause a cyclical reference in the user guide. These commands have an internal comment to indicate this.
In addition, some clarifications were added (e.g. expire --set option) where information was lacking.
Enabled by default, this option checks the WAL header against the PostgreSQL version and system identifier to ensure that the WAL is being copied to the correct stanza. This is in addition to checking pg_control against the stanza and verifying that WAL is being copied from the same PostgreSQL data directory where pg_control is located.
Therefore, disabling this check is fairly safe but should only be done when required, e.g. if the WAL is encrypted.
3b8f0ef missed some cases that could cause archive-push to fail:
* Checking archive info.
* Checking to see if a WAL segment already exists.
These cases are now handled so archive-push can succeed on any valid repos.
This improvement reduces the number of errors thrown; these errors will now be reported as a status for the stanza or repo as appropriate. Invalid option configurations are still thrown but all other errors are caught, formatted and reported. This was necessary for multiple repositories so that the command can complete gathering information from each repository and report the results rather than immediately aborting when an error occurs.
Two new error codes were introduced:
6 = requested backup not found
99 = other, which is used to indicate an error has occurred that requires more details to be provided
A new stanza name of "[invalid]" was created for instances where a stanza was not specified and no stanza can be found.
If there is only one repository configured the error will move up to the stanza level with the standard error formatting of 'error (message)' where the message will be "other" and the details of the error will be listed on the next line(s):
stanza: stanza1
    status: error (other)
            [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
            CryptoError: cipher header invalid
            HINT: is or was the repo encrypted?
            FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
            HINT: backup.info cannot be opened and is required to perform a backup.
            HINT: has a stanza-create been performed?
            HINT: use option --stanza if encryption settings are different for the stanza than the global
    cipher: aes-256-cbc
If a backup set is requested but is not found on any repo, a stanza-level status error of 'requested backup not found' is reported when there are no other errors:
pgbackrest info --stanza=demo --set=bogus
stanza: demo
    status: error (requested backup not found)
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none
If there are multiple repositories configured and a single repo is in error but the other repos are ok or have a different error:
pgbackrest info --stanza=demo --set=20210322-171211F
stanza: demo
    status: mixed
        repo1: error
               [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
               CryptoError: cipher header invalid
               HINT: is or was the repo encrypted?
               FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
               HINT: backup.info cannot be opened and is required to perform a backup.
               HINT: has a stanza-create been performed?
               HINT: use option --stanza if encryption settings are different for the stanza than the global
        repo2: ok
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none
    db (current)
        wal archive min/max (12): 000000010000000000000001/000000010000000000000003
        full backup: 20210322-171211F
            timestamp start/stop: 2021-03-22 17:12:11 / 2021-03-22 17:12:28
            wal start/stop: 000000010000000000000002 / 000000010000000000000002
            database size: 23.4MB, database backup size: 23.4MB
            repo2: backup set size: 2.8MB, backup size: 2.8MB
            database list: postgres (13359)
JSON output will include the repository information and any error information. If no stanzas are found, then [invalid] will be set as the name:
[
    {
        "archive":[],
        "backup":[],
        "cipher":"none",
        "db":[],
        "name":"[invalid]",
        "repo":[
            {
                "cipher":"none",
                "key":1,
                "status":{
                    "code":99,
                    "message":"[PathOpenError] unable to list file info for path '/var/lib/pgbackrest/repo2/backup': [13] Permission denied"
                }
            }
        ],
        "status":{
            "code":99,
            "lock":{"backup":{"held":false}},
            "message":"other"
        }
    }
]
The content-length header was being signed since it was the only header that didn't need to be and it seemed simpler just to sign it as well. Also, the S3 documentation encourages signing as many headers as possible to avoid tampering.
However, some proxies munge this header causing authentication failure, so skip signing content-length.
Make protocol handlers have one function per command. This allows the logic of finding the handler to be in ProtocolServer, isolates each command to a function, and removes the need to test the "not found" condition for each handler.
S3 returns 200 for HEAD /, which indicates it is a file, but does not return the expected headers, which causes an error.
Rather than fix this for S3, just automatically return / as not existing for any storage that does not support paths.
Also add some defensive checks to prevent this from generating a segfault if it happens again.
Some standard system databases (e.g. postgres) may be recreated by the user and have an OID that makes them look like user databases.
Identify the standard three system databases (template0, template1, postgres) and restore them non-zeroed no matter what OID they have.
Cipher type was inferred from the presence of cipherSubPass rather than being passed explicitly in order to maintain compatibility with Perl backupFile().
Now that Perl is gone it makes sense to pass it explicitly, as we do elsewhere.
This test was added to take the place of another test, which turned out not to be workable.
Even so, it adds coverage at little cost so it seems worth keeping.
When the FUNCTION_*_RESULT*() macros were renamed to FUNCTION_*_RETURN_*() in the core code the test harness macros were missed.
Update them to make the naming consistent.
The stanza-create, stanza-upgrade and stanza-delete commands were required to be run on the repository host. When there was only one repository allowed this was not a problem.
However, with the introduction of multiple repository support, this becomes more of a burden to the user, so the stanza-create, stanza-upgrade and stanza-delete commands have been improved to allow them to be run remotely.
Moving to YAML allows the configuration data to be read by C programs.
Also go back to using YAML::XS since it is the only implementation that has proper boolean support.
Up to four repositories may be configured. A potential benefit is the ability to have a local repository for fast restores and a remote repository for redundancy.
Some commands, e.g. stanza-create/stanza-upgrade, will automatically work with all configured repositories while others, e.g. stanza-delete, will require a repository to be specified using the repo option. See the command reference for details on which commands require the repository to be specified.
Note that the repo option is not required when only repo1 is configured in order to maintain backward compatibility. However, the repo option is required when a single repo is configured as, e.g. repo2. This is to prevent command breakage if a new repository is added later.
The archive-push command will always push WAL to the archive in all configured repositories but backups will need to be scheduled individually for each repository. In many cases this is desirable since backup types and retention will vary by repository. Likewise, restores must specify a repository. It is generally better to specify a repository for restores that has low latency/cost even if that means more recovery time. Only restore testing can determine which repository will be most efficient.
For single repository configurations there should be no change in behavior.
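An illustrative configuration with a local repository plus an S3 repository for redundancy (the paths, bucket, endpoint, and region are examples only):

[global]
repo1-path=/var/lib/pgbackrest
repo2-type=s3
repo2-path=/demo-repo
repo2-s3-bucket=demo-bucket
repo2-s3-endpoint=s3.us-east-1.amazonaws.com
repo2-s3-region=us-east-1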
Some commands (repo-*, verify) still required the --repo option but it makes sense to give them the same treatment as backup and simply use the first repo when one is not specified.
This leaves stanza-delete as the only remaining command that requires --repo. This is by design to enhance safe usage.
The following options are renamed as specified:
repo1-azure-ca-file -> repo1-storage-ca-file
repo1-azure-ca-path -> repo1-storage-ca-path
repo1-azure-host -> repo1-storage-host
repo1-azure-port -> repo1-storage-port
repo1-azure-verify-tls -> repo1-storage-verify-tls
repo1-s3-ca-file -> repo1-storage-ca-file
repo1-s3-ca-path -> repo1-storage-ca-path
repo1-s3-host -> repo1-storage-host
repo1-s3-port -> repo1-storage-port
repo1-s3-verify-tls -> repo1-storage-verify-tls
The old option names (e.g. repo1-s3-port) will continue to work for repo1, but repo2, etc. will require the new names.
The archive-push command will continue to push even after it gets a write error on one or more repos. The idea is to archive to as many repos as possible, even if we still need to throw an error to PostgreSQL to prevent it from removing the WAL file.
The real/all test could fill the ramdisk depending on which vm and pg version were selected.
Debug level should be fine for most purposes and the level can be increased when needed.
The restore command automatically defaults to selecting the latest backup from a single repository. With multiple repositories configured, the restore command will now default to selecting the latest backup from the first repository where backups exist. The order in which the repositories are checked is dictated by the pgbackrest.conf order.
To select from a specific repository, the --repo option can be passed (e.g. --repo=1). The --set option can be passed if a backup other than the latest is desired.
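For example, to restore a specific backup set from the second repository (illustrative values):

pgbackrest --stanza=demo --repo=2 --set=20210322-171211F restore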
Repositories will be searched in order for the requested archive file.
Errors will be reported as warnings as long as a valid copy of the archive file is found.
Errors are logged to the log file rather than thrown. If, after processing all repos, one or more errors occurred, then a single error will be thrown to indicate there were errors and the log file should be inspected.
Also update log messages to be more consistent with new patterns.
This is more efficient and the error case can be an assert rather than a runtime error.
For extra safety initialize destinationSize to SIZE_MAX to increase the chances of an error if the switch fails.
There is not enough code here to justify multiple files and declaring the functions for each encoding as static allows the compiler to inline where appropriate.
These constructors wrap encodeToStr() and decodeToBin(), making them convenient and safe by eliminating the need to create intermediate buffers. Encoding/decoding is performed directly into the target String/Buffer. Sizing of the destination buffer is handled by the new functions so it doesn't have to be done at each call site.
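For example (a sketch assuming the constructor names strNewEncode()/bufNewDecode() and the encodeBase64 type; the actual identifiers may differ):

// Buffer to encode (example data)
const Buffer *data = BUFSTRDEF("example data");

// Encode the Buffer directly into a new base64 String
String *encoded = strNewEncode(encodeBase64, data);

// Decode the base64 String directly into a new Buffer
Buffer *decoded = bufNewDecode(encodeBase64, encoded);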
If the second letter is a capital or a digit then the word is likely an acronym, so don't lower-case the first letter.
For now only the digit case is checked since there are no summaries with a capital as the second letter.
GCS requires mixed encoding in the path so encoding inside HttpRequest does not work.
Instead, require the path to be correctly encoded before being passed to HttpRequest.
The path was originally named uri due to the canonicalized path being called "canonicalized uri" in the S3 authentication documentation. The name got propagated everywhere from there.
This is not correct for general usage, however, so rename to path when describing the path component of an HTTP request.
ASCII may occasionally be encoded (e.g. &) to prevent ambiguity depending on where the JSON is located.
Only ASCII can be decoded. In general Unicode should not be encoded in JSON.
Option warnings will cause the async process to fail because a warning is logged but stdout is closed, so the process aborts.
This bug has existed for quite some time, but it was made worse by abb8ebe because now the async role can have different valid options than the default role. Previously at least a warning would be emitted before the async process died.
Fix this by only allowing warnings for the default role. Warnings were already suppressed for local and remote roles so the logic already exists.
These tests were broken because they were being gated by resetLogLevel, so they were not setting the log levels, but not because of the role setting. Because resetLogLevel was checked last, coverage testing indicated that the tests were working.
Fix the resetLogLevel parameter in the tests and move resetLogLevel to be tested first so coverage reporting works as expected. This isn't perfect but it is an improvement.
The expire command has been enhanced to expire backups and archives from all configured repositories by default.
In addition, it will accept the --repo option to expire backups and archives only from the specified repository. With the --repo option, the --set option can also be refined further to the specified repo. If --set is provided but --repo is not, then all repositories will be searched and retention settings will be applied on each, whether the backup set has been found or not.
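For example, to expire a specific backup set from only the second repository (illustrative values):

pgbackrest --stanza=demo --repo=2 --set=20210322-171211F expire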
Bug Fixes:
* Fix resume after partial delete of backup by prior resume. (Reviewed by Cynthia Shang. Reported by Tom Swartz.)
Features:
* Add repo-ls command. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add repo-get command. (Contributed by Stefan Fercot, David Steele. Reviewed by Cynthia Shang.)
* Add archive-mode-check option. (Contributed by Stefan Fercot. Reviewed by David Steele, Michael Banck.)
Improvements:
* Improve archive-get performance. (Reviewed by Cynthia Shang.)
The stackTrace and memContext error handlers were hard-coded which made testing the error module in isolation impossible.
Making the error handlers configurable also makes adding new ones in the future easier.
This is useful for initialization that needs to be done for the test and all subsequent tests.
Use the new defines to implement initialization for sockets and statistics.
In preparation for multi-repo support, a repo tag is added in this commit to the expire command log and error messages. This change also affects the expect logs and the user-guide. The format of the tag is "repoX:" where X is the repo key used in the configuration.
Until multi-repo support has been completed, this tag will always be "repo1:".
When building tests only include files covered by the current test or by prior tests. This increases performance (less compilation and linking) and also helps detect cross-dependencies in the code. Since there are currently cross-dependencies the depend option is used to document them and allow compilation. The idea is to resolve them incrementally over time.
Add the harness option to include harness modules when the minimum requirements for compilation are met.
Add the feature option to indicate which features are now available in the harness (based on source modules already tested). This allows conditional compilation in harness modules when some features are not yet available.
This is required for coverage when the common/error module is run with just the source files required to make it run, rather than all source files as we do now.
Likely something in the harness is providing coverage, but cover it explicitly so the coverage won't be lost if the harness changes.
The original intention was to enclose complex code in braces but somehow braces got propagated almost everywhere.
Document the standard for braces in switch statements and update the code to reflect the standard.
At one time Minio had stability problems with latest but that appears to be resolved for the last year or so.
Use latest so we'll know if something breaks since Minio is frequently used in production.
This is phase 2 of verify command development (phase 1 was processing the archives and phase 3 will be reconciling the archives and backups). In this phase the backups are verified by verifying each file listed in the manifest for the backup and creating a result set with the list of invalid files, if any. A summary is then rendered.
Unit tests have been added and duplicate tests have been removed.
The info command provides total sizes for the files in the backup, both in the database and in the repository. The text output and associated user documentation have been updated to provide more clarity regarding the sizes being displayed.
In addition, the info command is updated to allow a user to optionally specify the repository when requesting a specific backup set. In this case, the text output will reflect the status of the stanza, the cipher types and archive min/max over all the repositories instead of a single repository when the repo option is specified.
The unit test Makefile generation was a hodge-podge of constants and rules based on distros/versions that easily got out of date and did not work on an unknown system. All of this dates from the mixed Perl/C unit test implementation.
Instead use configure to generate most of the important Makefile variables, which allows the unit tests to run on multiple platforms, e.g. MacOS and FreeBSD.
There is plenty of work to be done here and not all the unit tests work on MacOS and FreeBSD for various reasons.
As a POC update the MacOS and FreeBSD tests on Cirrus-CI to run a few command unit tests.
MacOS does not allow files to be removed recursively unless the owner has write and execute permissions on all the directories.
Some tests leave the permissions in a bad state so fix them up before trying to delete.
The exact message is platform dependent so get the platform error to use in the expect.
It doesn't matter what the message is as long as there is an error and it is logged.
YAML::XS requires libyaml so it is not as portable as pure Perl versions of YAML.
Instead of using YAML::PP just use the general YAML::Any module, which uses whatever is installed. We are not concerned about performance for YAML so whatever works is fine.
Messages on stderr were being lost due to the error suppression used to customize the error message.
Also update the formatting to be more informative and concise.
MacOS has a very old version of rsync that does not support this option.
Rather than require a newer version of rsync exclude the option since the plan is to remove the requirement for it.
This is a more appropriate place for the check and means test.pl can avoid loading any XML files if --no-gen is specified.
The XML::Checker::Parser module originally selected for XML in Perl is not very portable so the requirement reduces the number of platforms where tests can be run.
Clang justifiably complains about pointer arithmetic on a known NULL value during testing. We know this is fine but use uintptr_t to silence the warnings.
Found on MacOS M1.
The return value is not checked because we are happy with a truncated result in this case, which is guaranteed by passing the buffer size.
Found on MacOS M1.