Comment formatting was not used much but it incurred a heavy cost in each macro to process possible formatting.
Remove formatted comments where they did not contain valuable information and replace with strZ(strNewFmt()) otherwise.
A define was already added for TEST_PATH but it was not widely used. Replace all occurrences of testPath() with TEST_PATH in the tests.
Replace testUser() with TEST_USER, testGroup() with TEST_GROUP, testRepoPath() with HRN_PATH_REPO, testDataPath() with HRN_PATH, testProjectExe() with TEST_PROJECT_EXE, and testScale() with TEST_SCALE.
Replace {[path]}, {[user]}, {[group]}, etc. with defines and remove hrnReplaceKey(). This is better than having two ways to deal with replacements.
In some cases the original test*() getters were kept because they are used by the harness, which does not have access to the new defines. Move them to harnessTest.intern.h to indicate that the tests should no longer use them.
Replace all instances of strNew("") with strNew() and use strNewZ() for non-empty zero-terminated strings. Besides saving a useless parameter, this will allow smarter memory allocation in a future commit by signaling intent, in general, to append or not.
In the tests use STRDEF() or VARSTRDEF() where more appropriate rather than blindly replacing with strNewZ(). Also replace strLstAdd() with strLstAddZ() where appropriate for the same reason.
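A minimal sketch of the intended usage after these changes (String/StringList calls as named above; shown as a fragment that would run inside a function with an active memory context):

#include "common/type/string.h"
#include "common/type/stringList.h"

// Empty String intended for appending (was strNew(""))
String *message = strNew();

// Non-empty, zero-terminated constant (was strNew("full"))
String *type = strNewZ("full");

// In tests, a constant String is often enough and avoids an allocation
const String *path = STRDEF("/var/lib/pgbackrest");

// Add zero-terminated constants to lists directly
StringList *commandList = strLstNew();
strLstAddZ(commandList, "archive-push");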
Run the local process inside a forked child process instead of exec'ing it. This allows coverage to accumulate in the local process rather than needing to test the local protocol functions directly, resulting in better end-to-end testing and less test duplication. Another advantage is that the pgbackrest binary does not need to be built for the test.
The backup, restore, and verify command tests have been updated to use the new shim for coverage.
A shim allows a test harness to access static functions and variables in a C module, and also allows functions to be shimmed (i.e. overridden) for the purposes of testing.
For instance, coverage testing works when a process that is normally exec'd is run as a forked child process instead.
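An illustrative sketch of the shim idea only (file and function names here are hypothetical, not the actual harness mechanism): including a module's source into a harness file exposes its static functions and variables, and a macro rename lets a function be overridden for tests.

// harnessExample.c (hypothetical)
#include <stdbool.h>

// Rename the function to be shimmed, then pull in the module source so its static
// functions and variables are visible to the harness
#define moduleDoWork moduleDoWork_ORIGINAL
#include "module/module.c"
#undef moduleDoWork

static bool shimInstalled = false;

// The shim: tests get this version, which may override or fall through to the original
int
moduleDoWork(const int param)
{
    if (shimInstalled)
        return 0;

    return moduleDoWork_ORIGINAL(param);
}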
getopt_long() requires an exhaustive list of all possible options that may be found on the command line. Because of the way options are indexed (e.g. repo1-4, pg1-8), optionList[] already has 827 entries, and that is only because we have kept it as small as possible by curtailing the maximum indexes very severely. Another issue is that getopt_long() scans the array sequentially, so parsing gets slower as the index maximums increase.
Replace getopt_long() with a custom implementation that behaves the same but allows options to be parsed with a function instead of using optionList[]. This commit leaves the list in place in order to focus on the getopt_long() replacement, but cfgParseOption() could be replaced with a more efficient implementation that removes the need for optionList[].
This implementation also fixes an issue where invalid options were misreported in the error message if they only had one dash, e.g. -config. This seems to have been some kind of problem in getopt_long(), but no investigation was done since the new implementation fixes it.
Tests were added at 0825428, 2b8d2da, 34dd663, and 384f247 to check that previously untested getopt_long() behavior doesn't change.
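Illustrative only (not the actual cfgParseOption() implementation): the replacement classifies each argument itself, so an option name can be resolved by a function rather than by scanning a static optionList[], and a single-dash option such as -config is reported accurately.

#include <stdio.h>
#include <string.h>

int
main(int argc, char *argv[])
{
    for (int argIdx = 1; argIdx < argc; argIdx++)
    {
        const char *arg = argv[argIdx];

        // Options must begin with exactly two dashes
        if (strncmp(arg, "--", 2) == 0)
        {
            // Split --option=value; a parse function would resolve the name here
            const char *equal = strchr(arg + 2, '=');
            size_t nameSize = equal == NULL ? strlen(arg + 2) : (size_t)(equal - (arg + 2));

            printf("option '%.*s' value '%s'\n", (int)nameSize, arg + 2, equal == NULL ? "" : equal + 1);
        }
        // A single dash is reported as an invalid option rather than being mangled
        else if (arg[0] == '-' && arg[1] != '\0')
            fprintf(stderr, "invalid option '%s'\n", arg);
        // Anything else is a command or parameter
        else
            printf("parameter '%s'\n", arg);
    }

    return 0;
}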
This makes the macro useful when subpaths are present.
Identify types other than files (path, link, etc.) with a single appended character for easier debugging.
Remove stanza archive spool path so existing files do not interfere with the new cluster. For instance, old archive-push acknowledgements could cause a new cluster to skip archiving. This should not happen if a new timeline is selected but better to be safe. Missing stanza spool paths are ignored.
Also add new path expression STORAGE_SPOOL_ARCHIVE to easily access this path.
When running on a GCE instance the authentication token can be pulled directly from the instance metadata. This is configured with repo-gcs-key-type=auto.
In a separate commit (26fefa6), move the code that parses the token response into a separate function, storageGcsAuthToken(), since it is now needed by two key types. This drastically improves the readability of the main commit.
When running outside of our standard Vagrantfile the default will not be set correctly, so require the user to set it.
In any case, this option is primarily useful for reporting so note that in the command line help.
Some version interface test functions were integrated into the core code because they relied on the PostgreSQL versioned interface. Even though they were compiled out for production builds they cluttered the core code and made it harder to determine what was required by core.
Create a PostgreSQL version interface in a test harness to contain these functions. This does require some duplication but the cleaner core code seems a good tradeoff. It is possible for some of this code to be auto-generated but since it is only updated once per year the matter is not pressing.
If an ok file (which indicates the WAL segment was not found) is present on the first iteration of the loop then remove it and spawn the async process to retry. This action also resets the queue.
Also error if no response is received from the async process rather than returning not found. PostgreSQL will respond the same either way, but this allows us to determine when something is going wrong with the async process.
Update archiveAsyncStatus() to allow warnings to be suppressed. It is better to retry if no WAL segment was found before warning because the warning might be stale.
If an option name has a space at the beginning then it will be considered an invalid command, but a space at the end is an invalid option. Add tests for these conditions.
Spaces in option arguments should be preserved, so add a test to be sure this is true.
Convert most of the remaining options that benefit from being StringIds. Since all the command modules can include config.h directly it makes sense to auto-generate these values instead of manually creating an enum for each one.
For the time being StringIds are not being auto-generated because the StringId code does not exist in Perl. However, the *_Z zero-terminated constants for each allowed option value are now auto-generated.
The CentOS 7 documentation test relies on PostgreSQL 9.5 which has been removed from the yum.p.o repository package. Switch the test to CentOS 8 to fix the immediate issue, but a decision on the PostgreSQL 9.5 documentation will need to be made before the next release.
The tests worked fine on multiple architectures, but would only run "bare metal", i.e. tests that required containers could not be run.
Enable basic multi-architecture support by allowing containers to be built using whatever architecture the host supports. Also allow cached containers to be defined for multiple architectures in container.yaml.
Add a Dockerfile which can be used as a container for other containers to provide a consistent development environment.
The primary goal is to allow development on Mac M1 but other architectures should find these improvements useful.
Allows removal of backupType()/backupTypeStr() and improves debug logging of the enum.
Move BackupType enum and string constants to info/infoBackup.h so they are available to more modules. Also convert InfoBackup to use BackupType instead of a String.
Centralize the formatting of the configuration value for display to the user or passing on a command line.
For the new functions, if the value was set by the user via the command line, config, etc., then that exact value will be displayed. This makes it easier for the user to recognize the value and saves having to format it into something reasonable, especially for time and size option types.
Note that cfgOptTypeHash and cfgOptTypeList option types are not supported by these functions, but they are generally not displayed to the user as a whole.
This also fixes a bug in config/load.c where time values were not being formatted correctly in an error message.
Use StringIds for the storage types (e.g. STORAGE_S3_TYPE) and configuration settings, e.g. cfgOptS3KeyType.
Also add new config functions and harness config functions to support StringIds.
There is no need to write the file atomically (e.g. via a temp file on Posix) because checksums are tested on resume after a failed backup. The path does not need to be synced for each file because all paths are synced at the end of the backup.
This functionality was not lost during the migration -- it never existed in the Perl code, though these settings are used in restore. See 59f1353 where backupFile() was migrated to C.
Fix the segfault when help is requested for an internal option by adding help for all internal options that are valid for a default command role.
Also print warnings about internal options in code rather than putting them in each command/option description.
The enum truncation observed was due to the value getting passed via a protocol function which silently narrowed the enum.
Even so, add some tests to ensure tested platforms support 64-bit enums.
Although kvAdd() works like kvPut() on the first call, kvPut() is more efficient when a key has a single value.
Update the comment to clarify that kvAdd() is seldom required.
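For illustration (assuming the KeyValue/Variant macros used elsewhere in the tests; fragment run inside a function with an active memory context):

#include "common/type/keyValue.h"
#include "common/type/variant.h"

KeyValue *kv = kvNew();

// A key with a single value only needs kvPut()
kvPut(kv, VARSTRDEF("type"), VARSTRDEF("full"));

// kvAdd() is only required when a key accumulates multiple values
kvAdd(kv, VARSTRDEF("db"), VARSTRDEF("postgres"));
kvAdd(kv, VARSTRDEF("db"), VARSTRDEF("template1"));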
Getting help for a valid option that was invalid for the command would segfault.
Add a check to ensure the option is valid for the command's default role.
This lets the compiler know that these variables are not modified which should lead to better optimization.
Smart compilers should be able to figure this out on their own, but marking parameters const is still good for documentation.
It is often useful to represent identifiers as strings when they cannot easily be represented as an enum/integer, e.g. because they are distributed among a number of unrelated modules or need to be passed to remote processes. Strings are also more helpful in debugging since they can be recognized without cross-referencing the source. However, strings are awkward to work with in C since they cannot be directly used in switch statements leading to less efficient if-else structures.
A StringId encodes a short string into an integer so it can be used in switch statements but may also be readily converted back into a string for debugging purposes. StringIds may also be suitable for matching user input providing the strings are short enough.
This patch includes a sample of StringId usage by converting protocol commands to StringIds. There are many other possible use cases. To list a few:
* All "types" in storage, filters. IO , etc. These types are primarily for identification and debugging so they fit well with this model.
* MemContext names would work well as StringIds since these are entirely for debugging.
* Option values could be represented as StringIds which would mean we could remove the functions that convert strings to enums, e.g. CipherType.
* There are a number of places where enums need to be converted back to strings for logging/debugging purposes. An example is protocolParallelJobToConstZ. If ProtocolParallelJobState were defined as:
typedef enum
{
    protocolParallelJobStatePending = STRID5("pend", ...),
    protocolParallelJobStateRunning = STRID5("run", ...),
    protocolParallelJobStateDone = STRID5("done", ...),
} ProtocolParallelJobState;
then protocolParallelJobToConstZ() could be replaced with strIdToZ(). This also applies to many enums that we don't convert to strings for logging, such as CipherMode.
As an example of usage, convert all protocol commands from strings to StringIds.
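A minimal sketch of what the enum above enables (function name hypothetical): the state drives a switch directly while remaining readable in a debugger or log.

static bool
protocolParallelJobIsComplete(const ProtocolParallelJobState state)
{
    switch (state)
    {
        case protocolParallelJobStateDone:
            return true;

        default:
            return false;
    }
}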
Restore excluding the specified databases. Databases excluded will be restored as sparse, zeroed files to save space but still allow PostgreSQL to perform recovery. After recovery, those databases will not be accessible but can be removed with the drop database command. The --db-exclude option can be passed multiple times to specify more than one database to exclude.
When used in combination with the --db-include option, --db-exclude will only apply to standard system databases (template0, template1, and postgres).
This function has not been used since the switch to the fork/exec model.
lockClear() was still used in one test (other than the lock test) so update the test and remove the function.
Both NDEBUG and DEBUG were used in the code, which was a bit confusing.
Define DEBUG in build.auto.h so it is available in all C and header files and stop using NDEBUG. This is preferable to using NDEBUG everywhere since there are multiple DEBUG* defines, e.g. DEBUG_COVERAGE.
Note that NDEBUG is still required since it is used by the C libraries.
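A sketch of the arrangement, assuming the generated header derives DEBUG from the absence of NDEBUG:

/* build.auto.h (generated), sketch only: enable debug code unless NDEBUG was requested */
#ifndef NDEBUG
    #define DEBUG
#endif

/* In code, debug-only blocks test DEBUG rather than !NDEBUG */
#ifdef DEBUG
    /* extra checks, test points, etc. */
#endif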
Inline functions are more efficient and if they are not used are automatically omitted from the binary.
This also makes the implementation of these functions easier to find and removes the need for a declaration. That is, the complete implementation is located in the header rather than being spread between the header and C file.
OBJECT_DEFINE_MOVE() and OBJECT_DEFINE_FREE() will be replaced with inlines so this would be the only macro left that is constructing functions.
It is not a great pattern anyway since it makes it hard to find the function implementation.
This macro was originally intended to simplify the creation of simple getters but it has been superseded by the pattern introduced in 79a2d02c.
Remove instances of OBJECT_DEFINE_GET() to avoid confusion with the new pattern.
Introduce a standard pattern for exposing public struct members (as documented in CODING.md) and use it to inline lstSize() which should improve the performance of iterating large lists.
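A simplified sketch of the pattern (member and macro details reduced to the essentials): members that may be read directly are exposed in a "Pub" struct that is the first member of the private struct, so the getter can be inlined in the header.

typedef struct List List;

typedef struct ListPub
{
    unsigned int listSize;                                          // List size
} ListPub;

// List size
__attribute__((always_inline)) static inline unsigned int
lstSize(const List *const this)
{
    return ((const ListPub *)this)->listSize;
}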
Since many functions in these modules are just thin wrappers of other functions, inline where appropriate.
Remove strLstExistsZ() and strLstInsertZ() since they were only used in tests, where the String version of the function is sufficient.
Move strLstNewSplitSizeZ() to command/help/help.c and remove strLstNewSplitSize(). This function has only ever been used by help and does not seem widely applicable.
Bug Fixes:
* Fix option warnings breaking async archive-get/archive-push. (Reviewed by Cynthia Shang. Reported by Lev Kokotov.)
* Fix memory leak in backup during archive copy. (Reviewed by Cynthia Shang. Reported by Christian ROUX, Efremov Egor.)
* Fix stack overflow in cipher passphrase generation. (Reviewed by Cynthia Shang. Reported by bsiara.)
* Fix repo-ls / on S3 repositories. (Reviewed by Cynthia Shang. Reported by Lesovsky Alexey.)
Features:
* Multiple repository support. (Contributed by Cynthia Shang, David Steele. Reviewed by Stefan Fercot, Stephen Frost.)
* GCS support for repository storage. (Reviewed by Cynthia Shang.)
* Add archive-header-check option. (Reviewed by Stephen Frost, Cynthia Shang. Suggested by Hans-Jürgen Schönig.)
Improvements:
* Include recreated system databases during selective restore. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Exclude content-length from S3 signed headers. (Reviewed by Cynthia Shang. Suggested by Brian P Bockelman.)
* Consolidate less commonly used repository storage options. (Reviewed by Cynthia Shang.)
* Allow custom config-path default with ./configure --with-configdir. (Contributed by Michael Schout. Reviewed by David Steele.)
* Log archive copy during backup. (Reviewed by Cynthia Shang, Stefan Fercot.)
Documentation Improvements:
* Update reference to include links to user guide examples. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Update selective restore documentation with caveats. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-type clarification to archive-copy documentation. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-level defaults per compress-type value. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add note about required NFS settings being the same as PostgreSQL. (Contributed by Cynthia Shang. Reviewed by David Steele.)
hrnReplaceKey() was added to the TEST_ERROR*() macros in 58760486 but some calls to TEST_ERROR*() already used it. This led to the function being called twice on the same buffer, which had no effect but which valgrind definitely did not like.
Remove extraneous calls to make valgrind happy. Since this is test code there are no implications for production.
The command-example and command-example-list elements were removed from the documentation rendering some time ago so these tags were dead code. The tags, however, contained some examples and information that were pertinent to the command, so where possible, the information was included in the description of the command and/or the user-guide and links to the relevant user guide sections were added.
Note that some commands could not be updated with user guide references since doing so would cause a cyclical reference in the user guide. These commands have an internal comment to indicate this.
In addition, some clarifications were added (e.g. expire --set option) where information was lacking.
Enabled by default, this option checks the WAL header against the PostgreSQL version and system identifier to ensure that the WAL is being copied to the correct stanza. This is in addition to checking pg_control against the stanza and verifying that WAL is being copied from the same PostgreSQL data directory where pg_control is located.
Therefore, disabling this check is fairly safe but should only be done when required, e.g. if the WAL is encrypted.
3b8f0ef missed some cases that could cause archive-push to fail:
* Checking archive info.
* Checking to see if a WAL segment already exists.
These cases are now handled so archive-push can succeed on any valid repos.
This improvement reduces the number of errors thrown; these errors will now be reported as a status for the stanza or repo as appropriate. Invalid option configurations are still thrown but all other errors are caught, formatted and reported. This was necessary for multiple repositories so that the command can complete gathering information from each repository and report the results rather than immediately aborting when an error occurs.
Two new error codes were introduced:
6 = requested backup not found
99 = other, which is used to indicate an error has occurred that requires more details to be provided
A new stanza name of "[invalid]" was created for instances where a stanza was not specified and no stanza can be found.
If there is only one repository configured the error will move up to the stanza level with the standard error formatting of 'error (message)' where the message will be "other" and the details of the error will be listed on the next line(s):
stanza: stanza1
    status: error (other)
            [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
            CryptoError: cipher header invalid
            HINT: is or was the repo encrypted?
            FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
            HINT: backup.info cannot be opened and is required to perform a backup.
            HINT: has a stanza-create been performed?
            HINT: use option --stanza if encryption settings are different for the stanza than the global
    cipher: aes-256-cbc
If a backup set is requested but is not found on any repo, a stanza-level status error of 'requested backup not found' is reported when there are no other errors:
pgbackrest info --stanza=demo --set=bogus
stanza: demo
    status: error (requested backup not found)
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none
If there are multiple repositories configured and a single repo is in error but the other repos are ok or have a different error:
pgbackrest info --stanza=demo --set=20210322-171211F
stanza: demo
    status: mixed
        repo1: error
               [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
               CryptoError: cipher header invalid
               HINT: is or was the repo encrypted?
               FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
               HINT: backup.info cannot be opened and is required to perform a backup.
               HINT: has a stanza-create been performed?
               HINT: use option --stanza if encryption settings are different for the stanza than the global
        repo2: ok
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none

    db (current)
        wal archive min/max (12): 000000010000000000000001/000000010000000000000003

        full backup: 20210322-171211F
            timestamp start/stop: 2021-03-22 17:12:11 / 2021-03-22 17:12:28
            wal start/stop: 000000010000000000000002 / 000000010000000000000002
            database size: 23.4MB, database backup size: 23.4MB
            repo2: backup set size: 2.8MB, backup size: 2.8MB
            database list: postgres (13359)
Json output will include the repository information and any error information. If no stanzas are found, then [invalid] will be set as the name:
[
    {
        "archive":[],
        "backup":[],
        "cipher":"none",
        "db":[],
        "name":"[invalid]",
        "repo":[
            {
                "cipher":"none",
                "key":1,
                "status":{
                    "code":99,
                    "message":"[PathOpenError] unable to list file info for path '/var/lib/pgbackrest/repo2/backup': [13] Permission denied"
                }
            }
        ],
        "status":{
            "code":99,
            "lock":{"backup":{"held":false}},
            "message":"other"
        }
    }
]
The content-length header was being signed since it was the only header that didn't need to be and it seemed simpler just to sign it as well. Also, the S3 documentation encourages signing as many headers as possible to avoid tampering.
However, some proxies munge this header causing authentication failure, so skip signing content-length.
Make protocol handlers have one function per command. This allows the logic of finding the handler to be in ProtocolServer, isolates each command to a function, and removes the need to test the "not found" condition for each handler.
S3 returns 200 for HEAD / which indicates it is a file but does not return the expected headers which causes an error.
Rather than fix this for S3, just automatically return / as not existing for any storage that does not support paths.
Also add some defensive checks to prevent this from generating a segfault if it happens again.
Some standard system databases (e.g. postgres) may be recreated by the user and have an OID that makes them look like user databases.
Identify the standard three system databases (template0, template1, postgres) and restore them non-zeroed no matter what OID they have.
Cipher type was inferred from the presence of cipherSubPass rather than being passed explicitly in order to maintain compatibility with Perl backupFile().
Now that Perl is gone it makes sense to pass it explicitly, as we do elsewhere.
This test was added to take the place of another test, which turned out not to be workable.
Even so, it adds coverage at little cost so it seems worth keeping.
When the FUNCTION_*_RESULT*() macros were renamed to FUNCTION_*_RETURN_*() in the core code the test harness macros were missed.
Update them to make the naming consistent.
The stanza-create, stanza-upgrade, and stanza-delete commands were required to be run on the repository host. When there was only one repository allowed this was not a problem.
However, with the introduction of multiple repository support, this becomes more of a burden to the user, so the stanza-create, stanza-upgrade, and stanza-delete commands have been improved to allow them to be run remotely.
Moving to YAML allows the configuration data to be read by C programs.
Also go back to using YAML::XS since it is the only implementation that has proper boolean support.
Up to four repositories may be configured. A potential benefit is the ability to have a local repository for fast restores and a remote repository for redundancy.
Some commands, e.g. stanza-create/stanza-update, will automatically work with all configured repositories while others, e.g. stanza-delete, will require a repository to be specified using the repo option. See the command reference for details on which commands require the repository to be specified.
Note that the repo option is not required when only repo1 is configured in order to maintain backward compatibility. However, the repo option is required when a single repo is configured as, e.g. repo2. This is to prevent command breakage if a new repository is added later.
The archive-push command will always push WAL to the archive in all configured repositories but backups will need to be scheduled individually for each repository. In many cases this is desirable since backup types and retention will vary by repository. Likewise, restores must specify a repository. It is generally better to specify a repository for restores that has low latency/cost even if that means more recovery time. Only restore testing can determine which repository will be most efficient.
For single repository configurations there should be no change in behavior.
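For example, a configuration with a local and an offsite repository might look like this (all paths, bucket names, and endpoints are illustrative only; S3 credentials omitted):

[global]
repo1-path=/var/lib/pgbackrest

repo2-type=s3
repo2-path=/demo-repo
repo2-s3-bucket=demo-bucket
repo2-s3-endpoint=s3.us-east-1.amazonaws.com
repo2-s3-region=us-east-1

[demo]
pg1-path=/var/lib/postgresql/13/demo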
Some commands (repo-*, verify) still required the --repo option but it makes sense to give them the same treatment as backup and simply use the first repo when one is not specified.
This leaves stanza-delete as the only remaining command that requires --repo. This is by design to enhance safe usage.
The following options are renamed as specified:
repo1-azure-ca-file -> repo1-storage-ca-file
repo1-azure-ca-path -> repo1-storage-ca-path
repo1-azure-host -> repo1-storage-host
repo1-azure-port -> repo1-storage-port
repo1-azure-verify-tls -> repo1-storage-verify-tls
repo1-s3-ca-file -> repo1-storage-ca-file
repo1-s3-ca-path -> repo1-storage-ca-path
repo1-s3-host -> repo1-storage-host
repo1-s3-port -> repo1-storage-port
repo1-s3-verify-tls -> repo1-storage-verify-tls
The old option names (e.g. repo1-s3-port) will continue to work for repo1, but repo2, etc. will require the new names.
The archive-push command will continue to push even after it gets a write error on one or more repos. The idea is to archive to as many repos as possible even if we still need to throw an error to PostgreSQL to prevent it from removing the WAL file.
The real/all test could fill the ramdisk depending on which vm and pg version were selected.
Debug level should be fine for most purposes and the level can be increased when needed.
The restore command automatically defaults to selecting the latest backup from a single repository. With multiple repositories configured, the restore command will now default to selecting the latest backup from the first repository where backups exist. The order in which the repositories are checked is dictated by the pgbackrest.conf order.
To select from a specific repository, the --repo option can be passed (e.g. --repo=1). The --set option can be passed if a backup other than the latest is desired.
Repositories will be searched in order for the requested archive file.
Errors will be reported as warnings as long as a valid copy of the archive file is found.
Errors are logged to the log file rather than thrown. If, after processing all repos, one or more errors occurred, then a single error will be thrown to indicate there were errors and the log file should be inspected.
Also update log messages to be more consistent with new patterns.
This is more efficient and the error case can be an assert rather than a runtime error.
For extra safety initialize destinationSize to SIZE_MAX to increase the chances of an error if the switch fails.
There is not enough code here to justify multiple files and declaring the functions for each encoding as static allows the compiler to inline where appropriate.
These constructors wrap encodeToStr() and decodeToBin(), making them convenient and safe by eliminating the need to create intermediate buffers. Encoding/decoding is performed directly into the target String/Buffer. Sizing of the destination buffer is handled by the new functions so it doesn't have to be done at each call site.
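A sketch of the intended usage (constructor and encoding type names are assumed here, since they are not spelled out above):

// Encode a buffer directly into a new String and decode it back into a new Buffer
String *encoded = strNewEncode(encodeBase64, BUFSTRDEF("some data"));
Buffer *decoded = bufNewDecode(encodeBase64, encoded);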
If the second letter is capital or a digit then the word is likely an acronym so don't lower-case the first letter.
For now only the digit case is checked since there are no summaries with a capital as the second letter.
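An illustrative sketch of the heuristic (not the actual build code):

#include <ctype.h>

// Lower-case the first letter of a summary unless the word looks like an acronym;
// for now only the second-character-is-a-digit case is checked
static void
summaryFirstLower(char *const summary)
{
    if (summary[0] != '\0' && !isdigit((unsigned char)summary[1]))
        summary[0] = (char)tolower((unsigned char)summary[0]);
}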
GCS requires mixed encoding in the path so encoding inside HttpRequest does not work.
Instead, require the path to be correctly encoded before being passed to HttpRequest.
The path was originally named uri due to the canonicalized path being called "canonicalized uri" in the S3 authentication documentation. The name got propagated everywhere from there.
This is not correct for general usage, however, so rename to path when describing the path component of an HTTP request.
ASCII may occasionally be encoded (e.g. &) to prevent ambiguity depending on where the JSON is located.
Only ASCII can be decoded. In general Unicode should not be encoded in JSON.
Option warnings will cause the async process to fail because a warning is logged but stdout is closed so the process aborts.
This bug has existed for quite some time, but it was made worse by abb8ebe because now the async role can have different valid options than the default role. Previously at least a warning would be emitted before the async process died.
Fix this by only allowing warnings for the default role. Warnings were already suppressed for local and remote roles so the logic already exists.
These tests were broken because they were being gated by resetLogLevel, so they were not setting the log levels, though not because of the role setting they were meant to exercise. Because resetLogLevel was checked last, coverage testing indicated that the tests were working.
Fix the resetLogLevel parameter in the tests and move resetLogLevel to be tested first so coverage reporting works as expected. This isn't perfect but it is an improvement.
The expire command has been enhanced to expire backups and archives from all configured repositories by default.
In addition, it will accept the --repo option to expire backups and archives only from the specified repository. When --repo is provided, the --set option is also limited to the specified repo. If --set is provided but --repo is not, then all repositories will be searched and retention settings will be applied on each, whether the backup set has been found or not.
Bug Fixes:
* Fix resume after partial delete of backup by prior resume. (Reviewed by Cynthia Shang. Reported by Tom Swartz.)
Features:
* Add repo-ls command. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add repo-get command. (Contributed by Stefan Fercot, David Steele. Reviewed by Cynthia Shang.)
* Add archive-mode-check option. (Contributed by Stefan Fercot. Reviewed by David Steele, Michael Banck.)
Improvements:
* Improve archive-get performance. (Reviewed by Cynthia Shang.)
The stackTrace and memContext error handlers were hard-coded which made testing the error module in isolation impossible.
Making the error handlers configurable also makes adding new ones in the future easier.
This is useful for initialization that needs to be done for the test and all subsequent tests.
Use the new defines to implement initialization for sockets and statistics.
In preparation for multi-repo support, a repo tag is added in this commit to the expire command log and error messages. This change also affects the expect logs and the user-guide. The format of the tag is "repoX:" where X is the repo key used in the configuration.
Until multi-repo support has been completed, this tag will always be "repo1:".
When building tests only include files covered by the current test or by prior tests. This increases performance (less compilation and linking) and also helps detect cross-dependencies in the code. Since there are currently cross-dependencies the depend option is used to document them and allow compilation. The idea is to resolve them incrementally over time.
Add the harness option to include harness modules when the minimum requirements for compilation are met.
Add the feature option to indicate which features are now available in the harness (based on source modules already tested). This allows conditional compilation in harness modules when some features are not yet available.
This is required for coverage when the common/error module is run with just the source files required to make it run, rather than all source files as we do now.
Likely something in the harness is providing coverage, but cover it explicitly so the coverage won't be lost if the harness changes.
The original intention was to enclose complex code in braces but somehow braces got propagated almost everywhere.
Document the standard for braces in switch statements and update the code to reflect the standard.
At one time Minio had stability problems with latest but that appears to be resolved for the last year or so.
Use latest so we'll know if something breaks since Minio is frequently used in production.
This is phase 2 of verify command development (phase 1 was processing the archives and phase 3 will be reconciling the archives and backups). In this phase the backups are verified by verifying each file listed in the manifest for the backup and creating a result set with the list of invalid files, if any. A summary is then rendered.
Unit tests have been added and duplicate tests have been removed.
The info command provides total sizes for files in the backup on the database as well as the repository. The text output and associated user documentation has been updated to provide more clarity regarding the sizes being displayed.
In addition, the info command is updated to allow a user to optionally specify the repository when requesting a specific backup set. In this case, the text output will reflect the status of the stanza, the cipher types and archive min/max over all the repositories instead of a single repository when the repo option is specified.
The unit test Makefile generation was a hodge-podge of constants and rules based on distros/versions that easily got out of date and did not work on an unknown system. All of this dates from the mixed Perl/C unit test implementation.
Instead use configure to generate most of the important Makefile variables, which allows the unit tests to run on multiple platforms, e.g. MacOS and FreeBSD.
There is plenty of work to be done here and not all the unit tests work on MacOS and FreeBSD for various reasons.
As a POC update the MacOS and FreeBSD tests on Cirrus-CI to run a few command unit tests.
MacOS does not allow files to be removed recursively unless the owner has write and execute permissions on all the directories.
Some tests leave the permissions in a bad state so fix them up before trying to delete.
The exact message is platform dependent so get the platform error to use in the expect.
It doesn't matter what the message is as long as there is an error and it is logged.
YAML::XS requires libyaml so it is not as portable as pure Perl versions of YAML.
Instead of using YAML::PP just use the general YAML::Any module which uses whatever implementation is installed. We are not concerned about performance for YAML so whatever works is fine.
Messages on stderr were being lost due to the error suppression used to customize the error message.
Also update the formatting to be more informative and concise.
MacOS has a very old version of rsync that does not support this option.
Rather than require a newer version of rsync exclude the option since the plan is to remove the requirement for it.
This is a more appropriate place for the check and means test.pl can avoid loading any XML files if --no-gen is specified.
The XML::Checker::Parser module originally selected for XML in Perl is not very portable so the requirement reduces the number of platforms where tests can be run.
Clang justifiably complains about pointer arithmetic on a known NULL value during testing. We know this is fine but use uintptr_t to silence the warnings.
Found on MacOS M1.
The return value is not checked because we are happy with a truncated result in this case, which is guaranteed by passing the buffer size.
Found on MacOS M1.
Multi-repository implementations for the archive-push, check, info, stanza-create, stanza-upgrade, and stanza-delete commands.
Multi-repo configuration is disabled so there should be no behavioral changes between these commands and their current single-repo implementations.
Multi-repo documentation and integration tests are still in the multi-repo development branch. All unit tests work as multi-repo since they are able to bypass the configuration restrictions.
The option portion was not being capitalized, nor was - being replaced with _.
The parser does not care, but in cases where we have mixed hrnCfgEnv*()/setenv() calls the env variable might not get cleared, which can lead to funny test results.
The default lock path should fail since the test VM gives ownership of /tmp to root.
For some reason this was not working as expected under u18 but it fails under u20.
All unit tests now require full coverage so the "full" keyword is obsolete and has been removed.
The covered code modules are simply listed, with only "no code" modules annotated.
Check that archive files exist in the main process instead of the local process. This means that the archive.info file only needs to be loaded once per execution rather than once per file to get.
Stop looking when a file is missing or in error. PostgreSQL will never request anything past the missing file so there is no point in getting them. This also reduces "unable to find" logging in the async process.
Cache results of storageList() when looking for multiple files to reduce storage I/O.
Look for all requested archive files in the archive-id where the first file is found. They may not all be there, but this reduces the number of list calls. If subsequent files are in another archive id they will be found on the next archive-get call.
Append "asynchronously" to messages when the async process fetched the file (not in the actual async process log, though).
Add "repo1" to make it clear what archive we are talking about. This is not very useful by itself but soon we'll be able to add the archive id, which is very useful.
Add constants for messages that are used multiple times to ensure they stay consistent.
If files other than backup.manifest.copy were left in a backup path by a prior resume then the next resume would skip the backup rather than removing it. Since the backup path still existed, it would be found during backup label generation and cause an error if it appeared to be later than the new backup label. This occurred if the skipped backup was full.
The error was only likely on object stores such as S3 because of the order of file deletion. Posix file systems delete from the bottom up because directories containing files cannot be deleted. Object stores do not have directories so files are deleted in whatever order they are provided by the list command. However, the issue can be reproduced on a Posix file system by manually deleting backup.manifest.copy from a resumable backup path.
Fix the issue by removing the resumable backup if it has no manifest files. Also add a new warning message for this condition.
Note that this issue could be resolved by running expire or a new full backup.
These options specify the number of local worker job retries and the retry interval after one immediate retry.
There is some value in allowing retries to be specified by the user but for the most part these options are for suppressing retries during testing, which can save a lot of time. The bug introduced in d1d25c7 and fixed in 8b86d5e also suggests it is better not to use retries in tests.
Remove the default delayed retries for archive-get/archive-push, leaving only the immediate retry. These commands are retried by PostgreSQL so it doesn't make sense to do too many retries internally.
These options are currently internal.
The test was pretty old and written in stages during the migration, so storage use was a bit archaic and the organization was poor.
Update using the new storage macros and reorganize the tests to provide better coverage.
The macros should make it much easier to write complex tests, especially when compression and encryption are involved.
Update the command/archiveGet test to show how the new macros are used.
This avoids the need for strLstJoin() when testing lists.
Lists are \n delimited (rather than comma or pipe) so that non-trivial lists can be more easily diff'd.
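Hypothetical usage (macro name assumed), comparing a StringList against a \n-delimited constant without strLstJoin():

TEST_RESULT_STRLST_Z(strLstSort(pathList, sortOrderAsc), "archive\nbackup\n", "check repo paths");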
Add separation and some visual cues to help identify the start of a test.
Also add a counter which can be used to search for a specific test, which is useful if there is a lot of debug output to search through.
These were required to deal with the legacy Perl code being unable to load new options between tests.
The C code does not have this issue so remove the forks and update process ids in the log tests.
No timeout is expected here but the small timeout prevents errors from being thrown.
This is not a bug since the error would be thrown on the next archive-get call but it does make the tests harder to debug when there is an error.
It is not clear why there was a timeout here at all. It is likely cruft from a prior test or a copy/paste error.
Duplicated tests are being removed from the info command unit tests, specifically tests where the only difference was whether a lock was held, which affects only the status display. Removing these tests will reduce churn in the upcoming multi-repo support.
The data returned by the protocol has not been sorted yet so it is vulnerable to differences in collation.
Multiple records are not needed for this test so limit it to one path to solve this issue.
The pg option only has one current usage, to let the backup local know which pg index it should copy files from.
There are other possible uses for this option, but they need thought, tests, and documentation.
This option was added in advance of the multi-repo functionality but it has no purpose and it is not clear what the validity rules should be.
The option will be added back when multi-repo functionality is committed.
There is an inconsistency when the JSON is output for the case when a stanza is requested and it does not exist in the repo. This was the only case where the archive array was not added to the JSON. Adding it will simplify the upcoming multi-repo support code.
Also, a redundant test was removed rather than updating it for this case.
This was a hack to prevent the remote from loading host settings, which is now handled by option validity for command roles.
These options are still useful so don't remove them, but do leave them internal for now.
Building on 23f5712, limit option validity by role. This is mostly for options that weren't needed for certain roles but were harmless. However, the upcoming multi repository functionality requires the granularity implemented here.
The remote role benefits since host options can automatically be excluded when building the options. Also, many options that are only required for the default role (e.g. repo-retention-full) no longer need to be passed in tests for other roles.
Some tests used options in contexts that are currently valid but are not correct usage, i.e. usage of internal options for the default role.
Update these tests in advance of the option validity becoming stricter.
Validity by command was not granular enough so numerous options needed to be marked internal so users would not stumble across them. Options were also needlessly being passed to roles that had no use for them.
Introduce per-role validity lists that depend on what roles are valid per command. Also add a check to ensure that only valid roles are used with a command.
This commit adds the functionality but does not introduce any new behavior, i.e. all options are valid for all roles that the command is valid for. A subsequent commit will introduce the new role restrictions to make the changes easier to audit.
Data required for parsing was spread between the config and defined modules, mostly for historical reasons because the same data was used by Perl.
Requiring all the parse rules to be accessed with function interfaces makes the code more complicated and new rules harder to implement.
Instead, move the data to the parse module so in the most complex cases no interface functions are needed. This reduces the total amount of code and paves the way for more complex parse rules.
The help data can be represented more compactly in a pack and this separates data needed for help from data needed for parsing, freeing each to have a more appropriate representation.
This was done in the internal versions but not the user-facing function. That meant the field had to be explicitly read after determining it was NULL, which is wasteful.
Since there is only one behavior now, remove pckReadDefaultNull() and move the logic to pckReadNullInternal().
Testing on Travis-CI has been getting slower (from ~18 minutes to 3-6 hours) and the travis-ci.org service will be terminated at the end of the year. Moving to travis-ci.com is an option but the quotas are too low for our purposes.
Instead use Github Actions, which does not currently have quotas, and runs our current tests with just a few tweaks.
This still leaves multi-architecture tests on Travis-CI but we may be able to run those and stay within the new quotas.
Also fix a minor bug in restoreTest.c exposed by Github Actions using a different name for the user and group.
The pack type is an architecture-independent format for serializing data compactly, inspired by ProtocolBuffers and Avro.
Also add ioReadSmall(), which is optimized for small binary reads, similar to ioReadLineParam().
The C code does not use doubles to represent seconds like the Perl code did so time can be represented as an integer which reduces the number of data types that config has to understand.
Also remove Variant doubles since they are no longer used.
Note that not all double code was removed since we still need to display times to the user in seconds and it is possible for the times to be fractional. In the future this will likely be simplified by storing the original user input and using that value when the time needs to be displayed.
Bug Fixes:
* Allow [, #, and space as the first character in database names. (Reviewed by Stefan Fercot, Cynthia Shang. Reported by Jefferson Alexandre.)
* Create standby.signal only on PostgreSQL 12 when restore type is standby. (Fixed by Stefan Fercot. Reviewed by David Steele. Reported by Keith Fiske.)
Features:
* Expire history files. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Report page checksum errors in info command text output. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Add repo-azure-endpoint option. (Reviewed by Cynthia Shang, Brian Peterson. Suggested by Brian Peterson.)
* Add pg-database option. (Reviewed by Cynthia Shang.)
Improvements:
* Improve info command output when a stanza is specified but missing. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele. Suggested by uspen.)
* Improve performance of large file lists in backup/restore commands. (Reviewed by Cynthia Shang, Oscar.)
* Add retries to PostgreSQL sleep when starting a backup. (Reviewed by Cynthia Shang. Suggested by Vitaliy Kukharik.)
Documentation Improvements:
* Replace RHEL/CentOS 6 documentation with RHEL/CentOS 8.
Update RHEL/CentOS 7 to cover the versions that were previously covered by RHEL/CentOS 6.
Since RHEL/CentOS 7/8 work the same update the documentation logic and labels to reflect this compatibility.
Inaccuracies in sleep time or clock skew might make a single sleep insufficient to reach the next second.
Add a few retries to make the process more reliable but still avoid an infinite loop if something is seriously wrong.
CentOS6 EOL'd and the mirrors were swiftly deleted, leading to failures in tests and documentation.
Remove CentOS 6 for now to get builds going again with the intention to replace it in the near future with CentOS 8.
There is not a lot to be done in this case since it looks like PostgreSQL disconnected while the query was running, but at least improve the error message and remove the assert, which indicates a coding error.
Refactor the code to allow a dynamic number of indexes for indexed options, e.g. pg-path. Our reliance on getopt_long() still limits the number of indexes we can have per group, but once this limitation is removed the rest of the code should be happy with dynamic numbers of indexes (with a reasonable maximum).
Add an option to set a default in each group. This was previously handled by the host-id option but now there is a specific option for each group, pg and repo. These remain internal until they can be fully tested with multi-repo support. They are fully tested for internal usage.
Remove the ConfigDefineOption enum and use the ConfigOption enum instead. They are now equal since the indexed options (e.g. cfgOptRepoHost2) have been removed from ConfigOption.
Remove the config/config test module and add required tests to the config/parse test module. Parsing is now the only way to load a config so this removes some redundancy.
Split new internal config structures and functions into a new header file, config.intern.h. More functions will need to be moved over from config.h but that will need to be done in a future commit to reduce churn.
Add repoIdx to repoIsLocal() and storageRepo*(). Multi-repository support requires that repo locality and storage be accessible by index. This allows, for example, multiple repos to be iterated in a loop. This could be done in a separate commit but doesn't seem worth it since the code is related.
Remove the type parameter from storageRepoGet(). This parameter existed solely to provide coverage for the case where the storage type was invalid. A better pattern is to check that the type is S3 once all other types have been ruled out.
Improve locking on remote processes by introducing an exec-id that is unique to the main process and passed to all remote processes. This allows the remote processes to determine if a lock is held by a remote from the same main process. If so, the lock is allowed.
The exec-id is also useful for associating remote logs with main logs for debugging purposes.
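Illustrative only (not the actual lock implementation): a remote can honor a lock that is already held when the lock file records the same exec-id that the remote was given by its main process.

#include <stdbool.h>
#include <string.h>

// Allow the lock if it is held on behalf of the same main process (same exec-id)
static bool
lockHeldBySameExec(const char *const lockFileExecId, const char *const execId)
{
    return lockFileExecId != NULL && execId != NULL && strcmp(lockFileExecId, execId) == 0;
}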
When restore type standby is provided, the recovery.signal isn't needed and may lead to some confusion (see #1236).
Lately, when using pg_basebackup --write-recovery-conf, only the standby.signal file is created. This change would then align with that behaviour.
If a user reset an option such as pg-default on the command-line then an override in the code would not take effect.
Ignore a reset when the code explicitly sets an option to prevent this.
These warnings were only being reported to PostgreSQL on the console. Now they are also recorded in the async log increasing the chance that they will be seen.
This also improves coverage by requiring a warning during async processing to have a test case, which has been added.
The tests were originally written by loading values directly into the configuration before the parser was available.
Update to use harnessCfgLoad() to simplify the tests and make them compatible with upcoming config changes.
Return a path missing error when a stanza is specified for the info command but the stanza does not exist in the repository.
Previously [] was returned, which is still the case if no stanza is specified and the repository does not exist.
lstRemoveIdx(list, 0) resulted in the entire list being moved down to the first position which could take a long time for big lists. This is a common pattern in backup/restore when processing file queues.
Instead simply move the list pointer up when first item is removed. Then on insert check if there is space at the beginning when there is no longer space at the end and do the move then. This way if a list is built and then drained without any new inserts then no move is required.
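An illustrative sketch of the idea (simplified, not the actual List implementation; growth and bounds handling omitted):

#include <string.h>

typedef struct
{
    unsigned char *buffer;                                          // allocated item storage
    size_t itemSize;                                                // size of a single item
    unsigned int offset;                                            // index of the first live item
    unsigned int size;                                              // number of live items
    unsigned int capacity;                                          // total items the buffer can hold
} SimpleList;

// Removing the first item is now O(1): just advance the offset
static void
simpleListRemoveFirst(SimpleList *const list)
{
    list->offset++;
    list->size--;
}

// On insert, reclaim the space at the front only when there is no room left at the end
static void
simpleListAdd(SimpleList *const list, const void *const item)
{
    if (list->offset + list->size == list->capacity && list->offset != 0)
    {
        memmove(
            list->buffer, list->buffer + (size_t)list->offset * list->itemSize,
            (size_t)list->size * list->itemSize);
        list->offset = 0;
    }

    memcpy(
        list->buffer + (size_t)(list->offset + list->size) * list->itemSize, item, list->itemSize);
    list->size++;
}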
There were a number of places in the code where "hostId" was used, but hostId is just the option group index + 1 so this led to a lot of +1 and -1 to convert the id to an index and vice versa.
Instead just use the zero based index wherever possible. This is pretty much everywhere except when the host-id option is read or set, or where a message is being formatted for the user.
Also fix a bug in protocolRemoteParam() where remotes spawned from the main process could get process ids that were not 0. Only the locals should spawn remotes with process id > 0. This seems to have been harmless since the process id is only a label, but it could be confusing when debugging.
iniLoad() was trimming lines which meant that a leading space would not pass checksum validation when a manifest was reloaded. Remove the trims since files we write should never contain extraneous spaces. This further diverges the format for the functions that read conf files (e.g. pgbackrest.conf) and those that read info (e.g. manifest) files.
While we are at it also allow [ and # as initial characters. # was reserved for comments but we never put comments into info files. [ denotes a section but we can get around this by never allowing arrays as values in info files, so if a line ends in ] it must be a section. This is currently the case but enforce it by adding an assert to info/info.c.
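A sketch of the relaxed rule (illustrative only): since arrays are never allowed as values in info files, only a section line can end in ']', so key/value lines are free to begin with '[', '#', or a space.

#include <stdbool.h>
#include <string.h>

static bool
infoLineIsSection(const char *const line)
{
    const size_t size = strlen(line);

    return size != 0 && line[size - 1] == ']';
}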
The tests were originally written by loading values directly into the configuration before the parser was available.
Update to use harnessCfgLoadRaw() to simplify the tests and make them compatible with upcoming config changes.
Note that some unreachable conditions were removed since they could not be reached via a parsed config, only by munging values directly into the config. cfgOptionTest(optionId) was removed because a non-default value must always be set. cfgOptionValid(cfgOptLogTimestamp) was removed because it is true for all commands except for cfgCmdNone, which is checked with an assert.
cfgOptionId() did not recognize deprecated options which made the help command throw errors when they were specified on the command line. cfgParseOption() will correctly identify deprecated options.
cfgParseOption() can also be used in cfgParse() to reduce code duplication when parsing info out of the option value returned by optionFind().
Finally, code the option key index separately in parse.auto.c. For now they are simply added back together but future code will need them separated.
This has always been equivalent to the ConfigCommand enum so it just adds complexity.
It was created for symmetry with ConfigDefineOption, which will also be removed soon.
Currently indexes above 1 do not have dependencies checked, so this doesn't error.
In a future commit we will enable those checks and this will error if it is not fixed.
These constants don't scale well as the index total is increased for an option.
The core code rarely uses these options and they are easily replaced with cfgOptionName().
The tests had started to make use of the constants, so provide functions that build the option name from the optionId and, optionally, the optionKey.
WAL timeline history files were not being expired because they were small and generally not very plentiful.
However, in some cases large numbers of history files may be generated so it makes sense to remove useless history files to keep things tidy.
The history file for the oldest retained timeline is kept for debugging purposes even though it is not used for recovery.
Group related options together so operations (e.g. valid, test, index total) can be performed on all options in the group.
Previously, options at the top of the hierarchy of the related options were used to do these tests. This was prone to error as option relationships changed and it was not always clear which option (or options) should be used.
Bug Fixes:
* Error with hints when backup user cannot read pg_settings. (Reviewed by Stefan Fercot, Cynthia Shang. Reported by Mohamed Insaf K.)
Features:
* PostgreSQL 13 support. (Reviewed by Cynthia Shang.)
Improvements:
* Improve PostgreSQL version identification. (Reviewed by Cynthia Shang, Stephen Frost.)
* Improve working directory error message. (Reviewed by Stefan Fercot.)
* Add hint about starting the stanza when WAL segment not found. (Contributed by David Christensen. Reviewed by David Steele.)
* Add hint for protocol version mismatch. (Reviewed by Cynthia Shang. Suggested by loop-evgeny.)
Documentation Improvements:
* Add note that pgBackRest versions must match when running remotely. (Reviewed by Cynthia Shang. Suggested by loop-evgeny.)
* Move info command text to the reference and link to user guide. (Reviewed by Cynthia Shang. Suggested by Christophe Courtois.)
* Update yum repository path for CentOS/RHEL user guide. (Contributed by Heath Lord. Reviewed by David Steele.)
Update the documentation to explicitly state that versions must match across hosts when running remotely.
Add a hint to the protocol version mismatch error to help the user identify the problem.
Add older PostgreSQL versions to the u18 container that were not available before.
This also updates all minor versions for prior versions of PostgreSQL.
Scan the WAL archive for missing or invalid files and build up ranges of WAL that will be used to verify backup integrity. A number of errors and warnings are currently emitted but they should not be considered authoritative (yet).
The command is incomplete, so it is marked internal.
Previously, catalog versions were fixed for each PostgreSQL version, which made maintaining them during PostgreSQL beta and release candidate cycles very painful. A version of pgBackRest that was functionally compatible was rendered useless by a catalog version bump in PostgreSQL.
Instead use only the control version to identify a PostgreSQL version when possible. Some older versions require a catalog version to positively identify a PostgreSQL version, so include them when required.
Since the catalog number is required to work with tablespaces it will need to be stored. There's already a copy of it in backup.info so use that (even though we have been ignoring it in the C versions).
These values are not used by the Perl integration tests so maybe it would be better to remove them, but for now just update since they should not be changing again for PG13.
This condition used to produce an unclear error that we had been intending to improve, but in the meantime the changes in fbff299 caused a segfault for this condition instead because data_directory was assumed to be non-NULL.
Fix this by explicitly throwing an error with hints when any row in pg_settings cannot be selected.
This file is created by pg_basebackup, so it might be in the data directory if the cluster was restored from a pg_basebackup backup. Also exclude backup_manifest.tmp since it is possible to find that in the backup directory.
Improve the wording of the error message and add a hint to make it clearer what is wrong and how the user can fix it.
Also change the assert to a regular error since this is not an internal error.
If a stop command has been issued, the check command fails because archiving times out.
Provide a hint to document this situation and point the user in the proper direction.
If the callback never returned any jobs then protocolParallelDone() would never be true. The reason is that the done state was being set in protocolParallelResult(), which never gets called if there are no results.
Calling protocolParallelResult() doesn't make much sense in this case so instead move the done logic to protocolParallelDone().
For current usage of ProtocolParallel we ensure there are jobs before processing so this is not a live issue, but the new behavior is required for future development.
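A simplified sketch of the pattern (not the actual ProtocolParallel code): the done state is determined where it is checked rather than only where results are collected, so a job source that never yields anything still terminates.

```c
/* Simplified sketch: compute "done" inside the done check itself so that a callback that never queues
   any jobs still allows processing to finish. All names here are illustrative. */
#include <stdbool.h>

typedef struct ParallelSketch
{
    bool done;                              // no more jobs will be generated and none are running
    unsigned int jobRunningTotal;           // jobs currently executing
    bool (*jobCallback)(void *data);        // returns true when it queued at least one job
    void *callbackData;
} ParallelSketch;

static bool parallelSketchDone(ParallelSketch *const this)
{
    // If nothing is running, ask the callback for more work; when it has none, processing is complete
    if (!this->done && this->jobRunningTotal == 0 && !this->jobCallback(this->callbackData))
        this->done = true;

    return this->done;
}
```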
Bug Fixes:
* Suppress errors when closing local/remote processes. Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened. (Reviewed by Cynthia Shang. Reported by argdenis.)
* Fix issue with = character in file or database names. (Reviewed by Bastian Wegge, Cynthia Shang. Reported by Brad Nicholson, Bastian Wegge.)
Features:
* Automatically retrieve temporary S3 credentials on AWS instances. (Contributed by David Steele, Stephen Frost. Reviewed by Cynthia Shang, David Youatt, Aleš Zelený, Jeanette Bromage.)
* Add archive-mode option to disable archiving on restore. (Reviewed by Stephen Frost. Suggested by Stephen Frost.)
Improvements:
* PostgreSQL 13 beta3 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
* Asynchronous list/remove for S3/Azure storage. (Reviewed by Cynthia Shang, Stephen Frost.)
* Improve memory usage of unlogged relation detection in manifest build. (Reviewed by Cynthia Shang, Stephen Frost, Brad Nicholson, Oscar. Suggested by Oscar, Brad Nicholson.)
* Proactively close file descriptors after forking async process. (Reviewed by Stephen Frost, Cynthia Shang.)
* Delay backup remote connection close until after archive check. (Contributed by Floris van Nee. Reviewed by David Steele.)
* Improve detailed error output. (Reviewed by Cynthia Shang.)
* Improve TLS error reporting. (Reviewed by Cynthia Shang, Stephen Frost.)
Documentation Bug Fixes:
* Add none to compress-type option reference and fix example. (Reported by Ugo Bellavance, Don Seiler.)
* Add missing azure type in repo-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
* Fix typo in repo-cipher-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
Documentation Improvements:
* Clarify that expire must be run regularly when expire-auto is disabled. (Reviewed by Douglas J Hunley. Suggested by Douglas J Hunley.)
When restoring a cluster that will be promoted but is not intended to be the new primary, it is important to disable archiving to avoid polluting the repository with useless WAL. This option makes disabling archiving a bit easier.
Automatically retrieve the role and temporary credentials for S3 when the AWS instance is associated with an IAM role. Credentials are automatically updated when they are <= 5 minutes from expiring.
Basic configuration is to set repo1-s3-key-type=auto. repo1-s3-role can be used to set a specific role, otherwise it will be retrieved automatically.
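A minimal pgbackrest.conf sketch, assuming an S3 repository (the bucket, endpoint, and region values are placeholders):

```
[global]
repo1-type=s3
repo1-s3-key-type=auto
repo1-s3-bucket=my-bucket
repo1-s3-endpoint=s3.us-east-1.amazonaws.com
repo1-s3-region=us-east-1
```

Adding repo1-s3-role to the configuration pins a specific role; otherwise the role is retrieved automatically.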
Add more info (command, version, options) to asserts, and to errors when debug logging is enabled. This won't cover all cases but should provide more information in some circumstances.
When testing the common/stack-trace module it is important not to call this test function since the trace stack is empty and calling it would cause a buffer underrun.
Instead use a macro that is only defined under the correct circumstances and add an assert() to catch future regressions.
Currently each module that needs to collect statistics implements custom code to do so. This is cumbersome.
Create a general purpose module for collecting and reporting statistics. Statistics are output in the log at detail level, but there are other uses they could be put to eventually.
No new functionality is added. This is just a drop-in replacement for the current statistics, with the advantage of being more flexible.
The new stats are slower because they involve a list lookup, but performance testing shows stats can be updated at about 40,000/ms which seems fast enough for our purposes.
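A rough sketch of the list-lookup approach, purely illustrative (the names and layout here are not the actual stat module):

```c
/* Illustrative list-backed counter: each update is a linear lookup by key, which is why it is slower
   than a dedicated counter but still fast enough for the update rates measured above. */
#include <stdint.h>
#include <string.h>

#define STAT_SKETCH_MAX 64

typedef struct StatSketch
{
    const char *key;                                // statistic name
    uint64_t total;                                 // accumulated count
} StatSketch;

static StatSketch statSketchList[STAT_SKETCH_MAX];
static unsigned int statSketchTotal;

static void statSketchInc(const char *const key)
{
    // Find an existing entry for the key
    for (unsigned int statIdx = 0; statIdx < statSketchTotal; statIdx++)
    {
        if (strcmp(statSketchList[statIdx].key, key) == 0)
        {
            statSketchList[statIdx].total++;
            return;
        }
    }

    // Otherwise add a new entry on first use
    if (statSketchTotal < STAT_SKETCH_MAX)
        statSketchList[statSketchTotal++] = (StatSketch){.key = key, .total = 1};
}
```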
HTTP/1.0 connections are closed by default after a single response. Other than that, treat 1.0 the same as 1.1.
HTTP/1.0 allows different date formats that we can't parse but for now, at least, we don't need any date headers from 1.0 requests.
The prior implementation only supported a single connection on TLS. This is not flexible enough for complex testing scenarios which might require multiple simultaneous connections on different protocols.
Allow multiple simultaneous connections and add plain sockets as a protocol option. Rename the functions used for server scripting to hrnServerScript*() to make it clear they are related. Improve error messages when less input is received by the server than expected.
Also, do a bit of cleanup and add more comments.
Following up on 111d33c, implement the new interfaces for socket client/session. Now HTTP objects can be used over TLS or plain sockets.
This required adding ioSessionFd() and ioSessionRole() to provide the functionality of sckSessionFd() and sckSessionType(). sckClientHost() and sckClientPort() don't make sense in a generic interface so they were replaced with ioSessionName().
Rather than calling storageS3New() directly, create the storage by loading a configuration and calling repoStorageGet(). This is a better end-to-end test and cuts down on a lot of redundant tests.
Add tests that include security tokens in error messages to ensure they are redacted.
Move sckSessionReadyRead()/Write() into the IoRead/IoWrite interfaces. This is a more logical place for them and the alternative would be to add them to the IoSession interface, which does not seem like a good idea.
This is mostly a refactor, but a big change is the select() logic in fdRead.c has been replaced by ioReadReady(). This was duplicated code that was being used by our protocol but not TLS. Since we have not had any problems with requiring poll() in the field this seems like a good time to remove our dependence on select().
Also, IoFdWrite now requires a timeout so update where required, mostly in the tests.
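A rough sketch of the kind of poll()-based readiness check described above; the function name is illustrative and this is not the ioReadReady() implementation.

```c
/* Minimal sketch of a single-descriptor read-readiness check using poll(). */
#include <poll.h>
#include <stdbool.h>

// Return true when fd is readable (or the peer has hung up) within timeoutMs milliseconds
static bool fdReadyReadSketch(const int fd, const int timeoutMs)
{
    struct pollfd pollFd = {.fd = fd, .events = POLLIN};

    // poll() returns the number of descriptors with events, 0 on timeout, -1 on error
    return poll(&pollFd, 1, timeoutMs) > 0 && (pollFd.revents & (POLLIN | POLLHUP)) != 0;
}
```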
These interfaces allow the HttpClient and HttpSession objects to work with protocols other than TLS, e.g. plain sockets. This is necessary to allow standard HTTP -- right now only HTTPS is allowed, i.e. HTTP over TLS.
For now only TlsClient and TlsSession have been converted to the new interfaces. SocketClient and SocketSession will also need to be converted but first sckSessionReadyRead() and sckSessionReadyWrite() need to be moved into the IoRead and IoWrite interfaces, since they are not a good fit for IoSession.
Pretty much everywhere handle is used, what is really meant is file descriptor (fd). This terminology was migrated over from Perl and is just not quite correct, or at least not as correct as fd.
There were also plenty of places fd was used so now all uses are consistent.
The Perl code was not updated but might be in a future commit.
This does not appear to have been used in quite some time and the tests are equally useless because they don't prove the correct port was passed to httpClientNew().
Before 9f2d647 TLS errors included additional details in at least some cases. After 9f2d647 a connection to an HTTP server threw `TLS error [1]` instead of `unable to negotiate TLS connection: [336031996] unknown protocol`.
Bring back the detailed messages to make debugging TLS errors easier. Since the error routine is now generic, the `unable to negotiate TLS connection` context is not available, so the error looks like `TLS error [1:336031996] unknown protocol`.
This loop was using a lot of memory without freeing it at intervals.
Rewrite to use char arrays when possible to reduce memory that needs to be allocated and freed.
Zigzag encoding places the sign bit in the least significant bit so that -1 is encoded as 1, 1 as 2, etc. This moves as many bits as possible into the low order bits which is good for other types of encoding, e.g. base-128.
See https://en.wikipedia.org/wiki/Variable-length_quantity#Zigzag_encoding.
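A minimal sketch of the encode/decode transform (function names here are illustrative, not the project's converters):

```c
/* Zigzag transform for 64-bit integers: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ... */
#include <stdint.h>

static uint64_t zigZagEncode64(const int64_t value)
{
    // The arithmetic right shift replicates the sign bit into every position, flipping the magnitude
    // bits of negative values; the left shift makes room for the sign in the least significant bit
    return ((uint64_t)value << 1) ^ (uint64_t)(value >> 63);
}

static int64_t zigZagDecode64(const uint64_t value)
{
    return (int64_t)(value >> 1) ^ -(int64_t)(value & 1);
}
```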
It seems like overkill to encode this when other enums (e.g. StorageInfoLevel) are passed as integers.
Instead note that StorageType values should not be changed and remove the special encoding.
The fix for = characters in info files (039d314) added JSON validation but discarded the resulting Variant which means the JSON is being parsed twice. This nearly doubles the time to load a manifest since a lot of complex JSON is involved.
Time to load a million file manifest:
Before 039d314: 7.8s
039d314: 15.5s
This patch: 7.5s
To fix this regression return the Variant in the callback so the caller does not have to parse it again. The new code appears slightly more efficient overall, probably because there are fewer operations against Strings.
We use the Z suffix in many functions to indicate that we are expecting a zero-terminated string so make this function conform to the pattern.
As a bonus the new name is a bit shorter, which is a good quality in a commonly-used function.
The manifest uses the = character as the key/value separator, so = characters in the key cause parsing to fail, leading to an error or segfault.
Since the value must be valid JSON we can keep checking the value on the right side of the = and stop building the key when the value is valid. It's a bit hackish but it does seem to do the job without breaking the manifest format.
Unsurprisingly this makes parsing about 50% slower but it's still more than fast enough. Parsing 10 million key/values takes about 6.5s for the old code and 10s for the new code. Since the value is used as JSON downstream we can reclaim most of this time by just passing the JSON value rather than making the callback reparse it. We'll save that for another commit, though.
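A sketch of that scan (simplified; jsonIsValid is an injected stand-in for whatever JSON validation is available, not a real pgBackRest function):

```c
/* Simplified sketch: find the key/value split in a line whose key may contain '=' by testing each '='
   from the left and accepting the first one whose right-hand side is valid JSON. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Returns the key length, or 0 if no '=' yields a valid JSON value
static size_t iniKeyLengthSketch(const char *const line, bool (*const jsonIsValid)(const char *json))
{
    for (const char *equal = strchr(line, '='); equal != NULL; equal = strchr(equal + 1, '='))
    {
        if (jsonIsValid(equal + 1))
            return (size_t)(equal - line);
    }

    return 0;
}
```

Given the line `odd=key={"valid":true}`, the first `=` fails the JSON check but the second succeeds, so the key is `odd=key`.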
Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened.
Also fix the related issue that the local processes were not being shut down when they completed, which meant that they might time out before being closed when pgbackrest terminated.
Use a test storage driver to allow manifestNewBuild() to be run against a test cluster at any scale without having to write files to disk.
Simplify the test by using the output of manifestNewBuild() to feed manifestSave() and manifestNewLoad().
Also add manifest size to the output.
Calculates the memory used by the context and all child contexts.
This is primarily useful for debugging but it is not conditional on DEBUG because it is useful for profile/performance tests.
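A simplified sketch of the recursion involved (the struct is a stand-in, not the real MemContext layout):

```c
/* Illustrative recursive size calculation over a context tree. */
#include <stddef.h>

typedef struct CtxSketch
{
    size_t allocTotal;                      // bytes allocated directly in this context
    struct CtxSketch **childList;           // child contexts (entries may be NULL after a child is freed)
    unsigned int childListSize;
} CtxSketch;

static size_t ctxSketchSize(const CtxSketch *const this)
{
    size_t result = this->allocTotal;

    // Add the size of each child context recursively
    for (unsigned int childIdx = 0; childIdx < this->childListSize; childIdx++)
    {
        if (this->childList[childIdx] != NULL)
            result += ctxSketchSize(this->childList[childIdx]);
    }

    return result;
}
```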
A number of tests used invalid JSON values where an error was expected or the value would be ignored.
Update these tests to use valid JSON values so all values in the file can be validated even if they are not used.
Something like 3="string" would return an Int64 variant and ignore the invalid portion after the integer. Other JSON interface functions have this check but it was forgotten here.
There are no current issues because of this but we want to be able to validate arbitrary JSON strings and this function was not working correctly for that usage.
This function is not used in the core code so remove it and update the test where it was used.
There may eventually be a need for a strLstNewP() function but it doesn't seem worth the code churn until there is an actual requirement.
The old constructor was left around to reduce code churn during the migration but it just makes the code harder to read and search.
Remove the old constructor and rename all remaining instances to lstNewP(), which by default has the same semantics.