Utilize httpUrlNewParseP() to parse endpoint and port from the URL in the S3 and Azure helpers to avoid issues where protocol was not expected to be part of the URL.
This leak was caused by the file descriptor variable getting clobbered after a long jump. Mark it as volatile to fix.
Testing this is a bit complex because the issue only happens in optimized builds, if at all. Put the test into the performance suite, which is always optimized, until a better idea presents itself.
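A minimal standalone sketch (illustration only, not the pgBackRest source) of the pattern: a descriptor assigned between setjmp() and longjmp() must be volatile or an optimizing compiler may keep it in a register and lose the assignment when the jump returns.

#include <setjmp.h>
#include <unistd.h>

static jmp_buf jumpBuf;

int
main(void)
{
    // Without volatile the value assigned after setjmp() may be clobbered when longjmp() returns
    volatile int fd = -1;

    if (setjmp(jumpBuf) == 0)
    {
        fd = dup(STDOUT_FILENO);                    // acquire a descriptor after setjmp()
        longjmp(jumpBuf, 1);                        // simulate an error thrown deeper in the call stack
    }

    // Cleanup is reliable only because fd is volatile
    if (fd != -1)
        close(fd);

    return 0;
}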
If a path/file was remapped to a link using either --link-map or --link-all there would be no effect if the path/file already existed. If a link existed it would be properly updated and converting a link to a path/file also worked.
The issue happened during delta cleanup, which failed to check if the existing path/file had been remapped to a link.
Add checks for newly mapped path/file links and remove the old path/file where required.
This was previously a warning but the warning is easy to miss so a lot of time may be lost restoring and recovering a backup that will not hit the target.
Since this is technically a breaking change, add an "important note" about the change to the release.
In the backup command, add a warning if start-fast is disabled and the PostgreSQL checkpoint_timeout is greater than db-timeout.
In such cases, we might timeout before the checkpoint occurs and the backup really starts.
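A standalone sketch of the condition (all names here are illustrative, not the real backup code, which reads checkpoint_timeout from PostgreSQL and db-timeout from the pgBackRest configuration):

#include <stdbool.h>
#include <stdio.h>

static void
backupCheckTimeoutSketch(bool startFast, int checkpointTimeoutSec, int dbTimeoutSec)
{
    // With start-fast disabled the backup waits for a natural checkpoint, which can take up to
    // checkpoint_timeout seconds -- longer than db-timeout means the backup may time out first
    if (!startFast && checkpointTimeoutSec > dbTimeoutSec)
    {
        fprintf(
            stderr, "WARN: checkpoint_timeout (%ds) exceeds db-timeout (%ds) and start-fast is disabled\n",
            checkpointTimeoutSec, dbTimeoutSec);
    }
}

int
main(void)
{
    backupCheckTimeoutSketch(false, 900, 600);      // e.g. 15-minute checkpoint_timeout vs. 10-minute db-timeout
    return 0;
}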
Fail the backup if a cluster stops or the standby is promoted. Previously, shutting down the primary would cause an error but it was not detected until the end of the backup. Now the error will happen sooner and a promotion on the standby will also cause an error.
SIGHUP allows the configuration to be reloaded. Note that the configuration will not be updated in child processes that have already started.
SIGTERM terminates the server process gracefully and sends SIGTERM to all child processes. This also gives the tests an easy way to stop the server.
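A simplified sketch of the signal handling (hypothetical code, not the server implementation): the handlers only set flags, the main loop reloads configuration or shuts down, and forwarding SIGTERM to children is left as a comment since the real server tracks its child processes.

#include <signal.h>
#include <stdbool.h>
#include <unistd.h>

static volatile sig_atomic_t reloadRequested = 0;
static volatile sig_atomic_t terminateRequested = 0;

static void
sigHupHandler(int signalType)
{
    (void)signalType;
    reloadRequested = 1;                            // reload configuration at the next loop iteration
}

static void
sigTermHandler(int signalType)
{
    (void)signalType;
    terminateRequested = 1;                         // exit gracefully at the next loop iteration
}

int
main(void)
{
    signal(SIGHUP, sigHupHandler);
    signal(SIGTERM, sigTermHandler);

    while (!terminateRequested)
    {
        if (reloadRequested)
        {
            reloadRequested = 0;
            // reload configuration here -- children that have already started keep their old configuration
        }

        pause();                                    // wait for the next signal
    }

    // forward SIGTERM to all tracked child processes here before exiting
    return 0;
}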
Add the following checks:
* Checkpoint is updated in pg_control after pg_start_backup(). This helps ensure that PostgreSQL and pgBackRest have a consistent view of the storage and that PGDATA paths match.
* Timeline of backup start WAL file matches pg_control. Hard to see how this one could get hit, but we have the power...
* Standby is on the same timeline as the primary. If not, this standby is not following the primary.
* Last standby checkpoint is not greater than the backup checkpoint. If so, this standby is not following the primary.
This also requires some additional plumbing to read/write timeline/checkpoint from pg_control and parse timelines from WAL filenames. There were some changes in the backup tests caused by the fact that pg_control now has different contents for each backup.
The check to ensure that the required checkpoint was reached on the standby should also be updated to use pg_control (it currently uses pg_control_checkpoint()), but that requires non-trivial changes to the test harness and will need to wait.
A CHECK() worked exactly like ASSERT() except that it was compiled into production code. However, over time many checks have been added that should not throw AssertError, which should be reserved for probable coding errors.
Allow the error code to be specified so other error types can be thrown. Also add a human-readable message since many of these could be seen by users even when there is no coding error.
Update coverage exceptions for CHECK() to match ASSERT() since all conditions will never be covered.
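A standalone sketch of the idea (THROW_SKETCH() and CHECK_SKETCH() are stand-ins, not the real error macros): CHECK() now takes an error type and a human-readable message instead of always throwing AssertError.

#include <stdio.h>
#include <stdlib.h>

#define THROW_SKETCH(errorType, message)                                                                           \
    do { fprintf(stderr, "%s: %s\n", errorType, message); exit(1); } while (0)

// Compiled into production code (unlike ASSERT()) but no longer limited to AssertError
#define CHECK_SKETCH(errorType, condition, message)                                                                \
    do { if (!(condition)) THROW_SKETCH(errorType, message); } while (0)

int
main(void)
{
    CHECK_SKETCH("FormatError", 1 + 1 == 2, "unexpected format");       // passes
    CHECK_SKETCH("ProtocolError", 0, "invalid protocol response");      // fails with a user-readable message
    return 0;
}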
These macros simplify management of pg_control test files.
Centralize time updates for pg_control in the command/backup module. This caused some time updates in the logs.
Finally, move the postgres module after the storage module so it can use storage macros.
hrnPgControlToBuffer() and hrnPgWalToBuffer() now generate the system id based on the version of Postgres. If a value less than 100 is specified for systemId then it will be added to the default system id so there can be multiple ids for a single version of PostgreSQL.
Add constants to represent version system ids in tests. These will eventually be auto-generated.
This changes some checksums and we no longer have big-endian test systems, so X out those checksums to make it obvious they are no longer valid.
Tests that run without DEBUG for performance did not have ASSERT() and were using CHECK() instead.
To avoid this, ensure that the ASSERT() macro is always available in tests.
Eliminate summing and passing of copied files sizes for logging backup size.
Instead, utilize infoBackupDataByLabel() to pull the backup size for the log message.
This allows boolean command-line options to work like their config file equivalents.
At least for now this behavior will remain undocumented since all examples in the documentation will continue to use the standard syntax. The idea is that it will "just work" when options are copied out of config files rather than generating an error.
Previously the archive was only checked at the end of the backup to ensure all WAL required to make the backup consistent was present. The problem was that if archiving was not functioning then the backup had to complete before the user found out, which could be a while if the database was large enough.
Add an archive check immediately after backup start so failures are reported earlier.
The trick is to determine which WAL to check. If the repo is new there may not be any WAL in it and pg_start_backup() will not switch the WAL segment if it is empty. These are both likely scenarios when setting up and/or testing pgBackRest.
If the WAL segment is switched by pg_start_backup(), then check the archive for the segment that was detected prior to backup start. This should be common on normal running clusters with regular activity. Note that this might not be the segment immediately prior to the backup start segment if WAL volume is high.
If pg_start_backup() did not switch the WAL then we can force a switch on PostgreSQL >= 9.3 by creating a restore point. In that case the WAL to check will be the backup start WAL. This is most likely to happen on idle systems, during testing, or immediately after a repo switch.
An advantage of this approach other than earlier notification is that the backup directory will not be created so no resume will be attempted on the next backup.
Note that some additional churn was created in backup.c because the load of archive.info needs to be done earlier.
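The selection logic described above can be summarized in a small sketch (hypothetical names, not backup.c):

#include <stdbool.h>
#include <stdio.h>

static const char *
archiveCheckSegmentSketch(
    bool walSwitchedOnStart, bool canCreateRestorePoint, const char *segmentBeforeStart, const char *backupStartSegment)
{
    // pg_start_backup() switched the WAL, so check the segment detected prior to backup start
    if (walSwitchedOnStart)
        return segmentBeforeStart;

    // No switch, but PostgreSQL >= 9.3 can be forced to switch via a restore point, so check the start segment
    if (canCreateRestorePoint)
        return backupStartSegment;

    // Otherwise there is nothing reliable to check yet
    return NULL;
}

int
main(void)
{
    printf("%s\n", archiveCheckSegmentSketch(true, true, "000000010000000100000001", "000000010000000100000002"));
    return 0;
}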
This is easier to read than using infoBackupDataByLabel() != NULL.
It also allows an assertion to be added to infoBackupDataByLabel() to ensure that a NULL return value is not used unsafely.
This test was lost due to a syntax issue in a58635ac.
Update the test to use system() to better mimic what postgres does and add logging so pgBackRest timing can be determined.
Properly log the size of files copied during the backup, matching the backup size returned from the info command.
In the reference issue, the incremental backup after switchover logs the size of all files evaluated rather than only the size of the files copied in the backup.
This appears to have been an attempt to not delete files that we don't recognize, but it only works in narrow cases and could leave the user in a position of not being able to complete the stanza delete without manual intervention. It seems better just to proceed with the delete, especially since the info files have already been removed.
In addition, deleting the manifests individually could be slow on object stores if there were a very large number of backups.
Size option default and allowed values were displayed in bytes, which was confusing for the user.
This also lays the groundwork for adding units to time options.
Move option parsing functions into a common module so they can be used from the build module.
Allows users to provide an executable to be used when pgbackrest generates command strings that expect to invoke pgbackrest. These generated commands are written to files by pgbackrest, e.g. recovery.conf.
The error handler used a loop to process try, catch, and finally blocks. This worked fine but static analysis tools like Coverity did not understand that the finally block would always run and so there were false positives about double-free, unfreed resource, etc.
This implementation removes the loop, which simplifies everything, and makes it clear that the finally block will always run. This cuts down on Coverity false positives.
This implementation also catches lack of coverage on empty catch blocks so a few test fixes were committed separately in d74fe7a.
A small refactor in backup.c is required because gcc 10.3.1 on Fedora 33 complains that the reason variable may be used uninitialized. It's not clear why this is the case, but reducing the scope of the TRY block fixes the issue.
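A greatly simplified sketch of the non-looping structure (single level, no real error types): the try body and catch handler are plain if/else branches and the finally code follows unconditionally, which static analyzers can follow without the false positives the old loop produced.

#include <setjmp.h>
#include <stdbool.h>
#include <stdio.h>

static jmp_buf tryJump;

int
main(void)
{
    bool caught = false;

    if (setjmp(tryJump) == 0)
    {
        // TRY block
        printf("try\n");
        longjmp(tryJump, 1);                        // simulate THROW()
    }
    else
    {
        // CATCH block
        caught = true;
        printf("catch\n");
    }

    // FINALLY block -- always reached, no loop required
    printf("finally (caught=%d)\n", caught);

    return 0;
}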
Rather than converting Strings to StringIds at runtime, store defaults in StringId format in parse.auto.c and convert user input to StringIds during parsing.
The compress-type, repo-type and log-level-* options have allow lists, which means it is more efficient to treat them as StringIds.
For compress-type and log-level-* also update the functions that convert them to enums.
The strIdFrom*() functions forced the caller to pick an encoding, which led to a number of TRY...CATCH blocks in the code. In practice the caller does not care which encoding is used as long as the string is valid for some encoding.
Update the strIdFrom*() function to try all possible encodings and only throw an error when the string is not valid for any of them.
Bug Fixes:
* Allow "global" as a stanza prefix. (Reviewed by Stefan Fercot. Reported by Younes Alhroub.)
* Fix segfault on invalid GCS key file. (Reviewed by Stephen Frost. Reported by Henrik Feldt.)
Improvements:
* Allow link-map option to create new links. (Reviewed by Don Seiler, Stefan Fercot, Chris Bandy. Suggested by Don Seiler.)
* Increase max index allowed for pg/repo options to 256. (Reviewed by Cynthia Shang.)
* Add WebIdentity authentication for AWS S3. (Reviewed by James Callahan, Reid Thompson, Benjamin Blattberg, Andrew L'Ecuyer.)
* Report backup file validation errors in backup.info. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Add recovery start time to online backup restore log. (Reviewed by Tom Swartz, Stefan Fercot. Suggested by Tom Swartz.)
* Report original error and retries on local job failure. (Reviewed by Stefan Fercot.)
* Rename page checksum error to error list in info text output. (Reviewed by Stefan Fercot.)
* Add hints to standby replay timeout message. (Reviewed by Cynthia Shang, Stefan Fercot. Suggested by Leigh Downs.)
Since CentOS 8 will be EOL at the end of the year it makes sense to do this now. The centos:8 image is still used in documentation.xml because changes there require manual testing, which will need to be done at a later date. The changes are not user-facing, however, and can be done at any time.
Also update CentOS references to RHEL since that is what we are emulating for testing purposes.
Currently empty CATCH() blocks are always marked as covered because of the loop structure of error handling.
A prototype implementation of error handling without looping has shown that these CATCH() blocks are not covered without new tests. Whether or not that prototype gets committed it is worth adding the tests.
This is mostly to revert some comment changes in b11ab9f7 that will break the ppc64le patch, but at the same time keep the spelling consistent in all comments and documentation.
Also revert some space changes for the same reason.
Azurite released another breaking change (see fbd018cd, 096829b3, c38d6926, and Azurite issue 1039) so make adjustments as needed to documentation and tests.
Also remove some dead code that hid the repo-storage-host option and was made obsolete by all these changes.
The variants were needed to easily serialize configurations for the Perl code.
Unions are more efficient and will allow us to add new types that are not supported by variants, e.g. StringId.
These flags are used for all tests but it was not possible to add them to configure before the change in 046d6643. This is especially important for adhoc tests to ensure the flags are not forgotten.
Remove the flags from test make commands where they were being applied.
There is no change for production builds.
The TLS server is an alternative to using SSH for protocol connections to remote hosts.
This command is currently experimental and intended only for trial and testing. As such, the new commands and options will not show up in the command-line help unless directly requested.
Some tests can generate very large error messages for diffs and they often get cut off before the end.
Also fix a test so it does not create too large a buffer on the stack.
The previous format was custom for configuration parsing and was not as expressive as the pack format. An immediate benefit is that commands with the same optional rules are merged.
Defaults are now represented correctly (not multiplied), which simplifies the option default functions used by help.
These allow packs to be created without allocating a buffer in the case that the buffer already exists or the data is in a global constant.
Also fix a rendering issue in hrnPackReadToStr().
The vast majority of Strings are never modified so for most cases allocate memory for the string with the object. This results in one allocation in most cases instead of two. Use strNew() if strCat*() functions are needed.
Update varNewStr() in the same way since String Variants can never be modified. This results in one allocation in all cases instead of three. Also update varNewStrZ() to use STR() instead of strNewZ() to save two more allocations.
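A sketch of the single-allocation idea (not the actual String implementation): when the string will never be modified, the character data can live in the same allocation as the struct via a flexible array member, so two allocations become one.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct StringSketch
{
    size_t size;
    char buffer[];                                  // character data stored with the struct
} StringSketch;

static StringSketch *
strNewSketch(const char *source)
{
    size_t size = strlen(source);
    StringSketch *this = malloc(sizeof(StringSketch) + size + 1);   // one allocation for struct and data

    this->size = size;
    memcpy(this->buffer, source, size + 1);

    return this;
}

int
main(void)
{
    StringSketch *example = strNewSketch("immutable");
    printf("%zu: %s\n", example->size, example->buffer);
    free(example);

    return 0;
}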
A stanza name like global_stanza was not allowed because the code was not selective enough about how a global section should be formatted.
Update the config parser to correctly recognize global sections.
Remove the hardcoded storage helpers from storageRepoGet() except for the built-in Posix helper and the special remote helper.
The goal is to make storage driver development a bit easier by isolating as much of the code as possible into the driver module. This also makes coverage reporting much simpler for additional drivers since they do not need to provide coverage for storage/helper.
Consolidate the CIFS tests into the Posix tests since CIFS is just a special case of the Posix driver.
Test all storage features in the Posix test so that other storage driver tests do not need to provide coverage for storage/storage.
Remove some dead code in the storage/s3 test.
Currently link-map only allows links that exist in the backup manifest to be remapped to a new destination.
Allow link-map to create a new link as long as a valid path/file from the backup is referenced.
The local process will retry jobs (e.g. backup file) but after a certain number of failures gives up. Previously, the last error was reported but generally the first error is far more valuable. The last error is likely to be a cascade failure such as the protocol being out of sync.
Report the first error (and stack trace) and append the retry errors to the first error without stack trace information.
Currently errors found during the backup are only available in text output when specifying --set.
Add a flag to backup.info that is available in both the text and json output when --set is not specified. This at least provides the basic info that an error was found in the cluster during the backup, though details are still only available as described above.
These tests run in a container without permissions to mount tempfs, so add an option to ci.pl to not create tempfs. Also add some packages not in the base image.
Make the output consistent even when files are listed in a different order. This is purely for testing purposes, but there is no harm in consistent output.
Found on arm64.
This allows the stack trace to be set when an error is received by the protocol, rather than appending it to the message. Now these errors will look no different than any other error and the stack trace will be reported in the same way.
One immediate benefit is that test.pl --vm-out --log-level-test=debug will work for tests that check expect log results. Previously, the test would error at the first check because the stack trace included in the message would not match the expected log output.
This will allow new links to be added in a future commit. The current implementation is driven by the links that already exist in the manifest, which would make the new use case more complex to implement.
Also, add a more helpful error when a tablespace link is specified.
"error list" makes it clearer that other errors may be reported. For example, if checksum-page is true in the manifest but no checksum-page-error list is provided then the error is in alignment, i.e. the file size is not a multiple of the page size, with allowances made for a valid-looking partial page at the end of the file.
It is still not possible to differentiate between alignment and page checksum errors in the output but this will be addressed in a future commit.
Azurite introduced a breaking change in 8f63964e to automatically use host-style URIs when the endpoint appears to be a multipart hostname.
This option allows the user to configure which style URI will be used, but changing the endpoint might cause breakage if Azurite decides to use a different style. Future changes to Azurite may also cause breakage.
The pack is both more compact and more efficient than a variant.
Also aggregate the page error info in the main process rather than in the filter to allow additional LSN filtering, to be added in a future commit.
The push and pop code was duplicated in four places, so centralize the code into pckTagStackPop() and pckTagStackPush().
Also create a default bottom item for the stack to avoid allocating a list if there will only ever be the default container, which is very common. This avoids the extra time and memory to allocate a list.
Rather than working directly with Buffer types, define a new Pack pseudo-type that represents a Buffer containing a pack. This makes it clearer that a pack is being stored and allows stronger typing.
The Pack type is more compact and flexible than the Variant type. The Pack type also allows binary data to be stored, which is useful for transferring the passphrase in the CipherBlock filter.
The primary purpose is to allow more (and more complex) result data to be returned efficiently from the PageChecksum filter. For now the PageChecksum filter still returns the original Variant. Converting the result data will be the subject of a future commit.
Also convert filter types to StringId.
Command-line help is now generated at build time so it does not need to be committed. This reduces churn on commits that add configuration and/or update the help.
Since churn is no longer an issue, help.auto.c is bzip2 compressed to save space in the binary.
The Perl config parser (Data.pm) has been moved to doc/lib since the Perl build path is no longer required.
Likewise doc/xml/reference.xml has been moved to src/build/help/help.xml since it is required at build time.
The newer version of valgrind helps with some arm64 issues that have been fixed since the architecture has become more popular. Also add the valgrind builds to the Vagrantfile and Dockerfile.
Move the CA cert install from the base container to the test container. This means the CA cert can be changed without rebuilding all the base containers.
The primary benefit is that objects can allocate memory for their struct with the context, which saves an additional allocation and makes it easier to read context/allocation dumps. Also, the memory context does not need to be stored with the object since it can be determined using the object pointer.
Object pointers cannot be moved, so this means whatever additional memory is allocated cannot be resized. That makes the additional memory ideal for object structs, but not so much for allocating a list that might change size.
Mem contexts can no longer be reused since they will probably be the wrong size so their memory is freed on memContextFree(). This still means fewer allocations and frees overall.
Interfaces still need to be freed by mem context so the old objMove() and objFree() have been preserved as objMoveContext() and objFreeContext(). This will be addressed in a future commit.
The prior limitations were based on using getopt_long() to parse command-line options, which required a static list of allowed options. Setting index max too high bloated the binary unacceptably. 45a4e80 replaced the functionality of getopt_long() but the static list remained.
Improve cfgParseOption() to use available option data and remove the need for a static list. This also allows the option deprecations to be represented more compactly.
Index max is still capped at 256 because a large enough index could cause parseOptionIdxValue() to run out of memory since it allocates a static list based on the highest index found. If that function were improved with a map of found index values then index max could be set to UINT64_MAX.
Note that deprecations no longer set an index max or define whether reset is valid. These were space-saving measures which are no longer required. This means that indexed deprecated options will also be valid up to 256 and always allow reset, but it doesn't seem worth additional code to limit this behavior.
cfgParseOptionId() is no longer needed because calling cfgParseOption() with .ignoreMissingIndex = true duplicates the functionality of cfgParseOptionId(). This leads to some simplification in the help code.
The certs are available in test/certificate so it makes more sense to use them there. In addition the container does not need to be rebuilt unless the CA cert changes.
contextParentIdx was introduced in 90709dfd to improve the performance of mem context frees. memContextMove() did not get the message, however, and continued to use a loop to find the mem context in the old parent.
Use contextParentIdx to find the mem context in the old parent to avoid a loop.
IMPORTANT NOTE: The log level for copied files in the backup/restore commands has been changed to detail. This makes the info log level less noisy but if these messages are required then set the log level for the backup/restore commands to detail.
Bug Fixes:
* Detect errors in S3 multi-part upload finalize. (Reviewed by Cynthia Shang, Marco Montagna. Reported by Marco Montagna, Lev Kokotov, Anderson A. Mallmann.)
* Fix detection of circular symlinks. (Reviewed by Stefan Fercot. Reported by Rohit Raveendran.)
* Only pass selected repo options to the remote. (Reviewed by David Christensen, Cynthia Shang. Reported by Greg Sabino Mullane, David Christensen.)
Improvements:
* Binary protocol. (Reviewed by Cynthia Shang.)
* Automatically create data directory on restore. (Contributed by Stefan Fercot. Reviewed by David Steele. Suggested by Chris Bandy.)
* Allow restore --type=lsn. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang. Suggested by James Coleman.)
* Change level of backup/restore copied file logging to detail. (Reviewed by Stefan Fercot. Suggested by Jens Wilke.)
* Loop while waiting for checkpoint LSN to reach replay LSN. (Contributed by Stefan Fercot. Reviewed by David Steele. Suggested by Fatih Mencutekin.)
* Log backup file total and restore size/file total. (Reviewed by Cynthia Shang.)
Documentation Bug Fixes:
* Fix incorrect host names in user guide. (Reviewed by Stefan Fercot. Reported by Greg Sabino Mullane.)
Documentation Improvements:
* Update contributing documentation and add pull request template. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Rearrange backup documentation in user guide. (Reviewed by Cynthia Shang.)
* Clarify restore --type behavior in command reference. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Fix documentation and comment typos. (Contributed by Eric Radman. Reviewed by David Steele.)
Test Suite Improvements:
* Add check for test path inside repo path. (Reviewed by Greg Sabino Mullane. Suggested by Greg Sabino Mullane.)
* Add CodeQL static code analysis. (Reviewed by Cynthia Shang.)
* Update tests to use standard patterns. (Contributed by Cynthia Shang. Reviewed by David Steele.)
The error was written to the client and then another command read. If the write did not fail then the loop would never exit.
Instead exit on any error that is not raised by the command handler as we can pretty safely assume this is an unrecoverable protocol error. The command handler might throw a protocol error itself, but this should be caught in the next read or write in the main loop.
If the buffer was not full at EOF then ioReadSmall() would get stuck in an infinite loop. Instead, return on EOF even if the buffer is not full.
This is not an issue in released versions since ioReadSmall() is not being used.
Also fix a comment typo.
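A simplified sketch of the corrected loop shape (hypothetical names, not the real ioReadSmall()): keep reading until the buffer is full or EOF is reached; without the EOF check the loop would spin forever once the underlying read keeps returning zero bytes.

#include <stdio.h>
#include <string.h>

typedef size_t (*ReadFn)(void *data, char *buffer, size_t size);    // returns 0 at EOF

static size_t
readSmallSketch(ReadFn read, void *data, char *buffer, size_t size)
{
    size_t total = 0;

    while (total < size)
    {
        size_t actual = read(data, buffer + total, size - total);

        // Return even when the buffer is not full -- this is the fix for the infinite loop
        if (actual == 0)
            break;

        total += actual;
    }

    return total;
}

// Stub source that returns three bytes and then EOF
static size_t
stubRead(void *data, char *buffer, size_t size)
{
    static const char source[] = "abc";
    size_t *offset = data;
    size_t remaining = strlen(source) - *offset;
    size_t actual = remaining < size ? remaining : size;

    memcpy(buffer, source + *offset, actual);
    *offset += actual;

    return actual;
}

int
main(void)
{
    char buffer[8];
    size_t offset = 0;

    // The buffer is larger than the source, so EOF is reached before the buffer fills
    printf("read %zu bytes\n", readSmallSketch(stubRead, &offset, buffer, sizeof(buffer)));
    return 0;
}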
The backup size was a bit off because it did not include any files (e.g. backup_label, WAL files) that were added to the manifest after the main copy. To fix this move the log message to the very end of the backup.
Add size/file total log message to restore since it did not exist before.
The storageInfoList() test was broken by 54c4eb0c when the remote was changed to use writeable storage. Since the test driver was being injected into the wrong location, new default storage was created and the test effectively did nothing but still "succeeded".
To prevent this type of regression, add checks to ensure the expected test driver is being used and the callback runs the expected number of times.
Cleanup all clients inherited from the parent process so they cannot be accidentally used to send messages to servers that do not belong to this process.
This needs to be done carefully so that exit commands are not sent and processes are not terminated. To that end, clear the mem context callback on each object before freeing it.
Options for other repos can cause conflicts and should never be used. Each remote can address exactly one repo or pg cluster.
Also fix an outdated comment.
This function was included in a header but not declared inline, so linker errors happened when the header was included into more than one file.
Because of the setjmp() in TRY_BEGIN() it can't be inlined so put it in a C file.
Also add some missing headers.
There have been intermittent failures on f33 (with coverage) but not on u16 (without coverage).
Reproducing this reliably has been very difficult, so just try increasing the timeouts. This is based on the observation that tests with coverage take longer than tests without, which may lead the f33 tests to fail if CI is running slower than usual.
This will not increase the runtime of the test unless there is an error.
If configure/make has been run in the src path it can conflict with tests, which may require different build options.
Also add a comment when rebuilding for code generation.
This file duplicated the command list that already exists in parse.auto.c.
Combine the data from config.auto.c into parse.auto.c and adjust the interface functions as needed. Quite a few were able to be moved to parse.c as static.
If the test path is inside the repo path then it can cause strange issues during testing because the entire repo path is duplicated into the test path so that all tests see a consistent view of the repo.
Another solution might be to pick a better test path name and exclude it from the rsync, but this fix at least addresses the immediate issue.
This was started in c5ae047e but did not include generation of parse.auto.c.
The parser has also been improved with better errors and multiple passes to reduce dependency on ordering and produce cleaner output.
Option order resolution now includes cycle detection.
Remove strIdGenerate() since bldStrId() performs the same function without cluttering the core code. Since bldStrId() is intended to work in non-debug builds, move the validity checks for input strings out of the DEBUG block.
StringIds are generated as 5/6 bit, whichever is most efficient, for each option value. cfgOptionStrIdInternal() has been updated for this logic.
This allows a local/remote to be started independently of server initialization, which will be useful for implementing new transport types, e.g. TLS.
Also remove some dead code in localTest.c.
protocolServerNew() does not automatically process a noop so this made the handshake in the constructors asymmetric. This made testing a bit confusing since an extra noop was needed when cmdLocal()/cmdRemote() were not called (since they processed the noop sent by protocolClientNew()).
The extra noops also make complex protocol negotiation (coming in a future commit) more complicated and slower because of the additional round trips.
The storage tests were not modified to use the HRN_STORAGE_* or TEST_STORAGE_* macros since these tests are testing the storage drivers.
Note that posixTest.c removed an extraneous #endif // TEST_CONTAINER_REQUIRED and #ifdef TEST_CONTAINER_REQUIRED.
This PR includes all files in the storage/* test directory, namely: azureTest.c, cifsTest.c, gcsTest.c, posixTest.c, remoteTest.c, s3Test.c
--smart is now the default mode. Since --dev is now just an alias for --no-optimize, remove it. --dev-test has been a noop for a while, so this seems like a good time to remove it.
Also make the C auto-generator skip writing files that have not changed to avoid updating the timestamp.
Note that the logging output display of a parent/child test may look jumbled on some systems since the child and parent are attempting to log information at the same time. This is not an issue with the actual test, rather a harness issue that would be beyond the scope of this project to fix.
Parse enough of config.yaml to auto-generate config.auto.h and config.auto.c.
This commit implements most of the infrastructure needed to migrate the rest of the build code to C, but each set of auto-generated files will present its own challenges.
The build is now dependent on libyaml. At this point there is no need for a hard requirement, but that will come soon so it seems better to add the dependency now.
Update Ubuntu 12.04 to 16.04. Version 16.04 is recently EOL but testing on an old version is beneficial.
Update Ubuntu 18.04 to 20.04.
Update Fedora 32 to 33. Version 34 would have been preferred but there were some build issues, i.e. the default shell did not work with configure, and after ksh was installed configure locked up.
Add --no-install-recommends to apt-get commands to save a bit of time and space.
Update test Dockerfile to run in multiple steps. This makes the container larger but also makes rebuilding after changes faster. The --squash option may be used to keep the container small.
Remove obsolete casts in protocol/parallel module. These casts were included in the original migration because Ubuntu 12.04 32-bit gcc required them, but Ubuntu 16.04 32-bit gcc complains. There is no production issue here since at this point in the code the file descriptors are guaranteed to be >= 0.
In the first test (helpRenderSplitSize), add a test for an empty list. In that test and some others, update the test comment to clarify what the test is trying to accomplish.
Note that help test parameters can only use the harnessConfig system when testing option values that have been set since options passed to the help command are not "set" options.
Includes backup and backupCommon tests.
Some tests in backupTest were split out where they were originally combined into a single boolean check - which made it difficult to determine which part of the conditional failed.
String values were also removed where they were no longer needed.
It is possible for the checkpoint LSN to lag slightly behind the replay LSN until pg_control has been updated.
Add a loop to keep checking rather than failing when the checkpoint LSN has not been updated.
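A conceptual sketch of the loop (helper names are hypothetical): poll until the standby's checkpoint LSN reaches the replay LSN rather than failing on the first comparison.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static bool
waitForCheckpointSketch(uint64_t (*checkpointLsn)(void), uint64_t replayLsn, unsigned int timeoutSec)
{
    for (unsigned int elapsed = 0; elapsed < timeoutSec; elapsed++)
    {
        // pg_control may briefly lag the replay LSN, so keep checking until the timeout expires
        if (checkpointLsn() >= replayLsn)
            return true;

        sleep(1);
    }

    return false;
}

// Stub that advances the checkpoint LSN on each poll
static uint64_t
stubCheckpointLsn(void)
{
    static uint64_t lsn = 0;
    return lsn += 16;
}

int
main(void)
{
    printf("reached: %d\n", waitForCheckpointSketch(stubCheckpointLsn, 48, 10));
    return 0;
}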
This removes a lot of boilerplate where every instance needs to create these interfaces.
Also add HRN_FORK_*_NOTIFY*() macros to standardize synchronizing between the parent and child processes.
In both cases update the tests with the new macros.
Simplify HRN_FORK_CHILD_BEGIN() by adding optional parameters with the common defaults.
Add _FD() to macros that retrieve file descriptors to make their purpose clearer.
The log level for copied files in the backup/restore commands has been changed to detail. This makes the info log level less noisy but if these messages are required then set the log level for the backup/restore commands to detail.
The protocol does not put an end message on exit so there was a race between the main process exiting and the remote processes exiting. If the main process exited first then the remote processes might not write coverage data causing coverage to fail.
Fix by calling exit explicitly at the end of the test and update the harness to put an end message so the exits are synchronized.
In the commandTest the HRN_STORAGE_REMOVE replacement uses .errorOnMissing when the code being tested added the file. The reason for this is threefold:
1. to ensure that an inadvertent typo in the path/file name does not go undetected,
2. to ensure that nothing else has removed the file prior to the call, and
3. consistency
Also, add "stanza" to the comment when a stanza stop file is removed vs. an "all" stop file.
Multi-part upload may fail despite returning an HTTP success code. Check for the ETag field in the result and if not present consider the upload to have failed. This will trigger a retry at the local job level.
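A sketch of the check only (hypothetical names and a string search instead of real XML parsing): treat a multi-part upload completion response without an ETag field as a failure so the caller retries, even though HTTP reported success.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool
s3UploadCompleteOkSketch(int httpCode, const char *responseXml)
{
    // A 200 response can still carry an error body, so require the ETag element as proof of success
    return httpCode == 200 && responseXml != NULL && strstr(responseXml, "<ETag>") != NULL;
}

int
main(void)
{
    printf("%d\n", s3UploadCompleteOkSketch(200, "<CompleteMultipartUploadResult><ETag>\"x\"</ETag></CompleteMultipartUploadResult>"));
    printf("%d\n", s3UploadCompleteOkSketch(200, "<Error><Code>InternalError</Code></Error>"));
    return 0;
}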
Links were followed before they were checked for validity so a circular link would send the manifest build into endless recursion leading to a crash. Fix by moving the recursion after the link check.
Note that this issue has existed since the C migration and was not introduced by the refactor in eba013b.
Data directory creation was added during the C migration, but creation of the base data directory (PGDATA) was prevented by a check migrated from Perl.
Remove the check and update tests to create the data directory at least once.
Includes archiveCommon, archiveGet and archivePush.
Also fix a test in the original archivePush that was looking in repo instead of repo3, so that it uses the repo3 path as stated by the comment (line 879 in the original tests and line 855 in the new tests).
It seems better to use TEST_PATH in combination with a constant string rather than have a number of different path constants. This improves readability and reduces confusion about which constant should be used.
For tests already updated as part of the macro-replacement effort, the output tests (TEST_ERROR, TEST_RESULT_LOG, TEST_STORAGE_LIST and TEST_RESULT_STR) have been simplified for readability to remove all but the TEST_PATH constants. The ongoing macro-replacement effort will include these changes.
Updated: expireTest, stanzaTest, checkTest, infoTest, verifyTest (infoArchive and infoBackup had no changes).
Switch from JSON-based to binary protocol for communicating with local and remote process. The pack type is used to implement the binary protocol.
There are a number of advantages:
* The pack type is more compact than JSON and more efficient to render/parse.
* Packs are more strictly typed than JSON.
* Each protocol message is written entirely within ProtocolServer/ProtocolClient so is less likely to get interrupted by an error and leave the protocol in a bad state.
* There is no limit on message size. Previously this was limited by buffer size without a custom implementation, as was done for reading/writing files.
Some cruft from the Perl days was removed, specifically allowing NULL messages and stack traces. This is no longer possible in C.
There is room for improvement here, in particular locking down the allowed sequence of protocol messages and building a state machine to enforce it. This will be useful for resetting the protocol when it gets in a bad state.
Some tests had to be reordered or updated, as follows:
* Reordered tests at lines 317 and 331 to avoid unnecessary file removal.
* Change "stanza found" test at line 1735 to reflect real-life scenario. Originally this test had the cipher-pass environment key set up which caused the RepoGrp to be 2 but with no valid repo path. This resulted in the repo loops executing for the repo2 but since the path was not defined, the tests just reported "none" for cipher which is incorrect since the repo IS encrypted.
* Moved order of HRN_CFG_LOAD in some tests when able to avoid using storageTest.
It is better to clear errors after the catch block completes rather than leave them set until the next error. This also makes it possible to tell when an error is currently being handled, which a function further down the stack might use to modify its behavior. Currently this is only useful in testing, but clearing the error seems like a good idea in general.
Two places used errors outside the CATCH() block. Mem context cleanup now uses a FINALLY(), which is a better implementation anyway. The error handling in main() now calls exitSafe() from within the CATCH() block.
Add StringList, which is not a primitive type but rather an array of String types.
Also update pckWriteToLog() to work after pckWriteEnd(), i.e. this->tagStackTop is NULL.
Move a PackRead or PackWrite object to a new mem context.
Also note that these functions may not work as expected with pack objects created by pckReadNewBuf() and pckWriteNewBuf() since the pack object does not have ownership of the passed buffer and cannot move it.
The hrnErrorThrowP() macro allows errors with specified fields to be generated, which simplifies testing.
Update the common/exit test to use the new macro.
Azurite, which is used for testing, did not enforce this before so the capital letters were not a problem. Now Azurite enforces the same rules as Azure so use lower-case identifiers instead.
These names were only used in integration tests so there was no production impact.
This allows TEST_STORAGE_EXISTS() to be used in most cases where TEST_STORAGE_REMOVE() was used before.
Rename TEST_STORAGE_REMOVE() to HRN_STORAGE_REMOVE() now that it is no longer used as a test. Still allow an error when the file is missing just to help keep tests tidy.
Since the pack type was stored in 4 bits, only 15 values were allowed (0 was reserved).
Allow virtually unlimited types by storing type info in a base-128 encoded integer following the tag when the type bits in the tag are set to 0xF.
Also separate the type IDs used in the pack (PackTypeMap) from those presented to the user (PackType). The prior PackType enum exposed implementation details to the user, e.g. pckTypeUnknown.
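A generic base-128 (varint) sketch of the encoding idea, not the actual pack code: when the 4-bit type field in the tag is 0xF, the real type ID follows as a variable-length integer, 7 bits per byte with the high bit marking continuation, so type IDs are effectively unlimited.

#include <stdint.h>
#include <stdio.h>

static unsigned int
base128Encode(uint64_t value, uint8_t *out)
{
    unsigned int size = 0;

    while (value >= 0x80)
    {
        out[size++] = (uint8_t)(value & 0x7F) | 0x80;   // low 7 bits plus a continuation bit
        value >>= 7;
    }

    out[size++] = (uint8_t)value;                       // final byte has the high bit clear

    return size;
}

int
main(void)
{
    uint8_t buffer[10];
    unsigned int size = base128Encode(300, buffer);     // a type ID like 300 would not fit in 4 bits

    for (unsigned int i = 0; i < size; i++)
        printf("%02x ", buffer[i]);                     // prints "ac 02"

    printf("\n");
    return 0;
}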
The functions were named with short integer representations (e.g. I32) but the param structs were using longer ones, e.g. UInt32. Shorten the integer representations in the param structs to match.
Also rename pckReadUInt64Internal() to pckReadU64Internal() for the same reason.
The pg storage must be started before the repo storage to set the max remotes allowed to 2. The protocol helper expects all remotes to have the same type so we are cheating here a bit, but without this ordering the second remote will never be sent an explicit exit and may not save coverage data.
Bug Fixes:
* Fix issues with leftover spool files from a prior restore. (Reviewed by Cynthia Shang, Stefan Fercot, Floris van Nee. Reported by Floris van Nee.)
* Fix issue when checking links for large numbers of tablespaces. (Reviewed by Cynthia Shang, Avinash Vallarapu. Reported by Avinash Vallarapu.)
* Free no longer needed remotes so they do not timeout during restore. (Reviewed by Cynthia Shang. Reported by Francisco Miguel Biete.)
* Fix help when a valid option is invalid for the specified command. (Reviewed by Stefan Fercot. Reported by Cynthia Shang.)
Features:
* Add PostgreSQL 14 support. (Reviewed by Cynthia Shang.)
* Add automatic GCS authentication for GCE instances. (Reviewed by Jan Wieck, Daniel Farina.)
* Add repo-retention-history option to expire backup history. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele.)
* Add db-exclude option. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
Improvements:
* Change archive expiration logging from detail to info level. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Remove stanza archive spool path on restore. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Do not write files atomically or sync paths during backup copy. (Reviewed by Stephen Frost, Stefan Fercot, Cynthia Shang.)
Documentation Improvements:
* Update contributing documentation. (Contributed by Cynthia Shang. Reviewed by David Steele, Stefan Fercot.)
* Consolidate RHEL/CentOS user guide into a single document. (Reviewed by Cynthia Shang.)
* Clarify that repo-s3-role is not an ARN. (Contributed by Isaac Yuen. Reviewed by David Steele.)
HRN_CFG_LOAD() handles the majority of test configuration loads and has various options for special cases.
It was not clear when to use harnessCfgLoadRaw() vs harnessCfgLoad(). Now "raw" functionality is granular and enabled by parameters, e.g. noStd.
Make the macros more consistent in format and make sure that each macro outputs a line number before doing any work so when errors happen it is clear where they happened.
Add noRecurse option to TEST_STORAGE_LIST().
Add comment option to all storage macros.
Instead store the line number in hrnTestLogPrefix() so it doesn't need to be passed to hrnTestResultBegin().
Also add missing linefeed in hrnStorageList().
All instances of storageTest are better represented with storagePg*(), which allows TEST_PATH and TEST_PATH_PG to be omitted.
Also remove some headers which are no longer needed.
The default is to keep all backup history to match the current behavior. Even with the minimal configuration (0 days), unexpired backups are always kept in history.
When a full backup manifest expires, all dependent differential/incremental manifests expire as well.
Run the remote process inside a forked child process instead of exec'ing it. This allows coverage to accumulate in the remote process rather than needing to test the remote protocol functions directly, resulting in better end-to-end testing and less test duplication. Another advantage is that the pgbackrest binary does not need to be built for the test and the test does not need to run in a container.
This allows protocolRemoteExec() to be shimmed, which means the remote can be run as a child of the test process, simplifying coverage testing.
The shim does not need SSH parameters, so also split those out into a separate function and update the tests to match.
Add the executable to the parameter list to avoid the first option being lost. The backup, restore, and verify tests worked OK with their first option being defaulted because the lost option ended up being job-retry, which worked fine as the default.
Add hrnProtocolLocalShimUninstall() to allow the shim to be uninstalled.
Log shim at debug level to make it obvious in the logs when a shim is in use.
There are no code changes from PostgreSQL 13 so simply add the new version.
Add CATALOG_VERSION_NO_MAX to allow the catalog version to "float" during the PostgreSQL beta/rc period so new pgBackRest versions are not required when the catalog version changes.
Update the integration tests to handle new PostgreSQL startup messages.
manifestLinkCheck() was pretty inefficient so large numbers of links caused it to use a lot of memory and eventually crash. This is a more efficient implementation which runs in O(n log n) and uses far less memory.
Checking for duplicate file links has been added, which represents a change in behavior, but hopefully a good one.
Test and remove WAL segment 000000010000000100000002 in the test where it is created rather than as a byproduct of a much later test.
Remove incorrect local role from test. It worked, but was not the correct command role to be using when calling cmdArchiveGet().
Move some harness headers down to the correct section.
Comment formatting was not used much but it incurred a heavy cost in each macro to process possible formatting.
Remove formatted comments where they did not contain valuable information and replace with strZ(strNewFmt()) otherwise.
A define was already added for TEST_PATH but it was not widely used. Replace all occurrences of testPath() with TEST_PATH in the tests.
Replace testUser() with TEST_USER, testGroup() with TEST_GROUP, testRepoPath() with HRN_PATH_REPO, testDataPath() with HRN_PATH, testProjectExe() with TEST_PROJECT_EXE, and testScale() with TEST_SCALE.
Replace {[path]}, {[user]}, {[group]}, etc. with defines and remove hrnReplaceKey(). This is better than having two ways to deal with replacements.
In some cases the original test*() getters were kept because they are used by the harness, which does not have access to the new defines. Move them to harnessTest.intern.h to indicate that the tests should no longer use them.
Replace all instances of strNew("") with strNew() and use strNewZ() for non-empty zero-terminated strings. Besides saving a useless parameter, this will allow smarter memory allocation in a future commit by signaling intent, in general, to append or not.
In the tests use STRDEF() or VARSTRDEF() where more appropriate rather than blindly replacing with strNewZ(). Also replace strLstAdd() with strLstAddZ() where appropriate for the same reason.
Run the local process inside a forked child process instead of exec'ing it. This allows coverage to accumulate in the local process rather than needing to test the local protocol functions directly, resulting in better end-to-end testing and less test duplication. Another advantage is that the pgbackrest binary does not need to be built for the test.
The backup, restore, and verify command tests have been updated to use the new shim for coverage.
A shim allows a test harness to access static functions and variables in a C module, and also allows functions to be shimmed (i.e. overridden) for the purposes of testing.
For instance, coverage testing works when a process that is normally exec'd is run as a forked child process instead.
getopt_long() requires an exhaustive list of all possible options that may be found on the command line. Because of the way options are indexed (e.g. repo1-4, pg1-8) optionList[] has 827 entries and we have kept it small by curtailing the maximum indexes very severely. Another issue is that getopt_long() scans the array sequentially so parsing gets slower as the index maximums increase.
Replace getopt_long() with a custom implementation that behaves the same but allows options to be parsed with a function instead of using optionList[]. This commit leaves the list in place in order to focus on the getopt_long() replacement, but cfgParseOption() could be replaced with a more efficient implementation that removes the need for optionList[].
This implementation also fixes an issue where invalid options were misreported in the error message if they only had one dash, e.g. -config. This seems to have been some kind of problem in getopt_long(), but no investigation was done since the new implementation fixes it.
Tests were added at 0825428, 2b8d2da, 34dd663, and 384f247 to check that previously untested getopt_long() behavior doesn't change.
This makes the macro useful when subpaths are present.
Identify types other than files (path, link, etc.) with a single appended character for easier debugging.
Remove stanza archive spool path so existing files do not interfere with the new cluster. For instance, old archive-push acknowledgements could cause a new cluster to skip archiving. This should not happen if a new timeline is selected but better to be safe. Missing stanza spool paths are ignored.
Also add new path expression STORAGE_SPOOL_ARCHIVE to easily access this path.
When running on a GCE instance the authentication token can be pulled directly from the instance metadata. This is configured with repo-gcs-key-type=auto.
In a separate commit (26fefa6), move the code that parses the token response into a separate function, storageGcsAuthToken(), since it is now needed by two key types. This drastically improves the readability of the main commit.
When running outside of our standard Vagrantfile the default will not be set correctly, so require the user to set it.
In any case, this option is primarily useful for reporting so note that in the command line help.
Some version interface test functions were integrated into the core code because they relied on the PostgreSQL versioned interface. Even though they were compiled out for production builds they cluttered the core code and made it harder to determine what was required by core.
Create a PostgreSQL version interface in a test harness to contain these functions. This does require some duplication but the cleaner core code seems a good tradeoff. It is possible for some of this code to be auto-generated but since it is only updated once per year the matter is not pressing.
If an ok file (which indicates the WAL segment was not found) is present on the first iteration of the loop then remove it and spawn the async process to retry. This action also resets the queue.
Also error if no response is received from the async process rather than returning not found. PostgreSQL will respond the same either way, but this allows us to determine when something is going wrong with the async process.
Update archiveAsyncStatus() to allow warnings to be suppressed. It is better to retry if no WAL segment was found before warning because the warning might be stale.
If an option name has a space at the beginning then it will be considered an invalid command, but a space at the end is an invalid option. Add tests for these conditions.
Spaces in option arguments should be preserved, so add a test to be sure this is true.
Convert most of the remaining options that benefit from being StringIds. Since all the command modules can include config.h directly it makes sense to auto-generate these values instead of manually creating an enum for each one.
For the time being StringIds are not being auto-generated because the StringId code does not exist in Perl. However, the *_Z zero-terminated constants for each allowed option value are now auto-generated.
The CentOS 7 documentation test relies on PostgreSQL 9.5 which has been removed from the yum.p.o repository package. Switch the test to CentOS 8 to fix the immediate issue, but a decision on the PostgreSQL 9.5 documentation will need to be made before the next release.
The tests worked fine on multiple architectures, but would only run "bare metal", i.e. tests that required containers could not be run.
Enable basic multi-architecture support by allowing containers to be built using whatever architecture the host supports. Also allow cached containers to be defined for multiple architectures in container.yaml.
Add a Dockerfile which can be used as a container for other containers to provide a consistent development environment.
The primary goal is to allow development on Mac M1 but other architectures should find these improvements useful.
Allows removal of backupType()/backupTypeStr() and improves debug logging of the enum.
Move BackupType enum and string constants to info/infoBackup.h so they are available to more modules. Also convert InfoBackup to use BackupType instead of a String.
Centralize the formatting of the configuration value for display to the user or passing on a command line.
For the new functions, if the value was set by the user via the command line, config, etc., then that exact value will be displayed. This makes it easier for the user to recognize the value and saves having to format it into something reasonable, especially for time and size option types.
Note that cfgOptTypeHash and cfgOptTypeList option types are not supported by these functions, but they are generally not displayed to the user as a whole.
This also fixes a bug in config/load.c where time values were not being formatted correctly in an error message.
Use StringIds for the storage types (e.g. STORAGE_S3_TYPE) and configuration settings, e.g. cfgOptS3KeyType.
Also add new config functions and harness config functions to support StringIds.
There is no need to write the file atomically (e.g. via a temp file on Posix) because checksums are tested on resume after a failed backup. The path does not need to be synced for each file because all paths are synced at the end of the backup.
This functionality was not lost during the migration -- it never existed in the Perl code, though these settings are used in restore. See 59f1353 where backupFile() was migrated to C.
Fix the segfault when help is requested for an internal option by adding help for all internal options that are valid for a default command role.
Also print warnings about internal options in code rather than putting them in each command/option description.
The enum truncation observed was due to the value getting passed via a protocol function which silently narrowed the enum.
Even so, add some tests to ensure tested platforms support 64-bit enums.
Although kvAdd() works like kvPut() on the first call, kvPut() is more efficient when a key has a single value.
Update the comment to clarify that kvAdd() is seldom required.
Getting help for a valid option that was invalid for the command would segfault.
Add a check to ensure the option is valid for the command's default role.
This lets the compiler know that these variables are not modified which should lead to better optimization.
Smart compilers should be able to figure this out on their own, but marking parameters const is still good for documentation.
It is often useful to represent identifiers as strings when they cannot easily be represented as an enum/integer, e.g. because they are distributed among a number of unrelated modules or need to be passed to remote processes. Strings are also more helpful in debugging since they can be recognized without cross-referencing the source. However, strings are awkward to work with in C since they cannot be directly used in switch statements leading to less efficient if-else structures.
A StringId encodes a short string into an integer so it can be used in switch statements but may also be readily converted back into a string for debugging purposes. StringIds may also be suitable for matching user input providing the strings are short enough.
This patch includes a sample of StringId usage by converting protocol commands to StringIds. There are many other possible use cases. To list a few:
* All "types" in storage, filters, IO, etc. These types are primarily for identification and debugging so they fit well with this model.
* MemContext names would work well as StringIds since these are entirely for debugging.
* Option values could be represented as StringIds which would mean we could remove the functions that convert strings to enums, e.g. CipherType.
* There are a number of places where enums need to be converted back to strings for logging/debugging purposes. An example is protocolParallelJobToConstZ. If ProtocolParallelJobState were defined as:
typedef enum
{
    protocolParallelJobStatePending = STRID5("pend", ...),
    protocolParallelJobStateRunning = STRID5("run", ...),
    protocolParallelJobStateDone = STRID5("done", ...),
} ProtocolParallelJobState;
then protocolParallelJobToConstZ() could be replaced with strIdToZ(). This also applies to many enums that we don't convert to strings for logging, such as CipherMode.
As an example of usage, convert all protocol commands from strings to StringIds.
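An illustrative 5-bit packing sketch (the real STRID5() macro and its exact bit layout differ): a short lower-case string fits in a single integer, which can then be used directly in a switch statement.

#include <stdint.h>
#include <stdio.h>

static uint64_t
stringIdSketch(const char *string)
{
    uint64_t result = 0;

    // Pack up to 12 characters of a-z at 5 bits each (1-26), leaving zero as the terminator
    for (unsigned int i = 0; string[i] != '\0' && i < 12; i++)
        result |= (uint64_t)(string[i] - 'a' + 1) << (5 * i);

    return result;
}

int
main(void)
{
    switch (stringIdSketch("run"))
    {
        // "run" packed with the same scheme: r=18, u=21, n=14
        case (18ULL | (21ULL << 5) | (14ULL << 10)):
            printf("running\n");
            break;

        default:
            printf("unknown\n");
            break;
    }

    return 0;
}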
Restore excluding the specified databases. Databases excluded will be restored as sparse, zeroed files to save space but still allow PostgreSQL to perform recovery. After recovery, those databases will not be accessible but can be removed with the drop database command. The --db-exclude option can be passed multiple times to specify more than one database to exclude.
When used in combination with the --db-include option, --db-exclude will only apply to standard system databases (template0, template1, and postgres).
This function has not been used since the switch to the fork/exec model.
lockClear() was still used in one test (other than the lock test) so update the test and remove the function.
Both NDEBUG and DEBUG were used in the code, which was a bit confusing.
Define DEBUG in build.auto.c so it is available in all C and header files and stop using NDEBUG. This is preferable to using NDEBUG everywhere since there are multiple DEBUG* defines, e.g. DEBUG_COVERAGE.
Note that NDEBUG is still required since it is used by the C libraries.
Inline functions are more efficient and if they are not used are automatically omitted from the binary.
This also makes the implementation of these functions easier to find and removes the need for a declaration. That is, the complete implementation is located in the header rather than being spread between the header and C file.
OBJECT_DEFINE_MOVE() and OBJECT_DEFINE_FREE() will be replaced with inlines so this would be the only macro left that is constructing functions.
It is not a great pattern anyway since it makes it hard to find the function implementation.
This macro was originally intended to simplify the creation of simple getters but it has been superseded by the pattern introduced in 79a2d02c.
Remove instances of OBJECT_DEFINE_GET() to avoid confusion with the new pattern.
Introduce a standard pattern for exposing public struct members (as documented in CODING.md) and use it to inline lstSize() which should improve the performance of iterating large lists.
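The general shape of the pattern is sketched below (member naming is simplified and illustrative, not copied from the source). The header exposes only the "pub" portion of the object; the full struct definition in the C file begins with it, so an inline getter can read the member with no function call overhead:

// Illustrative sketch of the public struct member pattern
typedef struct ListPub
{
    unsigned int size;                                  // List size (hypothetical member name)
} ListPub;

typedef struct List List;                               // Full definition stays in the C file

static inline unsigned int
lstSize(const List *const this)
{
    return ((const ListPub *)this)->size;               // The pub struct is the first member
}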
Since many functions in these modules are just thin wrappers of other functions, inline where appropriate.
Remove strLstExistsZ() and strLstInsertZ() since they were only used in tests, where the String version of the function is sufficient.
Move strLstNewSplitSizeZ() to command/help/help.c and remove strLstNewSplitSize(). This function has only ever been used by help and does not seem widely applicable.
Bug Fixes:
* Fix option warnings breaking async archive-get/archive-push. (Reviewed by Cynthia Shang. Reported by Lev Kokotov.)
* Fix memory leak in backup during archive copy. (Reviewed by Cynthia Shang. Reported by Christian ROUX, Efremov Egor.)
* Fix stack overflow in cipher passphrase generation. (Reviewed by Cynthia Shang. Reported by bsiara.)
* Fix repo-ls / on S3 repositories. (Reviewed by Cynthia Shang. Reported by Lesovsky Alexey.)
Features:
* Multiple repository support. (Contributed by Cynthia Shang, David Steele. Reviewed by Stefan Fercot, Stephen Frost.)
* GCS support for repository storage. (Reviewed by Cynthia Shang.)
* Add archive-header-check option. (Reviewed by Stephen Frost, Cynthia Shang. Suggested by Hans-Jürgen Schönig.)
Improvements:
* Include recreated system databases during selective restore. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Exclude content-length from S3 signed headers. (Reviewed by Cynthia Shang. Suggested by Brian P Bockelman.)
* Consolidate less commonly used repository storage options. (Reviewed by Cynthia Shang.)
* Allow custom config-path default with ./configure --with-configdir. (Contributed by Michael Schout. Reviewed by David Steele.)
* Log archive copy during backup. (Reviewed by Cynthia Shang, Stefan Fercot.)
Documentation Improvements:
* Update reference to include links to user guide examples. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Update selective restore documentation with caveats. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-type clarification to archive-copy documentation. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add compress-level defaults per compress-type value. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add note about required NFS settings being the same as PostgreSQL. (Contributed by Cynthia Shang. Reviewed by David Steele.)
hrnReplaceKey() was added to the TEST_ERROR*() macros in 58760486 but some calls to TEST_ERROR*() already used it. This led to the function being called twice on the same buffer, which had no functional effect but which valgrind definitely did not like.
Remove extraneous calls to make valgrind happy. Since this is test code there are no implications for production.
The command-example and command-example-list elements were removed from the documentation rendering some time ago so these tags were dead code. The tags, however, contained some examples and information that were pertinent to the command, so where possible, the information was included in the description of the command and/or the user-guide and links to the relevant user guide sections were added.
Note that some commands could not be updated with user guide references since doing so would cause a cyclical reference in the user guide. These commands have an internal comment to indicate this.
In addition, some clarifications were added (e.g. expire --set option) where information was lacking.
Enabled by default, this option checks the WAL header against the PostgreSQL version and system identifier to ensure that the WAL is being copied to the correct stanza. This is in addition to checking pg_control against the stanza and verifying that WAL is being copied from the same PostgreSQL data directory where pg_control is located.
Therefore, disabling this check is fairly safe but should only be done when required, e.g. if the WAL is encrypted.
3b8f0ef missed some cases that could cause archive-push to fail:
* Checking archive info.
* Checking to see if a WAL segment already exists.
These cases are now handled so archive-push can succeed on any valid repos.
This improvement reduces the number of errors thrown; these errors will now be reported as a status for the stanza or repo as appropriate. Invalid option configurations are still thrown but all other errors are caught, formatted and reported. This was necessary for multiple repositories so that the command can complete gathering information from each repository and report the results rather than immediately aborting when an error occurs.
Two new error codes were introduced:
6 = requested backup not found
99 = other, which is used to indicate an error has occurred that requires more details to be provided
A new stanza name of "[invalid]" was created for instances where a stanza is not specified and no stanza can be found.
If there is only one repository configured the error will move up to the stanza level with the standard error formatting of 'error (message)' where the message will be "other" and the details of the error will be listed on the next line(s):
stanza: stanza1
    status: error (other)
            [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
            CryptoError: cipher header invalid
            HINT: is or was the repo encrypted?
            FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
            HINT: backup.info cannot be opened and is required to perform a backup.
            HINT: has a stanza-create been performed?
            HINT: use option --stanza if encryption settings are different for the stanza than the global
    cipher: aes-256-cbc
If a backup set is requested but is not found on any repo, a stanza-level status error of 'requested backup not found' is reported when there are no other errors:
pgbackrest info --stanza=demo --set=bogus
stanza: demo
    status: error (requested backup not found)
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none
If there are multiple repositories configured and a single repo is in error but the other repos are ok or have a different error:
pgbackrest info --stanza=demo --set=20210322-171211F
stanza: demo
    status: mixed
        repo1: error
               [CryptoError] unable to load info file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info' or '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy':
               CryptoError: cipher header invalid
               HINT: is or was the repo encrypted?
               FileMissingError: unable to open missing file '/var/lib/pgbackrest/repo/backup/stanza1/backup.info.copy' for read
               HINT: backup.info cannot be opened and is required to perform a backup.
               HINT: has a stanza-create been performed?
               HINT: use option --stanza if encryption settings are different for the stanza than the global
        repo2: ok
    cipher: mixed
        repo1: aes-256-cbc
        repo2: none
    db (current)
        wal archive min/max (12): 000000010000000000000001/000000010000000000000003
        full backup: 20210322-171211F
            timestamp start/stop: 2021-03-22 17:12:11 / 2021-03-22 17:12:28
            wal start/stop: 000000010000000000000002 / 000000010000000000000002
            database size: 23.4MB, database backup size: 23.4MB
            repo2: backup set size: 2.8MB, backup size: 2.8MB
            database list: postgres (13359)
JSON output will include the repository information and any error information. If no stanzas are found, then [invalid] will be set as the name:
[
    {
        "archive":[],
        "backup":[],
        "cipher":"none",
        "db":[],
        "name":"[invalid]",
        "repo":[
            {
                "cipher":"none",
                "key":1,
                "status":{
                    "code":99,
                    "message":"[PathOpenError] unable to list file info for path '/var/lib/pgbackrest/repo2/backup': [13] Permission denied"
                }
            }
        ],
        "status":{
            "code":99,
            "lock":{"backup":{"held":false}},
            "message":"other"
        }
    }
]
The content-length header was being signed since it was the only header that didn't need to be and it seemed simpler just to sign it as well. Also, the S3 documentation encourages signing as many headers as possible to avoid tampering.
However, some proxies munge this header causing authentication failure, so skip signing content-length.
Make protocol handlers have one function per command. This allows the logic of finding the handler to be in ProtocolServer, isolates each command to a function, and removes the need to test the "not found" condition for each handler.
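A hedged sketch of the general shape (types and names here are illustrative, not the actual ProtocolServer API):

#include <stddef.h>
#include <stdint.h>

// Each command maps to exactly one handler function and the server locates the
// handler by searching a table, so the "command not found" case is handled in
// one place rather than being tested inside every handler.
typedef uint64_t StringId;                              // Command identifier
typedef void (*ProtocolHandler)(void *const param);     // Hypothetical handler signature

typedef struct ProtocolHandlerEntry
{
    StringId command;                                   // Command the handler implements
    ProtocolHandler handler;                            // Function implementing the command
} ProtocolHandlerEntry;

static ProtocolHandler
protocolHandlerFind(const ProtocolHandlerEntry *const handlerList, const size_t handlerListSize, const StringId command)
{
    for (size_t handlerIdx = 0; handlerIdx < handlerListSize; handlerIdx++)
        if (handlerList[handlerIdx].command == command)
            return handlerList[handlerIdx].handler;

    return NULL;                                        // Not found -- reported once by the server
}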
S3 returns 200 for HEAD / which indicates it is a file but does not return the expected headers which causes an error.
Rather than fix this for S3, just automatically return / as not existing for any storage that does not support paths.
Also add some defensive checks to prevent this from generating a segfault if it happens again.
Some standard system databases (e.g. postgres) may be recreated by the user and have an OID that makes them look like user databases.
Identify the standard three system databases (template0, template1, postgres) and restore them non-zeroed no matter what OID they have.
Cipher type was inferred from the presence of cipherSubPass rather than being passed explicitly in order to maintain compatibility with Perl backupFile().
Now that Perl is gone it makes sense to pass it explicitly, as we do elsewhere.
This test was added to take the place of another test, which turned out not to be workable.
Even so, it adds coverage at little cost so it seems worth keeping.
When the FUNCTION_*_RESULT*() macros were renamed to FUNCTION_*_RETURN_*() in the core code the test harness macros were missed.
Update them to make the naming consistent.
The stanza-create, stanza-upgrade, and stanza-delete commands were required to be run on the repository host. When only one repository was allowed this was not a problem.
However, with the introduction of multiple repository support, this becomes more of a burden on the user, so the stanza-create, stanza-upgrade, and stanza-delete commands have been improved to allow them to be run remotely.
Moving to YAML allows the configuration data to be read by C programs.
Also go back to using YAML::XS since it is the only implementation that has proper boolean support.
Up to four repositories may be configured. A potential benefit is the ability to have a local repository for fast restores and a remote repository for redundancy.
Some commands, e.g. stanza-create/stanza-update, will automatically work with all configured repositories while others, e.g. stanza-delete, will require a repository to be specified using the repo option. See the command reference for details on which commands require the repository to be specified.
Note that the repo option is not required when only repo1 is configured in order to maintain backward compatibility. However, the repo option is required when a single repo is configured as, e.g. repo2. This is to prevent command breakage if a new repository is added later.
The archive-push command will always push WAL to the archive in all configured repositories but backups will need to be scheduled individually for each repository. In many cases this is desirable since backup types and retention will vary by repository. Likewise, restores must specify a repository. It is generally better to specify a repository for restores that has low latency/cost even if that means more recovery time. Only restore testing can determine which repository will be most efficient.
For single repository configurations there should be no change in behavior.
Some commands (repo-*, verify) still required the --repo option but it makes sense to give them the same treatment as backup and simply use the first repo when one is not specified.
This leaves stanza-delete as the only remaining command that requires --repo. This is by design to enhance safe usage.
The following options are renamed as specified:
repo1-azure-ca-file -> repo1-storage-ca-file
repo1-azure-ca-path -> repo1-storage-ca-path
repo1-azure-host -> repo1-storage-host
repo1-azure-port -> repo1-storage-port
repo1-azure-verify-tls -> repo1-storage-verify-tls
repo1-s3-ca-file -> repo1-storage-ca-file
repo1-s3-ca-path -> repo1-storage-ca-path
repo1-s3-host -> repo1-storage-host
repo1-s3-port -> repo1-storage-port
repo1-s3-verify-tls -> repo1-storage-verify-tls
The old option names (e.g. repo1-s3-port) will continue to work for repo1, but repo2, etc. will require the new names.
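For example, a pgbackrest.conf fragment might mix the two forms like this (only the renamed options are shown; the host and port values are purely illustrative):

[global]
# repo1 may continue to use the old option names
repo1-s3-host=s3.example.com
repo1-s3-port=443
# repo2 and above must use the new storage option names
repo2-storage-host=s3.example.org
repo2-storage-port=443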
The archive-push command will continue to push even after it gets a write error on one or more repos. The idea is to archive to as many repos as possible even if we still need to throw an error to PostgreSQL to prevent it from removing the WAL file.
The real/all test could fill the ramdisk depending on which vm and pg version were selected.
Debug level should be fine for most purposes and the level can be increased when needed.
The restore command automatically defaults to selecting the latest backup from a single repository. With multiple repositories configured, the restore command will now default to selecting the latest backup from the first repository where backups exist. The order in which the repositories are checked is dictated by the pgbackrest.conf order.
To select from a specific repository, the --repo option can be passed (e.g. --repo=1). The --set option can be passed if a backup other than the latest is desired.
Repositories will be searched in order for the requested archive file.
Errors will be reported as warnings as long as a valid copy of the archive file is found.
Errors are logged to the log file rather than thrown. If, after processing all repos, one or more errors occurred, then a single error will be thrown to indicate there were errors and the log file should be inspected.
Also update log messages to be more consistent with new patterns.
This is more efficient and the error case can be an assert rather than a runtime error.
For extra safety initialize destinationSize to SIZE_MAX to increase the chances of an error if the switch fails.
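A minimal sketch of the pattern described here (the encoding type and size calculation are hypothetical placeholders):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Initialize the result to an implausible value so a missed switch case is
// likely to trip the assert rather than silently return a zero size.
typedef enum {encodingTypeBase64} EncodingType;         // Hypothetical enum

static size_t
destinationSizeGet(const EncodingType type, const size_t sourceSize)
{
    size_t destinationSize = SIZE_MAX;

    switch (type)
    {
        case encodingTypeBase64:
            destinationSize = (sourceSize + 2) / 3 * 4; // Encoded base64 size
            break;
    }

    assert(destinationSize != SIZE_MAX);

    return destinationSize;
}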
There is not enough code here to justify multiple files and declaring the functions for each encoding as static allows the compiler to inline where appropriate.
These constructors wrap encodeToStr() and decodeToBin(), making them convenient and safe by eliminating the need to create intermediate buffers. Encoding/decoding is performed directly into the target String/Buffer. Sizing of the destination buffer is handled by the new functions so it doesn't have to be done at each call site.
If the second letter is a capital or a digit then the word is likely an acronym, so don't lower-case the first letter.
For now only the digit case is checked since there are no summaries with a capital as the second letter.
GCS requires mixed encoding in the path so encoding inside HttpRequest does not work.
Instead, require the path to be correctly encoded before being passed to HttpRequest.
The path was originally named uri due to the canonicalized path being called "canonicalized uri" in the S3 authentication documentation. The name got propagated everywhere from there.
This is not correct for general usage, however, so rename to path when describing the path component of an HTTP request.
ASCII may occasionally be encoded (e.g. &) to prevent ambiguity depending on where the JSON is located.
Only ASCII can be decoded. In general Unicode should not be encoded in JSON.
Option warnings will cause the async process to fail because a warning is logged but stdout is closed so the process aborts.
This bug has existed for quite some time, but it was made worse by abb8ebe because now the async role can have different valid options than the default role. Previously at least a warning would be emitted before the async process died.
Fix this by only allowing warnings for the default role. Warnings were already suppressed for local and remote roles so the logic already exists.
These tests were broken because they were gated by resetLogLevel: the log levels were not being set, but because of resetLogLevel rather than because of the role setting the tests were meant to exercise. Because resetLogLevel was checked last, coverage testing indicated that the tests were working.
Fix the resetLogLevel parameter in the tests and move resetLogLevel to be tested first so coverage reporting works as expected. This isn't perfect but it is an improvement.
The expire command has been enhanced to expire backups and archives from all configured repositories by default.
In addition, it will accept the --repo option to expire backups and archives only from the specified repository. When --repo is specified, the --set option is likewise limited to the specified repo. If --set is provided but --repo is not, then all repositories will be searched and retention settings will be applied to each, whether or not the backup set is found.
Bug Fixes:
* Fix resume after partial delete of backup by prior resume. (Reviewed by Cynthia Shang. Reported by Tom Swartz.)
Features:
* Add repo-ls command. (Reviewed by Cynthia Shang, Stefan Fercot.)
* Add repo-get command. (Contributed by Stefan Fercot, David Steele. Reviewed by Cynthia Shang.)
* Add archive-mode-check option. (Contributed by Stefan Fercot. Reviewed by David Steele, Michael Banck.)
Improvements:
* Improve archive-get performance. (Reviewed by Cynthia Shang.)
The stackTrace and memContext error handlers were hard-coded which made testing the error module in isolation impossible.
Making the error handlers configurable also makes adding new ones in the future easier.
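A standalone sketch of the idea (the types and names are illustrative, not the actual error module API): rather than hard-coding the callbacks, the error module stores a list set at runtime and runs each handler when an error is thrown, so a test of the error module in isolation can simply register none.

#include <stddef.h>

typedef void (*ErrorHandlerFunction)(unsigned int tryDepth);

static const ErrorHandlerFunction *errorHandlerList = NULL;
static unsigned int errorHandlerListSize = 0;

// Set the handler list (e.g. production registers the stack trace and mem
// context handlers, a new handler can be added without touching this code)
static void
errorHandlerSet(const ErrorHandlerFunction *const list, const unsigned int listSize)
{
    errorHandlerList = list;
    errorHandlerListSize = listSize;
}

// Called when an error is thrown
static void
errorHandlerRun(const unsigned int tryDepth)
{
    for (unsigned int handlerIdx = 0; handlerIdx < errorHandlerListSize; handlerIdx++)
        errorHandlerList[handlerIdx](tryDepth);
}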
This is useful for initialization that needs to be done for the test and all subsequent tests.
Use the new defines to implement initialization for sockets and statistics.
In preparation for multi-repo support, a repo tag is added in this commit to the expire command log and error messages. This change also affects the expect logs and the user-guide. The format of the tag is "repoX:" where X is the repo key used in the configuration.
Until multi-repo support has been completed, this tag will always be "repo1:".
When building tests only include files covered by the current test or by prior tests. This increases performance (less compilation and linking) and also helps detect cross-dependencies in the code. Since there are currently cross-dependencies the depend option is used to document them and allow compilation. The idea is to resolve them incrementally over time.
Add the harness option to include harness modules when the minimum requirements for compilation are met.
Add the feature option to indicate which features are now available in the harness (based on source modules already tested). This allows conditional compilation in harness modules when some features are not yet available.
This is required for coverage when the common/error module is run with just the source files required to make it run, rather than all source files as we do now.
Likely something in the harness is providing coverage, but cover it explicitly so the coverage won't be lost if the harness changes.
The original intention was to enclose complex code in braces but somehow braces got propagated almost everywhere.
Document the standard for braces in switch statements and update the code to reflect the standard.
At one time Minio had stability problems with the latest tag but that appears to have been resolved for the last year or so.
Use latest so we'll know if something breaks, since Minio is frequently used in production.
This is phase 2 of verify command development (phase 1 was processing the archives and phase 3 will be reconciling the archives and backups). In this phase the backups are verified by verifying each file listed in the manifest for the backup and creating a result set with the list of invalid files, if any. A summary is then rendered.
Unit tests have been added and duplicate tests have been removed.
The info command provides total sizes for the files in the backup as they exist in the database as well as in the repository. The text output and associated user documentation have been updated to provide more clarity regarding the sizes being displayed.
In addition, the info command is updated to allow a user to optionally specify the repository when requesting a specific backup set. In this case, the text output will reflect the status of the stanza, the cipher types and archive min/max over all the repositories instead of a single repository when the repo option is specified.
The unit test Makefile generation was a hodge-podge of constants and rules based on distros/versions that easily got out of date and did not work on an unknown system. All of this dates from the mixed Perl/C unit test implementation.
Instead use configure to generate most of the important Makefile variables, which allows the unit tests to run on multiple platforms, e.g. MacOS and FreeBSD.
There is plenty of work to be done here and not all the unit tests work on MacOS and FreeBSD for various reasons.
As a POC update the MacOS and FreeBSD tests on Cirrus-CI to run a few command unit tests.
MacOS does not allow files to be removed recursively unless the owner has write and execute permissions on all the directories.
Some tests leave the permissions in a bad state so fix them up before trying to delete.
The exact message is platform dependent so get the platform error to use in the expect.
It doesn't matter what the message is as long as there is an error and it is logged.
YAML::XS requires libyaml so it is not as portable as pure Perl versions of YAML.
Instead of using YAML::PP just use the general YAML::Any module, which uses whatever is installed. We are not concerned about performance for YAML so whatever works is fine.
Messages on stderr were being lost due to the error suppression used to customize the error message.
Also update the formatting to be more informative and concise.
MacOS has a very old version of rsync that does not support this option.
Rather than require a newer version of rsync exclude the option since the plan is to remove the requirement for it.
This is a more appropriate place for the check and means test.pl can avoid loading any XML files if --no-gen is specified.
The XML::Checker::Parser module originally selected for XML in Perl is not very portable so the requirement reduces the number of platforms where tests can be run.
Clang justifiably complains about pointer arithmetic on a known NULL value during testing. We know this is fine but use uintptr_t to silence the warnings.
Found on MacOS M1.
The return value is not checked because we are happy with a truncated result in this case, which is guaranteed by passing the buffer size.
Found on MacOS M1.
Multi-repository implementations for the archive-push, check, info, stanza-create, stanza-upgrade, and stanza-delete commands.
Multi-repo configuration is disabled so there should be no behavioral changes between these commands and their current single-repo implementations.
Multi-repo documentation and integration tests are still in the multi-repo development branch. All unit tests work as multi-repo since they are able to bypass the configuration restrictions.
The option portion was not being capitalized, nor was - being replaced with _.
The parser does not care, but in cases where we have mixed hrnCfgEnv*()/setenv() calls the env variable might not get cleared, which can lead to funny test results.
The default lock path should fail since the test VM gives ownership of /tmp to root.
For some reason this was not working as expected under u18 but it fails under u20.
All unit tests now require full coverage so the "full" keyword is obsolete and has been removed.
The covered code modules are simply listed, with only "no code" modules annotated.
Check that archive files exist in the main process instead of the local process. This means that the archive.info file only needs to be loaded once per execution rather than once per file to get.
Stop looking when a file is missing or in error. PostgreSQL will never request anything past the missing file so there is no point in getting them. This also reduces "unable to find" logging in the async process.
Cache results of storageList() when looking for multiple files to reduce storage I/O.
Look for all requested archive files in the archive-id where the first file is found. They may not all be there, but this reduces the number of list calls. If subsequent files are in another archive id they will be found on the next archive-get call.
Append "asynchronously" to messages when the async process fetched the file (not in the actual async process log, though).
Add "repo1" to make it clear what archive we are talking about. This is not very useful by itself but soon we'll be able to add the archive id, which is very useful.
Add constants for messages that are used multiple times to ensure they stay consistent.
If files other than backup.manifest.copy were left in a backup path by a prior resume then the next resume would skip the backup rather than removing it. Since the backup path still existed, it would be found during backup label generation and cause an error if it appeared to be later than the new backup label. This occurred if the skipped backup was full.
The error was only likely on object stores such as S3 because of the order of file deletion. Posix file systems delete from the bottom up because directories containing files cannot be deleted. Object stores do not have directories so files are deleted in whatever order they are provided by the list command. However, the issue can be reproduced on a Posix file system by manually deleting backup.manifest.copy from a resumable backup path.
Fix the issue by removing the resumable backup if it has no manifest files. Also add a new warning message for this condition.
Note that this issue could be resolved by running expire or a new full backup.
These options specify the number of local worker job retries and the retry interval after one immediate retry.
There is some value in allowing retries to be specified by the user but for the most part these options are for suppressing retries during testing, which can save a lot of time. The bug introduced in d1d25c7 and fixed in 8b86d5e also suggests it is better not to use retries in tests.
Remove the default delayed retries for archive-get/archive-push, leaving only the immediate retry. These commands are retried by PostgreSQL so it doesn't make sense to do too many retries internally.
These options are currently internal.
The test was pretty old and written in stages during the migration, so storage use was a bit archaic and the organization was poor.
Update using the new storage macros and reorganize the tests to provide better coverage.
The macros should make it much easier to write complex tests, especially when compression and encryption are involved.
Update the command/archiveGet test to show how the new macros are used.
This avoids the need for strLstJoin() when testing lists.
Lists are \n delimited (rather than comma or pipe) so that non-trivial lists can be more easily diff'd.
Add separation and some visual cues to help identify the start of a test.
Also add a counter which can be used to search for a specific test, which is useful if there is a lot of debug output to search through.
These were required to deal with the legacy Perl code being unable to load new options between tests.
The C code does not have this issue so remove the forks and update process ids in the log tests.
No timeout is expected here but the small timeout prevents errors from being thrown.
This is not a bug since the error would be thrown on the next archive-get call but it does make the tests harder to debug when there is an error.
It is not clear why there was a timeout here at all. It is likely cruft from a prior test or a copy/paste error.
Duplicated tests are being removed from the info command unit tests, specifically tests where the only difference was whether a lock was held or not, which affects only the status display. Removing these tests will reduce churn in the upcoming multi-repo support.
The data returned by the protocol has not been sorted yet so it is vulnerable to differences in collation.
Multiple records are not needed for this test so limit it to one path to solve this issue.
The pg option only has one current usage, to let the backup local know which pg index it should copy files from.
There are other possible uses for this option, but they need thought, tests, and documentation.
This option was added in advance of the multi-repo functionality but it has no purpose and it is not clear what the validity rules should be.
The option will be added back when multi-repo functionality is committed.
There was an inconsistency in the JSON output for the case where a stanza is requested but does not exist in the repo. This was the only case where the archive array was not added to the JSON. Adding it will simplify the upcoming multi-repo support code.
Also, a redundant test was removed rather than updating it for this case.
This was a hack to prevent the remote from loading host settings, which is now handled by option validity for command roles.
These options are still useful so don't remove them, but do leave them internal for now.
Building on 23f5712, limit option validity by role. This is mostly for options that weren't needed for certain roles but were harmless. However, the upcoming multi repository functionality requires the granularity implemented here.
The remote role benefits since host options can automatically be excluded when building the options. Also, many options that are only required for the default role (e.g. repo-retention-full) no longer need to be passed in tests for other roles.
Some tests used options in contexts that are currently valid but are not correct usage, i.e. usage of internal options for the default role.
Update these tests in advance of the option validity becoming stricter.
Validity by command was not granular enough so numerous options needed to be marked internal so users would not stumble across them. Options were also needlessly being passed to roles that had no use for them.
Introduce per-role validity lists that depend on what roles are valid per command. Also add a check to ensure that only valid roles are used with a command.
This commit adds the functionality but does not introduce any new behavior, i.e. all options are valid for all roles that the command is valid for. A subsequent commit will introduce the new role restrictions to make the changes easier to audit.
Data required for parsing was spread between the config and defined modules, mostly for historical reasons because the same data was used by Perl.
Requiring all the parse rules to be accessed with function interfaces makes the code more complicated and new rules harder to implement.
Instead, move the data to the parse module so in the most complex cases no interface functions are needed. This reduces the total amount of code and paves the way for more complex parse rules.
The help data can be represented more compactly in a pack and this separates data needed for help from data needed for parsing, freeing each to have a more appropriate representation.
This was done in the internal versions but not the user-facing function. That meant the field had to be explicitly read after determining it was NULL, which is wasteful.
Since there is only one behavior now, remove pckReadDefaultNull() and move the logic to pckReadNullInternal().
Testing on Travis-CI has been getting slower (from ~18 minutes to 3-6 hours) and the travis-ci.org service will be terminated at the end of the year. Moving to travis-ci.com is an option but the quotas are too low for our purposes.
Instead use Github Actions, which does not currently have quotas and runs our current tests with just a few tweaks.
This still leaves multi-architecture tests on Travis-CI but we may be able to run those and stay within the new quotas.
Also fix a minor bug in restoreTest.c exposed by Github Actions using a different name for the user and group.
The pack type is an architecture-independent format for serializing data compactly, inspired by ProtocolBuffers and Avro.
Also add ioReadSmall(), which is optimized for small binary reads, similar to ioReadLineParam().
The C code does not use doubles to represent seconds like the Perl code did, so time can be represented as an integer, which reduces the number of data types that config has to understand.
Also remove Variant doubles since they are no longer used.
Note that not all double code was removed since we still need to display times to the user in seconds and it is possible for the times to be fractional. In the future this will likely be simplified by storing the original user input and using that value when the time needs to be displayed.
Bug Fixes:
* Allow [, #, and space as the first character in database names. (Reviewed by Stefan Fercot, Cynthia Shang. Reported by Jefferson Alexandre.)
* Create standby.signal only on PostgreSQL 12 when restore type is standby. (Fixed by Stefan Fercot. Reviewed by David Steele. Reported by Keith Fiske.)
Features:
* Expire history files. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Report page checksum errors in info command text output. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Add repo-azure-endpoint option. (Reviewed by Cynthia Shang, Brian Peterson. Suggested by Brian Peterson.)
* Add pg-database option. (Reviewed by Cynthia Shang.)
Improvements:
* Improve info command output when a stanza is specified but missing. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele. Suggested by uspen.)
* Improve performance of large file lists in backup/restore commands. (Reviewed by Cynthia Shang, Oscar.)
* Add retries to PostgreSQL sleep when starting a backup. (Reviewed by Cynthia Shang. Suggested by Vitaliy Kukharik.)
Documentation Improvements:
* Replace RHEL/CentOS 6 documentation with RHEL/CentOS 8.
Update RHEL/CentOS 7 to cover the versions that were previously covered by RHEL/CentOS 6.
Since RHEL/CentOS 7/8 work the same update the documentation logic and labels to reflect this compatibility.
Inaccuracies in sleep time or clock skew might make a single sleep insufficient to reach the next second.
Add a few retries to make the process more reliable but still avoid an infinite loop if something is seriously wrong.
CentOS6 EOL'd and the mirrors were swiftly deleted, leading to failures in tests and documentation.
Remove CentOS 6 for now to get builds going again with the intention to replace it in the near future with CentOS 8.
There is not a lot to be done in this case since it looks like PostgreSQL disconnected while the query was running, but at least improve the error message and remove the assert, which indicates a coding error.
Refactor the code to allow a dynamic number of indexes for indexed options, e.g. pg-path. Our reliance on getopt_long() still limits the number of indexes we can have per group, but once this limitation is removed the rest of the code should be happy with dynamic numbers of indexes (with a reasonable maximum).
Add an option to set a default in each group. This was previously handled by the host-id option but now there is a specific option for each group, pg and repo. These remain internal until they can be fully tested with multi-repo support. They are fully tested for internal usage.
Remove the ConfigDefineOption enum and use the ConfigOption enum instead. They are now equal since the indexed options (e.g. cfgOptRepoHost2) have been removed from ConfigOption.
Remove the config/config test module and add required tests to the config/parse test module. Parsing is now the only way to load a config so this removes some redundancy.
Split new internal config structures and functions into a new header file, config.intern.h. More functions will need to be moved over from config.h but that will need to be done in a future commit to reduce churn.
Add repoIdx to repoIsLocal() and storageRepo*(). Multi-repository support requires that repo locality and storage be accessible by index. This allows, for example, multiple repos to be iterated in a loop. This could be done in a separate commit but doesn't seem worth it since the code is related.
Remove the type parameter from storageRepoGet(). This parameter existed solely to provide coverage for the case where the storage type was invalid. A better pattern is to check that the type is S3 once all other types have been ruled out.
Improve locking on remote processes by introducing an exec-id that is unique to the main process and passed to all remote processes. This allows the remote processes to determine if a lock is held by a remote from the same main process. If so, the lock is allowed.
The exec-id is also useful for associating remote logs with main logs for debugging purposes.
When restore type standby is provided, the recovery.signal isn't needed and may lead to some confusion (see #1236).
Lately, when using pg_basebackup --write-recovery-conf, only the standby.signal file is created. This change would then align with that behaviour.
If a user reset an option such as pg-default on the command-line then an override in the code would not take effect.
Ignore a reset when the code explicitly sets an option to prevent this.
These warnings were only being reported to PostgreSQL on the console. Now they are also recorded in the async log increasing the chance that they will be seen.
This also improves coverage by requiring a warning during async processing to have a test case, which has been added.
The tests were originally written by loading values directly into the configuration before the parser was available.
Update to use harnessCfgLoad() to simplify the tests and make them compatible with upcoming config changes.
Return a path missing error when a stanza is specified for the info command but the stanza does not exist in the repository.
Previously [] was returned, which is still the case if no stanza is specified and the repository does not exist.
lstRemoveIdx(list, 0) resulted in the entire list being moved down to the first position which could take a long time for big lists. This is a common pattern in backup/restore when processing file queues.
Instead simply move the list pointer up when the first item is removed. Then, on insert, check whether there is space at the beginning when there is no longer space at the end and do the move then. This way, if a list is built and then drained without any new inserts, no move is required.
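A standalone sketch of the technique (simplified; this is not the actual List implementation and buffer growth is omitted):

#include <stddef.h>
#include <string.h>

// Removing the first item only advances an offset; the freed space at the
// front is reclaimed lazily, with a single move, when an add runs out of room
// at the end of the buffer.
typedef struct SketchList
{
    unsigned char *buffer;                              // Allocated storage
    size_t itemSize;                                    // Size of a single item
    size_t offset;                                      // Items removed from the front, not yet reclaimed
    size_t size;                                        // Items currently in the list
    size_t capacity;                                    // Total items the buffer can hold
} SketchList;

// Remove the first item in constant time by moving the logical start forward
static void
sketchListRemoveFirst(SketchList *const this)
{
    this->offset++;
    this->size--;
}

// Add an item at the end, reclaiming the space freed at the front only when
// the end of the buffer has been reached
static void
sketchListAdd(SketchList *const this, const void *const item)
{
    if (this->offset + this->size == this->capacity && this->offset != 0)
    {
        memmove(this->buffer, this->buffer + this->offset * this->itemSize, this->size * this->itemSize);
        this->offset = 0;
    }

    memcpy(this->buffer + (this->offset + this->size) * this->itemSize, item, this->itemSize);
    this->size++;
}

A list that is filled once and then drained from the front never triggers the move, which matches the backup/restore file queue pattern described above.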
There were a number of places in the code where "hostId" was used, but hostId is just the option group index + 1 so this led to a lot of +1 and -1 to convert the id to an index and vice versa.
Instead just use the zero based index wherever possible. This is pretty much everywhere except when the host-id option is read or set, or where a message is being formatted for the user.
Also fix a bug in protocolRemoteParam() where remotes spawned from the main process could get process ids that were not 0. Only the locals should spawn remotes with process id > 0. This seems to have been harmless since the process id is only a label, but it could be confusing when debugging.
iniLoad() was trimming lines which meant that a leading space would not pass checksum validation when a manifest was reloaded. Remove the trims since files we write should never contain extraneous spaces. This further diverges the format for the functions that read conf files (e.g. pgbackrest.conf) and those that read info (e.g. manifest) files.
While we are at it also allow [ and # as initial characters. # was reserved for comments but we never put comments into info files. [ denotes a section but we can get around this by never allowing arrays as values in info files, so if a line ends in ] it must be a section. This is currently the case but enforce it by adding an assert to info/info.c.
The tests were originally written by loading values directly into the configuration before the parser was available.
Update to use harnessCfgLoadRaw() to simplify the tests and make them compatible with upcoming config changes.
Note that some unreachable conditions were removed since they could not be reached via a parsed config, only by munging values directly into the config. cfgOptionTest(optionId) was removed because a non-default value must always be set. cfgOptionValid(cfgOptLogTimestamp) was removed because it is true for all commands except for cfgCmdNone, which is checked with an assert.
cfgOptionId() did not recognize deprecated options which made the help command throw errors when they were specified on the command line. cfgParseOption() will correctly identify deprecated options.
cfgParseOption() can also be used in cfgParse() to reduce code duplication when parsing info out of the option value returned by optionFind().
Finally, code the option key index separately in parse.auto.c. For now they are simply added back together but future code will need them separated.
This has always been equivalent to the ConfigCommand enum so it just adds complexity.
It was created for symmetry with ConfigDefineOption, which will also be removed soon.
Currently indexes above 1 do not have dependencies checked, so this doesn't error.
In a future commit we will enable those checks and this will error if it is not fixed.