YAML::XS requires libyaml so it is not as portable as pure Perl versions of YAML.
Instead of using YAML::PP, just use the general YAML::Any module, which uses whatever is installed. We are not concerned about YAML performance, so whatever works is fine.
Messages on stderr were being lost due to the error suppression used to customize the error message.
Also update the formatting to be more informative and concise.
MacOS has a very old version of rsync that does not support this option.
Rather than require a newer version of rsync, exclude the option, since the plan is to remove the rsync requirement anyway.
This is a more appropriate place for the check and means test.pl can avoid loading any XML files if --no-gen is specified.
The XML::Checker::Parser module originally selected for XML in Perl is not very portable so the requirement reduces the number of platforms where tests can be run.
Clang justifiably complains about pointer arithmetic on a known NULL value during testing. We know this is fine but use uintptr_t to silence the warnings.
Found on MacOS M1.
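As a rough sketch of the pattern (names hypothetical), the offset math is done in integer space rather than on the NULL pointer itself:

    #include <stddef.h>
    #include <stdint.h>

    /* Base address used only in testing -- known to be NULL */
    #define TEST_BUFFER_BASE ((void *)0)

    /* Pointer arithmetic on NULL is technically undefined, so do the math on
       uintptr_t and cast back, which keeps clang quiet */
    static void *testBufferOffset(size_t offset)
    {
        return (void *)((uintptr_t)TEST_BUFFER_BASE + offset);
    }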
The return value is not checked because we are happy with a truncated result in this case, which is guaranteed by passing the buffer size.
Found on MacOS M1.
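The call is not shown in this message, but a typical instance of the pattern looks like this sketch with snprintf(), where passing the buffer size guarantees truncation rather than overflow:

    #include <stdio.h>

    static void labelFormat(char *buffer, size_t bufferSize, const char *name)
    {
        /* The return value (the untruncated length) is intentionally ignored --
           a truncated label is acceptable here and bufferSize prevents overflow */
        (void)snprintf(buffer, bufferSize, "label-%s", name);
    }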
Multi-repository implementations for the archive-push, check, info, stanza-create, stanza-upgrade, and stanza-delete commands.
Multi-repo configuration is disabled so there should be no behavioral changes between these commands and their current single-repo implementations.
Multi-repo documentation and integration tests are still in the multi-repo development branch. All unit tests work as multi-repo since they are able to bypass the configuration restrictions.
The option portion was not being capitalized, nor was - being replaced with _.
The parser does not care, but in cases where we have mixed hrnCfgEnv*()/setenv() calls the env variable might not get cleared, which can lead to funny test results.
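A minimal sketch of the intended transformation (function name hypothetical; PGBACKREST_ is the real prefix):

    #include <ctype.h>
    #include <stdio.h>

    /* pg1-path -> PGBACKREST_PG1_PATH */
    static void cfgEnvName(char *out, size_t outSize, const char *option)
    {
        int size = snprintf(out, outSize, "PGBACKREST_%s", option);

        if ((size_t)size >= outSize)
            size = (int)outSize - 1;

        for (int idx = 0; idx < size; idx++)
            out[idx] = out[idx] == '-' ? '_' : (char)toupper((unsigned char)out[idx]);
    }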
The default lock path should fail since the test VM gives ownership of /tmp to root.
For some reason this did not fail as expected under u18, but it does fail under u20.
All unit tests now require full coverage so the "full" keyword is obsolete and has been removed.
The covered code modules are simply listed, with only "no code" modules annotated.
Check that archive files exist in the main process instead of the local process. This means that the archive.info file only needs to be loaded once per execution rather than once per file to get.
Stop looking when a file is missing or in error. PostgreSQL will never request anything past the missing file so there is no point in getting them. This also reduces "unable to find" logging in the async process.
Cache results of storageList() when looking for multiple files to reduce storage I/O.
Look for all requested archive files in the archive-id where the first file is found. They may not all be there, but this reduces the number of list calls. If subsequent files are in another archive id they will be found on the next archive-get call.
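A rough sketch of the list-caching idea above (variables illustrative and the flow simplified -- the real code matches WAL segments by prefix within the archive id):

    /* List the archive path once and reuse the result for every requested file */
    StringList *cache = storageListP(storageRepo(), archivePath);

    for (unsigned int walIdx = 0; walIdx < strLstSize(walSegmentList); walIdx++)
    {
        if (!strLstExists(cache, strLstGet(walSegmentList, walIdx)))
            break;                              /* stop at the first missing file */

        /* ...queue the file to be fetched... */
    }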
Append "asynchronously" to messages when the async process fetched the file (not in the actual async process log, though).
Add "repo1" to make it clear what archive we are talking about. This is not very useful by itself but soon we'll be able to add the archive id, which is very useful.
Add constants for messages that are used multiple times to ensure they stay consistent.
If files other than backup.manifest.copy were left in a backup path by a prior resume then the next resume would skip the backup rather than removing it. Since the backup path still existed, it would be found during backup label generation and cause an error if it appeared to be later than the new backup label. This occurred if the skipped backup was full.
The error was only likely on object stores such as S3 because of the order of file deletion. Posix file systems delete from the bottom up because directories containing files cannot be deleted. Object stores do not have directories so files are deleted in whatever order they are provided by the list command. However, the issue can be reproduced on a Posix file system by manually deleting backup.manifest.copy from a resumable backup path.
Fix the issue by removing the resumable backup if it has no manifest files. Also add a new warning message for this condition.
Note that this issue could be resolved by running expire or a new full backup.
These options specify the number of local worker job retries and the retry interval after one immediate retry.
There is some value in allowing retries to be specified by the user but for the most part these options are for suppressing retries during testing, which can save a lot of time. The bug introduced in d1d25c7 and fixed in 8b86d5e also suggests it is better not to use retries in tests.
Remove the default delayed retries for archive-get/archive-push, leaving only the immediate retry. These commands are retried by PostgreSQL so it doesn't make sense to do too many retries internally.
These options are currently internal.
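A configuration sketch for suppressing retries during testing (values illustrative):

    [global]
    job-retry=0              # suppress retries to fail fast in tests
    #job-retry-interval=15   # seconds between delayed retries when they are enabled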
The test was pretty old and written in stages during the migration, so storage use was a bit archaic and the organization was poor.
Update using the new storage macros and reorganize the tests to provide better coverage.
The macros should make it much easier to write complex tests, especially when compression and encryption are involved.
Update the command/archiveGet test to show how the new macros are used.
This avoids the need for strLstJoin() when testing lists.
Lists are \n delimited (rather than comma or pipe) so that non-trivial lists can be more easily diff'd.
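A hypothetical usage sketch (macro and variable names assumed from context):

    /* Compare an entire StringList to an \n-delimited expectation */
    TEST_RESULT_STRLST_Z(fileList, "file1\nfile2\nfile3\n", "check file list");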
Add separation and some visual cues to help identify the start of a test.
Also add a counter which can be used to search for a specific test, which is useful if there is a lot of debug output to search through.
These were required to deal with the legacy Perl code being unable to load new options between tests.
The C code does not have this issue so remove the forks and update process ids in the log tests.
No timeout is expected here but the small timeout prevents errors from being thrown.
This is not a bug since the error would be thrown on the next archive-get call but it does make the tests harder to debug when there is an error.
It is not clear why there was a timeout here at all. It is likely cruft from a prior test or a copy/paste error.
Remove duplicated tests from the info command unit tests, specifically tests where the only difference was whether a lock was held, which affects only the status display. Removing these tests will reduce churn in the upcoming multi-repo support.
The data returned by the protocol has not been sorted yet so it is vulnerable to differences in collation.
Multiple records are not needed for this test so limit it to one path to solve this issue.
The pg option only has one current usage: to let the backup local know which pg index it should copy files from.
There are other possible uses for this option, but they need thought, tests, and documentation.
This option was added in advance of the multi-repo functionality but it has no purpose and it is not clear what the validity rules should be.
The option will be added back when multi-repo functionality is committed.
There is an inconsistency in the JSON output when a stanza is requested but does not exist in the repo. This was the only case where the archive array was not added to the JSON. Adding it will simplify the upcoming multi-repo support code.
Also, a redundant test was removed rather than updating it for this case.
This was a hack to prevent the remote from loading host settings, which is now handled by option validity for command roles.
These options are still useful so don't remove them, but do leave them internal for now.
Building on 23f5712, limit option validity by role. This is mostly for options that weren't needed for certain roles but were harmless. However, the upcoming multi repository functionality requires the granularity implemented here.
The remote role benefits since host options can automatically be excluded when building the options. Also, many options that are only required for the default role (e.g. repo-retention-full) no longer need to be passed in tests for other roles.
Some tests used options in contexts that are currently valid but are not correct usage, i.e. usage of internal options for the default role.
Update these tests in advance of the option validity becoming stricter.
Validity by command was not granular enough, so numerous options needed to be marked internal so users would not stumble across them. Options were also needlessly being passed to roles that had no use for them.
Introduce per-role validity lists that depend on what roles are valid per command. Also add a check to ensure that only valid roles are used with a command.
This commit adds the functionality but does not introduce any new behavior, i.e. all options are valid for all roles that the command is valid for. A subsequent commit will introduce the new role restrictions to make the changes easier to audit.
Data required for parsing was spread between the config and define modules, mostly for historical reasons because the same data was used by Perl.
Requiring all the parse rules to be accessed with function interfaces makes the code more complicated and new rules harder to implement.
Instead, move the data to the parse module so in the most complex cases no interface functions are needed. This reduces the total amount of code and paves the way for more complex parse rules.
The help data can be represented more compactly in a pack and this separates data needed for help from data needed for parsing, freeing each to have a more appropriate representation.
This was done in the internal versions but not the user-facing function. That meant the field had to be explicitly read after determining it was NULL, which is wasteful.
Since there is only one behavior now, remove pckReadDefaultNull() and move the logic to pckReadNullInternal().
Testing on Travis-CI has been getting slower (from ~18 minutes to 3-6 hours) and the travis-ci.org service will be terminated at the end of the year. Moving to travis-ci.com is an option but the quotas are too low for our purposes.
Instead use Github Actions, which does not currently have quotas, and runs our current tests with just a few tweaks.
This still leaves multi-architecture tests on Travis-CI but we may be able to run those and stay within the new quotas.
Also fix a minor bug in restoreTest.c exposed by Github Actions using a different name for the user and group.
The pack type is an architecture-independent format for serializing data compactly, inspired by ProtocolBuffers and Avro.
Also add ioReadSmall(), which is optimized for small binary reads, similar to ioReadLineParam().
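For illustration, a minimal sketch of base-128 (varint) encoding, the kind of compact integer representation such formats rely on (not the actual pack code):

    #include <stddef.h>
    #include <stdint.h>

    static size_t varIntEncode(uint64_t value, uint8_t *out)
    {
        size_t size = 0;

        /* Emit 7 bits at a time, low-order first, setting the high bit on all
           but the final byte to mark continuation */
        while (value >= 0x80)
        {
            out[size++] = (uint8_t)((value & 0x7F) | 0x80);
            value >>= 7;
        }

        out[size++] = (uint8_t)value;
        return size;
    }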
The C code does not use doubles to represent seconds like the Perl code did so time can be represented as an integer which reduces the number of data types that config has to understand.
Also remove Variant doubles since they are no longer used.
Note that not all double code was removed since we still need to display times to the user in seconds and it is possible for the times to be fractional. In the future this will likely be simplified by storing the original user input and using that value when the time needs to be displayed.
Bug Fixes:
* Allow [, #, and space as the first character in database names. (Reviewed by Stefan Fercot, Cynthia Shang. Reported by Jefferson Alexandre.)
* Create standby.signal only on PostgreSQL 12 when restore type is standby. (Fixed by Stefan Fercot. Reviewed by David Steele. Reported by Keith Fiske.)
Features:
* Expire history files. (Contributed by Stefan Fercot. Reviewed by David Steele.)
* Report page checksum errors in info command text output. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang.)
* Add repo-azure-endpoint option. (Reviewed by Cynthia Shang, Brian Peterson. Suggested by Brian Peterson.)
* Add pg-database option. (Reviewed by Cynthia Shang.)
Improvements:
* Improve info command output when a stanza is specified but missing. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele. Suggested by uspen.)
* Improve performance of large file lists in backup/restore commands. (Reviewed by Cynthia Shang, Oscar.)
* Add retries to PostgreSQL sleep when starting a backup. (Reviewed by Cynthia Shang. Suggested by Vitaliy Kukharik.)
Documentation Improvements:
* Replace RHEL/CentOS 6 documentation with RHEL/CentOS 8.
Update RHEL/CentOS 7 to cover the versions that were previously covered by RHEL/CentOS 6.
Since RHEL/CentOS 7/8 work the same update the documentation logic and labels to reflect this compatibility.
Inaccuracies in sleep time or clock skew might make a single sleep insufficient to reach the next second.
Add a few retries to make the process more reliable but still avoid an infinite loop if something is seriously wrong.
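A minimal sketch of the idea (retry count and interval illustrative, not the actual implementation):

    #include <stdbool.h>
    #include <time.h>
    #include <unistd.h>

    /* Wait for the clock to pass startSec, retrying a bounded number of times
       so a short sleep or clock skew cannot cause failure while still avoiding
       an infinite loop if something is seriously wrong */
    static bool waitForNextSecond(time_t startSec)
    {
        for (int retry = 0; retry < 3; retry++)
        {
            if (time(NULL) > startSec)
                return true;

            usleep(100 * 1000);                 /* 100ms, then check again */
        }

        return false;
    }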
CentOS 6 reached EOL and the mirrors were swiftly deleted, leading to failures in tests and documentation.
Remove CentOS 6 for now to get builds going again with the intention to replace it in the near future with CentOS 8.
There is not a lot to be done in this case since it looks like PostgreSQL disconnected while the query was running, but at least improve the error message and remove the assert, which indicates a coding error.
Refactor the code to allow a dynamic number of indexes for indexed options, e.g. pg-path. Our reliance on getopt_long() still limits the number of indexes we can have per group, but once this limitation is removed the rest of the code should be happy with dynamic numbers of indexes (with a reasonable maximum).
Add an option to set a default in each group. This was previously handled by the host-id option but now there is a specific option for each group, pg and repo. These remain internal until they can be fully tested with multi-repo support. They are fully tested for internal usage.
Remove the ConfigDefineOption enum and use the ConfigOption enum instead. They are now equal since the indexed options (e.g. cfgOptRepoHost2) have been removed from ConfigOption.
Remove the config/config test module and add required tests to the config/parse test module. Parsing is now the only way to load a config so this removes some redundancy.
Split new internal config structures and functions into a new header file, config.intern.h. More functions will need to be moved over from config.h but that will need to be done in a future commit to reduce churn.
Add repoIdx to repoIsLocal() and storageRepo*(). Multi-repository support requires that repo locality and storage be accessible by index. This allows, for example, multiple repos to be iterated in a loop. This could be done in a separate commit but doesn't seem worth it since the code is related.
Remove the type parameter from storageRepoGet(). This parameter existed solely to provide coverage for the case where the storage type was invalid. A better pattern is to check that the type is S3 once all other types have been ruled out.
Improve locking on remote processes by introducing an exec-id that is unique to the main process and passed to all remote processes. This allows the remote processes to determine if a lock is held by a remote from the same main process. If so, the lock is allowed.
The exec-id is also useful for associating remote logs with main logs for debugging purposes.
When restore type standby is provided, the recovery.signal isn't needed and may lead to some confusion (see #1236).
Lately, when using pg_basebackup --write-recovery-conf, only the standby.signal file is created. This change aligns with that behaviour.
If a user reset an option such as pg-default on the command-line then an override in the code would not take effect.
Ignore a reset when the code explicitly sets an option to prevent this.
These warnings were only being reported to PostgreSQL on the console. Now they are also recorded in the async log, increasing the chance that they will be seen.
This also improves coverage by requiring a warning during async processing to have a test case, which has been added.
The tests were originally written by loading values directly into the configuration before the parser was available.
Update to use harnessCfgLoad() to simplify the tests and make them compatible with upcoming config changes.
Return a path missing error when a stanza is specified for the info command but the stanza does not exist in the repository.
Previously [] was returned, which is still the case if no stanza is specified and the repository does not exist.
lstRemoveIdx(list, 0) resulted in the entire list being moved down one position, which could take a long time for big lists. This is a common pattern in backup/restore when processing file queues.
Instead, simply move the list pointer up when the first item is removed. Then on insert, check if there is space at the beginning when there is no longer space at the end, and do the move then. This way, if a list is built and then drained without any new inserts, no move is required.
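A simplified sketch of the approach (struct fields assumed for illustration):

    #include <stddef.h>

    typedef struct List
    {
        unsigned char *listAlloc;   /* start of the allocated buffer */
        unsigned char *list;        /* first item -- moves up as items are removed */
        size_t itemSize;
        unsigned int listSize;      /* items currently in the list */
    } List;

    /* O(1) removal of the first item: advance the pointer rather than moving
       every remaining item down one position */
    static void lstRemoveFirst(List *this)
    {
        this->list += this->itemSize;
        this->listSize--;
    }

On insert, when there is no more space at the end of the buffer, the items are moved back to listAlloc to reclaim the space freed at the front.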
There were a number of places in the code where "hostId" was used, but hostId is just the option group index + 1 so this led to a lot of +1 and -1 to convert the id to an index and vice versa.
Instead just use the zero based index wherever possible. This is pretty much everywhere except when the host-id option is read or set, or where a message is being formatted for the user.
Also fix a bug in protocolRemoteParam() where remotes spawned from the main process could get process ids that were not 0. Only the locals should spawn remotes with process id > 0. This seems to have been harmless since the process id is only a label, but it could be confusing when debugging.
iniLoad() was trimming lines which meant that a leading space would not pass checksum validation when a manifest was reloaded. Remove the trims since files we write should never contain extraneous spaces. This further diverges the format for the functions that read conf files (e.g. pgbackrest.conf) and those that read info (e.g. manifest) files.
While we are at it also allow [ and # as initial characters. # was reserved for comments but we never put comments into info files. [ denotes a section but we can get around this by never allowing arrays as values in info files, so if a line ends in ] it must be a section. This is currently the case but enforce it by adding an assert to info/info.c.
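A minimal sketch of the section rule (simplified):

    #include <stdbool.h>
    #include <string.h>

    /* A key may now begin with '[' or '#', so the trailing ']' is what
       identifies a section -- safe because arrays are disallowed as values */
    static bool iniLineIsSection(const char *line)
    {
        size_t size = strlen(line);

        return size >= 2 && line[0] == '[' && line[size - 1] == ']';
    }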
The tests were originally written by loading values directly into the configuration before the parser was available.
Update to use harnessCfgLoadRaw() to simplify the tests and make them compatible with upcoming config changes.
Note that some unreachable conditions were removed since they could not be reached via a parsed config, only by munging values directly into the config. cfgOptionTest(optionId) was removed because a non-default value must always be set. cfgOptionValid(cfgOptLogTimestamp) was removed because it is true for all commands except for cfgCmdNone, which is checked with an assert.
cfgOptionId() did not recognize deprecated options which made the help command throw errors when they were specified on the command line. cfgParseOption() will correctly identify deprecated options.
cfgParseOption() can also be used in cfgParse() to reduce code duplication when parsing info out of the option value returned by optionFind().
Finally, code the option key index separately in parse.auto.c. For now they are simply added back together but future code will need them separated.
This has always been equivalent to the ConfigCommand enum so it just adds complexity.
It was created for symmetry with ConfigDefineOption, which will also be removed soon.
Currently indexes above 1 do not have dependencies checked, so this doesn't error.
In a future commit we will enable those checks and this will error if it is not fixed.
These constants don't scale well as the index total is increased for an option.
The core code rarely uses these options and they are easily replaced with cfgOptionName().
The tests had started to make use of the constants, so provide functions that build the option name from the optionId and, optionally, the optionKey.
WAL timeline history files were not being expired because they were small and generally not very plentiful.
However, in some cases large numbers of history files may be generated so it makes sense to remove useless history files to keep things tidy.
The history file for the oldest retained timeline is kept for debugging purposes even though it is not used for recovery.
Group related options together so operations (e.g. valid, test, index total) can be performed on all options in the group.
Previously, options at the top of the hierarchy of the related options were used to do these tests. This was prone to error as option relationships changed and it was not always clear which option (or options) should be used.
Bug Fixes:
* Error with hints when backup user cannot read pg_settings. (Reviewed by Stefan Fercot, Cynthia Shang. Reported by Mohamed Insaf K.)
Features:
* PostgreSQL 13 support. (Reviewed by Cynthia Shang.)
Improvements:
* Improve PostgreSQL version identification. (Reviewed by Cynthia Shang, Stephen Frost.)
* Improve working directory error message. (Reviewed by Stefan Fercot.)
* Add hint about starting the stanza when WAL segment not found. (Contributed by David Christensen. Reviewed by David Steele.)
* Add hint for protocol version mismatch. (Reviewed by Cynthia Shang. Suggested by loop-evgeny.)
Documentation Improvements:
* Add note that pgBackRest versions must match when running remotely. (Reviewed by Cynthia Shang. Suggested by loop-evgeny.)
* Move info command text to the reference and link to user guide. (Reviewed by Cynthia Shang. Suggested by Christophe Courtois.)
* Update yum repository path for CentOS/RHEL user guide. (Contributed by Heath Lord. Reviewed by David Steele.)
Update the documentation to explicitly state that versions must match across hosts when running remotely.
Add a hint to the protocol version mismatch error to help the user identify the problem.
Add older PostgreSQL versions to the u18 container that were not available before.
This also updates all minor versions for prior versions of PostgreSQL.
Scan the WAL archive for missing or invalid files and build up ranges of WAL that will be used to verify backup integrity. A number of errors and warnings are currently emitted but they should not be considered authoritative (yet).
The command is incomplete so is marked internal.
Previously, catalog versions were fixed for all versions which made maintaining the catalog versions during PostgreSQL beta and release candidate cycles very painful. A version of pgBackRest which was functionally compatible was rendered useless by a catalog version bump in PostgreSQL.
Instead use only the control version to identify a PostgreSQL version when possible. Some older versions require a catalog version to positively identify a PostgreSQL version, so include them when required.
Since the catalog number is required to work with tablespaces it will need to be stored. There's already a copy of it in backup.info so use that (even though we have been ignoring it in the C versions).
These values are not used by the Perl integration tests so maybe it would be better to remove them, but for now just update since they should not be changing again for PG13.
This condition used to give a not-very-clear error which we have been intending to improve. But in the meantime the changes in fbff299 resulted in a segfault for this condition instead because the data_directory was assumed to be non-NULL.
Fix this by explicitly throwing an error with hints when any row in pg_settings cannot be selected.
This file is created by pg_basebackup, so it might be in the data directory if the cluster was restored from a pg_basebackup backup. Also exclude backup_manifest.tmp since it is possible to find that in the backup directory.
Improve the wording of the error message and add a hint to make it clearer what is wrong and how the user can fix it.
Also change the assert to a regular error since this is not an internal error.
If a stop command has been issued the check command fails due to archiving timing out.
Provide a hint to document this situation and point the user in the proper direction.
If the callback never returned any jobs then protocolParallelDone() would never be true. The reason is that the done state was being set in protocolParallelResult(), which never gets called if there are no results.
Calling protocolParallelResult() doesn't make much sense in this case so instead move the done logic to protocolParallelDone().
For current usage of ProtocolParallel we ensure there are jobs before processing so this is not a live issue, but the new behavior is required for future development.
Bug Fixes:
* Suppress errors when closing local/remote processes. Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened. (Reviewed by Cynthia Shang. Reported by argdenis.)
* Fix issue with = character in file or database names. (Reviewed by Bastian Wegge, Cynthia Shang. Reported by Brad Nicholson, Bastian Wegge.)
Features:
* Automatically retrieve temporary S3 credentials on AWS instances. (Contributed by David Steele, Stephen Frost. Reviewed by Cynthia Shang, David Youatt, Aleš Zelený, Jeanette Bromage.)
* Add archive-mode option to disable archiving on restore. (Reviewed by Stephen Frost. Suggested by Stephen Frost.)
Improvements:
* PostgreSQL 13 beta3 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
* Asynchronous list/remove for S3/Azure storage. (Reviewed by Cynthia Shang, Stephen Frost.)
* Improve memory usage of unlogged relation detection in manifest build. (Reviewed by Cynthia Shang, Stephen Frost, Brad Nicholson, Oscar. Suggested by Oscar, Brad Nicholson.)
* Proactively close file descriptors after forking async process. (Reviewed by Stephen Frost, Cynthia Shang.)
* Delay backup remote connection close until after archive check. (Contributed by Floris van Nee. Reviewed by David Steele.)
* Improve detailed error output. (Reviewed by Cynthia Shang.)
* Improve TLS error reporting. (Reviewed by Cynthia Shang, Stephen Frost.)
Documentation Bug Fixes:
* Add none to compress-type option reference and fix example. (Reported by Ugo Bellavance, Don Seiler.)
* Add missing azure type in repo-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
* Fix typo in repo-cipher-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
Documentation Improvements:
* Clarify that expire must be run regularly when expire-auto is disabled. (Reviewed by Douglas J Hunley. Suggested by Douglas J Hunley.)
When restoring a cluster that will be promoted but is not intended to be the new primary, it is important to disable archiving to avoid polluting the repository with useless WAL. This option makes disabling archiving a bit easier.
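A usage sketch (stanza name illustrative):

    pgbackrest --stanza=demo --type=standby --archive-mode=off restore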
Automatically retrieve the role and temporary credentials for S3 when the AWS instance is associated with an IAM role. Credentials are automatically updated when they are <= 5 minutes from expiring.
Basic configuration is to set repo1-s3-key-type=auto. repo1-s3-role can be used to set a specific role, otherwise it will be retrieved automatically.
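A configuration sketch (role name illustrative):

    [global]
    repo1-s3-key-type=auto
    # optional -- if unset, the role is retrieved automatically from the instance
    repo1-s3-role=pgbackrest-backup-role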
Add more info (command, version, options) to asserts, and errors when debug logging is enabled. This won't cover all cases but might mean we get more info in some circumstances.
When testing the common/stack-trace module it is important not to call this test function since the trace stack is empty and it will cause a buffer underrun.
Instead use a macro that is only defined under the correct circumstances and add an assert() to catch future regressions.
Currently each module that needs to collect statistics implements custom code to do so. This is cumbersome.
Create a general purpose module for collecting and reporting statistics. Statistics are output in the log at detail level, but there are other uses they could be put to eventually.
No new functionality is added. This is just a drop-in replacement for the current statistics, with the advantage of being more flexible.
The new stats are slower because they involve a list lookup, but performance testing shows stats can be updated at about 40,000/ms which seems fast enough for our purposes.
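A hypothetical sketch of a list-backed counter to illustrate the approach (the function name is borrowed for illustration; the real API may differ):

    #include <stdint.h>
    #include <string.h>

    typedef struct Stat
    {
        const char *key;
        uint64_t total;
    } Stat;

    static Stat statList[64];
    static unsigned int statListSize;

    /* The list lookup is what makes these updates slower than bespoke
       counters, but ~40,000 updates/ms is still fast enough */
    static void statInc(const char *key)
    {
        for (unsigned int statIdx = 0; statIdx < statListSize; statIdx++)
        {
            if (strcmp(statList[statIdx].key, key) == 0)
            {
                statList[statIdx].total++;
                return;
            }
        }

        statList[statListSize++] = (Stat){.key = key, .total = 1};
    }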
HTTP/1.0 connections are closed by default after a single response. Other than that, treat 1.0 the same as 1.1.
HTTP/1.0 allows different date formats that we can't parse but for now, at least, we don't need any date headers from 1.0 requests.
The prior implementation only supported a single connection on TLS. This is not flexible enough for complex testing scenarios which might require multiple simultaneous connections on different protocols.
Allow multiple simultaneous connections and add plain sockets as a protocol option. Rename the functions used for server scripting to hrnServerScript*() to make it clear they are related. Improve error messages when less input is received by the server than expected.
Also, do a bit of cleanup and add more comments.
Following up on 111d33c, implement the new interfaces for socket client/session. Now HTTP objects can be used over TLS or plain sockets.
This required adding ioSessionFd() and ioSessionRole() to provide the functionality of sckSessionFd() and sckSessionType(). sckClientHost() and sckClientPort() don't make sense in a generic interface so they were replaced with ioSessionName().
Rather than calling storageS3New() directly, create the storage by loading a configuration and calling repoStorageGet(). This is a better end-to-end test and cuts down on a lot of redundant tests.
Add tests that include security tokens in error messages to ensure they are redacted.
Move sckSessionReadyRead()/Write() into the IoRead/IoWrite interfaces. This is a more logical place for them and the alternative would be to add them to the IoSession interface, which does not seem like a good idea.
This is mostly a refactor, but a big change is the select() logic in fdRead.c has been replaced by ioReadReady(). This was duplicated code that was being used by our protocol but not TLS. Since we have not had any problems with requiring poll() in the field this seems like a good time to remove our dependence on select().
Also, IoFdWrite now requires a timeout so update where required, mostly in the tests.
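A minimal sketch of a poll()-based readiness check of the kind that replaces select() (names illustrative):

    #include <poll.h>
    #include <stdbool.h>

    static bool fdReady(int fd, bool write, int timeoutMs)
    {
        struct pollfd pollFd = {.fd = fd, .events = write ? POLLOUT : POLLIN};

        /* > 0 means ready, 0 means timeout; error handling omitted */
        return poll(&pollFd, 1, timeoutMs) > 0;
    }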
These interfaces allow the HttpClient and HttpSession objects to work with protocols other than TLS, e.g. plain sockets. This is necessary to allow standard HTTP -- right now only HTTPS is allowed, i.e. HTTP over TLS.
For now only TlsClient and TlsSession have been converted to the new interfaces. SocketClient and SocketSession will also need to be converted but first sckSessionReadyRead() and sckSessionReadyWrite() need to be moved into the IoRead and IoWrite interfaces, since they are not a good fit for IoSession.
Pretty much everywhere "handle" is used, what is really meant is file descriptor (fd). This terminology got migrated over from Perl and is just not quite correct, or at least not as correct as fd.
There were also plenty of places where fd was used, so now all uses are consistent.
The Perl code was not updated but might be in a future commit.
This does not appear to have been used in quite some time and the tests are equally useless because they don't prove the correct port was passed to httpClientNew().
Before 9f2d647 TLS errors included additional details in at least some cases. After 9f2d647 a connection to an HTTP server threw `TLS error [1]` instead of `unable to negotiate TLS connection: [336031996] unknown protocol`.
Bring back the detailed messages to make debugging TLS errors easier. Since the error routine is now generic the `unable to negotiate TLS connection` context is not available, so the error looks like `TLS error [1:336031996] unknown protocol`.
This loop was using a lot of memory without freeing it at intervals.
Rewrite to use char arrays when possible to reduce memory that needs to be allocated and freed.
Zigzag encoding places the sign bit in the least significant bit so that -1 is encoded as 1, 1 as 2, etc. This moves as many bits as possible into the low order bits which is good for other types of encoding, e.g. base-128.
See https://en.wikipedia.org/wiki/Variable-length_quantity#Zigzag_encoding.
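The standard encode/decode expressions for 64-bit values, matching the description above:

    #include <stdint.h>

    static inline uint64_t zigZagEncode(int64_t value)
    {
        /* -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ... */
        return ((uint64_t)value << 1) ^ (uint64_t)(value >> 63);
    }

    static inline int64_t zigZagDecode(uint64_t value)
    {
        return (int64_t)(value >> 1) ^ -(int64_t)(value & 1);
    }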
It seems like overkill to encode this when other enums (e.g. StorageInfoLevel) are passed as integers.
Instead note that StorageType values should not be changed and remove the special encoding.
The fix for = characters in info files (039d314) added JSON validation but discarded the resulting Variant which means the JSON is being parsed twice. This nearly doubles the time to load a manifest since a lot of complex JSON is involved.
Time to load a million file manifest:
Before 039d314: 7.8s
After 039d314: 15.5s
This patch: 7.5s
To fix this regression return the Variant in the callback so the caller does not have to parse it again. The new code appears slightly more efficient overall, probably because there are fewer operations against Strings.
We use the Z suffix in many functions to indicate that we are expecting a zero-terminated string so make this function conform to the pattern.
As a bonus the new name is a bit shorter, which is a good quality in a commonly-used function.
The manifest uses the = character as the key/value separator so = characters in the key cause parsing errors and lead to an error or segfault.
Since the value must be valid JSON we can keep checking the value on the right side of the = and stop building the key when the value is valid. It's a bit hackish but it does seem to do the job without breaking the manifest format.
Unsurprisingly this makes parsing about 50% slower but it's still more than fast enough. Parsing 10 million key/values takes about 6.5s for the old code and 10s for the new code. Since the value is used as JSON downstream we can reclaim most of this time by just passing the JSON value rather than making the callback reparse it. We'll save that for another commit, though.
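A simplified sketch of the split, with jsonValidate() standing in for whatever validation call is actually used:

    #include <stdbool.h>
    #include <string.h>

    /* Advance past embedded '=' characters until the right side parses as
       valid JSON -- what remains on the left is the key */
    static const char *keyValueSplit(const char *line, bool (*jsonValidate)(const char *))
    {
        const char *equal = strchr(line, '=');

        while (equal != NULL && !jsonValidate(equal + 1))
            equal = strchr(equal + 1, '=');

        return equal;                           /* NULL means no valid split */
    }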
Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened.
Also fix the related issue that the local processes were not being shut down when they completed, which meant that they might time out before being closed when pgbackrest terminated.
Use a test storage driver to allow manifestNewBuild() to be run against a test cluster at any scale without having to write files to disk.
Simplify the test by using the output of manifestNewBuild() to feed manifestSave() and manifestNewLoad().
Also add manifest size to the output.
Calculates the memory used by the context and all child contexts.
This is primarily useful for debugging but it is not conditional on DEBUG because it is useful for profile/performance tests.
A number of tests used invalid JSON values where an error was expected or the value would be ignored.
Update these tests to use valid JSON values so all values in the file can be validated even if they are not used.
Something like 3="string" would return an Int64 variant and ignore the invalid portion after the integer. Other JSON interface functions have this check but it was forgotten here.
There are no current issues because of this but we want to be able to validate arbitrary JSON strings and this function was not working correctly for that usage.
This function is not used in the core code so remove it and update the test where it was used.
There may eventually be a need for a strLstNewP() function but it doesn't seem worth the code churn until there is an actual requirement.
The old constructor was left around to reduce code churn during the migration but it just makes the code harder to read and search.
Remove the old constructor and rename all remaining instances to lstNewP(), which by default has the same semantics.
Testing against static checksums is valuable but it can become burdensome when supporting multiple architectures.
Reduce the number of tests we are doing against static checksums when the architecture can cause the checksum to vary.
Bug Fixes:
* Fix restore --force acting like --force --delta. This caused restore to replace files based on timestamp and size rather than overwriting, which meant some files that should have been updated were left unchanged. Normal restore and restore --delta were not affected by this issue. (Reviewed by Cynthia Shang.)
Features:
* Azure support for repository storage. (Reviewed by Cynthia Shang, Don Seiler.)
* Add expire-auto option. This allows automatic expiration after a successful backup to be disabled. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele.)
Improvements:
* Asynchronous S3 multipart upload. (Reviewed by Stephen Frost.)
* Automatic retry for backup, restore, archive-get, and archive-push. (Reviewed by Cynthia Shang.)
* Disable query parallelism in PostgreSQL sessions used for backup control. (Reviewed by Stefan Fercot.)
* PostgreSQL 13 beta2 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
* Improve handling of invalid HTTP response status. (Reviewed by Cynthia Shang.)
* Improve error when pg1-path option missing for archive-get command. (Reviewed by Cynthia Shang.)
* Add hint when checksum delta is enabled after a timeline switch. (Reviewed by Matt Bunter, Cynthia Shang.)
* Use PostgreSQL instead of postmaster where appropriate. (Reviewed by Cynthia Shang.)
Documentation Bug Fixes:
* Fix incorrect example for repo-retention-full-type option. (Reported by Höseyin Sönmez.)
* Remove internal commands from HTML and man command references. (Reported by Cynthia Shang.)
Documentation Improvements:
* Update PostgreSQL versions used to build user guides. Also add version ranges to indicate that a user guide is accurate for a range of PostgreSQL versions even if it was built for a specific version. (Reviewed by Stephen Frost.)
* Update FAQ for expiring a specific backup set. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Update FAQ to clarify default PITR behavior. (Contributed by Cynthia Shang. Reviewed by David Steele.)
Remove all check and stanza-* tests except for the ones that are intended to succeed. The successful tests show that the queries run with expected results against each version of PG which should also validate queries for the failure tests in the unit tests.
Also remove the tests for --no-online backups since they don't require a database and are well tested in the unit tests.
The prior code was only able to use the main passphrase automatically and expected sub passphrases to be specified for each operation. This was fine for testing but hardly sufficient for a user-facing feature.
Update the code to determine which passphrase to use for any file in the repository and error when an invalid file or location is selected.
The repo-get command is still internal for now, but with this improvement it should be ready to be made public.
There are a few non version specific tests that need to be run in integration because we can't get coverage in the unit tests.
To save some time we'll only run those tests against the same version we use for expect testing.
If a local command, e.g. backupFile(), fails it will stop the entire process. Instead, retry local commands to deal with transient errors.
Remove special logic in the S3 storage driver to retry RequestTimeTooSkewed errors since this is now handled by the general retry mechanism in the places where it is most likely to happen, i.e. file read/write. Also, this error should have been entirely eliminated by the asynchronous TLS implementation.
The Azure storage driver exposes secrets in the query when using SAS authorization. These secrets can show up during logging or when an error occurs.
Allow redaction of queries to prevent secrets from being exposed in logs and errors.
A shared access signature (SAS) provides granular, delegated access to resources in a storage account. This is often preferable to using a shared key which provides more access and is a greater security risk if compromised.
Rework size limits so that this->size is always the current size no matter how much is allocated.
Most importantly, this removes the conditional in bufSize(), which makes it a better candidate for inlining.
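A simplified sketch (field names assumed):

    #include <stddef.h>

    typedef struct Buffer
    {
        unsigned char *buffer;
        size_t sizeAlloc;                       /* bytes allocated */
        size_t size;                            /* bytes in use -- always current */
    } Buffer;

    /* No limit conditional anymore, so this is a trivial accessor the compiler
       can easily inline */
    static inline size_t bufSize(const Buffer *this)
    {
        return this->size;
    }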
This caused restore to replace files based on timestamp and size rather than overwriting, which meant some files that should have been updated were left unchanged. Normal restore and restore --delta were not affected by this issue.
httpUriDecode() reverses the encoding in httpUriEncode().
httpQueryNewStr() creates a new HttpQuery by parsing a query string.
httpQueryMerge() merges the contents of one query into another query.
Azure and Azure-compatible object stores can now be used for repository storage.
Currently only shared key authentication is supported but SAS will be added soon.
There don't appear to be any behavioral changes since PostgreSQL 12 and all the tests pass.
Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
There is no need to have parallelism enabled in a backup control session. In particular, 9.6 marks pg_stop_backup() as parallel-safe but an error will be thrown if pg_stop_backup() is run in a worker.
When uploading large files the upload is split into multiple parts which are assembled at the end to create the final file. Previously we waited until each part was acknowledged before starting on the processing (i.e. compression, etc.) of the next part.
Now, the request for each part is sent while processing continues and the response is read just before sending the request for the next part. This asynchronous method allows us to continue processing while the S3 server formulates a response.
Testing from outside AWS in a high-bandwidth, low-latency environment showed a 35% improvement in the upload time of 1GB files. The time spent waiting for multipart notifications was reduced by ~300% (this measurement included the final part which is not uploaded asynchronously).
There are still some possible improvements: 1) the creation of the multipart id could be made asynchronous when it looks like the upload will need to be multipart (this may incur cost if the upload turns out not to be multipart). 2) allow more than one async request (this will use more memory).
A fair amount of refactoring was required to make the HTTP responses asynchronous. This may seem like overkill but having well-defined request, response, and session objects will also be advantageous for the upcoming HTTP server functionality.
Another advantage is that the lifecycle of an HttpSession is better defined. We only want to reuse sessions that complete the request/response cycle successfully, otherwise we consider the session to be in a bad state and would prefer to start clean with a new one. Previously, this required complex notifications to mark a session as "successfully done". Now, ownership of the session is passed to the request and then the response and only returned to the client after a successful response. If an error occurs anywhere along the way the session will be automatically closed by the object destructor when the request/response object is freed (depending on which one currently owns the session).
strCat() did not follow our convention of appending Z to functions that accept zero-terminated strings rather than String objects.
Add strCatZ() to accept zero-terminated strings and update strCat() to accept String objects.
Use LF_STR where appropriate but don't use other String constants because they do not improve readability.
Test matrices were previously simplified for the mock/* tests (e.g. d4410611, d489eb87) but not for real/all since the rules for which tests would run with which options was extremely complex. This only got more complex when new compression formats were added.
Because the loop-generated matrix was so large, most tests were skipped for most option combinations following arcane logic which was nearly impossible to decipher even when reading the code, and completely impossible from the test.pl interface. As a consequence, important tests got excluded. For example, backup from standby was excluded for most versions of PostgreSQL because it was only run once per distro, against the latest version to be included in that distro.
Simplify the tests by having a single run per PostgreSQL version and vary test parameters according to the capabilities of each version and the underlying distro. So, ZST testing is based on whether the distro supports ZST. Every test is run for each set of parameters based on the capabilities of the PostgreSQL version, e.g. backup from standby is not attempted on versions that don't support it.
Note that since more tests are running the overall time to run the real/all tests has increased by about 20-25%. Some time may be saved by removing tests that are adequately covered by unit tests, but that should be the subject of another commit. Another option would be to limit some non version-specific tests to a single, well-defined version of PostgreSQL, e.g. the version that is run by expect tests, currently 9.6.
The motivation for this refactor is that new storage drivers are coming and the loop-generated test matrix simply was not up to the task of adding them.
The following is an example of the new test log (note longer runtime of each test):
module=real, test=all, run=1, pg-version=10 (106.91s)
module=real, test=all, run=1, pg-version=9.5 (151.09s)
module=real, test=all, run=1, pg-version=9.2 (123.11s)
module=real, test=all, run=1, pg-version=9.1 (129s)
vs. the old test log (sub-second tests were skipped entirely):
module=real, test=all, run=2, pg-version=10 (0.31s)
module=real, test=all, run=3, pg-version=10 (0.26s)
module=real, test=all, run=4, pg-version=10 (60.39s)
module=real, test=all, run=1, pg-version=10 (69.12s)
module=real, test=all, run=6, pg-version=10 (34s)
module=real, test=all, run=5, pg-version=10 (42.75s)
module=real, test=all, run=2, pg-version=9.5 (0.21s)
module=real, test=all, run=3, pg-version=9.5 (0.21s)
module=real, test=all, run=4, pg-version=9.5 (0.21s)
module=real, test=all, run=5, pg-version=9.5 (0.26s)
module=real, test=all, run=6, pg-version=9.5 (0.21s)
module=real, test=all, run=1, pg-version=9.2 (72.78s)
module=real, test=all, run=2, pg-version=9.2 (0.26s)
module=real, test=all, run=3, pg-version=9.2 (0.31s)
module=real, test=all, run=4, pg-version=9.2 (0.21s)
module=real, test=all, run=5, pg-version=9.2 (0.21s)
module=real, test=all, run=6, pg-version=9.2 (0.21s)
module=real, test=all, run=1, pg-version=9.5 (88.41s)
module=real, test=all, run=2, pg-version=9.1 (0.21s)
module=real, test=all, run=3, pg-version=9.1 (0.26s)
module=real, test=all, run=4, pg-version=9.1 (0.21s)
module=real, test=all, run=5, pg-version=9.1 (0.31s)
module=real, test=all, run=6, pg-version=9.1 (0.26s)
module=real, test=all, run=1, pg-version=9.1 (72.4s)
The unsupported version error is showing up on older versions of PostgreSQL (e.g. 9.1, 9.2) on RHEL6 when setting up a standby with streaming replication. The error occurs when a client does not properly send a version number and it's not clear why it is happening here, but it does not appear to have anything to do with pgBackRest and only affects RHEL6, i.e. 9.1 and 9.2 do not show this error on other distros.
For now ignore the error since RHEL6 is nearly EOL.
HTTP is an acronym so it should be capitalized. Coding conventions dictate otherwise for function and type names but that should not have been propagated to comments and messages.
strPtr() is called more than any other function and during profiling (with or without optimization) it can end up using a disproportionate amount of the total runtime. Even though it is fast, the profiler has a minimum resolution for each function call so strPtr() will often end up towards the top of the list even though the real runtime is quite small.
Instead, inline strPtr() and indicate to gcc that it should be inlined even for non-optimized builds, since that's how profiles are usually generated.
To make strPtr() smaller require "this" to be non-NULL and add another function, strPtrNull(), to deal with the few cases where we need NULL handling.
As a bonus this makes the executable about 1% smaller even when compared to a prior optimized build which would inline some percentage of strPtr() calls.
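A simplified sketch of the approach (struct layout assumed):

    #include <assert.h>
    #include <stddef.h>

    typedef struct String
    {
        size_t size;
        char *buffer;
    } String;

    /* Force inlining even in the non-optimized builds used for profiling;
       the assert compiles out in production builds */
    __attribute__((always_inline)) static inline const char *strPtr(const String *this)
    {
        assert(this != NULL);
        return this->buffer;
    }

    static inline const char *strPtrNull(const String *this)
    {
        return this == NULL ? NULL : this->buffer;
    }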
There is no sense in generating detailed coverage reports in CI environments where they will never be seen. It takes time, and format differences in some older versions can cause problems in the report generation code.
Note that missing coverage will still be reported on stdout and the test will fail.
These were missed in d41eea68 when the functionality of TEST_RESULT_STR() was changed. Using TEST_RESULT_STR() instead of TEST_RESULT_PTR() is more type-safe and clearer.
Add a comment to make it clear that TEST_RESULT_PTR() should be used only when a better alternative is not available.
This aligns better with general PostgreSQL usage and our own documentation (updated in 4bcef702).
Usage in the backup.manifest tests has not been updated since it might break the file format.
Expressions only worked at the first level of recursion because the expression was also being applied to paths so the path had to match the filter in order to recurse.
This is not considered a bug since it does not affect any existing code paths, but it is required for the general-purpose repo-ls command.
A truncated HTTP response status could lead to an unfriendly error message, which would be retried, but could be confusing if the error was persistent and required debugging.
Improve the error handling overall to catch more error cases explicitly and respond better to edge cases.
Also update the terminology in comments to align with the RFC. Variable and function names were not changed because a refactor is intended for HTTP response and it doesn't seem worth the additional code churn.
The prior harness required a separate function to contain the server behavior but this made keeping the client/server code in sync very difficult and in general meant test writing took longer.
Now, commands to define server behavior are inline with the client code, which should greatly simplify test writing.
Bug Fixes:
* Fix issue checking if file links are contained in path links. (Reviewed by Cynthia Shang. Reported by Christophe Cavallié.)
* Allow pg1-path to be optional for synchronous archive-push. (Reviewed by Cynthia Shang. Reported by Jerome Peng.)
* The expire command now checks if a stop file is present. (Fixed by Cynthia Shang. Reviewed by David Steele.)
* Handle missing reason phrase in HTTP response. (Reviewed by Cynthia Shang. Reported by Tenuun.)
* Increase buffer size for lz4 compression flush. (Reviewed by Cynthia Shang. Reported by Eric Radman.)
* Ignore pg-host* and repo-host* options for the remote command. (Reviewed by Cynthia Shang. Reported by Pavel Suderevsky.)
* Fix possibly missing pg1-* options for the remote command. (Reviewed by Cynthia Shang. Reported by Andrew L'Ecuyer.)
Features:
* Time-based retention for full backups. The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days. (Contributed by Cynthia Shang, Pierre Ducroquet. Reviewed by David Steele.)
* Ad hoc backup expiration. Allow the user to remove a specified backup regardless of retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Zstandard compression support. Note that setting compress-type=zst will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* bzip2 compression support. Note that setting compress-type=bz2 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Contributed by Stephen Frost. Reviewed by David Steele, Cynthia Shang.)
* Add backup/expire running status to the info command. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Expire WAL archive only when repo-retention-archive threshold is met. WAL prior to the first full backup was previously expired after the first full backup. Now it is preserved according to retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add local MD5 implementation so S3 works when FIPS is enabled. (Reviewed by Cynthia Shang, Stephen Frost. Suggested by Brian Almeida, John Kelley.)
* PostgreSQL 13 beta1 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace. (Reviewed by Cynthia Shang.)
* Reduce buffer-size default to 1MiB. (Reviewed by Stephen Frost.)
* Throw user-friendly error if expire is not run on repository host. (Contributed by Cynthia Shang. Reviewed by David Steele.)
The purpose of the remote command is to get access to local resources, so a remote should never start another remote. However, this could happen if there were host settings on the remote host, which ended badly with lock errors, loops, etc.
Add pg-local and repo-local options to indicate that the resource is local even if there are host settings.
Note that for the time being these options are internal and not intended for general usage. However, this is likely the direction needed to allow for more symmetric and manageable configurations.
Some pg1-* options are required by the remote so if they are not provided in the remote's configuration file then it may cause a configuration error, depending on the operation. This currently only applies to the pg1-path option.
This is still an issue for repo-* options but the same solution cannot be applied because some repo-* options are secure and cannot be passed on the command-line.
There don't appear to be any behavioral changes since PostgreSQL 12 and all the tests pass.
Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
S3 requires the Content-MD5 header for many requests but MD5 is not available via OpenSSL when FIPS is enabled because it is considered to be insecure.
Even though our usage does not present any security risks, a local MD5 implementation is required to circumvent the over-broad FIPS restriction.
Vendorize the MD5 implementation found at https://openwall.info/wiki/people/solar/software/public-domain-source-code/md5 and add full coverage for the module in the common/crypto unit tests.
The prior default was determined by benchmarking the Perl code prior to the 1.0 release. In general buffer allocation was more expensive in Perl so large buffers gave the best performance. This was due to multiple buffer allocations for each filter in an IO operation.
The C code allocates fixed buffers for each IO operation so the cost for buffer allocation is lower than Perl. That being the case it made sense to benchmark the C code to determine the optimal buffer default.
The performance/storage tests were used to measure the performance of a variety of filters. 1GiB of data was processed by each filter 10 times and the results of the tests were averaged.
While most buffer sizes gave similar performance, 1MiB appeared to perform the best overall. Of course, different architectures are likely to yield different results but this seems like a sensible default. The buffer-size option may still need to be manually configured to give optimal results.
Raw test data for reference:
4MB buffer (prior default)
copy time 1807ms, avg time 180ms, avg throughput: 5942MB/s
md5 time 14200ms, avg time 1420ms, avg throughput: 756MB/s
sha1 time 11431ms, avg time 1143ms, avg throughput: 939MB/s
sha256 time 23463ms, avg time 2346ms, avg throughput: 457MB/s
gzip -6 time 381199ms, avg time 38119ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
1MB buffer (new default)
copy time 1760ms, avg time 176ms, avg throughput: 6100MB/s
md5 time 13739ms, avg time 1373ms, avg throughput: 781MB/s
sha1 time 11025ms, avg time 1102ms, avg throughput: 973MB/s
sha256 time 22539ms, avg time 2253ms, avg throughput: 476MB/s
gzip -6 time 372995ms, avg time 37299ms, avg throughput: 28MB/s
lz4 -1 time 15118ms, avg time 1511ms, avg throughput: 710MB/s
512K buffer
copy time 1782ms, avg time 178ms, avg throughput: 6025MB/s
md5 time 13724ms, avg time 1372ms, avg throughput: 782MB/s
sha1 time 10959ms, avg time 1095ms, avg throughput: 979MB/s
sha256 time 22982ms, avg time 2298ms, avg throughput: 467MB/s
gzip -6 time 378120ms, avg time 37812ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
256K buffer
copy time 1805ms, avg time 180ms, avg throughput: 5948MB/s
md5 time 13706ms, avg time 1370ms, avg throughput: 783MB/s
sha1 time 11074ms, avg time 1107ms, avg throughput: 969MB/s
sha256 time 22588ms, avg time 2258ms, avg throughput: 475MB/s
gzip -6 time 372645ms, avg time 37264ms, avg throughput: 28MB/s
lz4 -1 time 16346ms, avg time 1634ms, avg throughput: 656MB/s
Improve the accuracy of the calculations in several areas with better integer expressions.
Make the input buffer size configurable. Previously it was always 1MiB, i.e. the block size.
Use a macro for output results to reduce code duplication.
Reason phrases (e.g. OK) are optional in HTTP 1.1 but the space after the status code is not. When the reason phrase was missing, the required space was trimmed along with the trailing CR, leading to a format error.
Rework the logic to preserve the space and allow empty reason phrases.
Found while testing against the Backblaze S3-compatible API.
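A minimal sketch of the reworked parsing, assuming only the trailing CR/LF has been stripped beforehand (function and variable names are illustrative, not the actual implementation):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    /* Parse "HTTP/1.1 200 OK" or "HTTP/1.1 200 " -- the reason phrase may be
       empty but the space after the status code is mandatory */
    static bool
    httpStatusLineParse(const char *line, int *code, const char **reason)
    {
        const char *space = strchr(line, ' ');

        if (space == NULL)
            return false;

        char *end = NULL;
        long status = strtol(space + 1, &end, 10);

        /* A three-digit status must be followed by a space, even when the
           reason phrase is empty -- trimming whitespace here would destroy it */
        if (end != space + 4 || *end != ' ')
            return false;

        *code = (int)status;
        *reason = end + 1; /* may point at an empty string */

        return true;
    }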
Vendorized code is copied from another project when a library is not available and a git subproject won't work. Currently all the vendorized code is copied from PostgreSQL but it makes sense to have a more general mechanism for indicating vendorized code.
The .vendor extension will be used to denote vendorized code in the same way that .auto is used to denote auto-generated code.
Rather than bS3 use strStorage which can indicate more than two storage types.
For the moment there are still only two storage types but this change is required before more can be added.
cdebfb09 added relative times to backup.info but a subtle issue was introduced that would cause the tests to fail if the time acquired by cmdExpire() was exactly the same as the timeNow used to format backup.info. cmdExpire() was working correctly given the inputs, but the tests did not run predictably.
This was found while running the tests with --no-valgrind --no-coverage which allows them to run a lot faster, thus exposing the timing issue.
These tests required sudo to achieve complete coverage.
Add a new coverage exception, vm_covered, that applies to code that can only be covered in a container. When the test is run outside of a container, code sections that require a container will be excluded with TEST_CONTAINER_REQUIRED and the coverage exception will be added to prevent a coverage error.
This does require marking up the core code with vm_covered, which in some modules (e.g. common/io/tls/client) can be extensive. It's possible that some of these tests can be rewritten to be less dependent on sudo but no attempt was made to do that here.
Only allow coverage summaries in a vm since coverage summaries outside a vm will not be complete, which was true even before this commit.
Update error types thrown by bzip2 to be more consistent with gzip.
Update the bzip2 and gzip error default to AssertError, since that is the more common case in both, and add a 'break;' to the default clause. We do not intend to simply fall through those case statements, so even when the default is last it should be explicit.
Clean up some tabs that snuck in, rename a variable to be clearer, and add some comments.
The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days.
The new option will default to 'count' and therefore will not affect current installations. Setting repo-retention-full-type to 'time' allows the user to specify full backup retention as a time period in days. Using this method, a full backup can be expired only if the time the backup completed is older than the number of days set with repo-retention-full (calculated from the moment the expire command is run) and at least one full backup meets the retention period. If archive retention has not been configured, then the default settings will expire archives that are prior to the oldest retained full backup. For example, if there are three full backups that completed 25 days ago (F1), 20 days ago (F2), and 10 days ago (F3), and the full retention period is 15 days, then only F1 will be expired; F2 will be retained because F3 is not at least 15 days old.
Newer versions of sudo output this message to stderr when run in a container:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
See https://github.com/sudo-project/sudo/issues/42 for details.
A simple workaround is to prevent sudo from disabling core dumps. This seems safe enough because if sudo is segfaulting then core files are the least of our worries.
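Assuming a sudo version new enough to read sudo.conf (where disable_coredump is a documented setting), the workaround is a single line in /etc/sudo.conf:

    Set disable_coredump false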
There are a number of Valgrind errors on Ubuntu 12.04 which do not happen on newer distro versions. However, suppressions for these errors have masked legitimate issues in subsequent code.
Instead, make suppressions VM specific so errors in other VMs are not masked.
Resolving localhost can vary based on the local network configuration so it is safer to just use a static IP.
This was found while testing on Travis-CI arm64.
bzip2 is a widely available, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), while being around twice as fast at compression and six times faster at decompression.
bzip2 is currently available on all supported platforms.
This was an oversight in 438b957f which added multiple compression type support. The booleans were interpreted as none and gz which works fine for the CompressType enum until the position of gz or none changes.
Zstandard is a fast lossless compression algorithm targeting real-time compression scenarios at zlib-level and better compression ratios. It is backed by a very fast entropy stage, provided by the Huff0 and FSE libraries.
Zstandard version >= 1.0 is required, which is generally only available on newer distributions.
If the WAL path is absolute then pg1-path should be optional but in fact it was required to load pg_control.
Skip the pg_control check when pg1-path is not specified. The check against the stanza version/system-id remains to protect the repo from corruption.
Perhaps this was intended to verify the WAL size but was never implemented.
Verifying the WAL size is probably a good idea so this member may be added back if the feature is implemented.
An upcoming feature requires new parameters for storagePosixNew() and this causes a lot of churn because almost every test creates a Posix storage object. Some refactoring in the tests might reduce this duplication but storagePosixNew() is collecting a lot of parameters so converting to storagePosixNewP() makes sense in any case.
There are relatively few call sites in the core code but they still benefit from better readability after this change.
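The P() convention wraps optional parameters in a struct built with designated initializers; a simplified sketch of the pattern (parameter names are illustrative and the supporting types are assumed):

    typedef struct StoragePosixNewParam
    {
        bool write;         /* allow writes? */
        unsigned int mode;  /* default mode, 0 means use the default */
    } StoragePosixNewParam;

    Storage *storagePosixNew(const String *path, StoragePosixNewParam param);

    /* Callers name only the parameters they care about, so adding a new
       parameter does not churn every call site */
    #define storagePosixNewP(path, ...)                                        \
        storagePosixNew(path, (StoragePosixNewParam){__VA_ARGS__})

    /* Usage: storagePosixNewP(path); or storagePosixNewP(path, .write = true); */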
There is no conflict if the path containing a file link is a parent path of a path link. The Perl code apparently had this right but the migration to C missed it.
Exclude this case when checking for link conflicts.
There have been a number of segfaults reported because a string option expected to be non-null was actually null. This is generally due to options that are expected to be set but are in fact optional.
Protect against this by creating cfgOptionStrNull() to get options that can be null, while changing cfgOptionStr() to always expect non-null. There are relatively few places where nulls are expected.
There is definitely a chance for breakage here as null options might currently be working in the field but will be caught by this new check. Hopefully introducing the check early in the release cycle will allow us to catch any issues.
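Conceptually the split looks something like this (a sketch, not the exact implementation):

    /* Returns NULL when the option is unset -- callers must handle it */
    const String *cfgOptionStrNull(ConfigOption optionId);

    /* Errors on NULL so callers can rely on a valid pointer */
    const String *
    cfgOptionStr(ConfigOption optionId)
    {
        const String *result = cfgOptionStrNull(optionId);

        if (result == NULL)
            THROW_FMT(AssertError, "option '%s' is null but non-null was expected", cfgOptionName(optionId));

        return result;
    }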
It makes sense to do this check right after the first compression so any issues are caught early.
Also, none of the current compression formats omit decompressCmd so make the test mandatory.
Previously when retention-archive was set (either by the user or by default), archives prior to the archive-start of the oldest remaining full backup (after backup expiration occurred) would be expired even though the retention-archive threshold had not been met. For example, if there was one full backup remaining after backup expiration and retention-archive was set to 2 with retention-archive-type=full, then archives prior to the archive-start of the remaining full backup would still be removed even though retention-archive required two full backups to remain before archives could be expired.
The thought was to keep the archive directory clean and since the full backup did not require prior archives, it was safe to delete them. However, this has caused problems for some users in the past (because they needed the WAL for other purposes) and with the new adhoc and time-based retention features, it was decided that the archives should remain until the threshold was met. The archives will eventually be removed and if having them causes space issues, the expire command and the retention-archive can always be run and adjusted.
Coverity was concerned that regExpError() might return and lead to an invalid reference of "this". This was unlikely since the function should never return but Coverity didn't know that. Also, a difference in error-handling logic at the two sites could cause the issue Coverity reported if they were to get out of sync.
Fix by refactoring out the core error function so that it is clear it will never return.
If an option may not be valid for a command it should be checked with cfgOptionValid() or cfgOptionTest().
It appears this rule is followed pretty strictly since the only changes required were in unit tests.
The specified backup set (i.e. the backup label provided and all of its dependent backups, if any) will be expired regardless of backup retention rules except that at least one full backup must remain in the repository.
Each option type enforced its own constraints but there was a lot of duplication. Centralize the enforcement to remove the duplication.
Also convert the option type assert to a production error. This is unlikely to happen in production but the test is quite cheap so it can't hurt.
Finally, add a NULL check. Most option types can never be NULL.
This is implemented by checking for a backup lock on the host where info is running so there are a few limitations:
* It is not currently possible to know which command is running: backup, expire, or stanza-*. The stanza commands are very unlikely to be running so it's pretty safe to guess backup/expire. Command information may be added to the lock file to improve the accuracy of the reported command.
* If the info command is run on a host that is not participating in the backup, e.g. a standby, then there will be no backup lock. This seems like a minor limitation since running info on the repo or primary host is preferred.
Make the restore clean process look more like manifest build, i.e. do cleanup of each target root directory outside the main cleanup callback. This means some code duplication but removes the logic handling "dot" paths.
Add tests for both restore and backup (which already worked but was not tested).
Bug Fixes:
* Remove empty subexpression from manifest regular expression. MacOS was not happy about this though other platforms seemed to work fine. (Fixed by David Raftis.)
Improvements:
* Non-blocking TLS implementation. (Reviewed by Slava Moudry, Cynthia Shang, Stephen Frost.)
* Only limit backup copy size for WAL-logged files. The prior behavior could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup. (Reviewed by Cynthia Shang.)
* TCP keep-alive options are configurable. (Suggested by Marc Cousin.)
* Add io-timeout option.
Timeout used for connections and read/write operations.
Note that the entire read/write operation does not need to complete within this timeout but some progress must be made, even if it is only a single byte.
The prior blocking implementation seemed to be prone to locking up on some (especially recent) kernel versions. Since we were unable to reproduce the issue in a development environment we can only speculate as to the cause, but there is a good chance that blocking sockets were the issue or contributed to the issue.
So move to a non-blocking implementation to hopefully clear up these issues. Testing in production environments that were prone to locking shows that the approach is promising and at the very least not a regression.
The main differences from the blocking version are the non-blocking connect() implementation and handling of WANT_READ/WANT_WRITE retries for all SSL*() functions.
Timeouts in the tests needed to be increased because socket connect() and TLS SSL_connect() were not included in the timeout before. The tests don't run any slower, though. In fact, all platforms but Ubuntu 12.04 worked fine with the shorter timeouts.
select() is a bit old-fashioned and cumbersome to use. Since the select() code needed to be modified to handle write ready this seems like a good time to upgrade to poll().
poll() has been around for a long time so there doesn't seem to be any need to provide a fallback to select().
Also change the error on timeout from FileReadError to ProtocolError. This works better for read vs. write and failure to poll() is indicative of a protocol error or unexpected EOF.
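The core of a non-blocking connect guarded by poll() looks roughly like this (a generic sketch with error handling trimmed, not the pgBackRest code):

    #include <errno.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <sys/socket.h>

    /* Start a non-blocking connect, then wait for the socket to become writable */
    static int
    connectNonBlocking(int fd, const struct sockaddr *addr, socklen_t addrLen, int timeoutMs)
    {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

        if (connect(fd, addr, addrLen) == -1 && errno != EINPROGRESS)
            return -1;

        struct pollfd pollFd = {.fd = fd, .events = POLLOUT};

        if (poll(&pollFd, 1, timeoutMs) <= 0)
            return -1; /* timeout or poll error */

        /* The connect completed -- check its final result */
        int error = 0;
        socklen_t errorLen = sizeof(error);
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &error, &errorLen);

        return error == 0 ? 0 : -1;
    }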
The prior behavior introduced in dcddf3a5 could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup since they are copied via tmp files and could change size during the backup.
In general it seems safer to limit this feature to WAL-logged files which will be reconstructed during recovery.
Building on these platforms gives us better coverage for our build code. Cirrus CI was chosen because it is the only service that supports FreeBSD (that we could find).
The FreeBSD configuration for Vagrant is currently just enough to perform a build.
The MacOS configuration is not actually for Vagrant (yet) but does show the steps needed to setup the build environment on MacOS.
This allows us to add new configurations mostly without changing the behavior of vagrant from the command line, i.e. vagrant up and vagrant ssh will continue to bring up the default configuration.
However, vagrant destroy -f will remove all configurations. That's really only a change in behavior if more than one configuration is running, which is not currently possible.
A session looks much the same whether it is initiated from the client or the server, so use the session objects to implement the TLS, HTTP, and S3 test servers.
For TLS, at least, there are some differences between client and server sessions so add a client/server type to SocketSession to determine how the session was initiated.
Aside from reducing code duplication, the main advantage is that the test server will now time out rather than hanging indefinitely when less input than expected is received.
Previously an error was only thrown when errno was set but in practice this is usually not the case. This may have something to do with getting errno late but attempts to get it earlier have not been successful. It appears that errno usually gets cleared and spot research seems to indicate that other users have similar issues.
An error at this point indicates unexpected EOF so it seems better to just throw an error all the time and be consistent.
To test this properly our test server needs to call SSL_shutdown() except when the client expects this error.
This abstraction allows the session code to be shared between the TLS client and (upcoming) server code.
Session management is no longer implemented in TlsClient so the HttpClient was updated to free and create sessions as needed. No test changes were required for HttpClient so the functionality should be unchanged.
Mechanical changes to the TLS tests were required to use TlsSession where appropriate rather than TlsClient. There should be no change in functionality other than how sessions are managed, i.e. using tlsClientOpen()/tlsSessionFree() rather than just tlsClientOpen().
The errorInternalThrowSys*() functions were marked as returning during coverage testing even when they had no possibility to return, i.e. the error parameter was set to constant true. This meant the compiler would treat the functions as returning even when they would not.
Instead create completely separate functions for coverage to use for THROW_ON_SYS_ERROR*() that can return and leave the regular functions marked __noreturn__.
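One way to express the split, using gcc attribute syntax (the names on the coverage side are hypothetical):

    #ifdef DEBUG_COVERAGE
        /* Coverage builds use a variant that may return so the branches in
           THROW_ON_SYS_ERROR*() can be measured */
        void errorInternalThrowOnSys(bool error, int errNo, const char *message);
    #else
        /* Production builds keep the strong no-return guarantee */
        void errorInternalThrowSys(int errNo, const char *message)
            __attribute__((__noreturn__));
    #endif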
This abstraction allows the session code to be shared between the socket client and (upcoming) server code. There should no difference in how the code works -- only the organization has changed. Note that no changes to the tests were required.
This same abstraction will be required for TlsClient but that will be done in a separate commit because it requires test changes.
These forks were done in a custom way (not sure why) and lacked the ability of the standard macros to let the parent wait for the child to exit.
This meant that the server would continue to run after the tests were complete and that multiple servers could run at once. This caused subtle timing and connection issues that required larger timeouts to resolve.
Don't change the timeouts here since they need to be adjusted in future commits anyway.
It is pretty much impossible for a static IP to not resolve to an address but in theory the error could catch other conditions so it seems best to keep it.
When the Vagrant file was updated to use pgbackrest/ instead of /backrest/ as the location for executing tests and building the documentation, parts of contributing.xml (and hence CONTRIBUTING.md) were missed. Only the parts of the document that are actually executed when CONTRIBUTING.md is built were updated; the parts that are not executed were left stale.
This commit fixes the contributing.xml issue but also removes test/README.md as its contents were out of date and redundant given that they are covered in CONTRIBUTING.md.
This limitation forced extra logic in cases where zero wait times were needed.
Remove the limitation and the extra logic in cases where zero wait times are possible.
Help identify whether errors are happening in the forked server or the main test by showing the line number where the server was forked off in the stack trace.
If these are not reset then an error not wrapped in a TEST_ERROR*() macro may show the line number of the previous error in a stack trace, which is confusing.
It is better for the line number to be unreported than wrong.
The default process id was previously always 0 but there are cases where it is useful to be able to set the default.
Currently the only use case is for testing but the upcoming server code will also make use of it.
The storage driver requires two list functions to be implemented, list and infoList. But the former is a subset of the latter so implementing both in every driver is wasteful. The reason both exist is that in Posix it is cheaper to get a list of names than it is to stat files to get size, time, etc. In S3 these operations are equivalent.
Introduce storageInfoLevelType to determine the amount of information required by the caller. That way Posix can work efficiently and all drivers can return only the data required which saves some bandwidth. The storageList() and storageInfoList() functions remain in the storage interface since they are useful -- the only change is simplifying the drivers with no external impact.
Note that since list() accepted an expression infoList() must now do so. Checking the expression is optional for the driver but can be used to limit results or save IO costs.
Similarly, exists() and pathExists() are just specialized forms of info() so adapt them to call info() instead.
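A sketch of the idea (the level names here are illustrative):

    /* How much information the caller needs, from cheapest to most expensive */
    typedef enum
    {
        storageInfoLevelExists,  /* only whether the file exists */
        storageInfoLevelBasic,   /* plus type, size, and timestamp */
        storageInfoLevelDetail,  /* plus ownership, mode, and link destination */
    } StorageInfoLevel;

    /* A Posix driver can skip the stat() call entirely at the exists level,
       while an S3 driver returns the same listing data at every level */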
It's better to start out with plural forms rather than flip back and forth as functions are added and subtracted. So, use "Constructors" instead of "Constructor".
Use "Getters/Setters" rather than "Getters" or "Setters" to avoid similar churn.
This has been the policy for some time but due to migration pressure only new functions and refactors have been following this rule. Now it seems sensible to make a clean sweep and move all the comments that have not been moved already (i.e. most of them).
Only obvious typos and gross inaccuracies in the comments have been fixed. For the most part this was a copy-and-paste operation.
Useless comments, e.g. "New object", were not copied. Even so, there are surely many deficient comments left.
Some rearranging was done where needed and functions were placed in the proper sections, e.g. "Constructors", "Functions", etc.
A few function prototypes were found that no longer had an implementation. These were removed, but there may be more.
The coding document has been updated to reflect this policy, which is not new but has never been documented.
Prior to performing a backup or expiring backups, the backup.info file is validated by reconstructing it from the backups in the repository. When a backup had already been removed from the repo, it was removed from the backup.info file but its dependents were not.
Now, the dependent backups will also be removed from backup.info and only backups in the repo that have their full dependency chain will be added to backup.info if they are missing.
These functions accepted const Buffer objects and returned non-const pointers which is definitely not a good idea. Add bufPtrConst() to handle cases where only a const return value is needed and update call sites.
Use UNCONSTIFY() in cases where library code out of our control requires a non-const pointer. This includes the already-documented exception in command/backup/pageChecksum and input buffers in the gzCompress and gzDecompress filters.
Allows casting const-ness away from an expression, but doesn't allow changing the type. Enforcement of the latter currently only works for gcc-like compilers.
Note that it is not safe to cast const-ness away if the result will ever be modified (it would be undefined behavior). Doing so can cause compiler mis-optimizations or runtime crashes (by modifying read-only memory). It is only safe to use when the result will not be modified, but API design or language restrictions prevent you from declaring that (e.g. because a function returns both const and non-const variables).
Note that this only works in function scope, not for global variables (it would be nice, but not trivial, to improve that).
UNCONSTIFY() requires static assert which is a feature in its own right.
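The macro is essentially a checked cast, along the lines of PostgreSQL's unconstify() (gcc-like compilers only, per the note above, and assuming a STATIC_ASSERT_EXPR() wrapper over C11 _Static_assert):

    /* Cast const away from an expression without allowing the type to change */
    #define UNCONSTIFY(type, expr)                                              \
        (STATIC_ASSERT_EXPR(                                                    \
            __builtin_types_compatible_p(__typeof__(expr), const type),         \
            "UNCONSTIFY() requires a const type"),                              \
        (type)(expr))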
PostgreSQL enables this option when available which seems like a good idea since we also do not share connections between processes.
Note that as in PostgreSQL there is no way to disable this option.
PostgreSQL enables this option when available which seems like a good idea since we also buffer transmissions.
Note that as in PostgreSQL there is no way to disable this option.
This is really a socket option so the new name is clearer.
Since common/io/socket/tcp will contain a mix of options it makes sense to rename it to socket and cascade name changes as needed.
Prior to 2.25 the individual TCP keep-alive options were not being configured due to a missing header. In 2.25 they were being configured incorrectly due to a disconnect between the timeout specified in ms and what was expected by the TCP options, i.e. seconds.
Instead make the TCP keep-alive options directly configurable, with correct units and better testing. Keep-alive is enabled by default (though it can be defaulted to the system setting instead) and the rest of the options are not set by default. This is in line with what PostgreSQL does, though PostgreSQL does not allow keep-alive to be defaulted.
Also move configuration of TCP options before connect() as PostgreSQL does.
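On Linux the options map to setsockopt() calls along these lines (the values are illustrative; each option would only be set when the user configured it):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Configure keep-alive before connect(), as PostgreSQL does */
    static void
    sckKeepAliveSet(int fd)
    {
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

        int idle = 60;      /* seconds of idle time before the first probe */
        int interval = 10;  /* seconds between probes */
        int count = 3;      /* failed probes before the connection drops */

        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
    }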
This functionality was embedded into TlsClient but that was starting to get unwieldy.
Add SocketClient to contain all socket-related client functionality.
Documentation builds and tests have only a few packages in common so rearrange packages to save some time and clarify dependencies.
Remove the libperl-dev package which became obsolete when the LibC module was removed in 79cfd3ae.
Add a few comments for good measure.
The primary purpose of this test (currently) is to measure the performance of storageRemoteInfoList(), which is critical for building a manifest when the PostgreSQL host is remote.
The starting baseline of 1 million files is perhaps a bit aggressive but it seems very likely to blow up if there are performance regressions.
Recent performance improvements allow increasing the baseline of this test.
In general it is best if the baseline is large enough to cause the test to blow up if there are performance regressions.
Features:
* Add lz4 compression support. Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* Add --dry-run option to the expire command. Use dry-run to see which backups/archive would be removed by the expire command without actually removing anything. (Contributed by Cynthia Shang, Luca Ferrari.)
Improvements:
* Improve performance of remote manifest build. (Suggested by Jens Wilke.)
* Fix detection of keepalive options on Linux. (Contributed by Marc Cousin.)
* Add configure host detection to set standards flags correctly. (Contributed by Marc Cousin.)
* Remove compress/compress-level options from commands where unused. These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options. (Reviewed by Cynthia Shang.)
* Limit backup file copy size to size reported at backup start. If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data. (Reviewed by Cynthia Shang.)
If the tests are running quickly then the time target might end up being the same as the end time of the prior full backup. That means restore auto-select will not pick it as a candidate and will restore the last backup instead, causing the restore compare to fail.
So, sleep one second.
Add functions to select a current backup by label and to retrieve a backup dependency list for any given backup.
Update the expire code to utilize the new functions and to expire backup sets from newest dependency to oldest.
Decisions about when to optimize or enable debug code were spread out in too many places making it hard to keep them consistent.
Centralize the logic as much as possible to make it easier to maintain.
Append N characters from a zero-terminated string.
Note that the string does not actually need to be zero-terminated as long as N is <= the end of the string being concatenated.
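For example (a sketch assuming the project's String API of the time):

    String *str = strNew("archive");
    strCatZN(str, "-get-async", 4); /* appends only "-get" */
    /* str now contains "archive-get" */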
In the ExpireEnvTest.pm backupCreate() function, backup-prior for diff backups was incorrectly set to the previous backup regardless of that backup's type. This did not cause any issues in the Mock Expire tests before because it was not being checked. However, in order to reduce churn in the expect logs for a new feature where backup-prior is utilized, this is being fixed so that the full backup is always used as backup-prior.
The major bottleneck was finding the memory allocation to be resized since it required a sequential search through a list.
Instead, put the allocation header at the beginning of the allocation and return an offset to the user for their buffer. This allows us to use pointer arithmetic to get back to the allocation header quickly when resizing. A side effect is to make memFree() faster as well. The downside is we won't detect garbage pointers passed to memResize()/memFree(), which is also true for MemContext pointers.
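Conceptually the layout looks like this (a simplified sketch, not the actual structs):

    typedef struct MemContextAlloc
    {
        size_t size; /* allocation size, stored just before the user buffer */
    } MemContextAlloc;

    /* The user buffer begins just past the header */
    #define MEM_CONTEXT_ALLOC_BUFFER(header)                                    \
        ((void *)((unsigned char *)(header) + sizeof(MemContextAlloc)))

    /* Pointer arithmetic recovers the header from a user buffer in constant
       time -- but a garbage pointer will no longer be detected */
    #define MEM_CONTEXT_ALLOC_HEADER(buffer)                                    \
        ((MemContextAlloc *)((unsigned char *)(buffer) - sizeof(MemContextAlloc)))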
The performance benefits can be pretty large in certain cases, in particular when loading and saving manifests. The following are the before and after performance tests on a 900K file manifest.
Before:
run 003 - manifestNewLoad()/manifestSave()
000.000s l0125 - generate manifest
183.411s l0236 - 101.2MB manifest generated with 900000 files
183.411s l0239 - load manifest
403.816s l0243 - completed in 220405ms
403.816s l0245 - check file total
403.816s l0248 - save manifest
670.217s l0253 - completed in 266401ms
670.217s l0256 - find all files
671.263s l0266 - completed in 1046ms
After:
run 003 - manifestNewLoad()/manifestSave()
000.000s l0125 - generate manifest
007.730s l0236 - 101.2MB manifest generated with 900000 files
007.730s l0239 - load manifest
033.431s l0243 - completed in 25701ms
033.431s l0245 - check file total
033.431s l0248 - save manifest
057.755s l0253 - completed in 24324ms
057.755s l0256 - find all files
058.689s l0266 - completed in 934ms
* Fix a few issues with file names being truncated introduced in 787d3fd6.
* Use function line info from the lcov file to calculate which lines to show for uncovered functions. This is more accurate than what we were doing before and function comment headers are now excluded which reduces clutter in the report.
The prior macros had grown over time to be pretty significant pieces of code that required a lot of compile time, though runtime was efficient.
Move most of the macro code into functions to reduce compile time, perhaps at a slight expense to runtime. The overall performance benefit is 10-15% so this seems like a good tradeoff.
Add TEST_RESULT_UINT_INT() to safely compare uint to int with range checking.
Upcoming changes to the TEST_RESULT_* macros are more type safe and identified that the wrong macros were being used to test results in many cases.
Commit these changes separately to verify that they work with the current macro versions.
Note that no core bugs were exposed by these changes.
TRY...CATCH blocks are fairly expensive and when all the TEST_RESULT*() macros succeed they are not needed.
Instead just record info at the start of the result test so a detailed exception can be thrown in test.c in the rare case where an exception occurs.
This is helpful for test macros that know the line number.
The line number can now be non-zero below the top of the stack without WITH_BACKTRACE so instead ignore the line number for output when it is zero.
This was passing since we don't test WITH_BACKTRACE in CI because it is used only for test builds.
Ideally we would test this but it doesn't seem worth the trouble at the moment.
Building the contributing document has some special requirements because it runs Docker in Docker, so the repo path must align on the host and in all Docker containers. Run `pgbackrest/doc/doc.pl` from within the home directory of the user that will do the doc build, e.g. `/home/vagrant`. If the repo is not located directly in the home directory, e.g. `/home/vagrant/pgbackrest`, then a symlink may be used, e.g. `ln -s /path/to/repo /home/vagrant/pgbackrest`.
Mount the repo in the Vagrantfile at /home/vagrant/pgbackrest but provide a link from the old location at /backrest to make the transition less painful.
The old coverage data has been recorded so it is no longer needed. In newer versions of gcc leaving this file around can lead to an error when writing profile data after forking off to a non-pgbackrest binary (which we do in some unit tests).
* Show all uncovered branch parts even when there are more than two parts per branch. This is the way gcc9 reports coverage so it needs to work even if it doesn't make as much sense as the old way.
* Show covered branches in functions where coverage is missing. Showing just the uncovered branches can be confusing because it's not always clear how the coverage relates to the code. By showing all branch coverage (+ or -) this correspondence is made easier.
We don't report branch coverage on test modules (e.g. test/src/module/common/errorTest.c) but the code that excluded branch coverage from the test module would also exclude it from all core modules if the test module was included in the lcov report due to lack of function/line coverage.
Adjust the coverage code to only exclude branches during the extraction of test module coverage.
For some reason gcc9 would not do -O0 builds in combination with one of the options that libperl required. Now that libperl is gone this exception is no longer required.
If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data.
This also reduces the likelihood of seeing torn pages during the copy. Torn pages can still occur in the middle of the file, though, so they must be handled.
The manifest is excellent for validation but including the entire manifest is too noisy and some values are architecture/algorithm dependent.
Output a redacted version that contains the most important information which can be improved on over time.
This macro will automatically do key replacement before the comparison. This saves the indentation required for an embedded function call.
Possibly TEST_RESULT_Z_KEYRPL() would also be useful but it will be added when needed.
The current use case is reading files from the PostgreSQL cluster during backup.
A file may grow during backup but we only need to copy the number of bytes that were reported during the manifest build. The rest will be rebuilt from the WAL during recovery so copying more is just a waste of space.
Limiting the copy sizes in backup will be part of a future commit.
When multiple files were missing coverage it could be hard to locate the coverage report for a specific file.
Add links for uncovered files to make this easier.
Also move table titles out of the table so they are valid html.
These days it is better to include the module in define.yaml when we need to poke at the internal implementation.
This doesn't quite work for the log test harness, so for now some variables will need to remain extern'd in debug builds.
Enhance dry-run support added in 2fa69af8 by forbidding writes in the storage layer and adding prefixes to log messages.
The former will protect against mistakes in dry-run implementations and the latter will make it clear when a command was executed in dry-run mode.
Update expire unit tests with the new log prefix.
These results were stored in the vagrant path along with a full copy of src.
Instead store the raw coverage data in test/result/raw and change source references to the files that already exist in [test-path]/repo.
It makes more sense to build in the test path since many developers won't have a vagrant path. Anyway, it's better not to modify the vagrant path since it belongs to vagrant.
Instead of installing the binary just mount it into the container from where it was built. This saves a bit of time and space.
When pgbackrest was present this test behaved unexpectedly.
While the binary is not currently required for this test, it might be in the future, so fix the test to prevent a regression.
Building packages is not a normal part of development so don't build packages by default. Instead build them in CI as needed.
Do the builds in test/result instead of .vagrant to be friendlier with hosts that are not running vagrant. Anyway, it's probably not a good idea to be creating files in the .vagrant path.
Building the configure.ac script can take multiple seconds depending on the state of the autoconf cache. Use a checksum to only rebuild when configure.ac has changed no matter how the timestamps have changed.
Configure:
* Use standard make variables, e.g. CFLAGS, rather than our own, e.g. CINCLUDE
* Add PG_CONFIG var for configuring custom pg_config location
* Don't error if xml_config or pg_config is missing (but error if libs/headers not found)
* Check for zlib.h header
* Check for lz4frame.h header when liblz4 is present
Make:
* Use gcc-style auto dependencies
* Put src list at the top since it is most frequently modified
* Add clean-all target to also remove auto-generated config files
This file is used to generate src/configure and is not required to make pgbackrest since src/configure is updated before distribution.
Move to src/build so it is out of the way.
Changes to reference.xml can affect the command-line documentation built into the binary so changes must trigger an auto-generated code build during smart builds.
The prior method was to build a special container to hold these files which meant they would get stale on development systems. On CI the container was always rebuilt so failures would be seen there even when dev seemed to be working.
Instead get the package source when the package is built to ensure it is as up-to-date as possible.
This change was prompted by failures on the Ubuntu 12.04 container while getting the package source, probably due to an ancient version of git. Package builds are no longer supported on that platform with the addition of lz4 compression so it didn't seem worth fixing.
The primary source for project info is now src/version.h.
The pgBackRestDoc::ProjectInfo module loads the project info from src/version.h at runtime so there is no need to update it.
This is consistent with the way BackRest and BackRest test were renamed way back in 18fd2523.
More modules will be moving to pgBackRestDoc soon so renaming now reduces churn later.
This directory was once the home of the production Perl code but since f0ef73db this is no longer true.
Move the modules to test in most cases, except where the module is expected to be useful for the doc engine beyond the expected lifetime of the Perl test code (about a year if all goes well).
The exception is pgBackRest::Version which requires more work to migrate since it is used to track pgBackRest versions.
LZ4 compresses data faster than gzip but at a lower ratio. This can be a good tradeoff in certain scenarios.
Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest.
This was the interface between Perl and C introduced in 36a5349b but since f0ef73db has only been used by the Perl integration tests. This is expensive code to maintain just for testing.
The main dependency was the interface to storage, no matter where it was located, e.g. S3. Replace this with the newly-introduced repo commands (d3c83453) that allow access to repo storage via the command line.
The other dependency was on various cfgOption* functions and CFGOPT_ constants that were convenient but not necessary. Replace these with hard-coded strings in most places and create new constants for commonly used values.
Remove all auto-generated Perl code. This means that the error list will no longer be maintained automatically so copy used errors to Common::Exception.pm. This file will need to be maintained manually going forward but there is not likely to be much churn as the Perl integration tests are being retired.
Update test.pl and related code to remove LibC builds.
Ding, dong, LibC is dead.
These commands are generally useful but more importantly they allow removing LibC by providing the Perl integration tests an alternate way to work with repository storage.
All the commands are currently internal only and should not be used on production repositories.
If the command was passed a file it would return no results since it was originally intended to list files when passed a path.
However, as a general purpose command working directly with files makes sense.
This command only makes sense for the repository storage since other storage (e.g. pg and spool) must be located on a local Posix filesystem and can be listed using standard unix commands. Since the repo storage can be located in many places, having a common way to list it makes sense.
Prefix with repo- to make the scope of this command clear.
Update documentation to reflect this change.
Add compress-type option and deprecate compress option. Since the compress option is boolean it won't work with multiple compression types. Add logic to cfgLoadUpdateOption() to update compress-type if it is not set directly. The compress option should no longer be referenced outside the cfgLoadUpdateOption() function.
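The mapping in cfgLoadUpdateOption() amounts to something like this (a sketch; the exact signatures may differ):

    /* Derive compress-type from the deprecated boolean compress option when
       compress-type was not set directly */
    if (!cfgOptionTest(cfgOptCompressType) && cfgOptionTest(cfgOptCompress))
    {
        const char *type = cfgOptionBool(cfgOptCompress) ? "gz" : "none";
        cfgOptionSet(cfgOptCompressType, cfgSourceParam, varNewStrZ(type));
    }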
Add common/compress/helper module to contain interface functions that work with multiple compression types. Code outside this module should no longer call specific compression drivers, though it may be OK to reference a specific compression type using the new interface (e.g., saving backup history files in gz format).
Unit tests only test compression using the gz format because other formats may not be available in all builds. It is the job of integration tests to exercise all compression types.
Additional compression types will be added in future commits.
All the methods in this module will need to be implemented via the command-line in order to get rid of LibC, so the first step is to reduce the code in the module as much as possible.
First remove storageDb() and use storageTest() instead. Then create storageTest() using pgBackRestTest::Common::Storage which has no dependencies on LibC. Now the only storage using the LibC interface is storageRepo().
Remove all link functions since those operations cannot be performed on a repo unless it is Posix, in which case the LibC interface is not needed. Same for owner().
Remove pathSync() because syncs are not required in the tests. No test data is reused after a crash.
Path create/exists functions should never be explicitly performed on a repo so remove those. File exists can be implemented by calling info() instead.
Remove encryption detection functions which were only used by Backup/Archive::Info reconstruct() which are now obsolete.
Remove all filters except pgBackRest::Storage::Filter::CipherBlock since they are not being used. That also means there are no filters returning results so remove all the result code.
Move hashSize() and pathAbsolute() into pgBackRest::Storage::Base where they can be shared between pgBackRest::Storage::Storage and pgBackRestTest::Common::Storage.
This was mostly dead code except the DB_BACKUP_ADVISORY_LOCK constant, moved to the real/all test module, and the function that pulls info from pg_control, moved to ExpireEnvTest.pm.
The postgres/pageChecksum module was designed as an interface to the C structs for the Perl code. The new C code can do this directly so no need for an interface.
Move the remaining test for pgPageChecksum() into the postgres/interface test module.
We were using a customized version which worked fine but was hard to merge with upstream changes. Now this code is maintained much like the types in static.auto.h that we copy and check with each release.
The goal is to eventually build directly against PostgreSQL (either source or libcommon) and this brings us one step closer.
All zero pages should not have checksums. Not only is this test invalid but it will not work with the stock page checksum implementation in PostgreSQL, which checks for zero pages. Since we will be using that code verbatim soon this test needs to go.
Using static values serves as a better cross-check against the page checksum code. The downside is that these checksums may not work with some big endian systems but in that case neither will the unit tests.
We can also remove the page checksum interface from LibC which brings us one step closer to eliminating it.
Page size is passed around a lot but in fact it can only have one value, PG_PAGE_SIZE_DEFAULT, which is checked when pg_control is loaded. There may be an argument for supporting multiple page sizes in the future but for now just use the constant to simplify the code.
There is also a significant performance benefit. Because pageSize was being used in pageChecksumBlock() the main loop was neither unrolled nor vectorized (-funroll-loops -ftree-vectorize) as it is now with a constant loop boundary.
This function made validation faster in Perl because fewer calls (and buffer transformations) were required when all checksums were valid.
In C calling pageChecksumTest() directly is just as efficient so there is no longer a need for pageChecksumBufferTest().
These data structures were copied a few places (but only once in the core code) so put them in a place where everyone can use them.
To do this create a new file, static.auto.h, to contain data types and macros that have stayed the same through all the versions of PostgreSQL that we support. This allows us to have single, non-versioned set of headers and code for stable data structures like page headers.
Migrate a few types from version.auto.h that are required for page header structures and pull the remaining types from PostgreSQL directly.
We had previously renamed xlog to wal so update those where required since we won't be modifying the PostgreSQL names anymore.
The S3 driver depends on being able to generate a common prefix to limit the number of results from list commands, which saves on bandwidth.
The prior implementation could be tricked by an expression like ^ABC|^DEF where there is more than one possible prefix. To fix this disallow any prefix when another ^ anchor is found in the expression. [^ and \^ are OK since they are not anchors.
Note that this was not an active bug because there are currently no expressions with multiple ^ anchors.
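A simplified sketch of the rule (the real implementation is more thorough):

    /* A prefix can be generated only for ^-anchored expressions, and only when
       no other unescaped ^ anchor appears, e.g. "^ABC|^DEF" is rejected */
    static bool
    regExpPrefixValid(const char *expression)
    {
        if (expression[0] != '^')
            return false;

        for (const char *chr = expression + 1; *chr != '\0'; chr++)
        {
            /* "[^" (negated class) and "\^" (escaped) are not anchors */
            if (*chr == '^' && chr[-1] != '[' && chr[-1] != '\\')
                return false;
        }

        return true;
    }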
The restore test function was passing strBackup to the restoreCompare function but when the restore is expected to pick a backup based on a timestamp, then strBackup may not be the one chosen.
Modified the code so that strBackupExpected is set based on the parameters passed to the function and this is then passed to restoreCompare.
This option was used for boolean testing but it will soon be deprecated and the semantics changed. To reduce churn it seems easiest to just use other options for testing. This will also be helpful when the option is eventually removed.
These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options.
This was a minor optimization used in protocol layer compression. Even though it was slightly faster, it omitted the CRC-32 that is generated during normal compression, which could lead to corrupt data after a bad network transmission. This would be caught on restore by our checksum but it seems better to catch an issue like this early.
The raw option also made the function signature different than future compression formats which may not support raw, or require different code to support raw.
In general, it doesn't seem worth the extra testing to support a format that has minimal benefit and is seldom used, since protocol compression is only enabled when the transmitted data is uncompressed.
"gz" was used as the extension but "gzip" was generally used for function and type naming.
With a new compression format on the way, it makes sense to standardize on a single abbreviation to represent a compression format in the code. Since the extension is standard and we must use it, also use the extension for all naming.
The prior code used TRY...CATCH blocks to cleanup mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized when the error occurred and ensuring that the prior memory context was restored.
This worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.
Instead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.
One bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros.
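A conceptual sketch of the cleanup stack (data structures simplified; the real implementation differs):

    /* Each entry records a new context or a context switch */
    typedef struct MemContextStackItem
    {
        MemContext *context; /* the new context, or the prior context for a switch */
        bool isNew;
    } MemContextStackItem;

    static MemContextStackItem stack[256];
    static unsigned int stackTop;

    /* After an error, unwind everything pushed since the handler was armed:
       free new contexts that never completed and restore the current context
       for each switch. On success this loop never runs, so pushing and popping
       stay cheap array operations. */
    static void
    memContextCleanup(unsigned int errorTop)
    {
        while (stackTop > errorTop)
        {
            MemContextStackItem *item = &stack[--stackTop];

            if (item->isNew)
                memContextFree(item->context);
            else
                memContextSwitch(item->context);
        }
    }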
If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.
This is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file.
This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.
Restore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required.
The main improvement is a double-fork to prevent zombie processes if the parent process exits after the (child) async process. This is a real possibility since the parent process sticks around to monitor the results of the async process.
In the first fork, ignore SIGCHLD in the very unlikely case that the async process exits before the first fork. This is probably only possible if the async process exits immediately, perhaps due to a chdir() failure. Set SIGCHLD back to default in the async process so waitpid() will work as expected.
Also update the comment on chdir() to more accurately reflect what is happening.
Finally, add a test in certain debug builds to ensure the first fork exits very quickly. This only works when valgrind is not in use because valgrind makes forking so slow that it is hard to tell if the async process performed work or not (in the case that the second fork goes missing and the async process is a direct child).
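The overall shape of the double-fork (signal details simplified):

    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Fork twice so the async process is reparented to init and can never
       become a zombie of the command process */
    static void
    forkAsync(void)
    {
        pid_t pid = fork();

        if (pid == 0)
        {
            /* First child: ignore SIGCHLD so the async process is auto-reaped
               in the unlikely case it exits before this process does */
            signal(SIGCHLD, SIG_IGN);

            if (fork() == 0)
            {
                /* Async process: restore default SIGCHLD so its own waitpid()
                   calls work as expected */
                signal(SIGCHLD, SIG_DFL);
                /* ... run the async work ... */
            }

            _exit(0); /* first child exits immediately */
        }

        /* Parent: reap the short-lived first child to avoid a zombie */
        waitpid(pid, NULL, 0);
    }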
In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully.
2a06df93 removed the error file so an old error would not be reported before the async process had a chance to try again. However, if the async process was already running this might lead to a timeout error before reporting the correct error.
Instead, remove the error files once we know that the async process will start, i.e. after the archive lock has been acquired.
This effectively reverts 2a06df93.
Generally, the content-length or transfer-encoding headers are used to specify how much content should be expected.
There is a special case where the server sends 'Connection: close' without the content headers and the content may be read up until EOF.
This appears to be an atypical usage but it is required by the specification.
Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used.
Currently a limited number of date formats are recognized and timezone names are not allowed, only timezone offsets.
Add tzPartsValid() and tzOffsetSecond() to calculate timezone offsets from user provided values.
Update epochFromParts() to accept a timezone offset in seconds.
The test that checks for no output from the server was leaving a connection open which valgrind was complaining about.
Wait on the server long enough to cause the error on the client then close the connection to free the memory.