These values are not used by the Perl integration tests so maybe it would be better to remove them, but for now just update since they should not be changing again for PG13.
This condition used to give a not-very-clear error which we have been intending to improve. But in the meantime the changes in fbff299 resulted in a segfault for this condition instead because the data_directory was assumed to be non-NULL.
Fix this by explicitly throwing an error with hints when any row in pg_settings cannot be selected.
This file is created by pg_basebackup so might be in the data directory if the cluster was restored from a pg_basebackup backup. Also exclude backup_manifest.tmp since it is possible to find that in the backup directory.
Improve the wording of the error message and add a hint to make it clearer what is wrong and how the user can fix it.
Also change the assert to a regular error since this is not an internal error.
If a stop command has been issued the check command fails due to archiving timing out.
Provide a hint to document this situation and point the user in the proper direction.
If the callback never returned any jobs then protocolParallelDone() would never be true. The reason is that the done state was being set in protocolParallelResult(), which never gets called if there are no results.
Calling protocolParallelResult() doesn't make much sense in this case so instead move the done logic to protocolParallelDone().
For current usage of ProtocolParallel we ensure there are jobs before processing so this is not a live issue, but the new behavior is required for future development.
Bug Fixes:
* Suppress errors when closing local/remote processes. Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened. (Reviewed by Cynthia Shang. Reported by argdenis.)
* Fix issue with = character in file or database names. (Reviewed by Bastian Wegge, Cynthia Shang. Reported by Brad Nicholson, Bastian Wegge.)
Features:
* Automatically retrieve temporary S3 credentials on AWS instances. (Contributed by David Steele, Stephen Frost. Reviewed by Cynthia Shang, David Youatt, Aleš Zelený, Jeanette Bromage.)
* Add archive-mode option to disable archiving on restore. (Reviewed by Stephen Frost. Suggested by Stephen Frost.)
Improvements:
* PostgreSQL 13 beta3 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
* Asynchronous list/remove for S3/Azure storage. (Reviewed by Cynthia Shang, Stephen Frost.)
* Improve memory usage of unlogged relation detection in manifest build. (Reviewed by Cynthia Shang, Stephen Frost, Brad Nicholson, Oscar. Suggested by Oscar, Brad Nicholson.)
* Proactively close file descriptors after forking async process. (Reviewed by Stephen Frost, Cynthia Shang.)
* Delay backup remote connection close until after archive check. (Contributed by Floris van Nee. Reviewed by David Steele.)
* Improve detailed error output. (Reviewed by Cynthia Shang.)
* Improve TLS error reporting. (Reviewed by Cynthia Shang, Stephen Frost.)
Documentation Bug Fixes:
* Add none to compress-type option reference and fix example. (Reported by Ugo Bellavance, Don Seiler.)
* Add missing azure type in repo-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
* Fix typo in repo-cipher-type option reference. (Fixed by Don Seiler. Reviewed by David Steele.)
Documentation Improvements:
* Clarify that expire must be run regularly when expire-auto is disabled. (Reviewed by Douglas J Hunley. Suggested by Douglas J Hunley.)
When restoring a cluster that will be promoted but is not intended to be the new primary, it is important to disable archiving to avoid polluting the repository with useless WAL. This option makes disabling archiving a bit easier.
Automatically retrieve the role and temporary credentials for S3 when the AWS instance is associated with an IAM role. Credentials are automatically updated when they are <= 5 minutes from expiring.
Basic configuration is to set repo1-s3-key-type=auto. repo1-s3-role can be used to set a specific role, otherwise it will be retrieved automatically.
Add more info (command, version, options) to asserts, and errors when debug logging is enabled. This won't cover all cases but might mean we get more info in some circumstances.
When testing the common/stack-trace module it is important not to call this test function since the trace stack is empty and it will cause a buffer underrun.
Instead use a macro that is only defined under the correct circumstances and add an assert() to catch future regressions.
Currently each module that needs to collect statistics implements custom code to do so. This is cumbersome.
Create a general purpose module for collecting and reporting statistics. Statistics are output in the log at detail level, but there are other uses they could be put to eventually.
No new functionality is added. This is just a drop-in replacement for the current statistics, with the advantage of being more flexible.
The new stats are slower because they involve a list lookup, but performance testing shows stats can be updated at about 40,000/ms which seems fast enough for our purposes.
HTTP/1.0 connections are closed by default after a single response. Other than that, treat 1.0 the same as 1.1.
HTTP/1.0 allows different date formats that we can't parse but for now, at least, we don't need any date headers from 1.0 requests.
The prior implementation only supported a single connection on TLS. This is not flexible enough for complex testing scenarios which might require multiple simultaneous connections on different protocols.
Allow multiple simultaneous connections and add plain sockets as a protocol option. Rename the functions used for server scripting to hrnServerScript*() to make it clear they are related. Improve error messages when less input is received by the server than expected.
Also, do a bit of cleanup and add more comments.
Following up on 111d33c, implement the new interfaces for socket client/session. Now HTTP objects can be used over TLS or plain sockets.
This required adding ioSessionFd() and ioSessionRole() to provide the functionality of sckSessionFd() and sckSessionType(). sckClientHost() and sckClientPort() don't make sense in a generic interface so they were replaced with ioSessionName().
Rather than calling storageS3New() directly, create the storage by loading a configuration and calling repoStorageGet(). This is a better end-to-end test and cuts down on a lot of redundant tests.
Add tests that include security tokens in error messages to ensure they are redacted.
Move sckSessionReadyRead()/Write() into the IoRead/IoWrite interfaces. This is a more logical place for them and the alternative would be to add them to the IoSession interface, which does not seem like a good idea.
This is mostly a refactor, but a big change is the select() logic in fdRead.c has been replaced by ioReadReady(). This was duplicated code that was being used by our protocol but not TLS. Since we have not had any problems with requiring poll() in the field this seems like a good time to remove our dependence on select().
Also, IoFdWrite now requires a timeout so update where required, mostly in the tests.
These interfaces allow the HttpClient and HttpSession objects to work with protocols other than TLS, e.g. plain sockets. This is necessary to allow standard HTTP -- right now only HTTPS is allowed, i.e. HTTP over TLS.
For now only TlsClient and TlsSession have been converted to the new interfaces. SocketClient and SocketSession will also need to be converted but first sckSessionReadyRead() and sckSessionReadyWrite() need to be moved into the IoRead and IoWrite interfaces, since they are not a good fit for IoSession.
Pretty much everywhere that handle is used, what is really meant is file descriptor (fd). This terminology was migrated over from Perl and is just not quite correct, or at least not as correct as fd.
There were also plenty of places fd was used so now all uses are consistent.
The Perl code was not updated but might be in a future commit.
This does not appear to have been used in quite some time and the tests are equally useless because they don't prove the correct port was passed to httpClientNew().
Before 9f2d647 TLS errors included additional details in at least some cases. After 9f2d647 a connection to an HTTP server threw `TLS error [1]` instead of `unable to negotiate TLS connection: [336031996] unknown protocol`.
Bring back the detailed messages to make debugging TLS errors easier. Since the error routine is now generic the `unable to negotiate TLS connection context` is not available so the error looks like `TLS error [1:336031996] unknown protocol`.
This loop was using a lot of memory without freeing it at intervals.
Rewrite to use char arrays when possible to reduce memory that needs to be allocated and freed.
Zigzag encoding places the sign bit in the least significant bit so that -1 is encoded as 1, 1 as 2, etc. This moves as many bits as possible into the low order bits which is good for other types of encoding, e.g. base-128.
See https://en.wikipedia.org/wiki/Variable-length_quantity#Zigzag_encoding.
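For reference, a minimal sketch of 64-bit zigzag encoding and decoding (illustrative only, not necessarily the exact expressions used in the code):

```c
#include <stdint.h>
#include <stdio.h>

// Move the sign bit into the least significant bit: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
static uint64_t
zigZagEncode(int64_t value)
{
    // Equivalent to (value << 1) ^ (value >> 63) with an arithmetic right shift
    return ((uint64_t)value << 1) ^ (value < 0 ? UINT64_MAX : 0);
}

static int64_t
zigZagDecode(uint64_t value)
{
    return (int64_t)(value >> 1) ^ -(int64_t)(value & 1);
}

int
main(void)
{
    printf("%llu %llu %llu\n",
        (unsigned long long)zigZagEncode(0),            // 0
        (unsigned long long)zigZagEncode(-1),           // 1
        (unsigned long long)zigZagEncode(1));           // 2
    printf("%lld\n", (long long)zigZagDecode(3));       // -2
    return 0;
}
```

The small encoded values then pack efficiently into base-128 varints.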
It seems like overkill to encode this when other enums (e.g. StorageInfoLevel) are passed as integers.
Instead note that StorageType values should not be changed and remove the special encoding.
The fix for = characters in info files (039d314) added JSON validation but discarded the resulting Variant which means the JSON is being parsed twice. This nearly doubles the time to load a manifest since a lot of complex JSON is involved.
Time to load a million file manifest:
Before 039d314: 7.8s
039d314: 15.5s
This patch: 7.5s
To fix this regression return the Variant in the callback so the caller does not have to parse it again. The new code appears slightly more efficient overall, probably because there are fewer operations against Strings.
We use the Z suffix in many functions to indicate that we are expecting a zero-terminated string so make this function conform to the pattern.
As a bonus the new name is a bit shorter, which is a good quality in a commonly-used function.
The manifest uses the = character as the key/value separator so = characters in the key cause parsing errors and lead to an error or segfault.
Since the value must be valid JSON we can keep checking the value on the right side of the = and stop building the key when the value is valid. It's a bit hackish but it does seem to do the job without breaking the manifest format.
Unsurprisingly this makes parsing about 50% slower but it's still more than fast enough. Parsing 10 million key/values takes about 6.5s for the old code and 10s for the new code. Since the value is used as JSON downstream we can reclaim most of this time by just passing the JSON value rather than making the callback reparse it. We'll save that for another commit, though.
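A simplified sketch of the idea, with a toy validity check standing in for the real JSON parser (the names and structure here are illustrative, not the actual manifest code):

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Toy validity check for illustration only -- accepts a quoted string or a number,
// which is enough to demonstrate the idea; the real code accepts any valid JSON value
static bool
jsonIsValid(const char *value)
{
    if (value[0] == '"')
        return strlen(value) >= 2 && value[strlen(value) - 1] == '"';

    char *end = NULL;
    strtod(value, &end);
    return end != value && *end == '\0';
}

// Split "key=value" where the key may itself contain '=' characters: try each '='
// from the left and stop building the key as soon as the remainder is valid JSON
static bool
kvSplit(const char *line, char *key, size_t keySize, const char **value)
{
    for (const char *eq = strchr(line, '='); eq != NULL; eq = strchr(eq + 1, '='))
    {
        if (jsonIsValid(eq + 1))
        {
            size_t keyLen = (size_t)(eq - line);

            if (keyLen >= keySize)
                return false;

            memcpy(key, line, keyLen);
            key[keyLen] = '\0';
            *value = eq + 1;
            return true;
        }
    }

    return false;
}

int
main(void)
{
    char key[64];
    const char *value = NULL;

    if (kvSplit("db=name=\"payroll\"", key, sizeof(key), &value))
        printf("key=[%s] value=[%s]\n", key, value);    // key=[db=name] value=["payroll"]

    return 0;
}
```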
Since the command has completed it is counterproductive to throw an error but still warn to indicate that something unusual happened.
Also fix the related issue that the local processes were not being shut down when they completed, which meant that they might timeout before being closed when pgbackrest terminated.
Use a test storage driver to allow manifestNewBuild() to be run against a test cluster at any scale without having to write files to disk.
Simplify the test by using the output of manifestNewBuild() to feed manifestSave() and manifestNewLoad().
Also add manifest size to the output.
Calculates the memory used by the context and all child contexts.
This is primarily useful for debugging but it is not conditional on DEBUG because it is useful for profile/performance tests.
A number of tests used invalid JSON values where an error was expected or the value would be ignored.
Update these tests to use valid JSON values so all values in the file can be validated even if they are not used.
Something like 3="string" would return an Int64 variant and ignore the invalid portion after the integer. Other JSON interface functions have this check but it was forgotten here.
There are no current issues because of this but we want to be able to validate arbitrary JSON strings and this function was not working correctly for that usage.
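A minimal sketch of the missing check, assuming the usual strtoll-style parsing (not the actual jsonToVar() code): once the value has been consumed, only trailing whitespace may remain.

```c
#include <ctype.h>
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

// Parse an integer and reject the string if anything but whitespace follows
static bool
jsonToInt64(const char *json, long long *result)
{
    char *end = NULL;
    errno = 0;
    *result = strtoll(json, &end, 10);

    if (end == json || errno != 0)
        return false;                                   // not a number at all

    while (isspace((unsigned char)*end))                // trailing whitespace is fine
        end++;

    return *end == '\0';                                // e.g. 3="string" is rejected here
}

int
main(void)
{
    long long value;

    printf("%d\n", jsonToInt64("3", &value));             // 1 -- valid
    printf("%d\n", jsonToInt64("3=\"string\"", &value));  // 0 -- trailing characters rejected
    return 0;
}
```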
This function is not used in the core code so remove it and update the test where it was used.
There may eventually be a need for a strLstNewP() function but it doesn't seem worth the code churn until there is an actual requirement.
The old constructor was left around to reduce code churn during the migration but it just makes the code harder to read and search.
Remove the old constructor and rename all remaining instances to lstNewP(), which by default has the same semantics.
Testing against static checksums is valuable but it can become burdensome when supporting multiple architectures.
Reduce the number of tests we are doing against static checksums when the architecture can cause the checksum to vary.
Bug Fixes:
* Fix restore --force acting like --force --delta. This caused restore to replace files based on timestamp and size rather than overwriting, which meant some files that should have been updated were left unchanged. Normal restore and restore --delta were not affected by this issue. (Reviewed by Cynthia Shang.)
Features:
* Azure support for repository storage. (Reviewed by Cynthia Shang, Don Seiler.)
* Add expire-auto option. This allows automatic expiration after a successful backup to be disabled. (Contributed by Stefan Fercot. Reviewed by Cynthia Shang, David Steele.)
Improvements:
* Asynchronous S3 multipart upload. (Reviewed by Stephen Frost.)
* Automatic retry for backup, restore, archive-get, and archive-push. (Reviewed by Cynthia Shang.)
* Disable query parallelism in PostgreSQL sessions used for backup control. (Reviewed by Stefan Fercot.)
* PostgreSQL 13 beta2 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
* Improve handling of invalid HTTP response status. (Reviewed by Cynthia Shang.)
* Improve error when pg1-path option missing for archive-get command. (Reviewed by Cynthia Shang.)
* Add hint when checksum delta is enabled after a timeline switch. (Reviewed by Matt Bunter, Cynthia Shang.)
* Use PostgreSQL instead of postmaster where appropriate. (Reviewed by Cynthia Shang.)
Documentation Bug Fixes:
* Fix incorrect example for repo-retention-full-type option. (Reported by Höseyin Sönmez.)
* Remove internal commands from HTML and man command references. (Reported by Cynthia Shang.)
Documentation Improvements:
* Update PostgreSQL versions used to build user guides. Also add version ranges to indicate that a user guide is accurate for a range of PostgreSQL versions even if it was built for a specific version. (Reviewed by Stephen Frost.)
* Update FAQ for expiring a specific backup set. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Update FAQ to clarify default PITR behavior. (Contributed by Cynthia Shang. Reviewed by David Steele.)
Remove all check and stanza-* tests except for the ones that are intended to succeed. The successful tests show that the queries run with expected results against each version of PG which should also validate queries for the failure tests in the unit tests.
Also remove the tests for --no-online backups since they don't require a database and are well tested in the unit tests.
The prior code was only able to use the main passphrase automatically and expected sub passphrases to be specified for each operation. This was fine for testing but hardly sufficient for a user-facing feature.
Update the code to determine which passphrase to use for any file in the repository and error when an invalid file or location is selected.
The repo-get command is still internal for now, but with this improvement it should be ready to be made public.
There are a few non version specific tests that need to be run in integration because we can't get coverage in the unit tests.
To save some time we'll only run those tests against the same version we use for expect testing.
If a local command, e.g. backupFile(), fails it will stop the entire process. Instead, retry local commands to deal with transient errors.
Remove special logic in the S3 storage driver to retry RequestTimeTooSkewed errors since this is now handled by the general retry mechanism in the places where it is most likely to happen, i.e. file read/write. Also, this error should have been entirely eliminated by the asynchronous TLS implementation.
The Azure storage driver exposes secrets in the query when using SAS authorization. These secrets can show up during logging or when an error occurs.
Allow redaction of queries to prevent secrets from being exposed in logs and errors.
A shared access signature (SAS) provides granular, delegated access to resources in a storage account. This is often preferable to using a shared key which provides more access and is a greater security risk if compromised.
Rework size limits so that this->size is always the current size no matter how much is allocated.
Most importantly, this removes the conditional in bufSize(), which makes it a better candidate for inlining.
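A sketch of the result (the field names are assumptions, not the project's actual struct): because size is always the current size, the getter is an unconditional read and a good inlining candidate.

```c
#include <stddef.h>

typedef struct Buffer
{
    unsigned char *buffer;
    size_t sizeAlloc;                                   // bytes allocated
    size_t size;                                        // current size, always <= sizeAlloc
} Buffer;

// No branch on whether a limit is set -- size is always authoritative
static inline size_t
bufSize(const Buffer *this)
{
    return this->size;
}
```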
This caused restore to replace files based on timestamp and size rather than overwriting, which meant some files that should have been updated were left unchanged. Normal restore and restore --delta were not affected by this issue.
httpUriDecode() reverses the encoding in httpUriEncode().
httpQueryNewStr() creates a new HttpQuery by parsing a query string.
httpQueryMerge() merges the contents of one query into another query.
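For illustration, a minimal percent-decoding sketch of what httpUriDecode() described above does conceptually (not the project's implementation):

```c
#include <ctype.h>
#include <stdio.h>

static int
hexVal(char c)
{
    return isdigit((unsigned char)c) ? c - '0' : tolower((unsigned char)c) - 'a' + 10;
}

// Convert %XX sequences back to bytes; everything else is copied through
static void
uriDecode(const char *in, char *out)
{
    while (*in != '\0')
    {
        if (in[0] == '%' && isxdigit((unsigned char)in[1]) && isxdigit((unsigned char)in[2]))
        {
            *out++ = (char)(hexVal(in[1]) * 16 + hexVal(in[2]));
            in += 3;
        }
        else
            *out++ = *in++;
    }

    *out = '\0';
}

int
main(void)
{
    char decoded[64];
    uriDecode("path%2Fto%2Ffile", decoded);
    printf("%s\n", decoded);                            // path/to/file
    return 0;
}
```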
Azure and Azure-compatible object stores can now be used for repository storage.
Currently only shared key authentication is supported but SAS will be added soon.
There don't appear to be any behavioral changes since PostgreSQL 12 and all the tests pass.
Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
There is no need to have parallelism enabled in a backup control session. In particular, 9.6 marks pg_stop_backup() as parallel-safe but an error will be thrown if pg_stop_backup() is run in a worker.
When uploading large files the upload is split into multiple parts which are assembled at the end to create the final file. Previously we waited until each part was acknowledged before starting on the processing (i.e. compression, etc.) of the next part.
Now, the request for each part is sent while processing continues and the response is read just before sending the request for the next part. This asynchronous method allows us to continue processing while the S3 server formulates a response.
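The ordering can be sketched as follows, with stand-in types and helpers (partUploadAsync(), partResponse(), and compressNextPart() are hypothetical, not pgBackRest's API):

```c
#include <stdio.h>

// Stand-in types and helpers -- purely illustrative
typedef struct HttpRequest
{
    unsigned int partIdx;
} HttpRequest;

static HttpRequest requestPool[2];

static HttpRequest *
partUploadAsync(unsigned int partIdx)
{
    printf("send request for part %u (do not wait)\n", partIdx);
    requestPool[partIdx % 2].partIdx = partIdx;
    return &requestPool[partIdx % 2];
}

static void
partResponse(const HttpRequest *request)
{
    printf("read response for part %u\n", request->partIdx);
}

static void
compressNextPart(unsigned int partIdx)
{
    printf("compress part %u\n", partIdx);
}

int
main(void)
{
    HttpRequest *pending = NULL;

    for (unsigned int partIdx = 0; partIdx < 3; partIdx++)
    {
        // Processing (compression, etc.) continues while the prior part is in flight
        compressNextPart(partIdx);

        // Read the prior response just before sending the next request
        if (pending != NULL)
            partResponse(pending);

        pending = partUploadAsync(partIdx);
    }

    // The final part is completed synchronously
    if (pending != NULL)
        partResponse(pending);

    return 0;
}
```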
Testing from outside AWS in a high-bandwidth, low-latency environment showed a 35% improvement in the upload time of 1GB files. The time spent waiting for multipart notifications was reduced by ~300% (this measurement included the final part which is not uploaded asynchronously).
There are still some possible improvements: 1) the creation of the multipart id could be made asynchronous when it looks like the upload will need to be multipart (this may incur cost if the upload turns out not to be multipart). 2) allow more than one async request (this will use more memory).
A fair amount of refactoring was required to make the HTTP responses asynchronous. This may seem like overkill but having well-defined request, response, and session objects will also be advantageous for the upcoming HTTP server functionality.
Another advantage is that the lifecycle of an HttpSession is better defined. We only want to reuse sessions that complete the request/response cycle successfully, otherwise we consider the session to be in a bad state and would prefer to start clean with a new one. Previously, this required complex notifications to mark a session as "successfully done". Now, ownership of the session is passed to the request and then the response and only returned to the client after a successful response. If an error occurs anywhere along the way the session will be automatically closed by the object destructor when the request/response object is freed (depending on which one currently owns the session).
strCat() did not follow our convention of appending Z to functions that accept zero-terminated strings rather than String objects.
Add strCatZ() to accept zero-terminated strings and update strCat() to accept String objects.
Use LF_STR where appropriate but don't use other String constants because they do not improve readability.
Test matrices were previously simplified for the mock/* tests (e.g. d4410611, d489eb87) but not for real/all since the rules for which tests would run with which options were extremely complex. This only got more complex when new compression formats were added.
Because the loop-generated matrix was so large, most tests were skipped for most option combinations following arcane logic which was nearly impossible to decipher even when reading the code, and completely impossible from the test.pl interface. As a consequence, important tests got excluded. For example, backup from standby was excluded for most versions of PostgreSQL because it was only run once per distro, against the latest version to be included in that distro.
Simplify the tests by having a single run per PostgreSQL version and vary test parameters according to the capabilities of each version and the underlying distro. So, ZST testing is based on whether the distro supports ZST. Every test is run for each set of parameters based on the capabilities of the PostgreSQL version, e.g. backup from standby is not attempted on versions that don't support it.
Note that since more tests are running the overall time to run the real/all tests has increased by about 20-25%. Some time may be saved by removing tests that are adequately covered by unit tests but that should be the subject of another commit. Another option would be to limit some non version-specific tests to a single, well-defined version of PostgreSQL, e.g. the version that is run by expect tests, currently 9.6.
The motivation for this refactor is that new storage drivers are coming and the loop-generated test matrix simply was not up to the task of adding them.
The following is an example of the new test log (note longer runtime of each test):
module=real, test=all, run=1, pg-version=10 (106.91s)
module=real, test=all, run=1, pg-version=9.5 (151.09s)
module=real, test=all, run=1, pg-version=9.2 (123.11s)
module=real, test=all, run=1, pg-version=9.1 (129s)
vs. the old test log (sub-second tests were skipped entirely):
module=real, test=all, run=2, pg-version=10 (0.31s)
module=real, test=all, run=3, pg-version=10 (0.26s)
module=real, test=all, run=4, pg-version=10 (60.39s)
module=real, test=all, run=1, pg-version=10 (69.12s)
module=real, test=all, run=6, pg-version=10 (34s)
module=real, test=all, run=5, pg-version=10 (42.75s)
module=real, test=all, run=2, pg-version=9.5 (0.21s)
module=real, test=all, run=3, pg-version=9.5 (0.21s)
module=real, test=all, run=4, pg-version=9.5 (0.21s)
module=real, test=all, run=5, pg-version=9.5 (0.26s)
module=real, test=all, run=6, pg-version=9.5 (0.21s)
module=real, test=all, run=1, pg-version=9.2 (72.78s)
module=real, test=all, run=2, pg-version=9.2 (0.26s)
module=real, test=all, run=3, pg-version=9.2 (0.31s)
module=real, test=all, run=4, pg-version=9.2 (0.21s)
module=real, test=all, run=5, pg-version=9.2 (0.21s)
module=real, test=all, run=6, pg-version=9.2 (0.21s)
module=real, test=all, run=1, pg-version=9.5 (88.41s)
module=real, test=all, run=2, pg-version=9.1 (0.21s)
module=real, test=all, run=3, pg-version=9.1 (0.26s)
module=real, test=all, run=4, pg-version=9.1 (0.21s)
module=real, test=all, run=5, pg-version=9.1 (0.31s)
module=real, test=all, run=6, pg-version=9.1 (0.26s)
module=real, test=all, run=1, pg-version=9.1 (72.4s)
The unsupported version error is showing up on older versions of PostgreSQL (e.g. 9.1, 9.2) on RHEL6 when setting up a standby with streaming replication. The error occurs when a client does not properly send a version number and it's not clear why it is happening here, but it does not appear to have anything to do with pgBackRest and only affects RHEL6, i.e. 9.1 and 9.2 do not show this error on other distros.
For now ignore the error since RHEL6 is nearly EOL.
HTTP is an acronym so it should be capitalized. Coding conventions dictate otherwise for function and type names but that should not have been propagated to comments and messages.
strPtr() is called more than any other function and during profiling (with or without optimization) it can end up using a disproportionate amount of the total runtime. Even though it is fast, the profiler has a minimum resolution for each function call so strPtr() will often end up towards the top of the list even though the real runtime is quite small.
Instead, inline strPtr() and indicate to gcc that it should be inlined even for non-optimized builds, since that's how profiles are usually generated.
To make strPtr() smaller require "this" to be non-NULL and add another function, strPtrNull(), to deal with the few cases where we need NULL handling.
As a bonus this makes the executable about 1% smaller even when compared to a prior optimized build which would inline some percentage of strPtr() calls.
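A sketch of the approach under assumed declarations (the String struct here is illustrative): the hot getter is forced inline and assumes non-NULL, while NULL handling lives in a separate function.

```c
#include <assert.h>
#include <stddef.h>

typedef struct String
{
    char *buffer;
} String;                                               // illustrative only

static inline __attribute__((always_inline)) const char *
strPtr(const String *this)
{
    assert(this != NULL);                               // non-NULL is now required
    return this->buffer;
}

static inline const char *
strPtrNull(const String *this)
{
    return this == NULL ? NULL : strPtr(this);          // the few callers that need NULL handling
}
```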
There is no sense in generating detailed coverage reports in CI environments where they will never be seen. It takes time and format differences in some older versions can cause problems in the report generation code.
Note that missing coverage will still be reported on stdout and the test will fail.
These were missed in d41eea68 when the functionality of TEST_RESULT_STR() was changed. Using TEST_RESULT_STR() instead of TEST_RESULT_PTR() is more type-safe and clearer.
Add a comment to make it clear that TEST_RESULT_PTR() should be used only when a better alternative is not available.
This aligns better with general PostgreSQL usage and our own documentation (updated in 4bcef702).
Usage in the backup.manifest tests has not been updated since it might break the file format.
Expressions only worked at the first level of recursion because the expression was also being applied to paths so the path had to match the filter in order to recurse.
This is not considered a bug since it does not affect any existing code paths, but it is required for the general-purpose repo-ls command.
A truncated HTTP response status could lead to an unfriendly error message, which would be retried, but could be confusing if the error was persistent and required debugging.
Improve the error handling overall to catch more error cases explicitly and respond better to edge cases.
Also update the terminology in comments to align with the RFC. Variable and function names were not changed because a refactor is intended for HTTP response and it doesn't seem worth the additional code churn.
The prior harness required a separate function to contain the server behavior but this made keeping the client/server code in sync very difficult and in general meant test writing took longer.
Now, commands to define server behavior are inline with the client code, which should greatly simplify test writing.
Bug Fixes:
* Fix issue checking if file links are contained in path links. (Reviewed by Cynthia Shang. Reported by Christophe Cavallié.)
* Allow pg1-path to be optional for synchronous archive-push. (Reviewed by Cynthia Shang. Reported by Jerome Peng.)
* The expire command now checks if a stop file is present. (Fixed by Cynthia Shang. Reviewed by David Steele.)
* Handle missing reason phrase in HTTP response. (Reviewed by Cynthia Shang. Reported by Tenuun.)
* Increase buffer size for lz4 compression flush. (Reviewed by Cynthia Shang. Reported by Eric Radman.)
* Ignore pg-host* and repo-host* options for the remote command. (Reviewed by Cynthia Shang. Reported by Pavel Suderevsky.)
* Fix possibly missing pg1-* options for the remote command. (Reviewed by Cynthia Shang. Reported by Andrew L'Ecuyer.)
Features:
* Time-based retention for full backups. The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days. (Contributed by Cynthia Shang, Pierre Ducroquet. Reviewed by David Steele.)
* Ad hoc backup expiration. Allow the user to remove a specified backup regardless of retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Zstandard compression support. Note that setting compress-type=zst will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* bzip2 compression support. Note that setting compress-type=bz2 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Contributed by Stephen Frost. Reviewed by David Steele, Cynthia Shang.)
* Add backup/expire running status to the info command. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Expire WAL archive only when repo-retention-archive threshold is met. WAL prior to the first full backup was previously expired after the first full backup. Now it is preserved according to retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add local MD5 implementation so S3 works when FIPS is enabled. (Reviewed by Cynthia Shang, Stephen Frost. Suggested by Brian Almeida, John Kelley.)
* PostgreSQL 13 beta1 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace. (Reviewed by Cynthia Shang.)
* Reduce buffer-size default to 1MiB. (Reviewed by Stephen Frost.)
* Throw user-friendly error if expire is not run on repository host. (Contributed by Cynthia Shang. Reviewed by David Steele.)
The purpose of the remote command is to get access to local resources, so a remote should never start another remote. However, this could happen if there were host settings on the remote host, which ended badly with lock errors, loops, etc.
Add pg-local and repo-local options to indicate that the resource is local even if there are host settings.
Note that for the time being these options are internal and not intended for general usage. However, this is likely the direction needed to allow for more symmetric and manageable configurations.
Some pg1-* options are required by the remote so if they are not provided in the remote's configuration file then it may cause a configuration error, depending on the operation. This currently only applies to the pg1-path option.
This is still an issue for repo-* options but the same solution cannot be applied because some repo-* options are secure and cannot be passed on the command-line.
There don't appear to be any behavioral changes since PostgreSQL 12 and all the tests pass.
Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
S3 requires the Content-MD5 header for many requests but MD5 is not available via OpenSSL when FIPS is enabled because it is considered to be insecure.
Even though our usage does not present any security risks, a local MD5 implementation is required to circumvent the over-broad FIPS restriction.
Vendorize the MD5 implementation found at https://openwall.info/wiki/people/solar/software/public-domain-source-code/md5 and add full coverage for the module in the common/crypto unit tests.
The prior default was determined by benchmarking the Perl code prior to the 1.0 release. In general buffer allocation was more expensive in Perl so large buffers gave the best performance. This was due to multiple buffer allocations for each filter in an IO operation.
The C code allocates fixed buffers for each IO operation so the cost for buffer allocation is lower than Perl. That being the case it made sense to benchmark the C code to determine the optimal buffer default.
The performance/storage tests were used to measure the performance of a variety of filters. 1GiB of data was processed by each filter 10 times and the results of the tests were averaged.
While most buffer sizes gave similar performance, 1MiB appeared to perform the best overall. Of course, different architectures are likely to yield different results but this seems like a sensible default. The buffer-size option may still need to be manually configured to give optimal results.
Raw test data for reference:
4MB buffer (prior default)
copy time 1807ms, avg time 180ms, avg throughput: 5942MB/s
md5 time 14200ms, avg time 1420ms, avg throughput: 756MB/s
sha1 time 11431ms, avg time 1143ms, avg throughput: 939MB/s
sha256 time 23463ms, avg time 2346ms, avg throughput: 457MB/s
gzip -6 time 381199ms, avg time 38119ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
1MB buffer (new default)
copy time 1760ms, avg time 176ms, avg throughput: 6100MB/s
md5 time 13739ms, avg time 1373ms, avg throughput: 781MB/s
sha1 time 11025ms, avg time 1102ms, avg throughput: 973MB/s
sha256 time 22539ms, avg time 2253ms, avg throughput: 476MB/s
gzip -6 time 372995ms, avg time 37299ms, avg throughput: 28MB/s
lz4 -1 time 15118ms, avg time 1511ms, avg throughput: 710MB/s
512K buffer
copy time 1782ms, avg time 178ms, avg throughput: 6025MB/s
md5 time 13724ms, avg time 1372ms, avg throughput: 782MB/s
sha1 time 10959ms, avg time 1095ms, avg throughput: 979MB/s
sha256 time 22982ms, avg time 2298ms, avg throughput: 467MB/s
gzip -6 time 378120ms, avg time 37812ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
256K buffer
copy time 1805ms, avg time 180ms, avg throughput: 5948MB/s
md5 time 13706ms, avg time 1370ms, avg throughput: 783MB/s
sha1 time 11074ms, avg time 1107ms, avg throughput: 969MB/s
sha256 time 22588ms, avg time 2258ms, avg throughput: 475MB/s
gzip -6 time 372645ms, avg time 37264ms, avg throughput: 28MB/s
lz4 -1 time 16346ms, avg time 1634ms, avg throughput: 656MB/s
Improve the accuracy of the calculations in several areas with better integer expressions.
Make the input buffer size configurable. Previously it was always 1MiB, i.e. block size.
Use a macro for output results to reduce code duplication.
Reason phrases (e.g. OK) are optional in HTTP 1.1 but the space after the status code is not. When the reason phrase was missing, the required space was trimmed along with the trailing CR, leading to a format error.
Rework the logic to preserve the space and allow empty reason phrases.
Found while testing against the Backblaze S3-compatible API.
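A minimal status-line parsing sketch illustrating the fix (not the project's code): split on the first two spaces only, so an empty or missing reason phrase still yields a valid status code.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int
parseStatusLine(const char *line, int *status, char *reason, size_t reasonSize)
{
    const char *space1 = strchr(line, ' ');

    if (space1 == NULL)
        return -1;                                      // no status code at all

    *status = atoi(space1 + 1);

    // The reason phrase may be empty or missing entirely
    const char *space2 = strchr(space1 + 1, ' ');
    snprintf(reason, reasonSize, "%s", space2 == NULL ? "" : space2 + 1);

    return 0;
}

int
main(void)
{
    int status;
    char reason[64];

    parseStatusLine("HTTP/1.1 200 OK", &status, reason, sizeof(reason));
    printf("%d [%s]\n", status, reason);                // 200 [OK]

    parseStatusLine("HTTP/1.1 200 ", &status, reason, sizeof(reason));
    printf("%d [%s]\n", status, reason);                // 200 [] -- empty reason phrase is fine

    return 0;
}
```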
Vendorized code is copied from another project when a library is not available and a git subproject won't work. Currently all the vendorized code is copied from PostgreSQL but it makes sense to have a more general mechanism for indicating vendorized code.
The .vendor extension will be used to denote vendorized code in the same way that .auto is used to denote auto-generated code.
Rather than bS3 use strStorage which can indicate more than two storage types.
For the moment there are still only two storage types but this change is required before more can be added.
cdebfb09 added relative times to backup.info but a subtle issue was introduced that would cause the tests to fail if the time acquired by cmdExpire() was exactly the same as timeNow used to format backup.info. cmdExpire() was working correctly given the inputs, but the tests did not run predictably.
This was found while running the tests with --no-valgrind --no-coverage which allows them to run a lot faster, thus exposing the timing issue.
These tests required sudo to achieve complete coverage.
Add a new coverage exception, vm_covered, that applies to code that can only be covered in a container. When the test is run outside of a container code sections that require a container will be excluded with TEST_CONTAINER_REQUIRED and the coverage exception will be added to prevent a coverage error.
This does require marking up the core code with vm_covered, which in some modules (e.g. common/io/tls/client) can be extensive. It's possible that some of these tests can be rewritten to be less dependent on sudo but no attempt was made to do that here.
Only allow coverage summaries in a vm since coverage summaries outside a vm will not be complete, which was true even before this commit.
Update error types thrown by bzip2 to be more consistent with gzip.
Update the bzip2 and gzip error default to be AssertError, as that's the more common case in both, and add a 'break;' to the default clause -- we don't intend to fall through those case statements, and even when the default is last we should be explicit about that.
Clean up some tabs that snuck in, rename a variable to be more clear, and add some comments.
The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days.
The new option will default to 'count' and therefore will not affect current installations. Setting repo-retention-full-type to 'time' will allow the user to use a time period, in days, to indicate full backup retention. Using this method, a full backup can be expired only if the time the backup completed is older than the number of days set with repo-retention-full (calculated from the moment the 'expire' command is run) and at least one full backup meets the retention period. If archive retention has not been configured, then the default settings will expire archives that are prior to the oldest retained full backup. For example, if there are three full backups ending at times that are 25 days old (F1), 20 days old (F2), and 10 days old (F3), and the full retention period is 15 days, then only F1 will be expired; F2 will be retained because F3 is not at least 15 days old.
Newer versions of sudo output this message to stderr when run in a container:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
See https://github.com/sudo-project/sudo/issues/42 for details.
A simple workaround is to prevent sudo from disabling core dumps. This seems safe enough because if sudo is segfaulting then core files are the least of our worries.
There are a number of Valgrind errors on Ubuntu 12.04 which do not happen on newer distro versions. However, suppressions for these errors have masked legitimate issues in subsequent code.
Instead, make suppressions VM specific so errors in other VMs are not masked.
Resolving localhost can vary based on the local network configuration so it is safer to just use a static IP.
This was found while testing on Travis-CI arm64.
bzip2 is a widely available, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), while being around twice as fast at compression and six times faster at decompression.
bzip2 is currently available on all supported platforms.
This was an oversight in 438b957f which added multiple compression type support. The booleans were interpreted as none and gz which works fine for the CompressType enum until the position of gz or none changes.
Zstandard is a fast lossless compression algorithm targeting real-time compression scenarios at zlib-level and better compression ratios. It's backed by a very fast entropy stage, provided by the Huff0 and FSE libraries.
Zstandard version >= 1.0 is required, which is generally only available on newer distributions.
If the WAL path is absolute then pg1-path should be optional but in fact it was required to load pg_control.
Skip the pg_control check when pg1-path is not specified. The check against the stanza version/system-id remains to protect the repo from corruption.
Perhaps this was intended to verify the WAL size but was never implemented.
Verifying the WAL size is probably a good idea so this member may be added back if the feature is implemented.
An upcoming feature requires new parameters for storagePosixNew() and this causes a lot of churn because almost every test creates a Posix storage object. Some refactoring in the tests might reduce this duplication but storagePosixNew() is collecting a lot of parameters so converting to storagePosixNewP() makes sense in any case.
There are relatively few call sites in the core code but they still benefit from better readability after this change.
There is no conflict if the path containing a file link is a parent path of a path link. The Perl code apparently had this right but the migration to C missed it.
Exclude this case when checking for link conflicts.
There have been a number of segfaults reported because a string option expected to be non-null was actually null. This is generally due to options that are expected to be set but are in fact optional.
Protect against this by creating cfgOptionStrNull() to get options that can be null, while changing cfgOptionStr() to always expect non-null. There are relatively few places where nulls are expected.
There is definitely a chance for breakage here as null options might currently be working in the field but will be caught by this new check. Hopefully introducing the check early in the release cycle will allow us to catch any issues.
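A sketch of the pattern with stand-in storage for option values (the lookup table and error handling here are illustrative, not the actual config code): the default getter treats NULL as a programming error, and a separate getter is used at the few call sites where a missing value is legitimate.

```c
#include <stdio.h>
#include <stdlib.h>

// Stand-in option storage -- the real code looks options up by id in the parsed config
static const char *optionValue[] = {"/var/lib/pgbackrest", NULL};

// Getter for options where a missing value is legitimate
static const char *
cfgOptionStrNull(int optionId)
{
    return optionValue[optionId];
}

// Default getter -- a NULL here is a coding error, caught early instead of segfaulting later
static const char *
cfgOptionStr(int optionId)
{
    const char *value = cfgOptionStrNull(optionId);

    if (value == NULL)
    {
        fprintf(stderr, "option %d is null but non-null was expected\n", optionId);
        exit(1);
    }

    return value;
}

int
main(void)
{
    printf("%s\n", cfgOptionStr(0));
    printf("%s\n", cfgOptionStrNull(1) == NULL ? "(not set)" : cfgOptionStrNull(1));
    return 0;
}
```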
It makes sense to do this check right after the first compression so any issues are caught early.
Also, none of the current compression formats omit decompressCmd so make the test mandatory.
Previously when retention-archive was set (either by the user or by default), archives prior to the archive-start of the oldest remaining full backup (after backup expiration occurred) would be expired even though the retention-archive threshold had not been met. For example, if there were 1 full backup remaining after backup expiration and the retention-archive was set to 2 and retention-archive-type=full, then archives prior to the archive-start of the remaining full backup would still be removed even though retention-archive required 2 full backups remaining before archives should be expired.
The thought was to keep the archive directory clean and since the full backup did not require prior archives, it was safe to delete them. However, this has caused problems for some users in the past (because they needed the WAL for other purposes) and with the new adhoc and time-based retention features, it was decided that the archives should remain until the threshold was met. The archives will eventually be removed and if having them causes space issues, the expire command and the retention-archive can always be run and adjusted.
Coverity was concerned that regExpError() might return and lead to an invalid reference of "this". This was unlikely since the function should never return but Coverity didn't know that. Also, a difference in error-handling logic at the two sites could cause the issue Coverity reported if they were to get out of sync.
Fix by refactoring out the core error function so that it is clear it will never return.
If an option may not be valid for a command it should be checked with cfgOptionValid() or cfgOptionTest().
It appears this rule is followed pretty strictly since the only changes required were in unit tests.
The specified backup set (i.e. the backup label provided and all of its dependent backups, if any) will be expired regardless of backup retention rules except that at least one full backup must remain in the repository.
Each option type enforced its own constraints but there was a lot of duplication. Centralize the enforcement to remove the duplication.
Also convert the option type assert to a production error. This is unlikely to happen in production but the test is quite cheap so it can't hurt.
Finally, add a NULL check. Most option types can never be NULL.
This is implemented by checking for a backup lock on the host where info is running so there are a few limitations:
* It is not currently possible to know which command is running: backup, expire, or stanza-*. The stanza commands are very unlikely to be running so it's pretty safe to guess backup/expire. Command information may be added to the lock file to improve the accuracy of the reported command.
* If the info command is run on a host that is not participating in the backup, e.g. a standby, then there will be no backup lock. This seems like a minor limitation since running info on the repo or primary host is preferred.
Make the restore clean process look more like manifest build, i.e. do cleanup of each target root directory outside the main cleanup callback. This means some code duplication but removes the logic handling "dot" paths.
Add tests for both restore and backup (which already worked but was not tested).
Bug Fixes:
* Remove empty subexpression from manifest regular expression. MacOS was not happy about this though other platforms seemed to work fine. (Fixed by David Raftis.)
Improvements:
* Non-blocking TLS implementation. (Reviewed by Slava Moudry, Cynthia Shang, Stephen Frost.)
* Only limit backup copy size for WAL-logged files. The prior behavior could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup. (Reviewed by Cynthia Shang.)
* TCP keep-alive options are configurable. (Suggested by Marc Cousin.)
* Add io-timeout option.
Timeout used for connections and read/write operations.
Note that the entire read/write operation does not need to complete within this timeout but some progress must be made, even if it is only a single byte.
The prior blocking implementation seemed to be prone to locking up on some (especially recent) kernel versions. Since we were unable to reproduce the issue in a development environment we can only speculate as to the cause, but there is a good chance that blocking sockets were the issue or contributed to the issue.
So move to a non-blocking implementation to hopefully clear up these issues. Testing in production environments that were prone to locking shows that the approach is promising and at the very least not a regression.
The main differences from the blocking version are the non-blocking connect() implementation and handling of WANT_READ/WANT_WRITE retries for all SSL*() functions.
Timeouts in the tests needed to be increased because socket connect() and TLS SSL_connect() were not included in the timeout before. The tests don't run any slower, though. In fact, all platforms but Ubuntu 12.04 worked fine with the shorter timeouts.
select() is a bit old-fashioned and cumbersome to use. Since the select() code needed to be modified to handle write ready this seems like a good time to upgrade to poll().
poll() has been around for a long time so there doesn't seem to be any need to provide a fallback to select().
Also change the error on timeout from FileReadError to ProtocolError. This works better for read vs. write and failure to poll() is indicative of a protocol error or unexpected EOF.
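A sketch of the poll()-based readiness wait that replaces select(); the OpenSSL retry loop (SSL_ERROR_WANT_READ/WANT_WRITE) is only described in comments so the snippet stays self-contained:

```c
#include <poll.h>
#include <stdbool.h>
#include <stdio.h>

// Wait for the fd to be ready in the direction the last SSL_*() call asked for
// (SSL_ERROR_WANT_READ -> wait for read, SSL_ERROR_WANT_WRITE -> wait for write),
// then the caller retries the SSL call.
static bool
fdReady(int fd, bool wantWrite, int timeoutMs)
{
    struct pollfd pollFd = {.fd = fd, .events = (short)(wantWrite ? POLLOUT : POLLIN)};

    int result = poll(&pollFd, 1, timeoutMs);

    if (result < 0)
        return false;                                   // treat as a protocol error

    return result > 0;                                  // 0 means timeout, also a protocol error
}

int
main(void)
{
    // stdin is typically not readable right away, so this usually reports a timeout
    printf("stdin readable: %s\n", fdReady(0, false, 100) ? "yes" : "no");
    return 0;
}
```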
The prior behavior introduced in dcddf3a5 could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup since they are copied via tmp files and could change size during the backup.
In general it seems safer to limit this feature to WAL-logged files which will be reconstructed during recovery.
Building on these platforms gives us better coverage for our build code. Cirrus CI was chosen because it is the only service that supports FreeBSD (that we could find).
The FreeBSD configuration for Vagrant is currently just enough to perform a build.
The MacOS configuration is not actually for Vagrant (yet) but does show the steps needed to setup the build environment on MacOS.
This allows us to add new configurations mostly without changing the behavior of vagrant from the command line, i.e. vagrant up and vagrant ssh will continue to bring up the default configuration.
However, vagrant destroy -f will remove all configurations. That's really only a change in behavior if more than one configuration is running, which is not currently possible.
A session looks much the same whether it is initiated from the client or the server, so use the session objects to implement the TLS, HTTP, and S3 test servers.
For TLS, at least, there are some differences between client and server sessions so add a client/server type to SocketSession to determine how the session was initiated.
Aside from reducing code duplication, the main advantage is that the test server will now time out rather than hanging indefinitely when less input than expected is received.
Previously an error was only thrown when errno was set but in practice this is usually not the case. This may have something to do with getting errno late but attempts to get it earlier have not been successful. It appears that errno usually gets cleared and spot research seems to indicate that other users have similar issues.
An error at this point indicates unexpected EOF so it seems better to just throw an error all the time and be consistent.
To test this properly our test server needs to call SSL_shutdown() except when the client expects this error.
This abstraction allows the session code to be shared between the TLS client and (upcoming) server code.
Session management is no longer implemented in TlsClient so the HttpClient was updated to free and create sessions as needed. No test changes were required for HttpClient so the functionality should be unchanged.
Mechanical changes to the TLS tests were required to use TlsSession where appropriate rather than TlsClient. There should be no change in functionality other than how sessions are managed, i.e. using tlsClientOpen()/tlsSessionFree() rather than just tlsClientOpen().
The errorInternalThrowSys*() functions were marked as returning during coverage testing even when they had no possibility to return, i.e. the error parameter was set to constant true. This meant the compiler would treat the functions as returning even when they would not.
Instead create completely separate functions for coverage to use for THROW_ON_SYS_ERROR*() that can return and leave the regular functions marked __noreturn__.
This abstraction allows the session code to be shared between the socket client and (upcoming) server code. There should be no difference in how the code works -- only the organization has changed. Note that no changes to the tests were required.
This same abstraction will be required for TlsClient but that will be done in a separate commit because it requires test changes.
These forks were done in a custom way (not sure why) and lacked the capability of the standard macros for the parent to wait for child exit.
This meant that the server would continue to run after the tests were complete and that multiple servers could run at once. This caused subtle timing and connection issues that required larger timeouts to resolve.
Don't change the timeouts here since they need to be adjusted in future commits anyway.
It is pretty much impossible for a static IP to not resolve to an address but in theory the error could catch other conditions so it seems best to keep it.
When the Vagrant file was updated to use pgbackrest/ vs /backrest/ as the location for executing tests and building the documentation, parts of contributing.xml (and hence CONTRIBUTING.md) were missed. Only the parts of the document that are actually executed when CONTRIBUTING.md is built from contributing.xml were updated; the parts that are not executed were left unchanged.
This commit fixes the contributing.xml issue but also removes test/README.md as its contents were out of date and redundant given that they are covered in CONTRIBUTING.md.
This limitation forced extra logic in cases where zero wait times were needed.
Remove the limitation and the extra logic in cases where zero wait times are possible.
Help identify whether errors are happening in the forked server or the main test by showing the line number where the server was forked off in the stack trace.
If these are not reset then an error not wrapped in a TEST_ERROR*() macro may show the line number of the previous error in a stack trace, which is confusing.
It is better for the line number to be unreported than wrong.
The default process id was previously always 0 but there are cases where it is useful to be able to set the default.
Currently the only use case is for testing but the upcoming server code will also make use of it.
The storage driver requires two list functions to be implemented, list and infoList. But the former is a subset of the latter so implementing both in every driver is wasteful. The reason both exist is that in Posix it is cheaper to get a list of names than it is to stat files to get size, time, etc. In S3 these operations are equivalent.
Introduce storageInfoLevelType to determine the amount of information required by the caller. That way Posix can work efficiently and all drivers can return only the data required which saves some bandwidth. The storageList() and storageInfoList() functions remain in the storage interface since they are useful -- the only change is simplifying the drivers with no external impact.
Note that since list() accepted an expression infoList() must now do so. Checking the expression is optional for the driver but can be used to limit results or save IO costs.
Similarly, exists() and pathExists() are just specialized forms of info() so adapt them to call info() instead.
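A sketch of the idea with assumed names (the real StorageInfoLevel values and StorageInfo fields differ): the caller states how much information it needs and exists() becomes a thin wrapper over info().

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

// How much information the caller needs
typedef enum
{
    storageInfoLevelExists,                             // only whether the file exists
    storageInfoLevelBasic,                              // plus type, size, modification time
    storageInfoLevelDetail,                             // plus ownership and mode
} StorageInfoLevel;

typedef struct StorageInfo
{
    bool exists;
    StorageInfoLevel level;                             // how much of the rest is populated
    long long size;
} StorageInfo;

// A Posix-style driver can skip work for cheap levels; object stores return everything anyway
static StorageInfo
storageInfo(const char *path, StorageInfoLevel level)
{
    StorageInfo result = {.level = level};
    struct stat statBuf;

    if (stat(path, &statBuf) == 0)
    {
        result.exists = true;

        if (level >= storageInfoLevelBasic)
            result.size = (long long)statBuf.st_size;
    }

    return result;
}

// exists() is just info() at the cheapest level
static bool
storageExists(const char *path)
{
    return storageInfo(path, storageInfoLevelExists).exists;
}

int
main(void)
{
    printf("/tmp exists: %s\n", storageExists("/tmp") ? "yes" : "no");
    return 0;
}
```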
It's better to start out with plural forms rather than flip back and forth as functions are added and subtracted. So, use "Constructors" instead of "Constructor".
Use "Getters/Setters" rather than "Getters" or "Setters" to avoid similar churn.
This has been the policy for some time but due to migration pressure only new functions and refactors have been following this rule. Now it seems sensible to make a clean sweep and move all the comments that have not been moved already (i.e. most of them).
Only obvious typos and gross inaccuracies in the comments have been fixed. For the most part this was a copy-and-paste operation.
Useless comments, e.g. "New object", were not copied. Even so, there are surely many deficient comments left.
Some rearranging was done where needed and functions were placed in the proper sections, e.g. "Constructors", "Functions", etc.
A few function prototypes were found that no longer had an implementation. These were removed, but there may be more.
The coding document has been updated to reflect this policy, which is not new but has never been documented.
Prior to performing a backup or expiring backups, the backup.info file is validated by reconstructing it from the backups in the repository. When a backup had already been removed from the repo, it was removed from the backup.info file but its dependents were not.
Now, the dependent backups will also be removed from backup.info and only backups in the repo that have their full dependency chain will be added to backup.info if they are missing.
These functions accepted const Buffer objects and returned non-const pointers which is definitely not a good idea. Add bufPtrConst() to handle cases where only a const return value is needed and update call sites.
Use UNCONSTIFY() in cases where library code out of our control requires a non-const pointer. This includes the already-documented exception in command/backup/pageChecksum and input buffers in the gzCompress and gzDecompress filters.
Allows casting const-ness away from an expression, but doesn't allow changing the type. Enforcement of the latter currently only works for gcc-like compilers.
Note that it is not safe to cast const-ness away if the result will ever be modified (it would be undefined behavior). Doing so can cause compiler mis-optimizations or runtime crashes (by modifying read-only memory). It is only safe to use when the result will not be modified, but API design or language restrictions prevent you from declaring that (e.g. because a function returns both const and non-const variables).
Note that this only works in function scope, not for global variables (it would be nice, but not trivial, to improve that).
UNCONSTIFY() requires static assert which is a feature in its own right.
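A minimal sketch of such a macro for gcc/clang (the project's actual definition may differ): the static assert verifies that only the const qualifier changes, not the underlying type.

```c
// Cast away const while statically checking that only the const qualifier changes.
// Works at function scope via a statement expression (gcc/clang extension).
#define UNCONSTIFY(type, expr)                                                     \
    ({                                                                             \
        _Static_assert(                                                            \
            __builtin_types_compatible_p(__typeof__(expr), const type),            \
            "UNCONSTIFY() may only remove const, not change the type");            \
        (type)(expr);                                                              \
    })

int
main(void)
{
    const char *message = "read-only";
    char *writable = UNCONSTIFY(char *, message);       // compiles
    // char *bad = UNCONSTIFY(char *, 42);              // would fail the static assert
    (void)writable;                                     // must never actually be modified
    return 0;
}
```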
PostgreSQL enables this option when available which seems like a good idea since we also do not share connections between processes.
Note that as in PostgreSQL there is no way to disable this option.
PostgreSQL enables this option when available which seems like a good idea since we also buffer transmissions.
Note that as in PostgreSQL there is no way to disable this option.
This is really a socket option so the new name is clearer.
Since common/io/socket/tcp will contain a mix of options it makes sense to rename it to socket and cascade name changes as needed.
Prior to 2.25 the individual TCP keep-alive options were not being configured due to a missing header. In 2.25 they were being configured incorrectly due to a disconnect between the timeout specified in ms and what was expected by the TCP options, i.e. seconds.
Instead make the TCP keep-alive options directly configurable, with correct units and better testing. Keep-alive is enabled by default (though it can be defaulted to the system setting instead) and the rest of the options are not set by default. This is in line with what PostgreSQL does, though PostgreSQL does not allow keep-alive to be defaulted.
Also move configuration of TCP options before connect() as PostgreSQL does.
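A sketch of what configuring these options before connect() can look like on Linux (SO_KEEPALIVE, TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT); the helper name and defaulting behavior are illustrative, not the project's actual code.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Values <= 0 mean "leave the system default", mirroring the options not being set by default;
 * error checking omitted for brevity */
static void
sckKeepAliveSet(int fd, int idleSec, int intervalSec, int count)
{
    int enable = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &enable, sizeof(enable));

    if (idleSec > 0)
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idleSec, sizeof(idleSec));

    if (intervalSec > 0)
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intervalSec, sizeof(intervalSec));

    if (count > 0)
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}
```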
This functionality was embedded into TlsClient but that was starting to get unwieldy.
Add SocketClient to contain all socket-related client functionality.
Documentation builds and tests have only a few packages in common so rearrange packages to save some time and clarify dependencies.
Remove the libperl-dev package which became obsolete when the LibC module was removed in 79cfd3ae.
Add a few comments for good measure.
The primary purpose of this test (currently) is to measure the performance of storageRemoteInfoList(), which is critical for building a manifest when the PostgreSQL host is remote.
The starting baseline of 1 million files is perhaps a bit aggressive but it seems very likely to blow up if there are performance regressions.
Recent performance improvements allow increasing the baseline of this test.
In general it is best if the baseline is large enough to cause the test to blow up if there are performance regressions.
Features:
* Add lz4 compression support. Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* Add --dry-run option to the expire command. Use dry-run to see which backups/archive would be removed by the expire command without actually removing anything. (Contributed by Cynthia Shang, Luca Ferrari.)
Improvements:
* Improve performance of remote manifest build. (Suggested by Jens Wilke.)
* Fix detection of keepalive options on Linux. (Contributed by Marc Cousin.)
* Add configure host detection to set standards flags correctly. (Contributed by Marc Cousin.)
* Remove compress/compress-level options from commands where unused. These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options. (Reviewed by Cynthia Shang.)
* Limit backup file copy size to size reported at backup start. If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data. (Reviewed by Cynthia Shang.)
If the tests are running quickly then the time target might end up the same as the end time of the prior full backup. That means restore auto-select will not pick it as a candidate and restore the last backup instead causing the restore compare to fail.
So, sleep one second.
Add functions to select a current backup by label and to retrieve a backup dependency list for any given backup.
Update the expire code to utilize the new functions and to expire backup sets from newest dependency to oldest.
Decisions about when to optimize or enable debug code were spread out in too many places making it hard to keep them consistent.
Centralize the logic as much as possible to make it easier to maintain.
Append N characters from a zero-terminated string.
Note that the string does not actually need to be zero-terminated as long as N does not run past the end of the data being concatenated.
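A minimal sketch of the contract; the project's String object is more involved and the SimpleStr type here is hypothetical.

```c
#include <stdlib.h>
#include <string.h>

typedef struct SimpleStr
{
    char *buffer;                                               /* Always zero-terminated */
    size_t size;                                                /* Length not including the terminator */
} SimpleStr;

/* Append n characters from source; error handling omitted for brevity */
static void
simpleStrCatZN(SimpleStr *this, const char *source, size_t n)
{
    this->buffer = realloc(this->buffer, this->size + n + 1);

    /* memcpy() never reads past source + n, so source need not be zero-terminated as long as
     * n does not run past the end of the data being appended */
    memcpy(this->buffer + this->size, source, n);

    this->size += n;
    this->buffer[this->size] = '\0';
}
```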
In the ExpireEnvTest.pm backupCreate() function, backup-prior was incorrectly set for diff backups to the previous backup regardless of what backup type the previous backup was. This did not cause any issues in the Mock Expire tests before because it was not being checked. However, in order to reduce churn in the expect logs for a new feature where the backup-prior is utilized, this is being fixed so that the full backup is always used as backup-prior.
The major bottleneck was finding the memory allocation to be resized since it required a sequential search through a list.
Instead, put the allocation header at the beginning of the allocation and return an offset to the user for their buffer. This allows us to use pointer arithmetic to get back to the allocation header quickly when resizing. A side effect is to make memFree() faster as well. The downside is we won't detect garbage pointers passed to memResize()/memFree(), which is also true for MemContext pointers.
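A sketch of the header-before-buffer layout; names are illustrative and the real allocator also deals with alignment, error handling, and context bookkeeping.

```c
#include <stdlib.h>

typedef struct AllocHeader
{
    size_t size;                                                /* Size of the user buffer that follows */
} AllocHeader;

static void *
allocSketch(size_t size)
{
    /* Header and user buffer in a single allocation */
    AllocHeader *header = malloc(sizeof(AllocHeader) + size);
    header->size = size;

    /* The caller gets a pointer just past the header */
    return header + 1;
}

static void *
resizeSketch(void *buffer, size_t sizeNew)
{
    /* Pointer arithmetic recovers the header -- no list search required */
    AllocHeader *header = (AllocHeader *)buffer - 1;

    header = realloc(header, sizeof(AllocHeader) + sizeNew);
    header->size = sizeNew;

    return header + 1;
}
```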
The performance benefits can be pretty large in certain cases, in particular when loading and saving manifests. The following are the before and after performance tests on a 900K file manifest.
Before:
run 003 - manifestNewLoad()/manifestSave()
000.000s l0125 - generate manifest
183.411s l0236 - 101.2MB manifest generated with 900000 files
183.411s l0239 - load manifest
403.816s l0243 - completed in 220405ms
403.816s l0245 - check file total
403.816s l0248 - save manifest
670.217s l0253 - completed in 266401ms
670.217s l0256 - find all files
671.263s l0266 - completed in 1046ms
After:
run 003 - manifestNewLoad()/manifestSave()
000.000s l0125 - generate manifest
007.730s l0236 - 101.2MB manifest generated with 900000 files
007.730s l0239 - load manifest
033.431s l0243 - completed in 25701ms
033.431s l0245 - check file total
033.431s l0248 - save manifest
057.755s l0253 - completed in 24324ms
057.755s l0256 - find all files
058.689s l0266 - completed in 934ms
* Fix a few issues with file names being truncated introduced in 787d3fd6.
* Use function line info from the lcov file to calculate which lines to show for uncovered functions. This is more accurate than what we were doing before and function comment headers are now excluded which reduces clutter in the report.
The prior macros had grown over time to be pretty significant pieces of code that required a lot of compile time, though runtime was efficient.
Move most of the macro code into functions to reduce compile time, perhaps at a slight expense to runtime. The overall performance benefit is 10-15% so this seems like a good tradeoff.
Add TEST_RESULT_UINT_INT() to safely compare uint to int with range checking.
Upcoming changes to the TEST_RESULT_* macros are more type safe and identified that the wrong macros were being used to test results in many cases.
Commit these changes separately to verify that they work with the current macro versions.
Note that no core bugs were exposed by these changes.
TRY...CATCH blocks are fairly expensive and when all the TEST_RESULT*() macros succeed they are not needed.
Instead just record info at the start of the result test so a detailed exception can be thrown in test.c in the rare case where an exception occurs.
This is helpful for test macros that know the line number.
The line number can now be non-zero below the top of the stack without WITH_BACKTRACE so instead ignore the line number for output when it is zero.
This was passing since we don't test WITH_BACKTRACE in CI because it is used only for test builds.
Ideally we would test this but it doesn't seem worth the trouble at the moment.
Building the contributing document has some special requirements because it runs Docker in Docker so the repo path must align on the host and all Docker containers. Run `pgbackrest/doc/doc.pl` from within the home directory of the user that will do the doc build, e.g. `/home/vagrant`. If the repo is not located directly in the home directory, e.g. `/home/vagrant/pgbackrest`, then a symlink may be used, e.g. `ln -s /path/to/repo /home/vagrant/pgbackrest`.
Mount the repo in the Vagrantfile at /home/vagrant/pgbackrest but provide a link from the old location at /backrest to make the transition less painful.
The old coverage data has been recorded so it is no longer needed. In newer versions of gcc leaving this file around can lead to an error when writing profile data after forking off to a non-pgbackrest binary (which we do in some unit tests).
* Show all uncovered branch parts even when there are more than two parts per branch. This is the way gcc9 reports coverage so it needs to work even if it doesn't make as much sense as the old way.
* Show covered branches in functions where coverage is missing. Showing just the uncovered branches can be confusing because it's not always clear how the coverage relates to the code. By showing all branch coverage (+ or -) this correspondence is made easier.
We don't report branch coverage on test modules (e.g. test/src/module/common/errorTest.c) but the code that excluded branch coverage from the test module would also exclude it from all core modules if the test module was included in the lcov report due to lack of function/line coverage.
Adjust the coverage code to only exclude branches during the extraction of test module coverage.
For some reason gcc9 would not do -O0 builds in combination with one of the options that libperl required. Now that libperl is gone this exception is no longer required.
If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data.
This also reduces the likelihood of seeing torn pages during the copy. Torn pages can still occur in the middle of the file, though, so they must be handled.
The manifest is excellent for validation but including the entire manifest is too noisy and some values are architecture/algorithm dependent.
Output a redacted version that contains the most important information which can be improved on over time.
This macro will automatically do key replacement before the comparison. This saves the indentation required for an embedded function call.
Possibly TEST_RESULT_Z_KEYRPL() would also be useful but it will be added when needed.
The current use case is reading files from the PostgreSQL cluster during backup.
A file may grow during backup but we only need to copy the number of bytes that were reported during the manifest build. The rest will be rebuilt from the WAL during recovery so copying more is just a waste of space.
Limiting the copy sizes in backup will be part of a future commit.
When multiple files were missing coverage it could be hard to locate the coverage report for a specific file.
Add links for uncovered files to make this easier.
Also move table titles out of the table so they are valid html.
These days it is better to include the module in define.yaml when we need to poke at the internal implementation.
This doesn't quite work for the log test harness, so for now some variables will need to remain extern'd in debug builds.
Enhance dry-run support added in 2fa69af8 by forbidding writes in the storage layer and adding prefixes to log messages.
The former will protect against mistakes in dry-run implementations and the latter will make it clear when a command was executed in dry-run mode.
Update expire unit tests with the new log prefix.
These results were stored in the vagrant path along with a full copy of src.
Instead store the raw coverage data in test/result/raw and change source references to the files that already exist in [test-path]/repo.
It makes more sense to build in the test path since many developers won't have a vagrant path. Anyway, it's better not to modify the vagrant path since it belongs to vagrant.
Instead of installing the binary just mount it into the container from where it was built. This saves a bit of time and space.
When pgbackrest was present this test behaved unexpectedly.
While the binary is not currently required for this test it might be in the future so fix the test to prevent a regression.
Building packages is not a normal part of development so don't build packages by default. Instead build them in CI as needed.
Do the builds in test/result instead of .vagrant to be friendlier with hosts that are not running vagrant. Anyway, it's probably not a good idea to be creating files in the .vagrant path.
Building the configure.ac script can take multiple seconds depending on the state of the autoconf cache. Use a checksum to only rebuild when configure.ac has changed no matter how the timestamps have changed.
Configure:
* Use standard make variables, e.g. CFLAGS, rather than our own, e.g. CINCLUDE
* Add PG_CONFIG var for configuring custom pg_config location
* Don't error if xml_config or pg_config is missing (but error if libs/headers not found)
* Check for zlib.h header
* Check for lz4frame.h header when liblz4 is present
Make:
* Use gcc-style auto dependencies
* Put src list at the top since it is most frequently modified
* Add clean-all target to also remove auto-generated config files
This file is used to generate src/configure and is not required to make pgbackrest since src/configure is updated before distribution.
Move to src/build so it is out of the way.
Changes to reference.xml can affect the command-line documentation built into the binary so changes must trigger an auto-generated code build during smart builds.
The prior method was to build a special container to hold these files which meant they would get stale on development systems. On CI the container was always rebuilt so failures would be seen there even when dev seemed to be working.
Instead get the package source when the package is built to ensure it is as up-to-date as possible.
This change was prompted by failures on the Ubuntu 12.04 container while getting the package source, probably due to an ancient version of git. Package builds are no longer supported on that platform with the addition of lz4 compression so it didn't seem worth fixing.
The primary source for project info is now src/version.h.
The pgBackRestDoc::ProjectInfo module loads the project info from src/version.h at runtime so there is no need to update it.
This is consistent with the way BackRest and BackRest test were renamed way back in 18fd2523.
More modules will be moving to pgBackRestDoc soon so renaming now reduces churn later.
This directory was once the home of the production Perl code but since f0ef73db this is no longer true.
Move the modules to test in most cases, except where the module is expected to be useful for the doc engine beyond the expected lifetime of the Perl test code (about a year if all goes well).
The exception is pgBackRest::Version which requires more work to migrate since it is used to track pgBackRest versions.
LZ4 compresses data faster than gzip but at a lower ratio. This can be a good tradeoff in certain scenarios.
Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest.
This was the interface between Perl and C introduced in 36a5349b but since f0ef73db has only been used by the Perl integration tests. This is expensive code to maintain just for testing.
The main dependency was the interface to storage, no matter where it was located, e.g. S3. Replace this with the newly-introduced repo commands (d3c83453) that allow access to repo storage via the command line.
The other dependency was on various cfgOption* functions and CFGOPT_ constants that were convenient but not necessary. Replace these with hard-coded strings in most places and create new constants for commonly used values.
Remove all auto-generated Perl code. This means that the error list will no longer be maintained automatically so copy used errors to Common::Exception.pm. This file will need to be maintained manually going forward but there is not likely to be much churn as the Perl integration tests are being retired.
Update test.pl and related code to remove LibC builds.
Ding, dong, LibC is dead.
These commands are generally useful but more importantly they allow removing LibC by providing the Perl integration tests an alternate way to work with repository storage.
All the commands are currently internal only and should not be used on production repositories.
If the command was passed a file it would return no results since it was originally intended to list files when passed a path.
However, as a general purpose command working directly with files makes sense.
This command only makes sense for the repository storage since other storage (e.g. pg and spool) must be located on a local Posix filesystem and can be listed using standard unix commands. Since the repo storage can be located in lots of places having a common way to list it makes sense.
Prefix with repo- to make the scope of this command clear.
Update documentation to reflect this change.
Add compress-type option and deprecate compress option. Since the compress option is boolean it won't work with multiple compression types. Add logic to cfgLoadUpdateOption() to update compress-type if it is not set directly. The compress option should no longer be referenced outside the cfgLoadUpdateOption() function.
Add common/compress/helper module to contain interface functions that work with multiple compression types. Code outside this module should no longer call specific compression drivers, though it may be OK to reference a specific compression type using the new interface (e.g., saving backup history files in gz format).
Unit tests only test compression using the gz format because other formats may not be available in all builds. It is the job of integration tests to exercise all compression types.
Additional compression types will be added in future commits.
All the methods in this module will need to be implemented via the command-line in order to get rid of LibC, so the first step is to reduce the code in the module as much as possible.
First remove storageDb() and use storageTest() instead. Then create storageTest() using pgBackRestTest::Common::Storage which has no dependencies on LibC. Now the only storage using the LibC interface is storageRepo().
Remove all link functions since those operations cannot be performed on a repo unless it is Posix, in which case the LibC interface is not needed. Same for owner().
Remove pathSync() because syncs are not required in the tests. No test data is reused after a crash.
Path create/exists functions should never be explicitly performed on a repo so remove those. File exists can be implemented by calling info() instead.
Remove encryption detection functions which were only used by Backup/Archive::Info reconstruct() which are now obsolete.
Remove all filters except pgBackRest::Storage::Filter::CipherBlock since they are not being used. That also means there are no filters returning results so remove all the result code.
Move hashSize() and pathAbsolute() into pgBackRest::Storage::Base where they can be shared between pgBackRest::Storage::Storage and pgBackRestTest::Common::Storage.
This was mostly dead code except the DB_BACKUP_ADVISORY_LOCK constant, moved to the real/all test module, and the function that pulls info from pg_control, moved to ExpireEnvTest.pm.
The postgres/pageChecksum module was designed as an interface to the C structs for the Perl code. The new C code can do this directly so no need for an interface.
Move the remaining test for pgPageChecksum() into the postgres/interface test module.
We were using a customized version which worked fine but was hard to merge with upstream changes. Now this code is maintained much like the types in static.auto.h that we copy and check with each release.
The goal is to eventually build directly against PostgreSQL (either source or libcommon) and this brings us one step closer.
All zero pages should not have checksums. Not only is this test invalid but it will not work with the stock page checksum implementation in PostgreSQL, which checks for zero pages. Since we will be using that code verbatim soon this test needs to go.
Using static values serves as a better cross-check against the page checksum code. The downside is that these checksums may not work with some big endian systems but in that case neither will the unit tests.
We can also remove the page checksum interface from LibC which brings us one step closer to eliminating it.
Page size is passed around a lot but in fact it can only have one value, PG_PAGE_SIZE_DEFAULT, which is checked when pg_control is loaded. There may be an argument for supporting multiple page sizes in the future but for now just use the constant to simplify the code.
There is also a significant performance benefit. Because pageSize was being used in pageChecksumBlock() the main loop was neither unrolled nor vectorized (-funroll-loops -ftree-vectorize) as it is now with a constant loop boundary.
This function made validation faster in Perl because fewer calls (and buffer transformations) were required when all checksums were valid.
In C calling pageChecksumTest() directly is just as efficient so there is no longer a need for pageChecksumBufferTest().
These data structures were copied a few places (but only once in the core code) so put them in a place where everyone can use them.
To do this create a new file, static.auto.h, to contain data types and macros that have stayed the same through all the versions of PostgreSQL that we support. This allows us to have single, non-versioned set of headers and code for stable data structures like page headers.
Migrate a few types from version.auto.h that are required for page header structures and pull the remaining types from PostgreSQL directly.
We had previously renamed xlog to wal so update those where required since we won't be modifying the PostgreSQL names anymore.
The S3 driver depends on being able to generate a common prefix to limit the number of results from list commands, which saves on bandwidth.
The prior implementation could be tricked by an expression like ^ABC|^DEF where there is more than one possible prefix. To fix this, disallow any prefix when another ^ anchor is found in the expression ([^ and \^ are OK since they are not anchors).
Note that this was not an active bug because there are currently no expressions with multiple ^ anchors.
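A simplified sketch of the idea; the real code recognizes a different set of prefix-terminating characters and the function name is illustrative.

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Return a malloc'd literal prefix for an anchored expression, or NULL when no safe prefix exists
 * (allocation error handling omitted for brevity) */
static char *
regExpPrefixSketch(const char *expression)
{
    if (expression == NULL || expression[0] != '^')
        return NULL;

    size_t prefixSize = 0;

    for (size_t idx = 1; expression[idx] != '\0'; idx++)
    {
        char next = expression[idx];

        /* A second anchor could mean more than one prefix, so bail out -- unless it is [^ (class
         * negation) or \^ (escaped), which are not anchors */
        if (next == '^' && expression[idx - 1] != '[' && expression[idx - 1] != '\\')
            return NULL;

        /* Grow the prefix only while the leading characters are plain literals */
        if (prefixSize == idx - 1 && (isalnum((unsigned char)next) || next == '/' || next == '-' || next == '_'))
            prefixSize++;
    }

    if (prefixSize == 0)
        return NULL;

    char *prefix = malloc(prefixSize + 1);
    memcpy(prefix, expression + 1, prefixSize);
    prefix[prefixSize] = '\0';

    return prefix;                                              /* "^ABC|^DEF" -> NULL, "^backup/" -> "backup/" */
}
```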
The restore test function was passing strBackup to the restoreCompare function but when the restore is expected to pick a backup based on a timestamp, then strBackup may not be the one chosen.
Modified the code so that strBackupExpected is set based on the parameters passed to the function and this is then passed to restoreCompare.
This option was used for boolean testing but it will soon be deprecated and the semantics changed. To reduce churn it seems easiest to just use other options for testing. This will also be helpful when the option is eventually removed.
These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options.
This was a minor optimization used in protocol layer compression. Even though it was slightly faster, it omitted the crc-32 that is generated during normal compression which could lead to corrupt data after a bad network transmission. This would be caught on restore by our checksum but it seems better to catch an issue like this early.
The raw option also made the function signature different than future compression formats which may not support raw, or require different code to support raw.
In general, it doesn't seem worth the extra testing to support a format that has minimal benefit and is seldom used, since protocol compression is only enabled when the transmitted data is uncompressed.
"gz" was used as the extension but "gzip" was generally used for function and type naming.
With a new compression format on the way, it makes sense to standardize on a single abbreviation to represent a compression format in the code. Since the extension is standard and we must use it, also use the extension for all naming.
The prior code used TRY...CATCH blocks to cleanup mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized when the error occurred and ensuring that the prior memory context was restored.
This worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.
Instead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.
One bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros.
If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.
This is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file.
This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.
Restore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required.
The main improvement is a double-fork to prevent zombie processes if the parent process exits after the (child) async process. This is a real possibility since the parent process sticks around to monitor the results of the async process.
In the first fork, ignore SIGCHLD in the very unlikely case that the async process exits before the first fork. This is probably only possible if the async process exits immediately, perhaps due to a chdir() failure. Set SIGCHLD back to default in the async process so waitpid() will work as expected.
Also update the comment on chdir() to more accurately reflect what is happening.
Finally, add a test in certain debug builds to ensure the first fork exits very quickly. This only works when valgrind is not in use because valgrind makes forking so slow that it is hard to tell if the async process performed work or not (in the case that the second fork goes missing and the async process is a direct child).
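A sketch of the double-fork shape described above; signal handling and the actual async work are simplified and the function name is illustrative.

```c
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void
asyncProcessFork(void)
{
    pid_t pidFirst = fork();

    if (pidFirst == 0)
    {
        /* First fork: ignore SIGCHLD in case the async process exits before this process does */
        signal(SIGCHLD, SIG_IGN);

        if (fork() == 0)
        {
            /* Async process: restore default SIGCHLD so waitpid() works as expected */
            signal(SIGCHLD, SIG_DFL);

            /* ... chdir(), close file descriptors, and do the async work here ... */
            _exit(0);
        }

        /* First fork exits immediately so the async process is reparented to init and can never
         * become a zombie of the original parent */
        _exit(0);
    }

    /* Parent reaps the short-lived first fork and continues to monitor the async process results
     * by other means */
    waitpid(pidFirst, NULL, 0);
}
```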
In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully.
2a06df93 removed the error file so an old error would not be reported before the async process had a chance to try again. However, if the async process was already running this might lead to a timeout error before reporting the correct error.
Instead, remove the error files once we know that the async process will start, i.e. after the archive lock has been acquired.
This effectively reverts 2a06df93.
Generally, the content-length or transfer-encoding headers will be used to specify how much content should be expected.
There is a special case where the server sends 'Connection:close' without the content headers and the content may be read up until eof.
This appears to be an atypical usage but it is required by the specification.
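A sketch of the decision; the struct and function names are illustrative and not the project's HTTP client API.

```c
#include <stdbool.h>

typedef struct HttpResponseSketch
{
    bool contentLengthFound;                                    /* Content-Length header present */
    bool contentChunked;                                        /* Transfer-Encoding: chunked */
    bool connectionClose;                                       /* Connection: close */
} HttpResponseSketch;

static bool
httpContentReadUntilEof(const HttpResponseSketch *response)
{
    /* When the headers specify the size, read exactly that much */
    if (response->contentLengthFound || response->contentChunked)
        return false;

    /* Otherwise Connection: close means the content runs until the server closes the connection */
    return response->connectionClose;
}
```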
Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used.
Currently a limited number of date formats are recognized and timezone names are not allowed, only timezone offsets.
Add tzPartsValid() and tzOffsetSecond() to calculate timezone offsets from user provided values.
Update epochFromParts() to accept a timezone offset in seconds.
The test that checks for no output from the server was leaving a connection open which valgrind was complaining about.
Wait on the server long enough to cause the error on the client then close the connection to free the memory.
Validate that checksums exist for zero size files. This means that the checksums for zero size files are explicitly set by backup even though they'll always be the same. Also validate that zero length files have the correct checksum.
Validate that repo size is > 0 if size is > 0. No matter what compression type is used a non-zero amount of data cannot be stored in zero bytes.
This is a modest start but it addresses the specific issue that was caused by the bug fixed in 45ec694a. This validation will produce an immediate error rather than erroring out partway through the restore.
More validations are planned but this is the most important one and seems safest for this release.
If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.
The issue was that removing files from the manifest invalidated the pointers stored in the processing queues. When a file was removed, all the pointers shifted to the next file in the list, causing a file to be unprocessed. Since the unprocessed file was still in the manifest it would be saved with no checksum, causing a failure on restore.
When process-max was > 1 then the bug would often not express since the file had already been pulled from the queue and updates to the manifest are done by name rather than by pointer.
pkg-config is a generic way to get build options rather than relying on a package-specific utility.
XML2_CONFIG can be used to override this utility for systems that do not ship pkg-config.
Previously memNew() used memset() to initialize all struct members to 0, NULL, false, etc. While this appears to work in practice, it is a violation of the C specification. For instance, NULL == 0 must be true but NULL is not required to be represented with all zero bits.
Instead use designated initializers to initialize structs. These guarantee that struct members will be properly initialized even if they are not specified in the initializer. Note that due to a quirk in the C99 specification at least one member must be explicitly initialized even if it needs to be the default value.
Since pre-zeroed memory is no longer required, adjust memAllocInternal()/memReallocInternal() to return raw memory and update dependent functions accordingly. All instances of memset() have been removed except in debug/test code where needed.
Add memNewPtrArray() to allocate an array of pointers and automatically set all pointers to NULL.
Rename memGrowRaw() to the more logical memResize().
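A sketch of the designated-initializer pattern described above; the example struct and constructor are hypothetical.

```c
#include <stdbool.h>
#include <stdlib.h>

typedef struct ExampleObject
{
    size_t size;
    bool open;
    void *handle;
} ExampleObject;

static ExampleObject *
exampleObjectNew(size_t size)
{
    /* Raw allocation -- no memset() required; error handling omitted for brevity */
    ExampleObject *this = malloc(sizeof(ExampleObject));

    /* Members not named here (open, handle) are still initialized to false/NULL by the designated
     * initializer; note that at least one member must be listed */
    *this = (ExampleObject)
    {
        .size = size,
    };

    return this;
}
```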
The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed base 10 instead of 16, which led to errors when the timeline was ≥ 0xA.
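An illustration of the difference; the segment name is made up and this is not the project's parsing code.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    /* The first 8 hex characters of a WAL segment name are the timeline (here 0xA = 10) */
    const char *walSegment = "0000000A000000010000002E";
    char timeline[9] = {0};
    memcpy(timeline, walSegment, 8);

    printf("base 10: %lu\n", strtoul(timeline, NULL, 10));     /* stops at 'A' -> 0 */
    printf("base 16: %lu\n", strtoul(timeline, NULL, 16));     /* 0x0000000A -> 10 */

    return 0;
}
```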
This macro block encapsulates the common pattern of switching to the prior (formerly called old) mem context to return results from a function.
Also rename MEM_CONTEXT_OLD() to memContextPrior(). This violates our convention of macros being in all caps but memContextPrior() will become a function very soon so this will reduce churn.
A few places were using just memContextNew(), probably because they did not immediately need to create anything in the new context, but it's better if we use the same pattern everywhere, even if it results in a few extra mem context switches.
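A sketch of the pattern the macro block wraps; the helper declarations below stand in for whatever the real mem context API provides and are not the project's actual functions.

```c
typedef struct MemContext MemContext;

/* Hypothetical stand-ins for the real context API */
MemContext *memContextPriorSketch(void);                        /* context active before the current one */
MemContext *memContextSwitchSketch(MemContext *to);             /* switch contexts, return the prior one */
char *strDupSketch(const char *string);                         /* duplicate a string into the current context */

static char *
exampleResult(const char *working)
{
    /* Switch to the prior (caller's) context so the result outlives this function's own context */
    MemContext *contextSaved = memContextSwitchSketch(memContextPriorSketch());

    char *result = strDupSketch(working);

    /* Switch back so later allocations land in this function's context again */
    memContextSwitchSketch(contextSaved);

    return result;
}
```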
Bug Fixes:
* Fix options being ignored by asynchronous commands. The asynchronous archive-get/archive-push processes were not loading options configured in command configuration sections, e.g. [global:archive-get]. (Reviewed by Cynthia Shang. Reported by Urs Kramer.)
* Fix handling of \ in filenames. \ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading. Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Features:
* pgBackRest is now pure C.
* Add pg-user option. Specifies the database user name when connecting to PostgreSQL. If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior. (Contributed by Mike Palmiotto.)
* Allow path-style URIs in S3 driver.
Improvements:
* The backup command is implemented entirely in C. (Reviewed by Cynthia Shang.)
The local, remote, archive-get-async, and archive-push-async commands were used to run functionality that was not directly available to the user. Unfortunately that meant they would not pick up options from the command that the user expected, e.g. backup, archive-get, etc.
Remove the internal commands and add roles which allow pgBackRest to determine what functionality is required without implementing special commands. This way the options are loaded from the expected command section.
Since remote is no longer a specific command with its own options, more manipulation is required when calling remote. This might be something we can improve in the config system but it may be worth leaving as is because it is a one-off, for now at least.
Although path-style URIs have been deprecated by AWS, they may still be used with products like Minio because no additional DNS configuration is required.
Path-style URIs must be explicitly enabled since it is not clear how they can be auto-detected reliably. More importantly, faulty detection could cause regressions in current installations.
Time is supported in all drivers with the update to S3 at 61538f93, so it is now possible to add time to the ls command and have it work on all repo types.
Parameter lists that are passed directly to exec*() do not need quoting when spaces are present. Worse, the quotes will not be stripped and the option value will be garbled.
Unfortunately this still does not fix all issues with quoting since we don't know how it might need to be escaped to work with SSH command configuration. The answer seems to be to pass the options in the protocol layer but that's beyond the scope of this commit.
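An illustration of why quoting is unnecessary (and harmful) for exec*() parameter lists; the option values are made up for the example.

```c
#include <unistd.h>

int
main(void)
{
    char *const paramList[] =
    {
        "pgbackrest",
        "--stanza=main",
        "--pg1-path=/var/lib/pgsql/data dir",                   /* space is passed through as-is, no quotes needed */
        /* "--pg1-path=\"/var/lib/pgsql/data dir\"" would deliver the quotes as part of the value */
        "backup",
        NULL
    };

    execvp(paramList[0], paramList);
    return 1;                                                   /* only reached if exec fails */
}
```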
This option was overloaded on the general type option but it makes sense to split this out since the meaning is pretty different.
Rename the values to conform to current standards, i.e. pg and repo, now that the Perl code won't care anymore.
Previously dates were not being filled by these functions which was fine since dates were not used.
We plan to use dates for the ls command plus it makes sense for the driver to be complete since it will be used as an example.
These are similar to what mktime() and strptime() do but they ignore the local system timezone which saves having to munge the TZ env variable to do time conversions.
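A sketch of converting date/time parts to an epoch without consulting the local timezone; the days-from-civil-date arithmetic is standard, while the function names are illustrative and may differ from the project's actual code.

```c
#include <stdint.h>
#include <time.h>

/* Days from 1970-01-01 for a proleptic Gregorian calendar date */
static int64_t
daysFromCivil(int year, int month, int day)
{
    year -= month <= 2;

    const int era = (year >= 0 ? year : year - 399) / 400;
    const unsigned yearOfEra = (unsigned)(year - era * 400);                                        /* [0, 399] */
    const unsigned dayOfYear = (unsigned)((153 * (month + (month > 2 ? -3 : 9)) + 2) / 5 + day - 1);
    const unsigned dayOfEra = yearOfEra * 365 + yearOfEra / 4 - yearOfEra / 100 + dayOfYear;

    return (int64_t)era * 146097 + (int64_t)dayOfEra - 719468;
}

/* Convert parts plus a timezone offset (in seconds) to epoch time -- no TZ env munging required */
static time_t
epochFromPartsSketch(int year, int month, int day, int hour, int minute, int second, int tzOffsetSecond)
{
    return (time_t)(daysFromCivil(year, month, day) * 86400 + hour * 3600 + minute * 60 + second - tzOffsetSecond);
}
```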
This macro was created before the String object existed so subsequent usage with String always included a lot of strPtr() wrapping.
TEST_RESULT_STR_Z() had already been introduced but a wholesale replacement of TEST_RESULT_STR() was not done since the priority was on the C migration.
Update all calls to (old) TEST_RESULT_STR() with one of the following variants: (new) TEST_RESULT_STR(), TEST_RESULT_STR_Z(), TEST_RESULT_Z(), TEST_RESULT_Z_STR().
Specifies the database user name when connecting to PostgreSQL.
If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior.
Most of these tests are just checking that errors are thrown when required. These are well covered in various unit tests.
The "cannot resume" tests are also well covered in the backup unit tests.
Finally, config warnings are well covered in the config unit tests.
There is more to be done here, but this accounts for the low-hanging fruit.
Set log-level-file=off when more than one test will run. In this case it is impossible to see the logs anyway since they will be automatically cleaned up after the test. This improves performance pretty dramatically since trace-level logging is expensive. If a single integration test is run then log-level-file is trace by default but can be changed with the --log-level-test-file option.
Reduce buffer-size to 64k to save memory during testing and allow more processes to run in parallel.
Update log replacement rules so that these options can change without affecting expect logs.
The co6 tests were occasionally running out of space so bump up the size of the ramdisk a bit to hopefully prevent this.
A longer term solution would be to disable the trace-level file logs when running on Travis CI since they seem to be using most of the space.
PostgreSQL >= 9.6 uses non-exclusive backup which has implicit stop-auto since the backup will stop when the connection is terminated.
The warning was made more verbose in 1f2ce45e but this now seems like a bad idea since there are likely users with mixed version environments where stop-auto is enabled globally. There's no reason to fill their logs with warnings over a harmless option. If anything we should warn when stop-auto is explicitly set to false but this doesn't seem very important either.
Revert to the prior behavior, which is to warn and reset when stop-auto is enabled on PostgreSQL < 9.3.
\ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading.
Use jsonFromStr() to properly quote and escape \.
Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.
Remove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.
Remove "c" option that allowed the remote to tell if it was being called from C or Perl.
Code to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.
Update build and installation instructions in the user guide.
Remove all Perl unit tests.
Remove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.
Rename perlReq to binReq in define.yaml to indicate that the binary is required for a test. This had been the actual meaning for quite some time but the key was never renamed.
For the most part this is a direct migration of the Perl code into C except as noted below.
A backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.
The logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. This has never been an issue in the field that we know of, but we found it in testing.
For online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.
Archive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.
Resume will now work even if hardlink settings have been changed.
Reviewed by Cynthia Shang.
/ takes precedence over & but the appropriate parens were not provided.
By some bad luck the tests worked either way, so add a new test that only works the correct way to prevent a regression.
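An illustration of the precedence trap; the values are made up.

```c
#include <stdio.h>

int
main(void)
{
    unsigned value = 0x12345;

    unsigned intended = (value & 0xFFFF) / 256;                 /* mask first, then divide: 0x2345 / 256 = 0x23 */
    unsigned accidental = value & 0xFFFF / 256;                 /* / binds tighter: value & 0xFF = 0x45 */

    printf("%#x %#x\n", intended, accidental);
    return 0;
}
```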
Bug Fixes:
* Fix archive-push/archive-get when PGDATA is symlinked. These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call. (Reported by Stephen Frost, Milosz Suchy.)
* Fix reference list when backup.info is reconstructed in expire command. Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual. This unlikely combination of events means this is probably not a problem in the field.
* Fix segfault on unexpected EOF in gzip decompression. (Reported by Stephen Frost.)
The TZ environment variable was not reliably pushed down to the test processes.
Instead pass TZ via a command line parameter and set explicitly in the test process.
Commit 7168e074 tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked.
If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call.
If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.
Change the existing assertion to an error to catch this condition in production gracefully.
82df7e6f and 9856fef5 updated tests that used test points in preparation for the feature not being available in the C code.
Since test points are no longer used remove the infrastructure.
Also remove one stray --test option in mock/all that was essentially a noop but no longer works now that the option has been removed.
Pq script errors are now printed in test output in case they are being masked by a later error.
Once a script error occurs, the same error will be thrown forever rather than throwing a new error on the next item in the script.
HRNPQ_MACRO_CLOSE() is not required in scripts unless harnessPqScriptStrictSet(true) is called. Most higher-level tests should not need to run in strict mode.
The command/check test seems to require strict mode but there's no apparent reason why it should. This would be a good thing to look into at some point.
Some log output (e.g. time) is hard to test because the values can change between tests.
Add expressions to replace substrings in the log with predictable values to simplify testing.
This is similar to the log replacement facility available for Perl expect log testing.
A recopy would occur if the size or checksum was invalid but on error the backup would terminate.
Instead, recopy the resumed file on any error. If the error is systemic (e.g. network failure) then it should show up again during the recopy.
These were not getting updated to match the directory name when the manifests were copied.
The Perl code didn't care but the C code expects labels to be set correctly.
For now this is only used in testing but there are places where it could be useful in the core code.
Even if that turns out not to be true, it doesn't seem worth implementing a new version in testing just to capture a few values that we already have.
This is to maintain compatibility with the older Perl code that returned the lowest sorted order item in a tie.
For other datatypes the C code returns the same value, often enough at least to not cause churn in the expect tests.
Adding a manifest to backup.info was migrated to C in 4e4d1f41 but deduplication of the references was missed leading to a reference for every file being added to backup.info.
Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual.
This unlikely combination of events means this is probably not a problem in the field.
Archive check does not run when in offline backup mode but the option was set to true in the manifest. It's harmless since these options are informational only but it could cause confusion when debugging.
Test points are not supported by the new C code so these will be replaced with unit tests.
The fact that the tests still pass even when the changes aren't made mid-backup (except application_name) shows how weak they were in the first place.
Even so, this does represent a regression in (soon to be removed) Perl coverage.
Test points will not be available in the C code so update these tests as best as possible without using them.
This represents a loss of coverage for the Perl code (soon to be removed) which will be made up in the C code with unit tests.
These tests require test points which are not being implemented in the C code.
This functionality is fully tested in the command/control unit tests so integration tests are no longer required.
This expression determines which files contain page checksums but it was also including the directory above the relation directories. In a real PostgreSQL installation this is not a problem because these directories don't contain any files.
However, our tests place a file in `base` which the Perl code thought should have page checksums while the new C code says no.
Update the expression to document the change and avoid churn in the expect logs later.
Installing lcov 1.14 everywhere turned out to be a problem just as using 1.13 on Ubuntu 19.04 was.
Since we primarily use Ubuntu 18.04 for coverage testing and reporting, we definitely want to make sure that works. So, revert to using the default packaged lcov except when specified otherwise in VmTest.pm.
PostgreSQL minor version releases are also included since all containers have been rebuilt.
The issue "fixed" in f01aa586 was caused by treating all strings as format strings while logging, which was fixed in 0c05df45.
Revert because there no longer seems a reason for the extra logic, and it was only partially applied, i.e. not to env vars, command-line options, or config options.
Using the same macros for formatted and unformatted logging had several disadvantages.
First, the compiler was unable to verify the format string against the parameters.
Second, legitimate % characters in messages were being interpreted as format characters with garbage output ensuing.
Add _FMT() variants and update all call sites to use the correct variant.
This character causes problems in C and in the shell if we try to output it in an error message.
Forbid it completely and spell it out in error messages to avoid strange effects.
There is likely a better way to deal with the issue but this will do for now.
The protocol timeout tests have been superseded by unit tests.
The TEST_BACKUP_RESUME test point was incorrectly included into a number of tests, probably a copy pasto. It didn't hurt anything but it did add 200ms to each test where it appeared.
Catalog and control version tests were redundant. The database version and system id tests covered the important code paths and the C code gets these values from a lookup table.
Finally, fix an incomplete update to the backup.info file while munging for tests.
It is occasionally useful to get information about a file outside of the base storage path. storageLocal() can be used in some cases but when the storage is remote it doesn't seem worth creating a separate storage object for ad hoc info requests.
storageInfo() is a read-only operation so this seems pretty safe. The noPathEnforce parameter will make auditing exceptions easy.
The ability to disable enforcement (i.e., the requested absolute path is within the storage path) globally will be removed after the Perl migration.
The feature will still be needed occasionally so allow it in an adhoc fashion.
A . in a link will always lead to an error since the destination will be inside PGDATA. However, it is accepted symlink syntax so it's better to resolve it and get the correct error message.
Also, we may have other uses for this function in the future.
Adding a dummy column which is always set by the P() macro allows a single macro to be used for parameters or no parameters without violating C's prohibition on the {} initializer.
-Wmissing-field-initializers remains disabled because it still gives wildly different results between versions of gcc.
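A sketch of the dummy-member trick; the struct and macro names are illustrative, not the project's actual parameter macros.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct ExampleParam
{
    bool dummy;                                                 /* Always set so the initializer is never empty */
    int timeout;
    const char *label;
} ExampleParam;

/* With no arguments this still expands to a valid initializer because .dummy is always present */
#define P(...) ((ExampleParam){.dummy = true, __VA_ARGS__})

static void
exampleCommand(ExampleParam param)
{
    printf("timeout=%d label=%s\n", param.timeout, param.label != NULL ? param.label : "(none)");
}

int
main(void)
{
    exampleCommand(P());                                        /* no parameters */
    exampleCommand(P(.timeout = 30, .label = "backup"));        /* with parameters */
    return 0;
}
```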
It wasn't practical for the main process to be ignorant of the remote path, and in any case knowing the path makes debugging easier.
Pull the remote path when connecting and pass the result of local storagePath() to the remote when making calls.
Pushing output through a JSON blob is not practical if the output is extremely large, e.g. a backup manifest with 100K+ files.
Add read/write routines so that output can be returned in chunks but errors will still be detected.
Previously, options were being filtered based on what was currently valid. For chained commands (e.g. backup then expire) some options may be valid for the first command but not the second.
Filter based on the command definition rather than what is currently valid to avoid logging options that are not valid for subsequent commands. This reduces the number of options logged and will hopefully help avoid confusion and expect log churn.
Bug Fixes:
* Fix remote timeout in delta restore. When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout. (Reported by James Sewell, Jens Wilke.)
* Fix handling of repeated HTTP headers. When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior. (Reported by donicrosby.)
Improvements:
* JSON output from the info command is no longer pretty-printed. Monitoring systems can more easily ingest the JSON without linefeeds. External tools such as jq can be used to pretty-print if desired. (Contributed by Cynthia Shang.)
* The check command is implemented entirely in C. (Contributed by Cynthia Shang.)
Documentation Improvements:
* Document how to contribute to pgBackRest. (Contributed by Cynthia Shang.)
* Document maximum version for auto-stop option. (Contributed by Brad Nicholson.)
Test Suite Improvements:
* Fix container test path being used when --vm=none. (Suggested by Stephen Frost.)
* Fix mismatched timezone in expect test. (Suggested by Stephen Frost.)
* Don't autogenerate embedded libc code by default. (Suggested by Stephen Frost.)
When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior.
Reported by donicrosby.
We had some problems with newer versions so had held off on updating. Those problems appear to have been resolved.
In addition, the --compat flag is no longer required. Prior versions of MinIO required all parts of a multi-part upload (except the last) to be of equal size. The --compat flag was introduced to restore the default S3 behavior. Now --compat is only required when ETag is being used for MD5 verification, which we don't do.
Previously we were using int64_t to debug time_t but this may not be right depending on how the compiler represents time_t, e.g. it could be a float.
Since a mismatch would have caused a compiler error we are not worried that this has actually happened, and anyway the worst case is that the debug log would be wonky.
The primary benefit, aside from correctness, is that it makes choosing a parameter debug type for time_t obvious.
Previously the mock integration tests would be skipped for VMs other than the standard four used in CI. Now VMs outside the standard four will run the same tests as VM4 (currently U18).
Using pg1-path, as we were doing previously, could lead to WAL being copied to/from unexpected places. PostgreSQL sets the current working directory to PGDATA so we can use that to resolve relative paths.
This makes configuring tests easier.
Also add a parameter for tests that require sudo. This should be retired at some point but some tests still require it.
This will likely improve performance, but it also makes the filesystem consistent between platforms.
A number of tests were failing on shiftfs, which was the default for arm64 on Travis.
1.13 is not compatible with gcc 8 which is what ships with newer distributions. Build from source to get a more recent version.
1.13 is not compatible with gcc 9 so we'll need to address that at a later date.
This user was created before we tested in containers to ensure isolation between the pg and repo hosts which were then just directories. The downside is that this resulted in a lot of sudos to set the pgbackrest user and to remove files which did not belong to the main test user.
Containers provide isolation without needing separate users so we can now safely remove the pgbackrest user. This allows us to remove most sudos, except where they are explicitly needed in tests.
While we're at it, remove the code that installed the Perl C library (which also required sudo) and simply add the build path to @INC instead.
This test was not creating recovery.signal when testing with --type=preserve. The preserve recovery type only keeps existing files and does not create any.
RC1 was just ignoring recovery.signal and going right into recovery. Weirdly, 12.0 used restore_command to do crash recovery which made the problem harder to diagnose, but this has now been fixed in PostgreSQL and should be released in 12.1.
This is only needed when new code is added to the Perl C library, which is becoming rare as the migration progresses.
Also, the code will vary slightly based on the Perl version used for generation so for normal users it is just noise.
Suggested by Stephen Frost.
A number of tests have been updated and Fedora 30 has been added to the test suite so the unit tests can run on gcc 9.
Stop running unit tests on co6/7 since we appear to have ample unit test coverage.
This tool was only being used in a few places but was a pretty large dependency.
Rework the forceStorageMove() code using our storage layer and replace one aws cli cp with a storage put.
Also, remove the Dockerfile that was once used to build the Scality S3 test container.
Now that our tests are more diversified it makes sense to load only the packages that are needed for each test.
Move the package loads from .travis.yml to test/travis.pl where we have more control over what is loaded.
Note that building the manifest on each host has been temporarily removed.
This feature will likely be brought back as a non-default option (after the manifest code has been fully migrated to C) since it can be fairly expensive.
Check the backup.info file against the backup path. Add any backups that are missing and remove any backups that no longer exist.
It's important to run this before backup or expire to be sure we are using the most up-to-date list of backups.
Three major changes were required to get this working:
1) Provide the path to pgbackrest in the build directory when running outside a container. Tests in a container will continue to install and run against /usr/bin/pgbackrest.
2) Set a per-test lock path so tests don't conflict on the default /tmp/pgbackrest path. Also set a per-test log-path while we are at it.
3) Use localhost instead of a custom host for TLS test connections. Tests in containers will continue to update /etc/hosts and use the custom host.
Add infrastructure and update harnessCfgLoad*() to get the correct exe and paths loaded for testing.
Since new tests are required to verify that running outside a container works, also rework the tests in Travis CI to provide coverage within a reasonable amount of time. Mainly, break up to doc tests by VM and run an abbreviated unit test suite on co6 and co7.
Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.
A comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.
recovery.signal and standby.signal are automatically created based on the recovery settings.
The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.
This information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release.
Scaling allows the starting values to be increased from the command-line without code changes.
Also suppress valgrind and assertions when running performance testing. Optimization is left at -O0 because we should not be depending on compiler optimizations to make our code performant, and it makes profiling more informative.
bsearch() is far more efficient than an iterative approach except in the most trivial cases.
For now insert will reset the sort order to none and the list will need to be resorted before bsearch() can be used. This is necessary because item pointers are not stable after a sort, i.e. they can move around. Until lists are stable it's not a good idea to surprise the caller by mixing up their pointers on insert.
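To make the tradeoff concrete, here is a minimal standalone sketch of the sort-then-bsearch() pattern using the standard C library, where one comparator serves both qsort() and bsearch(). This shows the general idea, not the project's List API:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// One comparator serves both sorting and searching
static int
comparatorStr(const void *item1, const void *item2)
{
    return strcmp(*(const char *const *)item1, *(const char *const *)item2);
}

int
main(void)
{
    const char *list[] = {"backup", "archive", "restore", "info"};
    size_t size = sizeof(list) / sizeof(list[0]);

    // The list must be sorted before bsearch() can be used -- after an insert
    // the sort order is lost and the list has to be resorted
    qsort(list, size, sizeof(list[0]), comparatorStr);

    const char *key = "restore";
    const char **found = bsearch(&key, list, size, sizeof(list[0]), comparatorStr);

    printf("found: %s\n", found != NULL ? *found : "(none)");
    return 0;
}
```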
PostgreSQL 12 will shutdown in these cases which seems to be the correct action (according to the documentation) when hot_standby = off, but older versions are promoting instead. Set target_action explicitly so all versions will behave the same way.
This does beg the question of whether the PostgreSQL 12 behavior is wrong (though it matches the docs) or the previous versions are.
Separate the generation of recovery values and formatting them into recovery.conf format. This is generally a good idea, but also makes the code ready to deal with a different recovery file in PostgreSQL 12.
Also move the recovery file logic out of cmdRestore() into restoreRecoveryWrite().
This restore type automatically adds standby_mode=on to recovery.conf.
This could be accomplished previously by setting --recovery-option=standby_mode=on but PostgreSQL 12 requires standby mode to be enabled by a special file named standby.signal.
The new restore type allows us to maintain a common interface between PostgreSQL versions.
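For example, restoring a standby might look something like this (the stanza name is hypothetical):

```
pgbackrest --stanza=demo --type=standby restore
```

On PostgreSQL versions before 12 this writes standby_mode = 'on' into recovery.conf, while on PostgreSQL 12 it creates standby.signal, so the command line stays the same across versions.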
For the most part this is a direct migration of the Perl code into C.
There is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.
The C code works like this:
If a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. In that case the file ownership will need to be updated by a privileged user before the restore can be retried.
If a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name.
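In rough pseudo-C, the decision looks like this (the function and parameter names are illustrative, not the actual implementation):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

// Sketch of the ownership decision made for each restored file (illustrative only)
static const char *
restoreOwnerSketch(
    bool runAsRoot, bool manifestUserValid, const char *manifestUser, const char *dataDirUser)
{
    // Non-root restore: all files belong to the user/group executing pgBackRest
    if (!runAsRoot)
        return NULL;                                // NULL means "leave as the executing user"

    // Root restore: prefer the user/group recorded in the manifest
    if (manifestUserValid)
        return manifestUser;

    // Fall back to the data directory owner, then finally root
    if (dataDirUser != NULL)
        return dataDirUser;

    return "root";
}

int
main(void)
{
    // Root restore where the manifest user does not exist on the restore host
    printf("owner: %s\n", restoreOwnerSketch(true, false, "postgres", "pg_data_owner"));
    return 0;
}
```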
Reviewed by Cynthia Shang.
This macro displays a title for each test. A test frequently has multiple parts and it was hard to tell which subparts went together; previously this was done with ad hoc indentation.
Anything that is not a title is automatically indented, so manual indenting is no longer needed. This should make the tests and the test output easier to read.
These macros encapsulate the functionality provided by direct calls to harnessLogResult() and system(). They both have _FMT() variants.
The primary advantage is that {[path]}, {[user]}, and {[group]} will be replaced with the test path, user, and group respectively. This saves a lot of strNewFmt() calls and makes the tests less noisy.
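A minimal sketch of the token replacement idea behind {[path]}, {[user]}, and {[group]} (the helper and fixed-size buffers are illustrative only, not how the harness macros are implemented):

```c
#include <stdio.h>
#include <string.h>

// Replace every occurrence of a token such as {[path]} with its value (sketch only)
static void
testTokenReplace(char *buffer, size_t bufferSize, const char *token, const char *value)
{
    char result[1024] = "";
    const char *current = buffer;
    const char *found;

    // Copy the input, substituting the value for each occurrence of the token
    while ((found = strstr(current, token)) != NULL)
    {
        strncat(result, current, (size_t)(found - current));
        strncat(result, value, sizeof(result) - strlen(result) - 1);
        current = found + strlen(token);
    }

    strncat(result, current, sizeof(result) - strlen(result) - 1);
    snprintf(buffer, bufferSize, "%s", result);
}

int
main(void)
{
    char command[1024] = "mkdir -p {[path]}/repo && chown {[user]}:{[group]} {[path]}/repo";

    testTokenReplace(command, sizeof(command), "{[path]}", "/home/vagrant/test");
    testTokenReplace(command, sizeof(command), "{[user]}", "vagrant");
    testTokenReplace(command, sizeof(command), "{[group]}", "vagrant");

    // The expanded command would then be passed to system() by the macro
    printf("%s\n", command);
    return 0;
}
```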
The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes, timestamps, etc. A list of databases is also included for selective restore.
The purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that nothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.
For now, migrate enough functionality to implement the restore command.
Reviewed by Cynthia Shang.
cfgExecParam() was originally written to provide options for remote processes. Remote processes do not have access to the local config so it was necessary to pass every non-default option.
Local processes on the other hand, e.g. archive-get, archive-get-async, archive-push-async, and local, do have access to the local config and therefore don't need every parameter to be passed on the command-line. The previous way was not wrong, but it was overly verbose and did not align with the way Perl had worked.
Update cfgExecParam() to accept a local option that excludes from the command line any options that can be read from local configs.
strPathAbsolute() generates an absolute path from an absolute base path and an absolute/relative path.
strLstRemoveIdx() is a support function based on lstRemoveIdx().
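A rough standalone sketch of the path resolution described above (this is only an illustration of the behavior, not the project's implementation):

```c
#include <stdio.h>
#include <string.h>

// Build an absolute path from an absolute base and an absolute/relative path
// (sketch only -- handles ".." components but not "." or repeated separators)
static void
pathAbsoluteSketch(char *result, size_t resultSize, const char *base, const char *path)
{
    if (path[0] == '/')                                     // already absolute
    {
        snprintf(result, resultSize, "%s", path);
        return;
    }

    char buffer[1024];
    snprintf(buffer, sizeof(buffer), "%s", base);

    char pathCopy[1024];
    snprintf(pathCopy, sizeof(pathCopy), "%s", path);

    for (char *part = strtok(pathCopy, "/"); part != NULL; part = strtok(NULL, "/"))
    {
        if (strcmp(part, "..") == 0)                        // strip one component from the base
        {
            char *slash = strrchr(buffer, '/');

            if (slash != NULL)
                *slash = '\0';
        }
        else                                                // append the component
        {
            strncat(buffer, "/", sizeof(buffer) - strlen(buffer) - 1);
            strncat(buffer, part, sizeof(buffer) - strlen(buffer) - 1);
        }
    }

    snprintf(result, resultSize, "%s", buffer);
}

int
main(void)
{
    char result[1024];

    pathAbsoluteSketch(result, sizeof(result), "/var/lib/pgsql/12/data", "../wal");
    printf("%s\n", result);                                 // /var/lib/pgsql/12/wal
    return 0;
}
```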
In general we don't care about path and link times since they are easily recreated when restoring.
So, outside of storageInfo() we don't need to bother testing them.
Loading jobs in advance uses a lot of memory in the case that there are millions of jobs to be performed. We haven't seen this yet, but with backup and restore on the horizon it will become the norm.
Instead, use a callback so that jobs are only created as they are needed and can be freed as soon as they are completed.
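A minimal sketch of the callback pattern (the types and names are illustrative, not the actual ProtocolParallel API):

```c
#include <stdbool.h>
#include <stdio.h>

// Callback that produces the next job on demand, or false when there are no more
typedef bool (*ParallelJobCallback)(void *data, unsigned int clientIdx, int *job);

typedef struct
{
    int jobNext;
    int jobTotal;
} JobState;

// Generate jobs lazily instead of loading millions of them up front
static bool
jobCallback(void *data, unsigned int clientIdx, int *job)
{
    (void)clientIdx;
    JobState *state = data;

    if (state->jobNext >= state->jobTotal)
        return false;                       // no more jobs -- the process set is done

    *job = state->jobNext++;
    return true;
}

int
main(void)
{
    JobState state = {.jobNext = 0, .jobTotal = 3};
    int job;

    // The parallel executor would call the callback whenever a worker is idle
    while (jobCallback(&state, 0, &job))
        printf("dispatch job %d\n", job);

    return 0;
}
```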
These features finally make the ls command practical.
Currently the JSON contains only name, type, and size. We may add more fields in the future, but these seem like the minimum needed to be useful.
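For illustration only (the layout here is a guess at the shape of the output, not the exact format):

```
{
  "archive": {"type": "path"},
  "backup.info": {"type": "file", "size": 1829}
}
```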
This warning gives very unpredictable results between compiler versions and seems unrealistic since most of our structs are zeroed for initialization.
This warning has been disabled in the Makefile for a long time.
Broken vendor packages have been causing builds to break due to an error on apt-get update.
Ignore errors and proceed directly to apt-get install. It's possible that we'll try to reference an expired package version and get an error anyway, but that seems better than a guaranteed hard error.
Push the responsibility for sort and find down to the List object by introducing a general comparator function that can be used for both sorting and finding.
Update insert and add functions to return the item added rather than the list. This is more useful in the core code, though numerous updates to the tests were required.
Update StorageWritePosix to use the new functions.
A side effect is that storageWritePosixOpen() will no longer error when the user/group name does not exist. It will simply retain the original user/group, i.e. the user that executed the restore.
In general this is a feature since completing a restore is more important than setting permissions exactly from the source host. However, some notification of this omission to the user would be beneficial.
Travis will timeout after 10 minutes with no output. Emit a warning every 5 minutes to keep Travis alive and increase the total timeout to 20 minutes.
Documentation builds have been timing out a lot recently so hopefully this will help.
The control and catalog versions were stored in a variety of places in the optimistic hope that they would be useful. In fact they never were.
We can't remove them from the backup.info and backup.manifest files due to backwards compatibility concerns, but we can at least avoid loading and storing them in C structures.
Add functions to the PostgreSQL interface which will return the control and catalog versions for any supported version of PostgreSQL to allow backwards compatibility for backup.info and backup.manifest. These functions will be useful in other ways, e.g. generating the tablespace identifier in PostgreSQL >= 9.0.
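A minimal sketch of the lookup style such interface functions can use (the function name is illustrative and only the PostgreSQL 11/12 control versions are shown):

```c
#include <stdint.h>
#include <stdio.h>

// Return the pg_control version for a supported PostgreSQL version (sketch only)
static uint32_t
pgControlVersionSketch(unsigned int pgVersion)
{
    switch (pgVersion)
    {
        case 110000:                                // PostgreSQL 11
            return 1100;

        case 120000:                                // PostgreSQL 12
            return 1201;

        default:
            return 0;                               // unsupported in this sketch
    }
}

int
main(void)
{
    printf("PostgreSQL 12 control version: %u\n", (unsigned int)pgControlVersionSketch(120000));
    return 0;
}
```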
Info files required three copies in memory to be loaded (the original string, an ini representation, and the final info object). Not only was this memory-inefficient, but the Ini object does sequential scans when searching for keys, making large files very slow to load.
This has not been an issue since archive.info and backup.info are very small, but it becomes a big deal when loading manifests with hundreds of thousands of files.
Instead of holding copies of the data in memory, use a callback to deliver the ini data directly to the object when loading. Use a similar method for save to avoid having an intermediate copy. Save is a bit complex because sections/keys must be written in alpha order or older versions of pgBackRest will not calculate the correct checksum.
Also move the load retry logic to helper functions rather than embedding it in the Info object. This allows for more flexibility in loading and ensures that stack traces will be available when developing unit tests.
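A sketch of the callback-driven load, where each section/key/value is delivered straight to a handler instead of building an intermediate copy (the type and function names are illustrative, not the actual Ini API):

```c
#include <stdio.h>
#include <string.h>

// Callback type: receives each section/key/value as it is parsed
typedef void (*IniLoadCallback)(
    void *data, const char *section, const char *key, const char *value);

// Parse ini-style text line by line, delivering each key directly to the
// callback so no intermediate copy of the whole file is held in memory
static void
iniLoadSketch(char *content, IniLoadCallback callback, void *data)
{
    char section[128] = "";

    for (char *line = strtok(content, "\n"); line != NULL; line = strtok(NULL, "\n"))
    {
        if (line[0] == '[')                                 // section header, e.g. [backup]
        {
            snprintf(section, sizeof(section), "%.*s", (int)(strlen(line) - 2), line + 1);
        }
        else
        {
            char *equal = strchr(line, '=');                // key=value pair

            if (equal != NULL)
            {
                *equal = '\0';
                callback(data, section, line, equal + 1);
            }
        }
    }
}

// Example callback that just prints what it receives
static void
printCallback(void *data, const char *section, const char *key, const char *value)
{
    (void)data;
    printf("[%s] %s = %s\n", section, key, value);
}

int
main(void)
{
    char ini[] = "[backup]\nlabel=20190818-084502F\nsize=1024\n";

    iniLoadSketch(ini, printCallback, NULL);
    return 0;
}
```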
Reviewed by Cynthia Shang.
The manifest is not an info file so if anything it should be called backupManifest. But that seems too long for such a commonly used object so manifest seems better.
Note that unlike Perl there is no storage manifest method so this stands as the only manifest in the C code, as befits its importance.
Bug Fixes:
* Improve slow manifest build for very large quantities of tables/segments. (Reported by Jens Wilke.)
* Fix exclusions for special files. (Reported by CluelessTechnologist, Janis Puris, Rachid Broum.)
Improvements:
* The stanza-create/update/delete commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* The start/stop commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* Create log directories/files with 0750/0640 mode. (Suggested by Damiano Albani.)
Documentation Bug Fixes:
* Fix yum.p.o package being installed when custom package specified. (Reported by Joe Ayers, John Harvey.)
Documentation Improvements:
* Build pgBackRest as an unprivileged user. (Suggested by Laurenz Albe.)
This test is commonly used for sanity checking but the combination of S3 and encryption makes it hard to use and encourages temporary changes to make it usable.
Acknowledge this and disable S3 and encryption for this test and move them to mock/all/2.