This aligns better with general PostgreSQL usage and our own documentation (updated in 4bcef702).
Usage in the backup.manifest tests has not been updated since it might break the file format.
Expressions only worked at the first level of recursion because the expression was also being applied to paths, so a path had to match the filter before it would be recursed into.
This is not considered a bug since it does not affect any existing code paths, but it is required for the general-purpose repo-ls command.
A truncated HTTP response status could lead to an unfriendly error message; the request would be retried, but the message could be confusing if the error was persistent and required debugging.
Improve the error handling overall to catch more error cases explicitly and respond better to edge cases.
Also update the terminology in comments to align with the RFC. Variable and function names were not changed because a refactor is intended for HTTP response and it doesn't seem worth the additional code churn.
Bug Fixes:
* Fix issue checking if file links are contained in path links. (Reviewed by Cynthia Shang. Reported by Christophe Cavallié.)
* Allow pg-path1 to be optional for synchronous archive-push. (Reviewed by Cynthia Shang. Reported by Jerome Peng.)
* The expire command now checks if a stop file is present. (Fixed by Cynthia Shang. Reviewed by David Steele.)
* Handle missing reason phrase in HTTP response. (Reviewed by Cynthia Shang. Reported by Tenuun.)
* Increase buffer size for lz4 compression flush. (Reviewed by Cynthia Shang. Reported by Eric Radman.)
* Ignore pg-host* and repo-host* options for the remote command. (Reviewed by Cynthia Shang. Reported by Pavel Suderevsky.)
* Fix possibly missing pg1-* options for the remote command. (Reviewed by Cynthia Shang. Reported by Andrew L'Ecuyer.)
Features:
* Time-based retention for full backups. The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days. (Contributed by Cynthia Shang, Pierre Ducroquet. Reviewed by David Steele.)
* Ad hoc backup expiration. Allow the user to remove a specified backup regardless of retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Zstandard compression support. Note that setting compress-type=zst will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* bzip2 compression support. Note that setting compress-type=bz2 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Contributed by Stephen Frost. Reviewed by David Steele, Cynthia Shang.)
* Add backup/expire running status to the info command. (Contributed by Stefan Fercot. Reviewed by David Steele.)
Improvements:
* Expire WAL archive only when repo-retention-archive threshold is met. WAL prior to the first full backup was previously expired after the first full backup. Now it is preserved according to retention settings. (Contributed by Cynthia Shang. Reviewed by David Steele.)
* Add local MD5 implementation so S3 works when FIPS is enabled. (Reviewed by Cynthia Shang, Stephen Frost. Suggested by Brian Almeida, John Kelley.)
* PostgreSQL 13 beta1 support. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace. (Reviewed by Cynthia Shang.)
* Reduce buffer-size default to 1MiB. (Reviewed by Stephen Frost.)
* Throw user-friendly error if expire is not run on repository host. (Contributed by Cynthia Shang. Reviewed by David Steele.)
The purpose of the remote command is to get access to local resources, so a remote should never start another remote. However, this could happen if there were host settings on the remote host, which ended badly with lock errors, loops, etc.
Add pg-local and repo-local options to indicate that the resource is local even if there are host settings.
Note that for the time being these options are internal and not intended for general usage. However, this is likely the direction needed to allow for more symmetric and manageable configurations.
Some pg1-* options are required by the remote so if they are not provided in the remote's configuration file then it may cause a configuration error, depending on the operation. This currently only applies to the pg1-path option.
This is still an issue for repo-* options but the same solution cannot be applied because some repo-* options are secure and cannot be passed on the command-line.
There don't appear to be any behavioral changes since PostgreSQL 12 and all the tests pass.
Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but pgBackRest will be updated with each release to keep pace.
S3 requires the Content-MD5 header for many requests but MD5 is not available via OpenSSL when FIPS is enabled because it is considered to be insecure.
Even though our usage does not present any security risks, a local MD5 implementation is required to circumvent the over-broad FIPS restriction.
Vendorize the MD5 implementation found at https://openwall.info/wiki/people/solar/software/public-domain-source-code/md5 and add full coverage for the module in the common/crypto unit tests.
The prior default was determined by benchmarking the Perl code prior to the 1.0 release. In general buffer allocation was more expensive in Perl so large buffers gave the best performance. This was due to multiple buffer allocations for each filter in an IO operation.
The C code allocates fixed buffers for each IO operation so the cost for buffer allocation is lower than Perl. That being the case it made sense to benchmark the C code to determine the optimal buffer default.
The performance/storage tests were used to measure the performance of a variety of filters. 1GiB of data was processed by each filter 10 times and the results of the tests were averaged.
While most buffer sizes gave similar performance, 1MiB appeared to perform the best overall. Of course, different architectures are likely to yield different results but this seems like a sensible default. The buffer-size option may still need to be manually configured to give optimal results.
Raw test data for reference:
4MB buffer (prior default)
copy time 1807ms, avg time 180ms, avg throughput: 5942MB/s
md5 time 14200ms, avg time 1420ms, avg throughput: 756MB/s
sha1 time 11431ms, avg time 1143ms, avg throughput: 939MB/s
sha256 time 23463ms, avg time 2346ms, avg throughput: 457MB/s
gzip -6 time 381199ms, avg time 38119ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
1MB buffer (new default)
copy time 1760ms, avg time 176ms, avg throughput: 6100MB/s
md5 time 13739ms, avg time 1373ms, avg throughput: 781MB/s
sha1 time 11025ms, avg time 1102ms, avg throughput: 973MB/s
sha256 time 22539ms, avg time 2253ms, avg throughput: 476MB/s
gzip -6 time 372995ms, avg time 37299ms, avg throughput: 28MB/s
lz4 -1 time 15118ms, avg time 1511ms, avg throughput: 710MB/s
512K buffer
copy time 1782ms, avg time 178ms, avg throughput: 6025MB/s
md5 time 13724ms, avg time 1372ms, avg throughput: 782MB/s
sha1 time 10959ms, avg time 1095ms, avg throughput: 979MB/s
sha256 time 22982ms, avg time 2298ms, avg throughput: 467MB/s
gzip -6 time 378120ms, avg time 37812ms, avg throughput: 28MB/s
lz4 -1 time 15484ms, avg time 1548ms, avg throughput: 693MB/s
256K buffer
copy time 1805ms, avg time 180ms, avg throughput: 5948MB/s
md5 time 13706ms, avg time 1370ms, avg throughput: 783MB/s
sha1 time 11074ms, avg time 1107ms, avg throughput: 969MB/s
sha256 time 22588ms, avg time 2258ms, avg throughput: 475MB/s
gzip -6 time 372645ms, avg time 37264ms, avg throughput: 28MB/s
lz4 -1 time 16346ms, avg time 1634ms, avg throughput: 656MB/s
Reason phrases (e.g. OK) are optional in HTTP 1.1 but the space after the status code is not. When the reason phrase was missing the required space was trimmed along with the trailing CR leading to a format error.
Rework the logic to preserve the space and allow empty reason phrases.
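As a rough illustration (this is not the actual pgBackRest parser), the status line can be split on the first two spaces so that a response with an empty reason phrase parses the same way as one with a reason phrase:

    /* Minimal sketch, assuming the trailing CRLF has already been stripped. Both
       "HTTP/1.1 200 OK" and "HTTP/1.1 200 " (empty reason phrase) parse successfully,
       while a line missing the required space after the status code is rejected. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static bool httpStatusLineParse(const char *line, int *status, const char **reason)
    {
        const char *space1 = strchr(line, ' ');             // space after the HTTP version

        if (space1 == NULL)
            return false;

        char *space2 = NULL;
        long code = strtol(space1 + 1, &space2, 10);

        if (space2 == space1 + 1 || *space2 != ' ')         // space after the status is required
            return false;

        *status = (int)code;
        *reason = space2 + 1;                               // may legitimately be empty
        return true;
    }

    int main(void)
    {
        int status;
        const char *reason;

        if (httpStatusLineParse("HTTP/1.1 200 ", &status, &reason))
            printf("status %d reason '%s'\n", status, reason);   // status 200 reason ''

        return 0;
    }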
Found while testing against the Backblaze S3-compatible API.
Some lz4 versions between r131 and 1.7.5 did not return a sufficient buffer size from LZ4F_compressBound() to allow LZ4F_compressEnd() to complete reliably. While this issue was fixed in lz4 1.7.5 there are affected versions in supported distributions such as CentOS/RHEL 7.
Use one of the hacks suggested in https://github.com/lz4/lz4/issues/290 to increase the buffer size enough for LZ4F_compressEnd() to complete. This means that a slightly larger buffer size is required for all versions but it seems worth it to (hopefully) fix the issue in all lz4 versions.
Update error types thrown by bzip2 to be more consistent with gzip.
Update the bzip2 and gzip error default to be AssertError, as that's the more common case in both, and add a 'break;' to the default clause -- we don't intend to fall through those case statements, and even though the default clause is last we should be explicit about that.
Clean up some tabs that snuck in, rename a variable to be more clear, and add some comments.
The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days.
The new option will default to 'count' and therefore will not affect current installations. Setting repo-retention-full-type to 'time' allows the user to specify full backup retention as a time period, in days. Using this method, a full backup can be expired only if the time the backup completed is older than the number of days set with repo-retention-full (calculated from the moment the 'expire' command is run) and at least one full backup meets the retention period. If archive retention has not been configured, then the default settings will expire archives that are prior to the oldest retained full backup.
For example, if there are three full backups ending in times that are 25 days old (F1), 20 days old (F2) and 10 days old (F3), and the full retention period is 15 days, then only F1 will be expired; F2 will be retained because expiring it would leave only F3, which is not at least 15 days old.
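For example, a configuration like the following (values are illustrative; the repo1- prefix assumes repository index 1, as used in configuration files) would expire a full backup only once it is more than 30 days old and at least one more recent full backup is also at least 30 days old:

    [global]
    repo1-retention-full-type=time
    repo1-retention-full=30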
Newer versions of sudo output this message to stdout when run in a container:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
See https://github.com/sudo-project/sudo/issues/42 for details.
A simple workaround is to prevent sudo from disabling core dumps. This seems safe enough because if sudo is segfaulting then core files are the least of our worries.
bzip2 is a widely available, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), while being around twice as fast at compression and six times faster at decompression.
bzip2 is currently available on all supported platforms.
Zstandard is a fast lossless compression algorithm targeting real-time compression scenarios at zlib-level and better compression ratios. It's backed by a very fast entropy stage, provided by the Huff0 and FSE libraries.
Zstandard version >= 1.0 is required, which is generally only available on newer distributions.
The previous location was too late to allow --var=s3-all=y to work with --require=/repo-host, which depends on /quickstart/configure-archiving.
Since the section is not included in production documentation, the position is not very important to flow so just move it to where it works.
If the WAL path is absolute then pg1-path should be optional but in fact it was required to load pg_control.
Skip the pg_control check when pg1-path is not specified. The check against the stanza version/system-id remains to protect the repo from corruption.
There is no conflict if the path containing a file link is a parent path of a path link. The Perl code apparently had this right but the migration to C missed it.
Exclude this case when checking for link conflicts.
There have been a number of segfaults reported because a string option expected to be non-null was actually null. This is generally due to options that are expected to be set but are in fact optional.
Protect against this by creating cfgOptionStrNull() to get options that can be null, while changing cfgOptionStr() to always expect non-null. There are relatively few places where nulls are expected.
There is definitely a chance for breakage here as null options might currently be working in the field but will be caught by this new check. Hopefully introducing the check early in the release cycle will allow us to catch any issues.
Previously when retention-archive was set (either by the user or by default), archives prior to the archive-start of the oldest remaining full backup (after backup expiration occurred) would be expired even though the retention-archive threshold had not been met. For example, if there was one full backup remaining after backup expiration, retention-archive was set to 2, and retention-archive-type=full, then archives prior to the archive-start of the remaining full backup would still be removed even though retention-archive required two full backups to remain before archives could be expired.
The thought was to keep the archive directory clean: since the full backup did not require prior archives, it seemed safe to delete them. However, this has caused problems for some users in the past (because they needed the WAL for other purposes), and with the new ad hoc and time-based retention features it was decided that the archives should remain until the threshold was met. The archives will eventually be removed, and if retaining them causes space issues, the retention-archive setting can always be adjusted and the expire command run.
The specified backup set (i.e. the backup label provided and all of its dependent backups, if any) will be expired regardless of backup retention rules except that at least one full backup must remain in the repository.
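For example (the stanza name and backup label are illustrative), a specific backup set could be removed with:

    pgbackrest --stanza=demo expire --set=20200526-123000F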
This is implemented by checking for a backup lock on the host where info is running so there are a few limitations:
* It is not currently possible to know which command is running: backup, expire, or stanza-*. The stanza commands are very unlikely to be running so it's pretty safe to guess backup/expire. Command information may be added to the lock file to improve the accuracy of the reported command.
* If the info command is run on a host that is not participating in the backup, e.g. a standby, then there will be no backup lock. This seems like a minor limitation since running info on the repo or primary host is preferred.
Bug Fixes:
* Remove empty subexpression from manifest regular expression. MacOS was not happy about this though other platforms seemed to work fine. (Fixed by David Raftis.)
Improvements:
* Non-blocking TLS implementation. (Reviewed by Slava Moudry, Cynthia Shang, Stephen Frost.)
* Only limit backup copy size for WAL-logged files. The prior behavior could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup. (Reviewed by Cynthia Shang.)
* TCP keep-alive options are configurable. (Suggested by Marc Cousin.)
* Add io-timeout option.
Timeout used for connections and read/write operations.
Note that the entire read/write operation does not need to complete within this timeout but some progress must be made, even if it is only a single byte.
The prior blocking implementation seemed to be prone to locking up on some (especially recent) kernel versions. Since we were unable to reproduce the issue in a development environment we can only speculate as to the cause, but there is a good chance that blocking sockets were the issue or contributed to the issue.
So move to a non-blocking implementation to hopefully clear up these issues. Testing in production environments that were prone to locking shows that the approach is promising and at the very least not a regression.
The main differences from the blocking version are the non-blocking connect() implementation and handling of WANT_READ/WANT_WRITE retries for all SSL*() functions.
Timeouts in the tests needed to be increased because socket connect() and TLS SSL_connect() were not included in the timeout before. The tests don't run any slower, though. In fact, all platforms but Ubuntu 12.04 worked fine with the shorter timeouts.
select() is a bit old-fashioned and cumbersome to use. Since the select() code needed to be modified to handle write ready this seems like a good time to upgrade to poll().
poll() has been around for a long time so there doesn't seem to be any need to provide a fallback to select().
Also change the error on timeout from FileReadError to ProtocolError. This works better for read vs. write and failure to poll() is indicative of a protocol error or unexpected EOF.
The prior behavior introduced in dcddf3a5 could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup since they are copied via tmp files and could change size during the backup.
In general it seems safer to limit this feature to WAL-logged files which will be reconstructed during recovery.
A session looks much the same whether it is initiated from the client or the server, so use the session objects to implement the TLS, HTTP, and S3 test servers.
For TLS, at least, there are some differences between client and server sessions so add a client/server type to SocketSession to determine how the session was initiated.
Aside from reducing code duplication, the main advantage is that the test server will now time out rather than hanging indefinitely when less input than expected is received.
Previously an error was only thrown when errno was set but in practice this is usually not the case. This may have something to do with getting errno late but attempts to get it earlier have not been successful. It appears that errno usually gets cleared and spot research seems to indicate that other users have similar issues.
An error at this point indicates unexpected EOF so it seems better to just throw an error all the time and be consistent.
To test this properly our test server needs to call SSL_shutdown() except when the client expects this error.
This abstraction allows the session code to be shared between the TLS client and (upcoming) server code.
Session management is no longer implemented in TlsClient so the HttpClient was updated to free and create sessions as needed. No test changes were required for HttpClient so the functionality should be unchanged.
Mechanical changes to the TLS tests were required to use TlsSession where appropriate rather than TlsClient. There should be no change in functionality other than how sessions are managed, i.e. using tlsClientOpen()/tlsSessionFree() rather than just tlsClientOpen().
The errorInternalThrowSys*() functions were marked as returning during coverage testing even when they had no possibility to return, i.e. the error parameter was set to constant true. This meant the compiler would treat the functions as returning even when they would not.
Instead create completely separate functions for coverage to use for THROW_ON_SYS_ERROR*() that can return and leave the regular functions marked __noreturn__.
This abstraction allows the session code to be shared between the socket client and (upcoming) server code. There should be no difference in how the code works -- only the organization has changed. Note that no changes to the tests were required.
This same abstraction will be required for TlsClient but that will be done in a separate commit because it requires test changes.
When the Vagrantfile was updated to use pgbackrest/ instead of /backrest/ as the location for executing tests and building the documentation, parts of contributing.xml (and hence CONTRIBUTING.md) were missed. Only the parts of the document that are actually executed when CONTRIBUTING.md is built from contributing.xml were updated; the parts that are not executed still referred to the old location.
This commit fixes the contributing.xml issue but also removes test/README.md as its contents were out of date and redundant given that they are covered in CONTRIBUTING.md.
The storage driver requires two list functions to be implemented, list and infoList. But the former is a subset of the latter so implementing both in every driver is wasteful. The reason both exist is that in Posix it is cheaper to get a list of names than it is to stat files to get size, time, etc. In S3 these operations are equivalent.
Introduce storageInfoLevelType to determine the amount of information required by the caller. That way Posix can work efficiently and all drivers can return only the data required which saves some bandwidth. The storageList() and storageInfoList() functions remain in the storage interface since they are useful -- the only change is simplifying the drivers with no external impact.
Note that since list() accepted an expression infoList() must now do so. Checking the expression is optional for the driver but can be used to limit results or save IO costs.
Similarly, exists() and pathExists() are just specialized forms of info() so adapt them to call info() instead.
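A rough sketch of the idea, with illustrative names that may differ from the actual pgBackRest definitions -- the caller states how much detail it needs so the Posix driver can skip stat() when only names are required:

    /* Sketch only -- names and levels are illustrative, not the exact pgBackRest API. */
    typedef enum
    {
        storageInfoLevelExists,                     // only whether the item exists
        storageInfoLevelBasic,                      // plus type, size, and timestamp
        storageInfoLevelDetail,                     // plus mode, ownership, and link destination
    } StorageInfoLevel;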
This has been the policy for some time but due to migration pressure only new functions and refactors have been following this rule. Now it seems sensible to make a clean sweep and move all the comments that have not been moved already (i.e. most of them).
Only obvious typos and gross inaccuracies in the comments have been fixed. For the most part this was a copy and paste operation.
Useless comments, e.g. "New object", were not copied. Even so, there are surely many deficient comments left.
Some rearranging was done where needed and functions were placed in the proper sections, e.g. "Constructors", "Functions", etc.
A few function prototypes were found that no longer had an implementation. These were removed, but there may be more.
The coding document has been updated to reflect this policy, which is not new but has never been documented.
This is really a socket option so the new name is clearer.
Since common/io/socket/tcp will contain a mix of options it makes sense to rename it to socket and cascade name changes as needed.
Prior to 2.25 the individual TCP keep-alive options were not being configured due to a missing header. In 2.25 they were being configured incorrectly due to a disconnect between the timeout specified in ms and what was expected by the TCP options, i.e. seconds.
Instead make the TCP keep-alive options directly configurable, with correct units and better testing. Keep-alive is enabled by default (though it can be defaulted to the system setting instead) and the rest of the options are not set by default. This is in line with what PostgreSQL does, though PostgreSQL does not allow keep-alive to be defaulted.
Also move configuration of TCP options before connect() as PostgreSQL does.
Features:
* Add lz4 compression support. Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest. (Reviewed by Cynthia Shang.)
* Add --dry-run option to the expire command. Use dry-run to see which backups/archive would be removed by the expire command without actually removing anything. (Contributed by Cynthia Shang, Luca Ferrari.)
Improvements:
* Improve performance of remote manifest build. (Suggested by Jens Wilke.)
* Fix detection of keepalive options on Linux. (Contributed by Marc Cousin.)
* Add configure host detection to set standards flags correctly. (Contributed by Marc Cousin.)
* Remove compress/compress-level options from commands where unused. These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options. (Reviewed by Cynthia Shang.)
* Limit backup file copy size to size reported at backup start. If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data. (Reviewed by Cynthia Shang.)
Other storage*InfoList() functions do this but it was missed here.
memResize()/memFree() operations become more expensive as the mem context grows larger so freeing it periodically saves processing time.
If the work or result directories already contain data then the docs might be generated slightly differently. Doing a clean ensures they will always produce the same output (provided the code does not change).
Building the contributing document has some special requirements because it runs Docker in Docker so the repo path must align on the host and all Docker containers. Run `pgbackrest/doc/doc.pl` from within the home directory of the user that will do the doc build, e.g. `/home/vagrant`. If the repo is not located directly in the home directory, e.g. `/home/vagrant/pgbackrest`, then a symlink may be used, e.g. `ln -s /path/to/repo /home/vagrant/pgbackrest`.
Mount the repo in the Vagrantfile at /home/vagrant/pgbackrest but provide a link from the old location at /backrest to make the transition less painful.
If a file grows during the backup it will be reconstructed by WAL replay during recovery so there is no need to copy the additional data.
This also reduces the likelihood of seeing torn pages during the copy. Torn pages can still occur in the middle of the file, though, so they must be handled.
This code stanza was not being included on Linux platforms because of a missing header file.
Also update the order of operations and make the timeout calculations more sensible.
The primary source for project info is now src/version.h.
The pgBackRestDoc::ProjectInfo module loads the project info from src/version.h at runtime so there is no need to update it.
This is consistent with the way BackRest and BackRest test were renamed way back in 18fd2523.
More modules will be moving to pgBackRestDoc soon so renaming now reduces churn later.
This directory was once the home of the production Perl code but since f0ef73db this is no longer true.
Move the modules to test in most cases, except where the module is expected to be useful for the doc engine beyond the expected lifetime of the Perl test code (about a year if all goes well).
The exception is pgBackRest::Version which requires more work to migrate since it is used to track pgBackRest versions.
LZ4 compresses data faster than gzip but at a lower ratio. This can be a good tradeoff in certain scenarios.
Note that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest.
This was the interface between Perl and C introduced in 36a5349b but since f0ef73db has only been used by the Perl integration tests. This is expensive code to maintain just for testing.
The main dependency was the interface to storage, no matter where it was located, e.g. S3. Replace this with the newly introduced repo commands (d3c83453) that allow access to repo storage via the command line.
The other dependency was on various cfgOption* functions and CFGOPT_ constants that were convenient but not necessary. Replace these with hard-coded strings in most places and create new constants for commonly used values.
Remove all auto-generated Perl code. This means that the error list will no longer be maintained automatically so copy used errors to Common::Exception.pm. This file will need to be maintained manually going forward but there is not likely to be much churn as the Perl integration tests are being retired.
Update test.pl and related code to remove LibC builds.
Ding, dong, LibC is dead.
These commands are generally useful but more importantly they allow removing LibC by providing the Perl integration tests an alternate way to work with repository storage.
All the commands are currently internal only and should not be used on production repositories.
This command only makes sense for the repository storage since other storage (e.g. pg and spool) must be located on a local Posix filesystem and can be listed using standard unix commands. Since the repo storage can be located in lots of places, having a common way to list it makes sense.
Prefix with repo- to make the scope of this command clear.
Update documentation to reflect this change.
Add compress-type option and deprecate compress option. Since the compress option is boolean it won't work with multiple compression types. Add logic to cfgLoadUpdateOption() to update compress-type if it is not set directly. The compress option should no longer be referenced outside the cfgLoadUpdateOption() function.
Add common/compress/helper module to contain interface functions that work with multiple compression types. Code outside this module should no longer call specific compression drivers, though it may be OK to reference a specific compression type using the new interface (e.g., saving backup history files in gz format).
Unit tests only test compression using the gz format because other formats may not be available in all builds. It is the job of integration tests to exercise all compression types.
Additional compression types will be added in future commits.
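For example, in pgbackrest.conf the new option replaces the boolean compress setting (values are illustrative):

    [global]
    compress-type=gz
    compress-level=6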
A sleep before the time retrieval was necessary since a sleep after it alone did not always work. The sleep before helps ensure that the backup stop time of the previous backup does not equal time-recovery-timestamp. The sleep after allows enough time between the time retrieval and dropping important_table so PostgreSQL can consistently recover to a point before the table drop.
Note that these issues were caused by picking a timestamp too close to the restore command or a database operation, not due to any problem in backup selection of the restore command.
These commands (e.g. restore, archive-get) never used the compress options but allowed them to be passed on the command line. Now they will error when these options are passed on the command line. If these errors occur then remove the unused options.
"gz" was used as the extension but "gzip" was generally used for function and type naming.
With a new compression format on the way, it makes sense to standardize on a single abbreviation to represent a compression format in the code. Since the extension is standard and we must use it, also use the extension for all naming.
The prior code used TRY...CATCH blocks to cleanup mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized when the error occurred and ensuring that the prior memory context was restored.
This worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.
Instead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.
One bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros.
Bug Fixes:
* Prevent defunct processes in asynchronous archive commands. (Reviewed by Stephen Frost. Reported by Adam Brusselback, ejberdecia.)
* Error when archive-get/archive-push/restore are not run on a PostgreSQL host. (Reviewed by Stephen Frost. Reported by Jesper St John.)
* Read HTTP content to eof when size/encoding not specified. (Reviewed by Cynthia Shang. Reported by Christian ROUX.)
* Fix resume when the resumable backup was created by Perl. In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully. (Reported by Kacey Holston.)
Features:
* Auto-select backup set on restore when time target is specified. Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used. (Contributed by Cynthia Shang.)
Improvements:
* Skip pg_internal.init temp file during backup. (Reviewed by Cynthia Shang. Suggested by Michael Paquier.)
* Add more validations to the manifest on backup. (Reviewed by Cynthia Shang.)
Documentation Improvements:
* Prevent lock-bot from adding comments to locked issues. (Suggested by Christoph Berg.)
If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.
This is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file.
This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.
Restore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required.
The main improvement is a double-fork to prevent zombie processes if the parent process exits after the (child) async process. This is a real possibility since the parent process sticks around to monitor the results of the async process.
In the first fork, ignore SIGCHLD in the very unlikely case that the async process exits before the first fork. This is probably only possible if the async process exits immediately, perhaps due to a chdir() failure. Set SIGCHLD back to default in the async process so waitpid() will work as expected.
Also update the comment on chdir() to more accurately reflect what is happening.
Finally, add a test in certain debug builds to ensure the first fork exits very quickly. This only works when valgrind is not in use because valgrind makes forking so slow that it is hard to tell if the async process performed work or not (in the case that the second fork goes missing and the async process is a direct child).
In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully.
Generally, the content-length or transfer-encoding headers will be used to specify how much content should be expected.
There is a special case where the server sends 'Connection: close' without the content headers, in which case the content may be read up until eof.
This appears to be an atypical usage but it is required by the specification.
Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used.
Currently a limited number of date formats are recognized and timezone names are not allowed, only timezone offsets.
Validate that checksums exist for zero size files. This means that the checksums for zero size files are explicitly set by backup even though they'll always be the same. Also validate that zero length files have the correct checksum.
Validate that repo size is > 0 if size is > 0. No matter what compression type is used a non-zero amount of data cannot be stored in zero bytes.
Bug Fixes:
* Fix missing files corrupting the manifest. If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored. (Reviewed by Cynthia Shang. Reported by Vitaliy Kukharik.)
Improvements:
* Use pkg-config instead of xml2-config for libxml2 build options. (Contributed by David Steele, Adrian Vondendriesch.)
* Validate checksums are set in the manifest on backup/restore. (Reviewed by Cynthia Shang.)
This is a modest start but it addresses the specific issue that was caused by the bug fixed in 45ec694a. This validation will produce an immediate error rather than erroring out partway through the restore.
More validations are planned but this is the most important one and seems safest for this release.
If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.
The issue was that removing files from the manifest invalidated the pointers stored in the processing queues. When a file was removed, all the pointers shifted to the next file in the list, causing a file to be unprocessed. Since the unprocessed file was still in the manifest it would be saved with no checksum, causing a failure on restore.
When process-max was > 1 then the bug would often not express since the file had already been pulled from the queue and updates to the manifest are done by name rather than by pointer.
pkg-config is a generic way to get build options rather than relying on a package-specific utility.
XML2_CONFIG can be used to override this utility for systems that do not ship pkg-config.
Bug Fixes:
* Fix error in timeline conversion. The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed base 10 instead of 16, which led to errors when the timeline was ≥ 0xA. (Reported by Lukas Ertl, Eric Veldhuyzen.)
The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed base 10 instead of 16, which led to errors when the timeline was ≥ 0xA.
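As an illustration of the bug class (this is not the pgBackRest code), the timeline component of a WAL segment name is hexadecimal and must be converted with base 16:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        // Timeline 0xA taken from a WAL segment name such as "0000000A000000010000002B"
        uint32_t right = (uint32_t)strtoul("0000000A", NULL, 16);   // 10 -- correct
        uint32_t wrong = (uint32_t)strtoul("0000000A", NULL, 10);   // 0 -- parsing stops at 'A'

        printf("base 16: %u, base 10: %u\n", right, wrong);
        return 0;
    }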
Bug Fixes:
* Fix options being ignored by asynchronous commands. The asynchronous archive-get/archive-push processes were not loading options configured in command configuration sections, e.g. [global:archive-get]. (Reviewed by Cynthia Shang. Reported by Urs Kramer.)
* Fix handling of \ in filenames. \ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading. Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Features:
* pgBackRest is now pure C.
* Add pg-user option. Specifies the database user name when connecting to PostgreSQL. If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior. (Contributed by Mike Palmiotto.)
* Allow path-style URIs in S3 driver.
Improvements:
* The backup command is implemented entirely in C. (Reviewed by Cynthia Shang.)
The local, remote, archive-get-async, and archive-push-async commands were used to run functionality that was not directly available to the user. Unfortunately that meant they would not pick up options from the command that the user expected, e.g. backup, archive-get, etc.
Remove the internal commands and add roles which allow pgBackRest to determine what functionality is required without implementing special commands. This way the options are loaded from the expected command section.
Since remote is no longer a specific command with its own options, more manipulation is required when calling remote. This might be something we can improve in the config system but it may be worth leaving as is because it is a one-off, for now at least.
Although path-style URIs have been deprecated by AWS, they may still be used with products like Minio because no additional DNS configuration is required.
Path-style URIs must be explicitly enabled since it is not clear how they can be auto-detected reliably. More importantly, faulty detection could cause regressions in current installations.
Previously dates were not being filled by these functions which was fine since dates were not used.
We plan to use dates for the ls command plus it makes sense for the driver to be complete since it will be used as an example.
These are similar to what mktime() and strptime() do but they ignore the local system timezone which saves having to munge the TZ env variable to do time conversions.
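A minimal sketch of such a conversion (not pgBackRest's implementation) computes the epoch directly from the date/time parts, so the local timezone never enters the calculation:

    #include <stdint.h>
    #include <stdio.h>

    // Convert date/time parts interpreted as UTC to a Unix epoch using the standard
    // civil-date formula, independent of the TZ environment variable.
    static int64_t epochFromUtc(int year, int month, int day, int hour, int minute, int second)
    {
        int y = year - (month <= 2);
        int era = (y >= 0 ? y : y - 399) / 400;
        int yoe = y - era * 400;                                            // [0, 399]
        int doy = (153 * (month + (month > 2 ? -3 : 9)) + 2) / 5 + day - 1; // [0, 365]
        int doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;                    // [0, 146096]
        int64_t days = (int64_t)era * 146097 + doe - 719468;                // days since 1970-01-01

        return ((days * 24 + hour) * 60 + minute) * 60 + second;
    }

    int main(void)
    {
        printf("%lld\n", (long long)epochFromUtc(2020, 5, 26, 12, 0, 0));   // 1590494400
        return 0;
    }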
This macro was created before the String object existed so subsequent usage with String always included a lot of strPtr() wrapping.
TEST_RESULT_STR_Z() had already been introduced but a wholesale replacement of TEST_RESULT_STR() was not done since the priority was on the C migration.
Update all calls to (old) TEST_RESULT_STR() with one of the following variants: (new) TEST_RESULT_STR(), TEST_RESULT_STR_Z(), TEST_RESULT_Z(), TEST_RESULT_Z_STR().
Specifies the database user name when connecting to PostgreSQL.
If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior.
\ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading.
Use jsonFromStr() to properly quote and escape \.
Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.
Remove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.
Remove "c" option that allowed the remote to tell if it was being called from C or Perl.
Code to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.
Update build and installation instructions in the user guide.
Remove all Perl unit tests.
Remove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.
Rename perlReq to binReq in define.yaml to indicate that the binary is required for a test. This had been the actual meaning for quite some time but the key was never renamed.
For the most part this is a direct migration of the Perl code into C except as noted below.
A backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.
The logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. This has never been an issue in the field that we know of, but we found it in testing.
For online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.
Archive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.
Resume will now work even if hardlink settings have been changed.
Reviewed by Cynthia Shang.
Bug Fixes:
* Fix archive-push/archive-get when PGDATA is symlinked. These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call. (Reported by Stephen Frost, Milosz Suchy.)
* Fix reference list when backup.info is reconstructed in expire command. Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual. This unlikely combination of events means this is probably not a problem in the field.
* Fix segfault on unexpected EOF in gzip decompression. (Reported by Stephen Frost.)
Commit 7168e074 tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked.
If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call.
If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.
Change the existing assertion to an error to catch this condition in production gracefully.
Adding a manifest to backup.info was migrated to C in 4e4d1f41 but deduplication of the references was missed leading to a reference for every file being added to backup.info.
Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual.
This unlikely combination of events means this is probably not a problem in the field.
Installing lcov 1.14 everywhere turned out to be a problem just as using 1.13 on Ubuntu 19.04 was.
Since we primarily use Ubuntu 18.04 for coverage testing and reporting, we definitely want to make sure that works. So, revert to using the default packaged lcov except when specified otherwise in VmTest.pm.
PostgreSQL minor version releases are also included since all containers have been rebuilt.
This should have been removed when the support for the option was removed in c7333190.
The option cannot be removed entirely because we don't want to error in the case where --force was specified but the stanza is valid.
Adding a dummy column which is always set by the P() macro allows a single macro to be used for parameters or no parameters without violating C's prohibition on the {} initializer.
-Wmissing-field-initializers remains disabled because it still gives wildly different results between versions of gcc.
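A hedged sketch of the technique follows -- the struct, field, and macro names here are hypothetical, not the actual pgBackRest macros:

    #include <stdbool.h>

    typedef struct BackupParam
    {
        bool dummy;                                 // always set by the macro below
        int timeout;                                // optional parameter
    } BackupParam;

    static void backup(BackupParam param)
    {
        (void)param;                                // real work would go here
    }

    // Because .dummy is always initialized, backupP() and backupP(.timeout = 10) both
    // expand to a valid initializer, so the {} form forbidden by C is never generated.
    #define backupP(...) backup((BackupParam){.dummy = true, __VA_ARGS__})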
Bug Fixes:
* Fix remote timeout in delta restore. When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout. (Reported by James Sewell, Jens Wilke.)
* Fix handling of repeated HTTP headers. When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior. (Reported by donicrosby.)
Improvements:
* JSON output from the info command is no longer pretty-printed. Monitoring systems can more easily ingest the JSON without linefeeds. External tools such as jq can be used to pretty-print if desired. (Contributed by Cynthia Shang.)
* The check command is implemented entirely in C. (Contributed by Cynthia Shang.)
Documentation Improvements:
* Document how to contribute to pgBackRest. (Contributed by Cynthia Shang.)
* Document maximum version for auto-stop option. (Contributed by Brad Nicholson.)
Test Suite Improvements:
* Fix container test path being used when --vm=none. (Suggested by Stephen Frost.)
* Fix mismatched timezone in expect test. (Suggested by Stephen Frost.)
* Don't autogenerate embedded libc code by default. (Suggested by Stephen Frost.)
When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior.
Reported by donicrosby.
We had some problems with newer versions so had held off on updating. Those problems appear to have been resolved.
In addition, the --compat flag is no longer required. Prior versions of MinIO required all parts of a multi-part upload (except the last) to be of equal size. The --compat flag was introduced to restore the default S3 behavior. Now --compat is only required when ETag is being used for MD5 verification, which we don't do.
This documentation shows how to build a development environment on Ubuntu 19.04 and should work for other Debian-based distros.
Note that this document is not included in automated testing due to some unresolved issues with Docker in Docker on Travis CI. We'll address this in the future when we add contributing documentation to the website.
Mark all pre commands as skip so they won't be run again after the container is built.
Ensure that pre commands added to the container are run as the container user if they are not intended to run as root.
This is only needed when new code is added to the Perl C library, which is becoming rare as the migration progresses.
Also, the code will vary slightly based on the Perl version used for generation so for normal users it is just noise.
Suggested by Stephen Frost.
When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout.
Add keep-alives to prevent remote timeout.
Reported by James Sewell, Jens Wilke.
Note that building the manifest on each host has been temporarily removed.
This feature will likely be brought back as a non-default option (after the manifest code has been fully migrated to C) since it can be fairly expensive.
Three major changes were required to get this working:
1) Provide the path to pgbackrest in the build directory when running outside a container. Tests in a container will continue to install and run against /usr/bin/pgbackrest.
2) Set a per-test lock path so tests don't conflict on the default /tmp/pgbackrest path. Also set a per-test log-path while we are at it.
3) Use localhost instead of a custom host for TLS test connections. Tests in containers will continue to update /etc/hosts and use the custom host.
Add infrastructure and update harnessCfgLoad*() to get the correct exe and paths loaded for testing.
Since new tests are required to verify that running outside a container works, also rework the tests in Travis CI to provide coverage within a reasonable amount of time. Mainly, break up the doc tests by VM and run an abbreviated unit test suite on co6 and co7.
Features:
* PostgreSQL 12 support.
* Add info command set option for detailed text output. The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations. (Contributed by Cynthia Shang. Suggested by Stephen Frost, ejberdecia.)
* Add standby restore type. This restore type automatically adds standby_mode=on to recovery.conf for PostgreSQL < 12 and creates standby.signal for PostgreSQL ≥ 12, creating a common interface between PostgreSQL versions. (Reviewed by Cynthia Shang.)
Improvements:
* The restore command is implemented entirely in C. (Reviewed by Cynthia Shang.)
Documentation Improvements:
* Document the relationship between db-timeout and protocol-timeout. (Contributed by Cynthia Shang. Suggested by James Chanco Jr.)
* Add documentation clarifications regarding standby repositories. (Contributed by Cynthia Shang.)
* Add FAQ for time-based Point-in-Time Recovery. (Contributed by Cynthia Shang.)
Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.
A comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.
recovery.signal and standby.signal are automatically created based on the recovery settings.
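For example, the settings appended to postgresql.auto.conf for a time-based restore might look like this (stanza name and target time are illustrative):

    # Recovery settings generated by pgBackRest restore.
    restore_command = 'pgbackrest --stanza=demo archive-get %f "%p"'
    recovery_target_time = '2020-05-01 12:00:00'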
The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.
This information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release.
This restore type automatically adds standby_mode=on to recovery.conf.
This could be accomplished previously by setting --recovery-option=standby_mode=on but PostgreSQL 12 requires standby mode to be enabled by a special file named standby.signal.
The new restore type allows us to maintain a common interface between PostgreSQL versions.
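For example (the stanza name is illustrative), a standby can now be restored with:

    pgbackrest --stanza=demo --type=standby restore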
We haven't had the time to complete this documentation and it has suffered bit rot.
This prevents us from building the docs on PostgreSQL >= 11 so just comment it all out until it can be updated.
For the most part this is a direct migration of the Perl code into C.
There is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.
The C code works like this:
If a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. In that case the file ownership will need to be updated by a privileged user before the restore can be retried.
If a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name.
Reviewed by Cynthia Shang.
The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes, timestamps, etc. A list of databases is also included for selective restore.
The purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that nothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.
For now, migrate enough functionality to implement the restore command.
Reviewed by Cynthia Shang.
These features finally make the ls command practical.
Currently the JSON contains only name, type, and size. We may add more fields in the future, but these seem like the minimum needed to be useful.
Connection reuse and pipelining are not the same thing and should not have been conflated.
Update comments and release notes to reflect the correct usage.
Info files required three copies in memory to be loaded (the original string, an ini representation, and the final info object). Not only was this memory inefficient but the Ini object does sequential scans when searching for keys making large files very slow to load.
This has not been an issue since archive.info and backup.info are very small, but it becomes a big deal when loading manifests with hundreds of thousands of files.
Instead of holding copies of the data in memory, use a callback to deliver the ini data directly to the object when loading. Use a similar method for save to avoid having an intermediate copy. Save is a bit complex because sections/keys must be written in alpha order or older versions of pgBackRest will not calculate the correct checksum.
Also move the load retry logic to helper functions rather than embedding it in the Info object. This allows for more flexibility in loading and ensures that stack traces will be available when developing unit tests.
Reviewed by Cynthia Shang.
286a106a updated the documentation to build pgBackRest as an unprivileged user, but the wget command was missed. This command is not actually run, just displayed, because the release is not yet available when the documentation is built.
Update the wget command to run as the local user.
Bug Fixes:
* Improve slow manifest build for very large quantities of tables/segments. (Reported by Jens Wilke.)
* Fix exclusions for special files. (Reported by CluelessTechnologist, Janis Puris, Rachid Broum.)
Improvements:
* The stanza-create/update/delete commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* The start/stop commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* Create log directories/files with 0750/0640 mode. (Suggested by Damiano Albani.)
Documentation Bug Fixes:
* Fix yum.p.o package being installed when custom package specified. (Reported by Joe Ayers, John Harvey.)
Documentation Improvements:
* Build pgBackRest as an unprivileged user. (Suggested by Laurenz Albe.)
The {[os-type-is-centos]} expression was missing parens which meant "and" expressions built on it would always evaluate true if the os-type was centos6.
Reported by Joe Ayers, John Harvey.
Prior to 2.16 the Perl manifest code would skip any file that began with a dot. This was not intentional but it allowed PostgreSQL socket files to be located in the data directory. The new C code in 2.16 did not have this unintentional exclusion so socket files in the data directory caused errors.
Worse, the file type error was being thrown before the exclusion check so there was really no way around the issue except to move the socket files out of the data directory.
Special file types (e.g. socket, pipe) will now be automatically skipped and a warning logged to notify the user of the exclusion. The warning can be suppressed with an explicit --exclude.
Reported by CluelessTechnologist, Janis Puris, Rachid Broum.
Putting the checksum at the beginning of the file made it impossible to stream the file out when saving. The entire file had to be held in memory while it was checksummed so the checksum could be written at the beginning.
Instead place the checksum at the end. This does not break the existing Perl or C code since the read is not order dependent.
There are no plans to improve the Perl code to take advantage of this change, but it will make the C implementation more efficient.
Reviewed by Cynthia Shang.
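The streaming benefit can be seen in a sketch like the one below (illustration only; the digest and the checksum section/key names are made up for the example): content is hashed as it is written, and the checksum is emitted last, so the file never needs to be held in memory.

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    // Write one chunk: send it to the output and hash it at the same time
    static void
    writeChunk(FILE *out, SHA_CTX *hash, const char *chunk)
    {
        fputs(chunk, out);
        SHA1_Update(hash, chunk, strlen(chunk));
    }

    static void
    saveExample(FILE *out)
    {
        SHA_CTX hash;
        SHA1_Init(&hash);

        // Stream the content in order (sections/keys already sorted by the caller)
        writeChunk(out, &hash, "[backup]\nlabel=\"20200101-000000F\"\n");

        // Emit the checksum last, after all content has been streamed
        unsigned char digest[SHA_DIGEST_LENGTH];
        SHA1_Final(digest, &hash);

        fputs("\n[checksum]\nvalue=\"", out);

        for (int digestIdx = 0; digestIdx < SHA_DIGEST_LENGTH; digestIdx++)
            fprintf(out, "%02x", digest[digestIdx]);

        fputs("\"\n", out);
    }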
pgBackRest was being built by root in the documentation, which is definitely not best practice.
Instead build as the unprivileged default container user. Sudo privileges are still required to install.
Suggested by Laurenz Albe.
storagePosixInfoList() processed each directory in a single memory context. If the directory contained hundreds of thousands of files, processing became very slow due to the number of allocations.
Instead, reset the memory context every thousand files to minimize the number of allocations active at once, improving both speed and memory consumption.
Reported by Jens Wilke.
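Conceptually the fix looks like the loop below (a sketch with a made-up scratch arena standing in for the real memory context): temporary allocations for directory entries are freed as a group every thousand entries instead of accumulating for the whole directory.

    #include <stdlib.h>

    // Throwaway arena used only to make the pattern visible
    typedef struct Scratch
    {
        void *ptr[1000];
        unsigned int used;
    } Scratch;

    static void *
    scratchAlloc(Scratch *scratch, size_t size)
    {
        return scratch->ptr[scratch->used++] = malloc(size);
    }

    static void
    scratchReset(Scratch *scratch)
    {
        while (scratch->used > 0)
            free(scratch->ptr[--scratch->used]);
    }

    static void
    infoListExample(const char *const *entryList, unsigned int entryTotal)
    {
        Scratch scratch = {.used = 0};

        for (unsigned int entryIdx = 0; entryIdx < entryTotal; entryIdx++)
        {
            // Per-entry temporary data comes from the scratch arena
            char *nameCopy = scratchAlloc(&scratch, 256);
            (void)nameCopy;
            (void)entryList;

            // Reset every thousand entries so allocations never pile up
            if ((entryIdx + 1) % 1000 == 0)
                scratchReset(&scratch);
        }

        scratchReset(&scratch);
    }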
The log directories/files were being created with a mix of modes depending on whether they were created in C or Perl. In particular, the C code was creating log files with the execute bit set for the user and group which was just odd.
Standardize on 0750/0640 for both code paths.
Suggested by Damiano Albani.
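In POSIX terms the standardized behavior amounts to something like the following (paths are examples only):

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    // Log directory created 0750 (rwxr-x---), log file 0640 (rw-r-----)
    static int
    logFileOpen(void)
    {
        mkdir("/var/log/pgbackrest", 0750);

        return open("/var/log/pgbackrest/demo-backup.log", O_WRONLY | O_CREAT | O_APPEND, 0640);
    }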
The Perl versions remain because they are still being used by the Perl stanza commands. Once the stanza commands are migrated they can be removed.
Contributed by Cynthia Shang.
Bug Fixes:
* Retry S3 RequestTimeTooSkewed errors instead of immediately terminating. (Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.)
* Fix incorrect handling of transfer-encoding response to HEAD request. (Reported by Pavel Suderevsky.)
* Fix scoping violations exposed by optimizations in gcc 9. (Reported by Christian Lange, Ned T. Crigler.)
Features:
* Add repo-s3-port option for setting a non-standard S3 service port.
Improvements:
* The local command for backup is implemented entirely in C. (Contributed by David Steele, Cynthia Shang.)
* The check command is implemented partly in C. (Reviewed by Cynthia Shang.)
Implement switch WAL and archive check in C but leave the rest in Perl for now.
The main idea was to have some real integration tests for the new database code so the rest of the migration can wait.
Reviewed by Cynthia Shang.
Migrate functionality from the Perl Db module to C. For now this is just enough to implement the WAL switch check.
Add the dbGet() helper function to get Db objects easily.
Create macros in harnessPq to make writing pq scripts easier by grouping commonly used functions together.
Reviewed by Cynthia Shang.
The cause of this error seems to be that a failed request takes so long that a subsequent retry at the HTTP level uses outdated headers.
We're not sure if pgBackRest is to blame here (in one case a kernel downgrade fixed it, in another case an incorrect network driver was the problem), so add retries to hopefully deal with the issue if it is not too persistent. If SSL_write() has long delays before reporting an error then this will obviously affect backup performance.
Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.
If this option is set then ports appended to repo-s3-endpoint or repo-s3-host will be ignored.
Setting this option explicitly may be the only way to use a bare IPv6 address with S3 (since multiple colons confuse the parser) but we plan to improve this in the future.
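For example, a configuration along these lines (host and port values are placeholders) keeps the port out of the host string entirely, which is what makes a bare IPv6 address workable:

    [global]
    repo1-type=s3
    repo1-s3-host=2001:db8::10
    repo1-s3-port=9000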
This direct interface to libpq allows simple queries to be run against PostgreSQL and supports timeouts.
Testing is performed using a shim that can use scripted responses to test all aspects of the client code. The shim will be very useful for testing backup scenarios on complex topologies.
Reviewed by Cynthia Shang.
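For a rough idea of what the interface wraps, here is plain libpq (not the new Db API; the connection string and query are examples, and pg_current_wal_lsn() requires PostgreSQL >= 10): connect with a timeout, run a simple query, and read the result.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        // connect_timeout limits how long the connection attempt may take
        PGconn *conn = PQconnectdb("host=/var/run/postgresql dbname=postgres connect_timeout=30");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        // A simple query, e.g. reading the current LSN for a WAL switch check
        PGresult *result = PQexec(conn, "select pg_current_wal_lsn()");

        if (PQresultStatus(result) == PGRES_TUPLES_OK)
            printf("current lsn: %s\n", PQgetvalue(result, 0, 0));

        PQclear(result);
        PQfinish(conn);
        return 0;
    }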
The local process is now entirely migrated to C. Since all major I/O operations are performed in the local process, the vast majority of I/O is now performed in C.
Contributed by David Steele, Cynthia Shang.
The HTTP server can use either content-length or transfer-encoding to indicate that there is content in the response. HEAD requests do not include content but return all the same headers as GET. In the HEAD case we were ignoring content-length but not transfer-encoding which led to unexpected eof errors on AWS S3. Our test server, minio, uses content-length so this was not caught in integration testing.
Ignore all content for HEAD requests (no matter how it is reported) and add a unit test for transfer-encoding to prevent a regression.
Found by Pavel Suderevsky.
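As a concrete illustration (representative headers, not captured output), a HEAD response may advertise content in either form while never sending a body, and both forms now get the same treatment:

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked     (as seen from S3; no body follows a HEAD response)

    HTTP/1.1 200 OK
    Content-Length: 16384          (as seen from minio; likewise, headers only)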
This analysis never produced anything but false positives (var might be NULL) but took over a minute per test run and added 600MB to the test container.
No new Perl code is being developed, so these tools are just taking up time and making migrations to newer platforms harder. There are only a few Perl tests remaining with full coverage so the coverage tool does not warn of loss of coverage in most cases.
Remove both tools and associated libraries.
gcc < 9 treats all compound literals as having function scope, even though the C spec limits their lifetime to the enclosing block, making them invalid outside it. Since the compiler and valgrind were not enforcing this we had a few violations which caused problems in gcc >= 9.
Even though we are not quite ready to support gcc 9 officially, fix the scoping violations that currently exist in the codebase.
Reported by Christian Lange, Ned T. Crigler.
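The class of violation looks roughly like this (simplified example, not code from the project): the compound literal's storage ends with the enclosing block, so using the pointer afterwards is undefined even though gcc < 9 happened to keep it alive.

    #include <stdio.h>

    int
    main(void)
    {
        const int *list = NULL;
        int flag = 1;

        if (flag)
        {
            // The compound literal lives only until the end of this block
            list = (const int[]){1, 2, 3};
        }

        // Undefined behavior: list dangles here. gcc < 9 extended the literal's
        // lifetime to the whole function, which masked the bug; gcc >= 9 does not.
        printf("%d\n", list[0]);
        return 0;
    }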
Maintaining the storage layer/drivers in two languages is burdensome. Since the integration tests require the Perl storage layer/drivers we'll need them even after the core code is migrated to C. Create an interface layer so the Perl code can be removed and new storage drivers/features introduced without adding Perl equivalents.
The goal is to move the integration tests to C so this interface will eventually be removed. That being the case, the interface was designed for maximum compatibility to ease the transition. The result looks a bit hacky but we'll improve it as needed until it can be retired.
Bug Fixes:
* Fix archive retention expiring too aggressively. (Fixed by Cynthia Shang. Reported by Mohamad El-Rifai.)
Improvements:
* The expire command is implemented entirely in C. (Contributed by Cynthia Shang.)
* The local command for restore is implemented entirely in C.
* Remove hard-coded PostgreSQL user so $PGUSER works. (Suggested by Julian Zhang, Janis Puris.)
* Honor configure --prefix option. (Suggested by Daniel Westermann.)
* Rename repo-s3-verify-ssl option to repo-s3-verify-tls. The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure). The old name will continue to be accepted.
Documentation Improvements:
* Add FAQ to the documentation. (Contributed by Cynthia Shang.)
* Use wal_level=replica in the documentation for PostgreSQL ≥ 9.6. (Suggested by Patrick McLaughlin.)
This variable needs to be replaced right before being used, without being added to the cache, since the host repo path will vary from system to system.
This is frankly a bit of a hack to get the documentation to build in the Debian packages for the upcoming release. We'll need to come up with something more flexible going forward.
The problem occurred when repo1-archive-retention-type was set to diff. In this case repo1-archive-retention ended up being effectively equal to one, which meant PITR recovery was only possible from the last backup. WAL required for consistency was still preserved for all backups.
This issue is not present in the C migration committed at 434cd832, which was written before this bug was reported. Even so, we wanted to note this issue in the release notes in case any other users have been affected.
Fixed by Cynthia Shang.
Reported by Mohamad El-Rifai.
This implementation duplicates the functionality of the Perl code but does so with different logic and includes full unit tests.
Along the way at least one bug was fixed; see issue #748.
Contributed by Cynthia Shang.
The PostgreSQL user was hard-coded to the OS user which libpq will automatically use if $PGUSER is not set, so this code was redundant and prevented $PGUSER from working when set.
Suggested by Julian Zhang, Janis Puris.
The tests and documentation have been using the core storage layer but soon that will depend entirely on the C library, creating a bootstrap problem (i.e. the storage layer will be needed to build the C library).
Create a simplified Posix storage layer to be used by documentation and the parts of the test code that build and execute the actual tests. The actual tests will still use the core storage driver so they can interact with any type of storage.
These names more accurately reflect what the functions do and follow the convention started in Info and InfoPg.
Also remove the ignoreMissing parameter since it was never used.
Contributed by Cynthia Shang.
The documentation was using wal_level=hot_standby which is a deprecated setting.
Also remove the reference to wal_level=archive since it is no longer supported in newer versions and is not recommended for older versions.
Suggested by Patrick McLaughlin.
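For PostgreSQL ≥ 9.6 the documented setting in postgresql.conf is therefore simply:

    wal_level = replica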
Allow commands to be skipped by default in the command help but still work if help is requested for the command directly. There may be other uses for the flag in the future.
Update help for ls now that it is exposed.
Allows listing repo paths/files from the command-line, to be used primarily for testing and debugging.
This command is internal-only so the interface may change at any time without notice.
The documentation was relying on a ScalityS3 container built for testing which wasn't very transparent. Instead, use the stock minio container and configure it in the documentation.
Also, install certificates and CA so that TLS verification can be enabled.
Since the CentOS 6/7 user guides were generated as a single page, they did not get menus. Generate the entire site for each user guide so menus are included.
The release notes are generally a direct reflection of the git log. So, ease the burden of maintaining the release notes by using the git log to determine what needs to be added.
Currently only non-dev items are required to be matched to a git commit but the goal is to account for all commits.
The git history cache is generated from the git log but can be modified to correct typos and match the release notes as they evolve. The commit hash is used to identify commits that have already been added to the cache.
There's plenty more to do here. For instance, links to the commits for each release item should be added to the release notes.
The first paragraph should match the first line of the commit message as closely as possible. The following paragraphs add more information.
Release items have been updated back to 2.01.
The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure).
The old name will continue to be accepted.
This is just the part of restore run by the local helper processes, not the entire command.
Even so, various optimizations in the code (such as pipelining and special handling for zero-length files) should make the restore command faster on object stores.