apt.postgresql.org provides packages for ppc64le so it's important that we support it.
Rearrange jobs a bit based on current runtimes and importance. Also reduce the number of tests run for arm64 since it is slower than other architectures.
cdebfb09 added relative times to backup.info but a subtle issue was introduced that would cause the tests to fail if the time acquired by cmdExpire() was exactly the same as the timeNow used to format backup.info. cmdExpire() was working correctly given the inputs, but the tests did not run predictably.
This was found while running the tests with --no-valgrind --no-coverage which allows them to run a lot faster, thus exposing the timing issue.
These tests required sudo to achieve complete coverage.
Add a new coverage exception, vm_covered, that applies to code that can only be covered in a container. When the test is run outside of a container, code sections that require a container will be excluded with TEST_CONTAINER_REQUIRED and the coverage exception will be added to prevent a coverage error.
This does require marking up the core code with vm_covered, which in some modules (e.g. common/io/tls/client) can be extensive. It's possible that some of these tests can be rewritten to be less dependent on sudo but no attempt was made to do that here.
Only allow coverage summaries in a vm since coverage summaries outside a vm will not be complete, which was true even before this commit.
Update error types thrown by bzip2 to be more consistent with gzip.
Update the bzip2 and gzip error default to AssertError since that's the more common case in both, and add a 'break;' to the default clause. We don't intend to fall through those case statements, so even when the default is last we should be explicit about it.
Clean up some tabs that snuck in, rename a variable to be more clear, and add some comments.
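A minimal sketch of the pattern in generic C (not the actual diff; the error names are illustrative):

    #include <bzlib.h>

    /* Map a bzip2 return code to an error name, defaulting to an
       assert-style error. The break in the default clause is redundant when
       default is last, but makes the no-fall-through intent explicit. */
    static const char *
    bz2ErrorName(int rc)
    {
        const char *name;

        switch (rc)
        {
            case BZ_MEM_ERROR:
                name = "MemoryError";
                break;

            case BZ_IO_ERROR:
                name = "FileError";
                break;

            default:
                name = "AssertError";
                break;
        }

        return name;
    }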
It's important that the values in these variables are maintained even after an exception is thrown, so they must be marked volatile.
Found while testing on the ppc64le architecture.
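A minimal standalone sketch of the hazard (not the actual pgBackRest code): per C11 7.13.2.1, a non-volatile local variable modified between setjmp() and longjmp() has an indeterminate value afterward, which is exactly the class of bug that surfaced here.

    #include <setjmp.h>

    static jmp_buf env;

    static int
    example(void)
    {
        volatile int result = 0;    /* must be volatile to survive longjmp() */

        if (setjmp(env) == 0)
        {
            result = 1;             /* modified after setjmp() ... */
            longjmp(env, 1);        /* ... then an "exception" is thrown */
        }

        /* without volatile, result could legally still be 0 here */
        return result;
    }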
The --repo-retention-full-type option allows retention of full backups based on a time period, specified in days.
The new option will default to 'count' and therefore will not affect current installations. Setting repo-retention-full-type to 'time' will allow the user to use a time period, in days, to indicate full backup retention. Using this method, a full backup can be expired only if the time the backup completed is older than the number of days set with repo-retention-full (calculated from the moment the 'expire' command is run) and at least one full backup meets the retention period. If archive retention has not been configured, then the default settings will expire archives that are prior to the oldest retained full backup. For example, if there are three full backups ending in times that are 25 days old (F1), 20 days old (F2), and 10 days old (F3), and the full retention period is 15 days, then only F1 will be expired; F2 will be retained because F3 is not at least 15 days old, so F2 is still needed to cover the retention period.
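A hedged sketch of the configuration for the example above (repo index assumed; consult the configuration reference for authoritative option names):

    [global]
    repo1-retention-full-type=time
    repo1-retention-full=15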
Checking for free space in the output buffer worked, but if the buffer was completely filled then deflate() would need to be called again, which was wasteful and a bit confusing for debugging.
Instead, use Z_STREAM_END to detect that compression is done.
This change was inspired by the bz2 implementation in a021c9fe since bz2 does not allow BZ2_bzCompress() to be called after BZ_STREAM_END is returned. That made it obvious that gz would prefer the same implementation, even if it is more tolerant. The documentation at https://www.zlib.net/manual.html agrees.
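A minimal standalone sketch of the loop described above (not the actual pgBackRest code):

    #include <zlib.h>

    /* Keep calling deflate(Z_FINISH) until it reports Z_STREAM_END -- the
       explicit "compression is done" signal -- instead of inferring
       completion from leftover space in the output buffer. */
    static void
    deflateFinish(z_stream *stream, unsigned char *out, unsigned outSize)
    {
        int rc;

        do
        {
            stream->next_out = out;
            stream->avail_out = outSize;

            rc = deflate(stream, Z_FINISH);

            /* ... write outSize - stream->avail_out bytes from out ... */
        }
        while (rc != Z_STREAM_END);
    }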
Newer versions of sudo output this message to stderr when run in a container:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
See https://github.com/sudo-project/sudo/issues/42 for details.
A simple workaround is to prevent sudo from disabling core dumps. This seems safe enough because if sudo is segfaulting then core files are the least of our worries.
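One way to do that, per sudo.conf(5), is to turn off sudo's core dump suppression in the container image (a sketch; the exact mechanism used may differ):

    # /etc/sudo.conf
    Set disable_coredump false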
apt.postgresql.org will soon be providing packages for arm64 so it's important that we support it.
Testing on multiple architectures also helps expose potential issues in popular architectures. See 10a5182d for an example.
There are a number of Valgrind errors on Ubuntu 12.04 which do not happen on newer distro versions. However, suppressions for these errors have masked legitimate issues in subsequent code.
Instead, make suppressions VM specific so errors in other VMs are not masked.
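For reference, a hedged sketch of what such a VM-specific suppression entry might look like (file name and frames illustrative; see the Valgrind manual for the format):

    # shipped only with the Ubuntu 12.04 (u12) VM, e.g. valgrind.u12.supp
    {
       u12-libc-false-positive
       Memcheck:Cond
       ...
       obj:*/libc-2.15.so
    }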
EVP_CIPHER_CTX_cleanup() was being called instead of EVP_CIPHER_CTX_free() so most of the memory was being freed but not all of it.
This leak was masked by Valgrind suppressions which are only applicable to Ubuntu 12.04, which will be addressed in a future commit.
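A minimal sketch of the difference using the OpenSSL EVP API:

    #include <openssl/evp.h>

    static void
    example(void)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        /* ... encrypt/decrypt using ctx ... */

        /* EVP_CIPHER_CTX_cleanup(ctx) clears the internal cipher state but
           leaks the context allocation itself; free releases both */
        EVP_CIPHER_CTX_free(ctx);
    }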
Travis-CI arm64 was not happy with this pattern, perhaps because connected was being reset after a longjmp() even though it should have stayed with its originally initialized value of false. In any case, tlsClientOpen() ended up returning NULL on error rather than throwing an exception.
The new pattern seems simpler and passes all tests unmodified, so even though the error was only seen in TlsClient it makes sense to propagate to the other clients.
Resolving localhost can vary based on the local network configuration so it is safer to just use a static IP.
This was found while testing on Travis-CI arm64.
bzip2 is a widely available, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), while being around twice as fast at compression and six times faster at decompression.
bzip2 is currently available on all supported platforms.
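A hedged configuration sketch enabling it (option name per this release):

    [global]
    compress-type=bz2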
This was an oversight in 438b957f, which added multiple compression type support. The booleans were interpreted as none and gz, which works fine for the CompressType enum until the position of gz or none changes.
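A minimal sketch of the hazard (type and function names illustrative, not the actual pgBackRest definitions):

    #include <stdbool.h>

    typedef enum {compressTypeNone, compressTypeGz, compressTypeBz2} CompressType;

    static void backupFile(CompressType type) {(void)type;}

    static void
    caller(void)
    {
        /* compiles cleanly: true converts to 1, i.e. compressTypeGz --
           correct only while gz happens to be the second enumerator */
        backupFile(true);
    }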
It's not clear how useful single-character zero-terminated constants are or if we want to propagate them through the code, but it at least makes sense to centralize the constants used by the Buffer and String objects.
Zstandard is a fast lossless compression algorithm targeting real-time compression scenarios at zlib-level and better compression ratios. It's backed by a very fast entropy stage, provided by the Huff0 and FSE libraries.
Zstandard version >= 1.0 is required, which is generally only available on newer distributions.
The previous location was too late to allow --var=s3-all=y to work with --require=/repo-host, which depends on /quickstart/configure-archiving.
Since the section is not included in production documentation, the position is not very important to flow so just move it to where it works.
If the WAL path is absolute then pg1-path should be optional, but in fact it was required in order to load pg_control.
Skip the pg_control check when pg1-path is not specified. The check against the stanza version/system-id remains to protect the repo from corruption.
Perhaps this was intended to verify the WAL size but was never implemented.
Verifying the WAL size is probably a good idea so this member may be added back if the feature is implemented.
An upcoming feature requires new parameters for storagePosixNew() and this causes a lot of churn because almost every test creates a Posix storage object. Some refactoring in the tests might reduce this duplication but storagePosixNew() is collecting a lot of parameters so converting to storagePosixNewP() makes sense in any case.
There are relatively few call sites in the core code but they still benefit from better readability after this change.
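A hedged, simplified sketch of the *P() convention (pgBackRest's real macros carry more bookkeeping):

    #include <stdbool.h>
    #include <sys/types.h>

    typedef struct StoragePosixNewParam
    {
        bool write;                         /* optional parameters with */
        mode_t modeFile;                    /* usable zero defaults */
    } StoragePosixNewParam;

    typedef struct Storage Storage;

    Storage *storagePosixNew(const char *path, StoragePosixNewParam param);

    /* callers name only the parameters they need, so new parameters can be
       added without touching existing call sites */
    #define storagePosixNewP(path, ...)                                        \
        storagePosixNew(path, (StoragePosixNewParam){.write = false, __VA_ARGS__})

    /* usage: Storage *repo = storagePosixNewP("/repo", .write = true); */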
There is no conflict if the path containing a file link is a parent path of a path link. The Perl code apparently had this right but the migration to C missed it.
Exclude this case when checking for link conflicts.
There have been a number of segfaults reported because a string option expected to be non-null was actually null. This is generally due to options that are expected to be set but are in fact optional.
Protect against this by creating cfgOptionStrNull() to get options that can be null, while changing cfgOptionStr() to always expect non-null. There are relatively few places where nulls are expected.
There is definitely a chance for breakage here as null options might currently be working in the field but will be caught by this new check. Hopefully introducing the check early in the release cycle will allow us to catch any issues.
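A sketch of the resulting contract at call sites (getter names per this change; the option constants are illustrative):

    // options that must be set for the running command use the strict
    // getter, which now errors on null instead of silently returning it
    const String *pgPath = cfgOptionStr(cfgOptPgPath);

    // the few options that may legitimately be unset use the new getter
    const String *repoHost = cfgOptionStrNull(cfgOptRepoHost);

    if (repoHost != NULL)
    {
        // remote repository logic
    }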
It makes sense to do this check right after the first compression so any issues are caught early.
Also, none of the current compression formats omit decompressCmd so make the test mandatory.
Previously when retention-archive was set (either by the user or by default), archives prior to the archive-start of the oldest remaining full backup (after backup expiration occurred) would be expired even though the retention-archive threshold had not been met. For example, if there were 1 full backup remaining after backup expiration and the retention-archive was set to 2 and retention-archive-type=full, then archives prior to the archive-start of the remaining full backup would still be removed even though retention-archive required 2 full backups remaining before archives should be expired.
The thought was to keep the archive directory clean, and since the full backup did not require prior archives, it was safe to delete them. However, this has caused problems for some users in the past (because they needed the WAL for other purposes) and with the new adhoc and time-based retention features, it was decided that the archives should remain until the threshold was met. The archives will eventually be removed and if having them causes space issues, the expire command can always be run and retention-archive adjusted.
Coverity was concerned that regExpError() might return and lead to an invalid reference of "this". This was unlikely since the function should never return but Coverity didn't know that. Also, a difference in error-handling logic at the two sites could cause the issue Coverity reported if they were to get out of sync.
Fix by refactoring out the core error function so that it is clear it will never return.
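A standalone sketch of the shape of the fix, with the project's THROW() stood in by abort() to keep the example self-contained:

    #include <regex.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* one shared error routine, marked noreturn so the analyzer knows
       control never continues past a regex failure */
    __attribute__((noreturn))
    static void
    regExpError(int error)
    {
        char message[4096];

        regerror(error, NULL, message, sizeof(message));
        fprintf(stderr, "regex error: %s\n", message);
        abort();
    }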
this->input is set to NULL when the read input goes to EOF but it was possible that this->input could be used again in a subsequent loop, according to Coverity.
In fact this would really only be a problem if EOF suddenly went back to false, which is not allowed. However, checking this->input is cheaper than calling ioReadEofDriver() on each loop, so this change makes sense as an optimization and it makes Coverity happy, too.
If an option may not be valid for a command it should be checked with cfgOptionValid() or cfgOptionTest().
It appears this rule is followed pretty strictly since the only changes required were in unit tests.
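As a sketch, shared code guards access to a command-dependent option like this (option constant illustrative):

    // repo-host is not valid for every command, so check validity before use
    if (cfgOptionValid(cfgOptRepoHost) && cfgOptionTest(cfgOptRepoHost))
    {
        // the option is valid for this command and has a value
    }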
The specified backup set (i.e. the backup label provided and all of its dependent backups, if any) will be expired regardless of backup retention rules except that at least one full backup must remain in the repository.
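For example (stanza name and backup label illustrative):

    pgbackrest --stanza=demo --set=20200401-000000F expire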
Each option type enforced its own constraints but there was a lot of duplication. Centralize the enforcement to remove the duplication.
Also convert the option type assert to a production error. This is unlikely to happen in production but the test is quite cheap so it can't hurt.
Finally, add a NULL check. Most option types can never be NULL.
This is implemented by checking for a backup lock on the host where info is running, so there are a few limitations:
* It is not currently possible to know which command is running: backup, expire, or stanza-*. The stanza commands are very unlikely to be running so it's pretty safe to guess backup/expire. Command information may be added to the lock file to improve the accuracy of the reported command.
* If the info command is run on a host that is not participating in the backup, e.g. a standby, then there will be no backup lock. This seems like a minor limitation since running info on the repo or primary host is preferred.
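A hedged illustration of the resulting info output (exact wording may differ):

    stanza: demo
        status: ok (backup/expire running)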