Checking for free space in the output buffer worked, but if the buffer was completely filled then deflate() would need to be called again, which was wasteful and a bit confusing for debugging.
Instead, use Z_STREAM_END to detect that compression is done.
This change was inspired by the bz2 implementation in a021c9fe, since bz2 does not allow BZ2_bzCompress() to be called after BZ_STREAM_END is returned. That made it clear that the gz code should take the same approach, even though zlib is more tolerant. The documentation at https://www.zlib.net/manual.html agrees.
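For illustration, a minimal sketch (not the pgBackRest filter code) of finishing a stream by looping on deflate() with Z_FINISH until it returns Z_STREAM_END, rather than inferring completion from leftover space in the output buffer:

    #include <assert.h>
    #include <zlib.h>

    /* Finish a deflate stream; assumes zStream was set up with deflateInit()
       and all input has already been provided. */
    static void
    deflateFinish(z_stream *zStream, unsigned char *out, unsigned int outSize)
    {
        int result;

        do
        {
            zStream->next_out = out;
            zStream->avail_out = outSize;

            /* Z_FINISH tells zlib that no more input is coming */
            result = deflate(zStream, Z_FINISH);
            assert(result == Z_OK || result == Z_STREAM_END);

            /* ... write (outSize - zStream->avail_out) bytes from out ... */
        }
        while (result != Z_STREAM_END);
    }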
Newer versions of sudo output this message to stderr when run in a container:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
See https://github.com/sudo-project/sudo/issues/42 for details.
A simple workaround is to prevent sudo from disabling core dumps. This seems safe enough because if sudo is segfaulting then core files are the least of our worries.
apt.postgresql.org will soon be providing packages for arm64 so it's important that we support it.
Testing on multiple architectures also helps expose latent issues in code that appears to work on more popular architectures. See 10a5182d for an example.
There are a number of Valgrind errors on Ubuntu 12.04 which do not happen on newer distro versions. However, suppressions for these errors have masked legitimate issues in subsequent code.
Instead, make suppressions VM specific so errors in other VMs are not masked.
EVP_CIPHER_CTX_cleanup() was being called instead of EVP_CIPHER_CTX_free() so most of the memory was being freed but not all of it.
This leak was masked by Valgrind suppressions that are only applicable to Ubuntu 12.04; those suppressions will be addressed in a future commit.
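As a sketch of the difference (simplified, not the actual call site): a context allocated with EVP_CIPHER_CTX_new() must be released with EVP_CIPHER_CTX_free(), which frees the structure itself in addition to clearing its contents.

    #include <openssl/evp.h>

    static void
    cipherContextExample(void)
    {
        EVP_CIPHER_CTX *cipherContext = EVP_CIPHER_CTX_new();

        /* ... EVP_EncryptInit_ex(), EVP_EncryptUpdate(), EVP_EncryptFinal_ex() ... */

        /* EVP_CIPHER_CTX_cleanup() only clears the context's contents, leaving
           the structure itself allocated -- hence the leak */
        EVP_CIPHER_CTX_free(cipherContext);
    }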
Travis-CI arm64 was not happy with this pattern, perhaps because connected was being reset after a longjmp() even though it should have retained its initial value of false. In any case, tlsClientOpen() ended up returning NULL on error rather than throwing an exception.
The new pattern seems simpler and passes all tests unmodified, so even though the error was only seen in TlsClient it makes sense to propagate to the other clients.
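The underlying hazard is standard C behavior rather than anything specific to pgBackRest's TRY/CATCH macros: a non-volatile local variable that is modified between setjmp() and longjmp() has an indeterminate value after the jump. A minimal sketch of that behavior (hypothetical names, not the TlsClient code):

    #include <setjmp.h>
    #include <stdbool.h>
    #include <stdio.h>

    static jmp_buf errorJump;

    static void
    failingConnect(void)
    {
        longjmp(errorJump, 1);                          /* simulate a thrown error */
    }

    int
    main(void)
    {
        /* without volatile, the value read in the error path below would be
           indeterminate because the variable changes after setjmp() */
        volatile bool connected = false;

        if (setjmp(errorJump) == 0)
        {
            connected = true;                           /* modified after setjmp() ... */
            failingConnect();                           /* ... then an error longjmp()s */
        }
        else
            printf("connected = %s\n", connected ? "true" : "false");

        return 0;
    }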
Resolving localhost can vary based on the local network configuration so it is safer to just use a static IP.
This was found while testing on Travis-CI arm64.
bzip2 is a widely available, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), while being around twice as fast at compression and six times faster at decompression.
bzip2 is currently available on all supported platforms.
This was an oversight in 438b957f which added multiple compression type support. The booleans were interpreted as none and gz which works fine for the CompressType enum until the position of gz or none changes.
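A hypothetical sketch of the hazard (names invented for illustration): a bool passed where a CompressType is expected maps false/true to the first two enumerators, which is only correct while those happen to be none and gz.

    #include <stdbool.h>

    typedef enum
    {
        compressTypeNone,                               /* 0 -- what false happens to mean */
        compressTypeGz,                                 /* 1 -- what true happens to mean */
        compressTypeLz4,                                /* reordering or inserting here breaks bool callers */
    } CompressType;

    static void
    fileCompress(CompressType type)
    {
        (void)type;                                     /* stub for illustration */
    }

    static void
    caller(bool compress)
    {
        /* Incorrect: silently relies on enum positions */
        fileCompress(compress);

        /* Correct: map the flag explicitly */
        fileCompress(compress ? compressTypeGz : compressTypeNone);
    }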
It's not clear how useful single-character zero-terminated constants are or whether we want to propagate them through the code, but it at least makes sense to centralize the constants used by the Buffer and String objects.
Zstandard is a fast lossless compression algorithm targeting real-time compression scenarios at zlib-level and better compression ratios. It's backed by a very fast entropy stage, provided by the Huff0 and FSE libraries.
Zstandard version >= 1.0 is required, which is generally only available on newer distributions.
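For orientation only, a minimal sketch of libzstd's one-shot API (the compression in pgBackRest is filter-based, so this is not the actual implementation):

    #include <stdlib.h>
    #include <zstd.h>

    /* Returns the compressed size, or 0 on error; *dstOut is allocated here. */
    static size_t
    zstCompressOneShot(const void *src, size_t srcSize, void **dstOut, int level)
    {
        size_t dstCapacity = ZSTD_compressBound(srcSize);
        void *dst = malloc(dstCapacity);

        size_t dstSize = ZSTD_compress(dst, dstCapacity, src, srcSize, level);

        if (ZSTD_isError(dstSize))
        {
            free(dst);
            return 0;
        }

        *dstOut = dst;
        return dstSize;
    }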
The previous location was too late to allow --var=s3-all=y to work with --require=/repo-host, which depends on /quickstart/configure-archiving.
Since the section is not included in production documentation, the position is not very important to flow so just move it to where it works.
If the WAL path is absolute then pg1-path should be optional but in fact it was required to load pg_control.
Skip the pg_control check when pg1-path is not specified. The check against the stanza version/system-id remains to protect the repo from corruption.
Perhaps this was intended to verify the WAL size but was never implemented.
Verifying the WAL size is probably a good idea so this member may be added back if the feature is implemented.
An upcoming feature requires new parameters for storagePosixNew() and this causes a lot of churn because almost every test creates a Posix storage object. Some refactoring in the tests might reduce this duplication but storagePosixNew() is collecting a lot of parameters so converting to storagePosixNewP() makes sense in any case.
There are relatively few call sites in the core code but they still benefit from better readability after this change.
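The P() convention works roughly as sketched below (field names are hypothetical, not the real storagePosixNew() parameters): optional parameters live in a struct and the macro fills it with designated initializers, so call sites name only the parameters they need and adding a parameter no longer touches every caller.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct StoragePosixNewParam
    {
        unsigned int modeFile;                          /* hypothetical optional parameter */
        bool write;                                     /* hypothetical optional parameter */
    } StoragePosixNewParam;

    #define storagePosixNewP(path, ...)                                                \
        storagePosixNew(path, (StoragePosixNewParam){__VA_ARGS__})

    static void *
    storagePosixNew(const char *path, StoragePosixNewParam param)
    {
        (void)path; (void)param;                        /* stub for illustration */
        return NULL;
    }

    /* Call sites name only what they need, e.g.:
           storagePosixNewP("/var/lib/pgbackrest", .write = true); */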
There is no conflict if the path containing a file link is a parent path of a path link. The Perl code apparently had this right but the migration to C missed it.
Exclude this case when checking for link conflicts.
There have been a number of segfaults reported because a string option expected to be non-null was actually null. This is generally due to options that are expected to be set but are in fact optional.
Protect against this by creating cfgOptionStrNull() to get options that can be null, while changing cfgOptionStr() to always expect non-null. There are relatively few places where nulls are expected.
There is definitely a chance for breakage here as null options might currently be working in the field but will be caught by this new check. Hopefully introducing the check early in the release cycle will allow us to catch any issues.
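A self-contained sketch of the intended contract (types simplified to plain C strings and a stand-in option store; the real functions return pgBackRest String objects):

    #include <assert.h>
    #include <stddef.h>

    typedef enum {cfgOptExample, cfgOptMax} ConfigOption;   /* hypothetical option ids */

    static const char *optionValue[cfgOptMax];              /* stand-in for config storage */

    /* Explicit opt-in for options that may legitimately be unset */
    static const char *
    cfgOptionStrNull(ConfigOption optionId)
    {
        return optionValue[optionId];                       /* may be NULL */
    }

    /* Callers that expect the option to always be set get a hard failure here
       instead of a segfault somewhere downstream */
    static const char *
    cfgOptionStr(ConfigOption optionId)
    {
        const char *value = cfgOptionStrNull(optionId);
        assert(value != NULL);
        return value;
    }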
It makes sense to do this check right after the first compression so any issues are caught early.
Also, none of the current compression formats omit decompressCmd so make the test mandatory.
Previously when retention-archive was set (either by the user or by default), archives prior to the archive-start of the oldest remaining full backup (after backup expiration occurred) would be expired even though the retention-archive threshold had not been met. For example, if there were 1 full backup remaining after backup expiration and the retention-archive was set to 2 and retention-archive-type=full, then archives prior to the archive-start of the remaining full backup would still be removed even though retention-archive required 2 full backups remaining before archives should be expired.
The thought was to keep the archive directory clean and since the full backup did not require prior archives, it was safe to delete them. However, this has caused problems for some users in the past (because they needed the WAL for other purposes) and with the new adhoc and time-based retention features, it was decided that the archives should remain until the threshold was met. The archives will eventually be removed and if having them causes space issues, the expire command and the retention-archive can always be run and adjusted.
Coverity was concerned that regExpError() might return and lead to an invalid reference of "this". This was unlikely since the function should never return but Coverity didn't know that. Also, a difference in error-handling logic at the two sites could cause the issue Coverity reported if they were to get out of sync.
Fix by refactoring out the core error function so that it is clear it will never return.
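The pattern is roughly as follows (hypothetical names, exit() standing in for pgBackRest's THROW macros, and the GCC/Clang attribute syntax assumed): marking the shared error helper as never returning makes it clear to both readers and static analysis that nothing is used after the error is raised.

    #include <stdio.h>
    #include <stdlib.h>

    __attribute__((noreturn)) static void
    regExpErrorRaise(int errorCode)
    {
        fprintf(stderr, "regular expression error %d\n", errorCode);
        exit(1);                                            /* stand-in for THROW() */
    }

    static void
    regExpCheck(int result)
    {
        /* the analyzer now knows execution never continues past this call */
        if (result != 0)
            regExpErrorRaise(result);
    }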
this->input is set to NULL when the read input goes to EOF but it was possible that this->input could be used again in a subsequent loop, according to Coverity.
In fact this would really only be a problem if EOF suddenly went back to false, which is not allowed. However, checking this->input is cheaper than calling ioReadEofDriver() on each loop, so this change makes sense as an optimization and it makes Coverity happy, too.
If an option may not be valid for a command it should be checked with cfgOptionValid() or cfgOptionTest().
It appears this rule is followed pretty strictly since the only changes required were in unit tests.
The specified backup set (i.e. the backup label provided and all of its dependent backups, if any) will be expired regardless of backup retention rules except that at least one full backup must remain in the repository.
Each option type enforced its own constraints but there was a lot of duplication. Centralize the enforcement to remove the duplication.
Also convert the option type assert to a production error. This is unlikely to happen in production but the test is quite cheap so it can't hurt.
Finally, add a NULL check. Most option types can never be NULL.
This is implemented by checking for a backup lock on the host where info is running so there are a few limitations:
* It is not currently possible to know which command is running: backup, expire, or stanza-*. The stanza commands are very unlikely to be running so it's pretty safe to guess backup/expire. Command information may be added to the lock file to improve the accuracy of the reported command.
* If the info command is run on a host that is not participating in the backup, e.g. a standby, then there will be no backup lock. This seems like a minor limitation since running info on the repo or primary host is preferred.
Make the restore clean process look more like manifest build, i.e. do cleanup of each target root directory outside the main cleanup callback. This means some code duplication but removes the logic handling "dot" paths.
Add tests for both restore and backup (which already worked but was not tested).
sk_GENERAL_NAME_free() only freed the name stack, not the names in the stack. sk_GENERAL_NAME_pop_free() frees both.
Due to aggressive connection reuse this leak was unlikely to be very noticeable.
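In isolation the fix looks roughly like this (hedged sketch; the surrounding certificate-name matching code is omitted):

    #include <openssl/x509v3.h>

    static void
    altNameExample(X509 *certificate)
    {
        GENERAL_NAMES *altNameStack = X509_get_ext_d2i(
            certificate, NID_subject_alt_name, NULL, NULL);

        if (altNameStack != NULL)
        {
            /* ... iterate with sk_GENERAL_NAME_num()/sk_GENERAL_NAME_value() ... */

            /* frees the stack and each GENERAL_NAME it holds, unlike
               sk_GENERAL_NAME_free() which frees only the stack */
            sk_GENERAL_NAME_pop_free(altNameStack, GENERAL_NAME_free);
        }
    }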
Bug Fixes:
* Remove empty subexpression from manifest regular expression. macOS was not happy about this though other platforms seemed to work fine. (Fixed by David Raftis.)
Improvements:
* Non-blocking TLS implementation. (Reviewed by Slava Moudry, Cynthia Shang, Stephen Frost.)
* Only limit backup copy size for WAL-logged files. The prior behavior could possibly lead to postgresql.conf or postgresql.auto.conf being truncated in the backup. (Reviewed by Cynthia Shang.)
* TCP keep-alive options are configurable. (Suggested by Marc Cousin.)
* Add io-timeout option.
The documentation recommends clearing the error queue before each SSL_*() call.
Since we always check the results of SSL_*() for errors instead of blindly calling SSL_get_error() it's not clear this makes any difference, but it still seems like a good idea to be sure there are no stray errors in the queue.
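As a sketch of the pattern (error handling trimmed to the essentials):

    #include <openssl/err.h>
    #include <openssl/ssl.h>

    static int
    sslReadChecked(SSL *session, void *buffer, int size)
    {
        ERR_clear_error();                              /* discard any stray errors */

        int result = SSL_read(session, buffer, size);

        if (result <= 0)
        {
            int error = SSL_get_error(session, result); /* now reflects only this call */
            (void)error;                                /* ... handle/raise the error ... */
        }

        return result;
    }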