The prior code used TRY...CATCH blocks to clean up mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized at the time of the error and ensuring that the prior memory context was restored.
This worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.
Instead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.
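A minimal sketch of the idea, with hypothetical names and a fixed-size stack for brevity (the real implementation differs):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct CleanupItem
    {
        void *context;                               // context that was created or switched to
        bool isNew;                                  // true if created (free on error), false if switched (restore on error)
    } CleanupItem;

    static CleanupItem cleanupStack[128];
    static size_t cleanupDepth;

    // Pushing and popping are cheap operations on the success path
    static void cleanupPush(void *context, bool isNew)
    {
        assert(cleanupDepth < sizeof(cleanupStack) / sizeof(cleanupStack[0]));
        cleanupStack[cleanupDepth++] = (CleanupItem){.context = context, .isNew = isNew};
    }

    static void cleanupPop(void)
    {
        assert(cleanupDepth > 0);
        cleanupDepth--;
    }

    // Only runs when an error is thrown: free new contexts and restore prior
    // contexts down to the depth recorded when the error handler was established
    static void cleanupUnwind(size_t savedDepth, void (*contextFree)(void *), void (*contextRestore)(void *))
    {
        while (cleanupDepth > savedDepth)
        {
            CleanupItem *item = &cleanupStack[--cleanupDepth];

            if (item->isNew)
                contextFree(item->context);
            else
                contextRestore(item->context);
        }
    }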
One bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros.
If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.
This is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file.
This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.
Restore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required.
The main improvement is a double-fork to prevent zombie processes if the parent process exits after the (child) async process. This is a real possibility since the parent process sticks around to monitor the results of the async process.
In the first fork, ignore SIGCHLD in the very unlikely case that the async process exits before the first fork. This is probably only possible if the async process exits immediately, perhaps due to a chdir() failure. Set SIGCHLD back to default in the async process so waitpid() will work as expected.
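A simplified sketch of the double-fork shape being described (error handling and the chdir() are omitted; this is not the actual implementation):

    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void asyncDetach(void (*asyncMain)(void))
    {
        pid_t pid = fork();                          // first fork

        if (pid == 0)
        {
            // Ignore SIGCHLD so the intermediate process is not affected if the
            // async process exits immediately (e.g. chdir() failure)
            signal(SIGCHLD, SIG_IGN);

            if (fork() == 0)                         // second fork creates the async process
            {
                signal(SIGCHLD, SIG_DFL);            // restore default so waitpid() works as expected
                asyncMain();
            }

            _exit(0);                                // intermediate process exits right away
        }

        // Reap the intermediate process; the async process is reparented to init so it
        // cannot become a zombie even if this (parent) process exits first
        waitpid(pid, NULL, 0);
    }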
Also update the comment on chdir() to more accurately reflect what is happening.
Finally, add a test in certain debug builds to ensure the first fork exits very quickly. This only works when valgrind is not in use because valgrind makes forking so slow that it is hard to tell if the async process performed work or not (in the case that the second fork goes missing and the async process is a direct child).
In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully.
2a06df93 removed the error file so an old error would not be reported before the async process had a chance to try again. However, if the async process was already running this might lead to a timeout error before reporting the correct error.
Instead, remove the error files once we know that the async process will start, i.e. after the archive lock has been acquired.
This effectively reverts 2a06df93.
Generally, the content-length or transfer-encoding headers will be used to specify how much content should be expected.
There is a special case where the server sends 'Connection: close' without the content headers and the content may be read up until eof.
This appears to be an atypical usage but it is required by the specification.
Auto-selection is performed only when --set is not specified. If a backup set for the given target time cannot be found, the latest (default) backup set will be used.
Currently a limited number of date formats are recognized and timezone names are not allowed, only timezone offsets.
Add tzPartsValid() and tzOffsetSecond() to calculate timezone offsets from user provided values.
Update epochFromParts() to accept a timezone offset in seconds.
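A simplified sketch of the offset calculation (hypothetical helper, not necessarily the actual signature):

    #include <assert.h>

    // Convert user-provided timezone parts, e.g. -03:30, into an offset in seconds
    static int tzOffsetSecondsCalc(int tzHour, int tzMinute)
    {
        // Real-world offsets run from -12:00 to +14:00
        assert(tzHour >= -12 && tzHour <= 14);
        assert(tzMinute >= 0 && tzMinute <= 59);

        // Minutes take the sign of the hour, so -03:30 becomes -(3 * 3600 + 30 * 60)
        return tzHour * 3600 + (tzHour < 0 ? -1 : 1) * tzMinute * 60;
    }

epochFromParts() would then subtract this offset from the epoch computed as if the parts were UTC.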
The test that checks for no output from the server was leaving a connection open which valgrind was complaining about.
Wait on the server long enough to cause the error on the client then close the connection to free the memory.
Validate that checksums exist for zero size files. This means that the checksums for zero size files are explicitly set by backup even though they'll always be the same. Also validate that zero length files have the correct checksum.
Validate that repo size is > 0 if size is > 0. No matter what compression type is used a non-zero amount of data cannot be stored in zero bytes.
This is a modest start but it addresses the specific issue that was caused by the bug fixed in 45ec694a. This validation will produce an immediate error rather than erroring out partway through the restore.
More validations are planned but this is the most important one and seems safest for this release.
If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.
The issue was that removing files from the manifest invalidated the pointers stored in the processing queues. When a file was removed, all the pointers shifted to the next file in the list, causing a file to be unprocessed. Since the unprocessed file was still in the manifest it would be saved with no checksum, causing a failure on restore.
When process-max was > 1 then the bug would often not express since the file had already been pulled from the queue and updates to the manifest are done by name rather than by pointer.
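A tiny standalone illustration of the hazard: a pointer into a contiguous list silently refers to the following element once an earlier element is removed.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int list[] = {10, 20, 30, 40};
        size_t listSize = 4;

        int *queued = &list[2];                      // a queue holds a pointer to the element 30

        // Remove the element at index 1 (20) by shifting the tail down
        memmove(&list[1], &list[2], (listSize - 2) * sizeof(int));
        listSize--;

        printf("%d\n", *queued);                     // prints 40 -- the element 30 was never processed
        return 0;
    }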
pkg-config is a generic way to get build options rather than relying on a package-specific utility.
XML2_CONFIG can be used to override this utility for systems that do not ship pkg-config.
Previously memNew() used memset() to initialize all struct members to 0, NULL, false, etc. While this appears to work in practice, it is a violation of the C specification. For instance, NULL == 0 must be true but neither NULL nor 0 must be represented with all zero bits.
Instead use designated initializers to initialize structs. These guarantee that struct members will be properly initialized even if they are not specified in the initializer. Note that due to a quirk in the C99 specification at least one member must be explicitly initialized even if it needs to be the default value.
Since pre-zeroed memory is no longer required, adjust memAllocInternal()/memReallocInternal() to return raw memory and update dependent functions accordingly. All instances of memset() have been removed except in debug/test code where needed.
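A standalone sketch of the pattern, with an illustrative struct and malloc() standing in for the raw allocation:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct ExampleObject
    {
        const char *name;
        bool open;
        uint64_t size;
    } ExampleObject;

    int main(void)
    {
        // The allocation is raw (not zeroed), as memNew() now returns
        ExampleObject *this = malloc(sizeof(ExampleObject));

        if (this != NULL)
        {
            // Designated initializer: C99 requires at least one member to be named, and
            // every member not named is guaranteed to be initialized to 0/NULL/false
            *this = (ExampleObject){.open = false};

            free(this);
        }

        return 0;
    }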
Add memNewPtrArray() to allocate an array of pointers and automatically set all pointers to NULL.
Rename memGrowRaw() to the more logical memResize().
The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed in base 10 instead of base 16, which led to errors when the timeline was ≥ 0xA.
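For example, timeline 10 appears as 0000000A in WAL segment names, so a base 10 conversion produces the wrong value (illustrative snippet):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *timelineStr = "0000000A";        // timeline 10 as it appears in a WAL segment name

        uint32_t base10 = (uint32_t)strtoul(timelineStr, NULL, 10);   // stops at 'A' and yields 0
        uint32_t base16 = (uint32_t)strtoul(timelineStr, NULL, 16);   // yields 10, the correct timeline

        printf("%u %u\n", base10, base16);
        return 0;
    }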
This macro block encapsulates the common pattern of switching to the prior (formerly called old) mem context to return results from a function.
Also rename MEM_CONTEXT_OLD() to memContextPrior(). This violates our convention of macros being in all caps but memContextPrior() will become a function very soon so this will reduce churn.
A few places were using just memContextNew(), probably because they did not immediately need to create anything in the new context, but it's better if we use the same pattern everywhere, even if it results in a few extra mem context switches.
Bug Fixes:
* Fix options being ignored by asynchronous commands. The asynchronous archive-get/archive-push processes were not loading options configured in command configuration sections, e.g. [global:archive-get]. (Reviewed by Cynthia Shang. Reported by Urs Kramer.)
* Fix handling of \ in filenames. \ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading. Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Features:
* pgBackRest is now pure C.
* Add pg-user option. Specifies the database user name when connecting to PostgreSQL. If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior. (Contributed by Mike Palmiotto.)
* Allow path-style URIs in S3 driver.
Improvements:
* The backup command is implemented entirely in C. (Reviewed by Cynthia Shang.)
The local, remote, archive-get-async, and archive-push-async commands were used to run functionality that was not directly available to the user. Unfortunately that meant they would not pick up options from the command that the user expected, e.g. backup, archive-get, etc.
Remove the internal commands and add roles which allow pgBackRest to determine what functionality is required without implementing special commands. This way the options are loaded from the expected command section.
Since remote is no longer a specific command with its own options, more manipulation is required when calling remote. This might be something we can improve in the config system but it may be worth leaving as is because it is a one-off, for now at least.
Although path-style URIs have been deprecated by AWS, they may still be used with products like Minio because no additional DNS configuration is required.
Path-style URIs must be explicitly enabled since it is not clear how they can be auto-detected reliably. More importantly, faulty detection could cause regressions in current installations.
Time is supported in all drivers with the update to S3 at 61538f93, so it is now possible to add time to the ls command and have it work on all repo types.
Parameter lists that are passed directly to exec*() do not need quoting when spaces are present. Worse, the quotes will not be stripped and the option value will be garbled.
Unfortunately this still does not fix all issues with quoting since we don't know how it might need to be escaped to work with SSH command configuration. The answer seems to be to pass the options in the protocol layer but that's beyond the scope of this commit.
This option was overloaded on the general type option but it makes sense to split this out since the meaning is pretty different.
Rename the values to conform to current standards, i.e. pg and repo, now that the Perl code won't care anymore.
Previously dates were not being filled by these functions which was fine since dates were not used.
We plan to use dates for the ls command plus it makes sense for the driver to be complete since it will be used as an example.
These are similar to what mktime() and strptime() do but they ignore the local system timezone which saves having to munge the TZ env variable to do time conversions.
This macro was created before the String object existed so subsequent usage with String always included a lot of strPtr() wrapping.
TEST_RESULT_STR_Z() had already been introduced but a wholesale replacement of TEST_RESULT_STR() was not done since the priority was on the C migration.
Update all calls to (old) TEST_RESULT_STR() with one of the following variants: (new) TEST_RESULT_STR(), TEST_RESULT_STR_Z(), TEST_RESULT_Z(), TEST_RESULT_Z_STR().
Specifies the database user name when connecting to PostgreSQL.
If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior.
Most of these tests are just checking that errors are thrown when required. These are well covered in various unit tests.
The "cannot resume" tests are also well covered in the backup unit tests.
Finally, config warnings are well covered in the config unit tests.
There is more to be done here, but this accounts for the low-hanging fruit.
Set log-level-file=off when more than one test will run. In this case it is impossible to see the logs anyway since they will be automatically cleaned up after the test. This improves performance pretty dramatically since trace-level logging is expensive. If a single integration test is run then log-level-file is trace by default but can be changed with the --log-level-test-file option.
Reduce buffer-size to 64k to save memory during testing and allow more processes to run in parallel.
Update log replacement rules so that these options can change without affecting expect logs.
The co6 tests were occasionally running out of space so bump up the size of the ramdisk a bit to hopefully prevent this.
A longer term solution would be to disable the trace-level file logs when running on Travis CI since they seem to be using most of the space.
PostgreSQL >= 9.6 uses non-exclusive backup which has implicit stop-auto since the backup will stop when the connection is terminated.
The warning was made more verbose in 1f2ce45e but this now seems like a bad idea since there are likely users with mixed version environments where stop-auto is enabled globally. There's no reason to fill their logs with warnings over a harmless option. If anything we should warn when stop-auto is explicitly set to false but this doesn't seem very important either.
Revert to the prior behavior, which is to warn and reset when stop-auto is enabled on PostgreSQL < 9.3.
\ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading.
Use jsonFromStr() to properly quote and escape \.
Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.
Remove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.
Remove "c" option that allowed the remote to tell if it was being called from C or Perl.
Code to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.
Update build and installation instructions in the user guide.
Remove all Perl unit tests.
Remove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.
Rename perlReq to binReq in define.yaml to indicate that the binary is required for a test. This had been the actual meaning for quite some time but the key was never renamed.
For the most part this is a direct migration of the Perl code into C except as noted below.
A backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.
The logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. This has never been an issue in the field that we know of, but we found it in testing.
For online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.
Archive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.
Resume will now work even if hardlink settings have been changed.
Reviewed by Cynthia Shang.
/ takes precedence over & but the appropriate parens were not provided.
By some bad luck the tests worked either way, so add a new test that only works the correct way to prevent a regression.
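A small standalone demonstration of the precedence issue (values are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        unsigned value = 0x12345;

        unsigned withoutParens = value & 0xFFFF / 2; // parsed as value & (0xFFFF / 2)
        unsigned withParens = (value & 0xFFFF) / 2;  // the intended grouping

        printf("%u %u\n", withoutParens, withParens);   // 9029 vs 4514 -- different results
        return 0;
    }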
Bug Fixes:
* Fix archive-push/archive-get when PGDATA is symlinked. These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call. (Reported by Stephen Frost, Milosz Suchy.)
* Fix reference list when backup.info is reconstructed in expire command. Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual. This unlikely combination of events means this is probably not a problem in the field.
* Fix segfault on unexpected EOF in gzip decompression. (Reported by Stephen Frost.)
The TZ environment variable was not reliably pushed down to the test processes.
Instead pass TZ via a command line parameter and set explicitly in the test process.
Commit 7168e074 tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked.
If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call.
If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.
Change the existing assertion to an error to catch this condition in production gracefully.
82df7e6f and 9856fef5 updated tests that used test points in preparation for the feature not being available in the C code.
Since test points are no longer used remove the infrastructure.
Also remove one stray --test option in mock/all that was essentially a noop but no longer works now that the option has been removed.
Pq script errors are now printed in test output in case they are being masked by a later error.
Once a script error occurs, the same error will be thrown forever rather than throwing a new error on the next item in the script.
HRNPQ_MACRO_CLOSE() is not required in scripts unless harnessPqScriptStrictSet(true) is called. Most higher-level tests should not need to run in strict mode.
The command/check test seems to require strict mode but there's no apparent reason why it should. This would be a good thing to look into at some point.
Some log output (e.g. time) is hard to test because the values can change between tests.
Add expressions to replace substrings in the log with predictable values to simplify testing.
This is similar to the log replacement facility available for Perl expect log testing.
A recopy would occur if the size or checksum was invalid but on error the backup would terminate.
Instead, recopy the resumed file on any error. If the error is systemic (e.g. network failure) then it should show up again during the recopy.
These were not getting updated to match the directory name when the manifests were copied.
The Perl code didn't care but the C code expects labels to be set correctly.
For now this is only used in testing but there are places where it could be useful in the core code.
Even if that turns out not to be true, it doesn't seem worth implementing a new version in testing just to capture a few values that we already have.
This is to maintain compatibility with the older Perl code that returned the lowest sorted order item in a tie.
For other datatypes the C code returns the same value, often enough at least to not cause churn in the expect tests.
Adding a manifest to backup.info was migrated to C in 4e4d1f41 but deduplication of the references was missed leading to a reference for every file being added to backup.info.
Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual.
This unlikely combination of events means this is probably not a problem in the field.
Archive check does not run when in offline backup mode but the option was set to true in the manifest. It's harmless since these options are informational only but it could cause confusion when debugging.
Test points are not supported by the new C code so these will be replaced with unit tests.
The fact that the tests still pass even when the changes aren't made mid-backup (except application_name) shows how weak they were in the first place.
Even so, this does represent a regression in (soon to be removed) Perl coverage.
Test points will not be available in the C code so update these tests as best as possible without using them.
This represents a loss of coverage for the Perl code (soon to be removed) which will be made up in the C code with unit tests.
These tests require test points which are not being implemented in the C code.
This functionality is fully tested in the command/control unit tests so integration tests are no longer required.
This expression determines which files contain page checksums but it was also including the directory above the relation directories. In a real PostgreSQL installation this is not a problem because these directories don't contain any files.
However, our tests place a file in `base` which the Perl code thought should have page checksums while the new C code says no.
Update the expression to document the change and avoid churn in the expect logs later.
Installing lcov 1.14 everywhere turned out to be a problem just as using 1.13 on Ubuntu 19.04 was.
Since we primarily use Ubuntu 18.04 for coverage testing and reporting, we definitely want to make sure that works. So, revert to using the default packaged lcov except when specified otherwise in VmTest.pm.
PostgreSQL minor version releases are also included since all containers have been rebuilt.
The issue "fixed" in f01aa586 was caused by treating all strings as format strings while logging, which was fixed in 0c05df45.
Revert because there no longer seems a reason for the extra logic, and it was only partially applied, i.e. not to env vars, command-line options, or config options.
Using the same macros for formatted and unformatted logging had several disadvantages.
First, the compiler was unable to verify the format string against the parameters.
Second, legitimate % characters in messages were being interpreted as format characters with garbage output ensuing.
Add _FMT() variants and update all call sites to use the correct variant.
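A sketch of the split with hypothetical macro bodies (not the project's actual macros):

    #include <stdarg.h>
    #include <stdio.h>

    // Stand-in for an internal logging function; the format attribute lets the compiler
    // verify format strings passed through the _FMT() variant
    __attribute__((format(printf, 1, 2)))
    static void logWrite(const char *format, ...);

    static void logWrite(const char *format, ...)
    {
        va_list args;
        va_start(args, format);
        vfprintf(stderr, format, args);
        va_end(args);
        fputc('\n', stderr);
    }

    #define LOG_WARN(message)   logWrite("%s", message)   // message is opaque, so a literal % is safe
    #define LOG_WARN_FMT(...)   logWrite(__VA_ARGS__)     // real format string, checked by the compiler

    int main(void)
    {
        LOG_WARN("progress: 50% complete");          // would be garbled if treated as a format string
        LOG_WARN_FMT("progress: %d%% complete", 50);
        return 0;
    }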
This character causes problems in C and in the shell if we try to output it in an error message.
Forbid it completely and spell it out in error messages to avoid strange effects.
There is likely a better way to deal with the issue but this will do for now.
The protocol timeout tests have been superseded by unit tests.
The TEST_BACKUP_RESUME test point was incorrectly included in a number of tests, probably a copy-paste error. It didn't hurt anything but it did add 200ms to each test where it appeared.
Catalog and control version tests were redundant. The database version and system id tests covered the important code paths and the C code gets these values from a lookup table.
Finally, fix an incomplete update to the backup.info file while munging for tests.
It is occasionally useful to get information about a file outside of the base storage path. storageLocal() can be used in some cases but when the storage is remote it doesn't seem worth creating a separate storage object for adhoc info requests.
storageInfo() is a read-only operation so this seems pretty safe. The noPathEnforce parameter will make auditing exceptions easy.
The ability to disable enforcement (i.e., the requested absolute path is within the storage path) globally will be removed after the Perl migration.
The feature will still be needed occasionally so allow it in an adhoc fashion.
A . in a link will always lead to an error since the destination will be inside PGDATA. However, it is accepted symlink syntax so it's better to resolve it and get the correct error message.
Also, we may have other uses for this function in the future.
Adding a dummy struct member which is always set by the P() macro allows a single macro to be used for parameters or no parameters without violating C's prohibition on the {} initializer.
-Wmissing-field-initializers remains disabled because it still gives wildly different results between versions of gcc.
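An illustration of the dummy-member trick described above; the names and macro here are hypothetical, not the project's API, and the empty __VA_ARGS__ case relies on a GCC/Clang extension:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct FileInfoParam
    {
        bool dummy;                                  // always set by the macro, never read
        bool followLink;
        bool ignoreMissing;
    } FileInfoParam;

    static void fileInfo(const char *path, FileInfoParam param)
    {
        printf("%s follow=%d ignoreMissing=%d\n", path, param.followLink, param.ignoreMissing);
    }

    // One macro serves calls with or without optional parameters because the dummy member
    // keeps the initializer from ever being the invalid {}
    #define fileInfoP(path, ...) fileInfo(path, (FileInfoParam){.dummy = false, __VA_ARGS__})

    int main(void)
    {
        fileInfoP("/var/lib/pgsql");                 // no optional parameters
        fileInfoP("/var/lib/pgsql", .followLink = true);
        return 0;
    }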
It wasn't practical for the main process to be ignorant of the remote path, and in any case knowing the path makes debugging easier.
Pull the remote path when connecting and pass the result of local storagePath() to the remote when making calls.
Pushing output through a JSON blob is not practical if the output is extremely large, e.g. a backup manifest with 100K+ files.
Add read/write routines so that output can be returned in chunks but errors will still be detected.
Previously, options were being filtered based on what was currently valid. For chained commands (e.g. backup then expire) some options may be valid for the first command but not the second.
Filter based on the command definition rather than what is currently valid to avoid logging options that are not valid for subsequent commands. This reduces the number of options logged and will hopefully help avoid confusion and expect log churn.
Bug Fixes:
* Fix remote timeout in delta restore. When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout. (Reported by James Sewell, Jens Wilke.)
* Fix handling of repeated HTTP headers. When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior. (Reported by donicrosby.)
Improvements:
* JSON output from the info command is no longer pretty-printed. Monitoring systems can more easily ingest the JSON without linefeeds. External tools such as jq can be used to pretty-print if desired. (Contributed by Cynthia Shang.)
* The check command is implemented entirely in C. (Contributed by Cynthia Shang.)
Documentation Improvements:
* Document how to contribute to pgBackRest. (Contributed by Cynthia Shang.)
* Document maximum version for auto-stop option. (Contributed by Brad Nicholson.)
Test Suite Improvements:
* Fix container test path being used when --vm=none. (Suggested by Stephen Frost.)
* Fix mismatched timezone in expect test. (Suggested by Stephen Frost.)
* Don't autogenerate embedded libc code by default. (Suggested by Stephen Frost.)
When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior.
Reported by donicrosby.
We had some problems with newer versions so had held off on updating. Those problems appear to have been resolved.
In addition, the --compat flag is no longer required. Prior versions of MinIO required all parts of a multi-part upload (except the last) to be of equal size. The --compat flag was introduced to restore the default S3 behavior. Now --compat is only required when ETag is being used for MD5 verification, which we don't do.
Previously we were using int64_t to debug time_t but this may not be right depending on how the compiler represents time_t, e.g. it could be a float.
Since a mismatch would have caused a compiler error we are not worried that this has actually happened, and anyway the worst case is that the debug log would be wonky.
The primary benefit, aside from correctness, is that it makes choosing a parameter debug type for time_t obvious.
Previously the mock integration tests would be skipped for VMs other than the standard four used in CI. Now VMs outside the standard four will run the same tests as VM4 (currently U18).
Using pg1-path, as we were doing previously, could lead to WAL being copied to/from unexpected places. PostgreSQL sets the current working directory to PGDATA so we can use that to resolve relative paths.
This makes configuring tests easier.
Also add a parameter for tests that require sudo. This should be retired at some point but some tests still require it.
This will likely improve performance, but it also makes the filesystem consistent between platforms.
A number of tests were failing on shiftfs, which was the default for arm64 on Travis.
1.13 is not compatible with gcc 8 which is what ships with newer distributions. Build from source to get a more recent version.
1.13 is not compatible with gcc 9 so we'll need to address that at a later date.
This user was created before we tested in containers to ensure isolation between the pg and repo hosts which were then just directories. The downside is that this resulted in a lot of sudos to set the pgbackrest user and to remove files which did not belong to the main test user.
Containers provide isolation without needing separate users so we can now safely remove the pgbackrest user. This allows us to remove most sudos, except where they are explicitly needed in tests.
While we're at it, remove the code that installed the Perl C library (which also required sudo) and simply add the build path to @INC instead.
This test was not creating recovery.signal when testing with --type=preserve. The preserve recovery type only keeps existing files and does not create any.
RC1 was just ignoring recovery.signal and going right into recovery. Weirdly, 12.0 used restore_command to do crash recovery which made the problem harder to diagnose, but this has now been fixed in PostgreSQL and should be released in 12.1.
This is only needed when new code is added to the Perl C library, which is becoming rare as the migration progresses.
Also, the code will vary slightly based on the Perl version used for generation so for normal users it is just noise.
Suggested by Stephen Frost.
A number of tests have been updated and Fedora 30 has been added to the test suite so the unit tests can run on gcc 9.
Stop running unit tests on co6/7 since we appear to have ample unit test coverage.
This tool was only being used in a few places but was a pretty large dependency.
Rework the forceStorageMove() code using our storage layer and replace one aws cli cp with a storage put.
Also, remove the Dockerfile that was once used to build the Scality S3 test container.
Now that our tests are more diversified it makes sense to load only the packages that are needed for each test.
Move the package loads from .travis.yaml to test/travis.pl where we have more control over what is loaded.
Note that building the manifest on each host has been temporarily removed.
This feature will likely be brought back as a non-default option (after the manifest code has been fully migrated to C) since it can be fairly expensive.
Check the backup.info file against the backup path. Add any backups that are missing and remove any backups that no longer exist.
It's important to run this before backup or expire to be sure we are using the most up-to-date list of backups.
Three major changes were required to get this working:
1) Provide the path to pgbackrest in the build directory when running outside a container. Tests in a container will continue to install and run against /usr/bin/pgbackrest.
2) Set a per-test lock path so tests don't conflict on the default /tmp/pgbackrest path. Also set a per-test log-path while we are at it.
3) Use localhost instead of a custom host for TLS test connections. Tests in containers will continue to update /etc/hosts and use the custom host.
Add infrastructure and update harnessCfgLoad*() to get the correct exe and paths loaded for testing.
Since new tests are required to verify that running outside a container works, also rework the tests in Travis CI to provide coverage within a reasonable amount of time. Mainly, break up the doc tests by VM and run an abbreviated unit test suite on co6 and co7.
Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.
A comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.
recovery.signal and standby.signal are automatically created based on the recovery settings.
The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.
This information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release.
Scaling allows the starting values to be increased from the command-line without code changes.
Also suppress valgrind and assertions when running performance testing. Optimization is left at -O0 because we should not be depending on compiler optimizations to make our code performant, and it makes profiling more informative.
bsearch() is far more efficient than an iterative approach except in the most trivial cases.
For now insert will reset the sort order to none and the list will need to be resorted before bsearch() can be used. This is necessary because item pointers are not stable after a sort, i.e. they can move around. Until lists are stable it's not a good idea to surprise the caller by mixing up their pointers on insert.
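A standalone illustration of the general approach, with one comparator driving both sorting and binary search (standard library calls; not the project's List code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int comparatorStr(const void *item1, const void *item2)
    {
        return strcmp(*(const char *const *)item1, *(const char *const *)item2);
    }

    int main(void)
    {
        const char *list[] = {"backup", "archive-push", "restore", "archive-get"};
        size_t size = sizeof(list) / sizeof(list[0]);

        // The list must be sorted with the same comparator before bsearch() can be used
        qsort(list, size, sizeof(list[0]), comparatorStr);

        const char *key = "restore";
        const char **found = bsearch(&key, list, size, sizeof(list[0]), comparatorStr);

        printf("%s\n", found != NULL ? *found : "not found");
        return 0;
    }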
PostgreSQL 12 will shutdown in these cases which seems to be the correct action (according to the documentation) when hot_standby = off, but older versions are promoting instead. Set target_action explicitly so all versions will behave the same way.
This does beg the question of whether the PostgreSQL 12 behavior is wrong (though it matches the docs) or the previous versions are.
Separate the generation of recovery values and formatting them into recovery.conf format. This is generally a good idea, but also makes the code ready to deal with a different recovery file in PostgreSQL 12.
Also move the recovery file logic out of cmdRestore() into restoreRecoveryWrite().
This restore type automatically adds standby_mode=on to recovery.conf.
This could be accomplished previously by setting --recovery-option=standby_mode=on but PostgreSQL 12 requires standby mode to be enabled by a special file named standby.signal.
The new restore type allows us to maintain a common interface between PostgreSQL versions.
For the most part this is a direct migration of the Perl code into C.
There is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.
The C code works like this:
If a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. In that case the file ownership will need to be updated by a privileged user before the restore can be retried.
If a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name.
Reviewed by Cynthia Shang.
This macro displays a title for each test. A test frequently has multiple parts and it was hard to tell which subparts went together. We used ad hoc indentation to do this.
Anything that is not a title is automatically indented so manual indenting is no longer needed. This should make the tests and the test output easier to read.
These macros encapsulate the functionality provided by direct calls to harnessLogResult() and system(). They both have _FMT() variants.
The primary advantage is that {[path]}, {[user]}, and {[group]} will be replaced with the test path, user, and group respectively. This saves a lot of strNewFmt() calls and makes the tests less noisy.
The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes, timestamps, etc. A list of databases is also included for selective restore.
The purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that nothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.
For now, migrate enough functionality to implement the restore command.
Reviewed by Cynthia Shang.
cfgExecParam() was originally written to provide options for remote processes. Remote processes do not have access to the local config so it was necessary to pass every non-default option.
Local processes on the other hand, e.g. archive-get, archive-get-async, archive-push-async, and local, do have access to the local config and therefore don't need every parameter to be passed on the command-line. The previous way was not wrong, but it was overly verbose and did not align with the way Perl had worked.
Update cfgExecParam() to accept a local option which excludes options from the command line which can be read from local configs.
strPathAbsolute() generates an absolute path from an absolute base path and an absolute/relative path.
strLstRemoveIdx() is a support function based on lstRemoveIdx().
In general we don't care about path and link times since they are easily recreated when restoring.
So, outside of storageInfo() we don't need to bother testing them.
Loading jobs in advance uses a lot of memory in the case that there are millions of jobs to be performed. We haven't seen this yet, but with backup and restore on the horizon it will become the norm.
Instead, use a callback so that jobs are only created as they are needed and can be freed as soon as they are completed.
These features finally make the ls command practical.
Currently the JSON contains only name, type, and size. We may add more fields in the future, but these seem like the minimum needed to be useful.
This warning gives very unpredictable results between compiler versions and seems unrealistic since most of our structs are zeroed for initialization.
This warning has been disabled in the Makefile for a long time.
Broken vendor packages have been causing builds to break due to an error on apt-get update.
Ignore errors and proceed directly to apt-get install. It's possible that we'll try to reference an expired package version and get an error anyway, but that seems better than a guaranteed hard error.
Push the responsibility for sort and find down to the List object by introducing a general comparator function that can be used for both sorting and finding.
Update insert and add functions to return the item added rather than the list. This is more useful in the core code, though numerous updates to the tests were required.
Update StorageWritePosix to use the new functions.
A side effect is that storageWritePosixOpen() will no longer error when the user/group name does not exist. It will simply retain the original user/group, i.e. the user that executed the restore.
In general this is a feature since completing a restore is more important than setting permissions exactly from the source host. However, some notification of this omission to the user would be beneficial.
Travis will timeout after 10 minutes with no output. Emit a warning every 5 minutes to keep Travis alive and increase the total timeout to 20 minutes.
Documentation builds have been timing out a lot recently so hopefully this will help.
The control and catalog versions were stored in a variety of places in the optimistic hope that they would be useful. In fact they never were.
We can't remove them from the backup.info and backup.manifest files due to backwards compatibility concerns, but we can at least avoid loading and storing them in C structures.
Add functions to the PostgreSQL interface which will return the control and catalog versions for any supported version of PostgreSQL to allow backwards compatibility for backup.info and backup.manifest. These functions will be useful in other ways, e.g. generating the tablespace identifier in PostgreSQL >= 9.0.
Info files required three copies in memory to be loaded (the original string, an ini representation, and the final info object). Not only was this memory inefficient but the Ini object does sequential scans when searching for keys making large files very slow to load.
This has not been an issue since archive.info and backup.info are very small, but it becomes a big deal when loading manifests with hundreds of thousands of files.
Instead of holding copies of the data in memory, use a callback to deliver the ini data directly to the object when loading. Use a similar method for save to avoid having an intermediate copy. Save is a bit complex because sections/keys must be written in alpha order or older versions of pgBackRest will not calculate the correct checksum.
Also move the load retry logic to helper functions rather than embedding it in the Info object. This allows for more flexibility in loading and ensures that stack traces will be available when developing unit tests.
Reviewed by Cynthia Shang.
The manifest is not an info file so if anything it should be called backupManifest. But that seems too long for such a commonly used object so manifest seems better.
Note that unlike Perl there is no storage manifest method so this stands as the only manifest in the C code, as befits its importance.
Bug Fixes:
* Improve slow manifest build for very large quantities of tables/segments. (Reported by Jens Wilke.)
* Fix exclusions for special files. (Reported by CluelessTechnologist, Janis Puris, Rachid Broum.)
Improvements:
* The stanza-create/update/delete commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* The start/stop commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* Create log directories/files with 0750/0640 mode. (Suggested by Damiano Albani.)
Documentation Bug Fixes:
* Fix yum.p.o package being installed when custom package specified. (Reported by Joe Ayers, John Harvey.)
Documentation Improvements:
* Build pgBackRest as an unprivileged user. (Suggested by Laurenz Albe.)
This test is commonly used for sanity checking but the combination of S3 and encryption makes it hard to use and encourages temporary changes to make it usable.
Acknowledge this and disable S3 and encryption for this test and move them to mock/all/2.
ioReadLine() errors on eof because it has previously been used only for protocol reads.
Returning on eof is handy for reading lines from files where eof is not considered an error.
Prior to 2.16 the Perl manifest code would skip any file that began with a dot. This was not intentional but it allowed PostgreSQL socket files to be located in the data directory. The new C code in 2.16 did not have this unintentional exclusion so socket files in the data directory caused errors.
Worse, the file type error was being thrown before the exclusion check so there was really no way around the issue except to move the socket files out of the data directory.
Special file types (e.g. socket, pipe) will now be automatically skipped and a warning logged to notify the user of the exclusion. The warning can be suppressed with an explicit --exclude.
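A rough sketch of the shape of the check (function name and warning text are hypothetical, not the project's code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>

    // Returns false (and warns) for file types that cannot be backed up, e.g. a socket
    // left in the data directory
    static bool manifestFileInclude(const char *path)
    {
        struct stat statBuf;

        if (lstat(path, &statBuf) != 0)
            return false;

        if (S_ISREG(statBuf.st_mode) || S_ISDIR(statBuf.st_mode) || S_ISLNK(statBuf.st_mode))
            return true;

        fprintf(stderr, "WARN: exclude special file '%s' from backup\n", path);
        return false;
    }

    int main(void)
    {
        printf("%d\n", manifestFileInclude("/tmp"));
        return 0;
    }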
Reported by CluelessTechnologist, Janis Puris, Rachid Broum.
In versions <= 2.15 the old regexp caused any file or directory beginning with . to be ignored during a backup. This has caused behavioral differences in 2.16 because the new C code correctly excludes ./.. directories.
This Perl code is only used for testing now, but it should still match the output of the C functions.
Putting the checksum at the beginning of the file made it impossible to stream the file out when saving. The entire file had to be held in memory while it was checksummed so the checksum could be written at the beginning.
Instead place the checksum at the end. This does not break the existing Perl or C code since the read is not order dependent.
There are no plans to improve the Perl code to take advantage of this change, but it will make the C implementation more efficient.
Reviewed by Cynthia Shang.
Checking the PostgreSQL-reported path and version against the pgBackRest configuration helps ensure that pgBackRest is operating against the correct cluster.
In Perl this functionality was in the Db object, but check seems like a better place for it in C.
Contributed by Cynthia Shang.
Previously the host id to use was pulled from the host-id option or defaulted to 1.
The stanza, check, and backup commands will all need the ability to address a specified pg host, so add functions to make that possible.
Previously, info files (e.g. archive.info, backup.info) were created in Perl and only loaded in C.
The upcoming stanza commands in C need to create these files so refactor the Info* objects to allow new, empty objects to be created. Also, add functions needed to initialize each Info* object to a valid state.
Contributed by Cynthia Shang.
Previously storageLocal() was being used internally but loading pg_control from remote storage is often required.
Also, storagePg() is more appropriate than storageLocal() for all current usage.
Contributed by Cynthia Shang.
The pg1-socket-path and pg1-port options were not being reset when options from a higher index were being pushed down for processing by a remote. Since remotes only talk to one cluster they always use the options in index 1. This requires moving options from the original index to 1 before starting the remote. All options already set on index 1 must be removed if they are not being overwritten.
Processing large datasets in a memory context can lead to high memory usage and long allocation times. Add a new MEM_CONTEXT_TEMP_RESET_BEGIN() macro that allows temp allocations to be automatically freed after N iterations.
Calculate the most common value in a list of variants. If there is a tie then the first value passed to mcvUpdate() wins.
mcvResult() can be called multiple times because it does not end processing, but there is a cost to calculating the result each time since it is not stored.
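A minimal sketch of the tie-breaking behavior, simplified to plain integers rather than variants:

    #include <stdio.h>

    int main(void)
    {
        int values[] = {7, 3, 7, 3, 9};
        size_t size = sizeof(values) / sizeof(values[0]);

        int mcv = values[0];
        size_t mcvCount = 0;

        // Count occurrences of each candidate; > (not >=) keeps the earlier value on a tie
        for (size_t candidateIdx = 0; candidateIdx < size; candidateIdx++)
        {
            size_t count = 0;

            for (size_t valueIdx = 0; valueIdx < size; valueIdx++)
                if (values[valueIdx] == values[candidateIdx])
                    count++;

            if (count > mcvCount)
            {
                mcv = values[candidateIdx];
                mcvCount = count;
            }
        }

        printf("mcv=%d count=%zu\n", mcv, mcvCount);  // 7 wins the tie because it was seen first
        return 0;
    }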
"null" is not allowed in the manifest format (null values should be missing instead) but Perl was treating the invalid values written by this test as if they were missing.
Update the test code to remove the values rather than setting them to "null".
Logging stayed in the backup log until the Perl code started. Fix this so it logs to the correct file and will still work after the Perl code is removed.
The Perl versions remain because they are still being used by the Perl stanza commands. Once the stanza commands are migrated they can be removed.
Contributed by Cynthia Shang.
Bug Fixes:
* Retry S3 RequestTimeTooSkewed errors instead of immediately terminating. (Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.)
* Fix incorrect handling of transfer-encoding response to HEAD request. (Reported by Pavel Suderevsky.)
* Fix scoping violations exposed by optimizations in gcc 9. (Reported by Christian Lange, Ned T. Crigler.)
Features:
* Add repo-s3-port option for setting a non-standard S3 service port.
Improvements:
* The local command for backup is implemented entirely in C. (Contributed by David Steele, Cynthia Shang.)
* The check command is implemented partly in C. (Reviewed by Cynthia Shang.)
Implement switch WAL and archive check in C but leave the rest in Perl for now.
The main idea was to have some real integration tests for the new database code so the rest of the migration can wait.
Reviewed by Cynthia Shang.
Migrate functionality from the Perl Db module to C. For now this is just enough to implement the WAL switch check.
Add the dbGet() helper function to get Db objects easily.
Create macros in harnessPq to make writing pq scripts easier by grouping commonly used functions together.
Reviewed by Cynthia Shang.
The cause of this error seems to be that a failed request takes so long that a subsequent retry at the http level uses outdated headers.
We're not sure if pgBackRest is to blame here (in one case a kernel downgrade fixed it, in another case an incorrect network driver was the problem) so add retries to hopefully deal with the issue if it is not too persistent. If SSL_write() has long delays before reporting an error then this will obviously affect backup performance.
Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.
Error codes were not being caught for SSL_write() so it was hard to see exactly what was happening in error cases. Report errors to aid in debugging.
Also add a retry for SSL_ERROR_WANT_READ. Even though we have not been able to reproduce this case it is required by SSL_write() so go ahead and implement it.
Multiple PostgreSQL hosts were supported via the host-id option but there are cases where it is useful to be able to directly specify the host id required, e.g. to iterate through pg* hosts when looking for candidate primaries and standbys during backup.
Keep trying to locate the WAL segment until timeout. This is useful for the check and backup commands which must wait for segments to arrive in the archive.
The remotes have their own config options (repo-host-config, etc.) so don't pass the local config* options.
This was a regression from the behavior of the Perl code and while there have been no field reports it caused breakage on test systems with multiple configurations.
Sometimes it is useful to get at the internals of a module that is not being tested for coverage in order to provide coverage for another module that is being tested. The include directive allows this.
Update modules that had previously been added to coverage that only need to be included.
If this option is set then ports appended to repo-s3-endpoint or repo-s3-host will be ignored.
Setting this option explicitly may be the only way to use a bare ipv6 address with S3 (since multiple colons confuse the parser) but we plan to improve this in the future.
This direct interface to libpq allows simple queries to be run against PostgreSQL and supports timeouts.
Testing is performed using a shim that can use scripted responses to test all aspects of the client code. The shim will be very useful for testing backup scenarios on complex topologies.
Reviewed by Cynthia Shang.
The local process is now entirely migrated to C. Since all major I/O operations are performed in the local process, the vast majority of I/O is now performed in C.
Contributed by David Steele, Cynthia Shang.
Add bool, array, and int64 as valid array subtypes.
Pretty print for the array subtype is not correct but is currently not in use (this can be seen at line 328 in typeJsonTest.c).
Discard all data passed to the filter. Useful for calculating size/checksum on a remote system when no data needs to be returned.
Update ioReadDrain() to automatically use the IoSink filter.
The HTTP server can use either content-length or transfer-encoding to indicate that there is content in the response. HEAD requests do not include content but return all the same headers as GET. In the HEAD case we were ignoring content-length but not transfer-encoding which led to unexpected eof errors on AWS S3. Our test server, minio, uses content-length so this was not caught in integration testing.
Ignore all content for HEAD requests (no matter how it is reported) and add a unit test for transfer-encoding to prevent a regression.
Found by Pavel Suderevsky.
This feature denotes storage that can compress files so that they take up less space than what was written. Currently this includes the Posix and CIFS drivers. The stored size of the file will be rechecked after write to determine if the reported size is different. This check would be wasted on object stores such as S3, and they might not report the file as existing immediately after write.
Also add tests to each storage driver to check features.
Previously only a single filter could be pushed to the remote since order was not being maintained. Now the filters are strictly ordered.
Results are returned from the remote and set in the local IoFilterGroup so they can be retrieved.
Expand remote filter support to include all filters.
Read all data from an IoRead object and discard it. This is handy for calculating size, hash, etc. when the output is not needed.
Update code where a loop was used before.
For offline backups the upper bound was being set to 0x0000FFFF0000FFFF rather than UINT64_MAX. This meant that page checksum errors might be ignored for databases with a lot of past WAL in offline mode.
Online mode is not affected since the upper bound is retrieved from pg_start_backup().
Files (especially build.auto.h) were being removed and forcing a full build between separate invocations of test.pl.
This affected ad-hoc testing at the command-line, not a full test run in CI.
This analysis never produced anything but false positives (var might be NULL) but took over a minute per test run and added 600MB to the test container.
Since 2.91 JSON::PP has a bias for saving variables that look like numbers as numbers even if they were declared as strings.
Force versions to strings where needed by appending ''.
Update the json-pp-perl package on Ubuntu 18.04 to 2.97 to provide test coverage.
No new Perl code is being developed, so these tools are just taking up time and making migrations to newer platforms harder. There are only a few Perl tests remaining with full coverage so the coverage tool does not warn of loss of coverage in most cases.
Remove both tools and associated libraries.
ScalityS3 has not received any maintenance in years and is slow to start which is bad for testing. Replace it with minio which starts quickly and ships as a single executable or a tiny container.
Minio has stricter limits on allowable characters but should still provide enough coverage to show that our encoding is working correctly.
This commit also includes the upgrade to openssl 1.1.1 in the Ubuntu 18.04 container.
Some HTTP error tests were failing after the upgrade to openssl 1.1.1, though the rest of the unit and integration tests worked fine. This seemed to be related to the very small messages used in the error testing, but it pointed to an issue with the code not being fully compliant, made worse by auto-retry being enabled by default.
Disable auto-retry and implement better error handling to bring the code in line with openssl recommendations.
There's no evidence this is a problem in the field, but having all the tests pass seems like a good idea and the new code is certainly more robust.
Coverage will be complete in the next commit when openssl 1.1.1 is introduced.
Maintaining the storage layer/drivers in two languages is burdensome. Since the integration tests require the Perl storage layer/drivers we'll need them even after the core code is migrated to C. Create an interface layer so the Perl code can be removed and new storage drivers/features introduced without adding Perl equivalents.
The goal is to move the integration tests to C so this interface will eventually be removed. That being the case, the interface was designed for maximum compatibility to ease the transition. The result looks a bit hacky but we'll improve it as needed until it can be retired.
Bug Fixes:
* Fix archive retention expiring too aggressively. (Fixed by Cynthia Shang. Reported by Mohamad El-Rifai.)
Improvements:
* The expire command is implemented entirely in C. (Contributed by Cynthia Shang.)
* The local command for restore is implemented entirely in C.
* Remove hard-coded PostgreSQL user so $PGUSER works. (Suggested by Julian Zhang, Janis Puris.)
* Honor configure --prefix option. (Suggested by Daniel Westermann.)
* Rename repo-s3-verify-ssl option to repo-s3-verify-tls. The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure). The old name will continue to be accepted.
Documentation Improvements:
* Add FAQ to the documentation. (Contributed by Cynthia Shang.)
* Use wal_level=replica in the documentation for PostgreSQL ≥ 9.6. (Suggested by Patrick McLaughlin.)
Secure options could show up in the help as "current". While the user must have permissions to see the source of the options (e.g. environment, config file) it's still not a good idea to display them in an unexpected context.
Instead show secure options as <redacted> in the help command.
Amend commit 434cd832 to error when the db history in archive.info and backup.info do not match.
The Perl code would attempt to reconcile the history by matching on system id and version but we are not planning to migrate that code to C. It's possible that there are users with mismatches but if so they should have been getting errors from info for the last six months. It's easy enough to manually fix these files if there are any mismatches in the field.
Contributed by Cynthia Shang.
If the file is compressible (i.e. not encrypted or already compressed) it can be marked as such in storageNewRead()/storageNewWrite(). If the file is being read from/written to a remote it will be compressed in transit using gzip.
Simplify filter group handling by having the IoRead/IoWrite objects create the filter group automatically. This removes the need for a lot of NULL checking and has a negligible effect on performance since a filter group needs to be created eventually unless the source file is missing.
Allow filters to be created using a VariantList so filter parameters can be passed to the remote.
This implementation duplicates the functionality of the Perl code but does so with different logic and includes full unit tests.
Along the way at least one bug was fixed, see issue #748.
Contributed by Cynthia Shang.
The tests and documentation have been using the core storage layer but soon that will depend entirely on the C library, creating a bootstrap problem (i.e. the storage layer will be needed to build the C library).
Create a simplified Posix storage layer to be used by documentation and the parts of the test code that build and execute the actual tests. The actual tests will still use the core storage driver so they can interact with any type of storage.
This filter exactly mimics the behavior of the Perl filter so is a drop-in replacement.
The filter is not integrated yet since it requires the Perl-to-C storage layer interface coming in a future commit.
These names more accurately reflect what the functions do and follow the convention started in Info and InfoPg.
Also remove the ignoreMissing parameter since it was never used.
Contributed by Cynthia Shang.
Some filters (e.g. encryption and compression) produce output even if there is no input. Since the filter group was marked as "done" initially, processing would not run when there was zero input and that resulted in zero output.
All filters start not done so start the filter group the same way.
The prior method of tailing the docker log no longer seems reliable. Instead, keep retrying the make bucket command until it works and show the error if it times out.
This allows copying from one S3 object to another. We generally try to avoid doing this but there are a few cases where it is needed and the tests do it quite a bit.
One thing to look out for here is that reads require the http client to be explicitly released by calling httpClientDone(). This means that clients could grow if they are not released properly. The http statistics will hopefully alert us if this is happening.
This cache manages multiple http clients and returns one to the caller that is not busy. It is the responsibility of the caller to indicate when they are done with a client. If returnContent is set then the client will automatically be marked done.
Also add special handling for HEAD requests to recognize that content-length is informational only and no content is expected.
This was not enforced at parse time because repo1-cipher-type could be passed on the command-line even in cases where encryption was not needed by the subprocess.
Filter repo-cipher-type so it is never passed on the command line. If the subprocess does not have access to the passphrase then knowing the encryption type is useless anyway.
Filter groups could not be manipulated once they had been assigned to an IO object. Now they can be freely manipulated up to the time the IO object is opened.
Also, move the filter group into the IO object's context so they don't need to be tracked separately.
The previous implementation searched for the file in a list which worked but was not optimal. For arbitrary bucket structures it would also produce a false negative if a match was not found in the first 1000 entries. This was not an issue for our repo structure since the max hits on exists calls is two but it seems worth fixing to avoid future complications.
The C code was passing the host (if specified) with the request which could force the server into path-style URLs, which are not supported.
Instead, use the Perl logic of always passing bucket.endpoint in the request no matter what host is used for the HTTPS connection.
It's an open question whether we should support path-style URLs but since we don't it's useless to tell the server otherwise. Note that Amazon S3 has deprecated path-style URLs and they are no longer supported on newly created buckets.
This was being removed by rsync which forced a full build even when a partial should have been fine. Rewrite the file after the rsync so it is preserved.
Allow commands to be skipped by default in the command help but still work if help is requested for the command directly. There may be other uses for the flag in the future.
Update help for ls now that it is exposed.
Allows listing repo paths/files from the command-line, to be used primarily for testing and debugging.
This command is internal-only so the interface may change at any time without notice.
The documentation was relying on a ScalityS3 container built for testing which wasn't very transparent. Instead, use the stock minio container and configure it in the documentation.
Also, install certificates and CA so that TLS verification can be enabled.
Not all storage types support paths as a physical thing that must be created/destroyed. Add a feature to determine which drivers use paths and simplify the driver API as much as possible given that knowledge and by implementing as much path logic as possible in the Storage object.
Remove the ignoreMissing parameter from pathSync() since it is not used and makes little sense.
Create a standard list of error messages for the drivers to use and apply them where the code was modified -- there is plenty of work still to be done here.
These functions are not required for repository storage so make them optional and error if they are not implemented for non-repository storage, e.g. pg or spool.
The goal is to simplify the drivers (e.g. S3) that are intended only for repository storage.
The prior behavior was to return NULL so the caller would know the path was missing, but this is rarely useful, complicates the calling code, and increases the chance of segfaults.
The .nullOnMissing param has been added to enable the prior behavior.
The release notes are generally a direct reflection of the git log. So, ease the burden of maintaining the release notes by using the git log to determine what needs to be added.
Currently only non-dev items are required to be matched to a git commit but the goal is to account for all commits.
The git history cache is generated from the git log but can be modified to correct typos and match the release notes as they evolve. The commit hash is used to identify commits that have already been added to the cache.
There's plenty more to do here. For instance, links to the commits for each release item should be added to the release notes.
The C code is designed to be efficient rather than deterministic at the debug log level. As we move more testing from integration to unit tests it makes less sense to try and maintain the expect logs at this log level.
Most of the expect logs have already been moved to detail level but mock/all still had tests at debug level. Change the logging defaults in the config file and remove as many references to log-level-console as possible.
/ is escaped in the spec but the Perl renderer we use does not escape it which leads to checksum mismatches between the two sets of code.
This particular escape seems to be a more recent addition to the spec and is targeted toward embedding JSON in JavaScript.
\/ is still allowed when parsing JSON.
The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure).
The old name will continue to be accepted.
This is just the part of restore run by the local helper processes, not the entire command.
Even so, various optimizations in the code (like pipelining and optimizations for zero-length files) should make the restore command faster on object stores.
Bug Fixes:
* Fix segfault when process-max > 8 for archive-push/archive-get. (Reported by Jens Wilke.)
Improvements:
* Bypass database checks when stanza-delete issued with force. (Contributed by Cynthia Shang. Suggested by hatifnatt.)
* Add configure script for improved multi-platform support.
Documentation Features:
* Add user guides for CentOS/RHEL 6/7.
This report replaces the lcov report that was generated manually for each release.
The lcov report was overly verbose just to say that we have virtually 100% coverage.
Pre-calculate the value used by logAny() to improve performance and make it more likely to be inlined.
Move IF_LOG_ANY() into LOG_INTERNAL() to simplify the macros and improve performance of LOG() and LOG_PID(). If the message has no chance of being logged there's no reason to call logInternal().
Rename logWill() to logAny() because it seems more intuitive.
The branch coverage exclusion rules were overly broad and included functions that ended in a capital letter, which disabled all coverage for the statement. Improve matching so that all characters in the name must be upper-case for a match.
Some macros with internal branches accepted parameters that might contain conditionals. This made it impossible to tell which branches belonged to the macro and which to the parameters, and in any case an overzealous exclusion rule was ignoring all branches in such cases. Add the DEBUG_COVERAGE flag to build a modified version of the macros without any internal branches to be used for coverage testing. In most cases, the branches were optimizations (like checking logWill()) that improve production performance but are not needed for testing. In other cases, a parameter needed to be added to the underlying function to handle the branch during coverage testing.
Also tweak the coverage rules so that macros without conditionals are automatically excluded from branch coverage as long as they are not themselves a parameter.
Finally, update tests and code where missing coverage was exposed by these changes. Some code was updated to remove existing coverage exclusions when it was a simple change.
Filters had different ideas about what "done" meant and this added complication to the group filter processing. For example, gzip decompression would detect end of stream and mark the filter as done before it had been flushed.
Improve the IoFilter interface to give a consistent definition of done across all filters, i.e. no filter can be done until it has started flushing no matter what the underlying driver reports. This removes quite a bit of tricky logic in the processing loop which tried to determine when a filter was "really" done.
Also improve management of the input buffers by pointing directly to the prior output buffer (or the caller's input) to eliminate loops that set/cleared these buffers.
If content was zero-length then the IO object was not created. This put the burden on the caller to test that the IO object existed before checking eof.
Instead, create an IO object even if it will immediately return eof. This has little cost and makes the calling code simpler.
Also add an explicit test for zero-length files in S3 and a few assertions.
The rules for when a C remote is required are getting complicated and will get worse when restoreFile() is migrated.
Instead, set the --c option when a C remote is required. This option will be removed when the remote is entirely implemented in C.
Most of the *Free() functions are pretty generic so add macros to make creating them as easy as possible.
Create a distinction between *Free() functions that the caller uses to free memory and callbacks that free third-party resources. There are a number of cases where a driver needs to free resources but does not need a normal *Free() because it is handled by the interface.
Add common/object.h for macros that make object maintenance easier. This pattern can also be used for many more object functions.
Rename memContextCallback() to memContextCallbackSet() to be more consistent with other parts of the code.
Free all context memory when an exception is thrown from a callback. Previously only the child contexts would be freed and this resulted in some allocations being lost. In practice this is probably not a big deal since the process will likely terminate shortly, but there may well be cases where that is not true.
Add GLUE() macro which is useful for creating identifiers.
Move MACRO_TO_STR() here and rename it STRINGIFY(). This appears to be the standard name for this type of macro and it is also an awesome name.
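For reference, a minimal sketch of the classic two-level form of these macros (the exact definitions in the codebase may differ):

    /* the indirection lets macro arguments expand before they are pasted/stringified */
    #define GLUE_HELPER(a, b)           a##b
    #define GLUE(a, b)                  GLUE_HELPER(a, b)

    #define STRINGIFY_HELPER(value)     #value
    #define STRINGIFY(value)            STRINGIFY_HELPER(value)

    /* GLUE(storage, Free) yields the identifier storageFree and STRINGIFY(__LINE__)
       yields the current line number as a string constant */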
Remove "File" and "Driver" from object names so they are shorter and easier to keep consistent.
Also remove the "driver" directory so storage implementations are visible directly under "storage".
The function pointer casting used when creating drivers made changing interfaces difficult and led to slightly divergent driver implementations. Unit testing caught production-level errors but there were a lot of small issues and the process was harder than it should have been.
Use void pointers instead so that no casts are required. Introduce the THIS_VOID and THIS() macros to make dealing with void pointers a little safer.
Since we don't want to expose void pointers in header files, driver functions have been removed from the headers and the various driver objects return their interface type. This cuts down on accessor methods and the vast majority of those functions were not being used. Move functions that are still required to .intern.h.
Remove the special "C" crypto functions that were used in libc and instead use the standard interface.
Add bufDup() and bufNewUsedC().
Arrange bufNewC() params to match bufNewUsedC() since they have always seemed backward.
Fix bufHex() to only render the used portion of the buffer and fix some places where used was not being set correctly.
Use a union to make macro assignments for all legal values without casting. This is much more likely to catch bad assignments.
There is only one instance in the core code where this helps. It is mostly helpful in the tests.
There is an argument to be made that only THROW_SYS_ERROR*() variants should be used in the core code to improve test coverage. If so, that will be the subject of a future commit.
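A hypothetical sketch of the general idea, with invented names rather than the project's actual macros: routing the assignment through a union of the legal types lets the compiler reject anything else, where a cast would have hidden the mistake.

    #include <stdint.h>

    /* union covering the value types a hypothetical macro parameter may accept */
    typedef union TestValue
    {
        int64_t integer;
        double real;
        const char *string;
    } TestValue;

    #define TEST_VALUE_INT(value)   ((TestValue){.integer = (value)})
    #define TEST_VALUE_STR(value)   ((TestValue){.string = (value)})

    /* TEST_VALUE_INT("oops") is now a compile error, whereas an explicit cast to
       int64_t would have silently accepted the bad assignment */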
Some functions (e.g. getpwnam()/getgrnam()) will return an error but not set errno. In this case there's no use in appending strerror(), which will be "Success". This is confusing since an error has just been reported.
At least in the examples above, an error with no errno set just means "missing" and our current error message already conveys that.
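For illustration, a standalone sketch (plain POSIX, not the project's error macros) of telling the two cases apart:

    #include <errno.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        errno = 0;
        struct passwd *userInfo = getpwnam("no-such-user");

        if (userInfo == NULL)
        {
            if (errno == 0)
                puts("unable to find user");            /* just missing -- strerror() would say "Success" */
            else
                printf("error: %s\n", strerror(errno)); /* a real error -- strerror() is meaningful */
        }

        return 0;
    }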
The remote list was at most 9 (based on pg[1-8]-* max index) so anything over 8 wrote into unallocated memory.
The remote for the main process is (currently) stored in position zero so do the same for remotes started from locals, since there should only be one. The main process will need to start more remotes in the future which is why there is extra space.
Reported by Jens Wilke.
Use autoconf to provide a basic configure script. WITH_BACKTRACE is yet to be migrated to configure and the unit tests still use a custom Makefile.
Each C file must include "build.auto.conf" before all other includes and defines. This is enforced by test.pl for includes, but it won't detect incorrect define ordering.
Update packages to call configure and use standard flags to pass options.
Update RHEL repos that have changed upstream. Remove PostgreSQL 9.3 since the RHEL6/7 packages have disappeared.
Remove PostgreSQL versions from U12 that are still getting minor updates so the container does not need to be rebuilt.
LZ4 is included for future development, but this seems like a good time to add it to the containers.
The function provides all the file/path/link information required to build a backup manifest.
Also update storageInfo() to provide the same information for a single file.
At the same time change the way that load constructors work (and are named) so that Ini objects do not persist after the constructors complete.
infoArchiveSave() is excluded from this commit since it is just a trivial call to infoPgSave() and won't be required soon.
In most cases the JSON type is known so this is more efficient than converting to Variant first, both in terms of memory and time.
Also rename some of the existing functions for consistency.
Variants were being used to expose String and StringList types but this can be done more simply with an additional method.
Using only strings also allows for a more efficient implementation down the road.
This greatly reduces calls to filter processing, which is a performance benefit, but also makes the trace logs smaller and easier to read.
However, this means that ioWriteFlush() will no longer work with filters since a full flush of IoFilterGroup would require an expensive reset. Currently ioWriteFlush() is not used in this scenario so for now just add an assert to ensure it stays that way.
These are more efficient than creating buffers in place when needed.
After the replacement it was discovered that bufNewStr() and bufNewZ() were not being used in the core code, so they were removed. This required using the macros in tests, which is not the usual pattern.
Since the introduction of blocking read drivers (e.g. IoHandleRead, TlsClient) the non-blocking drivers have used the same rules for determining maximum buffer size, i.e. read only as much as requested. This is necessary so the blocking drivers don't get stuck waiting for data that might not be coming.
Instead mark blocking drivers so IoRead knows how much buffer to allow for the read. The non-blocking drivers can now request the maximum number of bytes allowed by buffer-size.
Bug Fixes:
* Fix zero-length reads causing problems for IO filters that did not expect them. (Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.)
* Fix reliability of error reporting from local/remote processes.
* Fix Posix/CIFS error messages reporting the wrong filename on write/sync/close.
Add production checks to ensure no filter gets a zero-size input buffer.
Also, optimize the case where a filter returns no output. There's no sense in running downstream filters if they have no new input.
The IoRead object was passing zero-length buffers into the filter processing code but not all the filters were happy about getting them.
In particular, the gzip compression filter failed if it was given no input directly after it had flushed all of its buffers. This made the problem rather intermittent even though a zero-length buffer was being passed to the filter at the end of every file. It also explains why tweaking compress-level or buffer-size allowed the file to go through.
Since this error was happening after all processing had completed, there does not appear to be any risk that successfully processed files were corrupted.
Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.
Releasing the lock too early was allowing other async processes to sneak in and start running before the current process was completely shut down.
The only symptom seems to have been mixed up log messages so not a very serious issue.
Asserts were only reported on stderr rather than being returned through the protocol layer. This did not appear to be very reliable.
Instead, report the assert through the protocol layer like any other error. Add a stack trace if an assert error or debug logging is enabled.
These work almost exactly like the String constant macros. However, a struct per variant type was required which meant custom constructors and destructors for each type.
Propagate the variant constants out into the codebase wherever they are useful.
The STRING_CONST() macro worked fine for constants but was not able to constify strings created at runtime.
Add the STR() macro to do this by using strlen() to get the size.
Also rename STRING_CONST() to STRDEF() for brevity and to match the other macro name.
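A hypothetical sketch of the idea (the real String type has more fields and different internals):

    #include <stddef.h>
    #include <string.h>

    /* simplified stand-in for a constant String */
    typedef struct ConstStringSketch
    {
        const char *buffer;
        size_t size;
    } ConstStringSketch;

    /* STRDEF() only works on literals, where sizeof() gives the length at compile
       time; STR() uses strlen() so it also constifies strings created at runtime */
    #define STRDEF(literal) ((ConstStringSketch){.buffer = (literal), .size = sizeof(literal) - 1})
    #define STR(cstring)    ((ConstStringSketch){.buffer = (cstring), .size = strlen(cstring)})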
Removed the "anchor" parameter because it was never used in any calls in the Perl code so it was just a dead parameter that always defaulted to true.
Contributed by Cynthia Shang.
IMPORTANT NOTE: The new TLS/SSL implementation forbids dots in S3 bucket names per RFC-2818. This security fix is required for compliant hostname verification.
Bug Fixes:
* Fix issues when a path option is / terminated. (Reported by Marc Cousin.)
* Fix issues when log-level-file=off is set for the archive-get command. (Reported by Brad Nicholson.)
* Fix C code to recognize host:port option format like Perl does. (Reported by Kyle Nevins.)
* Fix issues with remote/local command logging options.
Improvements:
* The archive-push command is implemented entirely in C.
* Increase process-max limit to 999. (Suggested by Rakshitha-BR.)
* Improve error message when an S3 bucket name contains dots.
Documentation Improvements:
* Clarify that S3-compatible object stores are supported. (Suggested by Magnus Hagander.)
This was not an intentional feature in Perl, but it works, so it makes sense to implement the same syntax in C.
This is a break from other places where a -port option is explicitly supplied, so it may make sense to support both styles going forward. This commit does not address that, however.
Reported by Kyle Nevins.
The Perl lib we have been using for TLS allows dots in wildcards, but this is forbidden by RFC-2818. The new TLS implementation in C forbids this pattern, just as PostgreSQL and curl do.
However, this does present a problem for users who have been using bucket names with dots in older versions of pgBackRest. Since this limitation exists for security reasons there appears to be no option but to take a hard line and do our best to notify the user of the issue as clearly as possible.
This problem was not specific to archive-get, but that was the only place it manifested in the last release. The new archive-push was also affected.
The issue was with daemon processes that had closed all their file descriptors. When exec'ing and setting up pipes to communicate with a child process the dup2() function created file descriptors that overlapped with the first descriptor (stdout) that was being duped into. This descriptor was subsequently closed and wackiness ensued.
If logging was enabled (the default) that increased all the file descriptors by one and everything worked.
Fix this by checking if the file descriptor to be closed is the same one being dup'd into. This solution may not be generally applicable but it works fine in this case.
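A sketch of the guard described above using plain POSIX calls (the helper name is illustrative):

    #include <unistd.h>

    /* duplicate a pipe end (fdSource) onto a standard descriptor (fdTarget) without
       destroying it when the two numbers already match */
    static void
    fdDupAndClose(int fdSource, int fdTarget)
    {
        if (fdSource != fdTarget)
        {
            dup2(fdSource, fdTarget);
            close(fdSource);
        }
        /* else all lower descriptors were closed so the pipe landed on the target
           number already -- closing fdSource here would undo the setup */
    }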
Reported by Brad Nicholson.
This new implementation should behave exactly like the old Perl code with the exception of updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
When a repository server is configured, commands that modify the repository acquire a remote lock as well as a local lock for extra protection against multiple writers.
Instead of the custom logic used in Perl, make remote locking part of the command configuration.
This also means that the C remote needs the stanza since it is used to construct the lock name. We may need to revisit this at a later date.
While the local processes are doing their jobs the remote connection from the main process may timeout.
Send occasional noops to ensure that doesn't happen.
This may not be the best way to detect 64-bit platforms but it seems to be working fine so far.
Create a macro to make it clearer what is being done and to make it easier to change the implementation.
This was missed because the unit tests were reusing a buffer without resetting it to zero, so this flag ended up still set when the test function was called.
This was not a live issue since it only expressed in tests and this code is not used in master yet.
The test harness was not being built with warnings which caused some wackiness with an improperly structured switch. Just use the same warnings as the code being tested.
Also enable warnings on code that is not directly being tested since other code modules are frequently modified during testing.
We deal with some pretty big lists in archive-push so a nested-loop anti-join looked like it would not be efficient enough.
This merge anti-join should do the trick even though both lists must be sorted first.
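A simplified sketch of the merge anti-join (illustrative only, not the actual list code): with both lists sorted, a single pass finds the items in the first list that have no match in the second.

    #include <stdio.h>
    #include <string.h>

    /* print items in listA that do not appear in listB -- both lists must be sorted */
    static void
    mergeAntiJoin(const char **listA, int sizeA, const char **listB, int sizeB)
    {
        int idxB = 0;

        for (int idxA = 0; idxA < sizeA; idxA++)
        {
            /* advance listB while it is behind the current listA item */
            while (idxB < sizeB && strcmp(listB[idxB], listA[idxA]) < 0)
                idxB++;

            /* no match in listB so this item survives the anti-join */
            if (idxB >= sizeB || strcmp(listB[idxB], listA[idxA]) != 0)
                printf("%s\n", listA[idxA]);
        }
    }

    int
    main(void)
    {
        const char *ready[] = {"000000010000000000000001", "000000010000000000000002", "000000010000000000000003"};
        const char *pushed[] = {"000000010000000000000002"};

        mergeAntiJoin(ready, 3, pushed, 1);     /* prints segments ...01 and ...03 */
        return 0;
    }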
The prior behavior on a global error (i.e. not file specific) was to write an individual error file for each WAL file being processed. On retry each of these error files would be removed, and if the error was persistent, they would then be recreated. In a busy environment this could mean tens or hundreds of thousands of files.
Another issue was that the error files could not be written until a list of WAL files to process had been generated. This was easy enough for archive-get but archive-push requires more processing and any errors that happened when generating the list would only be reported in the pgBackRest log rather than the PostgreSQL log.
Instead write a global.error file that applies to any WAL file that does not have an explicit ok or error file. This reduces churn and allows more errors to be reported directly to PostgreSQL.
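A hypothetical sketch of the resulting lookup order (file layout per the description above; the helper name is invented):

    #include <stdio.h>
    #include <unistd.h>

    typedef enum {STATUS_NONE, STATUS_OK, STATUS_ERROR} WalStatus;

    /* per-segment status files win; global.error only covers segments without one */
    static WalStatus
    walStatusGet(const char *spoolPath, const char *walSegment)
    {
        char file[1024];

        snprintf(file, sizeof(file), "%s/%s.ok", spoolPath, walSegment);
        if (access(file, F_OK) == 0)
            return STATUS_OK;

        snprintf(file, sizeof(file), "%s/%s.error", spoolPath, walSegment);
        if (access(file, F_OK) == 0)
            return STATUS_ERROR;

        snprintf(file, sizeof(file), "%s/global.error", spoolPath);
        if (access(file, F_OK) == 0)
            return STATUS_ERROR;

        return STATUS_NONE;
    }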
Having a copy per version worked well until it was time to add new features or modify existing functions. Then it was necessary to modify every version and try to keep them all in sync.
Consolidate all the PostgreSQL types into a single file using #if for type versions. Many types do not change or change infrequently so this cuts down on duplication. In addition, it is far easier to see what has changed when a new version is added.
Use macros to write the interface functions. There is still duplication here since some changes require a new copy of the macro, but it is far less than before.
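A hypothetical sketch of the pattern, with invented type and macro names (each version's source file is assumed to define PG_VERSION before including the shared header):

    #include <stdint.h>

    /* one header carries every version of the type, gated on the version being built */
    #if PG_VERSION >= 96
    typedef struct DbWalHeader
    {
        uint16_t magic;
        uint16_t info;
        uint32_t walSegmentSize;                    /* field that appeared in a newer version */
    } DbWalHeader;
    #else
    typedef struct DbWalHeader
    {
        uint16_t magic;
        uint16_t info;
    } DbWalHeader;
    #endif

    /* a macro stamps out the per-version interface function instead of hand-copying it */
    #define PG_INTERFACE_WAL_MAGIC(version)                                            \
        static uint16_t                                                                \
        pgInterfaceWalMagic##version(const void *walHeader)                            \
        {                                                                              \
            return ((const DbWalHeader *)walHeader)->magic;                            \
        }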
Since archive-push is being moved to C, the Perl remote will no longer work with that command.
Eventually this module will need to be rewritten in C, but for now just use the restore command which is planned to be migrated last.
Now that repositories are writable the storage drivers that don't yet support file writes need to be updated to do so.
Note that the part size for multi-part upload has not been defined as a proper constant. This will become an option in the near future so it doesn't seem worth creating a constant that we might then forget to remove.
The xml objects only exposed read methods of the underlying libxml2.
This worked for S3 commands that only received data but to send data we need to be able to create XML documents from scratch.
Add the ability to create empty documents and add nodes and contents.
The C code was assuming that the current PostgreSQL version in archive.info/backup.info was the most recent item in the history, but this is not always the case with some stanza-upgrade scenarios. If a cluster is restored from before the upgrade and stanza-upgrade is run again, it will revert db-id to the original history item.
Instead, load db-id from the db section explicitly as the Perl code does.
This did not affect archive-get since it does a reverse scan through the history versions and does not rely on the current version.
Logging was being enabled on local/remote processes even if --log-subprocess was not specified, so fix that.
Also, make sure that stderr is enabled at error level as it was on Perl. This helps expose error information for debugging.
For remotes, suppress log and lock paths since these are not applicable on remote hosts. These options should be set in the local config if they need to be overridden.
None of our C HTTP requests have needed to output a body, but they will with the migration of archive-push.
Also, add constants that are useful when POSTing/PUTing data.
This condition was not being properly checked for in the C code and it caused problems in the info command, at the very least.
Instead of applying a local fix, introduce a new path option type that will rigorously check the format of any incoming paths.
Reported by Marc Cousin.
This command was previously forked off from the archive-push command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
This driver borrows heavily from the Posix driver.
At this point the only difference is that CIFS does not allow explicit directory fsyncs so they need to be suppressed. At some point the CIFS driver will also omit link support.
With the addition of this driver repository storage is now writable.
Bug Fixes:
* Fix possible truncated WAL segments when an error occurs mid-write. (Reported by blogh.)
* Fix info command missing WAL min/max when stanza specified. (Fixed by Stefan Fercot.)
* Fix non-compliant JSON for options passed from C to Perl. (Reported by Leo Khomenko.)
Improvements:
* The archive-get command is implemented entirely in C.
* Enable socket keep-alive on older Perl versions. (Contributed by Marc Cousin.)
* Error when parameters are passed to a command that does not accept parameters. (Suggested by Jason O'Donnell.)
* Add hints when unable to find a WAL segment in the archive. (Suggested by Hans-Jürgen Schönig.)
* Improve error when hostname cannot be found in a certificate. (Suggested by James Badger.)
* Add additional options to backup.manifest for debugging purposes. (Contributed by blogh.)
Add the buffer-size, compress-level, compress-level-network, and process-max options to the backup:option section in backup.manifest to aid in debugging.
It may also make sense to propagate these options up to backup.info so they can be displayed in the info command, but for now this is deemed sufficient.
Contributed by blogh.
When this error happens in the context of a backup it can be a bit mystifying as to why the backup is failing. Add some hints to get the user started.
These hints will appear any time a WAL segment can't be found, which makes the hint about the check command redundant when the user is actually running the check command, but it doesn't seem worth trying to exclude the hint in that case.
Suggested by Hans-Jürgen Schönig.
DESTDIR always had /usr/bin appended, which was a problem for systems that don't use /usr/bin as the install location for binaries.
Instead, use the value of DESTDIR exactly and update the Debian packages accordingly.
Contributed by Douglas J Hunley.
This behavior allowed a command like this to run without error:
pgbackrest backup --stanza=db full
The command actually performed an incremental backup in most circumstances because the `full` parameter was ignored.
Instead, output an error and exit.
Suggested by Jason O'Donnell.
This warning was being output when getting help if retention was not set:
WARN: option repo1-retention-full is not set, the repository may run out of space
Suppress this when getting help since the warning will display by default on a system that is not completely configured.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
This is very inefficient in terms of memory and time and dynamic context names were never utilized.
Just require that context names be valid for the life of the context.
In practice they are all static strings.
Allocations required a sequential scan through the allocation list for both contexts and memory. This was very inefficient since for the most part individual memory allocations are seldom freed directly, rather they are freed when their context is freed.
For both types of allocations track an index for the lowest free position. After the free position is allocated, a sequential search is required to find the next free position, but this is still far better than scanning the whole list for every allocation (a simplified sketch follows the profile output below).
With a moderately-sized dataset (500 history entries in backup.info), there is a 237X performance improvement when combined with the f74e88bb refactor.
Before:

    % time   cumulative seconds   self seconds   name
    65.11    331.37               331.37         memContextAlloc
    16.19    413.78                82.40         memContextCurrent
    14.74    488.81                75.03         memContextTop
     2.65    502.29                13.48         memContextNewIndex
     1.18    508.31                 6.02         memFind

After:

    % time   cumulative seconds   self seconds   name
    94.69      2.14                 2.14         memFind
Finding memory allocations in order to free or resize them is the next bottleneck, but this does not seem to be a major issue presently.
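As referenced above, a simplified sketch of the free-position tracking (names are illustrative, not the real MemContext code):

    #include <stddef.h>

    #define LIST_SIZE 64

    typedef struct AllocList
    {
        void *slot[LIST_SIZE];
        unsigned int freeIdx;                       /* no free slot exists below this index */
    } AllocList;

    /* store an allocation in the lowest free slot, starting the search at freeIdx */
    static int
    allocListAdd(AllocList *list, void *allocation)
    {
        unsigned int idx = list->freeIdx;

        /* usually slot[freeIdx] is free; otherwise scan forward from there */
        while (idx < LIST_SIZE && list->slot[idx] != NULL)
            idx++;

        if (idx == LIST_SIZE)
            return -1;                              /* full -- the real code grows the list */

        list->slot[idx] = allocation;
        list->freeIdx = idx + 1;
        return 0;
    }

    /* freeing a slot below the tracked index lowers the starting point again */
    static void
    allocListRemove(AllocList *list, unsigned int idx)
    {
        list->slot[idx] = NULL;

        if (idx < list->freeIdx)
            list->freeIdx = idx;
    }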
The prior method depended on IO::Socket::SSL to push the keep-alive options down to the socket but it only worked for recent versions of the module.
Instead, create the socket directly using IO::Socket::IP if available or IO::Socket::INET as a fallback. The keep-alive option is set directly on the socket before it is passed to IO::Socket::SSL.
Contributed by Marc Cousin.
The command option was not being set correctly when a remote was started from a local. It was being set as 'local' rather than the command that the local was running as.
Also automatically select the remote protocol id based on whether it is started from a local (use the local protocol id) or from the main process (use 0).
These were not live issues but could cause strange behaviors as new features are added that might be hard to diagnose.
This new implementation should behave exactly like the old Perl code with the exception of a few updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
The C local is only used for C commands in the main process.
Some tweaking of the existing protocolGet() command was required. Originally the idea was to share the function for local and remote requests but the differences (as in Perl) were too great to make that practical.
Some IO objects have file descriptors which can be useful for monitoring with select().
It might also be useful to expose handles for write objects but there is currently no use case.
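For illustration, a minimal POSIX sketch of monitoring such a descriptor (the helper name is invented):

    #include <sys/select.h>

    /* wait up to timeoutSec for the descriptor to become readable; returns > 0 when
       it is ready, 0 on timeout, -1 on error */
    static int
    fdReadyForRead(int fd, int timeoutSec)
    {
        fd_set readSet;
        struct timeval timeout = {.tv_sec = timeoutSec, .tv_usec = 0};

        FD_ZERO(&readSet);
        FD_SET(fd, &readSet);

        return select(fd + 1, &readSet, NULL, NULL, &timeout);
    }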
There was a lot of extra boilerplate involved in setting up pipes so that is now automated.
In some cases testing with multiple children is useful so allow that as well.
This amends 70c30dfb which disabled test tracing in general.
Instead, only enable test tracing by default for modules that are being unit tested. This saves lots of time but still ensures that test tracing is working and helps with debugging in unit tests.
Also rename the option to --debug-test-trace for clarity.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the stanza code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the archive code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
Update error message with the hostname and more detail about what went wrong. Hopefully this will help in diagnosing certificate/hostname issues.
Suggested by James Badger.
We have been using a hacked-up JSON generator to pass options from C to Perl since the C binary was introduced. This generator was not very compliant which led to issues with \n, ", etc. inside strings.
We have a fully-compliant JSON generator now so use that instead.
Reported by Leo Khomenko.
Detailed stack traces for low-level functions (e.g. strCat, bufMove) can be very useful for debugging but leaving them on for all tests has become quite burdensome in terms of time. Complex operations like generating JSON on a large KeyValue can lead to timeouts even with generous values.
Add a new param, --debug-trace, to enable test-level stack trace, but leave it off by default.
The remote protocol was calling into the Storage object but this required some translation which will get more awkward as time goes by.
Instead, call directly into the local driver so the communication is directly driver to driver. This still requires resolving the path and may eventually have more duplication with the Storage object methods but it seems the right thing to do.
Expressions such as <REPO:ARCHIVE> require a stanza name in order to be resolved correctly. However, if the stanza name is passed to the remote then that remote will only work correctly for that one stanza.
Instead, resolve the expressions locally but still pass a relative path to the remote. That way, a storage path that is only configured on the remote does not need to be known locally.
This issue was a result of STORAGE_REPO_PATH prepending an extra stanza when the stanza was specified on the command line.
The tests missed this because by some strange coincidence the WAL dirs were empty for each test that specified a stanza. Add new tests to prevent a regression.
Fixed by Stefan Fercot.
Free all cached objects in the storage helper, especially the stanza name.
This clears the storage environment for tests that switch stanza names or go from a stanza name to no stanza name or vice versa. This is only useful for testing right now, but may be used in the future for commands that act on multiple stanzas.
This command was previously forked off from the archive-get command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
Prior to this the Perl remote was used to satisfy C requests. This worked fine but since the remote needed to be migrated to C anyway there was no reason to wait.
Add the ProtocolServer object and tweak ProtocolClient to work with it. It was also necessary to add a mechanism to get option values from the remote so that encryption settings could be read and used in the storage object.
Update the remote storage objects to comply with the protocol changes and add the storage protocol handler.
Ideally this commit would have been broken up into smaller chunks but there are cross-dependencies in the protocol layer and it didn't seem worth the extra effort.
The file write object destructors called close() and finalized the file even if it was not completely written. This was an issue in both the C and Perl code.
Rewrite the destructors to simply free resources (like file handles) rather than calling the close() method. This leaves the temp file in place for filesystems that use temp files.
Add unit tests to prevent regression.
Reported by blogh.
This was not being caught because the integration tests for S3 were running remotely and going through the Perl code rather than the new C code.
Implement the exists method for the S3 driver and add tests to prevent a regression.
Reported by mibiio.
This already worked in reverse, but this case is needed when a command that only uses protocol-timeout (e.g. info) calls a remote process where protocol-timeout and db-timeout can be set. If protocol-timeout was set to less than the default db-timeout then an error resulted.
Rather than create _P/_PP variants for every type that needs to pass/return pointers, create FUNCTION_*_P/PP() macros that will properly pass or return any single/double pointer types.
There remain a few unresolved edge cases such as CHARPY but this handles the majority of types well.
The string object was reallocating memory with every concatenation which is not very efficient. This is especially true for JSON rendering which does a lot of concatenations.
Instead allocate a pool of extra memory on the first concatenation (50% of size) to be used for future concatenations and reallocate when needed.
Also add a 1GB size limit to ensure that there are no overflows.
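A simplified sketch of the growth strategy (not the actual String object; error handling mostly omitted):

    #include <stdlib.h>
    #include <string.h>

    #define STRING_SIZE_MAX ((size_t)1024 * 1024 * 1024)            /* 1GB limit */

    typedef struct SimpleString
    {
        char *buffer;
        size_t size;                                                /* bytes used, excluding terminator */
        size_t extra;                                               /* preallocated bytes beyond size */
    } SimpleString;

    static void
    simpleStringCat(SimpleString *this, const char *append)
    {
        size_t appendSize = strlen(append);

        if (this->size + appendSize > STRING_SIZE_MAX)
            abort();                                                /* the real code throws an error */

        /* reallocate only when the preallocated extra space is exhausted */
        if (appendSize > this->extra)
        {
            this->extra = (this->size + appendSize) / 2;            /* pool 50% of the new size */
            this->buffer = realloc(this->buffer, this->size + appendSize + this->extra + 1);
        }
        else
            this->extra -= appendSize;

        memcpy(this->buffer + this->size, append, appendSize + 1);
        this->size += appendSize;
    }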
Multiple status files were being created by asynchronous archiving if a high-level error occurred after one or more WAL segments had already been transferred successfully. Error files were being written for every file in the queue regardless of whether it had already succeeded. To fix this, add an option to skip writing error files when an ok file already exists.
There are other situations where both files might exist (various fsync and filesystem error scenarios) so it seems best to retry in the case that multiple status files are found rather than throwing a hard error (which then means that archiving is completely stuck). In the case of multiple status files, a warning will be logged to alert the user that something unusual is happening and the command will be retried.
Reported by fpa-postgres, Joe Ayers, Douglas J Hunley.
The implementation using gethostbyname() was only intended to be used during prototyping but was forgotten when the code was finalized.
Replace it with getaddrinfo(), which is more modern and supports IPv6.
Suggested by Bruno Friedmann.
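A minimal sketch of the replacement call (plain POSIX, not the project's wrapper):

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* resolve a host/port pair; the caller frees *result with freeaddrinfo() */
    static int
    hostResolve(const char *host, const char *port, struct addrinfo **result)
    {
        struct addrinfo hints;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;                /* IPv4 or IPv6, unlike gethostbyname() */
        hints.ai_socktype = SOCK_STREAM;

        return getaddrinfo(host, port, &hints, result);
    }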