Bug Fixes:
* Fix zero-length reads causing problems for IO filters that did not expect them. (Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.)
* Fix reliability of error reporting from local/remote processes.
* Fix Posix/CIFS error messages reporting the wrong filename on write/sync/close.
Add production checks to ensure no filter gets a zero-size input buffer.
Also, optimize the case where a filter returns no output. There's no sense in running downstream filters if they have no new input.
The IoRead object was passing zero-length buffers into the filter processing code but not all the filters were happy about getting them.
In particular, the gzip compression filter failed if it was given no input directly after it had flushed all of its buffers. This made the problem rather intermittent even though a zero-length buffer was being passed to the filter at the end of every file. It also explains why tweaking compress-level or buffer-size allowed the file to go through.
Since this error was happening after all processing had completed, there does not appear to be any risk that successfully processed files were corrupted.
Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.
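A hedged sketch of the production check and the short-circuit described above (the types and names are illustrative, not the project's actual filter API):

```c
#include <assert.h>
#include <stddef.h>

// Illustrative buffer type -- not the project's actual Buffer object
typedef struct Buffer {unsigned char *data; size_t used;} Buffer;

static void
filterProcess(Buffer *input, Buffer *output)
{
    // Production check: no filter should ever see a zero-size input buffer
    assert(input->used > 0);

    // ... filter work would populate output here ...

    // If the filter produced no output there is no point running the
    // downstream filters, since they would have no new input
    if (output->used == 0)
        return;

    // ... pass output to the next filter in the chain ...
}
```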
Releasing the lock too early was allowing other async processes to sneak in and start running before the current process was completely shut down.
The only symptom seems to have been mixed-up log messages, so this was not a very serious issue.
Asserts were only reported on stderr rather than being returned through the protocol layer. This did not appear to be very reliable.
Instead, report the assert through the protocol layer like any other error. Add a stack trace if an assert error or debug logging is enabled.
These work almost exactly like the String constant macros. However, a struct per variant type was required, which meant custom constructors and destructors for each type.
Propagate the variant constants out into the codebase wherever they are useful.
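A hedged sketch of the pattern (the type and macro names are invented for illustration): each variant type gets its own constant struct so a macro can build a fully-initialized value at compile time.

```c
#include <stdbool.h>

// Illustrative variant type ids -- not the project's actual enum
typedef enum {varTypeBool, varTypeInt} VariantType;

// One constant struct per variant type, as described above
typedef struct {VariantType type; bool value;} VariantBoolConst;
typedef struct {VariantType type; int value;} VariantIntConst;

#define VARBOOL(val) (&(const VariantBoolConst){.type = varTypeBool, .value = (val)})
#define VARINT(val)  (&(const VariantIntConst){.type = varTypeInt, .value = (val)})
```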
The STRING_CONST() macro worked fine for constants but was not able to constify strings created at runtime.
Add the STR() macro to do this by using strlen() to get the size.
Also rename STRING_CONST() to STRDEF() for brevity and to match the other macro name.
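A hedged sketch of the two macros (the struct layout is an assumption): STRDEF() works only with literals because sizeof() is evaluated at compile time, while STR() calls strlen() so it also works with strings created at runtime.

```c
#include <string.h>

// Illustrative constant string struct -- not the project's actual layout
typedef struct {size_t size; const char *buffer;} StringConst;

// Compile-time size for literals (sizeof includes the terminator, hence -1)
#define STRDEF(str) (&(const StringConst){.size = sizeof(str) - 1, .buffer = (str)})

// Runtime size via strlen() for strings built at runtime
#define STR(str)    (&(const StringConst){.size = strlen(str), .buffer = (str)})
```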
Removed the "anchor" parameter because it was never used in any calls in the Perl code so it was just a dead parameter that always defaulted to true.
Contributed by Cynthia Shang.
These constants are easier to use than cfgOptionName() and cfgCommandName() and lead to cleaner code and simpler message construction.
String versions are provided. Eventually all the strings will be used in the config structures, but for now they are useful to avoid wrapping with strNew().
IMPORTANT NOTE: The new TLS/SSL implementation forbids dots in S3 bucket names per RFC-2818. This security fix is required for compliant hostname verification.
Bug Fixes:
* Fix issues when a path option is / terminated. (Reported by Marc Cousin.)
* Fix issues when log-level-file=off is set for the archive-get command. (Reported by Brad Nicholson.)
* Fix C code to recognize host:port option format like Perl does. (Reported by Kyle Nevins.)
* Fix issues with remote/local command logging options.
Improvements:
* The archive-push command is implemented entirely in C.
* Increase process-max limit to 999. (Suggested by Rakshitha-BR.)
* Improve error message when an S3 bucket name contains dots.
Documentation Improvements:
* Clarify that S3-compatible object stores are supported. (Suggested by Magnus Hagander.)
This was not an intentional feature in Perl, but it works, so it makes sense to implement the same syntax in C.
This is a break from other places where a -port option is explicitly supplied, so it may make sense to support both styles going forward. This commit does not address that, however.
Reported by Kyle Nevins.
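A minimal sketch of this style of parsing (illustrative only, not the project's actual code): split on the colon when present, otherwise keep the default port.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Parse "host" or "host:port" into its parts; port keeps its prior
// (default) value when no colon is present
static void
parseHostPort(const char *option, char *host, size_t hostSize, unsigned int *port)
{
    const char *colon = strrchr(option, ':');

    if (colon == NULL)
        snprintf(host, hostSize, "%s", option);
    else
    {
        snprintf(host, hostSize, "%.*s", (int)(colon - option), option);
        *port = (unsigned int)strtoul(colon + 1, NULL, 10);
    }
}
```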
The Perl lib we have been using for TLS allows dots in wildcards, but this is forbidden by RFC-2818. The new TLS implementation in C forbids this pattern, just as PostgreSQL and curl do.
However, this does present a problem for users who have been using bucket names with dots in older versions of pgBackRest. Since this limitation exists for security reasons there appears to be no option but to take a hard line and do our best to notify the user of the issue as clearly as possible.
This problem was not specific to archive-get, but that was the only place where it manifested in the last release. The new archive-push command was also affected.
The issue was with daemon processes that had closed all their file descriptors. When exec'ing and setting up pipes to communicate with a child process the dup2() function created file descriptors that overlapped with the first descriptor (stdout) that was being duped into. This descriptor was subsequently closed and wackiness ensued.
If logging was enabled (the default) that increased all the file descriptors by one and everything worked.
Fix this by checking if the file descriptor to be closed is the same one being dup'd into. This solution may not be generally applicable but it works fine in this case.
Reported by Brad Nicholson.
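A hedged sketch of the guard (variable and function names are illustrative): when all descriptors were closed before exec, pipe() can hand back descriptor 1, so the pipe end may already *be* stdout and must not be closed after the dup.

```c
#include <unistd.h>

static void
redirectStdout(int pipeFd)
{
    // Only dup and close when the pipe end is a distinct descriptor;
    // otherwise dup2() would be a no-op and close() would destroy stdout
    if (pipeFd != STDOUT_FILENO)
    {
        dup2(pipeFd, STDOUT_FILENO);    // route stdout into the pipe
        close(pipeFd);                  // safe: pipeFd is a distinct fd
    }
}
```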
The documentation mentioned Amazon S3 frequently but failed to mention that other S3-compatible object stores are also supported.
Tone down the specific mentions of Amazon S3 and replace them with "S3-compatible object store" when appropriate.
Suggested by Magnus Hagander.
This new implementation should behave exactly like the old Perl code with the exception of updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
When a repository server is configured, commands that modify the repository acquire a remote lock as well as a local lock for extra protection against multiple writers.
Instead of the custom logic used in Perl, make remote locking part of the command configuration.
This also means that the C remote needs the stanza since it is used to construct the lock name. We may need to revisit this at a later date.
While the local processes are doing their jobs the remote connection from the main process may timeout.
Send occasional noops to ensure that doesn't happen.
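A minimal sketch of this kind of keep-alive, with invented names (sendNoop stands in for whatever "do nothing" message the protocol layer provides):

```c
#include <time.h>

// Send a noop once more than half the protocol timeout has elapsed
// since the last message on the remote connection
static void
keepAlive(time_t *lastSent, time_t timeout, void (*sendNoop)(void))
{
    if (time(NULL) - *lastSent > timeout / 2)
    {
        sendNoop();
        *lastSent = time(NULL);
    }
}
```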
This may not be the best way to detect 64-bit platforms but it seems to be working fine so far.
Create a macro to make it clearer what is being done and to make it easier to change the implementation.
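One plausible form such a macro could take (the name and the test are assumptions, not necessarily the project's): compare the width of a pointer-sized integer against 64 bits.

```c
#include <stdint.h>

// True on platforms where pointers are 64 bits wide -- simple, if not
// bulletproof, and easy to swap for a better test later
#define TEST_64BIT() (UINTPTR_MAX == UINT64_MAX)
```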
The test harness was not being built with warnings which caused some wackiness with an improperly structured switch. Just use the same warnings as the code being tested.
Also enable warnings on code that is not directly being tested since other code modules are frequently modified during testing.
We deal with some pretty big lists in archive-push so a nested-loop anti-join looked like it would not be efficient enough.
This merge anti-join should do the trick even though both lists must be sorted first.
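A minimal sketch of a merge anti-join over sorted C-string arrays (the function name is illustrative): walk both lists once, keeping each element of the first list that never matches the second.

```c
#include <stdio.h>
#include <string.h>

// Print every element of "have" that is absent from "exclude".
// Both arrays must already be sorted with the same comparator.
static void
mergeAntiJoin(const char **have, size_t haveSize, const char **exclude, size_t excludeSize)
{
    size_t h = 0, e = 0;

    while (h < haveSize)
    {
        // When exclude is exhausted, everything left in have is kept
        int cmp = e < excludeSize ? strcmp(have[h], exclude[e]) : -1;

        if (cmp < 0)
            printf("%s\n", have[h++]);      // only in have -- keep it
        else if (cmp == 0)
            h++;                            // in both -- skip it
        else
            e++;                            // advance the exclude side
    }
}
```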
The prior behavior on a global error (i.e. not file specific) was to write an individual error file for each WAL file being processed. On retry each of these error files would be removed, and if the error was persistent, they would then be recreated. In a busy environment this could mean tens or hundreds of thousands of files.
Another issue was that the error files could not be written until a list of WAL files to process had been generated. This was easy enough for archive-get but archive-push requires more processing and any errors that happened when generating the list would only be reported in the pgBackRest log rather than the PostgreSQL log.
Instead write a global.error file that applies to any WAL file that does not have an explicit ok or error file. This reduces churn and allows more errors to be reported directly to PostgreSQL.
Having a copy per version worked well until it was time to add new features or modify existing functions. Then it was necessary to modify every version and try to keep them all in sync.
Consolidate all the PostgreSQL types into a single file using #if for type versions. Many types do not change or change infrequently so this cuts down on duplication. In addition, it is far easier to see what has changed when a new version is added.
Use macros to write the interface functions. There is still duplication here since some changes require a new copy of the macro, but it is far less than before.
Move the documentation to postgres/interface.c so it can be updated without having to update N source files.
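As a hedged illustration of this layout (the struct and version macros below are invented examples, not real PostgreSQL types): a type that changed between versions is guarded with #if, while unchanged types appear only once.

```c
#define PG_VERSION_94 940
#define PG_VERSION_96 960
#define PG_VERSION PG_VERSION_96    // in practice, set by the build

// A type that gained a field in 9.6 gets two guarded definitions; a type
// that never changes would appear once, with no guard at all
#if PG_VERSION >= PG_VERSION_96
typedef struct ExampleData {int fieldOld; int fieldAddedIn96;} ExampleData;
#elif PG_VERSION >= PG_VERSION_94
typedef struct ExampleData {int fieldOld;} ExampleData;
#endif
```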
The "is" function was not very specific so rename to "controlIs".
Since archive-push is being moved to C, the Perl remote will no longer work with that command.
Eventually this module will need to be rewritten in C, but for now just use the restore command which is planned to be migrated last.
Now that repositories are writable the storage drivers that don't yet support file writes need to be updated to do so.
Note that the part size for multi-part upload has not been defined as a proper constant. This will become an option in the near future so it doesn't seem worth creating a constant that we might then forget to remove.
The xml objects only exposed read methods of the underlying libxml2.
This worked for S3 commands that only received data but to send data we need to be able to create XML documents from scratch.
Add the ability to create empty documents and add nodes and contents.
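A minimal sketch using the underlying libxml2 calls that such write methods would wrap (the element names are just examples, chosen to resemble an S3 multi-part upload request):

```c
#include <libxml/tree.h>

// Build a small document from scratch and render it to memory
static xmlChar *
buildXml(int *size)
{
    xmlDocPtr doc = xmlNewDoc(BAD_CAST "1.0");
    xmlNodePtr root = xmlNewNode(NULL, BAD_CAST "CompleteMultipartUpload");
    xmlDocSetRootElement(doc, root);
    xmlNewChild(root, NULL, BAD_CAST "Part", BAD_CAST "etag-goes-here");

    xmlChar *xml = NULL;
    xmlDocDumpMemory(doc, &xml, size);  // render the finished document
    xmlFreeDoc(doc);

    return xml;                         // caller frees with xmlFree()
}
```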
The C code was assuming that the current PostgreSQL version in archive.info/backup.info was the most recent item in the history, but this is not always the case with some stanza-upgrade scenarios. If a cluster is restored from before the upgrade and stanza-upgrade is run again, it will revert db-id to the original history item.
Instead, load db-id from the db section explicitly as the Perl code does.
This did not affect archive-get since it does a reverse scan through the history versions and does not rely on the current version.
Logging was being enabled on local/remote processes even if --log-subprocess was not specified, so fix that.
Also, make sure that stderr is enabled at error level as it was in Perl. This helps expose error information for debugging.
For remotes, suppress log and lock paths since these are not applicable on remote hosts. These options should be set in the local config if they need to be overridden.
None of our C HTTP requests have needed to output a body, but they will with the migration of archive-push.
Also, add constants that are useful when POSTing/PUTing data.
The size constants are convenient for creating data structures of the proper size.
The hash type constant must be extern'd so that results can be pulled from a filter.
This was missing when bufUsed() was introduced.
It is not currently a live issue, but becomes a problem in the new archive-push code where the entire buffer is not always used.
This condition was not being properly checked for in the C code and it caused problems in the info command, at the very least.
Instead of applying a local fix, introduce a new path option type that will rigorously check the format of any incoming paths.
Reported by Marc Cousin.
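A hedged sketch of the kind of check such an option type might apply (illustrative only, not the project's actual validation):

```c
#include <stdbool.h>
#include <string.h>

// Require an absolute path with no trailing / (except the root itself)
static bool
pathIsValid(const char *path)
{
    size_t size = strlen(path);

    if (size == 0 || path[0] != '/')
        return false;

    return size == 1 || path[size - 1] != '/';
}
```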
This command was previously forked off from the archive-push command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
This driver borrows heavily from the Posix driver.
At this point the only difference is that CIFS does not allow explicit directory fsyncs so they need to be suppressed. At some point the CIFS driver will also need to omit link support.
With the addition of this driver repository storage is now writable.
Bug Fixes:
* Fix possible truncated WAL segments when an error occurs mid-write. (Reported by blogh.)
* Fix info command missing WAL min/max when stanza specified. (Fixed by Stefan Fercot.)
* Fix non-compliant JSON for options passed from C to Perl. (Reported by Leo Khomenko.)
Improvements:
* The archive-get command is implemented entirely in C.
* Enable socket keep-alive on older Perl versions. (Contributed by Marc Cousin.)
* Error when parameters are passed to a command that does not accept parameters. (Suggested by Jason O'Donnell.)
* Add hints when unable to find a WAL segment in the archive. (Suggested by Hans-Jürgen Schönig.)
* Improve error when hostname cannot be found in a certificate. (Suggested by James Badger.)
* Add additional options to backup.manifest for debugging purposes. (Contributed by blogh.)
Add the buffer-size, compress-level, compress-level-network, and process-max options to the backup:option section in backup.manifest to aid in debugging.
It may also make sense to propagate these options up to backup.info so they can be displayed in the info command, but for now this is deemed sufficient.
Contributed by blogh.
When this error happens in the context of a backup it can be a bit mystifying as to why the backup is failing. Add some hints to get the user started.
These hints will appear any time a WAL segment can't be found, which makes the hint about the check command redundant when the user is actually running the check command, but it doesn't seem worth trying to exclude the hint in that case.
Suggested by Hans-Jürgen Schönig.
DESTDIR always had /usr/bin appended, which was a problem for systems that don't use /usr/bin as the install location for binaries.
Instead, use the value of DESTDIR exactly and update the Debian packages accordingly.
Contributed by Douglas J Hunley.
This behavior allowed a command like this to run without error:
pgbackrest backup --stanza=db full
The command actually performed an incremental backup in most circumstances, because the `full` parameter was ignored.
Instead, output an error and exit.
Suggested by Jason O'Donnell.
This warning was being output when getting help if retention was not set:
WARN: option repo1-retention-full is not set, the repository may run out of space
Suppress this when getting help since the warning will display by default on a system that is not completely configured.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
Copying the name into each memory context was very inefficient in terms of memory and time, and dynamic context names were never used anyway.
Just require that context names be valid for the life of the context.
In practice they are all static strings.
Allocations required a sequential scan through the allocation list for both contexts and memory. This was very inefficient since for the most part individual memory allocations are seldom freed directly, rather they are freed when their context is freed.
For both types of allocations, track an index for the lowest free position. Once that position has been allocated, a sequential search is required to find the next free position, but this is still far better than scanning the entire list for every allocation.
With a moderately-sized dataset (500 history entries in backup.info), there is a 237X performance improvement when combined with the f74e88bb refactor.
Before:

  %        cumulative   self
  time     seconds      seconds    name
  65.11    331.37       331.37     memContextAlloc
  16.19    413.78        82.40     memContextCurrent
  14.74    488.81        75.03     memContextTop
   2.65    502.29        13.48     memContextNewIndex
   1.18    508.31         6.02     memFind

After:

  %        cumulative   self
  time     seconds      seconds    name
  94.69    2.14         2.14       memFind
Finding memory allocations in order to free or resize them is the next bottleneck, but this does not seem to be a major issue presently.
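A hedged sketch of the free-position tracking described above, with invented field names: the cached index means allocation does not rescan the whole list from the start every time.

```c
#include <assert.h>
#include <stddef.h>

typedef struct MemContext
{
    void **allocList;           // allocation slots, NULL when free
    size_t allocListSize;
    size_t allocFreeIdx;        // lowest position known to be free
} MemContext;

static size_t
memAllocSlot(MemContext *ctx, void *alloc)
{
    size_t idx = ctx->allocFreeIdx;

    // Scan forward from the cached position for the next free slot
    while (idx < ctx->allocListSize && ctx->allocList[idx] != NULL)
        idx++;

    assert(idx < ctx->allocListSize);   // a real implementation grows the list

    ctx->allocList[idx] = alloc;
    ctx->allocFreeIdx = idx + 1;        // next search starts after this slot

    return idx;
}
```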
Using the functions internally is great for abstraction but not so great for performance on non-optimized builds.
Also, the functions end up prominent in any profiled build.
The prior method depended on IO::Socket::SSL to push the keep-alive options down to the socket, but it only worked for recent versions of the module.
Instead, create the socket directly using IO::Socket::IP if available, or IO::Socket::INET as a fallback. The keep-alive option is set directly on the socket before it is passed to IO::Socket::SSL.
Contributed by Marc Cousin.
This new implementation should behave exactly like the old Perl code with the exception of a few updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
The C local is only used for C commands in the main process.
Some tweaking of the existing protocolGet() command was required. Originally the idea was to share the function for local and remote requests but the differences (as in Perl) were too great to make that practical.
Some IO objects have file descriptors which can be useful for monitoring with select().
It might also be useful to expose handles for write objects but there is currently no use case.
There was a lot of extra boilerplate involved in setting up pipes so that is now automated.
In some cases testing with multiple children is useful so allow that as well.
This amends 70c30dfb which disabled test tracing in general.
Instead, only enable test tracing by default for modules that are being unit tested. This saves lots of time but still ensures that test tracing is working and helps with debugging in unit tests.
Also rename the option to --debug-test-trace for clarity.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the stanza code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the archive code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
Update error message with the hostname and more detail about what went wrong. Hopefully this will help in diagnosing certificate/hostname issues.
Suggested by James Badger.
We have been using a hacked-up JSON generator to pass options from C to Perl since the C binary was introduced. This generator was not fully compliant, which led to issues with characters such as \n and " inside strings.
We have a fully-compliant JSON generator now so use that instead.
Reported by Leo Khomenko.
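For illustration, a minimal sketch of the kind of string escaping a compliant generator must perform (not the project's actual code):

```c
#include <stdio.h>

// Write a JSON string value with the special characters escaped
static void
jsonEscape(const char *value, FILE *out)
{
    for (const char *c = value; *c != '\0'; c++)
    {
        switch (*c)
        {
            case '"':  fputs("\\\"", out); break;
            case '\\': fputs("\\\\", out); break;
            case '\n': fputs("\\n", out);  break;
            case '\r': fputs("\\r", out);  break;
            case '\t': fputs("\\t", out);  break;
            default:   fputc(*c, out);     break;
        }
    }
}
```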