Use autoconf to provide a basic configure script. WITH_BACKTRACE is yet to be migrated to configure and the unit tests still use a custom Makefile.
Each C file must include "build.auto.conf" before all other includes and defines. This is enforced by test.pl for includes, but it won't detect incorrect define ordering.
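For example, a conforming C file begins like this (the project header shown is illustrative):

    #include "build.auto.conf"              // must be first so build defines are set

    #include <string.h>                     // system headers follow

    #include "common/debug.h"               // then project headers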
Update packages to call configure and use standard flags to pass options.
Update RHEL repos that have changed upstream. Remove PostgreSQL 9.3 since the RHEL6/7 packages have disappeared.
Remove PostgreSQL versions from U12 that are still getting minor updates so the container does not need to be rebuilt.
LZ4 is included for future development, but this seems like a good time to add it to the containers.
The function provides all the file/path/link information required to build a backup manifest.
Also update storageInfo() to provide the same information for a single file.
At the same time change the way that load constructors work (and are named) so that Ini objects do not persist after the constructors complete.
infoArchiveSave() is excluded from this commit since it is just a trivial call to infoPgSave() and won't be required soon.
In most cases the JSON type is known so this is more efficient than converting to Variant first, both in terms of memory and time.
Also rename some of the existing functions for consistency.
Variants were being used to expose String and StringList types but this can be done more simply with an additional method.
Using only strings also allows for a more efficient implementation down the road.
This greatly reduces calls to filter processing, which is a performance benefit, but also makes the trace logs smaller and easier to read.
However, this means that ioWriteFlush() will no longer work with filters since a full flush of IoFilterGroup would require an expensive reset. Currently ioWriteFlush() is not used in this scenario so for now just add an assert to ensure it stays that way.
These are more efficient than creating buffers in place when needed.
After replacement, discovered that bufNewStr() and bufNewZ() were not being used in the core code so removed them. This required using the macros in tests, which is not the usual pattern.
Since the introduction of blocking read drivers (e.g. IoHandleRead, TlsClient) the non-blocking drivers have used the same rules for determining maximum buffer size, i.e. read only as much as requested. This is necessary so the blocking drivers don't get stuck waiting for data that might not be coming.
Instead mark blocking drivers so IoRead knows how much buffer to allow for the read. The non-blocking drivers can now request the maximum number of bytes allowed by buffer-size.
Bug Fixes:
* Fix zero-length reads causing problems for IO filters that did not expect them. (Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.)
* Fix reliability of error reporting from local/remote processes.
* Fix Posix/CIFS error messages reporting the wrong filename on write/sync/close.
Add production checks to ensure no filter gets a zero-size input buffer.
Also, optimize the case where a filter returns no output. There's no sense in running downstream filters if they have no new input.
The IoRead object was passing zero-length buffers into the filter processing code but not all the filters were happy about getting them.
In particular, the gzip compression filter failed if it was given no input directly after it had flushed all of its buffers. This made the problem rather intermittent even though a zero-length buffer was being passed to the filter at the end of every file. It also explains why tweaking compress-level or buffer-size allowed the file to go through.
Since this error was happening after all processing had completed, there does not appear to be any risk that successfully processed files were corrupted.
Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.
Releasing the lock too early was allowing other async processes to sneak in and start running before the current process was completely shut down.
The only symptom seems to have been mixed up log messages so not a very serious issue.
Asserts were only reported on stderr rather than being returned through the protocol layer. This did not appear to be very reliable.
Instead, report the assert through the protocol layer like any other error. Add a stack trace if an assert error or debug logging is enabled.
These work almost exactly like the String constant macros. However, a struct per variant type was required which meant custom constructors and destructors for each type.
Propagate the variant constants out into the codebase wherever they are useful.
The STRING_CONST() macro worked fine for constants but was not able to constify strings created at runtime.
Add the STR() macro to do this by using strlen() to get the size.
Also rename STRING_CONST() to STRDEF() for brevity and to match the other macro name.
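A minimal sketch of how such macros can work, assuming a constant String is just a struct pointing at an existing buffer (the type and macro bodies are illustrative, not the actual implementation):

    #include <stddef.h>
    #include <string.h>

    typedef struct StringConst
    {
        size_t size;                                    // length, excluding the terminator
        const char *buffer;                             // existing buffer, not copied
    } StringConst;

    // Compile-time constant -- size comes from the literal itself
    #define STRDEF(literal)                                                         \
        ((const StringConst){.size = sizeof(literal) - 1, .buffer = (literal)})

    // Runtime constification -- size must be computed with strlen()
    #define STR(zeroTerminated)                                                     \
        ((const StringConst){.size = strlen(zeroTerminated), .buffer = (zeroTerminated)})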
Removed the "anchor" parameter because it was never used in any calls in the Perl code so it was just a dead parameter that always defaulted to true.
Contributed by Cynthia Shang.
These constants are easier than using cfgOptionName() and cfgCommandName() and lead to cleaner code and simpler message construction.
String versions are provided. Eventually all the strings will be used in the config structures, but for now they are useful to avoid wrapping with strNew().
IMPORTANT NOTE: The new TLS/SSL implementation forbids dots in S3 bucket names per RFC-2818. This security fix is required for compliant hostname verification.
Bug Fixes:
* Fix issues when a path option is / terminated. (Reported by Marc Cousin.)
* Fix issues when log-level-file=off is set for the archive-get command. (Reported by Brad Nicholson.)
* Fix C code to recognize host:port option format like Perl does. (Reported by Kyle Nevins.)
* Fix issues with remote/local command logging options.
Improvements:
* The archive-push command is implemented entirely in C.
* Increase process-max limit to 999. (Suggested by Rakshitha-BR.)
* Improve error message when an S3 bucket name contains dots.
Documentation Improvements:
* Clarify that S3-compatible object stores are supported. (Suggested by Magnus Hagander.)
This was not an intentional feature in Perl, but it works, so it makes sense to implement the same syntax in C.
This is a break from other places where a -port option is explicitly supplied, so it may make sense to support both styles going forward. This commit does not address that, however.
Reported by Kyle Nevins.
The Perl lib we have been using for TLS allows dots in wildcards, but this is forbidden by RFC-2818. The new TLS implementation in C forbids this pattern, just as PostgreSQL and curl do.
However, this does present a problem for users who have been using bucket names with dots in older versions of pgBackRest. Since this limitation exists for security reasons there appears to be no option but to take a hard line and do our best to notify the user of the issue as clearly as possible.
This problem was not specific to archive-get, but that was the only place it was manifesting in the last release. The new archive-push was also affected.
The issue was with daemon processes that had closed all their file descriptors. When exec'ing and setting up pipes to communicate with a child process the dup2() function created file descriptors that overlapped with the first descriptor (stdout) that was being duped into. This descriptor was subsequently closed and wackiness ensued.
If logging was enabled (the default) that increased all the file descriptors by one and everything worked.
Fix this by checking if the file descriptor to be closed is the same one being dup'd into. This solution may not be generally applicable but it works fine in this case.
Reported by Brad Nicholson.
The documentation mentioned Amazon S3 frequently but failed to mention that other S3-compatible object stores are also supported.
Tone down the specific mentions of Amazon S3 and replace them with "S3-compatible object store" when appropriate.
Suggested by Magnus Hagander.
This new implementation should behave exactly like the old Perl code with the exception of updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
When a repository server is configured, commands that modify the repository acquire a remote lock as well as a local lock for extra protection against multiple writers.
Instead of the custom logic used in Perl, make remote locking part of the command configuration.
This also means that the C remote needs the stanza since it is used to construct the lock name. We may need to revisit this at a later date.
While the local processes are doing their jobs the remote connection from the main process may timeout.
Send occasional noops to ensure that doesn't happen.
This may not be the best way to detect 64-bit platforms but it seems to be working fine so far.
Create a macro to make it clearer what is being done and to make it easier to change the implementation.
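One way such a macro might be written, assuming the check is based on pointer width (the macro name is hypothetical):

    #include <stdint.h>

    // Assume a 64-bit platform when pointers are 8 bytes wide. Not exhaustive,
    // but good enough for the currently supported platforms.
    #if UINTPTR_MAX == UINT64_MAX
        #define TEST_64BIT() 1
    #else
        #define TEST_64BIT() 0
    #endif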
The test harness was not being built with warnings which caused some wackiness with an improperly structured switch. Just use the same warnings as the code being tested.
Also enable warnings on code that is not directly being tested since other code modules are frequently modified during testing.
We deal with some pretty big lists in archive-push so a nested-loop anti-join looked like it would not be efficient enough.
This merge anti-join should do the trick even though both lists must be sorted first.
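A sketch of the merge anti-join using pgBackRest-style StringList calls (the function name and exact signatures are approximations):

    #include "common/type/stringList.h"

    // Return items in listA that do not appear in listB. Both lists must
    // already be sorted ascending with the same comparator.
    static StringList *
    strLstMergeAnti(StringList *listA, StringList *listB)
    {
        StringList *result = strLstNew();
        unsigned int idxB = 0;

        for (unsigned int idxA = 0; idxA < strLstSize(listA); idxA++)
        {
            // Skip listB items that sort before the current listA item
            while (idxB < strLstSize(listB) && strCmp(strLstGet(listB, idxB), strLstGet(listA, idxA)) < 0)
                idxB++;

            // No match in listB means this item belongs in the anti-join result
            if (idxB == strLstSize(listB) || strCmp(strLstGet(listB, idxB), strLstGet(listA, idxA)) != 0)
                strLstAdd(result, strLstGet(listA, idxA));
        }

        return result;
    }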
The prior behavior on a global error (i.e. not file specific) was to write an individual error file for each WAL file being processed. On retry each of these error files would be removed, and if the error was persistent, they would then be recreated. In a busy environment this could mean tens or hundreds of thousands of files.
Another issue was that the error files could not be written until a list of WAL files to process had been generated. This was easy enough for archive-get but archive-push requires more processing and any errors that happened when generating the list would only be reported in the pgBackRest log rather than the PostgreSQL log.
Instead write a global.error file that applies to any WAL file that does not have an explicit ok or error file. This reduces churn and allows more errors to be reported directly to PostgreSQL.
Having a copy per version worked well until it was time to add new features or modify existing functions. Then it was necessary to modify every version and try to keep them all in sync.
Consolidate all the PostgreSQL types into a single file using #if for type versions. Many types do not change or change infrequently so this cuts down on duplication. In addition, it is far easier to see what has changed when a new version is added.
Use macros to write the interface functions. There is still duplication here since some changes require a new copy of the macro, but it is far less than before.
Move the documentation to postgres/interface.c so it can be updated without having to update N source files.
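The pattern looks roughly like this (struct, field, and version macro names are illustrative, not the actual PostgreSQL types):

    #include <stdint.h>

    // One copy of the type, compiled once per PostgreSQL version with
    // PG_VERSION defined appropriately. Fields that changed between versions
    // are selected with #if rather than duplicating the whole struct.
    typedef struct ControlFileData
    {
        uint64_t system_identifier;

    #if PG_VERSION >= PG_VERSION_93
        uint32_t data_checksum_version;                 // added in a later version (illustrative)
    #endif
    } ControlFileData;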
The "is" function was not very specific so rename to "controlIs".
Since archive-push is being moved to C, the Perl remote will no longer work with that command.
Eventually this module will need to be rewritten in C, but for now just use the restore command which is planned to be migrated last.
Now that repositories are writable the storage drivers that don't yet support file writes need to be updated to do so.
Note that the part size for multi-part upload has not been defined as a proper constant. This will become an option in the near future so it doesn't seem worth creating a constant that we might then forget to remove.
The xml objects only exposed read methods of the underlying libxml2.
This worked for S3 commands that only received data but to send data we need to be able to create XML documents from scratch.
Add the ability to create empty documents and add nodes and contents.
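For reference, building a document from scratch with raw libxml2 looks roughly like this; the new objects wrap this kind of sequence (the node names are just examples):

    #include <libxml/tree.h>

    xmlDocPtr doc = xmlNewDoc(BAD_CAST "1.0");
    xmlNodePtr root = xmlNewNode(NULL, BAD_CAST "CompleteMultipartUpload");
    xmlDocSetRootElement(doc, root);

    // Add a child node with text content
    xmlNewChild(root, NULL, BAD_CAST "PartNumber", BAD_CAST "1");

    // Render the document into a buffer for sending
    xmlChar *buffer = NULL;
    int size = 0;
    xmlDocDumpMemory(doc, &buffer, &size);

    xmlFree(buffer);
    xmlFreeDoc(doc);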
The C code was assuming that the current PostgreSQL version in archive.info/backup.info was the most recent item in the history, but this is not always the case with some stanza-upgrade scenarios. If a cluster is restored from before the upgrade and stanza-upgrade is run again, it will revert db-id to the original history item.
Instead, load db-id from the db section explicitly as the Perl code does.
This did not affect archive-get since it does a reverse scan through the history versions and does not rely on the current version.
Logging was being enabled on local/remote processes even if --log-subprocess was not specified, so fix that.
Also, make sure that stderr is enabled at error level as it was in Perl. This helps expose error information for debugging.
For remotes, suppress log and lock paths since these are not applicable on remote hosts. These options should be set in the local config if they need to be overridden.
None of our C HTTP requests have needed to output a body, but they will with the migration of archive-push.
Also, add constants that are useful when POSTing/PUTing data.
The size constants are convenient for creating data structures of the proper size.
The hash type constant must be extern'd so that results can be pulled from a filter.
This was missing when bufUsed() was introduced.
It is not currently a live issue, but becomes a problem in the new archive-push code where the entire buffer is not always used.
This condition was not being properly checked for in the C code and it caused problems in the info command, at the very least.
Instead of applying a local fix, introduce a new path option type that will rigorously check the format of any incoming paths.
Reported by Marc Cousin.
This command was previously forked off from the archive-push command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
This driver borrows heavily from the Posix driver.
At this point the only difference is that CIFS does not allow explicit directory fsyncs so they need to be suppressed. At some point the CIFS driver will also omit link support.
With the addition of this driver repository storage is now writable.
Bug Fixes:
* Fix possible truncated WAL segments when an error occurs mid-write. (Reported by blogh.)
* Fix info command missing WAL min/max when stanza specified. (Fixed by Stefan Fercot.)
* Fix non-compliant JSON for options passed from C to Perl. (Reported by Leo Khomenko.)
Improvements:
* The archive-get command is implemented entirely in C.
* Enable socket keep-alive on older Perl versions. (Contributed by Marc Cousin.)
* Error when parameters are passed to a command that does not accept parameters. (Suggested by Jason O'Donnell.)
* Add hints when unable to find a WAL segment in the archive. (Suggested by Hans-Jürgen Schönig.)
* Improve error when hostname cannot be found in a certificate. (Suggested by James Badger.)
* Add additional options to backup.manifest for debugging purposes. (Contributed by blogh.)
Add the buffer-size, compress-level, compress-level-network, and process-max options to the backup:option section in backup.manifest to aid in debugging.
It may also make sense to propagate these options up to backup.info so they can be displayed in the info command, but for now this is deemed sufficient.
Contributed by blogh.
When this error happens in the context of a backup it can be a bit mystifying as to why the backup is failing. Add some hints to get the user started.
These hints will appear any time a WAL segment can't be found, which makes the hint about the check command redundant when the user is actually running the check command, but it doesn't seem worth trying to exclude the hint in that case.
Suggested by Hans-Jürgen Schönig.
DESTDIR always had /usr/bin appended which was a problem for systems that don't use /usr/bin as the install location for binaries.
Instead, use the value of DESTDIR exactly and update the Debian packages accordingly.
Contributed by Douglas J Hunley.
This behavior allowed a command like this to run without error:
pgbackrest backup --stanza=db full
In fact it performed an incremental backup in most circumstances because the `full` parameter was ignored.
Instead, output an error and exit.
Suggested by Jason O'Donnell.
This warning was being output when getting help if retention was not set:
WARN: option repo1-retention-full is not set, the repository may run out of space
Suppress this when getting help since the warning will display by default on a system that is not completely configured.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
This is very inefficient in terms of memory and time and dynamic context names were never utilized.
Just require that context names be valid for the life of the context.
In practice they are all static strings.
Allocations required a sequential scan through the allocation list for both contexts and memory. This was very inefficient since for the most part individual memory allocations are seldom freed directly, rather they are freed when their context is freed.
For both types of allocations track an index for the lowest free position. After an allocation of the free position, a sequential search will be required for the next allocation but this is still far better than doing a scan for every allocation.
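The idea in sketch form (field names are illustrative, not the actual implementation):

    // Find a free slot starting from the cached lowest-free position rather
    // than scanning the whole allocation list from the beginning.
    static unsigned int
    memAllocIdx(MemContext *this)
    {
        unsigned int idx = this->allocFreeIdx;

        // Sequential scan, but only from the last known free position
        while (idx < this->allocListSize && this->allocList[idx] != NULL)
            idx++;

        // The next allocation will begin searching after this slot
        this->allocFreeIdx = idx + 1;

        return idx;
    }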
With a moderately-sized dataset (500 history entries in backup.info), there is a 237X performance improvement when combined with the f74e88bb refactor.
Before:

        %   cumulative     self
     time      seconds  seconds  name
    65.11       331.37   331.37  memContextAlloc
    16.19       413.78    82.40  memContextCurrent
    14.74       488.81    75.03  memContextTop
     2.65       502.29    13.48  memContextNewIndex
     1.18       508.31     6.02  memFind

After:

        %   cumulative     self
     time      seconds  seconds  name
    94.69         2.14     2.14  memFind
Finding memory allocations in order to free or resize them is the next bottleneck, but this does not seem to be a major issue presently.
Using the functions internally is great for abstraction but not so great for performance on non-optimized builds.
Also, the functions end up prominent in any profiled build.
The prior method depended on IO::Socket::SSL to push the keep-alive options down to the socket but it only worked for recent versions of the module.
Instead, create the socket directly using IO::Socket::IP if available or IO::Socket::INET as a fallback. The keep-alive option is set directly on the socket before it is passed to IO::Socket::SSL.
Contributed by Marc Cousin.
This new implementation should behave exactly like the old Perl code with the exception of a few updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
The C local is only used for C commands in the main process.
Some tweaking of the existing protocolGet() command was required. Originally the idea was to share the function for local and remote requests but the differences (as in Perl) were too great to make that practical.
Some IO objects have file descriptors which can be useful for monitoring with select().
It might also be useful to expose handles for write objects but there is currently no use case.
There was a lot of extra boilerplate involved in setting up pipes so that is now automated.
In some cases testing with multiple children is useful so allow that as well.
This amends 70c30dfb which disabled test tracing in general.
Instead, only enable test tracing by default for modules that are being unit tested. This saves lots of time but still ensures that test tracing is working and helps with debugging in unit tests.
Also rename the option to --debug-test-trace for clarity.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the stanza code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the archive code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
Update error message with the hostname and more detail about what went wrong. Hopefully this will help in diagnosing certificate/hostname issues.
Suggested by James Badger.
We have been using a hacked-up JSON generator to pass options from C to Perl since the C binary was introduced. This generator was not very compliant which led to issues with \n, ", etc. inside strings.
We have a fully-compliant JSON generator now so use that instead.
Reported by Leo Khomenko.
Detailed stack traces for low-level functions (e.g. strCat, bufMove) can be very useful for debugging but leaving them on for all tests has become quite burdensome in terms of time. Complex operations like generating JSON on a large KeyValue can lead to timeouts even with generous values.
Add a new param, --debug-trace, to enable test-level stack trace, but leave it off by default.
Expressions such as <REPO:ARCHIVE> require a stanza name in order to be resolved correctly. However, if the stanza name is passed to the remote then that remote will only work correctly for that one stanza.
Instead, resolve the expressions locally but still pass a relative path to the remote. That way, a storage path that is only configured on the remote does not need to be known locally.
This issue was a result of STORAGE_REPO_PATH prepending an extra stanza when the stanza was specified on the command line.
The tests missed this because by some strange coincidence the WAL dirs were empty for each test that specified a stanza. Add new tests to prevent a regression.
Fixed by Stefan Fercot.
Free all cached objects in the storage helper, especially the stanza name.
This clears the storage environment for tests that switch stanza names or go from a stanza name to no stanza name or vice versa. This is only useful for testing right now, but may be used in the future for commands that act on multiple stanzas.
This was previously 256, which was too small to log protocol parameters. Not only did this truncate important debug information but varying path lengths caused spurious differences in the expect logs.
This command was previously forked off from the archive-get command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
These are intended to be temporary until a fully automated report is developed.
Since we don't know when that will happen, at least make it easier to generate the current report.
Prior to this the Perl remote was used to satisfy C requests. This worked fine but since the remote needed to be migrated to C anyway there was no reason to wait.
Add the ProtocolServer object and tweak ProtocolClient to work with it. It was also necessary to add a mechanism to get option values from the remote so that encryption settings could be read and used in the storage object.
Update the remote storage objects to comply with the protocol changes and add the storage protocol handler.
Ideally this commit would have been broken up into smaller chunks but there are cross-dependencies in the protocol layer and it didn't seem worth the extra effort.
The file write object destructors called close() and finalized the file even if it was not completely written. This was an issue in both the C and Perl code.
Rewrite the destructors to simply free resources (like file handles) rather than calling the close() method. This leaves the temp file in place for filesystems that use temp files.
Add unit tests to prevent regression.
Reported by blogh.
execRead() should be returning a size_t, not a void. Thankfully, this isn't actually used and therefore shouldn't be an issue, but we should fix it anyway.
Contributed by Stephen Frost.
This was not being caught because the integration tests for S3 were running remotely and going through the Perl code rather than the new C code.
Implement the exists method for the S3 driver and add tests to prevent a regression.
Reported by mibiio.
The check to verify that pg-path and data_directory are equal was not working because pg-path was getting overwritten with data_directory before validation took place.
Reported by James Chanco Jr.
This already worked in reverse, but this case is needed when a command that only uses protocol-timeout (e.g. info) calls a remote process where protocol-timeout and db-timeout can be set. If protocol-timeout was set to less than the default db-timeout then an error resulted.
Bug Fixes:
* Fix issue with multiple async status files causing a hard error. (Reported by Vidhya Gurumoorthi, Joe Ayers, Douglas J Hunley.)
Improvements:
* The info command is implemented entirely in C.
* Simplify info command text message when no stanzas are present by replacing the repository path with "the repository".
* Add _DARWIN_C_SOURCE flag to Makefile for MacOS builds. (Contributed by Douglas J Hunley.)
* Update address lookup in C TLS client to use modern methods. (Suggested by Bruno Friedmann.)
* Include Posix-compliant header for strcasecmp() and fd_set. (Suggested by ucando.)
This prevented packages from being passed to the documentation unless they were in the /backrest directory on the host.
Also make the local path /pgbackrest instead of the deprecated /backrest.
Reported by Heath Lord.
Rather than create _P/_PP variants for every type that needs to pass/return pointers, create FUNCTION_*_P/PP() macros that will properly pass or return any single/double pointer types.
There remain a few unresolved edge cases such as CHARPY but this handles the majority of types well.
This parameter was always useless but commit 7333b630 removed all references to it so remove the parameter at all call sites as well.
The original intention was probably to allow logging of TEST return values but that never happened.
Rather than create a CONST_ variant for every type that needs to be returned const, create a FUNCTION_LOG_RETURN_CONST() macro that will return any type as const.
The string object was reallocating memory with every concatenation which is not very efficient. This is especially true for JSON rendering which does a lot of concatenations.
Instead allocate a pool of extra memory on the first concatenation (50% of size) to be used for future concatenations and reallocate when needed.
Also add a 1GB size limit to ensure that there are no overflows.
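A sketch of the growth strategy (field and function names approximate the real code):

    // Called before appending sizeGrow bytes to the string
    static void
    strResize(String *this, size_t sizeGrow)
    {
        if (sizeGrow > this->extra)
        {
            // Error if the 1GB size limit would be exceeded (check elided)

            // Reallocate with 50% spare capacity for future concatenations
            this->extra = (this->size + sizeGrow) / 2;
            this->buffer = memResize(this->buffer, this->size + sizeGrow + this->extra + 1);
        }
        else
            this->extra -= sizeGrow;
    }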
Multiple status files were being created by asynchronous archiving if a high-level error occurred after one or more WAL segments had already been transferred successfully. Error files were being written for every file in the queue regardless of whether it had already succeeded. To fix this, add an option to skip writing error files when an ok file already exists.
There are other situations where both files might exist (various fsync and filesystem error scenarios) so it seems best to retry in the case that multiple status files are found rather than throwing a hard error (which then means that archiving is completely stuck). In the case of multiple status files, a warning will be logged to alert the user that something unusual is happening and the command will be retried.
Reported by fpa-postgres, Joe Ayers, Douglas J Hunley.
gcc has apparently merged this function into string.h but Posix specifies that it should be in strings.h. FreeBSD, at least, is sticking to the standard.
In the long run it might be better to implement our own strcasecmp() function but for now just add the header.
Suggested by ucando.
The implementation using gethostbyname() was only intended to be used during prototyping but was forgotten when the code was finalized.
Replace it with getaddrinfo() which is more modern and supports IPv6.
Suggested by Bruno Friedmann.
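Typical getaddrinfo() usage, for reference (the host and port are just examples):

    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>

    struct addrinfo hints = {0};
    hints.ai_family = AF_UNSPEC;                        // allow IPv4 or IPv6
    hints.ai_socktype = SOCK_STREAM;

    struct addrinfo *result = NULL;
    int error = getaddrinfo("s3.amazonaws.com", "443", &hints, &result);

    if (error != 0)
        fprintf(stderr, "lookup failed: %s\n", gai_strerror(error));
    else
    {
        // ... iterate over result with socket()/connect(), then ...
        freeaddrinfo(result);
    }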
Rename FUNCTION_DEBUG_* macros to FUNCTION_LOG_* to more accurately reflect what they do. Further rename FUNCTION_DEBUG_RESULT* macros to FUNCTION_LOG_RETURN* to make it clearer that they return from the function as well as logging. Leave FUNCTION_TEST_* macros as they are.
Consolidate the various ASSERT* macros into a single ASSERT macro that is always compiled out of production builds. It was difficult to figure out when an assert would be checked with all the different types in play. When ASSERTs are compiled in they will always be checked regardless of the log level -- tying these two concepts together was not a good idea.
The C info code has already been committed but this commit wires it into main.
Also remove the info Perl code and tests since they are no longer called.
This is a partial implementation of remote storage with just enough functionality to get the info command working. The client is written in C but the server is still in Perl, which limits progress until a C server is written.
This is a complete protocol client implementation in C.
Currently there is no C server implementation so the C client is talking to a Perl server. This won't work very long, though, as the protocol format, even though in JSON, has a lot of language-specific structure. While it would be possible to maintain compatibility between C and Perl it's probably not worth the effort in the long run.
Just as in Perl there are helper functions to make constructing protocol objects easier. Currently only repository remotes are supported.
Executes a child process and allows the calling process to communicate with it using read/write io.
This object is specially tailored to implement the protocol layer and may or may not be applicable to general purpose execution.
Parameters for the local/remote commands are based on parameters that are passed to the current command.
Generate parameters for the new command based on the intersection of parameters between the current command and the command to be executed.
General i/o objects for reading and writing file descriptors, in particular those that can block. In other words, these are not generally to be used with file descriptors for actual files, but rather pipes, sockets, etc.
Replace the repository path with just "the repository". The path is not important in this context and it is clearer to state where the stanzas are missing from.
The Perl code has a tendency to generate absolute paths even when they are not needed. This change helps the C and Perl storage work together via the protocol layer.
The C storage object strives to use rules whenever possible instead of generating absolute paths. This change helps the C and Perl storage work together via the protocol layer.
The prior behavior was to throw an exception but this was not very helpful when something unexpected happened. Better to at least emit the error message even if the error code is not very helpful.
There were some small differences in ordering and how the C version handled missing directories. It may be that the C version is more consistent, but for now it is more important to be compatible with the Perl version.
These differences were missed because the C info command was not wired into main.c so it was not being tested in regression. This commit does not fix the wiring issue because there will likely be a release soon and it is too big a change to put in at the last moment.
Casting to int caused large values to be slightly inaccurate so cast to uint64_t instead.
Also, use multiplication where possible since the compiler should precompute multiplied values.
SIGPIPE immediately terminates the process but we would rather catch the EPIPE error and gracefully shutdown.
Ignore SIGPIPE and throw the EPIPE error via normal error handling.
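Ignoring the signal is a single call at startup; writes to a closed pipe or socket then fail with EPIPE instead of terminating the process:

    #include <signal.h>

    // Ignore SIGPIPE so the EPIPE error can be handled like any other error
    signal(SIGPIPE, SIG_IGN);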
For some reason adding -D_POSIX_C_SOURCE=200112L caused MacOS builds to stop working. Combining both flags seems to work fine for all tested systems.
Contributed by Douglas J Hunley.
Including the C module after the headers required for testing meant that if headers were missing from the C module they were not caught while directly testing the C module.
The missing headers were caught in general testing, but it is frustrating to get an error in a module that has already passed while testing another module or running CI.
Move the C module include to the very top so missing headers cause immediate failures.
Bug Fixes:
* Remove request for S3 object info directly after putting it. (Reported by Matt Kunkel.)
* Correct archive-get-queue-max to be size type. (Reported by Ronan Dunklau.)
* Add error message when current user uid/gid does not map to a name. (Reported by Camilo Aguilar.)
* Error when --target-action=shutdown specified for PostgreSQL < 9.5.
Improvements:
* Set TCP keepalives on S3 connections. (Suggested by Ronan Dunklau.)
* Reorder info command text output so most recent backup is output last. (Contributed by Cynthia Shang. Suggested by Ryan Lambert.)
* Change file ownership only when required.
* Redact authentication header when throwing S3 errors. (Suggested by Brad Nicholson.)
Admonitions call out places where the user should take special care.
Support added for HTML, PDF, Markdown and help text renderers. XML files have been updated accordingly.
Contributed by Cynthia Shang.
A number of common characters are not allowed in LaTeX without being escaped.
Also convert some HTML-specific codes that are used in the documentation.
Contributed by Cynthia Shang.
Keepalives may help in situations where RST packets are being blocked by a firewall or otherwise do not arrive.
The C code uses select on all reads so it should never block, but add keepalives just in case.
Suggested by Ronan Dunklau.
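The Perl change sets the option through IO::Socket, but in C terms it is equivalent to a setsockopt() call on the connected socket descriptor (fd here is assumed to be that descriptor):

    #include <sys/socket.h>

    // Enable TCP keep-alive so dead connections are eventually detected even
    // when RST packets never arrive
    int on = 1;
    int result = setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    // result == -1 indicates failure (check errno)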
After a stanza-upgrade backups for the old cluster are displayed until they expire. Cluster info was output newest to oldest which meant after an upgrade the most recent backup would no longer be output last.
Update the text output ordering so the most recent backup is always output last.
Contributed by Cynthia Shang.
Suggested by Ryan Lambert.
The info command will only be executed in C if the repository is local, i.e. not located on a remote repository host. S3 is considered "local" in this case.
This is a direct migration from Perl to integrate as seamlessly with the remaining Perl code as possible. It should not be possible to determine if the C version is running unless debug-level logging is enabled.
Contributed by Cynthia Shang.
The infoBackup object is the counterpart to the infoArchive object which encapsulates the archive.info file.
Currently the object is read-only, i.e. it is not possible to create a new or modify an existing backup.info file.
There are a number of constants that will also be used in the infoManifest object so go ahead and create a module to contain them so they don't need to be moved later.
Contributed by Cynthia Shang.
This was caused by a new container version that was released around December 5th. The new version explicitly denies user logons by leaving /var/run/nologin in place after boot.
The solution is to enable the service that is responsible for removing this file on a successful boot.
The previous way worked but was a head-scratcher when reading the code. This cast hopefully makes it a bit more obvious what is going on.
Contributed by Cynthia Shang.
INFO is generally used as the prefix for info file constants so rename these accordingly.
Also follow newly-adopted coding standards for when #define is required for a static String constant.
Contributed by Cynthia Shang.
Some commands (e.g. info) do not take a stanza or the stanza is optional. In that case it is the job of the command to construct the repository path with a stanza as needed.
Update helper functions to omit the stanza from the constructed path when it is NULL.
Contributed by Cynthia Shang.
- Add detail to errors when info files are loaded with incorrect encryption settings.
- Throw FileMissingError rather than FileOpenError when both copies of the info file are missing.
- If one file is present (but errors) and the other is missing, then return the error for the file that was present.
Contributed by Cynthia Shang.
This code was generated during testing and it seemed a good idea to keep it. It is only a partial solution since the primary also needs additional configuration to be able to fail back and forth.
These were introduced in 33fa2ede and ran for a day or so before they started failing consistently on CI. Local builds work fine.
Disable them to free the pipeline for further commits while we determine the issue.
Previously chown() would be called even when no ownership changes were required.
In most cases changes are not required and it seems better to perform an extra stat() rather than an extra chown().
Also add unit tests for owner() since there weren't any.
The authentication header contains the access key (not the secret key) so don't include it in errors that can be seen at any log level.
Suggested by Brad Nicholson.
This got missed in 1f8931f7 when the test binary was renamed.
Also output call graph along with the flat report. The flat report is generally most useful but it doesn't hurt to have both.
By default the documentation builds pgBackRest from source, but the documentation is also a good way to smoke-test packages.
Allow a package file to be specified by passing --var=package=/path/to/package.ext. This works for Debian and CentOS 6 builds.
Keywords were extremely limited and prevented us from generating multi-version documentation and other improvements.
Replace keywords with an if statement that can evaluate a Perl expression with variable replacement.
Since keywords were used to generate cache keys, add a --key-var parameter to identify which variables should make up the key.
This somehow was not configured as a size option when it was added. It worked, but queue sizes could not be specified in shorthand, e.g. 128GB.
This is not a breaking change because currently configured integer values will be read as bytes.
Reported by Ronan Dunklau.
After a file is copied during backup the size is requested from the storage in case it differs from what was written so that repo-size can be reported accurately. This is useful for situations where compression is being done by the filesystem (e.g. ZFS) and what is stored can differ in size from what was written.
In S3 the reported size will always be exactly what was written so there is no need to check the size and doing so immediately can cause problems because the new file might not appear in list commands. This has not been observed on S3 (though it seems to be possible) but it has been reported on the Swift S3 gateway.
Add a driver capability to determine if size needs to be called after a file is written and if not then simply use the number of bytes written for repo-size.
Reported by Matt Kunkel.
This allows the documentation to be built more quickly and offline during development when --pre is specified on the command line.
Each host gets a pre-built container with all the execute elements marked pre. As long as the pre elements do not change the container will not need to be rebuilt.
The feature should not be used for CI builds as it may hide errors in the documentation.
The previous error message only showed the last error. In addition, some errors were missed (such as directory permission errors) that could prevent the copy from being checked.
Show both errors below a generic "unable to load" error. Details are now given explaining exactly why the primary and copy failed.
Previously if one file could not be loaded a warning would be output. This has been removed because it is not clear what the user should do in this case. Should they do a stanza-create --force? Maybe the best idea is to automatically repair the corrupt file, but on the other hand that might just spread corruption if pgBackRest makes the wrong choice.
The decryption filter was added in archiveGetFile() and archiveGetCheck() was modified to return the WAL decryption key stored in archive.info. The rest was plumbing.
The mock/archive/1 integration test added encryption to provide coverage for the new code paths while mock/archive/2 dropped encryption to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior.
If InOut filters were placed next to each other then the second filter would never get a NULL input signaling it to flush. This arrangement only worked if the second filter had some other indication that it should flush, such as a decompression filter where the flush is indicated in the input stream.
This is not a live issue because currently no InOut filters are chained together.
This allows CipherBlock to be used as a filter in an IoFilterGroup. The C-style functions used by Perl are now deprecated and should not be used for any new code.
Also add functions to convert between cipher names and CipherType.
Add boolean and one-dimensional list types to jsonToKv().
Add varToJson() and kvToJson() to convert Variants and KeyValues to JSON.
Contributed by Cynthia Shang.
The only change required was to remove the filter that prevented S3 storage from being used. The archive-get command did not require any modification which demonstrates that the storage interface is working as intended.
The mock/archive/3 integration test was modified to run S3 storage locally to provide coverage for the new code paths while mock/stanza/3 was modified to run S3 storage remotely to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior.
TlsClient introduced a non-blocking read which is required to read protocol messages that are linefeed-terminated rather than a known size. However, in many cases the expected number of bytes is known in advance so in that case it is more efficient to have tlsClientRead() block until all the bytes are read.
Add block parameter to all read functions and use it when a blocking read is required. For most read functions this is a noop, i.e. if the read function never blocks then it can ignore the parameter.
In passing, set the log level of storageNew*() functions to debug to expose more high-level I/O operations.
A robust HTTP client with pipelining support and automatic retries.
Using a single object to make multiple requests is more efficient because requests are pipelined whenever possible. Requests are automatically retried when the connection has been closed by the server. Any 5xx response is also retried.
Only the HTTPS protocol is currently supported.
A simple, secure TLS client intended to allow access to services that are exposed via HTTPS. We call it TLS instead of SSL because SSL methods are disabled so only TLS connections are allowed.
This object is intended to be used for multiple TLS connections against a service so tlsClientOpen() can be called each time a new connection is needed. By default, an open connection will be reused for pipelining so the user must be prepared to retry their transaction on a read/write error if the server closes the connection before it can be reused. If this behavior is not desirable then tlsClientClose() may be used to ensure that the next call to tlsClientOpen() will create a new TLS session.
Note that tlsClientRead() is non-blocking unless there are *zero* bytes to be read from the session in which case it will raise an error after the defined timeout. In any case the tlsClientRead()/tlsClientWrite()/tlsClientEof() functions should not generally be called directly. Instead use the read/write interfaces available from tlsClientIoRead()/tlsClientIoWrite().
Test certificates were generated dynamically but there are advantages to using static certificates. For example, it is possible to use the same certificate between container versions. Mostly, it is easier to document the certificates if they are not buried deep in the container code.
The new test certificates are initially intended to be used with the C unit tests but they will eventually be used for integration tests as well.
Two new certificates have been defined. See test/certificate/README.md for details.
The old dynamic certificates will be retained until they are replaced.
The embedded semicolon led to inconsistent semicolons when using the macro and is not our general convention.
Remove embedded semicolons from the macros and add semicolons in usage where they were not present.
Add XmlDocument, XmlNode, and XmlNodeList objects as a thin interface layer on libxml2.
This interface is not intended to be comprehensive. Only a few libxml2 capabilities are exposed but more can be added as needed.
S3 key options (repo1-s3-key/repo1-s3-key-secret) were not required which meant that users got an ugly assertion when they were missing rather than a tidy configuration error.
Only the local/remote commands need them to be optional. This is because local/remote commands get all their options from the command line but secrets cannot be passed on the command line. Instead, secrets are passed to the local/remote commands via the protocol for any operation that needs them.
The configuration system allows required to be set per command so use that to improve the error messages while not breaking the local/remote commands.
This allows a C unit test to access data in the code repository that might be useful for testing.
Add testRepoPathSet() to set the repository path.
In passing remove extra whitespace in the TEST_RESULT_VOID() macro.
Bug Fixes:
* Fix issue with archive-push-queue-max not being honored on connection error. (Reported by Lardière Sébastien.)
* Fix static WAL segment size used to determine if archive-push-queue-max has been exceeded.
* Fix error after log file open failure when processing should continue. (Reported by vthriller.)
Features:
* Automatically enable backup checksum delta when anomalies (e.g. timeline switch) are detected. (Contributed by Cynthia Shang.)
Improvements:
* Retry all S3 5xx errors rather than just 500 internal errors. (Suggested by Craig A. James.)
These interfaces previously used the memory context of the object they were associated with and did not have their own destructors.
There are times when it is useful to free the interface without also freeing the underlying object so give IoRead and IoWrite their own memory contexts and destructors.
In passing fix a comment typo in bufferRead.c.
By default the IoWrite object does not write until the output buffer is full but this is a problem for protocol messages that must be sent in order to get a response.
ioWriteFlush() is not called internally by IoWrite but can be used at any time to immediately write all bytes from the output buffer without closing the IoWrite object.
Documentation block syntax requires that at least one var be specified.
This limitation should be removed but for now add a comment to describe why a bogus var is defined.
The prior message stated that there had been a buffer overrun which is not true since the code prevents that.
In fact, this message means the parameter buffer filled while building the parameter list. Rather than display a partial list we output this message instead.
Also remove !!! which by convention we use as a marker for code that needs attention before it can be committed to master.
These macros provide a convenient way to output debug information in tests.
They are not intended to be left in test code when it is committed to master.
ioReadLine() calls ioRead(), which aggressively tries to fill the output buffer, but this doesn't play well with blocking reads.
Give ioReadLine() an option that tells it to read only what is available. That doesn't mean the function will never block but at least it won't do so by reading too far.
The report HTML generated by lcov is overly verbose and cumbersome to navigate. Since we maintain 100% coverage it's far more interesting to look at what is not covered than what is.
The new report presents all missing coverage on a single page and excludes code that is covered for brevity.
Add HTML tags for table elements.
The strExtra parameter allows adhoc tags to be added to an element for features that can't be implemented with CSS, e.g. colspan.
There are many places (and the number is growing) where a zero-terminated string constant must be transformed into a String object to be usable. This pattern wastes time and memory, especially since the created string is generally used in a read-only fashion.
Define macros to create constant String objects that are initialized at compile time rather than at run time.
The storageList() command accepts a regular expression as a filter. This works fine for local filesystems where it is relatively cheap to get a complete list of files and filter them in code. However, for remote filesystems like S3 it can be expensive to fetch a complete list of files only to discard the bulk of them locally.
S3 does not filter on regular expressions but it can accept a static prefix so this function extracts a prefix from a regular expression when possible.
Even a few characters can drastically reduce the amount of data that must be fetched remotely so the function does not try to be too clever. It requires a ^ anchor and stops scanning when the first special character is found.
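A sketch of the extraction logic (the function name and buffer handling are illustrative):

    #include <string.h>

    // Extract a static prefix from a regular expression, or return NULL when
    // no usable prefix exists. Requires a ^ anchor and stops at the first
    // special character.
    static const char *
    regExpPrefix(const char *expression, char *prefix, size_t prefixSize)
    {
        // Without an anchor any prefix could match anywhere, so give up
        if (expression[0] != '^')
            return NULL;

        size_t prefixIdx = 0;

        for (size_t expIdx = 1; expression[expIdx] != '\0'; expIdx++)
        {
            char next = expression[expIdx];

            // Stop at the first character with special meaning in a regexp
            if (strchr(".^$*+?()[]{}|\\", next) != NULL)
                break;

            // Stop if the output buffer is full
            if (prefixIdx + 1 >= prefixSize)
                break;

            prefix[prefixIdx++] = next;
        }

        prefix[prefixIdx] = '\0';

        // An empty prefix is no help to the caller
        return prefixIdx == 0 ? NULL : prefix;
    }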
Allow buffers to report a lower size than their allocated size. This means a larger buffer can be used to do the work of a smaller buffer without having to create a new buffer and concatenate.
This is useful for blocking I/O where the buffer may be too large for the amount of data that is available to read.
The Wait object accepted a double in the constructor for wait time but used TimeMSec internally. This was done for compatibility with the Perl code.
Instead, use TimeMSec in the Wait constructor and make changes as needed to calling code.
Note that Perl still uses a double for its Wait object so translation is needed in some places. There are no plans to update the Perl code as it will become obsolete.
If an object free() method was called manually when a callback was set then the callback would call free() again. This meant that each free() method had to protect against a subsequent call.
Instead, clear the callback (if present) before calling memContextFree(). This is faster (since there is no unnecessary callback) and removes the need for semaphores to protect against a double free().
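A sketch of the resulting pattern in an object free() method (the object type is hypothetical; memContextCallbackClear()-style functions are assumed):

    void
    objectFree(Object *this)
    {
        if (this != NULL)
        {
            // Clear the callback first so memContextFree() cannot call back
            // into this function
            memContextCallbackClear(this->memContext);
            memContextFree(this->memContext);
        }
    }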
Code generation saved files even when they had not changed, which often caused code generation cascades. So, don't save files unless they have changed.
Use rsync to determine which files have changed since the last test run. The manifest of changed files is saved and not removed until all code generation and builds have completed. If an error occurs the work will be redone on the next run.
The eventual goal is to do all the builds from the test/repo directory created by rsync but for now it is only used to track changes.
The contents were already preserved between tests in a single test.pl run but for a separate execution the entire project had to be built from scratch, which was getting slower as we added code.
Save the important build flags in a file so the new execution knows whether the build contents can be reused.
Mounting/unmounting tmpfs on /home/[user]/test takes time, forces at least 3GB of memory to be available for tests, and makes it harder to preserve data between tests.
Instead, move mounting of tmpfs to the Vagrantfile and add it to fstab so it survives reboots.
There are a number of cases where a checksum delta is more appropriate than the default time-based delta:
* Timeline has switched since the prior backup
* File timestamp is older than recorded in the prior backup
* File size changed but timestamp did not
* File timestamp is in the future compared to the start of the backup
* Online option has changed since the prior backup
A practical example is that checksum delta will be enabled after a failover to standby due to the timeline switch. In this case, timestamps can't be trusted and our recommendation has been to run a full backup, which can impact the retention schedule and requires manual intervention.
Now, a checksum delta will be performed if the backup type is incr/diff. This means more CPU will be used during the backup but the backup size will be smaller and the retention schedule will not be impacted.
Contributed by Cynthia Shang.
We were already retrying 500 errors but 503 (rate-limiting) errors were not being retried and would cause an instant failure which aborted the command.
There are only two 5xx errors currently implemented by S3 but instead of adding 503 simply retry all 5xx errors. This is consistent with the HTTP definition of this error class, "the server failed to fulfill an apparently valid request."
Suggested by Craig A. James.
This calculation was missed when the WAL segment size was made dynamic in preparation for PostgreSQL 11.
Fix the calculation by checking the actual WAL file sizes instead of using an estimate based on WAL segment size. This is more accurate because it takes into account .history and .backup files, which are smaller. Since the calculation is done in the async process the additional processing time should not adversely affect performance.
Remove the PG_WAL_SIZE constant and instead use local constants where the old value is still required. This is only the case for some tests and PostgreSQL 8.3 which does not provide a way to get the WAL segment size from pg_control.
If an error occurred while acquiring a lock on a remote server the error would be reported correctly, but the queue max detection code was not reached. The tests failed to detect this because they fixed the connection before queue max, allowing the code to be reached.
Move the queue max code before the lock so it will run even when remote connections are not working. This means that no attempt will be made to transfer WAL once queue max has been exceeded, but it makes it much more likely that the code will be reached without error.
Update tests to continue raising errors up to the point where queue max is exceeded.
Reported by Lardière Sébastien.
The C code was warning on failure and continuing but the Perl logging code was never updated with the same feature.
Rather than add the feature to Perl, just disable file logging if the log file cannot be opened. Log files are always opened by C first, so this will eliminate the error in Perl.
Reported by vthriller.
The existing tests were not adequate to ensure the history was being added in the correct order when some entries were loaded from a file and others added with infoPgAdd().
Contributed by Cynthia Shang.
The InfoPg object was partially modified in 960ad732 to place the current history item in position 0, but infoPgDataCurrent() didn't get updated correctly.
Remove this->indexCurrent and make the current position always equal 0. Use the new lstInsert() function when adding new history items via infoPgAdd(), but continue to use lstAdd() when loading from a file for efficiency.
This does not appear to be a live bug because infoPgDataCurrent() and infoPgAdd() are not yet used in any production code. The archive-get command is the only C code using InfoPG and it always looks at the entire list of items rather than just the current item.
Suggested by Cynthia Shang.
Bug Fixes:
* Fix missing URI encoding in S3 driver. (Reported by Dan Farrell.)
* Fix incorrect error message for duplicate options in configuration files. (Reported by Jesper St John.)
* Fix incorrectly reported error return in info logging. A return code of 1 from the archive-get command was being logged as an error message at info level even though the command otherwise worked correctly.
Features:
* Add checksum delta for incremental backups which uses checksums rather than timestamps to determine if files have changed. (Contributed by Cynthia Shang.)
* PostgreSQL 11 support, including configurable WAL segment size.
Improvements:
* Ignore all files in a linked tablespace directory except the subdirectory for the current version of PostgreSQL. Previously an error would be generated if other files were present and not owned by the PostgreSQL user.
* Improve info command to display the stanza cipher type. (Contributed by Cynthia Shang. Suggested by Douglas J Hunley.)
* Improve support for special characters in filenames.
* Allow delta option to be specified in the pgBackRest configuration file. (Contributed by Cynthia Shang.)
PostgreSQL 11 RC1 support was tested in 9ae3d8c46 when the u18 container was rebuilt. Nothing substantive changed after RC1 so pgBackRest is ready for PostgreSQL 11 GA.
The standard npm packages on Ubuntu 18.04 suddenly required libssl1.0 which broke the pgbackrest package builds. Installing nodejs from deb.nodesource.com seems to work fine with standard libssl.
This package is required by ScalityS3 which is used for local S3 testing.
When the filter interface internals were split out into a new header file the documentation was not moved as it should have been. Additionally some functions which should have been moved were left behind.
Move the documentation and functions to filter.internal.h and add more documentation. Filters are a tricky subject so the more documentation the better.
Also add documentation for the user-facing filter functions in filter.h.
Allow a single linefeed-terminated line to be read or written. This is useful for various protocol implementations, including HTTP and pgBackRest's protocol.
On read the maximum line size is limited to buffer-size to prevent runaway memory usage in case a linefeed is not found. This seems fine for HTTP but we may need to revisit this decision when implementing the pgBackRest protocol. Another option would be to increase the minimum buffer size (currently 16KB).
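A minimal sketch of the read-side rule, independent of the IoRead internals:

    #include <string.h>

    // Return the line length if a linefeed is found, -1 if more input is
    // needed, or -2 once the limit (buffer-size) is reached without one
    static long
    lineLength(const char *buffer, size_t used, size_t limit)
    {
        const char *linefeed = memchr(buffer, '\n', used);

        if (linefeed != NULL)
            return linefeed - buffer;           // line excludes the linefeed

        return used >= limit ? -2 : -1;         // -2 means error out, not grow
    }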
This test has been flapping since 9b9396c7. It seems to be some kind of timing issue since all integration tests pass and this unit passes on all other VMs. It only happens on Travis and is not reproducible in any development environment that we have tried.
For now, disable the test since the constant flapping is causing major delays in testing and quite a bit of time has been spent trying to identify the root cause. We are actively developing these tests and hope the issue will be identified during the course of normal development.
A number of improvements were made to the tests while searching for this issue. While none of them helped, it makes sense to keep the improvements.
Duplicating a non-multi-value option was not producing the correct error message when the option was a boolean.
The reason was that the option was being validated as a boolean before the multi-value check was being done. The validation code assumed it was operating on a string but was instead operating on a string list causing an assertion to fail.
Since it's not safe to do the multi-value check so late, move it up to the command-line and configuration file parse phases instead.
Reported by Jesper St John.
Previously this was done in two separate places by checking if an option was type hash or list.
Bad enough that it was in two places, but an upcoming bug fix will add another instance so make it a function.
There doesn't seem to be any need to implement this as a filter since current use cases (S3 authentication) work on small datasets.
So, use the single function method provided by OpenSSL for simplicity.
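A sketch of the single-call approach using OpenSSL's one-shot HMAC() function, shown here with SHA-256 as used by S3 request signing:

    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    // Compute an HMAC over a small, in-memory message in one call; no
    // filter/streaming interface required. result must hold 32 bytes.
    static unsigned int
    hmacSha256One(
        const void *key, int keySize, const unsigned char *message,
        size_t messageSize, unsigned char *result)
    {
        unsigned int resultSize = 0;

        HMAC(EVP_sha256(), key, keySize, message, messageSize, result, &resultSize);

        return resultSize;
    }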
This constructor creates a Buffer object directly from a zero-terminated string. The old way was to create a String object first, then convert that to a Buffer using bufNewStr().
Updated in all places that used the old pattern.
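For example, a sketch using the Buffer/String APIs of the time:

    #include "common/type/buffer.h"
    #include "common/type/string.h"

    static void
    example(void)
    {
        // Old pattern: build a String, then convert it to a Buffer
        Buffer *bufferOld = bufNewStr(strNew("sample"));

        // New pattern: build the Buffer directly from the zero-terminated string
        Buffer *bufferNew = bufNewZ("sample");

        (void)bufferOld;
        (void)bufferNew;
    }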
PostgreSQL 11 introduces configurable WAL segment sizes, from 1MB to 1GB.
There are two areas that needed to be updated to support this: building the archive-get queue and checking that WAL has been archived after a backup. Both operations require the WAL segment size to properly build a list.
Checking the archive after a backup is still implemented in Perl and has an active database connection, so just get the WAL segment size from the database.
The archive-get command does not have a connection to the database, so get the WAL segment size from pg_control instead. This requires a deeper inspection of pg_control than has been done in the past, so it seemed best to copy the relevant data structures from each version of PostgreSQL and build a generic interface layer to address them. While this approach is a bit verbose, it has the advantage of being relatively simple, and can easily be updated for new versions of PostgreSQL.
Since the integration tests generate pg_control files for testing, teach Perl how to generate files with the correct offsets for both 32-bit and 64-bit architectures.
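A hypothetical sketch of the interface layer (names and fields illustrative): each PostgreSQL version supplies a decoder from its raw pg_control layout to a common struct, so the rest of the code never touches version-specific offsets.

    #include <stdint.h>

    typedef struct PgControlCommon
    {
        uint64_t systemId;                      // database system identifier
        uint32_t controlVersion;                // pg_control layout version
        uint32_t catalogVersion;                // catalog version number
        uint32_t walSegmentSize;                // configurable from PostgreSQL 11
    } PgControlCommon;

    typedef struct PgInterface
    {
        unsigned int pgVersion;                 // e.g. 110000 for PostgreSQL 11
        PgControlCommon (*controlDecode)(const unsigned char *controlFile);
    } PgInterface;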
Unsecured, passwordless SSH can be a scary thing. If an attacker gains access to one system they can easily hop to other systems.
Add documentation on how to use the command parameter in authorized_keys to limit ssh to running a single command, pgbackrest. There is more that could be done for security but this likely addresses most needs.
Also change references to "trusted ssh" to "passwordless ssh" since this seems more correct.
Suggested by Stephen Frost, Magnus Hagander.
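An authorized_keys entry along these lines limits the key to running pgbackrest (the path and options must be adapted to the installation, and AAAA... stands in for the actual key material):

    command="/usr/bin/pgbackrest ${SSH_ORIGINAL_COMMAND#* }",no-agent-forwarding,no-X11-forwarding,no-port-forwarding ssh-rsa AAAA... pgbackrest@backup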
Use checksums rather than timestamps to determine if files have changed. This is useful in cases where the timestamps may not be trustworthy, e.g. when performing an incremental after failing over to a standby.
If checksum delta is enabled then checksums will be used for verification of resumed backups, even if they are full. Resumes have always used checksums to verify the files in the repository; enabling delta performs checksums on the database files as well.
Note that the user must manually enable this feature in cases where it would be useful, or just keep it enabled all the time. A future commit will address automatically enabling the feature in cases where it seems likely to be useful.
Contributed by Cynthia Shang.
This option was previously allowed on the command-line only for no particular reason that we could determine.
Being able to specify it in the config file seems like a good idea and won't change current usage.
Contributed by Cynthia Shang.
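For example, the option can now be set for all commands in pgbackrest.conf (a minimal illustration):

    [global]
    delta=y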
Apparently we never needed to run this function remotely.
It will be needed by the backup checksum delta feature, so implement it now.
Contributed by Cynthia Shang.
The test to make sure that some files (e.g. pg_control) do not get removed during the backup was lost during the storage refactor committed at de7fc37f.
This did not impact the integrity of the backups, but bring it back since it is a nice sanity check.
Contributed by Cynthia Shang.
As we add storage drivers it's important to keep the tests for each completely separate. Rather than have three tests for each driver, standardize on having a single test unit for each driver.
This is a workaround for inefficient handling of many setjmps in gcc >= 4.9. Setjmp is used in all error handling, but in the unit tests each test macro contains an error handling block so they add up pretty quickly for large unit tests.
Enabling -ftree-coalesce-vars in affected versions reduces build time and memory requirements by nearly an order of magnitude. Even so, compiles are much slower than gcc <= 4.8.
We submitted a bug for this at: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87316
Which was marked as a duplicate of: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63155
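The workaround itself is just a compiler flag, applied only where the regression exists (an illustrative Makefile fragment):

    # Workaround: reduces build time/memory by nearly an order of magnitude
    # on the affected gcc versions (>= 4.9)
    CFLAGS += -ftree-coalesce-vars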
For read-only repositories the Posix and CIFS drivers behave exactly the same. Since that's all we support in C right now it's valid to treat them as the same thing. An assertion has been added to remind us to add the CIFS driver before allowing the repository to be writable.
Mostly we want to make sure that the C code does not blow up when the repository type is CIFS.
Previously it was the responsibility of the individual tests to clean up after themselves. Now the test harness does the cleanup automatically.
This means that some paths/files need to be recreated with each run but that doesn't happen very often.
An attempt has been made to remove all redundant cleanup code, but it's hard to know if everything has been caught. Anything that was missed will not cause problems, but it will continue to chew up time in the tests.
Storing the expect log (created by common/harnessLog) in the regular test directory was not ideal. It showed up in tests and made it difficult to clear the test directory between each run.
Move the expect log to a purpose-built directory one level up so it does not interfere with regular testing.
These are separated the same way in the Perl code where the remote storage driver is located in the Protocol module. However, in the C code the intention is to implement the remote storage driver as a regular driver in the storage layer rather than making a special case out of it.
So, merge the storage helpers. This also has the benefit of making the code a bit simpler.
Also separate storageSpool() and storageSpoolWrite() to make it clearer which operations require write access and to maintain consistency with the other storage helper functions.
If the total bytes read from the expect log file was 0 then the last byte of whatever was in memory before harnessLogBuffer would be set to 0.
On 32-bit systems this manifested as the high-order byte of a pointer being cleared, and wackiness (in the form of segfaults) ensued.
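In essence (a simplified sketch, not the actual harness code):

    #include <stddef.h>

    // When totalBytes is 0 the broken version writes one byte before the
    // buffer, clobbering whatever happens to sit in memory just in front of it
    static void
    terminateBroken(char *harnessLogBuffer, size_t totalBytes)
    {
        harnessLogBuffer[totalBytes - 1] = '\0';    // totalBytes == 0 underflows to [-1]
    }

    // One plausible shape of the fix: always terminate within the buffer
    static void
    terminateFixed(char *harnessLogBuffer, size_t totalBytes)
    {
        harnessLogBuffer[totalBytes] = '\0';
    }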
Fixed parameter constructors made adding new interface functions a burden, so we switched to using structs to define interfaces in the storage module at c49eaec7.
While propagating this pattern to the IO interfaces it became obvious that the existing variable parameter function pattern (begun in the storage module) was more succinct and consistent with the existing code.
So, use variable parameter functions to define all interfaces. This assumes that the non-interface parameters will be fixed, which seems reasonable for low-level code.
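A sketch of the pattern with hypothetical names: a macro packs designated initializers into a params struct, so adding an interface function later does not disturb existing callers.

    #include <stdbool.h>
    #include <stddef.h>

    // Hypothetical interface definition for a read driver
    typedef struct IoReadInterface
    {
        void (*open)(void *driver);
        size_t (*read)(void *driver, void *buffer, size_t size);
        bool (*eof)(void *driver);              // optional, may be left NULL
    } IoReadInterface;

    typedef struct IoRead IoRead;

    IoRead *ioReadNew(void *driver, IoReadInterface interface);

    // Callers name only the functions their driver implements
    #define ioReadNewP(driver, ...) \
        ioReadNew(driver, (IoReadInterface){__VA_ARGS__})

    // e.g. IoRead *read = ioReadNewP(posixDriver, .open = posixOpen, .read = posixRead);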
C or Perl coverage tests can now be run on any VM provided a recent enough version of Devel::Cover or lcov is available.
For now, leave u18 as the only VM to run coverage tests due to some issues with older versions of lcov.
The external storage interfaces (Storage, StorageFileRead, etc.) have been stable for a while, but internally they were calling the posix driver functions directly.
Create driver interfaces for storage, fileRead, and fileWrite and remove all references to the posix driver outside storage/driver/posix (with the exception of a direct call to pathRemove() in Perl LibC).
Posix is still the only available driver so more adjustment may be needed, but this should represent the bulk of the changes.