This is a modest start but it addresses the specific issue that was caused by the bug fixed in 45ec694a. This validation will produce an immediate error rather than erroring out partway through the restore.
More validations are planned but this is the most important one and seems safest for this release.
If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.
The issue was that removing files from the manifest invalidated the pointers stored in the processing queues. When a file was removed, all the pointers shifted to the next file in the list, causing a file to be unprocessed. Since the unprocessed file was still in the manifest it would be saved with no checksum, causing a failure on restore.
When process-max was > 1 then the bug would often not express since the file had already been pulled from the queue and updates to the manifest are done by name rather than by pointer.
pkg-config is a generic way to get build options rather than relying on a package-specific utility.
XML2_CONFIG can be used to override this utility for systems that do not ship pkg-config.
Previously memNew() used memset() to initialize all struct members to 0, NULL, false, etc. While this appears to work in practice, it is a violation of the C specification. For instance, NULL == 0 must evaluate to true, but neither NULL nor 0 is required to be represented with all zero bits.
Instead use designated initializers to initialize structs. These guarantee that struct members will be properly initialized even if they are not specified in the initializer. Note that due to a quirk in the C99 specification at least one member must be explicitly initialized even if it needs to be the default value.
Since pre-zeroed memory is no longer required, adjust memAllocInternal()/memReallocInternal() to return raw memory and update dependent functions accordingly. All instances of memset() have been removed except in debug/test code where needed.
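A minimal sketch of the new pattern, with malloc() standing in for memNew() and a hypothetical Example struct:

```c
#include <stdbool.h>
#include <stdlib.h>

typedef struct Example
{
    unsigned int count;
    const char *name;
    bool done;
} Example;

int main(void)
{
    // Stand-in for memNew(), which now returns raw (uninitialized) memory
    Example *example = malloc(sizeof(Example));

    // The designated initializer guarantees that unspecified members (name,
    // done) are initialized to NULL/false with correct representations,
    // which zeroing all bits via memset() does not
    *example = (Example){.count = 0};

    free(example);
    return 0;
}
```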
Add memNewPtrArray() to allocate an array of pointers and automatically set all pointers to NULL.
Rename memGrowRaw() to the more logical memResize().
The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed in base 10 instead of base 16, which led to errors when the timeline was ≥ 0xA.
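A minimal illustration of the failure mode (not the actual code): parsing a hex timeline in base 10 stops at the first non-decimal digit.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Timeline 10 as it appears in a WAL segment name (hex)
    const char *timeline = "0000000A";

    unsigned long wrong = strtoul(timeline, NULL, 10);  // stops at 'A' -> 0
    unsigned long right = strtoul(timeline, NULL, 16);  // -> 10

    printf("base 10: %lu, base 16: %lu\n", wrong, right);
    return 0;
}
```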
This macro block encapsulates the common pattern of switching to the prior (formerly called old) mem context to return results from a function.
Also rename MEM_CONTEXT_OLD() to memContextPrior(). This violates our convention of macros being in all caps but memContextPrior() will become a function very soon so this will reduce churn.
A few places were using just memContextNew(), probably because they did not immediately need to create anything in the new context, but it's better if we use the same pattern everywhere, even if it results in a few extra mem context switches.
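A sketch of the pattern the macro block encapsulates, assuming the block is named MEM_CONTEXT_PRIOR_BEGIN()/MEM_CONTEXT_PRIOR_END() and using a hypothetical function:

```c
String *
exampleDup(const String *value)
{
    String *result = NULL;

    MEM_CONTEXT_TEMP_BEGIN()
    {
        // ... intermediate allocations land in the temp context ...

        // Switch to the prior context so the result outlives the temp context
        MEM_CONTEXT_PRIOR_BEGIN()
        {
            result = strDup(value);
        }
        MEM_CONTEXT_PRIOR_END();
    }
    MEM_CONTEXT_TEMP_END();

    return result;
}
```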
Bug Fixes:
* Fix options being ignored by asynchronous commands. The asynchronous archive-get/archive-push processes were not loading options configured in command configuration sections, e.g. [global:archive-get]. (Reviewed by Cynthia Shang. Reported by Urs Kramer.)
* Fix handling of \ in filenames. \ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading. Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Features:
* pgBackRest is now pure C.
* Add pg-user option. Specifies the database user name when connecting to PostgreSQL. If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior. (Contributed by Mike Palmiotto.)
* Allow path-style URIs in S3 driver.
Improvements:
* The backup command is implemented entirely in C. (Reviewed by Cynthia Shang.)
The local, remote, archive-get-async, and archive-push-async commands were used to run functionality that was not directly available to the user. Unfortunately that meant they would not pick up options from the command that the user expected, e.g. backup, archive-get, etc.
Remove the internal commands and add roles which allow pgBackRest to determine what functionality is required without implementing special commands. This way the options are loaded from the expected command section.
Since remote is no longer a specific command with its own options, more manipulation is required when calling remote. This might be something we can improve in the config system but it may be worth leaving as is because it is a one-off, for now at least.
Although path-style URIs have been deprecated by AWS, they may still be used with products like Minio because no additional DNS configuration is required.
Path-style URIs must be explicitly enabled since it is not clear how they can be auto-detected reliably. More importantly, faulty detection could cause regressions in current installations.
Time is supported in all drivers with the update to S3 at 61538f93, so it is now possible to add time to the ls command and have it work on all repo types.
Parameter lists that are passed directly to exec*() do not need quoting when spaces are present. Worse, the quotes will not be stripped and the option value will be garbled.
Unfortunately this still does not fix all issues with quoting since we don't know how it might need to be escaped to work with SSH command configuration. The answer seems to be to pass the options in the protocol layer but that's beyond the scope of this commit.
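A minimal demonstration of why quoting is unnecessary (and harmful) in exec*() parameter lists -- each array element is passed verbatim with no shell parsing:

```c
#include <unistd.h>

int main(void)
{
    // One argument per array element -- spaces need no quotes; literal quote
    // characters would be passed through and garble the value
    char *const argv[] = {"echo", "a value with spaces", NULL};
    execvp("echo", argv);
    return 1;                                   // only reached if exec fails
}
```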
This option was overloaded on the general type option but it makes sense to split this out since the meaning is pretty different.
Rename the values to conform to current standards, i.e. pg and repo, now that the Perl code won't care anymore.
Previously dates were not being filled by these functions which was fine since dates were not used.
We plan to use dates for the ls command plus it makes sense for the driver to be complete since it will be used as an example.
These are similar to what mktime() and strptime() do but they ignore the local system timezone which saves having to munge the TZ env variable to do time conversions.
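A sketch of the idea (not the actual implementation): compute epoch seconds directly from the date parts so the local timezone never enters into it.

```c
#include <stdint.h>
#include <stdio.h>

// Convert a UTC date/time to epoch seconds without consulting the local
// timezone, using the standard civil-date algorithm for days since 1970-01-01
static int64_t
epochFromParts(int year, int month, int day, int hour, int minute, int second)
{
    int64_t y = year - (month <= 2);
    int64_t era = (y >= 0 ? y : y - 399) / 400;
    int64_t yearOfEra = y - era * 400;
    int64_t dayOfYear = (153 * (month + (month > 2 ? -3 : 9)) + 2) / 5 + day - 1;
    int64_t dayOfEra = yearOfEra * 365 + yearOfEra / 4 - yearOfEra / 100 + dayOfYear;
    int64_t days = era * 146097 + dayOfEra - 719468;

    return days * 86400 + hour * 3600 + minute * 60 + second;
}

int main(void)
{
    // 2020-01-01 00:00:00 UTC = 1577836800
    printf("%lld\n", (long long)epochFromParts(2020, 1, 1, 0, 0, 0));
    return 0;
}
```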
This macro was created before the String object existed so subsequent usage with String always included a lot of strPtr() wrapping.
TEST_RESULT_STR_Z() had already been introduced but a wholesale replacement of TEST_RESULT_STR() was not done since the priority was on the C migration.
Update all calls to (old) TEST_RESULT_STR() with one of the following variants: (new) TEST_RESULT_STR(), TEST_RESULT_STR_Z(), TEST_RESULT_Z(), TEST_RESULT_Z_STR().
Specifies the database user name when connecting to PostgreSQL.
If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior.
Most of these tests are just checking that errors are thrown when required. These are well covered in various unit tests.
The "cannot resume" tests are also well covered in the backup unit tests.
Finally, config warnings are well covered in the config unit tests.
There is more to be done here, but this accounts for the low-hanging fruit.
Set log-level-file=off when more than one test will run. In this case it is impossible to see the logs anyway since they will be automatically cleaned up after the test. This improves performance pretty dramatically since trace-level logging is expensive. If a single integration test is run then log-level-file is trace by default but can be changed with the --log-level-test-file option.
Reduce buffer-size to 64k to save memory during testing and allow more processes to run in parallel.
Update log replacement rules so that these options can change without affecting expect logs.
The co6 tests were occasionally running out of space so bump up the size of the ramdisk a bit to hopefully prevent this.
A longer term solution would be to disable the trace-level file logs when running on Travis CI since they seem to be using most of the space.
PostgreSQL >= 9.6 uses non-exclusive backup which has implicit stop-auto since the backup will stop when the connection is terminated.
The warning was made more verbose in 1f2ce45e but this now seems like a bad idea since there are likely users with mixed version environments where stop-auto is enabled globally. There's no reason to fill their logs with warnings over a harmless option. If anything we should warn when stop-auto is explicitly set to false but this doesn't seem very important either.
Revert to the prior behavior, which is to warn and reset when stop-auto is enabled on PostgreSQL < 9.3.
\ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading.
Use jsonFromStr() to properly quote and escape \.
Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.
Remove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.
Remove "c" option that allowed the remote to tell if it was being called from C or Perl.
Code to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.
Update build and installation instructions in the user guide.
Remove all Perl unit tests.
Remove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.
Rename perlReq to binReq in define.yaml to indicate that the binary is required for a test. This had been the actual meaning for quite some time but the key was never renamed.
For the most part this is a direct migration of the Perl code into C except as noted below.
A backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.
The logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. This has never been an issue in the field that we know of, but we found it in testing.
For online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.
Archive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.
Resume will now work even if hardlink settings have been changed.
Reviewed by Cynthia Shang.
/ takes precedence over & but the appropriate parens were not provided.
By some bad luck the tests worked either way, so add a new test that only works the correct way to prevent a regression.
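An illustration with made-up values (the expression from the actual fix is not shown here):

```c
#include <stdio.h>

int main(void)
{
    unsigned int a = 16, b = 6, mask = 3;

    // / binds tighter than &, so the unparenthesized form is (a / b) & mask
    unsigned int noParens = a / b & mask;       // (16 / 6) & 3 = 2
    unsigned int parens = a / (b & mask);       // 16 / (6 & 3) = 8

    printf("%u %u\n", noParens, parens);
    return 0;
}
```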
Bug Fixes:
* Fix archive-push/archive-get when PGDATA is symlinked. These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call. (Reported by Stephen Frost, Milosz Suchy.)
* Fix reference list when backup.info is reconstructed in expire command. Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual. This unlikely combination of events means this is probably not a problem in the field.
* Fix segfault on unexpected EOF in gzip decompression. (Reported by Stephen Frost.)
The TZ environment variable was not reliably pushed down to the test processes.
Instead pass TZ via a command line parameter and set explicitly in the test process.
Commit 7168e074 tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked.
If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call.
If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.
Change the existing assertion to an error to catch this condition in production gracefully.
82df7e6f and 9856fef5 updated tests that used test points in preparation for the feature not being available in the C code.
Since test points are no longer used, remove the infrastructure.
Also remove one stray --test option in mock/all that was essentially a noop but no longer works now that the option has been removed.
Pq script errors are now printed in test output in case they are being masked by a later error.
Once a script error occurs, the same error will be thrown forever rather than throwing a new error on the next item in the script.
HRNPQ_MACRO_CLOSE() is not required in scripts unless harnessPqScriptStrictSet(true) is called. Most higher-level tests should not need to run in strict mode.
The command/check test seems to require strict mode but there's no apparent reason why it should. This would be a good thing to look into at some point.
Some log output (e.g. time) is hard to test because the values can change between tests.
Add expressions to replace substrings in the log with predictable values to simplify testing.
This is similar to the log replacement facility available for Perl expect log testing.
A recopy would occur if the size or checksum was invalid but on error the backup would terminate.
Instead, recopy the resumed file on any error. If the error is systemic (e.g. network failure) then it should show up again during the recopy.
These were not getting updated to match the directory name when the manifests were copied.
The Perl code didn't care but the C code expects labels to be set correctly.
For now this is only used in testing but there are places where it could be useful in the core code.
Even if that turns out not to be true, it doesn't seem worth implementing a new version in testing just to capture a few values that we already have.
This is to maintain compatibility with the older Perl code that returned the lowest sorted order item in a tie.
For other datatypes the C code returns the same value, often enough at least to not cause churn in the expect tests.
Adding a manifest to backup.info was migrated to C in 4e4d1f41 but deduplication of the references was missed, leading to a reference for every file being added to backup.info.
Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual.
This unlikely combination of events means this is probably not a problem in the field.
Archive check does not run when in offline backup mode but the option was set to true in the manifest. It's harmless since these options are informational only but it could cause confusion when debugging.
Test points are not supported by the new C code so these will be replaced with unit tests.
The fact that the tests still pass even when the changes aren't made mid-backup (except application_name) shows how weak they were in the first place.
Even so, this does represent a regression in (soon to be removed) Perl coverage.
Test points will not be available in the C code so update these tests as best as possible without using them.
This represents a loss of coverage for the Perl code (soon to be removed) which will be made up in the C code with unit tests.
These tests require test points which are not being implemented in the C code.
This functionality is fully tested in the command/control unit tests so integration tests are no longer required.
This expression determines which files contain page checksums but it was also including the directory above the relation directories. In a real PostgreSQL installation this is not a problem because these directories don't contain any files.
However, our tests place a file in `base` which the Perl code thought should have page checksums while the new C code says no.
Update the expression to document the change and avoid churn in the expect logs later.
Installing lcov 1.14 everywhere turned out to be a problem just as using 1.13 on Ubuntu 19.04 was.
Since we primarily use Ubuntu 18.04 for coverage testing and reporting, we definitely want to make sure that works. So, revert to using the default packaged lcov except when specified otherwise in VmTest.pm.
PostgreSQL minor version releases are also included since all containers have been rebuilt.
The issue "fixed" in f01aa586 was caused by treating all strings as format strings while logging, which was fixed in 0c05df45.
Revert because there no longer seems a reason for the extra logic, and it was only partially applied, i.e. not to env vars, command-line options, or config options.
Using the same macros for formatted and unformatted logging had several disadvantages.
First, the compiler was unable to verify the format string against the parameters.
Second, legitimate % characters in messages were being interpreted as format characters with garbage output ensuing.
Add _FMT() variants and update all call sites to use the correct variant.
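A simplified sketch of the split (the real macros differ; this just shows why separate variants help):

```c
#include <stdio.h>

// Unformatted variant: the message is emitted verbatim, so a literal % is safe
#define LOG_INFO(message)                                                      \
    fputs(message "\n", stdout)

// Formatted variant: expanding to printf() lets the compiler verify the
// format string against the parameters
#define LOG_INFO_FMT(format, ...)                                              \
    printf(format "\n", __VA_ARGS__)

int main(void)
{
    LOG_INFO("backup 100% complete");           // % printed as-is, no garbage
    LOG_INFO_FMT("copied %u files", 42u);       // checked at compile time
    return 0;
}
```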
This character causes problems in C and in the shell if we try to output it in an error message.
Forbid it completely and spell it out in error messages to avoid strange effects.
There is likely a better way to deal with the issue but this will do for now.
The protocol timeout tests have been superseded by unit tests.
The TEST_BACKUP_RESUME test point was incorrectly included in a number of tests, probably a copy-pasto. It didn't hurt anything but it did add 200ms to each test where it appeared.
Catalog and control version tests were redundant. The database version and system id tests covered the important code paths and the C code gets these values from a lookup table.
Finally, fix an incomplete update to the backup.info file while munging for tests.
It is occasionally useful to get information about a file outside of the base storage path. storageLocal() can be used in some cases but when the storage is remote it doesn't seem worth creating a separate storage object for adhoc info requests.
storageInfo() is a read-only operation so this seems pretty safe. The noPathEnforce parameter will make auditing exceptions easy.
The ability to disable enforcement (i.e., the requested absolute path is within the storage path) globally will be removed after the Perl migration.
The feature will still be needed occasionally so allow it in an adhoc fashion.
A . in a link will always lead to an error since the destination will be inside PGDATA. However, it is accepted symlink syntax so it's better to resolve it and get the correct error message.
Also, we may have other uses for this function in the future.
Adding a dummy column which is always set by the P() macro allows a single macro to be used for parameters or no parameters without violating C's prohibition on the {} initializer.
-Wmissing-field-initializers remains disabled because it still gives wildly different results between versions of gcc.
It wasn't practical for the main process to be ignorant of the remote path, and in any case knowing the path makes debugging easier.
Pull the remote path when connecting and pass the result of local storagePath() to the remote when making calls.
Pushing output through a JSON blob is not practical if the output is extremely large, e.g. a backup manifest with 100K+ files.
Add read/write routines so that output can be returned in chunks but errors will still be detected.
Previously, options were being filtered based on what was currently valid. For chained commands (e.g. backup then expire) some options may be valid for the first command but not the second.
Filter based on the command definition rather than what is currently valid to avoid logging options that are not valid for subsequent commands. This reduces the number of options logged and will hopefully help avoid confusion and expect log churn.
Bug Fixes:
* Fix remote timeout in delta restore. When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout. (Reported by James Sewell, Jens Wilke.)
* Fix handling of repeated HTTP headers. When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior. (Reported by donicrosby.)
Improvements:
* JSON output from the info command is no longer pretty-printed. Monitoring systems can more easily ingest the JSON without linefeeds. External tools such as jq can be used to pretty-print if desired. (Contributed by Cynthia Shang.)
* The check command is implemented entirely in C. (Contributed by Cynthia Shang.)
Documentation Improvements:
* Document how to contribute to pgBackRest. (Contributed by Cynthia Shang.)
* Document maximum version for auto-stop option. (Contributed by Brad Nicholson.)
Test Suite Improvements:
* Fix container test path being used when --vm=none. (Suggested by Stephen Frost.)
* Fix mismatched timezone in expect test. (Suggested by Stephen Frost.)
* Don't autogenerate embedded libc code by default. (Suggested by Stephen Frost.)
When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior.
Reported by donicrosby.
We had some problems with newer versions so had held off on updating. Those problems appear to have been resolved.
In addition, the --compat flag is no longer required. Prior versions of MinIO required all parts of a multi-part upload (except the last) to be of equal size. The --compat flag was introduced to restore the default S3 behavior. Now --compat is only required when ETag is being used for MD5 verification, which we don't do.
Previously we were using int64_t to debug time_t but this may not be right depending on how the compiler represents time_t, e.g. it could be a float.
Since a mismatch would have caused a compiler error we are not worried that this has actually happened, and anyway the worst case is that the debug log would be wonky.
The primary benefit, aside from correctness, is that it makes choosing a parameter debug type for time_t obvious.
Previously the mock integration tests would be skipped for VMs other than the standard four used in CI. Now VMs outside the standard four will run the same tests as VM4 (currently U18).
Using pg1-path, as we were doing previously, could lead to WAL being copied to/from unexpected places. PostgreSQL sets the current working directory to PGDATA so we can use that to resolve relative paths.
This makes configuring tests easier.
Also add a parameter for tests that require sudo. This should be retired at some point but some tests still require it.
This will likely improve performance, but it also makes the filesystem consistent between platforms.
A number of tests were failing on shiftfs, which was the default for arm64 on Travis.
1.13 is not compatible with gcc 8 which is what ships with newer distributions. Build from source to get a more recent version.
1.13 is not compatible with gcc 9 so we'll need to address that at a later date.
This user was created before we tested in containers to ensure isolation between the pg and repo hosts which were then just directories. The downside is that this resulted in a lot of sudos to set the pgbackrest user and to remove files which did not belong to the main test user.
Containers provide isolation without needing separate users so we can now safely remove the pgbackrest user. This allows us to remove most sudos, except where they are explicitly needed in tests.
While we're at it, remove the code that installed the Perl C library (which also required sudo) and simply add the build path to @INC instead.
This test was not creating recovery.signal when testing with --type=preserve. The preserve recovery type only keeps existing files and does not create any.
RC1 was just ignoring recovery.signal and going right into recovery. Weirdly, 12.0 used restore_command to do crash recovery which made the problem harder to diagnose, but this has now been fixed in PostgreSQL and should be released in 12.1.
This is only needed when new code is added to the Perl C library, which is becoming rare as the migration progresses.
Also, the code will vary slightly based on the Perl version used for generation so for normal users it is just noise.
Suggested by Stephen Frost.
A number of tests have been updated and Fedora 30 has been added to the test suite so the unit tests can run on gcc 9.
Stop running unit tests on co6/7 since we appear to have ample unit test coverage.
This tool was only being used in a few places but was a pretty large dependency.
Rework the forceStorageMove() code using our storage layer and replace one aws cli cp with a storage put.
Also, remove the Dockerfile that was once used to build the Scality S3 test container.
Now that our tests are more diversified it makes sense to load only the packages that are needed for each test.
Move the package loads from .travis.yaml to test/travis.pl where we have more control over what is loaded.
Note that building the manifest on each host has been temporarily removed.
This feature will likely be brought back as a non-default option (after the manifest code has been fully migrated to C) since it can be fairly expensive.
Check the backup.info file against the backup path. Add any backups that are missing and remove any backups that no longer exist.
It's important to run this before backup or expire to be sure we are using the most up-to-date list of backups.
Three major changes were required to get this working:
1) Provide the path to pgbackrest in the build directory when running outside a container. Tests in a container will continue to install and run against /usr/bin/pgbackrest.
2) Set a per-test lock path so tests don't conflict on the default /tmp/pgbackrest path. Also set a per-test log-path while we are at it.
3) Use localhost instead of a custom host for TLS test connections. Tests in containers will continue to update /etc/hosts and use the custom host.
Add infrastructure and update harnessCfgLoad*() to get the correct exe and paths loaded for testing.
Since new tests are required to verify that running outside a container works, also rework the tests in Travis CI to provide coverage within a reasonable amount of time. Mainly, break up the doc tests by VM and run an abbreviated unit test suite on co6 and co7.
Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.
A comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.
recovery.signal and standby.signal are automatically created based on the recovery settings.
The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.
This information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release.
Scaling allows the starting values to be increased from the command-line without code changes.
Also suppress valgrind and assertions when running performance testing. Optimization is left at -O0 because we should not be depending on compiler optimizations to make our code performant, and it makes profiling more informative.
bsearch() is far more efficient than an iterative approach except in the most trivial cases.
For now insert will reset the sort order to none and the list will need to be resorted before bsearch() can be used. This is necessary because item pointers are not stable after a sort, i.e. they can move around. Until lists are stable it's not a good idea to surprise the caller by mixing up their pointers on insert.
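A standard-library sketch of the approach (the List object uses its own comparator type, but the shape is the same):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// One comparator serves both qsort() and bsearch()
static int
comparatorStr(const void *item1, const void *item2)
{
    return strcmp(*(const char *const *)item1, *(const char *const *)item2);
}

int main(void)
{
    const char *list[] = {"pg_tblspc", "backup.manifest", "pg_data"};
    size_t size = sizeof(list) / sizeof(list[0]);

    // The list must be sorted before bsearch() -- an insert invalidates this
    qsort(list, size, sizeof(list[0]), comparatorStr);

    const char *key = "pg_data";
    const char **found = bsearch(&key, list, size, sizeof(list[0]), comparatorStr);

    printf("%s\n", found != NULL ? *found : "not found");
    return 0;
}
```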
PostgreSQL 12 will shutdown in these cases which seems to be the correct action (according to the documentation) when hot_standby = off, but older versions are promoting instead. Set target_action explicitly so all versions will behave the same way.
This does beg the question of whether the PostgreSQL 12 behavior is wrong (though it matches the docs) or the previous versions are.
Separate the generation of recovery values and formatting them into recovery.conf format. This is generally a good idea, but also makes the code ready to deal with a different recovery file in PostgreSQL 12.
Also move the recovery file logic out of cmdRestore() into restoreRecoveryWrite().
This restore type automatically adds standby_mode=on to recovery.conf.
This could be accomplished previously by setting --recovery-option=standby_mode=on but PostgreSQL 12 requires standby mode to be enabled by a special file named standby.signal.
The new restore type allows us to maintain a common interface between PostgreSQL versions.
For the most part this is a direct migration of the Perl code into C.
There is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.
The C code works like this:
If a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. In that case the file ownership will need to be updated by a privileged user before the restore can be retried.
If a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name.
Reviewed by Cynthia Shang.
This macro displays a title for each test. A test frequently has multiple parts and it was hard to tell which subparts went together. We used ad hoc indentation to do this.
Anything that is not a title is automatically indented so manual indenting is no longer needed. This should make the tests and the test output easier to read.
These macros encapsulate the functionality provided by direct calls to harnessLogResult() and system(). They both have _FMT() variants.
The primary advantage is that {[path]}, {[user]}, and {[group]} will be replaced with the test path, user, and group respectively. This saves a lot of strNewFmt() calls and makes the tests less noisy.
The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes, timestamps, etc. A list of databases is also included for selective restore.
The purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that nothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.
For now, migrate enough functionality to implement the restore command.
Reviewed by Cynthia Shang.
cfgExecParam() was originally written to provide options for remote processes. Remote processes do not have access to the local config so it was necessary to pass every non-default option.
Local processes on the other hand, e.g. archive-get, archive-get-async, archive-push-async, and local, do have access to the local config and therefore don't need every parameter to be passed on the command-line. The previous way was not wrong, but it was overly verbose and did not align with the way Perl had worked.
Update cfgExecParam() to accept a local option which excludes options from the command line which can be read from local configs.
strPathAbsolute() generates an absolute path from an absolute base path and an absolute/relative path.
strLstRemoveIdx() is a support function based on lstRemoveIdx().
In general we don't care about path and link times since they are easily recreated when restoring.
So, outside of storageInfo() we don't need to bother testing them.
Loading jobs in advance uses a lot of memory in the case that there are millions of jobs to be performed. We haven't seen this yet, but with backup and restore on the horizon it will become the norm.
Instead, use a callback so that jobs are only created as they are needed and can be freed as soon as they are completed.
These features finally make the ls command practical.
Currently the JSON contains only name, type, and size. We may add more fields in the future, but these seem like the minimum needed to be useful.
This warning gives very unpredictable results between compiler versions and seems unrealistic since most of our structs are zeroed for initialization.
This warning has been disabled in the Makefile for a long time.
Broken vendor packages have been causing builds to break due to an error on apt-get update.
Ignore errors and proceed directly to apt-get install. It's possible that we'll try to reference an expired package version and get an error anyway, but that seems better than a guaranteed hard error.
Push the responsibility for sort and find down to the List object by introducing a general comparator function that can be used for both sorting and finding.
Update insert and add functions to return the item added rather than the list. This is more useful in the core code, though numerous updates to the tests were required.
Update StorageWritePosix to use the new functions.
A side effect is that storageWritePosixOpen() will no longer error when the user/group name does not exist. It will simply retain the original user/group, i.e. the user that executed the restore.
In general this is a feature since completing a restore is more important than setting permissions exactly from the source host. However, some notification of this omission to the user would be beneficial.
Travis will time out after 10 minutes with no output. Emit a warning every 5 minutes to keep Travis alive and increase the total timeout to 20 minutes.
Documentation builds have been timing out a lot recently so hopefully this will help.
The control and catalog versions were stored in a variety of places in the optimistic hope that they would be useful. In fact they never were.
We can't remove them from the backup.info and backup.manifest files due to backwards compatibility concerns, but we can at least avoid loading and storing them in C structures.
Add functions to the PostgreSQL interface which will return the control and catalog versions for any supported version of PostgreSQL to allow backwards compatibility for backup.info and backup.manifest. These functions will be useful in other ways, e.g. generating the tablespace identifier in PostgreSQL >= 9.0.
Info files required three copies in memory to be loaded (the original string, an ini representation, and the final info object). Not only was this memory-inefficient, but the Ini object does sequential scans when searching for keys, making large files very slow to load.
This has not been an issue since archive.info and backup.info are very small, but it becomes a big deal when loading manifests with hundreds of thousands of files.
Instead of holding copies of the data in memory, use a callback to deliver the ini data directly to the object when loading. Use a similar method for save to avoid having an intermediate copy. Save is a bit complex because sections/keys must be written in alpha order or older versions of pgBackRest will not calculate the correct checksum.
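A self-contained sketch of callback-driven parsing with hypothetical names -- the point is that each key/value is delivered as it is read instead of building intermediate copies:

```c
#include <stdio.h>
#include <string.h>

// Callback receives each key/value as it is parsed
typedef void (*IniCallback)(void *data, const char *section, const char *key, const char *value);

static void
iniParse(char *content, IniCallback callback, void *data)
{
    char section[64] = "";

    for (char *line = strtok(content, "\n"); line != NULL; line = strtok(NULL, "\n"))
    {
        if (line[0] == '[')
        {
            line[strlen(line) - 1] = '\0';          // strip trailing ]
            snprintf(section, sizeof(section), "%s", line + 1);
        }
        else
        {
            char *equal = strchr(line, '=');

            if (equal != NULL)
            {
                *equal = '\0';
                callback(data, section, line, equal + 1);
            }
        }
    }
}

static void
callbackPrint(void *data, const char *section, const char *key, const char *value)
{
    (void)data;
    printf("%s.%s=%s\n", section, key, value);
}

int main(void)
{
    char ini[] = "[backup]\nlabel=20191201-000000F\ntype=full";
    iniParse(ini, callbackPrint, NULL);
    return 0;
}
```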
Also move the load retry logic to helper functions rather than embedding it in the Info object. This allows for more flexibility in loading and ensures that stack traces will be available when developing unit tests.
Reviewed by Cynthia Shang.
The manifest is not an info file so if anything it should be called backupManifest. But that seems too long for such a commonly used object so manifest seems better.
Note that unlike Perl there is no storage manifest method so this stands as the only manifest in the C code, as befits its importance.
Bug Fixes:
* Improve slow manifest build for very large quantities of tables/segments. (Reported by Jens Wilke.)
* Fix exclusions for special files. (Reported by CluelessTechnologist, Janis Puris, Rachid Broum.)
Improvements:
* The stanza-create/update/delete commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* The start/stop commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* Create log directories/files with 0750/0640 mode. (Suggested by Damiano Albani.)
Documentation Bug Fixes:
* Fix yum.p.o package being installed when custom package specified. (Reported by Joe Ayers, John Harvey.)
Documentation Improvements:
* Build pgBackRest as an unprivileged user. (Suggested by Laurenz Albe.)
This test is commonly used for sanity checking but the combination of S3 and encryption makes it hard to use and encourages temporary changes to make it usable.
Acknowledge this and disable S3 and encryption for this test and move them to mock/all/2.
ioReadLine() errors on eof because it has previously been used only for protocol reads.
Returning on eof is handy for reading lines from files where eof is not considered an error.
Prior to 2.16 the Perl manifest code would skip any file that began with a dot. This was not intentional but it allowed PostgreSQL socket files to be located in the data directory. The new C code in 2.16 did not have this unintentional exclusion so socket files in the data directory caused errors.
Worse, the file type error was being thrown before the exclusion check so there was really no way around the issue except to move the socket files out of the data directory.
Special file types (e.g. socket, pipe) will now be automatically skipped and a warning logged to notify the user of the exclusion. The warning can be suppressed with an explicit --exclude.
Reported by CluelessTechnologist, Janis Puris, Rachid Broum.
In versions <= 2.15 the old regexp caused any file or directory beginning with . to be ignored during a backup. This has caused behavioral differences in 2.16 because the new C code correctly excludes only the . and .. directories.
This Perl code is only used for testing now, but it should still match the output of the C functions.
Putting the checksum at the beginning of the file made it impossible to stream the file out when saving. The entire file had to be held in memory while it was checksummed so the checksum could be written at the beginning.
Instead place the checksum at the end. This does not break the existing Perl or C code since the read is not order dependent.
There are no plans to improve the Perl code to take advantage of this change, but it will make the C implementation more efficient.
Reviewed by Cynthia Shang.
Checking the PostgreSQL-reported path and version against the pgBackRest configuration helps ensure that pgBackRest is operating against the correct cluster.
In Perl this functionality was in the Db object, but check seems like a better place for it in C.
Contributed by Cynthia Shang.
Previously the host id to use was pulled from the host-id option or defaulted to 1.
The stanza, check, and backup commands will all need the ability to address a specified pg host, so add functions to make that possible.
Previously, info files (e.g. archive.info, backup.info) were created in Perl and only loaded in C.
The upcoming stanza commands in C need to create these files so refactor the Info* objects to allow new, empty objects to be created. Also, add functions needed to initialize each Info* object to a valid state.
Contributed by Cynthia Shang.
Previously storageLocal() was being used internally but loading pg_control from remote storage is often required.
Also, storagePg() is more appropriate than storageLocal() for all current usage.
Contributed by Cynthia Shang.
The pg1-socket-path and pg1-port options were not being reset when options from a higher index were being pushed down for processing by a remote. Since remotes only talk to one cluster they always use the options in index 1. This requires moving options from the original index to 1 before starting the remote. All options already set on index 1 must be removed if they are not being overwritten.
Processing large datasets in a memory context can lead to high memory usage and long allocation times. Add a new MEM_CONTEXT_TEMP_RESET_BEGIN() macro that allows temp allocations to be automatically freed after N iterations.
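A usage sketch with the macro names from this commit (processJob() and jobTotal are hypothetical):

```c
MEM_CONTEXT_TEMP_RESET_BEGIN()
{
    for (unsigned int jobIdx = 0; jobIdx < jobTotal; jobIdx++)
    {
        processJob(jobIdx);                     // temp allocations accumulate

        // Free accumulated temp allocations every 1000 iterations
        MEM_CONTEXT_TEMP_RESET(1000);
    }
}
MEM_CONTEXT_TEMP_END();
```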
Calculate the most common value in a list of variants. If there is a tie then the first value passed to mcvUpdate() wins.
mcvResult() can be called multiple times because it does not end processing, but there is a cost to calculating the result each time since it is not stored.
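A usage sketch (mcvNew() and the VARSTRDEF() macro are assumed from the surrounding codebase):

```c
MostCommonValue *mcv = mcvNew();

mcvUpdate(mcv, VARSTRDEF("9.6"));
mcvUpdate(mcv, VARSTRDEF("10"));
mcvUpdate(mcv, VARSTRDEF("9.6"));

// Returns the variant for "9.6" -- in a tie the first value passed to
// mcvUpdate() wins
const Variant *result = mcvResult(mcv);
```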
"null" is not allowed in the manifest format (null values should be missing instead) but Perl was treating the invalid values written by this test as if they were missing.
Update the test code to remove the values rather than setting them to "null".
Logging stayed in the backup log until the Perl code started. Fix this so it logs to the correct file and will still work after the Perl code is removed.
The Perl versions remain because they are still being used by the Perl stanza commands. Once the stanza commands are migrated they can be removed.
Contributed by Cynthia Shang.
Bug Fixes:
* Retry S3 RequestTimeTooSkewed errors instead of immediately terminating. (Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.)
* Fix incorrect handling of transfer-encoding response to HEAD request. (Reported by Pavel Suderevsky.)
* Fix scoping violations exposed by optimizations in gcc 9. (Reported by Christian Lange, Ned T. Crigler.)
Features:
* Add repo-s3-port option for setting a non-standard S3 service port.
Improvements:
* The local command for backup is implemented entirely in C. (Contributed by David Steele, Cynthia Shang.)
* The check command is implemented partly in C. (Reviewed by Cynthia Shang.)
Implement switch WAL and archive check in C but leave the rest in Perl for now.
The main idea was to have some real integration tests for the new database code so the rest of the migration can wait.
Reviewed by Cynthia Shang.
Migrate functionality from the Perl Db module to C. For now this is just enough to implement the WAL switch check.
Add the dbGet() helper function to get Db objects easily.
Create macros in harnessPq to make writing pq scripts easier by grouping commonly used functions together.
Reviewed by Cynthia Shang.
The cause of this error seems to be that a failed request takes so long that a subsequent retry at the http level uses outdated headers.
We're not sure if pgBackRest is to blame here (in one case a kernel downgrade fixed it, in another case an incorrect network driver was the problem) so add retries to hopefully deal with the issue if it is not too persistent. If SSL_write() has long delays before reporting an error then this will obviously affect backup performance.
Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.
Error codes were not being caught for SSL_write() so it was hard to see exactly what was happening in error cases. Report errors to aid in debugging.
Also add a retry for SSL_ERROR_WANT_READ. Even though we have not been able to reproduce this case it is required by SSL_write() so go ahead and implement it.
Multiple PostgreSQL hosts were supported via the host-id option but there are cases where it is useful to be able to directly specify the host id required, e.g. to iterate through pg* hosts when looking for candidate primaries and standbys during backup.
Keep trying to locate the WAL segment until timeout. This is useful for the check and backup commands which must wait for segments to arrive in the archive.
The remotes have their own config options (repo-host-config, etc.) so don't pass the local config* options.
This was a regression from the behavior of the Perl code and while there have been no field reports it caused breakage on test systems with multiple configurations.
Sometimes it is useful to get at the internals of a module that is not being tested for coverage in order to provide coverage for another module that is being tested. The include directive allows this.
Update modules that had previously been added to coverage that only need to be included.
If this option is set then ports appended to repo-s3-endpoint or repo-s3-host will be ignored.
Setting this option explicitly may be the only way to use a bare ipv6 address with S3 (since multiple colons confuse the parser) but we plan to improve this in the future.
This direct interface to libpq allows simple queries to be run against PostgreSQL and supports timeouts.
Testing is performed using a shim that can use scripted responses to test all aspects of the client code. The shim will be very useful for testing backup scenarios on complex topologies.
Reviewed by Cynthia Shang.
The local process is now entirely migrated to C. Since all major I/O operations are performed in the local process, the vast majority of I/O is now performed in C.
Contributed by David Steele, Cynthia Shang.
Add bool, array, and int64 as valid array subtypes.
Pretty print for the array subtype is not correct but is currently not in use (this can be seen at line 328 in typeJsonTest.c).
Discard all data passed to the filter. Useful for calculating size/checksum on a remote system when no data needs to be returned.
Update ioReadDrain() to automatically use the IoSink filter.
The HTTP server can use either content-length or transfer-encoding to indicate that there is content in the response. HEAD requests do not include content but return all the same headers as GET. In the HEAD case we were ignoring content-length but not transfer-encoding which led to unexpected eof errors on AWS S3. Our test server, minio, uses content-length so this was not caught in integration testing.
Ignore all content for HEAD requests (no matter how it is reported) and add a unit test for transfer-encoding to prevent a regression.
Found by Pavel Suderevsky.
This feature denotes storage that can compress files so that they take up less space than what was written. Currently this includes the Posix and CIFS drivers. The stored size of the file will be rechecked after write to determine if the reported size is different. This check would be wasted on object stores such as S3, and they might not report the file as existing immediately after write.
Also add tests to each storage driver to check features.
Previously only a single filter could be pushed to the remote since order was not being maintained. Now the filters are strictly ordered.
Results are returned from the remote and set in the local IoFilterGroup so they can be retrieved.
Expand remote filter support to include all filters.
Read all data from an IoRead object and discard it. This is handy for calculating size, hash, etc. when the output is not needed.
Update code where a loop was used before.
For offline backups the upper bound was being set to 0x0000FFFF0000FFFF rather than UINT64_MAX. This meant that page checksum errors might be ignored for databases with a lot of past WAL in offline mode.
Online mode is not affected since the upper bound is retrieved from pg_start_backup().
Files (especially build.auto.h) were being removed and forcing a full build between separate invocations of test.pl.
This affected ad-hoc testing at the command-line, not a full test run in CI.
This analysis never produced anything but false positives (var might be NULL) but took over a minute per test run and added 600MB to the test container.
Since 2.91 JSON::PP has a bias for saving variables that look like numbers as numbers even if they were declared as strings.
Force versions to strings where needed by appending ''.
Update the json-pp-perl package on Ubuntu 18.04 to 2.97 to provide test coverage.
No new Perl code is being developed, so these tools are just taking up time and making migrations to newer platforms harder. There are only a few Perl tests remaining with full coverage so the coverage tool does not warn of loss of coverage in most cases.
Remove both tools and associated libraries.
ScalityS3 has not received any maintenance in years and is slow to start which is bad for testing. Replace it with minio which starts quickly and ships as a single executable or a tiny container.
Minio has stricter limits on allowable characters but should still provide enough coverage to show that our encoding is working correctly.
This commit also includes the upgrade to openssl 1.1.1 in the Ubuntu 18.04 container.
Some HTTP error tests were failing after the upgrade to openssl 1.1.1, though the rest of the unit and integration tests worked fine. This seemed to be related to the very small messages used in the error testing, but it pointed to an issue with the code not being fully compliant, made worse by auto-retry being enabled by default.
Disable auto-retry and implement better error handling to bring the code in line with openssl recommendations.
There's no evidence this is a problem in the field, but having all the tests pass seems like a good idea and the new code is certainly more robust.
Coverage will be complete in the next commit when openssl 1.1.1 is introduced.
Maintaining the storage layer/drivers in two languages is burdensome. Since the integration tests require the Perl storage layer/drivers we'll need them even after the core code is migrated to C. Create an interface layer so the Perl code can be removed and new storage drivers/features introduced without adding Perl equivalents.
The goal is to move the integration tests to C so this interface will eventually be removed. That being the case, the interface was designed for maximum compatibility to ease the transition. The result looks a bit hacky but we'll improve it as needed until it can be retired.
Bug Fixes:
* Fix archive retention expiring too aggressively. (Fixed by Cynthia Shang. Reported by Mohamad El-Rifai.)
Improvements:
* The expire command is implemented entirely in C. (Contributed by Cynthia Shang.)
* The local command for restore is implemented entirely in C.
* Remove hard-coded PostgreSQL user so $PGUSER works. (Suggested by Julian Zhang, Janis Puris.)
* Honor configure --prefix option. (Suggested by Daniel Westermann.)
* Rename repo-s3-verify-ssl option to repo-s3-verify-tls. The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure). The old name will continue to be accepted.
Documentation Improvements:
* Add FAQ to the documentation. (Contributed by Cynthia Shang.)
* Use wal_level=replica in the documentation for PostgreSQL ≥ 9.6. (Suggested by Patrick McLaughlin.)
Secure options could show up in the help as "current". While the user must have permissions to see the source of the options (e.g. environment, config file) it's still not a good idea to display them in an unexpected context.
Instead show secure options as <redacted> in the help command.
Amend commit 434cd832 to error when the db history in archive.info and backup.info do not match.
The Perl code would attempt to reconcile the history by matching on system id and version but we are not planning to migrate that code to C. It's possible that there are users with mismatches but if so they should have been getting errors from info for the last six months. It's easy enough to manually fix these files if there are any mismatches in the field.
Contributed by Cynthia Shang.
If the file is compressible (i.e. not encrypted or already compressed) it can be marked as such in storageNewRead()/storageNewWrite(). If the file is being read from/written to a remote it will be compressed in transit using gzip.
Simplify filter group handling by having the IoRead/IoWrite objects create the filter group automatically. This removes the need for a lot of NULL checking and has a negligible effect on performance since a filter group needs to be created eventually unless the source file is missing.
Allow filters to be created using a VariantList so filter parameters can be passed to the remote.
This implementation duplicates the functionality of the Perl code but does so with different logic and includes full unit tests.
Along the way at least one bug was fixed, see issue #748.
Contributed by Cynthia Shang.
The tests and documentation have been using the core storage layer but soon that will depend entirely on the C library, creating a bootstrap problem (i.e. the storage layer will be needed to build the C library).
Create a simplified Posix storage layer to be used by documentation and the parts of the test code that build and execute the actual tests. The actual tests will still use the core storage driver so they can interact with any type of storage.
This filter exactly mimics the behavior of the Perl filter so is a drop-in replacement.
The filter is not integrated yet since it requires the Perl-to-C storage layer interface coming in a future commit.
These names more accurately reflect what the functions do and follow the convention started in Info and InfoPg.
Also remove the ignoreMissing parameter since it was never used.
Contributed by Cynthia Shang.
Some filters (e.g. encryption and compression) produce output even if there is no input. Since the filter group was marked as "done" initially, processing would not run when there was zero input and that resulted in zero output.
All filters start not done so start the filter group the same way.
The prior method of tailing the docker log no longer seems reliable. Instead, keep retrying the make bucket command until it works and show the error if it times out.
This allows copying from one S3 object to another. We generally try to avoid doing this but there are a few cases where it is needed and the tests do it quite a bit.
One thing to look out for here is that reads require the http client to be explicitly released by calling httpClientDone(). This means that the number of clients could grow if they are not released properly. The http statistics will hopefully alert us if this is happening.
This cache manages multiple http clients and returns one to the caller that is not busy. It is the responsibility of the caller to indicate when they are done with a client. If returnContent is set then the client will automatically be marked done.
Also add special handing for HEAD requests to recognize that content-length is informational only and no content is expected.
This was not enforced at parse time because repo1-cipher-type could be passed on the command-line even in cases where encryption was not needed by the subprocess.
Filter repo-cipher-type so it is never passed on the command line. If the subprocess does not have access to the passphrase then knowing the encryption type is useless anyway.
Filter groups could not be manipulated once they had been assigned to an IO object. Now they can be freely manipulated up to the time the IO object is opened.
Also, move the filter group into the IO object's context so they don't need to be tracked separately.
The previous implementation searched for the file in a list which worked but was not optimal. For arbitrary bucket structures it would also produce a false negative if a match was not found in the first 1000 entries. This was not an issue for our repo structure since the max hits on exists calls is two but it seems worth fixing to avoid future complications.
The C code was passing the host (if specified) with the request which could force the server into path-style URLs, which are not supported.
Instead, use the Perl logic of always passing bucket.endpoint in the request no matter what host is used for the HTTPS connection.
It's an open question whether we should support path-style URLs but since we don't it's useless to tell the server otherwise. Note that Amazon S3 has deprecated path-style URLs and they are no longer supported on newly created buckets.
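A minimal sketch of the request form, assuming the project's String helpers (strNewFmt()/strPtr()) and an httpHeaderPut()-style setter:

    /* Regardless of which host the TLS connection was opened to, the request
       always advertises the virtual-hosted form so the server never assumes
       path-style URLs. */
    String *requestHost = strNewFmt("%s.%s", strPtr(bucket), strPtr(endPoint));
    httpHeaderPut(header, strNew("host"), requestHost);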
This was being removed by rsync, which forced a full build even when a partial build should have been fine. Rewrite the file after the rsync so it is preserved.
Allow commands to be skipped by default in the command help but still work if help is requested for the command directly. There may be other uses for the flag in the future.
Update help for ls now that it is exposed.
Allows listing repo paths/files from the command-line, to be used primarily for testing and debugging.
This command is internal-only so the interface may change at any time without notice.
The documentation was relying on a ScalityS3 container built for testing which wasn't very transparent. Instead, use the stock minio container and configure it in the documentation.
Also, install certificates and CA so that TLS verification can be enabled.
Not all storage types support paths as a physical thing that must be created/destroyed. Add a feature to determine which drivers use paths, then simplify the driver API as much as possible given that knowledge, implementing as much path logic as possible in the Storage object.
Remove the ignoreMissing parameter from pathSync() since it is not used and makes little sense.
Create a standard list of error messages for the drivers to use and apply them where the code was modified -- there is plenty of work still to be done here.
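In sketch form (the feature name here is an assumption), the Storage object consults the driver's feature set before attempting physical path operations:

    /* Only drivers that declare the path feature get physical path calls. */
    if (storageFeature(storage, storageFeaturePath))
        storagePathCreateP(storage, path);
    /* Drivers like S3, where paths are purely logical, simply skip this. */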
These functions are not required for repository storage, so make them optional and error if they are not implemented for non-repository storage, e.g. pg or spool.
The goal is to simplify the drivers (e.g. S3) that are intended only for repository storage.
The prior behavior was to return NULL so the caller would know the path was missing, but this is rarely useful, complicates the calling code, and increases the chance of segfaults.
The .nullOnMissing param has been added to enable the prior behavior.
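The pattern behind .nullOnMissing looks roughly like this (a simplified sketch, not the project's exact macros): a *P() wrapper packs designated initializers into a param struct, so omitted options default to false:

    typedef struct StorageListParam
    {
        bool dummy;                                 /* permits an empty option list */
        bool nullOnMissing;
    } StorageListParam;

    #define storageListP(this, path, ...)                                     \
        storageList(this, path, (StorageListParam){.dummy = false, __VA_ARGS__})

    /* Callers opt back into the old behavior explicitly:
       StringList *list = storageListP(storage, path, .nullOnMissing = true); */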
The release notes are generally a direct reflection of the git log. So, ease the burden of maintaining the release notes by using the git log to determine what needs to be added.
Currently only non-dev items are required to be matched to a git commit but the goal is to account for all commits.
The git history cache is generated from the git log but can be modified to correct typos and match the release notes as they evolve. The commit hash is used to identify commits that have already been added to the cache.
There's plenty more to do here. For instance, links to the commits for each release item should be added to the release notes.
The C code is designed to be efficient rather than deterministic at the debug log level. As we move more testing from integration to unit tests it makes less sense to try to maintain the expect logs at this log level.
Most of the expect logs have already been moved to detail level but mock/all still had tests at debug level. Change the logging defaults in the config file and remove as many references to log-level-console as possible.
The JSON spec permits / to be escaped, but the Perl renderer we use does not escape it, which leads to checksum mismatches between the two sets of code.
This particular escape seems to be a more recent addition to the spec and is targeted toward embedding JSON in JavaScript.
\/ is still allowed when parsing JSON.
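In other words, rendering and parsing are deliberately asymmetric. A toy sketch of the renderer side (not the project's actual code):

    #include <stdio.h>

    /* Toy renderer: '/' is intentionally NOT escaped, matching the Perl
       renderer so both implementations produce identical checksums. */
    static void
    jsonRenderChar(char c, FILE *out)
    {
        switch (c)
        {
            case '"': fputs("\\\"", out); break;
            case '\\': fputs("\\\\", out); break;
            default: fputc(c, out); break;          /* '/' passes through */
        }
    }

    /* The parser, by contrast, still accepts the optional escape, so an
       input sequence \/ decodes to a plain '/'. */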
The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure).
The old name will continue to be accepted.
This is just the part of restore run by the local helper processes, not the entire command.
Even so, various optimizations in the code (like pipelining and optimizations for zero-length files) should make the restore command faster on object stores.
Bug Fixes:
* Fix segfault when process-max > 8 for archive-push/archive-get. (Reported by Jens Wilke.)
Improvements:
* Bypass database checks when stanza-delete issued with force. (Contributed by Cynthia Shang. Suggested by hatifnatt.)
* Add configure script for improved multi-platform support.
Documentation Features:
* Add user guides for CentOS/RHEL 6/7.
This report replaces the lcov report that was generated manually for each release.
The lcov report was overly verbose just to say that we have virtually 100% coverage.
Pre-calculate the value used by logAny() to improve performance and make it more likely to be inlined.
Move IF_LOG_ANY() into LOG_INTERNAL() to simplify the macros and improve performance of LOG() and LOG_PID(). If the message has no chance of being logged there's no reason to call logInternal().
Rename logWill() to logAny() because it seems more intuitive.
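Simplified, the resulting shape is roughly as follows (a sketch, assuming a LogLevel enum where more verbose levels have higher values):

    static LogLevel logLevelAnyMax;                 /* recalculated when log levels change */

    static inline bool
    logAny(LogLevel logLevel)
    {
        return logLevel <= logLevelAnyMax;          /* one pre-calculated comparison */
    }

    #define LOG_INTERNAL(logLevel, ...)                                       \
        do                                                                    \
        {                                                                     \
            if (logAny(logLevel))                                             \
                logInternal(logLevel, __VA_ARGS__);                           \
        }                                                                     \
        while (0)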
The branch coverage exclusion rules were overly broad and included functions that ended in a capital letter, which disabled all coverage for the statement. Improve matching so that all characters in the name must be upper-case for a match.
Some macros with internal branches accepted parameters that might themselves contain conditionals. This made it impossible to tell which branches belonged to the macro and which to the parameters, and in any case an overzealous exclusion rule was ignoring all branches in such cases. Add the DEBUG_COVERAGE flag to build a modified version of the macros without any internal branches to be used for coverage testing. In most cases the branches were optimizations (like checking logWill()) that improve production performance but are not needed for testing. In other cases, a parameter needed to be added to the underlying function to handle the branch during coverage testing.
Also tweak the coverage rules so that macros without conditionals are automatically excluded from branch coverage as long as they are not themselves a parameter.
Finally, update tests and code where missing coverage was exposed by these changes. Some code was updated to remove existing coverage exclusions when it was a simple change.
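Continuing the logging sketch above, the DEBUG_COVERAGE build simply drops the internal branch (the filtering then happens in the underlying function), so the macro contributes no branches to coverage. This is a sketch of the approach, not the exact macros:

    #ifdef DEBUG_COVERAGE
        /* Branch-free variant for coverage testing -- same behavior, since
           logInternal() can also check whether the message will be logged. */
        #define LOG(logLevel, ...)                                            \
            logInternal(logLevel, __VA_ARGS__)
    #else
        /* Production variant -- skip argument evaluation when not logging. */
        #define LOG(logLevel, ...)                                            \
            do                                                                \
            {                                                                 \
                if (logAny(logLevel))                                         \
                    logInternal(logLevel, __VA_ARGS__);                       \
            }                                                                 \
            while (0)
    #endif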
Filters had different ideas about what "done" meant and this added complication to the group filter processing. For example, gzip decompression would detect end of stream and mark the filter as done before it had been flushed.
Improve the IoFilter interface to give a consistent definition of done across all filters, i.e. no filter can be done until it has started flushing no matter what the underlying driver reports. This removes quite a bit of tricky logic in the processing loop which tried to determine when a filter was "really" done.
Also improve management of the input buffers by pointing directly to the prior output buffer (or the caller's input) to eliminate loops that set/cleared these buffers.
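The unified rule can be sketched as follows (the field names are illustrative, not the real struct):

    /* A filter is never done until flushing has begun, no matter what the
       underlying driver reported when it hit end-of-stream. */
    bool
    ioFilterDone(const IoFilter *this)
    {
        return this->flushing && this->driverDone;
    }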
If content was zero-length then the IO object was not created. This put the burden on the caller to test that the IO object existed before checking eof.
Instead, create an IO object even if it will immediately return eof. This has little cost and makes the calling code simpler.
Also add an explicit test for zero-length files in S3 and a few assertions.
The rules for when a C remote is required are getting complicated and will get worse when restoreFile() is migrated.
Instead, set the --c option when a C remote is required. This option will be removed when the remote is entirely implemented in C.
Most of the *Free() functions are pretty generic so add macros to make creating them as easy as possible.
Create a distinction between *Free() functions that the caller uses to free memory and callbacks that free third-party resources. There are a number of cases where a driver needs to free resources but does not need a normal *Free() because it is handled by the interface.
Add common/object.h for macros that make object maintenance easier. This pattern can also be used for many more object functions.
Rename memContextCallback() to memContextCallbackSet() to be more consistent with other parts of the code.
Free all context memory when an exception is thrown from a callback. Previously only the child contexts would be freed and this resulted in some allocations being lost. In practice this is probably not a big deal since the process will likely terminate shortly, but there may well be cases where that is not true.
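A simplified sketch of the kind of macro common/object.h provides (not verbatim):

    /* Generate a standard *Free() that just frees the object's mem context. */
    #define OBJECT_DEFINE_FREE(objectType)                                    \
        void                                                                  \
        objectType##Free(objectType *this)                                    \
        {                                                                     \
            if (this != NULL)                                                 \
                memContextFree(this->memContext);                             \
        }

    /* Third-party resources (file handles, TLS sessions, ...) are released by
       a separate callback registered on the mem context, not by *Free(). */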
Add GLUE() macro which is useful for creating identifiers.
Move MACRO_TO_STR() here and rename it STRINGIFY(). This appears to be the standard name for this type of macro and it is also an awesome name.
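These follow the usual two-level form, so that arguments are macro-expanded before they are pasted or stringified:

    #define GLUE_(a, b) a##b
    #define GLUE(a, b) GLUE_(a, b)                  /* GLUE(list, Size) -> listSize */

    #define STRINGIFY_(s) #s
    #define STRINGIFY(s) STRINGIFY_(s)              /* STRINGIFY(__LINE__) -> the line number as a string */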
Remove "File" and "Driver" from object names so they are shorter and easier to keep consistent.
Also remove the "driver" directory so storage implementations are visible directly under "storage".
The function pointer casting used when creating drivers made changing interfaces difficult and led to slightly divergent driver implementations. Unit testing caught production-level errors but there were a lot of small issues and the process was harder than it should have been.
Use void pointers instead so that no casts are required. Introduce the THIS_VOID and THIS() macros to make dealing with void pointers a little safer.
Since we don't want to expose void pointers in header files, driver functions have been removed from the headers and the various driver objects return their interface type. This cuts down on accessor methods; the vast majority of those functions were not being used. Move functions that are still required to .intern.h.
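In sketch form (the driver function below is a made-up example):

    #define THIS_VOID void *thisVoid                /* parameter declaration */
    #define THIS(type) type *this = thisVoid        /* recover the typed pointer */

    /* An interface callback then starts like this: */
    static size_t
    storageReadPosix(THIS_VOID, Buffer *buffer)
    {
        THIS(StorageReadPosix);

        /* ... read into buffer using this-> ... */
        (void)buffer;
        return 0;
    }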
Remove the special "C" crypto functions that were used in libc and instead use the standard interface.
Add bufDup() and bufNewUsedC().
Arrange bufNewC() params to match bufNewUsedC() since they have always seemed backward.
Fix bufHex() to only render the used portion of the buffer and fix some places where used was not being set correctly.
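The corrected behavior, sketched with buffer accessors whose names are assumptions here:

    /* Render only the used portion of the buffer, not the full allocation. */
    String *
    bufHex(const Buffer *this)
    {
        String *result = strNew("");

        for (size_t i = 0; i < bufUsed(this); i++)
            strCatFmt(result, "%02x", bufPtr(this)[i]);

        return result;
    }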
Use a union to make macro assignments for all legal values without casting. This is much more likely to catch bad assignments.
There is only one instance in the core code where this helps. It is mostly helpful in the tests.
There is an argument to be made that only THROW_SYS_ERROR*() variants should be used in the core code to improve test coverage. If so, that will be the subject of a future commit.
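A generic illustration of the union technique (the types and names are invented for the example):

    #include <stdbool.h>

    /* One member per legal type: a macro can then assign any allowed value
       without a cast, and an illegal type draws a compiler diagnostic instead
       of being silently converted. */
    typedef union
    {
        bool boolVal;
        int intVal;
        const char *stringVal;
    } TestValue;

    #define TEST_VALUE_INT(value) ((TestValue){.intVal = (value)})
    #define TEST_VALUE_STR(value) ((TestValue){.stringVal = (value)})

    /* TEST_VALUE_INT("oops") is now rejected at compile time rather than
       casting the pointer to an int. */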