Refactor the lock module to split command-specific logic from the basic file locking functionality. Command-specific logic is now in command/lock.c. This will make it easier to implement new features such as repository locking and updating lock file contents on remotes.
This implementation is essentially a drop-in replacement but there are a few differences. First, the lock names no longer require a path (the path is added in the lock module). Second, the timeout functionality has been removed since it was not being used.
lcov does not seem to be very well maintained and is often not compatible with the version of gcc it ships with until a few months after a new distro is released. In any case, lcov is not that useful for us because it generates reports on all coverage while we are mainly interested in missing coverage during development.
Instead use the JSON output generated by gcov to generate our minimal coverage report and metrics for the documentation.
There are some slight differences in the metrics. The difference in the common module was due to a bug in the old code -- build/common was being added into common as well as being reported separately. The source of the two additional branches in the backup module is unknown but is almost certainly down to how exclusions are processed with regular expressions. Since this is additional coverage rather than missing coverage, it seems fine.
Since this was pretty much a rewrite it was also a good time to migrate to C.
The GCS driver sent a single file delete request for each file while deleting a path. Depending on latency this could lead to rather long delete times, especially noticeable during expiration.
Improve GCS delete to use batches, which require multipart HTTP, so also add multipart HTTP infrastructure.
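For reference, a batched delete is a single multipart/mixed request that embeds one HTTP request per object to be deleted, roughly of the form below (bucket and object names are placeholders):

    POST /batch/storage/v1 HTTP/1.1
    Host: storage.googleapis.com
    Content-Type: multipart/mixed; boundary=batch_boundary

    --batch_boundary
    Content-Type: application/http

    DELETE /storage/v1/b/example-bucket/o/path%2Fto%2Ffile1 HTTP/1.1

    --batch_boundary
    Content-Type: application/http

    DELETE /storage/v1/b/example-bucket/o/path%2Fto%2Ffile2 HTTP/1.1

    --batch_boundary--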
Connections are established using the "happy eyeballs" approach from RFC 8305, i.e. new addresses (if available) are tried if the prior address has already had a reasonable time to connect. This prevents waiting too long on a failed connection but does not try all the addresses at once. Prior connections that are still waiting are rechecked periodically if no subsequent connection is successful.
This improves substantially on 39bb8a0, which failed to take into account connection attempts that do not fail (but never connect) and use up all the available time.
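A rough sketch of the staggered strategy described above (all names and the stagger interval are illustrative, not the actual implementation):

    /* Try the next address only after the prior attempt has had a reasonable time to connect */
    #include <stdbool.h>
    #include <stddef.h>

    #define CONNECT_STAGGER_MS 250                                   /* illustrative stagger interval */

    typedef struct Address Address;

    extern int addressConnectBegin(const Address *address);          /* start a non-blocking connect */
    extern bool addressConnectCheck(int fd, int waitMs);             /* poll an in-flight connect */

    static int
    connectAny(const Address *const *addressList, size_t addressTotal, int timeoutMs)
    {
        int fdList[addressTotal];
        size_t fdTotal = 0;

        while (timeoutMs > 0)
        {
            /* Start a connection to the next address if any remain untried */
            if (fdTotal < addressTotal)
            {
                fdList[fdTotal] = addressConnectBegin(addressList[fdTotal]);
                fdTotal++;
            }

            /* Give the most recent attempt a reasonable time before trying the next address */
            if (fdTotal > 0 && addressConnectCheck(fdList[fdTotal - 1], CONNECT_STAGGER_MS))
                return fdList[fdTotal - 1];

            /* Periodically recheck prior attempts that are still pending */
            for (size_t fdIdx = 0; fdIdx + 1 < fdTotal; fdIdx++)
                if (addressConnectCheck(fdList[fdIdx], 0))
                    return fdList[fdIdx];

            timeoutMs -= CONNECT_STAGGER_MS;
        }

        return -1;                                                    /* no address connected within the timeout */
    }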
The Perl integration tests were migrated as faithfully as possible, but there was some cruft and a few unit tests that it did not make sense to migrate.
Also remove all Perl code made obsolete by this migration.
All unit, performance, and integration tests are now written in C but significant parts of the test harness remain to be migrated.
The core Exec object is efficient but geared toward the specific needs of core rather than the ease of use required by build, documentation, and testing.
execOne() works similarly to system() except that it automatically redirects stderr to stdout and captures the output.
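A minimal sketch of that behavior (not the actual implementation), using popen() to redirect stderr to stdout and capture the combined output:

    #include <stdio.h>
    #include <stdlib.h>

    static char *
    execOneSketch(const char *command)
    {
        /* Append 2>&1 so stderr is captured along with stdout */
        char redirected[4096];
        snprintf(redirected, sizeof(redirected), "%s 2>&1", command);

        FILE *pipe = popen(redirected, "r");

        if (pipe == NULL)
            return NULL;

        /* Read the entire combined output into a dynamically grown buffer */
        size_t size = 0;
        size_t capacity = 1024;
        char *output = malloc(capacity);
        size_t readSize;

        while ((readSize = fread(output + size, 1, capacity - size - 1, pipe)) > 0)
        {
            size += readSize;

            if (size >= capacity - 1)
                output = realloc(output, capacity *= 2);
        }

        output[size] = '\0';
        pclose(pipe);

        return output;                  /* caller is responsible for freeing */
    }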
These tests have not been maintained for several years, i.e. no tests for new features have been added. They are highly duplicative of the unit tests but do have the advantage of mixing in different storage drivers. They were allowed to remain because they were not doing any harm even if they were probably not doing any good.
However, the real integration tests (that run directly against PostgreSQL) also test storage drivers and have been updated with new features over time. The real integration tests are now being migrated to C and as part of that effort the mock integration tests need to be removed or migrated, and they do not provide enough value to migrate.
Remove all mock integration tests and a leftover Perl performance test.
During restore it is possible to read all the blocks out of a compressed super block without reading all the input. This is because the compression format may have some trailing bytes that are not required for decompression but are required to indicate that the data has ended. If a buffer aligns with the compressed data in a certain way, these last bytes might not be read.
Explicitly read out any final bytes at the end of each super block to handle this case. This should always result in no additional data being output (and we check for that), but it does move the read position to the beginning of the next compressed super block so decompression can begin without error.
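An illustrative sketch using zlib (the actual compression format and interface may differ): once all expected data has been produced, one more inflate() call consumes the trailer so the input position lands at the start of the next super block, and no additional output is expected:

    #include <assert.h>
    #include <zlib.h>

    static void
    inflateDrainTrailer(z_stream *stream, const unsigned char *input, unsigned int inputRemaining)
    {
        unsigned char discard[16];

        stream->next_in = (Bytef *)input;
        stream->avail_in = inputRemaining;
        stream->next_out = discard;
        stream->avail_out = sizeof(discard);

        int result = inflate(stream, Z_FINISH);

        /* The trailer must end the stream without producing any more data */
        assert(result == Z_STREAM_END);
        assert(stream->avail_out == sizeof(discard));
    }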
The backupFile() tests were written before the bulk of the backup command had been migrated to C. Some of them have been migrated to the complete backup tests, but others were left because there was no way to make changes to files during a backup.
Now that we have the backup script harness introduced in 337da35a, it is possible to migrate all the tests. The new tests are better because they test not only backupFile() but all the functions upstream and downstream of it.
This behavior violates an assertion but is entirely possible with the current implementation. It will be fixed in a future commit, but for now at least test how it currently works and remove the assertion so the test runs without error.
Also add a new harness that allows changes during the backup to be scripted.
Migrate generation of these files from help.xml to the intermediate documentation format. This allows us to share a lot of code that is already in C and remove duplicated code in Perl. More duplicated Perl code can be removed once man page generation is migrated.
Also update the unit test harness to allow testing of modules in the doc directory.
This option is intended to eventually create a comprehensive report about the state of the pgBackRest configuration based on the results of the check command.
Implement a detailed report of the configuration options in the environment and configuration files. This should be useful information when debugging configuration errors, since invalid options and configurations are automatically noted. While custom config locations will not be found automatically, it will at least be clear that the config is not in a standard location.
For now keep this option internal since there is a lot of work to be done, but commit it so that it can be used when needed and tested in various environments.
Note that for now when --report is specified, the check command is not being run at all. Only the config report is generated. This behavior will be improved in the future.
This new option allows tags to be added to objects in S3, GCS, and Azure repositories.
This was fairly straightforward for S3 and Azure, but GCS does not allow tags for a simple upload using the JSON interface. If tags are required then the resumable interface must be used even if the file falls below the limit that usually triggers a resumable upload (i.e. size < repo-storage-upload-chunk-size).
This option is structured so that tags must be specified per-repo rather than globally for all repos. This seems logical since the tag keys and values may vary by service, e.g. S3 vs GCS.
These storage tags are independent of backup annotations since they are likely to be used for different purposes, e.g. billing, while the backup annotations are primarily intended for monitoring.
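As an illustration only -- the option name below is hypothetical, so check the configuration reference for the exact spelling -- per-repo tags might be configured roughly like:

    [global]
    repo1-type=s3
    repo1-storage-tag=cost-center=1234
    repo1-storage-tag=team=dba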
The prior code did one list command against the storage for each WAL segment. This led to a lot of list commands and was especially inefficient when the WAL (or the majority of it) was already present.
Optimize by keeping the contents of a WAL directory and reusing them on subsequent searches. Leave the single WAL segment optimizations in place since other places still use that mode.
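A rough sketch of the optimization (names are illustrative, not the actual implementation):

    #include <stdbool.h>
    #include <string.h>

    typedef struct WalSegmentFind
    {
        const char *pathLast;           /* WAL archive directory that was last listed */
        const char **fileList;          /* cached listing of that directory */
        size_t fileTotal;
    } WalSegmentFind;

    extern const char **storageListWal(const char *path, size_t *fileTotal);

    static bool
    walSegmentExists(WalSegmentFind *find, const char *path, const char *segment)
    {
        /* Only list the directory again when the search moves to a new directory */
        if (find->pathLast == NULL || strcmp(find->pathLast, path) != 0)
        {
            find->fileList = storageListWal(path, &find->fileTotal);
            find->pathLast = path;
        }

        /* Archived segments carry a checksum suffix, so match on the segment prefix */
        for (size_t fileIdx = 0; fileIdx < find->fileTotal; fileIdx++)
            if (strncmp(find->fileList[fileIdx], segment, strlen(segment)) == 0)
                return true;

        return false;
    }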
sckHostLookup() only returned the first address record returned from getaddrinfo(). The new AddressInfo object provides a full list of values returned from getaddrinfo(). Freeing the list is also handled by the object so there is no longer a need for FINALLY blocks to ensure the list is freed.
Add the selected address to the client/server names for debugging purposes.
This code does not attempt to connect to multiple addresses. It just lays the groundwork for a future commit to do so.
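For reference, this is the standard getaddrinfo() pattern that the AddressInfo object wraps; freeing the result list is the cleanup that previously required FINALLY blocks:

    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    static void
    addressListPrint(const char *host, const char *port)
    {
        struct addrinfo hints = {.ai_family = AF_UNSPEC, .ai_socktype = SOCK_STREAM};
        struct addrinfo *list = NULL;

        if (getaddrinfo(host, port, &hints, &list) == 0)
        {
            /* Every record returned is available, not just the first */
            for (const struct addrinfo *info = list; info != NULL; info = info->ai_next)
            {
                char address[256];

                if (getnameinfo(info->ai_addr, info->ai_addrlen, address, sizeof(address), NULL, 0, NI_NUMERICHOST) == 0)
                    printf("%s\n", address);
            }

            /* The list must always be freed, which the AddressInfo object now handles */
            freeaddrinfo(list);
        }
    }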
On certain file systems (e.g. ext4) pg_control may appear torn if there is a concurrent write while reading the file. To prevent an invalid read, retry until the checksum matches the control data.
Special handling is required for the pg-version-force feature since the offset of the checksum is not known. In this case, scan from the default position to the end of the data looking for a checksum match. This is a bit imprecise, but better than nothing, and the chance of a random collision in the control data seems very remote considering the ratio of data size (< 512 bytes) to checksum size (4 bytes).
This was discovered and a possible solution proposed for PostgreSQL in [1]. The proposed solution may work for backup, but pgBackRest needs to be able to read pg_control reliably outside of backup. So no matter what fix is adopted for PostgreSQL, pgBackRest needs retries. Further adjustment may be required as the PostgreSQL fix evolves.
[1] https://www.postgresql.org/message-id/20221123014224.xisi44byq3cf5psi%40awork3.anarazel.de
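A rough sketch of the retry described above (helper names are illustrative, not the actual implementation):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    extern size_t pgControlRead(unsigned char *buffer, size_t size);          /* read pg_control from disk */
    extern uint32_t crc32cOne(const unsigned char *buffer, size_t size);      /* CRC-32C as used by PostgreSQL */
    extern uint32_t pgControlCrcStored(const unsigned char *buffer);          /* stored checksum */
    extern size_t pgControlCrcOffset(void);                                   /* offset of the checksum field */

    static bool
    pgControlReadValid(unsigned char *buffer, size_t size, unsigned int retryMax)
    {
        for (unsigned int retry = 0; retry <= retryMax; retry++)
        {
            const size_t readSize = pgControlRead(buffer, size);

            /* A torn read produces a checksum mismatch and triggers another attempt */
            if (readSize >= pgControlCrcOffset() &&
                crc32cOne(buffer, pgControlCrcOffset()) == pgControlCrcStored(buffer))
            {
                return true;
            }
        }

        return false;
    }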
If there are a lot of retries then the output might be very large and even be truncated by the error module. Either way, it is not good information for the user.
When a message is repeated, aggregate it so that the total number of retries and the time range are output for the message. This provides helpful information about what happened without overwhelming the user with data.
The check command now checks multiple stanzas when the stanza option is omitted.
The stanza list is extracted from the current configuration rather than scanning the repository like the info command. Scanning the repository is a problem because configuration for each stanza may not be present in the current configuration. Since this functionality is new for check there is no regression.
Add a new section to the user guide to cover multi-stanza configuration and provide additional coverage for this feature.
Also fix a small issue in the parser when an indexed option has a dependency on a non-indexed option. There were no examples of this case in the previous configuration.
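For example, with two stanzas configured (paths are illustrative), running check without --stanza now verifies both:

    [global]
    repo1-path=/var/lib/pgbackrest

    [main]
    pg1-path=/var/lib/postgresql/15/main

    [analytics]
    pg1-path=/var/lib/postgresql/15/analytics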
The initial implementation used simple waits when it had to loop after getting LIBSSH2_ERROR_EAGAIN, but we don't want to wait some arbitrary amount of time; we want to wait until we are able to read or write on the fd that we would have blocked on.
This change removes all of the wait code from the SFTP driver and changes the loops to call the newly introduced storageSftpWaitFd(), which in turn checks with libssh2 to determine the appropriate direction to wait on (read, write, or both) and then calls fdReady() to perform the wait using the provided timeout.
This also removes the need to pass ioSession or timeout down into the SFTP read/write code.
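A simplified sketch of the wait, using the libssh2_session_block_directions() call; the fdReady() signature shown here is assumed for illustration:

    #include <stdbool.h>
    #include <stdint.h>
    #include <libssh2.h>

    extern bool fdReady(int fd, bool read, bool write, uint64_t timeoutMs);

    static bool
    storageSftpWaitFdSketch(LIBSSH2_SESSION *session, int fd, uint64_t timeoutMs)
    {
        /* Ask libssh2 which direction(s) the session is blocked on */
        const int direction = libssh2_session_block_directions(session);

        const bool waitRead = (direction & LIBSSH2_SESSION_BLOCK_INBOUND) != 0;
        const bool waitWrite = (direction & LIBSSH2_SESSION_BLOCK_OUTBOUND) != 0;

        /* Nothing to wait on -- the caller can retry immediately */
        if (!waitRead && !waitWrite)
            return true;

        /* Wait until the fd is ready in the required direction(s) or the timeout expires */
        return fdReady(fd, waitRead, waitWrite, timeoutMs);
    }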
Double spaces have fallen out of favor in recent years because they no longer contribute to readability.
We have been using single spaces and editing related paragraphs for some time, but now it seems best to update the remaining instances to avoid churn in unrelated commits and to make it clearer what spacing contributors should use.
These were intended to allow the block list to be scanned without reading the map but were never utilized. They were left in "just in case" and because they did not seem to be doing any harm.
In fact, it is better not to have the block numbers because this allows us to set the block size at a future time as long as it is a factor of the super block size. One way this could be useful is to store older files without super blocks or a map in the full backup and then build a map for them if the file gets modified in a diff/incr backup. This would require reading the file from the full backup to build the map but it would be more space efficient and we could make more intelligent decisions about block size. It would also be possible to change the block size even if one had already been selected in a prior backup.
Omitting the block numbers makes the chunking unnecessary since there is now no way to make sense of the block list without the map. Also, we might want to build maps for unchunked block lists, i.e. files that were copied normally.
Centralize the code to allow it to be used in more places and update the protocol/server module to use the new code.
Since the time measurements make testing difficult, also add time and errorRetry harnesses to allow specific data to be used for testing. In the case of errorRetry, the production behavior is turned off by default during testing and only enabled for the errorRetry test module.
Each call to lockAcquireP() passed enough information to initialize the lock system. This was somewhat inefficient and as locks become more complicated it will lead to more code duplication. Since a process can only take one type of lock it makes sense to do most of the initialization up front.
Also reduce the log level of lockRelease() since it is only called at exit and the lock will be released in any case.
Output a manifest in text or JSON format. Neither format is complete but they cover the basics.
In particular the manifest command outputs the complete block restore when the filter option is specified and the block delta when the pg option is also specified. This is non-destructive so it is safe to execute against a running cluster.
xxHash is significantly faster than SHA-1 so this helps reduce the overhead of the feature.
A variable number of bytes are used from the xxHash depending on the block size with a minimum of six bytes for the smallest block size. This keeps the maps smaller while still providing enough bits to detect block changes.
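A minimal sketch of the truncation, assuming the xxHash library's XXH3 128-bit variant (the variant and sizing logic in the actual code may differ):

    #include <string.h>
    #include <xxhash.h>

    static void
    blockChecksum(const void *block, size_t blockSize, unsigned char *checksum, size_t checksumSize)
    {
        /* Canonical form has a fixed byte order so the map is portable across platforms */
        const XXH128_hash_t hash = XXH3_128bits(block, blockSize);
        XXH128_canonical_t canonical;

        XXH128_canonicalFromHash(&canonical, hash);

        /* Keep only the leading bytes of the hash (checksumSize is assumed to be <= 16) */
        memcpy(checksum, canonical.digest, checksumSize);
    }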
Small block sizes can lead to reduced compression efficiency, so allow multiple blocks to be compressed together in a super block. The disadvantage is that the super block must be read sequentially to retrieve blocks. However, different super block sizes can be used for different backup types, so the full backup super block sizes are large for compression efficiency and diff/incr sizes are smaller for retrieval efficiency.
The primary goal of the block incremental backup is to save space in the repository by only storing changed parts of a file rather than the entire file. This implementation is focused on restore performance more than saving space in the repository, though there may be substantial savings depending on the workload.
The repo-block option enables the feature (when repo-bundle is already enabled). The block size is determined based on the file size and age. Very old or very small files will not use block incremental.
The callbacks in iniLoad() made the downstream code more complicated than it needed to be, so use an iterator model instead.
Combine the two functions that were used to load the ini data to remove code duplication. In theory it would be nice to use iniValueNext() in the config/parse module rather than loading a KeyValue store but this would mean a big change to the parser, which does not seem worthwhile at this time.
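A sketch of the iterator model (the types and iniValueNext() signature shown here are assumed for illustration):

    #include <stdbool.h>

    typedef struct Ini Ini;

    typedef struct IniValue
    {
        const char *section;
        const char *key;
        const char *value;
    } IniValue;

    extern bool iniValueNext(Ini *ini, IniValue *value);

    static void
    iniProcess(Ini *ini)
    {
        IniValue value;

        /* Downstream code is a simple pull loop rather than a callback with shared state */
        while (iniValueNext(ini, &value))
        {
            /* handle value.section / value.key / value.value */
        }
    }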
It is possible for functions to accidentally leak child contexts into the calling context, which may use a lot of memory depending on the use case and where it happens.
Use the function return type to determine what should be returned and error when something else is returned. Add FUNCTION_AUDIT_*() macros to handle exceptions.
This checking is only performed during unit tests on the code being covered by the specific unit test.
Note that this does not work yet for memory allocations, i.e. memNew(). These are pretty rare so are not as much of an issue and they can be added in the future.
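A hypothetical illustration of the kind of leak the audit catches (the declarations below are invented for the example): a function declared to return a String also leaves a List allocated in the caller's memory context, so the audit errors because what is returned to the caller does not match the declared return type:

    typedef struct String String;
    typedef struct List List;

    extern String *strNewZ(const char *string);
    extern List *lstScratchNew(void);

    static String *
    exampleBuild(void)
    {
        /* This allocation lands in the caller's context but is never returned or freed */
        List *scratch = lstScratchNew();
        (void)scratch;

        return strNewZ("result");
    }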
Allocating memory made these functions simpler but it meant that memory was leaking into the calling context when logging was enabled. It is not clear that this was an issue, but it seems that trace-level logging could result in a lot of memory usage depending on the use case.
This also makes it possible to audit allocations returned to the calling context, which will be done in a followup commit.
Also rename objToLog() to objNameToLog() since it seemed logical to name the new function objToLog().
This fills in backtrace info at the bottom of the call stack when the stack trace is incomplete due to testing. This does not affect release builds, which is why it did not make the first cut, but it turns out to be useful for testing and barely changes the release code (when we do release this).
The recursion test in common/error was simplified because it would now return a very large trace.
When this code was migrated to C the unit tests were not included because there were more important priorities at the time.
This also requires some adjustments to coverage because of the new code location.
Similar to b9be4fa5, these functions are not used by the core code so move them to the build module. The new implementation is a little less efficient but that is much less of a worry in the build/test code.
Also remove regExpMatchSize() since it was no longer needed.
Neither of these functions was used by the core code. strReplace() is only used in the tests but it doesn't hurt to put it in build since the build code is not distributed.
This was done by checking the extension but it is possible to include a module that does not have a vendor or auto extension. Instead make it explicit that the module is included in another module.
Also change the variable from "include" to "included" to make it clearer what it indicates.
This allows test backups to be run in other test modules.
It is likely that more logic will be moved here but for now this suffices to get test backups working in the restore module.
The prior code required coverage in the storage/remote module for all filters that could be used remotely.
Now the filter handlers are set at runtime so any filter list can be used with a remote. This is more flexible and makes coverage testing easier. It also resolves a test dependency.
Move the command/remote unit test near the end so it will have access to all filters without using depends.