Coverity complained that this pass by value was inefficient:
CID 376402: Performance inefficiencies (PASS_BY_VALUE)
Passing parameter file of type "ManifestFile" (size 136 bytes) by value.
This was completely intentional since it gives us a copy of the struct that we can change without bothering the caller. However, updating fields is fine and may benefit the caller at some future date, and in any case does no harm now.
And as usual it is easier not to fight with Coverity.
As much as possible it is better to get coverage with more realistic tests. Merging these modules will allow the page checksum code to be covered with real backups.
Limit which files can be added to bundles, which allows resume to work reasonably well. On resume, the bundles are removed and any remaining file is eligible to be resumed.
Also reduce the bundle-size default to 20MiB. This is pretty arbitrary, but a smaller default seems better.
Bundle (combine) smaller files during backup to reduce the number of files written to the repository (enable with --bundle). Reducing the number of files is a benefit on all file systems, but especially so on object stores such as S3 that have a high file creation cost. Another benefit is that zero-length files are only stored as metadata in the manifest.
Files are batched up to bundle-size and then compressed/encrypted individually and stored sequentially in the bundle. The bundle id and offset of each file is stored in the manifest so files can be retrieved randomly without needing to read the entire bundle. Files are ordered by timestamp descending when being assigned to bundles to reduce the amount of random access that needs to be done. The idea is that bundles with older files can be read in their entirety on restore and only bundles with newer files will get fragmented.
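As a rough illustration of random retrieval, a minimal POSIX sketch might look like the following (the struct and function names are hypothetical, not the actual pgBackRest code):

#include <stdint.h>
#include <unistd.h>

// Hypothetical manifest entry for a bundled file: which bundle it lives in,
// where it starts, and how many stored (compressed/encrypted) bytes it uses.
typedef struct
{
    uint64_t bundleId;     // bundle the file was written to
    uint64_t bundleOffset; // offset of the file within the bundle
    uint64_t sizeRepo;     // stored size of the file in the repository
} ManifestFileBundle;

// Read a single file out of an open bundle without reading the entire bundle.
// The caller still needs to decrypt/decompress the result. (A real
// implementation would loop on short reads.)
static ssize_t
bundleFileRead(int bundleFd, const ManifestFileBundle *file, uint8_t *buffer)
{
    if (lseek(bundleFd, (off_t)file->bundleOffset, SEEK_SET) == -1)
        return -1;

    return read(bundleFd, buffer, (size_t)file->sizeRepo);
}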
Bundles are a custom format with metadata stored in the manifest. Tar was considered but it is too limited a format, the major issue being that the size of the file must be known in advance and that is very contrary to how pgBackRest works, especially once we introduce page-level incremental backups.
Bundles are stored numbered in the bundle directory. Some files may still end up in pg_data if they are added after the backup is complete. backup_label is an example.
Currently, only the backup command works in batches. The restore and verify commands use the offsets to pull individual files out of the bundle. It seems better to finalize how this is going to work before optimizing the other commands. Even as is, this is a major step forward, and all commands function with bundling.
One caveat: resume is currently not supported when bundle is enabled.
In theory, the additional stat() call after a file has been copied to the repo can determine if additional compression has been applied by the file system. However, it has been a very long time since we tested this in practice. There are currently no unit tests that accurately test this feature since it requires a compressed file system like ZFS to work, which never seemed worth the extra cost.
It can also add a lot of time to backups if there is a large quantity of small files.
In addition, it stands as a blocker for combining files for small file support since it is no longer possible to get per-file sizes from the viewpoint of the file system. There are several ways this could be reworked but none of them are easy while at the same time maintaining current info functionality.
It doesn't seem worth keeping an untested feature that will only work in some special cases (if it still works) when it is blocking development.
Previously manifest load required two passes through the file list, one to load the data and one to set the defaults. This required each file to be packed twice.
Instead simply note that the file value is default and then set the file defaults when they are loaded from the manifest. This is made possible by the different internal/external representations for files so the same method cannot be applied to paths and links.
This change seems to resolve the performance issues noted in 61ce586 but there is no obvious reason why.
Manifests with a very large number of files can use a considerable amount of memory. There are a lot of zeroes in the data so it can be stored more efficiently by using base-128 varint encoding for the integers and storing the strings in the same allocation.
The downside is that the data needs to be unpacked in order to be used, but in most cases this seems fast enough (about 10% slower than before) except for saving the manifest, which is 10% slower up to 10 million files and then gets about 5x slower by 100 million (two minutes on my M1 Mac). Profiling does not show this slowdown so I wonder if this is related to the change in memory layout. Curiously, the function that increased most was jsonFromStrInternal(), which was not modified. That gives more weight to the idea that there is some kind of memory issue going on here, and one hopes that servers would be less affected. Either way, the largest use cases we have seen are about 6 million files, so if we can improve that case I believe we will be better off.
Further analysis showed that most of the time was taken up writing the size and timestamp fields, which makes almost no sense. The same amount of time was used if they were hard-coded to 0, which points to some odd memory issue on the M1 architecture.
This change has been planned for a while, but the particular impetus at this time is that small file support requires additional fields that would increase manifest memory usage by about 20%, even if the feature is not used.
Note that the Pack code has been updated to use the new varint encoder, but the decoder remains separate because it needs to fetch one byte at a time.
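For reference, a base-128 varint stores seven bits of the value per byte, using the high bit as a continuation flag, so the many zeroes in the manifest each take a single byte. A minimal encoder sketch (hypothetical name, not the actual Pack code):

#include <stddef.h>
#include <stdint.h>

// Encode an unsigned 64-bit integer as a base-128 varint, low-order seven
// bits first, with the high bit set on every byte except the last.
// Returns the number of bytes written (1-10); buffer must hold 10 bytes.
static size_t
varIntEncode64(uint64_t value, uint8_t *buffer)
{
    size_t size = 0;

    while (value >= 0x80)
    {
        buffer[size++] = (uint8_t)(value | 0x80);
        value >>= 7;
    }

    buffer[size++] = (uint8_t)value;

    return size;
}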
This flag was only being used by the backup command after manifestNewBuild() and had no other uses. There was a time when it was important for integration testing but the unit tests now fulfill this role.
Since backup is the only code concerned with the primary flag, move the code into the backup module.
We don't have any cross-version testing but this change was tested manually with the most recent version of pgBackRest to make sure it was tolerant of the missing primary info. When an older version of pgBackRest loads a newer manifest the primary flag will always be set to false, which is fine since it is not used.
There is no evidence that users need 8.3/8.4 anymore but it does cost us in terms of development and testing, especially now that we have a number of new backup/restore features planned.
It seems to make sense to remove this support now. If there are users who need to use/migrate from these versions they can use an older version of pgBackRest.
Fail the backup if a cluster stops or the standby is promoted. Previously, shutting down the primary would cause an error but it was not detected until the end of the backup. Now the error will happen sooner and a promotion on the standby will also cause an error.
Add the following checks:
* Checkpoint is updated in pg_control after pg_start_backup(). This helps ensure that PostgreSQL and pgBackRest have a consistent view of the storage and that PGDATA paths match.
* Timeline of backup start WAL file matches pg_control. Hard to see how this one could get hit, but we have the power...
* Standby is on the same timeline as the primary. If not, this standby is not following the primary.
* Last standby checkpoint is not greater than the backup checkpoint. If so, this standby is not following the primary.
This also requires some additional plumbing to read/write timeline/checkpoint from pg_control and parse timelines from WAL filenames. There were some changes in the backup tests caused by the fact that pg_control now has different contents for each backup.
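For reference, a WAL segment name is 24 hex digits with the timeline in the leading eight, so the parse amounts to something like this (a minimal sketch, not the actual code):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// A WAL segment name is 24 hex digits: 8 timeline, 8 log id, 8 segment id,
// e.g. 000000020000000A0000003F is on timeline 2.
static uint32_t
walSegmentTimeline(const char *walSegment)
{
    char timelineZ[9] = {0};
    memcpy(timelineZ, walSegment, 8);

    return (uint32_t)strtoul(timelineZ, NULL, 16);
}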
The check to ensure that the required checkpoint was reached on the standby should also be updated to use pg_control (it currently uses pg_control_checkpoint()), but that requires non-trivial changes to the test harness and will need to wait.
These macros simplify management of pg_control test files.
Centralize time updates for pg_control in the command/backup module. This caused some timestamp changes in the test logs.
Finally, move the postgres module after the storage module so it can use storage macros.
hrnPgControlToBuffer() and hrnPgWalToBuffer() now generate the system id based on the version of Postgres. If a value less than 100 is specified for systemId then it will be added to the default system id so there can be multiple ids for a single version of PostgreSQL.
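A minimal sketch of that rule (the helper name and per-version default scheme here are assumptions, not the actual harness code):

#include <stdint.h>

// Values under 100 are treated as offsets from the version's default system
// id so a single version can have multiple distinct ids.
static uint64_t
hrnPgSystemId(unsigned int pgVersion, uint64_t systemIdParam)
{
    // Assumed default: a fixed base plus the version number (e.g. 90400)
    const uint64_t systemIdDefault = 10000000000000000000ULL + pgVersion;

    return systemIdParam < 100 ? systemIdDefault + systemIdParam : systemIdParam;
}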
Add constants to represent version system ids in tests. These will eventually be auto-generated.
This changes some checksums and we no longer have big-endian test systems, so X out those checksums to make it obvious they are no longer valid.
Eliminate summing and passing of copied file sizes for logging the backup size.
Instead, utilize infoBackupDataByLabel() to pull the backup size for the log message.
Previously the archive was only checked at the end of the backup to ensure all WAL required to make the backup consistent was present. The problem was that if archiving was not functioning then the backup had to complete before the user found out, which could be a while if the database was large enough.
Add an archive check immediately after backup start so failures are reported earlier.
The trick is to determine which WAL to check. If the repo is new there may not be any WAL in it and pg_start_backup() will not switch the WAL segment if it is empty. These are both likely scenarios when setting up and/or testing pgBackRest.
If the WAL segment is switched by pg_start_backup(), then check the archive for the segment that was detected prior to backup start. This should be common on normal running clusters with regular activity. Note that this might not be the segment immediately prior to the backup start segment if WAL volume is high.
If pg_start_backup() did not switch the WAL then we can force a switch on PostgreSQL >= 9.3 by creating a restore point. In that case the WAL to check will be the backup start WAL. This is most likely to happen on idle systems, during testing, or immediately after a repo switch.
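Roughly, the selection logic is as follows (a hedged sketch with illustrative names, not the actual backup code):

#include <stdbool.h>
#include <stddef.h>

// Select the WAL segment to verify in the archive right after backup start.
static const char *
backupArchiveCheckSegment(
    bool walSegmentSwitched, const char *walSegmentPrior, const char *walSegmentStart, unsigned int pgVersion)
{
    // pg_start_backup() switched the segment, so check that the segment seen
    // before backup start makes it to the archive.
    if (walSegmentSwitched)
        return walSegmentPrior;

    // No switch happened (e.g. a new or idle cluster). On PostgreSQL >= 9.3 a
    // restore point forces WAL to be written so the start segment can be
    // switched and checked.
    if (pgVersion >= 90300)
        return walSegmentStart;

    // Otherwise there is nothing reliable to check this early.
    return NULL;
}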
Another advantage of this approach, besides earlier notification, is that the backup directory will not be created, so no resume will be attempted on the next backup.
Note that some additional churn was created in backup.c because the load of archive.info needs to be done earlier.
Properly log the size of files copied during the backup, matching the backup size returned from the info command.
In the reference issue, the incremental backup after switchover logs the size of all files evaluated rather than only the size of the files copied in the backup.
The strIdFrom*() functions forced the caller to pick an encoding, which led to a number of TRY...CATCH blocks in the code. In practice the caller does not care which encoding is used as long as the string is valid for some encoding.
Update the strIdFrom*() functions to try all possible encodings and only throw an error when the string is not valid for any of them.
Currently empty CATCH() blocks are always marked as covered because of the loop structure of error handling.
A prototype implementation of error handling without looping has shown that these CATCH() blocks are not covered without new tests. Whether or not that prototype gets committed it is worth adding the tests.
The pack is both more compact and more efficient than a variant.
Also aggregate the page error info in the main process rather than in the filter to allow additional LSN filtering, to be added in a future commit.
Rather than working directly with Buffer types, define a new Pack pseudo-type that represents a Buffer containing a pack. This makes it clearer that a pack is being stored and allows stronger typing.
The Pack type is more compact and flexible than the Variant type. The Pack type also allows binary data to be stored, which is useful for transferring the passphrase in the CipherBlock filter.
The primary purpose is to allow more (and more complex) result data to be returned efficiently from the PageChecksum filter. For now the PageChecksum filter still returns the original Variant. Converting the result data will be the subject of a future commit.
Also convert filter types to StringId.
The primary benefit is that objects can allocate memory for their struct with the context, which saves an additional allocation and makes it easier to read context/allocation dumps. Also, the memory context does not need to be stored with the object since it can be determined using the object pointer.
Object pointers cannot be moved, so this means whatever additional memory is allocated cannot be resized. That makes the additional memory ideal for object structs, but not so much for allocating a list that might change size.
Mem contexts can no longer be reused since they will probably be the wrong size so their memory is freed on memContextFree(). This still means fewer allocations and frees overall.
Interfaces still need to be freed by mem context so the old objMove() and objFree() have been preserved as objMoveContext() and objFreeContext(). This will be addressed in a future commit.
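Conceptually, placing the object struct right after the context bookkeeping in a single allocation means the context can be recovered from the object pointer at a fixed offset. A simplified sketch (not the actual MemContext implementation; error handling omitted):

#include <stdlib.h>

// Simplified context bookkeeping; the real structure is more involved.
typedef struct MemContext
{
    size_t size; // size of the extra allocation that follows
} MemContext;

// One allocation holds the context header followed by the object struct.
static void *
objNew(size_t objectSize)
{
    MemContext *context = malloc(sizeof(MemContext) + objectSize);
    context->size = objectSize;

    return (unsigned char *)context + sizeof(MemContext);
}

// The context need not be stored in the object -- it sits at a fixed
// negative offset from the object pointer.
static MemContext *
objMemContext(void *object)
{
    return (MemContext *)((unsigned char *)object - sizeof(MemContext));
}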
The backup size was a bit off because it did not include any files (e.g. backup_label, WAL files) that were added to the manifest after the main copy. To fix this move the log message to the very end of the backup.
Add size/file total log message to restore since it did not exist before.
Includes backup and backupCommon tests.
Some tests in backupTest were split out where they were originally combined into a single boolean check, which made it difficult to determine which part of the conditional failed.
String values were also removed where they were no longer needed.
The log level for copied files in the backup/restore commands has been changed to detail. This makes the info log level less noisy but if these messages are required then set the log level for the backup/restore commands to detail.
Switch from JSON-based to binary protocol for communicating with local and remote processes. The pack type is used to implement the binary protocol.
There are a number of advantages:
* The pack type is more compact than JSON and is more efficient to render/parse.
* Packs are more strictly typed than JSON.
* Each protocol message is written entirely within ProtocolServer/ProtocolClient so is less likely to get interrupted by an error and leave the protocol in a bad state.
* There is no limit on message size. Previously this was limited by the buffer size unless a custom implementation was written, as was done for reading/writing files.
Some cruft from the Perl days was removed, specifically allowing NULL messages and stack traces. These are no longer possible in C.
There is room for improvement here, in particular locking down the allowed sequence of protocol messages and building a state machine to enforce it. This will be useful for resetting the protocol when it gets in a bad state.
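As an illustration of keeping each message write in one place (a hypothetical framing sketch reusing the varIntEncode64() example above, not the actual ProtocolClient code):

#include <stdint.h>
#include <unistd.h>

// Write a complete message -- command id, parameter size, parameters -- in a
// single function so an error cannot leave a partial message on the wire.
// (Error handling and short-write loops omitted.)
static void
clientCommandPut(int fd, uint64_t commandId, const uint8_t *paramPack, size_t paramSize)
{
    uint8_t header[20];
    size_t headerSize = varIntEncode64(commandId, header);

    headerSize += varIntEncode64(paramSize, header + headerSize);

    write(fd, header, headerSize);
    write(fd, paramPack, paramSize);
}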
This allows TEST_STORAGE_EXISTS() to be used in most cases where TEST_STORAGE_REMOVE() was used before.
Rename TEST_STORAGE_REMOVE() to HRN_STORAGE_REMOVE() now that it is no longer used as a test. Still allow an error when the file is missing just to help keep tests tidy.
HRN_CFG_LOAD() handles the majority of test configuration loads and has various options for special cases.
It was not clear when to use harnessCfgLoadRaw() vs harnessCfgLoad(). Now "raw" functionality is granular and enabled by parameters, e.g. noStd.
A define was already added for TEST_PATH but it was not widely used. Replace all occurrences of testPath() with TEST_PATH in the tests.
Replace testUser() with TEST_USER, testGroup() with TEST_GROUP, testRepoPath() with HRN_PATH_REPO, testDataPath() with HRN_PATH, testProjectExe() with TEST_PROJECT_EXE, and testScale() with TEST_SCALE.
Replace {[path]}, {[user]}, {[group]}, etc. with defines and remove hrnReplaceKey(). This is better than having two ways to deal with replacements.
In some cases the original test*() getters were kept because they are used by the harness, which does not have access to the new defines. Move them to harnessTest.intern.h to indicate that the tests should no longer use them.
Replace all instances of strNew("") with strNew() and use strNewZ() for non-empty zero-terminated strings. Besides saving a useless parameter, this will allow smarter memory allocation in a future commit by signaling intent, in general, to append or not.
In the tests use STRDEF() or VARSTRDEF() where more appropriate rather than blindly replacing with strNewZ(). Also replace strLstAdd() with strLstAddZ() where appropriate for the same reason.
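For example, assuming list is an existing StringList, the intent now reads directly from the call:

String *empty = strNew();                  // empty string, likely to be appended to
String *name = strNewZ("backup.manifest"); // copy of a non-empty zero-terminated string
const String *label = STRDEF("full");      // constant string, no allocation needed
strLstAddZ(list, "20220101-000000F");      // add a zero-terminated string to the list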
Run the local process inside a forked child process instead of exec'ing it. This allows coverage to accumulate in the local process rather than needing to test the local protocol functions directly, resulting in better end-to-end testing and less test duplication. Another advantage is that the pgbackrest binary does not need to be built for the test.
The backup, restore, and verify command tests have been updated to use the new shim for coverage.
Some version interface test functions were integrated into the core code because they relied on the PostgreSQL versioned interface. Even though they were compiled out for production builds they cluttered the core code and made it harder to determine what was required by core.
Create a PostgreSQL version interface in a test harness to contain these functions. This does require some duplication but the cleaner core code seems a good tradeoff. It is possible for some of this code to be auto-generated but since it is only updated once per year the matter is not pressing.
Allows removal of backupType()/backupTypeStr() and improves debug logging of the enum.
Move BackupType enum and string constants to info/infoBackup.h so they are available to more modules. Also convert InfoBackup to use BackupType instead of a String.
There is no need to write the file atomically (e.g. via a temp file on Posix) because checksums are tested on resume after a failed backup. The path does not need to be synced for each file because all paths are synced at the end of the backup.
This functionality was not lost during the migration -- it never existed in the Perl code, though these settings are used in restore. See 59f1353 where backupFile() was migrated to C.
It is often useful to represent identifiers as strings when they cannot easily be represented as an enum/integer, e.g. because they are distributed among a number of unrelated modules or need to be passed to remote processes. Strings are also more helpful in debugging since they can be recognized without cross-referencing the source. However, strings are awkward to work with in C since they cannot be directly used in switch statements leading to less efficient if-else structures.
A StringId encodes a short string into an integer so it can be used in switch statements but may also be readily converted back into a string for debugging purposes. StringIds may also be suitable for matching user input provided the strings are short enough.
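As a rough illustration of the idea, a simplified five-bit scheme (not the exact pgBackRest encoding) packs up to 12 lowercase characters into 64 bits and leaves the low bits free to tag which encoding was used:

#include <stdint.h>

// Pack up to 12 lowercase characters (a = 1 ... z = 26) into a 64-bit
// integer, five bits per character, low four bits reserved as an encoding tag.
static uint64_t
strIdFrom5Z(const char *str)
{
    uint64_t result = 0;

    for (unsigned int idx = 0; idx < 12 && str[idx] != '\0'; idx++)
        result |= (uint64_t)(str[idx] - 'a' + 1) << (4 + idx * 5);

    return result | 0x1; // hypothetical tag for the 5-bit encoding
}

Decoding reverses the shifts, so the id can be rendered back to a readable string for logging without a lookup table.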
This patch includes a sample of StringId usage by converting protocol commands to StringIds. There are many other possible use cases. To list a few:
* All "types" in storage, filters. IO , etc. These types are primarily for identification and debugging so they fit well with this model.
* MemContext names would work well as StringIds since these are entirely for debugging.
* Option values could be represented as StringIds which would mean we could remove the functions that convert strings to enums, e.g. CipherType.
* There are a number of places where enums need to be converted back to strings for logging/debugging purposes. An example is protocolParallelJobToConstZ. If ProtocolParallelJobState were defined as:
typedef enum
{
    protocolParallelJobStatePending = STRID5("pend", ...),
    protocolParallelJobStateRunning = STRID5("run", ...),
    protocolParallelJobStateDone = STRID5("done", ...),
} ProtocolParallelJobState;
then protocolParallelJobToConstZ() could be replaced with strIdToZ(). This also applies to many enums that we don't convert to strings for logging, such as CipherMode.
As an example of usage, convert all protocol commands from strings to StringIds.