These are similar to what mktime() and strptime() do, but they ignore the local system timezone, which saves having to munge the TZ environment variable to do time conversions.
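A sketch of the idea using the well-known days-from-civil algorithm (the function name is hypothetical, not the actual implementation):

    #include <stdint.h>

    // Convert date/time parts to Unix epoch without consulting the local timezone,
    // i.e. a portable timegm() that needs no TZ manipulation
    static int64_t
    epochFromParts(int year, int month, int day, int hour, int minute, int second)
    {
        year -= month <= 2;
        const int era = (year >= 0 ? year : year - 399) / 400;
        const unsigned int yearOfEra = (unsigned int)(year - era * 400);
        const unsigned int dayOfYear =
            (153 * (unsigned int)(month + (month > 2 ? -3 : 9)) + 2) / 5 + (unsigned int)day - 1;
        const unsigned int dayOfEra = yearOfEra * 365 + yearOfEra / 4 - yearOfEra / 100 + dayOfYear;

        // 719468 shifts the epoch from 0000-03-01 to 1970-01-01
        const int64_t days = (int64_t)era * 146097 + (int64_t)dayOfEra - 719468;

        return days * 86400 + hour * 3600 + minute * 60 + second;
    }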
Specifies the database user name when connecting to PostgreSQL.
If not specified pgBackRest will connect with the local OS user or PGUSER, which was the previous behavior.
\ was not being properly escaped when calculating the manifest checksum which prevented the manifest from loading.
Use jsonFromStr() to properly quote and escape \.
Since instances of \ in cluster filenames should be rare to nonexistent this does not seem likely to be a serious problem in the field.
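A minimal sketch of the quoting rule, assuming the String type and a strCatChr() helper (a fully compliant encoder must also escape control characters):

    // Render a string as a JSON value, escaping characters that are special in JSON
    static void
    jsonAppendEscaped(String *json, const char *value)
    {
        strCatChr(json, '"');

        for (const char *c = value; *c != '\0'; c++)
        {
            if (*c == '\\' || *c == '"')
                strCatChr(json, '\\');              // prefix special characters with a backslash

            strCatChr(json, *c);
        }

        strCatChr(json, '"');
    }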
Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.
Remove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.
Remove "c" option that allowed the remote to tell if it was being called from C or Perl.
Code to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.
Update build and installation instructions in the user guide.
Remove all Perl unit tests.
Remove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.
Rename perlReq to binReq in define.yaml to indicate that the binary is required for a test. This had been the actual meaning for quite some time but the key was never renamed.
For the most part this is a direct migration of the Perl code into C except as noted below.
A backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.
The logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. This has never been an issue in the field that we know of, but we found it in testing.
For online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.
Archive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.
Resume will now work even if hardlink settings have been changed.
Reviewed by Cynthia Shang.
Bug Fixes:
* Fix archive-push/archive-get when PGDATA is symlinked. These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call. (Reported by Stephen Frost, Milosz Suchy.)
* Fix reference list when backup.info is reconstructed in expire command. Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual. This unlikely combination of events means this is probably not a problem in the field.
* Fix segfault on unexpected EOF in gzip decompression. (Reported by Stephen Frost.)
Commit 7168e074 tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked.
If cwd() does not match the pgBackRest path then chdir() to the path and make sure the next cwd() matches the result from the first call.
If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.
Change the existing assertion to an error to catch this condition in production gracefully.
Adding a manifest to backup.info was migrated to C in 4e4d1f41 but deduplication of the references was missed leading to a reference for every file being added to backup.info.
Since the backup command is still using the Perl version of reconstruct this issue will not express unless 1) there is a backup missing from backup.info and 2) the expire command is run directly instead of running after backup as usual.
This unlikely combination of events means this is probably not a problem in the field.
Bug Fixes:
* Fix remote timeout in delta restore. When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout. (Reported by James Sewell, Jens Wilke.)
* Fix handling of repeated HTTP headers. When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior. (Reported by donicrosby.)
Improvements:
* JSON output from the info command is no longer pretty-printed. Monitoring systems can more easily ingest the JSON without linefeeds. External tools such as jq can be used to pretty-print if desired. (Contributed by Cynthia Shang.)
* The check command is implemented entirely in C. (Contributed by Cynthia Shang.)
Documentation Improvements:
* Document how to contribute to pgBackRest. (Contributed by Cynthia Shang.)
* Document maximum version for auto-stop option. (Contributed by Brad Nicholson.)
Test Suite Improvements:
* Fix container test path being used when --vm=none. (Suggested by Stephen Frost.)
* Fix mismatched timezone in expect test. (Suggested by Stephen Frost.)
* Don't autogenerate embedded libc code by default. (Suggested by Stephen Frost.)
When HTTP headers are repeated they should be considered equivalent to a single comma-separated header rather than generating an error, which was the prior behavior.
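A sketch of the merge, assuming simple get/put helpers on the header store (helper names and usage hypothetical):

    // Merge a newly read header into the store; repeated keys append with ", "
    static void
    httpHeaderMerge(HttpHeader *header, const String *key, const String *value)
    {
        const String *existing = httpHeaderGet(header, key);

        if (existing != NULL)
            value = strNewFmt("%s, %s", strPtr(existing), strPtr(value));

        httpHeaderPut(header, key, value);
    }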
Reported by donicrosby.
This is only needed when new code is added to the Perl C library, which is becoming rare as the migration progresses.
Also, the code will vary slightly based on the Perl version used for generation so for normal users it is just noise.
Suggested by Stephen Frost.
When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout.
Add keep-alives to prevent remote timeout.
Reported by James Sewell, Jens Wilke.
Note that building the manifest on each host has been temporarily removed.
This feature will likely be brought back as a non-default option (after the manifest code has been fully migrated to C) since it can be fairly expensive.
Features:
* PostgreSQL 12 support.
* Add info command set option for detailed text output. The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations. (Contributed by Cynthia Shang. Suggested by Stephen Frost, ejberdecia.)
* Add standby restore type. This restore type automatically adds standby_mode=on to recovery.conf for PostgreSQL < 12 and creates standby.signal for PostgreSQL ≥ 12, creating a common interface between PostgreSQL versions. (Reviewed by Cynthia Shang.)
Improvements:
* The restore command is implemented entirely in C. (Reviewed by Cynthia Shang.)
Documentation Improvements:
* Document the relationship between db-timeout and protocol-timeout. (Contributed by Cynthia Shang. Suggested by James Chanco Jr.)
* Add documentation clarifications regarding standby repositories. (Contributed by Cynthia Shang.)
* Add FAQ for time-based Point-in-Time Recovery. (Contributed by Cynthia Shang.)
Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.
A comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.
recovery.signal and standby.signal are automatically created based on the recovery settings.
The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.
This information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release.
This restore type automatically adds standby_mode=on to recovery.conf.
This could be accomplished previously by setting --recovery-option=standby_mode=on but PostgreSQL 12 requires standby mode to be enabled by a special file named standby.signal.
The new restore type allows us to maintain a common interface between PostgreSQL versions.
For the most part this is a direct migration of the Perl code into C.
There is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.
The C code works like this:
If a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. In that case the file ownership will need to be updated by a privileged user before the restore can be retried.
If a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name.
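A sketch of the root-restore fallback (function and parameter names hypothetical):

    #include <pwd.h>
    #include <sys/types.h>

    // Map the user name recorded in the manifest to a local uid; group mapping is analogous
    static uid_t
    restoreUserId(const char *manifestUser, bool pgUserValid, uid_t pgUserId)
    {
        const struct passwd *pw = getpwnam(manifestUser);

        if (pw != NULL)
            return pw->pw_uid;                      // manifest user exists on this host

        if (pgUserValid)
            return pgUserId;                        // fall back to the data directory owner

        return 0;                                   // finally fall back to root
    }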
Reviewed by Cynthia Shang.
The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes,
timestamps, etc. A list of databases is also included for selective restore.
The purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that
nothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.
For now, migrate enough functionality to implement the restore command.
Reviewed by Cynthia Shang.
Info files required three copies in memory to be loaded (the original string, an ini representation, and the final info object). Not only was this memory-inefficient, but the Ini object does sequential scans when searching for keys, making large files very slow to load.
This has not been an issue since archive.info and backup.info are very small, but it becomes a big deal when loading manifests with hundreds of thousands of files.
Instead of holding copies of the data in memory, use a callback to deliver the ini data directly to the object when loading. Use a similar method for save to avoid having an intermediate copy. Save is a bit complex because sections/keys must be written in alpha order or older versions of pgBackRest will not calculate the correct checksum.
Also move the load retry logic to helper functions rather than embedding it in the Info object. This allows for more flexibility in loading and ensures that stack traces will be available when developing unit tests.
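The callback shape, sketched (signature hypothetical):

    // Deliver each section/key/value directly to the caller instead of building an
    // intermediate Ini object, so only one copy of the data is held in memory
    typedef void (*IniLoadCallback)(
        void *data, const String *section, const String *key, const String *value);

    void iniLoad(IoRead *read, IniLoadCallback callback, void *callbackData);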
Reviewed by Cynthia Shang.
Bug Fixes:
* Improve slow manifest build for very large quantities of tables/segments. (Reported by Jens Wilke.)
* Fix exclusions for special files. (Reported by CluelessTechnologist, Janis Puris, Rachid Broum.)
Improvements:
* The stanza-create/update/delete commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* The start/stop commands are implemented entirely in C. (Contributed by Cynthia Shang.)
* Create log directories/files with 0750/0640 mode. (Suggested by Damiano Albani.)
Documentation Bug Fixes:
* Fix yum.p.o package being installed when custom package specified. (Reported by Joe Ayers, John Harvey.)
Documentation Improvements:
* Build pgBackRest as an unprivileged user. (Suggested by Laurenz Albe.)
The {[os-type-is-centos]} expression was missing parens which meant "and" expressions built on it would always evaluate true if the os-type was centos6.
Reported by Joe Ayers, John Harvey.
Prior to 2.16 the Perl manifest code would skip any file that began with a dot. This was not intentional but it allowed PostgreSQL socket files to be located in the data directory. The new C code in 2.16 did not have this unintentional exclusion so socket files in the data directory caused errors.
Worse, the file type error was being thrown before the exclusion check so there was really no way around the issue except to move the socket files out of the data directory.
Special file types (e.g. socket, pipe) will now be automatically skipped and a warning logged to notify the user of the exclusion. The warning can be suppressed with an explicit --exclude.
Reported by CluelessTechnologist, Janis Puris, Rachid Broum.
Putting the checksum at the beginning of the file made it impossible to stream the file out when saving. The entire file had to be held in memory while it was checksummed so the checksum could be written at the beginning.
Instead place the checksum at the end. This does not break the existing Perl or C code since the read is not order dependent.
There are no plans to improve the Perl code to take advantage of this change, but it will make the C implementation more efficient.
Reviewed by Cynthia Shang.
pgBackRest was being built by root in the documentation which is definitely not best practice.
Instead build as the unprivileged default container user. Sudo privileges are still required to install.
Suggested by Laurenz Albe.
storagePosixInfoList() processed each directory in a single memory context. If the directory contained hundreds of thousands of files processing became very slow due to the number of allocations.
Instead, reset the memory context every thousand files to minimize the number of allocations active at once, improving both speed and memory consumption.
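The pattern, sketched with macro names modeled on the memory context API (treat them as assumptions):

    MEM_CONTEXT_TEMP_RESET_BEGIN()
    {
        for (unsigned int fileIdx = 0; fileIdx < fileTotal; fileIdx++)
        {
            // ... stat the file and copy the results up to the parent context ...

            // Free everything allocated in the temp context every 1000 files
            MEM_CONTEXT_TEMP_RESET(1000);
        }
    }
    MEM_CONTEXT_TEMP_END();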
Reported by Jens Wilke.
The log directories/files were being created with a mix of modes depending on whether they were created in C or Perl. In particular, the C code was creating log files with the execute bit set for the user and group which was just odd.
Standardize on 0750/0640 for both code paths.
Suggested by Damiano Albani.
The Perl versions remain because they are still being used by the Perl stanza commands. Once the stanza commands are migrated they can be removed.
Contributed by Cynthia Shang.
Bug Fixes:
* Retry S3 RequestTimeTooSkewed errors instead of immediately terminating. (Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.)
* Fix incorrect handling of transfer-encoding response to HEAD request. (Reported by Pavel Suderevsky.)
* Fix scoping violations exposed by optimizations in gcc 9. (Reported by Christian Lange, Ned T. Crigler.)
Features:
* Add repo-s3-port option for setting a non-standard S3 service port.
Improvements:
* The local command for backup is implemented entirely in C. (Contributed by David Steele, Cynthia Shang.)
* The check command is implemented partly in C. (Reviewed by Cynthia Shang.)
Implement switch WAL and archive check in C but leave the rest in Perl for now.
The main idea was to have some real integration tests for the new database code so the rest of the migration can wait.
Reviewed by Cynthia Shang.
Migrate functionality from the Perl Db module to C. For now this is just enough to implement the WAL switch check.
Add the dbGet() helper function to get Db objects easily.
Create macros in harnessPq to make writing pq scripts easier by grouping commonly used functions together.
Reviewed by Cynthia Shang.
The cause of this error seems to be that a failed request takes so long that a subsequent retry at the http level uses outdated headers.
We're not sure if pgBackRest is to blame here (in one case a kernel downgrade fixed it, in another case an incorrect network driver was the problem) so add retries to hopefully deal with the issue if it is not too persistent. If SSL_write() has long delays before reporting an error then this will obviously affect backup performance.
Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.
If this option is set then ports appended to repo-s3-endpoint or repo-s3-host will be ignored.
Setting this option explicitly may be the only way to use a bare IPv6 address with S3 (since multiple colons confuse the parser), but we plan to improve this in the future.
This direct interface to libpq allows simple queries to be run against PostgreSQL and supports timeouts.
Testing is performed using a shim that can use scripted responses to test all aspects of the client code. The shim will be very useful for testing backup scenarios on complex topologies.
Reviewed by Cynthia Shang.
The local process is now entirely migrated to C. Since all major I/O operations are performed in the local process, the vast majority of I/O is now performed in C.
Contributed by David Steele, Cynthia Shang.
The HTTP server can use either content-length or transfer-encoding to indicate that there is content in the response. HEAD requests do not include content but return all the same headers as GET. In the HEAD case we were ignoring content-length but not transfer-encoding which led to unexpected eof errors on AWS S3. Our test server, minio, uses content-length so this was not caught in integration testing.
Ignore all content for HEAD requests (no matter how it is reported) and add a unit test for transfer-encoding to prevent a regression.
Found by Pavel Suderevsky.
gcc < 9 gave all compound literals function scope, even though the C spec limits their lifetime to the enclosing block. Since the compiler and valgrind were not enforcing this we had a few violations which caused problems in gcc >= 9.
Even though we are not quite ready to support gcc 9 officially, fix the scoping violations that currently exist in the codebase.
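The class of bug looks like this (illustrative only):

    #include <stdbool.h>

    static int
    firstOrZero(bool cond)
    {
        const int *list = NULL;

        if (cond)
            list = (const int[]){1, 2, 3};          // literal's lifetime ends with this block

        // gcc < 9 kept the literal alive to function scope so this seemed to work;
        // per the spec the pointer is already dangling, and gcc >= 9 may reuse the stack
        return list != NULL ? list[0] : 0;
    }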
Reported by Christian Lange, Ned T. Crigler.
Maintaining the storage layer/drivers in two languages is burdensome. Since the integration tests require the Perl storage layer/drivers we'll need them even after the core code is migrated to C. Create an interface layer so the Perl code can be removed and new storage drivers/features introduced without adding Perl equivalents.
The goal is to move the integration tests to C so this interface will eventually be removed. That being the case, the interface was designed for maximum compatibility to ease the transition. The result looks a bit hacky but we'll improve it as needed until it can be retired.
Bug Fixes:
* Fix archive retention expiring too aggressively. (Fixed by Cynthia Shang. Reported by Mohamad El-Rifai.)
Improvements:
* The expire command is implemented entirely in C. (Contributed by Cynthia Shang.)
* The local command for restore is implemented entirely in C.
* Remove hard-coded PostgreSQL user so $PGUSER works. (Suggested by Julian Zhang, Janis Puris.)
* Honor configure --prefix option. (Suggested by Daniel Westermann.)
* Rename repo-s3-verify-ssl option to repo-s3-verify-tls. The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure). The old name will continue to be accepted.
Documentation Improvements:
* Add FAQ to the documentation. (Contributed by Cynthia Shang.)
* Use wal_level=replica in the documentation for PostgreSQL ≥ 9.6. (Suggested by Patrick McLaughlin.)
The problem expressed when repo1-archive-retention-type was set to diff. In this case repo1-archive-retention ended up being effectively equal to one, which meant PITR recovery was only possible from the last backup. WAL required for consistency was still preserved for all backups.
This issue is not present in the C migration committed at 434cd832, which was written before this bug was reported. Even so, we wanted to note this issue in the release notes in case any other users have been affected.
Fixed by Cynthia Shang.
Reported by Mohamad El-Rifai.
This implementation duplicates the functionality of the Perl code but does so with different logic and includes full unit tests.
Along the way at least one bug was fixed, see issue #748.
Contributed by Cynthia Shang.
The PostgreSQL user was hard-coded to the OS user which libpq will automatically use if $PGUSER is not set, so this code was redundant and prevented $PGUSER from working when set.
Suggested by Julian Zhang, Janis Puris.
These names more accurately reflect what the functions do and follow the convention started in Info and InfoPg.
Also remove the ignoreMissing parameter since it was never used.
Contributed by Cynthia Shang.
The documentation was using wal_level=hot_standby which is a deprecated setting.
Also remove the reference to wal_level=archive since it is no longer supported and is not recommended for older versions.
Suggested by Patrick McLaughlin.
The release notes are generally a direct reflection of the git log. So, ease the burden of maintaining the release notes by using the git log to determine what needs to be added.
Currently only non-dev items are required to be matched to a git commit but the goal is to account for all commits.
The git history cache is generated from the git log but can be modified to correct typos and match the release notes as they evolve. The commit hash is used to identify commits that have already been added to the cache.
There's plenty more to do here. For instance, links to the commits for each release item should be added to the release notes.
The first paragraph should match the first line of the commit message as closely as possible. The following paragraphs add more information.
Release items have been updated back to 2.01.
The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure).
The old name will continue to be accepted.
This is just the part of restore run by the local helper processes, not the entire command.
Even so, various optimizations in the code (like pipelining and optimizations for zero-length files) should make the restore command faster on object stores.
Bug Fixes:
* Fix segfault when process-max > 8 for archive-push/archive-get. (Reported by Jens Wilke.)
Improvements:
* Bypass database checks when stanza-delete issued with force. (Contributed by Cynthia Shang. Suggested by hatifnatt.)
* Add configure script for improved multi-platform support.
Documentation Features:
* Add user guides for CentOS/RHEL 6/7.
It would be better if the documentation could be generated on multiple operating systems all in one go, but the doc system currently does not allow vars to be changed once they are set.
The solution is to run the docs for each required OS and stitch the documentation together. It's not pretty but it works and the automation in release.pl should at least make it easy to use.
The url for the menu item referring to the index (i.e. site root page) should use {[project-url-root]}.
This allows the url to be set to different values depending on the location of the index.
This report replaces the lcov report that was generated manually for each release.
The lcov report was overly verbose just to say that we have virtually 100% coverage.
There are currently no options with multiple alternate (deprecated) names so the code to render them in the help command could not be covered.
Remove the uncovered code and add an error when multiple alternate names are configured. It's not clear that the current code was handling this correctly, so it will need to be reviewed if it comes up again.
Pre-calculate the value used by logAny() to improve performance and make it more likely to be inlined.
Move IF_LOG_ANY() into LOG_INTERNAL() to simplify the macros and improve performance of LOG() and LOG_PID(). If the message has no chance of being logged there's no reason to call logInternal().
Rename logWill() to logAny() because it seems more intuitive.
The branch coverage exclusion rules were overly broad and included functions that ended in a capital letter, which disabled all coverage for the statement. Improve matching so that all characters in the name must be upper-case for a match.
Some macros with internal branches accepted parameters that might contain conditionals. This made it impossible to tell which branches belonged to which, and in any case an overzealous exclusion rule was ignoring all branches in such cases. Add the DEBUG_COVERAGE flag to build a modified version of the macros without any internal branches to be used for coverage testing. In most cases, the branches were optimizations (like checking logWill()) that improve production performance but are not needed for testing. In other cases, a parameter needed to be added to the underlying function to handle the branch during coverage testing.
Also tweak the coverage rules so that macros without conditionals are automatically excluded from branch coverage as long as they are not themselves a parameter.
Finally, update tests and code where missing coverage was exposed by these changes. Some code was updated to remove existing coverage exclusions when it was a simple change.
Call stackTraceTestStop()/stackTraceTestStart() once per block instead of with every param call. This was done to be cautious but is not necessary and slows down development.
These functions were never built into production so had no impact there.
Filters had different ideas about what "done" meant and this added complication to the group filter processing. For example, gzip decompression would detect end of stream and mark the filter as done before it had been flushed.
Improve the IoFilter interface to give a consistent definition of done across all filters, i.e. no filter can be done until it has started flushing no matter what the underlying driver reports. This removes quite a bit of tricky logic in the processing loop which tried to determine when a filter was "really" done.
Also improve management of the input buffers by pointing directly to the prior output buffer (or the caller's input) to eliminate loops that set/cleared these buffers.
If content was zero-length then the IO object was not created. This put the burden on the caller to test that the IO object existed before checking eof.
Instead, create an IO object even if it will immediately return eof. This has little cost and makes the calling code simpler.
Also add an explicit test for zero-length files in S3 and a few assertions.
The rules for when a C remote is required are getting complicated and will get worse when restoreFile() is migrated.
Instead, set the --c option when a C remote is required. This option will be removed when the remote is entirely implemented in C.
Most of the *Free() functions are pretty generic so add macros to make creating them as easy as possible.
Create a distinction between *Free() functions that the caller uses to free memory and callbacks that free third-party resources. There are a number of cases where a driver needs to free resources but does not need a normal *Free() because it is handled by the interface.
Add common/object.h for macros that make object maintenance easier. This pattern can also be used for many more object functions.
Rename memContextCallback() to memContextCallbackSet() to be more consistent with other parts of the code.
Free all context memory when an exception is thrown from a callback. Previously only the child contexts would be freed and this resulted in some allocations being lost. In practice this is probably not a big deal since the process will likely terminate shortly, but there may well be cases where that is not true.
Add GLUE() macro which is useful for creating identifiers.
Move MACRO_TO_STR() here and rename it STRINGIFY(). This appears to be the standard name for this type of macro and it is also an awesome name.
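The usual definitions look like this (a sketch, not necessarily the exact macros):

    // Two-level expansion so macro arguments are expanded before pasting/stringifying
    #define GLUE_(a, b)                             a##b
    #define GLUE(a, b)                              GLUE_(a, b)

    #define STRINGIFY_(s)                           #s
    #define STRINGIFY(s)                            STRINGIFY_(s)

    // GLUE(test, __LINE__) yields an identifier like test42 and
    // STRINGIFY(__LINE__) yields the string "42"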
Remove "File" and "Driver" from object names so they are shorter and easier to keep consistent.
Also remove the "driver" directory so storage implementations are visible directly under "storage".
The function pointer casting used when creating drivers made changing interfaces difficult and led to slightly divergent driver implementations. Unit testing caught production-level errors but there were a lot of small issues and the process was harder than it should have been.
Use void pointers instead so that no casts are required. Introduce the THIS_VOID and THIS() macros to make dealing with void pointers a little safer.
Since we don't want to expose void pointers in header files, driver functions have been removed from the headers and the various driver objects return their interface type. This cuts down on accessor methods and the vast majority of those functions were not being used. Move functions that are still required to .intern.h.
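Roughly how the macros work, shown with an illustrative driver type:

    #define THIS_VOID                               void *thisVoid
    #define THIS(type)                              type *this = thisVoid

    typedef struct ExampleDriver
    {
        int fd;
    } ExampleDriver;                                // illustrative driver type

    // Interface functions receive void * so no function pointer casts are required
    static int
    exampleDriverFd(THIS_VOID)
    {
        THIS(ExampleDriver);                        // recover the typed pointer
        return this->fd;
    }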
Remove the special "C" crypto functions that were used in libc and instead use the standard interface.
Add bufDup() and bufNewUsedC().
Arrange bufNewC() params to match bufNewUsedC() since they have always seemed backward.
Fix bufHex() to only render the used portion of the buffer and fix some places where used was not being set correctly.
Use a union to make macro assignments for all legal values without casting. This is much more likely to catch bad assignments.
There is only one instance in the core code where this helps. It is mostly helpful in the tests.
There is an argument to be made that only THROW_SYS_ERROR*() variants should be used in the core code to improve test coverage. If so, that will be the subject of a future commit.
Some functions (e.g. getpwnam()/getgrnam()) will return an error but not set errno. In this case there's no use in appending strerror(), which will be "Success". This is confusing since an error has just been reported.
At least in the examples above, an error with no errno set just means "missing" and our current error message already conveys that.
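The pattern, sketched (error handling simplified to fprintf()):

    #include <errno.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <string.h>

    static void
    lookupUser(const char *name)
    {
        // getpwnam() can return NULL without setting errno, meaning the user is missing
        errno = 0;
        const struct passwd *pw = getpwnam(name);

        if (pw == NULL)
        {
            if (errno == 0)
                fprintf(stderr, "unable to find user '%s'\n", name);
            else
                fprintf(stderr, "unable to look up user '%s': %s\n", name, strerror(errno));
        }
    }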
The remote list was at most 9 (based on pg[1-8]-* max index) so anything over 8 wrote into unallocated memory.
The remote for the main process is (currently) stored in position zero so do the same for remotes started from locals, since there should only be one. The main process will need to start more remotes in the future which is why there is extra space.
Reported by Jens Wilke.
Use autoconf to provide a basic configure script. WITH_BACKTRACE is yet to be migrated to configure and the unit tests still use a custom Makefile.
Each C file must include "build.auto.h" before all other includes and defines. This is enforced by test.pl for includes, but it won't detect incorrect define ordering.
Update packages to call configure and use standard flags to pass options.
Update RHEL repos that have changed upstream. Remove PostgreSQL 9.3 since the RHEL6/7 packages have disappeared.
Remove PostgreSQL versions from U12 that are still getting minor updates so the container does not need to be rebuilt.
LZ4 is included for future development, but this seems like a good time to add it to the containers.
The function provides all the file/path/link information required to build a backup manifest.
Also update storageInfo() to provide the same information for a single file.
At the same time change the way that load constructors work (and are named) so that Ini objects do not persist after the constructors complete.
infoArchiveSave() is excluded from this commit since it is just a trivial call to infoPgSave() and won't be required soon.
In most cases the JSON type is known so this is more efficient than converting to Variant first, both in terms of memory and time.
Also rename some of the existing functions for consistency.
Variants were being used to expose String and StringList types but this can be done more simply with an additional method.
Using only strings also allows for a more efficient implementation down the road.
This greatly reduces calls to filter processing, which is a performance benefit, but also makes the trace logs smaller and easier to read.
However, this means that ioWriteFlush() will no longer work with filters since a full flush of IoFilterGroup would require an expensive reset. Currently ioWriteFlush() is not used in this scenario so for now just add an assert to ensure it stays that way.
These are more efficient than creating buffers in place when needed.
After replacement we discovered that bufNewStr() and bufNewZ() were not being used in the core code, so they were removed. This required using the macros in tests, which is not the usual pattern.
Since the introduction of blocking read drivers (e.g. IoHandleRead, TlsClient) the non-blocking drivers have used the same rules for determining maximum buffer size, i.e. read only as much as requested. This is necessary so the blocking drivers don't get stuck waiting for data that might not be coming.
Instead mark blocking drivers so IoRead knows how much buffer to allow for the read. The non-blocking drivers can now request the maximum number of bytes allowed by buffer-size.
Bug Fixes:
* Fix zero-length reads causing problems for IO filters that did not expect them. (Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.)
* Fix reliability of error reporting from local/remote processes.
* Fix Posix/CIFS error messages reporting the wrong filename on write/sync/close.
Add production checks to ensure no filter gets a zero-size input buffer.
Also, optimize the case where a filter returns no output. There's no sense in running downstream filters if they have no new input.
The IoRead object was passing zero-length buffers into the filter processing code but not all the filters were happy about getting them.
In particular, the gzip compression filter failed if it was given no input directly after it had flushed all of its buffers. This made the problem rather intermittent even though a zero-length buffer was being passed to the filter at the end of every file. It also explains why tweaking compress-level or buffer-size allowed the file to go through.
Since this error was happening after all processing had completed, there does not appear to be any risk that successfully processed files were corrupted.
Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.
Releasing the lock too early was allowing other async processes to sneak in and start running before the current process was completely shut down.
The only symptom seems to have been mixed up log messages so not a very serious issue.
Asserts were only reported on stderr rather than being returned through the protocol layer. This did not appear to be very reliable.
Instead, report the assert through the protocol layer like any other error. Add a stack trace if an assert error or debug logging is enabled.
These work almost exactly like the String constant macros. However, a struct per variant type was required which meant custom constructors and destructors for each type.
Propagate the variant constants out into the codebase wherever they are useful.
The STRING_CONST() macro worked fine for constants but was not able to constify strings created at runtime.
Add the STR() macro to do this by using strlen() to get the size.
Also rename STRING_CONST() to STRDEF() for brevity and to match the other macro name.
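A sketch assuming a String layout with size and buffer fields (the real struct differs):

    #include <string.h>

    typedef struct String
    {
        size_t size;
        const char *buffer;
    } String;                                       // assumed layout

    // Constify a buffer without copying by taking the address of a compound literal
    #define STRDEF(s)                               (&(const String){sizeof(s) - 1, (s)})
    #define STR(s)                                  (&(const String){strlen(s), (s)})

    // STRDEF() works for string literals where the size is known at compile time;
    // STR() handles strings created at runtime so must be used at block scope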
Removed the "anchor" parameter because it was never used in any calls in the Perl code so it was just a dead parameter that always defaulted to true.
Contributed by Cynthia Shang.
These constants are easier than using cfgOptionName() and cfgCommandName() and lead to cleaner code and simpler to construct messages.
String versions are provided. Eventually all the strings will be used in the config structures, but for now they are useful to avoid wrapping with strNew().
IMPORTANT NOTE: The new TLS/SSL implementation forbids dots in S3 bucket names per RFC-2818. This security fix is required for compliant hostname verification.
Bug Fixes:
* Fix issues when a path option is / terminated. (Reported by Marc Cousin.)
* Fix issues when log-level-file=off is set for the archive-get command. (Reported by Brad Nicholson.)
* Fix C code to recognize host:port option format like Perl does. (Reported by Kyle Nevins.)
* Fix issues with remote/local command logging options.
Improvements:
* The archive-push command is implemented entirely in C.
* Increase process-max limit to 999. (Suggested by Rakshitha-BR.)
* Improve error message when an S3 bucket name contains dots.
Documentation Improvements:
* Clarify that S3-compatible object stores are supported. (Suggested by Magnus Hagander.)
This was not an intentional feature in Perl, but it works, so it makes sense to implement the same syntax in C.
This is a break from other places where a -port option is explicitly supplied, so it may make sense to support both styles going forward. This commit does not address that, however.
Reported by Kyle Nevins.
The Perl lib we have been using for TLS allows dots in wildcards, but this is forbidden by RFC-2818. The new TLS implementation in C forbids this pattern, just as PostgreSQL and curl do.
However, this does present a problem for users who have been using bucket names with dots in older versions of pgBackRest. Since this limitation exists for security reasons there appears to be no option but to take a hard line and do our best to notify the user of the issue as clearly as possible.
This problem was not specific to archive-get, but that was the only place it was expressing in the last release. The new archive-push was also affected.
The issue was with daemon processes that had closed all their file descriptors. When exec'ing and setting up pipes to communicate with a child process the dup2() function created file descriptors that overlapped with the first descriptor (stdout) that was being duped into. This descriptor was subsequently closed and wackiness ensued.
If logging was enabled (the default) that increased all the file descriptors by one and everything worked.
Fix this by checking if the file descriptor to be closed is the same one being dup'd into. This solution may not be generally applicable but it works fine in this case.
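The guard looks roughly like this:

    #include <unistd.h>

    // If the descriptor to be duplicated already equals the target then dup2() is a
    // no-op and closing the source afterward would destroy the descriptor just set up
    static void
    duplicateTo(int sourceFd, int targetFd)
    {
        if (sourceFd != targetFd)
        {
            dup2(sourceFd, targetFd);
            close(sourceFd);
        }
    }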
Reported by Brad Nicholson.
The documentation mentioned Amazon S3 frequently but failed to mention that other S3-compatible object stores are also supported.
Tone down the specific mentions of Amazon S3 and replace them with "S3-compatible object store" when appropriate.
Suggested by Magnus Hagander.
This new implementation should behave exactly like the old Perl code with the exception of updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
When a repository server is configured, commands that modify the repository acquire a remote lock as well as a local lock for extra protection against multiple writers.
Instead of the custom logic used in Perl, make remote locking part of the command configuration.
This also means that the C remote needs the stanza since it is used to construct the lock name. We may need to revisit this at a later date.
While the local processes are doing their jobs the remote connection from the main process may timeout.
Send occasional noops to ensure that doesn't happen.
This may not be the best way to detect 64-bit platforms but it seems to be working fine so far.
Create a macro to make it clearer what is being done and to make it easier to change the implementation.
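One possible implementation (macro name hypothetical):

    // size_t tracks the platform pointer width on common platforms
    #define TEST_64BIT()                            (sizeof(size_t) == 8)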
The test harness was not being built with warnings which caused some wackiness with an improperly structured switch. Just use the same warnings as the code being tested.
Also enable warnings on code that is not directly being tested since other code modules are frequently modified during testing.
We deal with some pretty big lists in archive-push so a nested-loop anti-join looked like it would not be efficient enough.
This merge anti-join should do the trick even though both lists must be sorted first.
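The merge anti-join, sketched with plain arrays (both inputs must be sorted ascending):

    #include <string.h>

    // Emit the items of a[] that have no match in b[]; each list is scanned once,
    // vs. O(n * m) comparisons for a nested-loop anti-join
    static size_t
    antiJoin(const char **a, size_t aSize, const char **b, size_t bSize, const char **result)
    {
        size_t aIdx = 0, bIdx = 0, resultSize = 0;

        while (aIdx < aSize)
        {
            // Skip b items that sort before the current a item
            while (bIdx < bSize && strcmp(b[bIdx], a[aIdx]) < 0)
                bIdx++;

            // Keep the a item only when it is not present in b
            if (bIdx >= bSize || strcmp(b[bIdx], a[aIdx]) != 0)
                result[resultSize++] = a[aIdx];

            aIdx++;
        }

        return resultSize;
    }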
The prior behavior on a global error (i.e. not file specific) was to write an individual error file for each WAL file being processed. On retry each of these error files would be removed, and if the error was persistent, they would then be recreated. In a busy environment this could mean tens or hundreds of thousands of files.
Another issue was that the error files could not be written until a list of WAL files to process had been generated. This was easy enough for archive-get but archive-push requires more processing and any errors that happened when generating the list would only be reported in the pgBackRest log rather than the PostgreSQL log.
Instead write a global.error file that applies to any WAL file that does not have an explicit ok or error file. This reduces churn and allows more errors to be reported directly to PostgreSQL.
Having a copy per version worked well until it was time to add new features or modify existing functions. Then it was necessary to modify every version and try to keep them all in sync.
Consolidate all the PostgreSQL types into a single file using #if for type versions. Many types do not change or change infrequently so this cuts down on duplication. In addition, it is far easier to see what has changed when a new version is added.
Use macros to write the interface functions. There is still duplication here since some changes require a new copy of the macro, but it is far less than before.
Move the documentation to postgres/interface.c so it can be updated without having to update N source files.
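The shape of the consolidation (fields and version constants illustrative):

    #include <stdint.h>

    #define PG_VERSION_93                           930             // illustrative version constant
    #define PG_VERSION                              PG_VERSION_93   // normally set by the per-version source file

    // One definition guarded by #if instead of a copy per PostgreSQL version
    typedef struct
    {
        uint64_t system_identifier;
    #if PG_VERSION >= PG_VERSION_93
        uint32_t data_checksum_version;             // appeared in 9.3, absent before
    #endif
    } ControlFileExample;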
The "is" function was not very specific so rename to "controlIs".
Since archive-push is being moved to C, the Perl remote will no longer work with that command.
Eventually this module will need to be rewritten in C, but for now just use the restore command which is planned to be migrated last.
Now that repositories are writable the storage drivers that don't yet support file writes need to be updated to do so.
Note that the part size for multi-part upload has not been defined as a proper constant. This will become an option in the near future so it doesn't seem worth creating a constant that we might then forget to remove.
The xml objects only exposed read methods of the underlying libxml2.
This worked for S3 commands that only received data but to send data we need to be able to create XML documents from scratch.
Add the ability to create empty documents and add nodes and contents.
The C code was assuming that the current PostgreSQL version in archive.info/backup.info was the most recent item in the history, but this is not always the case with some stanza-upgrade scenarios. If a cluster is restored from before the upgrade and stanza-upgrade is run again, it will revert db-id to the original history item.
Instead, load db-id from the db section explicitly as the Perl code does.
This did not affect archive-get since it does a reverse scan through the history versions and does not rely on the current version.
Logging was being enabled on local/remote processes even if --log-subprocess was not specified, so fix that.
Also, make sure that stderr is enabled at error level as it was in Perl. This helps expose error information for debugging.
For remotes, suppress log and lock paths since these are not applicable on remote hosts. These options should be set in the local config if they need to be overridden.
None of our C HTTP requests have needed to output a body, but they will with the migration of archive-push.
Also, add constants that are useful when POSTing/PUTing data.
The size constants are convenient for creating data structures of the proper size.
The hash type constant must be extern'd so that results can be pulled from a filter.
This was missing when bufUsed() was introduced.
It is not currently a live issue, but becomes a problem in the new archive-push code where the entire buffer is not always used.
This condition was not being properly checked for in the C code and it caused problems in the info command, at the very least.
Instead of applying a local fix, introduce a new path option type that will rigorously check the format of any incoming paths.
Reported by Marc Cousin.
This command was previously forked off from the archive-push command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
This driver borrows heavily from the Posix driver.
At this point the only difference is that CIFS does not allow explicit directory fsyncs so they need to be suppressed. At some point the CIFS driver will also omit link support.
With the addition of this driver repository storage is now writable.
Bug Fixes:
* Fix possible truncated WAL segments when an error occurs mid-write. (Reported by blogh.)
* Fix info command missing WAL min/max when stanza specified. (Fixed by Stefan Fercot.)
* Fix non-compliant JSON for options passed from C to Perl. (Reported by Leo Khomenko.)
Improvements:
* The archive-get command is implemented entirely in C.
* Enable socket keep-alive on older Perl versions. (Contributed by Marc Cousin.)
* Error when parameters are passed to a command that does not accept parameters. (Suggested by Jason O'Donnell.)
* Add hints when unable to find a WAL segment in the archive. (Suggested by Hans-Jürgen Schönig.)
* Improve error when hostname cannot be found in a certificate. (Suggested by James Badger.)
* Add additional options to backup.manifest for debugging purposes. (Contributed by blogh.)
Add the buffer-size, compress-level, compress-level-network, and process-max options to the backup:option section in backup.manifest to aid in debugging.
It may also make sense to propagate these options up to backup.info so they can be displayed in the info command, but for now this is deemed sufficient.
Contributed by blogh.
When this error happens in the context of a backup it can be a bit mystifying as to why the backup is failing. Add some hints to get the user started.
These hints will appear any time a WAL segment can't be found, which makes the hint about the check command redundant when the user is actually running the check command, but it doesn't seem worth trying to exclude the hint in that case.
Suggested by Hans-Jürgen Schönig.
DESTDIR always had /usr/bin appended, which was a problem for systems that don't use /usr/bin as the install location for binaries.
Instead, use the value of DESTDIR exactly and update the Debian packages accordingly.
Contributed by Douglas J Hunley.
This behavior allowed a command like this to run without error:
pgbackrest backup --stanza=db full
In most circumstances it actually performed an incremental backup because the full parameter was ignored.
Instead, output an error and exit.
Suggested by Jason O'Donnell.
This warning was being output when getting help if retention was not set:
WARN: option repo1-retention-full is not set, the repository may run out of space
Suppress this when getting help since the warning will display by default on a system that is not completely configured.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
This is very inefficient in terms of memory and time and dynamic context names were never utilized.
Just require that context names be valid for the life of the context.
In practice they are all static strings.
Allocations required a sequential scan through the allocation list for both contexts and memory. This was very inefficient since for the most part individual memory allocations are seldom freed directly, rather they are freed when their context is freed.
For both types of allocations, track an index for the lowest position that might be free. Once that position is allocated, a sequential search is required to find the next free position, but this is still far better than scanning for every allocation.
With a moderately-sized dataset (500 history entries in backup.info), there is a 237X performance improvement when combined with the f74e88bb refactor.
Before:

      %   cumulative    self
    time     seconds  seconds   name
    65.11     331.37   331.37   memContextAlloc
    16.19     413.78    82.40   memContextCurrent
    14.74     488.81    75.03   memContextTop
     2.65     502.29    13.48   memContextNewIndex
     1.18     508.31     6.02   memFind

After:

      %   cumulative    self
    time     seconds  seconds   name
    94.69       2.14     2.14   memFind
Finding memory allocations in order to free or resize them is the next bottleneck, but this does not seem to be a major issue presently.
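The free-position tracking, sketched (struct and field names hypothetical):

    typedef struct MemContext MemContext;

    struct MemContext
    {
        MemContext **contextChildList;              // child contexts, NULL means the slot is free
        unsigned int contextChildListSize;
        unsigned int contextChildFreeIdx;           // lowest slot that might be free
    };

    // Find a free slot starting at the tracked index instead of scanning from zero
    static unsigned int
    memContextNewIndex(MemContext *context)
    {
        unsigned int idx = context->contextChildFreeIdx;

        while (idx < context->contextChildListSize && context->contextChildList[idx] != NULL)
            idx++;

        // ... grow contextChildList here when idx == contextChildListSize ...

        context->contextChildFreeIdx = idx + 1;     // freeing a lower slot moves this back down
        return idx;
    }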
Using the functions internally is great for abstraction but not so great for performance on non-optimized builds.
Also, the functions end up prominent in any profiled build.
The prior method depended on IO::Socket::SSL to push the keep-alive options down to the socket but it only worked for recent versions of the module.
Instead, create the socket directly using IO::Socket::IP if available or IO::Socket::INET as a fallback. The keep-alive option is set directly on the socket before it is passed to IO::Socket::SSL.
Contributed by Marc Cousin.
This new implementation should behave exactly like the old Perl code with the exception of a few updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
The C local is only used for C commands in the main process.
Some tweaking of the existing protocolGet() command was required. Originally the idea was to share the function for local and remote requests but the differences (as in Perl) were too great to make that practical.
Some IO objects have file descriptors which can be useful for monitoring with select().
It might also be useful to expose handles for write objects but there is currently no use case.
There was a lot of extra boilerplate involved in setting up pipes so that is now automated.
In some cases testing with multiple children is useful so allow that as well.
This amends 70c30dfb which disabled test tracing in general.
Instead, only enable test tracing by default for modules that are being unit tested. This saves lots of time but still ensures that test tracing is working and helps with debugging in unit tests.
Also rename the option to --debug-test-trace for clarity.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the stanza code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
The same test configurations are run on all four test VMs, which seems a real waste of resources.
Vary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once.
The expect tests were originally a rough-and-ready type of unit test so monitoring changes in the expect log helped us detect changes in behavior.
Now the archive code is heavily unit-tested so the detailed logs mainly cause churn and don't have any measurable benefit.
Reduce the log level to DETAIL to make the logs less verbose and volatile, yet still check user-facing log messages.
Update error message with the hostname and more detail about what went wrong. Hopefully this will help in diagnosing certificate/hostname issues.
Suggested by James Badger.
We have been using a hacked-up JSON generator to pass options from C to Perl since the C binary was introduced. This generator was not very compliant which led to issues with \n, ", etc. inside strings.
We have a fully-compliant JSON generator now so use that instead.
Reported by Leo Khomenko.
Detailed stack traces for low-level functions (e.g. strCat, bufMove) can be very useful for debugging but leaving them on for all tests has become quite burdensome in terms of time. Complex operations like generating JSON on a large KeyValue can lead to timeouts even with generous values.
Add a new param, --debug-trace, to enable test-level stack trace, but leave it off by default.
Expressions such as <REPO:ARCHIVE> require a stanza name in order to be resolved correctly. However, if the stanza name is passed to the remote then that remote will only work correctly for that one stanza.
Instead, resolve the expressions locally but still pass a relative path to the remote. That way, a storage path that is only configured on the remote does not need to be known locally.
This issue was a result of STORAGE_REPO_PATH prepending an extra stanza when the stanza was specified on the command line.
The tests missed this because by some strange coincidence the WAL dirs were empty for each test that specified a stanza. Add new tests to prevent a regression.
Fixed by Stefan Fercot.
Free all cached objects in the storage helper, especially the stanza name.
This clears the storage environment for tests that switch stanza names or go from a stanza name to no stanza name or vice versa. This is only useful for testing right now, but may be used in the future for commands that act on multiple stanzas.
This was previously 256, which was too small to log protocol parameters. Not only did this truncate important debug information but varying path lengths caused spurious differences in the expect logs.
This command was previously forked off from the archive-get command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
These are intended to be temporary until a fully automated report is developed.
Since we don't know when that will happen, at least make it easier to generate the current report.
Prior to this the Perl remote was used to satisfy C requests. This worked fine but since the remote needed to be migrated to C anyway there was no reason to wait.
Add the ProtocolServer object and tweak ProtocolClient to work with it. It was also necessary to add a mechanism to get option values from the remote so that encryption settings could be read and used in the storage object.
Update the remote storage objects to comply with the protocol changes and add the storage protocol handler.
Ideally this commit would have been broken up into smaller chunks but there are cross-dependencies in the protocol layer and it didn't seem worth the extra effort.
The file write object destructors called close() and finalized the file even if it was not completely written. This was an issue in both the C and Perl code.
Rewrite the destructors to simply free resources (like file handles) rather than calling the close() method. This leaves the temp file in place for filesystems that use temp files.
Add unit tests to prevent regression.
Reported by blogh.
execRead() should be returning a size_t, not a void. Thankfully, this isn't actually used and therefore shouldn't be an issue, but we should fix it anyway.
Contributed by Stephen Frost.