Coverage of the documentation code is not important enough to report to users. If it were reported, it should be in a separate section (along with test code coverage).
The exact functionality of start/stop has evolved over time and has become a bit confusing, and the documentation for start/stop was fairly inaccurate. It may be appropriate to make the behavior more consistent, but for now at least document the behavior correctly.
3c8819e1 replaced gmtime/localtime with gmtime_r/localtime_r but did not take into account a subtle difference in how they operate. While gmtime/localtime operate as if tzset() has been called, i.e. they operate on the TZ env variable directly, gmtime_r/localtime_r require tzset() to be called after changing TZ for consistent results.
Rather than call tzset() every time TZ is changed, add hrnTzSet() to encapsulate both operations.
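A minimal sketch of what hrnTzSet() encapsulates (the test-harness context around it is not shown and the exact signature is an assumption):

    #include <stdlib.h>
    #include <time.h>

    // Set TZ and call tzset() so gmtime_r()/localtime_r() see the new value consistently
    static void
    hrnTzSet(const char *const tz)
    {
        setenv("TZ", tz, 1);
        tzset();
    }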
This was copied from storagePosixInfo() in a474ba54 but there is no guarantee that errno will be valid at this point. In most cases errno was zero, so no system error message was displayed, but when using the Posix driver it could output "[2] No such file or directory". For other drivers errno was generally not set, but a random error message could be output in the case where errno had been set by some unrelated action.
Use THROW_FMT() instead, since errno will not always be set correctly, and in any case "[2] No such file or directory" is not very useful information when the main error message already says that.
While this is technically a bug it is so harmless that it doesn't merit mention in the release notes.
This was discovered while testing on Fedora 40 which threw "[38] Function not implemented" -- clearly unrelated to missing paths/files.
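The underlying pitfall can be shown with a standalone snippet (this is illustrative, not the pgBackRest code): errno only describes the most recent failing call, so reporting it from an unrelated code path produces a misleading detail.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    int
    main(void)
    {
        struct stat statBuf;

        stat("/does/not/exist", &statBuf);                      // fails and sets errno to ENOENT

        // ...later an unrelated "missing file" condition is detected and an error is built...
        printf("misleading detail: [%d] %s\n", errno, strerror(errno));

        return 0;
    }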
Coverity complains that this comparison might have a side effect because the variable is volatile. It's hard to see what that might be, but since the assertion is not all that important, just remove it. During testing this sort of error will generally be caught by valgrind.
Coverity complains that the output from THROW_FMT() will be unpredictable since the order of operations in the call is not deterministic, but it fails to understand that subsequent calls to jsonReadTypeNextIgnoreComma() are no-ops until the value has been processed.
Silence Coverity by assigning the actual type to a local variable so jsonReadTypeNextIgnoreComma() is only called once.
Also fix an adjacent comment typo.
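A generic, standalone illustration of the pattern Coverity wants to see (placeholder names only, not the json module itself): evaluate the side-effecting call once, store the result in a local, and use only the local in the message.

    #include <stdio.h>

    static int callCount = 0;

    // Stand-in for jsonReadTypeNextIgnoreComma(), i.e. a call with a side effect
    static int
    typeNext(void)
    {
        return ++callCount;
    }

    int
    main(void)
    {
        // Coverity-unfriendly shape: printf("%d/%d", typeNext(), typeNext()) -- C leaves the
        // argument evaluation order unspecified, so the result may vary between builds
        const int type = typeNext();                            // evaluate exactly once

        printf("unexpected type %d\n", type);

        return 0;
    }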
Add const as appropriate and avoid initializing variables if the variable will definitely be set later on.
The storage/remote module will be updated along with the protocol module once a pending major refactor has been committed.
The GCS driver sent a single file delete request for each file while deleting a path. Depending on latency this could lead to rather long delete times, especially noticeable during expiration.
Improve GCS delete to use batches, which require multipart HTTP, so also add multipart HTTP infrastructure.
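Roughly, a batch delete bundles many object deletes into one multipart/mixed request whose parts are individual HTTP requests. A sketch of the shape (bucket, object names, boundary, and the trimmed headers are illustrative):

    POST /batch/storage/v1 HTTP/1.1
    Host: storage.googleapis.com
    Content-Type: multipart/mixed; boundary=batch_boundary

    --batch_boundary
    Content-Type: application/http

    DELETE /storage/v1/b/demo-bucket/o/archive%2F000000010000000000000001 HTTP/1.1

    --batch_boundary
    Content-Type: application/http

    DELETE /storage/v1/b/demo-bucket/o/archive%2F000000010000000000000002 HTTP/1.1

    --batch_boundary--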
This is better than requiring a python3 binary to be on the path because some installations might only provide, e.g., python3.9.
Also add the python3-distutils package to Debian builds to make this work.
This should have been done in 434938e3 but somehow it didn't happen.
Fedora 38 requires 2048-bit keys so update the VM builds to use them. Update the documentation to use 2048-bit keys as well. This is not technically required by this commit but it makes sense to do it now.
Also update the key location for the yum.p.o repository.
Lastly, shuffle test PostgreSQL versions since PostgreSQL 11 is no longer available in the yum.p.o repository.
Since there were some issues found with the meson install (7877983a, 7b95fd3b) it makes sense for any packagers who have not made the migration to hold off until the next release.
Move the note to the next release where hopefully all issues have been addressed.
This feature (enabled with --repo-s3-sse-customer-key) provides an encryption key to encrypt the data after it has been transmitted to the server.
While not as secure as encrypting data before transmission (--repo-cipher-type), this may be useful in certain configurations.
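A hypothetical configuration sketch (the option is shown for repo1 and the key value is a placeholder; it must match the key the S3-compatible server expects for SSE-C):

    [global]
    repo1-type=s3
    repo1-s3-sse-customer-key=cGxhY2Vob2xkZXItLW5vdC1hLXJlYWwta2V5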
On some platforms, e.g. FreeBSD, there is a requirement to allow the user to disable support for features even when the required library is present.
Introduce tri-state options for the optional features: auto mimics the current behavior and is the default, enable requires the feature's libraries to be present, and disable disables the feature without checking for the libraries.
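One way such a tri-state could be wired up in meson (the option and dependency names here are illustrative, not necessarily how pgBackRest implements it):

    # meson_options.txt
    option('libssh2', type: 'combo', choices: ['auto', 'enable', 'disable'], value: 'auto')

    # meson.build -- 'enable' makes the dependency required, 'auto' probes and falls back,
    # 'disable' skips the library check entirely
    if get_option('libssh2') != 'disable'
        libssh2_dep = dependency('libssh2', required: get_option('libssh2') == 'enable')
    endif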
These should have been removed when the mock integration tests were removed.
Ideally we would also remove filecopy.table.bin but it serves to provide realistic page data for performance testing.
A valid StringId can never be zero, so zero more or less serves as a NULL value. In most cases zero will not be valid, but it is better to catch this condition with an assert rather than an error in logging.
NOTE TO PACKAGERS: The build system for pgBackRest is now meson. The autoconf/make build will not receive any new features and will be removed after a few releases.
Bug Fixes:
* Skip zero-length files for block incremental delta restore. (Reviewed by Sebastian Krause, René Højbjerg Larsen. Reported by Sebastian Krause.)
* Fix performance regression in storage list. (Reviewed by Stephen Frost. Reported by Maksym Boguk.)
* Fix progress logging when file size changes during backup. (Reviewed by Stephen Frost. Reported by samkingno.)
Improvements:
* Improved support for dual stack connections. (Reviewed by Stephen Frost. Suggested by Timothée Peignier.)
* Make meson the primary build system. (Reviewed by Stephen Frost.)
* Detect files that have not changed during non-delta incremental backup. (Reviewed by Stephen Frost.)
* Prevent invalid recovery when backup_label removed. (Reviewed by Stephen Frost.)
* Improve archive-push WAL segment queue handling. (Reviewed by Stephen Frost.)
* Limit resume functionality to full backups. (Reviewed by Stephen Frost, Stefan Fercot.)
* Update resume functionality for block incremental. (Reviewed by Stephen Frost.)
* Allow --version and --help for version and help. (Reviewed by Greg Sabino Mullane. Suggested by Greg Sabino Mullane.)
* Add detailed backtrace to autoconf/make build. (Reviewed by Stephen Frost.)
Documentation Improvements:
* Update references to recovery.conf. (Reviewed by Stefan Fercot. Suggested by Stephen Frost.)
If the file size changed during backup then the progress percentage in the log would not be accurate.
Fix this by using the original size to increment the progress, since the progress total was calculated from the original file sizes.
For this to be practically useful, secure options must be redacted. Otherwise, no user is likely to share the report.
Since this feature is still internal, there is no real world impact.
Resume was not updated for block incremental so block incremental files were always removed during a resume. Resume worked but was very inefficient with block incremental enabled.
Update resume to preserve block incremental files and add tests.
If backup_label is removed from a restored backup then PostgreSQL will instead use checkpoint information from pg_control to attempt (what it thinks is) crash recovery. This will nearly always result in a corrupt cluster because the checkpoint will not be from the beginning of the backup, and even if it is, the end point will not be specified, which could lead to recovery stopping too early.
To prevent this, invalidate the checkpoint LSN in pg_control on restore. If backup_label is removed then recovery will still fail because PostgreSQL will not be able to find the invalid checkpoint. The LSN of the checkpoint is not logged but it will be visible in pg_controldata output as 0/DEAD. This value is invalid because PostgreSQL always skips the first WAL segment when initializing a cluster.
This serves as an additional sanity check to be sure the pg_control format is as expected. The field is useful for being near the end and containing a limited number of discrete values.
This serves as an additional sanity check to be sure the pg_control format is as expected. The field is useful for being all the way at the end and being four bytes that can only have one of two values. Something more distinctive than 0 and 1 would be better, but this is what we have to work with.
Convert PgControl.pageChecksum to unsigned int, rename it to PgControl.pageChecksumVersion, and make all downstream changes required for the new datatype.
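A sketch of the datatype change (the surrounding fields are assumptions; the actual PgControl struct contains more than is shown here):

    #include <stdint.h>

    typedef struct PgControl
    {
        unsigned int version;                           // pg_control version
        uint64_t systemId;                              // database system identifier
        unsigned int pageChecksumVersion;               // was "bool pageChecksum"; stores 0 or 1
    } PgControl;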
Connections are established using the "happy eyeballs" approach from RFC 8305, i.e. new addresses (if available) are tried if the prior address has already had a reasonable time to connect. This prevents waiting too long on a failed connection but does not try all the addresses at once. Prior connections that are still waiting are rechecked periodically if no subsequent connection is successful.
This improves substantially on 39bb8a0, which failed to take into account connection attempts that do not fail (but never connect) and use up all the available time.
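A standalone sketch of staggered connection attempts in the spirit of RFC 8305 (this is not the pgBackRest implementation -- the function name, timing constant, and fixed attempt limit are assumptions):

    #include <fcntl.h>
    #include <netdb.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define ATTEMPT_DELAY_MS 250                                // stagger between new attempts
    #define ATTEMPT_MAX 16                                      // fixed limit for the sketch

    static int
    connectStaggered(const struct addrinfo *addr, const int timeoutMs)
    {
        struct pollfd fds[ATTEMPT_MAX];
        int fdTotal = 0;

        for (int elapsedMs = 0; elapsedMs < timeoutMs; elapsedMs += ATTEMPT_DELAY_MS)
        {
            // Start a non-blocking attempt on the next address, if one remains
            if (addr != NULL && fdTotal < ATTEMPT_MAX)
            {
                const int fd = socket(addr->ai_family, addr->ai_socktype, addr->ai_protocol);

                if (fd >= 0)
                {
                    fcntl(fd, F_SETFL, O_NONBLOCK);
                    connect(fd, addr->ai_addr, addr->ai_addrlen);   // normally returns EINPROGRESS

                    fds[fdTotal++] = (struct pollfd){.fd = fd, .events = POLLOUT};
                }

                addr = addr->ai_next;
            }

            // Recheck every attempt still pending, including ones started on earlier iterations
            poll(fds, (nfds_t)fdTotal, ATTEMPT_DELAY_MS);

            for (int fdIdx = 0; fdIdx < fdTotal; fdIdx++)
            {
                if (fds[fdIdx].fd >= 0 && fds[fdIdx].revents != 0)
                {
                    int err = 0;
                    socklen_t errSize = sizeof(err);

                    getsockopt(fds[fdIdx].fd, SOL_SOCKET, SO_ERROR, &err, &errSize);

                    if (err == 0)
                    {
                        // Connected -- close the other pending attempts and return this socket
                        for (int closeIdx = 0; closeIdx < fdTotal; closeIdx++)
                            if (closeIdx != fdIdx && fds[closeIdx].fd >= 0)
                                close(fds[closeIdx].fd);

                        return fds[fdIdx].fd;
                    }

                    // Failed -- close it and mark the slot so poll() ignores it (negative fd)
                    close(fds[fdIdx].fd);
                    fds[fdIdx].fd = -1;
                }
            }
        }

        // Timed out without a successful connection
        for (int closeIdx = 0; closeIdx < fdTotal; closeIdx++)
            if (fds[closeIdx].fd >= 0)
                close(fds[closeIdx].fd);

        return -1;
    }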
This saves about 16KiB in the binary and reduces exported symbols by about 75%. All variables are still exported, as are any functions that are referenced by their pointers or extern'd but never used outside the module where they are defined.
In addition to modest space savings, this should also increase performance a bit since the compiler can simplify calls to these functions, and loading the binary should also be a little faster.
The GCC documentation does not make it clear that visibility can be used with variables, but it certainly makes a difference in the binary size, so something is happening. Other sources on the internet suggest that visibility can be used with variables. Clearly exports are not affected, but there may be some other optimization happening.
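A sketch of the mechanism, assuming the build uses -fvisibility=hidden so that only explicitly-marked symbols are exported (the macro name is an assumption):

    // Only symbols marked default-visible are exported when compiling with -fvisibility=hidden
    #define FN_EXTERN __attribute__((visibility("default")))

    FN_EXTERN void storageExampleList(void);            // exported from the binary
    void storageExampleHelper(void);                    // still callable across modules, not exported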
Meson has a lot of advantages over autoconf/make, primarily in ease-of-use and performance. Make meson the only build system used for testing and building the Debian documentation, but leave the RHEL documentation using autoconf/make for now so it gets some testing.
There seems to be a shortage of arm64 hosts because queue times have been steadily increasing over the last few weeks. It can now take several hours to get an arm64 test queued, which makes it difficult to get development done.
Disable for the time being and hope the resource issue gets resolved in the future.
The leak kind is usually definite but sometimes flaps to possible. For stability purposes, accept any leak kind.
Note that this is a leak in a specific version of libssh2 and not a bug in pgBackRest.
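A hypothetical suppression sketch showing the relevant knob -- match-leak-kinds is set to all so the entry matches whether valgrind classifies the leak as definite or possible (the frame patterns are illustrative):

    {
       libssh2-session-leak
       Memcheck:Leak
       match-leak-kinds: all
       ...
       obj:*libssh2*
    }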
Resume does not work correctly with delta diff/incr backups because the presence of a reference causes it to remove the file, with the idea that it can just be referenced again. This is true for timestamp-based backups, but for deltas all existing files need to be rechecked (which requires a reference).
This is fixable, but not without significant effort and new tests, and it calls into question the usefulness of non-full resumes. For diff/incr, if the file was changed since the prior backup there is a good chance it will be modified again before the resume occurs.
In order to keep this feature as useful as possible for the most valuable case, limit resumes to full backups.
02eea55 added code to load a buffer of data from a file being backed up to detect files that have been truncated to zero after manifest generation. This mechanism can also be used to detect files that have not changed since the prior backup.
If the result of the file copy fits into a single buffer, then the size and checksum can be compared to the prior file before anything gets stored. If the file matches then it is referenced to the file in the prior backup.
The size that can be compared for normal copies is limited by the buffer size, but for block incremental it works with any size of file since there is no output from block incremental when the file is identical.
Infer the size of all WAL segments from the size of the first segment rather than getting info for all segments (up to queue size). If the segments are not the same size then there are larger issues than the WAL queue.
storageListP() returns a list of entries in a path and should not need to stat/head, etc. in order to get more detailed info. This was broken by 75623d4 which failed to set the level correctly.
Set the correct level and update tests.
There's no easy way to directly test for a regression here but the SFTP tests will fail if more detailed info is requested since it would require script changes.
The Perl integration tests were migrated as faithfully as possible, but there was some cruft and a few unit tests that it did not make sense to migrate.
Also remove all Perl code made obsolete by this migration.
All unit, performance, and integration tests are now written in C but significant parts of the test harness remain to be migrated.