For some reason, adding -D_POSIX_C_SOURCE=200112L caused macOS builds to stop working. Combining both flags seems to work fine on all tested systems.
Contributed by Douglas J Hunley.
Including the C module after the headers required for testing meant that headers missing from the C module were not caught when testing that module directly.
The missing headers were caught in general testing, but it is frustrating to get an error in a module that has already passed while testing another module or running CI.
Move the C module include to the very top so missing headers cause immediate failures.
Bug Fixes:
* Remove request for S3 object info directly after putting it. (Reported by Matt Kunkel.)
* Correct archive-get-queue-max to be size type. (Reported by Ronan Dunklau.)
* Add error message when current user uid/gid does not map to a name. (Reported by Camilo Aguilar.)
* Error when --target-action=shutdown specified for PostgreSQL < 9.5.
Improvements:
* Set TCP keepalives on S3 connections. (Suggested by Ronan Dunklau.)
* Reorder info command text output so most recent backup is output last. (Contributed by Cynthia Shang. Suggested by Ryan Lambert.)
* Change file ownership only when required.
* Redact authentication header when throwing S3 errors. (Suggested by Brad Nicholson.)
Admonitions call out places where the user should take special care.
Support added for HTML, PDF, Markdown and help text renderers. XML files have been updated accordingly.
Contributed by Cynthia Shang.
A number of common characters are not allowed in LaTeX without being escaped.
Also convert some HTML-specific codes that are used in the documentation.
Contributed by Cynthia Shang.
Keepalives may help in situations where RST packets are being blocked by a firewall or otherwise do not arrive.
The C code uses select() on all reads so it should never block, but add keepalives just in case.
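As a rough sketch, the keepalive itself is a single setsockopt() call on the connected socket; the wrapper below is illustrative, not the actual pgBackRest implementation.

    #include <sys/socket.h>

    /* Enable TCP keepalives on a connected socket so dead connections are
       detected even when RST packets never arrive. */
    static int
    socketKeepAliveSet(int fd)
    {
        int on = 1;

        return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    }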
Suggested by Ronan Dunklau.
After a stanza-upgrade, backups for the old cluster are displayed until they expire. Cluster info was output newest to oldest, which meant that after an upgrade the most recent backup would no longer be output last.
Update the text output ordering so the most recent backup is always output last.
Contributed by Cynthia Shang.
Suggested by Ryan Lambert.
The info command will only be executed in C if the repository is local, i.e. not located on a remote repository host. S3 is considered "local" in this case.
This is a direct migration from Perl to integrate as seamlessly with the remaining Perl code as possible. It should not be possible to determine if the C version is running unless debug-level logging is enabled.
Contributed by Cynthia Shang.
The infoBackup object is the counterpart to the infoArchive object which encapsulates the archive.info file.
Currently the object is read-only, i.e. it is not possible to create a new or modify an existing backup.info file.
There are a number of constants that will also be used in the infoManifest object, so go ahead and create a module to contain them so they don't need to be moved later.
Contributed by Cynthia Shang.
The previous way worked but was a head-scratcher when reading the code. This cast hopefully makes it a bit more obvious what is going on.
Contributed by Cynthia Shang.
INFO is generally used as the prefix for info file constants so rename these accordingly.
Also follow newly-adopted coding standards for when #define is required for a static String constant.
Contributed by Cynthia Shang.
Some commands (e.g. info) do not take a stanza or the stanza is optional. In that case it is the job of the command to construct the repository path with a stanza as needed.
Update helper functions to omit the stanza from the constructed path when it is NULL.
Contributed by Cynthia Shang.
- Add detail to errors when info files are loaded with incorrect encryption settings.
- Throw FileMissingError rather than FileOpenError when both copies of the info file are missing.
- If one file is present (but fails to load) and the other is missing, then return the error for the file that was present.
Contributed by Cynthia Shang.
This code was generated during testing and it seemed a good idea to keep it. It is only a partial solution since the primary also needs additional configuration to be able to fail back and forth.
Previously chown() would be called even when no ownership changes were required.
In most cases changes are not required and it seems better to perform an extra stat() rather than an extra chown().
Also add unit tests for owner() since there weren't any.
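A minimal sketch of the new pattern (names are illustrative, not the actual implementation):

    #include <sys/stat.h>
    #include <unistd.h>

    /* Call chown() only when the current owner/group differ from the
       requested values, trading an extra stat() for a skipped chown(). */
    static void
    ownerSet(const char *path, uid_t uid, gid_t gid)
    {
        struct stat statBuf;

        if (stat(path, &statBuf) == 0 && (statBuf.st_uid != uid || statBuf.st_gid != gid))
            (void)chown(path, uid, gid);
    }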
The authentication header contains the access key (not the secret key) so don't include it in errors that can be seen at any log level.
Suggested by Brad Nicholson.
This got missed in 1f8931f7 when the test binary was renamed.
Also output call graph along with the flat report. The flat report is generally most useful but it doesn't hurt to have both.
By default the documentation builds pgBackRest from source, but the documentation is also a good way to smoke-test packages.
Allow a package file to be specified by passing --var=package=/path/to/package.ext. This works for Debian and CentOS 6 builds.
Keywords were extremely limited and prevented us from generating multi-version documentation and other improvements.
Replace keywords with an if statement that can evaluate a Perl expression with variable replacement.
Since keywords were used to generate cache keys, add a --key-var parameter to identify which variables should make up the key.
This somehow was not configured as a size option when it was added. It worked, but queue sizes could not be specified in shorthand, e.g. 128GB.
This is not a breaking change because currently configured integer values will be read as bytes.
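For example, the option can now be set in shorthand in the configuration file (the value shown is illustrative):

    [global]
    archive-get-queue-max=128GB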
Reported by Ronan Dunklau.
After a file is copied during backup the size is requested from the storage in case it differs from what was written so that repo-size can be reported accurately. This is useful for situations where compression is being done by the filesystem (e.g. ZFS) and what is stored can differ in size from what was written.
In S3 the reported size will always be exactly what was written so there is no need to check the size and doing so immediately can cause problems because the new file might not appear in list commands. This has not been observed on S3 (though it seems to be possible) but it has been reported on the Swift S3 gateway.
Add a driver capability to determine if size needs to be called after a file is written and if not then simply use the number of bytes written for repo-size.
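A hedged sketch of what such a capability might look like; the names below are illustrative rather than the actual driver interface:

    #include <stdbool.h>

    /* When false the driver guarantees the stored size equals the bytes
       written (e.g. S3), so repo-size can be taken directly from the copy
       without a follow-up info request. */
    typedef struct StorageDriver
    {
        bool featureInfoAfterWrite;
    } StorageDriver;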
Reported by Matt Kunkel.
This allows the documentation to be built more quickly and offline during development when --pre is specified on the command line.
Each host gets a pre-built container with all the execute elements marked pre. As long as the pre elements do not change the container will not need to be rebuilt.
The feature should not be used for CI builds as it may hide errors in the documentation.
The previous error message only showed the last error. In addition, some errors were missed (such as directory permission errors) that could prevent the copy from being checked.
Show both errors below a generic "unable to load" error. Details are now given explaining exactly why the primary and copy failed.
Previously if one file could not be loaded a warning would be output. This has been removed because it is not clear what the user should do in this case. Should they do a stanza-create --force? Maybe the best idea is to automatically repair the corrupt file, but on the other hand that might just spread corruption if pgBackRest makes the wrong choice.
The decryption filter was added in archiveGetFile() and archiveGetCheck() was modified to return the WAL decryption key stored in archive.info. The rest was plumbing.
The mock/archive/1 integration test added encryption to provide coverage for the new code paths while mock/archive/2 dropped encryption to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior.
If InOut filters were placed next to each other then the second filter would never get a NULL input signaling it to flush. This arrangement only worked if the second filter had some other indication that it should flush, such as a decompression filter where the flush is indicated in the input stream.
This is not a live issue because currently no InOut filters are chained together.
This allows CipherBlock to be used as a filter in an IoFilterGroup. The C-style functions used by Perl are now deprecated and should not be used for any new code.
Also add functions to convert between cipher names and CipherType.
Add boolean and one-dimensional list types to jsonToKv().
Add varToJson() and kvToJson() to convert Variants and KeyValues to JSON.
Contributed by Cynthia Shang.
The only change required was to remove the filter that prevented S3 storage from being used. The archive-get command did not require any modification which demonstrates that the storage interface is working as intended.
The mock/archive/3 integration test was modified to run S3 storage locally to provide coverage for the new code paths while mock/stanza/3 was modified to run S3 storage remotely to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior.
TlsClient introduced a non-blocking read which is required to read protocol messages that are linefeed-terminated rather than a known size. However, in many cases the expected number of bytes is known in advance so in that case it is more efficient to have tlsClientRead() block until all the bytes are read.
Add block parameter to all read functions and use it when a blocking read is required. For most read functions this is a noop, i.e. if the read function never blocks then it can ignore the parameter.
In passing, set the log level of storageNew*() functions to debug to expose more high-level I/O operations.
A robust HTTP client with pipelining support and automatic retries.
Using a single object to make multiple requests is more efficient because requests are pipelined whenever possible. Requests are automatically retried when the connection has been closed by the server. Any 5xx response is also retried.
Only the HTTPS protocol is currently supported.
A simple, secure TLS client intended to allow access to services that are exposed via HTTPS. We call it TLS instead of SSL because SSL methods are disabled so only TLS connections are allowed.
This object is intended to be used for multiple TLS connections against a service so tlsClientOpen() can be called each time a new connection is needed. By default, an open connection will be reused for pipelining so the user must be prepared to retry their transaction on a read/write error if the server closes the connection before it can be reused. If this behavior is not desirable then tlsClientClose() may be used to ensure that the next call to tlsClientOpen() will create a new TLS session.
Note that tlsClientRead() is non-blocking unless there are *zero* bytes to be read from the session in which case it will raise an error after the defined timeout. In any case the tlsClientRead()/tlsClientWrite()/tlsClientEof() functions should not generally be called directly. Instead use the read/write interfaces available from tlsClientIoRead()/tlsClientIoWrite().
Test certificates were generated dynamically but there are advantages to using static certificates. For example, it is possible to use the same certificate between container versions. Mostly, it is easier to document the certificates if they are not buried deep in the container code.
The new test certificates are initially intended to be used with the C unit tests but they will eventually be used for integration tests as well.
Two new certificates have been defined. See test/certificate/README.md for details.
The old dynamic certificates will be retained until they are replaced.
The embedded semicolon led to inconsistent semicolons when using the macro and is not our general convention.
Remove embedded semicolons from the macros and add semicolons in usage where they were not present.
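The convention can be illustrated with a hypothetical macro -- the do/while(0) wrapper means the caller's trailing semicolon is both required and safe in all contexts:

    #include <stdio.h>

    /* No embedded semicolon in the definition */
    #define LOG_STEP(message)                                                  \
        do { puts(message); } while (0)

    int main(void)
    {
        LOG_STEP("begin");    /* semicolon supplied at the usage site */
        return 0;
    }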
Add XmlDocument, XmlNode, and XmlNodeList objects as a thin interface layer on libxml2.
This interface is not intended to be comprehensive. Only a few libxml2 capabilities are exposed but more can be added as needed.
S3 key options (repo1-s3-key/repo1-s3-key-secret) were not required which meant that users got an ugly assertion when they were missing rather than a tidy configuration error.
Only the local/remote commands need them to be optional. This is because local/remote commands get all their options from the command line but secrets cannot be passed on the command line. Instead, secrets are passed to the local/remote commands via the protocol for any operation that needs them.
The configuration system allows required to be set per command so use that to improve the error messages while not breaking the local/remote commands.
This allows a C unit test to access data in the code repository that might be useful for testing.
Add testRepoPathSet() to set the repository path.
In passing remove extra whitespace in the TEST_RESULT_VOID() macro.
Bug Fixes:
* Fix issue with archive-push-queue-max not being honored on connection error. (Reported by Lardière Sébastien.)
* Fix static WAL segment size used to determine if archive-push-queue-max has been exceeded.
* Fix error after log file open failure when processing should continue. (Reported by vthriller.)
Features:
* Automatically enable backup checksum delta when anomalies (e.g. timeline switch) are detected. (Contributed by Cynthia Shang.)
Improvements:
* Retry all S3 5xx errors rather than just 500 internal errors. (Suggested by Craig A. James.)
These interfaces previously used the memory context of the object they were associated with and did not have their own destructors.
There are times when it is useful to free the interface without also freeing the underlying object so give IoRead and IoWrite their own memory contexts and destructors.
In passing, fix a comment typo in bufferRead.c.
By default the IoWrite object does not write until the output buffer is full but this is a problem for protocol messages that must be sent in order to get a response.
ioWriteFlush() is not called internally by IoWrite but can be used at any time to immediately write all bytes from the output buffer without closing the IoWrite object.
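A hedged sketch of the request/response pattern that motivates the function -- the prototypes are assumed from the description above, not the exact API:

    typedef struct IoWrite IoWrite;
    typedef struct Buffer Buffer;

    void ioWrite(IoWrite *this, const Buffer *buffer);   /* buffers output */
    void ioWriteFlush(IoWrite *this);                    /* writes buffered bytes now */

    /* Send a protocol request immediately -- without the flush the request
       could sit in the output buffer while we wait for a response that will
       never come. */
    static void
    protocolRequestSend(IoWrite *write, const Buffer *request)
    {
        ioWrite(write, request);
        ioWriteFlush(write);
    }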
The prior message stated that there had been a buffer overrun which is not true since the code prevents that.
In fact, this message means the parameter buffer filled while building the parameter list. Rather than display a partial list we output this message instead.
Also remove !!! which by convention we use as a marker for code that needs attention before it can be committed to master.
These macros provide a convenient way to output debug information in tests.
They are not intended to be left in test code when it is committed to master.
ioReadLine() calls ioRead(), which aggressively tries to fill the output buffer, but this doesn't play well with blocking reads.
Give ioReadLine() an option that tells it to read only what is available. That doesn't mean the function will never block but at least it won't do so by reading too far.
The report HTML generated by lcov is overly verbose and cumbersome to navigate. Since we maintain 100% coverage it's far more interesting to look at what is not covered than what is.
The new report presents all missing coverage on a single page and excludes code that is covered for brevity.
Add HTML tags for table elements.
The strExtra parameter allows adhoc tags to be added to an element for features that can't be implemented with CSS, e.g. colspan.
There are many places (and the number is growing) where a zero-terminated string constant must be transformed into a String object to be usable. This pattern wastes time and memory, especially since the created string is generally used in a read-only fashion.
Define macros to create constant String objects that are initialized at compile time rather than at run time.
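A hedged sketch of how such a macro might work; the real macro names and String internals may differ:

    #include <stddef.h>

    /* Illustrative String struct and macro -- the constant is initialized
       entirely at compile time so no run-time allocation or copy is needed.
       sizeof() includes the terminating zero, so subtract one. */
    typedef struct String
    {
        size_t size;
        const char *buffer;
    } String;

    #define STRING_CONST(literal)                                              \
        {.size = sizeof(literal) - 1, .buffer = (literal)}

    static const String backupInfoFileStr = STRING_CONST("backup.info");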
The storageList() command accepts a regular expression as a filter. This works fine for local filesystems where it is relatively cheap to get a complete list of files and filter them in code. However, for remote filesystems like S3 it can be expensive to fetch a complete list of files only to discard the bulk of them locally.
S3 does not filter on regular expressions but it can accept a static prefix so this function extracts a prefix from a regular expression when possible.
Even a few characters can drastically reduce the amount of data that must be fetched remotely so the function does not try to be too clever. It requires a ^ anchor and stops scanning when the first special character is found.
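A simplified sketch of the extraction under the rules above (^ anchor required, scan stops at the first special character); the function name is illustrative:

    #include <stdlib.h>
    #include <string.h>

    /* Return a malloc'd static prefix for an anchored expression, or NULL
       when no usable prefix exists. */
    static char *
    regExpPrefix(const char *expression)
    {
        if (expression == NULL || expression[0] != '^')
            return NULL;

        size_t size = 0;

        /* Stop scanning at the first special character */
        while (expression[size + 1] != '\0' && strchr(".^$*+?()[]{}|\\", expression[size + 1]) == NULL)
            size++;

        if (size == 0)
            return NULL;

        char *prefix = malloc(size + 1);

        if (prefix != NULL)
        {
            memcpy(prefix, expression + 1, size);
            prefix[size] = '\0';
        }

        return prefix;
    }

So "^archive/db/[0-F]+" would yield the prefix "archive/db/" while an unanchored expression yields no prefix at all.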
Allow buffers to report a lower size than their allocated size. This means a larger buffer can be used to do the work of a smaller buffer without having to create a new buffer and concatenate.
This is useful for blocking I/O where the buffer may be too large for the amount of data that is available to read.
The Wait object accepted a double in the constructor for wait time but used TimeMSec internally. This was done for compatibility with the Perl code.
Instead, use TimeMSec in the Wait constructor and make changes as needed to calling code.
Note that Perl still uses a double for its Wait object so translation is needed in some places. There are no plans to update the Perl code as it will become obsolete.
If an object free() method was called manually when a callback was set then the callback would call free() again. This meant that each free() method had to protect against a subsequent call.
Instead, clear the callback (if present) before calling memContextFree(). This is faster (since there is no unnecessary callback) and removes the need for semaphores to protect against a double free().
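A hedged sketch of the new pattern -- the memory context functions are assumed from the description and the Object type is illustrative:

    typedef struct MemContext MemContext;

    void memContextCallbackClear(MemContext *this);   /* assumed API */
    void memContextFree(MemContext *this);            /* assumed API */

    typedef struct Object
    {
        MemContext *memContext;
    } Object;

    /* Clear the callback first so memContextFree() cannot re-enter
       objectFree() and cause a double free(). */
    static void
    objectFree(Object *this)
    {
        if (this != NULL)
        {
            memContextCallbackClear(this->memContext);
            memContextFree(this->memContext);
        }
    }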
Code generation saved files even when they had not changed, which often caused code generation cascades. So, don't save files unless they have changed.
Use rsync to determine which files have changed since the last test run. The manifest of changed files is saved and not removed until all code generation and builds have completed. If an error occurs the work will be redone on the next run.
The eventual goal is to do all the builds from the test/repo directory created by rsync but for now it is only used to track changes.
The contents were already preserved between tests in a single test.pl run but for a separate execution the entire project had to be built from scratch, which was getting slower as we added code.
Save the important build flags in a file so the new execution knows whether the build contents can be reused.
Mounting/unmounting tmpfs on /home/[user]/test takes time, forces at least 3GB of memory to be available for tests, and makes it harder to preserve data between tests.
Instead, move mounting of tmpfs to the Vagrantfile and add it to fstab so it survives reboots.
There are a number of cases where a checksum delta is more appropriate than the default time-based delta:
* Timeline has switched since the prior backup
* File timestamp is older than recorded in the prior backup
* File size changed but timestamp did not
* File timestamp is in the future compared to the start of the backup
* Online option has changed since the prior backup
A practical example is that checksum delta will be enabled after a failover to standby due to the timeline switch. In this case, timestamps can't be trusted and our recommendation has been to run a full backup, which can impact the retention schedule and requires manual intervention.
Now, a checksum delta will be performed if the backup type is incr/diff. This means more CPU will be used during the backup but the backup size will be smaller and the retention schedule will not be impacted.
Contributed by Cynthia Shang.
We were already retrying 500 errors but 503 (rate-limiting) errors were not being retried and would cause an instant failure which aborted the command.
There are only two 5xx errors currently implemented by S3, but instead of adding 503 simply retry all 5xx errors. This is consistent with the HTTP definition of this error class, "the server failed to fulfill an apparently valid request."
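The class check itself is trivial -- a sketch with an illustrative name:

    #include <stdbool.h>

    /* Any 5xx response (500, 503, ...) indicates a server-side failure that
       is worth retrying. */
    static bool
    httpResponseCodeRetryable(unsigned int responseCode)
    {
        return responseCode / 100 == 5;
    }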
Suggested by Craig A. James.
This calculation was missed when the WAL segment size was made dynamic in preparation for PostgreSQL 11.
Fix the calculation by checking the actual WAL file sizes instead of using an estimate based on WAL segment size. This is more accurate because it takes into account .history and .backup files, which are smaller. Since the calculation is done in the async process the additional processing time should not adversely affect performance.
Remove the PG_WAL_SIZE constant and instead use local constants where the old value is still required. This is only the case for some tests and PostgreSQL 8.3 which does not provide a way to get the WAL segment size from pg_control.
If an error occurred while acquiring a lock on a remote server the error would be reported correctly, but the queue max detection code was not reached. The tests failed to detect this because they fixed the connection before queue max was exceeded, allowing the code to be reached.
Move the queue max code before the lock so it will run even when remote connections are not working. This means that no attempt will be made to transfer WAL once queue max has been exceeded, but it makes it much more likely that the code will be reached without error.
Update tests to continue erroring up to the point where queue max is exceeded.
Reported by Lardière Sébastien.
The C code was warning on failure and continuing but the Perl logging code was never updated with the same feature.
Rather than add the feature to Perl, just disable file logging if the log file cannot be opened. Log files are always opened by C first, so this will eliminate the error in Perl.
Reported by vthriller.
The existing tests were not adequate to ensure the history was being added in the correct order when some entries were loaded from a file and others added with infoPgAdd().
Contributed by Cynthia Shang.
The InfoPg object was partially modified in 960ad732 to place the current history item in position 0, but infoPgDataCurrent() didn't get updated correctly.
Remove this->indexCurrent and make the current position always equal 0. Use the new lstInsert() function when adding new history items via infoPgAdd(), but continue to use lstAdd() when loading from a file for efficiency.
This does not appear to be a live bug because infoPgDataCurrent() and infoPgAdd() are not yet used in any production code. The archive-get command is the only C code using InfoPG and it always looks at the entire list of items rather than just the current item.
Suggested by Cynthia Shang.
Bug Fixes:
* Fix missing URI encoding in S3 driver. (Reported by Dan Farrell.)
* Fix incorrect error message for duplicate options in configuration files. (Reported by Jesper St John.)
* Fix incorrectly reported error return in info logging. A return code of 1 from the archive-get was being logged as an error message at info level but otherwise worked correctly.
Features:
* Add checksum delta for incremental backups which uses checksums rather than timestamps to determine if files have changed. (Contributed by Cynthia Shang.)
* PostgreSQL 11 support, including configurable WAL segment size.
Improvements:
* Ignore all files in a linked tablespace directory except the subdirectory for the current version of PostgreSQL. Previously an error would be generated if other files were present and not owned by the PostgreSQL user.
* Improve info command to display the stanza cipher type. (Contributed by Cynthia Shang. Suggested by Douglas J Hunley.)
* Improve support for special characters in filenames.
* Allow delta option to be specified in the pgBackRest configuration file. (Contributed by Cynthia Shang.)
PostgreSQL 11 RC1 support was tested in 9ae3d8c46 when the u18 container was rebuilt. Nothing substantive changed after RC1 so pgBackRest is ready for PostgreSQL 11 GA.
The standard npm packages on Ubuntu 18.04 suddenly required libssl1.0 which broke the pgbackrest package builds. Installing nodejs from deb.nodesource.com seems to work fine with standard libssl.
This package is required by ScalityS3 which is used for local S3 testing.
When the filter interface internals were split out into a new header file the documentation was not moved as it should have been. Additionally some functions which should have been moved were left behind.
Move the documentation and functions to filter.internal.h and add more documentation. Filters are a tricky subject so the more documentation the better.
Also add documentation for the user-facing filter functions in filter.h.
Allow a single linefeed-terminated line to be read or written. This is useful for various protocol implementations, including HTTP and pgBackRest's protocol.
On read the maximum line size is limited to buffer-size to prevent runaway memory usage in case a linefeed is not found. This seems fine for HTTP but we may need to revisit this decision when implementing the pgBackRest protocol. Another option would be to increase the minimum buffer size (currently 16KB).
This test has been flapping since 9b9396c7. It seems to be some kind of timing issue since all integration tests pass and this unit passes on all other VMs. It only happens on Travis and is not reproducible in any development environment that we have tried.
For now, disable the test since the constant flapping is causing major delays in testing and quite a bit of time has been spent trying to identify the root cause. We are actively developing these tests and hope the issue will be identified during the course of normal development.
A number of improvements were made to the tests while searching for this issue. While none of them helped, it makes sense to keep the improvements.
Duplicating a non-multi-value option was not throwing the correct message when the option was a boolean.
The reason was that the option was being validated as a boolean before the multi-value check was being done. The validation code assumed it was operating on a string but was instead operating on a string list causing an assertion to fail.
Since it's not safe to do the multi-value check so late, move it up to the command-line and configuration file parse phases instead.
Reported by Jesper St John.
Previously this was done in two separate places by checking if an option was type hash or list.
Bad enough that it was in two places, but an upcoming bug fix will add another instance so make it a function.
There doesn't seem to be any need to implement this as a filter since current use cases (S3 authentication) work on small datasets.
So, use the single function method provided by OpenSSL for simplicity.
This constructor creates a Buffer object directly from a zero-terminated string. The old way was to create a String object first, then convert that to a Buffer using bufNewStr().
Updated in all places that used the old pattern.
PostgreSQL 11 introduces configurable WAL segment sizes, from 1MB to 1GB.
There are two areas that needed to be updated to support this: building the archive-get queue and checking that WAL has been archived after a backup. Both operations require the WAL segment size to properly build a list.
Checking the archive after a backup is still implemented in Perl and has an active database connection, so just get the WAL segment size from the database.
The archive-get command does not have a connection to the database, so get the WAL segment size from pg_control instead. This requires a deeper inspection of pg_control than has been done in the past, so it seemed best to copy the relevant data structures from each version of PostgreSQL and build a generic interface layer to address them. While this approach is a bit verbose, it has the advantage of being relatively simple, and can easily be updated for new versions of PostgreSQL.
Since the integration tests generate pg_control files for testing, teach Perl how to generate files with the correct offsets for both 32-bit and 64-bit architectures.
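A hedged sketch of what the generic interface layer yields; the field names are illustrative and the per-version structs are elided:

    #include <stdint.h>

    /* Version-independent info assembled from the per-version pg_control
       structs -- illustrative, not the actual interface. */
    typedef struct PgControl
    {
        unsigned int version;       /* PostgreSQL version, e.g. 110000 */
        uint64_t systemId;
        uint32_t pageSize;
        uint32_t walSegmentSize;    /* 1MB to 1GB in PostgreSQL 11 */
    } PgControl;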
Unsecured, passwordless SSH can be a scary thing. If an attacker gains access to one system they can easily hop to other systems.
Add documentation on how to use the command parameter in authorized_keys to limit ssh to running a single command, pgbackrest. There is more that could be done for security but this likely addresses most needs.
Also change references to "trusted ssh" to "passwordless ssh" since this seems more correct.
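One possible authorized_keys entry following this approach (the path and key are placeholders):

    no-agent-forwarding,no-X11-forwarding,no-port-forwarding,command="/usr/bin/pgbackrest ${SSH_ORIGINAL_COMMAND#* }" ssh-rsa AAAA... pgbackrest@repository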
Suggested by Stephen Frost, Magnus Hagander.
Use checksums rather than timestamps to determine if files have changed. This is useful in cases where the timestamps may not be trustworthy, e.g. when performing an incremental after failing over to a standby.
If checksum delta is enabled then checksums will be used for verification of resumed backups, even if they are full. Resumes have always used checksums to verify the files in the repository, enabling delta performs checksums on the database files as well.
Note that the user must manually enable this feature in cases where it would be useful or just keep it enabled all the time. A future commit will address automatically enabling the feature in cases where it seems likely to be useful.
Contributed by Cynthia Shang.
This option was previously allowed on the command-line only for no particular reason that we could determine.
Being able to specify it in the config file seems like a good idea and won't change current usage.
Contributed by Cynthia Shang.
Apparently we never needed to run this function remotely.
It will be needed by the backup checksum delta feature, so implement it now.
Contributed by Cynthia Shang.
The test to make sure that some files (e.g. pg_control) do not get removed during the backup was lost during the storage refactor committed at de7fc37f.
This did not impact the integrity of the backups, but bring it back since it is a nice sanity check.
Contributed by Cynthia Shang.
As we add storage drivers it's important to keep the tests for each completely separate. Rather than have three tests for each driver, standardize on having a single test unit for each driver.
This is a workaround for inefficient handling of many setjmps in gcc >= 4.9. Setjmp is used in all error handling, but in the unit tests each test macro contains an error handling block so they add up pretty quickly for large unit tests.
Enabling -ftree-coalesce-vars in affected versions reduces build time and memory requirements by nearly an order of magnitude. Even so, compiles are much slower than gcc <= 4.8.
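The workaround is just an extra compiler flag on affected versions, e.g.:

    gcc -O2 -ftree-coalesce-vars -c test.c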
We submitted a bug for this at: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87316
Which was marked as a duplicate of: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63155
For read-only repositories the Posix and CIFS drivers behave exactly the same. Since that's all we support in C right now it's valid to treat them as the same thing. An assertion has been added to remind us to add the CIFS driver before allowing the repository to be writable.
Mostly we want to make sure that the C code does not blow up when the repository type is CIFS.
Previously it was the responsibility of the individual tests to clean up after themselves. Now the test harness does the cleanup automatically.
This means that some paths/files need to be recreated with each run but that doesn't happen very often.
An attempt has been made to remove all redundant cleanup code but it's hard to know if everything has been caught. No issues will be caused by anything that was missed, but they will continue to chew up time in the tests.
Storing the expect log (created by common/harnessLog) in the regular test directory was not ideal. It showed up in tests and made it difficult to clear the test directory between each run.
Move the expect log to a purpose-built directory one level up so it does not interfere with regular testing.
These are separated the same way in the Perl code where the remote storage driver is located in the Protocol module. However, in the C code the intention is to implement the remote storage driver as a regular driver in the storage layer rather than making a special case out of it.
So, merge the storage helpers. This also has the benefit of making the code a bit simpler.
Also separate storageSpool() and storageSpoolWrite() to make it clearer which operations require write access and to maintain consistency with the other storage helper functions.
If the total bytes read from the expect log file was 0 then the last byte of whatever was in memory before harnessLogBuffer would be set to 0.
On 32-bit systems this manifested as the high-order byte of a pointer being cleared, and wackiness (in the form of segfaults) ensued.
Fixed parameter constructors made adding new interface functions a burden, so we switched to using structs to define interfaces in the storage module at c49eaec7.
While propagating this pattern to the IO interfaces it became obvious that the existing variable parameter function pattern (begun in the storage module) was more succinct and consistent with the existing code.
So, use variable parameter functions to define all interfaces. This assumes that the non-interface parameters will be fixed, which seems reasonable for low-level code.
C or Perl coverage tests can now be run on any VM provided a recent enough version of Devel::Cover or lcov is available.
For now, leave u18 as the only VM to run coverage tests due to some issues with older versions of lcov.
The external storage interfaces (Storage, StorageFileRead, etc.) have been stable for a while, but internally they were calling the posix driver functions directly.
Create driver interfaces for storage, fileRead, and fileWrite and remove all references to the posix driver outside storage/driver/posix (with the exception of a direct call to pathRemove() in Perl LibC).
Posix is still the only available driver so more adjustment may be needed, but this should represent the bulk of the changes.
The posix driver was developed over time and the naming is not very consistent.
Rename the files and functions to work well with other drivers and generally favor longer names since the driver functions are seldom (eventually never) used outside the driver itself.
The Storage object represents some optional parameters as negated if the default is true. This allows sensible defaults without having to specify most optional parameters.
However, there's no need to propagate this down to functions that require all parameters to be passed -- it makes the code and logging more confusing. Rename the parameters and update logic to remove negations.
Previously, debug log functions had to handle NULLs and truncate output to the available buffer size. This was verbose for both coding and testing.
Instead, create a function/macro combination that allows log functions to return a simple String object. The wrapper function takes care of the memory context, handles NULLs, and truncates the log string based on the available buffer size.
The archive-get command will only be executed in C if the repository is local, unencrypted, and type posix or cifs. Admittedly a limited use case, but this is just the first step in migrating the archive-get command entirely into C.
This is a direct migration from the Perl code (including messages) to integrate as seamlessly with the remaining Perl code as possible. It should not be possible to determine if the C version is running unless debug-level logging is enabled.
The lock is now released before the fork and reacquired after the fork so the parent process no longer needs to worry about clearing the lock.
This is the same locking mechanism that will be used once archive-get-async is exec'd as a separate command, so introduce it now to simplify testing.
The info messages were spread around and logged differently based on the execution path and in some cases logged nothing at all.
Temporarily track the async server status with a flag so that info messages are not output in the async process. The async process will be refactored as a separate command to be exec'd in a future commit.
% characters caused issues in backup/restore due to filenames being appended directly into a format string.
Reserved XML characters (<>&') caused issues in the S3 driver due to improper escaping.
Add a file with all common special characters to regression testing.
File names with uncommon characters (e.g. @) caused authentication failures due to S3 encoding them correctly while the S3 driver did not.
Reported by Dan Farrell.
By default Valgrind does not exit with an error code when a non-fatal error is detected, e.g. unfreed memory. Use the --error-exitcode option to enable this behavior.
Update some minor issues discovered in the tests as a result. Luckily, no issues were missed in the core code.
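For example (the exit code value is illustrative):

    valgrind --error-exitcode=25 --leak-check=full ./test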
Basic functions to detect the presence of stanza or all stop files and error when they are present.
The functionality to detect stop files without error was not migrated. This functionality is only used by stanza-delete and will be migrated with that command.
Implement rules for generating paths within the archive part of the repository. Add a helper function, storageRepo(), to create the repository storage based on configuration settings.
The repository storage helper is located in the protocol module because it will support remote file systems in the future, just as the Perl version does.
Also, improve the existing helper functions a bit using string functions that were not available when they were written.
Use JSON code now that it is available and remove temporary hacks used to get things working initially.
Use passed storage objects rather than using storageLocal(). All storage objects in C are still local but this won't always be the case.
Also, move Postgres version conversion functions to postgres/info.c since they have no dependency on the info objects and will likely be useful elsewhere.
A return code of 1 from the archive-get was being logged as an error message at info level but otherwise worked correctly.
Also improve info messages when an archive segment is or is not found.
The Perl functions do so and the integration tests rely on checking for these errors. This has been exposed as more functionality is moved into C.
Passing the errors types is now a bit complicated so instead use a flag to determine which errors to throw.
Previously an error would be generated if other files were present and not owned by the PostgreSQL user. This hasn't been a big deal in practice but it could cause issues.
Also add tests to make sure the same logic applies with links to files, i.e. all other files in the directory should be ignored. This was actually working correctly, but there were no tests for it before.
Bug Fixes:
* Fix issue where relative links in $PGDATA could be stored in the backup with the wrong path. This issue did not affect absolute links and relative tablespace links were caught by other checks. (Reported by Cynthia Shang.)
* Remove incompletely implemented online option from the check command. Offline operation runs counter to the purpose of this command, which is to check if archiving and backups are working correctly. (Reported by Jason O'Donnell.)
* Fix issue where errors raised in C were not logged when called from Perl. pgBackRest properly terminated with the correct error code but lacked an error message to aid in debugging. (Reported by Douglas J Hunley.)
* Fix issue when a boolean option (e.g. delta) was specified more than once. (Reported by Yogesh Sharma.)
Features:
* Allow any option to be set in an environment variable. This includes options that previously could only be specified on the command line, e.g. stanza, and secret options that could not be specified on the command line, e.g. repo1-s3-key-secret.
* Exclude temporary and unlogged relation (table/index) files from backup. Implemented using the same logic as the patches adding this feature to PostgreSQL, 8694cc96 and 920a5e50. Temporary relation exclusion is enabled in PostgreSQL ≥ 9.0. Unlogged relation exclusion is enabled in PostgreSQL ≥ 9.1, where the feature was introduced. (Contributed by Cynthia Shang.)
* Allow arbitrary directories and/or files to be excluded from a backup. Misuse of this feature can lead to inconsistent backups so read the --exclude documentation carefully before using. (Reviewed by Cynthia Shang.)
* Add log-subprocess option to allow file logging for local and remote subprocesses.
* PostgreSQL 11 Beta 3 support.
Improvements:
* Allow zero-size files in backup manifest to reference a prior manifest regardless of timestamp delta. (Contributed by Cynthia Shang.)
* Improve asynchronous archive-get/archive-push performance by directly checking status files. (Contributed by Stephen Frost.)
* Improve error message when a command is missing the stanza option. (Suggested by Sarah Conway.)
Relative link paths were being combined with the paths of previous links (relative or absolute) due to the $strPath variable being modified in the current iteration rather than simply being passed to the next level of recursion.
This issue did not affect absolute links and relative tablespace links were caught by other checks, though the error was confusing.
Reported by Cynthia Shang.
Prior to this commit, an expression was used to search the spool directory for ok/error files for a specific WAL segment. This involved setting up a regular expression and using opendir/readdir.
Instead, directly probe for the status files, checking directly if a '.ok' or '.error' file exists, avoiding the regular expression and eliminating the directory scan.
Only the two files now probed for could ever have matched the provided regular expression, and it is unlikely that many additional files will be added, so this is a good improvement and optimization with little downside.
Contributed by Stephen Frost.
Offline operation runs counter to the purpose of this command, which is to check if archiving and backups are working correctly.
Reported by Jason O'Donnell.
Contributor names have always been presented in the release notes exactly as given, but we tried to assign internal IDs based on last/first name which can be hard to determine and ultimately doesn't make sense.
Inspired by Christophe Pettus' PostgresOpen 2017 talk, "Human Beings Do Not Have a Primary Key".
Implemented using the same logic as the patches adding this feature to PostgreSQL, 8694cc96 and 920a5e50. Temporary relation exclusion is enabled in PostgreSQL ≥ 9.0. Unlogged relation exclusion is enabled in PostgreSQL ≥ 9.1, where the feature was introduced.
Contributed by Cynthia Shang.
This includes PostgreSQL installation, which had previously been included in the documentation. This approach produces faster builds and there is no need for us to document PostgreSQL installation.
This allows setting the test log level independently from the general test harness setting, but currently only works for the C tests. It is useful for seeing log output from functions on the console while a test is running.
common/harnessLog was not ideally suited for general testing and made all the tests quite awkward. Instead, move all code used to test the common/log module into the logTest module and repurpose common/harnessLog to do log expect testing for all other tests in a cleaner way.
Add a few exceptions for config testing since the log levels are reset by default in config/parse.
This is more efficient overall and allows the caller to specify how many bytes will be read on each call. Reads are appended if the buffer already contains data but the buffer size will never increase.
Allow Buffer object "used size" to be different than "allocated size". Add functions to manage used size and remaining size and update automatically when possible.
Use strtoll() instead of sprintf() for conversion. Also use available integer min/max constants rather than hard-coded values.
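A sketch of the conversion; the function name is illustrative:

    #include <errno.h>
    #include <stdlib.h>

    /* Convert a zero-terminated string to int64. On overflow strtoll()
       returns LLONG_MIN/LLONG_MAX and sets errno to ERANGE; also reject
       empty input and trailing garbage. */
    static long long
    zToInt64(const char *value)
    {
        char *endPtr = NULL;
        errno = 0;

        long long result = strtoll(value, &endPtr, 10);

        if (errno != 0 || endPtr == value || *endPtr != '\0')
            return 0;    /* real code would throw an error here */

        return result;
    }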
Reviewed by Stephen Frost.
Suggested by Stephen Frost.
IMPORTANT NOTE: This release fixes a critical bug in the backup resume feature. All resumed backups prior to this release should be considered inconsistent. A backup will be resumed after a prior backup fails, unless resume=n has been specified. A resumed backup can be identified by checking the backup log for the message "aborted backup of same type exists, will be cleaned to remove invalid files and resumed". If the message exists, do not use this backup or any backup in the same set for a restore and check the restore logs to see if a resumed backup was restored. If so, there may be inconsistent data in the cluster.
Bug Fixes:
* Fix critical bug in resume that resulted in inconsistent backups. A regression in v0.82 removed the timestamp comparison when deciding which files from the aborted backup to keep on resume. See note above for more details. (Reported by David Youatt, Yogesh Sharma, Stephen Frost.)
* Fix error in selective restore when only one user database exists in the cluster. (Fixed by Cynthia Shang. Reported by Nj Baliyan.)
* Fix non-compliant ISO-8601 timestamp format in S3 authorization headers. AWS and some gateways were tolerant of space rather than zero-padded hours while others were not. (Fixed by Andrew Schwartz.)
Features:
* PostgreSQL 11 Beta 2 support.
Improvements:
* Improve the HTTP client to set content-length to 0 when not specified by the server. S3 (and gateways) always set content-length or transfer-encoding but HTTP 1.1 does not require it and proxies (e.g. HAProxy) may not include either. (Suggested by Adam K. Sumner.)
* Set search_path = 'pg_catalog' on PostgreSQL connections. (Suggested by Stephen Frost.)
A regression in v0.82 removed the timestamp comparison when deciding which files from the aborted backup to keep on resume. All resumed backups should be considered inconsistent. A resumed backup can be identified by checking the log for the message "aborted backup of same type exists, will be cleaned to remove invalid files and resumed".
Reported by David Youatt, Yogesh Sharma, Stephen Frost.
S3 (and gateways) always set content-length or transfer-encoding but HTTP 1.1 does not require it and proxies (e.g. HAProxy) may not include either.
Suggested by Adam K. Sumner.