This was not being caught because the integration tests for S3 were running remotely and going through the Perl code rather than the new C code.
Implement the exists method for the S3 driver and add tests to prevent a regression.
Reported by mibiio.
The check to verify that pg-path and data_directory are equal was not working because pg-path was getting overwritten with data_directory before validation took place.
Reported by James Chanco Jr.
This already worked in reverse, but this case is needed when a command that only uses protocol-timeout (e.g. info) calls a remote process where protocol-timeout and db-timeout can be set. If protocol-timeout was set to less than the default db-timeout then an error resulted.
Bug Fixes:
* Fix issue with multiple async status files causing a hard error. (Reported by Vidhya Gurumoorthi, Joe Ayers, Douglas J Hunley.)
Improvements:
* The info command is implemented entirely in C.
* Simplify info command text message when no stanzas are present by replacing the repository path with "the repository".
* Add _DARWIN_C_SOURCE flag to Makefile for MacOS builds. (Contributed by Douglas J Hunley.)
* Update address lookup in C TLS client to use modern methods. (Suggested by Bruno Friedmann.)
* Include Posix-compliant header for strcasecmp() and fd_set. (Suggested by ucando.)
This prevented packages from being passed to the documentation unless they were in the /backrest directory on the host.
Also make the local path /pgbackrest instead of the deprecated /backrest.
Reported by Heath Lord.
Rather than create _P/_PP variants for every type that needs to pass/return pointers, create FUNCTION_*_P/PP() macros that will properly pass or return any single/double pointer types.
There remain a few unresolved edge cases such as CHARPY but this handles the majority of types well.
This parameter was always useless but commit 7333b630 removed all references to it so remove the parameter at all call sites as well.
The original intention was probably to allow logging of TEST return values but that never happened.
Rather than create a CONST_ variant for every type that needs to be returned const, create a FUNCTION_LOG_RETURN_CONST() macro that will return any type as const.
The string object was reallocating memory with every concatenation which is not very efficient. This is especially true for JSON rendering which does a lot of concatenations.
Instead allocate a pool of extra memory on the first concatenation (50% of size) to be used for future concatenations and reallocate when needed.
Also add a 1GB size limit to ensure that there are no overflows.
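A minimal sketch of the strategy described above, with illustrative field and function names rather than the actual String internals:

    #include <stdlib.h>

    typedef struct String
    {
        size_t size;                                    // bytes in use
        size_t extra;                                   // preallocated pool for future appends
        char *buffer;
    } String;

    #define STRING_SIZE_MAX ((size_t)(1024 * 1024 * 1024))  // 1GB limit to prevent overflow

    static void strGrow(String *this, size_t grow)
    {
        if (this->size + grow > STRING_SIZE_MAX)
            abort();                                    // real code throws a proper error

        if (grow > this->extra)
        {
            // Reallocate with 50% of the new size as a pool for future concatenations
            this->extra = (this->size + grow) / 2;
            this->buffer = realloc(this->buffer, this->size + grow + this->extra + 1);
        }
        else
            this->extra -= grow;                        // consume from the preallocated pool

        this->size += grow;
    }

With this scheme a long run of concatenations (as in JSON rendering) triggers only a logarithmic number of reallocations instead of one per append.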
Multiple status files were being created by asynchronous archiving if a high-level error occurred after one or more WAL segments had already been transferred successfully. Error files were being written for every file in the queue regardless of whether it had already succeeded. To fix this, add an option to skip writing error files when an ok file already exists.
There are other situations where both files might exist (various fsync and filesystem error scenarios) so it seems best to retry in the case that multiple status files are found rather than throwing a hard error (which then means that archiving is completely stuck). In the case of multiple status files, a warning will be logged to alert the user that something unusual is happening and the command will be retried.
Reported by fpa-postgres, Joe Ayers, Douglas J Hunley.
gcc has apparently merged this function into string.h but Posix specifies that it should be in strings.h. FreeBSD, at least, is sticking to the standard.
In the long run it might be better to implement our own strcasecmp() function but for now just add the header.
Suggested by ucando.
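The fix amounts to including the headers where Posix says these symbols live, roughly:

    #include <strings.h>                                // Posix home of strcasecmp()
    #include <sys/select.h>                             // Posix home of fd_set

    // Case-insensitive comparison now compiles on FreeBSD as well as Linux
    static int nameEq(const char *a, const char *b)
    {
        return strcasecmp(a, b) == 0;
    }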
The implementation using gethostbyname() was only intended to be used during prototyping but was forgotten when the code was finalized.
Replace it with getaddrinfo(), which is more modern and supports IPv6.
Suggested by Bruno Friedmann.
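A hedged sketch of the modern pattern (the wrapper name, host, and port are examples, not what the client hard-codes):

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>

    // Resolve a host with getaddrinfo(), which supports both IPv4 and IPv6
    static struct addrinfo *addressLookup(const char *host, const char *port)
    {
        struct addrinfo hints;
        struct addrinfo *result = NULL;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;                    // IPv4 or IPv6
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &result) != 0)
            return NULL;                                // real code raises an error

        return result;                                  // caller frees with freeaddrinfo()
    }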
Rename FUNCTION_DEBUG_* macros to FUNCTION_LOG_* to more accurately reflect what they do. Further rename FUNCTION_DEBUG_RESULT* macros to FUNCTION_LOG_RETURN* to make it clearer that they return from the function as well as logging. Leave FUNCTION_TEST_* macros as they are.
Consolidate the various ASSERT* macros into a single ASSERT macro that is always compiled out of production builds. It was difficult to figure out when an assert would be checked with all the different types in play. When ASSERTs are compiled in they will always be checked regardless of the log level -- tying these two concepts together was not a good idea.
The C info code has already been committed but this commit wires it into main.
Also remove the info Perl code and tests since they are no longer called.
This is a partial implementation of remote storage with just enough functionality to get the info command working. The client is written in C but the server is still in Perl, which limits progress until a C server is written.
This is a complete protocol client implementation in C.
Currently there is no C server implementation so the C client is talking to a Perl server. This won't work very long, though, because the protocol format, even though it is JSON, has a lot of language-specific structure. While it would be possible to maintain compatibility between C and Perl it's probably not worth the effort in the long run.
Just as in Perl there are helper functions to make constructing protocol objects easier. Currently only repository remotes are supported.
Executes a child process and allows the calling process to communicate with it using read/write io.
This object is specially tailored to implement the protocol layer and may or may not be applicable to general-purpose execution.
Parameters for the local/remote commands are based on parameters that are passed to the current command.
Generate parameters for the new command based on the intersection of parameters between the current command and the command to be executed.
General i/o objects for reading and writing file descriptors, in particular those that can block. In other words, these are not generally to be used with file descriptors for actual files, but rather pipes, sockets, etc.
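In plain Posix terms the pattern looks something like this (a sketch, not the actual IoRead/IoWrite interface):

    #include <sys/select.h>
    #include <unistd.h>

    // Wait up to timeoutSec for the descriptor to become readable, then read.
    // Intended for pipes and sockets where read() may otherwise block.
    static ssize_t readWithTimeout(int fd, void *buffer, size_t size, int timeoutSec)
    {
        fd_set readSet;
        struct timeval timeout = {.tv_sec = timeoutSec, .tv_usec = 0};

        FD_ZERO(&readSet);
        FD_SET(fd, &readSet);

        if (select(fd + 1, &readSet, NULL, NULL, &timeout) <= 0)
            return -1;                                  // timeout or error

        return read(fd, buffer, size);
    }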
Replace the repository path with just "the repository". The path is not important in this context and it is clearer to state where the stanzas are missing from.
The Perl code has a tendency to generate absolute paths even when they are not needed. This change helps the C and Perl storage work together via the protocol layer.
The C storage object strives to use rules whenever possible instead of generating absolute paths. This change helps the C and Perl storage work together via the protocol layer.
The prior behavior was to throw an exception but this was not very helpful when something unexpected happened. Better to at least emit the error message even if the error code is not very helpful.
There were some small differences in ordering and how the C version handled missing directories. It may be that the C version is more consistent, but for now it is more important to be compatible with the Perl version.
These differences were missed because the C info command was not wired into main.c so it was not being tested in regression. This commit does not fix the wiring issue because there will likely be a release soon and it is too big a change to put in at the last moment.
Casting to int caused large values to be slightly inaccurate so cast to uint64_t instead.
Also, use multiplication where possible since the compiler should precompute multiplied values.
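The issue in miniature:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int sizeGb = 8;

        // 32-bit arithmetic wraps for large values...
        uint64_t wrong = sizeGb * 1024u * 1024u * 1024u;            // wraps to 0

        // ...so widen to uint64_t before multiplying
        uint64_t right = (uint64_t)sizeGb * 1024 * 1024 * 1024;     // 8589934592

        printf("%llu vs %llu\n", (unsigned long long)wrong, (unsigned long long)right);
        return 0;
    }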
SIGPIPE immediately terminates the process but we would rather catch the EPIPE error and gracefully shutdown.
Ignore SIGPIPE and throw the EPIPE error via normal error handling.
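The gist of the change, sketched in plain Posix (function names are illustrative):

    #include <errno.h>
    #include <signal.h>
    #include <unistd.h>

    // Ignore SIGPIPE once at startup so a closed pipe/socket no longer
    // terminates the process
    static void signalInit(void)
    {
        signal(SIGPIPE, SIG_IGN);
    }

    // Writes to a closed peer now fail with errno == EPIPE, which can be
    // thrown through normal error handling
    static ssize_t writeChecked(int fd, const void *buffer, size_t size)
    {
        ssize_t result = write(fd, buffer, size);

        if (result == -1 && errno == EPIPE)
            return -1;                                  // real code throws here

        return result;
    }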
For reasons that are unclear, adding -D_POSIX_C_SOURCE=200112L caused MacOS builds to stop working. Combining it with _DARWIN_C_SOURCE seems to work fine on all tested systems.
Contributed by Douglas J Hunley.
Including the C module after the headers required for testing meant that if headers were missing from the C module they were not caught while directly testing the C module.
The missing headers were caught in general testing, but it is frustrating to get an error in a module that has already passed while testing another module or running CI.
Move the C module include to the very top so missing headers cause immediate failures.
Bug Fixes:
* Remove request for S3 object info directly after putting it. (Reported by Matt Kunkel.)
* Correct archive-get-queue-max to be size type. (Reported by Ronan Dunklau.)
* Add error message when current user uid/gid does not map to a name. (Reported by Camilo Aguilar.)
* Error when --target-action=shutdown specified for PostgreSQL < 9.5.
Improvements:
* Set TCP keepalives on S3 connections. (Suggested by Ronan Dunklau.)
* Reorder info command text output so most recent backup is output last. (Contributed by Cynthia Shang. Suggested by Ryan Lambert.)
* Change file ownership only when required.
* Redact authentication header when throwing S3 errors. (Suggested by Brad Nicholson.)
Admonitions call out places where the user should take special care.
Support added for HTML, PDF, Markdown and help text renderers. XML files have been updated accordingly.
Contributed by Cynthia Shang.
A number of common characters are not allowed in latex without being escaped.
Also convert some HTML-specific codes that are used in the documentation.
Contributed by Cynthia Shang.
Keepalives may help in situations where RST packets are being blocked by a firewall or otherwise do not arrive.
The C code uses select on all reads so it should never block, but add keepalives just in case.
Suggested by Ronan Dunklau.
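A sketch of enabling keepalives on a connected socket (the probe timing value is illustrative, not what pgBackRest sets):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static void socketKeepaliveSet(int fd)
    {
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    #ifdef TCP_KEEPIDLE
        int idle = 60;                                  // seconds of idle before first probe
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    #endif
    }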
After a stanza-upgrade backups for the old cluster are displayed until they expire. Cluster info was output newest to oldest which meant after an upgrade the most recent backup would no longer be output last.
Update the text output ordering so the most recent backup is always output last.
Contributed by Cynthia Shang.
Suggested by Ryan Lambert.
The info command will only be executed in C if the repository is local, i.e. not located on a remote repository host. S3 is considered "local" in this case.
This is a direct migration from Perl to integrate as seamlessly with the remaining Perl code as possible. It should not be possible to determine if the C version is running unless debug-level logging is enabled.
Contributed by Cynthia Shang.
The infoBackup object is the counterpart to the infoArchive object which encapsulates the archive.info file.
Currently the object is read-only, i.e. it is not possible to create a new or modify an existing backup.info file.
There are a number of constants that will also be used in the infoManifest object so go ahead and create a module to contain them so they don't need to be moved later.
Contributed by Cynthia Shang.
The previous way worked but was a head-scratcher when reading the code. This cast hopefully makes it a bit more obvious what is going on.
Contributed by Cynthia Shang.
INFO is generally used as the prefix for info file constants so rename these accordingly.
Also follow newly-adopted coding standards for when #define is required for a static String constant.
Contributed by Cynthia Shang.
Some commands (e.g. info) do not take a stanza or the stanza is optional. In that case it is the job of the command to construct the repository path with a stanza as needed.
Update helper functions to omit the stanza from the constructed path when it is NULL.
Contributed by Cynthia Shang.
- Add detail to errors when info files are loaded with incorrect encryption settings.
- Throw FileMissingError rather than FileOpenError when both copies of the info file are missing.
- If one file is present (but errors) and the other is missing, then return the error for the file that was present.
Contributed by Cynthia Shang.
This code was generated during testing and it seemed a good idea to keep it. It is only a partial solution since the primary also needs additional configuration to be able to fail back and forth.
Previously chown() would be called even when no ownership changes were required.
In most cases changes are not required and it seems better to perform an extra stat() rather than an extra chown().
Also add unit tests for owner() since there weren't any.
The authentication header contains the access key (not the secret key) so don't include it in errors that can be seen at any log level.
Suggested by Brad Nicholson.
This got missed in 1f8931f7 when the test binary was renamed.
Also output call graph along with the flat report. The flat report is generally most useful but it doesn't hurt to have both.
By default the documentation builds pgBackRest from source, but the documentation is also a good way to smoke-test packages.
Allow a package file to be specified by passing --var=package=/path/to/package.ext. This works for Debian and CentOS 6 builds.
Keywords were extremely limited and prevented us from generating multi-version documentation and other improvements.
Replace keywords with an if statement that can evaluate a Perl expression with variable replacement.
Since keywords were used to generate cache keys, add a --key-var parameter to identify which variables should make up the key.
This somehow was not configured as a size option when it was added. It worked, but queue sizes could not be specified in shorthand, e.g. 128GB.
This is not a breaking change because currently configured integer values will be read as bytes.
Reported by Ronan Dunklau.
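For example, the option can now be set with a size suffix in pgbackrest.conf:

    [global]
    archive-get-queue-max=128GB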
After a file is copied during backup the size is requested from the storage in case it differs from what was written so that repo-size can be reported accurately. This is useful for situations where compression is being done by the filesystem (e.g. ZFS) and what is stored can differ in size from what was written.
In S3 the reported size will always be exactly what was written so there is no need to check the size and doing so immediately can cause problems because the new file might not appear in list commands. This has not been observed on S3 (though it seems to be possible) but it has been reported on the Swift S3 gateway.
Add a driver capability to determine if size needs to be called after a file is written and if not then simply use the number of bytes written for repo-size.
Reported by Matt Kunkel.
This allows the documentation to be built more quickly and offline during development when --pre is specified on the command line.
Each host gets a pre-built container with all the execute elements marked pre. As long as the pre elements do not change the container will not need to be rebuilt.
The feature should not be used for CI builds as it may hide errors in the documentation.
The previous error message only showed the last error. In addition, some errors were missed (such as directory permission errors) that could prevent the copy from being checked.
Show both errors below a generic "unable to load" error. Details are now given explaining exactly why the primary and copy failed.
Previously if one file could not be loaded a warning would be output. This has been removed because it is not clear what the user should do in this case. Should they do a stanza-create --force? Maybe the best idea is to automatically repair the corrupt file, but on the other hand that might just spread corruption if pgBackRest makes the wrong choice.
The decryption filter was added in archiveGetFile() and archiveGetCheck() was modified to return the WAL decryption key stored in archive.info. The rest was plumbing.
The mock/archive/1 integration test added encryption to provide coverage for the new code paths while mock/archive/2 dropped encryption to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior.
If InOut filters were placed next to each other then the second filter would never get a NULL input signaling it to flush. This arrangement only worked if the second filter had some other indication that it should flush, such as a decompression filter where the flush is indicated in the input stream.
This is not a live issue because currently no InOut filters are chained together.
This allows CipherBlock to be used as a filter in an IoFilterGroup. The C-style functions used by Perl are now deprecated and should not be used for any new code.
Also add functions to convert between cipher names and CipherType.
Add boolean and one-dimensional list types to jsonToKv().
Add varToJson() and kvToJson() to convert Variants and KeyValues to JSON.
Contributed by Cynthia Shang.
The only change required was to remove the filter that prevented S3 storage from being used. The archive-get command did not require any modification which demonstrates that the storage interface is working as intended.
The mock/archive/3 integration test was modified to run S3 storage locally to provide coverage for the new code paths while mock/stanza/3 was modified to run S3 storage remotely to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior.
TlsClient introduced a non-blocking read which is required to read protocol messages that are linefeed-terminated rather than a known size. However, in many cases the expected number of bytes is known in advance so in that case it is more efficient to have tlsClientRead() block until all the bytes are read.
Add block parameter to all read functions and use it when a blocking read is required. For most read functions this is a noop, i.e. if the read function never blocks then it can ignore the parameter.
In passing, set the log level of storageNew*() functions to debug to expose more high-level I/O operations.
A robust HTTP client with pipelining support and automatic retries.
Using a single object to make multiple requests is more efficient because requests are pipelined whenever possible. Requests are automatically retried when the connection has been closed by the server. Any 5xx response is also retried.
Only the HTTPS protocol is currently supported.
A simple, secure TLS client intended to allow access to services that are exposed via HTTPS. We call it TLS instead of SSL because SSL methods are disabled so only TLS connections are allowed.
This object is intended to be used for multiple TLS connections against a service so tlsClientOpen() can be called each time a new connection is needed. By default, an open connection will be reused for pipelining so the user must be prepared to retry their transaction on a read/write error if the server closes the connection before it can be reused. If this behavior is not desirable then tlsClientClose() may be used to ensure that the next call to tlsClientOpen() will create a new TLS session.
Note that tlsClientRead() is non-blocking unless there are *zero* bytes to be read from the session in which case it will raise an error after the defined timeout. In any case the tlsClientRead()/tlsClientWrite()/tlsClientEof() functions should not generally be called directly. Instead use the read/write interfaces available from tlsClientIoRead()/tlsClientIoWrite().
Test certificates were generated dynamically but there are advantages to using static certificates. For example, it is possible to use the same certificate between container versions. Mostly, it is easier to document the certificates if they are not buried deep in the container code.
The new test certificates are initially intended to be used with the C unit tests but they will eventually be used for integration tests as well.
Two new certificates have been defined. See test/certificate/README.md for details.
The old dynamic certificates will be retained until they are replaced.
The embedded semicolon led to inconsistent semicolons when using the macro and is not our general convention.
Remove embedded semicolons from the macros and add semicolons in usage where they were not present.
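An illustrative before/after (the macro name here is generic, not one of the actual macros):

    // Before: the semicolon was embedded in the macro, so usage sometimes
    // ended up with two semicolons and sometimes with none
    #define RETURN_OLD(value)   return value;

    // After: the macro omits the semicolon and callers always supply it,
    // so usage reads like a normal statement
    #define RETURN_NEW(value)   return value

    static int example(int result)
    {
        RETURN_NEW(result);                             // caller adds the semicolon
    }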
Add XmlDocument, XmlNode, and XmlNodeList objects as a thin interface layer on libxml2.
This interface is not intended to be comprehensive. Only a few libxml2 capabilities are exposed but more can be added as needed.
S3 key options (repo1-s3-key/repo1-s3-key-secret) were not required which meant that users got an ugly assertion when they were missing rather than a tidy configuration error.
Only the local/remote commands need them to be optional. This is because local/remote commands get all their options from the command line but secrets cannot be passed on the command line. Instead, secrets are passed to the local/remote commands via the protocol for any operation that needs them.
The configuration system allows required to be set per command so use that to improve the error messages while not breaking the local/remote commands.
This allows a C unit test to access data in the code repository that might be useful for testing.
Add testRepoPathSet() to set the repository path.
In passing remove extra whitespace in the TEST_RESULT_VOID() macro.
Bug Fixes:
* Fix issue with archive-push-queue-max not being honored on connection error. (Reported by Lardière Sébastien.)
* Fix static WAL segment size used to determine if archive-push-queue-max has been exceeded.
* Fix error after log file open failure when processing should continue. (Reported by vthriller.)
Features:
* Automatically enable backup checksum delta when anomalies (e.g. timeline switch) are detected. (Contributed by Cynthia Shang.)
Improvements:
* Retry all S3 5xx errors rather than just 500 internal errors. (Suggested by Craig A. James.)
These interfaces previously used the memory context of the object they were associated with and did not have their own destructors.
There are times when it is useful to free the interface without also freeing the underlying object so give IoRead and IoWrite their own memory contexts and destructors.
In passing fix a comment typo in bufferRead.c.
By default the IoWrite object does not write until the output buffer is full but this is a problem for protocol messages that must be sent in order to get a response.
ioWriteFlush() is not called internally by IoWrite but can be used at any time to immediately write all bytes from the output buffer without closing the IoWrite object.
The prior message stated that there had been a buffer overrun which is not true since the code prevents that.
In fact, this message means the parameter buffer filled while building the parameter list. Rather than display a partial list we output this message instead.
Also remove !!! which by convention we use as a marker for code that needs attention before it can be committed to master.
These macros provide a convenient way to output debug information in tests.
They are not intended to be left in test code when it is committed to master.
ioReadLine() calls ioRead(), which aggressively tries to fill the output buffer, but this doesn't play well with blocking reads.
Give ioReadLine() an option that tells it to read only what is available. That doesn't mean the function will never block but at least it won't do so by reading too far.
The report HTML generated by lcov is overly verbose and cumbersome to navigate. Since we maintain 100% coverage it's far more interesting to look at what is not covered than what is.
The new report presents all missing coverage on a single page and excludes code that is covered for brevity.
Add HTML tags for table elements.
The strExtra parameter allows adhoc tags to be added to an element for features that can't be implemented with CSS, e.g. colspan.
There are many places (and the number is growing) where a zero-terminated string constant must be transformed into a String object to be usable. This pattern wastes time and memory, especially since the created string is generally used in a read-only fashion.
Define macros to create constant String objects that are initialized at compile time rather than at run time.
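A minimal sketch of the idea, assuming a simplified String layout (the real macros and struct differ):

    #include <stddef.h>

    typedef struct String
    {
        size_t size;
        const char *buffer;
    } String;

    // sizeof() on a string literal is evaluated at compile time, so the
    // constant needs no run-time allocation or strlen()
    #define STRDEF(str) {.size = sizeof(str) - 1, .buffer = (str)}

    static const String BACKUP_INFO_FILE = STRDEF("backup.info");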
The storageList() command accepts a regular expression as a filter. This works fine for local filesystems where it is relatively cheap to get a complete list of files and filter them in code. However, for remote filesystems like S3 it can be expensive to fetch a complete list of files only to discard the bulk of them locally.
S3 does not filter on regular expressions but it can accept a static prefix so this function extracts a prefix from a regular expression when possible.
Even a few characters can drastically reduce the amount of data that must be fetched remotely so the function does not try to be too clever. It requires a ^ anchor and stops scanning when the first special character is found.
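A hedged sketch of the extraction, simplified relative to the real function:

    #include <stdlib.h>
    #include <string.h>

    // Copy literal characters following a ^ anchor, stopping at the first
    // regular expression metacharacter; NULL means no usable prefix
    static char *regExpPrefix(const char *expression)
    {
        if (expression == NULL || expression[0] != '^')
            return NULL;

        size_t length = 0;

        while (expression[length + 1] != '\0' &&
               strchr(".^$*+?()[]{}|\\", expression[length + 1]) == NULL)
        {
            length++;
        }

        if (length == 0)
            return NULL;

        char *prefix = malloc(length + 1);

        if (prefix == NULL)
            return NULL;

        memcpy(prefix, expression + 1, length);
        prefix[length] = '\0';

        return prefix;                                  // "^20181102.*" yields "20181102"
    }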
Allow buffers to report a lower size than their allocated size. This means a larger buffer can be used to do the work of a smaller buffer without having to create a new buffer and concatenate.
This is useful for blocking I/O where the buffer may be too large for the amount of data that is available to read.
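Conceptually (hypothetical field and function names):

    #include <stddef.h>

    typedef struct Buffer
    {
        size_t sizeAlloc;                               // bytes actually allocated
        size_t size;                                    // bytes reported to callers
        unsigned char *data;
    } Buffer;

    // Make a large buffer behave like a smaller one without reallocating
    static void bufLimitSet(Buffer *this, size_t limit)
    {
        if (limit <= this->sizeAlloc)                   // real code asserts this
            this->size = limit;
    }

    static void bufLimitClear(Buffer *this)
    {
        this->size = this->sizeAlloc;
    }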
The Wait object accepted a double in the constructor for wait time but used TimeMSec internally. This was done for compatibility with the Perl code.
Instead, use TimeMSec in the Wait constructor and make changes as needed to calling code.
Note that Perl still uses a double for its Wait object so translation is needed in some places. There are no plans to update the Perl code as it will become obsolete.
If an object free() method was called manually when a callback was set then the callback would call free() again. This meant that each free() method had to protect against a subsequent call.
Instead, clear the callback (if present) before calling memContextFree(). This is faster (since there is no unnecessary callback) and removes the need for semaphores to protect against a double free().
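The pattern, sketched with hypothetical names (the callback-clearing function is assumed based on the description above):

    typedef struct MemContext MemContext;               // opaque, hypothetical

    void memContextCallbackClear(MemContext *this);     // assumed per the text
    void memContextFree(MemContext *this);

    // Clearing the callback first means freeing the context can never
    // re-enter this free() through the callback
    static void objectFree(MemContext *objectContext)
    {
        if (objectContext != NULL)
        {
            memContextCallbackClear(objectContext);
            memContextFree(objectContext);
        }
    }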
Code generation saved files even when they had not changed, which often caused code generation cascades. So, don't save files unless they have changed.
Use rsync to determine which files have changed since the last test run. The manifest of changed files is saved and not removed until all code generation and builds have completed. If an error occurs the work will be redone on the next run.
The eventual goal is to do all the builds from the test/repo directory created by rsync but for now it is only used to track changes.
The contents were already preserved between tests in a single test.pl run but for a separate execution the entire project had to be built from scratch, which was getting slower as we added code.
Save the important build flags in a file so the new execution knows whether the build contents can be reused.
Mounting/unmounting tmpfs on /home/[user]/test takes time, forces at least 3GB of memory to be available for tests, and makes it harder to preserve data between tests.
Instead, move mounting of tmpfs to the Vagrantfile and add it to fstab so it survives reboots.
There are a number of cases where a checksum delta is more appropriate than the default time-based delta:
* Timeline has switched since the prior backup
* File timestamp is older than recorded in the prior backup
* File size changed but timestamp did not
* File timestamp is in the future compared to the start of the backup
* Online option has changed since the prior backup
A practical example is that checksum delta will be enabled after a failover to standby due to the timeline switch. In this case, timestamps can't be trusted and our recommendation has been to run a full backup, which can impact the retention schedule and requires manual intervention.
Now, a checksum delta will be performed if the backup type is incr/diff. This means more CPU will be used during the backup but the backup size will be smaller and the retention schedule will not be impacted.
Contributed by Cynthia Shang.
We were already retrying 500 errors but 503 (rate-limiting) errors were not being retried and would cause an instant failure which aborted the command.
There are only two 5xx errors currently implemented by S3 but instead of adding 503 simply retry all 5xx errors. This is consistent with the HTTP definition of this error class, "the server failed to fulfill an apparently valid request."
Suggested by Craig A. James.
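The retry test reduces to a class check:

    #include <stdbool.h>

    // Any response in the 5xx class is now considered retryable
    static bool httpResponseRetryable(unsigned int code)
    {
        return code >= 500 && code <= 599;
    }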
This calculation was missed when the WAL segment size was made dynamic in preparation for PostgreSQL 11.
Fix the calculation by checking the actual WAL file sizes instead of using an estimate based on WAL segment size. This is more accurate because it takes into account .history and .backup files, which are smaller. Since the calculation is done in the async process the additional processing time should not adversely affect performance.
Remove the PG_WAL_SIZE constant and instead use local constants where the old value is still required. This is only the case for some tests and PostgreSQL 8.3 which does not provide a way to get the WAL segment size from pg_control.
If an error occurred while acquiring a lock on a remote server the error would be reported correctly, but the queue max detection code was not reached. The tests failed to detect this because they fixed the connection before queue max, allowing the code to be reached.
Move the queue max code before the lock so it will run even when remote connections are not working. This means that no attempt will be made to transfer WAL once queue max has been exceeded, but it makes it much more likely that the code will be reached without error.
Update tests to continue errors up to the point where queue max is exceeded.
Reported by Lardière Sébastien.
The C code was warning on failure and continuing but the Perl logging code was never updated with the same feature.
Rather than add the feature to Perl, just disable file logging if the log file cannot be opened. Log files are always opened by C first, so this will eliminate the error in Perl.
Reported by vthriller.
The existing tests were not adequate to ensure the history was being added in the correct order when some entries were loaded from a file and others added with infoPgAdd().
Contributed by Cynthia Shang.
The InfoPg object was partially modified in 960ad732 to place the current history item in position 0, but infoPgDataCurrent() didn't get updated correctly.
Remove this->indexCurrent and make the current position always equal 0. Use the new lstInsert() function when adding new history items via infoPgAdd(), but continue to use lstAdd() when loading from a file for efficiency.
This does not appear to be a live bug because infoPgDataCurrent() and infoPgAdd() are not yet used in any production code. The archive-get command is the only C code using InfoPG and it always looks at the entire list of items rather than just the current item.
Suggested by Cynthia Shang.
Bug Fixes:
* Fix missing URI encoding in S3 driver. (Reported by Dan Farrell.)
* Fix incorrect error message for duplicate options in configuration files. (Reported by Jesper St John.)
* Fix incorrectly reported error return in info logging. A return code of 1 from the archive-get command was being logged as an error message at info level but otherwise worked correctly.
Features:
* Add checksum delta for incremental backups which uses checksums rather than timestamps to determine if files have changed. (Contributed by Cynthia Shang.)
* PostgreSQL 11 support, including configurable WAL segment size.
Improvements:
* Ignore all files in a linked tablespace directory except the subdirectory for the current version of PostgreSQL. Previously an error would be generated if other files were present and not owned by the PostgreSQL user.
* Improve info command to display the stanza cipher type. (Contributed by Cynthia Shang. Suggested by Douglas J Hunley.)
* Improve support for special characters in filenames.
* Allow delta option to be specified in the pgBackRest configuration file. (Contributed by Cynthia Shang.)
PostgreSQL 11 RC1 support was tested in 9ae3d8c46 when the u18 container was rebuilt. Nothing substantive changed after RC1 so pgBackRest is ready for PostgreSQL 11 GA.
The standard npm packages on Ubuntu 18.04 suddenly required libssl1.0 which broke the pgbackrest package builds. Installing nodejs from deb.nodesource.com seems to work fine with standard libssl.
This package is required by ScalityS3 which is used for local S3 testing.
When the filter interface internals were split out into a new header file the documentation was not moved as it should have been. Additionally some functions which should have been moved were left behind.
Move the documentation and functions to filter.internal.h and add more documentation. Filters are a tricky subject so the more documentation the better.
Also add documentation for the user-facing filter functions in filter.h.
Allow a single linefeed-terminated line to be read or written. This is useful for various protocol implementations, including HTTP and pgBackRest's protocol.
On read the maximum line size is limited to buffer-size to prevent runaway memory usage in case a linefeed is not found. This seems fine for HTTP but we may need to revisit this decision when implementing the pgBackRest protocol. Another option would be to increase the minimum buffer size (currently 16KB).
This test has been flapping since 9b9396c7. It seems to be some kind of timing issue since all integration tests pass and this unit passes on all other VMs. It only happens on Travis and is not reproducible in any development environment that we have tried.
For now, disable the test since the constant flapping is causing major delays in testing and quite a bit of time has been spent trying to identify the root cause. We are actively developing these tests and hope the issue will be identified during the course of normal development.
A number of improvements were made to the tests while searching for this issue. While none of them helped, it makes sense to keep the improvements.
Duplicating a non-multi-value option was not throwing the correct message when the option was a boolean.
The reason was that the option was being validated as a boolean before the multi-value check was being done. The validation code assumed it was operating on a string but was instead operating on a string list causing an assertion to fail.
Since it's not safe to do the multi-value check so late, move it up to the command-line and configuration file parse phases instead.
Reported by Jesper St John.
Previously this was done in two separate places by checking if an option was type hash or list.
It was bad enough that the check lived in two places, and an upcoming bug fix will add another instance, so make it a function.
There doesn't seem to be any need to implement this as a filter since current use cases (S3 authentication) work on small datasets.
So, use the single function method provided by OpenSSL for simplicity.
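A sketch using OpenSSL's one-shot HMAC() function; SHA-256 is shown since S3 request signing is the use case, and the wrapper name is illustrative:

    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    // One-shot HMAC without constructing a filter; digest must hold at least
    // EVP_MAX_MD_SIZE bytes
    static unsigned int hmacOne(const void *key, int keySize,
                                const unsigned char *message, size_t messageSize,
                                unsigned char *digest)
    {
        unsigned int digestSize = 0;

        HMAC(EVP_sha256(), key, keySize, message, messageSize, digest, &digestSize);

        return digestSize;                              // 32 bytes for SHA-256
    }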
This constructor creates a Buffer object directly from a zero-terminated string. The old way was to create a String object first, then convert that to a Buffer using bufNewStr().
Updated in all places that used the old pattern.
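Illustrative usage (the new constructor's name is assumed here as bufNewZ(); only bufNewStr() is named above):

    // New: construct the Buffer directly from a zero-terminated string
    Buffer *key = bufNewZ("0123456789abcdef");

    // Old pattern: create a String first, then convert it
    Buffer *keyOld = bufNewStr(strNew("0123456789abcdef"));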
PostgreSQL 11 introduces configurable WAL segment sizes, from 1MB to 1GB.
There are two areas that needed to be updated to support this: building the archive-get queue and checking that WAL has been archived after a backup. Both operations require the WAL segment size to properly build a list.
Checking the archive after a backup is still implemented in Perl and has an active database connection, so just get the WAL segment size from the database.
The archive-get command does not have a connection to the database, so get the WAL segment size from pg_control instead. This requires a deeper inspection of pg_control than has been done in the past, so it seemed best to copy the relevant data structures from each version of PostgreSQL and build a generic interface layer to address them. While this approach is a bit verbose, it has the advantage of being relatively simple, and can easily be updated for new versions of PostgreSQL.
Since the integration tests generate pg_control files for testing, teach Perl how to generate files with the correct offsets for both 32-bit and 64-bit architectures.
Unsecured, passwordless SSH can be a scary thing. If an attacker gains access to one system they can easily hop to other systems.
Add documentation on how to use the command parameter in authorized_keys to limit ssh to running a single command, pgbackrest. There is more that could be done for security but this likely addresses most needs.
Also change references to "trusted ssh" to "passwordless ssh" since this seems more correct.
Suggested by Stephen Frost, Magnus Hagander.
Use checksums rather than timestamps to determine if files have changed. This is useful in cases where the timestamps may not be trustworthy, e.g. when performing an incremental after failing over to a standby.
If checksum delta is enabled then checksums will be used for verification of resumed backups, even if they are full. Resumes have always used checksums to verify the files in the repository, enabling delta performs checksums on the database files as well.
Note that the user must manually enable this feature in cases where it would be useful or just keep it enabled all the time. A future commit will address automatically enabling the feature in cases where it seems likely to be useful.
Contributed by Cynthia Shang.
This option was previously allowed on the command-line only for no particular reason that we could determine.
Being able to specify it in the config file seems like a good idea and won't change current usage.
Contributed by Cynthia Shang.
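For example, assuming the conventional boolean syntax, in pgbackrest.conf:

    [global]
    delta=y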
Apparently we never needed to run this function remotely.
It will be needed by the backup checksum delta feature, so implement it now.
Contributed by Cynthia Shang.
The test to make sure that some files (e.g. pg_control) do not get removed during the backup was lost during the storage refactor committed at de7fc37f.
This did not impact the integrity of the backups, but bring it back since it is a nice sanity check.
Contributed by Cynthia Shang.
As we add storage drivers it's important to keep the tests for each completely separate. Rather than have three tests for each driver, standardize on having a single test unit for each driver.
This is a workaround for inefficient handling of many setjmps in gcc >= 4.9. Setjmp is used in all error handling, but in the unit tests each test macro contains an error handling block so they add up pretty quickly for large unit tests.
Enabling -ftree-coalesce-vars in affected versions reduces build time and memory requirements by nearly an order of magnitude. Even so, compiles are much slower than gcc <= 4.8.
We submitted a bug for this at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87316, which was marked as a duplicate of https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63155.
For read-only repositories the Posix and CIFS drivers behave exactly the same. Since that's all we support in C right now it's valid to treat them as the same thing. An assertion has been added to remind us to add the CIFS driver before allowing the repository to be writable.
Mostly we want to make sure that the C code does not blow up when the repository type is CIFS.
Previously it was the responsibility of the individual tests to clean up after themselves. Now the test harness does the cleanup automatically.
This means that some paths/files need to be recreated with each run but that doesn't happen very often.
An attempt has been made to remove all redundant cleanup code but it's hard to know if everything has been caught. No issues will be caused by anything that was missed, but they will continue to chew up time in the tests.
Storing the expect log (created by common/harnessLog) in the regular test directory was not ideal. It showed up in tests and made it difficult to clear the test directory between each run.
Move the expect log to a purpose-built directory one level up so it does not interfere with regular testing.