The remote protocol was calling into the Storage object, but this required translation that would only grow more awkward over time.
Instead, call directly into the local driver so the communication is driver to driver. This still requires resolving the path and may eventually duplicate some of the Storage object methods, but it seems the right approach.
Expressions such as <REPO:ARCHIVE> require a stanza name in order to be resolved correctly. However, if the stanza name is passed to the remote, then that remote will only work correctly for that one stanza.
Instead, resolve the expressions locally but still pass a relative path to the remote. That way, a storage path that is only configured on the remote does not need to be known locally.
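As an illustration of the idea (the function and sub-path below are hypothetical, not the project's code), the expression can be expanded locally into a stanza-qualified path relative to the repository root, so only the relative path crosses the protocol:

    #include <stdio.h>

    /* Hypothetical sketch: expand <REPO:ARCHIVE> locally so the remote only
       needs to apply its own repo-path setting to the relative result. */
    static void repoArchivePathResolve(
        char *result, size_t resultSize, const char *stanza, const char *subPath)
    {
        snprintf(result, resultSize, "archive/%s/%s", stanza, subPath);
    }

    int main(void)
    {
        char relativePath[256];

        repoArchivePathResolve(relativePath, sizeof(relativePath), "demo", "9.6-1");
        printf("%s\n", relativePath);   /* archive/demo/9.6-1 */
        return 0;
    }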
This issue was a result of STORAGE_REPO_PATH prepending an extra stanza name when the stanza was specified on the command line.
The tests missed this because, by some strange coincidence, the WAL dirs were empty for each test that specified a stanza. Add new tests to prevent a regression.
Fixed by Stefan Fercot.
Free all cached objects in the storage helper, especially the stanza name.
This clears the storage environment for tests that switch stanza names, or that move from a stanza name to no stanza name or vice versa. This is only useful for testing right now, but may be used in the future for commands that act on multiple stanzas.
This command was previously forked off from the archive-get command, which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
Prior to this the Perl remote was used to satisfy C requests. This worked fine, but since the remote needed to be migrated to C anyway, there was no reason to wait.
Add the ProtocolServer object and tweak ProtocolClient to work with it. It was also necessary to add a mechanism to get option values from the remote so that encryption settings could be read and used in the storage object.
Update the remote storage objects to comply with the protocol changes and add the storage protocol handler.
Ideally this commit would have been broken up into smaller chunks but there are cross-dependencies in the protocol layer and it didn't seem worth the extra effort.
The file write object destructors called close() and finalized the file even if it was not completely written. This was an issue in both the C and Perl code.
Rewrite the destructors to simply free resources (like file handles) rather than calling the close() method. This leaves the temp file in place for filesystems that use temp files.
Add unit tests to prevent regression.
Reported by blogh.
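A minimal sketch of the corrected pattern (struct and function names are hypothetical): close() finalizes the file explicitly, while the free path only releases the handle, leaving the temp file behind if the write was interrupted:

    #include <stdio.h>
    #include <unistd.h>

    typedef struct
    {
        int handle;
        char nameTmp[256];  /* temp file in progress, e.g. "file.tmp" */
        char name[256];     /* final name, used only by an explicit close() */
    } FileWrite;

    /* Called explicitly when the write completed successfully */
    static void fileWriteClose(FileWrite *file)
    {
        close(file->handle);
        rename(file->nameTmp, file->name);  /* finalize: temp -> real name */
    }

    /* Called by the destructor/free path: release resources only, so an
       interrupted write never produces a bogus "finished" file */
    static void fileWriteFree(FileWrite *file)
    {
        if (file->handle != -1)
            close(file->handle);
    }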
This was not being caught because the integration tests for S3 were running remotely and going through the Perl code rather than the new C code.
Implement the exists method for the S3 driver and add tests to prevent a regression.
Reported by mibiio.
This already worked in reverse, but this case is needed when a command that only uses protocol-timeout (e.g. info) calls a remote process where protocol-timeout and db-timeout can be set. If protocol-timeout was set to less than the default db-timeout, then an error resulted.
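The fix amounts to an adjustment along these lines (the formula here is illustrative, not the exact rule):

    #include <stdint.h>

    /* Sketch: when only protocol-timeout reaches the remote, lower db-timeout
       so the protocol layer cannot time out before the database does. */
    static uint64_t dbTimeoutAdjust(uint64_t dbTimeoutMs, uint64_t protocolTimeoutMs)
    {
        if (protocolTimeoutMs <= dbTimeoutMs)
            return protocolTimeoutMs / 2;

        return dbTimeoutMs;
    }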
Rather than create _P/_PP variants for every type that needs to pass/return pointers, create FUNCTION_*_P/PP() macros that will properly pass or return any single/double pointer types.
There remain a few unresolved edge cases such as CHARPY but this handles the majority of types well.
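The flavor of the approach, heavily simplified (these are not the project's actual macros):

    #include <stdio.h>

    /* Simplified sketch: a single _P/_PP macro pair that handles any pointer
       type, rather than a hand-written variant for every type. */
    #define FUNCTION_LOG_P(type, value)  printf("%s *: %p\n", #type, (void *)(value))
    #define FUNCTION_LOG_PP(type, value) printf("%s **: %p\n", #type, (void *)(value))

    int main(void)
    {
        int value = 42;
        int *valuePtr = &value;
        int **valuePtrPtr = &valuePtr;

        FUNCTION_LOG_P(int, valuePtr);      /* int *: 0x... */
        FUNCTION_LOG_PP(int, valuePtrPtr);  /* int **: 0x... */
        return 0;
    }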
The string object was reallocating memory with every concatenation, which is not very efficient. This is especially true for JSON rendering, which does a lot of concatenations.
Instead, allocate a pool of extra memory on the first concatenation (50% of the current size) to be used for future concatenations, and reallocate only when needed.
Also add a 1GB size limit to ensure that there are no overflows.
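A sketch of the growth strategy, assuming a simplified String struct (not the real implementation; error handling and the 1GB limit check are elided):

    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char *buffer;   /* allocated to size + extra + 1 */
        size_t size;    /* bytes used, excluding terminator */
        size_t extra;   /* spare bytes available for future appends */
    } String;

    static void strCat(String *string, const char *cat)
    {
        size_t catSize = strlen(cat);

        /* Grow only when the spare pool is exhausted, reserving an additional
           50% of the new size so most future appends need no reallocation */
        if (catSize > string->extra)
        {
            size_t spare = (string->size + catSize) / 2;

            string->buffer = realloc(string->buffer, string->size + catSize + spare + 1);
            string->extra = catSize + spare;
        }

        memcpy(string->buffer + string->size, cat, catSize + 1);
        string->size += catSize;
        string->extra -= catSize;
    }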
Multiple status files were being created by asynchronous archiving if a high-level error occurred after one or more WAL segments had already been transferred successfully. Error files were being written for every file in the queue regardless of whether it had already succeeded. To fix this, add an option to skip writing error files when an ok file already exists.
There are other situations where both files might exist (various fsync and filesystem error scenarios) so it seems best to retry in the case that multiple status files are found rather than throwing a hard error (which then means that archiving is completely stuck). In the case of multiple status files, a warning will be logged to alert the user that something unusual is happening and the command will be retried.
Reported by fpa-postgres, Joe Ayers, Douglas J Hunley.
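In outline, the skip amounts to this check (plain POSIX calls here; the real code uses the storage layer):

    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: only write segment.error when segment.ok is not already
       present from a prior success. */
    static void writeErrorStatus(const char *spoolPath, const char *segment, const char *message)
    {
        char okFile[1024];
        char errorFile[1024];

        snprintf(okFile, sizeof(okFile), "%s/%s.ok", spoolPath, segment);
        snprintf(errorFile, sizeof(errorFile), "%s/%s.error", spoolPath, segment);

        if (access(okFile, F_OK) == 0)
            return;                     /* segment already succeeded */

        FILE *file = fopen(errorFile, "w");

        if (file != NULL)
        {
            fputs(message, file);
            fclose(file);
        }
    }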
The implementation using gethostbyname() was only intended to be used during prototyping but was forgotten when the code was finalized.
Replace it with getaddrinfo(), which is more modern and supports IPv6.
Suggested by Bruno Friedmann.
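For reference, a standalone example of the getaddrinfo() pattern (the host and port are placeholders, not the project's code):

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void)
    {
        struct addrinfo hints;
        struct addrinfo *result = NULL;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;        /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        int error = getaddrinfo("localhost", "443", &hints, &result);

        if (error != 0)
        {
            fprintf(stderr, "resolve failed: %s\n", gai_strerror(error));
            return 1;
        }

        /* result is a linked list; connect() would try each entry in turn */
        freeaddrinfo(result);
        return 0;
    }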
Rename FUNCTION_DEBUG_* macros to FUNCTION_LOG_* to more accurately reflect what they do. Further rename FUNCTION_DEBUG_RESULT* macros to FUNCTION_LOG_RETURN* to make it clearer that they return from the function as well as logging. Leave FUNCTION_TEST_* macros as they are.
Consolidate the various ASSERT* macros into a single ASSERT macro that is always compiled out of production builds. It was difficult to figure out when an assert would be checked with all the different types in play. When ASSERTs are compiled in they will always be checked regardless of the log level -- tying these two concepts together was not a good idea.
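The shape of the consolidated macro, sketched with a plain abort() in place of the project's error handling:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: a single ASSERT, checked in debug builds regardless of log
       level and compiled out entirely when NDEBUG is defined for production. */
    #ifdef NDEBUG
        #define ASSERT(condition)
    #else
        #define ASSERT(condition)                                           \
            do                                                              \
            {                                                               \
                if (!(condition))                                           \
                {                                                           \
                    fprintf(stderr, "assertion '%s' failed at %s:%d\n",     \
                        #condition, __FILE__, __LINE__);                    \
                    abort();                                                \
                }                                                           \
            }                                                               \
            while (0)
    #endif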
The C info code has already been committed but this commit wires it into main.
Also remove the info Perl code and tests since they are no longer called.
This is a partial implementation of remote storage with just enough functionality to get the info command working. The client is written in C but the server is still in Perl, which limits progress until a C server is written.
This is a complete protocol client implementation in C.
Currently there is no C server implementation, so the C client is talking to a Perl server. This won't work for very long, though, as the protocol format, even though it is JSON, has a lot of language-specific structure. While it would be possible to maintain compatibility between C and Perl, it's probably not worth the effort in the long run.
Just as in Perl there are helper functions to make constructing protocol objects easier. Currently only repository remotes are supported.
Executes a child process and allows the calling process to communicate with it using read/write i/o.
This object is specially tailored to implement the protocol layer and may not be suitable for general-purpose execution.
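A minimal sketch of the concept (names are hypothetical, error handling abbreviated): spawn the child and hand back descriptors wired to its stdin/stdout:

    #include <sys/types.h>
    #include <unistd.h>

    typedef struct
    {
        pid_t pid;
        int writeFd;    /* parent writes to the child's stdin */
        int readFd;     /* parent reads from the child's stdout */
    } ExecHandle;

    static int execStart(ExecHandle *handle, char *const argv[])
    {
        int toChild[2];
        int fromChild[2];

        if (pipe(toChild) != 0 || pipe(fromChild) != 0)
            return -1;

        handle->pid = fork();

        if (handle->pid == -1)
            return -1;

        if (handle->pid == 0)
        {
            /* Child: wire the pipe ends to stdin/stdout, then exec */
            dup2(toChild[0], STDIN_FILENO);
            dup2(fromChild[1], STDOUT_FILENO);
            close(toChild[0]); close(toChild[1]);
            close(fromChild[0]); close(fromChild[1]);

            execvp(argv[0], argv);
            _exit(127);                     /* exec failed */
        }

        /* Parent: keep only the ends it uses */
        close(toChild[0]);
        close(fromChild[1]);
        handle->writeFd = toChild[1];
        handle->readFd = fromChild[0];

        return 0;
    }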
Parameters for the local/remote commands are based on parameters that are passed to the current command.
Generate parameters for the new command based on the intersection of parameters between the current command and the command to be executed.
General i/o objects for reading and writing file descriptors, in particular those that can block. In other words, these are not generally to be used with file descriptors for actual files, but rather pipes, sockets, etc.
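For example, a read helper for such descriptors has to handle partial reads and interrupted system calls (a generic sketch, not the project's IoRead API):

    #include <errno.h>
    #include <unistd.h>

    /* Read exactly `size` bytes from a pipe/socket descriptor, retrying on
       EINTR and on short reads (both normal for blocking fds). */
    static ssize_t readExact(int fd, unsigned char *buffer, size_t size)
    {
        size_t total = 0;

        while (total < size)
        {
            ssize_t result = read(fd, buffer + total, size - total);

            if (result == -1)
            {
                if (errno == EINTR)
                    continue;           /* interrupted by a signal; retry */

                return -1;              /* real error */
            }

            if (result == 0)
                break;                  /* EOF before size bytes arrived */

            total += (size_t)result;
        }

        return (ssize_t)total;
    }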
The C code can't get the cipher type from the storage object because the C storage object does not have encryption baked in like the Perl code does.
Instead, check backup.info to see if encryption is enabled. This will need to be rethought if another cipher type is added, but for now it works fine.
Replace the repository path with just "the repository". The path is not important in this context and it is clearer to state where the stanzas are missing from.
The prior behavior was to throw an exception, but this was not very helpful when something unexpected happened. Better to at least emit the error message, even if the error code is not very informative.
There were some small differences in ordering and how the C version handled missing directories. It may be that the C version is more consistent, but for now it is more important to be compatible with the Perl version.
These differences were missed because the C info command was not wired into main.c so it was not being tested in regression. This commit does not fix the wiring issue because there will likely be a release soon and it is too big a change to put in at the last moment.
Casting to int caused large values to be slightly inaccurate, so cast to uint64_t instead.
Also, use multiplication where possible, since the compiler should precompute multiplied values.
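A small illustration of the precision issue (the values here are hypothetical):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        double seconds = 3000000.5;                 /* large timeout-style value */

        /* (int)(seconds * 1000) would overflow: 3,000,000,500 exceeds INT_MAX.
           Multiplying by a constant also lets the compiler precompute where
           operands are known at compile time. */
        uint64_t milliseconds = (uint64_t)(seconds * 1000);

        printf("%" PRIu64 "\n", milliseconds);      /* 3000000500 */
        return 0;
    }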
SIGPIPE immediately terminates the process, but we would rather catch the EPIPE error and shut down gracefully.
Ignore SIGPIPE and throw the EPIPE error via normal error handling.
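The pattern in miniature (a self-contained demonstration, not the project's code):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ignore SIGPIPE once at startup so write() reports EPIPE instead
           of the signal killing the process */
        signal(SIGPIPE, SIG_IGN);

        int pipeFd[2];

        if (pipe(pipeFd) != 0)
            return 1;

        close(pipeFd[0]);                   /* no reader left */

        if (write(pipeFd[1], "x", 1) == -1 && errno == EPIPE)
            fprintf(stderr, "EPIPE caught, shutting down gracefully\n");

        return 0;
    }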
Including the C module after the headers required for testing meant that missing headers in the C module were not caught when testing that module directly.
The missing headers were caught in general testing, but it is frustrating to get an error in a module that has already passed while testing another module or running CI.
Move the C module include to the very top so missing headers cause immediate failures.
Bug Fixes:
* Remove request for S3 object info directly after putting it. (Reported by Matt Kunkel.)
* Correct archive-get-queue-max to be size type. (Reported by Ronan Dunklau.)
* Add error message when current user uid/gid does not map to a name. (Reported by Camilo Aguilar.)
* Error when --target-action=shutdown specified for PostgreSQL < 9.5.
Improvements:
* Set TCP keepalives on S3 connections. (Suggested by Ronan Dunklau.)
* Reorder info command text output so most recent backup is output last. (Contributed by Cynthia Shang. Suggested by Ryan Lambert.)
* Change file ownership only when required.
* Redact authentication header when throwing S3 errors. (Suggested by Brad Nicholson.)