strPathAbsolute() generates an absolute path from an absolute base path and an absolute/relative path.
strLstRemoveIdx() is a support function based on lstRemoveIdx().
In general we don't care about path and link times since they are easily recreated when restoring.
So, outside of storageInfo() we don't need to bother testing them.
Loading jobs in advance uses a lot of memory in the case that there are millions of jobs to be performed. We haven't seen this yet, but with backup and restore on the horizon it will become the norm.
Instead, use a callback so that jobs are only created as they are needed and can be freed as soon as they are completed.
These features finally make the ls command practical.
Currently the JSON contains only name, type, and size. We may add more fields in the future, but these seem like the minimum needed to be useful.
Push the responsibility for sort and find down to the List object by introducing a general comparator function that can be used for both sorting and finding.
Update insert and add functions to return the item added rather than the list. This is more useful in the core code, though numerous updates to the tests were required.
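The idea, as a rough sketch (names here are hypothetical): a single strcmp()-style comparator serves both sorting and finding.

    /* Hypothetical comparator: returns <0/0/>0 like strcmp() so the same
       function can drive both a sort and a binary-search find. */
    static int
    comparatorInt(const void *item1, const void *item2)
    {
        int int1 = *(const int *)item1;
        int int2 = *(const int *)item2;

        return int1 < int2 ? -1 : (int1 > int2 ? 1 : 0);
    }

A list sorted with the comparator can then be searched with the same comparator, which is what lets find piggyback on sort.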
Update StorageWritePosix to use the new functions.
A side effect is that storageWritePosixOpen() will no longer error when the user/group name does not exist. It will simply retain the original user/group, i.e. the user that executed the restore.
In general this is a feature since completing a restore is more important than setting permissions exactly from the source host. However, some notification of this omission to the user would be beneficial.
The control and catalog versions were stored a variety of places in the optimistic hope that they would be useful. In fact they never were.
We can't remove them from the backup.info and backup.manifest files due to backwards compatibility concerns, but we can at least avoid loading and storing them in C structures.
Add functions to the PostgreSQL interface which will return the control and catalog versions for any supported version of PostgreSQL to allow backwards compatibility for backup.info and backup.manifest. These functions will be useful in other ways, e.g. generating the tablespace identifier in PostgreSQL >= 9.0.
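A plausible shape for these functions is a switch over the supported versions (sketch below; the version constants and error handling are simplified, and the control version values are taken from the matching pg_control.h headers):

    #include <stdint.h>

    /* Illustrative sketch: map a PostgreSQL version to its pg_control
       version. A catalog-version function would have the same shape. */
    static uint32_t
    pgControlVersion(unsigned int pgVersion)
    {
        switch (pgVersion)
        {
            case 90400:                         /* 9.4 */
            case 90500:                         /* 9.5 */
                return 942;

            case 90600:                         /* 9.6 */
                return 960;

            case 100000:                        /* 10 */
                return 1002;

            default:
                return 0;                       /* real code would error */
        }
    }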
Info files required three copies in memory to be loaded (the original string, an ini representation, and the final info object). Not only was this memory inefficient but the Ini object does sequential scans when searching for keys making large files very slow to load.
This has not been an issue since archive.info and backup.info are very small, but it becomes a big deal when loading manifests with hundreds of thousands of files.
Instead of holding copies of the data in memory, use a callback to deliver the ini data directly to the object when loading. Use a similar method for save to avoid having an intermediate copy. Save is a bit complex because sections/keys must be written in alpha order or older versions of pgBackRest will not calculate the correct checksum.
Also move the load retry logic to helper functions rather than embedding it in the Info object. This allows for more flexibility in loading and ensures that stack traces will be available when developing unit tests.
Reviewed by Cynthia Shang.
ioReadLine() errors on eof because it has previously been used only for protocol reads.
Returning on eof is handy for reading lines from files where eof is not considered an error.
Checking the PostgreSQL-reported path and version against the pgBackRest configuration helps ensure that pgBackRest is operating against the correct cluster.
In Perl this functionality was in the Db object, but check seems like a better place for it in C.
Contributed by Cynthia Shang.
Previously the host id to use was pulled from the host-id option or defaulted to 1.
The stanza, check, and backup commands will all need the ability to address a specified pg host, so add functions to make that possible.
Previously, info files (e.g. archive.info, backup.info) were created in Perl and only loaded in C.
The upcoming stanza commands in C need to create these files so refactor the Info* objects to allow new, empty objects to be created. Also, add functions needed to initialize each Info* object to a valid state.
Contributed by Cynthia Shang.
Previously storageLocal() was being used internally but loading pg_control from remote storage is often required.
Also, storagePg() is more appropriate than storageLocal() for all current usage.
Contributed by Cynthia Shang.
The pg1-socket-path and pg1-port options were not being reset when options from a higher index were being pushed down for processing by a remote. Since remotes only talk to one cluster they always use the options in index 1. This requires moving options from the original index to 1 before starting the remote. All options already set on index 1 must be removed if they are not being overwritten.
Processing large datasets in a memory context can lead to high memory usage and long allocation times. Add a new MEM_CONTEXT_TEMP_RESET_BEGIN() macro that allows temp allocations to be automatically freed after N iterations.
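A usage sketch, assuming the begin macro pairs with a reset call inside the loop and the usual temp-context end macro:

    /* Free temp allocations every 1000 iterations instead of letting them
       accumulate for the life of the loop. */
    MEM_CONTEXT_TEMP_RESET_BEGIN()
    {
        for (unsigned int jobIdx = 0; jobIdx < jobTotal; jobIdx++)
        {
            processJob(jobIdx);                 /* hypothetical work */

            MEM_CONTEXT_TEMP_RESET(1000);
        }
    }
    MEM_CONTEXT_TEMP_END();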
Calculate the most common value in a list of variants. If there is a tie then the first value passed to mcvUpdate() wins.
mcvResult() can be called multiple times because it does not end processing, but there is a cost to calculating the result each time since it is not stored.
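Expected usage looks roughly like this (assuming the variant constructors shown):

    /* Find the most common value in a stream of variants. */
    MostCommonValue *mcv = mcvNew();

    mcvUpdate(mcv, varNewStrZ("a"));
    mcvUpdate(mcv, varNewStrZ("b"));
    mcvUpdate(mcv, varNewStrZ("a"));

    const Variant *result = mcvResult(mcv);     /* "a" */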
Logging stayed in the backup log until the Perl code started. Fix this so it logs to the correct file and will still work after the Perl code is removed.
The Perl versions remain because they are still being used by the Perl stanza commands. Once the stanza commands are migrated they can be removed.
Contributed by Cynthia Shang.
Implement switch WAL and archive check in C but leave the rest in Perl for now.
The main idea was to have some real integration tests for the new database code so the rest of the migration can wait.
Reviewed by Cynthia Shang.
Migrate functionality from the Perl Db module to C. For now this is just enough to implement the WAL switch check.
Add the dbGet() helper function to get Db objects easily.
Create macros in harnessPq to make writing pq scripts easier by grouping commonly used functions together.
Reviewed by Cynthia Shang.
The cause of this error seems to be that a failed request takes so long that a subsequent retry at the http level uses outdated headers.
We're not sure if pgBackRest is to blame here (in one case a kernel downgrade fixed it, in another case an incorrect network driver was the problem) so add retries to hopefully deal with the issue if it is not too persistent. If SSL_write() has long delays before reporting an error then this will obviously affect backup performance.
Reported by sean0101n, Tim Garton, Jesper St John, Aleš Zelený.
Error codes were not being caught for SSL_write() so it was hard to see exactly what was happening in error cases. Report errors to aid in debugging.
Also add a retry for SSL_ERROR_WANT_READ. Even though we have not been able to reproduce this case it is required by SSL_write() so go ahead and implement it.
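The retry follows the pattern OpenSSL documents for SSL_write(): check SSL_get_error() and retry on the WANT_READ/WANT_WRITE codes. A minimal sketch, with the socket-readiness wait omitted:

    /* Retry SSL_write() when OpenSSL reports a retryable condition. */
    int result;

    do
    {
        result = SSL_write(ssl, buffer, (int)size);

        if (result <= 0)
        {
            int error = SSL_get_error(ssl, result);

            /* anything else is a real error -- report the code */
            if (error != SSL_ERROR_WANT_READ && error != SSL_ERROR_WANT_WRITE)
                break;

            /* wait for socket readiness here before retrying */
        }
    }
    while (result <= 0);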
Multiple PostgreSQL hosts were supported via the host-id option but there are cases where it is useful to be able to directly specify the host id required, e.g. to iterate through pg* hosts when looking for candidate primaries and standbys during backup.
Keep trying to locate the WAL segment until timeout. This is useful for the check and backup commands which must wait for segments to arrive in the archive.
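A sketch of the wait loop (waitNew()/waitMore() per the common Wait helper; walSegmentFindOne() is a hypothetical stand-in for the archive search):

    /* Poll for the WAL segment until it is found or the timeout expires. */
    const String *result = NULL;
    Wait *wait = waitNew(timeout);

    do
    {
        result = walSegmentFindOne(storage, walSegment);
    }
    while (result == NULL && waitMore(wait));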
The remotes have their own config options (repo-host-config, etc.) so don't pass the local config* options.
This was a regression from the behavior of the Perl code and while there have been no field reports it caused breakage on test systems with multiple configurations.
If this option is set then ports appended to repo-s3-endpoint or repo-s3-host will be ignored.
Setting this option explicitly may be the only way to use a bare ipv6 address with S3 (since multiple colons confuse the parser) but we plan to improve this in the future.
This direct interface to libpq allows simple queries to be run against PostgreSQL and supports timeouts.
Testing is performed using a shim that can use scripted responses to test all aspects of the client code. The shim will be very useful for testing backup scenarios on complex topologies.
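For reference, the standard libpq calls being wrapped look like this (a minimal direct example without the timeout handling the new client adds; pg_switch_wal() is the PostgreSQL >= 10 name):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) == CONNECTION_OK)
        {
            PGresult *result = PQexec(conn, "select pg_switch_wal()");

            if (PQresultStatus(result) == PGRES_TUPLES_OK)
                printf("switched at %s\n", PQgetvalue(result, 0, 0));

            PQclear(result);
        }

        PQfinish(conn);
        return 0;
    }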
Reviewed by Cynthia Shang.
The local process is now entirely migrated to C. Since all major I/O operations are performed in the local process, the vast majority of I/O is now performed in C.
Contributed by David Steele, Cynthia Shang.
Add bool, array, and int64 as valid array subtypes.
Pretty print for the array subtype is not correct but is currently not in use (this can be seen at line 328 in typeJsonTest.c).
Discard all data passed to the filter. Useful for calculating size/checksum on a remote system when no data needs to be returned.
Update ioReadDrain() to automatically use the IoSink filter.
The HTTP server can use either content-length or transfer-encoding to indicate that there is content in the response. HEAD requests do not include content but return all the same headers as GET. In the HEAD case we were ignoring content-length but not transfer-encoding which led to unexpected eof errors on AWS S3. Our test server, minio, uses content-length so this was not caught in integration testing.
Ignore all content for HEAD requests (no matter how it is reported) and add a unit test for transfer-encoding to prevent a regression.
Found by Pavel Suderevsky.
This feature denotes storage that can compress files so that they take up less space than what was written. Currently this includes the Posix and CIFS drivers. The stored size of the file will be rechecked after write to determine if the reported size is different. This check would be wasted on object stores such as S3, and they might not report the file as existing immediately after write.
Also add tests to each storage driver to check features.
Previously only a single filter could be pushed to the remote since order was not being maintained. Now the filters are strictly ordered.
Results are returned from the remote and set in the local IoFilterGroup so they can be retrieved.
Expand remote filter support to include all filters.
Read all data from an IoRead object and discard it. This is handy for calculating size, hash, etc. when the output is not needed.
Update code where a loop was used before.
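The replacement is roughly (sketch using the existing Io/Buffer call names):

    /* Before: drain manually with a loop and a throwaway buffer. */
    Buffer *buffer = bufNew(ioBufferSize());

    do
    {
        ioRead(read, buffer);
        bufUsedZero(buffer);
    }
    while (!ioReadEof(read));

    /* After: one call -- filter results (size, hash, ...) still available. */
    ioReadDrain(read);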
Some HTTP error tests were failing after the upgrade to openssl 1.1.1, though the rest of the unit and integration tests worked fine. This seemed to be related to the very small messages used in the error testing, but it pointed to an issue with the code not being fully compliant, made worse by auto-retry being enabled by default.
Disable auto-retry and implement better error handling to bring the code in line with openssl recommendations.
There's no evidence this is a problem in the field, but having all the tests pass seems like a good idea and the new code is certainly more robust.
Coverage will be complete in the next commit when openssl 1.1.1 is introduced.
Maintaining the storage layer/drivers in two languages is burdensome. Since the integration tests require the Perl storage layer/drivers we'll need them even after the core code is migrated to C. Create an interface layer so the Perl code can be removed and new storage drivers/features introduced without adding Perl equivalents.
The goal is to move the integration tests to C so this interface will eventually be removed. That being the case, the interface was designed for maximum compatibility to ease the transition. The result looks a bit hacky but we'll improve it as needed until it can be retired.
Secure options could show up in the help as "current". While the user must have permissions to see the source of the options (e.g. environment, config file) it's still not a good idea to display them in an unexpected context.
Instead show secure options as <redacted> in the help command.
Amend commit 434cd832 to error when the db histories in archive.info and backup.info do not match.
The Perl code would attempt to reconcile the history by matching on system id and version but we are not planning to migrate that code to C. It's possible that there are users with mismatches but if so they should have been getting errors from info for the last six months. It's easy enough to manually fix these files if there are any mismatches in the field.
Contributed by Cynthia Shang.
If the file is compressible (i.e. not encrypted or already compressed) it can be marked as such in storageNewRead()/storageNewWrite(). If the file is being read from/written to a remote it will be compressed in transit using gzip.
Simplify filter group handling by having the IoRead/IoWrite objects create the filter group automatically. This removes the need for a lot of NULL checking and has a negligible effect on performance since a filter group needs to be created eventually unless the source file is missing.
Allow filters to be created using a VariantList so filter parameters can be passed to the remote.
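One plausible call shape for the read side (parameter syntax illustrative):

    /* Mark a file compressible so a remote transfer gzips it in transit;
       already-compressed or encrypted files would leave this unset. */
    StorageRead *read = storageNewReadP(
        storagePg(), STRDEF("global/pg_control"), .compressible = true);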
This implementation duplicates the functionality of the Perl code but does so with different logic and includes full unit tests.
Along the way at least one bug was fixed, see issue #748.
Contributed by Cynthia Shang.
This filter exactly mimics the behavior of the Perl filter so is a drop-in replacement.
The filter is not integrated yet since it requires the Perl-to-C storage layer interface coming in a future commit.
These names more accurately reflect what the functions do and follow the convention started in Info and InfoPg.
Also remove the ignoreMissing parameter since it was never used.
Contributed by Cynthia Shang.
Some filters (e.g. encryption and compression) produce output even if there is no input. Since the filter group was marked as "done" initially, processing would not run when there was zero input and that resulted in zero output.
All filters start not done so start the filter group the same way.
This allows copying from one S3 object to another. We generally try to avoid doing this but there are a few cases where it is needed and the tests do it quite a bit.
One thing to look out for here is that reads require the http client to be explicitly released by calling httpClientDone(). This means that clients could grow if they are not released properly. The http statistics will hopefully alert us if this is happening.
This cache manages multiple http clients and returns one to the caller that is not busy. It is the responsibility of the caller to indicate when they are done with a client. If returnContent is set then the client will automatically be marked done.
Also add special handling for HEAD requests to recognize that content-length is informational only and no content is expected.
This was not enforced at parse time because repo1-cipher-type could be passed on the command-line even in cases where encryption was not needed by the subprocess.
Filter repo-cipher-type so it is never passed on the command line. If the subprocess does not have access to the passphrase then knowing the encryption type is useless anyway.
Filter groups could not be manipulated once they had been assigned to an IO object. Now they can be freely manipulated up to the time the IO object is opened.
Also, move the filter group into the IO object's context so they don't need to be tracked separately.
The previous implementation searched for the file in a list which worked but was not optimal. For arbitrary bucket structures it would also produce a false negative if a match was not found in the first 1000 entries. This was not an issue for our repo structure since the max hits on exists calls is two but it seems worth fixing to avoid future complications.
The C code was passing the host (if specified) with the request which could force the server into path-style URLs, which are not supported.
Instead, use the Perl logic of always passing bucket.endpoint in the request no matter what host is used for the HTTPS connection.
It's an open question whether we should support path-style URLs but since we don't it's useless to tell the server otherwise. Note that Amazon S3 has deprecated path-style URLs and they are no longer supported on newly created buckets.
Allow commands to be skipped by default in the command help but still work if help is requested for the command directly. There may be other uses for the flag in the future.
Update help for ls now that it is exposed.
Allows listing repo paths/files from the command-line, to be used primarily for testing and debugging.
This command is internal-only so the interface may change at any time without notice.
Not all storage types support paths as a physical thing that must be created/destroyed. Add a feature to determine which drivers use paths and simplify the driver API as much as possible given that knowledge and by implementing as much path logic as possible in the Storage object.
Remove the ignoreMissing parameter from pathSync() since it is not used and makes little sense.
Create a standard list of error messages for the drivers to use and apply them where the code was modified -- there is plenty of work still to be done here.
These functions are not required for repository storage so make them optional and error if they are not implemented for non-repository storage, e.g. pg or spool.
The goal is to simplify the drivers (e.g. S3) that are intended only for repository storage.
The prior behavior was to return NULL so the caller would know the path was missing, but this is rarely useful, complicates the calling code, and increases the chance of segfaults.
The .nullOnMissing param has been added to enable the prior behavior.
/ is escaped in the spec but the Perl renderer we use does not escape it, which leads to checksum mismatches between the two sets of code.
This particular escape seems to be a more recent addition to the spec and is targeted toward embedding JSON in JavaScript.
\/ is still allowed when parsing JSON.
The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure).
The old name will continue to be accepted.
This is just the part of restore run by the local helper processes, not the entire command.
Even so, various optimizations in the code (like pipelining and optimizations for zero-length files) should make the restore command faster on object stores.
This report replaces the lcov report that was generated manually for each release.
The lcov report was overly verbose just to say that we have virtually 100% coverage.
Pre-calculate the value used by logAny() to improve performance and make it more likely to be inlined.
Move IF_LOG_ANY() into LOG_INTERNAL() to simplify the macros and improve performance of LOG() and LOG_PID(). If the message has no chance of being logged there's no reason to call logInternal().
Rename logWill() to logAny() because it seems more intuitive.
The branch coverage exclusion rules were overly broad and included functions that ended in a capital letter, which disabled all coverage for the statement. Improve matching so that all characters in the name must be upper-case for a match.
Some macros with internal branches accepted parameters that might contain conditionals. This made it impossible to tell which branches belonged to which, and in any case an overzealous exclusion rule was ignoring all branches in such cases. Add the DEBUG_COVERAGE flag to build a modified version of the macros without any internal branches to be used for coverage testing. In most cases, the branches were optimizations (like checking logWill()) that improve production performance but are not needed for testing. In other cases, a parameter needed to be added to the underlying function to handle the branch during coverage testing.
Also tweak the coverage rules so that macros without conditionals are automatically excluded from branch coverage as long as they are not themselves a parameter.
Finally, update tests and code where missing coverage was exposed by these changes. Some code was updated to remove existing coverage exclusions when it was a simple change.
Filters had different ideas about what "done" meant and this added complication to the group filter processing. For example, gzip decompression would detect end of stream and mark the filter as done before it had been flushed.
Improve the IoFilter interface to give a consistent definition of done across all filters, i.e. no filter can be done until it has started flushing no matter what the underlying driver reports. This removes quite a bit of tricky logic in the processing loop which tried to determine when a filter was "really" done.
Also improve management of the input buffers by pointing directly to the prior output buffer (or the caller's input) to eliminate loops that set/cleared these buffers.
If content was zero-length then the IO object was not created. This put the burden on the caller to test that the IO object existed before checking eof.
Instead, create an IO object even if it will immediately return eof. This has little cost and makes the calling code simpler.
Also add an explicit test for zero-length files in S3 and a few assertions.
The rules for when a C remote is required are getting complicated and will get worse when restoreFile() is migrated.
Instead, set the --c option when a C remote is required. This option will be removed when the remote is entirely implemented in C.
Most of the *Free() functions are pretty generic so add macros to make creating them as easy as possible.
Create a distinction between *Free() functions that the caller uses to free memory and callbacks that free third-party resources. There are a number of cases where a driver needs to free resources but does not need a normal *Free() because it is handled by the interface.
Add common/object.h for macros that make object maintenance easier. This pattern can also be used for many more object functions.
Rename memContextCallback() to memContextCallbackSet() to be more consistent with other parts of the code.
Free all context memory when an exception is thrown from a callback. Previously only the child contexts would be freed and this resulted in some allocations being lost. In practice this is probably not a big deal since the process will likely terminate shortly, but there may well be cases where that is not true.
Add GLUE() macro which is useful for creating identifiers.
Move MACRO_TO_STR() here and rename it STRINGIFY(). This appears to be the standard name for this type of macro and it is also an awesome name.
Remove "File" and "Driver" from object names so they are shorter and easier to keep consistent.
Also remove the "driver" directory so storage implementations are visible directly under "storage".
The function pointer casting used when creating drivers made changing interfaces difficult and led to slightly divergent driver implementations. Unit testing caught production-level errors but there were a lot of small issues and the process was harder than it should have been.
Use void pointers instead so that no casts are required. Introduce the THIS_VOID and THIS() macros to make dealing with void pointers a little safer.
Since we don't want to expose void pointers in header files, driver functions have been removed from the headers and the various driver objects return their interface type. This cuts down on accessor methods and the vast majority of those functions were not being used. Move functions that are still required to .intern.h.
Remove the special "C" crypto functions that were used in libc and instead use the standard interface.
Add bufDup() and bufNewUsedC().
Arrange bufNewC() params to match bufNewUsedC() since they have always seemed backward.
Fix bufHex() to only render the used portion of the buffer and fix some places where used was not being set correctly.
Use a union to make macro assignments for all legal values without casting. This is much more likely to catch bad assignments.
There is only one instance in the core code where this helps. It is mostly helpful in the tests.
There is an argument to be made that only THROW_SYS_ERROR*() variants should be used in the core code to improve test coverage. If so, that will be the subject of a future commit.
Some functions (e.g. getpwnam()/getgrnam()) will return an error but not set errno. In this case there's no use in appending strerror(), which will be "Success". This is confusing since an error has just been reported.
At least in the examples above, an error with no errno set just means "missing" and our current error message already conveys that.
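The resulting check is simple (sketch):

    #include <errno.h>
    #include <pwd.h>
    #include <stdbool.h>

    /* Return true when the user simply does not exist, as opposed to a
       real system error where strerror(errno) would be meaningful. */
    static bool
    userMissing(const char *userName)
    {
        errno = 0;

        return getpwnam(userName) == NULL && errno == 0;
    }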
The remote list was at most 9 (based on pg[1-8]-* max index) so anything over 8 wrote into unallocated memory.
The remote for the main process is (currently) stored in position zero so do the same for remotes started from locals, since there should only be one. The main process will need to start more remotes in the future which is why there is extra space.
Reported by Jens Wilke.
The function provides all the file/path/link information required to build a backup manifest.
Also update storageInfo() to provide the same information for a single file.
At the same time change the way that load constructors work (and are named) so that Ini objects do not persist after the constructors complete.
infoArchiveSave() is excluded from this commit since it is just a trivial call to infoPgSave() and won't be required soon.
In most cases the JSON type is known so this is more efficient than converting to Variant first, both in terms of memory and time.
Also rename some of the existing functions for consistency.
Variants were being used to expose String and StringList types but this can be done more simply with an additional method.
Using only strings also allows for a more efficient implementation down the road.
This greatly reduces calls to filter processing, which is a performance benefit, but also makes the trace logs smaller and easier to read.
However, this means that ioWriteFlush() will no longer work with filters since a full flush of IoFilterGroup would require an expensive reset. Currently ioWriteFlush() is not used in this scenario so for now just add an assert to ensure it stays that way.
These are more efficient than creating buffers in place when needed.
After the replacement we discovered that bufNewStr() and bufNewZ() were not being used in the core code, so they were removed. This required using the macros in tests, which is not the usual pattern.
Since the introduction of blocking read drivers (e.g. IoHandleRead, TlsClient) the non-blocking drivers have used the same rules for determining maximum buffer size, i.e. read only as much as requested. This is necessary so the blocking drivers don't get stuck waiting for data that might not be coming.
Instead mark blocking drivers so IoRead knows how much buffer to allow for the read. The non-blocking drivers can now request the maximum number of bytes allowed by buffer-size.
Add production checks to ensure no filter gets a zero-size input buffer.
Also, optimize the case where a filter returns no output. There's no sense in running downstream filters if they have no new input.
The IoRead object was passing zero-length buffers into the filter processing code but not all the filters were happy about getting them.
In particular, the gzip compression filter failed if it was given no input directly after it had flushed all of its buffers. This made the problem rather intermittent even though a zero-length buffer was being passed to the filter at the end of every file. It also explains why tweaking compress-level or buffer-size allowed the file to go through.
Since this error was happening after all processing had completed, there does not appear to be any risk that successfully processed files were corrupted.
Reported by brunre01, jwpit, Tomasz Kontusz, guruguruguru.
Asserts were only reported on stderr rather than being returned through the protocol layer. This did not appear to be very reliable.
Instead, report the assert through the protocol layer like any other error. Add a stack trace if an assert error or debug logging is enabled.
These work almost exactly like the String constant macros. However, a struct per variant type was required which meant custom constructors and destructors for each type.
Propagate the variant constants out into the codebase wherever they are useful.
The STRING_CONST() macro worked fine for constants but was not able to constify strings created at runtime.
Add the STR() macro to do this by using strlen() to get the size.
Also rename STRING_CONST() to STRDEF() for brevity and to match the other macro name.
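Usage then looks roughly like this (the environment variable is just an example of a runtime string):

    /* STRDEF() for compile-time constants, STR() to constify at runtime. */
    const String *infoFile = STRDEF("backup.info");

    const char *value = getenv("PGBACKREST_STANZA");
    const String *stanza = value != NULL ? STR(value) : NULL;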
Removed the "anchor" parameter because it was never used in any calls in the Perl code so it was just a dead parameter that always defaulted to true.
Contributed by Cynthia Shang.
This was not an intentional feature in Perl, but it works, so it makes sense to implement the same syntax in C.
This is a break from other places where a -port option is explicitly supplied, so it may make sense to support both styles going forward. This commit does not address that, however.
Reported by Kyle Nevins.
The Perl lib we have been using for TLS allows dots in wildcards, but this is forbidden by RFC-2818. The new TLS implementation in C forbids this pattern, just as PostgreSQL and curl do.
However, this does present a problem for users who have been using bucket names with dots in older versions of pgBackRest. Since this limitation exists for security reasons there appears to be no option but to take a hard line and do our best to notify the user of the issue as clearly as possible.
This problem was not specific to archive-get, but that was the only place it was manifesting in the last release. The new archive-push command was also affected.
The issue was with daemon processes that had closed all their file descriptors. When exec'ing and setting up pipes to communicate with a child process the dup2() function created file descriptors that overlapped with the first descriptor (stdout) that was being duped into. This descriptor was subsequently closed and wackiness ensued.
If logging was enabled (the default) that increased all the file descriptors by one and everything worked.
Fix this by checking if the file descriptor to be closed is the same one being dup'd into. This solution may not be generally applicable but it works fine in this case.
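The fix amounts to a guard like this (simplified sketch):

    #include <unistd.h>

    /* Dup the pipe onto stdout. When all descriptors were closed at
       daemonization, pipeFd may already be STDOUT_FILENO -- closing it
       then would destroy the descriptor just dup'd into. */
    static void
    pipeToStdOut(int pipeFd)
    {
        if (pipeFd != STDOUT_FILENO)
        {
            dup2(pipeFd, STDOUT_FILENO);        /* error check omitted */
            close(pipeFd);
        }
    }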
Reported by Brad Nicholson.
This new implementation should behave exactly like the old Perl code with the exception of updated log messages.
Remove as much of the Perl code as possible without breaking other commands.
When a repository server is configured, commands that modify the repository acquire a remote lock as well as a local lock for extra protection against multiple writers.
Instead of the custom logic used in Perl, make remote locking part of the command configuration.
This also means that the C remote needs the stanza since it is used to construct the lock name. We may need to revisit this at a later date.
While the local processes are doing their jobs the remote connection from the main process may timeout.
Send occasional noops to ensure that doesn't happen.
This may not be the best way to detect 64-bit platforms but it seems to be working fine so far.
Create a macro to make it clearer what is being done and to make it easier to change the implementation.
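The check itself can be as simple as this (macro name illustrative):

    /* Treat the platform as 64-bit when size_t is 8 bytes. */
    #define TEST_64BIT()    (sizeof(size_t) == 8)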
This was missed because the unit tests were reusing a buffer without resetting it to zero, so this flag ended up still set when the test function was called.
This was not a live issue since it only expressed in tests and this code is not used in master yet.
The test harness was not being built with warnings which caused some wackiness with an improperly structured switch. Just use the same warnings as the code being tested.
Also enable warnings on code that is not directly being tested since other code modules are frequently modified during testing.
We deal with some pretty big lists in archive-push so a nested-loop anti-join looked like it would not be efficient enough.
This merge anti-join should do the trick even though both lists must be sorted first.
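With both lists sorted the anti-join is a single linear pass. An illustrative sketch over plain C string arrays (emit() is a hypothetical stand-in for building the result list):

    #include <string.h>

    /* Call emit() for each item of sorted, de-duplicated list1 that does
       not appear in sorted list2. */
    static void
    antiJoin(const char **list1, unsigned int list1Size,
             const char **list2, unsigned int list2Size,
             void (*emit)(const char *item))
    {
        unsigned int idx1 = 0, idx2 = 0;

        while (idx1 < list1Size)
        {
            int compare = idx2 < list2Size ? strcmp(list1[idx1], list2[idx2]) : -1;

            if (compare < 0)
                emit(list1[idx1++]);            /* not in list2 */
            else if (compare == 0)
            {
                idx1++;                         /* matched -- skip */
                idx2++;
            }
            else
                idx2++;                         /* advance list2 only */
        }
    }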
The prior behavior on a global error (i.e. not file specific) was to write an individual error file for each WAL file being processed. On retry each of these error files would be removed, and if the error was persistent, they would then be recreated. In a busy environment this could mean tens or hundreds of thousands of files.
Another issue was that the error files could not be written until a list of WAL files to process had been generated. This was easy enough for archive-get but archive-push requires more processing and any errors that happened when generating the list would only be reported in the pgBackRest log rather than the PostgreSQL log.
Instead write a global.error file that applies to any WAL file that does not have an explicit ok or error file. This reduces churn and allows more errors to be reported directly to PostgreSQL.
Having a copy per version worked well until it was time to add new features or modify existing functions. Then it was necessary to modify every version and try to keep them all in sync.
Consolidate all the PostgreSQL types into a single file using #if for type versions. Many types do not change or change infrequently so this cuts down on duplication. In addition, it is far easier to see what has changed when a new version is added.
Use macros to write the interface functions. There is still duplication here since some changes require a new copy of the macro, but it is far less than before.
Now that repositories are writable the storage drivers that don't yet support file writes need to be updated to do so.
Note that the part size for multi-part upload has not been defined as a proper constant. This will become an option in the near future so it doesn't seem worth creating a constant that we might then forget to remove.
The xml objects only exposed read methods of the underlying libxml2.
This worked for S3 commands that only received data but to send data we need to be able to create XML documents from scratch.
Add the ability to create empty documents and add nodes and contents.
The C code was assuming that the current PostgreSQL version in archive.info/backup.info was the most recent item in the history, but this is not always the case with some stanza-upgrade scenarios. If a cluster is restored from before the upgrade and stanza-upgrade is run again, it will revert db-id to the original history item.
Instead, load db-id from the db section explicitly as the Perl code does.
This did not affect archive-get since it does a reverse scan through the history versions and does not rely on the current version.
Logging was being enabled on local/remote processes even if --log-subprocess was not specified, so fix that.
Also, make sure that stderr is enabled at error level as it was on Perl. This helps expose error information for debugging.
For remotes, suppress log and lock paths since these are not applicable on remote hosts. These options should be set in the local config if they need to be overridden.
None of our C HTTP requests have needed to output a body, but they will with the migration of archive-push.
Also, add constants that are useful when POSTing/PUTing data.
This condition was not being properly checked for in the C code and it caused problems in the info command, at the very least.
Instead of applying a local fix, introduce a new path option type that will rigorously check the format of any incoming paths.
Reported by Marc Cousin.
This command was previously forked off from the archive-push command which required a bit of artificial option and log manipulation.
A separate command is easier to test and will work on platforms that don't have fork(), e.g. Windows.
This driver borrows heavily from the Posix driver.
At this point the only difference is that CIFS does not allow explicit directory fsyncs so they need to be suppressed. At some point the CIFS driver will also omit link support.
With the addition of this driver repository storage is now writable.