This change adds first-class metadata support to the Azure Blob backend,
including headers, user metadata, tags, and modtime overrides, and wires
it through uploads and server-side copies.
Note that this is a behavior change: rclone will now set the "mtime"
custom metadata when doing server-side copies to Azure if the
`--metadata` flag is given.
- Map standard headers: cache-control, content-disposition,
content-encoding, content-language, and content-type to the corresponding
x-ms-blob-* HTTP headers.
- Map user metadata: any non-reserved keys (excluding x-ms-*) are sent as
blob user metadata. Keys are normalized to lowercase for consistency.
- Support tags: parse `x-ms-tags` as a comma-separated list of key=value
pairs and apply them on uploads and copies (see the sketch after this list).
- Support mtime override: accept `mtime` in metadata (RFC3339/RFC3339Nano)
to override the stored modtime persisted in user metadata.
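A minimal sketch, with an illustrative function name that is not taken from
the backend's actual code, of how the comma-separated `x-ms-tags` value could
be split into key=value pairs:

    package sketch

    import (
        "fmt"
        "strings"
    )

    // parseTags turns "key1=value1,key2=value2" into a map suitable for
    // passing as blob tags on uploads and server-side copies.
    func parseTags(raw string) (map[string]string, error) {
        tags := make(map[string]string)
        for _, pair := range strings.Split(raw, ",") {
            pair = strings.TrimSpace(pair)
            if pair == "" {
                continue
            }
            k, v, ok := strings.Cut(pair, "=")
            if !ok {
                return nil, fmt.Errorf("malformed tag %q: expected key=value", pair)
            }
            tags[strings.TrimSpace(k)] = strings.TrimSpace(v)
        }
        return tags, nil
    }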
Backblaze has updated its b2_authorize_account API endpoint: newly created
application keys are now "multi-bucket" keys, capable of being limited to
multiple buckets. These keys can only be used with the v4 endpoint, not v1,
which returns an HTTP 400.
This commit switches authorization to the v4 endpoint, allowing such keys to
work with any of the allowed buckets.
With multi-bucket keys, missing restricted buckets can be non-fatal.
It also supports listing the root with multi-bucket API keys.
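As a hedged sketch only: b2_authorize_account is called with HTTP Basic auth
using the application key ID and key. The exact v4 URL path below is an
assumption based on the existing /b2api/v<N>/ pattern, not taken from the
rclone source.

    package sketch

    import (
        "fmt"
        "io"
        "net/http"
    )

    // authorizeAccount calls b2_authorize_account with Basic auth.
    // Multi-bucket application keys must use the v4 endpoint; sending
    // them to v1 fails with an HTTP 400.
    func authorizeAccount(keyID, applicationKey string) (string, error) {
        // Assumed v4 path, following the established /b2api/v<N>/ convention.
        const authURL = "https://api.backblazeb2.com/b2api/v4/b2_authorize_account"

        req, err := http.NewRequest("GET", authURL, nil)
        if err != nil {
            return "", err
        }
        req.SetBasicAuth(keyID, applicationKey)

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        if resp.StatusCode != http.StatusOK {
            return "", fmt.Errorf("authorize failed: %s: %s", resp.Status, body)
        }
        return string(body), nil
    }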
#8947 implemented support for the If-Match and If-None-Match headers for S3 PUT
Object requests; however, this support did not extend to multi-part copy and
upload requests. These headers are now applied by including them in the
CompleteMultipartUpload request.
This also updates the auto-generated code, which was needed for multipart copy.
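A minimal sketch with illustrative names (rclone's real change goes through
its generated S3 client code) of attaching the conditional headers to the
CompleteMultipartUpload request:

    package sketch

    import "net/http"

    // setConditionalHeaders adds If-Match / If-None-Match so S3 can reject
    // the completed multipart upload if the existing object's ETag does not
    // match (If-Match) or if the object already exists (If-None-Match: *).
    func setConditionalHeaders(req *http.Request, ifMatch, ifNoneMatch string) {
        if ifMatch != "" {
            req.Header.Set("If-Match", ifMatch)
        }
        if ifNoneMatch != "" {
            req.Header.Set("If-None-Match", ifNoneMatch)
        }
    }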
Especially when using rclone via the rc, it is helpful to configure the box
backend using the contents of the config file instead of having to
upload the file to the server that is running rclone.
The If-Match and If-None-Match headers were being dropped rather
than applied to the Put Object request to S3. These headers
make requests conditional, which allows AWS S3 Bucket Policies to
prevent objects from being overwritten.
The bisync tests have been failing as Dropbox is failing to move
just-created objects. This seems to be caused by an eventual consistency
problem, so this attempts to fix it by retrying the specific error.
The uloz.to backend was failing to download files, instead returning
an HTML page with a "Slow download" message. This was caused by
recent changes in the uloz.to API.
This commit fixes the issue by making the following changes to the
download process:
1. The `hash` received from the download link API is now appended as a
query parameter to the download URL (sketched below).
2. The download is now performed using the authenticated `rest` client
to ensure premium access is recognized.
3. The `DeviceID` is now generated dynamically for each download request
to avoid potential rate-limiting of a static ID.
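A minimal sketch of step 1; the URL, parameter name, and function are
placeholders rather than the backend's actual code:

    package main

    import (
        "fmt"
        "net/url"
    )

    // withHash appends the hash returned by the download link API as a
    // query parameter on the download URL.
    func withHash(downloadURL, hash string) (string, error) {
        u, err := url.Parse(downloadURL)
        if err != nil {
            return "", err
        }
        q := u.Query()
        q.Set("hash", hash)
        u.RawQuery = q.Encode()
        return u.String(), nil
    }

    func main() {
        s, _ := withHash("https://example.uloz.to/download/abc123", "deadbeef")
        fmt.Println(s) // https://example.uloz.to/download/abc123?hash=deadbeef
    }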
Added support for reading and writing zstd-compressed archives in seekable format
using "github.com/klauspost/compress/zstd" and
"github.com/SaveTheRbtz/zstd-seekable-format-go/pkg".
Bumped Go version from 1.24.0 to 1.24.4 due to requirements of
"github.com/SaveTheRbtz/zstd-seekable-format-go/pkg".
Before this fix, using --sftp-ssh with the sftp backend could leave
zombie processes.
This patch fixes the problem that sshClientExternal.session was never
assigned, so Wait() always returned nil without waiting for the SSH
process to exit. This caused zombie processes because the process was
never reaped.
It also ensures that Wait() is only called once on each process.
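A minimal sketch, with hypothetical type names rather than rclone's actual
ones, of reaping the external ssh process exactly once so it cannot be left
as a zombie:

    package sketch

    import (
        "os/exec"
        "sync"
    )

    // externalSSH keeps hold of the started ssh command so it can be reaped.
    type externalSSH struct {
        cmd      *exec.Cmd
        waitOnce sync.Once
        waitErr  error
    }

    // Wait reaps the child process. It is safe to call more than once: the
    // underlying cmd.Wait() runs exactly once and its result is cached.
    func (s *externalSSH) Wait() error {
        s.waitOnce.Do(func() {
            s.waitErr = s.cmd.Wait()
        })
        return s.waitErr
    }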
I gave this issue to Copilot to fix as an experiment. It went off in
the wrong direction to start with and fixed something which wasn't the
problem but still needed fixing. With a bit of a nudge it fixed the
correct problem too.
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
94deb6bd6f b2: Add Server-Side encryption support
From the commit above, without setting SSE, rclone would send invalid
SSE requests with empty strings. This is because omitempty only works with
struct pointers, not structs.
This commit adds SSE-C (Server-Side Encryption with Customer-provided keys)
support to the B2 native backend. The server uses a customer-provided
AES-256 key to encrypt the files when you upload them to the bucket, and
then discards your key from the server's RAM after you're done uploading.
The option names and descriptions are based on the S3 backend
implementation, as the way S3 and B2 do SSE-C is pretty similar.
Fixes #6585
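A hedged sketch of deriving the SSE-C request values from a raw customer
key; the X-Bz-* header names follow B2's documented SSE-C scheme but are
written here from memory, so treat them as assumptions rather than as
rclone's actual code:

    package sketch

    import (
        "crypto/md5"
        "encoding/base64"
        "net/http"
    )

    // setSSECHeaders adds the customer-key headers to an upload request:
    // the algorithm, the base64-encoded key, and the base64-encoded MD5 of
    // the raw key, which the server uses to check the key arrived intact.
    func setSSECHeaders(req *http.Request, key []byte) {
        sum := md5.Sum(key)
        req.Header.Set("X-Bz-Server-Side-Encryption-Customer-Algorithm", "AES256")
        req.Header.Set("X-Bz-Server-Side-Encryption-Customer-Key", base64.StdEncoding.EncodeToString(key))
        req.Header.Set("X-Bz-Server-Side-Encryption-Customer-Key-Md5", base64.StdEncoding.EncodeToString(sum[:]))
    }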
Give users a way to explicitly acknowledge that pipes, sockets and block
devices are to be ignored without warnings.
This follows the precedent set in commit 6152bab28 (local: add
--skip-links to suppress symlink warnings, 2017-07-21) for ignoring
warnings about symlinks.
* Adds "aix/ppc64" to the cross-compile target list.
* Includes AIX in the build tag of "metadata_other.go".
* Excludes AIX from the main ncdu build tags.
* Marks AIX as an unsupported platform for ncdu.
* Excludes AIX from the fallback redirect implementation.
* Excludes AIX from unix build tags to avoid undefined unix.WNOHANG (see the
build-tag sketch below).
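A minimal illustration of the kind of build constraint involved; the
comment and package name are placeholders, not actual rclone files:

    // This file is excluded on AIX, which lacks features the rest of the
    // package relies on (for example unix.WNOHANG).

    //go:build !aix

    package placeholder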
Before this change, you had to modify a fragile data structure
containing all providers. This often led to things being out of order,
duplicates and conflicts whilst merging, as well as the changes for
one provider being scattered across different places in the file.
After this change, new providers are defined in an easy-to-edit YAML file,
one per provider.
The config output has been tested before and after for all providers
and any changes are cosmetic only.
This renames whitelabel authentication to traditional authentication and adds
support for the main Jottacloud service here as well, since it can be used as
an alternative to the authentication based on a personal login token for those
who prefer it. The documentation is also adjusted correspondingly, and the
authentication section has been restructured a bit more, since some of the
sections that were under standard authentication in reality also apply to
traditional authentication.