This renames whitelabel authentication to traditional authentication and adds support
for it in the main Jottacloud service as well, since it can be used as an alternative to
the authentication based on a personal login token for those who prefer it. The
documentation is adjusted correspondingly, and the authentication section is
restructured a bit more, since some of the subsections that were under standard
authentication in reality also apply to traditional authentication.
This adds support for them in the whitelabel authentication type, relying on OpenID
Connect, the same as Telia, Tele2 etc. already use.
Until recently the Elkjøp subsidiaries still supported only the legacy authentication
type, but that seems to have changed: they no longer support legacy authentication,
which made existing rclone versions incompatible with them.
With this, legacy authentication has no known uses, but its implementation is kept
for now.
Fixes #8852
This fixes the issue where configuration would fail after supplying the password:
Reveal failed: input too short when revealing password - is it obscured?
Before this change, rclone would unnecessarily retry downloads when
the `Link.Expire` field was unreliable but the download URL contained
a valid expire query parameter. This primarily affects cases where
media links are unavailable or when `no_media_link` is enabled.
The `Link.Valid()` method now primarily checks the URL's expire query
parameter (as Unix timestamp) and falls back to the Expire field
only when URL parsing fails. This eliminates the `error no link`
retry loops while maintaining backward compatibility.
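A minimal sketch of the new check, assuming a Link type with URL and Expire fields
(the real backend types differ; this is an illustration, not the actual code):

    package backend // illustrative stand-in package

    import (
        "net/url"
        "strconv"
        "time"
    )

    // Link is a minimal stand-in for the backend's link type (assumed).
    type Link struct {
        URL    string
        Expire time.Time
    }

    // Valid prefers the URL's expire query parameter (a Unix timestamp)
    // and falls back to the Expire field only when the URL yields nothing.
    func (l *Link) Valid() bool {
        if u, err := url.Parse(l.URL); err == nil {
            if s := u.Query().Get("expire"); s != "" {
                if ts, err := strconv.ParseInt(s, 10, 64); err == nil {
                    return time.Now().Unix() < ts
                }
            }
        }
        return !l.Expire.IsZero() && time.Now().Before(l.Expire)
    }
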
Signed-off-by: Youfu Zhang <zhangyoufu@gmail.com>
Before this change the minimum chunk size would default to 96M, which would allow a
maximum file size of just below 1TB to be uploaded, due to the 10000 part rule for b2.
Now the calculated chunk size is used, so the chunk size can be up to 5GB, giving a
maximum file size of 50TB.
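As a quick back-of-envelope check of those limits (a sketch; b2 counts uploads
against a 10000-part maximum):

    package main

    import "fmt"

    func main() {
        const maxParts = 10000
        fmt.Println(int64(96)<<20 * maxParts)        // 96M chunks: 1006632960000 bytes, just under 1TiB
        fmt.Println(int64(5_000_000_000) * maxParts) // 5GB chunks: 50000000000000 bytes = 50TB
    }
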
Fixes #8460
Before this change, TestMetadata could fail due to a difference between the
user's local time zone and UTC causing the string representation of the date to
be off by one day. This change fixes the issue by comparing both in the Local
time zone.
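A minimal illustration of the failure mode, with a made-up time and zone:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // 23:30 on Jan 1 in UTC-5 is 04:30 on Jan 2 in UTC:
        loc := time.FixedZone("UTC-5", -5*60*60)
        t := time.Date(2025, 1, 1, 23, 30, 0, 0, loc)
        fmt.Println(t.Format("2006-01-02"))       // 2025-01-01
        fmt.Println(t.UTC().Format("2006-01-02")) // 2025-01-02 - off by one day
    }

Formatting both sides in the same location makes the comparison stable.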
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).
However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.
An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295
This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."
A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash
It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
In this commit we broke server side copy for files with spaces:
4c5764204d internetarchive: fix server side copy files with &
This fixes the problem by using rest.URLPathEscapeAll, which escapes everything
possible.
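As a sketch of the "escape everything" idea (assuming, like rest.URLPathEscapeAll,
that every byte gets percent-encoded; this is an illustration, not rclone's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeAll percent-encodes every byte of the input, so spaces
    // and & are both represented unambiguously.
    func escapeAll(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); i++ {
            fmt.Fprintf(&b, "%%%02X", s[i])
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapeAll("a & b")) // %61%20%26%20%62
    }
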
Fixes #8754
This commit introduces a new validation step to ensure data integrity
during file uploads.
- The API's returned file name (new.File.Name) is now verified
against the requested file name (leaf) immediately after
the initial upload ticket is created, as sketched after this list.
- If a mismatch is detected, the upload process is aborted with an error,
and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes
might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded
without client-side awareness.
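A minimal sketch of the verification, using the names from the commit (new.File.Name,
leaf) with assumed stand-in types:

    package main

    import "fmt"

    type apiFile struct{ Name string }
    type uploadTicket struct{ File apiFile }

    // verifyUploadName aborts the upload when the API reports a different
    // name than requested; the caller's defer then deletes the partial file.
    func verifyUploadName(new *uploadTicket, leaf string) error {
        if new.File.Name != leaf {
            return fmt.Errorf("upload: API returned name %q, want %q", new.File.Name, leaf)
        }
        return nil
    }

    func main() {
        t := &uploadTicket{File: apiFile{Name: "report(1).pdf"}}
        fmt.Println(verifyUploadName(t, "report.pdf"))
    }
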
Before this change, server side copy of files with & gave the error:
Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
header has bad character
This fix switches from url.PathEscape, which doesn't escape &, to url.QueryEscape,
which does.
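The difference is easy to demonstrate with the standard library (note that
QueryEscape also turns spaces into +, which is why a follow-up change moved to
rest.URLPathEscapeAll):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        s := "a & b"
        fmt.Println(url.PathEscape(s))  // a%20&%20b - the & survives unescaped
        fmt.Println(url.QueryEscape(s)) // a+%26+b   - the & is escaped
    }
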
Fixes #8754
This reverts commit 64ed9b175f.
This fails the integration tests with
s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
Error Trace: backend/s3/s3_internal_test.go:439
Error: Should be true
Test: TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
Messages: Need to set UseAlreadyExists quirk
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.
However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.
This change creates separate connections for each goroutine, which allows true
parallelism by giving each goroutine its own SMB connection.
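A generic sketch of the pattern, with dial standing in for taking a connection from
the pool (not the actual smb backend code):

    package main

    import (
        "fmt"
        "sync"
    )

    type smbConn struct{ id int }

    // dial stands in for obtaining a fresh connection from the pool.
    func dial(id int) *smbConn { return &smbConn{id: id} }

    func main() {
        const writers = 4 // one goroutine per concurrent chunk
        var wg sync.WaitGroup
        for i := 0; i < writers; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                c := dial(id) // each goroutine gets its own connection
                fmt.Printf("writer %d has connection %d\n", id, c.id)
                // ... WriteAt this goroutine's chunks over c ...
            }(i)
        }
        wg.Wait()
    }
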
Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.
When copying from a non-S3-compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:
aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied
This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.
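A hedged sketch of the stripping, with a hypothetical helper name (the backend's
actual function differs):

    package main

    import (
        "fmt"
        "strings"
    )

    // stripAWSChunked drops "aws-chunked" from a comma-separated
    // Content-Type value, keeping any remaining parts.
    func stripAWSChunked(ct string) string {
        var kept []string
        for _, part := range strings.Split(ct, ",") {
            if p := strings.TrimSpace(part); p != "" && !strings.EqualFold(p, "aws-chunked") {
                kept = append(kept, p)
            }
        }
        return strings.Join(kept, ",")
    }

    func main() {
        fmt.Println(stripAWSChunked("aws-chunked,text/plain")) // text/plain
        fmt.Println(stripAWSChunked("aws-chunked"))            // (empty)
    }
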
Fixes #8724
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.
This fixes the problem by adjusting the accounting delay.
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives an InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involves recursively calling the pacer. At this point the pacer can
easily have no connections left, which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.
This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
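A generic illustration of the deadlock pattern with a bounded semaphore (not the
azureblob code):

    package main

    func main() {
        sem := make(chan struct{}, 1) // think --max-connections = 1
        sem <- struct{}{}             // slot held by the failing upload
        // The recovery path re-enters the same limiter; with no free
        // slots this blocks forever:
        sem <- struct{}{} // deadlock - Go's runtime aborts the program here
    }

Giving the recovery call its own temporary pacer means it never waits on slots held
by the very callers it is meant to unblock.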
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.
The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers
This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.
Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
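A sketch of the idea with hypothetical types (fileHeaders and setModTime are
stand-ins, not the Azure SDK API); the point is that the set-properties request must
re-send every header we want to keep:

    package main

    import "fmt"

    // fileHeaders models the properties sent with set-properties; any
    // header omitted from the request is cleared on the file.
    type fileHeaders struct {
        ContentMD5  []byte
        ContentType string
        ModTime     string
    }

    func setModTime(existing fileHeaders, modTime string) fileHeaders {
        return fileHeaders{
            ContentMD5:  existing.ContentMD5, // re-send, or the hash is erased
            ContentType: existing.ContentType,
            ModTime:     modTime,
        }
    }

    func main() {
        got := setModTime(fileHeaders{ContentMD5: []byte{0x9e, 0x10}, ContentType: "text/plain"},
            "2025-01-01T00:00:00Z")
        fmt.Printf("%+v\n", got)
    }
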