* Adds "aix/ppc64" to the cross-compile target list.
* Including AIX in the build tag of "metadata_other.go".
* Excluding AIX from the main ncdu build tags.
* Marking AIX as an unsupported platform for ncdu.
* Excluding AIX from the fallback redirect implementation.
* Excluding AIX from unix build tags to avoid undefined unix.WNOHANG.
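These exclusions use Go's standard build constraints. A minimal sketch of the pattern (the actual rclone files may differ):

```go
//go:build !aix

// Package ncdu is compiled out on AIX by the constraint above. A stub
// file with the inverse constraint (//go:build aix) can then report
// the platform as unsupported.
package ncdu
```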
If the pacer was used recursively while --max-connections was in use,
then it could deadlock if all the connections were in use at the time
of the recursive call (which was likely).
This affected the azureblob backend because when it receives an
InvalidBlockOrBlob error it attempts to clear the condition before
retrying. This in turn involves recursively calling the pacer.
This fixes the problem by skipping the --max-connections check if the
pacer is called recursively.
The recursive detection is done by stack inspection which isn't ideal,
but the alternative would be to add ctx to all >1,000 pacer calls. A
benchmark shows that stack inspection takes about 55ns per stack
level, so it is relatively cheap.
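A minimal sketch of recursion detection by stack inspection (the helper and its skip depth are illustrative, not the actual pacer code):

```go
package pacer

import (
	"runtime"
	"strings"
)

// calledRecursively reports whether the calling function already
// appears further up the goroutine's call stack. fnName should be a
// suffix of the fully qualified function name.
func calledRecursively(fnName string) bool {
	pc := make([]uintptr, 64)
	// Skip runtime.Callers, this helper and the current invocation of
	// the caller, then look for an earlier invocation of the caller.
	n := runtime.Callers(3, pc)
	frames := runtime.CallersFrames(pc[:n])
	for {
		frame, more := frames.Next()
		if strings.HasSuffix(frame.Function, fnName) {
			return true
		}
		if !more {
			return false
		}
	}
}
```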
This turned out to be a problem in the tests. The tests used to do
1. allocate
2. increment
3. free
4. decrement
But if one goroutine had just completed 2 and another had just
completed 3, the test could register too many allocations.
This was fixed by doing the test in this order instead:
1. allocate
2. increment
3. decrement
4. free
The 4 operations are atomic.
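A hypothetical sketch of the fixed ordering in test form (names and sizes invented): because the decrement happens before the free, a concurrent observer can never count more allocations than actually exist.

```go
package pool_test

import (
	"sync"
	"sync/atomic"
	"testing"
)

func TestAllocAccounting(t *testing.T) {
	var inUse atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			buf := make([]byte, 4096) // 1. allocate
			inUse.Add(1)              // 2. increment
			_ = buf                   // ... use the buffer ...
			inUse.Add(-1)             // 3. decrement
			buf = nil                 // 4. free
			_ = buf
		}()
	}
	wg.Wait()
	if n := inUse.Load(); n != 0 {
		t.Errorf("counter left at %d, want 0", n)
	}
}
```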
Fixes #8813
As of
4280ec75cc
the lib/transform docs are generated with //go:generate and embedded with
//go:embed.
Before this change, however, they were not getting automatically updated with
subsequent changes (like
fe62a2bb4e)
because `go generate ./lib/transform` was not being run as part of the
release process.
This change fixes that by running it in `make commanddocs`.
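The generate-and-embed pattern looks roughly like this (the file and variable names here are illustrative, not the exact lib/transform source):

```go
package transform

import _ "embed"

// Regenerate transform.md whenever the examples change. This only runs
// when `go generate ./lib/transform` is invoked, e.g. by the Makefile.
//go:generate go run gen_help.go

//go:embed transform.md
var helpDoc string
```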
Before this change we used an overcomplicated method of memory
reservations in the pool.RW which caused deadlocks.
This changes it to use a much simpler reservation system where we
actually reserve the memory and store it in the pool.RW. This allows
us to use the semaphore.Weighted to count the actual memory in use
(rather than the memory in use and in the cache). This in turn allows
accurate use of the semaphore by users wanting memory.
Because multipart transfers can need more than one buffer to complete,
if --transfers was set very high it was possible for lots of multipart
transfers to start, each grab fewer buffers than needed for a chunk,
and then deadlock because no more memory was available.
This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
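A minimal sketch of the reservation idea using semaphore.Weighted (the type and method names are illustrative, not the actual pool.RW API):

```go
package pool

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// RW tracks the memory in use with a weighted semaphore.
type RW struct {
	mem       *semaphore.Weighted // total memory budget in bytes
	chunkSize int64
}

// reserveChunk blocks until a whole chunk's worth of memory is
// available, so a multipart transfer never starts holding only part
// of the memory it needs - the condition that caused the deadlock.
func (rw *RW) reserveChunk(ctx context.Context) error {
	return rw.mem.Acquire(ctx, rw.chunkSize)
}

// releaseChunk returns the reserved memory to the budget.
func (rw *RW) releaseChunk() {
	rw.mem.Release(rw.chunkSize)
}
```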
All user visible Durations should be fs.Duration rather than
time.Duration. The suffix is then optional and defaults to s.
Additional suffixes d, w, M and y are supported, in addition to the
ms, s, m and h supported by time.Duration. Absolute times can also be
specified, and will be interpreted as a duration relative to now.
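For illustration, values such a flag should then accept (the flag name is just an example):

    --max-age 90          90 seconds (no suffix, defaults to s)
    --max-age 1.5h        90 minutes
    --max-age 2w          2 weeks
    --max-age 2024-01-01  the duration from that time until now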
Before this change, if RetryAfterError was called with a nil err, then
its Error method would return this when wrapped in a fmt.Errorf
statement:

    error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference)
Looking at the code, RetryAfterError will usually be called with a nil
err, so this patch makes sure it substitutes a sensible error in that
case.
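A hypothetical sketch of the guard (the real lib/pacer types differ):

```go
package pacer

import (
	"fmt"
	"time"
)

// retryAfterError carries a retry delay alongside an underlying error.
type retryAfterError struct {
	error
	retryAfter time.Duration
}

// RetryAfterError wraps err with a retry delay. If err is nil it
// substitutes a sensible default, so calling Error on the result can
// never dereference a nil pointer.
func RetryAfterError(err error, retryAfter time.Duration) error {
	if err == nil {
		err = fmt.Errorf("too many requests, retry after %v", retryAfter)
	}
	return &retryAfterError{error: err, retryAfter: retryAfter}
}
```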
This change adds a truncate_bytes mode which counts the number of bytes, as
opposed to the number of UTF-8 characters. This can be useful for ensuring that a
crypt-encoded filename will not exceed the underlying backend's length limits
(see https://forum.rclone.org/t/any-clear-file-name-length-when-using-crypt/36930 ).
This change also adds support for _keep_extension when using truncate and
truncate_bytes.
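A hypothetical invocation, assuming truncate_bytes takes a byte count like the existing truncate transform:

    rclone copy src: dst: --name-transform "all,truncate_bytes=250"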
Before this change the help for convmv was generated by running the
examples each time rclone started up. Unfortunately this involved
running the echo command which did not work on Windows.
This pre-generates the help into `transform.md` and embeds it. It can
be re-generated with `go generate` which is a better solution.
See: https://forum.rclone.org/t/invoke-of-1-70-0-complains-of-echo-not-found/51618
lib/transform adds the transform library, supporting advanced path name
transformations for converting and renaming files and directories by applying
prefixes, suffixes, and other alterations.
It also adds the --name-transform flag for use with sync, copy, and move.
Multiple transformations can be used in sequence, applied in the order they are
specified on the command line.
By default --name-transform will only apply to file names. This means
only the leaf file name will be transformed. However some of the
transforms would be better applied to the whole path or just to
directories. To choose which part of the file path is affected, tags
can be added to the --name-transform:

    file  Only transform the leaf name of files (DEFAULT)
    dir   Only transform the name of directories - these may appear anywhere in the path
    all   Transform the entire path for files and directories
Example syntax:

    --name-transform file,prefix=ABC
    --name-transform dir,prefix=DEF
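For example, applying two transforms in sequence (remotes and values invented for illustration):

    rclone copy src: dst: --name-transform "file,prefix=ABC" --name-transform "file,suffix=_old"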
This commit modernizes Go usage. This was done with:
go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...
Then files needed to be `go fmt`ed and a few comments needed to be
restored.
The modernizations include replacing

- if/else conditional assignment by a call to the built-in min or max functions added in go1.21
- sort.Slice(s, func(i, j int) bool) { return s[i] < s[j] } by a call to slices.Sort(s), added in go1.21
- interface{} by the 'any' type added in go1.18
- append([]T(nil), s...) by slices.Clone(s) or slices.Concat(s), added in go1.21
- a loop around an m[k]=v map update by a call to one of the Collect, Copy, Clone, or Insert functions from the maps package, added in go1.21
- []byte(fmt.Sprintf...) by fmt.Appendf(nil, ...), added in go1.19
- append(s[:i], s[i+1:]...) by slices.Delete(s, i, i+1), added in go1.21
- a 3-clause for i := 0; i < n; i++ {} loop by for i := range n {}, added in go1.22
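For illustration, two of these rewrites in before/after form (a made-up function, not from the rclone diff):

```go
package main

import "fmt"

// Before: hand-rolled minimum and a 3-clause loop.
func sumBefore(a, b int) int {
	n := a
	if b < a {
		n = b
	}
	total := 0
	for i := 0; i < n; i++ {
		total += i
	}
	return total
}

// After: built-in min (go1.21) and range-over-int (go1.22).
func sumAfter(a, b int) int {
	total := 0
	for i := range min(a, b) {
		total += i
	}
	return total
}

func main() {
	fmt.Println(sumBefore(5, 3), sumAfter(5, 3)) // 3 3
}
```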
In this commit we introduced support for client credentials flow:
65012beea4 lib/oauthutil: add support for OAuth client credential flow
This involved re-organising the oauth credentials.
Unfortunately a small error was made which used a fixed redirect URL
rather than the one configured for the backend.
This caused the box backend oauth flow not to work properly with
redirect_uri_mismatch errors.
These backends were using the wrong redirect URL and will likely be
affected, though it is possible the backends have workarounds.
- box
- drive
- googlecloudstorage
- googlephotos
- hidrive
- pikpak
- premiumizeme
- sharefile
- yandex
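For illustration, the shape of the fix using the standard golang.org/x/oauth2 types (rclone's internal oauthutil config differs):

```go
package backend

import "golang.org/x/oauth2"

// newOAuthConfig builds the oauth config for a backend. The fix is to
// take the backend's configured redirect URL as a parameter instead
// of hard-coding one value for every backend, which made providers
// reject the flow with redirect_uri_mismatch.
func newOAuthConfig(clientID, clientSecret, redirectURL string) *oauth2.Config {
	return &oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		RedirectURL:  redirectURL,
	}
}
```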
Before this change, bucket.Join would tidy up object keys by removing
repeated / in them. This meant we couldn't access objects with // in
them, which is valid for object keys (but not for file system paths).
This could have consequences for users who are relying on rclone to
fix improper paths for them.
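A small demonstration of why standard path joining is wrong for object keys (illustrative only):

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Join cleans its result, so repeated slashes are lost:
	fmt.Println(path.Join("bucket", "a//b")) // bucket/a/b
	// Object keys may legitimately contain "//", so a key-aware join
	// must concatenate without cleaning:
	fmt.Println("bucket" + "/" + "a//b") // bucket/a//b
}
```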
v4 is the latest version with bug fixes and enhancements. While there
are 4 breaking changes in v4, they do not affect us because we do not
use the impacted functions.
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This will only search the Windows System directory for the DLL if the
name is a base name (like "advapi32.dll"), which prevents DLL
preloading attacks.
To get access to NewLazySystemDLL, imports of syscall need to be
swapped with golang.org/x/sys/windows.
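The pattern looks like this (the DLL and procedure are just examples):

```go
//go:build windows

package winutil

import "golang.org/x/sys/windows"

// NewLazySystemDLL only looks in the Windows System directory because
// "advapi32.dll" is a base name, so a malicious DLL dropped into the
// current directory cannot be preloaded.
var (
	advapi32      = windows.NewLazySystemDLL("advapi32.dll")
	regOpenKeyExW = advapi32.NewProc("RegOpenKeyExW")
)
```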
Before this change, when setting up client credentials flow manually,
rclone would fail with this error message on first run, despite the
fact that no existing token is needed:

    empty token found - please run "rclone config reconnect remote:"
This fixes the problem by ignoring token loading problems for client
credentials flow.
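A hypothetical sketch of the fix (function and helper names invented):

```go
package oauthutil

import (
	"fmt"

	"golang.org/x/oauth2"
)

// loadSavedToken stands in for reading the token from the config file.
func loadSavedToken(name string) (*oauth2.Token, error) {
	return nil, fmt.Errorf("empty token found - please run \"rclone config reconnect %s:\"", name)
}

// getToken ignores token loading problems when using the client
// credentials flow, since no saved token is needed - a fresh one is
// fetched from the token URL on first use.
func getToken(name string, clientCredentials bool) (*oauth2.Token, error) {
	token, err := loadSavedToken(name)
	if err != nil && clientCredentials {
		return nil, nil // no saved token required for this flow
	}
	return token, err
}
```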