diff --git a/MANUAL.html b/MANUAL.html
index b30001a19..db69f2569 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -17,7 +17,7 @@
Rclone syncs your files to cloud storage
@@ -35,7 +35,7 @@
Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run
protection. It is used at the command line, in scripts or via its API.
Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".
Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, over intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server-side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.
-Virtual backends wrap local and cloud file systems to apply encryption, caching, compression chunking and joining.
+Virtual backends wrap local and cloud file systems to apply encryption, compression, chunking and joining.
Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.
Rclone is mature, open source software originally inspired by rsync and written in Go. The friendly support community is familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos include rclone. For the latest version, downloading from rclone.org is recommended.
Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API.
@@ -116,11 +116,13 @@
rsync.net
Scaleway
Seafile
+SeaweedFS
SFTP
StackPath
SugarSync
Tardigrade
Tencent Cloud Object Storage (COS)
+Uptobox
Wasabi
WebDAV
Yandex Disk
@@ -141,6 +143,7 @@
Download the relevant binary.
Extract the rclone
or rclone.exe
binary from the archive
Run rclone config
to setup. See rclone config docs for more details.
+Optionally configure automatic execution.
See below for some expanded Linux / macOS instructions.
See the Usage section of the docs for how to use rclone, or run rclone -h
.
@@ -257,6 +260,38 @@ go build
- hosts: rclone-hosts
roles:
- rclone
+Autostart
+After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform periodic operations, such as a regular sync, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose service-like features, such as remote control, GUI, serve or mount, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives for achieving this on different operating systems.
+NOTE: Before setting up autorun, it is highly recommended that you test your command manually from a Command Prompt.
+Autostart on Windows
+The most relevant alternatives for autostart on Windows are:
+- Run at user log on using the Startup folder
+- Run at user log on, at system startup or on a schedule using Task Scheduler
+- Run at system startup using a Windows service
+Running in background
+Rclone is a console application, so if not started from an existing Command Prompt, e.g. when starting rclone.exe from a shortcut, it will open a Command Prompt window. When configuring rclone to run from Task Scheduler or as a Windows service you can set it to run hidden in the background. From rclone version 1.54 you can also make it run hidden from anywhere by adding the option --no-console
(it may still flash briefly when the program starts). Since rclone normally writes information and any error messages to the console, you must redirect this to a file to be able to see it. Rclone has a built-in option --log-file
for that.
+Example command to run a sync in background:
+c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt
+User account
+As mentioned in the mount documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated to Administrator. By running the mount command as the built-in SYSTEM
user account, it will create drives accessible to everyone on the system. Both a scheduled task and a Windows service can be used to achieve this.
+NOTE: Remember that when rclone runs as the SYSTEM
user, the user profile that it sees will not be yours. This means that if you normally run rclone with the configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitly tell rclone where to find it with the --config
option, or else it will look in the system user's profile path (C:\Windows\System32\config\systemprofile
). To test your command manually from a Command Prompt, you can run it with the PsExec utility from Microsoft's Sysinternals suite, which takes the option -s
to execute commands as the SYSTEM
user.
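As a sketch, testing under the SYSTEM account with PsExec could look like the following (it assumes psexec.exe is on PATH, and the rclone paths and remote name are the same examples used above, not requirements):

```
rem Start an interactive Command Prompt running as the SYSTEM user
psexec -s -i cmd.exe

rem In the new prompt, run the command with an explicit config path
c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf
```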
+Start from Startup folder
+To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you store this shortcut in the special "Startup" start-menu folder, Windows will automatically run it at login. To open this folder in Windows Explorer, enter path %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup
, or C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp
if you want the command to start for every user that logs in.
+This is the easiest approach to autostarting rclone, but it offers no functionality to run it as a different user, or to set conditions or actions on certain events. Setting up a scheduled task as described below will often give you better results.
+Start from Task Scheduler
+Task Scheduler is an administrative tool built into Windows, and it can be used to configure rclone to be started automatically in a highly configurable way, e.g. periodically on a schedule, on user log on, or at system startup. It can be configured to run as the current user, or, for a mount command that needs to be available to all users, it can run as the SYSTEM
user. For technical information, see https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page.
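As an example, a task that runs the sync command shown earlier at every log on of the current user could be created from the command line with schtasks (the paths and remote name are the same examples as above):

```
schtasks /Create /TN "Rclone Sync" /SC ONLOGON /TR "c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt"
```

Adding /RU SYSTEM runs the task as the SYSTEM user instead, e.g. for a mount that should be available to all users.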
+Run as service
+For running rclone at system startup, you can create a Windows service that executes your rclone command, as an alternative to a scheduled task configured to run at startup.
+Mount command built-in service integration
+For mount commands, Rclone has a built-in Windows service integration via the third party WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to execute the built-in PowerShell command New-Service
(requires administrative privileges).
+Example of a PowerShell command that creates a Windows service for mounting some remote:/files
as drive letter X:
, for all users (service will be running as the local system account):
+New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
+The WinFsp service infrastructure supports incorporating services for file system implementations, such as rclone, into its own launcher service, as a kind of "child services". This has the additional advantage that it also implements a network provider that integrates into Windows standard methods for managing network drives. This is currently not officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described here.
+Third party service integration
+To run any rclone command as a Windows service, the excellent third party utility NSSM, the "Non-Sucking Service Manager", can be used. It includes some advanced features, such as adjusting process priority, defining process environment variables, redirecting anything written to stdout to a file, and customized responses to different exit codes, with a GUI to configure everything from (although it can also be used from the command line).
+There are also several other alternatives. To mention one more, WinSW, "Windows Service Wrapper", is worth checking out. It requires .NET Framework, but this is preinstalled on newer versions of Windows, and it also provides alternative standalone distributions which include the necessary runtime (.NET 5). WinSW is a command-line only utility, where you have to manually create an XML file with the service configuration. This may be a drawback for some, but it can also be an advantage, as it is easy to back up and re-use the configuration settings, without having to go through manual steps in a GUI. One thing to note is that by default it does not restart the service on error; you have to explicitly enable this in the configuration file (via the "onfailure" parameter).
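A minimal WinSW configuration sketch for the mount command used earlier might look like the following (the file name, paths and remote name are examples; see the WinSW documentation for the full schema):

```xml
<!-- rclone-mount.xml, placed next to the renamed WinSW executable -->
<service>
  <id>rclone-mount</id>
  <name>Rclone Mount</name>
  <description>Mount remote:/files as drive X:</description>
  <executable>c:\rclone\rclone.exe</executable>
  <arguments>mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt</arguments>
  <onfailure action="restart"/>
</service>
```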
+Autostart on Linux
+Start as a service
+To always run rclone in the background, relevant for mount commands etc., you can use systemd to set up rclone as a system or user service. Running as a system service ensures that it is run at startup even if the user it is running as has no active session. Running rclone as a user service ensures that it only starts after the configured user has logged into the system.
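A sketch of a user service unit for a mount (the unit name, paths and remote name are examples, not rclone defaults):

```
# ~/.config/systemd/user/rclone-mount.service
[Unit]
Description=Rclone mount of remote:/files

[Service]
ExecStart=/usr/bin/rclone mount remote:/files %h/mnt/files
ExecStop=/bin/fusermount -u %h/mnt/files
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now rclone-mount; for a system service, place the unit in /etc/systemd/system and use systemctl without --user.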
+Run periodically from cron
+To run a periodic command, such as a copy/sync, you can set up a cron job.
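For example, a crontab entry that syncs every hour at five past (the paths and remote name are illustrative):

```
# min hour dom mon dow  command
5 * * * * /usr/bin/rclone sync /home/user/files remote:/files --log-file /home/user/logs/rclone-sync.log
```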
First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config
entry for how to find the config file and choose its location.)
The easiest way to make the config is to run rclone with the config option:
@@ -269,7 +304,6 @@ go build
Amazon S3
Backblaze B2
Box
-Cache
Chunker - transparently splits large files for other remotes
Citrix ShareFile
Compress
@@ -302,6 +336,7 @@ go build
SugarSync
Tardigrade
Union
+Uptobox
WebDAV
Yandex Disk
Zoho WorkDrive
@@ -334,12 +369,12 @@ rclone sync -i /local/path remote:path # syncs /local/path to the remote<
rclone config delete - Delete an existing remote name
.
rclone config disconnect - Disconnects user from remote
rclone config dump - Dump the config file as JSON.
-rclone config edit - Enter an interactive configuration session.
rclone config file - Show path of configuration file in use.
rclone config password - Update password in an existing remote.
rclone config providers - List in JSON format all the providers and options.
rclone config reconnect - Re-authenticates user with remote.
rclone config show - Print (decrypted) config file, or the config for a single remote.
+rclone config touch - Ensure configuration file exists.
rclone config update - Update options in an existing remote.
rclone config userinfo - Prints info about logged in user of remote.
@@ -421,12 +456,12 @@ destpath/sourcepath/two.txt
Remove the files in path. Unlike purge
it obeys include/exclude filters so can be used to selectively delete files.
rclone delete
only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use the purge
command.
If you supply the --rmdirs
flag, it will remove all empty directories along with it. You can also use the separate command rmdir
or rmdirs
to delete empty directories only.
-For example, to delete all files bigger than 100MBytes, you may first want to check what would be deleted (use either):
+For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
Then proceed with the actual delete:
rclone --min-size 100M delete remote:path
-That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.
+That reads "delete everything with a minimum size of 100 MiB", hence delete all files bigger than 100 MiB.
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive
/-i
flag.
rclone delete remote:path [flags]
Options
@@ -479,6 +514,7 @@ rclone --dry-run --min-size 100M delete remote:path
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.
If you supply the --size-only
flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
If you supply the --download
flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
+If you supply the --checkfile HASH
flag with a valid hash name, the source:path
must point to a text file in the SUM format.
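The SUM format is the one produced by the coreutils hashing tools: one line per file containing the hex digest, whitespace, then the file name. A local sketch using md5sum (the rclone invocation at the end uses an example remote name):

```shell
# Build a SUM file for a test file
printf 'hello\n' > data.txt
md5sum data.txt > MD5SUMS
cat MD5SUMS    # b1946ac92492d2347c6235b4d2611184  data.txt

# Coreutils verifies the same format locally
md5sum -c MD5SUMS

# rclone can check a remote path against such a file:
# rclone check --checkfile MD5 MD5SUMS remote:path
```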
If you supply the --one-way
flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.
The --differ
, --missing-on-dst
, --missing-on-src
, --match
and --error
flags write paths, one per line, to the file name (or stdout if it is -
) supplied. What they write is described in the help below. For example --differ
will write all paths which are present on both the source and destination but different.
The --combined
flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
@@ -491,7 +527,8 @@ rclone --dry-run --min-size 100M delete remote:path
rclone check source:path dest:path [flags]
Options
- --combined string Make a combined report of changes to this file
+ -C, --checkfile string Treat source:path as a SUM file with hashes of given type
+ --combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by downloading rather than with hash.
--error string Report all files with errors (hashing or reading) to this file
@@ -611,6 +648,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone md5sum remote:path [flags]
Options
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for md5sum
--output-file string Output hashsums to a file rather than the terminal
@@ -627,6 +665,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone sha1sum remote:path [flags]
Options
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for sha1sum
--output-file string Output hashsums to a file rather than the terminal
@@ -649,10 +688,12 @@ rclone --dry-run --min-size 100M delete remote:path
rclone version
Show the version number.
Synopsis
-Show the rclone version number, the go version, the build target OS and architecture, build tags and the type of executable (static or dynamic).
+Show the rclone version number, the go version, the build target OS and architecture, the runtime OS and kernel version and bitness, build tags and the type of executable (static or dynamic).
For example:
$ rclone version
-rclone v1.54
+rclone v1.55.0
+- os/version: ubuntu 18.04 (64 bit)
+- os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16
@@ -774,8 +815,8 @@ two-3.txt: renamed from: two.txt
rclone about
Get quota information from the remote.
Synopsis
-rclone about
prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.
-E.g. Typical output fromrclone about remote:
is:
+rclone about
prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.
+E.g. Typical output from rclone about remote:
is:
Total: 17G
Used: 7.444G
Free: 1.315G
@@ -797,7 +838,7 @@ Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
-A --json
flag generates conveniently computer readable output, e.g.
+A --json
flag generates conveniently computer readable output, e.g.
{
"total": 18253611008,
"used": 7993453766,
@@ -879,168 +920,311 @@ rclone backend help <backendname>
- rclone - Show help for rclone commands, flags and backends.
-rclone config create
-Create a new remote with name, type and options.
+rclone checksum
+Checks the files in the source against a SUM file.
Synopsis
-Create a new remote of name
with type
and options. The options should be passed in pairs of key
value
.
-For example to make a swift remote of name myremote using auto config you would do:
-rclone config create myremote swift env_auth true
-Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken.
-If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
-NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.
-So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:
-rclone config create mydrive drive config_is_local false
-rclone config create `name` `type` [`key` `value`]* [flags]
+Checks that hashsums of source files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.
+If you supply the --download
flag, it will download the data from remote and calculate the contents hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
+If you supply the --one-way
flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.
+The --differ
, --missing-on-dst
, --missing-on-src
, --match
and --error
flags write paths, one per line, to the file name (or stdout if it is -
) supplied. What they write is described in the help below. For example --differ
will write all paths which are present on both the source and destination but different.
+The --combined
flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
+
+= path
means path was found in source and destination and was identical
+- path
means path was missing on the source, so only in the destination
++ path
means path was missing on the destination, so only in the source
+* path
means path was present in source and destination but different.
+! path
means there was an error reading or hashing the source or dest.
+
+rclone checksum <hash> sumfile src:path [flags]
Options
- -h, --help help for create
- --no-obscure Force any passwords not to be obscured.
- --obscure Force any passwords to be obscured.
+ --combined string Make a combined report of changes to this file
+ --differ string Report all non-matching files to this file
+ --download Check by hashing the contents.
+ --error string Report all files with errors (hashing or reading) to this file
+ -h, --help help for checksum
+ --match string Report all matching files to this file
+ --missing-on-dst string Report all files missing from the destination to this file
+ --missing-on-src string Report all files missing from the source to this file
+ --one-way Check one way only, source files must exist on remote
See the global flags page for global options not listed here.
SEE ALSO
+- rclone - Show help for rclone commands, flags and backends.
+
+rclone config create
+Create a new remote with name, type and options.
+Synopsis
+Create a new remote of name
with type
and options. The options should be passed in pairs of key
value
or as key=value
.
+For example to make a swift remote of name myremote using auto config you would do:
+rclone config create myremote swift env_auth true
+rclone config create myremote swift env_auth=true
+So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:
+rclone config create mydrive drive config_is_local=false
+Note that if the config process would normally ask a question the default is taken (unless --non-interactive
is used). Each time that happens rclone will print (or log at DEBUG level) a message saying how to affect the value taken.
+If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
+NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the --obscure
flag, or if you are 100% certain you are already passing obscured passwords then use --no-obscure
. You can also set obscured passwords using the rclone config password
command.
+The flag --non-interactive
is for use by applications that wish to configure rclone themselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.
+This will look something like (some irrelevant detail removed):
+{
+ "State": "*oauth-islocal,teamdrive,,",
+ "Option": {
+ "Name": "config_is_local",
+ "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
+ "Default": true,
+ "Examples": [
+ {
+ "Value": "true",
+ "Help": "Yes"
+ },
+ {
+ "Value": "false",
+ "Help": "No"
+ }
+ ],
+ "Required": false,
+ "IsPassword": false,
+ "Type": "bool",
+ "Exclusive": true,
+ },
+ "Error": "",
+}
+The format of Option
is the same as returned by rclone config providers
. The question should be asked to the user and returned to rclone as the --result
option along with the --state
parameter.
+The keys of Option
are used as follows:
+
+Name
- name of variable - show to user
+Help
- help text. Hard wrapped at 80 chars. Any URLs should be clicky.
+Default
- default value - return this if the user just wants the default.
+Examples
- the user should be able to choose one of these
+Required
- the value should be non-empty
+IsPassword
- the value is a password and should be edited as such
+Type
- type of value, eg bool
, string
, int
and others
+Exclusive
+- if set, no free-form entry is allowed, only the Examples
+- Irrelevant keys
Provider
, ShortOpt
, Hide
, NoPrefix
, Advanced
+
+If Error
is set then it should be shown to the user at the same time as the question.
+rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+Note that when using --continue
all passwords should be passed in the clear (not obscured). Any default config values should be passed in with each invocation of --continue
.
+At the end of the non interactive process, rclone will return a result with State
as empty string.
+If --all
is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.
+Note that bin/config.py
in the rclone source implements this protocol as a readable demonstration.
+rclone config create `name` `type` [`key` `value`]* [flags]
+Options
+ --all Ask the full set of config questions.
+ --continue Continue the configuration process with an answer.
+ -h, --help help for create
+ --no-obscure Force any passwords not to be obscured.
+ --non-interactive Don't interact with user and return questions.
+ --obscure Force any passwords to be obscured.
+ --result string Result - use with --continue.
+ --state string State - use with --continue.
+See the global flags page for global options not listed here.
+SEE ALSO
+
rclone config delete
Delete an existing remote name
.
rclone config delete `name` [flags]
-Options
+Options
-h, --help help for delete
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config disconnect
Disconnects user from remote
-Synopsis
+Synopsis
This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
rclone config disconnect remote: [flags]
-Options
+Options
-h, --help help for disconnect
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config dump
Dump the config file as JSON.
rclone config dump [flags]
-Options
+Options
-h, --help help for dump
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config edit
Enter an interactive configuration session.
-Synopsis
+Synopsis
Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config edit [flags]
-Options
+Options
-h, --help help for edit
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config file
Show path of configuration file in use.
rclone config file [flags]
-Options
+Options
-h, --help help for file
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config password
Update password in an existing remote.
-Synopsis
-Update an existing remote's password. The password should be passed in pairs of key
value
.
+Synopsis
+Update an existing remote's password. The password should be passed in pairs of key
password
or as key=password
. The password
should be passed in the clear (unobscured).
For example to set password of a remote of name myremote you would do:
-rclone config password myremote fieldname mypassword
+rclone config password myremote fieldname mypassword
+rclone config password myremote fieldname=mypassword
This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.
rclone config password `name` [`key` `value`]+ [flags]
-Options
+Options
-h, --help help for password
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config providers
List in JSON format all the providers and options.
rclone config providers [flags]
-Options
+Options
-h, --help help for providers
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config reconnect
Re-authenticates user with remote.
-Synopsis
+Synopsis
This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
rclone config reconnect remote: [flags]
-Options
+Options
-h, --help help for reconnect
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone config show
Print (decrypted) config file, or the config for a single remote.
rclone config show [<remote>] [flags]
-Options
- -h, --help help for show
-See the global flags page for global options not listed here.
-SEE ALSO
-
-rclone config update
-Update options in an existing remote.
-Synopsis
-Update an existing remote's options. The options should be passed in in pairs of key
value
.
-For example to update the env_auth field of a remote of name myremote you would do:
-rclone config update myremote swift env_auth true
-If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
-NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.
-If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus:
-rclone config update myremote swift env_auth true config_refresh_token false
-rclone config update `name` [`key` `value`]+ [flags]
Options
- -h, --help help for update
- --no-obscure Force any passwords not to be obscured.
- --obscure Force any passwords to be obscured.
+ -h, --help help for show
See the global flags page for global options not listed here.
SEE ALSO
-rclone config userinfo
-Prints info about logged in user of remote.
-Synopsis
-This prints the details of the person logged in to the cloud storage system.
-rclone config userinfo remote: [flags]
+rclone config touch
+Ensure configuration file exists.
+rclone config touch [flags]
Options
- -h, --help help for userinfo
- --json Format output as JSON
+ -h, --help help for touch
See the global flags page for global options not listed here.
SEE ALSO
+rclone config update
+Update options in an existing remote.
+Synopsis
+Update an existing remote's options. The options should be passed in pairs of key
value
or as key=value
.
+For example to update the env_auth field of a remote of name myremote you would do:
+rclone config update myremote env_auth true
+rclone config update myremote env_auth=true
+If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus:
+rclone config update myremote env_auth=true config_refresh_token=false
+Note that if the config process would normally ask a question the default is taken (unless --non-interactive
is used). Each time that happens rclone will print (or log at DEBUG level) a message saying how to affect the value taken.
+If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
+NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the --obscure
flag, or if you are 100% certain you are already passing obscured passwords then use --no-obscure
. You can also set obscured passwords using the rclone config password
command.
+The flag --non-interactive
is for use by applications that wish to configure rclone themselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.
+This will look something like (some irrelevant detail removed):
+{
+ "State": "*oauth-islocal,teamdrive,,",
+ "Option": {
+ "Name": "config_is_local",
+ "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
+ "Default": true,
+ "Examples": [
+ {
+ "Value": "true",
+ "Help": "Yes"
+ },
+ {
+ "Value": "false",
+ "Help": "No"
+ }
+ ],
+ "Required": false,
+ "IsPassword": false,
+ "Type": "bool",
+    "Exclusive": true
+  },
+  "Error": ""
+}
+The format of Option is the same as returned by rclone config providers. The question should be asked to the user and returned to rclone as the --result option along with the --state parameter.
+The keys of Option are used as follows:
+
+Name - name of variable - show to user
+Help - help text. Hard wrapped at 80 chars. Any URLs should be clicky.
+Default - default value - return this if the user just wants the default.
+Examples - the user should be able to choose one of these
+Required - the value should be non-empty
+IsPassword - the value is a password and should be edited as such
+Type - type of value, e.g. bool, string, int and others
+Exclusive - if set, no free-form entry is allowed, only the Examples
+Irrelevant keys: Provider, ShortOpt, Hide, NoPrefix, Advanced
+
+If Error is set then it should be shown to the user at the same time as the question.
+rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+Note that when using --continue all passwords should be passed in the clear (not obscured). Any default config values should be passed in with each invocation of --continue.
+At the end of the non-interactive process, rclone will return a result with State as an empty string.
+If --all is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.
+Note that bin/config.py in the rclone source implements this protocol as a readable demonstration.
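A consumer of this protocol has to turn each returned question blob into a --state/--result pair for the next invocation. The helper below is a hypothetical sketch of that single step, working only from the State, Option and Error keys described above (see bin/config.py for the full loop):

```python
import json

def answer_question(blob: str, answers: dict):
    """Given the JSON blob rclone returns with --non-interactive, produce
    the (state, result) pair to pass back via --state and --result.
    `answers` maps option Names to caller-supplied values; anything not
    present falls back to the option's Default. Hypothetical helper."""
    q = json.loads(blob)
    if q["Error"]:
        raise RuntimeError(q["Error"])
    opt = q["Option"]
    value = answers.get(opt["Name"], opt["Default"])
    # Booleans go back as "true"/"false"; everything else as a string.
    result = json.dumps(value) if isinstance(value, bool) else str(value)
    return q["State"], result

blob = '{"State": "*oauth-islocal,teamdrive,,", "Option": {"Name": "config_is_local", "Default": true}, "Error": ""}'
state, result = answer_question(blob, {"config_is_local": False})
# Pass back with: rclone config update name --continue --state "<state>" --result "<result>"
```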
+rclone config update `name` [`key` `value`]+ [flags]
+Options
+ --all Ask the full set of config questions.
+ --continue Continue the configuration process with an answer.
+ -h, --help help for update
+ --no-obscure Force any passwords not to be obscured.
+ --non-interactive Don't interact with user and return questions.
+ --obscure Force any passwords to be obscured.
+ --result string Result - use with --continue.
+ --state string State - use with --continue.
+See the global flags page for global options not listed here.
+SEE ALSO
+
+rclone config userinfo
+Prints info about logged in user of remote.
+Synopsis
+This prints the details of the person logged in to the cloud storage system.
+rclone config userinfo remote: [flags]
+Options
+ -h, --help help for userinfo
+ --json Format output as JSON
+See the global flags page for global options not listed here.
+SEE ALSO
+
rclone copyto
Copy files from source to dest, skipping already copied.
-Synopsis
+Synopsis
If source:path is a file or directory then it copies it to a file or directory named dest:path.
This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
So
@@ -1055,35 +1239,35 @@ if src is directory
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
Note: Use the -P/--progress flag to view real-time transfer statistics.
rclone copyto source:path dest:path [flags]
-Options
+Options
-h, --help help for copyto
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone copyurl
Copy url content to dest.
-Synopsis
+Synopsis
Download a URL's content and copy it to the destination without saving it in temporary storage.
-Setting --auto-filename
will cause the file name to be retrieved from the from URL (after any redirections) and used in the destination path. With --print-filename
in addition, the resuling file name will be printed.
+Setting --auto-filename
will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. With --print-filename
in addition, the resulting file name will be printed.
Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name.
Setting --stdout or making the output file name - will cause the output to be written to standard output.
rclone copyurl https://example.com dest:path [flags]
-Options
+Options
-a, --auto-filename Get the file name from the URL and use it for destination file path
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone cryptcheck
Cryptcheck checks the integrity of a crypted remote.
-Synopsis
+Synopsis
rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
@@ -1103,7 +1287,7 @@ if src is directory
! path means there was an error reading or hashing the source or dest.
rclone cryptcheck remote:path cryptedremote:path [flags]
-Options
+Options
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--error string Report all files with errors (hashing or reading) to this file
@@ -1113,13 +1297,13 @@ if src is directory
--missing-on-src string Report all files missing from the source to this file
--one-way Check one way only, source files must exist on remote
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone cryptdecode
Cryptdecode returns unencrypted file names.
-Synopsis
+Synopsis
rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
If you supply the --reverse flag, it will return encrypted file names.
Use it like this:
@@ -1128,34 +1312,34 @@ if src is directory
rclone cryptdecode --reverse encryptedremote: filename1 filename2
Another way to accomplish this is by using the rclone backend encode (or decode) command. See the documentation on the crypt overlay for more info.
rclone cryptdecode encryptedremote: encryptedfilename [flags]
-Options
+Options
-h, --help help for cryptdecode
--reverse Reverse cryptdecode, encrypts filenames
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone deletefile
Remove a single file from remote.
-Synopsis
+Synopsis
Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
-Options
+Options
-h, --help help for deletefile
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone genautocomplete
Output completion script for a given shell.
-Synopsis
+Synopsis
Generates a shell completion script for rclone. Run with --help to list the supported shells.
-Options
+Options
-h, --help help for genautocomplete
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
- rclone genautocomplete bash - Output bash completion script for rclone.
@@ -1164,7 +1348,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone genautocomplete bash
Output bash completion script for rclone.
-Synopsis
+Synopsis
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete bash
@@ -1173,16 +1357,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete bash [output_file] [flags]
-Options
+Options
-h, --help help for bash
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone genautocomplete fish
Output fish completion script for rclone.
-Synopsis
+Synopsis
Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete fish
@@ -1191,16 +1375,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete fish [output_file] [flags]
-Options
+Options
-h, --help help for fish
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone genautocomplete zsh
Output zsh completion script for rclone.
-Synopsis
+Synopsis
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete zsh
@@ -1209,53 +1393,58 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete zsh [output_file] [flags]
-Options
+Options
-h, --help help for zsh
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone gendocs
Output markdown docs for rclone to the directory supplied.
-Synopsis
+Synopsis
This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
-Options
+Options
-h, --help help for gendocs
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone hashsum
Produces a hashsum file for all the objects in the path.
-Synopsis
+Synopsis
Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.
Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum
Supported hashes are:
- * MD5
- * SHA-1
- * DropboxHash
- * QuickXorHash
+ * md5
+ * sha1
+ * whirlpool
+ * crc32
+ * dropbox
+ * mailru
+ * quickxor
Then
$ rclone hashsum MD5 remote:path
+Note that hash names are case insensitive.
rclone hashsum <hash> remote:path [flags]
-Options
+Options
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for hashsum
--output-file string Output hashsums to a file rather than the terminal
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone link
Generate public link to file/folder.
-Synopsis
+Synopsis
rclone link will create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
@@ -1265,32 +1454,32 @@ rclone link --expire 1d remote:path/to/file
Use the --unlink flag to remove existing public links to the file or folder. Note that not all backends support the "--unlink" flag - those that don't will just ignore it.
If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
rclone link remote:path [flags]
-Options
- --expire Duration The amount of time that the link will be valid (default 100y)
+Options
+ --expire Duration The amount of time that the link will be valid (default off)
-h, --help help for link
--unlink Remove existing public link to file/folder
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone listremotes
List all the remotes in the config file.
-Synopsis
+Synopsis
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes [flags]
-Options
+Options
-h, --help help for listremotes
--long Show the type as well as names.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone lsf
List directories and objects in remote:path formatted for parsing.
-Synopsis
+Synopsis
List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
Eg
$ rclone lsf swift:bucket
@@ -1360,25 +1549,25 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
rclone lsf remote:path [flags]
-Options
+Options
--absolute Put a leading / in front of path names.
--csv Output in CSV format.
-d, --dir-slash Append a slash to directory names. (default true)
--dirs-only Only list directories.
--files-only Only list files.
-F, --format string Output format - see help for details (default "p")
- --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
-h, --help help for lsf
-R, --recursive Recurse into the listing.
-s, --separator string Separator for the items in the format. (default ";")
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone lsjson
List directories and objects in the path in JSON format.
-Synopsis
+Synopsis
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
{
  "Hashes" : {
    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
    "MD5" : "b1946ac92492d2347c6235b4d2611184",
    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "ID": "y2djkhiujf83u33",
  "OrigID": "UYOJVTUW00Q1RzTDA",
  "IsBucket" : false,
  "IsDir" : false,
  "MimeType" : "application/octet-stream",
  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
  "Name" : "file.txt",
  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
  "Path" : "full/path/goes/here/file.txt",
  "Size" : 6,
  "Tier" : "hot"
}
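Because the output is plain JSON, it is straightforward to post-process in a script. For example, a small helper (hypothetical, not part of rclone) that totals the file sizes in lsjson output could look like this:

```python
import json

def total_file_size(lsjson_output: str) -> int:
    # Sum Size over file entries, skipping directories (whose Size may
    # be reported as -1 on some remotes).
    items = json.loads(lsjson_output)
    return sum(i["Size"] for i in items if not i["IsDir"] and i["Size"] >= 0)

sample = '[{"Name": "file.txt", "IsDir": false, "Size": 6}, {"Name": "dir", "IsDir": true, "Size": -1}]'
print(total_file_size(sample))  # 6
```

Typical usage would be to pipe `rclone lsjson remote:path` into such a script.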
@@ -1406,7 +1595,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
rclone lsjson remote:path [flags]
-Options
+Options
--dirs-only Show only directories in the listing.
-M, --encrypted Show the encrypted names.
--files-only Show only files in the listing.
@@ -1418,13 +1607,13 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
--original Show the ID of the underlying Object.
-R, --recursive Recurse into the listing.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone mount
Mount the remote as file system on a mountpoint.
-Synopsis
+Synopsis
rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config. Check it works with rclone ls etc.
On Linux and OSX, you can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default, use the --daemon flag to specify background mode. You can only run mount in foreground mode on Windows.
@@ -1442,7 +1631,7 @@ fusermount -u /path/to/local/mount
# OS X
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.
-The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1PB of free space is assumed. If the remote does not support the about feature at all, then 1PB is set as both the total and the free size.
+The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.
Note: As of rclone 1.52.2, rclone mount now requires Go version 1.13 or newer on some platforms depending on the underlying FUSE library in use.
Installing on Windows
To run rclone mount on Windows, you will need to download and install WinFsp.
@@ -1475,10 +1664,13 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
The permissions on each entry will be set according to the options --dir-perms and --file-perms, which take a value in traditional numeric notation, where the default corresponds to --file-perms 0666 --dir-perms 0777.
Note that the mapping of permissions is not always trivial, and the result you see in Windows Explorer may not be exactly like you expected. For example, when setting a value that includes write access, this will be mapped to individual permissions "write attributes", "write data" and "append data", but not "write extended attributes". Windows will then show this as basic permission "Special" instead of "Write", because "Write" includes the "write extended attributes" permission.
If you set POSIX permissions for only allowing access to the owner, using --file-perms 0600 --dir-perms 0700, the user group and the built-in "Everyone" group will still be given some special permissions, such as "read attributes" and "read permissions", in Windows. This is done for compatibility reasons, e.g. to allow users without additional permissions to be able to read basic metadata about files like in UNIX. One case that may arise is that other programs (incorrectly) interpret this as the file being accessible by everyone. For example an SSH client may warn about "unprotected private key file".
-WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity", that allows the complete specification of file security descriptors using SDDL. With this you can work around issues such as the mentioned "unprotected private key file" by specifying -o FileSecurity="D:P(A;;FA;;;OW)"
, for file all access (FA) to the owner (OW).
+WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity", that allows the complete specification of file security descriptors using SDDL. With this you can work around issues such as the mentioned "unprotected private key file" by specifying -o FileSecurity="D:P(A;;FA;;;OW)", for file all access (FA) to the owner (OW).
Windows caveats
-Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.
-The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system or alternatively using the nssm service manager.
+Drives created as Administrator are not visible to other accounts, not even an account that was elevated to Administrator with the User Account Control (UAC) feature. A result of this is that if you mount to a drive letter from a Command Prompt run as Administrator, and then try to access the same drive from Windows Explorer (which does not run as Administrator), you will not be able to see the mounted drive.
+If you don't need to access the drive from applications running with administrative privileges, the easiest way around this is to always create the mount from a non-elevated command prompt.
+To make mapped drives available to the user account that created them regardless if elevated or not, there is a special Windows setting called linked connections that can be enabled.
+It is also possible to make a drive mount available to everyone on the system, by running the process creating it as the built-in SYSTEM account. There are several ways to do this: one is to use the command-line utility PsExec, from Microsoft's Sysinternals suite, which has the option -s to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher infrastructure. Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the --config option. Read more in the install documentation.
+Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.
Limitations
Without the use of --vfs-cache-mode this can only write files sequentially; it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File Caching section for more info.
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
@@ -1508,7 +1700,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
VFS Directory Cache
Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes.
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -1597,7 +1789,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
rclone mount remote:path /path/to/mountpoint [flags]
-Options
+Options
--allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
--allow-other Allow access to other users. Not supported on Windows.
--allow-root Allow access to root user. Not supported on Windows.
@@ -1613,7 +1805,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for mount
- --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128k)
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
@@ -1624,14 +1816,14 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows.
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -1640,13 +1832,13 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
--volname string Set the volume name. Supported on Windows and OSX only.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone moveto
Move file or directory from source to dest.
-Synopsis
+Synopsis
If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
@@ -1662,16 +1854,16 @@ if src is directory
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
Note: Use the -P/--progress flag to view real-time transfer statistics.
rclone moveto source:path dest:path [flags]
-Options
+Options
-h, --help help for moveto
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone ncdu
Explore a remote with a text based user interface.
-Synopsis
+Synopsis
This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
@@ -1691,16 +1883,16 @@ if src is directory
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.
Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.
rclone ncdu remote:path [flags]
-Options
+Options
-h, --help help for ncdu
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone obscure
Obscure password for use in the rclone config file.
-Synopsis
+Synopsis
In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.
@@ -1708,16 +1900,16 @@ if src is directory
If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.
If you want to encrypt the config file then please use config file encryption - see rclone config for more info.
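The reversibility described above can be illustrated with a toy round-trip in Python. This is deliberately NOT rclone's algorithm (rclone additionally encrypts with a key it knows before base64-encoding, which is exactly why rclone can always decrypt the result); it only demonstrates why obscuring protects against casual "eyedropping" rather than determined attackers. All names here are made up:

```python
import base64

def toy_obscure(password: str) -> str:
    # Toy illustration only: base64 alone makes a password unreadable at
    # a glance, but anyone who knows the scheme can trivially reverse it.
    return base64.urlsafe_b64encode(password.encode()).decode().rstrip("=")

def toy_reveal(obscured: str) -> str:
    # Restore the stripped base64 padding, then decode.
    padded = obscured + "=" * (-len(obscured) % 4)
    return base64.urlsafe_b64decode(padded).decode()

assert toy_reveal(toy_obscure("hunter2")) == "hunter2"
```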
rclone obscure password [flags]
-Options
+Options
-h, --help help for obscure
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone rc
Run a command against a running rclone.
-Synopsis
+Synopsis
This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".
A username and password can be passed in with --user and --pass.
Note that --rc-addr, --rc-user and --rc-pass will also be read, for --url, --user and --pass respectively.
@@ -1736,7 +1928,7 @@ if src is directory
rclone rc --loopback operations/about fs=/
Use "rclone rc" to see a list of all possible commands.
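The ":port"/"host:port" interpretation of --url above can be sketched as follows (a hypothetical helper, not rclone's implementation):

```python
def normalize_rc_url(url):
    # Sketch of the --url shorthand: ":port" means "http://localhost:port",
    # "host:port" means "http://host:port"; full URLs pass through.
    if url.startswith("http://") or url.startswith("https://"):
        return url
    if url.startswith(":"):
        return "http://localhost" + url
    return "http://" + url
```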
rclone rc commands parameter [flags]
-Options
+Options
-a, --arg stringArray Argument placed in the "arg" array.
-h, --help help for rc
--json string Input JSON - use instead of key=value args.
@@ -1747,71 +1939,74 @@ if src is directory
--url string URL to connect to rclone remote control. (default "http://localhost:5572/")
--user string Username to use to rclone remote control.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone rcat
Copies standard input to file on remote.
-Synopsis
+Synopsis
rclone rcat reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
If the remote file already exists, it will be overwritten.
rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff
. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote; please see the documentation for that remote. Generally speaking, setting this cutoff too high will decrease your performance.
+Use the --size flag to preallocate the file in advance at the remote end and actually stream it, even if the remote backend doesn't support streaming.
+--size should be the exact size of the input stream in bytes. If the size of the stream differs from the --size passed in, then the transfer will likely fail.
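One way to satisfy the exact-size requirement is to measure the input before piping it. The sketch below only assembles the command line (the paths and remote name are hypothetical; it does not execute rclone):

```python
import os

def rcat_command(local_path, remote_path):
    # Hypothetical helper: build an `rclone rcat` invocation with an
    # exact --size hint measured from the local file.
    size = os.path.getsize(local_path)  # exact size of the stream in bytes
    return ["rclone", "rcat", "--size", str(size), remote_path]
```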
Note also that the upload cannot be retried, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move
it to the destination.
rclone rcat remote:path [flags]
-Options
- -h, --help help for rcat
-See the global flags page for global options not listed here.
-SEE ALSO
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone rcd
-Run rclone listening to remote control commands only.
-Synopsis
-This runs rclone so that it only listens to remote control commands.
-This is useful if you are controlling rclone via the rc API.
-If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
-See the rc documentation for more info on the rc flags.
-rclone rcd <path to files to serve>* [flags]
-Options
- -h, --help help for rcd
-See the global flags page for global options not listed here.
-SEE ALSO
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone rmdirs
-Remove empty directories under the path.
-Synopsis
-This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root
flag.
-Use command rmdir
to delete just the empty directory given by path, not recurse.
-This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete
command will delete files but leave the directory structure (unless used with option --rmdirs
).
-To delete a path and any objects in it, use purge
command.
-rclone rmdirs remote:path [flags]
Options
- -h, --help help for rmdirs
- --leave-root Do not remove root directory if empty
+ -h, --help help for rcat
+ --size int File size hint to preallocate (default -1)
See the global flags page for global options not listed here.
SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
+rclone rcd
+Run rclone listening to remote control commands only.
+Synopsis
+This runs rclone so that it only listens to remote control commands.
+This is useful if you are controlling rclone via the rc API.
+If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
+See the rc documentation for more info on the rc flags.
+rclone rcd <path to files to serve>* [flags]
+Options
+ -h, --help help for rcd
+See the global flags page for global options not listed here.
+SEE ALSO
+
+- rclone - Show help for rclone commands, flags and backends.
+
+rclone rmdirs
+Remove empty directories under the path.
+Synopsis
+This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root
flag.
+Use command rmdir
to delete just the empty directory given by path, not recurse.
+This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete
command will delete files but leave the directory structure (unless used with option --rmdirs
).
+To delete a path and any objects in it, use purge
command.
+rclone rmdirs remote:path [flags]
+Options
+ -h, --help help for rmdirs
+ --leave-root Do not remove root directory if empty
+See the global flags page for global options not listed here.
+SEE ALSO
+
+- rclone - Show help for rclone commands, flags and backends.
+
rclone selfupdate
Update the rclone binary.
-Synopsis
+Synopsis
This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature.
If used without flags (or with implied --stable
flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta
flag, i.e. rclone selfupdate --beta
. You can check in advance what version would be installed by adding the --check
flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend you a concrete beta or stable rclone release to troubleshoot your issue or add a bleeding edge feature. The --version VER
flag, if given, will update to the concrete version instead of the latest one. If you omit micro version from VER
(for example 1.53
), the latest matching micro version will be used.
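The micro-version matching rule above can be sketched with a hypothetical helper (an illustration, not rclone's implementation):

```python
def resolve_version(requested, available):
    # Sketch of --version matching: an exact version is used as-is;
    # a version without the micro part (e.g. "1.53") picks the latest
    # matching micro release. `available` is a hypothetical release list.
    if requested in available:
        return requested
    matching = [v for v in available if v.startswith(requested + ".")]
    if not matching:
        return None
    # Compare numerically so "1.53.10" beats "1.53.9".
    return max(matching, key=lambda v: [int(p) for p in v.split(".")])
```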
-Upon successful update rclone will print a message that contains a previous version number. You will need it if you later decide to revert your update for some reason. Then you'll have to note the previous version and run the following command: rclone selfupdate [--beta] OLDVER
. If the old version contains only dots and digits (for example v1.54.0
) then it's a stable release so you won't need the --beta
flag. Beta releases have an additional information similar to v1.54.0-beta.5111.06f1c0c61
. (if you are a developer and use a locally built rclone, the version number will end with -DEV
, you will have to rebuild it as it obvisously can't be distributed).
+Upon successful update rclone will print a message that contains a previous version number. You will need it if you later decide to revert your update for some reason. Then you'll have to note the previous version and run the following command: rclone selfupdate [--beta] OLDVER
. If the old version contains only dots and digits (for example v1.54.0
) then it's a stable release so you won't need the --beta
flag. Beta releases have an additional information similar to v1.54.0-beta.5111.06f1c0c61
. (if you are a developer and use a locally built rclone, the version number will end with -DEV
, you will have to rebuild it as it obviously can't be distributed).
If you previously installed rclone via a package manager, the package may include local documentation or configure services. You may wish to update with the flag --package deb
or --package rpm
(whichever is correct for your OS) to update these too. This command with the default --package zip
will update only the rclone executable so the local manual may become inaccurate after it.
The rclone mount
command (https://rclone.org/commands/rclone_mount/) may or may not support extended FUSE options depending on the build and OS. selfupdate
will refuse to update if the capability would be discarded.
Note: Windows forbids deletion of a currently running executable so this command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate"
then you will need to update manually following the install instructions located at https://rclone.org/install/
rclone selfupdate [flags]
-Options
+Options
--beta Install beta release.
--check Check for latest release, do not download.
-h, --help help for selfupdate
@@ -1820,24 +2015,25 @@ ffmpeg - | rclone rcat remote:path/to/file
--stable Install stable release (this is the default)
--version string Install the given rclone version (default: latest)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone serve
Serve a remote over a protocol.
-Synopsis
+Synopsis
rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
rclone serve <protocol> [opts] <remote> [flags]
-Options
+Options
-h, --help help for serve
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone serve dlna
Serve remote:path over DLNA
-Synopsis
+Synopsis
rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
Server options
@@ -1860,7 +2056,7 @@ ffmpeg - | rclone rcat remote:path/to/file
VFS Directory Cache
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes.
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -1949,7 +2145,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve dlna remote:path [flags]
-Options
+Options
--addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
@@ -1971,27 +2167,27 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
-rclone serve ftp
-Serve remote:path over FTP.
-Synopsis
-rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
-Server options
-Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-Authentication
-By default this will serve files without needing a login.
-You can set a single username and password with the --user and --pass flags.
+rclone serve docker
+Serve any remote on docker's volume plugin API.
+Synopsis
+This command implements the Docker volume plugin API, allowing Docker to use rclone as a data storage mechanism for various cloud providers. Rclone provides a Docker volume plugin based on it.
+To create a Docker plugin, one must create a Unix or TCP socket that Docker will look for when the plugin is used; the plugin then listens for commands from the Docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:
+sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
+Running rclone serve docker
will create the said socket, listening for commands from Docker to create the necessary Volumes. Normally you need not give the --socket-addr
flag. The API will listen on the unix domain socket at /run/docker/plugins/rclone.sock
. In the example above rclone will create a TCP socket and a small file /etc/docker/plugins/rclone.spec
containing the socket address. We use sudo
because both paths are writeable only by the root user.
+If you later decide to change listening socket, the docker daemon must be restarted to reconnect to /run/docker/plugins/rclone.sock
or parse new /etc/docker/plugins/rclone.spec
. Until you restart, any volume-related Docker commands will time out trying to access the old socket. Running directly is supported on Linux only, not on Windows or macOS. This is not a problem with the managed plugin mode, described in detail in the full documentation.
+The command will create volume mounts under the path given by --base-dir
(by default /var/lib/docker-volumes/rclone
available only to root) and maintain the JSON formatted file docker-plugin.state
in the rclone cache directory with book-keeping records of created and mounted volumes.
+All mount and VFS options are submitted by the docker daemon via API, but you can also provide defaults on the command line as well as set path to the config file and cache directory or adjust logging verbosity.
VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
@@ -1999,7 +2195,7 @@ ffmpeg - | rclone rcat remote:path/to/file
VFS Directory Cache
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes.
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2087,6 +2283,164 @@ ffmpeg - | rclone rcat remote:path/to/file
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+rclone serve docker [flags]
+Options
+ --allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
+ --allow-other Allow access to other users. Not supported on Windows.
+ --allow-root Allow access to root user. Not supported on Windows.
+ --async-read Use asynchronous reads. Not supported on Windows. (default true)
+ --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
+ --base-dir string base directory for volumes (default "/var/lib/docker-volumes/rclone")
+ --daemon Run mount as a daemon (background mode). Not supported on Windows.
+ --daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --forget-state skip restoring previous state
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ -h, --help help for docker
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
+ --network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --no-spec do not write spec file
+ --noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
+ --noapplexattr Ignore all "com.apple.*" extended attributes. Supported on OSX only.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
+ --socket-gid int GID for unix socket (default: current process GID) (default 1000)
+ --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match.
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
+ --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --volname string Set the volume name. Supported on Windows and OSX only.
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
+See the global flags page for global options not listed here.
+SEE ALSO
+
+rclone serve ftp
+Serve remote:path over FTP.
+Synopsis
+rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
+Server options
+Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+Authentication
+By default this will serve files without needing a login.
+You can set a single username and password with the --user and --pass flags.
+VFS - Virtual File System
+This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
+Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
+The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
+VFS Directory Cache
+Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
+--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
+You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
+kill -SIGHUP $(pidof rclone)
+If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:
+rclone rc vfs/forget
+Or individual files or directories:
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+VFS File Buffering
+The --buffer-size
flag determines the amount of memory, that will be used to buffer data in advance.
+Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.
+This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.
+The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
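The bound above can be computed directly. This sketch uses a deliberately simplified SizeSuffix reader (an assumption — rclone's real parser accepts more forms than K/M/G with an optional "i"):

```python
def parse_size_suffix(s):
    # Simplified stand-in for rclone's SizeSuffix parsing: "16M" and
    # "16Mi" both mean 16 MiB here; plain numbers are bytes.
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    t = s.upper().rstrip("I")
    if t and t[-1] in units:
        return int(t[:-1]) * units[t[-1]]
    return int(t)

def max_buffer_memory(buffer_size, open_files):
    # Upper bound from the text: --buffer-size * open files.
    return parse_size_suffix(buffer_size) * open_files
```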
+VFS File Caching
+These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.
+For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.
+Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
+The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
+You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
+--vfs-cache-mode off
+In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
+This mode should support all normal file system operations.
+If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.
+--vfs-cache-mode full
+In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
+In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
+So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
+When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
+When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+
+VFS Performance
+These flags may be used to enable/disable features of the VFS for performance or other reasons.
+In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+--no-checksum Don't compare checksums on up/download.
+--no-modtime Don't read/write the modification time (can speed things up).
+--no-seek Don't allow seeking in files.
+--read-only Mount read-only.
+VFS Chunked Reading
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
+Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
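The doubling schedule can be sketched in shell arithmetic. The 64M starting size and 1G limit below are example values chosen for illustration, not the defaults:

```shell
# Sketch of the chunk-size doubling schedule: each request doubles the
# previous chunk size, capped at the limit. 64M/1G are example values.
chunk=$((64 * 1024 * 1024))      # e.g. --vfs-read-chunk-size 64M
limit=$((1024 * 1024 * 1024))    # e.g. --vfs-read-chunk-size-limit 1G
schedule=""
for _ in 1 2 3 4 5 6; do
  schedule="$schedule $((chunk / 1048576))M"
  chunk=$((chunk * 2))
  if [ "$chunk" -gt "$limit" ]; then chunk=$limit; fi
done
echo "requested chunk sizes:$schedule"
```

So with these example values the requested sizes would be 64M, 128M, 256M, 512M, then 1024M for every subsequent chunk.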
+Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
+--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers has no effect on mount).
+--transfers int Number of file transfers to run in parallel. (default 4)
+VFS Case Sensitivity
+Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
+File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
+The --vfs-case-insensitive mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
+The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
+Alternate report of used bytes
+Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
+WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
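As a hypothetical usage sketch (remote:path is a placeholder; rclone must be installed):

```shell
# Example sketch: compute Used size with the rclone size algorithm so df
# reports real usage. This scans the whole remote, so expect extra API calls.
rclone serve ftp remote:path --vfs-used-is-size
```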
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set the authorized keys option will be ignored.
@@ -2118,7 +2472,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve ftp remote:path [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
--auth-proxy string A program to use to create the backend from the auth.
--cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -2145,31 +2499,33 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone serve http
Serve the remote over HTTP.
-Synopsis
+Synopsis
rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
Server options
-Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
---template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
+SSL/TLS
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate. --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:
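For testing, a self-signed certificate and key in the expected PEM format can be generated with openssl. A minimal sketch; the CN and file names are examples:

```shell
# Generate a throwaway self-signed certificate/key pair suitable for
# rclone's --cert and --key flags. CN and filenames are example values.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=localhost" -keyout key.pem -out cert.pem
# Inspect the subject of the generated certificate.
openssl x509 -noout -subject -in cert.pem
```

The resulting files can then be passed as --cert cert.pem --key key.pem.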
@@ -2258,17 +2614,14 @@ htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
-SSL/TLS
-By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
-VFS - Virtual File System
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
-VFS Directory Cache
+VFS Directory Cache
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes.
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2276,12 +2629,12 @@ htpasswd -B htpasswd anotherUser
rclone rc vfs/forget
Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
-VFS File Buffering
+VFS File Buffering
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.
This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.
The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
-VFS File Caching
+VFS File Caching
These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.
For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
@@ -2296,7 +2649,7 @@ htpasswd -B htpasswd anotherUser
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
---vfs-cache-mode off
+--vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
@@ -2308,7 +2661,7 @@ htpasswd -B htpasswd anotherUser
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried
---vfs-cache-mode minimal
+--vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.
These operations are not possible
@@ -2317,11 +2670,11 @@ htpasswd -B htpasswd anotherUser
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried
---vfs-cache-mode writes
+--vfs-cache-mode writes
In this mode files opened for read only are still read directly from the remote; write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.
---vfs-cache-mode full
+--vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
@@ -2329,7 +2682,7 @@ htpasswd -B htpasswd anotherUser
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
-
+
These flags may be used to enable/disable features of the VFS for performance or other reasons.
In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@@ -2345,7 +2698,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
-VFS Case Sensitivity
+VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
@@ -2353,12 +2706,12 @@ htpasswd -B htpasswd anotherUser
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
-Alternate report of used bytes
+Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve http remote:path [flags]
-Options
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+Options
+ --addr string IPaddress:Port or :Port to bind server to. (default "127.0.0.1:8080")
--baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
@@ -2376,7 +2729,7 @@ htpasswd -B htpasswd anotherUser
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
- --realm string realm for authentication (default "rclone")
+ --realm string realm for authentication
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
@@ -2389,20 +2742,20 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone serve restic
Serve the remote for restic's REST API.
-Synopsis
+Synopsis
rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command line program for doing backups.
The server will log errors. Use -v to see access logs.
@@ -2542,7 +2895,7 @@ htpasswd -B htpasswd anotherUser
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
rclone serve restic remote:path [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
--append-only disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root.
@@ -2562,13 +2915,13 @@ htpasswd -B htpasswd anotherUser
--template string User Specified Template.
--user string User name for authentication.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone serve sftp
Serve the remote over SFTP.
-Synopsis
+Synopsis
rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
@@ -2578,14 +2931,16 @@ htpasswd -B htpasswd anotherUser
If you don't supply a --key then rclone will generate one and cache it for later use.
By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example.
Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.
-VFS - Virtual File System
+If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:
+restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
-VFS Directory Cache
+VFS Directory Cache
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes.
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2593,12 +2948,12 @@ htpasswd -B htpasswd anotherUser
rclone rc vfs/forget
Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
-VFS File Buffering
+VFS File Buffering
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.
This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.
The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
-VFS File Caching
+VFS File Caching
These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.
For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
@@ -2613,7 +2968,7 @@ htpasswd -B htpasswd anotherUser
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
---vfs-cache-mode off
+--vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
@@ -2625,7 +2980,7 @@ htpasswd -B htpasswd anotherUser
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried
---vfs-cache-mode minimal
+--vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.
These operations are not possible
@@ -2634,11 +2989,11 @@ htpasswd -B htpasswd anotherUser
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried
---vfs-cache-mode writes
+--vfs-cache-mode writes
In this mode files opened for read only are still read directly from the remote; write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.
---vfs-cache-mode full
+--vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
@@ -2646,7 +3001,7 @@ htpasswd -B htpasswd anotherUser
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
-
+
These flags may be used to enable/disable features of the VFS for performance or other reasons.
In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@@ -2662,7 +3017,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
-VFS Case Sensitivity
+VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
@@ -2670,7 +3025,7 @@ htpasswd -B htpasswd anotherUser
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
-Alternate report of used bytes
+Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Auth Proxy
@@ -2704,7 +3059,7 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve sftp remote:path [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022")
--auth-proxy string A program to use to create the backend from the auth.
--authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
@@ -2721,6 +3076,7 @@ htpasswd -B htpasswd anotherUser
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
@@ -2730,20 +3086,20 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name is not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone serve webdav
Serve remote:path over webdav.
-Synopsis
+Synopsis
rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.
Webdav options
--etag-hash
@@ -2848,14 +3204,14 @@ htpasswd -B htpasswd anotherUser
SSL/TLS
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
-VFS - Virtual File System
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
-VFS Directory Cache
+VFS Directory Cache
Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes.
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2863,12 +3219,12 @@ htpasswd -B htpasswd anotherUser
rclone rc vfs/forget
Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
-VFS File Buffering
+VFS File Buffering
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.
This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.
The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
-VFS File Caching
+VFS File Caching
These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.
For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
@@ -2883,7 +3239,7 @@ htpasswd -B htpasswd anotherUser
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
---vfs-cache-mode off
+--vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
@@ -2895,7 +3251,7 @@ htpasswd -B htpasswd anotherUser
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried
---vfs-cache-mode minimal
+--vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimum disk space.
These operations are not possible
@@ -2904,11 +3260,11 @@ htpasswd -B htpasswd anotherUser
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried
---vfs-cache-mode writes
+--vfs-cache-mode writes
In this mode files opened for read only are still read directly from the remote, while write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.
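The retry schedule described above can be sketched as doubling delays capped at one minute (illustrative only - the exact intervals rclone uses may differ):

```shell
# Print an illustrative retry schedule: 1s, 2s, 4s, ... capped at 60s.
delay=1
for attempt in 1 2 3 4 5 6 7 8; do
  [ "$delay" -gt 60 ] && delay=60
  printf '%ss ' "$delay"
  delay=$((delay * 2))
done
echo
```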
---vfs-cache-mode full
+--vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
@@ -2916,7 +3272,7 @@ htpasswd -B htpasswd anotherUser
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
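To see what a sparse file looks like, here is a small demonstration (a sketch for GNU/Linux tools; FAT/exFAT will not behave this way):

```shell
# Create a file with a 100 MiB logical size but no written data, then
# compare the apparent size with the actual disk usage.
f=$(mktemp)
truncate -s 100M "$f"             # logical size only; no blocks allocated
stat -c 'apparent: %s bytes' "$f"
du -k "$f" | awk '{print "on disk: " $1 " KiB"}'
rm -f "$f"
```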
-
+
These flags may be used to enable/disable features of the VFS for performance or other reasons.
In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@@ -2932,7 +3288,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
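For example, a hypothetical serve invocation (remote:path is a placeholder) raising the number of parallel uploads from the write cache:

```shell
# Write caching plus eight parallel uploads from the cache back to the remote.
rclone serve webdav remote:path \
  --vfs-cache-mode writes \
  --transfers 8
```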
-VFS Case Sensitivity
+VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive, but that is not the default.
@@ -2940,7 +3296,7 @@ htpasswd -B htpasswd anotherUser
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
-Alternate report of used bytes
+Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Auth Proxy
@@ -2974,7 +3330,7 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve webdav remote:path [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
--auth-proxy string A program to use to create the backend from the auth.
--baseurl string Prefix for URLs - leave blank for root.
@@ -3009,20 +3365,20 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name is not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone settier
Changes storage class/tier of objects in remote.
-Synopsis
+Synopsis
rclone settier changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc.
Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.
You can use it to tier single object
@@ -3032,53 +3388,64 @@ htpasswd -B htpasswd anotherUser
Or just provide a remote directory and all files in the directory will be tiered
rclone settier tier remote:path/dir
rclone settier tier remote:path [flags]
-Options
+Options
-h, --help help for settier
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone test
Run a test command
-Synopsis
+Synopsis
Rclone test is used to run test commands.
Select which test command you want with the subcommand, e.g.
rclone test memory remote:
Each subcommand has its own options which you can see in their help.
NB Be careful running these commands, they may do strange things so reading their documentation first is recommended.
-Options
+Options
-h, --help help for test
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
+rclone test changenotify
+Log any change notify requests for the remote passed in.
+rclone test changenotify remote: [flags]
+Options
+ -h, --help help for changenotify
+ --poll-interval duration Time to wait between polling for changes. (default 10s)
+See the global flags page for global options not listed here.
+SEE ALSO
+
rclone test histogram
Makes a histogram of file name characters.
-Synopsis
+Synopsis
This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.
The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.
rclone test histogram [remote:path] [flags]
-Options
+Options
-h, --help help for histogram
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone test info
Discovers file name or other limitations for paths.
-Synopsis
+Synopsis
rclone info discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one.
NB this can create undeletable files and other hazards - use with care
rclone test info [remote:path]+ [flags]
-Options
+Options
--all Run all tests.
--check-control Check control characters.
--check-length Check max filename length.
@@ -3088,40 +3455,40 @@ htpasswd -B htpasswd anotherUser
--upload-wait duration Wait after writing a file.
--write-json string Write results to file.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone test makefiles
-Make a random file hierarchy in
-
+Make a random file hierarchy in a directory
rclone test makefiles <dir> [flags]
-Options
+Options
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
-h, --help help for makefiles
--max-file-size SizeSuffix Maximum size of files to create (default 100)
--max-name-length int Maximum size of file names (default 12)
--min-file-size SizeSuffix Minimum size of file to create
- --min-name-length int Minimum size of file names (default 4)
+ --min-name-length int Minimum size of file names (default 4)
+ --seed int Seed for the random number generator (0 for random) (default 1)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone test memory
Load all the objects at remote:path into memory and report memory stats.
rclone test memory remote:path [flags]
-Options
+Options
-h, --help help for memory
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
rclone touch
Create new file or change file modification time.
-Synopsis
+Synopsis
Set the modification time on object(s) as specified by remote:path to have the current time.
If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided.
If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of:
@@ -3132,19 +3499,19 @@ Make a random file hierarchy in
Note that --timestamp is in UTC; if you want local time then add the --localtime flag.
rclone touch remote:path [flags]
-Options
+Options
-h, --help help for touch
--localtime Use localtime for timestamp, not UTC.
-C, --no-create Do not create the file if it does not exist.
-t, --timestamp string Use specified time instead of the current time of day.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone tree
List the contents of the remote in a tree like fashion.
-Synopsis
+Synopsis
rclone tree lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -3160,7 +3527,7 @@ Make a random file hierarchy in
You can use any of the filtering options with the tree command (e.g. --include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which are compatible with the unix tree command. Note that not all of them have short options as they conflict with rclone's short options.
rclone tree remote:path [flags]
-Options
+Options
-a, --all All files are listed (list . files too).
-C, --color Turn colorization on always.
-d, --dirs-only List directories only.
@@ -3183,7 +3550,7 @@ Make a random file hierarchy in
-U, --unsorted Leave files unsorted.
--version Sort files alphanumerically by version.
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
@@ -3201,7 +3568,7 @@ Make a random file hierarchy in
The syntax of the paths passed to the rclone command are as follows.
/path/to/dir
This refers to the local file system.
-On Windows \ may be used instead of / in local paths only, non local paths must use /. See local filesystem documentation for more about Windows-specific paths.
+On Windows \ may be used instead of / in local paths only, non local paths must use /. See local filesystem documentation for more about Windows-specific paths.
These paths needn't start with a leading / - if they don't then they will be relative to the current directory.
remote:path/to/dir
This refers to a directory path/to/dir on remote: as defined in the config file (configured with rclone config).
@@ -3226,10 +3593,12 @@ rclone copy ":http,url='https://example.com':path/to/dir" /tmp
rclone copy :sftp,host=example.com:path/to/dir /tmp/dir
These can apply to modify existing remotes as well as create new remotes with the on the fly syntax. This example is equivalent to adding the --drive-shared-with-me parameter to the remote gdrive:.
rclone lsf "gdrive,shared_with_me:path/to/dir"
-The major advantage to using the connection string style syntax is that it only applies the the remote, not to all the remotes of that type of the command line. A common confusion is this attempt to copy a file shared on google drive to the normal drive which does not work because the --drive-shared-with-me flag applies to both the source and the destination.
+The major advantage to using the connection string style syntax is that it only applies to the remote, not to all the remotes of that type on the command line. A common confusion is this attempt to copy a file shared on google drive to the normal drive which does not work because the --drive-shared-with-me flag applies to both the source and the destination.
rclone copy --drive-shared-with-me gdrive:shared-file.txt gdrive:
However using the connection string syntax, this does work.
rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:
+Note that the connection string only affects the options of the immediate backend. If for example gdriveCrypt is a crypt based on gdrive, then the following command will not work as intended, because shared_with_me is ignored by the crypt backend:
+rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:
The connection strings have the following syntax
remote,parameter=value,parameter2=value2:path/to/dir
:backend,parameter=value,parameter2=value2:path/to/dir
@@ -3296,11 +3665,11 @@ rclone copy :sftp,host=example.com:path/to/dir /tmp/dir
This can be used when scripting to make aged backups efficiently, e.g.
rclone sync -i remote:current-backup remote:previous-backup
rclone sync -i /path/to/files remote:current-backup
-Options
+Options
Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, e.g. 1, 2**10, 2**20, 2**30 respectively.
+Options which use SIZE use KiByte (multiples of 1024 bytes) by default. However, a suffix of B for Byte, K for KiByte, M for MiByte, G for GiByte, T for TiByte and P for PiByte may be used. These are the binary units, e.g. 1, 2**10, 2**20, 2**30 respectively.
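Since the suffixes are binary, each step is a factor of 1024; a quick shell check of what the common suffixes expand to:

```shell
# K, M and G as binary multiples: 2**10, 2**20 and 2**30 bytes.
K=$((1 << 10)); M=$((1 << 20)); G=$((1 << 30))
echo "K=$K M=$M G=$G"
# So e.g. a SIZE of 10M is:
echo $((10 * M))
```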
--backup-dir=DIR
When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.
If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
@@ -3315,12 +3684,12 @@ rclone sync -i /path/to/files remote:current-backup
--bwlimit=BANDWIDTH_SPEC
This option controls the bandwidth limit. For example
--bwlimit 10M
-would mean limit the upload and download bandwidth to 10 MByte/s. NB this is bytes per second not bits per second. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0 which means to not limit bandwidth.
+would mean limit the upload and download bandwidth to 10 MiByte/s. NB this is bytes per second not bits per second. To use a single limit, specify the desired bandwidth in KiByte/s, or use a suffix B|K|M|G|T|P. The default is 0 which means to not limit bandwidth.
The upload and download bandwidth can be specified separately, as --bwlimit UP:DOWN, so
--bwlimit 10M:100k
-would mean limit the upload bandwidth to 10 MByte/s and the download bandwidth to 100 kByte/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use
+would mean limit the upload bandwidth to 10 MiByte/s and the download bandwidth to 100 KiByte/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use
--bwlimit 10M:off
-this would limit the upload bandwidth to 10MByte/s but the download bandwidth would be unlimited.
+this would limit the upload bandwidth to 10 MiByte/s but the download bandwidth would be unlimited.
When specified as above the bandwidth limits last for the duration of run of the rclone binary.
It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH... where WEEKDAY is an optional element.
@@ -3330,23 +3699,23 @@ rclone sync -i /path/to/files remote:current-backup
An example of a typical timetable to avoid link saturation during daytime working hours could be:
--bwlimit "08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off"
-In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am every day. At noon, it will rise to 10MByte/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MByte/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
+In this example, the transfer bandwidth will be set to 512 KiByte/s at 8am every day. At noon, it will rise to 10 MiByte/s, and drop back to 512 KiByte/s at 1pm. At 6pm, the bandwidth limit will be set to 30 MiByte/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
An example of a timetable with WEEKDAY could be:
--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"
-It means that, the transfer bandwidth will be set to 512kBytes/sec on Monday. It will rise to 10MByte/s before the end of Friday. At 10:00 on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will be unlimited.
+It means that the transfer bandwidth will be set to 512 KiByte/s on Monday. It will rise to 10 MiByte/s before the end of Friday. At 10:00 on Saturday it will be set to 1 MiByte/s. From 20:00 on Sunday it will be unlimited.
Timeslots without WEEKDAY are extended to the whole week. So this example:
--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
Is equivalent to this:
--bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
Bandwidth limits apply to the data transfer for all backends. For most backends the directory listing bandwidth is also included (exceptions being the non HTTP backends, ftp, sftp and tardigrade).
-Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.
+Note that the units are Byte/s, not bit/s. Typically connections are measured in bit/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625 MiByte/s so you would use a --bwlimit 0.625M parameter for rclone.
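The divide-by-8 conversion can be checked directly in the shell (awk is used here only for the floating point arithmetic):

```shell
# 5 Mbit/s expressed in MByte/s: divide the bit rate by 8.
awk 'BEGIN { printf "%.3f MByte/s\n", 5 / 8 }'
```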
On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. This allows one to remove the limitations of a long running rclone transfer and to restore it back to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:
kill -SIGUSR2 $(pidof rclone)
If you configure rclone with a remote control then you can change the bwlimit dynamically:
rclone rc core/bwlimit rate=1M
--bwlimit-file=BANDWIDTH_SPEC
This option controls the per file bandwidth limit. For the options see the --bwlimit flag.
-For example use this to allow no transfers to be faster than 1MByte/s
+For example use this to allow no transfers to be faster than 1 MiByte/s
--bwlimit-file 1M
This can be used in conjunction with --bwlimit.
Note that if a schedule is provided the file will use the schedule in effect at the start of the transfer.
@@ -3374,12 +3743,34 @@ rclone sync -i /path/to/files remote:current-backup
You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory. See --copy-dest and --backup-dir.
--config=CONFIG_FILE
-Specify the location of the rclone configuration file.
-Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version). If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf.
-If there is a file rclone.conf in the same directory as the rclone executable it will be preferred. This file must be created manually for Rclone to use it, it will never be created automatically.
+Specify the location of the rclone configuration file, to override the default. E.g. rclone config --config="rclone.conf".
+The exact default is a bit complex to describe, due to changes introduced through different versions of rclone while preserving backwards compatibility, but in most cases it is as simple as:
+
+%APPDATA%/rclone/rclone.conf on Windows
+~/.config/rclone/rclone.conf on other operating systems
+
+The complete logic is as follows: Rclone will look for an existing configuration file in any of the following locations, in priority order:
+
+rclone.conf (in program directory, where rclone executable is)
+%APPDATA%/rclone/rclone.conf (only on Windows)
+$XDG_CONFIG_HOME/rclone/rclone.conf (on all systems, including Windows)
+~/.config/rclone/rclone.conf (see below for explanation of ~ symbol)
+~/.rclone.conf
+
+If no existing configuration file is found, then a new one will be created in the following location:
+
+- On Windows: Location 2 listed above, except in the unlikely event that
APPDATA
is not defined, in which case location 4 is used instead.
+- On Unix: Location 3 if
XDG_CONFIG_HOME
is defined, else location 4.
+- Fallback to location 5 (on all OS) when the rclone directory cannot be created; if a home directory was also not found, then the path
.rclone.conf
relative to the current working directory will be used as a final resort.
+
+The ~
symbol in paths above represents the home directory of the current user on any OS, and the value is defined as follows:
+
+- On Windows:
%HOME%
if defined, else %USERPROFILE%
, or else %HOMEDRIVE%\%HOMEPATH%
.
+- On Unix:
$HOME
if defined, else by looking up current user in OS-specific user database (e.g. passwd file), or else use the result from shell command cd && pwd
.
+
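The lookup order above can be sketched as pure logic. This is an illustrative, hypothetical helper (config_locations is not rclone's actual code); the real rclone also checks which candidate files exist and applies the creation fallbacks described above:

```python
import os

def config_locations(is_windows, appdata=None, xdg_config_home=None, home=None):
    """Candidate rclone.conf paths in the priority order described above.

    Simplified sketch: returns all candidates; real rclone picks the
    first one that actually exists.
    """
    locs = ["rclone.conf"]  # 1: in the same directory as the rclone executable
    if is_windows and appdata:
        locs.append(os.path.join(appdata, "rclone", "rclone.conf"))          # 2
    if xdg_config_home:
        locs.append(os.path.join(xdg_config_home, "rclone", "rclone.conf"))  # 3
    if home:
        locs.append(os.path.join(home, ".config", "rclone", "rclone.conf"))  # 4
        locs.append(os.path.join(home, ".rclone.conf"))                      # 5
    return locs
```

For example, on a typical Unix system with only HOME set, the candidates are the portable-mode file, then locations 4 and 5.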
If you run rclone config file
you will see where the default location is for you.
-Use this flag to override the config location, e.g. rclone --config=".myconfig" .config
.
-If the location is set to empty string ""
or the special value /notfound
, or the os null device represented by value NUL
on Windows and /dev/null
on Unix systems, then rclone will keep the config file in memory only.
+The fact that an existing file rclone.conf
in the same directory as the rclone executable is always preferred means that it is easy to run in "portable" mode: download the rclone executable to a writable directory and then create an empty file rclone.conf
in the same directory.
+If the location is set to empty string ""
or path to a file with name notfound
, or the os null device represented by value NUL
on Windows and /dev/null
on Unix systems, then rclone will keep the config file in memory only.
The file format is basic INI: Sections of text, led by a [section]
header and followed by key=value
entries on separate lines. In rclone each remote is represented by its own section, where the section name defines the name of the remote. Options are specified as the key=value
entries, where the key is the option name without the --backend-
prefix, in lowercase and with _
instead of -
. E.g. option --mega-hard-delete
corresponds to key hard_delete
. Only backend options can be specified. A special, and required, key type
identifies the storage system, where the value is the internal lowercase name as returned by command rclone help backends
. Comments are indicated by ;
or #
at the beginning of a line.
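The flag-to-key naming rule above can be sketched as follows (flag_to_config_key is a hypothetical helper for illustration, not rclone's actual code):

```python
def flag_to_config_key(flag, backend):
    """Convert a backend flag like --mega-hard-delete to its config file key.

    Per the rule above: drop the leading --, drop the backend prefix,
    and replace - with _.
    """
    name = flag.lstrip("-")
    prefix = backend + "-"
    if name.startswith(prefix):
        name = name[len(prefix):]
    return name.replace("-", "_")
```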
Example:
[megaremote]
@@ -3405,13 +3796,14 @@ pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH
To see a list of which features can be disabled use:
--disable help
See the overview features and optional features to get an idea of which feature does what.
-This flag can be useful for debugging and in exceptional circumstances (e.g. Google Drive limiting the total volume of Server Side Copies to 100GB/day).
+This flag can be useful for debugging and in exceptional circumstances (e.g. Google Drive limiting the total volume of Server Side Copies to 100 GiB/day).
--dscp VALUE
Specify a DSCP value or name to use in connections. This could help a QoS system identify the traffic class. BE, EF, DF, LE, CSx and AFxx are allowed.
See the description of differentiated services to get an idea of this field. Setting this to 1 (LE), marking the flow as SCAVENGER class, can avoid occupying too much bandwidth in a network with DiffServ support (RFC 8622).
For example, if you have configured QoS on your router to handle LE properly, running:
rclone copy --dscp LE from:/from to:/to
would make the priority lower than usual internet flows.
+This option has no effect on Windows (see golang/go#42728).
-n, --dry-run
Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync
command which deletes files in the destination.
--expect-continue-timeout=TIME
@@ -3504,7 +3896,7 @@ y/n/s/!/q> n
Disable low level retries with --low-level-retries 1
.
--max-backlog=N
This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.
-This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use.
+This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N KiB of memory when the backlog is in use.
Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make --order-by
work more accurately.
Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.
Setting this to a negative number will make the backlog as large as possible.
@@ -3546,12 +3938,12 @@ y/n/s/!/q> n
--multi-thread-streams=N
When using multi thread downloads (see above --multi-thread-cutoff
) this sets the maximum number of streams to use. Set to 0
to disable multi thread downloads (Default 4).
Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams, rclone divides the size of the file by the --multi-thread-cutoff
and rounds up, up to the maximum set with --multi-thread-streams
.
-So if --multi-thread-cutoff 250MB
and --multi-thread-streams 4
are in effect (the defaults):
+So if --multi-thread-cutoff 250M
and --multi-thread-streams 4
are in effect (the defaults):
-- 0MB..250MB files will be downloaded with 1 stream
-- 250MB..500MB files will be downloaded with 2 streams
-- 500MB..750MB files will be downloaded with 3 streams
-- 750MB+ files will be downloaded with 4 streams
+- 0..250 MiB files will be downloaded with 1 stream
+- 250..500 MiB files will be downloaded with 2 streams
+- 500..750 MiB files will be downloaded with 3 streams
+- 750+ MiB files will be downloaded with 4 streams
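The stream calculation above can be expressed as a one-line sketch (download_streams is a hypothetical helper, using the stated defaults):

```python
import math

def download_streams(size, cutoff=250 * 1024**2, max_streams=4):
    """Streams used for a multi-thread download, per the rule above:
    ceil(size / cutoff), capped at max_streams, with a minimum of 1."""
    return max(1, min(math.ceil(size / cutoff), max_streams))
```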
--no-check-dest
The --no-check-dest
can be used with move
or copy
and it causes rclone not to check the destination at all when copying files.
@@ -3666,10 +4058,10 @@ y/n/s/!/q> n
When this is specified, rclone enables the single-line stats and prepends the display with a user-supplied date string. The date string MUST be enclosed in quotes. Follow golang specs for date formatting syntax.
--stats-unit=bits|bytes
-By default, data transfer rates will be printed in bytes/second.
-This option allows the data rate to be printed in bits/second.
+By default, data transfer rates will be printed in bytes per second.
+This option allows the data rate to be printed in bits per second.
Data transfer volume will still be reported in bytes.
-The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
+The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bit/s and not 1,000,000 bit/s.
The default is bytes
.
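The binary-unit behaviour above is simple arithmetic: a megabit in these stats is 1024² bits, not 10⁶ bits.

```python
# rclone's stats rate units are binary, not SI:
binary_mbit = 1024 ** 2  # 1 Mbit/s as reported: 1,048,576 bit/s
si_mbit = 1000 ** 2      # an SI megabit:        1,000,000 bit/s
difference = binary_mbit - si_mbit  # 48,576 bit/s
```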
--suffix=SUFFIX
When using sync
, copy
or move
any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.
@@ -3826,7 +4218,7 @@ export RCLONE_CONFIG_PASS
export RCLONE_PASSWORD_COMMAND="pass rclone/config"
If the passwordstore
password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the passwordstore
system, and is never embedded in the clear in scripts, nor available for examination using standard commands. It is quite possible, with long running rclone sessions, for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.
If you are running rclone inside a script, unless you are using the --password-command
method, you might want to disable password prompts. To do that, pass the parameter --ask-password=false
to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS
doesn't contain a valid password, and --password-command
has not been supplied.
-Some rclone commands, such as genautocomplete
, do not require configuration. Nevertheless, rclone will read any configuration file found according to the rules described above. If an encrypted configuration file is found, this means you will be prompted for password (unless using --password-command
). To avoid this, you can bypass the loading of the configuration file by overriding the location with an empty string ""
or the special value /notfound
, or the os null device represented by value NUL
on Windows and /dev/null
on Unix systems (before rclone version 1.55 only this null device alternative was supported). E.g. rclone --config="" genautocomplete bash
.
+Whenever running commands that may be affected by options in a configuration file, rclone will look for an existing file according to the rules described above, and load any it finds. If an encrypted file is found, this includes decrypting it, with the possible consequence of a password prompt. When executing a command line that you know is not actually using anything from such a configuration file, you can avoid it being loaded by overriding the location, e.g. with one of the documented special values for memory-only configuration. Since only backend options can be stored in configuration files, this is normally unnecessary for commands that do not operate on backends, e.g. genautocomplete
. However, it will be relevant for commands that do operate on backends in general, but are used without referencing a stored remote, e.g. when listing local filesystem paths or using connection strings: rclone --config="" ls .
Developer options
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name e.g. --drive-test-option
- see the docs for the remote in question.
--cpuprofile=FILE
@@ -3911,12 +4303,13 @@ export RCLONE_CONFIG_PASS
Environment Variables
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Options
+Options
Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
For example, to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
Or to always use the trash in drive --drive-use-trash
, set RCLONE_DRIVE_USE_TRASH=true
.
The same parser is used for the options and the environment variables so they take exactly the same form.
+The options set by environment variables can be seen with the -vv
flag, e.g. rclone version -vv
.
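The naming rule above is mechanical and can be sketched in a few lines (option_env_var is a hypothetical helper for illustration):

```python
def option_env_var(flag):
    """Environment variable name for a long option, per the rule above:
    strip the leading --, change - to _, uppercase, prepend RCLONE_."""
    return "RCLONE_" + flag.lstrip("-").replace("-", "_").upper()
```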
Config file
You can set defaults for values in the config file on an individual remote basis. The names of the config items are documented in the page for each backend.
To find the name of the environment variable, you need to set, take RCLONE_CONFIG_
+ name of remote + _
+ name of config file option and make it all uppercase.
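The per-remote naming rule can be sketched the same way (remote_config_env_var is a hypothetical helper for illustration):

```python
def remote_config_env_var(remote, option):
    """Environment variable overriding a config file entry for a remote:
    RCLONE_CONFIG_ + remote name + _ + option name, all uppercase."""
    return ("RCLONE_CONFIG_" + remote + "_" + option).upper()
```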
@@ -3929,18 +4322,22 @@ $ rclone lsd MYS3:
$ rclone listremotes | grep mys3
mys3:
Note that if you want to create a remote using environment variables you must create the ..._TYPE
variable as above.
-Note also that now rclone has connectionstrings, it is probably easier to use those instead which makes the above example
+Note that you can only set the options of the immediate backend, so RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID has no effect if myS3Crypt is a crypt remote based on an S3 remote. However, RCLONE_S3_ACCESS_KEY_ID will set the access key of all remotes using S3, including myS3Crypt.
+Note also that now rclone has connection strings, it is probably easier to use those instead which makes the above example
rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:
Precedence
The various different methods of backend configuration are read in this order and the first one with a value is used.
-- Flag values as supplied on the command line, e.g.
--drive-use-trash
.
-- Remote specific environment vars, e.g.
RCLONE_CONFIG_MYREMOTE_USE_TRASH
(see above).
-- Backend specific environment vars, e.g.
RCLONE_DRIVE_USE_TRASH
.
-- Config file, e.g.
use_trash = false
.
-- Default values, e.g.
true
- these can't be changed.
+- Parameters in connection strings, e.g.
myRemote,skip_links:
+- Flag values as supplied on the command line, e.g.
--skip-links
+- Remote specific environment vars, e.g.
RCLONE_CONFIG_MYREMOTE_SKIP_LINKS
(see above).
+- Backend specific environment vars, e.g.
RCLONE_LOCAL_SKIP_LINKS
.
+- Backend generic environment vars, e.g.
RCLONE_SKIP_LINKS
.
+- Config file, e.g.
skip_links = true
.
+- Default values, e.g.
false
- these can't be changed.
-So if both --drive-use-trash
is supplied on the config line and an environment variable RCLONE_DRIVE_USE_TRASH
is set, the command line flag will take preference.
+So if both --skip-links
is supplied on the command line and an environment variable RCLONE_LOCAL_SKIP_LINKS
is set, the command line flag will take preference.
+The backend configurations set by environment variables can be seen with the -vv
flag, e.g. rclone about myRemote: -vv
.
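The precedence list above amounts to "first configured source wins". A minimal sketch (resolve_option is a hypothetical helper, with None meaning "not set in that source"):

```python
def resolve_option(connection_string=None, flag=None, remote_env=None,
                   backend_env=None, generic_env=None, config_file=None,
                   default=None):
    """Return the value from the highest-priority source that set one,
    in the order listed above, falling back to the default."""
    for value in (connection_string, flag, remote_env,
                  backend_env, generic_env, config_file):
        if value is not None:
            return value
    return default
```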
For non backend configuration the order is as follows:
- Flag values as supplied on the command line, e.g.
--stats 5s
.
@@ -3955,8 +4352,10 @@ mys3:
HTTPS_PROXY
takes precedence over HTTP_PROXY
for https requests.
- The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
+USER
and LOGNAME
values are used as fallbacks for the current username. The primary method for looking up the username is OS-specific: the Windows API on Windows, the real user ID in /etc/passwd on Unix systems. In the documentation the current username is simply referred to as $USER
.
RCLONE_CONFIG_DIR
- rclone sets this variable for use in config files and sub processes to point to the directory holding the config file.
+The options set by environment variables can be seen with the -vv
and --log-level=DEBUG
flags, e.g. rclone version -vv
.
Configuring rclone on a remote / headless machine
Some of the configurations (those involving oauth2) require an Internet connected web browser.
If you are trying to set rclone up on a remote or headless box with no browser available on it (e.g. a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.
@@ -4024,22 +4423,22 @@ Configuration file is stored at:
Patterns for matching path/file names
Pattern syntax
Rclone matching rules follow a glob style:
-`*` matches any sequence of non-separator (`/`) characters
-`**` matches any sequence of characters including `/` separators
-`?` matches any single non-separator (`/`) character
-`[` [ `!` ] { character-range } `]`
- character class (must be non-empty)
-`{` pattern-list `}`
- pattern alternatives
-c matches character c (c != `*`, `**`, `?`, `\`, `[`, `{`, `}`)
-`\` c matches character c
+* matches any sequence of non-separator (/) characters
+** matches any sequence of characters including / separators
+? matches any single non-separator (/) character
+[ [ ! ] { character-range } ]
+ character class (must be non-empty)
+{ pattern-list }
+ pattern alternatives
+c matches character c (c != *, **, ?, \, [, {, })
+\c matches reserved character c (c = *, **, ?, \, [, {, })
character-range:
-c matches character c (c != `\\`, `-`, `]`)
-`\` c matches character c
-lo `-` hi matches character c for lo <= c <= hi
+c matches character c (c != \, -, ])
+\c matches reserved character c (c = \, -, ])
+lo - hi matches character c for lo <= c <= hi
pattern-list:
-pattern { `,` pattern }
- comma-separated (without spaces) patterns
+pattern { , pattern }
+ comma-separated (without spaces) patterns
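The core tokens of the grammar above can be translated to a regular expression. This is a simplified, hypothetical sketch (glob_to_regex handles only *, ** , ? and backslash escapes, not character classes or {alternatives}, and is not rclone's actual matcher):

```python
import re

def glob_to_regex(pattern):
    """Translate the core rclone glob tokens above to an anchored regex."""
    out, i = [], 0
    while i < len(pattern):
        if pattern[i:i + 2] == "**":
            out.append(".*")        # ** crosses / separators
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")     # * stops at / separators
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")      # ? matches one non-separator char
            i += 1
        elif pattern[i] == "\\" and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # \c matches c literally
            i += 2
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return "^" + "".join(out) + "$"
```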
character classes (see Go regular expression reference) include:
Named character classes (e.g. [\d], [^\d], [\D], [^\D])
Perl character classes (e.g. \s, \S, \w, \W)
@@ -4261,11 +4660,11 @@ user2/prefect
If the rclone error Command .... needs .... arguments maximum: you provided .... non flag arguments:
is encountered, the cause is commonly spaces within the name of a remote or flag value. The fix then is to quote values containing spaces.
Other filters
--min-size
- Don't transfer any file smaller than this
-Controls the minimum size file within the scope of an rclone command. Default units are kBytes
but abbreviations k
, M
, or G
are valid.
-E.g. rclone ls remote: --min-size 50k
lists files on remote:
of 50kByte size or larger.
+Controls the minimum size file within the scope of an rclone command. Default units are KiByte
but abbreviations K
, M
, G
, T
or P
are valid.
+E.g. rclone ls remote: --min-size 50k
lists files on remote:
of 50 KiByte size or larger.
--max-size
- Don't transfer any file larger than this
-Controls the maximum size file within the scope of an rclone command. Default units are kBytes
but abbreviations k
, M
, or G
are valid.
-E.g. rclone ls remote: --max-size 1G
lists files on remote:
of 1GByte size or smaller.
+Controls the maximum size file within the scope of an rclone command. Default units are KiByte
but abbreviations K
, M
, G
, T
or P
are valid.
+E.g. rclone ls remote: --max-size 1G
lists files on remote:
of 1 GiByte size or smaller.
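The size parsing above can be sketched as follows (parse_size is a hypothetical helper; it assumes binary multipliers and the default KiByte unit described above, and ignores details of rclone's real parser such as decimal suffixes like KB):

```python
SUFFIXES = {"B": 1, "K": 1024, "M": 1024**2, "G": 1024**3,
            "T": 1024**4, "P": 1024**5}

def parse_size(s, default_unit="K"):
    """Parse a size like '50k' or '1G' into bytes.
    A bare number defaults to KiByte, per the flag descriptions above."""
    s = s.strip()
    unit = default_unit
    if s and s[-1].upper() in SUFFIXES:
        unit, s = s[-1].upper(), s[:-1]
    return int(float(s) * SUFFIXES[unit])
```

So --min-size 50k means 51,200 bytes, and --max-size 1G means 1,073,741,824 bytes.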
--max-age
- Don't transfer any file older than this
Controls the maximum age of files within the scope of an rclone command. Default units are seconds or the following abbreviations are valid:
@@ -4297,7 +4696,7 @@ user2/prefect
In conjunction with rclone sync
, --delete-excluded
deletes any files on the destination which are excluded from the command.
E.g. the scope of rclone sync -i A: B:
can be restricted:
rclone --min-size 50k --delete-excluded sync A: B:
-All files on B:
which are less than 50 kBytes are deleted because they are excluded from the rclone sync command.
+All files on B:
which are less than 50 KiByte are deleted because they are excluded from the rclone sync command.
--dump filters
- dump the filters to the output
Dumps the defined filters to standard output in regular expression format.
Useful for debugging.
@@ -4650,8 +5049,16 @@ rclone rc cache/expire remote=/ withData=true
name - name of remote
parameters - a map of { "key": "value" } pairs
type - type of the new remote
-obscure - optional bool - forces obscuring of passwords
-noObscure - optional bool - forces passwords not to be obscured
+opt - a dictionary of options to control the configuration
+
+- obscure - declare passwords are plain and need obscuring
+- noObscure - declare passwords are already obscured and don't need obscuring
+- nonInteractive - don't interact with a user, return questions
+- continue - continue the config process with an answer
+- all - ask all the config questions not just the post config ones
+- state - state to restart with - used with continue
+- result - result to restart with - used with continue
+
See the config create command for more information on the above.
Authentication is required for this call.
@@ -4695,8 +5102,16 @@ rclone rc cache/expire remote=/ withData=true
- name - name of remote
- parameters - a map of { "key": "value" } pairs
-- obscure - optional bool - forces obscuring of passwords
-- noObscure - optional bool - forces passwords not to be obscured
+- opt - a dictionary of options to control the configuration
+
+- obscure - declare passwords are plain and need obscuring
+- noObscure - declare passwords are already obscured and don't need obscuring
+- nonInteractive - don't interact with a user, return questions
+- continue - continue the config process with an answer
+- all - ask all the config questions not just the post config ones
+- state - state to restart with - used with continue
+- result - result to restart with - used with continue
+
See the config update command for more information on the above.
Authentication is required for this call.
@@ -4823,7 +5238,7 @@ OR
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
- "speed": average speed in bytes/sec since start of the group,
+ "speed": average speed in bytes per second since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
"totalTransfers": total number of transfers in the group,
@@ -4836,8 +5251,8 @@ OR
"eta": estimated time in seconds until file transfer completion
"name": name of the file,
"percentage": progress of the file transfer in percent,
- "speed": average speed over the whole transfer in bytes/sec,
- "speedAvg": current speed in bytes/sec as an exponentially weighted moving average,
+ "speed": average speed over the whole transfer in bytes per second,
+ "speedAvg": current speed in bytes per second as an exponentially weighted moving average,
"size": size of the file in bytes
}
],
@@ -5748,6 +6163,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
+Uptobox |
+- |
+No |
+No |
+Yes |
+- |
+
+
WebDAV |
MD5, SHA1 ³ |
Yes ⁴ |
@@ -5755,7 +6178,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
- |
-
+
Yandex Disk |
MD5 |
Yes |
@@ -5763,7 +6186,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
R |
-
+
Zoho WorkDrive |
- |
No |
@@ -5771,7 +6194,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
- |
-
+
The local filesystem |
All |
Yes |
@@ -5782,7 +6205,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Notes
-¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.
+¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the 4 MiB block SHA256s.
² SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH.
³ WebDAV supports hashes when used with Owncloud and Nextcloud only.
⁴ WebDAV supports modtimes when used with Owncloud and Nextcloud only.
@@ -6580,6 +7003,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
+Uptobox |
+No |
+Yes |
+Yes |
+Yes |
+No |
+No |
+No |
+No |
+No |
+No |
+
+
WebDAV |
Yes |
Yes |
@@ -6592,7 +7028,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Yandex Disk |
Yes |
Yes |
@@ -6605,7 +7041,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Zoho WorkDrive |
Yes |
Yes |
@@ -6618,7 +7054,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
The local filesystem |
Yes |
No |
@@ -6670,9 +7106,9 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--auto-confirm If enabled, do not request console confirmation.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --bwlimit-file BwTimetable Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16Mi)
+ --bwlimit BwTimetable Bandwidth limit in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
+ --bwlimit-file BwTimetable Bandwidth limit per file in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers.
@@ -6690,7 +7126,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
+ --disable string Disable a comma separated list of features. Use --disable help to see a list.
+ --disable-http2 Disable HTTP/2 in the global transport.
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections. Can be value or names, eg. CS1, LE, DF, AF21.
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
@@ -6732,14 +7169,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-duration duration Maximum duration rclone will transfer data for.
- --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
- --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-check-dest Don't check the destination, copy regardless.
@@ -6789,8 +7226,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--stats-one-line Make the stats fit on one line.
--stats-one-line-date Enables --stats-one-line and add current date/time prefix.
--stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100Ki)
--suffix string Suffix to add to changed files.
--suffix-keep-extension Preserve the extension when using --suffix.
--syslog Use Syslog for logging
@@ -6806,7 +7243,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.55.0")
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
These flags are available for every command. They control the backends and may be set in the config file.
@@ -6814,15 +7251,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
--acd-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob.
--acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting.
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB). (default 4Mi)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata.
--azureblob-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
@@ -6836,12 +7273,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-public-access string Public access level of a container: blob, container.
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal.
- --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
+ --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G)
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96Mi)
+ --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
--b2-download-url string Custom endpoint for downloads.
@@ -6852,7 +7289,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200Mi)
--b2-versions Include old versions in directory listings.
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL.
@@ -6865,12 +7302,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point.
--box-token string OAuth Access Token as a JSON blob.
--box-token-url string Token server url.
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB). (default 50Mi)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5Mi)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10Gi)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
@@ -6886,13 +7323,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
- --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G)
+ --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks.
--chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5")
--chunker-remote string Remote to chunk/unchunk.
--compress-level int GZIP compression level (-2 to 9). (default -1)
--compress-mode string Compression mode. (default "gzip")
- --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20M)
+ --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20Mi)
--compress-remote string Remote to compress.
-L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
@@ -6907,7 +7344,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-auth-url string Auth server URL.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-disable-http2 Disable drive using http2 (default true)
@@ -6937,13 +7374,16 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-token string OAuth Access Token as a JSON blob.
--drive-token-url string Token server url.
--drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-shared-date Use date file was shared instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-auth-url string Auth server URL.
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-batch-mode string Upload file batching sync|async|off. (default "sync")
+ --dropbox-batch-size int Max number of files in upload batch.
+ --dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150Mi). (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
@@ -6954,6 +7394,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--dropbox-token-url string Token server url.
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
+ --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
@@ -7008,7 +7450,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-auth-url string Auth server URL.
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
--hubic-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
@@ -7017,9 +7459,10 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--hubic-token-url string Token server url.
--jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
+ --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
--jottacloud-trashed-only Only show files that are in the trash.
- --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
@@ -7034,16 +7477,16 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
- --local-zero-size-links Assume the Stat size of links is zero (and read them instead)
+ --local-unicode-normalization Apply unicode NFC normalization to paths and filenames
+ --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (Deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash. (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
- --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3G)
- --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32M)
+ --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
+ --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32Mi)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega.
--mega-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -7052,7 +7495,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--mega-user string User name
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-auth-url string Auth server URL.
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10M)
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
@@ -7062,12 +7505,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-link-password string Set the password for links created by the link command.
--onedrive-link-scope string Set the scope of the links created by the link command. (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command. (default "view")
+ --onedrive-list-chunk int Size of listing chunk. (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive. (default "global")
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs.
--onedrive-token string OAuth Access Token as a JSON blob.
--onedrive-token-url string Token server url.
- --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10M)
+ --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10Mi)
--opendrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password. (obscured)
--opendrive-username string Username
@@ -7082,20 +7526,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--premiumizeme-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
- --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4Mi)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
- --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G)
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5Mi)
+ --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -7110,6 +7554,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
+ --s3-no-head-object If set, don't HEAD objects
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
@@ -7124,7 +7569,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
- --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint.
--s3-v2-auth If true use v2 authentication.
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
@@ -7137,6 +7582,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-concurrent-reads If set don't use concurrent reads
+ --sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -7158,11 +7604,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods.
--sftp-user string SSH username, leave blank for current username, $USER
- --sharefile-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 64M)
+ --sharefile-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 64Mi)
--sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls.
--sharefile-root-folder-id string ID of the root folder
- --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128M)
+ --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128Mi)
--skip-links Don't warn about skipped symlinks.
--sugarsync-access-key-id string Sugarsync Access Key ID.
--sugarsync-app-id string Sugarsync App ID.
@@ -7181,7 +7627,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
@@ -7207,9 +7653,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category. (default "ff")
--union-upstreams string List of space separated upstreams.
+ --uptobox-access-token string Your access Token, get it from https://uptobox.com/my_account
+ --uptobox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string This sets the encoding for the backend.
+ --webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-pass string Password. (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.
@@ -7224,10 +7673,184 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
- --zoho-region string Zoho region to connect to. You'll have to use the region you organization is registered in.
+ --zoho-region string Zoho region to connect to.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
-1Fichier
+Docker Volume Plugin
+Introduction
+Docker 1.9 added support for creating named volumes via the command-line interface and mounting them in containers as a way to share data between them. Since Docker 1.10 you can create named volumes with Docker Compose by describing them in docker-compose.yml files for use by container groups on a single host. As of Docker 1.12 volumes are supported by Docker Swarm, included with Docker Engine, and created from descriptions in swarm compose v3 files for use with swarm stacks across multiple cluster nodes.
+Docker Volume Plugins augment the default local volume driver included in Docker with stateful volumes shared across containers and hosts. Unlike local volumes, your data will not be deleted when such a volume is removed. Plugins can run managed by the docker daemon, as a native system service (under systemd, sysv or upstart) or as a standalone executable. Rclone can run as a docker volume plugin in all these modes. It interacts with the local docker daemon via the plugin API and handles mounting of remote file systems into docker containers, so it must run on the same host as the docker daemon or on every Swarm node.
+Getting started
+In the first example we will use an SFTP rclone volume with the Docker engine on a standalone Ubuntu machine.
+Start by installing Docker on the host.
+The FUSE driver is a prerequisite for rclone mounting and should be installed on the host:
+sudo apt-get -y install fuse
+Create two directories required by rclone docker plugin:
+sudo mkdir -p /var/lib/docker-plugins/rclone/config
+sudo mkdir -p /var/lib/docker-plugins/rclone/cache
+Install the managed rclone docker plugin:
+docker plugin install rclone/docker-volume-rclone args="-v" --alias rclone --grant-all-permissions
+docker plugin list
+Create your SFTP volume:
+docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
+Note that since all options are static, you don't even have to run rclone config or create the rclone.conf file (but the config directory should still be present). In the simplest case you can use localhost as the hostname and your SSH credentials as the username and password. You can also change the remote path to your home directory on the host, for example -o path=/home/username.
+Time to create a test container and mount the volume into it:
+docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash
+If all goes well, you will enter the new container and land right in the mounted SFTP remote. You can type ls to list the mounted directory or otherwise play with it. Type exit when you are done. The container will stop but the volume will stay, ready to be reused. When it's not needed anymore, remove it:
+docker volume list
+docker volume remove firstvolume
+Now let us try something more elaborate: a Google Drive volume on a multi-node Docker Swarm.
+Start by installing Docker and FUSE, creating the plugin directories and installing the rclone plugin on every swarm node. Then set up the Swarm.
+Google Drive volumes need an access token which can be set up via a web browser and will be periodically renewed by rclone. The managed plugin cannot run a browser, so we will use a technique similar to the rclone setup on a headless box.
+Run rclone config on another machine equipped with a web browser and graphical user interface. Create the Google Drive remote. When done, transfer the resulting rclone.conf to the Swarm cluster and save it as /var/lib/docker-plugins/rclone/config/rclone.conf on every node. By default this location is accessible only to the root user, so you will need appropriate privileges. The resulting config will look like this:
+[gdrive]
+type = drive
+scope = drive
+drive_id = 1234567...
+root_folder_id = 0Abcd...
+token = {"access_token":...}
+Now create the file named example.yml with a swarm stack description like this:
+version: '3'
+services:
+ heimdall:
+ image: linuxserver/heimdall:latest
+ ports: [8080:80]
+ volumes: [configdata:/config]
+volumes:
+ configdata:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:heimdall'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ poll_interval: 0
+and run the stack:
+docker stack deploy example -c ./example.yml
+After a few seconds docker will spread the parsed stack description over the cluster, create the example_heimdall service on port 8080, run service containers on one or more cluster nodes and request the example_configdata volume from rclone plugins on the node hosts. You can use the following commands to confirm the results:
+docker service ls
+docker service ps example_heimdall
+docker volume ls
+Point your browser to http://cluster.host.address:8080 and play with the service. Stop it with docker stack remove example when you are done. Note that the example_configdata volume(s) created on demand at the cluster nodes will not be automatically removed together with the stack but will stay for future reuse. You can remove them manually by invoking the docker volume remove example_configdata command on every node.
+Creating Volumes via CLI
+Volumes can be created with docker volume create. Here are a few examples:
+docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
+docker volume create vol2 -d rclone -o remote=:tardigrade,access_grant=xxx:heimdall
+docker volume create vol3 -d rclone -o type=tardigrade -o path=heimdall -o tardigrade-access-grant=xxx -o poll-interval=0
+Note the -d rclone flag that tells docker to request the volume from the rclone driver. This works even if you installed the managed driver by its full name rclone/docker-volume-rclone, because you provided the --alias rclone option.
+Volumes can be inspected as follows:
+docker volume list
+docker volume inspect vol1
+Volume Configuration
+Rclone flags and volume options are set via the -o flag to the docker volume create command. They include backend-specific parameters as well as mount and VFS options. There are also a few special -o options: remote, fs, type, path, mount-type and persist.
+remote determines an existing remote name from the config file, with a trailing colon and optionally with a remote path. See the full syntax in the rclone documentation. This option can be aliased as fs to prevent confusion with the remote parameter of such backends as crypt or alias.
+The remote=:backend:dir/subdir syntax can be used to create on-the-fly (config-less) remotes, while the type and path options provide a simpler alternative for this. Using the two split options
+-o type=backend -o path=dir/subdir
+is equivalent to the combined syntax
+-o remote=:backend:dir/subdir
+but is arguably easier to parameterize in scripts. The path part is optional.
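The equivalence between the split and combined forms can be sketched with a tiny shell helper (hypothetical, for illustration only; not part of rclone or docker):

```shell
# Hypothetical helper: render split -o type=... / -o path=... options
# in the equivalent combined remote= form. The path part may be empty.
split_to_remote() {
  local type="$1"
  local path="${2:-}"
  printf 'remote=:%s:%s\n' "$type" "$path"
}

split_to_remote backend dir/subdir   # -> remote=:backend:dir/subdir
split_to_remote sftp                 # -> remote=:sftp:
```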
+Mount and VFS options as well as backend parameters are named like their twin command-line flags without the -- CLI prefix. Optionally you can use underscores instead of dashes in option names. For example, --vfs-cache-mode full becomes -o vfs-cache-mode=full or -o vfs_cache_mode=full. Boolean CLI flags without a value gain the value true, e.g. --allow-other becomes -o allow-other=true or -o allow_other=true.
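As a rough sketch of these naming rules, the following shell function (hypothetical helper, not part of rclone or docker) converts a CLI flag into its -o underscore form, defaulting bare boolean flags to true:

```shell
# Hypothetical illustration: turn "--vfs-cache-mode full" into
# "-o vfs_cache_mode=full", and a bare boolean flag into "-o flag=true".
flag_to_opt() {
  local flag="${1#--}"        # strip the leading "--"
  local value="${2:-true}"    # boolean flags without a value become "true"
  printf -- '-o %s=%s\n' "${flag//-/_}" "$value"
}

flag_to_opt --vfs-cache-mode full   # -> -o vfs_cache_mode=full
flag_to_opt --allow-other           # -> -o allow_other=true
```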
+Please note that you can provide parameters only for the backend immediately referenced by the backend type of the mounted remote. If this is a wrapping backend like alias, chunker or crypt, you cannot provide options for the referred-to remote or backend. This limitation is imposed by the rclone connection string parser. The only workaround is to feed the plugin with rclone.conf or to configure the plugin arguments (see below).
+Special Volume Options
+mount-type determines the mount method and in general can be one of: mount, cmount, or mount2. This can be aliased as mount_type. Note that the managed rclone docker plugin currently does not support the cmount method, and mount2 is rarely needed. This option defaults to the first found method, which is usually mount, so you generally won't need it.
+persist is a reserved boolean (true/false) option. In the future it will allow persisting on-the-fly remotes in the plugin rclone.conf file.
+Connection Strings
+The remote value can be extended with connection strings as an alternative way to supply backend parameters. This is equivalent to the -o backend options with one syntactic difference: inside a connection string the backend prefix must be dropped from parameter names, but in the -o param=value array it must be present. For instance, compare the following option array
+-o remote=:sftp:/home -o sftp-host=localhost
+with equivalent connection string:
+-o remote=:sftp,host=localhost:/home
+This difference exists because the flag options -o key=val include not only backend parameters but also mount/VFS flags and possibly other settings. It also allows discriminating the remote option from crypt-remote (or similarly named backend parameters) and arguably simplifies scripting due to clearer value substitution.
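The prefix-dropping rule can be sketched with a hypothetical shell helper (for illustration only, not part of rclone) that turns an -o style backend parameter into its connection-string form:

```shell
# Hypothetical sketch: drop the backend prefix from a backend parameter,
# e.g. "sftp-host=localhost" -> "host=localhost", as connection strings expect.
to_connstring_param() {
  local backend="$1"
  local opt="$2"
  printf '%s\n' "${opt#"$backend"-}"
}

to_connstring_param sftp sftp-host=localhost   # -> host=localhost
```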
+Using with Swarm or Compose
+Both Docker Swarm and Docker Compose use YAML-formatted text files to describe groups (stacks) of containers, their properties, networks and volumes. Compose uses the compose v2 format, Swarm uses the compose v3 format. They are mostly similar, differences are explained in the docker documentation.
+Volumes are described by the children of the top-level volumes: node. Each of them should be named after its volume and have at least two elements, the self-explanatory driver: rclone value and the driver_opts: structure playing the same role as -o key=val CLI flags:
+volumes:
+ volume_name_1:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ token: '{"type": "borrower", "expires": "2021-12-31"}'
+ poll_interval: 0
+Notice a few important details:
+- YAML prefers _ in option names instead of -.
+- YAML treats single and double quotes interchangeably. Simple strings and integers can be left unquoted.
+- Boolean values must be quoted like 'true' or "false" because these two words are reserved by YAML.
+- The filesystem string is keyed with remote (or with fs). Normally you can omit quotes here, but if the string ends with a colon, you must quote it like remote: "storage_box:".
+- YAML is picky about surrounding braces in values as this is in fact another syntax for key/value mappings. For example, JSON access tokens usually contain double quotes and surrounding braces, so you must put them in single quotes.
+Installing as Managed Plugin
+The Docker daemon can install plugins from an image registry and run them as managed plugins. We maintain the docker-volume-rclone plugin image on Docker Hub.
+The plugin requires the presence of two directories on the host before it can be installed. Note that the plugin will not create them automatically. By default they must exist on the host at the following locations (though you can tweak the paths):
+- /var/lib/docker-plugins/rclone/config is reserved for the rclone.conf config file and must exist even if it's empty and the config file is not present.
+- /var/lib/docker-plugins/rclone/cache holds the plugin state file as well as optional VFS caches.
+You can install the managed plugin with default settings as follows:
+docker plugin install rclone/docker-volume-rclone:latest --grant-all-permissions --alias rclone
+The managed plugin is in fact a special container running in a namespace separate from normal docker containers. Inside it runs the rclone serve docker command. The config and cache directories are bind-mounted into the container at start. The docker daemon connects to a unix socket created by the command inside the container. The command creates on-demand remote mounts right inside, then docker machinery propagates them through kernel mount namespaces and bind-mounts into requesting user containers.
+You can tweak a few plugin settings after installation when it's disabled (not in use), for instance:
+docker plugin disable rclone
+docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
+docker plugin enable rclone
+docker plugin inspect rclone
+Note that if docker refuses to disable the plugin, you should find and remove all active volumes connected with it, as well as containers and swarm services that use them. This is rather tedious, so plan carefully in advance.
+You can tweak the following settings: args, config, cache, and RCLONE_VERBOSE. It's your task to keep plugin settings in sync across swarm cluster nodes.
+args sets command-line arguments for the rclone serve docker command (none by default). Arguments should be separated by spaces, so you will normally want to put them in quotes on the docker plugin set command line. Both serve docker flags and generic rclone flags are supported, including backend parameters that will be used as defaults for volume creation. Note that the plugin will fail (due to this docker bug) if the args value is empty. Use e.g. args="-v" as a workaround.
+config=/host/dir
sets an alternative host location for the config directory. The plugin will look for rclone.conf
here. It's not an error if the config file is not present, but the directory must exist. Please note that the plugin can periodically rewrite the config file, for example when it renews storage access tokens. Keep this in mind and try to avoid races between the plugin and other instances of rclone on the host that might try to change the config simultaneously, resulting in a corrupted rclone.conf
. You can also put supporting files such as private key files for SFTP remotes in this directory. Just note that it's bind-mounted inside the plugin container at the predefined path /data/config
. For example, if your key file is named sftp-box1.key
on the host, the corresponding volume config option should read -o sftp-key-file=/data/config/sftp-box1.key
.
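As an illustrative sketch only: a compose file could declare such a volume referencing the key at its in-container path. The remote type, hostname, user, and the exact driver_opts spellings below are assumptions for the example, not taken from the text above.

```yaml
volumes:
  sftpvol:
    driver: rclone
    driver_opts:
      type: "sftp"
      sftp-host: "box1.example.com"
      sftp-user: "me"
      # path inside the plugin container, per the bind-mount described above
      sftp-key-file: "/data/config/sftp-box1.key"
      allow-other: "true"
```

A container or service can then mount `sftpvol` like any other named volume.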
+cache=/host/dir
sets an alternative host location for the cache directory. The plugin will keep VFS caches here. It will also create and maintain the docker-plugin.state
file in this directory. When the plugin is restarted or reinstalled, it will look in this file to recreate any volumes that existed previously. However, they will not be re-mounted into consuming containers after a restart. Usually this is not a problem, as the docker daemon will normally restart affected user containers after failures, daemon restarts or host reboots.
+RCLONE_VERBOSE
sets plugin verbosity from 0
(errors only, by default) to 2
(debugging). Verbosity can also be tweaked via args="-v [-v] ..."
. Since arguments are more generic, you will rarely need this setting. By default the plugin output feeds the docker daemon log on the local host. Log entries are reflected as errors in the docker log but retain their actual level assigned by rclone in the encapsulated message string.
+You can set custom plugin options right when you install it, in one go:
+docker plugin remove rclone
+docker plugin install rclone/docker-volume-rclone:latest \
+ --alias rclone --grant-all-permissions \
+ args="-v --allow-other" config=/etc/rclone
+docker plugin inspect rclone
+Healthchecks
+The docker plugin volume protocol doesn't provide a way for plugins to inform the docker daemon that a volume is (un-)available. As a workaround you can set up a healthcheck to verify that the mount is responding, for example:
+services:
+ my_service:
+ image: my_image
+ healthcheck:
+ test: ls /path/to/rclone/mount || exit 1
+ interval: 1m
+ timeout: 15s
+ retries: 3
+ start_period: 15s
+Running Plugin under Systemd
+In most cases you should prefer managed mode. Moreover, MacOS and Windows do not support native Docker plugins. Please use managed mode on these systems. Proceed further only if you are on Linux.
+First, install rclone. You can just run it (type rclone serve docker
and hit enter) as a test.
+Install FUSE:
+sudo apt-get -y install fuse
+Download two systemd configuration files: docker-volume-rclone.service and docker-volume-rclone.socket.
+Put them to the /etc/systemd/system/
directory:
+cp docker-volume-rclone.service /etc/systemd/system/
+cp docker-volume-rclone.socket /etc/systemd/system/
+Please note that all commands in this section must be run as root, but we omit the sudo
prefix for brevity. Now create the directories required by the service:
+mkdir -p /var/lib/docker-volumes/rclone
+mkdir -p /var/lib/docker-plugins/rclone/config
+mkdir -p /var/lib/docker-plugins/rclone/cache
+Run the docker plugin service in socket-activated mode:
+systemctl daemon-reload
+systemctl start docker-volume-rclone.service
+systemctl enable docker-volume-rclone.socket
+systemctl start docker-volume-rclone.socket
+systemctl restart docker
+Or run the service directly:
+- run systemctl daemon-reload to let systemd pick up the new config
+- run systemctl enable docker-volume-rclone.service to make the new service start automatically when you power on your machine
+- run systemctl start docker-volume-rclone.service to start the service now
+- run systemctl restart docker to restart the docker daemon and let it detect the new plugin socket. Note that this step is not needed in managed mode where docker knows about plugin state changes.
+The two methods are equivalent from the user perspective, but I personally prefer socket activation.
+Troubleshooting
+You can see managed plugin settings with
+docker plugin list
+docker plugin inspect rclone
+Note that docker (including latest 20.10.7) will not show actual values of args
, just the defaults.
+Use journalctl --unit docker
to see managed plugin output as part of the docker daemon log. Note that docker reflects plugin lines as errors but their actual level can be seen from the encapsulated message string.
+You will usually install the latest version of managed plugin. Use the following commands to print the actual installed version:
+PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
+sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
+You can even use runc
to run shell inside the plugin container:
+sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
+You can also use curl to check the plugin socket connectivity:
+docker plugin list --no-trunc
+PLUGID=123abc...
+sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
+though this is rarely needed.
+Finally I'd like to mention a caveat with updating volume settings. The Docker CLI does not have a dedicated command like docker volume update
. It may be tempting to invoke docker volume create
with updated options on an existing volume, but there is a gotcha. The command will do nothing; it won't even return an error. I hope that the docker maintainers will fix this some day. In the meantime, be aware that you must remove your volume before recreating it with new settings:
+docker volume remove my_vol
+docker volume create my_vol -d rclone -o opt1=new_val1 ...
+and verify that the settings did update:
+docker volume list
+docker volume inspect my_vol
+If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first.
+1Fichier
This is a backend for the 1fichier cloud storage service. Note that a Premium subscription is required to use the API.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
@@ -7367,6 +7990,24 @@ y/e/d> y
Type: string
Default: ""
+--fichier-file-password
+If you want to download a shared file that is password protected, add this parameter.
+NB Input to this must be obscured - see rclone obscure.
+
+- Config: file_password
+- Env Var: RCLONE_FICHIER_FILE_PASSWORD
+- Type: string
+- Default: ""
+
+--fichier-folder-password
+If you want to list the files in a shared folder that is password protected, add this parameter.
+NB Input to this must be obscured - see rclone obscure.
+
+- Config: folder_password
+- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
+- Type: string
+- Default: ""
+
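For illustration, a hypothetical rclone.conf entry using these options might look like the sketch below. The remote name and all values are placeholders, not real credentials; the password values must be generated with rclone obscure.

```ini
[fichier]
type = fichier
api_key = your_api_key_here
# both passwords must be obscured first, e.g. rclone obscure 'secret'
file_password = obscured_value_here
folder_password = obscured_value_here
```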
--fichier-encoding
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
@@ -7379,7 +8020,7 @@ y/e/d> y
Limitations
rclone about
is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Alias
+Alias
The alias
remote provides a new name for another remote.
Paths may be as deep as required or a local path, e.g. remote:directory/subdirectory
or /directory/subdirectory
.
During the initial setup with rclone config
you will specify the target remote. The target remote can either be a local path or another remote.
@@ -7444,7 +8085,7 @@ e/n/d/r/c/s/q> q
Type: string
Default: ""
-Amazon Drive
+Amazon Drive
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.
Status
Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.
@@ -7597,9 +8238,9 @@ y/e/d> y
Default: ""
--acd-upload-wait-per-gb
-Additional time per GB to wait after a failed complete upload to see if it appears.
-Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.
-The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.
+Additional time per GiB to wait after a failed complete upload to see if it appears.
+Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1 GiB in size and nearly every time for files bigger than 10 GiB. This parameter controls the time rclone waits for the file to appear.
+The default value for this parameter is 3 minutes per GiB, so by default it will wait 3 minutes for every GiB uploaded to see if the file appears.
You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads of big files for a range of file sizes.
Upload with the "-v" flag to see more info about what rclone is doing in this situation.
@@ -7611,13 +8252,13 @@ y/e/d> y
--acd-templink-threshold
Files >= this size will be downloaded via their tempLink.
-Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
+Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10 GiB. The default for this is 9 GiB which shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
-- Default: 9G
+- Default: 9Gi
--acd-encoding
This sets the encoding for the backend.
@@ -7632,11 +8273,11 @@ y/e/d> y
Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
-At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail.
+At the time of writing (Jan 2016) this is in the area of 50 GiB per file. This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M
option to limit the maximum size of uploaded files. Note that --max-size
does not split files into segments, it only ignores files over this size.
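As a quick sanity check on that suggested value (integer arithmetic, assuming M means MiB as elsewhere in these docs), 50000M stays just under the ~50 GiB limit:

```shell
# 50000 MiB expressed in whole GiB
echo "$(( 50000 / 1024 )) GiB"   # prints: 48 GiB
```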
rclone about
is not supported by the Amazon Drive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Amazon S3 Storage Providers
+Amazon S3 Storage Providers
The S3 backend can be used with a number of different providers:
- AWS S3
@@ -7647,6 +8288,7 @@ y/e/d> y
- IBM COS S3
- Minio
- Scaleway
+- SeaweedFS
- StackPath
- Tencent Cloud Object Storage (COS)
- Wasabi
@@ -7898,7 +8540,7 @@ y/e/d>
Avoiding GET requests to read directory listings
Rclone's default directory traversal is to process each directory individually. This takes one API call per directory. Using the --fast-list
flag will read all info about the objects into memory first using a smaller number of API calls (one per 1000 objects). See the rclone docs for more details.
rclone sync --fast-list --checksum /path/to/source s3:bucket
---fast-list
trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using --fast-list
on a sync of a million objects will use roughly 1 GB of RAM.
+--fast-list
trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using --fast-list
on a sync of a million objects will use roughly 1 GiB of RAM.
If you are only copying a small number of files into a big repository then using --no-traverse
is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using --max-age
and --no-traverse
to copy only recent files, eg
rclone copy --min-age 24h --no-traverse /path/to/source s3:bucket
You'd then do a full rclone sync
less often.
@@ -7959,9 +8601,9 @@ y/e/d>
Multipart uploads
-rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB.
+rclone supports multipart uploads with S3 which means that it can upload files bigger than 5 GiB.
Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
-rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff
. This can be a maximum of 5GB and a minimum of 0 (ie always upload multipart files).
+rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff
. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).
The chunk sizes used in the multipart upload are specified by --s3-chunk-size
and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency
.
Multipart uploads will use --transfers
* --s3-upload-concurrency
* --s3-chunk-size
extra memory. Single part uploads do not use extra memory.
Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely single part transfers will be faster.
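As a rough worked example of that memory formula, a sketch assuming the default values (--transfers 4, --s3-upload-concurrency 4, --s3-chunk-size 5Mi):

```shell
# extra memory = transfers * upload concurrency * chunk size
transfers=4
concurrency=4
chunk_mib=5
echo "$(( transfers * concurrency * chunk_mib )) MiB"   # prints: 80 MiB
```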
@@ -8052,7 +8694,7 @@ y/e/d>
In this case you need to restore the object(s) in question before using rclone.
Note that rclone only speaks the S3 API it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.
Standard Options
-Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS).
+Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
--s3-provider
Choose your S3 provider.
@@ -8098,6 +8740,10 @@ y/e/d>
+- "SeaweedFS"
+
- "StackPath"
- StackPath Object Storage
@@ -8602,6 +9248,14 @@ y/e/d>
- Default: ""
- Examples:
+- "oss-accelerate.aliyuncs.com"
+
+- "oss-accelerate-overseas.aliyuncs.com"
+
+- Global Accelerate (outside mainland China)
+
- "oss-cn-hangzhou.aliyuncs.com"
- East China 1 (Hangzhou)
@@ -8624,12 +9278,28 @@ y/e/d>
- "oss-cn-huhehaote.aliyuncs.com"
-- North China 5 (Huhehaote)
+- North China 5 (Hohhot)
+
+- "oss-cn-wulanchabu.aliyuncs.com"
+
- "oss-cn-shenzhen.aliyuncs.com"
+- "oss-cn-heyuan.aliyuncs.com"
+
+- South China 2 (Heyuan)
+
+- "oss-cn-guangzhou.aliyuncs.com"
+
+- South China 3 (Guangzhou)
+
+- "oss-cn-chengdu.aliyuncs.com"
+
+- West China 1 (Chengdu)
+
- "oss-cn-hongkong.aliyuncs.com"
- Hong Kong (Hong Kong)
@@ -8834,6 +9504,10 @@ y/e/d>
- Digital Ocean Spaces Singapore 1
+- "localhost:8333"
+
+- SeaweedFS S3 localhost
+
- "s3.wasabisys.com"
- Wasabi US East endpoint
@@ -9330,7 +10004,7 @@ y/e/d>
Advanced Options
-Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS).
+Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
--s3-bucket-acl
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
@@ -9421,12 +10095,12 @@ y/e/d>
--s3-upload-cutoff
Cutoff for switching to chunked upload
-Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.
+Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
--s3-chunk-size
Chunk size to use for uploading.
@@ -9434,12 +10108,12 @@ y/e/d>
Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer.
If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.
-Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5MB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48GB. If you wish to stream upload larger files then you will need to increase chunk_size.
+Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
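The 48 GiB stream-upload limit mentioned above follows directly from the defaults; a quick sketch:

```shell
# max streamed upload = chunk size * maximum number of chunks
chunk_mib=5
max_chunks=10000
echo "$(( chunk_mib * max_chunks / 1024 )) GiB"   # prints: 48 GiB
```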
--s3-max-upload-parts
Maximum number of parts in a multipart upload.
@@ -9455,12 +10129,12 @@ y/e/d>
--s3-copy-cutoff
Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be copied in chunks of this size.
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4.656G
+- Default: 4.656Gi
--s3-disable-checksum
Don't store MD5 checksum with object metadata
@@ -9592,6 +10266,14 @@ Windows: "%USERPROFILE%\.aws\credentials"
- Type: bool
- Default: false
+--s3-no-head-object
+If set, don't HEAD objects
+
+- Config: no_head_object
+- Env Var: RCLONE_S3_NO_HEAD_OBJECT
+- Type: bool
+- Default: false
+
--s3-encoding
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
@@ -9719,7 +10401,7 @@ storage_class =
Then use it as normal with the name of the public bucket, e.g.
rclone lsd anons3:1000genomes
You will be able to list and copy data but not upload it.
-Ceph
+Ceph
Ceph is an open source unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.
To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
[ceph]
@@ -9749,7 +10431,7 @@ storage_class =
],
}
Because this is a json dump, it is encoding the /
as \/
, so if you use the secret key as xxxxxx/xxxx
it will work fine.
-Dreamhost
+Dreamhost
Dreamhost DreamObjects is an object storage system based on CEPH.
To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
[dreamobjects]
@@ -9764,7 +10446,7 @@ location_constraint =
acl = private
server_side_encryption =
storage_class =
-DigitalOcean Spaces
+DigitalOcean Spaces
Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.
To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config
for your access_key_id
and secret_access_key
.
When prompted for a region
or location_constraint
, press enter to use the default value. The region must be included in the endpoint
setting (e.g. nyc3.digitaloceanspaces.com
). The default values can be used for other settings.
@@ -9794,7 +10476,7 @@ storage_class =
Once configured, you can create a new Space and begin copying files. For example:
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
-IBM COS (S3)
+IBM COS (S3)
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
To configure access to IBM COS S3, follow the steps below:
@@ -9951,7 +10633,7 @@ acl> 1
rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
rclone delete IBM-COS-XREGION:newbucket/file.txt
-Minio
+Minio
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
To use it, install Minio following the instructions here.
@@ -9997,7 +10679,7 @@ location_constraint =
server_side_encryption =
So once set up, for example to copy files into a bucket
rclone copy /path/to/files minio:bucket
-Scaleway
+Scaleway
Scaleway The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
[scaleway]
@@ -10012,7 +10694,41 @@ location_constraint =
acl = private
server_side_encryption =
storage_class =
-Wasabi
+SeaweedFS
+SeaweedFS is a distributed storage system for blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store. It has an S3 compatible object storage interface.
+Assuming SeaweedFS is configured with weed shell
as follows:
+> s3.bucket.create -name foo
+> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
+{
+ "identities": [
+ {
+ "name": "me",
+ "credentials": [
+ {
+ "accessKey": "any",
+ "secretKey": "any"
+ }
+ ],
+ "actions": [
+ "Read:foo",
+ "Write:foo",
+ "List:foo",
+ "Tagging:foo",
+ "Admin:foo"
+ ]
+ }
+ ]
+}
+To use rclone with SeaweedFS, the above configuration should end up with something like this in your config:
+[seaweedfs_s3]
+type = s3
+provider = SeaweedFS
+access_key_id = any
+secret_access_key = any
+endpoint = localhost:8333
+So once set up, for example to copy files into a bucket
+rclone copy /path/to/files seaweedfs_s3:foo
+Wasabi
Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with rclone like this.
No remotes found - make a new one
@@ -10111,7 +10827,7 @@ location_constraint =
acl =
server_side_encryption =
storage_class =
-Alibaba OSS
+Alibaba OSS
Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration. First run:
rclone config
This will guide you through an interactive setup process.
@@ -10213,7 +10929,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-Tencent COS
+Tencent COS
Tencent Cloud Object Storage (COS) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
To configure access to Tencent COS, follow the steps below:
@@ -10329,12 +11045,12 @@ Current remotes:
Name Type
==== ====
cos s3
-Netease NOS
+Netease NOS
For Netease NOS configure as per the configurator rclone config
setting the provider Netease
. This will automatically set force_path_style = false
which is necessary for it to run properly.
-Limitations
+Limitations
rclone about
is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Backblaze B2
+Backblaze B2
B2 is Backblaze's cloud storage system.
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
Here is an example of making a b2 configuration. First run
@@ -10416,11 +11132,10 @@ y/e/d> y
Files sizes below --b2-upload-cutoff
will always have an SHA1 regardless of the source.
Transfers
Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32
though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4
is definitely too low for Backblaze B2 though.
-Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers
of these in use at any moment, so this sets the upper limit on the memory used.
+Note that uploading big files (bigger than 200 MiB by default) will use a 96 MiB RAM buffer by default. There can be at most --transfers
of these in use at any moment, so this sets the upper limit on the memory used.
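A rough sketch of the resulting upper bound, assuming the recommended --transfers 32 with the default 96 MiB buffer:

```shell
# upper bound on upload buffer memory
transfers=32
buffer_mib=96
echo "$(( transfers * buffer_mib )) MiB"   # prints: 3072 MiB
```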
Versions
When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
Old versions of files, where available, are visible using the --b2-versions
flag.
-NB Note that --b2-versions
does not work with crypt at the moment #1627. Using --backup-dir with rclone is the recommended way of working around this.
If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket
command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, e.g. rclone cleanup remote:bucket/path/to/stuff
.
Note that cleanup
will remove partially uploaded files from the bucket if they are more than a day old.
When you purge
a bucket, the current and the old versions will be deleted then the bucket will be deleted.
@@ -10552,22 +11267,22 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
--b2-upload-cutoff
Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
-This value should be set no larger than 4.657GiB (== 5GB).
+This value should be set no larger than 4.657 GiB (== 5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
--b2-copy-cutoff
Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be copied in chunks of this size.
-The minimum is 0 and the maximum is 4.6GB.
+The minimum is 0 and the maximum is 4.6 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4G
+- Default: 4Gi
--b2-chunk-size
Upload chunk size. Must fit in memory.
@@ -10576,7 +11291,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 96M
+- Default: 96Mi
--b2-disable-checksum
Disable checksums for large (> upload cutoff) files
@@ -10633,7 +11348,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
Limitations
rclone about
is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Box
+Box
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for Box involves getting a token from Box which you can do either in your browser, or with a config.json downloaded from Box to use JWT authentication. rclone config
walks you through it.
@@ -10818,7 +11533,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Transfers
-For files above 50MB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing --transfers
will increase memory use.
+For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8 MiB so increasing --transfers
will increase memory use.
Deleting files
Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.
Emptying the trash is supported via the rclone cleanup command, however this deletes every trashed file and folder individually so it may take a very long time. Emptying the trash via the WebUI does not have this limitation so it is advised to empty the trash via the WebUI.
@@ -10916,12 +11631,12 @@ y/e/d> y
- Default: "0"
--box-upload-cutoff
-Cutoff for switching to multipart upload (>= 50MB).
+Cutoff for switching to multipart upload (>= 50 MiB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 50M
+- Default: 50Mi
--box-commit-retries
Max number of times to try committing a multipart file.
@@ -10946,7 +11661,7 @@ y/e/d> y
Box only supports filenames up to 255 characters in length.
rclone about
is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Cache (BETA)
+Cache (DEPRECATED)
The cache
remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount
.
Status
The cache backend code is working but it currently doesn't have a maintainer so there are outstanding bugs which aren't getting fixed.
@@ -10992,11 +11707,11 @@ password:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
- 1 / 1MB
- \ "1m"
- 2 / 5 MB
+ 1 / 1 MiB
+ \ "1M"
+ 2 / 5 MiB
\ "5M"
- 3 / 10 MB
+ 3 / 10 MiB
\ "10M"
chunk_size> 2
How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
@@ -11013,11 +11728,11 @@ info_age> 2
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
- 1 / 500 MB
+ 1 / 500 MiB
\ "500M"
- 2 / 1 GB
+ 2 / 1 GiB
\ "1G"
- 3 / 10 GB
+ 3 / 10 GiB
\ "10G"
chunk_total_size> 3
Remote config
@@ -11148,20 +11863,20 @@ chunk_total_size = 10G
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
- Examples:
@@ -11195,20 +11910,20 @@ chunk_total_size = 10G
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
-- Default: 10G
+- Default: 10Gi
- Examples:
@@ -11356,7 +12071,7 @@ chunk_total_size = 10G
stats
Print stats on the cache backend in JSON format.
rclone backend stats remote: [options] [<arguments>+]
-Chunker (BETA)
+Chunker (BETA)
The chunker
overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This allows you to effectively overcome size limits imposed by storage providers.
To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote.
First check your chosen remote is working - we'll call it remote:path
here. Note that anything inside remote:path
will be chunked and anything outside won't. This means that if you are using a bucket based remote (e.g. S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
.
@@ -11380,7 +12095,7 @@ Normally should contain a ':' and a path, e.g. "myremote:path/to/di
Enter a string value. Press Enter for the default ("").
remote> remote:path
Files larger than chunk size will be split in chunks.
-Enter a size with suffix k,M,G,T. Press Enter for the default ("2G").
+Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
chunk_size> 100M
Choose how chunker handles hash sums. All modes but "none" require metadata.
Enter a string value. Press Enter for the default ("md5").
@@ -11485,7 +12200,7 @@ y/e/d> y
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 2G
+- Default: 2Gi
--chunker-hash-type
Choose how chunker handles hash sums. All modes but "none" require metadata.
@@ -11613,7 +12328,7 @@ y/e/d> y
-Citrix ShareFile
+Citrix ShareFile
Citrix ShareFile is a secure file sharing and transfer service aimed at business.
The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
@@ -11689,7 +12404,7 @@ y/e/d> y
ShareFile allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
ShareFile supports MD5 type hashes, so you can use the --checksum
flag.
Transfers
-For files above 128MB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 64MB so increasing --transfers
will increase memory use.
+For files above 128 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 64 MiB so increasing --transfers
will increase memory use.
Limitations
Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
ShareFile only supports filenames up to 256 characters in length.
@@ -11811,7 +12526,7 @@ y/e/d> y
- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 128M
+- Default: 128Mi
--sharefile-chunk-size
Upload chunk size. Must be a power of 2 >= 256k.
@@ -11821,7 +12536,7 @@ y/e/d> y
- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 64M
+- Default: 64Mi
--sharefile-endpoint
Endpoint for API calls.
@@ -11844,7 +12559,7 @@ y/e/d> y
Limitations
rclone about
is not supported by the Citrix ShareFile backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
-Crypt
+Crypt
Rclone crypt
remotes encrypt and decrypt other remotes.
A remote of type crypt
does not access a storage system directly, but instead wraps another remote, which in turn accesses the storage system. This is similar to how alias, union, chunker and a few others work. It makes the usage very flexible, as you can add a layer, in this case an encryption layer, on top of any other backend, even in multiple layers. Rclone's functionality can be used as with any other remote, for example you can mount a crypt remote.
Accessing a storage system through a crypt remote realizes client-side encryption, which makes it safe to keep your data in a location you do not fully trust. When working against the crypt
remote, rclone will automatically encrypt (before uploading) and decrypt (after downloading) on your local system as needed on the fly, leaving the data encrypted at rest in the wrapped remote. If you access the storage system using an application other than rclone, or access the wrapped remote directly using rclone, there will not be any encryption/decryption: Downloading existing content will just give you the encrypted (scrambled) format, and anything you upload will not become encrypted.
@@ -12194,7 +12909,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
The initial nonce is generated from the operating systems crypto strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.
Chunk
-Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NaCl SecretBox format. SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.
+Each chunk will contain 64 KiB of data, except for the last one which may have less data. The data chunk is in standard NaCl SecretBox format. SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.
Each chunk contains:
- 16 Bytes of Poly1305 authenticator
@@ -12209,7 +12924,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
- 17 bytes data chunk
49 bytes total
-1MB (1048576 bytes) file will encrypt to
+1 MiB (1048576 bytes) file will encrypt to
- 32 bytes header
- 16 chunks of 65568 bytes
@@ -12235,11 +12950,11 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
Key derivation
Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
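The derivation above can be reproduced with Python's hashlib.scrypt. This is a sketch only: the password and salt below are placeholders, not rclone's real inputs, and the split of the 80 bytes into named keys is labelled here for illustration:

```python
import hashlib

# Sketch of the documented derivation: scrypt with N=16384, r=8, p=1
# producing 32+32+16 = 80 bytes of key material.
# "password" and "password2" are stand-ins; rclone uses an internal
# default salt if the user does not supply one.
password = b"password"
salt = b"password2"

key_material = hashlib.scrypt(password, salt=salt, n=16384, r=8, p=1, dklen=80)

# Illustrative split of the 80 bytes into 32 + 32 + 16 byte pieces.
data_key = key_material[:32]
name_key = key_material[32:64]
name_tweak = key_material[64:]
print(len(data_key), len(name_key), len(name_tweak))  # 32 32 16
```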
-SEE ALSO
+SEE ALSO
-Compress (Experimental)
+Compress (Experimental)
Warning
This remote is currently experimental. Things may break and data may be lost. Anything you do with this remote is at your own risk. Please understand the risks associated with using experimental code and don't use this remote in critical applications.
The Compress
remote adds compression to another remote. It is best used with remotes containing many large compressible files.
@@ -12347,9 +13062,9 @@ y/e/d> y
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
-- Default: 20M
+- Default: 20Mi
-Dropbox
+Dropbox
Paths are specified as remote:path
Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config
walks you through it.
@@ -12403,7 +13118,7 @@ y/e/d> y
Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only
or --checksum
flag to stop it.
Dropbox supports its own hash type which is checked for all transfers.
-Restricted filename characters
+Restricted filename characters
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Batch mode uploads
+Using batch mode uploads is very important for performance when using the Dropbox API. See the dropbox performance guide for more info.
+There are 3 modes rclone can use for uploads.
+--dropbox-batch-mode off
+In this mode rclone will not use upload batching. This was the default before rclone v1.55. It has the disadvantage that it is very likely to encounter too_many_requests
errors like this
+NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
+When rclone receives these it has to wait for 15s or sometimes 300s before continuing which really slows down transfers.
+This will happen especially if --transfers
is large, so this mode isn't recommended except for compatibility or investigating problems.
+--dropbox-batch-mode sync
+In this mode rclone will batch up uploads to the size specified by --dropbox-batch-size
and commit them together.
+Using this mode means you can use a much higher --transfers
parameter (32 or 64 works fine) without receiving too_many_requests
errors.
+This mode ensures full data integrity.
+Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
+--dropbox-batch-mode async
+In this mode rclone will batch up uploads to the size specified by --dropbox-batch-size
and commit them together.
+However it will not wait for the status of the batch to be returned to the caller. This means rclone can use a much bigger batch size (much bigger than --transfers
), at the cost of not being able to check the status of the upload.
+This provides the maximum possible upload speed especially with lots of small files, however rclone can't check the file got uploaded properly using this mode.
+If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async
then do a final transfer with --dropbox-batch-mode sync
(the default).
+Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
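A typical invocation following the advice above might look like this (the source path, remote name, and destination are placeholders):

```shell
# Sync with batching (the default mode) and high concurrency, which the
# docs suggest works well without too_many_requests errors.
rclone sync --transfers 32 --dropbox-batch-mode sync /path/to/src dropbox:dst

# Or: fast initial copy with async batching, then verify the transfer
# and finish with the integrity-checking sync mode.
rclone copy --dropbox-batch-mode async /path/to/src dropbox:dst
rclone check /path/to/src dropbox:dst
rclone copy --dropbox-batch-mode sync /path/to/src dropbox:dst
```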
Standard Options
Here are the standard options specific to dropbox (Dropbox).
--dropbox-client-id
@@ -12498,14 +13232,14 @@ y/e/d> y
- Default: ""
--dropbox-chunk-size
-Upload chunk size. (< 150M).
+Upload chunk size. (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
-Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
+Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128 MiB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 48M
+- Default: 48Mi
--dropbox-impersonate
Impersonate this user when using a business account.
@@ -12537,6 +13271,53 @@ y/e/d> y
- Type: bool
- Default: false
+--dropbox-batch-mode
+Upload file batching sync|async|off.
+This sets the batch mode used by rclone.
+For full info see the main docs
+This has 3 possible values
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don't check completion
+
+Rclone will close any outstanding batches when it exits which may make a delay on quit.
+
+- Config: batch_mode
+- Env Var: RCLONE_DROPBOX_BATCH_MODE
+- Type: string
+- Default: "sync"
+
+--dropbox-batch-size
+Max number of files in upload batch.
+This sets the batch size of files to upload. It has to be less than 1000.
+By default this is 0 which means rclone will calculate the batch size depending on the setting of batch_mode.
+
+- batch_mode: async - default batch_size is 100
+- batch_mode: sync - default batch_size is the same as --transfers
+- batch_mode: off - not in use
+
+Rclone will close any outstanding batches when it exits which may make a delay on quit.
+Setting this is a great idea if you are uploading lots of small files as it will make transferring them much quicker. You can use --transfers 32 to maximise throughput.
+
+- Config: batch_size
+- Env Var: RCLONE_DROPBOX_BATCH_SIZE
+- Type: int
+- Default: 0
+
+--dropbox-batch-timeout
+Max time to allow an idle upload batch before uploading
+If an upload batch is idle for more than this long then it will be uploaded.
+The default for this is 0 which means rclone will choose a sensible default based on the batch_mode in use.
+
--dropbox-encoding
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
@@ -12551,6 +13332,7 @@ y/e/d> y
There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won't fail.
Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright detector that prevents this sort of file being downloaded. This will return the error ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.
If you have more than 10,000 files in a directory then rclone purge dropbox:dir
will return the error Failed to purge: There are too many files involved in this operation
. As a work-around do an rclone delete dropbox:dir
followed by an rclone rmdir dropbox:dir
.
+When using rclone link
you'll need to set --expire
if using a non-personal account otherwise the visibility may not be correct. (Note that --expire
isn't supported on personal accounts). See the forum discussion and the dropbox SDK issue.
Get your own Dropbox App ID
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
Here is how to create your own Dropbox App ID for rclone:
@@ -12560,10 +13342,11 @@ y/e/d> y
Choose the type of access you want to use => Full Dropbox
or App Folder
Name your App. The app name is global, so you can't use rclone
for example
Click the button Create App
-Fill Redirect URIs
as http://localhost:53682/
-Find the App key
and App secret
Use these values in rclone config to add a new remote or edit an existing remote.
+Switch to the Permissions
tab. Enable at least the following permissions: account_info.read
, files.metadata.write
, files.content.write
, files.content.read
, sharing.write
. The files.metadata.read
and sharing.read
checkboxes will be marked too. Click Submit
+Switch to the Settings
tab. Fill OAuth2 - Redirect URIs
as http://localhost:53682/
+Find the App key
and App secret
values on the Settings
tab. Use these values in rclone config to add a new remote or edit an existing remote. The App key
setting corresponds to client_id
in rclone config, the App secret
corresponds to client_secret
-Enterprise File Fabric
+Enterprise File Fabric
This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.
The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
@@ -12742,7 +13525,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
-FTP
+FTP
FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.
Limitations of Rclone's FTP backend
Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.
@@ -12984,7 +13767,7 @@ y/e/d> y
-Google Cloud Storage
+Google Cloud Storage
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
@@ -13143,9 +13926,11 @@ y/e/d> y
Eg --header-upload "Content-Type text/potato"
Note that the last of these is for setting custom metadata in the form --header-upload "x-goog-meta-key: value"
-Modified time
-Google google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.
-Restricted filename characters
+Modification time
+Google Cloud Storage stores md5sum natively. Google's gsutil tool stores modification time with one-second precision as goog-reserved-file-mtime
in file metadata.
+To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries. mtime
uses RFC3339 format with one-nanosecond precision. goog-reserved-file-mtime
uses Unix timestamp format with one-second precision. To get modification time from object metadata, rclone reads the metadata in the following order: mtime
, goog-reserved-file-mtime
, object updated time.
+Note that rclone's default modify window is 1ns. Files uploaded by gsutil only contain timestamps with one-second precision. If you use rclone to sync files previously uploaded by gsutil, rclone will attempt to update modification time for all these files. To avoid these possibly unnecessary updates, use --modify-window 1s
.
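For example, when syncing over files previously uploaded with gsutil, relaxing the modify window as described above avoids spurious metadata updates (the source path and bucket name are placeholders):

```shell
# Compare timestamps at one-second precision so gsutil-uploaded files
# (which only carry goog-reserved-file-mtime) are not needlessly updated.
rclone sync --modify-window 1s /path/to/src remote:bucket
```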
+Restricted filename characters
-Setup
+Setup
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
This will guide you through an interactive setup process:
@@ -19265,7 +20198,7 @@ e/n/d/r/c/s/q> q
rclone ls remote:
Copy another local directory to the union directory called source, which will be placed into remote3:dir3
rclone copy C:\source remote:source
-Standard Options
+Standard Options
Here are the standard options specific to union (Union merges the contents of several upstream fs).
--union-upstreams
List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
@@ -19307,7 +20240,7 @@ e/n/d/r/c/s/q> q
Type: int
Default: 120
-WebDAV
+WebDAV
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
@@ -19379,10 +20312,10 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a WebDAV directory called backup
rclone copy /home/source remote:backup
-Modified time and hashes
+Modified time and hashes
Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
-Standard Options
+Standard Options
Here are the standard options specific to webdav (Webdav).
--webdav-url
URL of http host to connect to
@@ -19455,7 +20388,7 @@ y/e/d> y
Type: string
Default: ""
-Advanced Options
+Advanced Options
Here are the advanced options specific to webdav (Webdav).
--webdav-bearer-token-command
Command to run to get a bearer token
@@ -19475,6 +20408,18 @@ y/e/d> y
Type: string
Default: ""
+
+Set HTTP headers for all transactions
+Use this to set additional HTTP headers for all transactions
+The input format is comma separated list of key,value pairs. Standard CSV encoding may be used.
+For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
+You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
+
+- Config: headers
+- Env Var: RCLONE_WEBDAV_HEADERS
+- Type: CommaSepList
+- Default:
+
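The option above can also be given on the command line. The flag name below is assumed from rclone's usual config-to-flag mapping (config key headers, env var RCLONE_WEBDAV_HEADERS), and the remote name and header values are placeholders:

```shell
# Set a Cookie header on every transaction (comma separated key,value pairs).
rclone ls --webdav-headers "Cookie,name=value" remote:

# Multiple headers using quoted CSV encoding.
rclone ls --webdav-headers '"Cookie","name=value","Authorization","xxx"' remote:
```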
Provider notes
See below for notes on specific providers.
Owncloud
@@ -19545,7 +20490,7 @@ type = webdav
url = https://dcache.example.org/
vendor = other
bearer_token_command = oidc-token XDC
-Yandex Disk
+Yandex Disk
Yandex Disk is a cloud storage solution created by Yandex.
Here is an example of making a yandex configuration. First run
rclone config
@@ -19581,7 +20526,7 @@ Got code
[remote]
client_id =
client_secret =
-token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
+token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"OAuth","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
e) Edit this remote
@@ -19599,7 +20544,7 @@ y/e/d> y
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync -i /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Modified time
+Modified time
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified
in RFC3339 with nanoseconds format.
MD5 checksums
MD5 checksums are natively supported by Yandex Disk.
@@ -19607,12 +20552,12 @@ y/e/d> y
If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (quota) and the current usage.
-Restricted filename characters
+Restricted filename characters
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Limitations
-When uploading very large files (bigger than about 5GB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
-Standard Options
+Limitations
+When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
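The rule of thumb above (twice the maximum file size in GiB, as minutes) can be sketched as a small helper; the function name is illustrative, not part of rclone:

```python
# Sketch of the documented rule of thumb for Yandex uploads: the --timeout
# value in minutes should be about twice the largest file size in GiB.
def yandex_timeout_minutes(max_file_gib: float) -> int:
    """Suggested --timeout in minutes for uploads of this size."""
    return int(2 * max_file_gib)

print(f"--timeout {yandex_timeout_minutes(30)}m")  # --timeout 60m
```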
+Standard Options
Here are the standard options specific to yandex (Yandex Disk).
--yandex-client-id
OAuth Client Id Leave blank normally.
@@ -19630,7 +20575,7 @@ y/e/d> y
Type: string
Default: ""
-Advanced Options
+Advanced Options
Here are the advanced options specific to yandex (Yandex Disk).
--yandex-token
OAuth Access Token as a JSON blob.
@@ -19665,7 +20610,7 @@ y/e/d> y
Type: MultiEncoder
Default: Slash,Del,Ctl,InvalidUtf8,Dot
-Zoho Workdrive
+Zoho Workdrive
Zoho WorkDrive is a cloud storage solution created by Zoho.
Here is an example of making a zoho configuration. First run
rclone config
@@ -19738,15 +20683,15 @@ y/e/d>
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync -i /home/local/directory remote:directory
Zoho paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Modified time
+Modified time
Modified times are currently not supported for Zoho Workdrive
Checksums
No checksums are supported.
To view your current quota you can use the rclone about remote:
command which will display your current usage.
-Restricted filename characters
+Restricted filename characters
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
-Standard Options
+Standard Options
Here are the standard options specific to zoho (Zoho).
--zoho-client-id
OAuth Client Id Leave blank normally.
@@ -19765,7 +20710,8 @@ y/e/d>
Default: ""
--zoho-region
-Zoho region to connect to. You'll have to use the region you organization is registered in.
+Zoho region to connect to.
+You'll have to use the region your organization is registered in. If not sure use the same top level domain as you connect to in your browser.
- Config: region
- Env Var: RCLONE_ZOHO_REGION
@@ -19791,7 +20737,7 @@ y/e/d>
-Advanced Options
+Advanced Options
Here are the advanced options specific to zoho (Zoho).
--zoho-token
OAuth Access Token as a JSON blob.
@@ -19826,12 +20772,12 @@ y/e/d>
Type: MultiEncoder
Default: Del,Ctl,InvalidUtf8
-Local Filesystem
+Local Filesystem
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever
, so
rclone sync -i /home/source /tmp/destination
Will sync /home/source
to /tmp/destination
.
For consistency's sake one can also configure a remote of type local
in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever
, but it is probably easier not to.
-Modified time
+Modified time
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Filenames
Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
@@ -20102,7 +21048,7 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be converted to UTF-16.
Paths on Windows
-On Windows there are many ways of specifying a path to a file system resource. Both absolute paths like C:\path\to\wherever
, and relative paths like ..\wherever
can be used, and path separator can be either \
(as in C:\path\to\wherever
) or /
(as in C:/path/to/wherever
). Length of these paths are limited to 259 characters for files and 247 characters for directories, but there is an alternative extended-length path format increasing the limit to (approximately) 32,767 characters. This format requires absolute paths and the use of prefix \\?\
, e.g. \\?\D:\some\very\long\path
. For convenience rclone will automatically convert regular paths into the corresponding extended-length paths, so in most cases you do not have to worry about this (read more below).
+On Windows there are many ways of specifying a path to a file system resource. Local paths can be absolute, like C:\path\to\wherever
, or relative, like ..\wherever
. Network paths in UNC format, \\server\share
, are also supported. Path separator can be either \
(as in C:\path\to\wherever
) or /
(as in C:/path/to/wherever
). Length of these paths are limited to 259 characters for files and 247 characters for directories, but there is an alternative extended-length path format increasing the limit to (approximately) 32,767 characters. This format requires absolute paths and the use of prefix \\?\
, e.g. \\?\D:\some\very\long\path
. For convenience rclone will automatically convert regular paths into the corresponding extended-length paths, so in most cases you do not have to worry about this (read more below).
Note that Windows supports using the same prefix \\?\
to specify path to volumes identified by their GUID, e.g. \\?\Volume{b75e2c83-0000-0000-0000-602f00000000}\some\path
. This is not supported in rclone, due to an issue in go.
Long paths
Rclone handles long paths automatically, by converting all paths to extended-length path format, which allows paths up to 32,767 characters.
@@ -20119,7 +21065,7 @@ nounc = true
This will use UNC paths on c:\src
but not on z:\dst
. Of course this will cause problems if the absolute path length of a file exceeds 259 characters on z, so only use this option if you have to.
Symlinks / Junction points
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
-If you supply --copy-links
or -L
then rclone will follow the symlink and copy the pointed to file or directory. Note that this flag is incompatible with -links
/ -l
.
+If you supply --copy-links
or -L
then rclone will follow the symlink and copy the pointed to file or directory. Note that this flag is incompatible with --links
/ -l
.
This flag applies to all commands.
For example, supposing you have a directory structure like this
$ tree /tmp/a
@@ -20199,7 +21145,7 @@ $ tree /tmp/b
0 file2
NB Rclone (like most unix tools such as du
, rsync
and tar
) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
-Advanced Options
+Advanced Options
Here are the advanced options specific to local (Local Disk).
--local-nounc
Disable UNC (long path names) conversion on Windows
@@ -20241,22 +21187,29 @@ $ tree /tmp/b
Default: false
--local-zero-size-links
-Assume the Stat size of links is zero (and read them instead)
-On some virtual filesystems (such ash LucidLink), reading a link size via a Stat call always returns 0. However, on unix it reads as the length of the text in the link. This may cause errors like this when syncing:
-Failed to copy: corrupted on transfer: sizes differ 0 vs 13
-Setting this flag causes rclone to read the link and use that as the size of the link instead of 0 which in most cases fixes the problem.
+Assume the Stat size of links is zero (and read them instead) (Deprecated)
+Rclone used to use the Stat size of links as the link size, but this fails in quite a few places
+
+- Windows
+- On some virtual filesystems (such as LucidLink)
+- Android
+
+So rclone now always reads the link.
- Config: zero_size_links
- Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
- Type: bool
- Default: false
---local-no-unicode-normalization
-Don't apply unicode normalization to paths and filenames (Deprecated)
-This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.
+--local-unicode-normalization
+Apply unicode NFC normalization to paths and filenames
+This flag can be used to normalize file names read from the local filesystem into unicode NFC form.
+Rclone does not normally touch the encoding of file names it reads from the file system.
+This can be useful on macOS, which normally provides decomposed (NFD) unicode that in some languages (e.g. Korean) doesn't display properly on some OSes.
+Note that rclone compares filenames with unicode normalization in the sync routine, so this flag shouldn't normally be used.
-- Config: no_unicode_normalization
-- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+- Config: unicode_normalization
+- Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION
- Type: bool
- Default: false
@@ -20355,6 +21308,279 @@ $ tree /tmp/b
"error": return an error based on option value
Changelog
+v1.56.0 - 2021-07-20
+See commits
+
+- New backends
+
+- New commands
+
+- serve docker (Antoine GIRARD) (Ivan Andreev)
+
+- checksum to check files against a file of checksums (Ivan Andreev)
+
+- this is also available as rclone md5sum -C etc
+
+- config touch: ensure config exists at configured location (albertony)
+- test changenotify: command to help debugging changenotify (Nick Craig-Wood)
+
+- Deprecations
+
+- dbhashsum: Remove command deprecated a year ago (Ivan Andreev)
+- cache: Deprecate cache backend (Ivan Andreev)
+
+- New Features
+
+- rework config system so it can be used non-interactively via cli and rc API.
+
+- See docs in config create
+- This is a very big change to all the backends so may cause breakages - please file bugs!
+
+- librclone - export the rclone RC as a C library (lewisxy) (Nick Craig-Wood)
+
+- Link a C-API rclone shared object into your project
+- Use the RC as an in memory interface
+- Python example supplied
+- Also supports Android and gomobile
+
+- fs
+
+- Add --disable-http2 for global http2 disable (Nick Craig-Wood)
+- Make --dump imply -vv (Alex Chen)
+- Use binary prefixes for size and rate units (albertony)
+- Use decimal prefixes for counts (albertony)
+- Add google search widget to rclone.org (Ivan Andreev)
+
+- accounting: Calculate rolling average speed (Haochen Tong)
+- atexit: Terminate with non-zero status after receiving signal (Michael Hanselmann)
+- build
+
+- Only run event-based workflow scripts under rclone repo with manual override (Mathieu Carbou)
+- Add Android build with gomobile (x0b)
+
+- check: Log the hash in use like cryptcheck does (Nick Craig-Wood)
+- version: Print os/version, kernel and bitness (Ivan Andreev)
+- config
+
+- Prevent use of Windows reserved names in config file name (albertony)
+- Create config file in windows appdata directory by default (albertony)
+- Treat any config file paths with filename notfound as memory-only config (albertony)
+- Delay load config file (albertony)
+- Replace defaultConfig with a thread-safe in-memory implementation (Chris Macklin)
+- Allow config create and friends to take key=value parameters (Nick Craig-Wood)
+- Fixed issues with flags/options set by environment vars. (Ole Frost)
+
+- fshttp: Implement graceful DSCP error handling (Tyson Moore)
+- lib/http - provides an abstraction for a central http server that services can bind routes to (Nolan Woods)
+
+- Add --template config and flags to serve/data (Nolan Woods)
+- Add default 404 handler (Nolan Woods)
+
+- link: Use "off" value for unset expiry (Nick Craig-Wood)
+- oauthutil: Raise fatal error if token expired without refresh token (Alex Chen)
+- rcat: Add --size flag for more efficient uploads of known size (Nazar Mishturak)
+- serve sftp: Add --stdio flag to serve via stdio (Tom)
+- sync: Don't warn about --no-traverse when --files-from is set (Nick Gaya)
+- test makefiles
+
+- Add --seed flag and make data generated repeatable (Nick Craig-Wood)
+- Add log levels and speed summary (Nick Craig-Wood)
+
+
+- Bug Fixes
+
+- accounting: Fix startTime of statsGroups.sum (Haochen Tong)
+- cmd/ncdu: Fix out of range panic in delete (buengese)
+- config
+
+- Fix issues with memory-only config file paths (albertony)
+- Fix in memory config not saving on the fly backend config (Nick Craig-Wood)
+
+- fshttp: Fix address parsing for DSCP (Tyson Moore)
+- ncdu: Update termbox-go library to fix crash (Nick Craig-Wood)
+- oauthutil: Fix old authorize result not recognised (Cnly)
+- operations: Don't update timestamps of files in --compare-dest (Nick Gaya)
+- selfupdate: fix archive name on macos (Ivan Andreev)
+
+- Mount
+
+- Refactor before adding serve docker (Antoine GIRARD)
+
+- VFS
+
+- Add cache reset for --vfs-cache-max-size handling at cache poll interval (Leo Luan)
+- Fix modtime changing when reading file into cache (Nick Craig-Wood)
+- Avoid unnecessary subdir in cache path (albertony)
+- Fix that umask option cannot be set as environment variable (albertony)
+- Do not print notice about missing poll-interval support when set to 0 (albertony)
+
+- Local
+
+- Always use readlink to read symlink size for better compatibility (Nick Craig-Wood)
+- Add --local-unicode-normalization (and remove --local-no-unicode-normalization) (Nick Craig-Wood)
+- Skip entries removed concurrently with List() (Ivan Andreev)
+
+- Crypt
+
+- Support timestamped filenames from --b2-versions (Dominik Mydlil)
+
+- B2
+
+- Don't include the bucket name in public link file prefixes (Jeffrey Tolar)
+- Fix versions and .files with no extension (Nick Craig-Wood)
+- Factor version handling into lib/version (Dominik Mydlil)
+
+- Box
+
+- Use upload preflight check to avoid listings in file uploads (Nick Craig-Wood)
+- Return errors instead of calling log.Fatal with them (Nick Craig-Wood)
+
+- Drive
+
+- Switch to the Drives API for looking up shared drives (Nick Craig-Wood)
+- Fix some google docs being treated as files (Nick Craig-Wood)
+
+- Dropbox
+
+- Add --dropbox-batch-mode flag to speed up uploading (Nick Craig-Wood)
+
+- Set visibility in link sharing when --expire is set (Nick Craig-Wood)
+- Simplify chunked uploads (Alexey Ivanov)
+- Improve "own App IP" instructions (Ivan Andreev)
+
+- Fichier
+
+- Check if more than one upload link is returned (Nick Craig-Wood)
+- Support downloading password protected files and folders (Florian Penzkofer)
+- Make error messages report text from the API (Nick Craig-Wood)
+- Fix move of files in the same directory (Nick Craig-Wood)
+- Check that we actually got a download token and retry if we didn't (buengese)
+
+- Filefabric
+
+- Fix listing after change of from field from "int" to int. (Nick Craig-Wood)
+
+- FTP
+
+- Make upload error 250 indicate success (Nick Craig-Wood)
+
+- GCS
+
+- Make compatible with gsutil's mtime metadata (database64128)
+- Clean up time format constants (database64128)
+
+- Google Photos
+
+- Fix read only scope not being used properly (Nick Craig-Wood)
+
+- HTTP
+
+- Replace httplib with lib/http (Nolan Woods)
+- Clean up Bind to better use middleware (Nolan Woods)
+
+- Jottacloud
+
+- Fix legacy auth with state based config system (buengese)
+- Fix invalid url in output from link command (albertony)
+- Add no versions option (buengese)
+
+- Onedrive
+
+- Add list_chunk option (Nick Gaya)
+- Also report root error if unable to cancel multipart upload (Cnly)
+- Fix failed to configure: empty token found error (Nick Craig-Wood)
+- Make link return direct download link (Xuanchen Wu)
+
+- S3
+
+- Add --s3-no-head-object (Tatsuya Noyori)
+- Remove WebIdentityRoleProvider to fix crash on auth (Nick Craig-Wood)
+- Don't check to see if remote is object if it ends with / (Nick Craig-Wood)
+- Add SeaweedFS (Chris Lu)
+- Update Alibaba OSS endpoints (Chuan Zh)
+
+- SFTP
+
+- Fix performance regression by re-enabling concurrent writes (Nick Craig-Wood)
+- Expand tilde and environment variables in configured known_hosts_file (albertony)
+
+- Tardigrade
+
+- Upgrade to uplink v1.4.6 (Caleb Case)
+- Use negative offset (Caleb Case)
+- Add warning about too many open files (acsfer)
+
+- WebDAV
+
+- Fix sharepoint auth over http (Nick Craig-Wood)
+- Add headers option (Antoon Prins)
+
+
+v1.55.1 - 2021-04-26
+See commits
+
+- Bug Fixes
+
+- selfupdate
+
+- Don't detect FUSE if build is static (Ivan Andreev)
+- Add build tag noselfupdate (Ivan Andreev)
+
+- sync: Fix incorrect error reported by graceful cutoff (Nick Craig-Wood)
+- install.sh: fix macOS arm64 download (Nick Craig-Wood)
+- build: Fix version numbers in android branch builds (Nick Craig-Wood)
+- docs
+
+- Contributing.md: update setup instructions for go1.16 (Nick Gaya)
+- WinFsp 2021 is out of beta (albertony)
+- Minor cleanup of space around code section (albertony)
+- Fixed some typos (albertony)
+
+
+- VFS
+
+- Fix a code path which allows dirty data to be removed causing data loss (Nick Craig-Wood)
+
+- Compress
+
+- Fix compressed name regexp (buengese)
+
+- Drive
+
+- Fix backend copyid of google doc to directory (Nick Craig-Wood)
+- Don't open browser when service account... (Ansh Mittal)
+
+- Dropbox
+
+- Add missing team_data.member scope for use with --impersonate (Nick Craig-Wood)
+- Fix About after scopes changes - rclone config reconnect needed (Nick Craig-Wood)
+- Fix Unable to decrypt returned paths from changeNotify (Nick Craig-Wood)
+
+- FTP
+
+- Fix implicit TLS (Ivan Andreev)
+
+- Onedrive
+
+- Work around for random "Unable to initialize RPS" errors (OleFrost)
+
+- SFTP
+
+- Revert sftp library to v1.12.0 from v1.13.0 to fix performance regression (Nick Craig-Wood)
+- Fix Update ReadFrom failed: failed to send packet: EOF errors (Nick Craig-Wood)
+
+- Zoho
+
+- Fix error when region isn't set (buengese)
+- Do not ask for mountpoint twice when using headless setup (buengese)
+
+
v1.55.0 - 2021-03-31
See commits
@@ -25265,7 +26491,7 @@ $ tree /tmp/b
- Project started
Bugs and Limitations
-Limitations
+Limitations
Directory timestamps aren't preserved
Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
Rclone struggles with millions of files in a directory/bucket
@@ -25740,7 +26966,7 @@ THE SOFTWARE.
Fred fred@creativeprojects.tech
Sébastien Gross renard@users.noreply.github.com
Maxime Suret 11944422+msuret@users.noreply.github.com
-Caleb Case caleb@storj.io
+Caleb Case caleb@storj.io calebcase@gmail.com
Ben Zenker imbenzenker@gmail.com
Martin Michlmayr tbm@cyrius.com
Brandon McNama bmcnama@pagerduty.com
@@ -25799,7 +27025,7 @@ THE SOFTWARE.
Laurens Janssen BD69BM@insim.biz
Bob Bagwill bobbagwill@gmail.com
Nathan Collins colli372@msu.edu
-lostheli
+lostheli
kelv kelvin@acks.org
Milly milly.ca@gmail.com
gtorelly gtorelly@gmail.com
@@ -25846,6 +27072,39 @@ THE SOFTWARE.
Manish Kumar krmanish260@gmail.com
x0b x0bdev@gmail.com
CERN through the CS3MESH4EOSC Project
+Nick Gaya nicholasgaya+github@gmail.com
+Ashok Gelal 401055+ashokgelal@users.noreply.github.com
+Dominik Mydlil dominik.mydlil@outlook.com
+Nazar Mishturak nazarmx@gmail.com
+Ansh Mittal iamAnshMittal@gmail.com
+noabody noabody@yahoo.com
+OleFrost 82263101+olefrost@users.noreply.github.com
+Kenny Parsons kennyparsons93@gmail.com
+Jeffrey Tolar tolar.jeffrey@gmail.com
+jtagcat git-514635f7@jtag.cat
+Tatsuya Noyori 63089076+public-tatsuya-noyori@users.noreply.github.com
+lewisxy lewisxy@users.noreply.github.com
+Nolan Woods nolan_w@sfu.ca
+Gautam Kumar 25435568+gautamajay52@users.noreply.github.com
+Chris Macklin chris.macklin@10xgenomics.com
+Antoon Prins antoon.prins@surfsara.nl
+Alexey Ivanov rbtz@dropbox.com
+Serge Pouliquen sp31415@free.fr
+acsfer carlos@reendex.com
+Tom tom@tom-fitzhenry.me.uk
+Tyson Moore tyson@tyson.me
+database64128 free122448@hotmail.com
+Chris Lu chrislusf@users.noreply.github.com
+Reid Buzby reid@rethink.software
+darrenrhs darrenrhs@gmail.com
+Florian Penzkofer fp@nullptr.de
+Xuanchen Wu 117010292@link.cuhk.edu.cn
+partev petrosyan@gmail.com
+Dmitry Sitnikov fo2@inbox.ru
+Haochen Tong i@hexchain.org
+Michael Hanselmann public@hansmi.ch
+Chuan Zh zhchuan7@gmail.com
+Antoine GIRARD antoine.girard@sapk.fr
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 255dfc3af..32c10102c 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Mar 31, 2021
+% Jul 20, 2021
# Rclone syncs your files to cloud storage
@@ -41,7 +41,6 @@ using local disk.
Virtual backends wrap local and cloud file systems to apply
[encryption](https://rclone.org/crypt/),
-[caching](https://rclone.org/cache/),
[compression](https://rclone.org/compress/)
[chunking](https://rclone.org/chunker/) and
[joining](https://rclone.org/union/).
@@ -145,11 +144,13 @@ WebDAV or S3, that work out of the box.)
- rsync.net
- Scaleway
- Seafile
+- SeaweedFS
- SFTP
- StackPath
- SugarSync
- Tardigrade
- Tencent Cloud Object Storage (COS)
+- Uptobox
- Wasabi
- WebDAV
- Yandex Disk
@@ -173,6 +174,7 @@ Rclone is a Go program and comes as a single binary file.
* [Download](https://rclone.org/downloads/) the relevant binary.
* Extract the `rclone` or `rclone.exe` binary from the archive
* Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details.
+ * Optionally configure [automatic execution](#autostart).
See below for some expanded Linux / macOS instructions.
@@ -388,6 +390,150 @@ Instructions
- rclone
```
+# Autostart #
+
+After installing and configuring rclone, as described above, you are ready to use rclone
+as an interactive command line utility. If your goal is to perform *periodic* operations,
+such as a regular [sync](https://rclone.org/commands/rclone_sync/), you will probably want
+to configure your rclone command in your operating system's scheduler. If you need to
+expose *service*-like features, such as [remote control](https://rclone.org/rc/),
+[GUI](https://rclone.org/gui/), [serve](https://rclone.org/commands/rclone_serve/)
+or [mount](https://rclone.org/commands/rclone_mount/), you will often want an rclone
+command always running in the background, and configuring it to run in a service infrastructure
+may be a better option. Below are some alternatives on how to achieve this on
+different operating systems.
+
+NOTE: Before setting up autorun it is highly recommended that you have tested your command
+manually from a Command Prompt first.
+
+## Autostart on Windows ##
+
+The most relevant alternatives for autostart on Windows are:
+- Run at user log on using the Startup folder
+- Run at user log on, at system startup or on a schedule using Task Scheduler
+- Run at system startup using Windows service
+
+### Running in background
+
+Rclone is a console application, so if not starting from an existing Command Prompt,
+e.g. when starting rclone.exe from a shortcut, it will open a Command Prompt window.
+When configuring rclone to run from Task Scheduler or as a Windows service you are able
+to set it to run hidden in the background. From rclone version 1.54 you can also make it
+run hidden from anywhere by adding option `--no-console` (it may still flash briefly
+when the program starts). Since rclone normally writes information and any error
+messages to the console, you must redirect this to a file to be able to see it.
+Rclone has a built-in option `--log-file` for that.
+
+Example command to run a sync in background:
+```
+c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt
+```
+
+### User account
+
+As mentioned in the [mount](https://rclone.org/commands/rclone_mount/) documentation,
+mounted drives created as Administrator are not visible to other accounts, not even the
+account that was elevated as Administrator. By running the mount command as the
+built-in `SYSTEM` user account, it will create drives accessible for everyone on
+the system. Both scheduled task and Windows service can be used to achieve this.
+
+NOTE: Remember that when rclone runs as the `SYSTEM` user, the user profile
+that it sees will not be yours. This means that if you normally run rclone with
+configuration file in the default location, to be able to use the same configuration
+when running as the system user you must explicitly tell rclone where to find
+it with the [`--config`](https://rclone.org/docs/#config-config-file) option,
+or else it will look in the system user's profile path (`C:\Windows\System32\config\systemprofile`).
+To test your command manually from a Command Prompt, you can run it with
+the [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec)
+utility from Microsoft's Sysinternals suite, which takes option `-s` to
+execute commands as the `SYSTEM` user.
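+
+For example, the following command (run from an elevated Command Prompt)
+opens a new Command Prompt running as the `SYSTEM` user, from which you
+can test your rclone command manually:
+
+```
+psexec -i -s cmd.exe
+```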
+
+### Start from Startup folder ###
+
+To quickly execute an rclone command you can simply create a standard
+Windows Explorer shortcut for the complete rclone command you want to run. If you
+store this shortcut in the special "Startup" start-menu folder, Windows will
+automatically run it at login. To open this folder in Windows Explorer,
+enter path `%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup`,
+or `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp` if you want
+the command to start for *every* user that logs in.
+
+This is the easiest approach to autostarting rclone, but it offers no
+functionality to set it to run as a different user, or to set conditions or
+actions on certain events. Setting up a scheduled task as described below
+will often give you better results.
+
+### Start from Task Scheduler ###
+
+Task Scheduler is an administrative tool built into Windows, and it can be used to
+configure rclone to be started automatically in a highly configurable way, e.g.
+periodically on a schedule, on user log on, or at system startup. It can
+be configured to run as the current user, or for a mount command that needs to
+be available to all users it can run as the `SYSTEM` user.
+For technical information, see
+https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page.
+
+### Run as service ###
+
+For running rclone at system startup, you can create a Windows service that executes
+your rclone command, as an alternative to a scheduled task configured to run at startup.
+
+#### Mount command built-in service integration ####
+
+For mount commands, Rclone has a built-in Windows service integration via the third party
+WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to
+execute the built-in PowerShell command `New-Service` (requires administrative privileges).
+
+Example of a PowerShell command that creates a Windows service for mounting
+some `remote:/files` as drive letter `X:`, for *all* users (service will be running as the
+local system account):
+
+```
+New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
+```
+
+The [WinFsp service infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)
+supports incorporating services for file system implementations, such as rclone,
+into its own launcher service, as a kind of "child services". This has the additional
+advantage that it also implements a network provider that integrates into
+Windows standard methods for managing network drives. This is currently not
+officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later
+it should be possible through path rewriting as described [here](https://github.com/rclone/rclone/issues/3340).
+
+#### Third party service integration ####
+
+To create a Windows service running any rclone command, the excellent third party utility
+[NSSM](http://nssm.cc), the "Non-Sucking Service Manager", can be used.
+It includes some advanced features such as adjusting process priority, defining
+process environment variables, redirecting anything written to stdout to a file, and
+customizing the response to different exit codes, with a GUI to configure everything
+(although it can also be used from the command line).
+
+There are also several other alternatives. To mention one more,
+[WinSW](https://github.com/winsw/winsw), "Windows Service Wrapper", is worth checking out.
+It requires .NET Framework, but it is preinstalled on newer versions of Windows, and it
+also provides alternative standalone distributions which include the necessary runtime (.NET 5).
+WinSW is a command-line only utility, where you have to manually create an XML file with
+service configuration. This may be a drawback for some, but it can also be an advantage
+as it is easy to back up and re-use the configuration
+settings, without having to go through manual steps in a GUI. One thing to note is that
+by default it does not restart the service on error; you have to explicitly enable this
+in the configuration file (via the "onfailure" parameter).
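+
+As a hedged illustration (the service id, paths and remote below are
+examples only, not an official template), a minimal WinSW XML
+configuration for a mount could look like this:
+
+```
+<service>
+  <id>rclone</id>
+  <name>Rclone</name>
+  <description>Rclone mount of remote:/files</description>
+  <executable>c:\rclone\rclone.exe</executable>
+  <arguments>mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt</arguments>
+  <onfailure action="restart" />
+</service>
+```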
+
+## Autostart on Linux
+
+### Start as a service
+
+To always run rclone in the background, relevant for mount commands etc.,
+you can use systemd to set up rclone as a system or user service. Running as a
+system service ensures that it is run at startup even if the user it is running as
+has no active session. Running rclone as a user service ensures that it only
+starts after the configured user has logged into the system.
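+
+As an illustrative sketch (the rclone binary path, remote name and mount
+point below are assumptions, not defaults), a minimal user service unit
+for a mount could look like this, saved as e.g.
+`~/.config/systemd/user/rclone-mount.service` and enabled with
+`systemctl --user enable --now rclone-mount`:
+
+```
+[Unit]
+Description=rclone mount of remote:
+
+[Service]
+ExecStart=/usr/bin/rclone mount remote: %h/mnt
+ExecStop=/bin/fusermount -u %h/mnt
+Restart=on-failure
+
+[Install]
+WantedBy=default.target
+```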
+
+### Run periodically from cron
+
+To run a periodic command, such as a copy/sync, you can set up a cron job.
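+
+As an illustrative example (the paths and remote name are assumptions),
+the following crontab entry (added with `crontab -e`) runs a sync every
+30 minutes and logs to a file:
+
+```
+*/30 * * * * /usr/bin/rclone sync /home/user/files remote:files --log-file /home/user/logs/rclone-sync.log
+```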
+
Configure
---------
@@ -409,7 +555,6 @@ See the following for detailed instructions for
* [Amazon S3](https://rclone.org/s3/)
* [Backblaze B2](https://rclone.org/b2/)
* [Box](https://rclone.org/box/)
- * [Cache](https://rclone.org/cache/)
* [Chunker](https://rclone.org/chunker/) - transparently splits large files for other remotes
* [Citrix ShareFile](https://rclone.org/sharefile/)
* [Compress](https://rclone.org/compress/)
@@ -442,6 +587,7 @@ See the following for detailed instructions for
* [SugarSync](https://rclone.org/sugarsync/)
* [Tardigrade](https://rclone.org/tardigrade/)
* [Union](https://rclone.org/union/)
+ * [Uptobox](https://rclone.org/uptobox/)
* [WebDAV](https://rclone.org/webdav/)
* [Yandex Disk](https://rclone.org/yandex/)
* [Zoho WorkDrive](https://rclone.org/zoho/)
@@ -504,12 +650,12 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone config delete](https://rclone.org/commands/rclone_config_delete/) - Delete an existing remote `name`.
* [rclone config disconnect](https://rclone.org/commands/rclone_config_disconnect/) - Disconnects user from remote
* [rclone config dump](https://rclone.org/commands/rclone_config_dump/) - Dump the config file as JSON.
-* [rclone config edit](https://rclone.org/commands/rclone_config_edit/) - Enter an interactive configuration session.
* [rclone config file](https://rclone.org/commands/rclone_config_file/) - Show path of configuration file in use.
* [rclone config password](https://rclone.org/commands/rclone_config_password/) - Update password in an existing remote.
* [rclone config providers](https://rclone.org/commands/rclone_config_providers/) - List in JSON format all the providers and options.
* [rclone config reconnect](https://rclone.org/commands/rclone_config_reconnect/) - Re-authenticates user with remote.
* [rclone config show](https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
+* [rclone config touch](https://rclone.org/commands/rclone_config_touch/) - Ensure configuration file exists.
* [rclone config update](https://rclone.org/commands/rclone_config_update/) - Update options in an existing remote.
* [rclone config userinfo](https://rclone.org/commands/rclone_config_userinfo/) - Prints info about logged in user of remote.
@@ -712,8 +858,8 @@ If you supply the `--rmdirs` flag, it will remove all empty directories along wi
You can also use the separate command `rmdir` or `rmdirs` to
delete empty directories only.
-For example, to delete all files bigger than 100MBytes, you may first want to check what
-would be deleted (use either):
+For example, to delete all files bigger than 100 MiB, you may first want to
+check what would be deleted (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
@@ -722,8 +868,8 @@ Then proceed with the actual delete:
rclone --min-size 100M delete remote:path
-That reads "delete everything with a minimum size of 100 MB", hence
-delete all files bigger than 100MBytes.
+That reads "delete everything with a minimum size of 100 MiB", hence
+delete all files bigger than 100 MiB.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@@ -848,6 +994,9 @@ both remotes and check them against each other on the fly. This can
be useful for remotes that don't support hashes or if you really want
to check all the data.
+If you supply the `--checkfile HASH` flag with a valid hash name,
+the `source:path` must point to a text file in the SUM format.
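+
+A SUM file uses the same format as the output of tools like `md5sum` and
+`sha1sum`: one hash, whitespace and a path per line. For example, with
+hypothetical hashes and file names, given a file `sums.md5` containing:
+
+    0bee89b07a248e27c83fc3d5951213c1  file1.txt
+    3b5d5c3712955042212316173ccf37be  file2.txt
+
+the files could be verified with:
+
+    rclone check --checkfile md5 sums.md5 remote:path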
+
If you supply the `--one-way` flag, it will only check that files in
the source match the files in the destination, not the other way
around. This means that extra files in the destination that are not in
@@ -877,6 +1026,7 @@ rclone check source:path dest:path [flags]
## Options
```
+ -C, --checkfile string Treat source:path as a SUM file with hashes of given type
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by downloading rather than with hash.
@@ -1103,6 +1253,7 @@ rclone md5sum remote:path [flags]
```
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for md5sum
--output-file string Output hashsums to a file rather than the terminal
@@ -1138,6 +1289,7 @@ rclone sha1sum remote:path [flags]
```
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for sha1sum
--output-file string Output hashsums to a file rather than the terminal
@@ -1177,13 +1329,16 @@ Show the version number.
## Synopsis
-Show the rclone version number, the go version, the build target OS and
-architecture, build tags and the type of executable (static or dynamic).
+Show the rclone version number, the go version, the build target
+OS and architecture, the runtime OS and kernel version and bitness,
+build tags and the type of executable (static or dynamic).
For example:
$ rclone version
- rclone v1.54
+ rclone v1.55.0
+ - os/version: ubuntu 18.04 (64 bit)
+ - os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16
@@ -1395,10 +1550,10 @@ Get quota information from the remote.
## Synopsis
-`rclone about`prints quota information about a remote to standard
+`rclone about` prints quota information about a remote to standard
output. The output is typically used, free, quota and trash contents.
-E.g. Typical output from`rclone about remote:`is:
+E.g. Typical output from `rclone about remote:` is:
Total: 17G
Used: 7.444G
@@ -1426,7 +1581,7 @@ Applying a `--full` flag to the command prints the bytes in full, e.g.
Trashed: 104857602
Other: 8849156022
-A `--json`flag generates conveniently computer readable output, e.g.
+A `--json` flag generates conveniently computer readable output, e.g.
{
"total": 18253611008,
@@ -1590,6 +1745,67 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
+# rclone checksum
+
+Checks the files in the source against a SUM file.
+
+## Synopsis
+
+
+Checks that hashsums of source files match the SUM file.
+It compares hashes (MD5, SHA1, etc) and logs a report of files which
+don't match. It doesn't alter the file system.
+
+If you supply the `--download` flag, it will download the data from remote
+and calculate the contents hash on the fly. This can be useful for remotes
+that don't support hashes or if you really want to check all the data.
+
+If you supply the `--one-way` flag, it will only check that files in
+the source match the files in the destination, not the other way
+around. This means that extra files in the destination that are not in
+the source will not be detected.
+
+The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
+and `--error` flags write paths, one per line, to the file name (or
+stdout if it is `-`) supplied. What they write is described in the
+help below. For example `--differ` will write all paths which are
+present on both the source and destination but different.
+
+The `--combined` flag will write a file (or stdout) which contains all
+file paths with a symbol and then a space and then the path to tell
+you what happened to it. These are reminiscent of diff files.
+
+- `= path` means path was found in source and destination and was identical
+- `- path` means path was missing on the source, so only in the destination
+- `+ path` means path was missing on the destination, so only in the source
+- `* path` means path was present in source and destination but different.
+- `! path` means there was an error reading or hashing the source or dest.
+
+
+```
+rclone checksum sumfile src:path [flags]
+```
+
+## Options
+
+```
+ --combined string Make a combined report of changes to this file
+ --differ string Report all non-matching files to this file
+ --download Check by hashing the contents.
+ --error string Report all files with errors (hashing or reading) to this file
+ -h, --help help for checksum
+ --match string Report all matching files to this file
+ --missing-on-dst string Report all files missing from the destination to this file
+ --missing-on-src string Report all files missing from the source to this file
+ --one-way Check one way only, source files must exist on remote
+```
+
+See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+
+## SEE ALSO
+
+* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
+
# rclone config create
Create a new remote with name, type and options.
@@ -1598,16 +1814,23 @@ Create a new remote with name, type and options.
Create a new remote of `name` with `type` and options. The options
-should be passed in pairs of `key` `value`.
+should be passed in pairs of `key` `value` or as `key=value`.
For example to make a swift remote of name myremote using auto config
you would do:
rclone config create myremote swift env_auth true
+ rclone config create myremote swift env_auth=true
+
+So for example if you wanted to configure a Google Drive remote but
+using remote authorization you would do this:
+
+ rclone config create mydrive drive config_is_local=false
Note that if the config process would normally ask a question the
-default is taken. Each time that happens rclone will print a message
-saying how to affect the value taken.
+default is taken (unless `--non-interactive` is used). Each time
+that happens rclone will print or DEBUG a message saying how to
+affect the value taken.
If any of the parameters passed is a password field, then rclone will
automatically obscure them if they aren't already obscured before
@@ -1617,15 +1840,79 @@ putting them in the config file.
consists only of base64 characters then rclone can get confused about
whether the password is already obscured or not and put unobscured
passwords into the config file. If you want to be 100% certain that
-the passwords get obscured then use the "--obscure" flag, or if you
+the passwords get obscured then use the `--obscure` flag, or if you
are 100% certain you are already passing obscured passwords then use
-"--no-obscure". You can also set obscured passwords using the
-"rclone config password" command.
+`--no-obscure`. You can also set obscured passwords using the
+`rclone config password` command.
-So for example if you wanted to configure a Google Drive remote but
-using remote authorization you would do this:
+The flag `--non-interactive` is for use by applications that wish to
+configure rclone themselves, rather than using rclone's text-based
+configuration questions. If this flag is set, and rclone needs to ask
+the user a question, a JSON blob will be returned with the question in
+it.
- rclone config create mydrive drive config_is_local false
+This will look something like (some irrelevant detail removed):
+
+```
+{
+ "State": "*oauth-islocal,teamdrive,,",
+ "Option": {
+ "Name": "config_is_local",
+ "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
+ "Default": true,
+ "Examples": [
+ {
+ "Value": "true",
+ "Help": "Yes"
+ },
+ {
+ "Value": "false",
+ "Help": "No"
+ }
+ ],
+ "Required": false,
+ "IsPassword": false,
+ "Type": "bool",
+ "Exclusive": true
+ },
+ "Error": ""
+}
+```
+
+The format of `Option` is the same as returned by `rclone config
+providers`. The question should be asked to the user and the answer
+returned to rclone as the `--result` option along with the `--state`
+parameter.
+
+The keys of `Option` are used as follows:
+
+- `Name` - name of variable - show to user
+- `Help` - help text. Hard wrapped at 80 chars. Any URLs should be clicky.
+- `Default` - default value - return this if the user just wants the default.
+- `Examples` - the user should be able to choose one of these
+- `Required` - the value should be non-empty
+- `IsPassword` - the value is a password and should be edited as such
+- `Type` - type of value, eg `bool`, `string`, `int` and others
+- `Exclusive` - if set, no free-form entry is allowed, only the `Examples`
+- Irrelevant keys: `Provider`, `ShortOpt`, `Hide`, `NoPrefix`, `Advanced`
+
+If `Error` is set then it should be shown to the user at the same
+time as the question.
+
+ rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+
+Note that when using `--continue` all passwords should be passed in
+the clear (not obscured). Any default config values should be passed
+in with each invocation of `--continue`.
+
+At the end of the non-interactive process, rclone will return a result
+with `State` as an empty string.
+
+If `--all` is passed then rclone will ask all the config questions,
+not just the post config questions. Any parameters are used as
+defaults for questions as usual.
+
+Note that `bin/config.py` in the rclone source implements this protocol
+as a readable demonstration.
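+The protocol can be driven from any language that can parse JSON. As a
+rough, hypothetical sketch (not rclone's own code) of how a client might
+turn one `Option` blob into an answer, honouring `Default`, `Required`
+and `Exclusive` as described above:

```python
import json

def answer_for(option, user_input=None):
    """Pick the answer for one config question, using the Option keys
    described above: fall back to Default, enforce Required, and
    restrict free-form entry when Exclusive is set."""
    if user_input in (None, ""):
        default = option.get("Default", "")
        if option.get("Required") and default in (None, ""):
            raise ValueError("%s requires a value" % option["Name"])
        # rclone expects string answers, e.g. "true" rather than True
        return default if isinstance(default, str) else json.dumps(default)
    if option.get("Exclusive"):
        allowed = {ex["Value"] for ex in option.get("Examples", [])}
        if user_input not in allowed:
            raise ValueError("%s must be one of %s" % (option["Name"], sorted(allowed)))
    return user_input

# The example blob above, trimmed to the keys this sketch uses:
option = {"Name": "config_is_local", "Default": True, "Required": False,
          "Exclusive": True,
          "Examples": [{"Value": "true"}, {"Value": "false"}]}
print(answer_for(option))           # no input: falls back to the default, "true"
print(answer_for(option, "false"))  # a valid choice from Examples
```

+The returned string would then be fed back to rclone with `--result` on
+the next `--continue` invocation.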
```
@@ -1635,9 +1922,14 @@ rclone config create `name` `type` [`key` `value`]* [flags]
## Options
```
- -h, --help help for create
- --no-obscure Force any passwords not to be obscured.
- --obscure Force any passwords to be obscured.
+ --all Ask the full set of config questions.
+ --continue Continue the configuration process with an answer.
+ -h, --help help for create
+ --no-obscure Force any passwords not to be obscured.
+ --non-interactive Don't interact with user and return questions.
+ --obscure Force any passwords to be obscured.
+ --result string Result - use with --continue.
+ --state string State - use with --continue.
```
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
@@ -1720,7 +2012,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
Enter an interactive configuration session.
-## Synopsis
+## Synopsis
Enter an interactive configuration session where you can set up new
remotes and manage existing ones. You may also set or remove a
@@ -1731,7 +2023,7 @@ password to protect your configuration.
rclone config edit [flags]
```
-## Options
+## Options
```
-h, --help help for edit
@@ -1739,7 +2031,7 @@ rclone config edit [flags]
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
-## SEE ALSO
+## SEE ALSO
* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.
@@ -1771,11 +2063,13 @@ Update password in an existing remote.
Update an existing remote's password. The password
-should be passed in pairs of `key` `value`.
+should be passed in pairs of `key` `password` or as `key=password`.
+The `password` should be passed in clear text (unobscured).
For example to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
+ rclone config password myremote fieldname=mypassword
This command is obsolete now that "config update" and "config create"
both support obscuring passwords directly.
@@ -1867,6 +2161,26 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.
+# rclone config touch
+
+Ensure configuration file exists.
+
+```
+rclone config touch [flags]
+```
+
+## Options
+
+```
+ -h, --help help for touch
+```
+
+See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+
+## SEE ALSO
+
+* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.
+
# rclone config update
Update options in an existing remote.
@@ -1875,12 +2189,23 @@ Update options in an existing remote.
Update an existing remote's options. The options should be passed in
-in pairs of `key` `value`.
+pairs of `key` `value` or as `key=value`.
For example to update the env_auth field of a remote of name myremote
you would do:
- rclone config update myremote swift env_auth true
+ rclone config update myremote env_auth true
+ rclone config update myremote env_auth=true
+
+If the remote uses OAuth the token will be updated. If you don't
+require this, add an extra parameter thus:
+
+ rclone config update myremote env_auth=true config_refresh_token=false
+
+Note that if the config process would normally ask a question the
+default is taken (unless `--non-interactive` is used). Each time
+that happens rclone will print or DEBUG a message saying how to
+affect the value taken.
If any of the parameters passed is a password field, then rclone will
automatically obscure them if they aren't already obscured before
@@ -1890,15 +2215,79 @@ putting them in the config file.
consists only of base64 characters then rclone can get confused about
whether the password is already obscured or not and put unobscured
passwords into the config file. If you want to be 100% certain that
-the passwords get obscured then use the "--obscure" flag, or if you
+the passwords get obscured then use the `--obscure` flag, or if you
are 100% certain you are already passing obscured passwords then use
-"--no-obscure". You can also set obscured passwords using the
-"rclone config password" command.
+`--no-obscure`. You can also set obscured passwords using the
+`rclone config password` command.
-If the remote uses OAuth the token will be updated, if you don't
-require this add an extra parameter thus:
+The flag `--non-interactive` is for use by applications that wish to
+configure rclone themselves, rather than using rclone's text-based
+configuration questions. If this flag is set, and rclone needs to ask
+the user a question, a JSON blob will be returned with the question in
+it.
- rclone config update myremote swift env_auth true config_refresh_token false
+This will look something like (some irrelevant detail removed):
+
+```
+{
+ "State": "*oauth-islocal,teamdrive,,",
+ "Option": {
+ "Name": "config_is_local",
+ "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
+ "Default": true,
+ "Examples": [
+ {
+ "Value": "true",
+ "Help": "Yes"
+ },
+ {
+ "Value": "false",
+ "Help": "No"
+ }
+ ],
+ "Required": false,
+ "IsPassword": false,
+ "Type": "bool",
+ "Exclusive": true
+ },
+ "Error": ""
+}
+```
+
+The format of `Option` is the same as returned by `rclone config
+providers`. The question should be asked to the user and the answer
+returned to rclone as the `--result` option along with the `--state`
+parameter.
+
+The keys of `Option` are used as follows:
+
+- `Name` - name of variable - show to user
+- `Help` - help text. Hard wrapped at 80 chars. Any URLs should be clicky.
+- `Default` - default value - return this if the user just wants the default.
+- `Examples` - the user should be able to choose one of these
+- `Required` - the value should be non-empty
+- `IsPassword` - the value is a password and should be edited as such
+- `Type` - type of value, eg `bool`, `string`, `int` and others
+- `Exclusive` - if set, no free-form entry is allowed, only the `Examples`
+- Irrelevant keys: `Provider`, `ShortOpt`, `Hide`, `NoPrefix`, `Advanced`
+
+If `Error` is set then it should be shown to the user at the same
+time as the question.
+
+ rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+
+Note that when using `--continue` all passwords should be passed in
+the clear (not obscured). Any default config values should be passed
+in with each invocation of `--continue`.
+
+At the end of the non-interactive process, rclone will return a result
+with `State` as an empty string.
+
+If `--all` is passed then rclone will ask all the config questions,
+not just the post config questions. Any parameters are used as
+defaults for questions as usual.
+
+Note that `bin/config.py` in the rclone source implements this protocol
+as a readable demonstration.
```
@@ -1908,9 +2297,14 @@ rclone config update `name` [`key` `value`]+ [flags]
## Options
```
- -h, --help help for update
- --no-obscure Force any passwords not to be obscured.
- --obscure Force any passwords to be obscured.
+ --all Ask the full set of config questions.
+ --continue Continue the configuration process with an answer.
+ -h, --help help for update
+ --no-obscure Force any passwords not to be obscured.
+ --non-interactive Don't interact with user and return questions.
+ --obscure Force any passwords to be obscured.
+ --result string Result - use with --continue.
+ --state string State - use with --continue.
```
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
@@ -2009,9 +2403,9 @@ Copy url content to dest.
Download a URL's content and copy it to the destination without saving
it in temporary storage.
-Setting `--auto-filename`will cause the file name to be retrieved from
-the from URL (after any redirections) and used in the destination
-path. With `--print-filename` in addition, the resuling file name will
+Setting `--auto-filename` will cause the file name to be retrieved from
+the URL (after any redirections) and used in the destination
+path. With `--print-filename` in addition, the resulting file name will
be printed.
Setting `--no-clobber` will prevent overwriting file on the
@@ -2379,15 +2773,20 @@ Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum
Supported hashes are:
- * MD5
- * SHA-1
- * DropboxHash
- * QuickXorHash
+ * md5
+ * sha1
+ * whirlpool
+ * crc32
+ * dropbox
+ * mailru
+ * quickxor
Then
$ rclone hashsum MD5 remote:path
+Note that hash names are case insensitive.
+
```
rclone hashsum remote:path [flags]
@@ -2397,6 +2796,7 @@ rclone hashsum remote:path [flags]
```
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for hashsum
--output-file string Output hashsums to a file rather than the terminal
@@ -2444,7 +2844,7 @@ rclone link remote:path [flags]
## Options
```
- --expire Duration The amount of time that the link will be valid (default 100y)
+ --expire Duration The amount of time that the link will be valid (default off)
-h, --help help for link
--unlink Remove existing public link to file/folder
```
@@ -2622,7 +3022,7 @@ rclone lsf remote:path [flags]
--dirs-only Only list directories.
--files-only Only list files.
-F, --format string Output format - see help for details (default "p")
- --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
-h, --help help for lsf
-R, --recursive Recurse into the listing.
-s, --separator string Separator for the items in the format. (default ";")
@@ -2803,9 +3203,9 @@ When that happens, it is the user's responsibility to stop the mount manually.
The size of the mounted file system will be set according to information retrieved
from the remote, the same as returned by the [rclone about](https://rclone.org/commands/rclone_about/)
command. Remotes with unlimited storage may report the used size only,
-then an additional 1PB of free space is assumed. If the remote does not
+then an additional 1 PiB of free space is assumed. If the remote does not
[support](https://rclone.org/overview/#optional-features) the about feature
-at all, then 1PB is set as both the total and the free size.
+at all, then 1 PiB is set as both the total and the free size.
**Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version 1.13
or newer on some platforms depending on the underlying FUSE library in use.
@@ -2931,7 +3331,7 @@ metadata about files like in UNIX. One case that may arise is that other program
(incorrectly) interprets this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".
-WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
+WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
@@ -2939,19 +3339,38 @@ by specifying `-o FileSecurity="D:P(A;;FA;;;OW)"`, for file all access (FA) to t
### Windows caveats
-Note that drives created as Administrator are not visible by other
-accounts (including the account that was elevated as
-Administrator). So if you start a Windows drive from an Administrative
-Command Prompt and then try to access the same drive from Explorer
-(which does not run as Administrator), you will not be able to see the
-new drive.
+Drives created as Administrator are not visible to other accounts,
+not even an account that was elevated to Administrator with the
+User Account Control (UAC) feature. A result of this is that if you mount
+to a drive letter from a Command Prompt run as Administrator, and then try
+to access the same drive from Windows Explorer (which does not run as
+Administrator), you will not be able to see the mounted drive.
-The easiest way around this is to start the drive from a normal
-command prompt. It is also possible to start a drive from the SYSTEM
-account (using [the WinFsp.Launcher
-infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
-which creates drives accessible for everyone on the system or
-alternatively using [the nssm service manager](https://nssm.cc/usage).
+If you don't need to access the drive from applications running with
+administrative privileges, the easiest way around this is to always
+create the mount from a non-elevated command prompt.
+
+To make mapped drives available to the user account that created them
+regardless if elevated or not, there is a special Windows setting called
+[linked connections](https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry)
+that can be enabled.
+
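+Per the linked Microsoft article, the setting in question is the
+`EnableLinkedConnections` registry value (shown here as a sketch; edit
+the registry at your own risk, and reboot for it to take effect):
+
+```
+reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v EnableLinkedConnections /t REG_DWORD /d 1 /f
+```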
+It is also possible to make a drive mount available to everyone on the system,
+by running the process creating it as the built-in SYSTEM account.
+There are several ways to do this: One is to use the command-line
+utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec),
+from Microsoft's Sysinternals suite, which has option `-s` to start
+processes as the SYSTEM account. Another alternative is to run the mount
+command from a Windows Scheduled Task, or a Windows Service, configured
+to run as the SYSTEM account. A third alternative is to use the
+[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture).
+Note that when running rclone as another user, it will not use
+the configuration file from your profile unless you tell it to
+with the [`--config`](https://rclone.org/docs/#config-config-file) option.
+Read more in the [install documentation](https://rclone.org/install/).
+
+Note that mapping to a directory path, instead of a drive letter,
+does not suffer from the same limitations.
## Limitations
@@ -3060,7 +3479,7 @@ backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -3325,7 +3744,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for mount
- --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128k)
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
@@ -3336,14 +3755,14 @@ rclone mount remote:path /path/to/mountpoint [flags]
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows.
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -3624,6 +4043,13 @@ must fit into RAM. The cutoff needs to be small enough to adhere
the limits of your remote, please see there. Generally speaking,
setting this cutoff too high will decrease your performance.
+Use the `--size` flag to preallocate the file in advance at the remote end
+and actually stream it, even if the remote backend doesn't support streaming.
+
+`--size` should be the exact size of the input stream in bytes. If the
+size of the stream differs from the `--size` passed in
+then the transfer will likely fail.
+
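+As an illustration (a hypothetical wrapper, not part of rclone),
+deriving the hint from the local file guarantees the two sizes agree:

```python
import os

def rcat_command(local_path, remote_path):
    """Build an `rclone rcat` command line whose --size hint is exactly
    the byte length of the input stream."""
    size = os.path.getsize(local_path)  # exact size in bytes
    return ["rclone", "rcat", "--size", str(size), remote_path]

# Usage: stream the file on stdin, e.g.
#   import subprocess
#   with open("backup.tar", "rb") as f:
#       subprocess.run(rcat_command("backup.tar", "remote:backup.tar"), stdin=f)
```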
Note that the upload also cannot be retried because the data is
not kept around until the upload succeeds. If you need to transfer
a lot of data, you're better off caching locally and then
@@ -3636,7 +4062,8 @@ rclone rcat remote:path [flags]
## Options
```
- -h, --help help for rcat
+ -h, --help help for rcat
+ --size int File size hint to preallocate (default -1)
```
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
@@ -3751,7 +4178,7 @@ If the old version contains only dots and digits (for example `v1.54.0`)
then it's a stable release so you won't need the `--beta` flag. Beta releases
have an additional information similar to `v1.54.0-beta.5111.06f1c0c61`.
(if you are a developer and use a locally built rclone, the version number
-will end with `-DEV`, you will have to rebuild it as it obvisously can't
+will end with `-DEV`, you will have to rebuild it as it obviously can't
be distributed).
If you previously installed rclone via a package manager, the package may
@@ -3826,6 +4253,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone serve dlna](https://rclone.org/commands/rclone_serve_dlna/) - Serve remote:path over DLNA
+* [rclone serve docker](https://rclone.org/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
* [rclone serve ftp](https://rclone.org/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](https://rclone.org/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](https://rclone.org/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
@@ -3882,7 +4310,7 @@ backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -4153,7 +4581,7 @@ rclone serve dlna remote:path [flags]
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -4167,6 +4595,378 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.
+# rclone serve docker
+
+Serve any remote on docker's volume plugin API.
+
+## Synopsis
+
+
+This command implements the Docker volume plugin API allowing docker to use
+rclone as a data storage mechanism for various cloud providers.
+Rclone provides a [docker volume plugin](/docker) based on it.
+
+A docker plugin must create a Unix or TCP socket that Docker looks
+for when you use the plugin. The plugin then listens on this socket for
+commands from the docker daemon and runs the corresponding code when necessary.
+Docker plugins can run as a managed plugin under control of the docker daemon
+or as an independent native service. For testing, you can just run it directly
+from the command line, for example:
+```
+sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
+```
+
+Running `rclone serve docker` will create this socket, listening for
+commands from Docker to create the necessary Volumes. Normally you need not
+give the `--socket-addr` flag. The API will listen on the unix domain socket
+at `/run/docker/plugins/rclone.sock`. In the example above rclone will create
+a TCP socket and a small file `/etc/docker/plugins/rclone.spec` containing
+the socket address. We use `sudo` because both paths are writeable only by
+the root user.
+
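+For the TCP example above, the generated spec file would simply hold the
+socket address, along the lines of (illustrative contents):
+
+```
+$ cat /etc/docker/plugins/rclone.spec
+tcp://localhost:8787
+```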
+If you later decide to change the listening socket, the docker daemon must be
+restarted to reconnect to `/run/docker/plugins/rclone.sock`
+or parse new `/etc/docker/plugins/rclone.spec`. Until you restart, any
+volume related docker commands will timeout trying to access the old socket.
+Running directly is supported on **Linux only**, not on Windows or macOS.
+This is not a problem with the managed plugin mode described in detail
+in the [full documentation](https://rclone.org/docker).
+
+The command will create volume mounts under the path given by `--base-dir`
+(by default `/var/lib/docker-volumes/rclone` available only to root)
+and maintain the JSON formatted file `docker-plugin.state` in the rclone cache
+directory with book-keeping records of created and mounted volumes.
+
+All mount and VFS options are submitted by the docker daemon via API, but
+you can also provide defaults on the command line as well as set the path to the
+config file and cache directory or adjust logging verbosity.
+
+## VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk
+filing system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the
+VFS layer has to deal with that. Because there is no one right way of
+doing this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info
+about files and directories (but not the data) in memory.
+
+## VFS Directory Cache
+
+Using the `--dir-cache-time` flag, you can control how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made through the mount will appear immediately or
+invalidate the cache.
+
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+
+However, changes made directly on the cloud storage by the web
+interface or a different copy of rclone will only be picked up once
+the directory cache expires if the backend configured does not support
+polling for changes. If the backend supports polling, changes will be
+picked up within the polling interval.
+
+You can send a `SIGHUP` signal to rclone for it to flush all
+directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+## VFS File Buffering
+
+The `--buffer-size` flag determines the amount of memory
+that will be used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the memory used per open file. The
+buffer will only use memory for data that is downloaded but not
+yet read. If the buffer is empty, only a small amount of memory will
+be used.
+
+The maximum memory used by rclone for buffering can be up to
+`--buffer-size * open files`.
+
+## VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed and if they haven't been accessed for --vfs-write-back
+seconds. If rclone is quit or dies with files that haven't been
+uploaded, these will be uploaded next time rclone is run with the same
+flags.
+
+If using `--vfs-cache-max-size` note that the cache may exceed this size
+for two reasons. Firstly because it is only checked every
+`--vfs-cache-poll-interval`. Secondly because open files cannot be
+evicted from the cache.
+
+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
+### --vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, while using minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from
+the remote, write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
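
The retry behaviour described above amounts to a capped exponential backoff. A minimal sketch (the 1-second starting delay is an assumption for illustration, not rclone's actual value):

```python
# Sketch of the upload retry schedule described above: the delay
# between attempts doubles each time, capped at 1 minute.
# The 1-second starting delay is an assumption for illustration.

def retry_delays(start=1.0, cap=60.0, attempts=8):
    """Yield the wait (in seconds) before each retry attempt."""
    delay = start
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(retry_delays()))   # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```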
+
+### --vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone
+will keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
+This mode should support all normal file system operations and is
+otherwise identical to --vfs-cache-mode writes.
+
+When reading a file rclone will read --buffer-size plus
+--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
+whereas the --vfs-read-ahead is buffered on disk.
+
+When using this mode it is recommended that --buffer-size is not set
+too big and --vfs-read-ahead is set large if required.
+
+**IMPORTANT** not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache
+directory is on a filesystem which doesn't support sparse files and it
+will log an ERROR message if one is detected.
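
The sparse-file behaviour can be demonstrated directly: a file reports its full size even though only the written ranges need occupy disk space. A minimal sketch in Python:

```python
# Demonstrates the sparse-file point above: the file reports its full
# size even though only the written range need occupy disk space
# (on filesystems that support sparse files; FAT/exFAT do not).
import os, tempfile

def make_sparse(path, size, data=b"start-of-file"):
    """Write a little real data, then extend the file with a hole."""
    with open(path, "wb") as f:
        f.write(data)      # only this range is actually stored
        f.truncate(size)   # the rest is a hole on sparse-capable filesystems

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "cachefile")
    make_sparse(p, 10 * 1024 * 1024)
    print(os.path.getsize(p))   # → 10485760 (appears to be full size)
```

On a filesystem without sparse support the hole is materialised as real zeroed blocks, which is the performance problem the warning above describes.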
+
+## VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons.
+
+In particular S3 and Swift benefit hugely from the --no-modtime flag
+(or use --use-server-modtime for a slightly different effect) as each
+read of the modification time takes a transaction.
+
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Mount read-only.
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the
+chunk specified. This is advantageous because some cloud providers
+bill reads by the amount of data requested, not the amount of data
+actually delivered.
+
+Rclone will keep doubling the chunk size requested starting at
+--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
+unless it is set to "off" in which case there will be no limit.
+
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
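
The chunk-doubling schedule can be sketched as follows (a simplified model of the described behaviour, not rclone's implementation):

```python
# Sketch of the chunk-size schedule described above: each request
# doubles the previous chunk size, up to an optional limit
# (limit=None models the "off" setting).

def chunk_sizes(initial, limit=None, n=6):
    """Return the sizes of the first n chunk requests."""
    sizes, size = [], initial
    for _ in range(n):
        sizes.append(size)
        size *= 2
        if limit is not None:
            size = min(size, limit)
    return sizes

MiB = 1024 * 1024
print([s // MiB for s in chunk_sizes(128 * MiB, limit=1024 * MiB, n=5)])
# → [128, 256, 512, 1024, 1024]
```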
+
+Sometimes reads or writes are delivered to rclone out of order. Rather
+than seeking, rclone will wait a short time for the in-sequence read or
+write to come in. These flags only come into effect when not using an
+on disk cache file.
+
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+
+When using VFS write caching (--vfs-cache-mode with value writes or full),
+the global flag --transfers can be set to adjust the number of parallel uploads of
+modified files from cache (the related global flag --checkers has no effect on mount).
+
+ --transfers int Number of file transfers to run in parallel. (default 4)
+
+## VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only
+by case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case used
+to create the file is preserved and available for programs to query.
+It is not allowed for two files in the same directory to differ only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to make macOS
+file systems case-sensitive, but that is not the default.
+
+The `--vfs-case-insensitive` mount flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the mounted
+file system as-is. If the flag is "true" (or appears without a value on
+the command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on mounted file system. If an argument refers
+to an existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if an exact match is not found
+but a name differing only by case exists, rclone will
+transparently fix up the name. This fixup happens only when an existing file
+is requested. Case sensitivity of file names created anew by rclone is
+controlled by the underlying mounted file system.
+
+Note that case sensitivity of the operating system running rclone (the target)
+may differ from case sensitivity of a file system mounted by rclone (the source).
+The flag controls whether "fixup" is performed to satisfy the target.
+
+If the flag is not provided on the command line, then its default value depends
+on the operating system where rclone runs: "true" on Windows and macOS, "false"
+otherwise. If the flag is provided without a value, then it is "true".
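
The fixup described above can be sketched as a two-pass lookup: prefer an exact-case match, otherwise fall back to a name differing only by case (an illustrative model, not rclone's code):

```python
# Sketch of the case-insensitive "fixup" described above.

def fixup_name(requested, existing_names):
    if requested in existing_names:
        return requested            # exact match wins
    folded = requested.casefold()
    for name in existing_names:
        if name.casefold() == folded:
            return name             # transparent case fixup
    return requested                # new file: case kept as given

names = ["Photo.JPG", "notes.txt"]
print(fixup_name("photo.jpg", names))   # → Photo.JPG
print(fixup_name("readme.md", names))   # → readme.md
```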
+
+## Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running `df` on the
+filesystem, then pass the flag `--vfs-used-is-size` to rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to `rclone size`
+and compute the total used space itself.
+
+_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots of API
+calls resulting in extra charges. Use it as a last resort and only with caching.
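
What `--vfs-used-is-size` implies can be sketched as a full tree walk summing file sizes, like `rclone size` (illustrative only):

```python
# Sketch of the --vfs-used-is-size scan described above: walk the
# whole tree and sum file sizes instead of asking the backend.
import os, tempfile

def used_bytes(root):
    """Total bytes used under root, one stat per file."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "a.bin"), "wb") as f:
        f.write(b"x" * 1000)
    print(used_bytes(d))   # → 1000
```

Every file costs a stat (and, on a real remote, an API call), which is why the manual warns about extra charges.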
+
+
+```
+rclone serve docker [flags]
+```
+
+## Options
+
+```
+ --allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
+ --allow-other Allow access to other users. Not supported on Windows.
+ --allow-root Allow access to root user. Not supported on Windows.
+ --async-read Use asynchronous reads. Not supported on Windows. (default true)
+ --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
+ --base-dir string base directory for volumes (default "/var/lib/docker-volumes/rclone")
+ --daemon Run mount as a daemon (background mode). Not supported on Windows.
+ --daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+      --forget-state Skip restoring previous state
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ -h, --help help for docker
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
+ --network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+      --no-spec Do not write spec file
+ --noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
+ --noapplexattr Ignore all "com.apple.*" extended attributes. Supported on OSX only.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+      --socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
+ --socket-gid int GID for unix socket (default: current process GID) (default 1000)
+ --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+      --vfs-case-insensitive If a file name is not found, find a case insensitive match.
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
+ --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --volname string Set the volume name. Supported on Windows and OSX only.
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
+```
+
+See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+
+## SEE ALSO
+
+* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.
+
# rclone serve ftp
Serve remote:path over FTP.
@@ -4216,7 +5016,7 @@ backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -4573,7 +5373,7 @@ rclone serve ftp remote:path [flags]
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -4608,7 +5408,7 @@ control the stats printing.
## Server options
Use --addr to specify which IP address and port the server should
-listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
@@ -4630,6 +5430,17 @@ inserts leading and trailing "/" on --baseurl, so --baseurl "rclone",
--baseurl "/rclone" and --baseurl "/rclone/" are all treated
identically.
+### SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you
+wish to do client side certificate validation then you will need to
+supply --client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded
+private key and --client-ca should be the PEM encoded client
+certificate authority certificate.
--template allows a user to specify a custom markup template for http
and webdav serve functions. The server exports the following markup
to be used within the template to server pages:
@@ -4674,18 +5485,6 @@ The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
-### SSL/TLS
-
-By default this will serve over http. If you want you can serve over
-https. You will need to supply the --cert and --key flags. If you
-wish to do client side certificate validation then you will need to
-supply --client-ca also.
-
---cert should be either a PEM encoded certificate or a concatenation
-of that with the CA certificate. --key should be the PEM encoded
-private key and --client-ca should be the PEM encoded client
-certificate authority certificate.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -4708,7 +5507,7 @@ backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -4958,7 +5757,7 @@ rclone serve http remote:path [flags]
## Options
```
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --addr string IPaddress:Port or :Port to bind server to. (default "127.0.0.1:8080")
--baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
@@ -4976,7 +5775,7 @@ rclone serve http remote:path [flags]
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
- --realm string realm for authentication (default "rclone")
+ --realm string realm for authentication
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
@@ -4989,7 +5788,7 @@ rclone serve http remote:path [flags]
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -5243,6 +6042,11 @@ reachable externally then supply "--addr :2022" for example.
Note that the default of "--vfs-cache-mode off" is fine for the rclone
sftp backend, but it may not be with other SFTP clients.
+If --stdio is specified, rclone will serve SFTP over stdio, which can
+be used with sshd via ~/.ssh/authorized_keys, for example:
+
+ restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
+
## VFS - Virtual File System
@@ -5266,7 +6070,7 @@ backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -5613,6 +6417,7 @@ rclone serve sftp remote:path [flags]
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+      --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
@@ -5622,7 +6427,7 @@ rclone serve sftp remote:path [flags]
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -5765,7 +6570,7 @@ backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -6130,7 +6935,7 @@ rclone serve webdav remote:path [flags]
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -6219,11 +7024,33 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
## SEE ALSO
* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
+* [rclone test changenotify](https://rclone.org/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in.
* [rclone test histogram](https://rclone.org/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
* [rclone test info](https://rclone.org/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
-* [rclone test makefiles](https://rclone.org/commands/rclone_test_makefiles/) - Make a random file hierarchy in
+* [rclone test makefiles](https://rclone.org/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory
* [rclone test memory](https://rclone.org/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.
+# rclone test changenotify
+
+Log any change notify requests for the remote passed in.
+
+```
+rclone test changenotify remote: [flags]
+```
+
+## Options
+
+```
+ -h, --help help for changenotify
+ --poll-interval duration Time to wait between polling for changes. (default 10s)
+```
+
+See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+
+## SEE ALSO
+
+* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command
+
# rclone test histogram
Makes a histogram of file name characters.
@@ -6292,7 +7119,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
# rclone test makefiles
-Make a random file hierarchy in
+Make a random file hierarchy in a directory
```
rclone test makefiles [flags]
@@ -6308,6 +7135,7 @@ rclone test makefiles [flags]
--max-name-length int Maximum size of file names (default 12)
--min-file-size SizeSuffix Minimum size of file to create
--min-name-length int Minimum size of file names (default 4)
+ --seed int Seed for the random number generator (0 for random) (default 1)
```
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
@@ -6485,7 +7313,7 @@ The syntax of the paths passed to the rclone command are as follows.
This refers to the local file system.
On Windows `\` may be used instead of `/` in local paths **only**,
-non local paths must use `/`. See [local filesystem](https://rclone.org/local/#windows-paths)
+non local paths must use `/`. See [local filesystem](https://rclone.org/local/#paths-on-windows)
documentation for more about Windows-specific paths.
These paths needn't start with a leading `/` - if they don't then they
@@ -6550,7 +7378,7 @@ adding the `--drive-shared-with-me` parameter to the remote `gdrive:`.
rclone lsf "gdrive,shared_with_me:path/to/dir"
The major advantage to using the connection string style syntax is
-that it only applies the the remote, not to all the remotes of that
+that it only applies to the remote, not to all the remotes of that
type of the command line. A common confusion is this attempt to copy a
file shared on google drive to the normal drive which **does not
work** because the `--drive-shared-with-me` flag applies to both the
@@ -6562,6 +7390,13 @@ However using the connection string syntax, this does work.
rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:
+Note that the connection string only affects the options of the immediate
+backend. If for example gdriveCrypt is a crypt based on gdrive, then the
+following command **will not work** as intended, because
+`shared_with_me` is ignored by the crypt backend:
+
+ rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:
+
The connection strings have the following syntax
remote,parameter=value,parameter2=value2:path/to/dir
@@ -6743,10 +7578,10 @@ possibly signed sequence of decimal numbers, each with optional
fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid
time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-Options which use SIZE use kByte by default. However, a suffix of `b`
-for bytes, `k` for kBytes, `M` for MBytes, `G` for GBytes, `T` for
-TBytes and `P` for PBytes may be used. These are the binary units, e.g.
-1, 2\*\*10, 2\*\*20, 2\*\*30 respectively.
+Options which use SIZE use KiByte (multiples of 1024 bytes) by default.
+However, a suffix of `B` for Byte, `K` for KiByte, `M` for MiByte,
+`G` for GiByte, `T` for TiByte and `P` for PiByte may be used. These are
+the binary units, e.g. 1, 2\*\*10, 2\*\*20, 2\*\*30, 2\*\*40 and 2\*\*50 respectively.
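
The suffix rules can be sketched as a small parser (simplified for illustration; rclone's actual parser accepts more forms, such as `Ki`/`Mi` spellings):

```python
# Sketch of the SIZE suffix rules above: bare numbers mean KiByte,
# and B/K/M/G/T/P select binary (1024-based) multiples.

_UNITS = {"B": 1, "K": 1024, "M": 1024**2,
          "G": 1024**3, "T": 1024**4, "P": 1024**5}

def parse_size(s):
    s = s.strip()
    if s and s[-1].upper() in _UNITS:
        return int(float(s[:-1]) * _UNITS[s[-1].upper()])
    return int(float(s) * 1024)   # no suffix: KiByte by default

print(parse_size("10M"))   # → 10485760
print(parse_size("512"))   # → 524288
```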
### --backup-dir=DIR ###
@@ -6789,23 +7624,23 @@ This option controls the bandwidth limit. For example
--bwlimit 10M
-would mean limit the upload and download bandwidth to 10 MByte/s.
+would mean limit the upload and download bandwidth to 10 MiByte/s.
**NB** this is **bytes** per second not **bits** per second. To use a
-single limit, specify the desired bandwidth in kBytes/s, or use a
-suffix b|k|M|G. The default is `0` which means to not limit bandwidth.
+single limit, specify the desired bandwidth in KiByte/s, or use a
+suffix B|K|M|G|T|P. The default is `0` which means to not limit bandwidth.
The upload and download bandwidth can be specified separately, as
`--bwlimit UP:DOWN`, so
--bwlimit 10M:100k
-would mean limit the upload bandwidth to 10 MByte/s and the download
-bandwidth to 100 kByte/s. Either limit can be "off" meaning no limit, so
+would mean limit the upload bandwidth to 10 MiByte/s and the download
+bandwidth to 100 KiByte/s. Either limit can be "off" meaning no limit, so
to just limit the upload bandwidth you would use
--bwlimit 10M:off
-this would limit the upload bandwidth to 10MByte/s but the download
+this would limit the upload bandwidth to 10 MiByte/s but the download
bandwidth would be unlimited.
When specified as above the bandwidth limits last for the duration of
@@ -6827,19 +7662,19 @@ working hours could be:
`--bwlimit "08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off"`
-In this example, the transfer bandwidth will be set to 512kBytes/sec
-at 8am every day. At noon, it will rise to 10MByte/s, and drop back
-to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to
-30MByte/s, and at 11pm it will be completely disabled (full speed).
+In this example, the transfer bandwidth will be set to 512 KiByte/s
+at 8am every day. At noon, it will rise to 10 MiByte/s, and drop back
+to 512 KiByte/s at 1pm. At 6pm, the bandwidth limit will be set to
+30 MiByte/s, and at 11pm it will be completely disabled (full speed).
Anything between 11pm and 8am will remain unlimited.
An example of timetable with `WEEKDAY` could be:
`--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"`
-It means that, the transfer bandwidth will be set to 512kBytes/sec on
-Monday. It will rise to 10MByte/s before the end of Friday. At 10:00
-on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will
+It means that the transfer bandwidth will be set to 512 KiByte/s on
+Monday. It will rise to 10 MiByte/s before the end of Friday. At 10:00
+on Saturday it will be set to 1 MiByte/s. From 20:00 on Sunday it will
be unlimited.
Timeslots without `WEEKDAY` are extended to the whole week. So this
@@ -6855,10 +7690,10 @@ Bandwidth limit apply to the data transfer for all backends. For most
backends the directory listing bandwidth is also included (exceptions
being the non HTTP backends, `ftp`, `sftp` and `tardigrade`).
-Note that the units are **Bytes/s**, not **Bits/s**. Typically
-connections are measured in Bits/s - to convert divide by 8. For
+Note that the units are **Byte/s**, not **bit/s**. Typically
+connections are measured in bit/s - to convert divide by 8. For
example, let's say you have a 10 Mbit/s connection and you wish rclone
-to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would
+to use half of it - 5 Mbit/s. This is 5/8 = 0.625 MiByte/s so you would
use a `--bwlimit 0.625M` parameter for rclone.
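
The worked conversion above in code form:

```python
# The bit/s to Byte/s conversion worked through above: a rate in
# Mbit/s divided by 8 gives MByte/s, which is what --bwlimit expects.

def mbit_to_mbyte(mbit_per_s):
    return mbit_per_s / 8

half_of_10mbit = 10 / 2
print(mbit_to_mbyte(half_of_10mbit))   # → 0.625
```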
On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled by
@@ -6879,7 +7714,7 @@ change the bwlimit dynamically:
This option controls per file bandwidth limit. For the options see the
`--bwlimit` flag.
-For example use this to allow no transfers to be faster than 1MByte/s
+For example use this to allow no transfers to be faster than 1 MiByte/s
--bwlimit-file 1M
@@ -6962,25 +7797,54 @@ See `--copy-dest` and `--backup-dir`.
### --config=CONFIG_FILE ###
-Specify the location of the rclone configuration file.
+Specify the location of the rclone configuration file, to override
+the default. E.g. `rclone config --config="rclone.conf"`.
-Normally the config file is in your home directory as a file called
-`.config/rclone/rclone.conf` (or `.rclone.conf` if created with an
-older version). If `$XDG_CONFIG_HOME` is set it will be at
-`$XDG_CONFIG_HOME/rclone/rclone.conf`.
+The exact default is a bit complex to describe, due to changes
+introduced through different versions of rclone while preserving
+backwards compatibility, but in most cases it is as simple as:
-If there is a file `rclone.conf` in the same directory as the rclone
-executable it will be preferred. This file must be created manually
-for Rclone to use it, it will never be created automatically.
+ - `%APPDATA%/rclone/rclone.conf` on Windows
+ - `~/.config/rclone/rclone.conf` on other
+
+The complete logic is as follows: Rclone will look for an existing
+configuration file in any of the following locations, in priority order:
+
+ 1. `rclone.conf` (in program directory, where rclone executable is)
+ 2. `%APPDATA%/rclone/rclone.conf` (only on Windows)
+ 3. `$XDG_CONFIG_HOME/rclone/rclone.conf` (on all systems, including Windows)
+ 4. `~/.config/rclone/rclone.conf` (see below for explanation of ~ symbol)
+ 5. `~/.rclone.conf`
+
+If no existing configuration file is found, then a new one will be created
+in the following location:
+
+- On Windows: Location 2 listed above, except in the unlikely event
+ that `APPDATA` is not defined, then location 4 is used instead.
+- On Unix: Location 3 if `XDG_CONFIG_HOME` is defined, else location 4.
+- Fallback (on all OS): location 5, used when the rclone directory cannot
+  be created. If a home directory cannot be found either, then the path
+  `.rclone.conf` relative to the current working directory is used as
+  a final resort.
+
+The `~` symbol in the paths above represents the home directory of the
+current user on any OS, and its value is defined as follows:
+
+ - On Windows: `%HOME%` if defined, else `%USERPROFILE%`, or else `%HOMEDRIVE%\%HOMEPATH%`.
+ - On Unix: `$HOME` if defined, else by looking up current user in OS-specific user database
+ (e.g. passwd file), or else use the result from shell command `cd && pwd`.
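
The lookup order can be sketched for a Unix-like system (simplified: the Windows `%APPDATA%` step and the final fallbacks are omitted):

```python
# Sketch of the config lookup order described above, Unix-like case.
# Candidate rclone.conf locations, highest priority first.

def candidate_config_paths(program_dir, env):
    paths = [f"{program_dir}/rclone.conf"]          # portable mode
    xdg = env.get("XDG_CONFIG_HOME")
    if xdg:
        paths.append(f"{xdg}/rclone/rclone.conf")
    home = env.get("HOME")
    if home:
        paths.append(f"{home}/.config/rclone/rclone.conf")
        paths.append(f"{home}/.rclone.conf")
    return paths

print(candidate_config_paths("/opt/rclone", {"HOME": "/home/u"}))
```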
If you run `rclone config file` you will see where the default
location is for you.
-Use this flag to override the config location, e.g. `rclone
---config=".myconfig" .config`.
+Because an existing file `rclone.conf` in the same directory
+as the rclone executable is always preferred, it is easy
+to run in "portable" mode: download the rclone executable to a
+writable directory and then create an empty file `rclone.conf` in the
+same directory.
-If the location is set to empty string `""` or the special value
-`/notfound`, or the os null device represented by value `NUL` on
+If the location is set to empty string `""` or a path to a file
+named `notfound`, or the os null device represented by value `NUL` on
Windows and `/dev/null` on Unix systems, then rclone will keep the
config file in memory only.
@@ -7063,7 +7927,7 @@ which feature does what.
This flag can be useful for debugging and in exceptional circumstances
(e.g. Google Drive limiting the total volume of Server Side Copies to
-100GB/day).
+100 GiB/day).
### --dscp VALUE ###
@@ -7080,6 +7944,8 @@ rclone copy --dscp LE from:/from to:/to
```
would make the priority lower than usual internet flows.
+This option has no effect on Windows (see [golang/go#42728](https://github.com/golang/go/issues/42728)).
+
### -n, --dry-run ###
Do a trial run with no permanent changes. Use this to see what rclone
@@ -7340,7 +8206,7 @@ This is the maximum allowable backlog of files in a sync/copy/move
queued for being checked or transferred.
This can be set arbitrarily large. It will only use memory when the
-queue is in use. Note that it will use in the order of N kB of memory
+queue is in use. Note that it will use on the order of N KiB of memory
when the backlog is in use.
Setting this large allows rclone to calculate how many files are
@@ -7469,13 +8335,13 @@ size of the file. To calculate the number of download streams Rclone
divides the size of the file by the `--multi-thread-cutoff` and rounds
up, up to the maximum set with `--multi-thread-streams`.
-So if `--multi-thread-cutoff 250MB` and `--multi-thread-streams 4` are
+So if `--multi-thread-cutoff 250M` and `--multi-thread-streams 4` are
in effect (the defaults):
-- 0MB..250MB files will be downloaded with 1 stream
-- 250MB..500MB files will be downloaded with 2 streams
-- 500MB..750MB files will be downloaded with 3 streams
-- 750MB+ files will be downloaded with 4 streams
+- 0..250 MiB files will be downloaded with 1 stream
+- 250..500 MiB files will be downloaded with 2 streams
+- 500..750 MiB files will be downloaded with 3 streams
+- 750+ MiB files will be downloaded with 4 streams
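The stream count described above is simply a ceiling division capped at `--multi-thread-streams`. A minimal sketch of that arithmetic (illustrative only, not rclone's actual code; the function name is made up), with sizes given in MiB:

```shell
# Illustrative: number of download streams for a file of a given size.
# streams = min(max(ceil(size / cutoff), 1), max_streams)
streams_for() {
  size=$1; cutoff=$2; max=$3
  n=$(( (size + cutoff - 1) / cutoff ))   # ceiling division
  [ "$n" -lt 1 ] && n=1                    # at least 1 stream
  [ "$n" -gt "$max" ] && n=$max            # capped at --multi-thread-streams
  echo "$n"
}
streams_for 100 250 4    # 1 stream
streams_for 600 250 4    # 3 streams
streams_for 2000 250 4   # capped at 4 streams
```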
### --no-check-dest ###
@@ -7766,14 +8632,14 @@ date formatting syntax.
### --stats-unit=bits|bytes ###
-By default, data transfer rates will be printed in bytes/second.
+By default, data transfer rates will be printed in bytes per second.
-This option allows the data rate to be printed in bits/second.
+This option allows the data rate to be printed in bits per second.
Data transfer volume will still be reported in bytes.
The rate is reported as a binary unit, not SI unit. So 1 Mbit/s
-equals 1,048,576 bits/s and not 1,000,000 bits/s.
+equals 1,048,576 bit/s and not 1,000,000 bit/s.
The default is `bytes`.
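The binary-versus-SI distinction above is just a factor-of-1024 versus factor-of-1000 expansion:

```shell
# With --stats-unit bits, rclone reports binary units: 1 Mbit/s means
# 1024*1024 bit/s, not the SI interpretation of 1000*1000 bit/s.
echo "binary Mbit: $(( 1024 * 1024 )) bit/s"
echo "SI Mbit:     $(( 1000 * 1000 )) bit/s"
```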
@@ -8208,16 +9074,21 @@ password prompts. To do that, pass the parameter
of asking for a password if `RCLONE_CONFIG_PASS` doesn't contain
a valid password, and `--password-command` has not been supplied.
-Some rclone commands, such as `genautocomplete`, do not require configuration.
-Nevertheless, rclone will read any configuration file found
-according to the rules described [above](https://rclone.org/docs/#config-config-file).
-If an encrypted configuration file is found, this means you will be prompted for
-password (unless using `--password-command`). To avoid this, you can bypass
-the loading of the configuration file by overriding the location with an empty
-string `""` or the special value `/notfound`, or the os null device represented
-by value `NUL` on Windows and `/dev/null` on Unix systems (before rclone
-version 1.55 only this null device alternative was supported).
-E.g. `rclone --config="" genautocomplete bash`.
+Whenever running commands that may be affected by options in a
+configuration file, rclone will look for an existing file according
+to the rules described [above](#config-config-file), and load any it
+finds. If an encrypted file is found, this includes decrypting it,
+with the possible consequence of a password prompt. When executing
+a command line that you know is not actually using anything from such
+a configuration file, you can avoid it being loaded by overriding the
+location, e.g. with one of the documented special values for
+memory-only configuration. Since only backend options can be stored
+in configuration files, this is normally unnecessary for commands
+that do not operate on backends, e.g. `genautocomplete`. However,
+it will be relevant for commands that do operate on backends in
+general, but are used without referencing a stored remote, e.g.
+listing local filesystem paths, or
+[connection strings](#connection-strings): `rclone --config="" ls .`
Developer options
-----------------
@@ -8416,6 +9287,8 @@ Or to always use the trash in drive `--drive-use-trash`, set
The same parser is used for the options and the environment variables
so they take exactly the same form.
+The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
+
### Config file ###
You can set defaults for values in the config file on an individual
@@ -8442,7 +9315,12 @@ mys3:
Note that if you want to create a remote using environment variables
you must create the `..._TYPE` variable as above.
-Note also that now rclone has [connectionstrings](#connection-strings),
+Note that you can only set the options of the immediate backend,
+so `RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID` has no effect if `myS3Crypt` is
+a crypt remote based on an S3 remote. However, `RCLONE_S3_ACCESS_KEY_ID` will
+set the access key of all remotes using S3, including `myS3Crypt`.
+
+Note also that now rclone has [connection strings](#connection-strings),
it is probably easier to use those instead which makes the above example
rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:
@@ -8452,16 +9330,20 @@ it is probably easier to use those instead which makes the above example
The various different methods of backend configuration are read in
this order and the first one with a value is used.
-- Flag values as supplied on the command line, e.g. `--drive-use-trash`.
-- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above).
-- Backend specific environment vars, e.g. `RCLONE_DRIVE_USE_TRASH`.
-- Config file, e.g. `use_trash = false`.
-- Default values, e.g. `true` - these can't be changed.
+- Parameters in connection strings, e.g. `myRemote,skip_links:`
+- Flag values as supplied on the command line, e.g. `--skip-links`
+- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_SKIP_LINKS` (see above).
+- Backend specific environment vars, e.g. `RCLONE_LOCAL_SKIP_LINKS`.
+- Backend generic environment vars, e.g. `RCLONE_SKIP_LINKS`.
+- Config file, e.g. `skip_links = true`.
+- Default values, e.g. `false` - these can't be changed.
-So if both `--drive-use-trash` is supplied on the config line and an
-environment variable `RCLONE_DRIVE_USE_TRASH` is set, the command line
+So if both `--skip-links` is supplied on the command line and an
+environment variable `RCLONE_LOCAL_SKIP_LINKS` is set, the command line
flag will take preference.
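The naming convention for the environment variable forms follows mechanically from the flag name. A small sketch (a made-up helper, not part of rclone) deriving the backend-specific and backend-generic variable names for a flag like `--skip-links` on the `local` backend; the remote-specific form additionally inserts `CONFIG_<REMOTENAME>`:

```shell
# Illustrative: uppercase the name and turn dashes into underscores,
# then prefix with RCLONE_ (plus the backend name for the specific form).
flag_env_names() {
  backend=$1; flag=$2
  up() { echo "$1" | tr 'a-z-' 'A-Z_'; }
  echo "RCLONE_$(up "$backend")_$(up "$flag")"   # backend specific
  echo "RCLONE_$(up "$flag")"                    # backend generic
}
flag_env_names local skip-links
# RCLONE_LOCAL_SKIP_LINKS
# RCLONE_SKIP_LINKS
```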
+The backend configurations set by environment variables can be seen with the `-vv` flag, e.g. `rclone about myRemote: -vv`.
+
For non backend configuration the order is as follows:
- Flag values as supplied on the command line, e.g. `--stats 5s`.
@@ -8474,8 +9356,11 @@ For non backend configuration the order is as follows:
- `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` (or the lowercase versions thereof).
- `HTTPS_PROXY` takes precedence over `HTTP_PROXY` for https requests.
- The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
+- `USER` and `LOGNAME` values are used as fallbacks for the current username. The primary method for looking up the username is OS-specific: the Windows API on Windows, the real user ID in /etc/passwd on Unix systems. In the documentation the current username is simply referred to as `$USER`.
- `RCLONE_CONFIG_DIR` - rclone **sets** this variable for use in config files and sub processes to point to the directory holding the config file.
+The options set by environment variables can be seen with the `-vv` and `--log-level=DEBUG` flags, e.g. `rclone version -vv`.
+
# Configuring rclone on a remote / headless machine #
Some of the configurations (those involving oauth2) require an
@@ -8598,26 +9483,26 @@ you expect. Instead use a `--filter...` flag.
Rclone matching rules follow a glob style:
- `*` matches any sequence of non-separator (`/`) characters
- `**` matches any sequence of characters including `/` separators
- `?` matches any single non-separator (`/`) character
- `[` [ `!` ] { character-range } `]`
- character class (must be non-empty)
- `{` pattern-list `}`
- pattern alternatives
- c matches character c (c != `*`, `**`, `?`, `\`, `[`, `{`, `}`)
- `\` c matches character c
+ * matches any sequence of non-separator (/) characters
+ ** matches any sequence of characters including / separators
+ ? matches any single non-separator (/) character
+ [ [ ! ] { character-range } ]
+ character class (must be non-empty)
+ { pattern-list }
+ pattern alternatives
+ c matches character c (c != *, **, ?, \, [, {, })
+ \c matches reserved character c (c = *, **, ?, \, [, {, })
character-range:
- c matches character c (c != `\\`, `-`, `]`)
- `\` c matches character c
- lo `-` hi matches character c for lo <= c <= hi
+ c matches character c (c != \, -, ])
+ \c matches reserved character c (c = \, -, ])
+ lo - hi matches character c for lo <= c <= hi
pattern-list:
- pattern { `,` pattern }
- comma-separated (without spaces) patterns
+ pattern { , pattern }
+ comma-separated (without spaces) patterns
character classes (see [Go regular expression reference](https://golang.org/pkg/regexp/syntax/)) include:
@@ -9149,17 +10034,17 @@ remote or flag value. The fix then is to quote values containing spaces.
### `--min-size` - Don't transfer any file smaller than this
Controls the minimum size file within the scope of an rclone command.
-Default units are `kBytes` but abbreviations `k`, `M`, or `G` are valid.
+Default units are `KiByte` but abbreviations `K`, `M`, `G`, `T` or `P` are valid.
-E.g. `rclone ls remote: --min-size 50k` lists files on `remote:` of 50kByte
+E.g. `rclone ls remote: --min-size 50k` lists files on `remote:` of 50 KiByte
size or larger.
### `--max-size` - Don't transfer any file larger than this
Controls the maximum size file within the scope of an rclone command.
-Default units are `kBytes` but abbreviations `k`, `M`, or `G` are valid.
+Default units are `KiByte` but abbreviations `K`, `M`, `G`, `T` or `P` are valid.
-E.g. `rclone ls remote: --max-size 1G` lists files on `remote:` of 1GByte
+E.g. `rclone ls remote: --max-size 1G` lists files on `remote:` of 1 GiByte
size or smaller.
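The suffixes expand as binary (KiB-based) multipliers, so `50k` means 50 KiB and `1G` means 1 GiB. A sketch of the conversion (illustrative only, not rclone's parser; the function name is made up):

```shell
# Illustrative: expand a size-with-suffix into bytes, assuming the binary
# units rclone uses (K=1024, M=1024^2, ...). No suffix means bytes.
size_to_bytes() {
  num=${1%[KkMmGgTtPp]}; suf=${1#"$num"}
  case $suf in
    [Kk]) mult=1024 ;;
    [Mm]) mult=$((1024*1024)) ;;
    [Gg]) mult=$((1024*1024*1024)) ;;
    [Tt]) mult=$((1024*1024*1024*1024)) ;;
    [Pp]) mult=$((1024*1024*1024*1024*1024)) ;;
    *)    mult=1 ;;
  esac
  echo $(( num * mult ))
}
size_to_bytes 50k   # 51200 bytes, i.e. what --min-size 50k compares against
size_to_bytes 1G    # 1073741824 bytes
```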
### `--max-age` - Don't transfer any file older than this
@@ -9213,8 +10098,8 @@ E.g. the scope of `rclone sync -i A: B:` can be restricted:
rclone --min-size 50k --delete-excluded sync A: B:
-All files on `B:` which are less than 50 kBytes are deleted
-because they are excluded from the rclone sync command.
+All files on `B:` which are less than 50 KiByte are deleted
+because they are excluded from the rclone sync command.
### `--dump filters` - dump the filters to the output
@@ -9878,8 +10763,14 @@ This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
- type - type of the new remote
-- obscure - optional bool - forces obscuring of passwords
-- noObscure - optional bool - forces passwords not to be obscured
+- opt - a dictionary of options to control the configuration
+ - obscure - declare passwords are plain and need obscuring
+ - noObscure - declare passwords are already obscured and don't need obscuring
+ - nonInteractive - don't interact with a user, return questions
+ - continue - continue the config process with an answer
+ - all - ask all the config questions not just the post config ones
+ - state - state to restart with - used with continue
+ - result - result to restart with - used with continue
See the [config create command](https://rclone.org/commands/rclone_config_create/) command for more information on the above.
@@ -9953,8 +10844,14 @@ This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
-- obscure - optional bool - forces obscuring of passwords
-- noObscure - optional bool - forces passwords not to be obscured
+- opt - a dictionary of options to control the configuration
+ - obscure - declare passwords are plain and need obscuring
+ - noObscure - declare passwords are already obscured and don't need obscuring
+ - nonInteractive - don't interact with a user, return questions
+ - continue - continue the config process with an answer
+ - all - ask all the config questions not just the post config ones
+ - state - state to restart with - used with continue
+ - result - result to restart with - used with continue
See the [config update command](https://rclone.org/commands/rclone_config_update/) command for more information on the above.
@@ -10128,7 +11025,7 @@ Returns the following values:
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
- "speed": average speed in bytes/sec since start of the group,
+ "speed": average speed in bytes per second since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
"totalTransfers": total number of transfers in the group,
@@ -10141,8 +11038,8 @@ Returns the following values:
"eta": estimated time in seconds until file transfer completion
"name": name of the file,
"percentage": progress of the file transfer in percent,
- "speed": average speed over the whole transfer in bytes/sec,
- "speedAvg": current speed in bytes/sec as an exponentially weighted moving average,
+ "speed": average speed over the whole transfer in bytes per second,
+ "speedAvg": current speed in bytes per second as an exponentially weighted moving average,
"size": size of the file in bytes
}
],
@@ -11198,6 +12095,7 @@ Here is an overview of the major features of each cloud storage system.
| SFTP | MD5, SHA1 ² | Yes | Depends | No | - |
| SugarSync | - | No | No | No | - |
| Tardigrade | - | Yes | No | No | - |
+| Uptobox | - | No | No | Yes | - |
| WebDAV | MD5, SHA1 ³ | Yes ⁴ | Depends | No | - |
| Yandex Disk | MD5 | Yes | No | No | R |
| Zoho WorkDrive | - | No | No | No | - |
@@ -11207,7 +12105,7 @@ Here is an overview of the major features of each cloud storage system.
¹ Dropbox supports [its own custom
hash](https://www.dropbox.com/developers/reference/content-hash).
-This is an SHA256 sum of all the 4MB block SHA256s.
+This is an SHA256 sum of all the 4 MiB block SHA256s.
² SFTP supports checksums if the same login has shell access and
`md5sum` or `sha1sum` as well as `echo` are in the remote's PATH.
@@ -11511,6 +12409,7 @@ upon backend specific capabilities.
| SFTP | No | No | Yes | Yes | No | No | Yes | No | Yes | Yes |
| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes |
| Tardigrade | Yes † | No | No | No | No | Yes | Yes | No | No | No |
+| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No |
| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No | Yes | Yes |
| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes |
| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | Yes | Yes |
@@ -11614,9 +12513,9 @@ These flags are available for every command.
--auto-confirm If enabled, do not request console confirmation.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --bwlimit-file BwTimetable Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16Mi)
+ --bwlimit BwTimetable Bandwidth limit in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
+ --bwlimit-file BwTimetable Bandwidth limit per file in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers.
@@ -11634,7 +12533,8 @@ These flags are available for every command.
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
+ --disable string Disable a comma separated list of features. Use --disable help to see a list.
+ --disable-http2 Disable HTTP/2 in the global transport.
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections. Can be value or names, eg. CS1, LE, DF, AF21.
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
@@ -11676,14 +12576,14 @@ These flags are available for every command.
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-duration duration Maximum duration rclone will transfer data for.
- --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
- --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-check-dest Don't check the destination, copy regardless.
@@ -11733,8 +12633,8 @@ These flags are available for every command.
--stats-one-line Make the stats fit on one line.
--stats-one-line-date Enables --stats-one-line and add current date/time prefix.
--stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100Ki)
--suffix string Suffix to add to changed files.
--suffix-keep-extension Preserve the extension when using --suffix.
--syslog Use Syslog for logging
@@ -11750,7 +12650,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.55.0")
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -11764,15 +12664,15 @@ and may be set in the config file.
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
--acd-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob.
--acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting.
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB). (default 4Mi)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata.
--azureblob-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
@@ -11786,12 +12686,12 @@ and may be set in the config file.
--azureblob-public-access string Public access level of a container: blob, container.
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal.
- --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
+ --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G)
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96Mi)
+ --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
--b2-download-url string Custom endpoint for downloads.
@@ -11802,7 +12702,7 @@ and may be set in the config file.
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200Mi)
--b2-versions Include old versions in directory listings.
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL.
@@ -11815,12 +12715,12 @@ and may be set in the config file.
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point.
--box-token string OAuth Access Token as a JSON blob.
--box-token-url string Token server url.
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB). (default 50Mi)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5Mi)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10Gi)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
@@ -11836,13 +12736,13 @@ and may be set in the config file.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
- --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G)
+ --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks.
--chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5")
--chunker-remote string Remote to chunk/unchunk.
--compress-level int GZIP compression level (-2 to 9). (default -1)
--compress-mode string Compression mode. (default "gzip")
- --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20M)
+ --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20Mi)
--compress-remote string Remote to compress.
-L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
@@ -11857,7 +12757,7 @@ and may be set in the config file.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-auth-url string Auth server URL.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-disable-http2 Disable drive using http2 (default true)
@@ -11887,13 +12787,16 @@ and may be set in the config file.
--drive-token string OAuth Access Token as a JSON blob.
--drive-token-url string Token server url.
--drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-shared-date Use date file was shared instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-auth-url string Auth server URL.
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-batch-mode string Upload file batching sync|async|off. (default "sync")
+ --dropbox-batch-size int Max number of files in upload batch.
+ --dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150Mi). (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
@@ -11904,6 +12807,8 @@ and may be set in the config file.
--dropbox-token-url string Token server url.
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
+ --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
@@ -11958,7 +12863,7 @@ and may be set in the config file.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-auth-url string Auth server URL.
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
--hubic-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
@@ -11967,9 +12872,10 @@ and may be set in the config file.
--hubic-token-url string Token server url.
--jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
+ --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
--jottacloud-trashed-only Only show files that are in the trash.
- --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
@@ -11984,16 +12890,16 @@ and may be set in the config file.
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
- --local-zero-size-links Assume the Stat size of links is zero (and read them instead)
+ --local-unicode-normalization Apply unicode NFC normalization to paths and filenames
+ --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (Deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash. (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
- --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3G)
- --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32M)
+ --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
+ --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32Mi)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega.
--mega-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -12002,7 +12908,7 @@ and may be set in the config file.
--mega-user string User name
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-auth-url string Auth server URL.
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10M)
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
@@ -12012,12 +12918,13 @@ and may be set in the config file.
--onedrive-link-password string Set the password for links created by the link command.
--onedrive-link-scope string Set the scope of the links created by the link command. (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command. (default "view")
+ --onedrive-list-chunk int Size of listing chunk. (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive. (default "global")
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs.
--onedrive-token string OAuth Access Token as a JSON blob.
--onedrive-token-url string Token server url.
- --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10M)
+ --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10Mi)
--opendrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password. (obscured)
--opendrive-username string Username
@@ -12032,20 +12939,20 @@ and may be set in the config file.
--premiumizeme-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
- --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4Mi)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
- --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G)
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5Mi)
+ --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -12060,6 +12967,7 @@ and may be set in the config file.
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
+ --s3-no-head-object If set, don't HEAD objects
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
@@ -12074,7 +12982,7 @@ and may be set in the config file.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
- --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint.
--s3-v2-auth If true use v2 authentication.
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
@@ -12087,6 +12995,7 @@ and may be set in the config file.
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-concurrent-reads If set don't use concurrent reads
+ --sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -12108,11 +13017,11 @@ and may be set in the config file.
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods.
--sftp-user string SSH username, leave blank for current username, $USER
- --sharefile-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 64M)
+ --sharefile-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 64Mi)
--sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls.
--sharefile-root-folder-id string ID of the root folder
- --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128M)
+ --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128Mi)
--skip-links Don't warn about skipped symlinks.
--sugarsync-access-key-id string Sugarsync Access Key ID.
--sugarsync-app-id string Sugarsync App ID.
@@ -12131,7 +13040,7 @@ and may be set in the config file.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
@@ -12157,9 +13066,12 @@ and may be set in the config file.
--union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category. (default "ff")
--union-upstreams string List of space separated upstreams.
+ --uptobox-access-token string Your access Token, get it from https://uptobox.com/my_account
+ --uptobox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string This sets the encoding for the backend.
+ --webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-pass string Password. (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.
@@ -12174,13 +13086,534 @@ and may be set in the config file.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
- --zoho-region string Zoho region to connect to. You'll have to use the region you organization is registered in.
+ --zoho-region string Zoho region to connect to.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
```
- 1Fichier
------------------------------------------
+# Docker Volume Plugin
+
+## Introduction
+
+Docker 1.9 has added support for creating
+[named volumes](https://docs.docker.com/storage/volumes/) via
+[command-line interface](https://docs.docker.com/engine/reference/commandline/volume_create/)
+and mounting them in containers as a way to share data between them.
+Since Docker 1.10 you can create named volumes with
+[Docker Compose](https://docs.docker.com/compose/) by descriptions in
+[docker-compose.yml](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference)
+files for use by container groups on a single host.
+As of Docker 1.12 volumes are supported by
+[Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts/)
+included with Docker Engine and created from descriptions in
+[swarm compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
+files for use with _swarm stacks_ across multiple cluster nodes.
+
+[Docker Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/)
+augment the default `local` volume driver included in Docker with stateful
+volumes shared across containers and hosts. Unlike local volumes, your
+data will _not_ be deleted when such volume is removed. Plugins can run
+managed by the docker daemon, as a native system service
+(under systemd, _sysv_ or _upstart_) or as a standalone executable.
+Rclone can run as a docker volume plugin in all these modes.
+It interacts with the local docker daemon
+via [plugin API](https://docs.docker.com/engine/extend/plugin_api/) and
+handles mounting of remote file systems into docker containers so it must
+run on the same host as the docker daemon or on every Swarm node.
+
+## Getting started
+
+In the first example we will use the [SFTP](https://rclone.org/sftp/)
+rclone volume with Docker engine on a standalone Ubuntu machine.
+
+Start from [installing Docker](https://docs.docker.com/engine/install/)
+on the host.
+
+The _FUSE_ driver is a prerequisite for rclone mounting and should be
+installed on the host:
+```
+sudo apt-get -y install fuse
+```
+
+Create two directories required by rclone docker plugin:
+```
+sudo mkdir -p /var/lib/docker-plugins/rclone/config
+sudo mkdir -p /var/lib/docker-plugins/rclone/cache
+```
+
+Install the managed rclone docker plugin:
+```
+docker plugin install rclone/docker-volume-rclone args="-v" --alias rclone --grant-all-permissions
+docker plugin list
+```
+
+Create your [SFTP volume](https://rclone.org/sftp/#standard-options):
+```
+docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
+```
+
+Note that since all options are static, you don't even have to run
+`rclone config` or create the `rclone.conf` file (but the `config` directory
+should still be present). In the simplest case you can use `localhost`
+as _hostname_ and your SSH credentials as _username_ and _password_.
+You can also change the remote path to your home directory on the host,
+for example `-o path=/home/username`.
+
+
+Time to create a test container and mount the volume into it:
+```
+docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash
+```
+
+If all goes well, you will enter the new container with your working
+directory set to the mounted SFTP remote. You can type `ls` to list it
+or otherwise play with it. Type `exit` when you are done.
+The container will stop but the volume will stay, ready to be reused.
+When it's not needed anymore, remove it:
+```
+docker volume list
+docker volume remove firstvolume
+```
+
+Now let us try **something more elaborate**:
+[Google Drive](https://rclone.org/drive/) volume on multi-node Docker Swarm.
+
+Start by installing Docker and FUSE, creating the plugin
+directories and installing the rclone plugin on _every_ swarm node.
+Then [setup the Swarm](https://docs.docker.com/engine/swarm/swarm-mode/).
+
+Google Drive volumes need an access token which can be set up via a web
+browser and will be periodically renewed by rclone. The managed
+plugin cannot run a browser so we will use a technique similar to the
+[rclone setup on a headless box](https://rclone.org/remote_setup/).
+
+Run [rclone config](https://rclone.org/commands/rclone_config_create/)
+on _another_ machine equipped with _web browser_ and graphical user interface.
+Create the [Google Drive remote](https://rclone.org/drive/#standard-options).
+When done, transfer the resulting `rclone.conf` to the Swarm cluster
+and save as `/var/lib/docker-plugins/rclone/config/rclone.conf`
+on _every_ node. By default this location is accessible only to the
+root user so you will need appropriate privileges. The resulting config
+will look like this:
+```
+[gdrive]
+type = drive
+scope = drive
+drive_id = 1234567...
+root_folder_id = 0Abcd...
+token = {"access_token":...}
+```
+
+Now create the file named `example.yml` with a swarm stack description
+like this:
+```
+version: '3'
+services:
+ heimdall:
+ image: linuxserver/heimdall:latest
+ ports: [8080:80]
+ volumes: [configdata:/config]
+volumes:
+ configdata:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:heimdall'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ poll_interval: 0
+```
+
+and run the stack:
+```
+docker stack deploy example -c ./example.yml
+```
+
+After a few seconds docker will spread the parsed stack description
+over the cluster, create the `example_heimdall` service on port _8080_,
+run service containers on one or more cluster nodes and request
+the `example_configdata` volume from rclone plugins on the node hosts.
+You can use the following commands to confirm results:
+```
+docker service ls
+docker service ps example_heimdall
+docker volume ls
+```
+
+Point your browser to `http://cluster.host.address:8080` and play with
+the service. Stop it with `docker stack remove example` when you are done.
+Note that the `example_configdata` volume(s) created on demand at the
+cluster nodes will not be automatically removed together with the stack
+but will stay for future reuse. You can remove them manually by invoking
+the `docker volume remove example_configdata` command on every node.
+
+## Creating Volumes via CLI
+
+Volumes can be created with [docker volume create](https://docs.docker.com/engine/reference/commandline/volume_create/).
+Here are a few examples:
+```
+docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
+docker volume create vol2 -d rclone -o remote=:tardigrade,access_grant=xxx:heimdall
+docker volume create vol3 -d rclone -o type=tardigrade -o path=heimdall -o tardigrade-access-grant=xxx -o poll-interval=0
+```
+
+Note the `-d rclone` flag that tells docker to request volume from the
+rclone driver. This works even though you installed the managed driver under
+its full name `rclone/docker-volume-rclone`, because you provided the
+`--alias rclone` option.
+
+Volumes can be inspected as follows:
+```
+docker volume list
+docker volume inspect vol1
+```
+
+## Volume Configuration
+
+Rclone flags and volume options are set via the `-o` flag to the
+`docker volume create` command. They include backend-specific parameters
+as well as mount and _VFS_ options. There are also a few
+special `-o` options:
+`remote`, `fs`, `type`, `path`, `mount-type` and `persist`.
+
+`remote` determines an existing remote name from the config file, with
+trailing colon and optionally with a remote path. See the full syntax in
+the [rclone documentation](https://rclone.org/docs/#syntax-of-remote-paths).
+This option can be aliased as `fs` to prevent confusion with the
+_remote_ parameter of such backends as _crypt_ or _alias_.
+
+The `remote=:backend:dir/subdir` syntax can be used to create
+[on-the-fly (config-less) remotes](https://rclone.org/docs/#backend-path-to-dir),
+while the `type` and `path` options provide a simpler alternative for this.
+Using two split options
+```
+-o type=backend -o path=dir/subdir
+```
+is equivalent to the combined syntax
+```
+-o remote=:backend:dir/subdir
+```
+but is arguably easier to parameterize in scripts.
+The `path` part is optional.
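+
+As a concrete sketch, both of the following commands should create a volume
+backed by the same (hypothetical) path on the local backend:
+```
+docker volume create v1 -d rclone -o type=local -o path=/tmp/share
+docker volume create v2 -d rclone -o remote=:local:/tmp/share
+```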
+
+[Mount and VFS options](https://rclone.org/commands/rclone_serve_docker/#options)
+as well as [backend parameters](https://rclone.org/flags/#backend-flags) are named
+like their twin command-line flags without the `--` CLI prefix.
+Optionally you can use underscores instead of dashes in option names.
+For example, `--vfs-cache-mode full` becomes
+`-o vfs-cache-mode=full` or `-o vfs_cache_mode=full`.
+Boolean CLI flags without value will gain the `true` value, e.g.
+`--allow-other` becomes `-o allow-other=true` or `-o allow_other=true`.
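+
+For example, assuming a configured `gdrive:` remote, these two commands
+should create equivalent volumes, one with dash-style and one with
+underscore-style option names:
+```
+docker volume create media1 -d rclone -o remote=gdrive:media -o allow-other=true -o vfs-cache-mode=full
+docker volume create media2 -d rclone -o remote=gdrive:media -o allow_other=true -o vfs_cache_mode=full
+```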
+
+Please note that you can provide parameters only for the backend immediately
+referenced by the backend type of mounted `remote`.
+If this is a wrapping backend like _alias, chunker or crypt_, you cannot
+provide options for the wrapped remote or backend. This limitation is
+imposed by the rclone connection string parser. The only workaround is to
+feed the plugin an `rclone.conf` file or configure plugin arguments (see below).
+
+## Special Volume Options
+
+`mount-type` determines the mount method and in general can be one of:
+`mount`, `cmount`, or `mount2`. This can be aliased as `mount_type`.
+It should be noted that the managed rclone docker plugin currently does
+not support the `cmount` method and `mount2` is rarely needed.
+This option defaults to the first found method, which is usually `mount`
+so you generally won't need it.
+
+`persist` is a reserved boolean (true/false) option.
+In the future it will allow persisting on-the-fly remotes in the plugin
+`rclone.conf` file.
+
+## Connection Strings
+
+The `remote` value can be extended
+with [connection strings](https://rclone.org/docs/#connection-strings)
+as an alternative way to supply backend parameters. This is equivalent
+to the `-o` backend options with one _syntactic difference_.
+Inside connection string the backend prefix must be dropped from parameter
+names but in the `-o param=value` array it must be present.
+For instance, compare the following option array
+```
+-o remote=:sftp:/home -o sftp-host=localhost
+```
+with equivalent connection string:
+```
+-o remote=:sftp,host=localhost:/home
+```
+This difference exists because flag options `-o key=val` include not only
+backend parameters but also mount/VFS flags and possibly other settings.
+It also allows distinguishing the `remote` option from `crypt-remote`
+(or similarly named backend parameters) and arguably simplifies scripting
+due to clearer value substitution.
+
+## Using with Swarm or Compose
+
+Both _Docker Swarm_ and _Docker Compose_ use
+[YAML](http://yaml.org/spec/1.2/spec.html)-formatted text files to describe
+groups (stacks) of containers, their properties, networks and volumes.
+_Compose_ uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) format,
+_Swarm_ uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) format.
+They are mostly similar; the differences are explained in the
+[docker documentation](https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading).
+
+Volumes are described by the children of the top-level `volumes:` node.
+Each of them should be named after its volume and have at least two
+elements, the self-explanatory `driver: rclone` value and the
+`driver_opts:` structure playing the same role as `-o key=val` CLI flags:
+
+```
+volumes:
+ volume_name_1:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ token: '{"type": "borrower", "expires": "2021-12-31"}'
+ poll_interval: 0
+```
+
+Notice a few important details:
+- YAML prefers `_` in option names instead of `-`.
+- YAML treats single and double quotes interchangeably.
+ Simple strings and integers can be left unquoted.
+- Boolean values must be quoted like `'true'` or `"false"` because
+ these two words are reserved by YAML.
+- The filesystem string is keyed with `remote` (or with `fs`).
+ Normally you can omit quotes here, but if the string ends with colon,
+ you **must** quote it like `remote: "storage_box:"`.
+- YAML is picky about surrounding braces in values as this is in fact
+ another [syntax for key/value mappings](http://yaml.org/spec/1.2/spec.html#id2790832).
+ For example, JSON access tokens usually contain double quotes and
+ surrounding braces, so you must put them in single quotes.
+
+## Installing as Managed Plugin
+
+The docker daemon can install plugins from an image registry and run them in managed mode.
+We maintain the
+[docker-volume-rclone](https://hub.docker.com/p/rclone/docker-volume-rclone/)
+plugin image on [Docker Hub](https://hub.docker.com).
+
+The plugin requires the presence of two directories on the host before it can
+be installed. Note that the plugin will **not** create them automatically.
+By default they must exist on host at the following locations
+(though you can tweak the paths):
+- `/var/lib/docker-plugins/rclone/config`
+ is reserved for the `rclone.conf` config file and **must** exist
+ even if it's empty and the config file is not present.
+- `/var/lib/docker-plugins/rclone/cache`
+ holds the plugin state file as well as optional VFS caches.
+
+You can [install managed plugin](https://docs.docker.com/engine/reference/commandline/plugin_install/)
+with default settings as follows:
+```
+docker plugin install rclone/docker-volume-rclone:latest --grant-all-permissions --alias rclone
+```
+
+The managed plugin is in fact a special container running in a namespace separate
+from normal docker containers. Inside it runs the `rclone serve docker`
+command. The config and cache directories are bind-mounted into the
+container at start. The docker daemon connects to a unix socket created
+by the command inside the container. The command creates on-demand remote
+mounts right inside, then docker machinery propagates them through kernel
+mount namespaces and bind-mounts into requesting user containers.
+
+You can tweak a few plugin settings after installation when it's disabled
+(not in use), for instance:
+```
+docker plugin disable rclone
+docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
+docker plugin enable rclone
+docker plugin inspect rclone
+```
+
+Note that if docker refuses to disable the plugin, you should find and
+remove all active volumes connected with it as well as containers and
+swarm services that use them. This is rather tedious, so please plan
+carefully in advance.
+
+You can tweak the following settings:
+`args`, `config`, `cache`, and `RCLONE_VERBOSE`.
+It's _your_ task to keep plugin settings in sync across swarm cluster nodes.
+
+`args` sets command-line arguments for the `rclone serve docker` command
+(_none_ by default). Arguments should be separated by spaces so you will
+normally want to put them in quotes on the
+[docker plugin set](https://docs.docker.com/engine/reference/commandline/plugin_set/)
+command line. Both [serve docker flags](https://rclone.org/commands/rclone_serve_docker/#options)
+and [generic rclone flags](https://rclone.org/flags/) are supported, including backend
+parameters that will be used as defaults for volume creation.
+Note that the plugin will fail (due to [this docker bug](https://github.com/moby/moby/blob/v20.10.7/plugin/v2/plugin.go#L195))
+if the `args` value is empty. Use e.g. `args="-v"` as a workaround.
+
+`config=/host/dir` sets an alternative host location for the config directory.
+The plugin will look for `rclone.conf` here. It's not an error if the config
+file is not present but the directory must exist. Please note that plugin
+can periodically rewrite the config file, for example when it renews
+storage access tokens. Keep this in mind and try to avoid races between
+the plugin and other instances of rclone on the host that might try to
+change the config simultaneously, resulting in a corrupted `rclone.conf`.
+You can also put stuff like private key files for SFTP remotes in this
+directory. Just note that it's bind-mounted inside the plugin container
+at the predefined path `/data/config`. For example, if your key file is
+named `sftp-box1.key` on the host, the corresponding volume config option
+should read `-o sftp-key-file=/data/config/sftp-box1.key`.
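+
+Putting this together, a volume using such a key file could be created
+like this (the hostname and username here are placeholders):
+```
+docker volume create sftpvol -d rclone \
+  -o type=sftp -o sftp-host=box1.example.com -o sftp-user=backup \
+  -o sftp-key-file=/data/config/sftp-box1.key -o allow-other=true
+```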
+
+`cache=/host/dir` sets an alternative host location for the _cache_ directory.
+The plugin will keep VFS caches here. It will also create and maintain
+the `docker-plugin.state` file in this directory. When the plugin is
+restarted or reinstalled, it will look in this file to recreate any volumes
+that existed previously. However, they will not be re-mounted into
+consuming containers after restart. Usually this is not a problem as
+the docker daemon will normally restart affected user containers after
+failures, daemon restarts or host reboots.
+
+`RCLONE_VERBOSE` sets plugin verbosity from `0` (errors only, by default)
+to `2` (debugging). Verbosity can also be tweaked via `args="-v [-v] ..."`.
+Since arguments are more generic, you will rarely need this setting.
+The plugin output by default feeds the docker daemon log on local host.
+Log entries are reflected as _errors_ in the docker log but retain their
+actual level assigned by rclone in the encapsulated message string.
+
+You can set custom plugin options right when you install it, _in one go_:
+```
+docker plugin remove rclone
+docker plugin install rclone/docker-volume-rclone:latest \
+ --alias rclone --grant-all-permissions \
+ args="-v --allow-other" config=/etc/rclone
+docker plugin inspect rclone
+```
+
+## Healthchecks
+
+The docker plugin volume protocol doesn't provide a way for plugins
+to inform the docker daemon that a volume is (un-)available.
+As a workaround you can set up a healthcheck to verify that the mount
+is responding, for example:
+```
+services:
+ my_service:
+ image: my_image
+ healthcheck:
+ test: ls /path/to/rclone/mount || exit 1
+ interval: 1m
+ timeout: 15s
+ retries: 3
+ start_period: 15s
+```
+
+## Running Plugin under Systemd
+
+In most cases you should prefer managed mode. Moreover, macOS and Windows
+do not support native Docker plugins. Please use managed mode on these
+systems. Proceed further only if you are on Linux.
+
+First, [install rclone](https://rclone.org/install/).
+You can just run it (type `rclone serve docker` and hit enter) as a test.
+
+Install _FUSE_:
+```
+sudo apt-get -y install fuse
+```
+
+Download two systemd configuration files:
+[docker-volume-rclone.service](https://raw.githubusercontent.com/rclone/rclone/master/cmd/serve/docker/contrib/systemd/docker-volume-rclone.service)
+and [docker-volume-rclone.socket](https://raw.githubusercontent.com/rclone/rclone/master/cmd/serve/docker/contrib/systemd/docker-volume-rclone.socket).
+
+Put them into the `/etc/systemd/system/` directory:
+```
+cp docker-volume-rclone.service /etc/systemd/system/
+cp docker-volume-rclone.socket /etc/systemd/system/
+```
+
+Please note that all commands in this section must be run as _root_ but
+we omit the `sudo` prefix for brevity.
+Now create directories required by the service:
+```
+mkdir -p /var/lib/docker-volumes/rclone
+mkdir -p /var/lib/docker-plugins/rclone/config
+mkdir -p /var/lib/docker-plugins/rclone/cache
+```
+
+Run the docker plugin service in socket-activated mode:
+```
+systemctl daemon-reload
+systemctl start docker-volume-rclone.service
+systemctl enable docker-volume-rclone.socket
+systemctl start docker-volume-rclone.socket
+systemctl restart docker
+```
+
+Or run the service directly:
+- run `systemctl daemon-reload` to let systemd pick up new config
+- run `systemctl enable docker-volume-rclone.service` to make the new
+ service start automatically when you power on your machine.
+- run `systemctl start docker-volume-rclone.service`
+ to start the service now.
+- run `systemctl restart docker` to restart docker daemon and let it
+ detect the new plugin socket. Note that this step is not needed in
+ managed mode where docker knows about plugin state changes.
+
+The two methods are equivalent from the user perspective, but I personally
+prefer socket activation.
+
+## Troubleshooting
+
+You can [see managed plugin settings](https://docs.docker.com/engine/extend/#debugging-plugins)
+with
+```
+docker plugin list
+docker plugin inspect rclone
+```
+Note that docker (at least up to version 20.10.7) will not show the actual
+values of `args`, just the defaults.
+
+Use `journalctl --unit docker` to see managed plugin output as part of
+the docker daemon log. Note that docker reflects plugin lines as _errors_
+but their actual level can be seen from the encapsulated message string.
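+
+For instance, to follow only the plugin messages you could filter the
+daemon log by the plugin name (the exact message format may vary between
+docker versions, so treat this as a sketch):
+```
+journalctl --unit docker --follow | grep docker-volume-rclone
+```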
+
+You will usually install the latest version of managed plugin.
+Use the following commands to print the actual installed version:
+```
+PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
+sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
+```
+
+You can even use `runc` to run shell inside the plugin container:
+```
+sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
+```
+
+Also you can use curl to check the plugin socket connectivity:
+```
+docker plugin list --no-trunc
+PLUGID=123abc...
+sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
+```
+though this is rarely needed.
+
+Finally I'd like to mention a _caveat with updating volume settings_.
+Docker CLI does not have a dedicated command like `docker volume update`.
+It may be tempting to invoke `docker volume create` with updated options
+on an existing volume, but there is a gotcha: the command will do nothing,
+and it won't even return an error. I hope that the docker maintainers will fix
+this some day. In the meantime be aware that you must remove your volume
+before recreating it with new settings:
+```
+docker volume remove my_vol
+docker volume create my_vol -d rclone -o opt1=new_val1 ...
+```
+
+and verify that settings did update:
+```
+docker volume list
+docker volume inspect my_vol
+```
+
+If docker refuses to remove the volume, you should find containers
+or swarm services that use it and stop them first.
+
+# 1Fichier
This is a backend for the [1fichier](https://1fichier.com) cloud
storage service. Note that a Premium subscription is required to use
@@ -12315,6 +13748,28 @@ If you want to download a shared folder, add this parameter
- Type: string
- Default: ""
+#### --fichier-file-password
+
+If you want to download a shared file that is password protected, add this parameter
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+- Config: file_password
+- Env Var: RCLONE_FICHIER_FILE_PASSWORD
+- Type: string
+- Default: ""
+
+#### --fichier-folder-password
+
+If you want to list the files in a shared folder that is password protected, add this parameter
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+- Config: folder_password
+- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
+- Type: string
+- Default: ""
+
#### --fichier-encoding
This sets the encoding for the backend.
@@ -12337,8 +13792,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Alias
------------------------------------------
+# Alias
The `alias` remote provides a new name for another remote.
@@ -12438,8 +13892,7 @@ Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
- Amazon Drive
------------------------------------------
+# Amazon Drive
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.
@@ -12662,16 +14115,16 @@ Checkpoint for internal polling (debug).
#### --acd-upload-wait-per-gb
-Additional time per GB to wait after a failed complete upload to see if it appears.
+Additional time per GiB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
-happens sometimes for files over 1GB in size and nearly every time for
-files bigger than 10GB. This parameter controls the time rclone waits
+happens sometimes for files over 1 GiB in size and nearly every time for
+files bigger than 10 GiB. This parameter controls the time rclone waits
for the file to appear.
-The default value for this parameter is 3 minutes per GB, so by
-default it will wait 3 minutes for every GB uploaded to see if the
+The default value for this parameter is 3 minutes per GiB, so by
+default it will wait 3 minutes for every GiB uploaded to see if the
file appears.
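As a rough sketch of what this wait implies (the exact retry behaviour is internal to rclone):

```python
# Sketch: --acd-upload-wait-per-gb defaults to 3 minutes per GiB, so a
# failed "complete upload" on a large file can mean a long wait before
# rclone gives up looking for the file.
MINUTES_PER_GIB = 3  # rclone default for --acd-upload-wait-per-gb

def max_wait_minutes(size_gib: float) -> float:
    """Upper bound on the time rclone waits for the file to appear."""
    return size_gib * MINUTES_PER_GIB

print(max_wait_minutes(10))  # a 10 GiB upload may wait up to 30 minutes
```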
You can disable this feature by setting it to 0. This may cause
@@ -12695,7 +14148,7 @@ Files >= this size will be downloaded via their tempLink.
Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
-of files bigger than about 10GB. The default for this is 9GB which
+of files bigger than about 10 GiB. The default for this is 9 GiB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink"
@@ -12705,7 +14158,7 @@ underlying S3 storage.
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
-- Default: 9G
+- Default: 9Gi
#### --acd-encoding
@@ -12734,7 +14187,7 @@ Amazon Drive has an internal limit of file sizes that can be uploaded
to the service. This limit is not officially published, but all files
larger than this will fail.
-At the time of writing (Jan 2016) is in the area of 50GB per file.
+At the time of writing (Jan 2016) it is in the area of 50 GiB per file.
This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
@@ -12751,8 +14204,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Amazon S3 Storage Providers
---------------------------------------------------------
+# Amazon S3 Storage Providers
The S3 backend can be used with a number of different providers:
@@ -12765,6 +14217,7 @@ The S3 backend can be used with a number of different providers:
- IBM COS S3
- Minio
- Scaleway
+- SeaweedFS
- StackPath
- Tencent Cloud Object Storage (COS)
- Wasabi
@@ -13075,7 +14528,7 @@ objects). See the [rclone docs](https://rclone.org/docs/#fast-list) for more det
`--fast-list` trades off API transactions for memory use. As a rough
guide rclone uses 1k of memory per object stored, so using
-`--fast-list` on a sync of a million objects will use roughly 1 GB of
+`--fast-list` on a sync of a million objects will use roughly 1 GiB of
RAM.
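The rule of thumb above works out as follows (a sketch; actual memory use varies with object metadata):

```python
# "1k of memory per object stored" with --fast-list
BYTES_PER_OBJECT = 1024

objects = 1_000_000
ram_gib = objects * BYTES_PER_OBJECT / 2**30
print(f"{ram_gib:.2f} GiB")  # roughly 1 GiB for a million objects
```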
If you are only copying a small number of files into a big repository
@@ -13155,13 +14608,13 @@ work with the SDK properly:
### Multipart uploads ###
rclone supports multipart uploads with S3 which means that it can
-upload files bigger than 5GB.
+upload files bigger than 5 GiB.
Note that files uploaded *both* with multipart upload *and* through
crypt remotes do not have MD5 sums.
rclone switches from single part uploads to multipart uploads at the
-point specified by `--s3-upload-cutoff`. This can be a maximum of 5GB
+point specified by `--s3-upload-cutoff`. This can be a maximum of 5 GiB
and a minimum of 0 (ie always upload multipart files).
The chunk sizes used in the multipart upload are specified by
@@ -13297,7 +14750,7 @@ Vault API, so rclone cannot directly access Glacier Vaults.
### Standard Options
-Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS).
+Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
#### --s3-provider
@@ -13326,6 +14779,8 @@ Choose your S3 provider.
- Netease Object Storage (NOS)
- "Scaleway"
- Scaleway Object Storage
+ - "SeaweedFS"
+ - SeaweedFS S3
- "StackPath"
- StackPath Object Storage
- "TencentCOS"
@@ -13639,6 +15094,10 @@ Endpoint for OSS API.
- Type: string
- Default: ""
- Examples:
+ - "oss-accelerate.aliyuncs.com"
+ - Global Accelerate
+ - "oss-accelerate-overseas.aliyuncs.com"
+ - Global Accelerate (outside mainland China)
- "oss-cn-hangzhou.aliyuncs.com"
- East China 1 (Hangzhou)
- "oss-cn-shanghai.aliyuncs.com"
@@ -13650,9 +15109,17 @@ Endpoint for OSS API.
- "oss-cn-zhangjiakou.aliyuncs.com"
- North China 3 (Zhangjiakou)
- "oss-cn-huhehaote.aliyuncs.com"
- - North China 5 (Huhehaote)
+ - North China 5 (Hohhot)
+ - "oss-cn-wulanchabu.aliyuncs.com"
+ - North China 6 (Ulanqab)
- "oss-cn-shenzhen.aliyuncs.com"
- South China 1 (Shenzhen)
+ - "oss-cn-heyuan.aliyuncs.com"
+ - South China 2 (Heyuan)
+ - "oss-cn-guangzhou.aliyuncs.com"
+ - South China 3 (Guangzhou)
+ - "oss-cn-chengdu.aliyuncs.com"
+ - West China 1 (Chengdu)
- "oss-cn-hongkong.aliyuncs.com"
- Hong Kong (Hong Kong)
- "oss-us-west-1.aliyuncs.com"
@@ -13774,6 +15241,8 @@ Required when using an S3 clone.
- Digital Ocean Spaces Amsterdam 3
- "sgp1.digitaloceanspaces.com"
- Digital Ocean Spaces Singapore 1
+ - "localhost:8333"
+ - SeaweedFS S3 localhost
- "s3.wasabisys.com"
- Wasabi US East endpoint
- "s3.us-west-1.wasabisys.com"
@@ -14079,7 +15548,7 @@ The storage class to use when storing new objects in S3.
### Advanced Options
-Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS).
+Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
#### --s3-bucket-acl
@@ -14160,12 +15629,12 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
#### --s3-chunk-size
@@ -14186,15 +15655,15 @@ Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
-chunk_size. Since the default chunk size is 5MB and there can be at
+chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
-a file you can stream upload is 48GB. If you wish to stream upload
+a file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.
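The streaming limit above is just the chunk size multiplied by the 10,000-part limit; a quick sketch of the arithmetic, including how to size chunks for a bigger stream:

```python
import math

# Max streamed upload size = chunk_size x 10,000 parts (S3 multipart limit).
chunk_size = 5 * 2**20      # default --s3-chunk-size of 5 MiB
max_parts = 10_000
max_stream = chunk_size * max_parts
print(max_stream / 2**30)   # ~48.8 GiB, quoted as 48 GiB above

# Conversely, to stream upload a 1 TiB file of unknown size you would
# need a chunk_size of at least:
needed_mib = math.ceil(2**40 / max_parts / 2**20)
print(needed_mib)           # 105 (MiB)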
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
#### --s3-max-upload-parts
@@ -14222,12 +15691,12 @@ Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4.656G
+- Default: 4.656Gi
#### --s3-disable-checksum
@@ -14429,6 +15898,15 @@ very small even with this flag.
- Type: bool
- Default: false
+#### --s3-no-head-object
+
+If set, don't HEAD objects
+
+- Config: no_head_object
+- Env Var: RCLONE_S3_NO_HEAD_OBJECT
+- Type: bool
+- Default: false
+
#### --s3-encoding
This sets the encoding for the backend.
@@ -14629,7 +16107,7 @@ Then use it as normal with the name of the public bucket, e.g.
You will be able to list and copy data but not upload it.
-### Ceph ###
+## Ceph
[Ceph](https://ceph.com/) is an open source unified, distributed
storage system designed for excellent performance, reliability and
@@ -14685,7 +16163,7 @@ removed).
Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
-### Dreamhost ###
+## Dreamhost
Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
an object storage system based on CEPH.
@@ -14709,7 +16187,7 @@ server_side_encryption =
storage_class =
```
-### DigitalOcean Spaces ###
+## DigitalOcean Spaces
[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
@@ -14755,7 +16233,7 @@ rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```
-### IBM COS (S3) ###
+## IBM COS (S3)
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
@@ -14927,7 +16405,7 @@ acl> 1
rclone delete IBM-COS-XREGION:newbucket/file.txt
```
-### Minio ###
+## Minio
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
@@ -14994,7 +16472,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files minio:bucket
```
-### Scaleway {#scaleway}
+## Scaleway
[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos.
Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool.
@@ -15016,7 +16494,57 @@ server_side_encryption =
storage_class =
```
-### Wasabi ###
+## SeaweedFS
+
+[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for
+blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store.
+It has an S3 compatible object storage interface.
+
+Assuming SeaweedFS is configured with `weed shell` as follows:
+```
+> s3.bucket.create -name foo
+> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
+{
+ "identities": [
+ {
+ "name": "me",
+ "credentials": [
+ {
+ "accessKey": "any",
+ "secretKey": "any"
+ }
+ ],
+ "actions": [
+ "Read:foo",
+ "Write:foo",
+ "List:foo",
+ "Tagging:foo",
+ "Admin:foo"
+ ]
+ }
+ ]
+}
+```
+
+To use rclone with SeaweedFS, the above configuration should end up with
+something like this in your config:
+
+```
+[seaweedfs_s3]
+type = s3
+provider = SeaweedFS
+access_key_id = any
+secret_access_key = any
+endpoint = localhost:8333
+```
+
+So once set up, for example to copy files into a bucket
+
+```
+rclone copy /path/to/files seaweedfs_s3:foo
+```
+
+## Wasabi
[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
@@ -15129,7 +16657,7 @@ server_side_encryption =
storage_class =
```
-### Alibaba OSS {#alibaba-oss}
+## Alibaba OSS {#alibaba-oss}
Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
configuration. First run:
@@ -15239,7 +16767,7 @@ d) Delete this remote
y/e/d> y
```
-### Tencent COS {#tencent-cos}
+## Tencent COS {#tencent-cos}
[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
@@ -15371,13 +16899,13 @@ Name Type
cos s3
```
-### Netease NOS
+## Netease NOS
For Netease NOS configure as per the configurator `rclone config`
setting the provider `Netease`. This will automatically set
`force_path_style = false` which is necessary for it to run properly.
-### Limitations
+## Limitations
`rclone about` is not supported by the S3 backend. Backends without
this capability cannot determine free space for an rclone mount or
@@ -15387,8 +16915,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Backblaze B2
-----------------------------------------
+# Backblaze B2
B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
@@ -15539,8 +17066,8 @@ depending on your hardware, how big the files are, how much you want
to load your computer, etc. The default of `--transfers 4` is
definitely too low for Backblaze B2 though.
-Note that uploading big files (bigger than 200 MB by default) will use
-a 96 MB RAM buffer by default. There can be at most `--transfers` of
+Note that uploading big files (bigger than 200 MiB by default) will use
+a 96 MiB RAM buffer by default. There can be at most `--transfers` of
these in use at any moment, so this sets the upper limit on the memory
used.
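So the upper bound on buffer memory is simply the chunk size times the number of transfers (a sketch with the defaults):

```python
# One 96 MiB buffer per in-flight large-file upload.
chunk_mib = 96   # default --b2-chunk-size
transfers = 4    # default --transfers
print(chunk_mib * transfers)  # up to 384 MiB of upload buffers
```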
@@ -15556,11 +17083,6 @@ the file instead of hiding it.
Old versions of files, where available, are visible using the
`--b2-versions` flag.
-**NB** Note that `--b2-versions` does not work with crypt at the
-moment [#1627](https://github.com/rclone/rclone/issues/1627). Using
-[--backup-dir](https://rclone.org/docs/#backup-dir-dir) with rclone is the recommended
-way of working around this.
-
If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
@@ -15790,12 +17312,12 @@ Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
-This value should be set no larger than 4.657GiB (== 5GB).
+This value should be set no larger than 4.657 GiB (== 5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
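The 4.657 GiB ceiling quoted above is B2's 5 GB (decimal) part-size limit expressed in binary units:

```python
limit_gb = 5 * 10**9          # 5 GB in decimal units
limit_gib = limit_gb / 2**30  # the same size in binary units
print(f"{limit_gib:.3f} GiB") # 4.657 GiB
```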
#### --b2-copy-cutoff
@@ -15804,12 +17326,12 @@ Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
-The minimum is 0 and the maximum is 4.6GB.
+The minimum is 0 and the maximum is 4.6 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4G
+- Default: 4Gi
#### --b2-chunk-size
@@ -15823,7 +17345,7 @@ minimum size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 96M
+- Default: 96Mi
#### --b2-disable-checksum
@@ -15909,8 +17431,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Box
------------------------------------------
+# Box
Paths are specified as `remote:path`
@@ -16131,10 +17652,10 @@ as they can't be used in JSON strings.
### Transfers ###
-For files above 50MB rclone will use a chunked transfer. Rclone will
+For files above 50 MiB rclone will use a chunked transfer. Rclone will
upload up to `--transfers` chunks at the same time (shared among all
the multipart uploads). Chunks are buffered in memory and are
-normally 8MB so increasing `--transfers` will increase memory use.
+normally 8 MiB so increasing `--transfers` will increase memory use.
### Deleting files ###
@@ -16275,12 +17796,12 @@ Fill in for rclone to use a non root folder as its starting point.
#### --box-upload-cutoff
-Cutoff for switching to multipart upload (>= 50MB).
+Cutoff for switching to multipart upload (>= 50 MiB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 50M
+- Default: 50Mi
#### --box-commit-retries
@@ -16323,8 +17844,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Cache (BETA)
------------------------------------------
+# Cache (DEPRECATED)
The `cache` remote wraps another existing remote and stores file structure
and its data for long running tasks like `rclone mount`.
@@ -16390,11 +17910,11 @@ password:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
- 1 / 1MB
- \ "1m"
- 2 / 5 MB
+ 1 / 1 MiB
+ \ "1M"
+ 2 / 5 MiB
\ "5M"
- 3 / 10 MB
+ 3 / 10 MiB
\ "10M"
chunk_size> 2
How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
@@ -16411,11 +17931,11 @@ info_age> 2
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
- 1 / 500 MB
+ 1 / 500 MiB
\ "500M"
- 2 / 1 GB
+ 2 / 1 GiB
\ "1G"
- 3 / 10 GB
+ 3 / 10 GiB
\ "10G"
chunk_total_size> 3
Remote config
@@ -16681,14 +18201,14 @@ will need to be cleared or unexpected EOF errors will occur.
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
- Examples:
- - "1m"
- - 1MB
+ - "1M"
+ - 1 MiB
- "5M"
- - 5 MB
+ - 5 MiB
- "10M"
- - 10 MB
+ - 10 MiB
#### --cache-info-age
@@ -16718,14 +18238,14 @@ oldest chunks until it goes under this value.
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
-- Default: 10G
+- Default: 10Gi
- Examples:
- "500M"
- - 500 MB
+ - 500 MiB
- "1G"
- - 1 GB
+ - 1 GiB
- "10G"
- - 10 GB
+ - 10 GiB
### Advanced Options
@@ -16962,8 +18482,7 @@ Print stats on the cache backend in JSON format.
-Chunker (BETA)
-----------------------------------------
+# Chunker (BETA)
The `chunker` overlay transparently splits large files into smaller chunks
during upload to wrapped remote and transparently assembles them back
@@ -17002,7 +18521,7 @@ Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
Enter a string value. Press Enter for the default ("").
remote> remote:path
Files larger than chunk size will be split in chunks.
-Enter a size with suffix k,M,G,T. Press Enter for the default ("2G").
+Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
chunk_size> 100M
Choose how chunker handles hash sums. All modes but "none" require metadata.
Enter a string value. Press Enter for the default ("md5").
@@ -17291,7 +18810,7 @@ Files larger than chunk size will be split in chunks.
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 2G
+- Default: 2Gi
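As a sketch of the splitting rule (files at or below the chunk size are stored as-is; larger files become multiple chunks):

```python
import math

chunk_size = 2 * 2**30  # default chunker chunk_size of 2Gi

def num_chunks(file_size: int) -> int:
    """Number of stored pieces for a file of the given size (sketch)."""
    return max(1, math.ceil(file_size / chunk_size))

print(num_chunks(5 * 2**30))  # a 5 GiB file is split into 3 chunks
print(num_chunks(100))        # small files remain a single piece
```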
#### --chunker-hash-type
@@ -17400,7 +18919,7 @@ Choose how chunker should handle temporary files during transactions.
-## Citrix ShareFile
+# Citrix ShareFile
[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed as business.
@@ -17509,10 +19028,10 @@ flag.
### Transfers ###
-For files above 128MB rclone will use a chunked transfer. Rclone will
+For files above 128 MiB rclone will use a chunked transfer. Rclone will
upload up to `--transfers` chunks at the same time (shared among all
the multipart uploads). Chunks are buffered in memory and are
-normally 64MB so increasing `--transfers` will increase memory use.
+normally 64 MiB so increasing `--transfers` will increase memory use.
### Limitations ###
@@ -17588,7 +19107,7 @@ Cutoff for switching to multipart upload.
- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 128M
+- Default: 128Mi
#### --sharefile-chunk-size
@@ -17602,7 +19121,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 64M
+- Default: 64Mi
#### --sharefile-endpoint
@@ -17639,8 +19158,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
-Crypt
-----------------------------------------
+# Crypt
Rclone `crypt` remotes encrypt and decrypt other remotes.
@@ -18263,7 +19781,7 @@ approximately 2×10⁻³² of re-using a nonce.
#### Chunk
-Each chunk will contain 64kB of data, except for the last one which
+Each chunk will contain 64 KiB of data, except for the last one which
may have less data. The data chunk is in standard NaCl SecretBox
format. SecretBox uses XSalsa20 and Poly1305 to encrypt and
authenticate messages.
@@ -18289,12 +19807,12 @@ This uses a 32 byte (256 bit key) key derived from the user password.
49 bytes total
-1MB (1048576 bytes) file will encrypt to
+1 MiB (1048576 bytes) file will encrypt to
* 32 bytes header
* 16 chunks of 65568 bytes
-1049120 bytes total (a 0.05% overhead). This is the overhead for big
+1049120 bytes total (a 0.05% overhead). This is the overhead for big
files.
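The figures above can be checked directly (a sketch using the sizes quoted in this section):

```python
# 1 MiB of plaintext: a 32-byte header plus 16 encrypted chunks.
header = 32
data = 1048576               # 1 MiB plaintext
chunk_plain = 65536          # 64 KiB of data per chunk
chunk_stored = 65568         # encrypted chunk size, per the docs
chunks = data // chunk_plain # 16 full chunks
total = header + chunks * chunk_stored
print(total)                           # 1049120 bytes
print(f"{(total - data) / data:.2%}")  # ~0.05% overhead
```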
### Name encryption
@@ -18346,8 +19864,7 @@ a salt.
* [rclone cryptdecode](https://rclone.org/commands/rclone_cryptdecode/) - Show forward/reverse mapping of encrypted filenames
-Compress (Experimental)
------------------------------------------
+# Compress (Experimental)
### Warning
This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is
@@ -18485,12 +20002,11 @@ Some remotes don't allow the upload of files with unknown size.
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
-- Default: 20M
+- Default: 20Mi
- Dropbox
----------------------------------
+# Dropbox
Paths are specified as `remote:path`
@@ -18586,7 +20102,7 @@ Dropbox supports [its own hash
type](https://www.dropbox.com/developers/reference/content-hash) which
is checked for all transfers.
-#### Restricted filename characters
+### Restricted filename characters
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
@@ -18605,6 +20121,65 @@ These only get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.
+### Batch mode uploads {#batch-mode}
+
+Using batch mode uploads is very important for performance when using
+the Dropbox API. See [the dropbox performance guide](https://developers.dropbox.com/dbx-performance-guide)
+for more info.
+
+There are 3 modes rclone can use for uploads.
+
+#### --dropbox-batch-mode off
+
+In this mode rclone will not use upload batching. This was the default
+before rclone v1.55. It has the disadvantage that it is very likely to
+encounter `too_many_requests` errors like this
+
+ NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
+
+When rclone receives these it has to wait for 15s or sometimes 300s
+before continuing which really slows down transfers.
+
+This will happen especially if `--transfers` is large, so this mode
+isn't recommended except for compatibility or investigating problems.
+
+#### --dropbox-batch-mode sync
+
+In this mode rclone will batch up uploads to the size specified by
+`--dropbox-batch-size` and commit them together.
+
+Using this mode means you can use a much higher `--transfers`
+parameter (32 or 64 works fine) without receiving `too_many_requests`
+errors.
+
+This mode ensures full data integrity.
+
+Note that there may be a pause when quitting rclone while rclone
+finishes up the last batch using this mode.
+
+#### --dropbox-batch-mode async
+
+In this mode rclone will batch up uploads to the size specified by
+`--dropbox-batch-size` and commit them together.
+
+However it will not wait for the status of the batch to be returned to
+the caller. This means rclone can use a much bigger batch size (much
+bigger than `--transfers`), at the cost of not being able to check the
+status of the upload.
+
+This provides the maximum possible upload speed, especially with lots
+of small files; however, rclone can't check that the files were uploaded
+properly in this mode.
+
+If you are using this mode then using "rclone check" after the
+transfer completes is recommended. Or you could do an initial transfer
+with `--dropbox-batch-mode async` then do a final transfer with
+`--dropbox-batch-mode sync` (the default).
+
+Note that there may be a pause when quitting rclone while rclone
+finishes up the last batch using this mode.
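The per-mode defaults for batch size and timeout (documented under the batch options later in this section) fit together as sketched below; `resolve_batch_defaults` is a hypothetical helper, not a real rclone function:

```python
def resolve_batch_defaults(mode: str, transfers: int):
    """Sketch of the default batch_size and batch_timeout per mode."""
    if mode == "sync":
        return transfers, "10s"  # batch_size tracks --transfers
    if mode == "async":
        return 100, "500ms"      # bigger batches, short idle timeout
    return None, None            # "off": batching not in use

print(resolve_batch_defaults("sync", 32))   # (32, '10s')
print(resolve_batch_defaults("async", 32))  # (100, '500ms')
```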
+
+
### Standard Options
@@ -18665,19 +20240,19 @@ Leave blank to use the provider defaults.
#### --dropbox-chunk-size
-Upload chunk size. (< 150M).
+Upload chunk size. (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed
-slightly (at most 10% for 128MB in tests) at the cost of using more
+slightly (at most 10% for 128 MiB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 48M
+- Default: 48Mi
#### --dropbox-impersonate
@@ -18736,6 +20311,75 @@ shared folder.
- Type: bool
- Default: false
+#### --dropbox-batch-mode
+
+Upload file batching sync|async|off.
+
+This sets the batch mode used by rclone.
+
+For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)
+
+This has 3 possible values
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don't check completion
+
+Rclone will close any outstanding batches when it exits which may make
+a delay on quit.
+
+
+- Config: batch_mode
+- Env Var: RCLONE_DROPBOX_BATCH_MODE
+- Type: string
+- Default: "sync"
+
+#### --dropbox-batch-size
+
+Max number of files in upload batch.
+
+This sets the batch size of files to upload. It has to be less than 1000.
+
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+
+- batch_mode: async - default batch_size is 100
+- batch_mode: sync - default batch_size is the same as --transfers
+- batch_mode: off - not in use
+
+Rclone will close any outstanding batches when it exits which may make
+a delay on quit.
+
+Setting this is a great idea if you are uploading lots of small files
+as it will make the uploads a lot quicker. You can use --transfers 32 to
+maximise throughput.
+
+
+- Config: batch_size
+- Env Var: RCLONE_DROPBOX_BATCH_SIZE
+- Type: int
+- Default: 0
+
+#### --dropbox-batch-timeout
+
+Max time to allow an idle upload batch before uploading
+
+If an upload batch is idle for more than this long then it will be
+uploaded.
+
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+
+- batch_mode: async - default batch_timeout is 500ms
+- batch_mode: sync - default batch_timeout is 10s
+- batch_mode: off - not in use
+
+
+- Config: batch_timeout
+- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
+- Type: Duration
+- Default: 0s
+
#### --dropbox-encoding
This sets the encoding for the backend.
@@ -18771,6 +20415,12 @@ dropbox:dir` will return the error `Failed to purge: There are too
many files involved in this operation`. As a work-around do an
`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`.
+When using `rclone link` you'll need to set `--expire` if using a
+non-personal account otherwise the visibility may not be correct.
+(Note that `--expire` isn't supported on personal accounts). See the
+[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the
+[dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
+
### Get your own Dropbox App ID ###
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
@@ -18788,12 +20438,13 @@ to be the same account as the Dropbox you want to access)
5. Click the button `Create App`
-5. Fill `Redirect URIs` as `http://localhost:53682/`
+6. Switch to the `Permissions` tab. Enable at least the following permissions: `account_info.read`, `files.metadata.write`, `files.content.write`, `files.content.read`, `sharing.write`. The `files.metadata.read` and `sharing.read` checkboxes will be marked too. Click `Submit`
-6. Find the `App key` and `App secret` Use these values in rclone config to add a new remote or edit an existing remote.
+7. Switch to the `Settings` tab. Fill `OAuth2 - Redirect URIs` as `http://localhost:53682/`
- Enterprise File Fabric
------------------------------------------
+8. Find the `App key` and `App secret` values on the `Settings` tab. Use these values in rclone config to add a new remote or edit an existing remote. The `App key` setting corresponds to `client_id` in rclone config, the `App secret` corresponds to `client_secret`
+
+# Enterprise File Fabric
This backend supports [Storage Made Easy's Enterprise File
Fabric™](https://storagemadeeasy.com/about/) which provides a software
@@ -19048,8 +20699,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
- FTP
-------------------------------
+# FTP
FTP is the File Transfer Protocol. Rclone FTP support is provided using the
[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
@@ -19357,8 +21007,7 @@ Not all FTP servers can have all characters in file names, for example:
| proftpd | `*` |
| pureftpd | `\ [ ]` |
- Google Cloud Storage
--------------------------------------------------
+# Google Cloud Storage
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
@@ -19521,7 +21170,7 @@ files in the bucket.
rclone sync -i /home/local/directory remote:bucket
-### Service Account support ###
+### Service Account support
You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
@@ -19548,14 +21197,14 @@ the rclone config file, you can set `service_account_credentials` with
the actual contents of the file instead, or set the equivalent
environment variable.
-### Anonymous Access ###
+### Anonymous Access
For downloads of objects that permit public access you can configure rclone
to use anonymous access by setting `anonymous` to `true`.
With unauthorized access you can't write or create files but only read or list
those buckets and objects that have public read access.
-### Application Default Credentials ###
+### Application Default Credentials
If no other source of credentials is provided, rclone will fall back
to
@@ -19569,13 +21218,13 @@ additional commands on your google compute machine -
Note that in the case application default credentials are used, there
is no need to explicitly configure a project number.
-### --fast-list ###
+### --fast-list
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
-### Custom upload headers ###
+### Custom upload headers
You can set custom upload headers with the `--header-upload`
flag. Google Cloud Storage supports the headers as described in the
@@ -19594,13 +21243,24 @@ Eg `--header-upload "Content-Type text/potato"`
Note that the last of these is for setting custom metadata in the form
`--header-upload "x-goog-meta-key: value"`
-### Modified time ###
+### Modification time
-Google google cloud storage stores md5sums natively and rclone stores
-modification times as metadata on the object, under the "mtime" key in
-RFC3339 format accurate to 1ns.
+Google Cloud Storage stores md5sum natively.
+Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
+with one-second precision as `goog-reserved-file-mtime` in file metadata.
-#### Restricted filename characters
+To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries.
+`mtime` uses RFC3339 format with one-nanosecond precision.
+`goog-reserved-file-mtime` uses Unix timestamp format with one-second precision.
+To get modification time from object metadata, rclone reads the metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object updated time.
+
+Note that rclone's default modify window is 1ns.
+Files uploaded by gsutil only contain timestamps with one-second precision.
+If you use rclone to sync files previously uploaded by gsutil,
+rclone will attempt to update modification time for all these files.
+To avoid these possibly unnecessary updates, use `--modify-window 1s`.
+
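As a sketch of the two metadata formats described above, here is the same timestamp rendered as the nanosecond-precision `mtime` key and the one-second-precision `goog-reserved-file-mtime` key (the timestamp itself is just an example):

```python
from datetime import datetime, timezone

# An illustrative modification time with sub-second precision.
mtime = datetime(2021, 6, 1, 12, 30, 45, 123456, tzinfo=timezone.utc)

# "mtime" metadata key: RFC3339 with nanosecond precision. Python
# datetimes stop at microseconds, so pad to nine fractional digits.
rfc3339 = mtime.strftime("%Y-%m-%dT%H:%M:%S") + f".{mtime.microsecond * 1000:09d}Z"

# "goog-reserved-file-mtime" key: Unix timestamp, one-second precision.
unix_seconds = int(mtime.timestamp())

print(rfc3339)       # 2021-06-01T12:30:45.123456000Z
print(unix_seconds)  # 1622550645
```

The truncation to whole seconds in the second form is exactly why `--modify-window 1s` avoids spurious re-uploads of gsutil-written files.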
+### Restricted filename characters
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
@@ -19874,8 +21534,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Google Drive
------------------------------------------
+# Google Drive
Paths are specified as `drive:path`
@@ -20739,7 +22398,7 @@ Cutoff for switching to chunked upload
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 8M
+- Default: 8Mi
#### --drive-chunk-size
@@ -20753,7 +22412,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 8M
+- Default: 8Mi
#### --drive-acknowledge-abuse
@@ -20864,7 +22523,7 @@ See: https://github.com/rclone/rclone/issues/3631
Make upload limit errors be fatal
-At the time of writing it is only possible to upload 750GB of data to
+At the time of writing it is only possible to upload 750 GiB of data to
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
@@ -20885,7 +22544,7 @@ See: https://github.com/rclone/rclone/issues/3857
Make download limit errors be fatal
-At the time of writing it is only possible to download 10TB of data from
+At the time of writing it is only possible to download 10 TiB of data from
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
@@ -21097,7 +22756,7 @@ Use the -i flag to see what would be copied before copying.
Drive has quite a lot of rate limiting. This causes rclone to be
limited to transferring about 2 files per second only. Individual
-files may be transferred much faster at 100s of MBytes/s but lots of
+files may be transferred much faster at 100s of MiByte/s but lots of
small files can take a long time.
Server side copies are also subject to a separate rate limit. If you
@@ -21195,8 +22854,11 @@ then select "OAuth client ID".
7. Choose an application type of "Desktop app" if you are using a Google account or "Other" if
you are using a GSuite account and click "Create". (the default name is fine)
-8. It will show you a client ID and client secret. Use these values
-in rclone config to add a new remote or edit an existing remote.
+8. It will show you a client ID and client secret. Make a note of these.
+
+9. Go to "Oauth consent screen" and press "Publish App"
+
+10. Provide the noted client ID and client secret to rclone.
Be aware that, due to the "enhanced security" recently introduced by
Google, you are theoretically expected to "submit your app for verification"
@@ -21215,8 +22877,7 @@ As a convenient workaround, the necessary Google Drive API key can be created on
Just push the Enable the Drive API button to receive the Client ID and Secret.
Note that it will automatically create a new project in the API Console.
- Google Photos
--------------------------------------------------
+# Google Photos
The rclone backend for [Google Photos](https://www.google.com/photos/about/) is
a specialized backend for transferring photos and videos to and from
@@ -21645,8 +23306,7 @@ listings and won't be transferred.
- HDFS
--------------------------------------------------
+# HDFS
[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.
@@ -21832,7 +23492,7 @@ Here are the advanced options specific to hdfs (Hadoop distributed file system).
Kerberos service principal name for the namenode
Enables KERBEROS authentication. Specifies the Service Principal Name
-(/) for the namenode.
+(SERVICE/FQDN) for the namenode.
- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
@@ -21872,8 +23532,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
- HTTP
--------------------------------------------------
+# HTTP
The HTTP remote is a read only remote for reading files of a
webserver. The webserver should provide file listings which rclone
@@ -22065,8 +23724,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Hubic
------------------------------------------
+# Hubic
Paths are specified as `remote:path`
@@ -22230,12 +23888,12 @@ Leave blank to use the provider defaults.
Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The
-default for this is 5GB which is its maximum value.
+default for this is 5 GiB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_HUBIC_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5G
+- Default: 5Gi
#### --hubic-no-chunk
@@ -22244,7 +23902,7 @@ Don't chunk files during streaming upload.
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
-This will limit the maximum upload size to 5GB. However non chunked
+This will limit the maximum upload size to 5 GiB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
@@ -22278,8 +23936,7 @@ The Swift API doesn't return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.
- Jottacloud
------------------------------------------
+# Jottacloud
Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway.
@@ -22470,6 +24127,9 @@ Emptying the trash is supported by the [cleanup](https://rclone.org/commands/rcl
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it.
Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
+Versioning can be disabled with the `--jottacloud-no-versions` option. This is achieved by deleting the remote file prior to uploading
+a new version. If the upload fails no version of the file will be available in the remote.
+
### Quota information
To view your current quota you can use the `rclone about remote:`
@@ -22488,7 +24148,7 @@ Files bigger than this will be cached on disk to calculate the MD5 if required.
- Config: md5_memory_limit
- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
#### --jottacloud-trashed-only
@@ -22516,7 +24176,16 @@ Files bigger than this can be resumed if the upload fails.
- Config: upload_resume_limit
- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
+
+#### --jottacloud-no-versions
+
+Avoid server side versioning by deleting files and recreating files instead of overwriting them.
+
+- Config: no_versions
+- Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS
+- Type: bool
+- Default: false
#### --jottacloud-encoding
@@ -22546,8 +24215,7 @@ Jottacloud only supports filenames up to 255 characters in length.
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove
operations to previously deleted paths to fail. Emptying the trash should help in such cases.
- Koofr
------------------------------------------
+# Koofr
Paths are specified as `remote:path`
@@ -22714,8 +24382,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
Note that Koofr is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
- Mail.ru Cloud
-----------------------------------------
+# Mail.ru Cloud
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
@@ -22952,7 +24619,7 @@ This option allows you to disable speedup (put by hash) for large files
- Config: speedup_max_disk
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
- Type: SizeSuffix
-- Default: 3G
+- Default: 3Gi
- Examples:
- "0"
- Completely disable speedup (put by hash).
@@ -22968,7 +24635,7 @@ Files larger than the size given below will always be hashed on disk.
- Config: speedup_max_memory
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
- Type: SizeSuffix
-- Default: 32M
+- Default: 32Mi
- Examples:
- "0"
- Preliminary hashing will always be done in a temporary disk location.
@@ -23028,8 +24695,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
- Mega
------------------------------------------
+# Mega
[Mega](https://mega.nz/) is a cloud storage and file hosting service
known for its security feature where all files are encrypted locally
@@ -23251,8 +24917,7 @@ so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
- Memory
------------------------------------------
+# Memory
The memory backend is an in RAM backend. It does not persist its
data - use the local backend for that.
@@ -23312,8 +24977,7 @@ set](https://rclone.org/overview/#restricted-characters).
- Microsoft Azure Blob Storage
------------------------------------------
+# Microsoft Azure Blob Storage
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g.
@@ -23475,13 +25139,12 @@ Path to file containing credentials for use with a service principal.
Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
- $ az sp create-for-rbac --name "" \
+ $ az ad sp create-for-rbac --name "" \
--role "Storage Blob Data Owner" \
--scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \
> azure-principal.json
-See [Use Azure CLI to assign an Azure role for access to blob and queue data](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli)
-for more details.
+See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
- Config: service_principal_file
@@ -23578,7 +25241,7 @@ Leave blank normally.
#### --azureblob-upload-cutoff
-Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
+Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
@@ -23587,7 +25250,7 @@ Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
#### --azureblob-chunk-size
-Upload chunk size (<= 100MB).
+Upload chunk size (<= 100 MiB).
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.
@@ -23595,7 +25258,7 @@ Note that this is stored in memory and there may be up to
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 4M
+- Default: 4Mi
#### --azureblob-list-chunk
@@ -23737,8 +25400,7 @@ installed locally and set up a new remote with `rclone config` follow instructio
introduction, set `use_emulator` config as `true`, you do not need to provide default account name
or key if using emulator.
- Microsoft OneDrive
------------------------------------------
+# Microsoft OneDrive
Paths are specified as `remote:path`
@@ -24011,7 +25673,7 @@ Note that the chunks will be buffered into memory.
- Config: chunk_size
- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
#### --onedrive-drive-id
@@ -24059,6 +25721,15 @@ fall back to normal copy (which will be slightly slower).
- Type: bool
- Default: false
+#### --onedrive-list-chunk
+
+Size of listing chunk.
+
+- Config: list_chunk
+- Env Var: RCLONE_ONEDRIVE_LIST_CHUNK
+- Type: int
+- Default: 1000
+
#### --onedrive-no-versions
Remove all versions on modifying operations
@@ -24155,7 +25826,7 @@ in it will be mapped to `?` instead.
#### File sizes ####
-The largest allowed file size is 250GB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
+The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
#### Path length ####
@@ -24242,6 +25913,12 @@ is a great way to see what it would do.
### Troubleshooting ###
+#### Excessive throttling or blocked on SharePoint
+
+If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"`
+
+The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
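The decorated user agent has the shape `ISV|<company>|<app>/<version>`; a minimal sketch of assembling it (the company, app, and version values below are just examples, not requirements):

```python
# Build the "decorated" user agent described above. The component
# values here are illustrative examples.
company, app, version = "rclone.org", "rclone", "v1.55.1"
user_agent = f"ISV|{company}|{app}/{version}"
print(user_agent)  # ISV|rclone.org|rclone/v1.55.1
```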
+
#### Unexpected file size/hash differences on Sharepoint ####
It is a
@@ -24303,8 +25980,7 @@ Description: Due to a configuration change made by your administrator, or becaus
If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
- OpenDrive
-------------------------------------
+# OpenDrive
Paths are specified as `remote:path`
@@ -24448,7 +26124,7 @@ increase memory use.
- Config: chunk_size
- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
@@ -24471,8 +26147,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- QingStor
----------------------------------------
+# QingStor
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
@@ -24569,7 +26244,7 @@ docs](https://rclone.org/docs/#fast-list) for more details.
### Multipart uploads ###
rclone supports multipart uploads with QingStor which means that it can
-upload files bigger than 5GB. Note that files uploaded with multipart
+upload files bigger than 5 GiB. Note that files uploaded with multipart
upload don't have an MD5SUM.
Note that incomplete multipart uploads older than 24 hours can be
@@ -24695,12 +26370,12 @@ Number of connection retries.
Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
#### --qingstor-chunk-size
@@ -24718,7 +26393,7 @@ enough memory, then increasing this will speed up the transfers.
- Config: chunk_size
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 4M
+- Default: 4Mi
#### --qingstor-upload-concurrency
@@ -24761,8 +26436,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
-Swift
-----------------------------------------
+# Swift
Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/).
Commercial implementations of that being:
@@ -25202,12 +26876,12 @@ If true avoid calling abort upload on a failure. It should be set to true for re
Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The
-default for this is 5GB which is its maximum value.
+default for this is 5 GiB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_SWIFT_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5G
+- Default: 5Gi
#### --swift-no-chunk
@@ -25216,7 +26890,7 @@ Don't chunk files during streaming upload.
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
-This will limit the maximum upload size to 5GB. However non chunked
+This will limit the maximum upload size to 5 GiB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
@@ -25284,8 +26958,7 @@ have (e.g. OVH).
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
- pCloud
------------------------------------------
+# pCloud
Paths are specified as `remote:path`
@@ -25516,8 +27189,7 @@ with rclone authorize.
- premiumize.me
------------------------------------------
+# premiumize.me
Paths are specified as `remote:path`
@@ -25658,8 +27330,7 @@ rclone maps these to and from an identical looking unicode equivalents
premiumize.me only supports filenames up to 255 characters in length.
- put.io
----------------------------------
+# put.io
Paths are specified as `remote:path`
@@ -25780,8 +27451,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
-Seafile
-----------------------------------------
+# Seafile
This is a backend for the [Seafile](https://www.seafile.com/) storage service:
- It works with both the free community edition or the professional edition.
@@ -26142,8 +27812,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
- SFTP
-----------------------------------------
+# SFTP
SFTP is the [Secure (or SSH) File Transfer
Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
@@ -26160,7 +27829,10 @@ SSH installations.
Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user. An empty path
-`remote:` refers to the user's home directory.
+`remote:` refers to the user's home directory. For example, `rclone lsd remote:`
+would list the home directory of the user configured in the rclone remote config
+(i.e. `/home/sftpuser`). However, `rclone lsd remote:/` would list the root
+directory of the remote machine (i.e. `/`).
Note that some SFTP servers will need the leading / - Synology is a
good example of this. rsync.net, on the other hand, requires users to
@@ -26223,6 +27895,10 @@ See all directories in the home directory
rclone lsd remote:
+See all directories in the root directory
+
+ rclone lsd remote:/
+
Make a new directory
rclone mkdir remote:path/to/directory
@@ -26236,6 +27912,11 @@ excess files in the directory.
rclone sync -i /home/local/directory remote:directory
+Mount the remote path `/srv/www-data/` to the local path
+`/mnt/www-data`
+
+ rclone mount remote:/srv/www-data/ /mnt/www-data
+
### SSH Authentication ###
The SFTP remote supports three authentication methods:
@@ -26658,6 +28339,21 @@ If concurrent reads are disabled, the use_fstat option is ignored.
- Type: bool
- Default: false
+#### --sftp-disable-concurrent-writes
+
+If set don't use concurrent writes
+
+Normally rclone uses concurrent writes to upload files. This improves
+the performance greatly, especially for distant servers.
+
+This option disables concurrent writes should that be necessary.
+
+
+- Config: disable_concurrent_writes
+- Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_WRITES
+- Type: bool
+- Default: false
+
#### --sftp-idle-timeout
Max time before closing idle connections
@@ -26727,8 +28423,7 @@ rsync.net is supported through the SFTP backend.
See [rsync.net's documentation of rclone examples](https://www.rsync.net/products/rclone.html).
- SugarSync
------------------------------------------
+# SugarSync
[SugarSync](https://sugarsync.com) is a cloud service that enables
active synchronization of files across computers and other devices for
@@ -26983,8 +28678,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Tardigrade
------------------------------------------
+# Tardigrade
[Tardigrade](https://tardigrade.io) is an encrypted, secure, and
cost-effective object storage service that enables you to store, back up, and
@@ -27290,8 +28984,149 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
- Union
------------------------------------------
+### Known issues
+
+If you get errors like `too many open files` this usually happens when the default `ulimit` for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes).
+
+To fix these, please raise your system limits. You can do this by issuing a `ulimit -n 65536` just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. `$HOME/.bashrc`, or change the system-wide configuration, usually `/etc/sysctl.conf` and/or `/etc/security/limits.conf`, but please refer to your operating system manual.
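As a rough back-of-the-envelope check against your `ulimit`, the per-stream figures above can be combined like this (the helper function and its margin are illustrative, not part of rclone):

```python
def estimated_open_files(uploads, downloads, margin=64):
    # ~110 TCP connections per upload stream and ~35 per download
    # stream, per the figures above, plus a margin for everything else.
    return uploads * 110 + downloads * 35 + margin

# Four concurrent uploads and four downloads:
print(estimated_open_files(4, 4))  # 644
```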
+
+# Uptobox
+
+This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional
+cloud storage provider and is therefore not suitable for long term storage.
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+
+### Setup
+
+To configure an Uptobox backend you'll need your personal API token. You'll find it in your
+[account settings](https://uptobox.com/my_account).
+
+
+### Example
+
+Here is an example of how to make a remote called `remote` with the default setup. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+Current remotes:
+
+Name Type
+==== ====
+TestUptobox uptobox
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+name> uptobox
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[...]
+37 / Uptobox
+ \ "uptobox"
+[...]
+Storage> uptobox
+** See help for uptobox backend at: https://rclone.org/uptobox/ **
+
+Your API Key, get it from https://uptobox.com/my_account
+Enter a string value. Press Enter for the default ("").
+api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+--------------------
+[uptobox]
+type = uptobox
+api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+```
+Once configured you can then use `rclone` like this,
+
+List directories in top level of your Uptobox
+
+ rclone lsd remote:
+
+List all the files in your Uptobox
+
+ rclone ls remote:
+
+To copy a local directory to an Uptobox directory called backup
+
+ rclone copy /home/source remote:backup
+
+### Modified time and hashes
+
+Uptobox supports neither modified times nor checksums.
+
+#### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| " | 0x22 | ＂ |
+| ` | 0x60 | ｀ |
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can't be used in XML strings.
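The replacement scheme in the table above can be sketched as a simple character mapping (the filename is just an example):

```python
# Restricted characters are swapped for their fullwidth look-alikes,
# per the table above.
replacements = {'"': '\uff02', '`': '\uff40'}  # ＂ and ｀

name = '`my "notes"`.txt'
safe = ''.join(replacements.get(c, c) for c in name)
print(safe)  # ｀my ＂notes＂｀.txt
```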
+
+
+### Standard Options
+
+Here are the standard options specific to uptobox (Uptobox).
+
+#### --uptobox-access-token
+
+Your access token, get it from https://uptobox.com/my_account
+
+- Config: access_token
+- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to uptobox (Uptobox).
+
+#### --uptobox-encoding
+
+This sets the encoding for the backend.
+
+See: the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+- Config: encoding
+- Env Var: RCLONE_UPTOBOX_ENCODING
+- Type: MultiEncoder
+- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
+
+
+
+### Limitations
+
+Uptobox will delete inactive files that have not been accessed in 60 days.
+
+`rclone about` is not supported by this backend. An overview of used space can, however,
+be seen in the Uptobox web interface.
+
+# Union
The `union` remote provides a unification similar to UnionFS using other remotes.
@@ -27513,8 +29348,7 @@ Cache time of usage and free space (in seconds). This option is only useful when
- WebDAV
------------------------------------------
+# WebDAV
Paths are specified as `remote:path`
@@ -27708,6 +29542,25 @@ Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Per
- Type: string
- Default: ""
+#### --webdav-headers
+
+Set HTTP headers for all transactions
+
+Use this to set additional HTTP headers for all transactions
+
+The input format is a comma separated list of key,value pairs. Standard
+[CSV encoding](https://godoc.org/encoding/csv) may be used.
+
+For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
+
+You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
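A sketch of how such a value decodes into header pairs using standard CSV parsing (the parsing code below is illustrative, not rclone's implementation):

```python
import csv
import io

# One CSV record of alternating key,value fields.
raw = '"Cookie","name=value","Authorization","xxx"'
fields = next(csv.reader(io.StringIO(raw)))
headers = dict(zip(fields[::2], fields[1::2]))

print(headers)  # {'Cookie': 'name=value', 'Authorization': 'xxx'}
```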
+
+
+- Config: headers
+- Env Var: RCLONE_WEBDAV_HEADERS
+- Type: CommaSepList
+- Default:
+
## Provider notes ##
@@ -27884,8 +29737,7 @@ vendor = other
bearer_token_command = oidc-token XDC
```
-Yandex Disk
-----------------------------------------
+# Yandex Disk
[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com).
@@ -27927,7 +29779,7 @@ Got code
[remote]
client_id =
client_secret =
-token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
+token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"OAuth","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
e) Edit this remote
@@ -27995,15 +29847,15 @@ as they can't be used in JSON strings.
### Limitations ###
-When uploading very large files (bigger than about 5GB) you will need
+When uploading very large files (bigger than about 5 GiB) you will need
to increase the `--timeout` parameter. This is because Yandex pauses
(perhaps to calculate the MD5SUM for the entire file) before returning
confirmation that the file has been uploaded. The default handling of
timeouts in rclone is to assume a 5 minute pause is an error and close
the connection - you'll see `net/http: timeout awaiting response
headers` errors in the logs if this is happening. Setting the timeout
-to twice the max size of file in GB should be enough, so if you want
-to upload a 30GB file set a timeout of `2 * 30 = 60m`, that is
+to twice the max size of file in GiB should be enough, so if you want
+to upload a 30 GiB file set a timeout of `2 * 30 = 60m`, that is
`--timeout 60m`.
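The rule of thumb above, as a tiny helper (illustrative only):

```python
def yandex_timeout_minutes(max_file_gib):
    # Twice the largest file size in GiB, read as minutes.
    return 2 * max_file_gib

print(f"--timeout {yandex_timeout_minutes(30)}m")  # --timeout 60m
```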
@@ -28077,8 +29929,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
-Zoho Workdrive
-----------------------------------------
+# Zoho Workdrive
[Zoho WorkDrive](https://www.zoho.com/workdrive/) is a cloud storage solution created by [Zoho](https://zoho.com).
@@ -28224,7 +30075,11 @@ Leave blank normally.
#### --zoho-region
-Zoho region to connect to. You'll have to use the region you organization is registered in.
+Zoho region to connect to.
+
+You'll have to use the region your organization is registered in. If
+not sure, use the same top level domain as you connect to in your
+browser.
- Config: region
- Env Var: RCLONE_ZOHO_REGION
@@ -28286,8 +30141,7 @@ See: the [encoding section in the overview](https://rclone.org/overview/#encodin
- Local Filesystem
--------------------------------------------
+# Local Filesystem
Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so
@@ -28397,9 +30251,9 @@ as they can't be converted to UTF-16.
### Paths on Windows ###
On Windows there are many ways of specifying a path to a file system resource.
-Both absolute paths like `C:\path\to\wherever`, and relative paths like
-`..\wherever` can be used, and path separator can be either
-`\` (as in `C:\path\to\wherever`) or `/` (as in `C:/path/to/wherever`).
+Local paths can be absolute, like `C:\path\to\wherever`, or relative,
+like `..\wherever`. Network paths in UNC format, `\\server\share`, are also supported.
+Path separator can be either `\` (as in `C:\path\to\wherever`) or `/` (as in `C:/path/to/wherever`).
Length of these paths are limited to 259 characters for files and 247
characters for directories, but there is an alternative extended-length
path format increasing the limit to (approximately) 32,767 characters.
@@ -28455,7 +30309,7 @@ like symlinks under Windows).
If you supply `--copy-links` or `-L` then rclone will follow the
symlink and copy the pointed to file or directory. Note that this
-flag is incompatible with `-links` / `-l`.
+flag is incompatible with `--links` / `-l`.
This flag applies to all commands.
@@ -28650,32 +30504,41 @@ points, as you explicitly acknowledge that they should be skipped.
#### --local-zero-size-links
-Assume the Stat size of links is zero (and read them instead)
+Assume the Stat size of links is zero (and read them instead) (Deprecated)
-On some virtual filesystems (such ash LucidLink), reading a link size via a Stat call always returns 0.
-However, on unix it reads as the length of the text in the link. This may cause errors like this when
-syncing:
+Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:
- Failed to copy: corrupted on transfer: sizes differ 0 vs 13
+- Windows
+- On some virtual filesystems (such as LucidLink)
+- Android
+
+So rclone now always reads the link.
-Setting this flag causes rclone to read the link and use that as the size of the link
-instead of 0 which in most cases fixes the problem.
- Config: zero_size_links
- Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
- Type: bool
- Default: false
-#### --local-no-unicode-normalization
+#### --local-unicode-normalization
-Don't apply unicode normalization to paths and filenames (Deprecated)
+Apply unicode NFC normalization to paths and filenames
-This flag is deprecated now. Rclone no longer normalizes unicode file
-names, but it compares them with unicode normalization in the sync
-routine instead.
+This flag can be used to normalize file names read from the local
+filesystem into unicode NFC form.
-- Config: no_unicode_normalization
-- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+Rclone does not normally touch the encoding of file names it reads from
+the file system.
+
+This can be useful when using macOS as it normally provides decomposed (NFD)
+unicode which in some languages (eg Korean) doesn't display properly on
+some OSes.
+
+Note that rclone compares filenames with unicode normalization in the sync
+routine so this flag shouldn't normally be used.
+
+- Config: unicode_normalization
+- Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION
- Type: bool
- Default: false
@@ -28837,6 +30700,187 @@ Options:
# Changelog
+## v1.56.0 - 2021-07-20
+
+[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.56.0)
+
+* New backends
+ * [Uptobox](https://rclone.org/uptobox/) (buengese)
+* New commands
+ * [serve docker](https://rclone.org/commands/rclone_serve_docker/) (Antoine GIRARD) (Ivan Andreev)
+ * and accompanying [docker volume plugin](https://rclone.org/docker/)
+ * [checksum](https://rclone.org/commands/rclone_checksum/) to check files against a file of checksums (Ivan Andreev)
+ * this is also available as `rclone md5sum -C` etc
+ * [config touch](https://rclone.org/commands/rclone_config_touch/): ensure config exists at configured location (albertony)
+ * [test changenotify](https://rclone.org/commands/rclone_test_changenotify/): command to help debugging changenotify (Nick Craig-Wood)
+* Deprecations
+ * `dbhashsum`: Remove command deprecated a year ago (Ivan Andreev)
+ * `cache`: Deprecate cache backend (Ivan Andreev)
+* New Features
+ * rework config system so it can be used non-interactively via cli and rc API.
+ * See docs in [config create](https://rclone.org/commands/rclone_config_create/)
+ * This is a very big change to all the backends so may cause breakages - please file bugs!
+ * librclone - export the rclone RC as a C library (lewisxy) (Nick Craig-Wood)
+ * Link a C-API rclone shared object into your project
+ * Use the RC as an in memory interface
+ * Python example supplied
+ * Also supports Android and gomobile
+ * fs
+ * Add `--disable-http2` for global http2 disable (Nick Craig-Wood)
+ * Make `--dump` imply `-vv` (Alex Chen)
+ * Use binary prefixes for size and rate units (albertony)
+ * Use decimal prefixes for counts (albertony)
+ * Add google search widget to rclone.org (Ivan Andreev)
+ * accounting: Calculate rolling average speed (Haochen Tong)
+ * atexit: Terminate with non-zero status after receiving signal (Michael Hanselmann)
+ * build
+ * Only run event-based workflow scripts under rclone repo with manual override (Mathieu Carbou)
+ * Add Android build with gomobile (x0b)
+ * check: Log the hash in use like cryptcheck does (Nick Craig-Wood)
+ * version: Print os/version, kernel and bitness (Ivan Andreev)
+ * config
+ * Prevent use of Windows reserved names in config file name (albertony)
+ * Create config file in windows appdata directory by default (albertony)
+ * Treat any config file paths with filename notfound as memory-only config (albertony)
+ * Delay load config file (albertony)
+ * Replace defaultConfig with a thread-safe in-memory implementation (Chris Macklin)
+ * Allow `config create` and friends to take `key=value` parameters (Nick Craig-Wood)
+ * Fixed issues with flags/options set by environment vars. (Ole Frost)
+ * fshttp: Implement graceful DSCP error handling (Tyson Moore)
+ * lib/http - provides an abstraction for a central http server that services can bind routes to (Nolan Woods)
+ * Add `--template` config and flags to serve/data (Nolan Woods)
+ * Add default 404 handler (Nolan Woods)
+ * link: Use "off" value for unset expiry (Nick Craig-Wood)
+ * oauthutil: Raise fatal error if token expired without refresh token (Alex Chen)
+ * rcat: Add `--size` flag for more efficient uploads of known size (Nazar Mishturak)
+ * serve sftp: Add `--stdio` flag to serve via stdio (Tom)
+ * sync: Don't warn about `--no-traverse` when `--files-from` is set (Nick Gaya)
+ * `test makefiles`
+ * Add `--seed` flag and make data generated repeatable (Nick Craig-Wood)
+ * Add log levels and speed summary (Nick Craig-Wood)
+* Bug Fixes
+ * accounting: Fix startTime of statsGroups.sum (Haochen Tong)
+ * cmd/ncdu: Fix out of range panic in delete (buengese)
+ * config
+ * Fix issues with memory-only config file paths (albertony)
+ * Fix in memory config not saving on the fly backend config (Nick Craig-Wood)
+ * fshttp: Fix address parsing for DSCP (Tyson Moore)
+ * ncdu: Update termbox-go library to fix crash (Nick Craig-Wood)
+ * oauthutil: Fix old authorize result not recognised (Cnly)
+ * operations: Don't update timestamps of files in `--compare-dest` (Nick Gaya)
+ * selfupdate: fix archive name on macos (Ivan Andreev)
+* Mount
+ * Refactor before adding serve docker (Antoine GIRARD)
+* VFS
+ * Add cache reset for `--vfs-cache-max-size` handling at cache poll interval (Leo Luan)
+ * Fix modtime changing when reading file into cache (Nick Craig-Wood)
+ * Avoid unnecessary subdir in cache path (albertony)
+ * Fix that umask option cannot be set as environment variable (albertony)
+ * Do not print notice about missing poll-interval support when set to 0 (albertony)
+* Local
+ * Always use readlink to read symlink size for better compatibility (Nick Craig-Wood)
+ * Add `--local-unicode-normalization` (and remove `--local-no-unicode-normalization`) (Nick Craig-Wood)
+ * Skip entries removed concurrently with List() (Ivan Andreev)
+* Crypt
+ * Support timestamped filenames from `--b2-versions` (Dominik Mydlil)
+* B2
+ * Don't include the bucket name in public link file prefixes (Jeffrey Tolar)
+ * Fix versions and .files with no extension (Nick Craig-Wood)
+ * Factor version handling into lib/version (Dominik Mydlil)
+* Box
+ * Use upload preflight check to avoid listings in file uploads (Nick Craig-Wood)
+ * Return errors instead of calling log.Fatal with them (Nick Craig-Wood)
+* Drive
+ * Switch to the Drives API for looking up shared drives (Nick Craig-Wood)
+ * Fix some google docs being treated as files (Nick Craig-Wood)
+* Dropbox
+ * Add `--dropbox-batch-mode` flag to speed up uploading (Nick Craig-Wood)
+ * Read the [batch mode](https://rclone.org/dropbox/#batch-mode) docs for more info
+ * Set visibility in link sharing when `--expire` is set (Nick Craig-Wood)
+ * Simplify chunked uploads (Alexey Ivanov)
+ * Improve "own App IP" instructions (Ivan Andreev)
+* Fichier
+ * Check if more than one upload link is returned (Nick Craig-Wood)
+ * Support downloading password protected files and folders (Florian Penzkofer)
+ * Make error messages report text from the API (Nick Craig-Wood)
+ * Fix move of files in the same directory (Nick Craig-Wood)
+ * Check that we actually got a download token and retry if we didn't (buengese)
+* Filefabric
+ * Fix listing after change of from field from "int" to int. (Nick Craig-Wood)
+* FTP
+ * Make upload error 250 indicate success (Nick Craig-Wood)
+* GCS
+ * Make compatible with gsutil's mtime metadata (database64128)
+ * Clean up time format constants (database64128)
+* Google Photos
+ * Fix read only scope not being used properly (Nick Craig-Wood)
+* HTTP
+ * Replace httplib with lib/http (Nolan Woods)
+ * Clean up Bind to better use middleware (Nolan Woods)
+* Jottacloud
+ * Fix legacy auth with state based config system (buengese)
+ * Fix invalid url in output from link command (albertony)
+ * Add no versions option (buengese)
+* Onedrive
+ * Add `list_chunk option` (Nick Gaya)
+ * Also report root error if unable to cancel multipart upload (Cnly)
+ * Fix failed to configure: empty token found error (Nick Craig-Wood)
+ * Make link return direct download link (Xuanchen Wu)
+* S3
+ * Add `--s3-no-head-object` (Tatsuya Noyori)
+ * Remove WebIdentityRoleProvider to fix crash on auth (Nick Craig-Wood)
+ * Don't check to see if remote is object if it ends with / (Nick Craig-Wood)
+ * Add SeaweedFS (Chris Lu)
+ * Update Alibaba OSS endpoints (Chuan Zh)
+* SFTP
+ * Fix performance regression by re-enabling concurrent writes (Nick Craig-Wood)
+ * Expand tilde and environment variables in configured `known_hosts_file` (albertony)
+* Tardigrade
+ * Upgrade to uplink v1.4.6 (Caleb Case)
+ * Use negative offset (Caleb Case)
+ * Add warning about `too many open files` (acsfer)
+* WebDAV
+ * Fix sharepoint auth over http (Nick Craig-Wood)
+ * Add headers option (Antoon Prins)
+
+## v1.55.1 - 2021-04-26
+
+[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)
+
+* Bug Fixes
+ * selfupdate
+    * Don't detect FUSE if build is static (Ivan Andreev)
+ * Add build tag noselfupdate (Ivan Andreev)
+ * sync: Fix incorrect error reported by graceful cutoff (Nick Craig-Wood)
+ * install.sh: fix macOS arm64 download (Nick Craig-Wood)
+ * build: Fix version numbers in android branch builds (Nick Craig-Wood)
+ * docs
+ * Contributing.md: update setup instructions for go1.16 (Nick Gaya)
+ * WinFsp 2021 is out of beta (albertony)
+ * Minor cleanup of space around code section (albertony)
+ * Fixed some typos (albertony)
+* VFS
+ * Fix a code path which allows dirty data to be removed causing data loss (Nick Craig-Wood)
+* Compress
+ * Fix compressed name regexp (buengese)
+* Drive
+ * Fix backend copyid of google doc to directory (Nick Craig-Wood)
+ * Don't open browser when service account... (Ansh Mittal)
+* Dropbox
+ * Add missing team_data.member scope for use with --impersonate (Nick Craig-Wood)
+ * Fix About after scopes changes - rclone config reconnect needed (Nick Craig-Wood)
+ * Fix Unable to decrypt returned paths from changeNotify (Nick Craig-Wood)
+* FTP
+ * Fix implicit TLS (Ivan Andreev)
+* Onedrive
+ * Work around for random "Unable to initialize RPS" errors (OleFrost)
+* SFTP
+ * Revert sftp library to v1.12.0 from v1.13.0 to fix performance regression (Nick Craig-Wood)
+ * Fix Update ReadFrom failed: failed to send packet: EOF errors (Nick Craig-Wood)
+* Zoho
+ * Fix error when region isn't set (buengese)
+ * Do not ask for mountpoint twice when using headless setup (buengese)
+
## v1.55.0 - 2021-03-31
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.55.0)
@@ -32831,7 +34875,7 @@ put them back in again.` >}}
* Fred
* Sébastien Gross
* Maxime Suret <11944422+msuret@users.noreply.github.com>
- * Caleb Case
+ * Caleb Case
* Ben Zenker
* Martin Michlmayr
* Brandon McNama
@@ -32890,7 +34934,7 @@ put them back in again.` >}}
* Laurens Janssen
* Bob Bagwill
* Nathan Collins
- * lostheli
+ * lostheli
* kelv
* Milly
* gtorelly
@@ -32937,6 +34981,39 @@ put them back in again.` >}}
* Manish Kumar
* x0b
* CERN through the CS3MESH4EOSC Project
+ * Nick Gaya
+ * Ashok Gelal <401055+ashokgelal@users.noreply.github.com>
+ * Dominik Mydlil
+ * Nazar Mishturak
+ * Ansh Mittal
+ * noabody
+ * OleFrost <82263101+olefrost@users.noreply.github.com>
+ * Kenny Parsons
+ * Jeffrey Tolar
+ * jtagcat
+ * Tatsuya Noyori <63089076+public-tatsuya-noyori@users.noreply.github.com>
+ * lewisxy
+ * Nolan Woods
+ * Gautam Kumar <25435568+gautamajay52@users.noreply.github.com>
+ * Chris Macklin
+ * Antoon Prins
+ * Alexey Ivanov
+ * Serge Pouliquen
+ * acsfer
+ * Tom
+ * Tyson Moore
+ * database64128
+ * Chris Lu
+ * Reid Buzby
+ * darrenrhs
+ * Florian Penzkofer
+ * Xuanchen Wu <117010292@link.cuhk.edu.cn>
+ * partev
+ * Dmitry Sitnikov
+ * Haochen Tong
+ * Michael Hanselmann
+ * Chuan Zh
+ * Antoine GIRARD
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index 849ac5763..5ea0a75d9 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Mar 31, 2021
+Jul 20, 2021
@@ -41,7 +41,7 @@ bandwidth use and transfers from one provider to another without using
local disk.
Virtual backends wrap local and cloud file systems to apply encryption,
-caching, compression chunking and joining.
+compression chunking and joining.
Rclone mounts any local, cloud or virtual filesystem as a disk on
Windows, macOS, linux and FreeBSD, and also serves these over SFTP,
@@ -141,11 +141,13 @@ S3, that work out of the box.)
- rsync.net
- Scaleway
- Seafile
+- SeaweedFS
- SFTP
- StackPath
- SugarSync
- Tardigrade
- Tencent Cloud Object Storage (COS)
+- Uptobox
- Wasabi
- WebDAV
- Yandex Disk
@@ -172,6 +174,7 @@ Quickstart
- Download the relevant binary.
- Extract the rclone or rclone.exe binary from the archive
- Run rclone config to setup. See rclone config docs for more details.
+- Optionally configure automatic execution.
See below for some expanded Linux / macOS instructions.
@@ -393,6 +396,161 @@ Instructions
- rclone
+
+AUTOSTART
+
+
+After installing and configuring rclone, as described above, you are
+ready to use rclone as an interactive command line utility. If your goal
+is to perform _periodic_ operations, such as a regular sync, you will
+probably want to configure your rclone command in your operating
+system's scheduler. If you need to expose _service_-like features, such
+as remote control, GUI, serve or mount, you will often want an rclone
+command always running in the background, and configuring it to run in a
+service infrastructure may be a better option. Below are some
+alternatives on how to achieve this on different operating systems.
+
+NOTE: Before setting up autorun it is highly recommended that you have
+tested your command manually from a Command Prompt first.
+
+
+Autostart on Windows
+
+The most relevant alternatives for autostart on Windows are:
+
+- Run at user log on using the Startup folder
+- Run at user log on, at system startup or at schedule using Task
+  Scheduler
+- Run at system startup using Windows service
+
+Running in background
+
+Rclone is a console application, so if not starting from an existing
+Command Prompt, e.g. when starting rclone.exe from a shortcut, it will
+open a Command Prompt window. When configuring rclone to run from Task
+Scheduler or as a Windows service you can set it to run hidden in the
+background. From rclone version 1.54 you can also make it run hidden
+from anywhere by adding option --no-console (it may still flash briefly
+when the program starts). Since rclone normally writes information and
+any error messages to the console, you must redirect this to a file to
+be able to see it. Rclone has a built-in option --log-file for that.
+
+Example command to run a sync in background:
+
+ c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt
+
+User account
+
+As mentioned in the mount documentation, mounted drives created as
+Administrator are not visible to other accounts, not even the account
+that was elevated as Administrator. By running the mount command as the
+built-in SYSTEM user account, it will create drives accessible for
+everyone on the system. Both scheduled task and Windows service can be
+used to achieve this.
+
+NOTE: Remember that when rclone runs as the SYSTEM user, the user
+profile that it sees will not be yours. This means that if you normally
+run rclone with configuration file in the default location, to be able
+to use the same configuration when running as the system user you must
+explicitly tell rclone where to find it with the --config option, or
+else it will look in the system user's profile path
+(C:\Windows\System32\config\systemprofile). To test your command
+manually from a Command Prompt, you can run it with the PsExec utility
+from Microsoft's Sysinternals suite, which takes option -s to execute
+commands as the SYSTEM user.
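+
+As a quick sketch, assuming PsExec is on your PATH and rclone is
+installed in c:\rclone (both are placeholders), you could open a
+SYSTEM-level Command Prompt with:
+
+    psexec -i -s cmd.exe
+
+and then, in the window that opens, check the account and test your
+command:
+
+    whoami
+    c:\rclone\rclone.exe version --config c:\rclone\config\rclone.conf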
+
+Start from Startup folder
+
+To quickly execute an rclone command you can simply create a standard
+Windows Explorer shortcut for the complete rclone command you want to
+run. If you store this shortcut in the special "Startup" start-menu
+folder, Windows will automatically run it at login. To open this folder
+in Windows Explorer, enter path
+%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup, or
+C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp if you want
+the command to start for _every_ user that logs in.
+
+This is the easiest approach to autostarting rclone, but it offers no
+functionality to set it to run as a different user, or to set conditions
+or actions on certain events. Setting up a scheduled task as described
+below will often give you better results.
+
+Start from Task Scheduler
+
+Task Scheduler is an administrative tool built into Windows, and it can
+be used to configure rclone to be started automatically in a highly
+configurable way, e.g. periodically on a schedule, on user log on, or at
+system startup. It can be configured to run as the current user, or
+for a mount command that needs to be available to all users it can run
+as the SYSTEM user. For technical information, see
+https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page.
+
+Run as service
+
+For running rclone at system startup, you can create a Windows service
+that executes your rclone command, as an alternative to a scheduled
+task configured to run at startup.
+
+Mount command built-in service integration
+
+For mount commands, Rclone has a built-in Windows service integration
+via the third party WinFsp library it uses. Registering as a regular
+Windows service is easy, as you just have to execute the built-in
+PowerShell command New-Service (requires administrative privileges).
+
+Example of a PowerShell command that creates a Windows service for
+mounting some remote:/files as drive letter X:, for _all_ users (service
+will be running as the local system account):
+
+ New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
+
+The WinFsp service infrastructure supports incorporating services for
+file system implementations, such as rclone, into its own launcher
+service, as a kind of "child service". This has the additional advantage
+that it also implements a network provider that integrates into Windows
+standard methods for managing network drives. This is currently not
+officially supported by Rclone, but with WinFsp version 2019.3 B2 /
+v1.5B2 or later it should be possible through path rewriting as
+described here.
+
+Third party service integration
+
+To set up a Windows service running any rclone command, the excellent
+third party utility NSSM, the "Non-Sucking Service Manager", can be
+used. It includes some advanced features such as adjusting process
+priority, defining process environment variables, redirecting anything
+written to stdout to a file, and customizing the response to different
+exit codes, with a GUI to configure everything from (although it can
+also be used from the command line).
+
+There are also several other alternatives. To mention one more, WinSW,
+"Windows Service Wrapper", is worth checking out. It requires .NET
+Framework, but it is preinstalled on newer versions of Windows, and it
+also provides alternative standalone distributions which include the
+necessary runtime (.NET 5). WinSW is a command-line only utility, where
+you have to manually create an XML file with service configuration. This
+may be a drawback for some, but it can also be an advantage as it is
+easy to back up and re-use the configuration settings, without having
+to go through manual steps in a GUI. One thing to note is that by
+default it does not restart the service on error; you have to
+explicitly enable this in the configuration file (via the "onfailure"
+parameter).
+
+
+Autostart on Linux
+
+Start as a service
+
+To always run rclone in background, relevant for mount commands etc, you
+can use systemd to set up rclone as a system or user service. Running as
+a system service ensures that it is run at startup even if the user it
+is running as has no active session. Running rclone as a user service
+ensures that it only starts after the configured user has logged into
+the system.
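+
+As a minimal sketch, a user service unit for a mount could be saved as
+~/.config/systemd/user/rclone-mount.service (the remote name, mount
+point and options here are only placeholders):
+
+    [Unit]
+    Description=rclone mount of remote:files
+    After=network-online.target
+
+    [Service]
+    Type=notify
+    ExecStart=/usr/bin/rclone mount remote:files %h/mnt/files
+    ExecStop=/bin/fusermount -u %h/mnt/files
+    Restart=on-failure
+
+    [Install]
+    WantedBy=default.target
+
+and enabled with systemctl --user enable --now rclone-mount.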
+
+Run periodically from cron
+
+To run a periodic command, such as a copy/sync, you can set up a cron
+job.
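+
+For example, a crontab entry (edit with crontab -e) that syncs every
+night at 03:30, with the paths and remote name as placeholders:
+
+    30 3 * * * /usr/bin/rclone sync /home/user/files remote:files --log-file /home/user/logs/sync.log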
+
+
Configure
First, you'll need to configure rclone. As the object storage systems
@@ -413,7 +571,6 @@ See the following for detailed instructions for
- Amazon S3
- Backblaze B2
- Box
-- Cache
- Chunker - transparently splits large files for other remotes
- Citrix ShareFile
- Compress
@@ -446,6 +603,7 @@ See the following for detailed instructions for
- SugarSync
- Tardigrade
- Union
+- Uptobox
- WebDAV
- Yandex Disk
- Zoho WorkDrive
@@ -510,7 +668,6 @@ SEE ALSO
- rclone config delete - Delete an existing remote name.
- rclone config disconnect - Disconnects user from remote
- rclone config dump - Dump the config file as JSON.
-- rclone config edit - Enter an interactive configuration session.
- rclone config file - Show path of configuration file in use.
- rclone config password - Update password in an existing remote.
- rclone config providers - List in JSON format all the providers and
@@ -518,6 +675,7 @@ SEE ALSO
- rclone config reconnect - Re-authenticates user with remote.
- rclone config show - Print (decrypted) config file, or the config
for a single remote.
+- rclone config touch - Ensure configuration file exists.
- rclone config update - Update options in an existing remote.
- rclone config userinfo - Prints info about logged in user of remote.
@@ -720,8 +878,8 @@ If you supply the --rmdirs flag, it will remove all empty directories
along with it. You can also use the separate command rmdir or rmdirs to
delete empty directories only.
-For example, to delete all files bigger than 100MBytes, you may first
-want to check what would be deleted (use either):
+For example, to delete all files bigger than 100 MiB, you may first want
+to check what would be deleted (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
@@ -730,8 +888,8 @@ Then proceed with the actual delete:
rclone --min-size 100M delete remote:path
-That reads "delete everything with a minimum size of 100 MB", hence
-delete all files bigger than 100MBytes.
+That reads "delete everything with a minimum size of 100 MiB", hence
+delete all files bigger than 100 MiB.
IMPORTANT: Since this can cause data loss, test first with the --dry-run
or the --interactive/-i flag.
@@ -856,6 +1014,9 @@ remotes and check them against each other on the fly. This can be useful
for remotes that don't support hashes or if you really want to check all
the data.
+If you supply the --checkfile HASH flag with a valid hash name, the
+source:path must point to a text file in the SUM format.
+
If you supply the --one-way flag, it will only check that files in the
source match the files in the destination, not the other way around.
This means that extra files in the destination that are not in the
@@ -887,6 +1048,7 @@ what happened to it. These are reminiscent of diff files.
Options
+ -C, --checkfile string Treat source:path as a SUM file with hashes of given type
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by downloading rather than with hash.
@@ -1114,6 +1276,7 @@ enabling MD5 for any remote.
Options
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for md5sum
--output-file string Output hashsums to a file rather than the terminal
@@ -1149,6 +1312,7 @@ enabling SHA-1 for any remote.
Options
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for sha1sum
--output-file string Output hashsums to a file rather than the terminal
@@ -1193,12 +1357,15 @@ Show the version number.
Synopsis
Show the rclone version number, the go version, the build target OS and
-architecture, build tags and the type of executable (static or dynamic).
+architecture, the runtime OS and kernel version and bitness, build tags
+and the type of executable (static or dynamic).
For example:
$ rclone version
- rclone v1.54
+ rclone v1.55.0
+ - os/version: ubuntu 18.04 (64 bit)
+ - os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16
@@ -1419,10 +1586,10 @@ Get quota information from the remote.
Synopsis
-rclone aboutprints quota information about a remote to standard output.
+rclone about prints quota information about a remote to standard output.
The output is typically used, free, quota and trash contents.
-E.g. Typical output fromrclone about remote:is:
+E.g. Typical output from rclone about remote: is:
Total: 17G
Used: 7.444G
@@ -1450,7 +1617,7 @@ Applying a --full flag to the command prints the bytes in full, e.g.
Trashed: 104857602
Other: 8849156022
-A --jsonflag generates conveniently computer readable output, e.g.
+A --json flag generates conveniently computer readable output, e.g.
{
"total": 18253611008,
@@ -1613,6 +1780,73 @@ SEE ALSO
+RCLONE CHECKSUM
+
+
+Checks the files in the source against a SUM file.
+
+
+Synopsis
+
+Checks that hashsums of source files match the SUM file. It compares
+hashes (MD5, SHA1, etc) and logs a report of files which don't match. It
+doesn't alter the file system.
+
+If you supply the --download flag, it will download the data from remote
+and calculate the contents hash on the fly. This can be useful for
+remotes that don't support hashes or if you really want to check all the
+data.
+
+If you supply the --one-way flag, it will only check that files in the
+source match the files in the destination, not the other way around.
+This means that extra files in the destination that are not in the
+source will not be detected.
+
+The --differ, --missing-on-dst, --missing-on-src, --match and --error
+flags write paths, one per line, to the file name (or stdout if it is -)
+supplied. What they write is described in the help below. For example
+--differ will write all paths which are present on both the source and
+destination but different.
+
+The --combined flag will write a file (or stdout) which contains all
+file paths with a symbol and then a space and then the path to tell you
+what happened to it. These are reminiscent of diff files.
+
+- = path means path was found in source and destination and was
+ identical
+- - path means path was missing on the source, so only in the
+ destination
+- + path means path was missing on the destination, so only in the
+ source
+- * path means path was present in source and destination but
+ different.
+- ! path means there was an error reading or hashing the source or
+ dest.
+
+ rclone checksum sumfile src:path [flags]
+
+
+Options
+
+ --combined string Make a combined report of changes to this file
+ --differ string Report all non-matching files to this file
+ --download Check by hashing the contents.
+ --error string Report all files with errors (hashing or reading) to this file
+ -h, --help help for checksum
+ --match string Report all matching files to this file
+ --missing-on-dst string Report all files missing from the destination to this file
+ --missing-on-src string Report all files missing from the source to this file
+ --one-way Check one way only, source files must exist on remote
+
+See the global flags page for global options not listed here.
+
+
+SEE ALSO
+
+- rclone - Show help for rclone commands, flags and backends.
+
+
+
RCLONE CONFIG CREATE
@@ -1622,16 +1856,23 @@ Create a new remote with name, type and options.
Synopsis
Create a new remote of name with type and options. The options should be
-passed in pairs of key value.
+passed in pairs of key value or as key=value.
For example to make a swift remote of name myremote using auto config
you would do:
rclone config create myremote swift env_auth true
+ rclone config create myremote swift env_auth=true
+
+So for example if you wanted to configure a Google Drive remote but
+using remote authorization you would do this:
+
+ rclone config create mydrive drive config_is_local=false
Note that if the config process would normally ask a question the
-default is taken. Each time that happens rclone will print a message
-saying how to affect the value taken.
+default is taken (unless --non-interactive is used). Each time that
+happens rclone will print or DEBUG a message saying how to affect the
+value taken.
If any of the parameters passed is a password field, then rclone will
automatically obscure them if they aren't already obscured before
@@ -1641,23 +1882,92 @@ NB If the password parameter is 22 characters or longer and consists
only of base64 characters then rclone can get confused about whether the
password is already obscured or not and put unobscured passwords into
the config file. If you want to be 100% certain that the passwords get
-obscured then use the "--obscure" flag, or if you are 100% certain you
-are already passing obscured passwords then use "--no-obscure". You can
-also set obscured passwords using the "rclone config password" command.
+obscured then use the --obscure flag, or if you are 100% certain you are
+already passing obscured passwords then use --no-obscure. You can also
+set obscured passwords using the rclone config password command.
-So for example if you wanted to configure a Google Drive remote but
-using remote authorization you would do this:
+The flag --non-interactive is for use by applications that wish to
+configure rclone themselves, rather than using rclone's text-based
+configuration questions. If this flag is set, and rclone needs to ask
+the user a question, a JSON blob will be returned with the question in
+it.
- rclone config create mydrive drive config_is_local false
+This will look something like (some irrelevant detail removed):
+
+ {
+ "State": "*oauth-islocal,teamdrive,,",
+ "Option": {
+ "Name": "config_is_local",
+ "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
+ "Default": true,
+ "Examples": [
+ {
+ "Value": "true",
+ "Help": "Yes"
+ },
+ {
+ "Value": "false",
+ "Help": "No"
+ }
+ ],
+ "Required": false,
+ "IsPassword": false,
+ "Type": "bool",
+ "Exclusive": true,
+ },
+ "Error": "",
+ }
+
+The format of Option is the same as returned by rclone config providers.
+The question should be asked to the user and returned to rclone as the
+--result option along with the --state parameter.
+
+The keys of Option are used as follows:
+
+- Name - name of variable - show to user
+- Help - help text. Hard wrapped at 80 chars. Any URLs should be
+ clicky.
+- Default - default value - return this if the user just wants the
+ default.
+- Examples - the user should be able to choose one of these
+- Required - the value should be non-empty
+- IsPassword - the value is a password and should be edited as such
+- Type - type of value, eg bool, string, int and others
+- Exclusive - if set no free-form entry allowed only the Examples
+- Irrelevant keys: Provider, ShortOpt, Hide, NoPrefix, Advanced
+
+If Error is set then it should be shown to the user at the same time as
+the question.
+
+ rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+
+Note that when using --continue all passwords should be passed in the
+clear (not obscured). Any default config values should be passed in with
+each invocation of --continue.
+
+At the end of the non interactive process, rclone will return a result
+with State as empty string.
+
+If --all is passed then rclone will ask all the config questions, not
+just the post config questions. Any parameters are used as defaults for
+questions as usual.
+
+Note that bin/config.py in the rclone source implements this protocol as
+a readable demonstration.
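The question/answer exchange above can be sketched in Python. Note this is an illustrative stand-in, not rclone's own code; `answer_question` and the canned blob are invented for the example:

```python
import json

def answer_question(blob, choose):
    """Parse one --non-interactive question blob and produce the
    --state/--result pair for the next --continue invocation."""
    q = json.loads(blob)
    if q["Error"]:
        print(q["Error"])             # errors are shown with the question
    opt = q["Option"]
    answer = choose(opt)              # e.g. prompt the user
    if answer is None:
        answer = opt["Default"]       # no input means take the default
    if isinstance(answer, bool):
        answer = str(answer).lower()  # rclone expects "true"/"false"
    return q["State"], str(answer)

# A canned question, trimmed like the example above
blob = json.dumps({
    "State": "*oauth-islocal,teamdrive,,",
    "Option": {"Name": "config_is_local", "Default": True,
               "Examples": [{"Value": "true"}, {"Value": "false"}]},
    "Error": "",
})
state, result = answer_question(blob, lambda opt: None)
# pass these back with: --continue --state <state> --result <result>
```

The driving application loops like this until rclone returns an empty `State`.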
rclone config create `name` `type` [`key` `value`]* [flags]
Options
- -h, --help help for create
- --no-obscure Force any passwords not to be obscured.
- --obscure Force any passwords to be obscured.
+ --all Ask the full set of config questions.
+ --continue Continue the configuration process with an answer.
+ -h, --help help for create
+ --no-obscure Force any passwords not to be obscured.
+ --non-interactive Don't interact with user and return questions.
+ --obscure Force any passwords to be obscured.
+ --result string Result - use with --continue.
+ --state string State - use with --continue.
See the global flags page for global options not listed here.
@@ -1746,7 +2056,9 @@ RCLONE CONFIG EDIT
Enter an interactive configuration session.
-Synopsis
+
+SYNOPSIS
+
Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a password
@@ -1755,15 +2067,19 @@ to protect your configuration.
rclone config edit [flags]
-Options
+
+OPTIONS
+
-h, --help help for edit
See the global flags page for global options not listed here.
+
SEE ALSO
+
- rclone config - Enter an interactive configuration session.
@@ -1798,11 +2114,13 @@ Update password in an existing remote.
Synopsis
Update an existing remote's password. The password should be passed in
-pairs of key value.
+pairs of key password or as key=password. The password should be passed
+in the clear (unobscured).
For example to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
+ rclone config password myremote fieldname=mypassword
This command is obsolete now that "config update" and "config create"
both support obscuring passwords directly.
@@ -1895,6 +2213,27 @@ SEE ALSO
+RCLONE CONFIG TOUCH
+
+
+Ensure configuration file exists.
+
+ rclone config touch [flags]
+
+
+Options
+
+ -h, --help help for touch
+
+See the global flags page for global options not listed here.
+
+
+SEE ALSO
+
+- rclone config - Enter an interactive configuration session.
+
+
+
RCLONE CONFIG UPDATE
@@ -1903,13 +2242,24 @@ Update options in an existing remote.
Synopsis
-Update an existing remote's options. The options should be passed in in
-pairs of key value.
+Update an existing remote's options. The options should be passed in
+pairs of key value or as key=value.
For example to update the env_auth field of a remote of name myremote
you would do:
- rclone config update myremote swift env_auth true
+ rclone config update myremote env_auth true
+ rclone config update myremote env_auth=true
+
+If the remote uses OAuth the token will be updated. If you don't
+require this, add an extra parameter thus:
+
+ rclone config update myremote env_auth=true config_refresh_token=false
+
+Note that if the config process would normally ask a question the
+default is taken (unless --non-interactive is used). Each time that
+happens rclone will print or DEBUG a message saying how to affect the
+value taken.
If any of the parameters passed is a password field, then rclone will
automatically obscure them if they aren't already obscured before
@@ -1919,23 +2269,92 @@ NB If the password parameter is 22 characters or longer and consists
only of base64 characters then rclone can get confused about whether the
password is already obscured or not and put unobscured passwords into
the config file. If you want to be 100% certain that the passwords get
-obscured then use the "--obscure" flag, or if you are 100% certain you
-are already passing obscured passwords then use "--no-obscure". You can
-also set obscured passwords using the "rclone config password" command.
+obscured then use the --obscure flag, or if you are 100% certain you are
+already passing obscured passwords then use --no-obscure. You can also
+set obscured passwords using the rclone config password command.
-If the remote uses OAuth the token will be updated, if you don't require
-this add an extra parameter thus:
+The flag --non-interactive is for use by applications that wish to
+configure rclone themselves, rather than using rclone's text-based
+configuration questions. If this flag is set, and rclone needs to ask
+the user a question, a JSON blob will be returned with the question in
+it.
- rclone config update myremote swift env_auth true config_refresh_token false
+This will look something like (some irrelevant detail removed):
+
+ {
+ "State": "*oauth-islocal,teamdrive,,",
+ "Option": {
+ "Name": "config_is_local",
+ "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
+ "Default": true,
+ "Examples": [
+ {
+ "Value": "true",
+ "Help": "Yes"
+ },
+ {
+ "Value": "false",
+ "Help": "No"
+ }
+ ],
+ "Required": false,
+ "IsPassword": false,
+ "Type": "bool",
+ "Exclusive": true,
+ },
+ "Error": "",
+ }
+
+The format of Option is the same as returned by rclone config providers.
+The question should be asked to the user and returned to rclone as the
+--result option along with the --state parameter.
+
+The keys of Option are used as follows:
+
+- Name - name of variable - show to user
+- Help - help text. Hard wrapped at 80 chars. Any URLs should be
+ clicky.
+- Default - default value - return this if the user just wants the
+ default.
+- Examples - the user should be able to choose one of these
+- Required - the value should be non-empty
+- IsPassword - the value is a password and should be edited as such
+- Type - type of value, eg bool, string, int and others
+- Exclusive - if set no free-form entry allowed only the Examples
+- Irrelevant keys: Provider, ShortOpt, Hide, NoPrefix, Advanced
+
+If Error is set then it should be shown to the user at the same time as
+the question.
+
+ rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+
+Note that when using --continue all passwords should be passed in the
+clear (not obscured). Any default config values should be passed in with
+each invocation of --continue.
+
+At the end of the non interactive process, rclone will return a result
+with State as empty string.
+
+If --all is passed then rclone will ask all the config questions, not
+just the post config questions. Any parameters are used as defaults for
+questions as usual.
+
+Note that bin/config.py in the rclone source implements this protocol as
+a readable demonstration.
rclone config update `name` [`key` `value`]+ [flags]
Options
- -h, --help help for update
- --no-obscure Force any passwords not to be obscured.
- --obscure Force any passwords to be obscured.
+ --all Ask the full set of config questions.
+ --continue Continue the configuration process with an answer.
+ -h, --help help for update
+ --no-obscure Force any passwords not to be obscured.
+ --non-interactive Don't interact with user and return questions.
+ --obscure Force any passwords to be obscured.
+ --result string Result - use with --continue.
+ --state string State - use with --continue.
See the global flags page for global options not listed here.
@@ -2036,9 +2455,9 @@ Synopsis
Download a URL's content and copy it to the destination without saving
it in temporary storage.
-Setting --auto-filenamewill cause the file name to be retrieved from the
-from URL (after any redirections) and used in the destination path. With
---print-filename in addition, the resuling file name will be printed.
+Setting --auto-filename will cause the file name to be retrieved from
+the URL (after any redirections) and used in the destination path. With
+--print-filename in addition, the resulting file name will be printed.
Setting --no-clobber will prevent overwriting file on the destination if
there is one with the same name.
@@ -2414,21 +2833,27 @@ Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum
Supported hashes are:
- * MD5
- * SHA-1
- * DropboxHash
- * QuickXorHash
+ * md5
+ * sha1
+ * whirlpool
+ * crc32
+ * dropbox
+ * mailru
+ * quickxor
Then
$ rclone hashsum MD5 remote:path
+Note that hash names are case insensitive.
+
rclone hashsum remote:path [flags]
Options
--base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for hashsum
--output-file string Output hashsums to a file rather than the terminal
@@ -2477,7 +2902,7 @@ protection, accessible without account.
Options
- --expire Duration The amount of time that the link will be valid (default 100y)
+ --expire Duration The amount of time that the link will be valid (default off)
-h, --help help for link
--unlink Remove existing public link to file/folder
@@ -2653,7 +3078,7 @@ Options
--dirs-only Only list directories.
--files-only Only list files.
-F, --format string Output format - see help for details (default "p")
- --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
-h, --help help for lsf
-R, --recursive Recurse into the listing.
-s, --separator string Separator for the items in the format. (default ";")
@@ -2833,9 +3258,9 @@ manually.
The size of the mounted file system will be set according to information
retrieved from the remote, the same as returned by the rclone about
command. Remotes with unlimited storage may report the used size only,
-then an additional 1PB of free space is assumed. If the remote does not
-support the about feature at all, then 1PB is set as both the total and
-the free size.
+then an additional 1 PiB of free space is assumed. If the remote does
+not support the about feature at all, then 1 PiB is set as both the
+total and the free size.
NOTE: As of rclone 1.52.2, rclone mount now requires Go version 1.13 or
newer on some platforms depending on the underlying FUSE library in use.
@@ -2968,26 +3393,44 @@ One case that may arise is that other programs (incorrectly) interprets
this as the file being accessible by everyone. For example an SSH client
may warn about "unprotected private key file".
-WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option
-"FileSecurity", that allows the complete specification of file security
-descriptors using SDDL. With this you can work around issues such as the
-mentioned "unprotected private key file" by specifying
+WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
+that allows the complete specification of file security descriptors
+using SDDL. With this you can work around issues such as the mentioned
+"unprotected private key file" by specifying
-o FileSecurity="D:P(A;;FA;;;OW)", for file all access (FA) to the owner
(OW).
Windows caveats
-Note that drives created as Administrator are not visible by other
-accounts (including the account that was elevated as Administrator). So
-if you start a Windows drive from an Administrative Command Prompt and
-then try to access the same drive from Explorer (which does not run as
-Administrator), you will not be able to see the new drive.
+Drives created as Administrator are not visible to other accounts, not
+even an account that was elevated to Administrator with the User Account
+Control (UAC) feature. A result of this is that if you mount to a drive
+letter from a Command Prompt run as Administrator, and then try to
+access the same drive from Windows Explorer (which does not run as
+Administrator), you will not be able to see the mounted drive.
-The easiest way around this is to start the drive from a normal command
-prompt. It is also possible to start a drive from the SYSTEM account
-(using the WinFsp.Launcher infrastructure) which creates drives
-accessible for everyone on the system or alternatively using the nssm
-service manager.
+If you don't need to access the drive from applications running with
+administrative privileges, the easiest way around this is to always
+create the mount from a non-elevated command prompt.
+
+To make mapped drives available to the user account that created them
+regardless if elevated or not, there is a special Windows setting called
+linked connections that can be enabled.
+
+It is also possible to make a drive mount available to everyone on the
+system, by running the process creating it as the built-in SYSTEM
+account. There are several ways to do this: One is to use the
+command-line utility PsExec, from Microsoft's Sysinternals suite, which
+has option -s to start processes as the SYSTEM account. Another
+alternative is to run the mount command from a Windows Scheduled Task,
+or a Windows Service, configured to run as the SYSTEM account. A third
+alternative is to use the WinFsp.Launcher infrastructure. Note that
+when running rclone as another user, it will not use the configuration
+file from your profile unless you tell it to with the --config option.
+Read more in the install documentation.
+
+Note that mapping to a directory path, instead of a drive letter, does
+not suffer from the same limitations.
Limitations
@@ -3104,7 +3547,7 @@ Changes made through the mount will appear immediately or invalidate the
cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface
or a different copy of rclone will only be picked up once the directory
@@ -3372,7 +3815,7 @@ Options
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for mount
- --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128k)
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
@@ -3383,14 +3826,14 @@ Options
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows.
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -3668,6 +4111,14 @@ before that. The data must fit into RAM. The cutoff needs to be small
enough to adhere the limits of your remote, please see there. Generally
speaking, setting this cutoff too high will decrease your performance.
+Use the --size flag to preallocate the file in advance at the remote
+end and actually stream it, even if the remote backend doesn't support
+streaming.
+
+--size should be the exact size of the input stream in bytes. If the
+size of the stream differs from the --size passed in then the transfer
+will likely fail.
+
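The size check can be sketched like this; `rcat` here is a hypothetical stand-in that counts bytes instead of uploading:

```python
import io

def rcat(stream, size_hint):
    """Copy `stream`, verifying it matches the declared --size.
    Returns the bytes copied; raises if the hint was wrong."""
    copied = 0
    while chunk := stream.read(64 * 1024):
        copied += len(chunk)  # a real upload would send the chunk here
    if size_hint >= 0 and copied != size_hint:
        raise ValueError(f"declared {size_hint} bytes but read {copied}")
    return copied

# A correct hint succeeds; a wrong one fails after the data is consumed,
# which is why a mismatched --size likely fails the whole transfer.
copied = rcat(io.BytesIO(b"x" * 1000), 1000)
```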
Note that the upload can also not be retried because the data is not
kept around until the upload succeeds. If you need to transfer a lot of
data, you're better off caching locally and then rclone move it to the
@@ -3678,7 +4129,8 @@ destination.
Options
- -h, --help help for rcat
+ -h, --help help for rcat
+ --size int File size hint to preallocate (default -1)
See the global flags page for global options not listed here.
@@ -3797,7 +4249,7 @@ and digits (for example v1.54.0) then it's a stable release so you won't
need the --beta flag. Beta releases have an additional information
similar to v1.54.0-beta.5111.06f1c0c61. (if you are a developer and use
a locally built rclone, the version number will end with -DEV, you will
-have to rebuild it as it obvisously can't be distributed).
+have to rebuild it as it obviously can't be distributed).
If you previously installed rclone via a package manager, the package
may include local documentation or configure services. You may wish to
@@ -3869,6 +4321,8 @@ SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
- rclone serve dlna - Serve remote:path over DLNA
+- rclone serve docker - Serve any remote on docker's volume plugin
+ API.
- rclone serve ftp - Serve remote:path over FTP.
- rclone serve http - Serve the remote over HTTP.
- rclone serve restic - Serve the remote for restic's REST API.
@@ -3932,7 +4386,7 @@ Changes made through the mount will appear immediately or invalidate the
cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface
or a different copy of rclone will only be picked up once the directory
@@ -4206,7 +4660,7 @@ Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -4222,6 +4676,385 @@ SEE ALSO
+RCLONE SERVE DOCKER
+
+
+Serve any remote on docker's volume plugin API.
+
+
+Synopsis
+
+This command implements the Docker volume plugin API, allowing docker
+to use rclone as a data storage mechanism for various cloud providers.
+rclone provides a docker volume plugin based on it.
+
+To create a docker plugin, one must create a Unix or TCP socket that
+Docker will look for when you use the plugin. The plugin then listens
+for commands from the docker daemon and runs the corresponding code
+when necessary. Docker plugins can run as a managed plugin under the
+control of the docker daemon or as an independent native service. For
+testing, you can just run it directly from the command line, for
+example:
+
+ sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
+
+Running rclone serve docker will create that socket, listening for
+commands from Docker to create the necessary Volumes. Normally you need
+not give the --socket-addr flag. The API will listen on the unix domain
+socket at /run/docker/plugins/rclone.sock. In the example above rclone
+will create a TCP socket and a small file
+/etc/docker/plugins/rclone.spec containing the socket address. We use
+sudo because both paths are writeable only by the root user.
+
+If you later decide to change the listening socket, the docker daemon
+must be restarted to reconnect to /run/docker/plugins/rclone.sock or
+parse the new /etc/docker/plugins/rclone.spec. Until you restart, any
+volume related docker commands will time out trying to access the old
+socket. Running directly is supported on LINUX ONLY, not on Windows or
+macOS. This is not a problem with managed plugin mode, described in
+detail in the full documentation.
+
+The command will create volume mounts under the path given by --base-dir
+(by default /var/lib/docker-volumes/rclone available only to root) and
+maintain the JSON formatted file docker-plugin.state in the rclone cache
+directory with book-keeping records of created and mounted volumes.
+
+All mount and VFS options are submitted by the docker daemon via API,
+but you can also provide defaults on the command line as well as set
+path to the config file and cache directory or adjust logging verbosity.
+
+
+VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk filing
+system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the VFS
+layer has to deal with that. Because there is no one right way of doing
+this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info about
+files and directories (but not the data) in memory.
+
+
+VFS Directory Cache
+
+Using the --dir-cache-time flag, you can control how long a directory
+should be considered up to date and not refreshed from the backend.
+Changes made through the mount will appear immediately or invalidate the
+cache.
+
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+
+However, changes made directly on the cloud storage by the web interface
+or a different copy of rclone will only be picked up once the directory
+cache expires if the backend configured does not support polling for
+changes. If the backend supports polling, changes will be picked up
+within the polling interval.
+
+You can send a SIGHUP signal to rclone for it to flush all directory
+caches, regardless of how old they are. Assuming only one rclone
+instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+
+VFS File Buffering
+
+The --buffer-size flag determines the amount of memory that will be
+used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the used memory per open file. The
+buffer will only use memory for data that is downloaded but not yet
+read. If the buffer is empty, only a small amount of memory will be
+used.
+
+The maximum memory used by rclone for buffering can be up to
+--buffer-size * open files.
+
+
+VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+and if they haven't been accessed for --vfs-write-back seconds. If rclone
+is quit or dies with files that haven't been uploaded, these will be
+uploaded next time rclone is run with the same flags.
+
+If using --vfs-cache-max-size note that the cache may exceed this size
+for two reasons. Firstly because it is only checked every
+--vfs-cache-poll-interval. Secondly because open files cannot be evicted
+from the cache.
+
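Why the cache can overshoot the budget can be pictured with a short sketch. The data structures here are invented for illustration and are not rclone's implementation:

```python
def evict(cache, max_size, open_files):
    """Drop least-recently-used closed files until under max_size.
    Open files are never evicted, so the cache can stay over budget
    between --vfs-cache-poll-interval runs."""
    total = sum(size for size, _ in cache.values())
    # walk entries oldest access time first
    for name, (size, atime) in sorted(cache.items(), key=lambda kv: kv[1][1]):
        if total <= max_size:
            break
        if name in open_files:
            continue  # cannot evict a file still in use
        del cache[name]
        total -= size
    return total

cache = {"a": (100, 1), "b": (200, 2), "c": (300, 3)}  # name: (size, atime)
left = evict(cache, max_size=250, open_files={"c"})
# "a" and "b" are evicted, but "c" is open: 300 bytes stay cached, over budget
```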
+You SHOULD NOT run two copies of rclone using the same VFS cache with
+the same or overlapping remotes if using --vfs-cache-mode > off. This
+can potentially cause data corruption if you do. You can work around
+this by giving each rclone its own cache hierarchy with --cache-dir. You
+don't need to worry about this if the remotes in use don't overlap.
+
+--vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote
+and write directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for write
+will be a lot more compatible, while using minimal disk space.
+
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+
+--vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone will
+keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
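One way to picture the bookkeeping is as a merged list of downloaded byte ranges. This is a simplified sketch, not rclone's actual data structure:

```python
def add_range(ranges, start, end):
    """Record [start, end) as downloaded, merging overlapping
    or adjacent ranges."""
    merged = []
    for s, e in sorted(ranges + [(start, end)]):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def have(ranges, start, end):
    """True if every byte of [start, end) is already on disk,
    i.e. the read can be served from the sparse cache file."""
    return any(s <= start and end <= e for s, e in ranges)

ranges = add_range([], 0, 4096)          # application read the file start
ranges = add_range(ranges, 4096, 8192)   # the next chunk arrives
# the two reads merge into a single downloaded range (0, 8192)
```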
+This mode should support all normal file system operations and is
+otherwise identical to --vfs-cache-mode writes.
+
+When reading a file rclone will read --buffer-size plus --vfs-read-ahead
+bytes ahead. The --buffer-size is buffered in memory whereas the
+--vfs-read-ahead is buffered on disk.
+
+When using this mode it is recommended that --buffer-size is not set too
+big and --vfs-read-ahead is set large if required.
+
+IMPORTANT not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache directory
+is on a filesystem which doesn't support sparse files and it will log an
+ERROR message if one is detected.
+
+
+VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons.
+
+In particular S3 and Swift benefit hugely from the --no-modtime flag (or
+use --use-server-modtime for a slightly different effect) as each read
+of the modification time takes a transaction.
+
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Mount read-only.
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the chunk
+specified. This is advantageous because some cloud providers account for
+reads being all the data requested, not all the data delivered.
+
+Rclone will keep doubling the chunk size requested starting at
+--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
+unless it is set to "off", in which case there will be no limit.
+
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
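As an illustration of the doubling behaviour, this sketch prints the sequence of chunked requests rclone would issue when reading a 1 GiB file sequentially with the default --vfs-read-chunk-size 128M and no limit (sizes in MiB; a simplified model, not rclone's actual code):

```shell
# Each request doubles the previous chunk size until the file is read.
size=1024 chunk=128 offset=0
while [ "$offset" -lt "$size" ]; do
  echo "request ${chunk}M at offset ${offset}M"
  offset=$((offset + chunk))
  chunk=$((chunk * 2))
done
```

For a 1 GiB file this works out to four requests of 128M, 256M, 512M and 1024M.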
+
+Sometimes rclone is delivered reads or writes out of order. Rather than
+seeking rclone will wait a short time for the in sequence read or write
+to come in. These flags only come into effect when not using an on disk
+cache file.
+
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+
+When using VFS write caching (--vfs-cache-mode with value writes or
+full), the global flag --transfers can be set to adjust the number of
+parallel uploads of modified files from cache (the related global flag
+--checkers has no effect on mount).
+
+ --transfers int Number of file transfers to run in parallel. (default 4)
+
+
+VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only by
+case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case
+used to create the file is preserved and available for programs to
+query. It is not allowed for two files in the same directory to differ
+only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to
+make macOS file systems case-sensitive, but that is not the default.
+
+The --vfs-case-insensitive mount flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the
+mounted file system as-is. If the flag is "true" (or appears without a
+value on command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different from what is stored on the mounted file system. If an argument
+refers to an existing file with exactly the same name, then the case of
+the existing file on the disk will be used. However, if a file name with
+exactly the same name is not found but a name differing only by case
+exists, rclone will transparently fixup the name. This fixup happens
+only when an existing file is requested. Case sensitivity of file names
+created anew by rclone is controlled by the underlying mounted file
+system.
+
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system mounted by
+rclone (the source). The flag controls whether "fixup" is performed to
+satisfy the target.
+
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: "true" on Windows and
+macOS, "false" otherwise. If the flag is provided without a value, then
+it is "true".
+
+
+Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running df on the
+filesystem, then pass the flag --vfs-used-is-size to rclone. With this
+flag set, instead of relying on the backend to report this information,
+rclone will scan the whole remote similar to rclone size and compute the
+total used space itself.
+
+_WARNING._ Contrary to rclone size, this flag ignores filters so that
+the result is accurate. However, this is very inefficient and may cost
+lots of API calls resulting in extra charges. Use it as a last resort
+and only with caching.
+
+ rclone serve docker [flags]
+
+
+Options
+
+ --allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
+ --allow-other Allow access to other users. Not supported on Windows.
+ --allow-root Allow access to root user. Not supported on Windows.
+ --async-read Use asynchronous reads. Not supported on Windows. (default true)
+ --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
+ --base-dir string base directory for volumes (default "/var/lib/docker-volumes/rclone")
+ --daemon Run mount as a daemon (background mode). Not supported on Windows.
+ --daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --forget-state skip restoring previous state
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ -h, --help help for docker
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
+ --network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --no-spec do not write spec file
+ --noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
+ --noapplexattr Ignore all "com.apple.*" extended attributes. Supported on OSX only.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
+ --socket-gid int GID for unix socket (default: current process GID) (default 1000)
+ --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match.
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
+ --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --volname string Set the volume name. Supported on Windows and OSX only.
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
+
+See the global flags page for global options not listed here.
+
+
+SEE ALSO
+
+- rclone serve - Serve a remote over a protocol.
+
+
+
RCLONE SERVE FTP
@@ -4276,7 +5109,7 @@ Changes made through the mount will appear immediately or invalidate the
cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface
or a different copy of rclone will only be picked up once the directory
@@ -4627,7 +5460,7 @@ Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -4667,7 +5500,7 @@ the stats printing.
Server options
Use --addr to specify which IP address and port the server should listen
-on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
+on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
default it only listens on localhost. You can use port :0 to let the OS
choose an available port.
@@ -4688,9 +5521,19 @@ if you wish to proxy rclone serve. Rclone automatically inserts leading
and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl
"/rclone" and --baseurl "/rclone/" are all treated identically.
---template allows a user to specify a custom markup template for http
-and webdav serve functions. The server exports the following markup to
-be used within the template to server pages:
+SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you wish
+to do client side certificate validation then you will need to supply
+--client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded private
+key and --client-ca should be the PEM encoded client certificate
+authority certificate. --template allows a user to specify a custom
+markup template for http and webdav serve functions. The server exports
+the following markup to be used within the template to serve pages:
-----------------------------------------------------------------------
Parameter Description
@@ -4759,18 +5602,6 @@ The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
-SSL/TLS
-
-By default this will serve over http. If you want you can serve over
-https. You will need to supply the --cert and --key flags. If you wish
-to do client side certificate validation then you will need to supply
---client-ca also.
-
---cert should be either a PEM encoded certificate or a concatenation of
-that with the CA certificate. --key should be the PEM encoded private
-key and --client-ca should be the PEM encoded client certificate
-authority certificate.
-
VFS - Virtual File System
@@ -4795,7 +5626,7 @@ Changes made through the mount will appear immediately or invalidate the
cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface
or a different copy of rclone will only be picked up once the directory
@@ -5048,7 +5879,7 @@ and only with caching.
Options
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --addr string IPaddress:Port or :Port to bind server to. (default "127.0.0.1:8080")
--baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
@@ -5066,7 +5897,7 @@ Options
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
- --realm string realm for authentication (default "rclone")
+ --realm string realm for authentication
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
@@ -5079,7 +5910,7 @@ Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -5364,6 +6195,11 @@ reachable externally then supply "--addr :2022" for example.
Note that the default of "--vfs-cache-mode off" is fine for the rclone
sftp backend, but it may not be with other SFTP clients.
+If --stdio is specified, rclone will serve SFTP over stdio, which can be
+used with sshd via ~/.ssh/authorized_keys, for example:
+
+ restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
+
VFS - Virtual File System
@@ -5388,7 +6224,7 @@ Changes made through the mount will appear immediately or invalidate the
cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface
or a different copy of rclone will only be picked up once the directory
@@ -5729,6 +6565,7 @@ Options
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
@@ -5738,7 +6575,7 @@ Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -5913,7 +6750,7 @@ Changes made through the mount will appear immediately or invalidate the
cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --poll-interval duration Time to wait between polling for changes.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web interface
or a different copy of rclone will only be picked up once the directory
@@ -6272,7 +7109,7 @@ Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
@@ -6365,15 +7202,39 @@ See the global flags page for global options not listed here.
SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
+- rclone test changenotify - Log any change notify requests for the
+ remote passed in.
- rclone test histogram - Makes a histogram of file name characters.
- rclone test info - Discovers file name or other limitations for
paths.
-- rclone test makefiles - Make a random file hierarchy in
+- rclone test makefiles - Make a random file hierarchy in a directory
- rclone test memory - Load all the objects at remote:path into memory
and report memory stats.
+RCLONE TEST CHANGENOTIFY
+
+
+Log any change notify requests for the remote passed in.
+
+ rclone test changenotify remote: [flags]
+
+
+Options
+
+ -h, --help help for changenotify
+ --poll-interval duration Time to wait between polling for changes. (default 10s)
+
+See the global flags page for global options not listed here.
+
+
+SEE ALSO
+
+- rclone test - Run a test command
+
+
+
RCLONE TEST HISTOGRAM
@@ -6445,7 +7306,8 @@ SEE ALSO
RCLONE TEST MAKEFILES
-Make a random file hierarchy in
+Make a random file hierarchy in a directory
+
rclone test makefiles [flags]
@@ -6458,6 +7320,7 @@ Options
--max-name-length int Maximum size of file names (default 12)
--min-file-size SizeSuffix Minimum size of file to create
--min-name-length int Minimum size of file names (default 4)
+ --seed int Seed for the random number generator (0 for random) (default 1)
See the global flags page for global options not listed here.
@@ -6701,7 +7564,7 @@ with the on the fly syntax. This example is equivalent to adding the
rclone lsf "gdrive,shared_with_me:path/to/dir"
The major advantage to using the connection string style syntax is that
-it only applies the the remote, not to all the remotes of that type of
+it only applies to the remote, not to all the remotes of that type of
the command line. A common confusion is this attempt to copy a file
shared on google drive to the normal drive which DOES NOT WORK because
the --drive-shared-with-me flag applies to both the source and the
@@ -6713,6 +7576,13 @@ However using the connection string syntax, this does work.
rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:
+Note that the connection string only affects the options of the
+immediate backend. If for example gdriveCrypt is a crypt based on
+gdrive, then the following command WILL NOT WORK as intended, because
+shared_with_me is ignored by the crypt backend:
+
+ rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:
+
The connection strings have the following syntax
remote,parameter=value,parameter2=value2:path/to/dir
@@ -6891,10 +7761,10 @@ possibly signed sequence of decimal numbers, each with optional fraction
and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units
are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-Options which use SIZE use kByte by default. However, a suffix of b for
-bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for
-PBytes may be used. These are the binary units, e.g. 1, 2**10, 2**20,
-2**30 respectively.
+Options which use SIZE use KiByte (multiples of 1024 bytes) by default.
+However, a suffix of B for Byte, K for KiByte, M for MiByte, G for
+GiByte, T for TiByte and P for PiByte may be used. These are the binary
+units, e.g. 1, 2**10, 2**20, 2**30 respectively.
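The binary multipliers behind these suffixes can be spelled out with plain shell arithmetic (a sketch; K, M and G shown — the T and P suffixes continue the same pattern):

```shell
# rclone SIZE suffixes use powers of 1024, not 1000.
echo "1K = $((1024)) bytes"
echo "1M = $((1024 * 1024)) bytes"
echo "1G = $((1024 * 1024 * 1024)) bytes"
```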
--backup-dir=DIR
@@ -6936,23 +7806,23 @@ This option controls the bandwidth limit. For example
--bwlimit 10M
-would mean limit the upload and download bandwidth to 10 MByte/s. NB
+would mean limit the upload and download bandwidth to 10 MiByte/s. NB
this is BYTES per second not BITS per second. To use a single limit,
-specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The
-default is 0 which means to not limit bandwidth.
+specify the desired bandwidth in KiByte/s, or use a suffix B|K|M|G|T|P.
+The default is 0 which means to not limit bandwidth.
The upload and download bandwidth can be specified separately, as
--bwlimit UP:DOWN, so
--bwlimit 10M:100k
-would mean limit the upload bandwidth to 10 MByte/s and the download
-bandwidth to 100 kByte/s. Either limit can be "off" meaning no limit, so
-to just limit the upload bandwidth you would use
+would mean limit the upload bandwidth to 10 MiByte/s and the download
+bandwidth to 100 KiByte/s. Either limit can be "off" meaning no limit,
+so to just limit the upload bandwidth you would use
--bwlimit 10M:off
-this would limit the upload bandwidth to 10MByte/s but the download
+this would limit the upload bandwidth to 10 MiByte/s but the download
bandwidth would be unlimited.
When specified as above the bandwidth limits last for the duration of
@@ -6975,20 +7845,20 @@ daytime working hours could be:
--bwlimit "08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off"
-In this example, the transfer bandwidth will be set to 512kBytes/sec at
-8am every day. At noon, it will rise to 10MByte/s, and drop back to
-512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to
-30MByte/s, and at 11pm it will be completely disabled (full speed).
+In this example, the transfer bandwidth will be set to 512 KiByte/s at
+8am every day. At noon, it will rise to 10 MiByte/s, and drop back to
+512 KiByte/s at 1pm. At 6pm, the bandwidth limit will be set to 30
+MiByte/s, and at 11pm it will be completely disabled (full speed).
Anything between 11pm and 8am will remain unlimited.
An example of timetable with WEEKDAY could be:
--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"
-It means that, the transfer bandwidth will be set to 512kBytes/sec on
-Monday. It will rise to 10MByte/s before the end of Friday. At 10:00 on
-Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will be
-unlimited.
+It means that the transfer bandwidth will be set to 512 KiByte/s on
+Monday. It will rise to 10 MiByte/s before the end of Friday. At 10:00
+on Saturday it will be set to 1 MiByte/s. From 20:00 on Sunday it will
+be unlimited.
Timeslots without WEEKDAY are extended to the whole week. So this
example:
@@ -7003,10 +7873,10 @@ Bandwidth limit apply to the data transfer for all backends. For most
backends the directory listing bandwidth is also included (exceptions
being the non HTTP backends, ftp, sftp and tardigrade).
-Note that the units are BYTES/S, not BITS/S. Typically connections are
-measured in Bits/s - to convert divide by 8. For example, let's say you
+Note that the units are BYTE/S, not BIT/S. Typically connections are
+measured in bit/s - to convert divide by 8. For example, let's say you
have a 10 Mbit/s connection and you wish rclone to use half of it - 5
-Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
+Mbit/s. This is 5/8 = 0.625 MiByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
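The divide-by-8 conversion can be sketched as a one-liner (awk is used for the fractional arithmetic; the 5 Mbit/s figure is just the example above):

```shell
# Convert a bandwidth budget in Mbit/s to the MByte/s value for --bwlimit.
mbit=5
awk -v m="$mbit" 'BEGIN { printf "--bwlimit %.3fM\n", m / 8 }'
# prints: --bwlimit 0.625M
```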
On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled
@@ -7028,7 +7898,7 @@ the bwlimit dynamically:
This option controls per file bandwidth limit. For the options see the
--bwlimit flag.
-For example use this to allow no transfers to be faster than 1MByte/s
+For example use this to allow no transfers to be faster than 1 MiByte/s
--bwlimit-file 1M
@@ -7108,25 +7978,56 @@ See --copy-dest and --backup-dir.
--config=CONFIG_FILE
-Specify the location of the rclone configuration file.
+Specify the location of the rclone configuration file, to override the
+default. E.g. rclone config --config="rclone.conf".
-Normally the config file is in your home directory as a file called
-.config/rclone/rclone.conf (or .rclone.conf if created with an older
-version). If $XDG_CONFIG_HOME is set it will be at
-$XDG_CONFIG_HOME/rclone/rclone.conf.
+The exact default is a bit complex to describe, due to changes
+introduced through different versions of rclone while preserving
+backwards compatibility, but in most cases it is as simple as:
-If there is a file rclone.conf in the same directory as the rclone
-executable it will be preferred. This file must be created manually for
-Rclone to use it, it will never be created automatically.
+- %APPDATA%/rclone/rclone.conf on Windows
+- ~/.config/rclone/rclone.conf on other
+
+The complete logic is as follows: Rclone will look for an existing
+configuration file in any of the following locations, in priority order:
+
+1. rclone.conf (in program directory, where rclone executable is)
+2. %APPDATA%/rclone/rclone.conf (only on Windows)
+3. $XDG_CONFIG_HOME/rclone/rclone.conf (on all systems, including
+ Windows)
+4. ~/.config/rclone/rclone.conf (see below for explanation of ~ symbol)
+5. ~/.rclone.conf
+
+If no existing configuration file is found, then a new one will be
+created in the following location:
+
+- On Windows: Location 2 listed above, except in the unlikely event
+ that APPDATA is not defined, in which case location 4 is used instead.
+- On Unix: Location 3 if XDG_CONFIG_HOME is defined, else location 4.
+- Fallback to location 5 (on all OS), when the rclone directory cannot
+ be created; if additionally no home directory was found, then the
+ path .rclone.conf relative to the current working directory is used
+ as a last resort.
+
+The ~ symbol in paths above represents the home directory of the
+current user on any OS, and its value is determined as follows:
+
+- On Windows: %HOME% if defined, else %USERPROFILE%, or else
+ %HOMEDRIVE%\%HOMEPATH%.
+- On Unix: $HOME if defined, else by looking up current user in
+ OS-specific user database (e.g. passwd file), or else use the result
+ from shell command cd && pwd.
If you run rclone config file you will see where the default location is
for you.
-Use this flag to override the config location, e.g.
-rclone --config=".myconfig" .config.
+Because an existing file rclone.conf in the same directory as the
+rclone executable is always preferred, it is easy to run in "portable"
+mode: download the rclone executable to a writable directory, then
+create an empty file rclone.conf in the same directory.
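The priority-ordered lookup described above amounts to "first existing candidate wins", which can be sketched as a small shell function (first_config and the candidate file names are illustrative, not part of rclone):

```shell
# Return the first path in the argument list that exists as a file.
first_config() {
  for c in "$@"; do
    [ -f "$c" ] && { printf '%s\n' "$c"; return 0; }
  done
  return 1
}

# Demo: when both candidates exist, the earlier (program-directory
# style) one wins, which is what enables "portable" mode.
d=$(mktemp -d)
touch "$d/rclone.conf" "$d/fallback.conf"
first_config "$d/rclone.conf" "$d/fallback.conf"
rm -r "$d"
```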
-If the location is set to empty string "" or the special value
-/notfound, or the os null device represented by value NUL on Windows and
+If the location is set to empty string "" or path to a file with name
+notfound, or the os null device represented by value NUL on Windows and
/dev/null on Unix systems, then rclone will keep the config file in
memory only.
@@ -7209,7 +8110,7 @@ feature does what.
This flag can be useful for debugging and in exceptional circumstances
(e.g. Google Drive limiting the total volume of Server Side Copies to
-100GB/day).
+100 GiB/day).
--dscp VALUE
@@ -7229,6 +8130,8 @@ Running:
would make the priority lower than usual internet flows.
+This option has no effect on Windows (see golang/go#42728).
+
-n, --dry-run
Do a trial run with no permanent changes. Use this to see what rclone
@@ -7477,7 +8380,7 @@ This is the maximum allowable backlog of files in a sync/copy/move
queued for being checked or transferred.
This can be set arbitrarily large. It will only use memory when the
-queue is in use. Note that it will use in the order of N kB of memory
+queue is in use. Note that it will use on the order of N KiB of memory
when the backlog is in use.
Setting this large allows rclone to calculate how many files are pending
@@ -7602,13 +8505,13 @@ size of the file. To calculate the number of download streams Rclone
divides the size of the file by the --multi-thread-cutoff and rounds up,
up to the maximum set with --multi-thread-streams.
-So if --multi-thread-cutoff 250MB and --multi-thread-streams 4 are in
+So if --multi-thread-cutoff 250M and --multi-thread-streams 4 are in
effect (the defaults):
-- 0MB..250MB files will be downloaded with 1 stream
-- 250MB..500MB files will be downloaded with 2 streams
-- 500MB..750MB files will be downloaded with 3 streams
-- 750MB+ files will be downloaded with 4 streams
+- 0..250 MiB files will be downloaded with 1 stream
+- 250..500 MiB files will be downloaded with 2 streams
+- 500..750 MiB files will be downloaded with 3 streams
+- 750+ MiB files will be downloaded with 4 streams
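The stream count for a given file can be sketched as a ceiling division capped at the stream limit (sizes in MiB; a simplified model of the rule above):

```shell
# Streams = ceil(size / cutoff), capped at --multi-thread-streams.
size_mib=600 cutoff=250 max_streams=4
streams=$(( (size_mib + cutoff - 1) / cutoff ))
[ "$streams" -gt "$max_streams" ] && streams=$max_streams
echo "a ${size_mib} MiB file downloads with ${streams} streams"
```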
--no-check-dest
@@ -7899,14 +8802,14 @@ syntax.
--stats-unit=bits|bytes
-By default, data transfer rates will be printed in bytes/second.
+By default, data transfer rates will be printed in bytes per second.
-This option allows the data rate to be printed in bits/second.
+This option allows the data rate to be printed in bits per second.
Data transfer volume will still be reported in bytes.
The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals
-1,048,576 bits/s and not 1,000,000 bits/s.
+1,048,576 bit/s and not 1,000,000 bit/s.
The default is bytes.
@@ -8334,16 +9237,19 @@ make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS
doesn't contain a valid password, and --password-command has not been
supplied.
-Some rclone commands, such as genautocomplete, do not require
-configuration. Nevertheless, rclone will read any configuration file
-found according to the rules described above. If an encrypted
-configuration file is found, this means you will be prompted for
-password (unless using --password-command). To avoid this, you can
-bypass the loading of the configuration file by overriding the location
-with an empty string "" or the special value /notfound, or the os null
-device represented by value NUL on Windows and /dev/null on Unix systems
-(before rclone version 1.55 only this null device alternative was
-supported). E.g. rclone --config="" genautocomplete bash.
+Whenever running commands that may be affected by options in a
+configuration file, rclone will look for an existing file according to
+the rules described above, and load any it finds. If an encrypted file
+is found, this includes decrypting it, with the possible consequence of
+a password prompt. When executing a command line that you know is not
+actually using anything from such a configuration file, you can avoid it
+being loaded by overriding the location, e.g. with one of the documented
+special values for memory-only configuration. Since only backend options
+can be stored in configuration files, this is normally unnecessary for
+commands that do not operate on backends, e.g. genautocomplete. However,
+it will be relevant for commands that do operate on backends in general,
+but are used without referencing a stored remote, e.g. listing local
+filesystem paths, or connection strings: rclone --config="" ls .
Developer options
@@ -8542,6 +9448,9 @@ RCLONE_DRIVE_USE_TRASH=true.
The same parser is used for the options and the environment variables so
they take exactly the same form.
+The options set by environment variables can be seen with the -vv flag,
+e.g. rclone version -vv.
+
Config file
You can set defaults for values in the config file on an individual
@@ -8566,7 +9475,12 @@ For example, to configure an S3 remote named mys3: without a config file
Note that if you want to create a remote using environment variables you
must create the ..._TYPE variable as above.
-Note also that now rclone has connectionstrings, it is probably easier
+Note that you can only set the options of the immediate backend, so
+RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID has no effect if myS3Crypt is a
+crypt remote based on an S3 remote. However, RCLONE_S3_ACCESS_KEY_ID
+will set the access key of all remotes using S3, including myS3Crypt.
+
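The layering rule can be illustrated with exports alone (the remote name is from the note above; the key values are placeholders):

```shell
# myS3Crypt is assumed to be a crypt remote wrapping an S3 remote.
# This has no effect, because crypt itself has no access_key_id option:
export RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID=XXX
# This sets the access key for every S3-based remote, including the
# S3 remote that myS3Crypt wraps:
export RCLONE_S3_ACCESS_KEY_ID=XXX
# A subsequent "rclone lsd myS3Crypt:" would pick up the second value.
```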
+Note also that now rclone has connection strings, it is probably easier
to use those instead which makes the above example
rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:
@@ -8576,17 +9490,22 @@ Precedence
The various different methods of backend configuration are read in this
order and the first one with a value is used.
-- Flag values as supplied on the command line, e.g. --drive-use-trash.
+- Parameters in connection strings, e.g. myRemote,skip_links:
+- Flag values as supplied on the command line, e.g. --skip-links
- Remote specific environment vars, e.g.
- RCLONE_CONFIG_MYREMOTE_USE_TRASH (see above).
-- Backend specific environment vars, e.g. RCLONE_DRIVE_USE_TRASH.
-- Config file, e.g. use_trash = false.
-- Default values, e.g. true - these can't be changed.
+ RCLONE_CONFIG_MYREMOTE_SKIP_LINKS (see above).
+- Backend specific environment vars, e.g. RCLONE_LOCAL_SKIP_LINKS.
+- Backend generic environment vars, e.g. RCLONE_SKIP_LINKS.
+- Config file, e.g. skip_links = true.
+- Default values, e.g. false - these can't be changed.
-So if both --drive-use-trash is supplied on the config line and an
-environment variable RCLONE_DRIVE_USE_TRASH is set, the command line
+So if both --skip-links is supplied on the command line and an
+environment variable RCLONE_LOCAL_SKIP_LINKS is set, the command line
flag will take preference.
+The backend configurations set by environment variables can be seen with
+the -vv flag, e.g. rclone about myRemote: -vv.
+
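A quick way to see the backend precedence in action, using the local backend example above:

```shell
# The env var provides a default for the local backend...
export RCLONE_LOCAL_SKIP_LINKS=true
# ...but an explicit command line flag still wins; guarded so the
# sketch is a no-op where rclone is not installed:
if command -v rclone >/dev/null; then
    rclone ls /tmp --skip-links=false -vv
fi
```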
For non backend configuration the order is as follows:
- Flag values as supplied on the command line, e.g. --stats 5s.
@@ -8602,10 +9521,17 @@ Other environment variables
- HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
- The environment values may be either a complete URL or a
  "host[:port]", in which case the "http" scheme is assumed.
+- USER and LOGNAME values are used as fallbacks for current username.
+ The primary method for looking up username is OS-specific: Windows
+ API on Windows, real user ID in /etc/passwd on Unix systems. In the
+ documentation the current username is simply referred to as $USER.
- RCLONE_CONFIG_DIR - rclone SETS this variable for use in config
files and sub processes to point to the directory holding the config
file.
+The options set by environment variables can be seen with the -vv and
+--log-level=DEBUG flags, e.g. rclone version -vv.
+
CONFIGURING RCLONE ON A REMOTE / HEADLESS MACHINE
@@ -8728,26 +9654,26 @@ Pattern syntax
Rclone matching rules follow a glob style:
- `*` matches any sequence of non-separator (`/`) characters
- `**` matches any sequence of characters including `/` separators
- `?` matches any single non-separator (`/`) character
- `[` [ `!` ] { character-range } `]`
- character class (must be non-empty)
- `{` pattern-list `}`
- pattern alternatives
- c matches character c (c != `*`, `**`, `?`, `\`, `[`, `{`, `}`)
- `\` c matches character c
+ * matches any sequence of non-separator (/) characters
+ ** matches any sequence of characters including / separators
+ ? matches any single non-separator (/) character
+ [ [ ! ] { character-range } ]
+ character class (must be non-empty)
+ { pattern-list }
+ pattern alternatives
+ c matches character c (c != *, **, ?, \, [, {, })
+ \c matches reserved character c (c = *, **, ?, \, [, {, })
character-range:
- c matches character c (c != `\\`, `-`, `]`)
- `\` c matches character c
- lo `-` hi matches character c for lo <= c <= hi
+ c matches character c (c != \, -, ])
+ \c matches reserved character c (c = \, -, ])
+ lo - hi matches character c for lo <= c <= hi
pattern-list:
- pattern { `,` pattern }
- comma-separated (without spaces) patterns
+ pattern { , pattern }
+ comma-separated (without spaces) patterns
character classes (see Go regular expression reference) include:
@@ -9278,17 +10204,17 @@ Other filters
--min-size - Don't transfer any file smaller than this
Controls the minimum size file within the scope of an rclone command.
-Default units are kBytes but abbreviations k, M, or G are valid.
+Default units are KiByte but abbreviations K, M, G, T or P are valid.
-E.g. rclone ls remote: --min-size 50k lists files on remote: of 50kByte
-size or larger.
+E.g. rclone ls remote: --min-size 50k lists files on remote: of 50
+KiByte size or larger.
--max-size - Don't transfer any file larger than this
Controls the maximum size file within the scope of an rclone command.
-Default units are kBytes but abbreviations k, M, or G are valid.
+Default units are KiByte but abbreviations K, M, G, T or P are valid.
-E.g. rclone ls remote: --max-size 1G lists files on remote: of 1GByte
+E.g. rclone ls remote: --max-size 1G lists files on remote: of 1 GiByte
size or smaller.
--max-age - Don't transfer any file older than this
@@ -9343,7 +10269,7 @@ E.g. the scope of rclone sync -i A: B: can be restricted:
rclone --min-size 50k --delete-excluded sync A: B:
-All files on B: which are less than 50 kBytes are deleted because they
+All files on B: which are less than 50 KiByte are deleted because they
are excluded from the rclone sync command.
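Given that danger, it can be worth previewing such a command with --dry-run first (remote names as in the example above):

```shell
# Preview which files --delete-excluded would remove without changing
# anything; guarded so this is a no-op where rclone is not installed:
if command -v rclone >/dev/null; then
    rclone --min-size 50k --delete-excluded --dry-run sync A: B:
fi
```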
--dump filters - dump the filters to the output
@@ -10009,8 +10935,15 @@ This takes the following parameters
- name - name of remote
- parameters - a map of { "key": "value" } pairs
- type - type of the new remote
-- obscure - optional bool - forces obscuring of passwords
-- noObscure - optional bool - forces passwords not to be obscured
+- opt - a dictionary of options to control the configuration
+ - obscure - declare passwords are plain and need obscuring
+ - noObscure - declare passwords are already obscured and don't
+ need obscuring
+ - nonInteractive - don't interact with a user, return questions
+ - continue - continue the config process with an answer
+ - all - ask all the config questions not just the post config ones
+ - state - state to restart with - used with continue
+ - result - result to restart with - used with continue
See the config create command command for more information on the above.
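For example, these parameters could be supplied through the rc command line front end (a sketch; the remote name and keys are placeholders):

```shell
# Create a remote non-interactively via the rc interface, passing the
# opt dictionary as JSON input; guarded so this is a no-op where
# rclone is not installed or no rc server is running:
if command -v rclone >/dev/null; then
    rclone rc --json '{
        "name": "mys3",
        "type": "s3",
        "parameters": {"access_key_id": "XXX", "secret_access_key": "XXX"},
        "opt": {"nonInteractive": true}
    }' config/create
fi
```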
@@ -10081,8 +11014,15 @@ This takes the following parameters
- name - name of remote
- parameters - a map of { "key": "value" } pairs
-- obscure - optional bool - forces obscuring of passwords
-- noObscure - optional bool - forces passwords not to be obscured
+- opt - a dictionary of options to control the configuration
+ - obscure - declare passwords are plain and need obscuring
+ - noObscure - declare passwords are already obscured and don't
+ need obscuring
+ - nonInteractive - don't interact with a user, return questions
+ - continue - continue the config process with an answer
+ - all - ask all the config questions not just the post config ones
+ - state - state to restart with - used with continue
+ - result - result to restart with - used with continue
See the config update command command for more information on the above.
@@ -10253,7 +11193,7 @@ Returns the following values:
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
- "speed": average speed in bytes/sec since start of the group,
+ "speed": average speed in bytes per second since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
"totalTransfers": total number of transfers in the group,
@@ -10266,8 +11206,8 @@ Returns the following values:
"eta": estimated time in seconds until file transfer completion
"name": name of the file,
"percentage": progress of the file transfer in percent,
- "speed": average speed over the whole transfer in bytes/sec,
- "speedAvg": current speed in bytes/sec as an exponentially weighted moving average,
+ "speed": average speed over the whole transfer in bytes per second,
+ "speedAvg": current speed in bytes per second as an exponentially weighted moving average,
"size": size of the file in bytes
}
],
@@ -11313,6 +12253,7 @@ Here is an overview of the major features of each cloud storage system.
SFTP MD5, SHA1 ² Yes Depends No -
SugarSync - No No No -
Tardigrade - Yes No No -
+ Uptobox - No No Yes -
WebDAV MD5, SHA1 ³ Yes ⁴ Depends No -
Yandex Disk MD5 Yes No No R
Zoho WorkDrive - No No No -
@@ -11321,7 +12262,7 @@ Here is an overview of the major features of each cloud storage system.
Notes
¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the
-4MB block SHA256s.
+4 MiB block SHA256s.
² SFTP supports checksums if the same login has shell access and md5sum
or sha1sum as well as echo are in the remote's PATH.
@@ -11622,6 +12563,7 @@ upon backend specific capabilities.
SFTP No No Yes Yes No No Yes No Yes Yes
SugarSync Yes Yes Yes Yes No No Yes Yes No Yes
Tardigrade Yes † No No No No Yes Yes No No No
+ Uptobox No Yes Yes Yes No No No No No No
WebDAV Yes Yes Yes Yes No No Yes ‡ No Yes Yes
Yandex Disk Yes Yes Yes Yes Yes No Yes Yes Yes Yes
Zoho WorkDrive Yes Yes Yes Yes No No No No Yes Yes
@@ -11726,9 +12668,9 @@ These flags are available for every command.
--auto-confirm If enabled, do not request console confirmation.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --bwlimit-file BwTimetable Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16Mi)
+ --bwlimit BwTimetable Bandwidth limit in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
+ --bwlimit-file BwTimetable Bandwidth limit per file in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers.
@@ -11746,7 +12688,8 @@ These flags are available for every command.
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
+ --disable string Disable a comma separated list of features. Use --disable help to see a list.
+ --disable-http2 Disable HTTP/2 in the global transport.
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections. Can be value or names, eg. CS1, LE, DF, AF21.
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
@@ -11788,14 +12731,14 @@ These flags are available for every command.
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-duration duration Maximum duration rclone will transfer data for.
- --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
- --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-check-dest Don't check the destination, copy regardless.
@@ -11845,8 +12788,8 @@ These flags are available for every command.
--stats-one-line Make the stats fit on one line.
--stats-one-line-date Enables --stats-one-line and add current date/time prefix.
--stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100Ki)
--suffix string Suffix to add to changed files.
--suffix-keep-extension Preserve the extension when using --suffix.
--syslog Use Syslog for logging
@@ -11862,7 +12805,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.55.0")
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0")
-v, --verbose count Print lots more stuff (repeat for more)
@@ -11875,15 +12818,15 @@ and may be set in the config file.
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
--acd-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob.
--acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting.
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB). (default 4Mi)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata.
--azureblob-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
@@ -11897,12 +12840,12 @@ and may be set in the config file.
--azureblob-public-access string Public access level of a container: blob, container.
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal.
- --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
+ --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G)
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96Mi)
+ --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
--b2-download-url string Custom endpoint for downloads.
@@ -11913,7 +12856,7 @@ and may be set in the config file.
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200Mi)
--b2-versions Include old versions in directory listings.
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL.
@@ -11926,12 +12869,12 @@ and may be set in the config file.
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point.
--box-token string OAuth Access Token as a JSON blob.
--box-token-url string Token server url.
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB). (default 50Mi)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5Mi)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10Gi)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
@@ -11947,13 +12890,13 @@ and may be set in the config file.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
- --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G)
+ --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks.
--chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5")
--chunker-remote string Remote to chunk/unchunk.
--compress-level int GZIP compression level (-2 to 9). (default -1)
--compress-mode string Compression mode. (default "gzip")
- --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20M)
+ --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20Mi)
--compress-remote string Remote to compress.
-L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
@@ -11968,7 +12911,7 @@ and may be set in the config file.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-auth-url string Auth server URL.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-disable-http2 Disable drive using http2 (default true)
@@ -11998,13 +12941,16 @@ and may be set in the config file.
--drive-token string OAuth Access Token as a JSON blob.
--drive-token-url string Token server url.
--drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
 --drive-use-created-date Use file created date instead of modified date.
--drive-use-shared-date Use date file was shared instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-auth-url string Auth server URL.
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-batch-mode string Upload file batching sync|async|off. (default "sync")
+ --dropbox-batch-size int Max number of files in upload batch.
+ --dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150Mi). (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
@@ -12015,6 +12961,8 @@ and may be set in the config file.
--dropbox-token-url string Token server url.
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
+ --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
@@ -12069,7 +13017,7 @@ and may be set in the config file.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-auth-url string Auth server URL.
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
--hubic-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
@@ -12078,9 +13026,10 @@ and may be set in the config file.
--hubic-token-url string Token server url.
--jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
+ --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
--jottacloud-trashed-only Only show files that are in the trash.
- --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
@@ -12095,16 +13044,16 @@ and may be set in the config file.
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
- --local-zero-size-links Assume the Stat size of links is zero (and read them instead)
+ --local-unicode-normalization Apply unicode NFC normalization to paths and filenames
+ --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (Deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash. (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
- --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3G)
- --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32M)
+ --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
+ --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32Mi)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega.
--mega-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -12113,7 +13062,7 @@ and may be set in the config file.
--mega-user string User name
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-auth-url string Auth server URL.
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10M)
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
@@ -12123,12 +13072,13 @@ and may be set in the config file.
--onedrive-link-password string Set the password for links created by the link command.
--onedrive-link-scope string Set the scope of the links created by the link command. (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command. (default "view")
+ --onedrive-list-chunk int Size of listing chunk. (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive. (default "global")
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs.
--onedrive-token string OAuth Access Token as a JSON blob.
--onedrive-token-url string Token server url.
- --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10M)
+ --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10Mi)
--opendrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password. (obscured)
--opendrive-username string Username
@@ -12143,20 +13093,20 @@ and may be set in the config file.
--premiumizeme-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
- --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4Mi)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
- --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G)
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5Mi)
+ --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -12171,6 +13121,7 @@ and may be set in the config file.
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
+ --s3-no-head-object If set, don't HEAD objects
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
@@ -12185,7 +13136,7 @@ and may be set in the config file.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
- --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint.
--s3-v2-auth If true use v2 authentication.
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
@@ -12198,6 +13149,7 @@ and may be set in the config file.
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-concurrent-reads If set don't use concurrent reads
+ --sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -12219,11 +13171,11 @@ and may be set in the config file.
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods.
--sftp-user string SSH username, leave blank for current username, $USER
- --sharefile-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 64M)
+ --sharefile-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 64Mi)
--sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls.
--sharefile-root-folder-id string ID of the root folder
- --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128M)
+ --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128Mi)
--skip-links Don't warn about skipped symlinks.
--sugarsync-access-key-id string Sugarsync Access Key ID.
--sugarsync-app-id string Sugarsync App ID.
@@ -12242,7 +13194,7 @@ and may be set in the config file.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
@@ -12268,9 +13220,12 @@ and may be set in the config file.
--union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category. (default "ff")
--union-upstreams string List of space separated upstreams.
+ --uptobox-access-token string Your access Token, get it from https://uptobox.com/my_account
+ --uptobox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string This sets the encoding for the backend.
+ --webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-pass string Password. (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.
@@ -12285,12 +13240,499 @@ and may be set in the config file.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
- --zoho-region string Zoho region to connect to. You'll have to use the region you organization is registered in.
+ --zoho-region string Zoho region to connect to.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
-1Fichier
+
+DOCKER VOLUME PLUGIN
+
+
+Introduction
+
+Docker 1.9 added support for creating named volumes via the
+command-line interface and mounting them in containers as a way to
+share data between them. Since Docker 1.10 you can create named
+volumes with Docker Compose by describing them in docker-compose.yml
+files for use by container groups on a single host. As of Docker 1.12
+volumes are supported by Docker Swarm, included with Docker Engine,
+and created from descriptions in swarm compose v3 files for use with
+_swarm stacks_ across multiple cluster nodes.
+
+Docker Volume Plugins augment the default local volume driver
+included in Docker with stateful volumes shared across containers and
+hosts. Unlike local volumes, your data will _not_ be deleted when
+such a volume is removed. Plugins can run managed by the docker
+daemon, as a native system service (under systemd, _sysv_ or
+_upstart_) or as a standalone executable. Rclone can run as a docker
+volume plugin in all these modes. It interacts with the local docker
+daemon via the plugin API and handles mounting of remote file systems
+into docker containers, so it must run on the same host as the docker
+daemon or on every Swarm node.
+
+
+Getting started
+
+In the first example we will use the SFTP rclone volume with Docker
+engine on a standalone Ubuntu machine.
+
+Start by installing Docker on the host.
+
+The _FUSE_ driver is a prerequisite for rclone mounting and should be
+installed on the host:
+
+ sudo apt-get -y install fuse
+
+Create the two directories required by the rclone docker plugin:
+
+ sudo mkdir -p /var/lib/docker-plugins/rclone/config
+ sudo mkdir -p /var/lib/docker-plugins/rclone/cache
+
+Install the managed rclone docker plugin:
+
+ docker plugin install rclone/docker-volume-rclone args="-v" --alias rclone --grant-all-permissions
+ docker plugin list
+
+Create your SFTP volume:
+
+ docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
+
+Note that since all options are static, you don't even have to run
+rclone config or create the rclone.conf file (but the config directory
+should still be present). In the simplest case you can use localhost as
+_hostname_ and your SSH credentials as _username_ and _password_. You
+can also change the remote path to your home directory on the host, for
+example -o path=/home/username.
+
+Time to create a test container and mount the volume into it:
+
+ docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash
+
+If all goes well, you will enter the new container with the mounted
+SFTP remote as the working directory. You can type ls to list the
+mounted directory or otherwise play with it. Type exit when you are
+done. The container will stop but the volume will stay, ready to be
+reused. When it's not needed anymore, remove it:
+
+ docker volume list
+ docker volume remove firstvolume
+
+Now let us try SOMETHING MORE ELABORATE: a Google Drive volume on a
+multi-node Docker Swarm.
+
+You should start by installing Docker and FUSE, creating the plugin
+directories and installing the rclone plugin on _every_ swarm node.
+Then set up the Swarm.
+
+Google Drive volumes need an access token which can be set up via a
+web browser and will be periodically renewed by rclone. The managed
+plugin cannot run a browser, so we will use a technique similar to
+the rclone setup on a headless box.
+
+Run rclone config on _another_ machine equipped with a _web browser_
+and a graphical user interface. Create the Google Drive remote. When
+done, transfer the resulting rclone.conf to the Swarm cluster and
+save it as /var/lib/docker-plugins/rclone/config/rclone.conf on
+_every_ node. By default this location is accessible only to the root
+user, so you will need appropriate privileges. The resulting config
+will look like this:
+
+ [gdrive]
+ type = drive
+ scope = drive
+ drive_id = 1234567...
+ root_folder_id = 0Abcd...
+ token = {"access_token":...}
+
+Now create a file named example.yml with a swarm stack description
+like this:
+
+ version: '3'
+ services:
+ heimdall:
+ image: linuxserver/heimdall:latest
+ ports: [8080:80]
+ volumes: [configdata:/config]
+ volumes:
+ configdata:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:heimdall'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ poll_interval: 0
+
+and run the stack:
+
+ docker stack deploy example -c ./example.yml
+
+After a few seconds docker will spread the parsed stack description
+over the cluster, create the example_heimdall service on port _8080_,
+run service containers on one or more cluster nodes and request the
+example_configdata volume from the rclone plugins on the node hosts.
+You can use the following commands to confirm the results:
+
+ docker service ls
+ docker service ps example_heimdall
+ docker volume ls
+
+Point your browser to http://cluster.host.address:8080 and play with
+the service. Stop it with docker stack remove example when you are
+done. Note that the example_configdata volume(s) created on demand at
+the cluster nodes will not be automatically removed together with the
+stack but will stay for future reuse. You can remove them manually by
+invoking the docker volume remove example_configdata command on every
+node.
+
+
+Creating Volumes via CLI
+
+Volumes can be created with docker volume create. Here are a few
+examples:
+
+ docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
+ docker volume create vol2 -d rclone -o remote=:tardigrade,access_grant=xxx:heimdall
+ docker volume create vol3 -d rclone -o type=tardigrade -o path=heimdall -o tardigrade-access-grant=xxx -o poll-interval=0
+
+Note the -d rclone flag that tells docker to request the volume from
+the rclone driver. This works even if you installed the managed
+driver by its full name rclone/docker-volume-rclone because you
+provided the --alias rclone option.
+
+Volumes can be inspected as follows:
+
+ docker volume list
+ docker volume inspect vol1
+
+
+Volume Configuration
+
+Rclone flags and volume options are set via the -o flag to the
+docker volume create command. They include backend-specific
+parameters as well as mount and _VFS_ options. There are also a few
+special -o options: remote, fs, type, path, mount-type and persist.
+
+remote determines an existing remote name from the config file, with
+a trailing colon and optionally with a remote path. See the full
+syntax in the rclone documentation. This option can be aliased as fs
+to prevent confusion with the _remote_ parameter of such backends as
+_crypt_ or _alias_.
+
+The remote=:backend:dir/subdir syntax can be used to create on-the-fly
+(config-less) remotes, while the type and path options provide a simpler
+alternative for this. Using two split options
+
+ -o type=backend -o path=dir/subdir
+
+is equivalent to the combined syntax
+
+ -o remote=:backend:dir/subdir
+
+but is arguably easier to parameterize in scripts. The path part is
+optional.
+
+Mount and VFS options as well as backend parameters are named like their
+twin command-line flags without the -- CLI prefix. Optionally you can
+use underscores instead of dashes in option names. For example,
+--vfs-cache-mode full becomes -o vfs-cache-mode=full or
+-o vfs_cache_mode=full. Boolean CLI flags without value will gain the
+true value, e.g. --allow-other becomes -o allow-other=true or
+-o allow_other=true.
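The flag-to-option conversion described above is purely mechanical, so it can be sketched as a small helper. This is an illustrative sketch only, not part of rclone; the function name is an invention for this example.

```python
def flag_to_volume_opt(flag, value=None):
    """Convert an rclone CLI flag to a docker volume '-o' option value.

    Drops the leading '--' and keeps the dash-separated name
    (underscores would work equally well); a boolean flag given
    without a value gains the value 'true', as described above.
    """
    name = flag.lstrip("-")          # "--vfs-cache-mode" -> "vfs-cache-mode"
    if value is None:                # boolean flag without a value
        value = "true"
    return f"{name}={value}"

# "--vfs-cache-mode full" becomes "-o vfs-cache-mode=full"
print(flag_to_volume_opt("--vfs-cache-mode", "full"))  # vfs-cache-mode=full
# "--allow-other" becomes "-o allow-other=true"
print(flag_to_volume_opt("--allow-other"))             # allow-other=true
```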
+
+Please note that you can provide parameters only for the backend
+immediately referenced by the backend type of the mounted remote. If
+this is a wrapping backend like _alias, chunker or crypt_, you cannot
+provide options for the referred-to remote or backend. This
+limitation is imposed by the rclone connection string parser. The
+only workaround is to feed the plugin an rclone.conf file or to
+configure the plugin arguments (see below).
+
+
+Special Volume Options
+
+mount-type determines the mount method and in general can be one of:
+mount, cmount, or mount2. This can be aliased as mount_type. Note
+that the managed rclone docker plugin currently does not support the
+cmount method, and mount2 is rarely needed. This option defaults to
+the first found method, which is usually mount, so you generally
+won't need it.
+
+persist is a reserved boolean (true/false) option. In the future it
+will allow persisting on-the-fly remotes in the plugin's rclone.conf
+file.
+
+
+Connection Strings
+
+The remote value can be extended with connection strings as an
+alternative way to supply backend parameters. This is equivalent to
+the -o backend options with one _syntactic difference_. Inside a
+connection string the backend prefix must be dropped from parameter
+names, but in the -o param=value array it must be present. For
+instance, compare the following option array
+
+ -o remote=:sftp:/home -o sftp-host=localhost
+
+with equivalent connection string:
+
+ -o remote=:sftp,host=localhost:/home
+
+This difference exists because flag options -o key=val include not
+only backend parameters but also mount/VFS flags and possibly other
+settings. It also allows rclone to distinguish the remote option from
+crypt-remote (or similarly named backend parameters) and arguably
+simplifies scripting due to clearer value substitution.
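The prefix-dropping rule can be made concrete with a small sketch that builds a connection string from `-o` style parameters. This helper is hypothetical (not part of rclone) and only illustrates the transformation described above.

```python
def to_connection_string(backend, backend_opts, path=""):
    """Build an rclone connection string from '-o' style options.

    backend_opts maps '-o' parameter names, which may carry the
    backend prefix (e.g. 'sftp-host'), to their values; inside the
    connection string the prefix is dropped, as explained above.
    """
    prefix = backend + "-"
    pairs = []
    for key, val in backend_opts.items():
        if key.startswith(prefix):
            key = key[len(prefix):]   # 'sftp-host' -> 'host'
        pairs.append(f"{key}={val}")
    params = ("," + ",".join(pairs)) if pairs else ""
    return f":{backend}{params}:{path}"

# '-o remote=:sftp:/home -o sftp-host=localhost' is equivalent to:
print(to_connection_string("sftp", {"sftp-host": "localhost"}, "/home"))
# :sftp,host=localhost:/home
```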
+
+
+Using with Swarm or Compose
+
+Both _Docker Swarm_ and _Docker Compose_ use YAML-formatted text
+files to describe groups (stacks) of containers, their properties,
+networks and volumes. _Compose_ uses the compose v2 format, while
+_Swarm_ uses the compose v3 format. They are mostly similar; the
+differences are explained in the docker documentation.
+
+Volumes are described by the children of the top-level volumes: node.
+Each of them should be named after its volume and have at least two
+elements, the self-explanatory driver: rclone value and the driver_opts:
+structure playing the same role as -o key=val CLI flags:
+
+ volumes:
+ volume_name_1:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ token: '{"type": "borrower", "expires": "2021-12-31"}'
+ poll_interval: 0
+
+Notice a few important details:
+
+- YAML prefers _ in option names instead of -.
+- YAML treats single and double quotes interchangeably. Simple
+  strings and integers can be left unquoted.
+- Boolean values must be quoted like 'true' or "false" because these
+  two words are reserved by YAML.
+- The filesystem string is keyed with remote (or with fs). Normally
+  you can omit quotes here, but if the string ends with a colon, you
+  MUST quote it like remote: "storage_box:".
+- YAML is picky about surrounding braces in values as this is in
+  fact another syntax for key/value mappings. For example, JSON
+  access tokens usually contain double quotes and surrounding
+  braces, so you must put them in single quotes.
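Putting these quoting rules together, a volume entry might look like the following sketch (the remote name, token and values are placeholders for this example, not taken from a real config):

```yaml
volumes:
  example_vol:
    driver: rclone
    driver_opts:
      remote: "storage_box:"       # ends with a colon, so it must be quoted
      allow_other: 'true'          # booleans are reserved words in YAML
      vfs_cache_mode: full         # simple strings may stay unquoted
      token: '{"access_token": "xyz"}'  # braces and double quotes go in single quotes
      poll_interval: 0             # integers may stay unquoted
```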
+
+
+Installing as Managed Plugin
+
+Docker daemon can install plugins from an image registry and run them
+managed. We maintain the docker-volume-rclone plugin image on Docker
+Hub.
+
+The plugin requires the presence of two directories on the host
+before it can be installed. Note that the plugin will NOT create them
+automatically. By default they must exist on the host at the
+following locations (though you can tweak the paths):
+
+- /var/lib/docker-plugins/rclone/config is reserved for the
+  rclone.conf config file and MUST exist even if it's empty and the
+  config file is not present.
+- /var/lib/docker-plugins/rclone/cache holds the plugin state file
+  as well as optional VFS caches.
+
+You can install the managed plugin with default settings as follows:
+
+ docker plugin install rclone/docker-volume-rclone:latest --grant-all-permissions --alias rclone
+
+The managed plugin is in fact a special container running in a
+namespace separate from normal docker containers. Inside it runs the
+rclone serve docker command. The config and cache directories are
+bind-mounted into the container at start. The docker daemon connects
+to a unix socket created by the command inside the container. The
+command creates on-demand remote mounts right inside the container,
+then the docker machinery propagates them through kernel mount
+namespaces and bind-mounts them into the requesting user containers.
+
+You can tweak a few plugin settings after installation when it's
+disabled (not in use), for instance:
+
+ docker plugin disable rclone
+ docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
+ docker plugin enable rclone
+ docker plugin inspect rclone
+
+Note that if docker refuses to disable the plugin, you should find
+and remove all active volumes connected with it as well as the
+containers and swarm services that use them. This is rather tedious,
+so please plan carefully in advance.
+
+You can tweak the following settings: args, config, cache, and
+RCLONE_VERBOSE. It's _your_ task to keep plugin settings in sync across
+swarm cluster nodes.
+
+args sets command-line arguments for the rclone serve docker command
+(_none_ by default). Arguments should be separated by spaces, so you
+will normally want to put them in quotes on the docker plugin set
+command line. Both serve docker flags and generic rclone flags are
+supported, including backend parameters that will be used as defaults
+for volume creation. Note that the plugin will fail (due to this
+docker bug) if the args value is empty. Use e.g. args="-v" as a
+workaround.
+
+config=/host/dir sets an alternative host location for the config
+directory. The plugin will look for rclone.conf here. It's not an
+error if the config file is not present, but the directory must
+exist. Please note that the plugin can periodically rewrite the
+config file, for example when it renews storage access tokens. Keep
+this in mind and try to avoid races between the plugin and other
+instances of rclone on the host that might try to change the config
+simultaneously, resulting in a corrupted rclone.conf. You can also
+put stuff like private key files for SFTP remotes in this directory.
+Just note that it's bind-mounted inside the plugin container at the
+predefined path /data/config. For example, if your key file is named
+sftp-box1.key on the host, the corresponding volume config option
+should read -o sftp-key-file=/data/config/sftp-box1.key.
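The translation from a file in the host config directory to the path the plugin sees follows directly from the /data/config bind-mount described above. A minimal sketch, assuming the default bind-mount target (the helper name is an invention for this example):

```python
import os

# Fixed bind-mount target of the config directory inside the plugin
# container, as described above.
PLUGIN_CONFIG_DIR = "/data/config"

def plugin_config_path(host_path):
    """Translate a file placed in the host config directory to the
    path the plugin container sees through the bind-mount."""
    return os.path.join(PLUGIN_CONFIG_DIR, os.path.basename(host_path))

# sftp-box1.key in the host config dir is seen by the plugin as:
print(plugin_config_path("/var/lib/docker-plugins/rclone/config/sftp-box1.key"))
# /data/config/sftp-box1.key
```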
+
+cache=/host/dir sets an alternative host location for the _cache_
+directory. The plugin will keep VFS caches here. It will also create
+and maintain the docker-plugin.state file in this directory. When the
+plugin is restarted or reinstalled, it will look in this file to
+recreate any volumes that existed previously. However, they will not
+be re-mounted into consuming containers after a restart. Usually this
+is not a problem as the docker daemon will normally restart affected
+user containers after failures, daemon restarts or host reboots.
+
+RCLONE_VERBOSE sets the plugin verbosity from 0 (errors only, the
+default) to 2 (debugging). Verbosity can also be tweaked via
+args="-v [-v] ...". Since arguments are more generic, you will rarely
+need this setting. The plugin output by default feeds the docker
+daemon log on the local host. Log entries are reflected as _errors_
+in the docker log but retain their actual level, assigned by rclone,
+in the encapsulated message string.
+
+You can set custom plugin options right when you install it, _in one
+go_:
+
+ docker plugin remove rclone
+ docker plugin install rclone/docker-volume-rclone:latest \
+ --alias rclone --grant-all-permissions \
+ args="-v --allow-other" config=/etc/rclone
+ docker plugin inspect rclone
+
+
+Healthchecks
+
+The docker plugin volume protocol doesn't provide a way for plugins
+to inform the docker daemon that a volume is (un-)available. As a
+workaround you can set up a healthcheck to verify that the mount is
+responding, for example:
+
+ services:
+ my_service:
+ image: my_image
+ healthcheck:
+ test: ls /path/to/rclone/mount || exit 1
+ interval: 1m
+ timeout: 15s
+ retries: 3
+ start_period: 15s
+
+
+Running Plugin under Systemd
+
+In most cases you should prefer managed mode. Moreover, macOS and
+Windows do not support native Docker plugins. Please use managed mode
+on these systems. Proceed further only if you are on Linux.
+
+First, install rclone. You can just run it (type rclone serve docker
+and hit enter) as a test.
+
+Install _FUSE_:
+
+ sudo apt-get -y install fuse
+
+Download two systemd configuration files: docker-volume-rclone.service
+and docker-volume-rclone.socket.
+
+Put them to the /etc/systemd/system/ directory:
+
+    cp docker-volume-rclone.service /etc/systemd/system/
+    cp docker-volume-rclone.socket /etc/systemd/system/
+
+Please note that all commands in this section must be run as _root_;
+we omit the sudo prefix for brevity. Now create the directories
+required by the service:
+
+ mkdir -p /var/lib/docker-volumes/rclone
+ mkdir -p /var/lib/docker-plugins/rclone/config
+ mkdir -p /var/lib/docker-plugins/rclone/cache
+
+Run the docker plugin service in the socket-activated mode:
+
+ systemctl daemon-reload
+ systemctl start docker-volume-rclone.service
+ systemctl enable docker-volume-rclone.socket
+ systemctl start docker-volume-rclone.socket
+ systemctl restart docker
+
+Or run the service directly:
+
+- run systemctl daemon-reload to let systemd pick up the new config
+- run systemctl enable docker-volume-rclone.service to make the new
+  service start automatically when you power on your machine
+- run systemctl start docker-volume-rclone.service to start the
+  service now
+- run systemctl restart docker to restart the docker daemon and let
+  it detect the new plugin socket. Note that this step is not needed
+  in managed mode where docker knows about plugin state changes.
+
+The two methods are equivalent from the user's perspective, but I
+personally prefer socket activation.
+
+
+Troubleshooting
+
+You can see managed plugin settings with
+
+ docker plugin list
+ docker plugin inspect rclone
+
+Note that docker (including the latest 20.10.7) will not show the
+actual values of args, just the defaults.
+
+Use journalctl --unit docker to see the managed plugin output as part
+of the docker daemon log. Note that docker reflects plugin lines as
+_errors_ but their actual level can be seen from the encapsulated
+message string.
+
+You will usually install the latest version of the managed plugin.
+Use the following commands to print the actual installed version:
+
+ PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
+ sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
+
+You can even use runc to run a shell inside the plugin container:
+
+ sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
+
+You can also use curl to check the plugin socket connectivity:
+
+ docker plugin list --no-trunc
+ PLUGID=123abc...
+ sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
+
+though this is rarely needed.
+
+Finally I'd like to mention a _caveat with updating volume settings_.
+Docker CLI does not have a dedicated command like docker volume
+update. It may be tempting to invoke docker volume create with
+updated options on an existing volume, but there is a gotcha: the
+command will do nothing, and it won't even return an error. I hope
+that docker maintainers will fix this some day. In the meantime be
+aware that you must remove your volume before recreating it with new
+settings:
+
+ docker volume remove my_vol
+ docker volume create my_vol -d rclone -o opt1=new_val1 ...
+
+and verify that the settings did update:
+
+ docker volume list
+ docker volume inspect my_vol
+
+If docker refuses to remove the volume, you should find containers or
+swarm services that use it and stop them first.
+
+
+
+1FICHIER
+
This is a backend for the 1fichier cloud storage service. Note that a
Premium subscription is required to use the API.
@@ -12422,6 +13864,30 @@ If you want to download a shared folder, add this parameter
- Type: string
- Default: ""
+--fichier-file-password
+
+If you want to download a shared file that is password protected, add
+this parameter
+
+NB Input to this must be obscured - see rclone obscure.
+
+- Config: file_password
+- Env Var: RCLONE_FICHIER_FILE_PASSWORD
+- Type: string
+- Default: ""
+
+--fichier-folder-password
+
+If you want to list the files in a shared folder that is password
+protected, add this parameter
+
+NB Input to this must be obscured - see rclone obscure.
+
+- Config: folder_password
+- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
+- Type: string
+- Default: ""
+
--fichier-encoding
This sets the encoding for the backend.
@@ -12443,7 +13909,9 @@ policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Alias
+
+ALIAS
+
The alias remote provides a new name for another remote.
@@ -12540,7 +14008,9 @@ Remote or path to alias. Can be "myremote:path/to/dir",
- Default: ""
-Amazon Drive
+
+AMAZON DRIVE
+
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.
@@ -12756,17 +14226,18 @@ Checkpoint for internal polling (debug).
--acd-upload-wait-per-gb
-Additional time per GB to wait after a failed complete upload to see if
+Additional time per GiB to wait after a failed complete upload to see if
it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This happens
-sometimes for files over 1GB in size and nearly every time for files
-bigger than 10GB. This parameter controls the time rclone waits for the
-file to appear.
+sometimes for files over 1 GiB in size and nearly every time for files
+bigger than 10 GiB. This parameter controls the time rclone waits for
+the file to appear.
-The default value for this parameter is 3 minutes per GB, so by default
-it will wait 3 minutes for every GB uploaded to see if the file appears.
+The default value for this parameter is 3 minutes per GiB, so by default
+it will wait 3 minutes for every GiB uploaded to see if the file
+appears.
You can disable this feature by setting it to 0. This may cause conflict
errors as rclone retries the failed upload but the file will most likely
@@ -12789,7 +14260,7 @@ Files >= this size will be downloaded via their tempLink.
Files this size or more will be downloaded via their "tempLink". This is
to work around a problem with Amazon Drive which blocks downloads of
-files bigger than about 10GB. The default for this is 9GB which
+files bigger than about 10 GiB. The default for this is 9 GiB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink"
@@ -12799,7 +14270,7 @@ underlying S3 storage.
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
-- Default: 9G
+- Default: 9Gi
--acd-encoding
@@ -12826,8 +14297,8 @@ Amazon Drive has an internal limit of file sizes that can be uploaded to
the service. This limit is not officially published, but all files
larger than this will fail.
-At the time of writing (Jan 2016) is in the area of 50GB per file. This
-means that larger files are likely to fail.
+At the time of writing (Jan 2016) is in the area of 50 GiB per file.
+This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
@@ -12843,7 +14314,9 @@ remote.
See List of backends that do not support rclone about See rclone about
-Amazon S3 Storage Providers
+
+AMAZON S3 STORAGE PROVIDERS
+
The S3 backend can be used with a number of different providers:
@@ -12855,6 +14328,7 @@ The S3 backend can be used with a number of different providers:
- IBM COS S3
- Minio
- Scaleway
+- SeaweedFS
- StackPath
- Tencent Cloud Object Storage (COS)
- Wasabi
@@ -13163,7 +14637,7 @@ the rclone docs for more details.
--fast-list trades off API transactions for memory use. As a rough guide
rclone uses 1k of memory per object stored, so using --fast-list on a
-sync of a million objects will use roughly 1 GB of RAM.
+sync of a million objects will use roughly 1 GiB of RAM.
If you are only copying a small number of files into a big repository
then using --no-traverse is a good idea. This finds objects directly
@@ -13241,14 +14715,14 @@ work with the SDK properly:
Multipart uploads
rclone supports multipart uploads with S3 which means that it can upload
-files bigger than 5GB.
+files bigger than 5 GiB.
Note that files uploaded _both_ with multipart upload _and_ through
crypt remotes do not have MD5 sums.
rclone switches from single part uploads to multipart uploads at the
-point specified by --s3-upload-cutoff. This can be a maximum of 5GB and
-a minimum of 0 (ie always upload multipart files).
+point specified by --s3-upload-cutoff. This can be a maximum of 5 GiB
+and a minimum of 0 (ie always upload multipart files).
The chunk sizes used in the multipart upload are specified by
--s3-chunk-size and the number of chunks uploaded concurrently is
@@ -13388,7 +14862,7 @@ Standard Options
Here are the standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, Digital Ocean,
-Dreamhost, IBM COS, Minio, and Tencent COS).
+Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
--s3-provider
@@ -13417,6 +14891,8 @@ Choose your S3 provider.
- Netease Object Storage (NOS)
- "Scaleway"
- Scaleway Object Storage
+ - "SeaweedFS"
+ - SeaweedFS S3
- "StackPath"
- StackPath Object Storage
- "TencentCOS"
@@ -13732,6 +15208,10 @@ Endpoint for OSS API.
- Type: string
- Default: ""
- Examples:
+ - "oss-accelerate.aliyuncs.com"
+ - Global Accelerate
+ - "oss-accelerate-overseas.aliyuncs.com"
+ - Global Accelerate (outside mainland China)
- "oss-cn-hangzhou.aliyuncs.com"
- East China 1 (Hangzhou)
- "oss-cn-shanghai.aliyuncs.com"
@@ -13743,9 +15223,17 @@ Endpoint for OSS API.
- "oss-cn-zhangjiakou.aliyuncs.com"
- North China 3 (Zhangjiakou)
- "oss-cn-huhehaote.aliyuncs.com"
- - North China 5 (Huhehaote)
+ - North China 5 (Hohhot)
+ - "oss-cn-wulanchabu.aliyuncs.com"
+ - North China 6 (Ulanqab)
- "oss-cn-shenzhen.aliyuncs.com"
- South China 1 (Shenzhen)
+ - "oss-cn-heyuan.aliyuncs.com"
+ - South China 2 (Heyuan)
+ - "oss-cn-guangzhou.aliyuncs.com"
+ - South China 3 (Guangzhou)
+ - "oss-cn-chengdu.aliyuncs.com"
+ - West China 1 (Chengdu)
- "oss-cn-hongkong.aliyuncs.com"
- Hong Kong (Hong Kong)
- "oss-us-west-1.aliyuncs.com"
@@ -13866,6 +15354,8 @@ Endpoint for S3 API. Required when using an S3 clone.
- Digital Ocean Spaces Amsterdam 3
- "sgp1.digitaloceanspaces.com"
- Digital Ocean Spaces Singapore 1
+ - "localhost:8333"
+ - SeaweedFS S3 localhost
- "s3.wasabisys.com"
- Wasabi US East endpoint
- "s3.us-west-1.wasabisys.com"
@@ -14196,7 +15686,7 @@ Advanced Options
Here are the advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, Digital Ocean,
-Dreamhost, IBM COS, Minio, and Tencent COS).
+Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
--s3-bucket-acl
@@ -14285,12 +15775,12 @@ sse_customer_key provided.
Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size. The
-minimum is 0 and the maximum is 5GB.
+minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
--s3-chunk-size
@@ -14311,15 +15801,15 @@ Rclone will automatically increase the chunk size when uploading a large
file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured chunk_size. Since
-the default chunk size is 5MB and there can be at most 10,000 chunks,
+the default chunk size is 5 MiB and there can be at most 10,000 chunks,
this means that by default the maximum size of a file you can stream
-upload is 48GB. If you wish to stream upload larger files then you will
-need to increase chunk_size.
+upload is 48 GiB. If you wish to stream upload larger files then you
+will need to increase chunk_size.
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
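A quick sketch of picking a chunk size for a larger stream upload: divide the target file size by the 10,000 part limit and round up (the 100 GiB target below is just an example):

```shell
# Minimum chunk size (in MiB) to stream-upload a file of a given
# size within the 10,000 part limit
target_gib=100
parts=10000
chunk_mib=$(( (target_gib * 1024 + parts - 1) / parts ))
echo "use --s3-chunk-size ${chunk_mib}M"   # 11 MiB for a 100 GiB stream
```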
--s3-max-upload-parts
@@ -14346,12 +15836,12 @@ Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4.656G
+- Default: 4.656Gi
--s3-disable-checksum
@@ -14550,6 +16040,15 @@ very small even with this flag.
- Type: bool
- Default: false
+--s3-no-head-object
+
+If set, don't HEAD objects
+
+- Config: no_head_object
+- Env Var: RCLONE_S3_NO_HEAD_OBJECT
+- Type: bool
+- Default: false
+
--s3-encoding
This sets the encoding for the backend.
@@ -14742,6 +16241,7 @@ Then use it as normal with the name of the public bucket, e.g.
You will be able to list and copy data but not upload it.
+
Ceph
Ceph is an open source unified, distributed storage system designed for
@@ -14793,6 +16293,7 @@ removed).
Because this is a json dump, it is encoding the / as \/, so if you use
the secret key as xxxxxx/xxxx it will work fine.
+
Dreamhost
Dreamhost DreamObjects is an object storage system based on CEPH.
@@ -14814,6 +16315,7 @@ in your config:
server_side_encryption =
storage_class =
+
DigitalOcean Spaces
Spaces is an S3-interoperable object storage service from cloud provider
@@ -14863,6 +16365,7 @@ example:
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
+
IBM COS (S3)
Information stored with IBM Cloud Object Storage is encrypted and
@@ -15034,6 +16537,7 @@ To configure access to IBM COS S3, follow the steps below:
6) Delete a file on remote.
rclone delete IBM-COS-XREGION:newbucket/file.txt
+
Minio
Minio is an object storage server built for cloud application developers
@@ -15095,6 +16599,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files minio:bucket
+
Scaleway
Scaleway The Object Storage platform allows you to store anything from
@@ -15118,6 +16623,53 @@ rclone like this:
server_side_encryption =
storage_class =
+
+SeaweedFS
+
+SeaweedFS is a distributed storage system for blobs, objects, files, and
+data lake, with O(1) disk seek and a scalable file metadata store. It
+has an S3 compatible object storage interface.
+
+Assuming SeaweedFS is configured with weed shell as follows:
+
+ > s3.bucket.create -name foo
+ > s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
+ {
+ "identities": [
+ {
+ "name": "me",
+ "credentials": [
+ {
+ "accessKey": "any",
+ "secretKey": "any"
+ }
+ ],
+ "actions": [
+ "Read:foo",
+ "Write:foo",
+ "List:foo",
+ "Tagging:foo",
+ "Admin:foo"
+ ]
+ }
+ ]
+ }
+
+To use rclone with SeaweedFS, the above configuration should end up
+with something like this in your config:
+
+ [seaweedfs_s3]
+ type = s3
+ provider = SeaweedFS
+ access_key_id = any
+ secret_access_key = any
+ endpoint = localhost:8333
+
+So once set up, for example to copy files into a bucket
+
+ rclone copy /path/to/files seaweedfs_s3:foo
+
+
Wasabi
Wasabi is a cloud-based object storage service for a broad range of
@@ -15227,6 +16779,7 @@ This will leave the config file looking like this.
server_side_encryption =
storage_class =
+
Alibaba OSS
Here is an example of making an Alibaba Cloud (Aliyun) OSS
@@ -15335,6 +16888,7 @@ This will guide you through an interactive setup process.
d) Delete this remote
y/e/d> y
+
Tencent COS
Tencent Cloud Object Storage (COS) is a distributed storage service
@@ -15457,12 +17011,14 @@ To configure access to Tencent COS, follow the steps below:
==== ====
cos s3
+
Netease NOS
For Netease NOS configure as per the configurator rclone config setting
the provider Netease. This will automatically set
force_path_style = false which is necessary for it to run properly.
+
Limitations
rclone about is not supported by the S3 backend. Backends without this
@@ -15472,7 +17028,9 @@ mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Backblaze B2
+
+BACKBLAZE B2
+
B2 is Backblaze's cloud storage system.
@@ -15619,9 +17177,10 @@ hardware, how big the files are, how much you want to load your
computer, etc. The default of --transfers 4 is definitely too low for
Backblaze B2 though.
-Note that uploading big files (bigger than 200 MB by default) will use a
-96 MB RAM buffer by default. There can be at most --transfers of these
-in use at any moment, so this sets the upper limit on the memory used.
+Note that uploading big files (bigger than 200 MiB by default) will use
+a 96 MiB RAM buffer by default. There can be at most --transfers of
+these in use at any moment, so this sets the upper limit on the memory
+used.
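Putting numbers on that upper limit, assuming the default --transfers 4 and the default 96 MiB chunk size:

```shell
# Upper bound on RAM used by large-file uploads to B2:
# one chunk-sized buffer per transfer
transfers=4       # rclone default
chunk_mib=96      # --b2-chunk-size default, in MiB
buffer_mib=$((transfers * chunk_mib))
echo "${buffer_mib} MiB"   # 384 MiB with the defaults
```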
Versions
@@ -15634,10 +17193,6 @@ remove the file instead of hiding it.
Old versions of files, where available, are visible using the
--b2-versions flag.
-NB Note that --b2-versions does not work with crypt at the moment #1627.
-Using --backup-dir with rclone is the recommended way of working around
-this.
-
If you wish to remove all the old versions then you can use the
rclone cleanup remote:bucket command which will delete all the old
versions of files, leaving the current ones intact. You can also supply
@@ -15841,12 +17396,12 @@ Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
-This value should be set no larger than 4.657GiB (== 5GB).
+This value should be set no larger than 4.657 GiB (== 5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
--b2-copy-cutoff
@@ -15855,12 +17410,12 @@ Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
-The minimum is 0 and the maximum is 4.6GB.
+The minimum is 0 and the maximum is 4.6 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4G
+- Default: 4Gi
--b2-chunk-size
@@ -15874,7 +17429,7 @@ size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 96M
+- Default: 96Mi
--b2-disable-checksum
@@ -15960,7 +17515,9 @@ mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Box
+
+BOX
+
Paths are specified as remote:path
@@ -16175,9 +17732,9 @@ strings.
Transfers
-For files above 50MB rclone will use a chunked transfer. Rclone will
+For files above 50 MiB rclone will use a chunked transfer. Rclone will
upload up to --transfers chunks at the same time (shared among all the
-multipart uploads). Chunks are buffered in memory and are normally 8MB
+multipart uploads). Chunks are buffered in memory and are normally 8 MiB
so increasing --transfers will increase memory use.
Deleting files
@@ -16307,12 +17864,12 @@ Fill in for rclone to use a non root folder as its starting point.
--box-upload-cutoff
-Cutoff for switching to multipart upload (>= 50MB).
+Cutoff for switching to multipart upload (>= 50 MiB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 50M
+- Default: 50Mi
--box-commit-retries
@@ -16352,7 +17909,9 @@ mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Cache (BETA)
+
+CACHE (DEPRECATED)
+
The cache remote wraps another existing remote and stores file structure
and its data for long running tasks like rclone mount.
@@ -16418,11 +17977,11 @@ This will guide you through an interactive setup process:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
- 1 / 1MB
- \ "1m"
- 2 / 5 MB
+ 1 / 1 MiB
+ \ "1M"
+ 2 / 5 MiB
\ "5M"
- 3 / 10 MB
+ 3 / 10 MiB
\ "10M"
chunk_size> 2
How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
@@ -16439,11 +17998,11 @@ This will guide you through an interactive setup process:
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
- 1 / 500 MB
+ 1 / 500 MiB
\ "500M"
- 2 / 1 GB
+ 2 / 1 GiB
\ "1G"
- 3 / 10 GB
+ 3 / 10 GiB
\ "10G"
chunk_total_size> 3
Remote config
@@ -16724,14 +18283,14 @@ be cleared or unexpected EOF errors will occur.
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
- Examples:
- - "1m"
- - 1MB
+ - "1M"
+ - 1 MiB
- "5M"
- - 5 MB
+ - 5 MiB
- "10M"
- - 10 MB
+ - 10 MiB
--cache-info-age
@@ -16762,14 +18321,14 @@ chunks until it goes under this value.
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
-- Default: 10G
+- Default: 10Gi
- Examples:
- "500M"
- - 500 MB
+ - 500 MiB
- "1G"
- - 1 GB
+ - 1 GiB
- "10G"
- - 10 GB
+ - 10 GiB
Advanced Options
@@ -17005,7 +18564,9 @@ Print stats on the cache backend in JSON format.
rclone backend stats remote: [options] [+]
-Chunker (BETA)
+
+CHUNKER (BETA)
+
The chunker overlay transparently splits large files into smaller chunks
during upload to wrapped remote and transparently assembles them back
@@ -17044,7 +18605,7 @@ to separate it from the remote itself.
Enter a string value. Press Enter for the default ("").
remote> remote:path
Files larger than chunk size will be split in chunks.
- Enter a size with suffix k,M,G,T. Press Enter for the default ("2G").
+ Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
chunk_size> 100M
Choose how chunker handles hash sums. All modes but "none" require metadata.
Enter a string value. Press Enter for the default ("md5").
@@ -17327,7 +18888,7 @@ Files larger than chunk size will be split in chunks.
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 2G
+- Default: 2Gi
--chunker-hash-type
@@ -17447,7 +19008,9 @@ Choose how chunker should handle temporary files during transactions.
systems.
-Citrix ShareFile
+
+CITRIX SHAREFILE
+
Citrix ShareFile is a secure file sharing and transfer service aimed as
business.
@@ -17554,10 +19117,10 @@ ShareFile supports MD5 type hashes, so you can use the --checksum flag.
Transfers
-For files above 128MB rclone will use a chunked transfer. Rclone will
+For files above 128 MiB rclone will use a chunked transfer. Rclone will
upload up to --transfers chunks at the same time (shared among all the
-multipart uploads). Chunks are buffered in memory and are normally 64MB
-so increasing --transfers will increase memory use.
+multipart uploads). Chunks are buffered in memory and are normally 64
+MiB so increasing --transfers will increase memory use.
Limitations
@@ -17633,7 +19196,7 @@ Cutoff for switching to multipart upload.
- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 128M
+- Default: 128Mi
--sharefile-chunk-size
@@ -17647,7 +19210,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 64M
+- Default: 64Mi
--sharefile-endpoint
@@ -17683,7 +19246,9 @@ remote.
See List of backends that do not support rclone about See rclone about
-Crypt
+
+CRYPT
+
Rclone crypt remotes encrypt and decrypt other remotes.
@@ -18285,8 +19850,8 @@ a nonce.
Chunk
-Each chunk will contain 64kB of data, except for the last one which may
-have less data. The data chunk is in standard NaCl SecretBox format.
+Each chunk will contain 64 KiB of data, except for the last one which
+may have less data. The data chunk is in standard NaCl SecretBox format.
SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate
messages.
@@ -18311,7 +19876,7 @@ Examples
49 bytes total
-1MB (1048576 bytes) file will encrypt to
+1 MiB (1048576 bytes) file will encrypt to
- 32 bytes header
- 16 chunks of 65568 bytes
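Carrying the arithmetic through for the figures above (a 32 byte header plus 16 chunks of 65568 bytes, i.e. 65536 bytes of data plus 32 bytes of per-chunk overhead):

```shell
# Encrypted size of a 1 MiB (1048576 byte) plaintext file
header=32
chunks=16
chunk_bytes=$((65536 + 32))   # data + per-chunk overhead
total=$((header + chunks * chunk_bytes))
echo "$total bytes"   # 1049120 bytes
```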
@@ -18371,7 +19936,9 @@ SEE ALSO
filenames
-Compress (Experimental)
+
+COMPRESS (EXPERIMENTAL)
+
Warning
@@ -18516,10 +20083,12 @@ case the compressed file will need to be cached to determine it's size.
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
-- Default: 20M
+- Default: 20Mi
-Dropbox
+
+DROPBOX
+
Paths are specified as remote:path
@@ -18629,6 +20198,62 @@ get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON
strings.
+Batch mode uploads
+
+Using batch mode uploads is very important for performance when using
+the Dropbox API. See the dropbox performance guide for more info.
+
+There are 3 modes rclone can use for uploads.
+
+--dropbox-batch-mode off
+
+In this mode rclone will not use upload batching. This was the default
+before rclone v1.55. It has the disadvantage that it is very likely to
+encounter too_many_requests errors like this
+
+ NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
+
+When rclone receives these it has to wait for 15s or sometimes 300s
+before continuing which really slows down transfers.
+
+This will happen especially if --transfers is large, so this mode isn't
+recommended except for compatibility or investigating problems.
+
+--dropbox-batch-mode sync
+
+In this mode rclone will batch up uploads to the size specified by
+--dropbox-batch-size and commit them together.
+
+Using this mode means you can use a much higher --transfers parameter
+(32 or 64 works fine) without receiving too_many_requests errors.
+
+This mode ensures full data integrity.
+
+Note that there may be a pause when quitting rclone while rclone
+finishes up the last batch using this mode.
+
+--dropbox-batch-mode async
+
+In this mode rclone will batch up uploads to the size specified by
+--dropbox-batch-size and commit them together.
+
+However it will not wait for the status of the batch to be returned to
+the caller. This means rclone can use a much bigger batch size (much
+bigger than --transfers), at the cost of not being able to check the
+status of the upload.
+
+This provides the maximum possible upload speed especially with lots of
+small files, however rclone can't check the file got uploaded properly
+using this mode.
+
+If you are using this mode then using "rclone check" after the transfer
+completes is recommended. Or you could do an initial transfer with
+--dropbox-batch-mode async then do a final transfer with
+--dropbox-batch-mode sync (the default).
+
+Note that there may be a pause when quitting rclone while rclone
+finishes up the last batch using this mode.
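One possible workflow combining the modes (paths are illustrative): do the bulk upload in async mode for speed, verify it, then re-run in the default sync mode to fill in anything that was missed:

```
rclone copy --transfers 32 --dropbox-batch-mode async /path/to/files dropbox:dir
rclone check /path/to/files dropbox:dir
rclone copy --transfers 32 /path/to/files dropbox:dir
```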
+
Standard Options
Here are the standard options specific to dropbox (Dropbox).
@@ -18684,19 +20309,19 @@ Token server url. Leave blank to use the provider defaults.
--dropbox-chunk-size
-Upload chunk size. (< 150M).
+Upload chunk size. (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed slightly
-(at most 10% for 128MB in tests) at the cost of using more memory. It
+(at most 10% for 128 MiB in tests) at the cost of using more memory. It
can be set smaller if you are tight on memory.
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 48M
+- Default: 48Mi
--dropbox-impersonate
@@ -18754,6 +20379,73 @@ particular shared folder.
- Type: bool
- Default: false
+--dropbox-batch-mode
+
+Upload file batching sync|async|off.
+
+This sets the batch mode used by rclone.
+
+For full info see the main docs
+
+This has 3 possible values
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don't check completion
+
+Rclone will close any outstanding batches when it exits which may make a
+delay on quit.
+
+- Config: batch_mode
+- Env Var: RCLONE_DROPBOX_BATCH_MODE
+- Type: string
+- Default: "sync"
+
+--dropbox-batch-size
+
+Max number of files in upload batch.
+
+This sets the batch size of files to upload. It has to be less than
+1000.
+
+By default this is 0, which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+
+- batch_mode: async - default batch_size is 100
+- batch_mode: sync - default batch_size is the same as --transfers
+- batch_mode: off - not in use
+
+Rclone will close any outstanding batches when it exits which may make a
+delay on quit.
+
+Setting this is a great idea if you are uploading lots of small files as
+it will make them a lot quicker. You can use --transfers 32 to maximise
+throughput.
+
+- Config: batch_size
+- Env Var: RCLONE_DROPBOX_BATCH_SIZE
+- Type: int
+- Default: 0
+
+--dropbox-batch-timeout
+
+Max time to allow an idle upload batch before uploading.
+
+If an upload batch is idle for more than this long then it will be
+uploaded.
+
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+
+- batch_mode: async - default batch_timeout is 500ms
+- batch_mode: sync - default batch_timeout is 10s
+- batch_mode: off - not in use
+
+- Config: batch_timeout
+- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
+- Type: Duration
+- Default: 0s
+
--dropbox-encoding
This sets the encoding for the backend.
@@ -18787,6 +20479,11 @@ Failed to purge: There are too many files involved in this operation. As
a work-around do an rclone delete dropbox:dir followed by an
rclone rmdir dropbox:dir.
+When using rclone link you'll need to set --expire if using a
+non-personal account, otherwise the visibility may not be correct. (Note
+that --expire isn't supported on personal accounts). See the forum
+discussion and the dropbox SDK issue.
+
Get your own Dropbox App ID
When you use rclone with Dropbox in its default configuration you are
@@ -18807,13 +20504,24 @@ Here is how to create your own Dropbox App ID for rclone:
5. Click the button Create App
-6. Fill Redirect URIs as http://localhost:53682/
+6. Switch to the Permissions tab. Enable at least the following
+ permissions: account_info.read, files.metadata.write,
+ files.content.write, files.content.read, sharing.write. The
+ files.metadata.read and sharing.read checkboxes will be marked too.
+ Click Submit
-7. Find the App key and App secret Use these values in rclone config to
- add a new remote or edit an existing remote.
+7. Switch to the Settings tab. Fill OAuth2 - Redirect URIs as
+ http://localhost:53682/
+
+8. Find the App key and App secret values on the Settings tab. Use
+ these values in rclone config to add a new remote or edit an
+ existing remote. The App key setting corresponds to client_id in
+ rclone config, the App secret corresponds to client_secret
-Enterprise File Fabric
+
+ENTERPRISE FILE FABRIC
+
This backend supports Storage Made Easy's Enterprise File Fabric™ which
provides a software solution to integrate and unify File and Object
@@ -19059,8 +20767,10 @@ See: the encoding section in the overview for more info.
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
+
FTP
+
FTP is the File Transfer Protocol. Rclone FTP support is provided using
the github.com/jlaffaye/ftp package.
@@ -19356,7 +21066,9 @@ Not all FTP servers can have all characters in file names, for example:
pureftpd \ [ ]
-Google Cloud Storage
+
+GOOGLE CLOUD STORAGE
+
Paths are specified as remote:bucket (or remote: for the lsd command.)
You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
@@ -19583,11 +21295,24 @@ Eg --header-upload "Content-Type text/potato"
Note that the last of these is for setting custom metadata in the form
--header-upload "x-goog-meta-key: value"
-Modified time
+Modification time
-Google google cloud storage stores md5sums natively and rclone stores
-modification times as metadata on the object, under the "mtime" key in
-RFC3339 format accurate to 1ns.
+Google Cloud Storage stores md5sum natively. Google's gsutil tool stores
+modification time with one-second precision as goog-reserved-file-mtime
+in file metadata.
+
+To ensure compatibility with gsutil, rclone stores modification time in
+2 separate metadata entries. mtime uses RFC3339 format with
+one-nanosecond precision. goog-reserved-file-mtime uses Unix timestamp
+format with one-second precision. To get modification time from object
+metadata, rclone reads the metadata in the following order: mtime,
+goog-reserved-file-mtime, object updated time.
+
+Note that rclone's default modify window is 1ns. Files uploaded by
+gsutil only contain timestamps with one-second precision. If you use
+rclone to sync files previously uploaded by gsutil, rclone will attempt
+to update modification time for all these files. To avoid these possibly
+unnecessary updates, use --modify-window 1s.
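For example, when syncing over files originally uploaded by gsutil (remote and bucket names here are illustrative):

```
rclone sync --modify-window 1s /path/to/files gcs:bucket
```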
Restricted filename characters
@@ -19865,7 +21590,9 @@ rclone union remote.
See List of backends that do not support rclone about See rclone about
-Google Drive
+
+GOOGLE DRIVE
+
Paths are specified as drive:path
@@ -20754,7 +22481,7 @@ Cutoff for switching to chunked upload
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 8M
+- Default: 8Mi
--drive-chunk-size
@@ -20768,7 +22495,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 8M
+- Default: 8Mi
--drive-acknowledge-abuse
@@ -20877,7 +22604,7 @@ See: https://github.com/rclone/rclone/issues/3631
Make upload limit errors be fatal
-At the time of writing it is only possible to upload 750GB of data to
+At the time of writing it is only possible to upload 750 GiB of data to
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop the
@@ -20897,11 +22624,11 @@ See: https://github.com/rclone/rclone/issues/3857
Make download limit errors be fatal
-At the time of writing it is only possible to download 10TB of data from
-Google Drive a day (this is an undocumented limit). When this limit is
-reached Google Drive produces a slightly different error message. When
-this flag is set it causes these errors to be fatal. These will stop the
-in-progress sync.
+At the time of writing it is only possible to download 10 TiB of data
+from Google Drive a day (this is an undocumented limit). When this limit
+is reached Google Drive produces a slightly different error message.
+When this flag is set it causes these errors to be fatal. These will
+stop the in-progress sync.
Note that this detection is relying on error message strings which
Google don't document so it may break in the future.
@@ -21100,7 +22827,7 @@ Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited
to transferring about 2 files per second only. Individual files may be
-transferred much faster at 100s of MBytes/s but lots of small files can
+transferred much faster at 100s of MiByte/s but lots of small files can
take a long time.
Server side copies are also subject to a separate rate limit. If you see
@@ -21203,8 +22930,12 @@ of "External" above, but this has not been tested/documented so far).
account or "Other" if you are using a GSuite account and click "Create".
(the default name is fine)
-8. It will show you a client ID and client secret. Use these values in
- rclone config to add a new remote or edit an existing remote.
+8. It will show you a client ID and client secret. Make a note of
+ these.
+
+9. Go to "Oauth consent screen" and press "Publish App"
+
+10. Provide the noted client ID and client secret to rclone.
Be aware that, due to the "enhanced security" recently introduced by
Google, you are theoretically expected to "submit your app for
@@ -21226,7 +22957,9 @@ and Secret. Note that it will automatically create a new project in the
API Console.
-Google Photos
+
+GOOGLE PHOTOS
+
The rclone backend for Google Photos is a specialized backend for
transferring photos and videos to and from Google Photos.
@@ -21649,8 +23382,10 @@ listings and won't be transferred.
- Default: false
+
HDFS
+
HDFS is a distributed file-system, part of the Apache Hadoop framework.
Paths are specified as remote: or remote:path/to/dir.
@@ -21829,7 +23564,7 @@ system).
Kerberos service principal name for the namenode
Enables KERBEROS authentication. Specifies the Service Principal Name
-(/) for the namenode.
+(SERVICE/FQDN) for the namenode.
- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
@@ -21869,8 +23604,10 @@ See: the encoding section in the overview for more info.
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
+
HTTP
+
The HTTP remote is a read only remote for reading files of a webserver.
The webserver should provide file listings which rclone will read and
turn into a remote. This has been tested with common webservers such as
@@ -22057,7 +23794,9 @@ mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Hubic
+
+HUBIC
+
Paths are specified as remote:path
@@ -22214,12 +23953,12 @@ Token server url. Leave blank to use the provider defaults.
Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The
-default for this is 5GB which is its maximum value.
+default for this is 5 GiB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_HUBIC_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5G
+- Default: 5Gi
--hubic-no-chunk
@@ -22228,7 +23967,7 @@ Don't chunk files during streaming upload.
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
-This will limit the maximum upload size to 5GB. However non chunked
+This will limit the maximum upload size to 5 GiB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
@@ -22260,7 +23999,9 @@ The Swift API doesn't return a correct MD5SUM for segmented files
MD5SUM for these.
-Jottacloud
+
+JOTTACLOUD
+
Jottacloud is a cloud storage service provider from a Norwegian company,
using its own datacenters in Norway.
@@ -22459,6 +24200,11 @@ of a file it creates a new version of it. Currently rclone only supports
retrieving the current version but older versions can be accessed via
the Jottacloud Website.
+Versioning can be disabled with the --jottacloud-no-versions option.
+This is achieved by deleting the remote file prior to uploading a new
+version. If the upload fails no version of the file will be available
+in the remote.
+
Quota information
To view your current quota you can use the rclone about remote: command
@@ -22477,7 +24223,7 @@ required.
- Config: md5_memory_limit
- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
--jottacloud-trashed-only
@@ -22505,7 +24251,17 @@ Files bigger than this can be resumed if the upload fail's.
- Config: upload_resume_limit
- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
+
+--jottacloud-no-versions
+
+Avoid server side versioning by deleting files and recreating files
+instead of overwriting them.
+
+- Config: no_versions
+- Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS
+- Type: bool
+- Default: false
--jottacloud-encoding
@@ -22539,7 +24295,9 @@ previously deleted paths to fail. Emptying the trash should help in such
cases.
-Koofr
+
+KOOFR
+
Paths are specified as remote:path
@@ -22703,7 +24461,9 @@ Note that Koofr is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
-Mail.ru Cloud
+
+MAIL.RU CLOUD
+
Mail.ru Cloud is a cloud storage provided by a Russian internet company
Mail.Ru Group. The official desktop client is Disk-O:, available on
@@ -22946,7 +24706,7 @@ This option allows you to disable speedup (put by hash) for large files
- Config: speedup_max_disk
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
- Type: SizeSuffix
-- Default: 3G
+- Default: 3Gi
- Examples:
- "0"
- Completely disable speedup (put by hash).
@@ -22963,7 +24723,7 @@ Files larger than the size given below will always be hashed on disk.
- Config: speedup_max_memory
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
- Type: SizeSuffix
-- Default: 32M
+- Default: 32Mi
- Examples:
- "0"
- Preliminary hashing will always be done in a temporary disk
@@ -23024,7 +24784,9 @@ See: the encoding section in the overview for more info.
Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Mega
+
+MEGA
+
Mega is a cloud storage and file hosting service known for its security
feature where all files are encrypted locally before they are uploaded.
@@ -23242,7 +25004,9 @@ so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
-Memory
+
+MEMORY
+
The memory backend is an in RAM backend. It does not persist its data -
use the local backend for that.
@@ -23298,7 +25062,9 @@ Restricted filename characters
The memory backend replaces the default restricted characters set.
-Microsoft Azure Blob Storage
+
+MICROSOFT AZURE BLOB STORAGE
+
Paths are specified as remote:container (or remote: for the lsd
command.) You may put subdirectories in too, e.g.
@@ -23458,13 +25224,13 @@ Path to file containing credentials for use with a service principal.
Leave blank normally. Needed only if you want to use a service principal
instead of interactive login.
- $ az sp create-for-rbac --name "" \
+ $ az ad sp create-for-rbac --name "" \
--role "Storage Blob Data Owner" \
--scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \
> azure-principal.json
-See Use Azure CLI to assign an Azure role for access to blob and queue
-data for more details.
+See "Create an Azure service principal" and "Assign an Azure role for
+access to blob data" pages for more details.
- Config: service_principal_file
- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
@@ -23566,7 +25332,7 @@ Endpoint for the service Leave blank normally.
--azureblob-upload-cutoff
-Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
+Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
@@ -23575,7 +25341,7 @@ Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
--azureblob-chunk-size
-Upload chunk size (<= 100MB).
+Upload chunk size (<= 100 MiB).
Note that this is stored in memory and there may be up to "--transfers"
chunks stored at once in memory.
@@ -23583,7 +25349,7 @@ chunks stored at once in memory.
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 4M
+- Default: 4Mi
--azureblob-list-chunk
@@ -23726,7 +25492,9 @@ use_emulator config as true, you do not need to provide default account
name or key if using emulator.
-Microsoft OneDrive
+
+MICROSOFT ONEDRIVE
+
Paths are specified as remote:path
@@ -24005,7 +25773,7 @@ into memory.
- Config: chunk_size
- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
--onedrive-drive-id
@@ -24055,6 +25823,15 @@ slower).
- Type: bool
- Default: false
+--onedrive-list-chunk
+
+Size of listing chunk.
+
+- Config: list_chunk
+- Env Var: RCLONE_ONEDRIVE_LIST_CHUNK
+- Type: int
+- Default: 1000
+
--onedrive-no-versions
Remove all versions on modifying operations
@@ -24153,7 +25930,7 @@ mapped to ? instead.
File sizes
-The largest allowed file size is 250GB for both OneDrive Personal and
+The largest allowed file size is 250 GiB for both OneDrive Personal and
OneDrive for Business (Updated 13 Jan 2021).
Path length
@@ -24250,6 +26027,15 @@ NB Onedrive personal can't currently delete versions
Troubleshooting
+Excessive throttling or blocked on SharePoint
+
+If you experience excessive throttling or are being blocked on
+SharePoint then it may help to set the user agent explicitly with a
+flag like this:
+--user-agent "ISV|rclone.org|rclone/v1.55.1"
+
+The specific details can be found in the Microsoft document: Avoid
+getting throttled or blocked in SharePoint Online
+
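A hedged sketch of how this might look in practice; the local path, remote name sharepoint: and the decorated user agent string are illustrative:

```shell
# Hypothetical example: sync to a SharePoint-backed remote while sending
# the "ISV|<company>|<app>/<version>" style user agent that Microsoft's
# throttling guidance recommends. Remote name and paths are made up.
rclone sync /home/local/docs sharepoint:docs \
    --user-agent "ISV|rclone.org|rclone/v1.55.1"
```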
Unexpected file size/hash differences on Sharepoint
It is a known issue that Sharepoint (not OneDrive or OneDrive for
@@ -24314,7 +26100,9 @@ time the backend is configured. After this, rclone should work again for
this backend.
-OpenDrive
+
+OPENDRIVE
+
Paths are specified as remote:path
@@ -24456,7 +26244,7 @@ increase memory use.
- Config: chunk_size
- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
Limitations
@@ -24476,7 +26264,9 @@ policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-QingStor
+
+QINGSTOR
+
Paths are specified as remote:bucket (or remote: for the lsd command.)
You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
@@ -24571,7 +26361,7 @@ details.
Multipart uploads
rclone supports multipart uploads with QingStor which means that it can
-upload files bigger than 5GB. Note that files uploaded with multipart
+upload files bigger than 5 GiB. Note that files uploaded with multipart
upload don't have an MD5SUM.
Note that incomplete multipart uploads older than 24 hours can be
@@ -24699,12 +26489,12 @@ Number of connection retries.
Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size. The
-minimum is 0 and the maximum is 5GB.
+minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
--qingstor-chunk-size
@@ -24722,7 +26512,7 @@ enough memory, then increasing this will speed up the transfers.
- Config: chunk_size
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 4M
+- Default: 4Mi
--qingstor-upload-concurrency
@@ -24763,7 +26553,9 @@ policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Swift
+
+SWIFT
+
Swift refers to OpenStack Object Storage. Commercial implementations of
that being:
@@ -25203,12 +26995,12 @@ true for resuming uploads across different sessions.
Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The
-default for this is 5GB which is its maximum value.
+default for this is 5 GiB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_SWIFT_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5G
+- Default: 5Gi
--swift-no-chunk
@@ -25217,7 +27009,7 @@ Don't chunk files during streaming upload.
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
-This will limit the maximum upload size to 5GB. However non chunked
+This will limit the maximum upload size to 5 GiB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
@@ -25283,7 +27075,9 @@ This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
-pCloud
+
+PCLOUD
+
Paths are specified as remote:path
@@ -25504,7 +27298,9 @@ with rclone authorize.
- EU region
-premiumize.me
+
+PREMIUMIZE.ME
+
Paths are specified as remote:path
@@ -25641,7 +27437,9 @@ maps these to and from an identical looking unicode equivalents \ and
premiumize.me only supports filenames up to 255 characters in length.
-put.io
+
+PUT.IO
+
Paths are specified as remote:path
@@ -25757,7 +27555,9 @@ See: the encoding section in the overview for more info.
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Seafile
+
+SEAFILE
+
This is a backend for the Seafile storage service: - It works with both
the free community edition or the professional edition. - Seafile
@@ -26113,8 +27913,10 @@ See: the encoding section in the overview for more info.
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
+
SFTP
+
SFTP is the Secure (or SSH) File Transfer Protocol.
The SFTP backend can be used with a number of different providers:
@@ -26127,7 +27929,10 @@ installations.
Paths are specified as remote:path. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
-refers to the user's home directory.
+refers to the user's home directory. For example, rclone lsd remote:
+would list the home directory of the user configured in the rclone
+remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would
+list the root directory of the remote machine (i.e. /).
"Note that some SFTP servers will need the leading / - Synology is a
good example of this. rsync.net, on the other hand, requires users to
@@ -26188,6 +27993,10 @@ See all directories in the home directory
rclone lsd remote:
+See all directories in the root directory
+
+ rclone lsd remote:/
+
Make a new directory
rclone mkdir remote:path/to/directory
@@ -26201,6 +28010,10 @@ files in the directory.
rclone sync -i /home/local/directory remote:directory
+Mount the remote path /srv/www-data/ to the local path /mnt/www-data
+
+ rclone mount remote:/srv/www-data/ /mnt/www-data
+
SSH Authentication
The SFTP remote supports three authentication methods:
@@ -26631,6 +28444,20 @@ If concurrent reads are disabled, the use_fstat option is ignored.
- Type: bool
- Default: false
+--sftp-disable-concurrent-writes
+
+If set, don't use concurrent writes
+
+Normally rclone uses concurrent writes to upload files. This improves
+the performance greatly, especially for distant servers.
+
+This option disables concurrent writes should that be necessary.
+
+- Config: disable_concurrent_writes
+- Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_WRITES
+- Type: bool
+- Default: false
+
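For example, if uploads to a fragile SFTP server fail with concurrent writes enabled, the option could be set per command or per session; the remote name remote: and paths are illustrative:

```shell
# Disable concurrent writes for a single transfer via the flag...
rclone copy /home/local/file.bin remote:backup \
    --sftp-disable-concurrent-writes
# ...or for the whole session via the environment variable.
export RCLONE_SFTP_DISABLE_CONCURRENT_WRITES=true
rclone copy /home/local/file.bin remote:backup
```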
--sftp-idle-timeout
Max time before closing idle connections
@@ -26696,7 +28523,9 @@ rsync.net is supported through the SFTP backend.
See rsync.net's documentation of rclone examples.
-SugarSync
+
+SUGARSYNC
+
SugarSync is a cloud service that enables active synchronization of
files across computers and other devices for file backup, access,
@@ -26943,7 +28772,9 @@ policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-Tardigrade
+
+TARDIGRADE
+
Tardigrade is an encrypted, secure, and cost-effective object storage
service that enables you to store, back up, and archive large amounts of
@@ -27250,8 +29081,167 @@ remote.
See List of backends that do not support rclone about See rclone about
+Known issues
+
+If you get errors like too many open files this usually happens when the
+default ulimit for system max open files is exceeded. Native Storj
+protocol opens a large number of TCP connections (each of which is
+counted as an open file). For a single upload stream you can expect 110
+TCP connections to be opened. For a single download stream you can
+expect 35. This batch of connections will be opened for every 64 MiB
+segment and you should also expect TCP connections to be reused. If you
+do many transfers you will eventually open a connection to most
+storage nodes (thousands of nodes).
+
+To fix these, please raise your system limits. You can do this by
+issuing a ulimit -n 65536 just before you run rclone. To change the
+limits more permanently you can add this to your shell startup script, e.g.
+$HOME/.bashrc, or change the system-wide configuration, usually
+/etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to
+your operating system manual.
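A minimal wrapper along these lines raises the limit for the current shell before invoking rclone; the remote name tardigrade: and paths are illustrative, and the hard limit of your system may cap the value:

```shell
#!/bin/sh
# Raise the per-process open file limit for this shell session, then run
# rclone, so the many TCP connections opened by the native Storj protocol
# do not exhaust the default limit. 65536 is the value suggested above.
ulimit -n 65536
rclone copy /data tardigrade:backup
```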
+
+
+
+UPTOBOX
+
+
+This is a backend for the Uptobox file storage service. Uptobox is
+closer to a one-click hoster than a traditional cloud storage provider
+and is therefore not suitable for long term storage.
+
+Paths are specified as remote:path
+
+Paths may be as deep as required, e.g. remote:directory/subdirectory.
+
+Setup
+
+To configure an Uptobox backend you'll need your personal API token.
+You'll find it in your account settings.
+
+Example
+
+Here is an example of how to make a remote called remote with the
+default setup. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ Current remotes:
+
+ Name Type
+ ==== ====
+ TestUptobox uptobox
+
+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> n
+ name> uptobox
+ Type of storage to configure.
+ Enter a string value. Press Enter for the default ("").
+ Choose a number from below, or type in your own value
+ [...]
+ 37 / Uptobox
+ \ "uptobox"
+ [...]
+ Storage> uptobox
+ ** See help for uptobox backend at: https://rclone.org/uptobox/ **
+
+ Your API Key, get it from https://uptobox.com/my_account
+ Enter a string value. Press Enter for the default ("").
+ api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ Edit advanced config? (y/n)
+ y) Yes
+ n) No (default)
+ y/n> n
+ Remote config
+ --------------------
+ [uptobox]
+ type = uptobox
+ api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d>
+
+Once configured you can then use rclone like this,
+
+List directories in top level of your Uptobox
+
+ rclone lsd remote:
+
+List all the files in your Uptobox
+
+ rclone ls remote:
+
+To copy a local directory to an Uptobox directory called backup
+
+ rclone copy /home/source remote:backup
+
+Modified time and hashes
+
+Uptobox supports neither modified times nor checksums.
+
+Restricted filename characters
+
+In addition to the default restricted characters set the following
+characters are also replaced:
+
+ Character Value Replacement
+ ----------- ------- -------------
+ " 0x22 "
+  `           0x60    `
+
+Invalid UTF-8 bytes will also be replaced, as they can't be used in XML
+strings.
+
+Standard Options
+
+Here are the standard options specific to uptobox (Uptobox).
+
+--uptobox-access-token
+
+Your access token, get it from https://uptobox.com/my_account
+
+- Config: access_token
+- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to uptobox (Uptobox).
+
+--uptobox-encoding
+
+This sets the encoding for the backend.
+
+See: the encoding section in the overview for more info.
+
+- Config: encoding
+- Env Var: RCLONE_UPTOBOX_ENCODING
+- Type: MultiEncoder
+- Default:
+ Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
+
+Limitations
+
+Uptobox will delete inactive files that have not been accessed in 60
+days.
+
+rclone about is not supported by this backend. An overview of used
+space can, however, be seen in the Uptobox web interface.
+
+
+
+UNION
-Union
The union remote provides a unification similar to UnionFS using other
remotes.
@@ -27548,7 +29538,9 @@ useful when a path preserving policy is used.
- Default: 120
-WebDAV
+
+WEBDAV
+
Paths are specified as remote:path
@@ -27742,6 +29734,26 @@ for sharepoint-ntlm or identity otherwise.
- Type: string
- Default: ""
+--webdav-headers
+
+Set HTTP headers for all transactions
+
+Use this to set additional HTTP headers for all transactions
+
+The input format is a comma separated list of key,value pairs. Standard
+CSV encoding may be used.
+
+For example to set a Cookie use 'Cookie,name=value', or
+'"Cookie","name=value"'.
+
+You can set multiple headers, e.g.
+'"Cookie","name=value","Authorization","xxx"'.
+
+- Config: headers
+- Env Var: RCLONE_WEBDAV_HEADERS
+- Type: CommaSepList
+- Default:
+
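For instance, the headers option could be used on the command line like this; the remote name webdav-remote: and the header values are illustrative:

```shell
# Send a session cookie and an extra authorization header with every
# WebDAV transaction. The value is a CSV-encoded list of key,value pairs.
rclone lsd webdav-remote: \
    --webdav-headers '"Cookie","name=value","Authorization","xxx"'
```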
Provider notes
@@ -27909,7 +29921,9 @@ oidc-agent to supply an access token from the _XDC_ OIDC Provider.
bearer_token_command = oidc-token XDC
-Yandex Disk
+
+YANDEX DISK
+
Yandex Disk is a cloud storage solution created by Yandex.
@@ -27950,7 +29964,7 @@ This will guide you through an interactive setup process:
[remote]
client_id =
client_secret =
- token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
+ token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"OAuth","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
e) Edit this remote
@@ -28017,15 +30031,15 @@ strings.
Limitations
-When uploading very large files (bigger than about 5GB) you will need to
-increase the --timeout parameter. This is because Yandex pauses (perhaps
-to calculate the MD5SUM for the entire file) before returning
+When uploading very large files (bigger than about 5 GiB) you will need
+to increase the --timeout parameter. This is because Yandex pauses
+(perhaps to calculate the MD5SUM for the entire file) before returning
confirmation that the file has been uploaded. The default handling of
timeouts in rclone is to assume a 5 minute pause is an error and close
the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice
-the max size of file in GB should be enough, so if you want to upload a
-30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
+the max size of file in GiB should be enough, so if you want to upload a
+30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
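The rule of thumb above can be computed in a small shell snippet; the file name bigfile.iso and remote name remote: are illustrative:

```shell
# Rule of thumb from the docs: --timeout in minutes = 2 x file size in GiB.
SIZE_GIB=30
TIMEOUT_MIN=$((2 * SIZE_GIB))
# Prints the command to run for a 30 GiB upload: ... --timeout 60m
echo "rclone copy bigfile.iso remote:backup --timeout ${TIMEOUT_MIN}m"
```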
Standard Options
@@ -28092,7 +30106,9 @@ See: the encoding section in the overview for more info.
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
-Zoho Workdrive
+
+ZOHO WORKDRIVE
+
Zoho WorkDrive is a cloud storage solution created by Zoho.
@@ -28233,8 +30249,10 @@ OAuth Client Secret Leave blank normally.
--zoho-region
-Zoho region to connect to. You'll have to use the region you
-organization is registered in.
+Zoho region to connect to.
+
+You'll have to use the region your organization is registered in. If
+not sure, use the same top level domain as you connect to in your
+browser.
- Config: region
- Env Var: RCLONE_ZOHO_REGION
@@ -28293,7 +30311,9 @@ See: the encoding section in the overview for more info.
- Default: Del,Ctl,InvalidUtf8
-Local Filesystem
+
+LOCAL FILESYSTEM
+
Local paths are specified as normal filesystem paths, e.g.
/path/to/wherever, so
@@ -28401,11 +30421,12 @@ UTF-16.
Paths on Windows
On Windows there are many ways of specifying a path to a file system
-resource. Both absolute paths like C:\path\to\wherever, and relative
-paths like ..\wherever can be used, and path separator can be either \
-(as in C:\path\to\wherever) or / (as in C:/path/to/wherever). Length of
-these paths are limited to 259 characters for files and 247 characters
-for directories, but there is an alternative extended-length path format
+resource. Local paths can be absolute, like C:\path\to\wherever, or
+relative, like ..\wherever. Network paths in UNC format, \\server\share,
+are also supported. Path separator can be either \ (as in
+C:\path\to\wherever) or / (as in C:/path/to/wherever). Length of these
+paths are limited to 259 characters for files and 247 characters for
+directories, but there is an alternative extended-length path format
increasing the limit to (approximately) 32,767 characters. This format
requires absolute paths and the use of prefix \\?\, e.g.
\\?\D:\some\very\long\path. For convenience rclone will automatically
@@ -28456,7 +30477,7 @@ like symlinks under Windows).
If you supply --copy-links or -L then rclone will follow the symlink and
copy the pointed to file or directory. Note that this flag is
-incompatible with -links / -l.
+incompatible with --links / -l.
This flag applies to all commands.
@@ -28629,32 +30650,41 @@ that they should be skipped.
--local-zero-size-links
Assume the Stat size of links is zero (and read them instead)
+(Deprecated)
-On some virtual filesystems (such ash LucidLink), reading a link size
-via a Stat call always returns 0. However, on unix it reads as the
-length of the text in the link. This may cause errors like this when
-syncing:
+Rclone used to use the Stat size of links as the link size, but this
+fails in quite a few places:
- Failed to copy: corrupted on transfer: sizes differ 0 vs 13
+- Windows
+- On some virtual filesystems (such as LucidLink)
+- Android
-Setting this flag causes rclone to read the link and use that as the
-size of the link instead of 0 which in most cases fixes the problem.
+So rclone now always reads the link.
- Config: zero_size_links
- Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
- Type: bool
- Default: false
---local-no-unicode-normalization
+--local-unicode-normalization
-Don't apply unicode normalization to paths and filenames (Deprecated)
+Apply unicode NFC normalization to paths and filenames
-This flag is deprecated now. Rclone no longer normalizes unicode file
-names, but it compares them with unicode normalization in the sync
-routine instead.
+This flag can be used to normalize file names read from the local
+filesystem into unicode NFC form.
-- Config: no_unicode_normalization
-- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+Rclone does not normally touch the encoding of file names it reads from
+the file system.
+
+This can be useful when using macOS as it normally provides decomposed
+(NFD) unicode which in some languages (e.g. Korean) doesn't display
+properly on some OSes.
+
+Note that rclone compares filenames with unicode normalization in the
+sync routine so this flag shouldn't normally be used.
+
+- Config: unicode_normalization
+- Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION
- Type: bool
- Default: false
@@ -28814,6 +30844,245 @@ Options:
CHANGELOG
+v1.56.0 - 2021-07-20
+
+See commits
+
+- New backends
+ - Uptobox (buengese)
+- New commands
+ - serve docker (Antoine GIRARD) (Ivan Andreev)
+ - and accompanying docker volume plugin
+ - checksum to check files against a file of checksums (Ivan
+ Andreev)
+ - this is also available as rclone md5sum -C etc
+ - config touch: ensure config exists at configured location
+ (albertony)
+ - test changenotify: command to help debugging changenotify (Nick
+ Craig-Wood)
+- Deprecations
+ - dbhashsum: Remove command deprecated a year ago (Ivan Andreev)
+ - cache: Deprecate cache backend (Ivan Andreev)
+- New Features
+ - rework config system so it can be used non-interactively via cli
+ and rc API.
+ - See docs in config create
+ - This is a very big change to all the backends so may cause
+ breakages - please file bugs!
+ - librclone - export the rclone RC as a C library (lewisxy) (Nick
+ Craig-Wood)
+ - Link a C-API rclone shared object into your project
+ - Use the RC as an in memory interface
+ - Python example supplied
+ - Also supports Android and gomobile
+ - fs
+ - Add --disable-http2 for global http2 disable (Nick
+ Craig-Wood)
+ - Make --dump imply -vv (Alex Chen)
+ - Use binary prefixes for size and rate units (albertony)
+ - Use decimal prefixes for counts (albertony)
+ - Add google search widget to rclone.org (Ivan Andreev)
+ - accounting: Calculate rolling average speed (Haochen Tong)
+ - atexit: Terminate with non-zero status after receiving signal
+ (Michael Hanselmann)
+ - build
+ - Only run event-based workflow scripts under rclone repo with
+ manual override (Mathieu Carbou)
+ - Add Android build with gomobile (x0b)
+ - check: Log the hash in use like cryptcheck does (Nick
+ Craig-Wood)
+ - version: Print os/version, kernel and bitness (Ivan Andreev)
+ - config
+ - Prevent use of Windows reserved names in config file name
+ (albertony)
+ - Create config file in windows appdata directory by default
+ (albertony)
+ - Treat any config file paths with filename notfound as
+ memory-only config (albertony)
+ - Delay load config file (albertony)
+ - Replace defaultConfig with a thread-safe in-memory
+ implementation (Chris Macklin)
+ - Allow config create and friends to take key=value parameters
+ (Nick Craig-Wood)
+ - Fixed issues with flags/options set by environment vars.
+ (Ole Frost)
+ - fshttp: Implement graceful DSCP error handling (Tyson Moore)
+ - lib/http - provides an abstraction for a central http server
+ that services can bind routes to (Nolan Woods)
+ - Add --template config and flags to serve/data (Nolan Woods)
+ - Add default 404 handler (Nolan Woods)
+ - link: Use "off" value for unset expiry (Nick Craig-Wood)
+ - oauthutil: Raise fatal error if token expired without refresh
+ token (Alex Chen)
+ - rcat: Add --size flag for more efficient uploads of known size
+ (Nazar Mishturak)
+ - serve sftp: Add --stdio flag to serve via stdio (Tom)
+ - sync: Don't warn about --no-traverse when --files-from is set
+ (Nick Gaya)
+ - test makefiles
+ - Add --seed flag and make data generated repeatable (Nick
+ Craig-Wood)
+ - Add log levels and speed summary (Nick Craig-Wood)
+- Bug Fixes
+ - accounting: Fix startTime of statsGroups.sum (Haochen Tong)
+ - cmd/ncdu: Fix out of range panic in delete (buengese)
+ - config
+ - Fix issues with memory-only config file paths (albertony)
+ - Fix in memory config not saving on the fly backend config
+ (Nick Craig-Wood)
+ - fshttp: Fix address parsing for DSCP (Tyson Moore)
+ - ncdu: Update termbox-go library to fix crash (Nick Craig-Wood)
+ - oauthutil: Fix old authorize result not recognised (Cnly)
+ - operations: Don't update timestamps of files in --compare-dest
+ (Nick Gaya)
+ - selfupdate: fix archive name on macos (Ivan Andreev)
+- Mount
+ - Refactor before adding serve docker (Antoine GIRARD)
+- VFS
+ - Add cache reset for --vfs-cache-max-size handling at cache poll
+ interval (Leo Luan)
+ - Fix modtime changing when reading file into cache (Nick
+ Craig-Wood)
+ - Avoid unnecessary subdir in cache path (albertony)
+ - Fix that umask option cannot be set as environment variable
+ (albertony)
+ - Do not print notice about missing poll-interval support when set
+ to 0 (albertony)
+- Local
+ - Always use readlink to read symlink size for better
+ compatibility (Nick Craig-Wood)
+ - Add --local-unicode-normalization (and remove
+ --local-no-unicode-normalization) (Nick Craig-Wood)
+ - Skip entries removed concurrently with List() (Ivan Andreev)
+- Crypt
+ - Support timestamped filenames from --b2-versions (Dominik
+ Mydlil)
+- B2
+ - Don't include the bucket name in public link file prefixes
+ (Jeffrey Tolar)
+ - Fix versions and .files with no extension (Nick Craig-Wood)
+ - Factor version handling into lib/version (Dominik Mydlil)
+- Box
+ - Use upload preflight check to avoid listings in file uploads
+ (Nick Craig-Wood)
+ - Return errors instead of calling log.Fatal with them (Nick
+ Craig-Wood)
+- Drive
+ - Switch to the Drives API for looking up shared drives (Nick
+ Craig-Wood)
+ - Fix some google docs being treated as files (Nick Craig-Wood)
+- Dropbox
+ - Add --dropbox-batch-mode flag to speed up uploading (Nick
+ Craig-Wood)
+ - Read the batch mode docs for more info
+ - Set visibility in link sharing when --expire is set (Nick
+ Craig-Wood)
+ - Simplify chunked uploads (Alexey Ivanov)
+ - Improve "own App IP" instructions (Ivan Andreev)
+- Fichier
+ - Check if more than one upload link is returned (Nick Craig-Wood)
+ - Support downloading password protected files and folders
+ (Florian Penzkofer)
+ - Make error messages report text from the API (Nick Craig-Wood)
+ - Fix move of files in the same directory (Nick Craig-Wood)
+ - Check that we actually got a download token and retry if we
+ didn't (buengese)
+- Filefabric
+ - Fix listing after change of from field from "int" to int. (Nick
+ Craig-Wood)
+- FTP
+ - Make upload error 250 indicate success (Nick Craig-Wood)
+- GCS
+ - Make compatible with gsutil's mtime metadata (database64128)
+ - Clean up time format constants (database64128)
+- Google Photos
+ - Fix read only scope not being used properly (Nick Craig-Wood)
+- HTTP
+ - Replace httplib with lib/http (Nolan Woods)
+ - Clean up Bind to better use middleware (Nolan Woods)
+- Jottacloud
+ - Fix legacy auth with state based config system (buengese)
+ - Fix invalid url in output from link command (albertony)
+ - Add no versions option (buengese)
+- Onedrive
+ - Add list_chunk option (Nick Gaya)
+ - Also report root error if unable to cancel multipart upload
+ (Cnly)
+ - Fix failed to configure: empty token found error (Nick
+ Craig-Wood)
+ - Make link return direct download link (Xuanchen Wu)
+- S3
+ - Add --s3-no-head-object (Tatsuya Noyori)
+ - Remove WebIdentityRoleProvider to fix crash on auth (Nick
+ Craig-Wood)
+ - Don't check to see if remote is object if it ends with / (Nick
+ Craig-Wood)
+ - Add SeaweedFS (Chris Lu)
+ - Update Alibaba OSS endpoints (Chuan Zh)
+- SFTP
+ - Fix performance regression by re-enabling concurrent writes
+ (Nick Craig-Wood)
+ - Expand tilde and environment variables in configured
+ known_hosts_file (albertony)
+- Tardigrade
+ - Upgrade to uplink v1.4.6 (Caleb Case)
+ - Use negative offset (Caleb Case)
+ - Add warning about too many open files (acsfer)
+- WebDAV
+ - Fix sharepoint auth over http (Nick Craig-Wood)
+ - Add headers option (Antoon Prins)
+
+
+v1.55.1 - 2021-04-26
+
+See commits
+
+- Bug Fixes
+ - selfupdate
+ - Dont detect FUSE if build is static (Ivan Andreev)
+ - Add build tag noselfupdate (Ivan Andreev)
+ - sync: Fix incorrect error reported by graceful cutoff (Nick
+ Craig-Wood)
+ - install.sh: fix macOS arm64 download (Nick Craig-Wood)
+ - build: Fix version numbers in android branch builds (Nick
+ Craig-Wood)
+ - docs
+ - Contributing.md: update setup instructions for go1.16 (Nick
+ Gaya)
+ - WinFsp 2021 is out of beta (albertony)
+ - Minor cleanup of space around code section (albertony)
+ - Fixed some typos (albertony)
+- VFS
+ - Fix a code path which allows dirty data to be removed causing
+ data loss (Nick Craig-Wood)
+- Compress
+ - Fix compressed name regexp (buengese)
+- Drive
+ - Fix backend copyid of google doc to directory (Nick Craig-Wood)
+ - Don't open browser when service account... (Ansh Mittal)
+- Dropbox
+ - Add missing team_data.member scope for use with --impersonate
+ (Nick Craig-Wood)
+ - Fix About after scopes changes - rclone config reconnect needed
+ (Nick Craig-Wood)
+ - Fix Unable to decrypt returned paths from changeNotify (Nick
+ Craig-Wood)
+- FTP
+ - Fix implicit TLS (Ivan Andreev)
+- Onedrive
+ - Work around for random "Unable to initialize RPS" errors
+ (OleFrost)
+- SFTP
+ - Revert sftp library to v1.12.0 from v1.13.0 to fix performance
+ regression (Nick Craig-Wood)
+ - Fix Update ReadFrom failed: failed to send packet: EOF errors
+ (Nick Craig-Wood)
+- Zoho
+ - Fix error when region isn't set (buengese)
+ - Do not ask for mountpoint twice when using headless setup
+ (buengese)
+
+
v1.55.0 - 2021-03-31
See commits
@@ -33869,7 +36138,7 @@ email addresses removed from here need to be addeed to bin/.ignore-emails to mak
- Fred fred@creativeprojects.tech
- Sébastien Gross renard@users.noreply.github.com
- Maxime Suret 11944422+msuret@users.noreply.github.com
-- Caleb Case caleb@storj.io
+- Caleb Case caleb@storj.io calebcase@gmail.com
- Ben Zenker imbenzenker@gmail.com
- Martin Michlmayr tbm@cyrius.com
- Brandon McNama bmcnama@pagerduty.com
@@ -33976,6 +36245,40 @@ email addresses removed from here need to be addeed to bin/.ignore-emails to mak
- Manish Kumar krmanish260@gmail.com
- x0b x0bdev@gmail.com
- CERN through the CS3MESH4EOSC Project
+- Nick Gaya nicholasgaya+github@gmail.com
+- Ashok Gelal 401055+ashokgelal@users.noreply.github.com
+- Dominik Mydlil dominik.mydlil@outlook.com
+- Nazar Mishturak nazarmx@gmail.com
+- Ansh Mittal iamAnshMittal@gmail.com
+- noabody noabody@yahoo.com
+- OleFrost 82263101+olefrost@users.noreply.github.com
+- Kenny Parsons kennyparsons93@gmail.com
+- Jeffrey Tolar tolar.jeffrey@gmail.com
+- jtagcat git-514635f7@jtag.cat
+- Tatsuya Noyori
+ 63089076+public-tatsuya-noyori@users.noreply.github.com
+- lewisxy lewisxy@users.noreply.github.com
+- Nolan Woods nolan_w@sfu.ca
+- Gautam Kumar 25435568+gautamajay52@users.noreply.github.com
+- Chris Macklin chris.macklin@10xgenomics.com
+- Antoon Prins antoon.prins@surfsara.nl
+- Alexey Ivanov rbtz@dropbox.com
+- Serge Pouliquen sp31415@free.fr
+- acsfer carlos@reendex.com
+- Tom tom@tom-fitzhenry.me.uk
+- Tyson Moore tyson@tyson.me
+- database64128 free122448@hotmail.com
+- Chris Lu chrislusf@users.noreply.github.com
+- Reid Buzby reid@rethink.software
+- darrenrhs darrenrhs@gmail.com
+- Florian Penzkofer fp@nullptr.de
+- Xuanchen Wu 117010292@link.cuhk.edu.cn
+- partev petrosyan@gmail.com
+- Dmitry Sitnikov fo2@inbox.ru
+- Haochen Tong i@hexchain.org
+- Michael Hanselmann public@hansmi.ch
+- Chuan Zh zhchuan7@gmail.com
+- Antoine GIRARD antoine.girard@sapk.fr
diff --git a/bin/.ignore-emails b/bin/.ignore-emails
index a05a74b5f..ebf7cbecc 100644
--- a/bin/.ignore-emails
+++ b/bin/.ignore-emails
@@ -2,3 +2,4 @@
<33207650+sp31415t1@users.noreply.github.com>
+
diff --git a/bin/make_manual.py b/bin/make_manual.py
index 1cfaadbbc..8c1631c33 100755
--- a/bin/make_manual.py
+++ b/bin/make_manual.py
@@ -23,6 +23,7 @@ docs = [
"rc.md",
"overview.md",
"flags.md",
+ "docker.md",
# Keep these alphabetical by full name
"fichier.md",
diff --git a/docs/content/alias.md b/docs/content/alias.md
index 6325dd7f3..88b7d991a 100644
--- a/docs/content/alias.md
+++ b/docs/content/alias.md
@@ -3,8 +3,7 @@ title: "Alias"
description: "Remote Aliases"
---
-{{< icon "fa fa-link" >}} Alias
------------------------------------------
+# {{< icon "fa fa-link" >}} Alias
The `alias` remote provides a new name for another remote.
diff --git a/docs/content/amazonclouddrive.md b/docs/content/amazonclouddrive.md
index 28d71c08e..65f9913f1 100644
--- a/docs/content/amazonclouddrive.md
+++ b/docs/content/amazonclouddrive.md
@@ -3,8 +3,7 @@ title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
---
-{{< icon "fab fa-amazon" >}} Amazon Drive
------------------------------------------
+# {{< icon "fab fa-amazon" >}} Amazon Drive
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.
@@ -260,7 +259,7 @@ Files >= this size will be downloaded via their tempLink.
Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
-of files bigger than about 10 GB. The default for this is 9 GB which
+of files bigger than about 10 GiB. The default for this is 9 GiB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink"
@@ -270,7 +269,7 @@ underlying S3 storage.
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
-- Default: 9G
+- Default: 9Gi
#### --acd-encoding
diff --git a/docs/content/authors.md b/docs/content/authors.md
index 504dcb8ea..65e6a5226 100644
--- a/docs/content/authors.md
+++ b/docs/content/authors.md
@@ -431,7 +431,7 @@ put them back in again.` >}}
* Laurens Janssen
* Bob Bagwill
* Nathan Collins
- * lostheli
+ * lostheli
* kelv
* Milly
* gtorelly
diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md
index bc2acda43..aaf348c58 100644
--- a/docs/content/azureblob.md
+++ b/docs/content/azureblob.md
@@ -3,8 +3,7 @@ title: "Microsoft Azure Blob Storage"
description: "Rclone docs for Microsoft Azure Blob Storage"
---
-{{< icon "fab fa-windows" >}} Microsoft Azure Blob Storage
------------------------------------------
+# {{< icon "fab fa-windows" >}} Microsoft Azure Blob Storage
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g.
@@ -285,7 +284,7 @@ Note that this is stored in memory and there may be up to
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 4M
+- Default: 4Mi
#### --azureblob-list-chunk
diff --git a/docs/content/b2.md b/docs/content/b2.md
index 6314b2478..020141e77 100644
--- a/docs/content/b2.md
+++ b/docs/content/b2.md
@@ -3,8 +3,7 @@ title: "B2"
description: "Backblaze B2"
---
-{{< icon "fa fa-fire" >}} Backblaze B2
-----------------------------------------
+# {{< icon "fa fa-fire" >}} Backblaze B2
B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
@@ -406,7 +405,7 @@ This value should be set no larger than 4.657 GiB (== 5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
#### --b2-copy-cutoff
@@ -420,7 +419,7 @@ The minimum is 0 and the maximum is 4.6 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4G
+- Default: 4Gi
#### --b2-chunk-size
@@ -434,7 +433,7 @@ minimum size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 96M
+- Default: 96Mi
#### --b2-disable-checksum
diff --git a/docs/content/box.md b/docs/content/box.md
index 4a175ac85..c2f54cbcb 100644
--- a/docs/content/box.md
+++ b/docs/content/box.md
@@ -3,8 +3,7 @@ title: "Box"
description: "Rclone docs for Box"
---
-{{< icon "fa fa-archive" >}} Box
------------------------------------------
+# {{< icon "fa fa-archive" >}} Box
Paths are specified as `remote:path`
@@ -374,7 +373,7 @@ Cutoff for switching to multipart upload (>= 50 MiB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 50M
+- Default: 50Mi
#### --box-commit-retries
diff --git a/docs/content/cache.md b/docs/content/cache.md
index 2c6d6f8d0..416426f88 100644
--- a/docs/content/cache.md
+++ b/docs/content/cache.md
@@ -3,8 +3,7 @@ title: "Cache"
description: "Rclone docs for cache remote"
---
-{{< icon "fa fa-archive" >}} Cache (DEPRECATED)
------------------------------------------
+# {{< icon "fa fa-archive" >}} Cache (DEPRECATED)
The `cache` remote wraps another existing remote and stores file structure
and its data for long running tasks like `rclone mount`.
@@ -361,9 +360,9 @@ will need to be cleared or unexpected EOF errors will occur.
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
- Examples:
- - "1m"
+ - "1M"
- 1 MiB
- "5M"
- 5 MiB
@@ -398,7 +397,7 @@ oldest chunks until it goes under this value.
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
-- Default: 10G
+- Default: 10Gi
- Examples:
- "500M"
- 500 MiB
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 4ab156570..3f6388566 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,149 @@ description: "Rclone Changelog"
# Changelog
+## v1.56.0 - 2021-07-20
+
+[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.56.0)
+
+* New backends
+ * [Uptobox](/uptobox/) (buengese)
+* New commands
+ * [serve docker](/commands/rclone_serve_docker/) (Antoine GIRARD) (Ivan Andreev)
+ * and accompanying [docker volume plugin](/docker/)
+ * [checksum](/commands/rclone_checksum/) to check files against a file of checksums (Ivan Andreev)
+ * this is also available as `rclone md5sum -C` etc
+ * [config touch](/commands/rclone_config_touch/): ensure config exists at configured location (albertony)
+ * [test changenotify](/commands/rclone_test_changenotify/): command to help debugging changenotify (Nick Craig-Wood)
+* Deprecations
+ * `dbhashsum`: Remove command deprecated a year ago (Ivan Andreev)
+ * `cache`: Deprecate cache backend (Ivan Andreev)
+* New Features
+    * Rework config system so it can be used non-interactively via the CLI and rc API.
+ * See docs in [config create](/commands/rclone_config_create/)
+ * This is a very big change to all the backends so may cause breakages - please file bugs!
+ * librclone - export the rclone RC as a C library (lewisxy) (Nick Craig-Wood)
+ * Link a C-API rclone shared object into your project
+ * Use the RC as an in memory interface
+ * Python example supplied
+ * Also supports Android and gomobile
+ * fs
+ * Add `--disable-http2` for global http2 disable (Nick Craig-Wood)
+ * Make `--dump` imply `-vv` (Alex Chen)
+ * Use binary prefixes for size and rate units (albertony)
+ * Use decimal prefixes for counts (albertony)
+ * Add google search widget to rclone.org (Ivan Andreev)
+ * accounting: Calculate rolling average speed (Haochen Tong)
+ * atexit: Terminate with non-zero status after receiving signal (Michael Hanselmann)
+ * build
+ * Only run event-based workflow scripts under rclone repo with manual override (Mathieu Carbou)
+ * Add Android build with gomobile (x0b)
+ * check: Log the hash in use like cryptcheck does (Nick Craig-Wood)
+ * version: Print os/version, kernel and bitness (Ivan Andreev)
+ * config
+ * Prevent use of Windows reserved names in config file name (albertony)
+ * Create config file in windows appdata directory by default (albertony)
+ * Treat any config file paths with filename notfound as memory-only config (albertony)
+ * Delay load config file (albertony)
+ * Replace defaultConfig with a thread-safe in-memory implementation (Chris Macklin)
+ * Allow `config create` and friends to take `key=value` parameters (Nick Craig-Wood)
+    * Fix issues with flags/options set by environment vars (Ole Frost)
+ * fshttp: Implement graceful DSCP error handling (Tyson Moore)
+ * lib/http - provides an abstraction for a central http server that services can bind routes to (Nolan Woods)
+ * Add `--template` config and flags to serve/data (Nolan Woods)
+ * Add default 404 handler (Nolan Woods)
+ * link: Use "off" value for unset expiry (Nick Craig-Wood)
+ * oauthutil: Raise fatal error if token expired without refresh token (Alex Chen)
+ * rcat: Add `--size` flag for more efficient uploads of known size (Nazar Mishturak)
+ * serve sftp: Add `--stdio` flag to serve via stdio (Tom)
+ * sync: Don't warn about `--no-traverse` when `--files-from` is set (Nick Gaya)
+ * `test makefiles`
+ * Add `--seed` flag and make data generated repeatable (Nick Craig-Wood)
+ * Add log levels and speed summary (Nick Craig-Wood)
+* Bug Fixes
+ * accounting: Fix startTime of statsGroups.sum (Haochen Tong)
+ * cmd/ncdu: Fix out of range panic in delete (buengese)
+ * config
+ * Fix issues with memory-only config file paths (albertony)
+ * Fix in memory config not saving on the fly backend config (Nick Craig-Wood)
+ * fshttp: Fix address parsing for DSCP (Tyson Moore)
+ * ncdu: Update termbox-go library to fix crash (Nick Craig-Wood)
+ * oauthutil: Fix old authorize result not recognised (Cnly)
+ * operations: Don't update timestamps of files in `--compare-dest` (Nick Gaya)
+    * selfupdate: Fix archive name on macOS (Ivan Andreev)
+* Mount
+ * Refactor before adding serve docker (Antoine GIRARD)
+* VFS
+ * Add cache reset for `--vfs-cache-max-size` handling at cache poll interval (Leo Luan)
+ * Fix modtime changing when reading file into cache (Nick Craig-Wood)
+ * Avoid unnecessary subdir in cache path (albertony)
+ * Fix that umask option cannot be set as environment variable (albertony)
+ * Do not print notice about missing poll-interval support when set to 0 (albertony)
+* Local
+ * Always use readlink to read symlink size for better compatibility (Nick Craig-Wood)
+ * Add `--local-unicode-normalization` (and remove `--local-no-unicode-normalization`) (Nick Craig-Wood)
+ * Skip entries removed concurrently with List() (Ivan Andreev)
+* Crypt
+ * Support timestamped filenames from `--b2-versions` (Dominik Mydlil)
+* B2
+ * Don't include the bucket name in public link file prefixes (Jeffrey Tolar)
+ * Fix versions and .files with no extension (Nick Craig-Wood)
+ * Factor version handling into lib/version (Dominik Mydlil)
+* Box
+ * Use upload preflight check to avoid listings in file uploads (Nick Craig-Wood)
+ * Return errors instead of calling log.Fatal with them (Nick Craig-Wood)
+* Drive
+ * Switch to the Drives API for looking up shared drives (Nick Craig-Wood)
+ * Fix some google docs being treated as files (Nick Craig-Wood)
+* Dropbox
+ * Add `--dropbox-batch-mode` flag to speed up uploading (Nick Craig-Wood)
+ * Read the [batch mode](/dropbox/#batch-mode) docs for more info
+ * Set visibility in link sharing when `--expire` is set (Nick Craig-Wood)
+ * Simplify chunked uploads (Alexey Ivanov)
+ * Improve "own App IP" instructions (Ivan Andreev)
+* Fichier
+ * Check if more than one upload link is returned (Nick Craig-Wood)
+ * Support downloading password protected files and folders (Florian Penzkofer)
+ * Make error messages report text from the API (Nick Craig-Wood)
+ * Fix move of files in the same directory (Nick Craig-Wood)
+ * Check that we actually got a download token and retry if we didn't (buengese)
+* Filefabric
+ * Fix listing after change of from field from "int" to int. (Nick Craig-Wood)
+* FTP
+ * Make upload error 250 indicate success (Nick Craig-Wood)
+* GCS
+ * Make compatible with gsutil's mtime metadata (database64128)
+ * Clean up time format constants (database64128)
+* Google Photos
+ * Fix read only scope not being used properly (Nick Craig-Wood)
+* HTTP
+ * Replace httplib with lib/http (Nolan Woods)
+ * Clean up Bind to better use middleware (Nolan Woods)
+* Jottacloud
+ * Fix legacy auth with state based config system (buengese)
+ * Fix invalid url in output from link command (albertony)
+ * Add no versions option (buengese)
+* Onedrive
+    * Add `list_chunk` option (Nick Gaya)
+ * Also report root error if unable to cancel multipart upload (Cnly)
+    * Fix "failed to configure: empty token found" error (Nick Craig-Wood)
+ * Make link return direct download link (Xuanchen Wu)
+* S3
+ * Add `--s3-no-head-object` (Tatsuya Noyori)
+ * Remove WebIdentityRoleProvider to fix crash on auth (Nick Craig-Wood)
+ * Don't check to see if remote is object if it ends with / (Nick Craig-Wood)
+ * Add SeaweedFS (Chris Lu)
+ * Update Alibaba OSS endpoints (Chuan Zh)
+* SFTP
+ * Fix performance regression by re-enabling concurrent writes (Nick Craig-Wood)
+ * Expand tilde and environment variables in configured `known_hosts_file` (albertony)
+* Tardigrade
+ * Upgrade to uplink v1.4.6 (Caleb Case)
+ * Use negative offset (Caleb Case)
+ * Add warning about `too many open files` (acsfer)
+* WebDAV
+ * Fix sharepoint auth over http (Nick Craig-Wood)
+ * Add headers option (Antoon Prins)
+
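Many hunks in this release change size defaults such as `4M` to `4Mi`, reflecting the changelog entry "Use binary prefixes for size and rate units". As an illustration of the rule (a minimal Python sketch, not rclone's actual Go implementation), bare and `i` suffixes are binary, 1024-based, while `B` suffixes are decimal, 1000-based:

```python
# Sketch of SizeSuffix-style parsing after the binary-prefix change:
# "Mi"/"M" are both 1024-based; "MB" is 1000-based. Illustrative only.
import re

_BINARY = {"": 1, "K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
_DECIMAL = {"KB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4}

def parse_size(s: str) -> int:
    m = re.fullmatch(r"(\d+(?:\.\d+)?)([A-Za-z]*)", s.strip())
    if not m:
        raise ValueError(f"invalid size: {s!r}")
    num, suffix = float(m.group(1)), m.group(2)
    if suffix.endswith("i"):       # "Ki", "Mi", "Gi": binary
        mult = _BINARY[suffix[:-1]]
    elif suffix in _DECIMAL:       # "KB", "MB", "GB": decimal
        mult = _DECIMAL[suffix]
    else:                          # bare "K", "M", "G": also binary
        mult = _BINARY[suffix]
    return int(num * mult)

assert parse_size("9Gi") == 9 * 2**30
assert parse_size("200M") == 200 * 1024**2
```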
## v1.55.1 - 2021-04-26
[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)
diff --git a/docs/content/chunker.md b/docs/content/chunker.md
index 0dfe9b482..913bd4942 100644
--- a/docs/content/chunker.md
+++ b/docs/content/chunker.md
@@ -3,8 +3,7 @@ title: "Chunker"
description: "Split-chunking overlay remote"
---
-{{< icon "fa fa-cut" >}}Chunker (BETA)
-----------------------------------------
+# {{< icon "fa fa-cut" >}}Chunker (BETA)
The `chunker` overlay transparently splits large files into smaller chunks
during upload to wrapped remote and transparently assembles them back
@@ -332,7 +331,7 @@ Files larger than chunk size will be split in chunks.
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 2G
+- Default: 2Gi
#### --chunker-hash-type
diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md
index b84d3fe6e..6b37bc261 100644
--- a/docs/content/commands/rclone_config_update.md
+++ b/docs/content/commands/rclone_config_update.md
@@ -24,7 +24,7 @@ you would do:
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:
- rclone config update myremote swift env_auth=true config_refresh_token=false
+ rclone config update myremote env_auth=true config_refresh_token=false
Note that if the config process would normally ask a question the
default is taken (unless `--non-interactive` is used). Each time
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 978def3c8..ea758f711 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -18,7 +18,7 @@ FUSE.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
-On Linux and OSX, you can either run mount in foreground mode or background (daemon) mode.
+On Linux and macOS, you can either run mount in foreground mode or background (daemon) mode.
Mount runs in foreground mode by default, use the `--daemon` flag to specify background mode.
You can only run mount in foreground mode on Windows.
@@ -608,7 +608,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
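Both values in the `--umask` hunks above are valid: the help text prints the umask as a decimal integer, so 18 is octal 022 and 2 is octal 002, and the regenerated docs simply picked up the umask of the process that generated them. A small Python sketch (illustrative only, not rclone's implementation) of what these decimal defaults mean:

```python
# The --umask default is printed as a decimal integer:
# 18 decimal == 0o22 (umask 022), 2 decimal == 0o2 (umask 002).
def apply_umask(mode: int, umask: int) -> int:
    """Clear the umask bits from a permission mode, as the kernel does."""
    return mode & ~umask

assert oct(18) == "0o22"                       # default 18 is umask 022
assert oct(2) == "0o2"                         # default 2  is umask 002
assert oct(apply_umask(0o777, 18)) == "0o755"  # 022 masks group/other write
assert oct(apply_umask(0o777, 2)) == "0o775"   # 002 masks only other write
```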
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
index 6cac97dac..c0310974d 100644
--- a/docs/content/commands/rclone_ncdu.md
+++ b/docs/content/commands/rclone_ncdu.md
@@ -33,6 +33,7 @@ Here are the keys - press '?' to toggle the help on and off
a toggle average size in directory
n,s,C,A sort by name,size,count,average size
d delete file/directory
+ y copy current path to clipboard
Y display current path
^L refresh screen
? to toggle help on and off
diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md
index 138ee0250..3073294fb 100644
--- a/docs/content/commands/rclone_serve_dlna.md
+++ b/docs/content/commands/rclone_serve_dlna.md
@@ -319,7 +319,7 @@ rclone serve dlna remote:path [flags]
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md
index 70118f587..ebd17fedf 100644
--- a/docs/content/commands/rclone_serve_docker.md
+++ b/docs/content/commands/rclone_serve_docker.md
@@ -354,7 +354,7 @@ rclone serve docker [flags]
      --socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md
index a011aea21..8e9867d0f 100644
--- a/docs/content/commands/rclone_serve_ftp.md
+++ b/docs/content/commands/rclone_serve_ftp.md
@@ -403,7 +403,7 @@ rclone serve ftp remote:path [flags]
--public-ip string Public IP address to advertise for passive connections.
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication. (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index ba4ac010e..982c34e5b 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -398,7 +398,7 @@ rclone serve http remote:path [flags]
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md
index c49b37d7f..714705763 100644
--- a/docs/content/commands/rclone_serve_sftp.md
+++ b/docs/content/commands/rclone_serve_sftp.md
@@ -419,7 +419,7 @@ rclone serve sftp remote:path [flags]
--read-only Mount read-only.
      --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index 3c9edc9e9..94853b4f5 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -491,7 +491,7 @@ rclone serve webdav remote:path [flags]
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
+ --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
diff --git a/docs/content/commands/rclone_test.md b/docs/content/commands/rclone_test.md
index 4948668e0..2217f6879 100644
--- a/docs/content/commands/rclone_test.md
+++ b/docs/content/commands/rclone_test.md
@@ -37,6 +37,6 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone test changenotify](/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in.
* [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
* [rclone test info](/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
-* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in
+* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory
* [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.
diff --git a/docs/content/commands/rclone_test_makefiles.md b/docs/content/commands/rclone_test_makefiles.md
index 40a2a3dc2..f0816d14e 100644
--- a/docs/content/commands/rclone_test_makefiles.md
+++ b/docs/content/commands/rclone_test_makefiles.md
@@ -1,13 +1,13 @@
---
title: "rclone test makefiles"
-description: "Make a random file hierarchy in "
+description: "Make a random file hierarchy in a directory"
slug: rclone_test_makefiles
url: /commands/rclone_test_makefiles/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/makefiles/ and as part of making a release run "make commanddocs"
---
# rclone test makefiles
-Make a random file hierarchy in
+Make a random file hierarchy in a directory
```
rclone test makefiles [flags]
diff --git a/docs/content/compress.md b/docs/content/compress.md
index e9576700c..f317c61c5 100644
--- a/docs/content/compress.md
+++ b/docs/content/compress.md
@@ -3,8 +3,7 @@ title: "Compress"
description: "Compression Remote"
---
-{{< icon "fas fa-compress" >}}Compress (Experimental)
------------------------------------------
+# {{< icon "fas fa-compress" >}}Compress (Experimental)
### Warning
This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is
@@ -142,6 +141,6 @@ Some remotes don't allow the upload of files with unknown size.
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
-- Default: 20M
+- Default: 20Mi
{{< rem autogenerated options stop >}}
diff --git a/docs/content/crypt.md b/docs/content/crypt.md
index 9192139e1..4afb08a7d 100644
--- a/docs/content/crypt.md
+++ b/docs/content/crypt.md
@@ -3,8 +3,7 @@ title: "Crypt"
description: "Encryption overlay remote"
---
-{{< icon "fa fa-lock" >}}Crypt
-----------------------------------------
+# {{< icon "fa fa-lock" >}}Crypt
Rclone `crypt` remotes encrypt and decrypt other remotes.
diff --git a/docs/content/drive.md b/docs/content/drive.md
index 4511dd3a4..f9cf5cec9 100644
--- a/docs/content/drive.md
+++ b/docs/content/drive.md
@@ -3,8 +3,7 @@ title: "Google drive"
description: "Rclone docs for Google drive"
---
-{{< icon "fab fa-google" >}} Google Drive
------------------------------------------
+# {{< icon "fab fa-google" >}} Google Drive
Paths are specified as `drive:path`
@@ -868,7 +867,7 @@ Cutoff for switching to chunked upload
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 8M
+- Default: 8Mi
#### --drive-chunk-size
@@ -882,7 +881,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 8M
+- Default: 8Mi
#### --drive-acknowledge-abuse
diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md
index c6e84ed3b..a4ccd7e92 100644
--- a/docs/content/dropbox.md
+++ b/docs/content/dropbox.md
@@ -3,8 +3,7 @@ title: "Dropbox"
description: "Rclone docs for Dropbox"
---
-{{< icon "fab fa-dropbox" >}} Dropbox
----------------------------------
+# {{< icon "fab fa-dropbox" >}} Dropbox
Paths are specified as `remote:path`
@@ -238,7 +237,7 @@ Leave blank to use the provider defaults.
#### --dropbox-chunk-size
-Upload chunk size. (< 150M).
+Upload chunk size. (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
@@ -250,7 +249,7 @@ memory. It can be set smaller if you are tight on memory.
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 48M
+- Default: 48Mi
#### --dropbox-impersonate
@@ -309,6 +308,75 @@ shared folder.
- Type: bool
- Default: false
+#### --dropbox-batch-mode
+
+Upload file batching sync|async|off.
+
+This sets the batch mode used by rclone.
+
+For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)
+
+This has 3 possible values
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don't check completion
+
+Rclone will close any outstanding batches when it exits which may cause
+a delay on quit.
+
+
+- Config: batch_mode
+- Env Var: RCLONE_DROPBOX_BATCH_MODE
+- Type: string
+- Default: "sync"
+
+#### --dropbox-batch-size
+
+Max number of files in upload batch.
+
+This sets the batch size of files to upload. It has to be less than 1000.
+
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+
+- batch_mode: async - default batch_size is 100
+- batch_mode: sync - default batch_size is the same as --transfers
+- batch_mode: off - not in use
+
+Rclone will close any outstanding batches when it exits which may cause
+a delay on quit.
+
+Setting this is a great idea if you are uploading lots of small files
+as it will make uploading them much quicker. You can use --transfers 32
+to maximise throughput.
+
+
+- Config: batch_size
+- Env Var: RCLONE_DROPBOX_BATCH_SIZE
+- Type: int
+- Default: 0
+
+#### --dropbox-batch-timeout
+
+Max time to allow an idle upload batch before uploading.
+
+If an upload batch is idle for more than this long then it will be
+uploaded.
+
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+
+- batch_mode: async - default batch_timeout is 500ms
+- batch_mode: sync - default batch_timeout is 10s
+- batch_mode: off - not in use
+
+
+- Config: batch_timeout
+- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
+- Type: Duration
+- Default: 0s
+
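The way the `batch_size` and `batch_timeout` defaults described above resolve can be sketched as follows. This is an illustration only: `resolve_batch_defaults` and `transfers` are hypothetical names standing in for rclone's internal logic and the `--transfers` setting, not part of its API:

```python
# Hypothetical sketch of the documented default resolution for Dropbox
# batch uploads when batch_size and batch_timeout are left at 0.
def resolve_batch_defaults(batch_mode: str, transfers: int,
                           batch_size: int = 0, batch_timeout: float = 0.0):
    if batch_mode == "off":
        return None  # batching not in use
    if batch_size == 0:
        # async defaults to 100; sync defaults to the --transfers value
        batch_size = 100 if batch_mode == "async" else transfers
    if batch_timeout == 0.0:
        # async defaults to 500ms; sync defaults to 10s
        batch_timeout = 0.5 if batch_mode == "async" else 10.0
    return batch_size, batch_timeout

assert resolve_batch_defaults("sync", transfers=32) == (32, 10.0)
assert resolve_batch_defaults("async", transfers=4) == (100, 0.5)
```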
#### --dropbox-encoding
This sets the encoding for the backend.
diff --git a/docs/content/fichier.md b/docs/content/fichier.md
index d2a2dc8f0..e0ce555d8 100644
--- a/docs/content/fichier.md
+++ b/docs/content/fichier.md
@@ -3,8 +3,7 @@ title: "1Fichier"
description: "Rclone docs for 1Fichier"
---
-{{< icon "fa fa-archive" >}} 1Fichier
------------------------------------------
+# {{< icon "fa fa-archive" >}} 1Fichier
This is a backend for the [1fichier](https://1fichier.com) cloud
storage service. Note that a Premium subscription is required to use
@@ -139,6 +138,28 @@ If you want to download a shared folder, add this parameter
- Type: string
- Default: ""
+#### --fichier-file-password
+
+If you want to download a shared file that is password protected, add this parameter
+
+**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
+
+- Config: file_password
+- Env Var: RCLONE_FICHIER_FILE_PASSWORD
+- Type: string
+- Default: ""
+
+#### --fichier-folder-password
+
+If you want to list the files in a shared folder that is password protected, add this parameter
+
+**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
+
+- Config: folder_password
+- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
+- Type: string
+- Default: ""
+
#### --fichier-encoding
This sets the encoding for the backend.
diff --git a/docs/content/filefabric.md b/docs/content/filefabric.md
index 50c4cfafd..b7b9c6d7d 100644
--- a/docs/content/filefabric.md
+++ b/docs/content/filefabric.md
@@ -3,8 +3,7 @@ title: "Enterprise File Fabric"
description: "Rclone docs for the Enterprise File Fabric backend"
---
-{{< icon "fa fa-cloud" >}} Enterprise File Fabric
------------------------------------------
+# {{< icon "fa fa-cloud" >}} Enterprise File Fabric
This backend supports [Storage Made Easy's Enterprise File
Fabric™](https://storagemadeeasy.com/about/) which provides a software
diff --git a/docs/content/flags.md b/docs/content/flags.md
index d474b20c7..70648eb08 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -154,7 +154,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0-beta.5531.41f561bf2.pr-commanddocs")
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -311,6 +311,8 @@ and may be set in the config file.
--dropbox-token-url string Token server url.
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
+ --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
@@ -375,6 +377,7 @@ and may be set in the config file.
--jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
+ --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
--jottacloud-trashed-only Only show files that are in the trash.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
@@ -587,7 +590,7 @@ and may be set in the config file.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
- --zoho-region string Zoho region to connect to. You'll have to use the region you organization is registered in.
+ --zoho-region string Zoho region to connect to.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
```
diff --git a/docs/content/ftp.md b/docs/content/ftp.md
index 07e9b3f55..e883866ef 100644
--- a/docs/content/ftp.md
+++ b/docs/content/ftp.md
@@ -3,8 +3,7 @@ title: "FTP"
description: "Rclone docs for FTP backend"
---
-{{< icon "fa fa-file" >}} FTP
-------------------------------
+# {{< icon "fa fa-file" >}} FTP
FTP is the File Transfer Protocol. Rclone FTP support is provided using the
[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md
index 0722be0d8..390a04bc5 100644
--- a/docs/content/googlecloudstorage.md
+++ b/docs/content/googlecloudstorage.md
@@ -3,8 +3,7 @@ title: "Google Cloud Storage"
description: "Rclone docs for Google Cloud Storage"
---
-{{< icon "fab fa-google" >}} Google Cloud Storage
--------------------------------------------------
+# {{< icon "fab fa-google" >}} Google Cloud Storage
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md
index 5a6ceb9a8..14534cf2c 100644
--- a/docs/content/googlephotos.md
+++ b/docs/content/googlephotos.md
@@ -3,8 +3,7 @@ title: "Google Photos"
description: "Rclone docs for Google Photos"
---
-{{< icon "fa fa-images" >}} Google Photos
--------------------------------------------------
+# {{< icon "fa fa-images" >}} Google Photos
The rclone backend for [Google Photos](https://www.google.com/photos/about/) is
a specialized backend for transferring photos and videos to and from
diff --git a/docs/content/hdfs.md b/docs/content/hdfs.md
index b6b1c8945..59ebf3d40 100644
--- a/docs/content/hdfs.md
+++ b/docs/content/hdfs.md
@@ -3,8 +3,7 @@ title: "HDFS Remote"
description: "Remote for Hadoop Distributed Filesystem"
---
-{{< icon "fa fa-globe" >}} HDFS
--------------------------------------------------
+# {{< icon "fa fa-globe" >}} HDFS
[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.
@@ -190,7 +189,7 @@ Here are the advanced options specific to hdfs (Hadoop distributed file system).
Kerberos service principal name for the namenode
Enables KERBEROS authentication. Specifies the Service Principal Name
-(/) for the namenode.
+(SERVICE/FQDN) for the namenode.
- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
diff --git a/docs/content/http.md b/docs/content/http.md
index 79aedd7b5..fbde49dea 100644
--- a/docs/content/http.md
+++ b/docs/content/http.md
@@ -3,8 +3,7 @@ title: "HTTP Remote"
description: "Read only remote for HTTP servers"
---
-{{< icon "fa fa-globe" >}} HTTP
--------------------------------------------------
+# {{< icon "fa fa-globe" >}} HTTP
The HTTP remote is a read only remote for reading files of a
webserver. The webserver should provide file listings which rclone
diff --git a/docs/content/hubic.md b/docs/content/hubic.md
index e50ab9103..f6113a8da 100644
--- a/docs/content/hubic.md
+++ b/docs/content/hubic.md
@@ -3,8 +3,7 @@ title: "Hubic"
description: "Rclone docs for Hubic"
---
-{{< icon "fa fa-space-shuttle" >}} Hubic
------------------------------------------
+# {{< icon "fa fa-space-shuttle" >}} Hubic
Paths are specified as `remote:path`
@@ -173,7 +172,7 @@ default for this is 5 GiB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_HUBIC_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5G
+- Default: 5Gi
#### --hubic-no-chunk
diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md
index 33f687b14..f8a53bdda 100644
--- a/docs/content/jottacloud.md
+++ b/docs/content/jottacloud.md
@@ -3,8 +3,7 @@ title: "Jottacloud"
description: "Rclone docs for Jottacloud"
---
-{{< icon "fa fa-cloud" >}} Jottacloud
------------------------------------------
+# {{< icon "fa fa-cloud" >}} Jottacloud
Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway.
diff --git a/docs/content/koofr.md b/docs/content/koofr.md
index a5f0881fd..c7f72fbd3 100644
--- a/docs/content/koofr.md
+++ b/docs/content/koofr.md
@@ -3,8 +3,7 @@ title: "Koofr"
description: "Rclone docs for Koofr"
---
-{{< icon "fa fa-suitcase" >}} Koofr
------------------------------------------
+# {{< icon "fa fa-suitcase" >}} Koofr
Paths are specified as `remote:path`
diff --git a/docs/content/local.md b/docs/content/local.md
index b8c0d02d2..2800e1722 100644
--- a/docs/content/local.md
+++ b/docs/content/local.md
@@ -3,8 +3,7 @@ title: "Local Filesystem"
description: "Rclone docs for the local filesystem"
---
-{{< icon "fas fa-hdd" >}} Local Filesystem
--------------------------------------------
+# {{< icon "fas fa-hdd" >}} Local Filesystem
Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so
@@ -367,32 +366,41 @@ points, as you explicitly acknowledge that they should be skipped.
#### --local-zero-size-links
-Assume the Stat size of links is zero (and read them instead)
+Assume the Stat size of links is zero (and read them instead) (Deprecated)
-On some virtual filesystems (such ash LucidLink), reading a link size via a Stat call always returns 0.
-However, on unix it reads as the length of the text in the link. This may cause errors like this when
-syncing:
+Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:
- Failed to copy: corrupted on transfer: sizes differ 0 vs 13
+- Windows
+- On some virtual filesystems (such as LucidLink)
+- Android
+
+So rclone now always reads the link.
-Setting this flag causes rclone to read the link and use that as the size of the link
-instead of 0 which in most cases fixes the problem.
- Config: zero_size_links
- Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
- Type: bool
- Default: false
-#### --local-no-unicode-normalization
+#### --local-unicode-normalization
-Don't apply unicode normalization to paths and filenames (Deprecated)
+Apply unicode NFC normalization to paths and filenames
-This flag is deprecated now. Rclone no longer normalizes unicode file
-names, but it compares them with unicode normalization in the sync
-routine instead.
+This flag can be used to normalize file names read from the local
+filesystem into unicode NFC form.
-- Config: no_unicode_normalization
-- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+Rclone does not normally touch the encoding of file names it reads from
+the file system.
+
+This can be useful when using macOS as it normally provides decomposed (NFD)
+unicode which in some languages (e.g. Korean) doesn't display properly on
+some OSes.
+
+Note that rclone compares filenames with unicode normalization in the sync
+routine so this flag shouldn't normally be used.
+
+- Config: unicode_normalization
+- Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION
- Type: bool
- Default: false
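The NFC/NFD difference described above can be illustrated with a short, generic Python snippet (not rclone-specific):

```python
import unicodedata

# "café" in composed (NFC) and decomposed (NFD) forms: the same visible
# name, but different code point sequences, which is why comparisons can
# mismatch between macOS (NFD) and other systems (usually NFC).
name = "caf\u00e9"                        # é as a single code point (NFC)
nfd = unicodedata.normalize("NFD", name)  # é becomes e + combining accent
nfc = unicodedata.normalize("NFC", nfd)   # recomposed back to one code point
print(len(name), len(nfd), len(nfc))      # the NFD form is one code point longer
```

Rclone's sync routine compares names normalization-insensitively, which is why the flag is rarely needed.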
diff --git a/docs/content/mailru.md b/docs/content/mailru.md
index 0cea86092..99f2100c7 100644
--- a/docs/content/mailru.md
+++ b/docs/content/mailru.md
@@ -3,8 +3,7 @@ title: "Mailru"
description: "Mail.ru Cloud"
---
-{{< icon "fas fa-at" >}} Mail.ru Cloud
-----------------------------------------
+# {{< icon "fas fa-at" >}} Mail.ru Cloud
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
@@ -241,7 +240,7 @@ This option allows you to disable speedup (put by hash) for large files
- Config: speedup_max_disk
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
- Type: SizeSuffix
-- Default: 3G
+- Default: 3Gi
- Examples:
- "0"
- Completely disable speedup (put by hash).
@@ -257,7 +256,7 @@ Files larger than the size given below will always be hashed on disk.
- Config: speedup_max_memory
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
- Type: SizeSuffix
-- Default: 32M
+- Default: 32Mi
- Examples:
- "0"
- Preliminary hashing will always be done in a temporary disk location.
diff --git a/docs/content/mega.md b/docs/content/mega.md
index 1c7308ceb..32633b49b 100644
--- a/docs/content/mega.md
+++ b/docs/content/mega.md
@@ -3,8 +3,7 @@ title: "Mega"
description: "Rclone docs for Mega"
---
-{{< icon "fa fa-archive" >}} Mega
------------------------------------------
+# {{< icon "fa fa-archive" >}} Mega
[Mega](https://mega.nz/) is a cloud storage and file hosting service
known for its security feature where all files are encrypted locally
diff --git a/docs/content/memory.md b/docs/content/memory.md
index 4d8cb6611..de18dfd0c 100644
--- a/docs/content/memory.md
+++ b/docs/content/memory.md
@@ -3,8 +3,7 @@ title: "Memory"
description: "Rclone docs for Memory backend"
---
-{{< icon "fas fa-memory" >}} Memory
------------------------------------------
+# {{< icon "fas fa-memory" >}} Memory
The memory backend is an in RAM backend. It does not persist its
data - use the local backend for that.
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index cfb3de704..86e5f7871 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -3,8 +3,7 @@ title: "Microsoft OneDrive"
description: "Rclone docs for Microsoft OneDrive"
---
-{{< icon "fab fa-windows" >}} Microsoft OneDrive
------------------------------------------
+# {{< icon "fab fa-windows" >}} Microsoft OneDrive
Paths are specified as `remote:path`
@@ -277,7 +276,7 @@ Note that the chunks will be buffered into memory.
- Config: chunk_size
- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
#### --onedrive-drive-id
diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md
index 9da9a6fae..53c39ccde 100644
--- a/docs/content/opendrive.md
+++ b/docs/content/opendrive.md
@@ -3,8 +3,7 @@ title: "OpenDrive"
description: "Rclone docs for OpenDrive"
---
-{{< icon "fa fa-file" >}} OpenDrive
-------------------------------------
+# {{< icon "fa fa-file" >}} OpenDrive
Paths are specified as `remote:path`
@@ -148,7 +147,7 @@ increase memory use.
- Config: chunk_size
- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 10M
+- Default: 10Mi
{{< rem autogenerated options stop >}}
diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md
index adac14e39..3720daeab 100644
--- a/docs/content/pcloud.md
+++ b/docs/content/pcloud.md
@@ -3,8 +3,7 @@ title: "pCloud"
description: "Rclone docs for pCloud"
---
-{{< icon "fa fa-cloud" >}} pCloud
------------------------------------------
+# {{< icon "fa fa-cloud" >}} pCloud
Paths are specified as `remote:path`
diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md
index e0db7c9d2..e2383948b 100644
--- a/docs/content/premiumizeme.md
+++ b/docs/content/premiumizeme.md
@@ -3,8 +3,7 @@ title: "premiumize.me"
description: "Rclone docs for premiumize.me"
---
-{{< icon "fa fa-user" >}} premiumize.me
------------------------------------------
+# {{< icon "fa fa-user" >}} premiumize.me
Paths are specified as `remote:path`
diff --git a/docs/content/putio.md b/docs/content/putio.md
index cad2f7f71..2494e71e1 100644
--- a/docs/content/putio.md
+++ b/docs/content/putio.md
@@ -3,8 +3,7 @@ title: "put.io"
description: "Rclone docs for put.io"
---
-{{< icon "fas fa-parking" >}} put.io
----------------------------------
+# {{< icon "fas fa-parking" >}} put.io
Paths are specified as `remote:path`
diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md
index 4771125a4..06216352e 100644
--- a/docs/content/qingstor.md
+++ b/docs/content/qingstor.md
@@ -3,8 +3,7 @@ title: "QingStor"
description: "Rclone docs for QingStor Object Storage"
---
-{{< icon "fas fa-hdd" >}} QingStor
----------------------------------------
+# {{< icon "fas fa-hdd" >}} QingStor
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
@@ -232,7 +231,7 @@ The minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
#### --qingstor-chunk-size
@@ -250,7 +249,7 @@ enough memory, then increasing this will speed up the transfers.
- Config: chunk_size
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 4M
+- Default: 4Mi
#### --qingstor-upload-concurrency
diff --git a/docs/content/rc.md b/docs/content/rc.md
index f5082f375..959f1b98c 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -525,8 +525,14 @@ This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
- type - type of the new remote
-- obscure - optional bool - forces obscuring of passwords
-- noObscure - optional bool - forces passwords not to be obscured
+- opt - a dictionary of options to control the configuration
+ - obscure - declare passwords are plain and need obscuring
+ - noObscure - declare passwords are already obscured and don't need obscuring
+ - nonInteractive - don't interact with a user, return questions
+ - continue - continue the config process with an answer
+ - all - ask all the config questions not just the post config ones
+ - state - state to restart with - used with continue
+ - result - result to restart with - used with continue
See the [config create command](/commands/rclone_config_create/) command for more information on the above.
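As a rough sketch of how these parameters fit together (the remote name, type and parameter values below are invented; only the key names come from the list above), the JSON body for a non-interactive `config/create` call might look like:

```python
import json

# Illustrative request body for the rc "config/create" call.
# "mydrive", "drive" and the client_id value are made up for this example;
# the top-level keys and the "opt" sub-keys follow the parameter list above.
body = {
    "name": "mydrive",
    "type": "drive",
    "parameters": {"client_id": "XXX"},
    "opt": {
        "nonInteractive": True,  # don't prompt the user, return questions instead
        "obscure": True,         # passwords are plain and need obscuring
    },
}
print(json.dumps(body, indent=2))
```

Such a body could be POSTed to a running `rclone rcd` instance with any HTTP client.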
@@ -600,8 +606,14 @@ This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
-- obscure - optional bool - forces obscuring of passwords
-- noObscure - optional bool - forces passwords not to be obscured
+- opt - a dictionary of options to control the configuration
+ - obscure - declare passwords are plain and need obscuring
+ - noObscure - declare passwords are already obscured and don't need obscuring
+ - nonInteractive - don't interact with a user, return questions
+ - continue - continue the config process with an answer
+ - all - ask all the config questions not just the post config ones
+ - state - state to restart with - used with continue
+ - result - result to restart with - used with continue
See the [config update command](/commands/rclone_config_update/) command for more information on the above.
@@ -775,7 +787,7 @@ Returns the following values:
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
- "speed": average speed in bytes/sec since start of the group,
+ "speed": average speed in bytes per second since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
"totalTransfers": total number of transfers in the group,
diff --git a/docs/content/s3.md b/docs/content/s3.md
index c30049663..904aa6462 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -3,8 +3,7 @@ title: "Amazon S3"
description: "Rclone docs for Amazon S3"
---
-{{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers
---------------------------------------------------------
+# {{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers
The S3 backend can be used with a number of different providers:
@@ -894,6 +893,10 @@ Endpoint for OSS API.
- Type: string
- Default: ""
- Examples:
+ - "oss-accelerate.aliyuncs.com"
+ - Global Accelerate
+ - "oss-accelerate-overseas.aliyuncs.com"
+ - Global Accelerate (outside mainland China)
- "oss-cn-hangzhou.aliyuncs.com"
- East China 1 (Hangzhou)
- "oss-cn-shanghai.aliyuncs.com"
@@ -905,9 +908,17 @@ Endpoint for OSS API.
- "oss-cn-zhangjiakou.aliyuncs.com"
- North China 3 (Zhangjiakou)
- "oss-cn-huhehaote.aliyuncs.com"
- - North China 5 (Huhehaote)
+ - North China 5 (Hohhot)
+ - "oss-cn-wulanchabu.aliyuncs.com"
+ - North China 6 (Ulanqab)
- "oss-cn-shenzhen.aliyuncs.com"
- South China 1 (Shenzhen)
+ - "oss-cn-heyuan.aliyuncs.com"
+ - South China 2 (Heyuan)
+ - "oss-cn-guangzhou.aliyuncs.com"
+ - South China 3 (Guangzhou)
+ - "oss-cn-chengdu.aliyuncs.com"
+ - West China 1 (Chengdu)
- "oss-cn-hongkong.aliyuncs.com"
- Hong Kong (Hong Kong)
- "oss-us-west-1.aliyuncs.com"
@@ -1029,6 +1040,8 @@ Required when using an S3 clone.
- Digital Ocean Spaces Amsterdam 3
- "sgp1.digitaloceanspaces.com"
- Digital Ocean Spaces Singapore 1
+ - "localhost:8333"
+ - SeaweedFS S3 localhost
- "s3.wasabisys.com"
- Wasabi US East endpoint
- "s3.us-west-1.wasabisys.com"
@@ -1334,7 +1347,7 @@ The storage class to use when storing new objects in S3.
### Advanced Options
-Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS).
+Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
#### --s3-bucket-acl
@@ -1420,7 +1433,7 @@ The minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 200M
+- Default: 200Mi
#### --s3-chunk-size
@@ -1449,7 +1462,7 @@ larger files then you will need to increase chunk_size.
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5M
+- Default: 5Mi
#### --s3-max-upload-parts
@@ -1482,7 +1495,7 @@ The minimum is 0 and the maximum is 5 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
- Type: SizeSuffix
-- Default: 4.656G
+- Default: 4.656Gi
#### --s3-disable-checksum
@@ -1684,6 +1697,15 @@ very small even with this flag.
- Type: bool
- Default: false
+#### --s3-no-head-object
+
+If set, don't HEAD objects
+
+- Config: no_head_object
+- Env Var: RCLONE_S3_NO_HEAD_OBJECT
+- Type: bool
+- Default: false
+
#### --s3-encoding
This sets the encoding for the backend.
@@ -1884,7 +1906,7 @@ Then use it as normal with the name of the public bucket, e.g.
You will be able to list and copy data but not upload it.
-### Ceph ###
+## Ceph
[Ceph](https://ceph.com/) is an open source unified, distributed
storage system designed for excellent performance, reliability and
@@ -1940,7 +1962,7 @@ removed).
Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
-### Dreamhost ###
+## Dreamhost
Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
an object storage system based on CEPH.
@@ -1964,7 +1986,7 @@ server_side_encryption =
storage_class =
```
-### DigitalOcean Spaces ###
+## DigitalOcean Spaces
[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
@@ -2010,7 +2032,7 @@ rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```
-### IBM COS (S3) ###
+## IBM COS (S3)
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
@@ -2182,7 +2204,7 @@ acl> 1
rclone delete IBM-COS-XREGION:newbucket/file.txt
```
-### Minio ###
+## Minio
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
@@ -2249,7 +2271,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files minio:bucket
```
-### Scaleway {#scaleway}
+## Scaleway
[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos.
Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool.
@@ -2271,7 +2293,7 @@ server_side_encryption =
storage_class =
```
-### SeaweedFS ###
+## SeaweedFS
[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for
blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store.
@@ -2321,7 +2343,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files seaweedfs_s3:foo
```
-### Wasabi ###
+## Wasabi
[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
@@ -2434,7 +2456,7 @@ server_side_encryption =
storage_class =
```
-### Alibaba OSS {#alibaba-oss}
+## Alibaba OSS {#alibaba-oss}
Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
configuration. First run:
@@ -2544,7 +2566,7 @@ d) Delete this remote
y/e/d> y
```
-### Tencent COS {#tencent-cos}
+## Tencent COS {#tencent-cos}
[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
@@ -2676,13 +2698,13 @@ Name Type
cos s3
```
-### Netease NOS
+## Netease NOS
For Netease NOS configure as per the configurator `rclone config`
setting the provider `Netease`. This will automatically set
`force_path_style = false` which is necessary for it to run properly.
-### Limitations
+## Limitations
`rclone about` is not supported by the S3 backend. Backends without
this capability cannot determine free space for an rclone mount or
diff --git a/docs/content/seafile.md b/docs/content/seafile.md
index 9fc27b8d9..bc42b6773 100644
--- a/docs/content/seafile.md
+++ b/docs/content/seafile.md
@@ -3,8 +3,7 @@ title: "Seafile"
description: "Seafile"
---
-{{< icon "fa fa-server" >}}Seafile
-----------------------------------------
+# {{< icon "fa fa-server" >}}Seafile
This is a backend for the [Seafile](https://www.seafile.com/) storage service:
- It works with both the free community edition or the professional edition.
diff --git a/docs/content/sftp.md b/docs/content/sftp.md
index 3671a340a..2daf9142e 100644
--- a/docs/content/sftp.md
+++ b/docs/content/sftp.md
@@ -3,8 +3,7 @@ title: "SFTP"
description: "SFTP"
---
-{{< icon "fa fa-server" >}} SFTP
-----------------------------------------
+# {{< icon "fa fa-server" >}} SFTP
SFTP is the [Secure (or SSH) File Transfer
Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
@@ -531,6 +530,21 @@ If concurrent reads are disabled, the use_fstat option is ignored.
- Type: bool
- Default: false
+#### --sftp-disable-concurrent-writes
+
+If set, don't use concurrent writes
+
+Normally rclone uses concurrent writes to upload files. This improves
+the performance greatly, especially for distant servers.
+
+This option disables concurrent writes should that be necessary.
+
+
+- Config: disable_concurrent_writes
+- Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_WRITES
+- Type: bool
+- Default: false
+
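As an illustrative command line (the remote name `mysftp` and paths are invented), the flag or its environment variable could be used like this:

```
# Disable concurrent writes for one transfer (illustrative remote and paths)
rclone copy /local/files mysftp:files --sftp-disable-concurrent-writes

# The same via the environment variable
RCLONE_SFTP_DISABLE_CONCURRENT_WRITES=true rclone copy /local/files mysftp:files
```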
#### --sftp-idle-timeout
Max time before closing idle connections
diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md
index baffa1944..adf355007 100644
--- a/docs/content/sharefile.md
+++ b/docs/content/sharefile.md
@@ -3,7 +3,7 @@ title: "Citrix ShareFile"
description: "Rclone docs for Citrix ShareFile"
---
-## {{< icon "fas fa-share-square" >}} Citrix ShareFile
+# {{< icon "fas fa-share-square" >}} Citrix ShareFile
[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed at business.
@@ -191,7 +191,7 @@ Cutoff for switching to multipart upload.
- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 128M
+- Default: 128Mi
#### --sharefile-chunk-size
@@ -205,7 +205,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 64M
+- Default: 64Mi
#### --sharefile-endpoint
diff --git a/docs/content/sugarsync.md b/docs/content/sugarsync.md
index b68b0bea9..74ef2489c 100644
--- a/docs/content/sugarsync.md
+++ b/docs/content/sugarsync.md
@@ -3,8 +3,7 @@ title: "SugarSync"
description: "Rclone docs for SugarSync"
---
-{{< icon "fas fa-dove" >}} SugarSync
------------------------------------------
+# {{< icon "fas fa-dove" >}} SugarSync
[SugarSync](https://sugarsync.com) is a cloud service that enables
active synchronization of files across computers and other devices for
diff --git a/docs/content/swift.md b/docs/content/swift.md
index eb7bd264a..127952ca8 100644
--- a/docs/content/swift.md
+++ b/docs/content/swift.md
@@ -3,8 +3,7 @@ title: "Swift"
description: "Swift"
---
-{{< icon "fa fa-space-shuttle" >}}Swift
-----------------------------------------
+# {{< icon "fa fa-space-shuttle" >}}Swift
Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/).
Commercial implementations of that being:
@@ -449,7 +448,7 @@ default for this is 5 GiB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_SWIFT_CHUNK_SIZE
- Type: SizeSuffix
-- Default: 5G
+- Default: 5Gi
#### --swift-no-chunk
diff --git a/docs/content/tardigrade.md b/docs/content/tardigrade.md
index 49c688784..7997b242f 100644
--- a/docs/content/tardigrade.md
+++ b/docs/content/tardigrade.md
@@ -3,8 +3,7 @@ title: "Tardigrade"
description: "Rclone docs for Tardigrade"
---
-{{< icon "fas fa-dove" >}} Tardigrade
------------------------------------------
+# {{< icon "fas fa-dove" >}} Tardigrade
[Tardigrade](https://tardigrade.io) is an encrypted, secure, and
cost-effective object storage service that enables you to store, back up, and
diff --git a/docs/content/union.md b/docs/content/union.md
index a863e0540..1f02840f5 100644
--- a/docs/content/union.md
+++ b/docs/content/union.md
@@ -3,8 +3,7 @@ title: "Union"
description: "Remote Unification"
---
-{{< icon "fa fa-link" >}} Union
------------------------------------------
+# {{< icon "fa fa-link" >}} Union
The `union` remote provides a unification similar to UnionFS using other remotes.
diff --git a/docs/content/uptobox.md b/docs/content/uptobox.md
index 402d0a004..47757becf 100644
--- a/docs/content/uptobox.md
+++ b/docs/content/uptobox.md
@@ -3,8 +3,7 @@ title: "Uptobox"
description: "Rclone docs for Uptobox"
---
-{{< icon "fa fa-archive" >}} Uptobox
------------------------------------------
+# {{< icon "fa fa-archive" >}} Uptobox
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional
cloud storage provider and therefore not suitable for long term storage.
@@ -13,9 +12,9 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-## Setup
+### Setup
-To configure an Uptobox backend you'll need your personal api token. You'll find it in you
+To configure an Uptobox backend you'll need your personal api token. You'll find it in your
[account settings](https://uptobox.com/my_account)
@@ -107,12 +106,12 @@ as they can't be used in XML strings.
Here are the standard options specific to uptobox (Uptobox).
-#### --uptobox-api-key
+#### --uptobox-access-token
-Your API Key, get it from https://uptobox.com/my_account
+Your access Token, get it from https://uptobox.com/my_account
-- Config: api_key
-- Env Var: RCLONE_UPTOBOX_API_KEY
+- Config: access_token
+- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
- Type: string
- Default: ""
@@ -129,7 +128,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
- Type: MultiEncoder
-- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
+- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
@@ -138,4 +137,4 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
Uptobox will delete inactive files that have not been accessed in 60 days.
`rclone about` is not supported by this backend an overview of used space can however
-been seen in the uptobox web interface.
\ No newline at end of file
+be seen in the uptobox web interface.
diff --git a/docs/content/webdav.md b/docs/content/webdav.md
index fc3ce5a82..4db0ba11f 100644
--- a/docs/content/webdav.md
+++ b/docs/content/webdav.md
@@ -3,8 +3,7 @@ title: "WebDAV"
description: "Rclone docs for WebDAV"
---
-{{< icon "fa fa-globe" >}} WebDAV
------------------------------------------
+# {{< icon "fa fa-globe" >}} WebDAV
Paths are specified as `remote:path`
diff --git a/docs/content/yandex.md b/docs/content/yandex.md
index 0904a8333..5be704990 100644
--- a/docs/content/yandex.md
+++ b/docs/content/yandex.md
@@ -3,8 +3,7 @@ title: "Yandex"
description: "Yandex Disk"
---
-{{< icon "fa fa-space-shuttle" >}}Yandex Disk
-----------------------------------------
+# {{< icon "fa fa-space-shuttle" >}}Yandex Disk
[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com).
diff --git a/docs/content/zoho.md b/docs/content/zoho.md
index 6775987f8..6da811c15 100644
--- a/docs/content/zoho.md
+++ b/docs/content/zoho.md
@@ -3,8 +3,7 @@ title: "Zoho"
description: "Zoho WorkDrive"
---
-{{< icon "fas fa-folder" >}}Zoho Workdrive
-----------------------------------------
+# {{< icon "fas fa-folder" >}}Zoho Workdrive
[Zoho WorkDrive](https://www.zoho.com/workdrive/) is a cloud storage solution created by [Zoho](https://zoho.com).
@@ -150,7 +149,11 @@ Leave blank normally.
#### --zoho-region
-Zoho region to connect to. You'll have to use the region you organization is registered in.
+Zoho region to connect to.
+
+You'll have to use the region your organization is registered in. If
+not sure, use the same top-level domain as you connect to in your
+browser.
- Config: region
- Env Var: RCLONE_ZOHO_REGION
diff --git a/rclone.1 b/rclone.1
index 5b62a8474..c48ef072d 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.5
.\"
-.TH "rclone" "1" "Mar 31, 2021" "User Manual" ""
+.TH "rclone" "1" "Jul 20, 2021" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@@ -50,7 +50,6 @@ local disk.
.PP
Virtual backends wrap local and cloud file systems to apply
encryption (https://rclone.org/crypt/),
-caching (https://rclone.org/cache/),
compression (https://rclone.org/compress/)
chunking (https://rclone.org/chunker/) and
joining (https://rclone.org/union/).
@@ -229,6 +228,8 @@ Scaleway
.IP \[bu] 2
Seafile
.IP \[bu] 2
+SeaweedFS
+.IP \[bu] 2
SFTP
.IP \[bu] 2
StackPath
@@ -239,6 +240,8 @@ Tardigrade
.IP \[bu] 2
Tencent Cloud Object Storage (COS)
.IP \[bu] 2
+Uptobox
+.IP \[bu] 2
Wasabi
.IP \[bu] 2
WebDAV
@@ -271,6 +274,8 @@ archive
.IP \[bu] 2
Run \f[C]rclone config\f[R] to setup.
See rclone config docs (https://rclone.org/docs/) for more details.
+.IP \[bu] 2
+Optionally configure automatic execution.
.PP
See below for some expanded Linux / macOS instructions.
.PP
@@ -579,6 +584,180 @@ add the role to the hosts you want rclone installed to:
\- rclone
\f[R]
.fi
+.SH Autostart
+.PP
+After installing and configuring rclone, as described above, you are
+ready to use rclone as an interactive command line utility.
+If your goal is to perform \f[I]periodic\f[R] operations, such as a
+regular sync (https://rclone.org/commands/rclone_sync/), you will
+probably want to configure your rclone command in your operating
+system\[aq]s scheduler.
+If you need to expose \f[I]service\f[R]\-like features, such as remote
+control (https://rclone.org/rc/), GUI (https://rclone.org/gui/),
+serve (https://rclone.org/commands/rclone_serve/) or
+mount (https://rclone.org/commands/rclone_mount/), you will often want an
+rclone command always running in the background, and configuring it to
+run in a service infrastructure may be a better option.
+Below are some alternatives for achieving this on different
+operating systems.
+.PP
+NOTE: Before setting up autostart it is highly recommended that you
+first test your command manually from a Command Prompt.
+.SS Autostart on Windows
+.PP
+The most relevant alternatives for autostart on Windows are:
+.IP \[bu] 2
+Run at user log on using the Startup folder
+.IP \[bu] 2
+Run at user log on, at system startup or at schedule using Task
+Scheduler
+.IP \[bu] 2
+Run at system startup using Windows service
+.SS Running in background
+.PP
+Rclone is a console application, so if not starting from an existing
+Command Prompt, e.g.
+when starting rclone.exe from a shortcut, it will open a Command Prompt
+window.
+When configuring rclone to run from Task Scheduler or as a Windows
+service you are able to set it to run hidden in the background.
+From rclone version 1.54 you can also make it run hidden from anywhere
+by adding option \f[C]\-\-no\-console\f[R] (it may still flash briefly
+when the program starts).
+Since rclone normally writes information and any error messages to the
+console, you must redirect this to a file to be able to see it.
+Rclone has a built\-in option \f[C]\-\-log\-file\f[R] for that.
+.PP
+Example command to run a sync in background:
+.IP
+.nf
+\f[C]
+c:\[rs]rclone\[rs]rclone.exe sync c:\[rs]files remote:/files \-\-no\-console \-\-log\-file c:\[rs]rclone\[rs]logs\[rs]sync_files.txt
+\f[R]
+.fi
+.SS User account
+.PP
+As mentioned in the mount (https://rclone.org/commands/rclone_mount/)
+documentation, mounted drives created as Administrator are not visible
+to other accounts, not even the account that was elevated as
+Administrator.
+By running the mount command as the built\-in \f[C]SYSTEM\f[R] user
+account, you create drives that are accessible to everyone on the
+system.
+Both scheduled task and Windows service can be used to achieve this.
+.PP
+NOTE: Remember that when rclone runs as the \f[C]SYSTEM\f[R] user, the
+user profile that it sees will not be yours.
+This means that if you normally run rclone with configuration file in
+the default location, to be able to use the same configuration when
+running as the system user you must explicitly tell rclone where to
+find it with the
+\f[C]\-\-config\f[R] (https://rclone.org/docs/#config-config-file)
+option, or else it will look in the system user\[aq]s profile path
+(\f[C]C:\[rs]Windows\[rs]System32\[rs]config\[rs]systemprofile\f[R]).
+To test your command manually from a Command Prompt, you can run it with
+the
+PsExec (https://docs.microsoft.com/en-us/sysinternals/downloads/psexec)
+utility from Microsoft\[aq]s Sysinternals suite, which takes option
+\f[C]\-s\f[R] to execute commands as the \f[C]SYSTEM\f[R] user.
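For example, assuming PsExec.exe has been downloaded and is on your PATH, the following opens an interactive Command Prompt running as the SYSTEM user, from which you can test your rclone command:

```shell
psexec -s -i cmd.exe
```

The `-s` option runs the process as SYSTEM and `-i` makes it interactive on the current desktop.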
+.SS Start from Startup folder
+.PP
+To quickly execute an rclone command you can simply create a standard
+Windows Explorer shortcut for the complete rclone command you want to
+run.
+If you store this shortcut in the special \[dq]Startup\[dq] start\-menu
+folder, Windows will automatically run it at login.
+To open this folder in Windows Explorer, enter path
+\f[C]%APPDATA%\[rs]Microsoft\[rs]Windows\[rs]Start Menu\[rs]Programs\[rs]Startup\f[R],
+or
+\f[C]C:\[rs]ProgramData\[rs]Microsoft\[rs]Windows\[rs]Start Menu\[rs]Programs\[rs]StartUp\f[R]
+if you want the command to start for \f[I]every\f[R] user that logs in.
+.PP
+This is the easiest approach to autostarting of rclone, but it offers no
+functionality to set it to run as a different user, or to set conditions
+or actions on certain events.
+Setting up a scheduled task as described below will often give you
+better results.
+.SS Start from Task Scheduler
+.PP
+Task Scheduler is an administrative tool built into Windows, and it can
+be used to configure rclone to be started automatically in a highly
+configurable way, e.g.
+periodically on a schedule, on user log on, or at system startup.
+It can be configured to run as the current user, or, for a mount
+command that needs to be available to all users, as the
+\f[C]SYSTEM\f[R] user.
+For technical information, see
+https://docs.microsoft.com/windows/win32/taskschd/task\-scheduler\-start\-page.
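As a hedged illustration (task name, paths and schedule are hypothetical), a scheduled task running a daily sync at 06:00 as the SYSTEM user could be created from an elevated Command Prompt with:

```shell
schtasks /Create /TN "RcloneSync" /TR "c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt" /SC DAILY /ST 06:00 /RU SYSTEM
```

The same task can be inspected and adjusted afterwards in the Task Scheduler GUI.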
+.SS Run as service
+.PP
+For running rclone at system startup, you can create a Windows service
+that executes your rclone command, as an alternative to scheduled task
+configured to run at startup.
+.SS Mount command built\-in service integration
+.PP
+For mount commands, Rclone has a built\-in Windows service integration
+via the third party WinFsp library it uses.
+Registering as a regular Windows service is easy, as you just have to
+execute the built\-in PowerShell command \f[C]New\-Service\f[R]
+(requires administrative privileges).
+.PP
+Example of a PowerShell command that creates a Windows service for
+mounting some \f[C]remote:/files\f[R] as drive letter \f[C]X:\f[R], for
+\f[I]all\f[R] users (service will be running as the local system
+account):
+.IP
+.nf
+\f[C]
+New\-Service \-Name Rclone \-BinaryPathName \[aq]c:\[rs]rclone\[rs]rclone.exe mount remote:/files X: \-\-config c:\[rs]rclone\[rs]config\[rs]rclone.conf \-\-log\-file c:\[rs]rclone\[rs]logs\[rs]mount.txt\[aq]
+\f[R]
+.fi
+.PP
+The WinFsp service
+infrastructure (https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)
+supports incorporating services for file system implementations, such as
+rclone, into its own launcher service, as a kind of \[dq]child
+service\[dq].
+This has the additional advantage that it also implements a network
+provider that integrates into Windows standard methods for managing
+network drives.
+This is currently not officially supported by Rclone, but with WinFsp
+version 2019.3 B2 / v1.5B2 or later it should be possible through path
+rewriting as described
+here (https://github.com/rclone/rclone/issues/3340).
+.SS Third party service integration
+.PP
+To run any rclone command as a Windows service, the excellent third
+party utility NSSM (http://nssm.cc), the \[dq]Non\-Sucking Service
+Manager\[dq], can be used.
+It includes some advanced features such as adjusting process priority,
+defining process environment variables, redirecting anything written to
+stdout to a file, and customizing the response to different exit codes,
+with a GUI to configure everything (although it can also be used from
+the command line).
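A sketch of installing such a service from the NSSM command line (service name and paths are illustrative; running `nssm install Rclone` with no further arguments opens the GUI instead):

```shell
nssm install Rclone c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf
nssm set Rclone AppStdout c:\rclone\logs\mount.txt
nssm start Rclone
```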
+.PP
+There are also several other alternatives.
+To mention one more, WinSW (https://github.com/winsw/winsw),
+\[dq]Windows Service Wrapper\[dq], is worth checking out.
+It requires .NET Framework, but it is preinstalled on newer versions of
+Windows, and it also provides alternative standalone distributions which
+include the necessary runtime (.NET 5).
+WinSW is a command\-line only utility, where you have to manually create
+an XML file with service configuration.
+This may be a drawback for some, but it can also be an advantage as it
+is easy to back up and re\-use the configuration settings, without
+having to go through manual steps in a GUI.
+One thing to note is that by default it does not restart the service on
+error; you have to explicitly enable this in the configuration file (via
+the \[dq]onfailure\[dq] parameter).
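For illustration (ids and paths are hypothetical), a minimal WinSW XML configuration with the mentioned `onfailure` restart behaviour enabled might look like:

```xml
<service>
  <id>rclone</id>
  <name>Rclone</name>
  <description>Rclone mount of remote:/files</description>
  <executable>c:\rclone\rclone.exe</executable>
  <arguments>mount remote:/files X: --config c:\rclone\config\rclone.conf</arguments>
  <!-- By default WinSW does not restart on error; enable it explicitly. -->
  <onfailure action="restart" delay="10 sec"/>
</service>
```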
+.SS Autostart on Linux
+.SS Start as a service
+.PP
+To always run rclone in background, relevant for mount commands etc, you
+can use systemd to set up rclone as a system or user service.
+Running as a system service ensures that it is run at startup even if
+the user it is running as has no active session.
+Running rclone as a user service ensures that it only starts after the
+configured user has logged into the system.
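As a minimal sketch (unit name, paths and options are illustrative), a user service for a mount could be saved as `~/.config/systemd/user/rclone-mount.service` and started with `systemctl --user enable --now rclone-mount`:

```ini
[Unit]
Description=rclone mount of remote:files

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount remote:files %h/mnt/files --vfs-cache-mode writes
ExecStop=/usr/bin/fusermount -u %h/mnt/files
Restart=on-failure

[Install]
WantedBy=default.target
```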
+.SS Run periodically from cron
+.PP
+To run a periodic command, such as a copy/sync, you can set up a cron
+job.
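For example (paths and schedule are illustrative), a crontab entry added with `crontab -e` that syncs every 30 minutes and logs to a file might look like:

```shell
# m h dom mon dow  command
*/30 * * * *  /usr/bin/rclone sync /home/user/files remote:files --log-file /home/user/logs/sync.log
```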
.SS Configure
.PP
First, you\[aq]ll need to configure rclone.
@@ -610,8 +789,6 @@ Backblaze B2 (https://rclone.org/b2/)
.IP \[bu] 2
Box (https://rclone.org/box/)
.IP \[bu] 2
-Cache (https://rclone.org/cache/)
-.IP \[bu] 2
Chunker (https://rclone.org/chunker/) \- transparently splits large
files for other remotes
.IP \[bu] 2
@@ -678,6 +855,8 @@ Tardigrade (https://rclone.org/tardigrade/)
.IP \[bu] 2
Union (https://rclone.org/union/)
.IP \[bu] 2
+Uptobox (https://rclone.org/uptobox/)
+.IP \[bu] 2
WebDAV (https://rclone.org/webdav/)
.IP \[bu] 2
Yandex Disk (https://rclone.org/yandex/)
@@ -759,9 +938,6 @@ Disconnects user from remote
rclone config dump (https://rclone.org/commands/rclone_config_dump/) \-
Dump the config file as JSON.
.IP \[bu] 2
-rclone config edit (https://rclone.org/commands/rclone_config_edit/) \-
-Enter an interactive configuration session.
-.IP \[bu] 2
rclone config file (https://rclone.org/commands/rclone_config_file/) \-
Show path of configuration file in use.
.IP \[bu] 2
@@ -780,6 +956,9 @@ Re\-authenticates user with remote.
rclone config show (https://rclone.org/commands/rclone_config_show/) \-
Print (decrypted) config file, or the config for a single remote.
.IP \[bu] 2
+rclone config touch (https://rclone.org/commands/rclone_config_touch/)
+\- Ensure configuration file exists.
+.IP \[bu] 2
rclone config update (https://rclone.org/commands/rclone_config_update/)
\- Update options in an existing remote.
.IP \[bu] 2
@@ -1024,8 +1203,8 @@ directories along with it.
You can also use the separate command \f[C]rmdir\f[R] or
\f[C]rmdirs\f[R] to delete empty directories only.
.PP
-For example, to delete all files bigger than 100MBytes, you may first
-want to check what would be deleted (use either):
+For example, to delete all files bigger than 100 MiB, you may first want
+to check what would be deleted (use either):
.IP
.nf
\f[C]
@@ -1042,8 +1221,8 @@ rclone \-\-min\-size 100M delete remote:path
\f[R]
.fi
.PP
-That reads \[dq]delete everything with a minimum size of 100 MB\[dq],
-hence delete all files bigger than 100MBytes.
+That reads \[dq]delete everything with a minimum size of 100 MiB\[dq],
+hence delete all files bigger than 100 MiB.
.PP
\f[B]Important\f[R]: Since this can cause data loss, test first with the
\f[C]\-\-dry\-run\f[R] or the \f[C]\-\-interactive\f[R]/\f[C]\-i\f[R]
@@ -1179,6 +1358,10 @@ from both remotes and check them against each other on the fly.
This can be useful for remotes that don\[aq]t support hashes or if you
really want to check all the data.
.PP
+If you supply the \f[C]\-\-checkfile HASH\f[R] flag with a valid hash
+name, the \f[C]source:path\f[R] must point to a text file in the SUM
+format.
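For example (hashes and file names are illustrative), the SUM format is the familiar output of `md5sum`/`sha1sum`, one hash and path per line:

```
0bee89b07a248e27c83fc3d5951213c1  file1.txt
d3b07384d113edec49eaa6238ad5ff00  file2.txt
```

Such a file could then be checked against a remote with `rclone check --checkfile MD5 sums.md5 remote:path`.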
+.PP
If you supply the \f[C]\-\-one\-way\f[R] flag, it will only check that
files in the source match the files in the destination, not the other
way around.
@@ -1222,6 +1405,7 @@ rclone check source:path dest:path [flags]
.IP
.nf
\f[C]
+ \-C, \-\-checkfile string Treat source:path as a SUM file with hashes of given type
\-\-combined string Make a combined report of changes to this file
\-\-differ string Report all non\-matching files to this file
\-\-download Check by downloading rather than with hash.
@@ -1486,6 +1670,7 @@ rclone md5sum remote:path [flags]
.nf
\f[C]
\-\-base64 Output base64 encoded hashsum
+ \-C, \-\-checkfile string Validate hashes against a given SUM file instead of printing them
\-\-download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
\-h, \-\-help help for md5sum
\-\-output\-file string Output hashsums to a file rather than the terminal
@@ -1521,6 +1706,7 @@ rclone sha1sum remote:path [flags]
.nf
\f[C]
\-\-base64 Output base64 encoded hashsum
+ \-C, \-\-checkfile string Validate hashes against a given SUM file instead of printing them
\-\-download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
\-h, \-\-help help for sha1sum
\-\-output\-file string Output hashsums to a file rather than the terminal
@@ -1563,14 +1749,17 @@ Show the version number.
.SS Synopsis
.PP
Show the rclone version number, the go version, the build target OS and
-architecture, build tags and the type of executable (static or dynamic).
+architecture, the runtime OS and kernel version and bitness, build tags
+and the type of executable (static or dynamic).
.PP
For example:
.IP
.nf
\f[C]
$ rclone version
-rclone v1.54
+rclone v1.55.0
+\- os/version: ubuntu 18.04 (64 bit)
+\- os/kernel: 4.15.0\-136\-generic (x86_64)
\- os/type: linux
\- os/arch: amd64
\- go/version: go1.16
@@ -1838,12 +2027,12 @@ commands, flags and backends.
Get quota information from the remote.
.SS Synopsis
.PP
-\f[C]rclone about\f[R]prints quota information about a remote to
+\f[C]rclone about\f[R] prints quota information about a remote to
standard output.
The output is typically used, free, quota and trash contents.
.PP
E.g.
-Typical output from\f[C]rclone about remote:\f[R]is:
+Typical output from \f[C]rclone about remote:\f[R] is:
.IP
.nf
\f[C]
@@ -1887,7 +2076,7 @@ Other: 8849156022
\f[R]
.fi
.PP
-A \f[C]\-\-json\f[R]flag generates conveniently computer readable
+A \f[C]\-\-json\f[R] flag generates conveniently computer readable
output, e.g.
.IP
.nf
@@ -2092,13 +2281,90 @@ not listed here.
.IP \[bu] 2
rclone (https://rclone.org/commands/rclone/) \- Show help for rclone
commands, flags and backends.
+.SH rclone checksum
+.PP
+Checks the files in the source against a SUM file.
+.SS Synopsis
+.PP
+Checks that hashsums of source files match the SUM file.
+It compares hashes (MD5, SHA1, etc) and logs a report of files which
+don\[aq]t match.
+It doesn\[aq]t alter the file system.
+.PP
+If you supply the \f[C]\-\-download\f[R] flag, it will download the data
+from remote and calculate the contents hash on the fly.
+This can be useful for remotes that don\[aq]t support hashes or if you
+really want to check all the data.
+.PP
+If you supply the \f[C]\-\-one\-way\f[R] flag, it will only check that
+files in the source match the files in the destination, not the other
+way around.
+This means that extra files in the destination that are not in the
+source will not be detected.
+.PP
+The \f[C]\-\-differ\f[R], \f[C]\-\-missing\-on\-dst\f[R],
+\f[C]\-\-missing\-on\-src\f[R], \f[C]\-\-match\f[R] and
+\f[C]\-\-error\f[R] flags write paths, one per line, to the file name
+(or stdout if it is \f[C]\-\f[R]) supplied.
+What they write is described in the help below.
+For example \f[C]\-\-differ\f[R] will write all paths which are present
+on both the source and destination but different.
+.PP
+The \f[C]\-\-combined\f[R] flag will write a file (or stdout) which
+contains all file paths with a symbol and then a space and then the path
+to tell you what happened to it.
+These are reminiscent of diff files.
+.IP \[bu] 2
+\f[C]= path\f[R] means path was found in source and destination and was
+identical
+.IP \[bu] 2
+\f[C]\- path\f[R] means path was missing on the source, so only in the
+destination
+.IP \[bu] 2
+\f[C]+ path\f[R] means path was missing on the destination, so only in
+the source
+.IP \[bu] 2
+\f[C]* path\f[R] means path was present in source and destination but
+different.
+.IP \[bu] 2
+\f[C]! path\f[R] means there was an error reading or hashing the source
+or dest.
+.IP
+.nf
+\f[C]
+rclone checksum sumfile src:path [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+ \-\-combined string Make a combined report of changes to this file
+ \-\-differ string Report all non\-matching files to this file
+ \-\-download Check by hashing the contents.
+ \-\-error string Report all files with errors (hashing or reading) to this file
+ \-h, \-\-help help for checksum
+ \-\-match string Report all matching files to this file
+ \-\-missing\-on\-dst string Report all files missing from the destination to this file
+ \-\-missing\-on\-src string Report all files missing from the source to this file
+ \-\-one\-way Check one way only, source files must exist on remote
+\f[R]
+.fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
+.SS SEE ALSO
+.IP \[bu] 2
+rclone (https://rclone.org/commands/rclone/) \- Show help for rclone
+commands, flags and backends.
.SH rclone config create
.PP
Create a new remote with name, type and options.
.SS Synopsis
.PP
Create a new remote of \f[C]name\f[R] with \f[C]type\f[R] and options.
-The options should be passed in pairs of \f[C]key\f[R] \f[C]value\f[R].
+The options should be passed in pairs of \f[C]key\f[R] \f[C]value\f[R]
+or as \f[C]key=value\f[R].
.PP
For example to make a swift remote of name myremote using auto config
you would do:
@@ -2106,13 +2372,23 @@ you would do:
.nf
\f[C]
rclone config create myremote swift env_auth true
+rclone config create myremote swift env_auth=true
+\f[R]
+.fi
+.PP
+So for example if you wanted to configure a Google Drive remote but
+using remote authorization you would do this:
+.IP
+.nf
+\f[C]
+rclone config create mydrive drive config_is_local=false
\f[R]
.fi
.PP
Note that if the config process would normally ask a question the
-default is taken.
-Each time that happens rclone will print a message saying how to affect
-the value taken.
+default is taken (unless \f[C]\-\-non\-interactive\f[R] is used).
+Each time that happens rclone will print (or log at DEBUG level) a
+message saying how to affect the value taken.
.PP
If any of the parameters passed is a password field, then rclone will
automatically obscure them if they aren\[aq]t already obscured before
@@ -2123,19 +2399,103 @@ consists only of base64 characters then rclone can get confused about
whether the password is already obscured or not and put unobscured
passwords into the config file.
If you want to be 100% certain that the passwords get obscured then use
-the \[dq]\-\-obscure\[dq] flag, or if you are 100% certain you are
-already passing obscured passwords then use \[dq]\-\-no\-obscure\[dq].
-You can also set obscured passwords using the \[dq]rclone config
-password\[dq] command.
+the \f[C]\-\-obscure\f[R] flag, or if you are 100% certain you are
+already passing obscured passwords then use \f[C]\-\-no\-obscure\f[R].
+You can also set obscured passwords using the
+\f[C]rclone config password\f[R] command.
.PP
-So for example if you wanted to configure a Google Drive remote but
-using remote authorization you would do this:
+The flag \f[C]\-\-non\-interactive\f[R] is for use by applications that
+wish to configure rclone themselves, rather than using rclone\[aq]s
+text based configuration questions.
+If this flag is set, and rclone needs to ask the user a question, a JSON
+blob will be returned with the question in it.
+.PP
+This will look something like (some irrelevant detail removed):
.IP
.nf
\f[C]
-rclone config create mydrive drive config_is_local false
+{
+ \[dq]State\[dq]: \[dq]*oauth\-islocal,teamdrive,,\[dq],
+ \[dq]Option\[dq]: {
+ \[dq]Name\[dq]: \[dq]config_is_local\[dq],
+ \[dq]Help\[dq]: \[dq]Use auto config?\[rs]n * Say Y if not sure\[rs]n * Say N if you are working on a remote or headless machine\[rs]n\[dq],
+ \[dq]Default\[dq]: true,
+ \[dq]Examples\[dq]: [
+ {
+ \[dq]Value\[dq]: \[dq]true\[dq],
+ \[dq]Help\[dq]: \[dq]Yes\[dq]
+ },
+ {
+ \[dq]Value\[dq]: \[dq]false\[dq],
+ \[dq]Help\[dq]: \[dq]No\[dq]
+ }
+ ],
+ \[dq]Required\[dq]: false,
+ \[dq]IsPassword\[dq]: false,
+ \[dq]Type\[dq]: \[dq]bool\[dq],
+ \[dq]Exclusive\[dq]: true,
+ },
+ \[dq]Error\[dq]: \[dq]\[dq],
+}
\f[R]
.fi
+.PP
+The format of \f[C]Option\f[R] is the same as returned by
+\f[C]rclone config providers\f[R].
+The question should be asked to the user and returned to rclone as the
+\f[C]\-\-result\f[R] option along with the \f[C]\-\-state\f[R]
+parameter.
+.PP
+The keys of \f[C]Option\f[R] are used as follows:
+.IP \[bu] 2
+\f[C]Name\f[R] \- name of variable \- show to user
+.IP \[bu] 2
+\f[C]Help\f[R] \- help text.
+Hard wrapped at 80 chars.
+Any URLs should be clicky.
+.IP \[bu] 2
+\f[C]Default\f[R] \- default value \- return this if the user just wants
+the default.
+.IP \[bu] 2
+\f[C]Examples\f[R] \- the user should be able to choose one of these
+.IP \[bu] 2
+\f[C]Required\f[R] \- the value should be non\-empty
+.IP \[bu] 2
+\f[C]IsPassword\f[R] \- the value is a password and should be edited as
+such
+.IP \[bu] 2
+\f[C]Type\f[R] \- type of value, eg \f[C]bool\f[R], \f[C]string\f[R],
+\f[C]int\f[R] and others
+.IP \[bu] 2
+\f[C]Exclusive\f[R] \- if set, no free\-form entry is allowed, only the
+\f[C]Examples\f[R]
+.IP \[bu] 2
+The keys \f[C]Provider\f[R], \f[C]ShortOpt\f[R], \f[C]Hide\f[R],
+\f[C]NoPrefix\f[R] and \f[C]Advanced\f[R] are irrelevant here
+.PP
+If \f[C]Error\f[R] is set then it should be shown to the user at the
+same time as the question.
+.IP
+.nf
+\f[C]
+rclone config update name \-\-continue \-\-state \[dq]*oauth\-islocal,teamdrive,,\[dq] \-\-result \[dq]true\[dq]
+\f[R]
+.fi
+.PP
+Note that when using \f[C]\-\-continue\f[R] all passwords should be
+passed in the clear (not obscured).
+Any default config values should be passed in with each invocation of
+\f[C]\-\-continue\f[R].
+.PP
+At the end of the non\-interactive process, rclone will return a result
+with \f[C]State\f[R] as empty string.
+.PP
+If \f[C]\-\-all\f[R] is passed then rclone will ask all the config
+questions, not just the post config questions.
+Any parameters are used as defaults for questions as usual.
+.PP
+Note that \f[C]bin/config.py\f[R] in the rclone source implements this
+protocol as a readable demonstration.
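As a minimal sketch of driving this protocol (assuming the JSON blob has the shape shown above; a real application would present `Option["Help"]` and `Option["Examples"]` to the user rather than blindly accepting the default):

```python
import json


def answer_question(blob: str) -> dict:
    """Parse one JSON question blob emitted by rclone with
    --non-interactive and choose a reply.

    Returns the values to pass back via --state and --result.
    Here we simply accept the default answer.
    """
    question = json.loads(blob)
    if question.get("Error"):
        # rclone asks that Error be shown alongside the question.
        raise RuntimeError(question["Error"])
    option = question["Option"]
    return {
        "state": question["State"],
        # rclone expects the result as a string, e.g. "true" for bools.
        "result": json.dumps(option["Default"]),
    }


# Trimmed example blob, matching the one shown above.
blob = """{
  "State": "*oauth-islocal,teamdrive,,",
  "Option": {"Name": "config_is_local", "Default": true},
  "Error": ""
}"""
reply = answer_question(blob)
```

The reply would then be fed back as `rclone config create ... --continue --state <state> --result <result>`, repeating until rclone returns an empty `State`.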
.IP
.nf
\f[C]
@@ -2146,9 +2506,14 @@ rclone config create \[ga]name\[ga] \[ga]type\[ga] [\[ga]key\[ga] \[ga]value\[ga
.IP
.nf
\f[C]
- \-h, \-\-help help for create
- \-\-no\-obscure Force any passwords not to be obscured.
- \-\-obscure Force any passwords to be obscured.
+ \-\-all Ask the full set of config questions.
+ \-\-continue Continue the configuration process with an answer.
+ \-h, \-\-help help for create
+ \-\-no\-obscure Force any passwords not to be obscured.
+ \-\-non\-interactive Don\[aq]t interact with user and return questions.
+ \-\-obscure Force any passwords to be obscured.
+ \-\-result string Result \- use with \-\-continue.
+ \-\-state string State \- use with \-\-continue.
\f[R]
.fi
.PP
@@ -2237,7 +2602,7 @@ interactive configuration session.
.SH rclone config edit
.PP
Enter an interactive configuration session.
-.SS Synopsis
+.SS Synopsis
.PP
Enter an interactive configuration session where you can setup new
remotes and manage existing ones.
@@ -2248,7 +2613,7 @@ You may also set or remove a password to protect your configuration.
rclone config edit [flags]
\f[R]
.fi
-.SS Options
+.SS Options
.IP
.nf
\f[C]
@@ -2258,7 +2623,7 @@ rclone config edit [flags]
.PP
See the global flags page (https://rclone.org/flags/) for global options
not listed here.
-.SS SEE ALSO
+.SS SEE ALSO
.IP \[bu] 2
rclone config (https://rclone.org/commands/rclone_config/) \- Enter an
interactive configuration session.
@@ -2291,13 +2656,16 @@ Update password in an existing remote.
.SS Synopsis
.PP
Update an existing remote\[aq]s password.
-The password should be passed in pairs of \f[C]key\f[R] \f[C]value\f[R].
+The password should be passed in pairs of \f[C]key\f[R]
+\f[C]password\f[R] or as \f[C]key=password\f[R].
+The \f[C]password\f[R] should be passed in clear (unobscured).
.PP
For example to set password of a remote of name myremote you would do:
.IP
.nf
\f[C]
rclone config password myremote fieldname mypassword
+rclone config password myremote fieldname=mypassword
\f[R]
.fi
.PP
@@ -2399,24 +2767,62 @@ not listed here.
.IP \[bu] 2
rclone config (https://rclone.org/commands/rclone_config/) \- Enter an
interactive configuration session.
+.SH rclone config touch
+.PP
+Ensure configuration file exists.
+.IP
+.nf
+\f[C]
+rclone config touch [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+ \-h, \-\-help help for touch
+\f[R]
+.fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
+.SS SEE ALSO
+.IP \[bu] 2
+rclone config (https://rclone.org/commands/rclone_config/) \- Enter an
+interactive configuration session.
.SH rclone config update
.PP
Update options in an existing remote.
.SS Synopsis
.PP
Update an existing remote\[aq]s options.
-The options should be passed in in pairs of \f[C]key\f[R]
-\f[C]value\f[R].
+The options should be passed in pairs of \f[C]key\f[R] \f[C]value\f[R]
+or as \f[C]key=value\f[R].
.PP
For example to update the env_auth field of a remote of name myremote
you would do:
.IP
.nf
\f[C]
-rclone config update myremote swift env_auth true
+rclone config update myremote env_auth true
+rclone config update myremote env_auth=true
\f[R]
.fi
.PP
+If the remote uses OAuth the token will be updated; if you don\[aq]t
+require this, add an extra parameter thus:
+.IP
+.nf
+\f[C]
+rclone config update myremote env_auth=true config_refresh_token=false
+\f[R]
+.fi
+.PP
+Note that if the config process would normally ask a question the
+default is taken (unless \f[C]\-\-non\-interactive\f[R] is used).
+Each time that happens rclone will print (or log at DEBUG level) a
+message saying how to affect the value taken.
+.PP
If any of the parameters passed is a password field, then rclone will
automatically obscure them if they aren\[aq]t already obscured before
putting them in the config file.
@@ -2426,19 +2832,103 @@ consists only of base64 characters then rclone can get confused about
whether the password is already obscured or not and put unobscured
passwords into the config file.
If you want to be 100% certain that the passwords get obscured then use
-the \[dq]\-\-obscure\[dq] flag, or if you are 100% certain you are
-already passing obscured passwords then use \[dq]\-\-no\-obscure\[dq].
-You can also set obscured passwords using the \[dq]rclone config
-password\[dq] command.
+the \f[C]\-\-obscure\f[R] flag, or if you are 100% certain you are
+already passing obscured passwords then use \f[C]\-\-no\-obscure\f[R].
+You can also set obscured passwords using the
+\f[C]rclone config password\f[R] command.
.PP
-If the remote uses OAuth the token will be updated, if you don\[aq]t
-require this add an extra parameter thus:
+The flag \f[C]\-\-non\-interactive\f[R] is for use by applications that
+wish to configure rclone themselves, rather than using rclone\[aq]s
+text based configuration questions.
+If this flag is set, and rclone needs to ask the user a question, a JSON
+blob will be returned with the question in it.
+.PP
+This will look something like (some irrelevant detail removed):
.IP
.nf
\f[C]
-rclone config update myremote swift env_auth true config_refresh_token false
+{
+ \[dq]State\[dq]: \[dq]*oauth\-islocal,teamdrive,,\[dq],
+ \[dq]Option\[dq]: {
+ \[dq]Name\[dq]: \[dq]config_is_local\[dq],
+ \[dq]Help\[dq]: \[dq]Use auto config?\[rs]n * Say Y if not sure\[rs]n * Say N if you are working on a remote or headless machine\[rs]n\[dq],
+ \[dq]Default\[dq]: true,
+ \[dq]Examples\[dq]: [
+ {
+ \[dq]Value\[dq]: \[dq]true\[dq],
+ \[dq]Help\[dq]: \[dq]Yes\[dq]
+ },
+ {
+ \[dq]Value\[dq]: \[dq]false\[dq],
+ \[dq]Help\[dq]: \[dq]No\[dq]
+ }
+ ],
+ \[dq]Required\[dq]: false,
+ \[dq]IsPassword\[dq]: false,
+ \[dq]Type\[dq]: \[dq]bool\[dq],
+ \[dq]Exclusive\[dq]: true,
+ },
+ \[dq]Error\[dq]: \[dq]\[dq],
+}
\f[R]
.fi
+.PP
+The format of \f[C]Option\f[R] is the same as returned by
+\f[C]rclone config providers\f[R].
+The question should be asked to the user and returned to rclone as the
+\f[C]\-\-result\f[R] option along with the \f[C]\-\-state\f[R]
+parameter.
+.PP
+The keys of \f[C]Option\f[R] are used as follows:
+.IP \[bu] 2
+\f[C]Name\f[R] \- name of variable \- show to user
+.IP \[bu] 2
+\f[C]Help\f[R] \- help text.
+Hard wrapped at 80 chars.
+Any URLs should be clicky.
+.IP \[bu] 2
+\f[C]Default\f[R] \- default value \- return this if the user just wants
+the default.
+.IP \[bu] 2
+\f[C]Examples\f[R] \- the user should be able to choose one of these
+.IP \[bu] 2
+\f[C]Required\f[R] \- the value should be non\-empty
+.IP \[bu] 2
+\f[C]IsPassword\f[R] \- the value is a password and should be edited as
+such
+.IP \[bu] 2
+\f[C]Type\f[R] \- type of value, eg \f[C]bool\f[R], \f[C]string\f[R],
+\f[C]int\f[R] and others
+.IP \[bu] 2
+\f[C]Exclusive\f[R] \- if set, no free\-form entry is allowed, only the
+\f[C]Examples\f[R]
+.IP \[bu] 2
+The keys \f[C]Provider\f[R], \f[C]ShortOpt\f[R], \f[C]Hide\f[R],
+\f[C]NoPrefix\f[R] and \f[C]Advanced\f[R] are irrelevant here
+.PP
+If \f[C]Error\f[R] is set then it should be shown to the user at the
+same time as the question.
+.IP
+.nf
+\f[C]
+rclone config update name \-\-continue \-\-state \[dq]*oauth\-islocal,teamdrive,,\[dq] \-\-result \[dq]true\[dq]
+\f[R]
+.fi
+.PP
+Note that when using \f[C]\-\-continue\f[R] all passwords should be
+passed in the clear (not obscured).
+Any default config values should be passed in with each invocation of
+\f[C]\-\-continue\f[R].
+.PP
+At the end of the non\-interactive process, rclone will return a result
+with \f[C]State\f[R] as empty string.
+.PP
+If \f[C]\-\-all\f[R] is passed then rclone will ask all the config
+questions, not just the post config questions.
+Any parameters are used as defaults for questions as usual.
+.PP
+Note that \f[C]bin/config.py\f[R] in the rclone source implements this
+protocol as a readable demonstration.
.IP
.nf
\f[C]
@@ -2449,9 +2939,14 @@ rclone config update \[ga]name\[ga] [\[ga]key\[ga] \[ga]value\[ga]]+ [flags]
.IP
.nf
\f[C]
- \-h, \-\-help help for update
- \-\-no\-obscure Force any passwords not to be obscured.
- \-\-obscure Force any passwords to be obscured.
+ \-\-all Ask the full set of config questions.
+ \-\-continue Continue the configuration process with an answer.
+ \-h, \-\-help help for update
+ \-\-no\-obscure Force any passwords not to be obscured.
+ \-\-non\-interactive Don\[aq]t interact with user and return questions.
+ \-\-obscure Force any passwords to be obscured.
+ \-\-result string Result \- use with \-\-continue.
+ \-\-state string State \- use with \-\-continue.
\f[R]
.fi
.PP
@@ -2558,10 +3053,10 @@ Copy url content to dest.
Download a URL\[aq]s content and copy it to the destination without
saving it in temporary storage.
.PP
-Setting \f[C]\-\-auto\-filename\f[R]will cause the file name to be
-retrieved from the from URL (after any redirections) and used in the
+Setting \f[C]\-\-auto\-filename\f[R] will cause the file name to be
+retrieved from the URL (after any redirections) and used in the
destination path.
-With \f[C]\-\-print\-filename\f[R] in addition, the resuling file name
+With \f[C]\-\-print\-filename\f[R] in addition, the resulting file name
will be printed.
.PP
Setting \f[C]\-\-no\-clobber\f[R] will prevent overwriting file on the
@@ -2995,10 +3490,13 @@ Run without a hash to see the list of all supported hashes, e.g.
\f[C]
$ rclone hashsum
Supported hashes are:
- * MD5
- * SHA\-1
- * DropboxHash
- * QuickXorHash
+ * md5
+ * sha1
+ * whirlpool
+ * crc32
+ * dropbox
+ * mailru
+ * quickxor
\f[R]
.fi
.PP
@@ -3009,6 +3507,8 @@ Then
$ rclone hashsum MD5 remote:path
\f[R]
.fi
+.PP
+Note that hash names are case insensitive.
.IP
.nf
\f[C]
@@ -3020,6 +3520,7 @@ rclone hashsum remote:path [flags]
.nf
\f[C]
\-\-base64 Output base64 encoded hashsum
+ \-C, \-\-checkfile string Validate hashes against a given SUM file instead of printing them
\-\-download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
\-h, \-\-help help for hashsum
\-\-output\-file string Output hashsums to a file rather than the terminal
@@ -3073,7 +3574,7 @@ rclone link remote:path [flags]
.IP
.nf
\f[C]
- \-\-expire Duration The amount of time that the link will be valid (default 100y)
+ \-\-expire Duration The amount of time that the link will be valid (default off)
\-h, \-\-help help for link
\-\-unlink Remove existing public link to file/folder
\f[R]
@@ -3292,7 +3793,7 @@ rclone lsf remote:path [flags]
\-\-dirs\-only Only list directories.
\-\-files\-only Only list files.
\-F, \-\-format string Output format \- see help for details (default \[dq]p\[dq])
- \-\-hash h Use this hash when h is used in the format MD5|SHA\-1|DropboxHash (default \[dq]MD5\[dq])
+ \-\-hash h Use this hash when h is used in the format MD5|SHA\-1|DropboxHash (default \[dq]md5\[dq])
\-h, \-\-help help for lsf
\-R, \-\-recursive Recurse into the listing.
\-s, \-\-separator string Separator for the items in the format. (default \[dq];\[dq])
@@ -3502,10 +4003,10 @@ The size of the mounted file system will be set according to information
retrieved from the remote, the same as returned by the rclone
about (https://rclone.org/commands/rclone_about/) command.
Remotes with unlimited storage may report the used size only, then an
-additional 1PB of free space is assumed.
+additional 1 PiB of free space is assumed.
If the remote does not
support (https://rclone.org/overview/#optional-features) the about
-feature at all, then 1PB is set as both the total and the free size.
+feature at all, then 1 PiB is set as both the total and the free size.
.PP
\f[B]Note\f[R]: As of \f[C]rclone\f[R] 1.52.2, \f[C]rclone mount\f[R]
now requires Go version 1.13 or newer on some platforms depending on the
@@ -3677,7 +4178,7 @@ this as the file being accessible by everyone.
For example an SSH client may warn about \[dq]unprotected private key
file\[dq].
.PP
-WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option
+WinFsp 2021 (version 1.9) introduces a new FUSE option
\[dq]FileSecurity\[dq], that allows the complete specification of file
security descriptors using
SDDL (https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
@@ -3687,19 +4188,44 @@ With this you can work around issues such as the mentioned
access (FA) to the owner (OW).
.SS Windows caveats
.PP
-Note that drives created as Administrator are not visible by other
-accounts (including the account that was elevated as Administrator).
-So if you start a Windows drive from an Administrative Command Prompt
-and then try to access the same drive from Explorer (which does not run
-as Administrator), you will not be able to see the new drive.
+Drives created as Administrator are not visible to other accounts, not
+even an account that was elevated to Administrator with the User Account
+Control (UAC) feature.
+A result of this is that if you mount to a drive letter from a Command
+Prompt run as Administrator, and then try to access the same drive from
+Windows Explorer (which does not run as Administrator), you will not be
+able to see the mounted drive.
.PP
-The easiest way around this is to start the drive from a normal command
-prompt.
-It is also possible to start a drive from the SYSTEM account (using the
-WinFsp.Launcher
-infrastructure (https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
-which creates drives accessible for everyone on the system or
-alternatively using the nssm service manager (https://nssm.cc/usage).
+If you don\[aq]t need to access the drive from applications running with
+administrative privileges, the easiest way around this is to always
+create the mount from a non\-elevated command prompt.
+.PP
+To make mapped drives available to the user account that created them,
+whether elevated or not, there is a special Windows setting called
+linked
+connections (https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry)
+that can be enabled.
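+.PP
+A sketch of enabling it from an elevated prompt (registry path as
+described in the linked Microsoft article; a reboot is required
+afterwards):
+.IP
+.nf
+\f[C]
+reg add \[dq]HKLM\[rs]SOFTWARE\[rs]Microsoft\[rs]Windows\[rs]CurrentVersion\[rs]Policies\[rs]System\[dq] /v EnableLinkedConnections /t REG_DWORD /d 1
+\f[R]
+.fi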
+.PP
+It is also possible to make a drive mount available to everyone on the
+system, by running the process creating it as the built\-in SYSTEM
+account.
+There are several ways to do this: One is to use the command\-line
+utility
+PsExec (https://docs.microsoft.com/en-us/sysinternals/downloads/psexec),
+from Microsoft\[aq]s Sysinternals suite, which has option \f[C]\-s\f[R]
+to start processes as the SYSTEM account.
+Another alternative is to run the mount command from a Windows Scheduled
+Task, or a Windows Service, configured to run as the SYSTEM account.
+A third alternative is to use the WinFsp.Launcher
+infrastructure (https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture).
+Note that when running rclone as another user, it will not use the
+configuration file from your profile unless you tell it to with the
+\f[C]\-\-config\f[R] (https://rclone.org/docs/#config-config-file)
+option.
+Read more in the install documentation (https://rclone.org/install/).
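+.PP
+For example, a hypothetical sketch using PsExec (drive letter and
+config path are illustrative):
+.IP
+.nf
+\f[C]
+psexec \-s rclone mount remote: X: \-\-config C:\[rs]rclone\[rs]rclone.conf
+\f[R]
+.fi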
+.PP
+Note that mapping to a directory path, instead of a drive letter, does
+not suffer from the same limitations.
.SS Limitations
.PP
Without the use of \f[C]\-\-vfs\-cache\-mode\f[R] this can only write
@@ -3819,7 +4345,7 @@ cache.
.nf
\f[C]
\-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
-\-\-poll\-interval duration Time to wait between polling for changes.
+\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\f[R]
.fi
.PP
@@ -4145,7 +4671,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
\-\-fuse\-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
\-\-gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
\-h, \-\-help help for mount
- \-\-max\-read\-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128k)
+ \-\-max\-read\-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
\-\-network\-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
\-\-no\-checksum Don\[aq]t compare checksums on up/download.
\-\-no\-modtime Don\[aq]t read/write the modification time (can speed things up).
@@ -4156,14 +4682,14 @@ rclone mount remote:path /path/to/mountpoint [flags]
\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\-\-read\-only Mount read\-only.
\-\-uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- \-\-umask int Override the permission bits set by the filesystem. Not supported on Windows.
+ \-\-umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
\-\-vfs\-cache\-max\-age duration Max age of objects in the cache. (default 1h0m0s)
\-\-vfs\-cache\-max\-size SizeSuffix Max total size of objects in the cache. (default off)
\-\-vfs\-cache\-mode CacheMode Cache mode off|minimal|writes|full (default off)
\-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
\-\-vfs\-case\-insensitive If a file name not found, find a case insensitive match.
\-\-vfs\-read\-ahead SizeSuffix Extra read ahead over \-\-buffer\-size when using cache\-mode full.
- \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128M)
+ \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
\-\-vfs\-read\-chunk\-size\-limit SizeSuffix If greater than \-\-vfs\-read\-chunk\-size, double the chunk size after each chunk read, until the limit is reached. \[aq]off\[aq] is unlimited. (default off)
\-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
\-\-vfs\-used\-is\-size rclone size Use the rclone size algorithm for Used size.
@@ -4491,6 +5017,14 @@ please see there.
Generally speaking, setting this cutoff too high will decrease your
performance.
.PP
+Use the \f[C]\-\-size\f[R] flag to preallocate the file in advance at
+the remote end and actually stream it, even if the remote backend
+doesn\[aq]t support streaming.
+.PP
+\f[C]\-\-size\f[R] should be the exact size of the input stream in
+bytes.
+If the size of the stream is different from the \f[C]\-\-size\f[R]
+passed in then the transfer will likely fail.
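+.PP
+For example, a sketch (file name illustrative, assuming GNU
+\f[C]stat\f[R]) of streaming a local file with an exact size hint:
+.IP
+.nf
+\f[C]
+rclone rcat \-\-size $(stat \-c%s file.bin) remote:path/file.bin < file.bin
+\f[R]
+.fi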
+.PP
Note that the upload can also not be retried because the data is not
kept around until the upload succeeds.
If you need to transfer a lot of data, you\[aq]re better off caching
@@ -4505,7 +5039,8 @@ rclone rcat remote:path [flags]
.IP
.nf
\f[C]
- \-h, \-\-help help for rcat
+ \-h, \-\-help help for rcat
+ \-\-size int File size hint to preallocate (default \-1)
\f[R]
.fi
.PP
@@ -4632,7 +5167,7 @@ Beta releases have an additional information similar to
\f[C]v1.54.0\-beta.5111.06f1c0c61\f[R].
(if you are a developer and use a locally built rclone, the version
number will end with \f[C]\-DEV\f[R], you will have to rebuild it as it
-obvisously can\[aq]t be distributed).
+obviously can\[aq]t be distributed).
.PP
If you previously installed rclone via a package manager, the package
may include local documentation or configure services.
@@ -4725,6 +5260,9 @@ commands, flags and backends.
rclone serve dlna (https://rclone.org/commands/rclone_serve_dlna/) \-
Serve remote:path over DLNA
.IP \[bu] 2
+rclone serve docker (https://rclone.org/commands/rclone_serve_docker/)
+\- Serve any remote on docker\[aq]s volume plugin API.
+.IP \[bu] 2
rclone serve ftp (https://rclone.org/commands/rclone_serve_ftp/) \-
Serve remote:path over FTP.
.IP \[bu] 2
@@ -4794,7 +5332,7 @@ cache.
.nf
\f[C]
\-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
-\-\-poll\-interval duration Time to wait between polling for changes.
+\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\f[R]
.fi
.PP
@@ -5126,7 +5664,7 @@ rclone serve dlna remote:path [flags]
\-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
\-\-vfs\-case\-insensitive If a file name not found, find a case insensitive match.
\-\-vfs\-read\-ahead SizeSuffix Extra read ahead over \-\-buffer\-size when using cache\-mode full.
- \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128M)
+ \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
\-\-vfs\-read\-chunk\-size\-limit SizeSuffix If greater than \-\-vfs\-read\-chunk\-size, double the chunk size after each chunk read, until the limit is reached. \[aq]off\[aq] is unlimited. (default off)
\-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
\-\-vfs\-used\-is\-size rclone size Use the rclone size algorithm for Used size.
@@ -5141,6 +5679,451 @@ not listed here.
.IP \[bu] 2
rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a
remote over a protocol.
+.SH rclone serve docker
+.PP
+Serve any remote on docker\[aq]s volume plugin API.
+.SS Synopsis
+.PP
+This command implements the Docker volume plugin API allowing docker to
+use rclone as a data storage mechanism for various cloud providers.
+rclone provides docker volume plugin (/docker) based on it.
+.PP
+To create a docker plugin, one must create a Unix or TCP socket that
+Docker will look for when you use the plugin.
+The plugin then listens for commands from the docker daemon and runs
+the corresponding code when necessary.
+Docker plugins can run as a managed plugin under control of the docker
+daemon or as an independent native service.
+For testing, you can just run it directly from the command line, for
+example:
+.IP
+.nf
+\f[C]
+sudo rclone serve docker \-\-base\-dir /tmp/rclone\-volumes \-\-socket\-addr localhost:8787 \-vv
+\f[R]
+.fi
+.PP
+Running \f[C]rclone serve docker\f[R] will create the socket,
+listening for commands from Docker to create the necessary Volumes.
+Normally you need not give the \f[C]\-\-socket\-addr\f[R] flag.
+The API will listen on the unix domain socket at
+\f[C]/run/docker/plugins/rclone.sock\f[R].
+In the example above rclone will create a TCP socket and a small file
+\f[C]/etc/docker/plugins/rclone.spec\f[R] containing the socket address.
+We use \f[C]sudo\f[R] because both paths are writeable only by the root
+user.
+.PP
+If you later decide to change the listening socket, the docker daemon must
+be restarted to reconnect to \f[C]/run/docker/plugins/rclone.sock\f[R]
+or parse new \f[C]/etc/docker/plugins/rclone.spec\f[R].
+Until you restart, any volume\-related docker commands will time out
+trying to access the old socket.
+Running directly is supported on \f[B]Linux only\f[R], not on Windows or
+MacOS.
+This is not a problem with managed plugin mode described in detail in
+the full documentation (https://rclone.org/docker).
+.PP
+The command will create volume mounts under the path given by
+\f[C]\-\-base\-dir\f[R] (by default
+\f[C]/var/lib/docker\-volumes/rclone\f[R] available only to root) and
+maintain the JSON formatted file \f[C]docker\-plugin.state\f[R] in the
+rclone cache directory with book\-keeping records of created and mounted
+volumes.
+.PP
+All mount and VFS options are submitted by the docker daemon via API,
+but you can also provide defaults on the command line as well as set
+path to the config file and cache directory or adjust logging verbosity.
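+.PP
+As an illustrative sketch (volume and remote names are hypothetical),
+once the plugin is serving you can create and use a volume like this:
+.IP
+.nf
+\f[C]
+docker volume create my_vol \-d rclone \-o remote=mydrive:path \-o vfs\-cache\-mode=full
+docker run \-\-rm \-v my_vol:/data alpine ls /data
+\f[R]
+.fi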
+.SS VFS \- Virtual File System
+.PP
+This command uses the VFS layer.
+This adapts the cloud storage objects that rclone uses into something
+which looks much more like a disk filing system.
+.PP
+Cloud storage objects have lots of properties which aren\[aq]t like disk
+files \- you can\[aq]t extend them or write to the middle of them, so
+the VFS layer has to deal with that.
+Because there is no one right way of doing this there are various
+options explained below.
+.PP
+The VFS layer also implements a directory cache \- this caches info
+about files and directories (but not the data) in memory.
+.SS VFS Directory Cache
+.PP
+Using the \f[C]\-\-dir\-cache\-time\f[R] flag, you can control how long
+a directory should be considered up to date and not refreshed from the
+backend.
+Changes made through the mount will appear immediately or invalidate the
+cache.
+.IP
+.nf
+\f[C]
+\-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
+\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+\f[R]
+.fi
+.PP
+However, changes made directly on the cloud storage by the web interface
+or a different copy of rclone will only be picked up once the directory
+cache expires if the backend configured does not support polling for
+changes.
+If the backend supports polling, changes will be picked up within the
+polling interval.
+.PP
+You can send a \f[C]SIGHUP\f[R] signal to rclone for it to flush all
+directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+.IP
+.nf
+\f[C]
+kill \-SIGHUP $(pidof rclone)
+\f[R]
+.fi
+.PP
+If you configure rclone with a remote control (/rc) then you can use
+rclone rc to flush the whole directory cache:
+.IP
+.nf
+\f[C]
+rclone rc vfs/forget
+\f[R]
+.fi
+.PP
+Or individual files or directories:
+.IP
+.nf
+\f[C]
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+\f[R]
+.fi
+.SS VFS File Buffering
+.PP
+The \f[C]\-\-buffer\-size\f[R] flag determines the amount of memory
+that will be used to buffer data in advance.
+.PP
+Each open file will try to keep the specified amount of data in memory
+at all times.
+The buffered data is bound to one open file and won\[aq]t be shared.
+.PP
+This flag is an upper limit for the memory used per open file.
+The buffer will only use memory for data that is downloaded but not
+yet read.
+If the buffer is empty, only a small amount of memory will be used.
+.PP
+The maximum memory used by rclone for buffering can be up to
+\f[C]\-\-buffer\-size * open files\f[R].
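+.PP
+For example, with the default \f[C]\-\-buffer\-size 16M\f[R] and 10
+files open at once, read buffering alone may use up to about 160 MiB.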
+.SS VFS File Caching
+.PP
+These flags control the VFS file caching options.
+File caching is necessary to make the VFS layer appear compatible with a
+normal file system.
+It can be disabled at the cost of some compatibility.
+.PP
+For example you\[aq]ll need to enable VFS caching if you want to read
+and write simultaneously to a file.
+See below for more details.
+.PP
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+.IP
+.nf
+\f[C]
+\-\-cache\-dir string Directory rclone will use for caching.
+\-\-vfs\-cache\-mode CacheMode Cache mode off|minimal|writes|full (default off)
+\-\-vfs\-cache\-max\-age duration Max age of objects in the cache. (default 1h0m0s)
+\-\-vfs\-cache\-max\-size SizeSuffix Max total size of objects in the cache. (default off)
+\-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+\-\-vfs\-write\-back duration Time to writeback files after last use when using cache. (default 5s)
+\f[R]
+.fi
+.PP
+If run with \f[C]\-vv\f[R] rclone will print the location of the file
+cache.
+The files are stored in the user cache file area which is OS dependent
+but can be controlled with \f[C]\-\-cache\-dir\f[R] or setting the
+appropriate environment variable.
+.PP
+The cache has 4 different modes selected by
+\f[C]\-\-vfs\-cache\-mode\f[R].
+The higher the cache mode the more compatible rclone becomes at the cost
+of using disk space.
+.PP
+Note that files are written back to the remote only when they are closed
+and if they haven\[aq]t been accessed for \-\-vfs\-write\-back seconds.
+If rclone is quit or dies with files that haven\[aq]t been uploaded,
+these will be uploaded next time rclone is run with the same flags.
+.PP
+If using \f[C]\-\-vfs\-cache\-max\-size\f[R] note that the cache may
+exceed this size for two reasons.
+Firstly because it is only checked every
+\f[C]\-\-vfs\-cache\-poll\-interval\f[R].
+Secondly because open files cannot be evicted from the cache.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]\-\-vfs\-cache\-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]\-\-cache\-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
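+.PP
+For example (mount points and cache paths are illustrative), two
+instances can be kept safe by giving each its own cache hierarchy:
+.IP
+.nf
+\f[C]
+rclone mount remote: /mnt/a \-\-vfs\-cache\-mode writes \-\-cache\-dir /var/cache/rclone\-a
+rclone mount remote: /mnt/b \-\-vfs\-cache\-mode writes \-\-cache\-dir /var/cache/rclone\-b
+\f[R]
+.fi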
+.SS \-\-vfs\-cache\-mode off
+.PP
+In this mode (the default) the cache will read directly from the remote
+and write directly to the remote without caching anything on disk.
+.PP
+This will mean some operations are not possible
+.IP \[bu] 2
+Files can\[aq]t be opened for both read AND write
+.IP \[bu] 2
+Files opened for write can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files open for read with O_TRUNC will be opened write only
+.IP \[bu] 2
+Files open for write only will behave as if O_TRUNC was supplied
+.IP \[bu] 2
+Open modes O_APPEND, O_TRUNC are ignored
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode minimal
+.PP
+This is very similar to \[dq]off\[dq] except that files opened for read
+AND write will be buffered to disk.
+This means that files opened for write will be a lot more compatible,
+but uses minimal disk space.
+.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+.SS \-\-vfs\-cache\-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When data is read from the remote this is buffered to disk as well.
+.PP
+In this mode the files in the cache will be sparse files and rclone will
+keep track of which bits of the files it has downloaded.
+.PP
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file.
+These files will appear to be their full size in the cache, but they
+will be sparse files with only the data that has been downloaded present
+in them.
+.PP
+This mode should support all normal file system operations and is
+otherwise identical to \-\-vfs\-cache\-mode writes.
+.PP
+When reading a file rclone will read \-\-buffer\-size plus
+\-\-vfs\-read\-ahead bytes ahead.
+The \-\-buffer\-size is buffered in memory whereas the
+\-\-vfs\-read\-ahead is buffered on disk.
+.PP
+When using this mode it is recommended that \-\-buffer\-size is not set
+too big and \-\-vfs\-read\-ahead is set large if required.
+.PP
+\f[B]IMPORTANT\f[R] not all file systems support sparse files.
+In particular FAT/exFAT do not.
+Rclone will perform very badly if the cache directory is on a filesystem
+which doesn\[aq]t support sparse files and it will log an ERROR message
+if one is detected.
+.SS VFS Performance
+.PP
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons.
+.PP
+In particular S3 and Swift benefit hugely from the \-\-no\-modtime flag
+(or use \-\-use\-server\-modtime for a slightly different effect) as
+each read of the modification time takes a transaction.
+.IP
+.nf
+\f[C]
+\-\-no\-checksum Don\[aq]t compare checksums on up/download.
+\-\-no\-modtime Don\[aq]t read/write the modification time (can speed things up).
+\-\-no\-seek Don\[aq]t allow seeking in files.
+\-\-read\-only Mount read\-only.
+\f[R]
+.fi
+.PP
+When rclone reads files from a remote it reads them in chunks.
+This means that rather than requesting the whole file rclone reads the
+chunk specified.
+This is advantageous because some cloud providers account for reads
+being all the data requested, not all the data delivered.
+.PP
+Rclone will keep doubling the chunk size requested starting at
+\-\-vfs\-read\-chunk\-size with a maximum of
+\-\-vfs\-read\-chunk\-size\-limit unless it is set to \[dq]off\[dq] in
+which case there will be no limit.
+.IP
+.nf
+\f[C]
+\-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
+\-\-vfs\-read\-chunk\-size\-limit SizeSuffix Max chunk doubling size (default \[dq]off\[dq])
+\f[R]
+.fi
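+.PP
+For example, with \f[C]\-\-vfs\-read\-chunk\-size 64M\f[R] and
+\f[C]\-\-vfs\-read\-chunk\-size\-limit 512M\f[R], successive chunks
+will be requested at sizes 64M, 128M, 256M, 512M, then 512M for the
+remainder of the file.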
+.PP
+Sometimes rclone is delivered reads or writes out of order.
+Rather than seeking rclone will wait a short time for the in sequence
+read or write to come in.
+These flags only come into effect when not using an on disk cache file.
+.IP
+.nf
+\f[C]
+\-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
+\-\-vfs\-write\-wait duration Time to wait for in\-sequence write before giving error. (default 1s)
+\f[R]
+.fi
+.PP
+When using VFS write caching (\-\-vfs\-cache\-mode with value writes or
+full), the global flag \-\-transfers can be set to adjust the number of
+parallel uploads of modified files from cache (the related global flag
+\-\-checkers has no effect on mount).
+.IP
+.nf
+\f[C]
+\-\-transfers int Number of file transfers to run in parallel. (default 4)
+\f[R]
+.fi
+.SS VFS Case Sensitivity
+.PP
+Linux file systems are case\-sensitive: two files can differ only by
+case, and the exact case must be used when opening a file.
+.PP
+File systems in modern Windows are case\-insensitive but
+case\-preserving: although existing files can be opened using any case,
+the exact case used to create the file is preserved and available for
+programs to query.
+It is not allowed for two files in the same directory to differ only by
+case.
+.PP
+Usually file systems on macOS are case\-insensitive.
+It is possible to make macOS file systems case\-sensitive but that is
+not the default.
+.PP
+The \f[C]\-\-vfs\-case\-insensitive\f[R] mount flag controls how rclone
+handles these two cases.
+If its value is \[dq]false\[dq], rclone passes file names to the mounted
+file system as\-is.
+If the flag is \[dq]true\[dq] (or appears without a value on command
+line), rclone may perform a \[dq]fixup\[dq] as explained below.
+.PP
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on mounted file system.
+If an argument refers to an existing file with exactly the same name,
+then the case of the existing file on the disk will be used.
+However, if an exact match is not found but a name differing only by
+case exists, rclone will transparently fix up the name.
+This fixup happens only when an existing file is requested.
+Case sensitivity of file names created anew by rclone is controlled by
+an underlying mounted file system.
+.PP
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system mounted by
+rclone (the source).
+The flag controls whether \[dq]fixup\[dq] is performed to satisfy the
+target.
+.PP
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: \[dq]true\[dq] on
+Windows and macOS, \[dq]false\[dq] otherwise.
+If the flag is provided without a value, then it is \[dq]true\[dq].
+.SS Alternate report of used bytes
+.PP
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running \f[C]df\f[R]
+on the filesystem, then pass the flag \f[C]\-\-vfs\-used\-is\-size\f[R]
+to rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to
+\f[C]rclone size\f[R] and compute the total used space itself.
+.PP
+\f[I]WARNING.\f[R] Contrary to \f[C]rclone size\f[R], this flag ignores
+filters so that the result is accurate.
+However, this is very inefficient and may cost lots of API calls
+resulting in extra charges.
+Use it as a last resort and only with caching.
+.IP
+.nf
+\f[C]
+rclone serve docker [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+ \-\-allow\-non\-empty Allow mounting over a non\-empty directory. Not supported on Windows.
+ \-\-allow\-other Allow access to other users. Not supported on Windows.
+ \-\-allow\-root Allow access to root user. Not supported on Windows.
+ \-\-async\-read Use asynchronous reads. Not supported on Windows. (default true)
+ \-\-attr\-timeout duration Time for which file/directory attributes are cached. (default 1s)
+ \-\-base\-dir string base directory for volumes (default \[dq]/var/lib/docker\-volumes/rclone\[dq])
+ \-\-daemon Run mount as a daemon (background mode). Not supported on Windows.
+ \-\-daemon\-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
+ \-\-debug\-fuse Debug the FUSE internals \- needs \-v.
+ \-\-default\-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
+ \-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
+ \-\-dir\-perms FileMode Directory permissions (default 0777)
+ \-\-file\-perms FileMode File permissions (default 0666)
+ \-\-forget\-state skip restoring previous state
+ \-\-fuse\-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ \-\-gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ \-h, \-\-help help for docker
+ \-\-max\-read\-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
+ \-\-network\-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
+ \-\-no\-checksum Don\[aq]t compare checksums on up/download.
+ \-\-no\-modtime Don\[aq]t read/write the modification time (can speed things up).
+ \-\-no\-seek Don\[aq]t allow seeking in files.
+ \-\-no\-spec do not write spec file
+ \-\-noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
+ \-\-noapplexattr Ignore all \[dq]com.apple.*\[dq] extended attributes. Supported on OSX only.
+ \-o, \-\-option stringArray Option for libfuse/WinFsp. Repeat if required.
+ \-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ \-\-read\-only Mount read\-only.
+ \-\-socket\-addr string or absolute path (default: /run/docker/plugins/rclone.sock)
+ \-\-socket\-gid int GID for unix socket (default: current process GID) (default 1000)
+ \-\-uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
+ \-\-umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
+ \-\-vfs\-cache\-max\-age duration Max age of objects in the cache. (default 1h0m0s)
+ \-\-vfs\-cache\-max\-size SizeSuffix Max total size of objects in the cache. (default off)
+ \-\-vfs\-cache\-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ \-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ \-\-vfs\-case\-insensitive If a file name not found, find a case insensitive match.
+ \-\-vfs\-read\-ahead SizeSuffix Extra read ahead over \-\-buffer\-size when using cache\-mode full.
+ \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
+ \-\-vfs\-read\-chunk\-size\-limit SizeSuffix If greater than \-\-vfs\-read\-chunk\-size, double the chunk size after each chunk read, until the limit is reached. \[aq]off\[aq] is unlimited. (default off)
+ \-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
+ \-\-vfs\-used\-is\-size rclone size Use the rclone size algorithm for Used size.
+ \-\-vfs\-write\-back duration Time to writeback files after last use when using cache. (default 5s)
+ \-\-vfs\-write\-wait duration Time to wait for in\-sequence write before giving error. (default 1s)
+ \-\-volname string Set the volume name. Supported on Windows and OSX only.
+ \-\-write\-back\-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
+\f[R]
+.fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
+.SS SEE ALSO
+.IP \[bu] 2
+rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a
+remote over a protocol.
.SH rclone serve ftp
.PP
Serve remote:path over FTP.
@@ -5191,7 +6174,7 @@ cache.
.nf
\f[C]
\-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
-\-\-poll\-interval duration Time to wait between polling for changes.
+\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\f[R]
.fi
.PP
@@ -5621,7 +6604,7 @@ rclone serve ftp remote:path [flags]
\-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
\-\-vfs\-case\-insensitive If a file name not found, find a case insensitive match.
\-\-vfs\-read\-ahead SizeSuffix Extra read ahead over \-\-buffer\-size when using cache\-mode full.
- \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128M)
+ \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
\-\-vfs\-read\-chunk\-size\-limit SizeSuffix If greater than \-\-vfs\-read\-chunk\-size, double the chunk size after each chunk read, until the limit is reached. \[aq]off\[aq] is unlimited. (default off)
\-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
\-\-vfs\-used\-is\-size rclone size Use the rclone size algorithm for Used size.
@@ -5657,8 +6640,8 @@ Use \-\-stats to control the stats printing.
.SS Server options
.PP
Use \-\-addr to specify which IP address and port the server should
-listen on, e.g.
-\-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all IPs.
+listen on, e.g.
+\-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all
+IPs.
By default it only listens on localhost.
You can use port :0 to let the OS choose an available port.
.PP
@@ -5681,7 +6664,18 @@ Rclone automatically inserts leading and trailing \[dq]/\[dq] on
\-\-baseurl, so \-\-baseurl \[dq]rclone\[dq], \-\-baseurl
\[dq]/rclone\[dq] and \-\-baseurl \[dq]/rclone/\[dq] are all treated
identically.
+.SS SSL/TLS
.PP
+By default this will serve over http.
+If you want you can serve over https.
+You will need to supply the \-\-cert and \-\-key flags.
+If you wish to do client side certificate validation then you will need
+to supply \-\-client\-ca also.
+.PP
+\-\-cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate.
+\-\-key should be the PEM encoded private key and \-\-client\-ca should
+be the PEM encoded client certificate authority certificate.
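As a rough sketch of the flags described above (the remote name, path and port are placeholders, not from this manual), a self-signed certificate can be generated with openssl and then passed to a serve command:

```shell
# Generate a throwaway self-signed certificate and key (CN=localhost is
# an arbitrary choice for local testing):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" -keyout key.pem -out cert.pem

# A serve command would then take both flags, e.g. (placeholder remote):
#   rclone serve webdav remote:path --addr :8081 --cert cert.pem --key key.pem

# Show the certificate subject to confirm it was created:
openssl x509 -in cert.pem -noout -subject
```

For client certificate validation, a separately generated CA certificate would additionally be passed via \-\-client\-ca.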
\-\-template allows a user to specify a custom markup template for http
and webdav serve functions.
The server exports the following markup to be used within the template
@@ -5803,18 +6797,6 @@ htpasswd \-B htpasswd anotherUser
The password file can be updated while rclone is running.
.PP
Use \-\-realm to set the authentication realm.
-.SS SSL/TLS
-.PP
-By default this will serve over http.
-If you want you can serve over https.
-You will need to supply the \-\-cert and \-\-key flags.
-If you wish to do client side certificate validation then you will need
-to supply \-\-client\-ca also.
-.PP
-\-\-cert should be either a PEM encoded certificate or a concatenation
-of that with the CA certificate.
-\-\-key should be the PEM encoded private key and \-\-client\-ca should
-be the PEM encoded client certificate authority certificate.
.SS VFS \- Virtual File System
.PP
This command uses the VFS layer.
@@ -5840,7 +6822,7 @@ cache.
.nf
\f[C]
\-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
-\-\-poll\-interval duration Time to wait between polling for changes.
+\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\f[R]
.fi
.PP
@@ -6151,7 +7133,7 @@ rclone serve http remote:path [flags]
.IP
.nf
\f[C]
- \-\-addr string IPaddress:Port or :Port to bind server to. (default \[dq]localhost:8080\[dq])
+ \-\-addr string IPaddress:Port or :Port to bind server to. (default \[dq]127.0.0.1:8080\[dq])
\-\-baseurl string Prefix for URLs \- leave blank for root.
\-\-cert string SSL PEM key (concatenation of certificate and CA certificate)
\-\-client\-ca string Client certificate authority to verify clients with
@@ -6169,7 +7151,7 @@ rclone serve http remote:path [flags]
\-\-pass string Password for authentication.
\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\-\-read\-only Mount read\-only.
- \-\-realm string realm for authentication (default \[dq]rclone\[dq])
+ \-\-realm string realm for authentication
\-\-server\-read\-timeout duration Timeout for server reading data (default 1h0m0s)
\-\-server\-write\-timeout duration Timeout for server writing data (default 1h0m0s)
\-\-template string User Specified Template.
@@ -6182,7 +7164,7 @@ rclone serve http remote:path [flags]
\-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
\-\-vfs\-case\-insensitive If a file name not found, find a case insensitive match.
\-\-vfs\-read\-ahead SizeSuffix Extra read ahead over \-\-buffer\-size when using cache\-mode full.
- \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128M)
+ \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
\-\-vfs\-read\-chunk\-size\-limit SizeSuffix If greater than \-\-vfs\-read\-chunk\-size, double the chunk size after each chunk read, until the limit is reached. \[aq]off\[aq] is unlimited. (default off)
\-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
\-\-vfs\-used\-is\-size rclone size Use the rclone size algorithm for Used size.
@@ -6532,6 +7514,15 @@ reachable externally then supply \[dq]\-\-addr :2022\[dq] for example.
.PP
Note that the default of \[dq]\-\-vfs\-cache\-mode off\[dq] is fine for
the rclone sftp backend, but it may not be with other SFTP clients.
+.PP
+If \-\-stdio is specified, rclone will serve SFTP over stdio, which can
+be used with sshd via \[ti]/.ssh/authorized_keys, for example:
+.IP
+.nf
+\f[C]
+restrict,command=\[dq]rclone serve sftp \-\-stdio ./photos\[dq] ssh\-rsa ...
+\f[R]
+.fi
.SS VFS \- Virtual File System
.PP
This command uses the VFS layer.
@@ -6557,7 +7548,7 @@ cache.
.nf
\f[C]
\-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
-\-\-poll\-interval duration Time to wait between polling for changes.
+\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\f[R]
.fi
.PP
@@ -6977,6 +7968,7 @@ rclone serve sftp remote:path [flags]
\-\-pass string Password for authentication.
\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\-\-read\-only Mount read\-only.
+ \-\-stdio Run an sftp server on stdin/stdout
\-\-uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
\-\-umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
\-\-user string User name for authentication.
@@ -6986,7 +7978,7 @@ rclone serve sftp remote:path [flags]
\-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
\-\-vfs\-case\-insensitive If a file name not found, find a case insensitive match.
\-\-vfs\-read\-ahead SizeSuffix Extra read ahead over \-\-buffer\-size when using cache\-mode full.
- \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128M)
+ \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
\-\-vfs\-read\-chunk\-size\-limit SizeSuffix If greater than \-\-vfs\-read\-chunk\-size, double the chunk size after each chunk read, until the limit is reached. \[aq]off\[aq] is unlimited. (default off)
\-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
\-\-vfs\-used\-is\-size rclone size Use the rclone size algorithm for Used size.
@@ -7208,7 +8200,7 @@ cache.
.nf
\f[C]
\-\-dir\-cache\-time duration Time to cache directory entries for. (default 5m0s)
-\-\-poll\-interval duration Time to wait between polling for changes.
+\-\-poll\-interval duration Time to wait between polling for changes. Must be smaller than dir\-cache\-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
\f[R]
.fi
.PP
@@ -7646,7 +8638,7 @@ rclone serve webdav remote:path [flags]
\-\-vfs\-cache\-poll\-interval duration Interval to poll the cache for stale objects. (default 1m0s)
\-\-vfs\-case\-insensitive If a file name not found, find a case insensitive match.
\-\-vfs\-read\-ahead SizeSuffix Extra read ahead over \-\-buffer\-size when using cache\-mode full.
- \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128M)
+ \-\-vfs\-read\-chunk\-size SizeSuffix Read the source objects in chunks. (default 128Mi)
\-\-vfs\-read\-chunk\-size\-limit SizeSuffix If greater than \-\-vfs\-read\-chunk\-size, double the chunk size after each chunk read, until the limit is reached. \[aq]off\[aq] is unlimited. (default off)
\-\-vfs\-read\-wait duration Time to wait for in\-sequence read before seeking. (default 20ms)
\-\-vfs\-used\-is\-size rclone size Use the rclone size algorithm for Used size.
@@ -7757,6 +8749,10 @@ rclone (https://rclone.org/commands/rclone/) \- Show help for rclone
commands, flags and backends.
.IP \[bu] 2
rclone test
+changenotify (https://rclone.org/commands/rclone_test_changenotify/) \-
+Log any change notify requests for the remote passed in.
+.IP \[bu] 2
+rclone test
histogram (https://rclone.org/commands/rclone_test_histogram/) \- Makes
a histogram of file name characters.
.IP \[bu] 2
@@ -7765,12 +8761,34 @@ Discovers file name or other limitations for paths.
.IP \[bu] 2
rclone test
makefiles (https://rclone.org/commands/rclone_test_makefiles/) \- Make a
-random file hierarchy in
-.RS 2
-.RE
+random file hierarchy in a directory
.IP \[bu] 2
rclone test memory (https://rclone.org/commands/rclone_test_memory/) \-
Load all the objects at remote:path into memory and report memory stats.
+.SH rclone test changenotify
+.PP
+Log any change notify requests for the remote passed in.
+.IP
+.nf
+\f[C]
+rclone test changenotify remote: [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+ \-h, \-\-help help for changenotify
+ \-\-poll\-interval duration Time to wait between polling for changes. (default 10s)
+\f[R]
+.fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
+.SS SEE ALSO
+.IP \[bu] 2
+rclone test (https://rclone.org/commands/rclone_test/) \- Run a test
+command
.SH rclone test histogram
.PP
Makes a histogram of file name characters.
@@ -7842,7 +8860,8 @@ not listed here.
rclone test (https://rclone.org/commands/rclone_test/) \- Run a test
command
.SH rclone test makefiles
-Make a random file hierarchy in
+.PP
+Make a random file hierarchy in a directory
.IP
.nf
\f[C]
@@ -7860,6 +8879,7 @@ rclone test makefiles [flags]
\-\-max\-name\-length int Maximum size of file names (default 12)
\-\-min\-file\-size SizeSuffix Minimum size of file to create
\-\-min\-name\-length int Minimum size of file names (default 4)
+ \-\-seed int Seed for the random number generator (0 for random) (default 1)
\f[R]
.fi
.PP
@@ -8065,7 +9085,7 @@ This refers to the local file system.
.PP
On Windows \f[C]\[rs]\f[R] may be used instead of \f[C]/\f[R] in local
paths \f[B]only\f[R], non local paths must use \f[C]/\f[R].
-See local filesystem (https://rclone.org/local/#windows-paths)
+See local filesystem (https://rclone.org/local/#paths-on-windows)
documentation for more about Windows\-specific paths.
.PP
These paths needn\[aq]t start with a leading \f[C]/\f[R] \- if they
@@ -8157,7 +9177,7 @@ rclone lsf \[dq]gdrive,shared_with_me:path/to/dir\[dq]
.fi
.PP
The major advantage to using the connection string style syntax is that
-it only applies the the remote, not to all the remotes of that type of
+it only applies to the remote, not to all the remotes of that type on
the command line.
A common confusion is this attempt to copy a file shared on google drive
to the normal drive which \f[B]does not work\f[R] because the
@@ -8178,6 +9198,18 @@ rclone copy \[dq]gdrive,shared_with_me:shared\-file.txt\[dq] gdrive:
\f[R]
.fi
.PP
+Note that the connection string only affects the options of the
+immediate backend.
+If for example gdriveCrypt is a crypt based on gdrive, then the
+following command \f[B]will not work\f[R] as intended, because
+\f[C]shared_with_me\f[R] is ignored by the crypt backend:
+.IP
+.nf
+\f[C]
+rclone copy \[dq]gdriveCrypt,shared_with_me:shared\-file.txt\[dq] gdriveCrypt:
+\f[R]
+.fi
+.PP
The connection strings have the following syntax
.IP
.nf
@@ -8438,10 +9470,10 @@ with optional fraction and a unit suffix, such as \[dq]300ms\[dq],
Valid time units are \[dq]ns\[dq], \[dq]us\[dq] (or \[dq]\[mc]s\[dq]),
\[dq]ms\[dq], \[dq]s\[dq], \[dq]m\[dq], \[dq]h\[dq].
.PP
-Options which use SIZE use kByte by default.
-However, a suffix of \f[C]b\f[R] for bytes, \f[C]k\f[R] for kBytes,
-\f[C]M\f[R] for MBytes, \f[C]G\f[R] for GBytes, \f[C]T\f[R] for TBytes
-and \f[C]P\f[R] for PBytes may be used.
+Options which use SIZE use KiByte (multiples of 1024 bytes) by default.
+However, a suffix of \f[C]B\f[R] for Byte, \f[C]K\f[R] for KiByte,
+\f[C]M\f[R] for MiByte, \f[C]G\f[R] for GiByte, \f[C]T\f[R] for TiByte
+and \f[C]P\f[R] for PiByte may be used.
These are the binary units, e.g.
1, 2**10, 2**20, 2**30, 2**40 and 2**50 respectively.
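As a quick check of those multipliers, an awk one-liner (a shell sketch, not part of rclone) prints each suffix with its byte value:

```shell
# Binary multipliers for the SIZE suffixes B, K, M, G, T, P:
# prints B=1, K=1024, M=1048576, ..., P=1125899906842624
awk 'BEGIN { for (i = 0; i <= 5; i++)
  printf "%s=%.0f\n", substr("BKMGTP", i + 1, 1), 2^(10 * i) }'
```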
.SS \-\-backup\-dir=DIR
@@ -8495,11 +9527,11 @@ For example
\f[R]
.fi
.PP
-would mean limit the upload and download bandwidth to 10 MByte/s.
+would mean limit the upload and download bandwidth to 10 MiByte/s.
\f[B]NB\f[R] this is \f[B]bytes\f[R] per second not \f[B]bits\f[R] per
second.
-To use a single limit, specify the desired bandwidth in kBytes/s, or use
-a suffix b|k|M|G.
+To use a single limit, specify the desired bandwidth in KiByte/s, or use
+a suffix B|K|M|G|T|P.
The default is \f[C]0\f[R] which means to not limit bandwidth.
.PP
The upload and download bandwidth can be specified separately, as
@@ -8511,8 +9543,8 @@ The upload and download bandwidth can be specified seperately, as
\f[R]
.fi
.PP
-would mean limit the upload bandwidth to 10 MByte/s and the download
-bandwidth to 100 kByte/s.
+would mean limit the upload bandwidth to 10 MiByte/s and the download
+bandwidth to 100 KiByte/s.
Either limit can be \[dq]off\[dq] meaning no limit, so to just limit the
upload bandwidth you would use
.IP
@@ -8522,7 +9554,7 @@ upload bandwidth you would use
\f[R]
.fi
.PP
-this would limit the upload bandwidth to 10MByte/s but the download
+this would limit the upload bandwidth to 10 MiByte/s but the download
bandwidth would be unlimited.
.PP
When specified as above the bandwidth limits last for the duration of
@@ -8548,11 +9580,11 @@ daytime working hours could be:
.PP
\f[C]\-\-bwlimit \[dq]08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off\[dq]\f[R]
.PP
-In this example, the transfer bandwidth will be set to 512kBytes/sec at
+In this example, the transfer bandwidth will be set to 512 KiByte/s at
8am every day.
-At noon, it will rise to 10MByte/s, and drop back to 512kBytes/sec at
+At noon, it will rise to 10 MiByte/s, and drop back to 512 KiByte/s at
1pm.
-At 6pm, the bandwidth limit will be set to 30MByte/s, and at 11pm it
+At 6pm, the bandwidth limit will be set to 30 MiByte/s, and at 11pm it
will be completely disabled (full speed).
Anything between 11pm and 8am will remain unlimited.
.PP
@@ -8560,10 +9592,10 @@ An example of timetable with \f[C]WEEKDAY\f[R] could be:
.PP
\f[C]\-\-bwlimit \[dq]Mon\-00:00,512 Fri\-23:59,10M Sat\-10:00,1M Sun\-20:00,off\[dq]\f[R]
.PP
-It means that, the transfer bandwidth will be set to 512kBytes/sec on
+It means that the transfer bandwidth will be set to 512 KiByte/s on
Monday.
-It will rise to 10MByte/s before the end of Friday.
-At 10:00 on Saturday it will be set to 1MByte/s.
+It will rise to 10 MiByte/s before the end of Friday.
+At 10:00 on Saturday it will be set to 1 MiByte/s.
From 20:00 on Sunday it will be unlimited.
.PP
Timeslots without \f[C]WEEKDAY\f[R] are extended to the whole week.
@@ -8580,11 +9612,11 @@ For most backends the directory listing bandwidth is also included
(exceptions being the non HTTP backends, \f[C]ftp\f[R], \f[C]sftp\f[R]
and \f[C]tardigrade\f[R]).
.PP
-Note that the units are \f[B]Bytes/s\f[R], not \f[B]Bits/s\f[R].
-Typically connections are measured in Bits/s \- to convert divide by 8.
+Note that the units are \f[B]Byte/s\f[R], not \f[B]bit/s\f[R].
+Typically connections are measured in bit/s \- to convert divide by 8.
For example, let\[aq]s say you have a 10 Mbit/s connection and you wish
rclone to use half of it \- 5 Mbit/s.
-This is 5/8 = 0.625MByte/s so you would use a
+This is 5/8 = 0.625 MiByte/s so you would use a
\f[C]\-\-bwlimit 0.625M\f[R] parameter for rclone.
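The divide-by-8 step above can be scripted; the remote and path in the comment are placeholders:

```shell
# --bwlimit takes bytes per second, so convert a bit rate by dividing by 8:
awk 'BEGIN { printf "%.3f\n", 5 / 8 }'   # prints 0.625
# then pass the result as the flag value, e.g.:
#   rclone copy remote:src /local/dst --bwlimit 0.625M
```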
.PP
On Unix systems (Linux, macOS, \&...) the bandwidth limiter can be
@@ -8614,7 +9646,7 @@ rclone rc core/bwlimit rate=1M
This option controls per file bandwidth limit.
For the options see the \f[C]\-\-bwlimit\f[R] flag.
.PP
-For example use this to allow no transfers to be faster than 1MByte/s
+For example use this to allow no transfers to be faster than 1 MiByte/s
.IP
.nf
\f[C]
@@ -8703,27 +9735,70 @@ The compare directory must not overlap the destination directory.
See \f[C]\-\-copy\-dest\f[R] and \f[C]\-\-backup\-dir\f[R].
.SS \-\-config=CONFIG_FILE
.PP
-Specify the location of the rclone configuration file.
+Specify the location of the rclone configuration file, to override the
+default.
+E.g.
+\f[C]rclone config \-\-config=\[dq]rclone.conf\[dq]\f[R].
.PP
-Normally the config file is in your home directory as a file called
-\f[C].config/rclone/rclone.conf\f[R] (or \f[C].rclone.conf\f[R] if
-created with an older version).
-If \f[C]$XDG_CONFIG_HOME\f[R] is set it will be at
-\f[C]$XDG_CONFIG_HOME/rclone/rclone.conf\f[R].
+The exact default is a bit complex to describe, due to changes
+introduced through different versions of rclone while preserving
+backwards compatibility, but in most cases it is as simple as:
+.IP \[bu] 2
+\f[C]%APPDATA%/rclone/rclone.conf\f[R] on Windows
+.IP \[bu] 2
+\f[C]\[ti]/.config/rclone/rclone.conf\f[R] on other operating systems
.PP
-If there is a file \f[C]rclone.conf\f[R] in the same directory as the
-rclone executable it will be preferred.
-This file must be created manually for Rclone to use it, it will never
-be created automatically.
+The complete logic is as follows: Rclone will look for an existing
+configuration file in any of the following locations, in priority order:
+.IP "1." 3
+\f[C]rclone.conf\f[R] (in program directory, where rclone executable is)
+.IP "2." 3
+\f[C]%APPDATA%/rclone/rclone.conf\f[R] (only on Windows)
+.IP "3." 3
+\f[C]$XDG_CONFIG_HOME/rclone/rclone.conf\f[R] (on all systems, including
+Windows)
+.IP "4." 3
+\f[C]\[ti]/.config/rclone/rclone.conf\f[R] (see below for explanation of
+\[ti] symbol)
+.IP "5." 3
+\f[C]\[ti]/.rclone.conf\f[R]
+.PP
+If no existing configuration file is found, then a new one will be
+created in the following location:
+.IP \[bu] 2
+On Windows: Location 2 listed above, except in the unlikely event that
+\f[C]APPDATA\f[R] is not defined, then location 4 is used instead.
+.IP \[bu] 2
+On Unix: Location 3 if \f[C]XDG_CONFIG_HOME\f[R] is defined, else
+location 4.
+.IP \[bu] 2
+Fallback to location 5 (on all OSes) when the rclone directory cannot
+be created; if no home directory was found either, then the path
+\f[C].rclone.conf\f[R] relative to the current working directory is
+used as a last resort.
+.PP
+The \f[C]\[ti]\f[R] symbol in paths above represents the home directory
+of the current user on any OS, and its value is determined as follows:
+.IP \[bu] 2
+On Windows: \f[C]%HOME%\f[R] if defined, else \f[C]%USERPROFILE%\f[R],
+or else \f[C]%HOMEDRIVE%\[rs]%HOMEPATH%\f[R].
+.IP \[bu] 2
+On Unix: \f[C]$HOME\f[R] if defined, else by looking up current user in
+OS\-specific user database (e.g.
+passwd file), or else use the result from shell command
+\f[C]cd && pwd\f[R].
.PP
If you run \f[C]rclone config file\f[R] you will see where the default
location is for you.
.PP
-Use this flag to override the config location, e.g.
-\f[C]rclone \-\-config=\[dq].myconfig\[dq] .config\f[R].
+The fact that an existing file \f[C]rclone.conf\f[R] in the same
+directory as the rclone executable is always preferred, means that it is
+easy to run in \[dq]portable\[dq] mode by downloading rclone executable
+to a writable directory and then create an empty file
+\f[C]rclone.conf\f[R] in the same directory.
.PP
-If the location is set to empty string \f[C]\[dq]\[dq]\f[R] or the
-special value \f[C]/notfound\f[R], or the os null device represented by
+If the location is set to empty string \f[C]\[dq]\[dq]\f[R] or path to a
+file with name \f[C]notfound\f[R], or the OS null device represented by
value \f[C]NUL\f[R] on Windows and \f[C]/dev/null\f[R] on Unix systems,
then rclone will keep the config file in memory only.
.PP
@@ -8830,8 +9905,8 @@ get an idea of which feature does what.
.PP
This flag can be useful for debugging and in exceptional circumstances
(e.g.
-Google Drive limiting the total volume of Server Side Copies to
-100GB/day).
+Google Drive limiting the total volume of Server Side Copies to 100
+GiB/day).
.SS \-\-dscp VALUE
.PP
Specify a DSCP value or name to use in connections.
@@ -8855,6 +9930,9 @@ rclone copy \-\-dscp LE from:/from to:/to
.fi
.PP
would make the priority lower than usual internet flows.
+.PP
+This option has no effect on Windows (see
+golang/go#42728 (https://github.com/golang/go/issues/42728)).
.SS \-n, \-\-dry\-run
.PP
Do a trial run with no permanent changes.
@@ -9139,8 +10217,8 @@ queued for being checked or transferred.
.PP
This can be set arbitrarily large.
It will only use memory when the queue is in use.
-Note that it will use in the order of N kB of memory when the backlog is
-in use.
+Note that it will use in the order of N KiB of memory when the backlog
+is in use.
.PP
Setting this large allows rclone to calculate how many files are pending
more accurately, give a more accurate estimated finish time and make
@@ -9269,16 +10347,16 @@ To calculate the number of download streams Rclone divides the size of
the file by the \f[C]\-\-multi\-thread\-cutoff\f[R] and rounds up, up to
the maximum set with \f[C]\-\-multi\-thread\-streams\f[R].
.PP
-So if \f[C]\-\-multi\-thread\-cutoff 250MB\f[R] and
+So if \f[C]\-\-multi\-thread\-cutoff 250M\f[R] and
\f[C]\-\-multi\-thread\-streams 4\f[R] are in effect (the defaults):
.IP \[bu] 2
-0MB..250MB files will be downloaded with 1 stream
+0..250 MiB files will be downloaded with 1 stream
.IP \[bu] 2
-250MB..500MB files will be downloaded with 2 streams
+250..500 MiB files will be downloaded with 2 streams
.IP \[bu] 2
-500MB..750MB files will be downloaded with 3 streams
+500..750 MiB files will be downloaded with 3 streams
.IP \[bu] 2
-750MB+ files will be downloaded with 4 streams
+750+ MiB files will be downloaded with 4 streams
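The rounding rule above can be sketched in shell (the defaults of a 250M cutoff and 4 streams are assumed; sizes are in MiB):

```shell
# Streams = ceil(size / cutoff), capped at --multi-thread-streams:
streams() {
  awk -v size="$1" -v cutoff=250 -v max=4 'BEGIN {
    n = int((size + cutoff - 1) / cutoff)  # ceiling division
    if (n < 1) n = 1; if (n > max) n = max
    print n
  }'
}
streams 100   # prints 1
streams 300   # prints 2
streams 800   # prints 4
```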
.SS \-\-no\-check\-dest
.PP
The \f[C]\-\-no\-check\-dest\f[R] can be used with \f[C]move\f[R] or
@@ -9604,14 +10682,14 @@ Follow golang specs (https://golang.org/pkg/time/#Time.Format) for date
formatting syntax.
.SS \-\-stats\-unit=bits|bytes
.PP
-By default, data transfer rates will be printed in bytes/second.
+By default, data transfer rates will be printed in bytes per second.
.PP
-This option allows the data rate to be printed in bits/second.
+This option allows the data rate to be printed in bits per second.
.PP
Data transfer volume will still be reported in bytes.
.PP
The rate is reported as a binary unit, not SI unit.
-So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
+So 1 Mbit/s equals 1,048,576 bit/s and not 1,000,000 bit/s.
.PP
The default is \f[C]bytes\f[R].
.SS \-\-suffix=SUFFIX
@@ -10098,20 +11176,23 @@ This will make rclone fail instead of asking for a password if
\f[C]RCLONE_CONFIG_PASS\f[R] doesn\[aq]t contain a valid password, and
\f[C]\-\-password\-command\f[R] has not been supplied.
.PP
-Some rclone commands, such as \f[C]genautocomplete\f[R], do not require
+Whenever running commands that may be affected by options in a
+configuration file, rclone will look for an existing file according to
+the rules described above, and load any it finds.
+If an encrypted file is found, this includes decrypting it, with the
+possible consequence of a password prompt.
+When executing a command line that you know is not actually using
+anything from such a configuration file, you can avoid it being loaded
+by overriding the location, e.g.
+with one of the documented special values for memory\-only
configuration.
-Nevertheless, rclone will read any configuration file found according to
-the rules described above (https://rclone.org/docs/#config-config-file).
-If an encrypted configuration file is found, this means you will be
-prompted for password (unless using \f[C]\-\-password\-command\f[R]).
-To avoid this, you can bypass the loading of the configuration file by
-overriding the location with an empty string \f[C]\[dq]\[dq]\f[R] or the
-special value \f[C]/notfound\f[R], or the os null device represented by
-value \f[C]NUL\f[R] on Windows and \f[C]/dev/null\f[R] on Unix systems
-(before rclone version 1.55 only this null device alternative was
-supported).
-E.g.
-\f[C]rclone \-\-config=\[dq]\[dq] genautocomplete bash\f[R].
+Since only backend options can be stored in configuration files, this is
+normally unnecessary for commands that do not operate on backends, e.g.
+\f[C]genautocomplete\f[R].
+However, it will be relevant for commands that do operate on backends in
+general, but are used without referencing a stored remote, e.g.
+listing local filesystem paths, or connection strings:
+\f[C]rclone \-\-config=\[dq]\[dq] ls .\f[R]
.SS Developer options
.PP
These options are useful when developing or debugging rclone.
@@ -10334,6 +11415,10 @@ Or to always use the trash in drive \f[C]\-\-drive\-use\-trash\f[R], set
.PP
The same parser is used for the options and the environment variables so
they take exactly the same form.
+.PP
+The options set by environment variables can be seen with the
+\f[C]\-vv\f[R] flag, e.g.
+\f[C]rclone version \-vv\f[R].
.SS Config file
.PP
You can set defaults for values in the config file on an individual
@@ -10363,7 +11448,13 @@ mys3:
Note that if you want to create a remote using environment variables you
must create the \f[C]..._TYPE\f[R] variable as above.
.PP
-Note also that now rclone has connectionstrings, it is probably easier
+Note that you can only set the options of the immediate backend, so
+RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID has no effect if myS3Crypt is a
+crypt remote based on an S3 remote.
+However RCLONE_S3_ACCESS_KEY_ID will set the access key of all remotes
+using S3, including myS3Crypt.
+.PP
+Note also that now rclone has connection strings, it is probably easier
to use those instead, which makes the above example
.IP
.nf
@@ -10376,24 +11467,34 @@ rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:
The various different methods of backend configuration are read in this
order and the first one with a value is used.
.IP \[bu] 2
+Parameters in connection strings, e.g.
+\f[C]myRemote,skip_links:\f[R]
+.IP \[bu] 2
Flag values as supplied on the command line, e.g.
-\f[C]\-\-drive\-use\-trash\f[R].
+\f[C]\-\-skip\-links\f[R]
.IP \[bu] 2
Remote specific environment vars, e.g.
-\f[C]RCLONE_CONFIG_MYREMOTE_USE_TRASH\f[R] (see above).
+\f[C]RCLONE_CONFIG_MYREMOTE_SKIP_LINKS\f[R] (see above).
.IP \[bu] 2
Backend specific environment vars, e.g.
-\f[C]RCLONE_DRIVE_USE_TRASH\f[R].
+\f[C]RCLONE_LOCAL_SKIP_LINKS\f[R].
+.IP \[bu] 2
+Backend generic environment vars, e.g.
+\f[C]RCLONE_SKIP_LINKS\f[R].
.IP \[bu] 2
Config file, e.g.
-\f[C]use_trash = false\f[R].
+\f[C]skip_links = true\f[R].
.IP \[bu] 2
Default values, e.g.
-\f[C]true\f[R] \- these can\[aq]t be changed.
+\f[C]false\f[R] \- these can\[aq]t be changed.
.PP
-So if both \f[C]\-\-drive\-use\-trash\f[R] is supplied on the config
-line and an environment variable \f[C]RCLONE_DRIVE_USE_TRASH\f[R] is
-set, the command line flag will take preference.
+So if both \f[C]\-\-skip\-links\f[R] is supplied on the command line and
+an environment variable \f[C]RCLONE_LOCAL_SKIP_LINKS\f[R] is set, the
+command line flag will take preference.
+.PP
+The backend configurations set by environment variables can be seen with
+the \f[C]\-vv\f[R] flag, e.g.
+\f[C]rclone about myRemote: \-vv\f[R].
.PP
For non backend configuration the order is as follows:
.IP \[bu] 2
@@ -10422,9 +11523,20 @@ The environment values may be either a complete URL or a
assumed.
.RE
.IP \[bu] 2
\f[C]USER\f[R] and \f[C]LOGNAME\f[R] values are used as fallbacks for
the current username.
The primary method for looking up the username is OS\-specific: the
Windows API on Windows, the real user ID in /etc/passwd on Unix systems.
+In the documentation the current username is simply referred to as
+\f[C]$USER\f[R].
+.IP \[bu] 2
\f[C]RCLONE_CONFIG_DIR\f[R] \- rclone \f[B]sets\f[R] this variable for
use in config files and sub processes to point to the directory holding
the config file.
+.PP
+The options set by environment variables can be seen with the
+\f[C]\-vv\f[R] and \f[C]\-\-log\-level=DEBUG\f[R] flags, e.g.
+\f[C]rclone version \-vv\f[R].
.SH Configuring rclone on a remote / headless machine
.PP
Some of the configurations (those involving oauth2) require an Internet
@@ -10565,15 +11677,15 @@ Rclone matching rules follow a glob style:
.IP
.nf
\f[C]
-\[ga]*\[ga] matches any sequence of non\-separator (\[ga]/\[ga]) characters
-\[ga]**\[ga] matches any sequence of characters including \[ga]/\[ga] separators
-\[ga]?\[ga] matches any single non\-separator (\[ga]/\[ga]) character
-\[ga][\[ga] [ \[ga]!\[ga] ] { character\-range } \[ga]]\[ga]
- character class (must be non\-empty)
-\[ga]{\[ga] pattern\-list \[ga]}\[ga]
- pattern alternatives
-c matches character c (c != \[ga]*\[ga], \[ga]**\[ga], \[ga]?\[ga], \[ga]\[rs]\[ga], \[ga][\[ga], \[ga]{\[ga], \[ga]}\[ga])
-\[ga]\[rs]\[ga] c matches character c
+* matches any sequence of non\-separator (/) characters
+** matches any sequence of characters including / separators
+? matches any single non\-separator (/) character
+[ [ ! ] { character\-range } ]
+ character class (must be non\-empty)
+{ pattern\-list }
+ pattern alternatives
+c matches character c (c != *, **, ?, \[rs], [, {, })
+\[rs]c matches reserved character c (c = *, **, ?, \[rs], [, {, })
\f[R]
.fi
.PP
@@ -10581,9 +11693,9 @@ character\-range:
.IP
.nf
\f[C]
-c matches character c (c != \[ga]\[rs]\[rs]\[ga], \[ga]\-\[ga], \[ga]]\[ga])
-\[ga]\[rs]\[ga] c matches character c
-lo \[ga]\-\[ga] hi matches character c for lo <= c <= hi
+c matches character c (c != \[rs], \-, ])
+\[rs]c matches reserved character c (c = \[rs], \-, ])
+lo \- hi matches character c for lo <= c <= hi
\f[R]
.fi
.PP
@@ -10591,8 +11703,8 @@ pattern\-list:
.IP
.nf
\f[C]
-pattern { \[ga],\[ga] pattern }
- comma\-separated (without spaces) patterns
+pattern { , pattern }
+ comma\-separated (without spaces) patterns
\f[R]
.fi
.PP
@@ -11302,21 +12414,21 @@ The fix then is to quote values containing spaces.
.SS \f[C]\-\-min\-size\f[R] \- Don\[aq]t transfer any file smaller than this
.PP
Controls the minimum size file within the scope of an rclone command.
-Default units are \f[C]kBytes\f[R] but abbreviations \f[C]k\f[R],
-\f[C]M\f[R], or \f[C]G\f[R] are valid.
+Default units are \f[C]KiByte\f[R] but abbreviations \f[C]K\f[R],
+\f[C]M\f[R], \f[C]G\f[R], \f[C]T\f[R] or \f[C]P\f[R] are valid.
.PP
E.g.
\f[C]rclone ls remote: \-\-min\-size 50k\f[R] lists files on
-\f[C]remote:\f[R] of 50kByte size or larger.
+\f[C]remote:\f[R] of 50 KiByte size or larger.
.SS \f[C]\-\-max\-size\f[R] \- Don\[aq]t transfer any file larger than this
.PP
Controls the maximum size file within the scope of an rclone command.
-Default units are \f[C]kBytes\f[R] but abbreviations \f[C]k\f[R],
-\f[C]M\f[R], or \f[C]G\f[R] are valid.
+Default units are \f[C]KiByte\f[R] but abbreviations \f[C]K\f[R],
+\f[C]M\f[R], \f[C]G\f[R], \f[C]T\f[R] or \f[C]P\f[R] are valid.
.PP
E.g.
\f[C]rclone ls remote: \-\-max\-size 1G\f[R] lists files on
-\f[C]remote:\f[R] of 1GByte size or smaller.
+\f[C]remote:\f[R] of 1 GiByte size or smaller.
.SS \f[C]\-\-max\-age\f[R] \- Don\[aq]t transfer any file older than this
.PP
Controls the maximum age of files within the scope of an rclone command.
@@ -11385,7 +12497,7 @@ rclone \-\-min\-size 50k \-\-delete\-excluded sync A: B:
\f[R]
.fi
.PP
-All files on \f[C]B:\f[R] which are less than 50 kBytes are deleted
+All files on \f[C]B:\f[R] which are less than 50 KiByte are deleted
because they are excluded from the rclone sync command.
.SS \f[C]\-\-dump filters\f[R] \- dump the filters to the output
.PP
@@ -12164,9 +13276,24 @@ parameters \- a map of { \[dq]key\[dq]: \[dq]value\[dq] } pairs
.IP \[bu] 2
type \- type of the new remote
.IP \[bu] 2
-obscure \- optional bool \- forces obscuring of passwords
+opt \- a dictionary of options to control the configuration
+.RS 2
.IP \[bu] 2
-noObscure \- optional bool \- forces passwords not to be obscured
+obscure \- declare passwords are plain and need obscuring
+.IP \[bu] 2
+noObscure \- declare passwords are already obscured and don\[aq]t need
+obscuring
+.IP \[bu] 2
+nonInteractive \- don\[aq]t interact with a user, return questions
+.IP \[bu] 2
+continue \- continue the config process with an answer
+.IP \[bu] 2
+all \- ask all the config questions not just the post config ones
+.IP \[bu] 2
+state \- state to restart with \- used with continue
+.IP \[bu] 2
+result \- result to restart with \- used with continue
+.RE
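To make the new `opt` dictionary concrete, here is a minimal sketch of how a `config/create` request body using these options might be assembled. The remote name, backend type, and backend parameters are invented placeholders, and this only shows the JSON shape implied by the parameter list above, not a definitive client.

```python
import json

# Hypothetical config/create request body using the "opt" dictionary.
# "mydrive", "drive" and the client_id value are placeholders.
payload = {
    "name": "mydrive",                   # name of the new remote
    "type": "drive",                     # type of the new remote
    "parameters": {"client_id": "xyz"},  # map of "key": "value" pairs
    "opt": {
        "nonInteractive": True,          # don't prompt; return questions
        "obscure": True,                 # passwords are plain, need obscuring
    },
}
body = json.dumps(payload)
```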
.PP
See the config create
command (https://rclone.org/commands/rclone_config_create/) command for
@@ -12245,9 +13372,24 @@ name \- name of remote
.IP \[bu] 2
parameters \- a map of { \[dq]key\[dq]: \[dq]value\[dq] } pairs
.IP \[bu] 2
-obscure \- optional bool \- forces obscuring of passwords
+opt \- a dictionary of options to control the configuration
+.RS 2
.IP \[bu] 2
-noObscure \- optional bool \- forces passwords not to be obscured
+obscure \- declare passwords are plain and need obscuring
+.IP \[bu] 2
+noObscure \- declare passwords are already obscured and don\[aq]t need
+obscuring
+.IP \[bu] 2
+nonInteractive \- don\[aq]t interact with a user, return questions
+.IP \[bu] 2
+continue \- continue the config process with an answer
+.IP \[bu] 2
+all \- ask all the config questions not just the post config ones
+.IP \[bu] 2
+state \- state to restart with \- used with continue
+.IP \[bu] 2
+result \- result to restart with \- used with continue
+.RE
.PP
See the config update
command (https://rclone.org/commands/rclone_config_update/) command for
@@ -12456,7 +13598,7 @@ Returns the following values:
\[dq]lastError\[dq]: last error string,
\[dq]renames\[dq] : number of files renamed,
\[dq]retryError\[dq]: boolean showing whether there has been at least one non\-NoRetryError,
- \[dq]speed\[dq]: average speed in bytes/sec since start of the group,
+ \[dq]speed\[dq]: average speed in bytes per second since start of the group,
\[dq]totalBytes\[dq]: total number of bytes in the group,
\[dq]totalChecks\[dq]: total number of checks in the group,
\[dq]totalTransfers\[dq]: total number of transfers in the group,
@@ -12469,8 +13611,8 @@ Returns the following values:
\[dq]eta\[dq]: estimated time in seconds until file transfer completion
\[dq]name\[dq]: name of the file,
\[dq]percentage\[dq]: progress of the file transfer in percent,
- \[dq]speed\[dq]: average speed over the whole transfer in bytes/sec,
- \[dq]speedAvg\[dq]: current speed in bytes/sec as an exponentially weighted moving average,
+ \[dq]speed\[dq]: average speed over the whole transfer in bytes per second,
+ \[dq]speedAvg\[dq]: current speed in bytes per second as an exponentially weighted moving average,
\[dq]size\[dq]: size of the file in bytes
}
],
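The `speedAvg` field above is described as an exponentially weighted moving average. As a conceptual illustration only (the smoothing factor 0.5 is an arbitrary choice, not rclone's actual constant), such an average can be computed like this:

```python
# Conceptual exponentially weighted moving average, as used for "speedAvg".
# alpha=0.5 is an arbitrary illustrative smoothing factor.
def ewma(samples, alpha=0.5, initial=0.0):
    avg = initial
    for s in samples:
        avg = alpha * s + (1 - alpha) * avg
    return avg
```

Recent samples dominate the result, so the value tracks the current transfer speed rather than the whole-transfer average reported in `speed`.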
@@ -14154,6 +15296,19 @@ T}@T{
\-
T}
T{
+Uptobox
+T}@T{
+\-
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+\-
+T}
+T{
WebDAV
T}@T{
MD5, SHA1 \[S3]
@@ -14210,7 +15365,7 @@ T}
.PP
\[S1] Dropbox supports its own custom
hash (https://www.dropbox.com/developers/reference/content-hash).
-This is an SHA256 sum of all the 4MB block SHA256s.
+This is an SHA256 sum of all the 4 MiB block SHA256s.
.PP
\[S2] SFTP supports checksums if the same login has shell access and
\f[C]md5sum\f[R] or \f[C]sha1sum\f[R] as well as \f[C]echo\f[R] are in
@@ -15597,6 +16752,29 @@ T}@T{
No
T}
T{
+Uptobox
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}
+T{
WebDAV
T}@T{
Yes
@@ -15789,9 +16967,9 @@ These flags are available for every command.
\-\-auto\-confirm If enabled, do not request console confirmation.
\-\-backup\-dir string Make backups into hierarchy based in DIR.
\-\-bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- \-\-buffer\-size SizeSuffix In memory buffer size when reading files for each \-\-transfer. (default 16M)
- \-\-bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- \-\-bwlimit\-file BwTimetable Bandwidth limit per file in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ \-\-buffer\-size SizeSuffix In memory buffer size when reading files for each \-\-transfer. (default 16Mi)
+ \-\-bwlimit BwTimetable Bandwidth limit in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
+ \-\-bwlimit\-file BwTimetable Bandwidth limit per file in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
\-\-ca\-cert string CA certificate used to verify servers
\-\-cache\-dir string Directory rclone will use for caching. (default \[dq]$HOME/.cache/rclone\[dq])
\-\-check\-first Do all the checks before starting transfers.
@@ -15809,7 +16987,8 @@ These flags are available for every command.
\-\-delete\-before When synchronizing, delete files on destination before transferring
\-\-delete\-during When synchronizing, delete files during transfer
\-\-delete\-excluded Delete files on dest excluded from sync
- \-\-disable string Disable a comma separated list of features. Use help to see a list.
+ \-\-disable string Disable a comma separated list of features. Use \-\-disable help to see a list.
+ \-\-disable\-http2 Disable HTTP/2 in the global transport.
\-n, \-\-dry\-run Do a trial run with no permanent changes
\-\-dscp string Set DSCP value to connections. Can be value or names, eg. CS1, LE, DF, AF21.
\-\-dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
@@ -15851,14 +17030,14 @@ These flags are available for every command.
\-\-max\-delete int When synchronizing, limit the number of deletes (default \-1)
\-\-max\-depth int If set limits the recursion depth to this. (default \-1)
\-\-max\-duration duration Maximum duration rclone will transfer data for.
- \-\-max\-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ \-\-max\-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
\-\-max\-stats\-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
\-\-max\-transfer SizeSuffix Maximum size of data to transfer. (default off)
\-\-memprofile string Write memory profile to file
\-\-min\-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- \-\-min\-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ \-\-min\-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
\-\-modify\-window duration Max time diff to be considered the same (default 1ns)
- \-\-multi\-thread\-cutoff SizeSuffix Use multi\-thread downloads for files above this size. (default 250M)
+ \-\-multi\-thread\-cutoff SizeSuffix Use multi\-thread downloads for files above this size. (default 250Mi)
\-\-multi\-thread\-streams int Max number of streams to use for multi\-thread downloads. (default 4)
\-\-no\-check\-certificate Do not verify the server SSL certificate. Insecure.
\-\-no\-check\-dest Don\[aq]t check the destination, copy regardless.
@@ -15908,8 +17087,8 @@ These flags are available for every command.
\-\-stats\-one\-line Make the stats fit on one line.
\-\-stats\-one\-line\-date Enables \-\-stats\-one\-line and add current date/time prefix.
\-\-stats\-one\-line\-date\-format string Enables \-\-stats\-one\-line\-date and uses custom formatted date. Enclose date string in double quotes (\[dq]). See https://golang.org/pkg/time/#Time.Format
- \-\-stats\-unit string Show data rate in stats as either \[aq]bits\[aq] or \[aq]bytes\[aq]/s (default \[dq]bytes\[dq])
- \-\-streaming\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ \-\-stats\-unit string Show data rate in stats as either \[aq]bits\[aq] or \[aq]bytes\[aq] per second (default \[dq]bytes\[dq])
+ \-\-streaming\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100Ki)
\-\-suffix string Suffix to add to changed files.
\-\-suffix\-keep\-extension Preserve the extension when using \-\-suffix.
\-\-syslog Use Syslog for logging
@@ -15925,7 +17104,7 @@ These flags are available for every command.
\-\-use\-json\-log Use json log format.
\-\-use\-mmap Use mmap allocator (see docs).
\-\-use\-server\-modtime Use server modified time instead of object metadata
- \-\-user\-agent string Set the user\-agent to a specified string. The default is rclone/ version (default \[dq]rclone/v1.55.0\[dq])
+ \-\-user\-agent string Set the user\-agent to a specified string. The default is rclone/ version (default \[dq]rclone/v1.56.0\[dq])
\-v, \-\-verbose count Print lots more stuff (repeat for more)
\f[R]
.fi
@@ -15940,15 +17119,15 @@ They control the backends and may be set in the config file.
\-\-acd\-client\-id string OAuth Client Id
\-\-acd\-client\-secret string OAuth Client Secret
\-\-acd\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
- \-\-acd\-templink\-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ \-\-acd\-templink\-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9Gi)
\-\-acd\-token string OAuth Access Token as a JSON blob.
\-\-acd\-token\-url string Token server url.
- \-\-acd\-upload\-wait\-per\-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ \-\-acd\-upload\-wait\-per\-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears. (default 3m0s)
\-\-alias\-remote string Remote or path to alias.
\-\-azureblob\-access\-tier string Access tier of blob: hot, cool or archive.
\-\-azureblob\-account string Storage Account Name (leave blank to use SAS URL or Emulator)
\-\-azureblob\-archive\-tier\-delete Delete archive tier blobs before overwriting.
- \-\-azureblob\-chunk\-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ \-\-azureblob\-chunk\-size SizeSuffix Upload chunk size (<= 100 MiB). (default 4Mi)
\-\-azureblob\-disable\-checksum Don\[aq]t store MD5 checksum with object metadata.
\-\-azureblob\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
\-\-azureblob\-endpoint string Endpoint for the service
@@ -15962,12 +17141,12 @@ They control the backends and may be set in the config file.
\-\-azureblob\-public\-access string Public access level of a container: blob, container.
\-\-azureblob\-sas\-url string SAS URL for container level access only
\-\-azureblob\-service\-principal\-file string Path to file containing credentials for use with a service principal.
- \-\-azureblob\-upload\-cutoff string Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
+ \-\-azureblob\-upload\-cutoff string Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
\-\-azureblob\-use\-emulator Uses local storage emulator if provided as \[aq]true\[aq] (leave blank if using real azure storage endpoint)
\-\-azureblob\-use\-msi Use a managed service identity to authenticate (only works in Azure)
\-\-b2\-account string Account ID or Application Key ID
- \-\-b2\-chunk\-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- \-\-b2\-copy\-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G)
+ \-\-b2\-chunk\-size SizeSuffix Upload chunk size. Must fit in memory. (default 96Mi)
+ \-\-b2\-copy\-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
\-\-b2\-disable\-checksum Disable checksums for large (> upload cutoff) files
\-\-b2\-download\-auth\-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
\-\-b2\-download\-url string Custom endpoint for downloads.
@@ -15978,7 +17157,7 @@ They control the backends and may be set in the config file.
\-\-b2\-memory\-pool\-flush\-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
\-\-b2\-memory\-pool\-use\-mmap Whether to use mmap buffers in internal memory pool.
\-\-b2\-test\-mode string A flag string for X\-Bz\-Test\-Mode header for debugging.
- \-\-b2\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ \-\-b2\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200Mi)
\-\-b2\-versions Include old versions in directory listings.
\-\-box\-access\-token string Box App Primary Access Token
\-\-box\-auth\-url string Auth server URL.
@@ -15991,12 +17170,12 @@ They control the backends and may be set in the config file.
\-\-box\-root\-folder\-id string Fill in for rclone to use a non root folder as its starting point.
\-\-box\-token string OAuth Access Token as a JSON blob.
\-\-box\-token\-url string Token server url.
- \-\-box\-upload\-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ \-\-box\-upload\-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB). (default 50Mi)
\-\-cache\-chunk\-clean\-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
\-\-cache\-chunk\-no\-memory Disable the in\-memory cache for storing chunks during streaming.
\-\-cache\-chunk\-path string Directory to cache chunk files. (default \[dq]$HOME/.cache/rclone/cache\-backend\[dq])
- \-\-cache\-chunk\-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- \-\-cache\-chunk\-total\-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ \-\-cache\-chunk\-size SizeSuffix The size of a chunk (partial file data). (default 5Mi)
+ \-\-cache\-chunk\-total\-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10Gi)
\-\-cache\-db\-path string Directory to store file structure metadata DB. (default \[dq]$HOME/.cache/rclone/cache\-backend\[dq])
\-\-cache\-db\-purge Clear all the cached data for this remote on start.
\-\-cache\-db\-wait\-time Duration How long to wait for the DB to be available \- 0 is unlimited (default 1s)
@@ -16012,13 +17191,13 @@ They control the backends and may be set in the config file.
\-\-cache\-tmp\-wait\-time Duration How long should files be stored in local cache before being uploaded (default 15s)
\-\-cache\-workers int How many workers should run in parallel to download chunks. (default 4)
\-\-cache\-writes Cache file data on writes through the FS
- \-\-chunker\-chunk\-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G)
+ \-\-chunker\-chunk\-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2Gi)
\-\-chunker\-fail\-hard Choose how chunker should handle files with missing or invalid chunks.
\-\-chunker\-hash\-type string Choose how chunker handles hash sums. All modes but \[dq]none\[dq] require metadata. (default \[dq]md5\[dq])
\-\-chunker\-remote string Remote to chunk/unchunk.
\-\-compress\-level int GZIP compression level (\-2 to 9). (default \-1)
\-\-compress\-mode string Compression mode. (default \[dq]gzip\[dq])
- \-\-compress\-ram\-cache\-limit SizeSuffix Some remotes don\[aq]t allow the upload of files with unknown size. (default 20M)
+ \-\-compress\-ram\-cache\-limit SizeSuffix Some remotes don\[aq]t allow the upload of files with unknown size. (default 20Mi)
\-\-compress\-remote string Remote to compress.
\-L, \-\-copy\-links Follow symlinks and copy the pointed to item.
\-\-crypt\-directory\-name\-encryption Option to either encrypt directory names or leave them intact. (default true)
@@ -16033,7 +17212,7 @@ They control the backends and may be set in the config file.
\-\-drive\-allow\-import\-name\-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
\-\-drive\-auth\-owner\-only Only consider files owned by the authenticated user.
\-\-drive\-auth\-url string Auth server URL.
- \-\-drive\-chunk\-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ \-\-drive\-chunk\-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8Mi)
\-\-drive\-client\-id string Google Application Client Id
\-\-drive\-client\-secret string OAuth Client Secret
\-\-drive\-disable\-http2 Disable drive using http2 (default true)
@@ -16063,13 +17242,16 @@ They control the backends and may be set in the config file.
\-\-drive\-token string OAuth Access Token as a JSON blob.
\-\-drive\-token\-url string Token server url.
\-\-drive\-trashed\-only Only show files that are in the trash.
- \-\-drive\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ \-\-drive\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
\-\-drive\-use\-created\-date Use file created date instead of modified date.,
\-\-drive\-use\-shared\-date Use date file was shared instead of modified date.
\-\-drive\-use\-trash Send files to the trash instead of deleting permanently. (default true)
\-\-drive\-v2\-download\-min\-size SizeSuffix If Object\[aq]s are greater, use drive v2 API to download. (default off)
\-\-dropbox\-auth\-url string Auth server URL.
- \-\-dropbox\-chunk\-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ \-\-dropbox\-batch\-mode string Upload file batching sync|async|off. (default \[dq]sync\[dq])
+ \-\-dropbox\-batch\-size int Max number of files in upload batch.
+ \-\-dropbox\-batch\-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ \-\-dropbox\-chunk\-size SizeSuffix Upload chunk size. (< 150Mi). (default 48Mi)
\-\-dropbox\-client\-id string OAuth Client Id
\-\-dropbox\-client\-secret string OAuth Client Secret
\-\-dropbox\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
@@ -16080,6 +17262,8 @@ They control the backends and may be set in the config file.
\-\-dropbox\-token\-url string Token server url.
\-\-fichier\-api\-key string Your API Key, get it from https://1fichier.com/console/params.pl
\-\-fichier\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ \-\-fichier\-file\-password string If you want to download a shared file that is password protected, add this parameter (obscured)
+ \-\-fichier\-folder\-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
\-\-fichier\-shared\-folder string If you want to download a shared folder, add this parameter
\-\-filefabric\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
\-\-filefabric\-permanent\-token string Permanent Authentication Token
@@ -16134,7 +17318,7 @@ They control the backends and may be set in the config file.
\-\-http\-no\-slash Set this if the site doesn\[aq]t end directories with /
\-\-http\-url string URL of http host to connect to
\-\-hubic\-auth\-url string Auth server URL.
- \-\-hubic\-chunk\-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ \-\-hubic\-chunk\-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
\-\-hubic\-client\-id string OAuth Client Id
\-\-hubic\-client\-secret string OAuth Client Secret
\-\-hubic\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
@@ -16143,9 +17327,10 @@ They control the backends and may be set in the config file.
\-\-hubic\-token\-url string Token server url.
\-\-jottacloud\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
\-\-jottacloud\-hard\-delete Delete files permanently rather than putting them into the trash.
- \-\-jottacloud\-md5\-memory\-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ \-\-jottacloud\-md5\-memory\-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
+ \-\-jottacloud\-no\-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
\-\-jottacloud\-trashed\-only Only show files that are in the trash.
- \-\-jottacloud\-upload\-resume\-limit SizeSuffix Files bigger than this can be resumed if the upload fail\[aq]s. (default 10M)
+ \-\-jottacloud\-upload\-resume\-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10Mi)
\-\-koofr\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
\-\-koofr\-endpoint string The Koofr API endpoint to use (default \[dq]https://app.koofr.net\[dq])
\-\-koofr\-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
@@ -16160,16 +17345,16 @@ They control the backends and may be set in the config file.
\-\-local\-no\-preallocate Disable preallocation of disk space for transferred files
\-\-local\-no\-set\-modtime Disable setting modtime
\-\-local\-no\-sparse Disable sparse files for multi\-thread downloads
- \-\-local\-no\-unicode\-normalization Don\[aq]t apply unicode normalization to paths and filenames (Deprecated)
\-\-local\-nounc string Disable UNC (long path names) conversion on Windows
- \-\-local\-zero\-size\-links Assume the Stat size of links is zero (and read them instead)
+ \-\-local\-unicode\-normalization Apply unicode NFC normalization to paths and filenames
+ \-\-local\-zero\-size\-links Assume the Stat size of links is zero (and read them instead) (Deprecated)
\-\-mailru\-check\-hash What should copy do if file checksum is mismatched or invalid (default true)
\-\-mailru\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
\-\-mailru\-pass string Password (obscured)
\-\-mailru\-speedup\-enable Skip full upload if there is another file with same data hash. (default true)
\-\-mailru\-speedup\-file\-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default \[dq]*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf\[dq])
- \-\-mailru\-speedup\-max\-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3G)
- \-\-mailru\-speedup\-max\-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32M)
+ \-\-mailru\-speedup\-max\-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
+ \-\-mailru\-speedup\-max\-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32Mi)
\-\-mailru\-user string User name (usually email)
\-\-mega\-debug Output more debug from Mega.
\-\-mega\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -16178,7 +17363,7 @@ They control the backends and may be set in the config file.
\-\-mega\-user string User name
\-x, \-\-one\-file\-system Don\[aq]t cross filesystem boundaries (unix/macOS only).
\-\-onedrive\-auth\-url string Auth server URL.
- \-\-onedrive\-chunk\-size SizeSuffix Chunk size to upload files with \- must be multiple of 320k (327,680 bytes). (default 10M)
+ \-\-onedrive\-chunk\-size SizeSuffix Chunk size to upload files with \- must be multiple of 320k (327,680 bytes). (default 10Mi)
\-\-onedrive\-client\-id string OAuth Client Id
\-\-onedrive\-client\-secret string OAuth Client Secret
\-\-onedrive\-drive\-id string The ID of the drive to use
@@ -16188,12 +17373,13 @@ They control the backends and may be set in the config file.
\-\-onedrive\-link\-password string Set the password for links created by the link command.
\-\-onedrive\-link\-scope string Set the scope of the links created by the link command. (default \[dq]anonymous\[dq])
\-\-onedrive\-link\-type string Set the type of the links created by the link command. (default \[dq]view\[dq])
+ \-\-onedrive\-list\-chunk int Size of listing chunk. (default 1000)
\-\-onedrive\-no\-versions Remove all versions on modifying operations
\-\-onedrive\-region string Choose national cloud region for OneDrive. (default \[dq]global\[dq])
\-\-onedrive\-server\-side\-across\-configs Allow server\-side operations (e.g. copy) to work across different onedrive configs.
\-\-onedrive\-token string OAuth Access Token as a JSON blob.
\-\-onedrive\-token\-url string Token server url.
- \-\-opendrive\-chunk\-size SizeSuffix Files will be uploaded in chunks this size. (default 10M)
+ \-\-opendrive\-chunk\-size SizeSuffix Files will be uploaded in chunks this size. (default 10Mi)
\-\-opendrive\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
\-\-opendrive\-password string Password. (obscured)
\-\-opendrive\-username string Username
@@ -16208,20 +17394,20 @@ They control the backends and may be set in the config file.
\-\-premiumizeme\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
\-\-putio\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
\-\-qingstor\-access\-key\-id string QingStor Access Key ID
- \-\-qingstor\-chunk\-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ \-\-qingstor\-chunk\-size SizeSuffix Chunk size to use for uploading. (default 4Mi)
\-\-qingstor\-connection\-retries int Number of connection retries. (default 3)
\-\-qingstor\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8)
\-\-qingstor\-endpoint string Enter an endpoint URL to connection QingStor API.
\-\-qingstor\-env\-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
\-\-qingstor\-secret\-access\-key string QingStor Secret Access Key (password)
\-\-qingstor\-upload\-concurrency int Concurrency for multipart uploads. (default 1)
- \-\-qingstor\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ \-\-qingstor\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
\-\-qingstor\-zone string Zone to connect to.
\-\-s3\-access\-key\-id string AWS Access Key ID.
\-\-s3\-acl string Canned ACL used when creating buckets and storing or copying objects.
\-\-s3\-bucket\-acl string Canned ACL used when creating buckets.
- \-\-s3\-chunk\-size SizeSuffix Chunk size to use for uploading. (default 5M)
- \-\-s3\-copy\-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G)
+ \-\-s3\-chunk\-size SizeSuffix Chunk size to use for uploading. (default 5Mi)
+ \-\-s3\-copy\-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
\-\-s3\-disable\-checksum Don\[aq]t store MD5 checksum with object metadata
\-\-s3\-disable\-http2 Disable usage of http2 for S3 backends
\-\-s3\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
@@ -16236,6 +17422,7 @@ They control the backends and may be set in the config file.
\-\-s3\-memory\-pool\-use\-mmap Whether to use mmap buffers in internal memory pool.
\-\-s3\-no\-check\-bucket If set, don\[aq]t attempt to check the bucket exists or create it
\-\-s3\-no\-head If set, don\[aq]t HEAD uploaded objects to check integrity
+ \-\-s3\-no\-head\-object If set, don\[aq]t HEAD objects
\-\-s3\-profile string Profile to use in the shared credentials file
\-\-s3\-provider string Choose your S3 provider.
\-\-s3\-region string Region to connect to.
@@ -16250,7 +17437,7 @@ They control the backends and may be set in the config file.
\-\-s3\-sse\-kms\-key\-id string If using KMS ID you must provide the ARN of Key.
\-\-s3\-storage\-class string The storage class to use when storing new objects in S3.
\-\-s3\-upload\-concurrency int Concurrency for multipart uploads. (default 4)
- \-\-s3\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ \-\-s3\-upload\-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
\-\-s3\-use\-accelerate\-endpoint If true use the AWS S3 accelerated endpoint.
\-\-s3\-v2\-auth If true use v2 authentication.
\-\-seafile\-2fa Two\-factor authentication (\[aq]true\[aq] if the account has 2FA enabled)
@@ -16263,6 +17450,7 @@ They control the backends and may be set in the config file.
\-\-seafile\-user string User name (usually email address)
\-\-sftp\-ask\-password Allow asking for SFTP password when needed.
\-\-sftp\-disable\-concurrent\-reads If set don\[aq]t use concurrent reads
+ \-\-sftp\-disable\-concurrent\-writes If set don\[aq]t use concurrent writes
\-\-sftp\-disable\-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
\-\-sftp\-host string SSH host to connect to
\-\-sftp\-idle\-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -16284,11 +17472,11 @@ They control the backends and may be set in the config file.
\-\-sftp\-use\-fstat If set use fstat instead of stat
\-\-sftp\-use\-insecure\-cipher Enable the use of insecure ciphers and key exchange methods.
\-\-sftp\-user string SSH username, leave blank for current username, $USER
- \-\-sharefile\-chunk\-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 64M)
+ \-\-sharefile\-chunk\-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 64Mi)
\-\-sharefile\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
\-\-sharefile\-endpoint string Endpoint for API calls.
\-\-sharefile\-root\-folder\-id string ID of the root folder
- \-\-sharefile\-upload\-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128M)
+ \-\-sharefile\-upload\-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128Mi)
\-\-skip\-links Don\[aq]t warn about skipped symlinks.
\-\-sugarsync\-access\-key\-id string Sugarsync Access Key ID.
\-\-sugarsync\-app\-id string Sugarsync App ID.
@@ -16307,7 +17495,7 @@ They control the backends and may be set in the config file.
\-\-swift\-auth string Authentication URL for server (OS_AUTH_URL).
\-\-swift\-auth\-token string Auth Token from alternate authentication \- optional (OS_AUTH_TOKEN)
\-\-swift\-auth\-version int AuthVersion \- optional \- set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- \-\-swift\-chunk\-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ \-\-swift\-chunk\-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
\-\-swift\-domain string User domain \- optional (v3 auth) (OS_USER_DOMAIN_NAME)
\-\-swift\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
\-\-swift\-endpoint\-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default \[dq]public\[dq])
@@ -16333,9 +17521,12 @@ They control the backends and may be set in the config file.
\-\-union\-create\-policy string Policy to choose upstream on CREATE category. (default \[dq]epmfs\[dq])
\-\-union\-search\-policy string Policy to choose upstream on SEARCH category. (default \[dq]ff\[dq])
\-\-union\-upstreams string List of space separated upstreams.
+ \-\-uptobox\-access\-token string Your access Token, get it from https://uptobox.com/my_account
+ \-\-uptobox\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
\-\-webdav\-bearer\-token string Bearer token instead of user/pass (e.g. a Macaroon)
\-\-webdav\-bearer\-token\-command string Command to run to get a bearer token
\-\-webdav\-encoding string This sets the encoding for the backend.
+ \-\-webdav\-headers CommaSepList Set HTTP headers for all transactions
\-\-webdav\-pass string Password. (obscured)
\-\-webdav\-url string URL of http host to connect to
\-\-webdav\-user string User name. In case NTLM authentication is used, the username should be in the format \[aq]Domain\[rs]User\[aq].
@@ -16350,12 +17541,697 @@ They control the backends and may be set in the config file.
\-\-zoho\-client\-id string OAuth Client Id
\-\-zoho\-client\-secret string OAuth Client Secret
\-\-zoho\-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
- \-\-zoho\-region string Zoho region to connect to. You\[aq]ll have to use the region you organization is registered in.
+ \-\-zoho\-region string Zoho region to connect to.
\-\-zoho\-token string OAuth Access Token as a JSON blob.
\-\-zoho\-token\-url string Token server url.
\f[R]
.fi
-.SS 1Fichier
+.SH Docker Volume Plugin
+.SS Introduction
+.PP
+Docker 1.9 has added support for creating named
+volumes (https://docs.docker.com/storage/volumes/) via command\-line
+interface (https://docs.docker.com/engine/reference/commandline/volume_create/)
+and mounting them in containers as a way to share data between them.
+Since Docker 1.10 you can create named volumes with Docker
+Compose (https://docs.docker.com/compose/) by descriptions in
+docker\-compose.yml (https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference)
+files for use by container groups on a single host.
+As of Docker 1.12 volumes are supported by Docker
+Swarm (https://docs.docker.com/engine/swarm/key-concepts/) included with
+Docker Engine and created from descriptions in swarm compose
+v3 (https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
+files for use with \f[I]swarm stacks\f[R] across multiple cluster nodes.
+.PP
+Docker Volume
+Plugins (https://docs.docker.com/engine/extend/plugins_volume/) augment
+the default \f[C]local\f[R] volume driver included in Docker with
+stateful volumes shared across containers and hosts.
+Unlike local volumes, your data will \f[I]not\f[R] be deleted when such
+a volume is removed.
+Plugins can run managed by the docker daemon, as a native system service
+(under systemd, \f[I]sysv\f[R] or \f[I]upstart\f[R]) or as a standalone
+executable.
+Rclone can run as a docker volume plugin in all these modes.
+It interacts with the local docker daemon via plugin
+API (https://docs.docker.com/engine/extend/plugin_api/) and handles
+mounting of remote file systems into docker containers, so it must run
+on the same host as the docker daemon or on every Swarm node.
+.SS Getting started
+.PP
+In the first example we will use the SFTP (https://rclone.org/sftp/)
+rclone volume with Docker engine on a standalone Ubuntu machine.
+.PP
+Start by installing Docker (https://docs.docker.com/engine/install/)
+on the host.
+.PP
+The \f[I]FUSE\f[R] driver is a prerequisite for rclone mounting and
+should be installed on the host:
+.IP
+.nf
+\f[C]
+sudo apt\-get \-y install fuse
+\f[R]
+.fi
+.PP
+Create two directories required by the rclone docker plugin:
+.IP
+.nf
+\f[C]
+sudo mkdir \-p /var/lib/docker\-plugins/rclone/config
+sudo mkdir \-p /var/lib/docker\-plugins/rclone/cache
+\f[R]
+.fi
+.PP
+Install the managed rclone docker plugin:
+.IP
+.nf
+\f[C]
+docker plugin install rclone/docker\-volume\-rclone args=\[dq]\-v\[dq] \-\-alias rclone \-\-grant\-all\-permissions
+docker plugin list
+\f[R]
+.fi
+.PP
+Create your SFTP volume (https://rclone.org/sftp/#standard-options):
+.IP
+.nf
+\f[C]
+docker volume create firstvolume \-d rclone \-o type=sftp \-o sftp\-host=_hostname_ \-o sftp\-user=_username_ \-o sftp\-pass=_password_ \-o allow\-other=true
+\f[R]
+.fi
+.PP
+Note that since all options are static, you don\[aq]t even have to run
+\f[C]rclone config\f[R] or create the \f[C]rclone.conf\f[R] file (but
+the \f[C]config\f[R] directory should still be present).
+In the simplest case you can use \f[C]localhost\f[R] as
+\f[I]hostname\f[R] and your SSH credentials as \f[I]username\f[R] and
+\f[I]password\f[R].
+You can also change the remote path to your home directory on the host,
+for example \f[C]\-o path=/home/username\f[R].
+.PP
+Time to create a test container and mount the volume into it:
+.IP
+.nf
+\f[C]
+docker run \-\-rm \-it \-v firstvolume:/mnt \-\-workdir /mnt ubuntu:latest bash
+\f[R]
+.fi
+.PP
+If all goes well, you will enter the new container with your working
+directory set right to the mounted SFTP remote.
+You can type \f[C]ls\f[R] to list the mounted directory or otherwise
+play with it.
+Type \f[C]exit\f[R] when you are done.
+The container will stop but the volume will stay, ready to be reused.
+When it\[aq]s not needed anymore, remove it:
+.IP
+.nf
+\f[C]
+docker volume list
+docker volume remove firstvolume
+\f[R]
+.fi
+.PP
+Now let us try \f[B]something more elaborate\f[R]: Google
+Drive (https://rclone.org/drive/) volume on multi\-node Docker Swarm.
+.PP
+Start by installing Docker and FUSE, creating the plugin
+directories and installing the rclone plugin on \f[I]every\f[R] swarm node.
+Then set up the Swarm (https://docs.docker.com/engine/swarm/swarm-mode/).
+.PP
+Google Drive volumes need an access token, which can be set up via a
+web browser and will be periodically renewed by rclone.
+The managed plugin cannot run a browser so we will use a technique
+similar to the rclone setup on a headless
+box (https://rclone.org/remote_setup/).
+.PP
+Run rclone config (https://rclone.org/commands/rclone_config_create/) on
+\f[I]another\f[R] machine equipped with a \f[I]web browser\f[R] and
+graphical user interface.
+Create the Google Drive
+remote (https://rclone.org/drive/#standard-options).
+When done, transfer the resulting \f[C]rclone.conf\f[R] to the Swarm
+cluster and save as
+\f[C]/var/lib/docker\-plugins/rclone/config/rclone.conf\f[R] on
+\f[I]every\f[R] node.
+By default this location is accessible only to the root user so you will
+need appropriate privileges.
+The resulting config will look like this:
+.IP
+.nf
+\f[C]
+[gdrive]
+type = drive
+scope = drive
+drive_id = 1234567...
+root_folder_id = 0Abcd...
+token = {\[dq]access_token\[dq]:...}
+\f[R]
+.fi
+.PP
+Now create the file named \f[C]example.yml\f[R] with a swarm stack
+description like this:
+.IP
+.nf
+\f[C]
+version: \[aq]3\[aq]
+services:
+ heimdall:
+ image: linuxserver/heimdall:latest
+ ports: [8080:80]
+ volumes: [configdata:/config]
+volumes:
+ configdata:
+ driver: rclone
+ driver_opts:
+ remote: \[aq]gdrive:heimdall\[aq]
+ allow_other: \[aq]true\[aq]
+ vfs_cache_mode: full
+ poll_interval: 0
+\f[R]
+.fi
+.PP
+and run the stack:
+.IP
+.nf
+\f[C]
+docker stack deploy example \-c ./example.yml
+\f[R]
+.fi
+.PP
+After a few seconds docker will spread the parsed stack description
+across the cluster, create the \f[C]example_heimdall\f[R] service on port
+\f[I]8080\f[R], run service containers on one or more cluster nodes and
+request the \f[C]example_configdata\f[R] volume from rclone plugins on
+the node hosts.
+You can use the following commands to confirm results:
+.IP
+.nf
+\f[C]
+docker service ls
+docker service ps example_heimdall
+docker volume ls
+\f[R]
+.fi
+.PP
+Point your browser to \f[C]http://cluster.host.address:8080\f[R] and
+play with the service.
+Stop it with \f[C]docker stack remove example\f[R] when you are done.
+Note that the \f[C]example_configdata\f[R] volume(s) created on demand
+at the cluster nodes will not be removed automatically with the
+stack but will stay for future reuse.
+You can remove them manually by invoking the
+\f[C]docker volume remove example_configdata\f[R] command on every node.
+.SS Creating Volumes via CLI
+.PP
+Volumes can be created with docker volume
+create (https://docs.docker.com/engine/reference/commandline/volume_create/).
+Here are a few examples:
+.IP
+.nf
+\f[C]
+docker volume create vol1 \-d rclone \-o remote=storj: \-o vfs\-cache\-mode=full
+docker volume create vol2 \-d rclone \-o remote=:tardigrade,access_grant=xxx:heimdall
+docker volume create vol3 \-d rclone \-o type=tardigrade \-o path=heimdall \-o tardigrade\-access\-grant=xxx \-o poll\-interval=0
+\f[R]
+.fi
+.PP
+Note the \f[C]\-d rclone\f[R] flag that tells docker to request the
+volume from the rclone driver.
+This works even if you installed the managed driver under its full name
+\f[C]rclone/docker\-volume\-rclone\f[R], because you provided the
+\f[C]\-\-alias rclone\f[R] option.
+.PP
+Volumes can be inspected as follows:
+.IP
+.nf
+\f[C]
+docker volume list
+docker volume inspect vol1
+\f[R]
+.fi
+.SS Volume Configuration
+.PP
+Rclone flags and volume options are set via the \f[C]\-o\f[R] flag to
+the \f[C]docker volume create\f[R] command.
+They include backend\-specific parameters as well as mount and
+\f[I]VFS\f[R] options.
+Also there are a few special \f[C]\-o\f[R] options: \f[C]remote\f[R],
+\f[C]fs\f[R], \f[C]type\f[R], \f[C]path\f[R], \f[C]mount\-type\f[R] and
+\f[C]persist\f[R].
+.PP
+\f[C]remote\f[R] specifies an existing remote name from the config
+file, with a trailing colon and optionally a remote path.
+See the full syntax in the rclone
+documentation (https://rclone.org/docs/#syntax-of-remote-paths).
+This option can be aliased as \f[C]fs\f[R] to prevent confusion with the
+\f[I]remote\f[R] parameter of such backends as \f[I]crypt\f[R] or
+\f[I]alias\f[R].
+.PP
+The \f[C]remote=:backend:dir/subdir\f[R] syntax can be used to create
+on\-the\-fly (config\-less)
+remotes (https://rclone.org/docs/#backend-path-to-dir), while the
+\f[C]type\f[R] and \f[C]path\f[R] options provide a simpler alternative
+for this.
+Using two split options
+.IP
+.nf
+\f[C]
+\-o type=backend \-o path=dir/subdir
+\f[R]
+.fi
+.PP
+is equivalent to the combined syntax
+.IP
+.nf
+\f[C]
+\-o remote=:backend:dir/subdir
+\f[R]
+.fi
+.PP
+but is arguably easier to parameterize in scripts.
+The \f[C]path\f[R] part is optional.
+.PP
+Mount and VFS
+options (https://rclone.org/commands/rclone_serve_docker/#options) as
+well as backend parameters (https://rclone.org/flags/#backend-flags) are
+named like their twin command\-line flags without the \f[C]\-\-\f[R] CLI
+prefix.
+Optionally you can use underscores instead of dashes in option names.
+For example, \f[C]\-\-vfs\-cache\-mode full\f[R] becomes
+\f[C]\-o vfs\-cache\-mode=full\f[R] or
+\f[C]\-o vfs_cache_mode=full\f[R].
+Boolean CLI flags given without a value default to \f[C]true\f[R], e.g.
+\f[C]\-\-allow\-other\f[R] becomes \f[C]\-o allow\-other=true\f[R] or
+\f[C]\-o allow_other=true\f[R].
+.PP
+Please note that you can provide parameters only for the backend
+immediately referenced by the backend type of the mounted \f[C]remote\f[R].
+If this is a wrapping backend like \f[I]alias, chunker or crypt\f[R],
+you cannot provide options for the wrapped remote or backend.
+This limitation is imposed by the rclone connection string parser.
+The only workaround is to feed the plugin an \f[C]rclone.conf\f[R] file
+or configure plugin arguments (see below).
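+.PP
+As a hypothetical sketch of the \f[C]rclone.conf\f[R] workaround (the
+remote names, host and user below are examples only, not real
+defaults), the wrapped remote\[aq]s options can live in the config
+file:
+.IP
+.nf
+\f[C]
+# /var/lib/docker\-plugins/rclone/config/rclone.conf
+[mysftp]
+type = sftp
+host = sftp.example.com
+user = alice
+
+[mycrypt]
+type = crypt
+remote = mysftp:encrypted
+password = ...obscured password...
+\f[R]
+.fi
+.PP
+so that the volume itself only needs \f[C]\-o remote=mycrypt:\f[R].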
+.SS Special Volume Options
+.PP
+\f[C]mount\-type\f[R] determines the mount method and in general can be
+one of: \f[C]mount\f[R], \f[C]cmount\f[R], or \f[C]mount2\f[R].
+This can be aliased as \f[C]mount_type\f[R].
+It should be noted that the managed rclone docker plugin currently does
+not support the \f[C]cmount\f[R] method and \f[C]mount2\f[R] is rarely
+needed.
+This option defaults to the first found method, which is usually
+\f[C]mount\f[R], so you generally won\[aq]t need it.
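+.PP
+For example (a hypothetical volume, assuming a \f[C]gdrive\f[R] remote
+exists in the config), the mount method could be forced explicitly:
+.IP
+.nf
+\f[C]
+docker volume create vol4 \-d rclone \-o remote=gdrive: \-o mount\-type=mount
+\f[R]
+.fi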
+.PP
+\f[C]persist\f[R] is a reserved boolean (true/false) option.
+In the future it will allow persisting on\-the\-fly remotes in the plugin
+\f[C]rclone.conf\f[R] file.
+.SS Connection Strings
+.PP
+The \f[C]remote\f[R] value can be extended with connection
+strings (https://rclone.org/docs/#connection-strings) as an alternative
+way to supply backend parameters.
+This is equivalent to the \f[C]\-o\f[R] backend options with one
+\f[I]syntactic difference\f[R].
+Inside a connection string the backend prefix must be dropped from
+parameter names, but in the \f[C]\-o param=value\f[R] array it must be
+present.
+For instance, compare the following option array
+.IP
+.nf
+\f[C]
+\-o remote=:sftp:/home \-o sftp\-host=localhost
+\f[R]
+.fi
+.PP
+with equivalent connection string:
+.IP
+.nf
+\f[C]
+\-o remote=:sftp,host=localhost:/home
+\f[R]
+.fi
+.PP
+This difference exists because flag options \f[C]\-o key=val\f[R]
+include not only backend parameters but also mount/VFS flags and
+possibly other settings.
+It also allows discriminating the \f[C]remote\f[R] option from the
+\f[C]crypt\-remote\f[R] (or similarly named backend parameters) and
+arguably simplifies scripting due to clearer value substitution.
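+.PP
+For example (hypothetical volume and remote names, assuming a
+\f[C]gdrive\f[R] remote exists in the config, and omitting the
+obscured \f[C]password\f[R] parameter for brevity), the option array
+form needs the \f[C]crypt\-\f[R] prefix while the connection string
+drops it:
+.IP
+.nf
+\f[C]
+docker volume create cvol \-d rclone \-o type=crypt \-o crypt\-remote=gdrive:secret
+docker volume create cvol \-d rclone \-o remote=:crypt,remote=gdrive:secret:
+\f[R]
+.fi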
+.SS Using with Swarm or Compose
+.PP
+Both \f[I]Docker Swarm\f[R] and \f[I]Docker Compose\f[R] use
+YAML (http://yaml.org/spec/1.2/spec.html)\-formatted text files to
+describe groups (stacks) of containers, their properties, networks and
+volumes.
+\f[I]Compose\f[R] uses the compose
+v2 (https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference)
+format, \f[I]Swarm\f[R] uses the compose
+v3 (https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
+format.
+They are mostly similar, differences are explained in the docker
+documentation (https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading).
+.PP
+Volumes are described by the children of the top\-level
+\f[C]volumes:\f[R] node.
+Each of them should be named after its volume and have at least two
+elements, the self\-explanatory \f[C]driver: rclone\f[R] value and the
+\f[C]driver_opts:\f[R] structure playing the same role as
+\f[C]\-o key=val\f[R] CLI flags:
+.IP
+.nf
+\f[C]
+volumes:
+ volume_name_1:
+ driver: rclone
+ driver_opts:
+ remote: \[aq]gdrive:\[aq]
+ allow_other: \[aq]true\[aq]
+ vfs_cache_mode: full
+ token: \[aq]{\[dq]type\[dq]: \[dq]borrower\[dq], \[dq]expires\[dq]: \[dq]2021\-12\-31\[dq]}\[aq]
+ poll_interval: 0
+\f[R]
+.fi
+.PP
+Notice a few important details:
+.IP \[bu] 2
+YAML prefers \f[C]_\f[R] in option names instead of \f[C]\-\f[R].
+.IP \[bu] 2
+YAML treats single and double quotes interchangeably.
+Simple strings and integers can be left unquoted.
+.IP \[bu] 2
+Boolean values must be quoted like \f[C]\[aq]true\[aq]\f[R] or
+\f[C]\[dq]false\[dq]\f[R] because these two words are reserved by YAML.
+.IP \[bu] 2
+The filesystem string is keyed with \f[C]remote\f[R] (or with
+\f[C]fs\f[R]).
+Normally you can omit quotes here, but if the string ends with a colon,
+you \f[B]must\f[R] quote it like
+\f[C]remote: \[dq]storage_box:\[dq]\f[R].
+.IP \[bu] 2
+YAML is picky about surrounding braces in values as this is in fact
+another syntax for key/value
+mappings (http://yaml.org/spec/1.2/spec.html#id2790832).
+For example, JSON access tokens usually contain double quotes and
+surrounding braces, so you must put them in single quotes.
+.SS Installing as Managed Plugin
+.PP
+Docker daemon can install plugins from an image registry and run them
+managed.
+We maintain the
+docker\-volume\-rclone (https://hub.docker.com/p/rclone/docker-volume-rclone/)
+plugin image on Docker Hub (https://hub.docker.com).
+.PP
+The plugin requires the presence of two directories on the host before
+it can be installed.
+Note that the plugin will \f[B]not\f[R] create them automatically.
+By default they must exist on the host at the following locations
+(though you can tweak the paths):
+.IP \[bu] 2
+\f[C]/var/lib/docker\-plugins/rclone/config\f[R] is reserved for the
+\f[C]rclone.conf\f[R] config file and \f[B]must\f[R] exist even if
+it\[aq]s empty and the config file is not present.
+.IP \[bu] 2
+\f[C]/var/lib/docker\-plugins/rclone/cache\f[R] holds the plugin
+state file as well as optional VFS caches.
+.PP
+You can install managed
+plugin (https://docs.docker.com/engine/reference/commandline/plugin_install/)
+with default settings as follows:
+.IP
+.nf
+\f[C]
+docker plugin install rclone/docker\-volume\-rclone:latest \-\-grant\-all\-permissions \-\-alias rclone
+\f[R]
+.fi
+.PP
+The managed plugin is in fact a special container running in a namespace
+separate from normal docker containers.
+Inside it runs the \f[C]rclone serve docker\f[R] command.
+The config and cache directories are bind\-mounted into the container at
+start.
+The docker daemon connects to a unix socket created by the command
+inside the container.
+The command creates on\-demand remote mounts right inside, then docker
+machinery propagates them through kernel mount namespaces and
+bind\-mounts into requesting user containers.
+.PP
+You can tweak a few plugin settings after installation when it\[aq]s
+disabled (not in use), for instance:
+.IP
+.nf
+\f[C]
+docker plugin disable rclone
+docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args=\[dq]\-\-vfs\-cache\-mode=writes \-\-allow\-other\[dq]
+docker plugin enable rclone
+docker plugin inspect rclone
+\f[R]
+.fi
+.PP
+Note that if docker refuses to disable the plugin, you should find and
+remove all active volumes connected with it as well as containers and
+swarm services that use them.
+This is rather tedious, so please plan carefully in advance.
+.PP
+You can tweak the following settings: \f[C]args\f[R], \f[C]config\f[R],
+\f[C]cache\f[R], and \f[C]RCLONE_VERBOSE\f[R].
+It\[aq]s \f[I]your\f[R] task to keep plugin settings in sync across
+swarm cluster nodes.
+.PP
+\f[C]args\f[R] sets command\-line arguments for the
+\f[C]rclone serve docker\f[R] command (\f[I]none\f[R] by default).
+Arguments should be separated by spaces, so you will normally want to put
+them in quotes on the docker plugin
+set (https://docs.docker.com/engine/reference/commandline/plugin_set/)
+command line.
+Both serve docker
+flags (https://rclone.org/commands/rclone_serve_docker/#options) and
+generic rclone flags (https://rclone.org/flags/) are supported,
+including backend parameters that will be used as defaults for volume
+creation.
+Note that the plugin will fail (due to this docker
+bug (https://github.com/moby/moby/blob/v20.10.7/plugin/v2/plugin.go#L195))
+if the \f[C]args\f[R] value is empty.
+Use e.g.
+\f[C]args=\[dq]\-v\[dq]\f[R] as a workaround.
+.PP
+\f[C]config=/host/dir\f[R] sets an alternative host location for the config
+directory.
+The plugin will look for \f[C]rclone.conf\f[R] here.
+It\[aq]s not an error if the config file is not present but the
+directory must exist.
+Please note that the plugin can periodically rewrite the config file,
+for example when it renews storage access tokens.
+Keep this in mind and try to avoid races between the plugin and other
+instances of rclone on the host that might try to change the config
+simultaneously, resulting in a corrupted \f[C]rclone.conf\f[R].
+You can also put stuff like private key files for SFTP remotes in this
+directory.
+Just note that it\[aq]s bind\-mounted inside the plugin container at the
+predefined path \f[C]/data/config\f[R].
+For example, if your key file is named \f[C]sftp\-box1.key\f[R] on the
+host, the corresponding volume config option should read
+\f[C]\-o sftp\-key\-file=/data/config/sftp\-box1.key\f[R].
+.PP
+\f[C]cache=/host/dir\f[R] sets an alternative host location for the
+\f[I]cache\f[R] directory.
+The plugin will keep VFS caches here.
+Also it will create and maintain the \f[C]docker\-plugin.state\f[R] file
+in this directory.
+When the plugin is restarted or reinstalled, it will look in this file
+to recreate any volumes that existed previously.
+However, they will not be re\-mounted into consuming containers after
+restart.
+Usually this is not a problem as the docker daemon normally will restart
+affected user containers after failures, daemon restarts or host
+reboots.
+.PP
+\f[C]RCLONE_VERBOSE\f[R] sets plugin verbosity from \f[C]0\f[R] (errors
+only, by default) to \f[C]2\f[R] (debugging).
+Verbosity can also be tweaked via
+\f[C]args=\[dq]\-v [\-v] ...\[dq]\f[R].
+Since arguments are more generic, you will rarely need this setting.
+By default the plugin output feeds the docker daemon log on the local host.
+Log entries are reflected as \f[I]errors\f[R] in the docker log but
+retain their actual level assigned by rclone in the encapsulated message
+string.
+.PP
+You can set custom plugin options right when you install it, \f[I]in one
+go\f[R]:
+.IP
+.nf
+\f[C]
+docker plugin remove rclone
+docker plugin install rclone/docker\-volume\-rclone:latest \[rs]
+ \-\-alias rclone \-\-grant\-all\-permissions \[rs]
+ args=\[dq]\-v \-\-allow\-other\[dq] config=/etc/rclone
+docker plugin inspect rclone
+\f[R]
+.fi
+.SS Healthchecks
+.PP
+The docker plugin volume protocol doesn\[aq]t provide a way for plugins
+to inform the docker daemon that a volume is (un\-)available.
+As a workaround you can set up a healthcheck to verify that the mount is
+responding, for example:
+.IP
+.nf
+\f[C]
+services:
+ my_service:
+ image: my_image
+ healthcheck:
+ test: ls /path/to/rclone/mount || exit 1
+ interval: 1m
+ timeout: 15s
+ retries: 3
+ start_period: 15s
+\f[R]
+.fi
+.SS Running Plugin under Systemd
+.PP
+In most cases you should prefer managed mode.
+Moreover, macOS and Windows do not support native Docker plugins.
+Please use managed mode on these systems.
+Proceed further only if you are on Linux.
+.PP
+First, install rclone (https://rclone.org/install/).
+You can just run it (type \f[C]rclone serve docker\f[R] and hit enter)
+as a quick test.
+.PP
+Install \f[I]FUSE\f[R]:
+.IP
+.nf
+\f[C]
+sudo apt\-get \-y install fuse
+\f[R]
+.fi
+.PP
+Download two systemd configuration files:
+docker\-volume\-rclone.service (https://raw.githubusercontent.com/rclone/rclone/master/cmd/serve/docker/contrib/systemd/docker-volume-rclone.service)
+and
+docker\-volume\-rclone.socket (https://raw.githubusercontent.com/rclone/rclone/master/cmd/serve/docker/contrib/systemd/docker-volume-rclone.socket).
+.PP
+Put them in the \f[C]/etc/systemd/system/\f[R] directory:
+.IP
+.nf
+\f[C]
+cp docker\-volume\-rclone.service /etc/systemd/system/
+cp docker\-volume\-rclone.socket /etc/systemd/system/
+\f[R]
+.fi
+.PP
+Please note that all commands in this section must be run as
+\f[I]root\f[R], but we omit the \f[C]sudo\f[R] prefix for brevity.
+Now create directories required by the service:
+.IP
+.nf
+\f[C]
+mkdir \-p /var/lib/docker\-volumes/rclone
+mkdir \-p /var/lib/docker\-plugins/rclone/config
+mkdir \-p /var/lib/docker\-plugins/rclone/cache
+\f[R]
+.fi
+.PP
+Run the docker plugin service in socket\-activated mode:
+.IP
+.nf
+\f[C]
+systemctl daemon\-reload
+systemctl start docker\-volume\-rclone.service
+systemctl enable docker\-volume\-rclone.socket
+systemctl start docker\-volume\-rclone.socket
+systemctl restart docker
+\f[R]
+.fi
+.PP
+Or run the service directly:
+.IP \[bu] 2
+run \f[C]systemctl daemon\-reload\f[R] to let systemd pick up the new
+config
+.IP \[bu] 2
+run \f[C]systemctl enable docker\-volume\-rclone.service\f[R] to make
+the new service start automatically when you power on your machine
+.IP \[bu] 2
+run \f[C]systemctl start docker\-volume\-rclone.service\f[R] to start
+the service now
+.IP \[bu] 2
+run \f[C]systemctl restart docker\f[R] to restart the docker daemon and
+let it detect the new plugin socket.
+Note that this step is not needed in managed mode, where docker knows
+about plugin state changes.
+.PP
+The two methods are equivalent from the user perspective, but I
+personally prefer socket activation.
+.SS Troubleshooting
+.PP
+You can see managed plugin
+settings (https://docs.docker.com/engine/extend/#debugging-plugins) with
+.IP
+.nf
+\f[C]
+docker plugin list
+docker plugin inspect rclone
+\f[R]
+.fi
+.PP
+Note that docker (including the latest 20.10.7) will not show actual values
+of \f[C]args\f[R], just the defaults.
+.PP
+Use \f[C]journalctl \-\-unit docker\f[R] to see managed plugin output as
+part of the docker daemon log.
+Note that docker reflects plugin lines as \f[I]errors\f[R] but their
+actual level can be seen from the encapsulated message string.
+.PP
+You will usually install the latest version of managed plugin.
+Use the following commands to print the actual installed version:
+.IP
+.nf
+\f[C]
+PLUGID=$(docker plugin list \-\-no\-trunc | awk \[aq]/rclone/{print$1}\[aq])
+sudo runc \-\-root /run/docker/runtime\-runc/plugins.moby exec $PLUGID rclone version
+\f[R]
+.fi
+.PP
+You can even use \f[C]runc\f[R] to run a shell inside the plugin
+container:
+.IP
+.nf
+\f[C]
+sudo runc \-\-root /run/docker/runtime\-runc/plugins.moby exec \-\-tty $PLUGID bash
+\f[R]
+.fi
+.PP
+You can also use curl to check the plugin socket connectivity:
+.IP
+.nf
+\f[C]
+docker plugin list \-\-no\-trunc
+PLUGID=123abc...
+sudo curl \-H Content\-Type:application/json \-XPOST \-d {} \-\-unix\-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
+\f[R]
+.fi
+.PP
+though this is rarely needed.
+.PP
+Finally I\[aq]d like to mention a \f[I]caveat with updating volume
+settings\f[R].
+Docker CLI does not have a dedicated command like
+\f[C]docker volume update\f[R].
+It may be tempting to invoke \f[C]docker volume create\f[R] with updated
+options on an existing volume, but there is a gotcha.
+The command will do nothing; it won\[aq]t even return an error.
+I hope that docker maintainers will fix this some day.
+In the meantime be aware that you must remove your volume before
+recreating it with new settings:
+.IP
+.nf
+\f[C]
+docker volume remove my_vol
+docker volume create my_vol \-d rclone \-o opt1=new_val1 ...
+\f[R]
+.fi
+.PP
+and verify that the settings were updated:
+.IP
+.nf
+\f[C]
+docker volume list
+docker volume inspect my_vol
+\f[R]
+.fi
+.PP
+If docker refuses to remove the volume, you should find containers or
+swarm services that use it and stop them first.
+.SH 1Fichier
.PP
This is a backend for the 1fichier (https://1fichier.com) cloud storage
service.
@@ -16578,6 +18454,36 @@ Env Var: RCLONE_FICHIER_SHARED_FOLDER
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
+.SS \-\-fichier\-file\-password
+.PP
+If you want to download a shared file that is password protected, add
+this parameter.
+.PP
+\f[B]NB\f[R] Input to this must be obscured \- see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.IP \[bu] 2
+Config: file_password
+.IP \[bu] 2
+Env Var: RCLONE_FICHIER_FILE_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]\[dq]
+.SS \-\-fichier\-folder\-password
+.PP
+If you want to list the files in a shared folder that is password
+protected, add this parameter.
+.PP
+\f[B]NB\f[R] Input to this must be obscured \- see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.IP \[bu] 2
+Config: folder_password
+.IP \[bu] 2
+Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]\[dq]
.SS \-\-fichier\-encoding
.PP
This sets the encoding for the backend.
@@ -16603,7 +18509,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Alias
+.SH Alias
.PP
The \f[C]alias\f[R] remote provides a new name for another remote.
.PP
@@ -16726,7 +18632,7 @@ Env Var: RCLONE_ALIAS_REMOTE
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
-.SS Amazon Drive
+.SH Amazon Drive
.PP
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.
@@ -17000,17 +18906,18 @@ Type: string
Default: \[dq]\[dq]
.SS \-\-acd\-upload\-wait\-per\-gb
.PP
-Additional time per GB to wait after a failed complete upload to see if
+Additional time per GiB to wait after a failed complete upload to see if
it appears.
.PP
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while.
-This happens sometimes for files over 1GB in size and nearly every time
-for files bigger than 10GB.
+This happens sometimes for files over 1 GiB in size and nearly every
+time for files bigger than 10 GiB.
This parameter controls the time rclone waits for the file to appear.
.PP
-The default value for this parameter is 3 minutes per GB, so by default
-it will wait 3 minutes for every GB uploaded to see if the file appears.
+The default value for this parameter is 3 minutes per GiB, so by default
+it will wait 3 minutes for every GiB uploaded to see if the file
+appears.
.PP
You can disable this feature by setting it to 0.
This may cause conflict errors as rclone retries the failed upload but
@@ -17035,8 +18942,8 @@ Files >= this size will be downloaded via their tempLink.
.PP
Files this size or more will be downloaded via their \[dq]tempLink\[dq].
This is to work around a problem with Amazon Drive which blocks
-downloads of files bigger than about 10GB.
-The default for this is 9GB which shouldn\[aq]t need to be changed.
+downloads of files bigger than about 10 GiB.
+The default for this is 9 GiB which shouldn\[aq]t need to be changed.
.PP
To download files above this threshold, rclone requests a
\[dq]tempLink\[dq] which downloads the file through a temporary URL
@@ -17048,7 +18955,7 @@ Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 9G
+Default: 9Gi
.SS \-\-acd\-encoding
.PP
This sets the encoding for the backend.
@@ -17079,7 +18986,7 @@ the service.
This limit is not officially published, but all files larger than this
will fail.
.PP
-At the time of writing (Jan 2016) is in the area of 50GB per file.
+At the time of writing (Jan 2016) this is in the area of 50 GiB per file.
This means that larger files are likely to fail.
.PP
Unfortunately there is no way for rclone to see that this failure is
@@ -17098,7 +19005,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Amazon S3 Storage Providers
+.SH Amazon S3 Storage Providers
.PP
The S3 backend can be used with a number of different providers:
.IP \[bu] 2
@@ -17118,6 +19025,8 @@ Minio
.IP \[bu] 2
Scaleway
.IP \[bu] 2
+SeaweedFS
+.IP \[bu] 2
StackPath
.IP \[bu] 2
Tencent Cloud Object Storage (COS)
@@ -17485,7 +19394,7 @@ rclone sync \-\-fast\-list \-\-checksum /path/to/source s3:bucket
\f[C]\-\-fast\-list\f[R] trades off API transactions for memory use.
As a rough guide rclone uses 1k of memory per object stored, so using
\f[C]\-\-fast\-list\f[R] on a sync of a million objects will use roughly
-1 GB of RAM.
+1 GiB of RAM.
.PP
If you are only copying a small number of files into a big repository
then using \f[C]\-\-no\-traverse\f[R] is a good idea.
@@ -17607,14 +19516,14 @@ T}
.SS Multipart uploads
.PP
rclone supports multipart uploads with S3 which means that it can upload
-files bigger than 5GB.
+files bigger than 5 GiB.
.PP
Note that files uploaded \f[I]both\f[R] with multipart upload
\f[I]and\f[R] through crypt remotes do not have MD5 sums.
.PP
rclone switches from single part uploads to multipart uploads at the
point specified by \f[C]\-\-s3\-upload\-cutoff\f[R].
-This can be a maximum of 5GB and a minimum of 0 (ie always upload
+This can be a maximum of 5 GiB and a minimum of 0 (ie always upload
multipart files).
.PP
The chunk sizes used in the multipart upload are specified by
@@ -17800,7 +19709,7 @@ Vault API, so rclone cannot directly access Glacier Vaults.
.PP
Here are the standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, Digital Ocean,
-Dreamhost, IBM COS, Minio, and Tencent COS).
+Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
.SS \-\-s3\-provider
.PP
Choose your S3 provider.
@@ -17870,6 +19779,12 @@ Netease Object Storage (NOS)
Scaleway Object Storage
.RE
.IP \[bu] 2
+\[dq]SeaweedFS\[dq]
+.RS 2
+.IP \[bu] 2
+SeaweedFS S3
+.RE
+.IP \[bu] 2
\[dq]StackPath\[dq]
.RS 2
.IP \[bu] 2
@@ -18637,6 +20552,18 @@ Default: \[dq]\[dq]
Examples:
.RS 2
.IP \[bu] 2
+\[dq]oss\-accelerate.aliyuncs.com\[dq]
+.RS 2
+.IP \[bu] 2
+Global Accelerate
+.RE
+.IP \[bu] 2
+\[dq]oss\-accelerate\-overseas.aliyuncs.com\[dq]
+.RS 2
+.IP \[bu] 2
+Global Accelerate (outside mainland China)
+.RE
+.IP \[bu] 2
\[dq]oss\-cn\-hangzhou.aliyuncs.com\[dq]
.RS 2
.IP \[bu] 2
@@ -18670,7 +20597,13 @@ North China 3 (Zhangjiakou)
\[dq]oss\-cn\-huhehaote.aliyuncs.com\[dq]
.RS 2
.IP \[bu] 2
-North China 5 (Huhehaote)
+North China 5 (Hohhot)
+.RE
+.IP \[bu] 2
+\[dq]oss\-cn\-wulanchabu.aliyuncs.com\[dq]
+.RS 2
+.IP \[bu] 2
+North China 6 (Ulanqab)
.RE
.IP \[bu] 2
\[dq]oss\-cn\-shenzhen.aliyuncs.com\[dq]
@@ -18679,6 +20612,24 @@ North China 5 (Huhehaote)
South China 1 (Shenzhen)
.RE
.IP \[bu] 2
+\[dq]oss\-cn\-heyuan.aliyuncs.com\[dq]
+.RS 2
+.IP \[bu] 2
+South China 2 (Heyuan)
+.RE
+.IP \[bu] 2
+\[dq]oss\-cn\-guangzhou.aliyuncs.com\[dq]
+.RS 2
+.IP \[bu] 2
+South China 3 (Guangzhou)
+.RE
+.IP \[bu] 2
+\[dq]oss\-cn\-chengdu.aliyuncs.com\[dq]
+.RS 2
+.IP \[bu] 2
+West China 1 (Chengdu)
+.RE
+.IP \[bu] 2
\[dq]oss\-cn\-hongkong.aliyuncs.com\[dq]
.RS 2
.IP \[bu] 2
@@ -18980,6 +20931,12 @@ Digital Ocean Spaces Amsterdam 3
Digital Ocean Spaces Singapore 1
.RE
.IP \[bu] 2
+\[dq]localhost:8333\[dq]
+.RS 2
+.IP \[bu] 2
+SeaweedFS S3 localhost
+.RE
+.IP \[bu] 2
\[dq]s3.wasabisys.com\[dq]
.RS 2
.IP \[bu] 2
@@ -19742,7 +21699,7 @@ be accessed.
.PP
Here are the advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, Digital Ocean,
-Dreamhost, IBM COS, Minio, and Tencent COS).
+Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
.SS \-\-s3\-bucket\-acl
.PP
Canned ACL used when creating buckets.
@@ -19885,7 +21842,7 @@ None
Cutoff for switching to chunked upload
.PP
Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
.IP \[bu] 2
Config: upload_cutoff
.IP \[bu] 2
@@ -19893,7 +21850,7 @@ Env Var: RCLONE_S3_UPLOAD_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 200M
+Default: 200Mi
.SS \-\-s3\-chunk\-size
.PP
Chunk size to use for uploading.
@@ -19914,9 +21871,9 @@ Rclone will automatically increase the chunk size when uploading a large
file of known size to stay below the 10,000 chunks limit.
.PP
Files of unknown size are uploaded with the configured chunk_size.
-Since the default chunk size is 5MB and there can be at most 10,000
+Since the default chunk size is 5 MiB and there can be at most 10,000
chunks, this means that by default the maximum size of a file you can
-stream upload is 48GB.
+stream upload is 48 GiB.
If you wish to stream upload larger files then you will need to increase
chunk_size.
.IP \[bu] 2
@@ -19926,7 +21883,7 @@ Env Var: RCLONE_S3_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 5M
+Default: 5Mi
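The 48 GiB streaming limit quoted above follows directly from the 10,000-chunk cap; a quick check using the default chunk size:

```shell
# Maximum streamed-upload size = chunk_size x 10,000 parts
# (5 MiB default chunk size, per the docs above).
chunk_mib=5
max_parts=10000
echo "$((chunk_mib * max_parts / 1024)) GiB"   # ~48 GiB
```

Raising `chunk_size` scales this limit proportionally (e.g. 16 MiB chunks allow ~156 GiB streamed uploads).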
.SS \-\-s3\-max\-upload\-parts
.PP
Maximum number of parts in a multipart upload.
@@ -19954,7 +21911,7 @@ Cutoff for switching to multipart copy
Any files larger than this that need to be server\-side copied will be
copied in chunks of this size.
.PP
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
.IP \[bu] 2
Config: copy_cutoff
.IP \[bu] 2
@@ -19962,7 +21919,7 @@ Env Var: RCLONE_S3_COPY_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 4.656G
+Default: 4.656Gi
.SS \-\-s3\-disable\-checksum
.PP
Don\[aq]t store MD5 checksum with object metadata
@@ -20200,6 +22157,17 @@ Env Var: RCLONE_S3_NO_HEAD
Type: bool
.IP \[bu] 2
Default: false
+.SS \-\-s3\-no\-head\-object
+.PP
+If set, don\[aq]t HEAD objects
+.IP \[bu] 2
+Config: no_head_object
+.IP \[bu] 2
+Env Var: RCLONE_S3_NO_HEAD_OBJECT
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS \-\-s3\-encoding
.PP
This sets the encoding for the backend.
@@ -20922,6 +22890,63 @@ server_side_encryption =
storage_class =
\f[R]
.fi
+.SS SeaweedFS
+.PP
+SeaweedFS (https://github.com/chrislusf/seaweedfs/) is a distributed
+storage system for blobs, objects, files, and data lake, with O(1) disk
+seek and a scalable file metadata store.
+It has an S3 compatible object storage interface.
+.PP
+Assuming SeaweedFS is configured with \f[C]weed shell\f[R] as follows:
+.IP
+.nf
+\f[C]
+> s3.bucket.create \-name foo
+> s3.configure \-access_key=any \-secret_key=any \-buckets=foo \-user=me \-actions=Read,Write,List,Tagging,Admin \-apply
+{
+ \[dq]identities\[dq]: [
+ {
+ \[dq]name\[dq]: \[dq]me\[dq],
+ \[dq]credentials\[dq]: [
+ {
+ \[dq]accessKey\[dq]: \[dq]any\[dq],
+ \[dq]secretKey\[dq]: \[dq]any\[dq]
+ }
+ ],
+ \[dq]actions\[dq]: [
+ \[dq]Read:foo\[dq],
+ \[dq]Write:foo\[dq],
+ \[dq]List:foo\[dq],
+ \[dq]Tagging:foo\[dq],
+ \[dq]Admin:foo\[dq]
+ ]
+ }
+ ]
+}
+\f[R]
+.fi
+.PP
+To use rclone with SeaweedFS, the above configuration should end up
+with something like this in your config:
+.IP
+.nf
+\f[C]
+[seaweedfs_s3]
+type = s3
+provider = SeaweedFS
+access_key_id = any
+secret_access_key = any
+endpoint = localhost:8333
+\f[R]
+.fi
+.PP
+Once set up, you can, for example, copy files into a bucket with:
+.IP
+.nf
+\f[C]
+rclone copy /path/to/files seaweedfs_s3:foo
+\f[R]
+.fi
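As a quick sanity check once the remote is configured (a sketch, assuming rclone is installed and the SeaweedFS S3 gateway from the example above is listening on localhost:8333):

```shell
# List buckets on the remote; "foo" from the weed shell step should appear.
rclone lsd seaweedfs_s3:
# List the files that were copied into the bucket.
rclone ls seaweedfs_s3:foo
```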
.SS Wasabi
.PP
Wasabi (https://wasabi.com) is a cloud\-based object storage service for
@@ -21322,7 +23347,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Backblaze B2
+.SH Backblaze B2
.PP
B2 is Backblaze\[aq]s cloud storage
system (https://www.backblaze.com/b2/).
@@ -21520,8 +23545,8 @@ the files are, how much you want to load your computer, etc.
The default of \f[C]\-\-transfers 4\f[R] is definitely too low for
Backblaze B2 though.
.PP
-Note that uploading big files (bigger than 200 MB by default) will use a
-96 MB RAM buffer by default.
+Note that uploading big files (bigger than 200 MiB by default) will use
+a 96 MiB RAM buffer by default.
There can be at most \f[C]\-\-transfers\f[R] of these in use at any
moment, so this sets the upper limit on the memory used.
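A rough upper bound on that memory use, taking the default values quoted above (actual use depends on how many large uploads run concurrently):

```shell
# Upper bound on upload buffer memory: --transfers x 96 MiB buffers.
transfers=4
buffer_mib=96
echo "$((transfers * buffer_mib)) MiB"   # 384 MiB with the default --transfers 4
```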
.SS Versions
@@ -21537,12 +23562,6 @@ file instead of hiding it.
Old versions of files, where available, are visible using the
\f[C]\-\-b2\-versions\f[R] flag.
.PP
-\f[B]NB\f[R] Note that \f[C]\-\-b2\-versions\f[R] does not work with
-crypt at the moment
-#1627 (https://github.com/rclone/rclone/issues/1627).
-Using \-\-backup\-dir (https://rclone.org/docs/#backup-dir-dir) with
-rclone is the recommended way of working around this.
-.PP
If you wish to remove all the old versions then you can use the
\f[C]rclone cleanup remote:bucket\f[R] command which will delete all the
old versions of files, leaving the current ones intact.
@@ -21810,7 +23829,7 @@ Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of
\[dq]\-\-b2\-chunk\-size\[dq].
.PP
-This value should be set no larger than 4.657GiB (== 5GB).
+This value should be set no larger than 4.657 GiB (== 5 GB).
.IP \[bu] 2
Config: upload_cutoff
.IP \[bu] 2
@@ -21818,7 +23837,7 @@ Env Var: RCLONE_B2_UPLOAD_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 200M
+Default: 200Mi
.SS \-\-b2\-copy\-cutoff
.PP
Cutoff for switching to multipart copy
@@ -21826,7 +23845,7 @@ Cutoff for switching to multipart copy
Any files larger than this that need to be server\-side copied will be
copied in chunks of this size.
.PP
-The minimum is 0 and the maximum is 4.6GB.
+The minimum is 0 and the maximum is 4.6 GiB.
.IP \[bu] 2
Config: copy_cutoff
.IP \[bu] 2
@@ -21834,7 +23853,7 @@ Env Var: RCLONE_B2_COPY_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 4G
+Default: 4Gi
.SS \-\-b2\-chunk\-size
.PP
Upload chunk size.
@@ -21851,7 +23870,7 @@ Env Var: RCLONE_B2_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 96M
+Default: 96Mi
.SS \-\-b2\-disable\-checksum
.PP
Disable checksums for large (> upload cutoff) files
@@ -21953,7 +23972,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Box
+.SH Box
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -22234,10 +24253,10 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
be used in JSON strings.
.SS Transfers
.PP
-For files above 50MB rclone will use a chunked transfer.
+For files above 50 MiB rclone will use a chunked transfer.
Rclone will upload up to \f[C]\-\-transfers\f[R] chunks at the same time
(shared among all the multipart uploads).
-Chunks are buffered in memory and are normally 8MB so increasing
+Chunks are buffered in memory and are normally 8 MiB so increasing
\f[C]\-\-transfers\f[R] will increase memory use.
.SS Deleting files
.PP
@@ -22396,7 +24415,7 @@ Type: string
Default: \[dq]0\[dq]
.SS \-\-box\-upload\-cutoff
.PP
-Cutoff for switching to multipart upload (>= 50MB).
+Cutoff for switching to multipart upload (>= 50 MiB).
.IP \[bu] 2
Config: upload_cutoff
.IP \[bu] 2
@@ -22404,7 +24423,7 @@ Env Var: RCLONE_BOX_UPLOAD_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 50M
+Default: 50Mi
.SS \-\-box\-commit\-retries
.PP
Max number of times to try committing a multipart file.
@@ -22449,7 +24468,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Cache (BETA)
+.SH Cache (DEPRECATED)
.PP
The \f[C]cache\f[R] remote wraps another existing remote and stores file
structure and its data for long running tasks like
@@ -22522,11 +24541,11 @@ password:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
- 1 / 1MB
- \[rs] \[dq]1m\[dq]
- 2 / 5 MB
+ 1 / 1 MiB
+ \[rs] \[dq]1M\[dq]
+ 2 / 5 MiB
\[rs] \[dq]5M\[dq]
- 3 / 10 MB
+ 3 / 10 MiB
\[rs] \[dq]10M\[dq]
chunk_size> 2
How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don\[aq]t plan on changing the source FS from outside the cache.
@@ -22543,11 +24562,11 @@ info_age> 2
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
- 1 / 500 MB
+ 1 / 500 MiB
\[rs] \[dq]500M\[dq]
- 2 / 1 GB
+ 2 / 1 GiB
\[rs] \[dq]1G\[dq]
- 3 / 10 GB
+ 3 / 10 GiB
\[rs] \[dq]10G\[dq]
chunk_total_size> 3
Remote config
@@ -22867,27 +24886,27 @@ Env Var: RCLONE_CACHE_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 5M
+Default: 5Mi
.IP \[bu] 2
Examples:
.RS 2
.IP \[bu] 2
-\[dq]1m\[dq]
+\[dq]1M\[dq]
.RS 2
.IP \[bu] 2
-1MB
+1 MiB
.RE
.IP \[bu] 2
\[dq]5M\[dq]
.RS 2
.IP \[bu] 2
-5 MB
+5 MiB
.RE
.IP \[bu] 2
\[dq]10M\[dq]
.RS 2
.IP \[bu] 2
-10 MB
+10 MiB
.RE
.RE
.SS \-\-cache\-info\-age
@@ -22940,7 +24959,7 @@ Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 10G
+Default: 10Gi
.IP \[bu] 2
Examples:
.RS 2
@@ -22948,19 +24967,19 @@ Examples:
\[dq]500M\[dq]
.RS 2
.IP \[bu] 2
-500 MB
+500 MiB
.RE
.IP \[bu] 2
\[dq]1G\[dq]
.RS 2
.IP \[bu] 2
-1 GB
+1 GiB
.RE
.IP \[bu] 2
\[dq]10G\[dq]
.RS 2
.IP \[bu] 2
-10 GB
+10 GiB
.RE
.RE
.SS Advanced Options
@@ -23236,7 +25255,7 @@ Print stats on the cache backend in JSON format.
rclone backend stats remote: [options] [+]
\f[R]
.fi
-.SS Chunker (BETA)
+.SH Chunker (BETA)
.PP
The \f[C]chunker\f[R] overlay transparently splits large files into
smaller chunks during upload to wrapped remote and transparently
@@ -23281,7 +25300,7 @@ Normally should contain a \[aq]:\[aq] and a path, e.g. \[dq]myremote:path/to/dir
Enter a string value. Press Enter for the default (\[dq]\[dq]).
remote> remote:path
Files larger than chunk size will be split in chunks.
-Enter a size with suffix k,M,G,T. Press Enter for the default (\[dq]2G\[dq]).
+Enter a size with suffix K,M,G,T. Press Enter for the default (\[dq]2G\[dq]).
chunk_size> 100M
Choose how chunker handles hash sums. All modes but \[dq]none\[dq] require metadata.
Enter a string value. Press Enter for the default (\[dq]md5\[dq]).
@@ -23607,7 +25626,7 @@ Env Var: RCLONE_CHUNKER_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 2G
+Default: 2Gi
.SS \-\-chunker\-hash\-type
.PP
Choose how chunker handles hash sums.
@@ -23815,7 +25834,7 @@ be used.
This method is EXPERIMENTAL, don\[aq]t use on production systems.
.RE
.RE
-.SS Citrix ShareFile
+.SH Citrix ShareFile
.PP
Citrix ShareFile (https://sharefile.com) is a secure file sharing and
transfer service aimed at businesses.
@@ -23944,10 +25963,10 @@ ShareFile supports MD5 type hashes, so you can use the
\f[C]\-\-checksum\f[R] flag.
.SS Transfers
.PP
-For files above 128MB rclone will use a chunked transfer.
+For files above 128 MiB rclone will use a chunked transfer.
Rclone will upload up to \f[C]\-\-transfers\f[R] chunks at the same time
(shared among all the multipart uploads).
-Chunks are buffered in memory and are normally 64MB so increasing
+Chunks are buffered in memory and are normally 64 MiB so increasing
\f[C]\-\-transfers\f[R] will increase memory use.
.SS Limitations
.PP
@@ -24131,7 +26150,7 @@ Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 128M
+Default: 128Mi
.SS \-\-sharefile\-chunk\-size
.PP
Upload chunk size.
@@ -24148,7 +26167,7 @@ Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 64M
+Default: 64Mi
.SS \-\-sharefile\-endpoint
.PP
Endpoint for API calls.
@@ -24188,7 +26207,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Crypt
+.SH Crypt
.PP
Rclone \f[C]crypt\f[R] remotes encrypt and decrypt other remotes.
.PP
@@ -24943,8 +26962,8 @@ probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of re\-using a
nonce.
.SS Chunk
.PP
-Each chunk will contain 64kB of data, except for the last one which may
-have less data.
+Each chunk will contain 64 KiB of data, except for the last one which
+may have less data.
The data chunk is in standard NaCl SecretBox format.
SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate
messages.
@@ -24972,7 +26991,7 @@ This uses a 32 byte (256 bit key) key derived from the user password.
.PP
49 bytes total
.PP
-1MB (1048576 bytes) file will encrypt to
+1 MiB (1048576 bytes) file will encrypt to
.IP \[bu] 2
32 bytes header
.IP \[bu] 2
@@ -25031,7 +27050,7 @@ For full protection against this you should always use a salt.
.IP \[bu] 2
rclone cryptdecode (https://rclone.org/commands/rclone_cryptdecode/) \-
Show forward/reverse mapping of encrypted filenames
-.SS Compress (Experimental)
+.SH Compress (Experimental)
.SS Warning
.PP
This remote is currently \f[B]experimental\f[R].
@@ -25202,8 +27221,8 @@ Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 20M
-.SS Dropbox
+Default: 20Mi
+.SH Dropbox
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -25384,6 +27403,70 @@ T}
Invalid UTF\-8 bytes will also be
replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
be used in JSON strings.
+.SS Batch mode uploads
+.PP
+Using batch mode uploads is very important for performance when using
+the Dropbox API.
+See the dropbox performance
+guide (https://developers.dropbox.com/dbx-performance-guide) for more
+info.
+.PP
+There are 3 modes rclone can use for uploads.
+.SS \-\-dropbox\-batch\-mode off
+.PP
+In this mode rclone will not use upload batching.
+This was the default before rclone v1.55.
+It has the disadvantage that it is very likely to encounter
+\f[C]too_many_requests\f[R] errors like this
+.IP
+.nf
+\f[C]
+NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
+\f[R]
+.fi
+.PP
+When rclone receives these it has to wait for 15s or sometimes 300s
+before continuing, which really slows down transfers.
+.PP
+This will happen especially if \f[C]\-\-transfers\f[R] is large, so this
+mode isn\[aq]t recommended except for compatibility or investigating
+problems.
+.SS \-\-dropbox\-batch\-mode sync
+.PP
+In this mode rclone will batch up uploads to the size specified by
+\f[C]\-\-dropbox\-batch\-size\f[R] and commit them together.
+.PP
+Using this mode means you can use a much higher \f[C]\-\-transfers\f[R]
+parameter (32 or 64 works fine) without receiving
+\f[C]too_many_requests\f[R] errors.
+.PP
+This mode ensures full data integrity.
+.PP
+Note that there may be a pause when quitting rclone while rclone
+finishes up the last batch using this mode.
+.SS \-\-dropbox\-batch\-mode async
+.PP
+In this mode rclone will batch up uploads to the size specified by
+\f[C]\-\-dropbox\-batch\-size\f[R] and commit them together.
+.PP
+However it will not wait for the status of the batch to be returned to
+the caller.
+This means rclone can use a much bigger batch size (much bigger than
+\f[C]\-\-transfers\f[R]), at the cost of not being able to check the
+status of the upload.
+.PP
+This provides the maximum possible upload speed especially with lots of
+small files, however rclone can\[aq]t check the file got uploaded
+properly using this mode.
+.PP
+If you are using this mode then using \[dq]rclone check\[dq] after the
+transfer completes is recommended.
+Or you could do an initial transfer with
+\f[C]\-\-dropbox\-batch\-mode async\f[R] then do a final transfer with
+\f[C]\-\-dropbox\-batch\-mode sync\f[R] (the default).
+.PP
+Note that there may be a pause when quitting rclone while rclone
+finishes up the last batch using this mode.
.SS Standard Options
.PP
Here are the standard options specific to dropbox (Dropbox).
@@ -25450,14 +27533,14 @@ Default: \[dq]\[dq]
.SS \-\-dropbox\-chunk\-size
.PP
Upload chunk size.
-(< 150M).
+(< 150Mi).
.PP
Any files larger than this will be uploaded in chunks of this size.
.PP
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries.
Setting this larger will increase the speed slightly (at most 10% for
-128MB in tests) at the cost of using more memory.
+128 MiB in tests) at the cost of using more memory.
It can be set smaller if you are tight on memory.
.IP \[bu] 2
Config: chunk_size
@@ -25466,7 +27549,7 @@ Env Var: RCLONE_DROPBOX_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 48M
+Default: 48Mi
.SS \-\-dropbox\-impersonate
.PP
Impersonate this user when using a business account.
@@ -25532,6 +27615,85 @@ Env Var: RCLONE_DROPBOX_SHARED_FOLDERS
Type: bool
.IP \[bu] 2
Default: false
+.SS \-\-dropbox\-batch\-mode
+.PP
+Upload file batching sync|async|off.
+.PP
+This sets the batch mode used by rclone.
+.PP
+For full info see the main docs (https://rclone.org/dropbox/#batch-mode)
+.PP
+This has 3 possible values
+.IP \[bu] 2
+off \- no batching
+.IP \[bu] 2
+sync \- batch uploads and check completion (default)
+.IP \[bu] 2
+async \- batch upload and don\[aq]t check completion
+.PP
+Rclone will close any outstanding batches when it exits, which may
+cause a delay on quit.
+.IP \[bu] 2
+Config: batch_mode
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_BATCH_MODE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]sync\[dq]
+.SS \-\-dropbox\-batch\-size
+.PP
+Max number of files in upload batch.
+.PP
+This sets the batch size of files to upload.
+It has to be less than 1000.
+.PP
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+.IP \[bu] 2
+batch_mode: async \- default batch_size is 100
+.IP \[bu] 2
+batch_mode: sync \- default batch_size is the same as \-\-transfers
+.IP \[bu] 2
+batch_mode: off \- not in use
+.PP
+Rclone will close any outstanding batches when it exits, which may
+cause a delay on quit.
+.PP
+Setting this is a great idea if you are uploading lots of small files
+as it will make the uploads a lot quicker.
+You can use \-\-transfers 32 to maximise throughput.
+.IP \[bu] 2
+Config: batch_size
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_BATCH_SIZE
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 0
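The default selection described above can be sketched as shell pseudo-logic (an illustration of the documented defaults only, not rclone's actual source):

```shell
# batch_size=0 means "pick a default based on batch_mode".
batch_mode=sync
transfers=32
batch_size=0
if [ "$batch_size" -eq 0 ]; then
  case "$batch_mode" in
    async) batch_size=100 ;;          # documented default for async mode
    sync)  batch_size=$transfers ;;   # same as --transfers
    off)   : ;;                       # batching not in use
  esac
fi
echo "$batch_size"   # 32
```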
+.SS \-\-dropbox\-batch\-timeout
+.PP
+Max time to allow an idle upload batch before uploading.
+.PP
+If an upload batch is idle for more than this long then it will be
+uploaded.
+.PP
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+.IP \[bu] 2
+batch_mode: async \- default batch_timeout is 500ms
+.IP \[bu] 2
+batch_mode: sync \- default batch_timeout is 10s
+.IP \[bu] 2
+batch_mode: off \- not in use
+.IP \[bu] 2
+Config: batch_timeout
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 0s
.SS \-\-dropbox\-encoding
.PP
This sets the encoding for the backend.
@@ -25571,6 +27733,16 @@ If you have more than 10,000 files in a directory then
\f[C]Failed to purge: There are too many files involved in this operation\f[R].
As a work\-around do an \f[C]rclone delete dropbox:dir\f[R] followed by
an \f[C]rclone rmdir dropbox:dir\f[R].
+.PP
+When using \f[C]rclone link\f[R] you\[aq]ll need to set
+\f[C]\-\-expire\f[R] if using a non\-personal account, otherwise the
+visibility may not be correct.
+(Note that \f[C]\-\-expire\f[R] isn\[aq]t supported on personal
+accounts).
+See the forum
+discussion (https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211)
+and the dropbox SDK
+issue (https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
.SS Get your own Dropbox App ID
.PP
When you use rclone with Dropbox in its default configuration you are
@@ -25595,11 +27767,26 @@ example
.IP "5." 3
Click the button \f[C]Create App\f[R]
.IP "6." 3
-Fill \f[C]Redirect URIs\f[R] as \f[C]http://localhost:53682/\f[R]
+Switch to the \f[C]Permissions\f[R] tab.
+Enable at least the following permissions: \f[C]account_info.read\f[R],
+\f[C]files.metadata.write\f[R], \f[C]files.content.write\f[R],
+\f[C]files.content.read\f[R], \f[C]sharing.write\f[R].
+The \f[C]files.metadata.read\f[R] and \f[C]sharing.read\f[R] checkboxes
+will be marked too.
+Click \f[C]Submit\f[R].
.IP "7." 3
-Find the \f[C]App key\f[R] and \f[C]App secret\f[R] Use these values in
-rclone config to add a new remote or edit an existing remote.
-.SS Enterprise File Fabric
+Switch to the \f[C]Settings\f[R] tab.
+Fill \f[C]OAuth2 \- Redirect URIs\f[R] as
+\f[C]http://localhost:53682/\f[R]
+.IP "8." 3
+Find the \f[C]App key\f[R] and \f[C]App secret\f[R] values on the
+\f[C]Settings\f[R] tab.
+Use these values in rclone config to add a new remote or edit an
+existing remote.
+The \f[C]App key\f[R] setting corresponds to \f[C]client_id\f[R] in
+rclone config, and the \f[C]App secret\f[R] corresponds to
+\f[C]client_secret\f[R].
+.SH Enterprise File Fabric
.PP
This backend supports Storage Made Easy\[aq]s Enterprise File
Fabric\[tm] (https://storagemadeeasy.com/about/) which provides a
@@ -25898,7 +28085,7 @@ Env Var: RCLONE_FILEFABRIC_ENCODING
Type: MultiEncoder
.IP \[bu] 2
Default: Slash,Del,Ctl,InvalidUtf8,Dot
-.SS FTP
+.SH FTP
.PP
FTP is the File Transfer Protocol.
Rclone FTP support is provided using the
@@ -26296,7 +28483,7 @@ T}@T{
\f[C]\[rs] [ ]\f[R]
T}
.TE
-.SS Google Cloud Storage
+.SH Google Cloud Storage
.PP
Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
@@ -26565,11 +28752,29 @@ Eg \f[C]\-\-header\-upload \[dq]Content\-Type text/potato\[dq]\f[R]
.PP
Note that the last of these is for setting custom metadata in the form
\f[C]\-\-header\-upload \[dq]x\-goog\-meta\-key: value\[dq]\f[R]
-.SS Modified time
+.SS Modification time
.PP
-Google google cloud storage stores md5sums natively and rclone stores
-modification times as metadata on the object, under the \[dq]mtime\[dq]
-key in RFC3339 format accurate to 1ns.
+Google Cloud Storage stores md5sum natively.
+Google\[aq]s gsutil (https://cloud.google.com/storage/docs/gsutil) tool
+stores modification time with one\-second precision as
+\f[C]goog\-reserved\-file\-mtime\f[R] in file metadata.
+.PP
+To ensure compatibility with gsutil, rclone stores modification time in
+2 separate metadata entries.
+\f[C]mtime\f[R] uses RFC3339 format with one\-nanosecond precision.
+\f[C]goog\-reserved\-file\-mtime\f[R] uses Unix timestamp format with
+one\-second precision.
+To get modification time from object metadata, rclone reads the metadata
+in the following order: \f[C]mtime\f[R],
+\f[C]goog\-reserved\-file\-mtime\f[R], object updated time.
+.PP
+Note that rclone\[aq]s default modify window is 1ns.
+Files uploaded by gsutil only contain timestamps with one\-second
+precision.
+If you use rclone to sync files previously uploaded by gsutil, rclone
+will attempt to update modification time for all these files.
+To avoid these possibly unnecessary updates, use
+\f[C]\-\-modify\-window 1s\f[R].
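For example, a sync against a bucket whose objects were first uploaded with gsutil might look like this (a sketch; the remote name `gcs:` and the paths are placeholders):

```shell
# Widen the modify window to one second so gsutil's second-precision
# timestamps compare equal and are not needlessly rewritten.
rclone sync --modify-window 1s /path/to/files gcs:bucket
```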
.SS Restricted filename characters
.PP
.TS
@@ -27076,7 +29281,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Google Drive
+.SH Google Drive
.PP
Paths are specified as \f[C]drive:path\f[R]
.PP
@@ -28352,7 +30557,7 @@ Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 8M
+Default: 8Mi
.SS \-\-drive\-chunk\-size
.PP
Upload chunk size.
@@ -28369,7 +30574,7 @@ Env Var: RCLONE_DRIVE_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 8M
+Default: 8Mi
.SS \-\-drive\-acknowledge\-abuse
.PP
Set to allow files which return cannotDownloadAbusiveFile to be
@@ -28496,7 +30701,7 @@ Default: true
.PP
Make upload limit errors be fatal
.PP
-At the time of writing it is only possible to upload 750GB of data to
+At the time of writing it is only possible to upload 750 GiB of data to
Google Drive a day (this is an undocumented limit).
When this limit is reached Google Drive produces a slightly different
error message.
@@ -28519,8 +30724,8 @@ Default: false
.PP
Make download limit errors be fatal
.PP
-At the time of writing it is only possible to download 10TB of data from
-Google Drive a day (this is an undocumented limit).
+At the time of writing it is only possible to download 10 TiB of data
+from Google Drive a day (this is an undocumented limit).
When this limit is reached Google Drive produces a slightly different
error message.
When this flag is set it causes these errors to be fatal.
@@ -28790,7 +30995,7 @@ Use the \-i flag to see what would be copied before copying.
Drive has quite a lot of rate limiting.
This causes rclone to be limited to transferring about 2 files per
second only.
-Individual files may be transferred much faster at 100s of MBytes/s but
+Individual files may be transferred much faster at 100s of MiB/s but
lots of small files can take a long time.
.PP
Server side copies are also subject to a separate rate limit.
@@ -28892,18 +31097,21 @@ Click again on \[dq]Credentials\[dq] on the left panel to go back to the
(PS: if you are a GSuite user, you could also select \[dq]Internal\[dq]
instead of \[dq]External\[dq] above, but this has not been
tested/documented so far).
-.IP "6." 3
+.IP " 6." 4
Click on the \[dq]+ CREATE CREDENTIALS\[dq] button at the top of the
screen, then select \[dq]OAuth client ID\[dq].
-.IP "7." 3
+.IP " 7." 4
Choose an application type of \[dq]Desktop app\[dq] if you are using a
Google account or \[dq]Other\[dq] if you are using a GSuite account and
click \[dq]Create\[dq].
(the default name is fine)
-.IP "8." 3
+.IP " 8." 4
It will show you a client ID and client secret.
-Use these values in rclone config to add a new remote or edit an
-existing remote.
+Make a note of these.
+.IP " 9." 4
+Go to \[dq]OAuth consent screen\[dq] and press \[dq]Publish App\[dq].
+.IP "10." 4
+Provide the noted client ID and client secret to rclone.
.PP
Be aware that, due to the \[dq]enhanced security\[dq] recently
introduced by Google, you are theoretically expected to \[dq]submit your
@@ -28926,7 +31134,7 @@ page.
Just push the Enable the Drive API button to receive the Client ID and
Secret.
Note that it will automatically create a new project in the API Console.
-.SS Google Photos
+.SH Google Photos
.PP
The rclone backend for Google
Photos (https://www.google.com/photos/about/) is a specialized backend
@@ -29425,7 +31633,7 @@ Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED
Type: bool
.IP \[bu] 2
Default: false
-.SS HDFS
+.SH HDFS
.PP
HDFS (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html)
is a distributed file\-system, part of the Apache
@@ -29671,7 +31879,7 @@ system).
Kerberos service principal name for the namenode
.PP
Enables KERBEROS authentication.
-Specifies the Service Principal Name (/) for the namenode.
+Specifies the Service Principal Name (SERVICE/FQDN) for the namenode.
.IP \[bu] 2
Config: service_principal_name
.IP \[bu] 2
@@ -29733,7 +31941,7 @@ Env Var: RCLONE_HDFS_ENCODING
Type: MultiEncoder
.IP \[bu] 2
Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
-.SS HTTP
+.SH HTTP
.PP
The HTTP remote is a read only remote for reading files of a webserver.
The webserver should provide file listings which rclone will read and
@@ -29966,7 +32174,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Hubic
+.SH Hubic
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -30160,7 +32368,7 @@ Default: \[dq]\[dq]
Above this size files will be chunked into a _segments container.
.PP
Above this size files will be chunked into a _segments container.
-The default for this is 5GB which is its maximum value.
+The default for this is 5 GiB which is its maximum value.
.IP \[bu] 2
Config: chunk_size
.IP \[bu] 2
@@ -30168,7 +32376,7 @@ Env Var: RCLONE_HUBIC_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 5G
+Default: 5Gi
.SS \-\-hubic\-no\-chunk
.PP
Don\[aq]t chunk files during streaming upload.
@@ -30177,7 +32385,7 @@ When doing streaming uploads (e.g.
using rcat or mount) setting this flag will cause the swift backend to
not upload chunked files.
.PP
-This will limit the maximum upload size to 5GB.
+This will limit the maximum upload size to 5 GiB.
However non chunked files are easier to deal with and have an MD5SUM.
.PP
Rclone will still chunk files bigger than chunk_size when doing normal
@@ -30212,7 +32420,7 @@ credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
MD5SUM for these.
-.SS Jottacloud
+.SH Jottacloud
.PP
Jottacloud is a cloud storage service provider from a Norwegian company,
using its own datacenters in Norway.
@@ -30489,6 +32697,13 @@ When rclone uploads a new version of a file it creates a new version of
it.
Currently rclone only supports retrieving the current version but older
versions can be accessed via the Jottacloud Website.
+.PP
+Versioning can be disabled with the
+\f[C]\-\-jottacloud\-no\-versions\f[R] option.
+This is achieved by deleting the remote file prior to uploading a new
+version.
+If the upload fails, no version of the file will be available in the
+remote.
.SS Quota information
.PP
To view your current quota you can use the
@@ -30508,7 +32723,7 @@ Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 10M
+Default: 10Mi
.SS \-\-jottacloud\-trashed\-only
.PP
Only show files that are in the trash.
@@ -30542,7 +32757,19 @@ Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 10M
+Default: 10Mi
+.SS \-\-jottacloud\-no\-versions
+.PP
+Avoid server\-side versioning by deleting and recreating files instead
+of overwriting them.
+.IP \[bu] 2
+Config: no_versions
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS \-\-jottacloud\-encoding
.PP
This sets the encoding for the backend.
@@ -30577,7 +32804,7 @@ Jottacloud exhibits some inconsistent behaviours regarding deleted files
and folders which may cause Copy, Move and DirMove operations to
previously deleted paths to fail.
Emptying the trash should help in such cases.
-.SS Koofr
+.SH Koofr
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -30794,7 +33021,7 @@ Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
.PP
Note that Koofr is case insensitive so you can\[aq]t have a file called
\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq].
-.SS Mail.ru Cloud
+.SH Mail.ru Cloud
.PP
Mail.ru Cloud (https://cloud.mail.ru/) is a cloud storage provided by a
Russian internet company Mail.Ru Group (https://mail.ru).
@@ -31170,7 +33397,7 @@ Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 3G
+Default: 3Gi
.IP \[bu] 2
Examples:
.RS 2
@@ -31203,7 +33430,7 @@ Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 32M
+Default: 32Mi
.IP \[bu] 2
Examples:
.RS 2
@@ -31299,7 +33526,7 @@ Type: MultiEncoder
.IP \[bu] 2
Default:
Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
-.SS Mega
+.SH Mega
.PP
Mega (https://mega.nz/) is a cloud storage and file hosting service
known for its security feature where all files are encrypted locally
@@ -31573,7 +33800,7 @@ beyond the mega C++ SDK (https://github.com/meganz/sdk) source code so
there are likely quite a few errors still remaining in this library.
.PP
Mega allows duplicate files which may confuse rclone.
-.SS Memory
+.SH Memory
.PP
The memory backend is an in RAM backend.
It does not persist its data \- use the local backend for that.
@@ -31635,7 +33862,7 @@ to 1 nS.
.PP
The memory backend replaces the default restricted characters
set (https://rclone.org/overview/#restricted-characters).
-.SS Microsoft Azure Blob Storage
+.SH Microsoft Azure Blob Storage
.PP
Paths are specified as \f[C]remote:container\f[R] (or \f[C]remote:\f[R]
for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
@@ -31875,16 +34102,18 @@ interactive login.
.IP
.nf
\f[C]
-$ az sp create\-for\-rbac \-\-name \[dq]\[dq] \[rs]
+$ az ad sp create\-for\-rbac \-\-name \[dq]\[dq] \[rs]
\-\-role \[dq]Storage Blob Data Owner\[dq] \[rs]
\-\-scopes \[dq]/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/\[dq] \[rs]
> azure\-principal.json
\f[R]
.fi
.PP
-See Use Azure CLI to assign an Azure role for access to blob and queue
-data (https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli)
-for more details.
+See \[dq]Create an Azure service
+principal\[dq] (https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli)
+and \[dq]Assign an Azure role for access to blob
+data\[dq] (https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli)
+pages for more details.
.IP \[bu] 2
Config: service_principal_file
.IP \[bu] 2
@@ -32004,7 +34233,7 @@ Type: string
Default: \[dq]\[dq]
.SS \-\-azureblob\-upload\-cutoff
.PP
-Cutoff for switching to chunked upload (<= 256MB).
+Cutoff for switching to chunked upload (<= 256 MiB).
(Deprecated)
.IP \[bu] 2
Config: upload_cutoff
@@ -32016,7 +34245,7 @@ Type: string
Default: \[dq]\[dq]
.SS \-\-azureblob\-chunk\-size
.PP
-Upload chunk size (<= 100MB).
+Upload chunk size (<= 100 MiB).
.PP
Note that this is stored in memory and there may be up to
\[dq]\-\-transfers\[dq] chunks stored at once in memory.
@@ -32027,7 +34256,7 @@ Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 4M
+Default: 4Mi
.SS \-\-azureblob\-list\-chunk
.PP
Size of blob list.
@@ -32216,7 +34445,7 @@ azure storage emulator installed locally and set up a new remote with
\f[C]rclone config\f[R] follow instructions described in introduction,
set \f[C]use_emulator\f[R] config as \f[C]true\f[R], you do not need to
provide default account name or key if using emulator.
-.SS Microsoft OneDrive
+.SH Microsoft OneDrive
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -32663,7 +34892,7 @@ Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 10M
+Default: 10Mi
.SS \-\-onedrive\-drive\-id
.PP
The ID of the drive to use
@@ -32722,6 +34951,17 @@ Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS
Type: bool
.IP \[bu] 2
Default: false
+.SS \-\-onedrive\-list\-chunk
+.PP
+Size of listing chunk.
+.IP \[bu] 2
+Config: list_chunk
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_LIST_CHUNK
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 1000
.SS \-\-onedrive\-no\-versions
.PP
Remove all versions on modifying operations
@@ -32859,7 +35099,7 @@ For example if a file has a \f[C]?\f[R] in it will be mapped to
\f[C]\[uFF1F]\f[R] instead.
.SS File sizes
.PP
-The largest allowed file size is 250GB for both OneDrive Personal and
+The largest allowed file size is 250 GiB for both OneDrive Personal and
OneDrive for Business (Updated 13 Jan
2021) (https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
.SS Path length
@@ -32978,6 +35218,15 @@ rclone cleanup remote:path/subdir # unconditionally remove all old version fo
.PP
\f[B]NB\f[R] Onedrive personal can\[aq]t currently delete versions
.SS Troubleshooting
+.SS Excessive throttling or blocked on SharePoint
+.PP
+If you experience excessive throttling or are being blocked on SharePoint
+then it may help to set the user agent explicitly with a flag like this:
+\f[C]\-\-user\-agent \[dq]ISV|rclone.org|rclone/v1.55.1\[dq]\f[R]
+.PP
+The specific details can be found in the Microsoft document: Avoid
+getting throttled or blocked in SharePoint
+Online (https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
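.PP
As a sketch, decorating traffic per the Microsoft convention could look like
this (the remote name \f[C]sharepoint:\f[R] and paths are hypothetical; the
snippet prints the command rather than contacting a real site):

```shell
# Hypothetical remote "sharepoint:"; the user agent string follows
# Microsoft's "ISV|CompanyName|AppName/Version" decoration convention.
UA='ISV|rclone.org|rclone/v1.55.1'
# Print the full command instead of running a real transfer
echo rclone sync /local/dir sharepoint:backup --user-agent "$UA"
```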
.SS Unexpected file size/hash differences on Sharepoint
.PP
It is a
@@ -33064,7 +35313,7 @@ this question: \f[C]Already have a token \- refresh?\f[R].
For this question, answer \f[C]y\f[R] and go through the process to
refresh your token, just like the first time the backend is configured.
After this, rclone should work again for this backend.
-.SS OpenDrive
+.SH OpenDrive
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -33345,7 +35594,7 @@ Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 10M
+Default: 10Mi
.SS Limitations
.PP
Note that OpenDrive is case insensitive so you can\[aq]t have a file
@@ -33368,7 +35617,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS QingStor
+.SH QingStor
.PP
Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
@@ -33488,7 +35737,7 @@ details.
.SS Multipart uploads
.PP
rclone supports multipart uploads with QingStor which means that it can
-upload files bigger than 5GB.
+upload files bigger than 5 GiB.
Note that files uploaded with multipart upload don\[aq]t have an MD5SUM.
.PP
Note that incomplete multipart uploads older than 24 hours can be
@@ -33671,7 +35920,7 @@ Default: 3
Cutoff for switching to chunked upload
.PP
Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5GB.
+The minimum is 0 and the maximum is 5 GiB.
.IP \[bu] 2
Config: upload_cutoff
.IP \[bu] 2
@@ -33679,7 +35928,7 @@ Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 200M
+Default: 200Mi
.SS \-\-qingstor\-chunk\-size
.PP
Chunk size to use for uploading.
@@ -33699,7 +35948,7 @@ Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 4M
+Default: 4Mi
.SS \-\-qingstor\-upload\-concurrency
.PP
Concurrency for multipart uploads.
@@ -33745,7 +35994,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Swift
+.SH Swift
.PP
Swift refers to OpenStack Object
Storage (https://docs.openstack.org/swift/latest/).
@@ -34334,7 +36583,7 @@ Default: false
Above this size files will be chunked into a _segments container.
.PP
Above this size files will be chunked into a _segments container.
-The default for this is 5GB which is its maximum value.
+The default for this is 5 GiB which is its maximum value.
.IP \[bu] 2
Config: chunk_size
.IP \[bu] 2
@@ -34342,7 +36591,7 @@ Env Var: RCLONE_SWIFT_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
-Default: 5G
+Default: 5Gi
.SS \-\-swift\-no\-chunk
.PP
Don\[aq]t chunk files during streaming upload.
@@ -34351,7 +36600,7 @@ When doing streaming uploads (e.g.
using rcat or mount) setting this flag will cause the swift backend to
not upload chunked files.
.PP
-This will limit the maximum upload size to 5GB.
+This will limit the maximum upload size to 5 GiB.
However non chunked files are easier to deal with and have an MD5SUM.
.PP
Rclone will still chunk files bigger than chunk_size when doing normal
@@ -34440,7 +36689,7 @@ OVH).
.PP
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
-.SS pCloud
+.SH pCloud
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -34727,7 +36976,7 @@ Original/US region
EU region
.RE
.RE
-.SS premiumize.me
+.SH premiumize.me
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -34910,7 +37159,7 @@ rclone maps these to and from an identical looking unicode equivalents
\f[C]\[uFF3C]\f[R] and \f[C]\[uFF02]\f[R]
.PP
premiumize.me only supports filenames up to 255 characters in length.
-.SS put.io
+.SH put.io
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -35066,7 +37315,7 @@ Env Var: RCLONE_PUTIO_ENCODING
Type: MultiEncoder
.IP \[bu] 2
Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-.SS Seafile
+.SH Seafile
.PP
This is a backend for the Seafile (https://www.seafile.com/) storage
service: \- It works with both the free community edition or the
@@ -35536,7 +37785,7 @@ Env Var: RCLONE_SEAFILE_ENCODING
Type: MultiEncoder
.IP \[bu] 2
Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
-.SS SFTP
+.SH SFTP
.PP
SFTP is the Secure (or SSH) File Transfer
Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
@@ -35554,6 +37803,12 @@ Paths are specified as \f[C]remote:path\f[R].
If the path does not begin with a \f[C]/\f[R] it is relative to the home
directory of the user.
An empty path \f[C]remote:\f[R] refers to the user\[aq]s home directory.
+For example, \f[C]rclone lsd remote:\f[R] would list the home directory
+of the user configured in the rclone remote config
+(i.e. \f[C]/home/sftpuser\f[R]).
+However, \f[C]rclone lsd remote:/\f[R] would list the root directory of
+the remote machine (i.e.
+\f[C]/\f[R]).
.PP
\[dq]Note that some SFTP servers will need the leading / \- Synology is
a good example of this.
@@ -35627,6 +37882,14 @@ rclone lsd remote:
\f[R]
.fi
.PP
+See all directories in the root directory
+.IP
+.nf
+\f[C]
+rclone lsd remote:/
+\f[R]
+.fi
+.PP
Make a new directory
.IP
.nf
@@ -35651,6 +37914,15 @@ any excess files in the directory.
rclone sync \-i /home/local/directory remote:directory
\f[R]
.fi
+.PP
+Mount the remote path \f[C]/srv/www\-data/\f[R] to the local path
+\f[C]/mnt/www\-data\f[R]
+.IP
+.nf
+\f[C]
+rclone mount remote:/srv/www\-data/ /mnt/www\-data
+\f[R]
+.fi
.SS SSH Authentication
.PP
The SFTP remote supports three authentication methods:
@@ -36219,6 +38491,22 @@ Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_READS
Type: bool
.IP \[bu] 2
Default: false
+.SS \-\-sftp\-disable\-concurrent\-writes
+.PP
+If set, don\[aq]t use concurrent writes
+.PP
+Normally rclone uses concurrent writes to upload files.
+This improves the performance greatly, especially for distant servers.
+.PP
+This option disables concurrent writes should that be necessary.
+.IP \[bu] 2
+Config: disable_concurrent_writes
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_WRITES
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS \-\-sftp\-idle\-timeout
.PP
Max time before closing idle connections
@@ -36292,7 +38580,7 @@ rsync.net is supported through the SFTP backend.
.PP
See rsync.net\[aq]s documentation of rclone
examples (https://www.rsync.net/products/rclone.html).
-.SS SugarSync
+.SH SugarSync
.PP
SugarSync (https://sugarsync.com) is a cloud service that enables active
synchronization of files across computers and other devices for file
@@ -36585,7 +38873,7 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Tardigrade
+.SH Tardigrade
.PP
Tardigrade (https://tardigrade.io) is an encrypted, secure, and
cost\-effective object storage service that enables you to store, back
@@ -37006,7 +39294,210 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) See rclone
about (https://rclone.org/commands/rclone_about/)
-.SS Union
+.SS Known issues
+.PP
+If you get errors like \f[C]too many open files\f[R] this usually
+happens when the default \f[C]ulimit\f[R] for system max open files is
+exceeded.
+Native Storj protocol opens a large number of TCP connections (each of
+which is counted as an open file).
+For a single upload stream you can expect 110 TCP connections to be
+opened.
+For a single download stream you can expect 35.
+This batch of connections will be opened for every 64 MiB segment and
+you should also expect TCP connections to be reused.
+If you do many transfers you eventually open a connection to most
+storage nodes (thousands of nodes).
+.PP
+To fix these, please raise your system limits.
+You can do this by issuing \f[C]ulimit \-n 65536\f[R] just before you
+run rclone.
+To change the limits more permanently you can add this to your shell
+startup script, e.g.
+\f[C]$HOME/.bashrc\f[R], or change the system\-wide configuration,
+usually \f[C]/etc/sysctl.conf\f[R] and/or
+\f[C]/etc/security/limits.conf\f[R], but please refer to your operating
+system manual.
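.PP
The session\-level fix above can be sketched as a small shell snippet (the
transfer command is illustrative only):

```shell
# Raise the soft open-files limit to the hard limit for this shell,
# so the many TCP connections the native Storj protocol opens are
# not refused with "too many open files".
ulimit -n "$(ulimit -Hn)"
# Confirm the new soft limit
ulimit -Sn
# Then run the transfer in the same shell, e.g.
# rclone copy /local/path remote:bucket/path
```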
+.SH Uptobox
+.PP
+This is a backend for the Uptobox file storage service.
+Uptobox is closer to a one\-click hoster than a traditional cloud
+storage provider and is therefore not suitable for long\-term storage.
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[C]remote:directory/subdirectory\f[R].
+.SS Setup
+.PP
+To configure an Uptobox backend you\[aq]ll need your personal api token.
+You\[aq]ll find it in your account
+settings (https://uptobox.com/my_account)
+.SS Example
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R] with
+the default setup.
+First run:
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+Current remotes:
+
+Name Type
+==== ====
+TestUptobox uptobox
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+name> uptobox
+Type of storage to configure.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+[...]
+37 / Uptobox
+ \[rs] \[dq]uptobox\[dq]
+[...]
+Storage> uptobox
+** See help for uptobox backend at: https://rclone.org/uptobox/ **
+
+Your API Key, get it from https://uptobox.com/my_account
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[uptobox]
+type = uptobox
+api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+\f[R]
+.fi
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
+List directories in top level of your Uptobox
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+List all the files in your Uptobox
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
+To copy a local directory to an Uptobox directory called backup
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modified time and hashes
+.PP
+Uptobox supports neither modified times nor checksums.
+.SS Restricted filename characters
+.PP
+In addition to the default restricted characters
+set (https://rclone.org/overview/#restricted-characters) the following
+characters are also replaced:
+.PP
+.TS
+tab(@);
+l c c.
+T{
+Character
+T}@T{
+Value
+T}@T{
+Replacement
+T}
+_
+T{
+\[dq]
+T}@T{
+0x22
+T}@T{
+\[uFF02]
+T}
+T{
+\[ga]
+T}@T{
+0x60
+T}@T{
+\[uFF40]
+T}
+.TE
+.PP
+Invalid UTF\-8 bytes will also be
+replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
+be used in XML strings.
+.SS Standard Options
+.PP
+Here are the standard options specific to uptobox (Uptobox).
+.SS \-\-uptobox\-access\-token
+.PP
+Your access token, get it from https://uptobox.com/my_account
+.IP \[bu] 2
+Config: access_token
+.IP \[bu] 2
+Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]\[dq]
+.SS Advanced Options
+.PP
+Here are the advanced options specific to uptobox (Uptobox).
+.SS \-\-uptobox\-encoding
+.PP
+This sets the encoding for the backend.
+.PP
+See: the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_UPTOBOX_ENCODING
+.IP \[bu] 2
+Type: MultiEncoder
+.IP \[bu] 2
+Default:
+Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
+.SS Limitations
+.PP
+Uptobox will delete inactive files that have not been accessed in 60
+days.
+.PP
+\f[C]rclone about\f[R] is not supported by this backend.
+An overview of used space can, however, be seen in the Uptobox web
+interface.
+.SH Union
.PP
The \f[C]union\f[R] remote provides a unification similar to UnionFS
using other remotes.
@@ -37439,7 +39930,7 @@ Env Var: RCLONE_UNION_CACHE_TIME
Type: int
.IP \[bu] 2
Default: 120
-.SS WebDAV
+.SH WebDAV
.PP
Paths are specified as \f[C]remote:path\f[R]
.PP
@@ -37702,6 +40193,28 @@ Env Var: RCLONE_WEBDAV_ENCODING
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
+.SS \-\-webdav\-headers
+.PP
+Set HTTP headers for all transactions
+.PP
+Use this to set additional HTTP headers for all transactions
+.PP
+The input format is a comma separated list of key,value pairs.
+Standard CSV encoding (https://godoc.org/encoding/csv) may be used.
+.PP
+For example to set a Cookie use \[aq]Cookie,name=value\[aq], or
+\[aq]\[dq]Cookie\[dq],\[dq]name=value\[dq]\[aq].
+.PP
+You can set multiple headers, e.g.
+\[aq]\[dq]Cookie\[dq],\[dq]name=value\[dq],\[dq]Authorization\[dq],\[dq]xxx\[dq]\[aq].
+.IP \[bu] 2
+Config: headers
+.IP \[bu] 2
+Env Var: RCLONE_WEBDAV_HEADERS
+.IP \[bu] 2
+Type: CommaSepList
+.IP \[bu] 2
+Default:
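.PP
As a sketch with a hypothetical remote name \f[C]mywebdav:\f[R], the same
header pairs can be supplied via the flag or the
\f[C]RCLONE_WEBDAV_HEADERS\f[R] environment variable (the snippet prints
the flag form rather than contacting a real server):

```shell
# Headers are key,value pairs; multiple pairs are comma separated
HEADERS='Cookie,name=value,Authorization,xxx'
# Equivalent environment-variable form
export RCLONE_WEBDAV_HEADERS="$HEADERS"
# Print the flag form instead of running against a live remote
echo rclone lsd mywebdav: --webdav-headers "$HEADERS"
```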
.SS Provider notes
.PP
See below for notes on specific providers.
@@ -37904,7 +40417,7 @@ vendor = other
bearer_token_command = oidc\-token XDC
\f[R]
.fi
-.SS Yandex Disk
+.SH Yandex Disk
.PP
Yandex Disk (https://disk.yandex.com) is a cloud storage solution
created by Yandex (https://yandex.com).
@@ -37953,7 +40466,7 @@ Got code
[remote]
client_id =
client_secret =
-token = {\[dq]access_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]expiry\[dq]:\[dq]2016\-12\-29T12:27:11.362788025Z\[dq]}
+token = {\[dq]access_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]token_type\[dq]:\[dq]OAuth\[dq],\[dq]expiry\[dq]:\[dq]2016\-12\-29T12:27:11.362788025Z\[dq]}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y) Yes this is OK
e) Edit this remote
@@ -38038,8 +40551,8 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
be used in JSON strings.
.SS Limitations
.PP
-When uploading very large files (bigger than about 5GB) you will need to
-increase the \f[C]\-\-timeout\f[R] parameter.
+When uploading very large files (bigger than about 5 GiB) you will need
+to increase the \f[C]\-\-timeout\f[R] parameter.
This is because Yandex pauses (perhaps to calculate the MD5SUM for the
entire file) before returning confirmation that the file has been
uploaded.
@@ -38047,8 +40560,8 @@ The default handling of timeouts in rclone is to assume a 5 minute pause
is an error and close the connection \- you\[aq]ll see
\f[C]net/http: timeout awaiting response headers\f[R] errors in the logs
if this is happening.
-Setting the timeout to twice the max size of file in GB should be
-enough, so if you want to upload a 30GB file set a timeout of
+Setting the timeout to twice the max size of file in GiB should be
+enough, so if you want to upload a 30 GiB file set a timeout of
\f[C]2 * 30 = 60m\f[R], that is \f[C]\-\-timeout 60m\f[R].
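.PP
The rule of thumb above (timeout in minutes = 2 x file size in GiB) can be
computed in the shell; the 30 GiB size is the example from the text:

```shell
# Timeout in minutes = 2 x file size in GiB (rule of thumb above)
SIZE_GIB=30
TIMEOUT="$((2 * SIZE_GIB))m"
echo "--timeout $TIMEOUT"   # prints: --timeout 60m
```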
.SS Standard Options
.PP
@@ -38127,7 +40640,7 @@ Env Var: RCLONE_YANDEX_ENCODING
Type: MultiEncoder
.IP \[bu] 2
Default: Slash,Del,Ctl,InvalidUtf8,Dot
-.SS Zoho Workdrive
+.SH Zoho Workdrive
.PP
Zoho WorkDrive (https://www.zoho.com/workdrive/) is a cloud storage
solution created by Zoho (https://zoho.com).
@@ -38296,7 +40809,10 @@ Default: \[dq]\[dq]
.SS \-\-zoho\-region
.PP
Zoho region to connect to.
-You\[aq]ll have to use the region you organization is registered in.
+.PP
+You\[aq]ll have to use the region your organization is registered in.
+If not sure, use the same top\-level domain as you connect to in your
+browser.
.IP \[bu] 2
Config: region
.IP \[bu] 2
@@ -38385,7 +40901,7 @@ Env Var: RCLONE_ZOHO_ENCODING
Type: MultiEncoder
.IP \[bu] 2
Default: Del,Ctl,InvalidUtf8
-.SS Local Filesystem
+.SH Local Filesystem
.PP
Local paths are specified as normal filesystem paths, e.g.
\f[C]/path/to/wherever\f[R], so
@@ -38805,9 +41321,12 @@ be converted to UTF\-16.
.PP
On Windows there are many ways of specifying a path to a file system
resource.
-Both absolute paths like \f[C]C:\[rs]path\[rs]to\[rs]wherever\f[R], and
-relative paths like \f[C]..\[rs]wherever\f[R] can be used, and path
-separator can be either \f[C]\[rs]\f[R] (as in
+Local paths can be absolute, like
+\f[C]C:\[rs]path\[rs]to\[rs]wherever\f[R], or relative, like
+\f[C]..\[rs]wherever\f[R].
+Network paths in UNC format, \f[C]\[rs]\[rs]server\[rs]share\f[R], are
+also supported.
+Path separator can be either \f[C]\[rs]\f[R] (as in
\f[C]C:\[rs]path\[rs]to\[rs]wherever\f[R]) or \f[C]/\f[R] (as in
\f[C]C:/path/to/wherever\f[R]).
Length of these paths are limited to 259 characters for files and 247
@@ -38879,7 +41398,7 @@ like symlinks under Windows).
.PP
If you supply \f[C]\-\-copy\-links\f[R] or \f[C]\-L\f[R] then rclone
will follow the symlink and copy the pointed to file or directory.
-Note that this flag is incompatible with \f[C]\-links\f[R] /
+Note that this flag is incompatible with \f[C]\-\-links\f[R] /
\f[C]\-l\f[R].
.PP
This flag applies to all commands.
@@ -39117,20 +41636,18 @@ Default: false
.SS \-\-local\-zero\-size\-links
.PP
Assume the Stat size of links is zero (and read them instead)
+(Deprecated)
.PP
-On some virtual filesystems (such ash LucidLink), reading a link size
-via a Stat call always returns 0.
-However, on unix it reads as the length of the text in the link.
-This may cause errors like this when syncing:
-.IP
-.nf
-\f[C]
-Failed to copy: corrupted on transfer: sizes differ 0 vs 13
-\f[R]
-.fi
+Rclone used to use the Stat size of links as the link size, but this
+fails in quite a few places
+.IP \[bu] 2
+Windows
+.IP \[bu] 2
+On some virtual filesystems (such as LucidLink)
+.IP \[bu] 2
+Android
.PP
-Setting this flag causes rclone to read the link and use that as the
-size of the link instead of 0 which in most cases fixes the problem.
+So rclone now always reads the link.
.IP \[bu] 2
Config: zero_size_links
.IP \[bu] 2
@@ -39139,18 +41656,26 @@ Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
Type: bool
.IP \[bu] 2
Default: false
-.SS \-\-local\-no\-unicode\-normalization
+.SS \-\-local\-unicode\-normalization
.PP
-Don\[aq]t apply unicode normalization to paths and filenames
-(Deprecated)
+Apply unicode NFC normalization to paths and filenames
.PP
-This flag is deprecated now.
-Rclone no longer normalizes unicode file names, but it compares them
-with unicode normalization in the sync routine instead.
+This flag can be used to normalize file names read from the local
+filesystem into unicode NFC form.
+.PP
+Rclone does not normally touch the encoding of file names it reads from
+the file system.
+.PP
+This can be useful when using macOS as it normally provides decomposed
+(NFD) unicode which in some languages (e.g. Korean) doesn\[aq]t
+display properly on some OSes.
+.PP
+Note that rclone compares filenames with unicode normalization in the
+sync routine so this flag shouldn\[aq]t normally be used.
.IP \[bu] 2
-Config: no_unicode_normalization
+Config: unicode_normalization
.IP \[bu] 2
-Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION
.IP \[bu] 2
Type: bool
.IP \[bu] 2
@@ -39340,6 +41865,495 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.56.0 \- 2021\-07\-20
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.55.0...v1.56.0)
+.IP \[bu] 2
+New backends
+.RS 2
+.IP \[bu] 2
+Uptobox (https://rclone.org/uptobox/) (buengese)
+.RE
+.IP \[bu] 2
+New commands
+.RS 2
+.IP \[bu] 2
+serve docker (https://rclone.org/commands/rclone_serve_docker/) (Antoine
+GIRARD) (Ivan Andreev)
+.RS 2
+.IP \[bu] 2
+and accompanying docker volume plugin (https://rclone.org/docker/)
+.RE
+.IP \[bu] 2
+checksum (https://rclone.org/commands/rclone_checksum/) to check files
+against a file of checksums (Ivan Andreev)
+.RS 2
+.IP \[bu] 2
+this is also available as \f[C]rclone md5sum \-C\f[R] etc
+.RE
+.IP \[bu] 2
+config touch (https://rclone.org/commands/rclone_config_touch/): ensure
+config exists at configured location (albertony)
+.IP \[bu] 2
+test
+changenotify (https://rclone.org/commands/rclone_test_changenotify/):
+command to help debugging changenotify (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Deprecations
+.RS 2
+.IP \[bu] 2
+\f[C]dbhashsum\f[R]: Remove command deprecated a year ago (Ivan Andreev)
+.IP \[bu] 2
+\f[C]cache\f[R]: Deprecate cache backend (Ivan Andreev)
+.RE
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+rework config system so it can be used non\-interactively via cli and rc
+API.
+.RS 2
+.IP \[bu] 2
+See docs in config
+create (https://rclone.org/commands/rclone_config_create/)
+.IP \[bu] 2
+This is a very big change to all the backends so may cause breakages \-
+please file bugs!
+.RE
+.IP \[bu] 2
+librclone \- export the rclone RC as a C library (lewisxy) (Nick
+Craig\-Wood)
+.RS 2
+.IP \[bu] 2
+Link a C\-API rclone shared object into your project
+.IP \[bu] 2
+Use the RC as an in memory interface
+.IP \[bu] 2
+Python example supplied
+.IP \[bu] 2
+Also supports Android and gomobile
+.RE
+.IP \[bu] 2
+fs
+.RS 2
+.IP \[bu] 2
+Add \f[C]\-\-disable\-http2\f[R] for global http2 disable (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Make \f[C]\-\-dump\f[R] imply \f[C]\-vv\f[R] (Alex Chen)
+.IP \[bu] 2
+Use binary prefixes for size and rate units (albertony)
+.IP \[bu] 2
+Use decimal prefixes for counts (albertony)
+.IP \[bu] 2
+Add google search widget to rclone.org (Ivan Andreev)
+.RE
+.IP \[bu] 2
+accounting: Calculate rolling average speed (Haochen Tong)
+.IP \[bu] 2
+atexit: Terminate with non\-zero status after receiving signal (Michael
+Hanselmann)
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Only run event\-based workflow scripts under rclone repo with manual
+override (Mathieu Carbou)
+.IP \[bu] 2
+Add Android build with gomobile (x0b)
+.RE
+.IP \[bu] 2
+check: Log the hash in use like cryptcheck does (Nick Craig\-Wood)
+.IP \[bu] 2
+version: Print os/version, kernel and bitness (Ivan Andreev)
+.IP \[bu] 2
+config
+.RS 2
+.IP \[bu] 2
+Prevent use of Windows reserved names in config file name (albertony)
+.IP \[bu] 2
+Create config file in windows appdata directory by default (albertony)
+.IP \[bu] 2
+Treat any config file paths with filename notfound as memory\-only
+config (albertony)
+.IP \[bu] 2
+Delay load config file (albertony)
+.IP \[bu] 2
+Replace defaultConfig with a thread\-safe in\-memory implementation
+(Chris Macklin)
+.IP \[bu] 2
+Allow \f[C]config create\f[R] and friends to take \f[C]key=value\f[R]
+parameters (Nick Craig\-Wood)
+.IP \[bu] 2
+Fixed issues with flags/options set by environment vars.
+(Ole Frost)
+.RE
+.IP \[bu] 2
+fshttp: Implement graceful DSCP error handling (Tyson Moore)
+.IP \[bu] 2
+lib/http \- provides an abstraction for a central http server that
+services can bind routes to (Nolan Woods)
+.RS 2
+.IP \[bu] 2
+Add \f[C]\-\-template\f[R] config and flags to serve/data (Nolan Woods)
+.IP \[bu] 2
+Add default 404 handler (Nolan Woods)
+.RE
+.IP \[bu] 2
+link: Use \[dq]off\[dq] value for unset expiry (Nick Craig\-Wood)
+.IP \[bu] 2
+oauthutil: Raise fatal error if token expired without refresh token
+(Alex Chen)
+.IP \[bu] 2
+rcat: Add \f[C]\-\-size\f[R] flag for more efficient uploads of known
+size (Nazar Mishturak)
+.IP \[bu] 2
+serve sftp: Add \f[C]\-\-stdio\f[R] flag to serve via stdio (Tom)
+.IP \[bu] 2
+sync: Don\[aq]t warn about \f[C]\-\-no\-traverse\f[R] when
+\f[C]\-\-files\-from\f[R] is set (Nick Gaya)
+.IP \[bu] 2
+\f[C]test makefiles\f[R]
+.RS 2
+.IP \[bu] 2
+Add \f[C]\-\-seed\f[R] flag and make data generated repeatable (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Add log levels and speed summary (Nick Craig\-Wood)
+.RE
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+accounting: Fix startTime of statsGroups.sum (Haochen Tong)
+.IP \[bu] 2
+cmd/ncdu: Fix out of range panic in delete (buengese)
+.IP \[bu] 2
+config
+.RS 2
+.IP \[bu] 2
+Fix issues with memory\-only config file paths (albertony)
+.IP \[bu] 2
+Fix in memory config not saving on the fly backend config (Nick
+Craig\-Wood)
+.RE
+.IP \[bu] 2
+fshttp: Fix address parsing for DSCP (Tyson Moore)
+.IP \[bu] 2
+ncdu: Update termbox\-go library to fix crash (Nick Craig\-Wood)
+.IP \[bu] 2
+oauthutil: Fix old authorize result not recognised (Cnly)
+.IP \[bu] 2
+operations: Don\[aq]t update timestamps of files in
+\f[C]\-\-compare\-dest\f[R] (Nick Gaya)
+.IP \[bu] 2
+selfupdate: fix archive name on macos (Ivan Andreev)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Refactor before adding serve docker (Antoine GIRARD)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Add cache reset for \f[C]\-\-vfs\-cache\-max\-size\f[R] handling at
+cache poll interval (Leo Luan)
+.IP \[bu] 2
+Fix modtime changing when reading file into cache (Nick Craig\-Wood)
+.IP \[bu] 2
+Avoid unnecessary subdir in cache path (albertony)
+.IP \[bu] 2
+Fix that umask option cannot be set as environment variable (albertony)
+.IP \[bu] 2
+Do not print notice about missing poll\-interval support when set to 0
+(albertony)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Always use readlink to read symlink size for better compatibility (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Add \f[C]\-\-local\-unicode\-normalization\f[R] (and remove
+\f[C]\-\-local\-no\-unicode\-normalization\f[R]) (Nick Craig\-Wood)
+.IP \[bu] 2
+Skip entries removed concurrently with List() (Ivan Andreev)
+.RE
+.IP \[bu] 2
+Crypt
+.RS 2
+.IP \[bu] 2
+Support timestamped filenames from \f[C]\-\-b2\-versions\f[R] (Dominik
+Mydlil)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Don\[aq]t include the bucket name in public link file prefixes (Jeffrey
+Tolar)
+.IP \[bu] 2
+Fix versions and .files with no extension (Nick Craig\-Wood)
+.IP \[bu] 2
+Factor version handling into lib/version (Dominik Mydlil)
+.RE
+.IP \[bu] 2
+Box
+.RS 2
+.IP \[bu] 2
+Use upload preflight check to avoid listings in file uploads (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Return errors instead of calling log.Fatal with them (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Switch to the Drives API for looking up shared drives (Nick Craig\-Wood)
+.IP \[bu] 2
+Fix some google docs being treated as files (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Add \f[C]\-\-dropbox\-batch\-mode\f[R] flag to speed up uploading (Nick
+Craig\-Wood)
+.RS 2
+.IP \[bu] 2
+Read the batch mode (https://rclone.org/dropbox/#batch-mode) docs for
+more info
+.RE
+.IP \[bu] 2
+Set visibility in link sharing when \f[C]\-\-expire\f[R] is set (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Simplify chunked uploads (Alexey Ivanov)
+.IP \[bu] 2
+Improve \[dq]own App IP\[dq] instructions (Ivan Andreev)
+.RE
+.IP \[bu] 2
+Fichier
+.RS 2
+.IP \[bu] 2
+Check if more than one upload link is returned (Nick Craig\-Wood)
+.IP \[bu] 2
+Support downloading password protected files and folders (Florian
+Penzkofer)
+.IP \[bu] 2
+Make error messages report text from the API (Nick Craig\-Wood)
+.IP \[bu] 2
+Fix move of files in the same directory (Nick Craig\-Wood)
+.IP \[bu] 2
+Check that we actually got a download token and retry if we didn\[aq]t
+(buengese)
+.RE
+.IP \[bu] 2
+Filefabric
+.RS 2
+.IP \[bu] 2
+Fix listing after change of from field from \[dq]int\[dq] to int.
+(Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Make upload error 250 indicate success (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+GCS
+.RS 2
+.IP \[bu] 2
+Make compatible with gsutil\[aq]s mtime metadata (database64128)
+.IP \[bu] 2
+Clean up time format constants (database64128)
+.RE
+.IP \[bu] 2
+Google Photos
+.RS 2
+.IP \[bu] 2
+Fix read only scope not being used properly (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+HTTP
+.RS 2
+.IP \[bu] 2
+Replace httplib with lib/http (Nolan Woods)
+.IP \[bu] 2
+Clean up Bind to better use middleware (Nolan Woods)
+.RE
+.IP \[bu] 2
+Jottacloud
+.RS 2
+.IP \[bu] 2
+Fix legacy auth with state based config system (buengese)
+.IP \[bu] 2
+Fix invalid url in output from link command (albertony)
+.IP \[bu] 2
+Add no versions option (buengese)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Add \f[C]list_chunk\f[R] option (Nick Gaya)
+.IP \[bu] 2
+Also report root error if unable to cancel multipart upload (Cnly)
+.IP \[bu] 2
+Fix failed to configure: empty token found error (Nick Craig\-Wood)
+.IP \[bu] 2
+Make link return direct download link (Xuanchen Wu)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Add \f[C]\-\-s3\-no\-head\-object\f[R] (Tatsuya Noyori)
+.IP \[bu] 2
+Remove WebIdentityRoleProvider to fix crash on auth (Nick Craig\-Wood)
+.IP \[bu] 2
+Don\[aq]t check to see if remote is object if it ends with / (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Add SeaweedFS (Chris Lu)
+.IP \[bu] 2
+Update Alibaba OSS endpoints (Chuan Zh)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Fix performance regression by re\-enabling concurrent writes (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Expand tilde and environment variables in configured
+\f[C]known_hosts_file\f[R] (albertony)
+.RE
+.IP \[bu] 2
+Tardigrade
+.RS 2
+.IP \[bu] 2
+Upgrade to uplink v1.4.6 (Caleb Case)
+.IP \[bu] 2
+Use negative offset (Caleb Case)
+.IP \[bu] 2
+Add warning about \f[C]too many open files\f[R] (acsfer)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Fix sharepoint auth over http (Nick Craig\-Wood)
+.IP \[bu] 2
+Add headers option (Antoon Prins)
+.RE
+.SS v1.55.1 \- 2021\-04\-26
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+selfupdate
+.RS 2
+.IP \[bu] 2
+Don\[aq]t detect FUSE if build is static (Ivan Andreev)
+.IP \[bu] 2
+Add build tag noselfupdate (Ivan Andreev)
+.RE
+.IP \[bu] 2
+sync: Fix incorrect error reported by graceful cutoff (Nick Craig\-Wood)
+.IP \[bu] 2
+install.sh: Fix macOS arm64 download (Nick Craig\-Wood)
+.IP \[bu] 2
+build: Fix version numbers in android branch builds (Nick Craig\-Wood)
+.IP \[bu] 2
+docs
+.RS 2
+.IP \[bu] 2
+Contributing.md: Update setup instructions for go1.16 (Nick Gaya)
+.IP \[bu] 2
+WinFsp 2021 is out of beta (albertony)
+.IP \[bu] 2
+Minor cleanup of space around code section (albertony)
+.IP \[bu] 2
+Fixed some typos (albertony)
+.RE
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix a code path which allows dirty data to be removed causing data loss
+(Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Compress
+.RS 2
+.IP \[bu] 2
+Fix compressed name regexp (buengese)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Fix backend copyid of google doc to directory (Nick Craig\-Wood)
+.IP \[bu] 2
+Don\[aq]t open browser when service account...
+(Ansh Mittal)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Add missing team_data.member scope for use with \-\-impersonate (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Fix About after scopes changes \- rclone config reconnect needed (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Fix Unable to decrypt returned paths from changeNotify (Nick
+Craig\-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Fix implicit TLS (Ivan Andreev)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Workaround for random \[dq]Unable to initialize RPS\[dq] errors
+(OleFrost)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Revert sftp library to v1.12.0 from v1.13.0 to fix performance
+regression (Nick Craig\-Wood)
+.IP \[bu] 2
+Fix Update ReadFrom failed: failed to send packet: EOF errors (Nick
+Craig\-Wood)
+.RE
+.IP \[bu] 2
+Zoho
+.RS 2
+.IP \[bu] 2
+Fix error when region isn\[aq]t set (buengese)
+.IP \[bu] 2
+Do not ask for mountpoint twice when using headless setup (buengese)
+.RE
.SS v1.55.0 \- 2021\-03\-31
.PP
See commits (https://github.com/rclone/rclone/compare/v1.54.0...v1.55.0)
@@ -49078,7 +52092,7 @@ S\['e]bastien Gross
.IP \[bu] 2
Maxime Suret <11944422+msuret@users.noreply.github.com>
.IP \[bu] 2
-Caleb Case
+Caleb Case
.IP \[bu] 2
Ben Zenker
.IP \[bu] 2
@@ -49290,6 +52304,72 @@ Manish Kumar
x0b
.IP \[bu] 2
CERN through the CS3MESH4EOSC Project
+.IP \[bu] 2
+Nick Gaya
+.IP \[bu] 2
+Ashok Gelal <401055+ashokgelal@users.noreply.github.com>
+.IP \[bu] 2
+Dominik Mydlil
+.IP \[bu] 2
+Nazar Mishturak
+.IP \[bu] 2
+Ansh Mittal
+.IP \[bu] 2
+noabody
+.IP \[bu] 2
+OleFrost <82263101+olefrost@users.noreply.github.com>
+.IP \[bu] 2
+Kenny Parsons
+.IP \[bu] 2
+Jeffrey Tolar
+.IP \[bu] 2
+jtagcat
+.IP \[bu] 2
+Tatsuya Noyori <63089076+public-tatsuya-noyori@users.noreply.github.com>
+.IP \[bu] 2
+lewisxy
+.IP \[bu] 2
+Nolan Woods
+.IP \[bu] 2
+Gautam Kumar <25435568+gautamajay52@users.noreply.github.com>
+.IP \[bu] 2
+Chris Macklin
+.IP \[bu] 2
+Antoon Prins
+.IP \[bu] 2
+Alexey Ivanov
+.IP \[bu] 2
+Serge Pouliquen
+.IP \[bu] 2
+acsfer
+.IP \[bu] 2
+Tom
+.IP \[bu] 2
+Tyson Moore
+.IP \[bu] 2
+database64128
+.IP \[bu] 2
+Chris Lu
+.IP \[bu] 2
+Reid Buzby
+.IP \[bu] 2
+darrenrhs
+.IP \[bu] 2
+Florian Penzkofer
+.IP \[bu] 2
+Xuanchen Wu <117010292@link.cuhk.edu.cn>
+.IP \[bu] 2
+partev
+.IP \[bu] 2
+Dmitry Sitnikov
+.IP \[bu] 2
+Haochen Tong
+.IP \[bu] 2
+Michael Hanselmann
+.IP \[bu] 2
+Chuan Zh
+.IP \[bu] 2
+Antoine GIRARD
.SH Contact the rclone project
.SS Forum
.PP