
docs: fix markdown lint issues in command docs

albertony
2025-08-24 21:12:04 +02:00
parent 514535ad46
commit 2e02d49578
81 changed files with 963 additions and 719 deletions

View File

@@ -51,47 +51,52 @@ output. The output is typically used, free, quota and trash contents.
E.g. Typical output from ` + "`rclone about remote:`" + ` is:
Total: 17 GiB
Used: 7.444 GiB
Free: 1.315 GiB
Trashed: 100.000 MiB
Other: 8.241 GiB
` + "```text" + `
Total: 17 GiB
Used: 7.444 GiB
Free: 1.315 GiB
Trashed: 100.000 MiB
Other: 8.241 GiB
` + "```" + `
Where the fields are:
* Total: Total size available.
* Used: Total size used.
* Free: Total space available to this user.
* Trashed: Total space used by trash.
* Other: Total amount in other storage (e.g. Gmail, Google Photos).
* Objects: Total number of objects in the storage.
- Total: Total size available.
- Used: Total size used.
- Free: Total space available to this user.
- Trashed: Total space used by trash.
- Other: Total amount in other storage (e.g. Gmail, Google Photos).
- Objects: Total number of objects in the storage.
All sizes are in number of bytes.
Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
` + "```text" + `
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
` + "```" + `
A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g.
{
"total": 18253611008,
"used": 7993453766,
"trashed": 104857602,
"other": 8849156022,
"free": 1411001220
}
` + "```json" + `
{
"total": 18253611008,
"used": 7993453766,
"trashed": 104857602,
"other": 8849156022,
"free": 1411001220
}
` + "```" + `
Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted.
Some backends do not support the ` + "`rclone about`" + ` command at all,
see complete list in [documentation](https://rclone.org/overview/#optional-features).
`,
see complete list in [documentation](https://rclone.org/overview/#optional-features).`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
// "groups": "",

View File

@@ -30,14 +30,16 @@ rclone from a machine with a browser - use as instructed by
rclone config.
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone from opening the auth
link in the default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.`,
Use --template to generate HTML output via a custom Go template. If a blank
string is provided as an argument to this flag, the default template is used.`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
},

View File

@@ -37,26 +37,33 @@ see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
rclone backend help <backendname>
` + "```sh" + `
rclone backend help remote:
rclone backend help <backendname>
` + "```" + `
You can also discover information about the backend using (see
[operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
for more info).
rclone backend features remote:
` + "```sh" + `
rclone backend features remote:
` + "```" + `
Pass options to the backend command with -o. This should be key=value or key, e.g.:
rclone backend stats remote:path stats -o format=json -o long
` + "```sh" + `
rclone backend stats remote:path stats -o format=json -o long
` + "```" + `
Pass arguments to the backend by placing them on the end of the line
rclone backend cleanup remote:path file1 file2 file3
` + "```sh" + `
rclone backend cleanup remote:path file1 file2 file3
` + "```" + `
Note to run these commands on a running backend then see
[backend/command](/rc/#backend-command) in the rc docs.
`,
[backend/command](/rc/#backend-command) in the rc docs.`,
Annotations: map[string]string{
"versionIntroduced": "v1.52",
"groups": "Important",

View File

@@ -51,14 +51,15 @@ var longHelp = shortHelp + makeHelp(`
bidirectional cloud sync solution in rclone.
It retains the Path1 and Path2 filesystem listings from the prior run.
On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side.
Changes include |New|, |Newer|, |Older|, and |Deleted| files.
- Propagate changes on Path1 to Path2, and vice-versa.
Bisync is considered an **advanced command**, so use with care.
Make sure you have read and understood the entire [manual](https://rclone.org/bisync)
(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using,
or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/).
(especially the [Limitations](https://rclone.org/bisync/#limitations) section)
before using, or data loss can result. Questions can be asked in the
[Rclone Forum](https://forum.rclone.org/).
See [full bisync description](https://rclone.org/bisync/) for details.
`)
See [full bisync description](https://rclone.org/bisync/) for details.`)

View File

@@ -43,15 +43,21 @@ var commandDefinition = &cobra.Command{
You can use it like this to output a single file
rclone cat remote:path/to/file
|||sh
rclone cat remote:path/to/file
|||
Or like this to output any file in dir or its subdirectories.
rclone cat remote:path/to/dir
|||sh
rclone cat remote:path/to/dir
|||
Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
|||sh
rclone --include "*.txt" cat remote:path/to/dir
|||
Use the |--head| flag to print characters only at the start, |--tail| for
the end and |--offset| and |--count| to print a section in the middle.
@@ -62,14 +68,17 @@ Use the |--separator| flag to print a separator value between files. Be sure to
shell-escape special characters. For example, to print a newline between
files, use:
* bash:
- bash:
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
|||sh
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
|||
* powershell:
- powershell:
rclone --include "*.txt" --separator "|n" cat remote:path/to/dir
`, "|", "`"),
|||powershell
rclone --include "*.txt" --separator "|n" cat remote:path/to/dir
|||`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.33",
"groups": "Filter,Listing",

View File

@@ -74,8 +74,7 @@ you what happened to it. These are reminiscent of diff files.
- |! path| means there was an error reading or hashing the source or dest.
The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int)
option for more information.
`, "|", "`")
option for more information.`, "|", "`")
// GetCheckOpt gets the options corresponding to the check flags
func GetCheckOpt(fsrc, fdst fs.Fs) (opt *operations.CheckOpt, close func(), err error) {

View File

@@ -17,8 +17,7 @@ var commandDefinition = &cobra.Command{
Use: "cleanup remote:path",
Short: `Clean up the remote if possible.`,
Long: `Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
`,
versions. Not supported by all remotes.`,
Annotations: map[string]string{
"versionIntroduced": "v1.31",
"groups": "Important",

View File

@@ -44,8 +44,7 @@ var configCommand = &cobra.Command{
Short: `Enter an interactive configuration session.`,
Long: `Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
`,
password to protect your configuration.`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
@@ -134,9 +133,7 @@ sensitive info with XXX.
This makes the config file suitable for posting online for support.
It should be double checked before posting as the redaction may not be perfect.
`,
It should be double checked before posting as the redaction may not be perfect.`,
Annotations: map[string]string{
"versionIntroduced": "v1.64",
},
@@ -178,8 +175,8 @@ var configProvidersCommand = &cobra.Command{
var updateRemoteOpt config.UpdateRemoteOpt
var configPasswordHelp = strings.ReplaceAll(`
Note that if the config process would normally ask a question the
var configPasswordHelp = strings.ReplaceAll(
`Note that if the config process would normally ask a question the
default is taken (unless |--non-interactive| is used). Each time
that happens rclone will print or DEBUG a message saying how to
affect the value taken.
@@ -205,29 +202,29 @@ it.
This will look something like (some irrelevant detail removed):
|||
|||json
{
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
}
|||
@@ -250,7 +247,9 @@ The keys of |Option| are used as follows:
If |Error| is set then it should be shown to the user at the same
time as the question.
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
|||sh
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
|||
Note that when using |--continue| all passwords should be passed in
the clear (not obscured). Any default config values should be passed
@@ -264,8 +263,7 @@ not just the post config questions. Any parameters are used as
defaults for questions as usual.
Note that |bin/config.py| in the rclone source implements this protocol
as a readable demonstration.
`, "|", "`")
as a readable demonstration.`, "|", "`")
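
The question/answer blob above is easy to decode with a couple of plain structs. The field
names below are copied from the example output, so treat this as an illustration rather than
the canonical definitions in the rclone source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Types mirroring the "State"/"Option"/"Error" blob shown above.
type configExample struct {
	Value string
	Help  string
}

type configOption struct {
	Name       string
	Help       string
	Default    any
	Examples   []configExample
	Required   bool
	IsPassword bool
	Type       string
	Exclusive  bool
}

type configOut struct {
	State  string
	Option *configOption
	Error  string
}

func main() {
	// Abbreviated output from a --non-interactive config session; answer it with:
	//   rclone config update name --continue --state "<State>" --result "<answer>"
	blob := `{"State": "*oauth-islocal,teamdrive,,", "Option": {"Name": "config_is_local", "Default": true, "Type": "bool"}, "Error": ""}`
	var out configOut
	if err := json.Unmarshal([]byte(blob), &out); err != nil {
		panic(err)
	}
	fmt.Printf("next question: %s (state %q)\n", out.Option.Name, out.State)
}
```
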
var configCreateCommand = &cobra.Command{
Use: "create name type [key value]*",
Short: `Create a new remote with name, type and options.`,
@@ -275,13 +273,18 @@ should be passed in pairs of |key| |value| or as |key=value|.
For example, to make a swift remote of name myremote using auto config
you would do:
rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true
|||sh
rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true
|||
So for example if you wanted to configure a Google Drive remote but
using remote authorization you would do this:
rclone config create mydrive drive config_is_local=false
|||sh
rclone config create mydrive drive config_is_local=false
|||
`, "|", "`") + configPasswordHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
@@ -344,13 +347,18 @@ pairs of |key| |value| or as |key=value|.
For example, to update the env_auth field of a remote of name myremote
you would do:
rclone config update myremote env_auth true
rclone config update myremote env_auth=true
|||sh
rclone config update myremote env_auth true
rclone config update myremote env_auth=true
|||
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:
rclone config update myremote env_auth=true config_refresh_token=false
|||sh
rclone config update myremote env_auth=true config_refresh_token=false
|||
`, "|", "`") + configPasswordHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
@@ -388,12 +396,13 @@ The |password| should be passed in in clear (unobscured).
For example, to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
|||sh
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
|||
This command is obsolete now that "config update" and "config create"
both support obscuring passwords directly.
`, "|", "`"),
both support obscuring passwords directly.`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
@@ -441,8 +450,7 @@ var configReconnectCommand = &cobra.Command{
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
`,
This normally means going through the interactive oauth flow again.`,
RunE: func(command *cobra.Command, args []string) error {
ctx := context.Background()
cmd.CheckArgs(1, 1, command, args)
@@ -461,8 +469,7 @@ var configDisconnectCommand = &cobra.Command{
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
`,
To reconnect use "rclone config reconnect".`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
@@ -490,8 +497,7 @@ var configUserInfoCommand = &cobra.Command{
Use: "userinfo remote:",
Short: `Prints info about logged in user of remote.`,
Long: `This prints the details of the person logged in to the cloud storage
system.
`,
system.`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
@@ -534,8 +540,7 @@ var configEncryptionCommand = &cobra.Command{
Use: "encryption",
Short: `set, remove and check the encryption for the config file`,
Long: `This command sets, clears and checks the encryption for the config file using
the subcommands below.
`,
the subcommands below.`,
}
var configEncryptionSetCommand = &cobra.Command{
@@ -559,8 +564,7 @@ variable to distinguish which password you must supply.
Alternatively you can remove the password first (with |rclone config
encryption remove|), then set it again with this command which may be
easier if you don't mind the unencrypted config file being on the disk
briefly.
`, "|", "`"),
briefly.`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
@@ -580,8 +584,7 @@ If |--password-command| is in use, this will be called to supply the old config
password.
If the config was not encrypted then no error will be returned and
this command will do nothing.
`, "|", "`"),
this command will do nothing.`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
@@ -600,8 +603,7 @@ It will attempt to decrypt the config using the password you supply.
If decryption fails it will return a non-zero exit code if using
|--password-command|, otherwise it will prompt again for the password.
If the config file is not encrypted it will return a non zero exit code.
`, "|", "`"),
If the config file is not encrypted it will return a non zero exit code.`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()

View File

@@ -31,18 +31,20 @@ var commandDefinition = &cobra.Command{
Use: "convmv dest:path --name-transform XXX",
Short: `Convert file and directory names in place.`,
// Warning¡ "¡" will be replaced by backticks below
Long: strings.ReplaceAll(`
convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
Long: strings.ReplaceAll(`convmv supports advanced path name transformations for converting and renaming
files and directories by applying prefixes, suffixes, and other alterations.
`+transform.Help()+`Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
`+transform.Help()+`Multiple transformations can be used in sequence, applied
in the order they are specified on the command line.
The ¡--name-transform¡ flag is also available in ¡sync¡, ¡copy¡, and ¡move¡.
## Files vs Directories
### Files vs Directories
By default ¡--name-transform¡ will only apply to file names. The means only the leaf file name will be transformed.
However some of the transforms would be better applied to the whole path or just directories.
To choose which which part of the file path is affected some tags can be added to the ¡--name-transform¡.
By default ¡--name-transform¡ will only apply to file names. This means only the
leaf file name will be transformed. However, some of the transforms would be
better applied to the whole path or just directories. To choose which
part of the file path is affected, some tags can be added to the ¡--name-transform¡.
| Tag | Effect |
|------|------|
@@ -50,42 +52,58 @@ To choose which which part of the file path is affected some tags can be added t
| ¡dir¡ | Only transform name of directories - these may appear anywhere in the path |
| ¡all¡ | Transform the entire path for files and directories |
This is used by adding the tag into the transform name like this: ¡--name-transform file,prefix=ABC¡ or ¡--name-transform dir,prefix=DEF¡.
This is used by adding the tag into the transform name like this:
¡--name-transform file,prefix=ABC¡ or ¡--name-transform dir,prefix=DEF¡.
For some conversions using all is more likely to be useful, for example ¡--name-transform all,nfc¡.
For some conversions using all is more likely to be useful, for example
¡--name-transform all,nfc¡.
Note that ¡--name-transform¡ may not add path separators ¡/¡ to the name. This will cause an error.
Note that ¡--name-transform¡ may not add path separators ¡/¡ to the name.
This will cause an error.
## Ordering and Conflicts
### Ordering and Conflicts
* Transformations will be applied in the order specified by the user.
* If the ¡file¡ tag is in use (the default) then only the leaf name of files will be transformed.
* If the ¡dir¡ tag is in use then directories anywhere in the path will be transformed
* If the ¡all¡ tag is in use then directories and files anywhere in the path will be transformed
* Each transformation will be run one path segment at a time.
* If a transformation adds a ¡/¡ or ends up with an empty path segment then that will be an error.
* It is up to the user to put the transformations in a sensible order.
* Conflicting transformations, such as ¡prefix¡ followed by ¡trimprefix¡ or ¡nfc¡ followed by ¡nfd¡, are possible.
* Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the
user, allowing for intentional use cases (e.g., trimming one prefix before adding another).
* Users should be aware that certain combinations may lead to unexpected results and should verify
transformations using ¡--dry-run¡ before execution.
- Transformations will be applied in the order specified by the user.
- If the ¡file¡ tag is in use (the default) then only the leaf name of files
will be transformed.
- If the ¡dir¡ tag is in use then directories anywhere in the path will be
transformed
- If the ¡all¡ tag is in use then directories and files anywhere in the path
will be transformed
- Each transformation will be run one path segment at a time.
- If a transformation adds a ¡/¡ or ends up with an empty path segment then
that will be an error.
- It is up to the user to put the transformations in a sensible order.
- Conflicting transformations, such as ¡prefix¡ followed by ¡trimprefix¡ or
¡nfc¡ followed by ¡nfd¡, are possible.
- Instead of enforcing mutual exclusivity, transformations are applied in
sequence as specified by the user, allowing for intentional use cases
(e.g., trimming one prefix before adding another).
- Users should be aware that certain combinations may lead to unexpected
results and should verify transformations using ¡--dry-run¡ before execution.
## Race Conditions and Non-Deterministic Behavior
### Race Conditions and Non-Deterministic Behavior
Some transformations, such as ¡replace=old:new¡, may introduce conflicts where multiple source files map to the same destination name.
This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
Some transformations, such as ¡replace=old:new¡, may introduce conflicts where
multiple source files map to the same destination name. This can lead to race
conditions when performing concurrent transfers. It is up to the user to
anticipate these.
- If two files from the source are transformed into the same name at the
destination, the final state may be non-deterministic.
- Running rclone check after a sync using such transformations may erroneously
report missing or differing files due to overwritten results.
To minimize risks, users should:
* Carefully review transformations that may introduce conflicts.
* Use ¡--dry-run¡ to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name.
* Consider disabling concurrency with ¡--transfers=1¡ if necessary.
* Certain transformations (e.g. ¡prefix¡) will have a multiplying effect every time they are used. Avoid these when using ¡bisync¡.
`, "¡", "`"),
- Carefully review transformations that may introduce conflicts.
- Use ¡--dry-run¡ to inspect changes before executing a sync (but keep in mind
that it won't show the effect of non-deterministic transformations).
- Avoid transformations that cause multiple distinct source files to map to the
same destination name.
- Consider disabling concurrency with ¡--transfers=1¡ if necessary.
- Certain transformations (e.g. ¡prefix¡) will have a multiplying effect every
time they are used. Avoid these when using ¡bisync¡.`, "¡", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.70",
"groups": "Filter,Listing,Important,Copy",

View File

@@ -50,22 +50,30 @@ go there.
For example
rclone copy source:sourcepath dest:destpath
|||sh
rclone copy source:sourcepath dest:destpath
|||
Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
|||text
sourcepath/one.txt
sourcepath/two.txt
|||
This copies them to
destpath/one.txt
destpath/two.txt
|||text
destpath/one.txt
destpath/two.txt
|||
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
|||text
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
|||
If you are familiar with |rsync|, rclone always works as if you had
written a trailing |/| - meaning "copy the contents of this directory".
@@ -81,20 +89,22 @@ For example, if you have many files in /path/to/src but only a few of
them change every day, you can copy all the files which have changed
recently very efficiently like this:
rclone copy --max-age 24h --no-traverse /path/to/src remote:
|||sh
rclone copy --max-age 24h --no-traverse /path/to/src remote:
|||
Rclone will sync the modification times of files and directories if
the backend supports it. If metadata syncing is required then use the
|--metadata| flag.
Note that the modification time and metadata for the root directory
will **not** be synced. See https://github.com/rclone/rclone/issues/7652
will **not** be synced. See [issue #7652](https://github.com/rclone/rclone/issues/7652)
for more info.
**Note**: Use the |-P|/|--progress| flag to view real-time transfer statistics.
**Note**: Use the |--dry-run| or the |--interactive|/|-i| flag to test without copying anything.
**Note**: Use the |--dry-run| or the |--interactive|/|-i| flag to test without
copying anything.
`, "|", "`") + operationsflags.Help(),
Annotations: map[string]string{

View File

@@ -35,26 +35,32 @@ name. If the source is a directory then it acts exactly like the
So
rclone copyto src dst
` + "```sh" + `
rclone copyto src dst
` + "```" + `
where src and dst are rclone paths, either remote:path or
/path/to/local or C:\windows\path\if\on\windows.
where src and dst are rclone paths, either ` + "`remote:path`" + ` or
` + "`/path/to/local`" + ` or ` + "`C:\\windows\\path\\if\\on\\windows`" + `.
This will:
if src is file
copy it to dst, overwriting an existing file if it exists
if src is directory
copy it to dst, overwriting existing files if they exist
see copy command for full details
` + "```text" + `
if src is file
copy it to dst, overwriting an existing file if it exists
if src is directory
copy it to dst, overwriting existing files if they exist
see copy command for full details
` + "```" + `
This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'*
*If you are looking to copy just a byte range of a file, please see
` + "`rclone cat --offset X --count Y`" + `.*
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view
real-time transfer statistics.
` + operationsflags.Help(),
Annotations: map[string]string{

View File

@@ -48,7 +48,7 @@ set in HTTP headers, it will be used instead of the name from the URL.
With |--print-filename| in addition, the resulting file name will be
printed.
Setting |--no-clobber| will prevent overwriting file on the
Setting |--no-clobber| will prevent overwriting file on the
destination if there is one with the same name.
Setting |--stdout| or making the output file name |-|
@@ -62,9 +62,7 @@ If you can't get |rclone copyurl| to work then here are some things you can try:
- |--bind 0.0.0.0| rclone will use IPv6 if available - try disabling it
- |--bind ::0| to disable IPv4
- |--user agent curl| - some sites have whitelists for curl's user-agent - try that
- Make sure the site works with |curl| directly
`, "|", "`"),
- Make sure the site works with |curl| directly`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.43",
"groups": "Important",

View File

@@ -37,14 +37,18 @@ checksum of the file it has just encrypted.
Use it like this
rclone cryptcheck /path/to/files encryptedremote:path
` + "```sh" + `
rclone cryptcheck /path/to/files encryptedremote:path
` + "```" + `
You can use it like this also, but that will involve downloading all
the files in remote:path.
the files in ` + "`remote:path`" + `.
rclone cryptcheck remote:path encryptedremote:path
` + "```sh" + `
rclone cryptcheck remote:path encryptedremote:path
` + "```" + `
After it has run it will log the status of the encryptedremote:.
After it has run it will log the status of the ` + "`encryptedremote:`" + `.
` + check.FlagsHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.36",

View File

@@ -33,13 +33,13 @@ If you supply the ` + "`--reverse`" + ` flag, it will return encrypted file name
use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
` + "```sh" + `
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
rclone cryptdecode --reverse encryptedremote: filename1 filename2
` + "```" + `
rclone cryptdecode --reverse encryptedremote: filename1 filename2
Another way to accomplish this is by using the ` + "`rclone backend encode` (or `decode`)" + ` command.
See the documentation on the [crypt](/crypt/) overlay for more info.
`,
Another way to accomplish this is by using the ` + "`rclone backend encode` (or `decode`)" + `
command. See the documentation on the [crypt](/crypt/) overlay for more info.`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
},

View File

@@ -47,15 +47,15 @@ directories have been merged.
Next, if deduping by name, for every group of duplicate file names /
hashes, it will delete all but one identical file it finds without
confirmation. This means that for most duplicated files the ` +
"`dedupe`" + ` command will not be interactive.
confirmation. This means that for most duplicated files the
` + "`dedupe`" + ` command will not be interactive.
` + "`dedupe`" + ` considers files to be identical if they have the
same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping
Google Drive) then they will never be found to be identical. If you
use the ` + "`--size-only`" + ` flag then files will be considered
identical if they have the same size (any hash will be ignored). This
can be useful on crypt backends which do not support hashes.
same file path and the same hash. If the backend does not support
hashes (e.g. crypt wrapping Google Drive) then they will never be found
to be identical. If you use the ` + "`--size-only`" + ` flag then files
will be considered identical if they have the same size (any hash will be
ignored). This can be useful on crypt backends which do not support hashes.
Next rclone will resolve the remaining duplicates. Exactly which
action is taken depends on the dedupe mode. By default, rclone will
@@ -68,71 +68,82 @@ Here is an example run.
Before - with duplicates
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
6048320 2016-03-05 16:23:11.775000000 one.txt
564374 2016-03-05 16:23:06.731000000 one.txt
6048320 2016-03-05 16:18:26.092000000 one.txt
6048320 2016-03-05 16:22:46.185000000 two.txt
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt
` + "```sh" + `
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
6048320 2016-03-05 16:23:11.775000000 one.txt
564374 2016-03-05 16:23:06.731000000 one.txt
6048320 2016-03-05 16:18:26.092000000 one.txt
6048320 2016-03-05 16:22:46.185000000 two.txt
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt
` + "```" + `
Now the ` + "`dedupe`" + ` session
$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 files with duplicate names
one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 files with duplicate names
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt
` + "```sh" + `
$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 files with duplicate names
one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 files with duplicate names
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt
` + "```" + `
The result being
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-03-05 16:22:46.185000000 two-2.txt
1744073 2016-03-05 16:22:38.104000000 two-3.txt
` + "```sh" + `
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-03-05 16:22:46.185000000 two-2.txt
1744073 2016-03-05 16:22:38.104000000 two-3.txt
` + "```" + `
Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" + ` flag or by using an extra parameter with the same value
Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" + ` flag
or by using an extra parameter with the same value
* ` + "`" + `--dedupe-mode interactive` + "`" + ` - interactive as above.
* ` + "`" + `--dedupe-mode skip` + "`" + ` - removes identical files then skips anything left.
* ` + "`" + `--dedupe-mode first` + "`" + ` - removes identical files then keeps the first one.
* ` + "`" + `--dedupe-mode newest` + "`" + ` - removes identical files then keeps the newest one.
* ` + "`" + `--dedupe-mode oldest` + "`" + ` - removes identical files then keeps the oldest one.
* ` + "`" + `--dedupe-mode largest` + "`" + ` - removes identical files then keeps the largest one.
* ` + "`" + `--dedupe-mode smallest` + "`" + ` - removes identical files then keeps the smallest one.
* ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different.
* ` + "`" + `--dedupe-mode list` + "`" + ` - lists duplicate dirs and files only and changes nothing.
- ` + "`" + `--dedupe-mode interactive` + "`" + ` - interactive as above.
- ` + "`" + `--dedupe-mode skip` + "`" + ` - removes identical files then skips anything left.
- ` + "`" + `--dedupe-mode first` + "`" + ` - removes identical files then keeps the first one.
- ` + "`" + `--dedupe-mode newest` + "`" + ` - removes identical files then keeps the newest one.
- ` + "`" + `--dedupe-mode oldest` + "`" + ` - removes identical files then keeps the oldest one.
- ` + "`" + `--dedupe-mode largest` + "`" + ` - removes identical files then keeps the largest one.
- ` + "`" + `--dedupe-mode smallest` + "`" + ` - removes identical files then keeps the smallest one.
- ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different.
- ` + "`" + `--dedupe-mode list` + "`" + ` - lists duplicate dirs and files only and changes nothing.
For example, to rename all the identically named photos in your Google Photos directory, do
For example, to rename all the identically named photos in your Google Photos
directory, do
rclone dedupe --dedupe-mode rename "drive:Google Photos"
` + "```sh" + `
rclone dedupe --dedupe-mode rename "drive:Google Photos"
` + "```" + `
Or
rclone dedupe rename "drive:Google Photos"
`,
` + "```sh" + `
rclone dedupe rename "drive:Google Photos"
` + "```",
Annotations: map[string]string{
"versionIntroduced": "v1.27",
"groups": "Important",

View File

@@ -32,26 +32,29 @@ obeys include/exclude filters so can be used to selectively delete files.
alone. If you want to delete a directory and all of its contents use
the [purge](/commands/rclone_purge/) command.
If you supply the |--rmdirs| flag, it will remove all empty directories along with it.
You can also use the separate command [rmdir](/commands/rclone_rmdir/) or
[rmdirs](/commands/rclone_rmdirs/) to delete empty directories only.
If you supply the |--rmdirs| flag, it will remove all empty directories along
with it. You can also use the separate command [rmdir](/commands/rclone_rmdir/)
or [rmdirs](/commands/rclone_rmdirs/) to delete empty directories only.
For example, to delete all files bigger than 100 MiB, you may first want to
check what would be deleted (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
|||sh
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
|||
Then proceed with the actual delete:
rclone --min-size 100M delete remote:path
|||sh
rclone --min-size 100M delete remote:path
|||
That reads "delete everything with a minimum size of 100 MiB", hence
delete all files bigger than 100 MiB.
**Important**: Since this can cause data loss, test first with the
|--dry-run| or the |--interactive|/|-i| flag.
`, "|", "`"),
|--dry-run| or the |--interactive|/|-i| flag.`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.27",
"groups": "Important,Filter,Listing",

View File

@@ -19,9 +19,8 @@ var commandDefinition = &cobra.Command{
Use: "deletefile remote:path",
Short: `Remove a single file from remote.`,
Long: `Remove a single file from remote. Unlike ` + "`" + `delete` + "`" + ` it cannot be used to
remove a directory and it doesn't obey include/exclude filters - if the specified file exists,
it will always be removed.
`,
remove a directory and it doesn't obey include/exclude filters - if the
specified file exists, it will always be removed.`,
Annotations: map[string]string{
"versionIntroduced": "v1.42",
"groups": "Important",

View File

@@ -14,8 +14,7 @@ var completionDefinition = &cobra.Command{
Use: "completion [shell]",
Short: `Output completion script for a given shell.`,
Long: `Generates a shell completion script for rclone.
Run with ` + "`--help`" + ` to list the supported shells.
`,
Run with ` + "`--help`" + ` to list the supported shells.`,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},

View File

@@ -18,17 +18,21 @@ var bashCommandDefinition = &cobra.Command{
Short: `Output bash completion script for rclone.`,
Long: `Generates a bash shell autocompletion script for rclone.
By default, when run without any arguments,
By default, when run without any arguments,
rclone completion bash
` + "```sh" + `
rclone completion bash
` + "```" + `
the generated script will be written to
/etc/bash_completion.d/rclone
` + "```sh" + `
/etc/bash_completion.d/rclone
` + "```" + `
and so rclone will probably need to be run as root, or with sudo.
If you supply a path to a file as the command line argument, then
If you supply a path to a file as the command line argument, then
the generated script will be written to that file, in which case
you should not need root privileges.
@@ -39,11 +43,12 @@ can logout and login again to use the autocompletion script.
Alternatively, you can source the script directly
. /path/to/my_bash_completion_scripts/rclone
` + "```sh" + `
. /path/to/my_bash_completion_scripts/rclone
` + "```" + `
and the autocompletion functionality will be added to your
current shell.
`,
current shell.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
out := "/etc/bash_completion.d/rclone"

View File

@@ -21,18 +21,21 @@ var fishCommandDefinition = &cobra.Command{
This writes to /etc/fish/completions/rclone.fish by default so will
probably need to be run with sudo or as root, e.g.
sudo rclone completion fish
` + "```sh" + `
sudo rclone completion fish
` + "```" + `
Logout and login again to use the autocompletion scripts, or source
them directly
. /etc/fish/completions/rclone.fish
` + "```sh" + `
. /etc/fish/completions/rclone.fish
` + "```" + `
If you supply a command line argument the script will be written
there.
If output_file is "-", then the output will be written to stdout.
`,
If output_file is "-", then the output will be written to stdout.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
out := "/etc/fish/completions/rclone.fish"

View File

@@ -20,13 +20,14 @@ var powershellCommandDefinition = &cobra.Command{
To load completions in your current shell session:
rclone completion powershell | Out-String | Invoke-Expression
` + "```sh" + `
rclone completion powershell | Out-String | Invoke-Expression
` + "```" + `
To load completions for every new session, add the output of the above command
to your powershell profile.
If output_file is "-" or missing, then the output will be written to stdout.
`,
If output_file is "-" or missing, then the output will be written to stdout.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
if len(args) == 0 || (len(args) > 0 && args[0] == "-") {

View File

@@ -21,18 +21,21 @@ var zshCommandDefinition = &cobra.Command{
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
probably need to be run with sudo or as root, e.g.
sudo rclone completion zsh
` + "```sh" + `
sudo rclone completion zsh
` + "```" + `
Logout and login again to use the autocompletion scripts, or source
them directly
autoload -U compinit && compinit
` + "```sh" + `
autoload -U compinit && compinit
` + "```" + `
If you supply a command line argument the script will be written
there.
If output_file is "-", then the output will be written to stdout.
`,
If output_file is "-", then the output will be written to stdout.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
out := "/usr/share/zsh/vendor-completions/_rclone"

View File

@@ -169,7 +169,8 @@ rclone.org website.`,
name := filepath.Base(path)
cmd, ok := commands[name]
if !ok {
return fmt.Errorf("didn't find command for %q", name)
//return fmt.Errorf("didn't find command for %q", name)
return nil
}
b, err := os.ReadFile(path)
if err != nil {
@@ -184,7 +185,12 @@ rclone.org website.`,
return fmt.Errorf("internal error: failed to find cut points: startCut = %d, endCut = %d", startCut, endCut)
}
if endCut >= 0 {
doc = doc[:endCut] + "### See Also" + doc[endCut+12:]
doc = doc[:endCut] + `### See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->` + doc[endCut+12:] + `
<!-- markdownlint-restore -->
`
}
} else {
var out strings.Builder
@@ -196,7 +202,7 @@ rclone.org website.`,
if group.Flags.HasFlags() {
_, _ = fmt.Fprintf(&out, "#### %s Options\n\n", group.Name)
_, _ = fmt.Fprintf(&out, "%s\n\n", group.Help)
_, _ = out.WriteString("```\n")
_, _ = out.WriteString("```text\n")
_, _ = out.WriteString(group.Flags.FlagUsages())
_, _ = out.WriteString("```\n\n")
}
@@ -204,7 +210,12 @@ rclone.org website.`,
} else {
_, _ = out.WriteString("See the [global flags page](/flags/) for global options not listed here.\n\n")
}
doc = doc[:startCut] + out.String() + "### See Also" + doc[endCut+12:]
doc = doc[:startCut] + out.String() + `### See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->` + doc[endCut+12:] + `
<!-- markdownlint-restore -->
`
}
// outdent all the titles by one
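
For context on the directives being spliced in here: markdownlint's `capture` saves the
current rule configuration, `disable ul-style line-length` relaxes those two rules for the
generated lists, and `restore` puts the saved configuration back. A small standalone sketch
of the same wrapping (the helper function is hypothetical, not part of the rclone source):

```go
package main

import "fmt"

// wrapLintExceptions surrounds generated markdown with markdownlint
// capture/disable/restore comments so the relaxed rules only apply to
// the generated part of the page.
func wrapLintExceptions(generated string) string {
	return "<!-- markdownlint-capture -->\n" +
		"<!-- markdownlint-disable ul-style line-length -->\n" +
		generated +
		"\n<!-- markdownlint-restore -->\n"
}

func main() {
	fmt.Print(wrapLintExceptions("* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends."))
}
```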

View File

@@ -539,7 +539,7 @@ var command = &cobra.Command{
Aliases: []string{uniqueCommandName},
Use: subcommandName,
Short: "Speaks with git-annex over stdin/stdout.",
Long: gitannexHelp,
Long: strings.TrimSpace(gitannexHelp),
Annotations: map[string]string{
"versionIntroduced": "v1.67.0",
},

View File

@@ -4,8 +4,7 @@ users.
[git-annex]: https://git-annex.branchable.com/
Installation on Linux
---------------------
### Installation on Linux
1. Skip this step if your version of git-annex is [10.20240430] or newer.
Otherwise, you must create a symlink somewhere on your PATH with a particular

View File

@@ -103,14 +103,17 @@ as a relative path).
Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum
` + hash.HelpString(4) + `
` + "```sh" + `
$ rclone hashsum
` + hash.HelpString(0) + "```" + `
Then
$ rclone hashsum MD5 remote:path
` + "```sh" + `
rclone hashsum MD5 remote:path
` + "```" + `
Note that hash names are case insensitive and values are output in lower case.
`,
Note that hash names are case insensitive and values are output in lower case.`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
"groups": "Filter,Listing",

View File

@@ -30,9 +30,7 @@ var Root = &cobra.Command{
mounting them, listing them in lots of different ways.
See the home page (https://rclone.org/) for installation, usage,
documentation, changelog and configuration walkthroughs.
`,
documentation, changelog and configuration walkthroughs.`,
PersistentPostRun: func(cmd *cobra.Command, args []string) {
fs.Debugf("rclone", "Version %q finishing with parameters %q", fs.Version, os.Args)
atexit.Run()

View File

@@ -29,10 +29,12 @@ var commandDefinition = &cobra.Command{
Short: `Generate public link to file/folder.`,
Long: `Create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
rclone link --unlink remote:path/to/folder/
rclone link --expire 1d remote:path/to/file
` + "```sh" + `
rclone link remote:path/to/file
rclone link remote:path/to/folder/
rclone link --unlink remote:path/to/folder/
rclone link --expire 1d remote:path/to/file
` + "```" + `
If you supply the --expire flag, it will set the expiration time
otherwise it will use the default (100 years). **Note** not all
@@ -45,9 +47,8 @@ don't will just ignore it.
If successful, the last line of the output will contain the
link. Exact capabilities depend on the remote, but the link will
always by default be created with the least constraints e.g. no
expiry, no password protection, accessible without account.
`,
always by default be created with the least constraints - e.g. no
expiry, no password protection, accessible without account.`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
},

View File

@@ -114,8 +114,7 @@ func newLess(orderBy string) (less lessFn, err error) {
var commandDefinition = &cobra.Command{
Use: "listremotes [<filter>]",
Short: `List all the remotes in the config file and defined in environment variables.`,
Long: `
Lists all the available remotes from the config file, or the remotes matching
Long: `Lists all the available remotes from the config file, or the remotes matching
an optional filter.
Prints the result in human-readable format by default, and as a simple list of
@@ -126,8 +125,7 @@ the source (file or environment).
Result can be filtered by a filter argument which applies to all attributes,
and/or filter flags specific for each attribute. The values must be specified
according to regular rclone filtering pattern syntax.
`,
according to regular rclone filtering pattern syntax.`,
Annotations: map[string]string{
"versionIntroduced": "v1.34",
},

View File

@@ -21,13 +21,15 @@ var commandDefinition = &cobra.Command{
Long: `Lists the objects in the source path to standard output in a human
readable format with size and path. Recurses by default.
Eg
E.g.
$ rclone ls swift:bucket
60295 bevajer5jef
90613 canole
94467 diwogej7
37600 fubuwic
` + "```sh" + `
$ rclone ls swift:bucket
60295 bevajer5jef
90613 canole
94467 diwogej7
37600 fubuwic
` + "```" + `
` + lshelp.Help,
Annotations: map[string]string{

View File

@@ -7,16 +7,15 @@ import (
// Help describes the common help for all the list commands
// Warning! "|" will be replaced by backticks below
var Help = strings.ReplaceAll(`
Any of the filtering options can be applied to this command.
var Help = strings.ReplaceAll(`Any of the filtering options can be applied to this command.
There are several related list commands
* |ls| to list size and path of objects only
* |lsl| to list modification time, size and path of objects only
* |lsd| to list directories only
* |lsf| to list objects and directories in easy to parse format
* |lsjson| to list objects and directories in JSON format
- |ls| to list size and path of objects only
- |lsl| to list modification time, size and path of objects only
- |lsd| to list directories only
- |lsf| to list objects and directories in easy to parse format
- |lsjson| to list objects and directories in JSON format
|ls|,|lsl|,|lsd| are designed to be human-readable.
|lsf| is designed to be human and machine-readable.
@@ -24,9 +23,9 @@ There are several related list commands
Note that |ls| and |lsl| recurse by default - use |--max-depth 1| to stop the recursion.
The other list commands |lsd|,|lsf|,|lsjson| do not recurse by default - use |-R| to make them recurse.
The other list commands |lsd|,|lsf|,|lsjson| do not recurse by default -
use |-R| to make them recurse.
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
`, "|", "`")
the bucket-based remotes).`, "|", "`")

View File

@@ -32,18 +32,22 @@ recurse by default. Use the ` + "`-R`" + ` flag to recurse.
This command lists the total size of the directory (if known, -1 if
not), the modification time (if known, the current time if not), the
number of objects in the directory (if known, -1 if not) and the name
of the directory, Eg
of the directory, E.g.
$ rclone lsd swift:
494000 2018-04-26 08:43:20 10000 10000files
65 2018-04-26 08:43:20 1 1File
` + "```sh" + `
$ rclone lsd swift:
494000 2018-04-26 08:43:20 10000 10000files
65 2018-04-26 08:43:20 1 1File
` + "```" + `
Or
$ rclone lsd drive:test
-1 2016-10-17 17:41:53 -1 1000files
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files
` + "```sh" + `
$ rclone lsd drive:test
-1 2016-10-17 17:41:53 -1 1000files
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files
` + "```" + `
If you just want the directory names use ` + "`rclone lsf --dirs-only`" + `.

View File

@@ -52,41 +52,47 @@ standard output in a form which is easy to parse by scripts. By
default this will just be the names of the objects and directories,
one per line. The directories will have a / suffix.
Eg
E.g.
$ rclone lsf swift:bucket
bevajer5jef
canole
diwogej7
ferejej3gux/
fubuwic
` + "```sh" + `
$ rclone lsf swift:bucket
bevajer5jef
canole
diwogej7
ferejej3gux/
fubuwic
` + "```" + `
Use the ` + "`--format`" + ` option to control what gets listed. By default this
is just the path, but you can use these parameters to control the
output:
p - path
s - size
t - modification time
h - hash
i - ID of object
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, e.g. "Hot" or "Cool"
M - Metadata of object in JSON blob format, eg {"key":"value"}
` + "```text" + `
p - path
s - size
t - modification time
h - hash
i - ID of object
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, e.g. "Hot" or "Cool"
M - Metadata of object in JSON blob format, eg {"key":"value"}
` + "```" + `
So if you wanted the path, size and modification time, you would use
` + "`--format \"pst\"`, or maybe `--format \"tsp\"`" + ` to put the path last.
Eg
E.g.
$ rclone lsf --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
2016-06-25 18:55:43;90613;canole
2016-06-25 18:55:43;94467;diwogej7
2018-04-26 08:50:45;0;ferejej3gux/
2016-06-25 18:55:40;37600;fubuwic
` + "```sh" + `
$ rclone lsf --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
2016-06-25 18:55:43;90613;canole
2016-06-25 18:55:43;94467;diwogej7
2018-04-26 08:50:45;0;ferejej3gux/
2016-06-25 18:55:40;37600;fubuwic
` + "```" + `
If you specify "h" in the format you will get the MD5 hash by default,
use the ` + "`--hash`" + ` flag to change which hash you want. Note that this
@@ -97,16 +103,20 @@ type.
For example, to emulate the md5sum command you can use
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
` + "```sh" + `
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
` + "```" + `
Eg
E.g.
$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
7908e352297f0f530b84a756f188baa3 bevajer5jef
cd65ac234e6fea5925974a51cdd865cc canole
03b5341b4f234b9d984d03ad076bae91 diwogej7
8fd37c3810dd660778137ac3a66cc06d fubuwic
99713e14a4c4ff553acaf1930fad985b gixacuh7ku
` + "```sh" + `
$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
7908e352297f0f530b84a756f188baa3 bevajer5jef
cd65ac234e6fea5925974a51cdd865cc canole
03b5341b4f234b9d984d03ad076bae91 diwogej7
8fd37c3810dd660778137ac3a66cc06d fubuwic
99713e14a4c4ff553acaf1930fad985b gixacuh7ku
` + "```" + `
(Though "rclone md5sum ." is an easier way of typing this.)
@@ -114,24 +124,28 @@ By default the separator is ";" this can be changed with the
` + "`--separator`" + ` flag. Note that separators aren't escaped in the path so
putting it last is a good strategy.
Eg
E.g.
$ rclone lsf --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
2018-04-26 08:52:53,0,,ferejej3gux/
2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic
` + "```sh" + `
$ rclone lsf --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
2018-04-26 08:52:53,0,,ferejej3gux/
2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic
` + "```" + `
You can output in CSV standard format. This will escape things in "
if they contain ,
if they contain,
Eg
E.g.
$ rclone lsf --csv --files-only --format ps remote:path
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
` + "```sh" + `
$ rclone lsf --csv --files-only --format ps remote:path
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
` + "```" + `
Note that the ` + "`--absolute`" + ` parameter is useful for making lists of files
to pass to an rclone copy with the ` + "`--files-from-raw`" + ` flag.
@@ -139,20 +153,25 @@ to pass to an rclone copy with the ` + "`--files-from-raw`" + ` flag.
For example, to find all the files modified within one day and copy
those only (without traversing the whole directory structure):
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path
` + "```sh" + `
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path
` + "```" + `
The default time format is ` + "`'2006-01-02 15:04:05'`" + `.
[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with the ` + "`--time-format`" + ` flag.
Examples:
[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with
the ` + "`--time-format`" + ` flag. Examples:
rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
rclone lsf remote:path --format pt --time-format RFC3339
rclone lsf remote:path --format pt --time-format DateOnly
rclone lsf remote:path --format pt --time-format max
` + "`--time-format max`" + ` will automatically truncate ` + "'`2006-01-02 15:04:05.000000000`'" + `
` + "```sh" + `
rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
rclone lsf remote:path --format pt --time-format RFC3339
rclone lsf remote:path --format pt --time-format DateOnly
rclone lsf remote:path --format pt --time-format max
` + "```" + `
` + "`--time-format max`" + ` will automatically truncate ` + "`2006-01-02 15:04:05.000000000`" + `
to the maximum precision supported by the remote.
` + lshelp.Help,

View File

@@ -43,25 +43,27 @@ var commandDefinition = &cobra.Command{
The output is an array of Items, where each Item looks like this:
{
"Hashes" : {
"SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
"MD5" : "b1946ac92492d2347c6235b4d2611184",
"DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
},
"ID": "y2djkhiujf83u33",
"OrigID": "UYOJVTUW00Q1RzTDA",
"IsBucket" : false,
"IsDir" : false,
"MimeType" : "application/octet-stream",
"ModTime" : "2017-05-31T16:15:57.034468261+01:00",
"Name" : "file.txt",
"Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
"EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
"Path" : "full/path/goes/here/file.txt",
"Size" : 6,
"Tier" : "hot",
}
` + "```json" + `
{
"Hashes" : {
"SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
"MD5" : "b1946ac92492d2347c6235b4d2611184",
"DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
},
"ID": "y2djkhiujf83u33",
"OrigID": "UYOJVTUW00Q1RzTDA",
"IsBucket" : false,
"IsDir" : false,
"MimeType" : "application/octet-stream",
"ModTime" : "2017-05-31T16:15:57.034468261+01:00",
"Name" : "file.txt",
"Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
"EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
"Path" : "full/path/goes/here/file.txt",
"Size" : 6,
"Tier" : "hot",
}
` + "```" + `
The exact set of properties included depends on the backend:
@@ -118,6 +120,7 @@ will be shown ("2017-05-31T16:15:57+01:00").
The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written on individual lines
(except with ` + "`--stat`" + `).
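For example, one way to pull just the path of every item out of the JSON output
is with the third-party ` + "`jq`" + ` tool (an illustration, not part of rclone):
` + "```sh" + `
rclone lsjson remote:path | jq -r '.[].Path'
` + "```" + `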
` + lshelp.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.37",

View File

@@ -21,13 +21,15 @@ var commandDefinition = &cobra.Command{
Long: `Lists the objects in the source path to standard output in a human
readable format with modification time, size and path. Recurses by default.
Eg
E.g.
$ rclone lsl swift:bucket
60295 2016-06-25 18:55:41.062626927 bevajer5jef
90613 2016-06-25 18:55:43.302607074 canole
94467 2016-06-25 18:55:43.046609333 diwogej7
37600 2016-06-25 18:55:40.814629136 fubuwic
` + "```sh" + `
$ rclone lsl swift:bucket
60295 2016-06-25 18:55:41.062626927 bevajer5jef
90613 2016-06-25 18:55:43.302607074 canole
94467 2016-06-25 18:55:43.046609333 diwogej7
37600 2016-06-25 18:55:40.814629136 fubuwic
` + "```" + `
` + lshelp.Help,
Annotations: map[string]string{

View File

@@ -35,8 +35,7 @@ to running ` + "`rclone hashsum MD5 remote:path`" + `.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
`,
as a relative path).`,
Annotations: map[string]string{
"versionIntroduced": "v1.02",
"groups": "Filter,Listing",

View File

@@ -273,7 +273,7 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
Use: commandName + " remote:path /path/to/mountpoint",
Hidden: hidden,
Short: `Mount the remote as file system on a mountpoint.`,
Long: help(commandName) + vfs.Help(),
Long: help(commandName) + strings.TrimSpace(vfs.Help()),
Annotations: map[string]string{
"versionIntroduced": "v1.33",
"groups": "Filter",

View File

@@ -1,7 +1,7 @@
Rclone @ allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
On Linux and macOS, you can run mount in either foreground or background (aka
daemon) mode. Mount runs in foreground mode by default. Use the `--daemon` flag
@@ -16,7 +16,9 @@ mount, waits until success or timeout and exits with appropriate code
On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount`
is an **empty** **existing** directory:
rclone @ remote:path/to/files /path/to/local/mount
```sh
rclone @ remote:path/to/files /path/to/local/mount
```
On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows)
for details. If foreground mount is used interactively from a console window,
@@ -26,26 +28,30 @@ used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C.
The following examples will mount to an automatically assigned drive,
to specific drive letter `X:`, to path `C:\path\parent\mount`
(where parent directory or drive must exist, and mount must **not** exist,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and
the last example will mount as network share `\\cloud\remote` and map it to an
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)),
and the last example will mount as network share `\\cloud\remote` and map it to an
automatically assigned drive:
rclone @ remote:path/to/files *
rclone @ remote:path/to/files X:
rclone @ remote:path/to/files C:\path\parent\mount
rclone @ remote:path/to/files \\cloud\remote
```sh
rclone @ remote:path/to/files *
rclone @ remote:path/to/files X:
rclone @ remote:path/to/files C:\path\parent\mount
rclone @ remote:path/to/files \\cloud\remote
```
When the program ends while in foreground mode, either via Ctrl+C or receiving
a SIGINT or SIGTERM signal, the mount should be automatically stopped.
When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
```sh
# Linux
fusermount -u /path/to/local/mount
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
```
The umount operation can fail, for example when the mountpoint is busy.
When that happens, it is the user's responsibility to stop the mount manually.
@@ -80,20 +86,22 @@ thumbnails for image and video files on network drives.
In most cases, rclone will mount the remote as a normal, fixed disk drive by default.
However, you can also choose to mount it as a remote network drive, often described
as a network share. If you mount an rclone remote using the default, fixed drive mode
and experience unexpected program errors, freezes or other issues, consider mounting
as a network drive instead.
as a network share. If you mount an rclone remote using the default, fixed drive
mode and experience unexpected program errors, freezes or other issues, consider
mounting as a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused drive letter,
or to a path representing a **nonexistent** subdirectory of an **existing** parent
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:
automatically assign the next available drive letter, starting with Z: and moving
backward. Examples:
rclone @ remote:path/to/files *
rclone @ remote:path/to/files X:
rclone @ remote:path/to/files C:\path\parent\mount
rclone @ remote:path/to/files X:
```sh
rclone @ remote:path/to/files *
rclone @ remote:path/to/files X:
rclone @ remote:path/to/files C:\path\parent\mount
rclone @ remote:path/to/files X:
```
Option `--volname` can be used to set a custom volume name for the mounted
file system. The default is to use the remote name and path.
@@ -103,24 +111,28 @@ to your @ command. Mounting to a directory path is not supported in
this mode, it is a limitation Windows imposes on junctions, so the remote must always
be mounted to a drive letter.
rclone @ remote:path/to/files X: --network-mode
```sh
rclone @ remote:path/to/files X: --network-mode
```
A volume name specified with `--volname` will be used to create the network share path.
A complete UNC path, such as `\\cloud\remote`, optionally with path
A volume name specified with `--volname` will be used to create the network share
path. A complete UNC path, such as `\\cloud\remote`, optionally with path
`\\cloud\remote\madeup\path`, will be used as is. Any other
string will be used as the share part, after a default prefix `\\server\`.
If no volume name is specified then `\\server\share` will be used.
You must make sure the volume name is unique when you are mounting more than one drive,
or else the mount command will fail. The share name will treated as the volume label for
the mapped drive, shown in Windows Explorer etc, while the complete
You must make sure the volume name is unique when you are mounting more than one
drive, or else the mount command will fail. The share name will be treated as the
volume label for the mapped drive, shown in Windows Explorer etc, while the complete
`\\server\share` will be reported as the remote UNC path by
`net use` etc, just like a normal network drive mapping.
If you specify a full network share UNC path with `--volname`, this will implicitly
set the `--network-mode` option, so the following two examples have same result:
rclone @ remote:path/to/files X: --network-mode
rclone @ remote:path/to/files X: --volname \\server\share
```sh
rclone @ remote:path/to/files X: --network-mode
rclone @ remote:path/to/files X: --volname \\server\share
```
You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with `*` and use that as
@@ -128,15 +140,16 @@ mountpoint, and instead use the UNC path specified as the volume name, as if it
specified with the `--volname` option. This will also implicitly set
the `--network-mode` option. This means the following two examples have same result:
rclone @ remote:path/to/files \\cloud\remote
rclone @ remote:path/to/files * --volname \\cloud\remote
```sh
rclone @ remote:path/to/files \\cloud\remote
rclone @ remote:path/to/files * --volname \\cloud\remote
```
There is yet another way to enable network mode, and to set the share path,
and that is to pass the "native" libfuse/WinFsp option directly:
`--fuse-flag --VolumePrefix=\server\share`. Note that the path
must be with just a single backslash prefix in this case.
*Note:* In previous versions of rclone this was the only supported method.
[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)
@@ -149,11 +162,11 @@ The FUSE emulation layer on Windows must convert between the POSIX-based
permission model used in FUSE, and the permission model used in Windows,
based on access-control lists (ACL).
The mounted filesystem will normally get three entries in its access-control list (ACL),
representing permissions for the POSIX permission scopes: Owner, group and others.
By default, the owner and group will be taken from the current user, and the built-in
group "Everyone" will be used to represent others. The user/group can be customized
with FUSE options "UserName" and "GroupName",
The mounted filesystem will normally get three entries in its access-control list
(ACL), representing permissions for the POSIX permission scopes: Owner, group and
others. By default, the owner and group will be taken from the current user, and
the built-in group "Everyone" will be used to represent others. The user/group can
be customized with FUSE options "UserName" and "GroupName",
e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`.
The permissions on each entry will be set according to [options](#options)
`--dir-perms` and `--file-perms`, which takes a value in traditional Unix
@@ -253,58 +266,63 @@ does not suffer from the same limitations.
### Mounting on macOS
Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/)
(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional
FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
which "mounts" via an NFSv4 local server.
Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/),
[macFUSE](https://osxfuse.github.io/) (also known as osxfuse) or
[FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional FUSE driver utilizing
a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which
"mounts" via an NFSv4 local server.
##### Unicode Normalization
#### Unicode Normalization
It is highly recommended to keep the default of `--no-unicode-normalization=false`
for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity).
#### NFS mount
This method spins up an NFS server using [serve nfs](/commands/rclone_serve_nfs/) command and mounts
it to the specified mountpoint. If you run this in background mode using |--daemon|, you will need to
send SIGTERM signal to the rclone process using |kill| command to stop the mount.
This method spins up an NFS server using [serve nfs](/commands/rclone_serve_nfs/)
command and mounts it to the specified mountpoint. If you run this in background
mode using |--daemon|, you will need to send SIGTERM signal to the rclone process
using |kill| command to stop the mount.
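For example, assuming the daemonized mount is the only rclone process running
(the PID lookup is illustrative only):
```sh
kill -SIGTERM $(pidof rclone)
```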
Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler.
This should not be set too low or you may experience errors when trying to access files. The default is 1000000,
Note that `--nfs-cache-handle-limit` controls the maximum number of cached file
handles stored by the `nfsmount` caching handler. This should not be set too low
or you may experience errors when trying to access files. The default is 1000000,
but consider lowering this limit if the server's system resource usage causes problems.
#### macFUSE Notes
If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
the website, rclone will locate the macFUSE libraries without any further intervention.
If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager,
the following addition steps are required.
If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases)
from the website, rclone will locate the macFUSE libraries without any further intervention.
If however, macFUSE is installed using the [macports](https://www.macports.org/)
package manager, the following additional steps are required.
sudo mkdir /usr/local/lib
cd /usr/local/lib
sudo ln -s /opt/local/lib/libfuse.2.dylib
```sh
sudo mkdir /usr/local/lib
cd /usr/local/lib
sudo ln -s /opt/local/lib/libfuse.2.dylib
```
#### FUSE-T Limitations, Caveats, and Notes
There are some limitations, caveats, and notes about how it works. These are current as
of FUSE-T version 1.0.14.
There are some limitations, caveats, and notes about how it works. These are
current as of FUSE-T version 1.0.14.
##### ModTime update on read
As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats):
> File access and modification times cannot be set separately as it seems to be an
> issue with the NFS client which always modifies both. Can be reproduced with
> File access and modification times cannot be set separately as it seems to be an
> issue with the NFS client which always modifies both. Can be reproduced with
> 'touch -m' and 'touch -a' commands
This means that viewing files with various tools, notably macOS Finder, will cause rlcone
to update the modification time of the file. This may make rclone upload a full new copy
of the file.
This means that viewing files with various tools, notably macOS Finder, will cause
rclone to update the modification time of the file. This may make rclone upload a
full new copy of the file.
##### Read Only mounts
When mounting with `--read-only`, attempts to write to files will fail *silently* as
opposed to with a clear warning as in macFUSE.
When mounting with `--read-only`, attempts to write to files will fail *silently*
as opposed to with a clear warning as in macFUSE.
### Limitations
@@ -405,12 +423,14 @@ helper you should symlink rclone binary to `/sbin/mount.rclone` and optionally
rclone will detect it and translate command-line arguments appropriately.
Now you can run classic mounts like this:
```
```sh
mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
```
or create systemd mount units:
```
```ini
# /etc/systemd/system/mnt-data.mount
[Unit]
Description=Mount for /mnt/data
@@ -422,7 +442,8 @@ Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone
```
optionally accompanied by systemd automount unit
```
```ini
# /etc/systemd/system/mnt-data.automount
[Unit]
Description=AutoMount for /mnt/data
@@ -434,7 +455,8 @@ WantedBy=multi-user.target
```
or add in `/etc/fstab` a line like
```
```sh
sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
```

View File

@@ -65,14 +65,18 @@ This takes the following parameters:
Example:
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
` + "```sh" + `
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
` + "```" + `
The vfsOpt are as described in options/get and can be seen in the
"vfs" section, and the mountOpt can be seen in the "mount" section, when running:
rclone rc options/get
` + "```sh" + `
rclone rc options/get
` + "```" + `
`,
})
}

View File

@@ -64,7 +64,7 @@ the backend supports it. If metadata syncing is required then use the
|--metadata| flag.
Note that the modification time and metadata for the root directory
will **not** be synced. See https://github.com/rclone/rclone/issues/7652
will **not** be synced. See <https://github.com/rclone/rclone/issues/7652>
for more info.
**Important**: Since this can cause data loss, test first with the

View File

@@ -35,18 +35,22 @@ like the [move](/commands/rclone_move/) command.
So
rclone moveto src dst
` + "```sh" + `
rclone moveto src dst
` + "```" + `
where src and dst are rclone paths, either remote:path or
/path/to/local or C:\windows\path\if\on\windows.
This will:
if src is file
move it to dst, overwriting an existing file if it exists
if src is directory
move it to dst, overwriting existing files if they exist
see move command for full details
` + "```text" + `
if src is file
move it to dst, overwriting an existing file if it exists
if src is directory
move it to dst, overwriting existing files if they exist
see move command for full details
` + "```" + `
This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. src will be deleted on

View File

@@ -47,22 +47,26 @@ structure as it goes along.
You can interact with the user interface using key presses,
press '?' to toggle the help on and off. The supported keys are:
` + strings.Join(helpText()[1:], "\n ") + `
` + "```text" + `
` + strings.Join(helpText()[1:], "\n") + `
` + "```" + `
Listed files/directories may be prefixed by a one-character flag,
some of them combined with a description in brackets at end of line.
These flags have the following meaning:
e means this is an empty directory, i.e. contains no files (but
may contain empty subdirectories)
~ means this is a directory where some of the files (possibly in
subdirectories) have unknown size, and therefore the directory
size may be underestimated (and average size inaccurate, as it
is average of the files with known sizes).
. means an error occurred while reading a subdirectory, and
therefore the directory size may be underestimated (and average
size inaccurate)
! means an error occurred while reading this directory
` + "```text" + `
e means this is an empty directory, i.e. contains no files (but
may contain empty subdirectories)
~ means this is a directory where some of the files (possibly in
subdirectories) have unknown size, and therefore the directory
size may be underestimated (and average size inaccurate, as it
is average of the files with known sizes).
. means an error occurred while reading a subdirectory, and
therefore the directory size may be underestimated (and average
size inaccurate)
! means an error occurred while reading this directory
` + "```" + `
This an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for
rclone remotes. It is missing lots of features at the moment
@@ -73,8 +77,7 @@ UI won't respond in the meantime since the deletion is done synchronously.
For a non-interactive listing of the remote, see the
[tree](/commands/rclone_tree/) command. To just get the total size of
the remote you can also use the [size](/commands/rclone_size/) command.
`,
the remote you can also use the [size](/commands/rclone_size/) command.`,
Annotations: map[string]string{
"versionIntroduced": "v1.37",
"groups": "Filter,Listing",

View File

@@ -22,9 +22,8 @@ var commandDefinition = &cobra.Command{
Long: `In the rclone config file, human-readable passwords are
obscured. Obscuring them is done by encrypting them and writing them
out in base64. This is **not** a secure way of encrypting these
passwords as rclone can decrypt them - it is to prevent "eyedropping"
- namely someone seeing a password in the rclone config file by
accident.
passwords as rclone can decrypt them - it is to prevent "eyedropping" -
namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in
the config file. However it is very hard to shoulder surf a 64
@@ -34,7 +33,9 @@ This command can also accept a password through STDIN instead of an
argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password not including the trailing newline.
echo "secretpassword" | rclone obscure -
` + "```sh" + `
echo "secretpassword" | rclone obscure -
` + "```" + `
If there is no data on STDIN to read, rclone obscure will default to
obfuscating the hyphen itself.

View File

@@ -24,12 +24,12 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
The concurrency of this operation is controlled by the ` + "`--checkers`" + ` global flag. However, some backends will
implement this command directly, in which case ` + "`--checkers`" + ` will be ignored.
The concurrency of this operation is controlled by the ` + "`--checkers`" + ` global flag.
However, some backends will implement this command directly, in which
case ` + "`--checkers`" + ` will be ignored.
**Important**: Since this can cause data loss, test first with the
` + "`--dry-run` or the `--interactive`/`-i`" + ` flag.
`,
` + "`--dry-run` or the `--interactive`/`-i`" + ` flag.`,
Annotations: map[string]string{
"groups": "Important",
},

View File

@@ -53,8 +53,8 @@ var commandDefinition = &cobra.Command{
Short: `Run a command against a running rclone.`,
Long: strings.ReplaceAll(`This runs a command against a running rclone. Use the |--url| flag to
specify an non default URL to connect on. This can be either a
":port" which is taken to mean "http://localhost:port" or a
"host:port" which is taken to mean "http://host:port"
":port" which is taken to mean <http://localhost:port> or a
"host:port" which is taken to mean <http://host:port>.
A username and password can be passed in with |--user| and |--pass|.
@@ -63,10 +63,12 @@ Note that |--rc-addr|, |--rc-user|, |--rc-pass| will be read also for
The |--unix-socket| flag can be used to connect over a unix socket like this
# start server on /tmp/my.socket
rclone rcd --rc-addr unix:///tmp/my.socket
# Connect to it
rclone rc --unix-socket /tmp/my.socket core/stats
|||sh
# start server on /tmp/my.socket
rclone rcd --rc-addr unix:///tmp/my.socket
# Connect to it
rclone rc --unix-socket /tmp/my.socket core/stats
|||
Arguments should be passed in as parameter=value.
@@ -81,29 +83,38 @@ options in the form |-o key=value| or |-o key|. It can be repeated as
many times as required. This is useful for rc commands which take the
"opt" parameter which by convention is a dictionary of strings.
-o key=value -o key2
|||text
-o key=value -o key2
|||
Will place this in the "opt" value
{"key":"value", "key2","")
|||json
{"key":"value", "key2","")
|||
The |-a|/|--arg| option can be used to set strings in the "arg" value. It
can be repeated as many times as required. This is useful for rc
commands which take the "arg" parameter which by convention is a list
of strings.
-a value -a value2
|||text
-a value -a value2
|||
Will place this in the "arg" value
["value", "value2"]
|||json
["value", "value2"]
|||
Use |--loopback| to connect to the rclone instance running |rclone rc|.
This is very useful for testing commands without having to run an
rclone rc server, e.g.:
rclone rc --loopback operations/about fs=/
|||sh
rclone rc --loopback operations/about fs=/
|||
Use |rclone rc| to see a list of all possible commands.`, "|", "`"),
Annotations: map[string]string{

View File

@@ -28,8 +28,10 @@ var commandDefinition = &cobra.Command{
Short: `Copies standard input to file on remote.`,
Long: `Reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
` + "```sh" + `
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
` + "```" + `
If the remote file already exists, it will be overwritten.

View File

@@ -3,6 +3,7 @@ package rcd
import (
"context"
"strings"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
@@ -31,7 +32,7 @@ the browser when rclone is run.
See the [rc documentation](/rc/) for more info on the rc flags.
` + libhttp.Help(rcflags.FlagPrefix) + libhttp.TemplateHelp(rcflags.FlagPrefix) + libhttp.AuthHelp(rcflags.FlagPrefix),
` + strings.TrimSpace(libhttp.Help(rcflags.FlagPrefix)+libhttp.TemplateHelp(rcflags.FlagPrefix)+libhttp.AuthHelp(rcflags.FlagPrefix)),
Annotations: map[string]string{
"versionIntroduced": "v1.45",
"groups": "RC",

View File

@@ -21,8 +21,7 @@ has any objects in it, not even empty subdirectories. Use
command [rmdirs](/commands/rclone_rmdirs/) (or [delete](/commands/rclone_delete/)
with option ` + "`--rmdirs`" + `) to do that.
To delete a path and any objects in it, use [purge](/commands/rclone_purge/) command.
`,
To delete a path and any objects in it, use [purge](/commands/rclone_purge/) command.`,
Annotations: map[string]string{
"groups": "Important",
},

View File

@@ -38,8 +38,7 @@ This will delete ` + "`--checkers`" + ` directories concurrently so
if you have thousands of empty directories consider increasing this number.
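For example (the value is illustrative):
` + "```sh" + `
rclone rmdirs remote:path --checkers 32
` + "```" + `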
To delete a path and any objects in it, use the [purge](/commands/rclone_purge/)
command.
`,
command.`,
Annotations: map[string]string{
"versionIntroduced": "v1.35",
"groups": "Important",

View File

@@ -65,7 +65,7 @@ var cmdSelfUpdate = &cobra.Command{
Use: "selfupdate",
Aliases: []string{"self-update"},
Short: `Update the rclone binary.`,
Long: selfUpdateHelp,
Long: strings.TrimSpace(selfUpdateHelp),
Annotations: map[string]string{
"versionIntroduced": "v1.55",
},

View File

@@ -43,5 +43,5 @@ command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55.
If it fails for you with the message `unknown command "selfupdate"` then
you will need to update manually following the install instructions located
at https://rclone.org/install/
you will need to update manually following the
[install documentation](https://rclone.org/install/).

View File

@@ -123,7 +123,7 @@ default "rclone (hostname)".
Use ` + "`--log-trace` in conjunction with `-vv`" + ` to enable additional debug
logging of all UPNP traffic.
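For example (illustrative):
` + "```sh" + `
rclone serve dlna remote:path --log-trace -vv
` + "```" + `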
` + vfs.Help(),
` + strings.TrimSpace(vfs.Help()),
Annotations: map[string]string{
"versionIntroduced": "v1.46",
"groups": "Filter",

View File

@@ -59,7 +59,7 @@ func init() {
var Command = &cobra.Command{
Use: "docker",
Short: `Serve any remote on docker's volume plugin API.`,
Long: help() + vfs.Help(),
Long: help() + strings.TrimSpace(vfs.Help()),
Annotations: map[string]string{
"versionIntroduced": "v1.56",
"groups": "Filter",

View File

@@ -8,7 +8,8 @@ docker daemon and runs the corresponding code when necessary.
Docker plugins can run as a managed plugin under control of the docker daemon
or as an independent native service. For testing, you can just run it directly
from the command line, for example:
```
```sh
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
```

View File

@@ -14,6 +14,7 @@ import (
"os/user"
"regexp"
"strconv"
"strings"
"sync"
"time"
@@ -140,7 +141,7 @@ By default this will serve files without needing a login.
You can set a single username and password with the --user and --pass flags.
` + vfs.Help() + proxy.Help,
` + strings.TrimSpace(vfs.Help()+proxy.Help),
Annotations: map[string]string{
"versionIntroduced": "v1.44",
"groups": "Filter",

View File

@@ -110,7 +110,7 @@ The server will log errors. Use ` + "`-v`" + ` to see access logs.
` + "`--bwlimit`" + ` will be respected for file transfers. Use ` + "`--stats`" + ` to
control the stats printing.
` + libhttp.Help(flagPrefix) + libhttp.TemplateHelp(flagPrefix) + libhttp.AuthHelp(flagPrefix) + vfs.Help() + proxy.Help,
` + strings.TrimSpace(libhttp.Help(flagPrefix)+libhttp.TemplateHelp(flagPrefix)+libhttp.AuthHelp(flagPrefix)+vfs.Help()+proxy.Help),
Annotations: map[string]string{
"versionIntroduced": "v1.39",
"groups": "Filter",

View File

@@ -125,7 +125,7 @@ var Command = &cobra.Command{
Use: "nfs remote:path",
Short: `Serve the remote as an NFS mount`,
Long: strings.ReplaceAll(`Create an NFS server that serves the given remote over the network.
This implements an NFSv3 server to serve any rclone remote via NFS.
The primary purpose for this command is to enable the [mount
@@ -179,12 +179,16 @@ cache.
To serve NFS over the network use following command:
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
|||sh
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
|||
This specifies a port that can be used in the mount command. To mount
the server under Linux/macOS, use the following command:
mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
|||sh
mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
|||
Where |$PORT| is the same port number used in the |serve nfs| command
and |$HOSTNAME| is the network address of the machine that |serve nfs|
@@ -198,7 +202,7 @@ is desired.
This command is only available on Unix platforms.
`, "|", "`") + vfs.Help(),
`, "|", "`") + strings.TrimSpace(vfs.Help()),
Annotations: map[string]string{
"versionIntroduced": "v1.65",
"groups": "Filter",

View File

@@ -46,41 +46,43 @@ options - it is the job of the proxy program to make a complete
config.
This config generated must have this extra parameter
- |_root| - root to use for the backend
And it may have this parameter
- |_obscure| - comma separated strings for parameters to obscure
If password authentication was used by the client, input to the proxy
process (on STDIN) would look similar to this:
|||
|||json
{
"user": "me",
"pass": "mypassword"
"user": "me",
"pass": "mypassword"
}
|||
If public-key authentication was used by the client, input to the
proxy process (on STDIN) would look similar to this:
|||
|||json
{
"user": "me",
"public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
"user": "me",
"public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
|||
And as an example return this on STDOUT
|||
|||json
{
"type": "sftp",
"_root": "",
"_obscure": "pass",
"user": "me",
"pass": "mypassword",
"host": "sftp.example.com"
"type": "sftp",
"_root": "",
"_obscure": "pass",
"user": "me",
"pass": "mypassword",
"host": "sftp.example.com"
}
|||
@@ -102,7 +104,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m
before it takes effect.
This can be used to build general purpose proxies to any kind of
backend that rclone supports.
backend that rclone supports.
`, "|", "`")

View File

@@ -108,7 +108,7 @@ The server will log errors. Use -v to see access logs.
` + "`--bwlimit`" + ` will be respected for file transfers.
Use ` + "`--stats`" + ` to control the stats printing.
### Setting up rclone for use by restic ###
### Setting up rclone for use by restic
First [set up a remote for your chosen cloud provider](/docs/#configure).
@@ -119,7 +119,9 @@ following instructions.
Now start the rclone restic server
rclone serve restic -v remote:backup
` + "```sh" + `
rclone serve restic -v remote:backup
` + "```" + `
Where you can replace "backup" in the above by whatever path in the
remote you wish to use.
@@ -133,7 +135,7 @@ Adding ` + "`--cache-objects=false`" + ` will cause rclone to stop caching objec
returned from the List call. Caching is normally desirable as it speeds
up downloading objects, saves transactions and uses very little memory.
### Setting up restic to use rclone ###
### Setting up restic to use rclone
Now you can [follow the restic
instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server)
@@ -147,38 +149,43 @@ the URL for the REST server.
For example:
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
$ export RESTIC_PASSWORD=yourpassword
$ restic init
created restic backend 8b1a4b56ae at rest:http://localhost:8080/
` + "```sh" + `
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
$ export RESTIC_PASSWORD=yourpassword
$ restic init
created restic backend 8b1a4b56ae at rest:http://localhost:8080/
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
$ restic backup /path/to/files/to/backup
scan [/path/to/files/to/backup]
scanned 189 directories, 312 files in 0:00
[0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
duration: 0:00
snapshot 45c8fdd8 saved
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
$ restic backup /path/to/files/to/backup
scan [/path/to/files/to/backup]
scanned 189 directories, 312 files in 0:00
[0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
duration: 0:00
snapshot 45c8fdd8 saved
#### Multiple repositories ####
` + "```" + `
#### Multiple repositories
Note that you can use the endpoint to host multiple repositories. Do
this by adding a directory name or path after the URL. Note that
these **must** end with /. E.g.
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
# backup user1 stuff
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff
` + "```sh" + `
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
# backup user1 stuff
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff
` + "```" + `
#### Private repositories ####
#### Private repositories
The ` + "`--private-repos`" + ` flag can be used to limit users to repositories starting
with a path of ` + "`/<username>/`" + `.
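A minimal sketch of such a setup, assuming an htpasswd file with one entry per user:
` + "```sh" + `
rclone serve restic --private-repos --htpasswd ./htpasswd remote:backup
` + "```" + `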
` + libhttp.Help(flagPrefix) + libhttp.AuthHelp(flagPrefix),
` + strings.TrimSpace(libhttp.Help(flagPrefix)+libhttp.AuthHelp(flagPrefix)),
Annotations: map[string]string{
"versionIntroduced": "v1.40",
},

View File

@@ -105,7 +105,7 @@ var Command = &cobra.Command{
},
Use: "s3 remote:path",
Short: `Serve remote:path over s3.`,
Long: help() + httplib.AuthHelp(flagPrefix) + httplib.Help(flagPrefix) + vfs.Help(),
Long: help() + strings.TrimSpace(httplib.AuthHelp(flagPrefix)+httplib.Help(flagPrefix)+vfs.Help()),
RunE: func(command *cobra.Command, args []string) error {
var f fs.Fs
if proxy.Opt.AuthProxy == "" {

View File

@@ -33,20 +33,20 @@ cause problems for S3 clients which rely on the Etag being the MD5.
For a simple set up, to serve `remote:path` over s3, run the server
like this:
```
```sh
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
```
For example, to use a simple folder in the filesystem, run the server
with a command like this:
```
```sh
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder
```
The `rclone.conf` for the server could look like this:
```
```ini
[local]
type = local
```
@@ -59,7 +59,7 @@ will be visible as a warning in the logs. But it will run nonetheless.
This will be compatible with an rclone (client) remote configuration which
is defined like this:
```
```ini
[serves3]
type = s3
provider = Rclone
@@ -116,20 +116,20 @@ metadata which will be set as the modification time of the file.
`serve s3` currently supports the following operations.
- Bucket
- `ListBuckets`
- `CreateBucket`
- `DeleteBucket`
- `ListBuckets`
- `CreateBucket`
- `DeleteBucket`
- Object
- `HeadObject`
- `ListObjects`
- `GetObject`
- `PutObject`
- `DeleteObject`
- `DeleteObjects`
- `CreateMultipartUpload`
- `CompleteMultipartUpload`
- `AbortMultipartUpload`
- `CopyObject`
- `UploadPart`
- `HeadObject`
- `ListObjects`
- `GetObject`
- `PutObject`
- `DeleteObject`
- `DeleteObjects`
- `CreateMultipartUpload`
- `CompleteMultipartUpload`
- `AbortMultipartUpload`
- `CopyObject`
- `UploadPart`
Other operations will return error `Unimplemented`.

View File

@@ -19,10 +19,11 @@ var Command = &cobra.Command{
Long: `Serve a remote over a given protocol. Requires the use of a
subcommand to specify the protocol, e.g.
rclone serve http remote:
` + "```sh" + `
rclone serve http remote:
` + "```" + `
Each subcommand has its own options which you can see in their help.
`,
Each subcommand has its own options which you can see in their help.`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},

View File

@@ -6,6 +6,7 @@ package sftp
import (
"context"
"fmt"
"strings"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/serve"
@@ -146,11 +147,13 @@ reachable externally then supply ` + "`--addr :2022`" + ` for example.
This also supports being run with socket activation, in which case it will
listen on the first passed FD.
It can be configured with .socket and .service unit files as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.
Socket activation can be tested ad-hoc with the ` + "`systemd-socket-activate`" + ` command:
systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/
` + "```sh" + `
systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/
` + "```" + `
This will socket-activate rclone on the first connection to port 2222 over TCP.
@@ -160,7 +163,9 @@ sftp backend, but it may not be with other SFTP clients.
If ` + "`--stdio`" + ` is specified, rclone will serve SFTP over stdio, which can
be used with sshd via ~/.ssh/authorized_keys, for example:
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
` + "```text" + `
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
` + "```" + `
On the client you need to set ` + "`--transfers 1`" + ` when using ` + "`--stdio`" + `.
Otherwise multiple instances of the rclone server are started by OpenSSH
@@ -173,7 +178,7 @@ used. Omitting "restrict" and using ` + "`--sftp-path-override`" + ` to enable
checksumming is possible but less secure and you could use the SFTP server
provided by OpenSSH in this case.
` + vfs.Help() + proxy.Help,
` + strings.TrimSpace(vfs.Help()+proxy.Help),
Annotations: map[string]string{
"versionIntroduced": "v1.48",
"groups": "Filter",

View File

@@ -107,7 +107,7 @@ browser, or you can make a remote of type WebDAV to read and write it.
### WebDAV options
#### --etag-hash
#### --etag-hash
This controls the ETag header. Without this flag the ETag will be
based on the ModTime and Size of the object.
@@ -119,44 +119,58 @@ to see the full list.
### Access WebDAV on Windows
WebDAV shared folder can be mapped as a drive on Windows, however the default settings prevent it.
Windows will fail to connect to the server using insecure Basic authentication.
It will not even display any login dialog. Windows requires SSL / HTTPS connection to be used with Basic.
If you try to connect via Add Network Location Wizard you will get the following error:
WebDAV shared folder can be mapped as a drive on Windows, however the default
settings prevent it. Windows will fail to connect to the server using insecure
Basic authentication. It will not even display any login dialog. Windows
requires SSL / HTTPS connection to be used with Basic. If you try to connect
via Add Network Location Wizard you will get the following error:
"The folder you entered does not appear to be valid. Please choose another".
However, you still can connect if you set the following registry key on a client machine:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel to 2.
The BasicAuthLevel can be set to the following values:
0 - Basic authentication disabled
1 - Basic authentication enabled for SSL connections only
2 - Basic authentication enabled for SSL connections and for non-SSL connections
However, you still can connect if you set the following registry key on a
client machine:
` + "`HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\WebClient\\Parameters\\BasicAuthLevel`" + `
to 2. The BasicAuthLevel can be set to the following values:
` + "```text" + `
0 - Basic authentication disabled
1 - Basic authentication enabled for SSL connections only
2 - Basic authentication enabled for SSL connections and for non-SSL connections
` + "```" + `
If required, increase the FileSizeLimitInBytes to a higher value.
Navigate to the Services interface, then restart the WebClient service.
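For example, from an elevated command prompt (illustrative only, equivalent to
editing the registry and restarting the service by hand):
` + "```sh" + `
reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v BasicAuthLevel /t REG_DWORD /d 2 /f
net stop WebClient
net start WebClient
` + "```" + `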
### Access Office applications on WebDAV
Navigate to following registry HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet
Navigate to the following registry key
` + "`HKEY_CURRENT_USER\\Software\\Microsoft\\Office\\[14.0/15.0/16.0]\\Common\\Internet`" + `
Create a new DWORD BasicAuthLevel with value 2.
0 - Basic authentication disabled
1 - Basic authentication enabled for SSL connections only
2 - Basic authentication enabled for SSL and for non-SSL connections
https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint
` + "```text" + `
0 - Basic authentication disabled
1 - Basic authentication enabled for SSL connections only
2 - Basic authentication enabled for SSL and for non-SSL connections
` + "```" + `
<https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint>
### Serving over a unix socket
You can serve the webdav on a unix socket like this:
rclone serve webdav --addr unix:///tmp/my.socket remote:path
` + "```sh" + `
rclone serve webdav --addr unix:///tmp/my.socket remote:path
` + "```" + `
and connect to it like this using rclone and the webdav backend:
rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
` + "```sh" + `
rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
` + "```" + `
Note that there is no authentication on http protocol - this is expected to be
done by the permissions on the socket.
` + libhttp.Help(flagPrefix) + libhttp.TemplateHelp(flagPrefix) + libhttp.AuthHelp(flagPrefix) + vfs.Help() + proxy.Help,
` + strings.TrimSpace(libhttp.Help(flagPrefix)+libhttp.TemplateHelp(flagPrefix)+libhttp.AuthHelp(flagPrefix)+vfs.Help()+proxy.Help),
Annotations: map[string]string{
"versionIntroduced": "v1.39",
"groups": "Filter",

View File

@@ -29,16 +29,21 @@ inaccessible.true
You can use it to tier single object
rclone settier Cool remote:path/file
` + "```sh" + `
rclone settier Cool remote:path/file
` + "```" + `
Or use rclone filters to set tier on only specific files
rclone --include "*.txt" settier Hot remote:path/dir
` + "```sh" + `
rclone --include "*.txt" settier Hot remote:path/dir
` + "```" + `
Or just provide remote directory and all files in directory will be tiered
rclone settier tier remote:path/dir
`,
` + "```sh" + `
rclone settier tier remote:path/dir
` + "```",
Annotations: map[string]string{
"versionIntroduced": "v1.44",
},

View File

@@ -38,8 +38,7 @@ when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
This command can also hash data received on STDIN, if not passing
a remote:path.
`,
a remote:path.`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
"groups": "Filter,Listing",

View File

@@ -41,8 +41,7 @@ Some backends do not always provide file sizes, see for example
[Google Docs](/drive/#limitations-of-google-docs).
Rclone will then show a notice in the log indicating how many such
files were encountered, and count them in as empty files in the output
of the size command.
`,
of the size command.`,
Annotations: map[string]string{
"versionIntroduced": "v1.23",
"groups": "Filter,Listing",

View File

@@ -42,7 +42,9 @@ want to delete files from destination, use the
**Important**: Since this can cause data loss, test first with the
|--dry-run| or the |--interactive|/|-i| flag.
rclone sync --interactive SOURCE remote:DESTINATION
|||sh
rclone sync --interactive SOURCE remote:DESTINATION
|||
Files in the destination won't be deleted if there were any errors at any
point. Duplicate objects (files with the same name, on those providers that
@@ -59,7 +61,7 @@ If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
@@ -68,13 +70,15 @@ the backend supports it. If metadata syncing is required then use the
|--metadata| flag.
Note that the modification time and metadata for the root directory
will **not** be synced. See https://github.com/rclone/rclone/issues/7652
will **not** be synced. See <https://github.com/rclone/rclone/issues/7652>
for more info.
**Note**: Use the |-P|/|--progress| flag to view real-time transfer statistics
**Note**: Use the |rclone dedupe| command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info.
**Note**: Use the |rclone dedupe| command to deal with "Duplicate
object/directory found in source/destination - ignoring" errors.
See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372)
for more info.
`, "|", "`") + operationsflags.Help(),
Annotations: map[string]string{

View File

@@ -26,8 +26,7 @@ var commandDefinition = &cobra.Command{
in filenames in the remote:path specified.
The data doesn't contain any identifying information but is useful for
the rclone developers when developing filename compression.
`,
the rclone developers when developing filename compression.`,
Annotations: map[string]string{
"versionIntroduced": "v1.55",
},

View File

@@ -68,8 +68,7 @@ paths passed in and how long they can be. It can take some time. It will
write test files into the remote:path passed in. It outputs a bit of go
code for each one.
**NB** this can create undeletable files and other hazards - use with care
`,
**NB** this can create undeletable files and other hazards - use with care!`,
Annotations: map[string]string{
"versionIntroduced": "v1.55",
},

View File

@@ -18,13 +18,14 @@ var Command = &cobra.Command{
Select which test command you want with the subcommand, eg
rclone test memory remote:
` + "```sh" + `
rclone test memory remote:
` + "```" + `
Each subcommand has its own options which you can see in their help.
**NB** Be careful running these commands, they may do strange things
so reading their documentation first is recommended.
`,
so reading their documentation first is recommended.`,
Annotations: map[string]string{
"versionIntroduced": "v1.55",
},

View File

@@ -61,8 +61,7 @@ time instead of the current time. Times may be specified as one of:
- 'YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789
Note that value of ` + "`--timestamp`" + ` is in UTC. If you want local time
then add the ` + "`--localtime`" + ` flag.
`,
then add the ` + "`--localtime`" + ` flag.`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
"groups": "Filter,Listing,Important",

View File

@@ -73,16 +73,18 @@ var commandDefinition = &cobra.Command{
For example
$ rclone tree remote:path
/
├── file1
├── file2
├── file3
└── subdir
├── file4
└── file5
` + "```text" + `
$ rclone tree remote:path
/
├── file1
├── file2
├── file3
└── subdir
├── file4
└── file5
1 directories, 5 files
1 directories, 5 files
` + "```" + `
You can use any of the filtering options with the tree command (e.g.
` + "`--include` and `--exclude`" + `. You can also use ` + "`--fast-list`" + `.
@@ -93,8 +95,7 @@ sizes with ` + "`--size`" + `. Note that not all of them have
short options as they conflict with rclone's short options.
For a more interactive navigation of the remote see the
[ncdu](/commands/rclone_ncdu/) command.
`,
[ncdu](/commands/rclone_ncdu/) command.`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
"groups": "Filter,Listing",

View File

@@ -42,15 +42,17 @@ build tags and the type of executable (static or dynamic).
For example:
$ rclone version
rclone v1.55.0
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16
- go/linking: static
- go/tags: none
` + "```sh" + `
$ rclone version
rclone v1.55.0
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16
- go/linking: static
- go/tags: none
` + "```" + `
Note: before rclone version 1.55 the os/type and os/arch lines were merged,
and the "go/version" line was tagged as "go version".
@@ -58,24 +60,27 @@ Note: before rclone version 1.55 the os/type and os/arch lines were merged,
If you supply the --check flag, then it will do an online check to
compare your version with the latest release and the latest beta.
$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
beta: 1.42.0.5 (released 2018-06-17)
` + "```sh" + `
$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
beta: 1.42.0.5 (released 2018-06-17)
` + "```" + `
Or
$ rclone version --check
yours: 1.41
latest: 1.42 (released 2018-06-16)
upgrade: https://downloads.rclone.org/v1.42
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
` + "```sh" + `
$ rclone version --check
yours: 1.41
latest: 1.42 (released 2018-06-16)
upgrade: https://downloads.rclone.org/v1.42
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
` + "```" + `
If you supply the --deps flag then rclone will print a list of all the
packages it depends on and their versions along with some other
information about the build.
`,
information about the build.`,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},

View File

@@ -22,7 +22,7 @@ var help string
// Help returns the help string cleaned up to simplify appending
func Help() string {
return strings.TrimSpace(help) + "\n\n"
return strings.TrimSpace(help)
}
// AddLoggerFlagsOptions contains options for the Logger Flags

View File

@@ -1,9 +1,10 @@
## Logger Flags
### Logger Flags
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths,
one per line, to the file name (or stdout if it is `-`) supplied. What they write is described
in the help below. For example `--differ` will write all paths which are present
on both the source and destination but different.
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error`
flags write paths, one per line, to the file name (or stdout if it is `-`)
supplied. What they write is described in the help below. For example
`--differ` will write all paths which are present on both the source and
destination but different.
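For example, a sync that records differing paths and paths missing on the
destination to files might look like this (the flag combination is illustrative):
```sh
rclone sync source:path dest:path --differ differ.txt --missing-on-dst missing-on-dst.txt
```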
The `--combined` flag will write a file (or stdout) which contains all
file paths with a symbol and then a space and then the path to tell
@@ -36,4 +37,4 @@ are not currently supported:
Note also that each file is logged during execution, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID.)
(which may or may not match what actually DID).

View File

@@ -21,7 +21,8 @@ set a single username and password with the ` + "`--{{ .Prefix }}user` and `--{{
Alternatively, you can have the reverse proxy manage authentication and use the
username provided in the configured header with ` + "`--user-from-header`" + ` (e.g., ` + "`--{{ .Prefix }}user-from-header=x-remote-user`" + `).
Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration
may lead to unauthorized access.
If either of the above authentication methods is not configured and client
certificates are required by the ` + "`--client-ca`" + ` flag passed to the server, the
@@ -33,9 +34,11 @@ authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
` + "```sh" + `
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
` + "```" + `
The password file can be updated while rclone is running.

View File

@@ -84,13 +84,16 @@ by ` + "`--{{ .Prefix }}addr`" + `).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.
Socket activation can be tested ad-hoc with the ` + "`systemd-socket-activate`" + ` command
systemd-socket-activate -l 8000 -- rclone serve
` + "```sh" + `
systemd-socket-activate -l 8000 -- rclone serve
` + "```" + `
This will socket-activate rclone on the first connection to port 8000 over TCP.
`
tmpl, err := template.New("server help").Parse(help)
if err != nil {

View File

@@ -42,9 +42,9 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
The server also makes the following functions available so that they can be used within the
template. These functions help extend the options for dynamic rendering of HTML. They can
be used to render HTML based on specific conditions.
The server also makes the following functions available so that they can be used
within the template. These functions help extend the options for dynamic
rendering of HTML. They can be used to render HTML based on specific conditions.
| Function | Description |
| :---------- | :---------- |

View File

@@ -95,10 +95,10 @@ func (e example) output() string {
// go run ./ convmv --help
func sprintExamples() string {
s := "Examples: \n\n"
s := "Examples:\n"
for _, e := range examples {
s += fmt.Sprintf("```\n%s\n", e.command())
s += fmt.Sprintf("// Output: %s\n```\n\n", e.output())
s += fmt.Sprintf("\n```sh\n%s\n", e.command())
s += fmt.Sprintf("// Output: %s\n```\n", e.output())
}
return s
}
@@ -109,7 +109,7 @@ func commandTable() string {
for _, c := range commandList {
s += fmt.Sprintf("\n| `%s` | %s |", c.command, c.description)
}
s += "\n\n"
s += "\n"
return s
}
@@ -119,19 +119,19 @@ func SprintList() string {
var charmaps transform.CharmapChoices
s := commandTable()
s += "Conversion modes:\n\n```\n"
s += "\nConversion modes:\n\n```text\n"
for _, v := range algos.Choices() {
s += v + "\n"
}
s += "```\n\n"
s += "Char maps:\n\n```\n"
s += "Char maps:\n\n```text\n"
for _, v := range charmaps.Choices() {
s += v + "\n"
}
s += "```\n\n"
s += "Encoding masks:\n\n```\n"
s += "Encoding masks:\n\n```text\n"
for _, v := range strings.Split(encoder.ValidStrings(), ", ") {
s += v + "\n"
}
@@ -154,5 +154,5 @@ func main() {
defer out.Close()
}
fmt.Fprintf(out, "<!--- Docs generated by help.go - use go generate to rebuild - DO NOT EDIT --->\n\n")
fmt.Fprintln(out, SprintList())
fmt.Fprint(out, SprintList())
}

View File

@@ -19,8 +19,10 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
```text
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
```
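
As an illustration (the mount point and values are arbitrary, not recommended
defaults), a mount that refreshes directory listings more eagerly might look like:

```sh
rclone mount remote: /mnt/data --dir-cache-time 30m --poll-interval 30s
```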
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -32,16 +34,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all
directory caches, regardless of how old they are. Assuming only one
rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
```sh
kill -SIGHUP $(pidof rclone)
```
If you configure rclone with a [remote control](/rc) then you can use
rclone rc to flush the whole directory cache:
rclone rc vfs/forget
```sh
rclone rc vfs/forget
```
Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
```sh
rclone rc vfs/forget file=path/to/file dir=path/to/dir
```
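
For instance, assuming the mount was started with the remote control enabled
(the mount point and paths are placeholders):

```sh
# start the mount with the remote control API enabled
rclone mount remote: /mnt/data --rc &
# later, drop one directory from the directory cache without restarting
rclone rc vfs/forget dir=path/to/dir
```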
### VFS File Buffering
@@ -72,6 +80,7 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
```text
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
@@ -79,6 +88,7 @@ find that you need one or the other or both.
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
```
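
A possible combination for a write-heavy mount (the paths and sizes are purely
illustrative):

```sh
rclone mount remote: /mnt/data \
  --cache-dir /var/cache/rclone \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 24h -vv
```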
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -126,13 +136,13 @@ directly to the remote without caching anything on disk.
This will mean some operations are not possible
* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
- Files can't be opened for both read AND write
- Files opened for write can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files open for read with O_TRUNC will be opened write only
- Files open for write only will behave as if O_TRUNC was supplied
- Open modes O_APPEND, O_TRUNC are ignored
- If an upload fails it can't be retried
#### --vfs-cache-mode minimal
@@ -142,10 +152,10 @@ write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
- Files opened for write only can't be seeked
- Existing files opened for write must have O_TRUNC set
- Files opened for write only will ignore O_APPEND, O_TRUNC
- If an upload fails it can't be retried
#### --vfs-cache-mode writes
@@ -228,9 +238,11 @@ read, at the cost of an increased number of requests.
These flags control the chunking:
```text
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
--vfs-read-chunk-streams int The number of parallel streams to read at once
```
The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter.
@@ -244,9 +256,9 @@ value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M
and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would
be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
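
To make the arithmetic concrete (sizes chosen only to illustrate the doubling),
the following reads 64M for the first chunk, then 128M, 256M, 512M and 1G, with
every later chunk capped at 1G:

```sh
rclone mount remote: /mnt/data \
  --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G
```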
@@ -284,32 +296,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
```text
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Only allow read-only access.
```
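
For a read-heavy mount of an S3-like remote, something along these lines (the
remote name and mount point are made up) avoids the per-file modification time
transactions:

```sh
rclone mount s3remote:bucket /mnt/bucket --read-only --no-modtime --no-checksum
```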
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
```text
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
```
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
the global flag `--transfers` can be set to adjust the number of parallel uploads
of modified files from the cache (the related global flag `--checkers` has no
effect on the VFS).
```text
--transfers int Number of file transfers to run in parallel (default 4)
```
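
For example, to let more modified files upload from the cache in parallel (the
value is illustrative):

```sh
rclone mount remote: /mnt/data --vfs-cache-mode writes --transfers 8
```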
### Symlinks
By default the VFS does not support symlinks. However this may be
enabled with either of the following flags:
```text
--links Translate symlinks to/from regular files with a '.rclonelink' extension.
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
```
As most cloud storage systems do not support symlinks directly, rclone
stores the symlink as a normal file with a special extension. So a
@@ -321,7 +342,8 @@ Note that `--links` enables symlink translation globally in rclone -
this includes any backend which supports the concept (for example the
local backend). `--vfs-links` just enables it for the VFS layer.
This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
This scheme is compatible with that used by the
[local backend with the --local-links flag](/local/#symlinks-junction-points).
The `--vfs-links` flag has been designed for `rclone mount`, `rclone
nfsmount` and `rclone serve nfs`.
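
A sketch of the round trip using the copy-level flag (the file names and remote
are made up):

```sh
ln -s target.txt link.txt
rclone copy . remote:backup --links
rclone lsf remote:backup   # the symlink is stored as link.txt.rclonelink
```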
@@ -331,7 +353,7 @@ It hasn't been tested with the other `rclone serve` commands yet.
A limitation of the current implementation is that it expects the
caller to resolve sub-symlinks. For example given this directory tree
```
```text
.
├── dir
│   └── file.txt
@@ -409,7 +431,9 @@ sync`.
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
```text
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
```
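
For instance (the size and mount point are placeholders):

```sh
rclone mount remote: /mnt/data --vfs-disk-space-total-size 256G
df -h /mnt/data   # should now report a 256G filesystem
```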
### Alternate report of used bytes
@@ -420,7 +444,7 @@ With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
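
A minimal sketch, assuming the flag referred to here is `--vfs-used-is-size`
(the mount point is a placeholder):

```sh
# computes used space by scanning the remote, ignoring filters; can be expensive
rclone mount remote: /mnt/data --vfs-used-is-size
```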
@@ -438,7 +462,7 @@ Note that some backends won't create metadata unless you pass in the
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
```sh
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
@@ -462,4 +486,3 @@ total 1048578
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
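
Continuing the listing above (assuming the same mount point and the
`--metadata --vfs-metadata-extension .metadata` flags), the metadata can simply
be read back as a file:

```sh
cat /mnt/1G.metadata   # prints the file's metadata as JSON, or {} if it has none
```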